Are Fake Coding Tests the New Supply-Chain Backdoor?

A job offer that looks routine, a Git clone that feels harmless, and a code editor that opens without complaint—this familiar sequence has turned into the most effective way yet to breach developer laptops and smuggle malware into trusted repositories. Security analysts tied the campaign to Void Dokkaebi, also known as Famous Chollima, and mapped a playbook that blends persuasive recruiting with IDE booby traps and git sleight of hand. The aim is not just one victim. It is propagation. By luring engineers at crypto and AI firms into running “coding tests,” the operators set off an infection that rides version-control workflows and defaults in Visual Studio Code, turning everyday collaboration into a covert distribution network for a Node.js remote access tool.

Inside the Playbook

The Social Ruse and Initial Execution

The setup begins with convincing recruiter outreach that mirrors current hiring patterns in blockchain analytics, L2 scaling, and model tooling for AI inference. Candidates are steered to clone a repository from GitHub, GitLab, or Bitbucket, framed as a timed challenge aligned with role requirements. The repo is polished: realistic README, CI badges, even issues seeded with trivial bugs. The trap sits in .vscode/tasks.json. When VS Code prompts for workspace trust, saying yes triggers prewired tasks that fetch or launch a backdoor without any build step or test command ever being typed. This open-time execution sidesteps pipeline scanners, which rarely emulate a developer granting trust, and it hides behind legitimate Node.js and npm activity that blends into normal setup instructions.
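The open-time execution described above rests on a documented VS Code feature: a task declared with `"runOn": "folderOpen"` runs automatically once workspace trust is granted. A minimal sketch of what such a booby-trapped tasks file could look like — the label and script path here are hypothetical illustrations, not recovered artifacts (VS Code accepts comments in these JSON config files):

```jsonc
{
  "version": "2.0.0",
  "tasks": [
    {
      // Innocuous-sounding label to survive a casual skim of the repo
      "label": "Prepare workspace",
      "type": "shell",
      // Hypothetical loader path; real samples fetched or launched a backdoor
      "command": "node .vscode/bootstrap.js",
      "runOptions": {
        // Documented VS Code option: run this task when the folder opens
        "runOn": "folderOpen"
      },
      // Keep the task's terminal panel from surfacing
      "presentation": { "reveal": "never" }
    }
  ]
}
```

Because the task fires only after trust is granted, declining the prompt — or opening unfamiliar repos in Restricted Mode — prevents execution entirely.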

Once the machine is tethered, the operation pivots from simple foothold to quiet spread. Analysts observed a Node.js variant of the DEVSPOPPER RAT using WebSocket for command-and-control and HTTP for exfiltration, with session multiplexing that let multiple operators share the same host. The malware avoided CI/CD sandboxes by checking for headless containers and ephemeral runners, then idled until a human typed or VS Code APIs signaled a real editor session. With persistence in place, Void Dokkaebi shifted to tampering with local projects. It injected obfuscated JavaScript into source files and concealed it with trailing whitespace to push the payload below the fold in default views. The result looked like routine app code until the file scrolled, enabling a worm-like spread each time the victim reused a template or pushed a “minor fix.”
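The trailing-whitespace trick lends itself to cheap detection: flag any line where code resumes after an implausibly long run of spaces or tabs. A minimal sketch of such a scanner — the 200-column threshold is an assumption to tune against a codebase's formatting norms:

```python
import re
from pathlib import Path

# Flag lines where non-whitespace content reappears after a long run of
# spaces or tabs -- the pattern used to push a payload past the right
# edge of a default editor or diff view. The 200-char threshold is an
# assumption; tune it to your formatting norms.
HIDDEN_CODE = re.compile(r"\S[ \t]{200,}\S")

def scan_file(path: Path) -> list[int]:
    """Return 1-based line numbers matching the hidden-code pattern."""
    hits = []
    for lineno, line in enumerate(
        path.read_text(errors="replace").splitlines(), start=1
    ):
        if HIDDEN_CODE.search(line):
            hits.append(lineno)
    return hits

def scan_tree(root: Path, exts=(".js", ".ts", ".json")) -> dict[Path, list[int]]:
    """Scan a source tree, returning per-file suspicious line numbers."""
    return {
        p: hits
        for p in root.rglob("*")
        if p.suffix in exts and p.is_file() and (hits := scan_file(p))
    }
```

A check of this shape also fits naturally into a pre-commit hook or CI lint step, where it catches the camouflage before a reviewer ever opens the diff.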

From One Laptop to Many: Version-Control Manipulation

The campaign’s most insidious component lived in a script named temp_auto_push.bat, which rewrote git history while impersonating the original author. After a staged commit embedded the payload, the script cloned author metadata, preserved timestamps and messages, and executed a force-push that blended into legitimate rebases. On casual inspection, the tampered commit chain appeared intact, and code-review diffs often missed the payload due to its whitespace camouflage. Branch protection helped little if force-pushes were allowed and reviewers trusted familiar signatures. With each push, the poisoned code hit new forks and mirrors, and the same VS Code tasks trick stowed away in nested directories to trigger on open in the next environment.

Evidence of scale surfaced as telemetry aggregated across platforms. By March 2026, Trend Micro tallied more than 750 infected repositories, over 500 malicious VS Code task configurations, and 101 instances of the commit-tampering tool. Infection markers appeared in repos tied to DataStax and Neutralinojs, underscoring how quickly supply-chain risk can leap from niche tests to widely consumed code. The RAT’s choice to anchor on developer endpoints explained the persistence: hardened CI/CD gates never saw the first execution, and source scanners skipped hidden sections that editors didn’t render. Building on this foundation, the operators leveraged everyday collaboration—pulls from upstream, feature branches, and hotfix rebases—to sustain propagation without another DM or recruiter call.

Defense and Fallout

Breaking the Chain on Developer Endpoints

Mitigation started with reframing interview hygiene. Disposable virtual machines became the default venue for coding tests, with clipboard isolation and no mounts to host directories. Organizations added .vscode/ to .gitignore across templates and turned on repository rulesets that reject uploads containing workspace configurations. On commit integrity, teams enforced GPG or SSH signatures, disabled force-pushes to protected branches, and mandated pull requests with diff whitespace visualization enabled. These steps directly disrupted temp_auto_push.bat’s value proposition by making silent history rewrites conspicuous, while reviewer checklists flagged suspicious globals like global_!_! or global_V and any stealthy trailing-space blocks embedded near import statements.
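Signature enforcement works here because it gives reviewers a machine-checkable signal even when author names and timestamps are forged: the attacker can clone metadata but not a private signing key. A small audit sketch over the output of `git log --format='%h|%G?'`, where `%G?` is git's documented signature-status code (`G` good, `U` good but untrusted key, `N` none, `B` bad, among others) — which statuses count as trusted is a policy assumption to set locally:

```python
# %G? codes (from the git-log pretty-format documentation):
# G = good signature, U = good but unknown validity, N = none,
# B = bad, plus expired/revoked variants (X, Y, R) and E = can't check.
# Treating only G and U as acceptable is an assumption; tighten to {"G"}
# if your policy requires fully trusted keys.
TRUSTED = {"G", "U"}

def unsigned_or_bad(log_output: str) -> list[str]:
    """Given `git log --format='%h|%G?'` output, return short hashes of
    commits lacking an acceptable signature -- candidates for review."""
    suspects = []
    for line in log_output.strip().splitlines():
        short_hash, status = line.split("|")
        if status not in TRUSTED:
            suspects.append(short_hash)
    return suspects
```

Run over a branch before merging, this surfaces exactly the commits a history-rewriting script would have to leave unsigned or re-sign with the wrong key.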

Monitoring closed the loop where policy could not. Endpoint detection and response was prioritized for developer laptops, tuned to catch WebSocket beacons from Node.js processes and odd HTTP exfil paths. Network controls flagged unsanctioned calls to blockchain APIs such as api.trongrid.io and Binance Smart Chain RPC endpoints, which the operators used as opportunistic infrastructure and decoys. On the hunt front, teams scanned for the presence of temp_auto_push.bat and audited for git reflog anomalies that implied rewritten histories masquerading as routine squash merges. Crucially, incident response playbooks treated initial repo cleanup as insufficient and reimaged build boxes only after workstation compromise had been ruled out, because the RAT’s design explicitly bypassed CI/CD and lived off the land on endpoints.
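The hunt step can be automated as a simple filesystem sweep for the reported artifact names; nested `.vscode/` directories matter because the campaign stowed task files below the repository root to trigger in downstream environments. A sketch, with the artifact list limited to the name reported for this campaign:

```python
from pathlib import Path

# Artifact names reported for this campaign; extend with local IOCs.
ARTIFACT_NAMES = {"temp_auto_push.bat"}

def hunt(root: Path) -> list[Path]:
    """Walk a checkout and return paths matching known artifact names,
    plus any .vscode/tasks.json found outside the repository root --
    the nested placement used to fire auto-run tasks on the next open."""
    findings = []
    for p in root.rglob("*"):
        if not p.is_file():
            continue
        if p.name in ARTIFACT_NAMES:
            findings.append(p)
        elif (p.name == "tasks.json"
              and p.parent.name == ".vscode"
              and p.parent.parent != root):
            findings.append(p)
    return findings
```

A root-level `.vscode/tasks.json` still deserves manual review, but the nested copies are the higher-signal finding, since legitimate projects rarely ship editor automation in subdirectories.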

Toward Durable Resilience

The broader lesson reached beyond this campaign: the developer workstation had become the soft underbelly of modern supply chains. Traditional guardrails—static analysis, container scans, reproducible builds—were necessary but failed when the first instruction executed at project open inside a trusted IDE. Durable resilience depended on workflow-aware controls: sandbox-first trials for any external code, signed commits enforced by policy, and repo hygiene that kept editor automation out of shared codebases. Moreover, education mattered. Engineers learned to treat workspace trust prompts like privilege escalations, to skim beyond the fold in diffs, and to regard unsolicited “challenge repos” with the same suspicion reserved for unexpected attachments.

Practical next steps were clear and specific. Teams locked down branch protections, required code owners for packages with transitive reach, and added pre-receive hooks to reject commits modifying .vscode directories. Build mirrors fetched from read-only upstreams and validated signatures before syncing. Interview processes moved to browser-based sandboxes or ephemeral VMs with recording, and security groups published indicators and YARA rules for Node.js RAT artifacts. Taken together, these measures shifted power back to defenders by breaking the attack at its quietest link—the moment trust was granted—and made the cost of history forgery and whitespace obfuscation outweigh the attacker’s payoff.
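The pre-receive check reduces to a path policy: reject any push whose changed files touch a `.vscode/` directory at any depth. The hook itself is shell glue that collects changed paths (e.g. via `git diff --name-only` between the old and new ref tips) and applies a decision function; that function can be sketched as follows, with the blocked segment an assumption to extend for other editor-automation directories:

```python
# Path policy for a pre-receive hook. The surrounding hook script would
# gather changed paths per pushed ref (git diff --name-only <old> <new>)
# and feed them here. Matching ".vscode" as a path segment anywhere, not
# just at the root, also catches task files planted in nested directories.
BLOCKED_SEGMENT = ".vscode"

def rejected_paths(changed_paths: list[str]) -> list[str]:
    """Return the subset of changed paths that violate the policy."""
    return [
        path for path in changed_paths
        if BLOCKED_SEGMENT in path.split("/")
    ]

def push_allowed(changed_paths: list[str]) -> bool:
    """Accept the push only when no changed path is blocked."""
    return not rejected_paths(changed_paths)
```

Pairing this server-side check with the client-side `.gitignore` entry covers both accidental commits and deliberate attempts to smuggle workspace configurations upstream.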
