Every red team operator eventually learns the same lesson the hard way. You spend weeks building out your command and control infrastructure, configuring your implants, setting up redirectors, testing your callbacks. The engagement kicks off. Everything works. Then some analyst at the target org drops your teamserver IP into a browser, gets back a blank page or a connection refused, and suddenly your infrastructure is burned.
The problem isn't your C2 framework. The problem is that your redirector is either too dumb or too complicated. Most setups fall into one of two camps: an Apache mod_rewrite rule that checks the User-Agent and calls it a day, or a sprawling Terraform deployment that takes longer to provision than the actual engagement.
I wanted something in between. Something that validates traffic properly, runs at the edge with zero server management, and can import a C2 profile without me hand-translating every URI into a rewrite rule. So I did what any reasonable person would do... partnered up with Claude Opus 4.6 and built my own. I know how that sounds. Keep reading.
## What it actually does
Oblique Relay is a Cloudflare Workers script that sits between your implants and your teamserver. Every inbound request runs through a validation pipeline. If the request looks like it came from your implant, it passes through to the backend. If it doesn't, the request hits a decoy response. That's either a 302 redirect to Wikipedia or whatever legitimate site you configure, or a boring static page that says nothing.
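The decoy branch is simple enough to sketch. In this hedged version, DECOY_URL matches the wrangler secret listed later in the post, but the fallback markup and the exact branching logic are my assumptions, not the actual Oblique Relay source:

```javascript
// Sketch of the decoy path: redirect to a legitimate site if configured,
// otherwise serve a bland static page. Exact logic is assumed.
function decoy(env) {
  if (env.DECOY_URL) {
    // Bounce unvalidated traffic to a legitimate-looking destination.
    return Response.redirect(env.DECOY_URL, 302);
  }
  // Fallback: a boring page that reveals nothing about the backend.
  return new Response("<!doctype html><title>OK</title><p>It works.</p>", {
    status: 200,
    headers: { "Content-Type": "text/html; charset=utf-8" },
  });
}
```

Either way, the analyst poking at your infrastructure sees something that looks like any other website.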
The validation pipeline checks method, path prefix, required headers, User-Agent pattern, source country, and time window. All six checks have to pass. The first failure kicks the request to the decoy path. There's no partial credit.
The whole thing runs on Cloudflare's edge network, which means your implant callbacks resolve to Cloudflare IP space. Not your VPS. Not your cloud instance. Cloudflare. The same IP ranges that serve millions of legitimate websites. Good luck writing a firewall rule for that without breaking half the internet.
Step by step, that means a request has to clear six gates in order: method, path prefix, required headers, User-Agent, geography, and time window. The first failure ejects it to the decoy path.
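As a rough sketch, the pipeline might look like this in Worker code. Function names and profile field access are my assumptions based on the profile JSON shown in the next section, not the actual source; geo_deny and jitter are omitted for brevity:

```javascript
// Illustrative six-gate pipeline; all names are assumptions.
function insideWindow(win, now) {
  // Compare HH:MM strings in the profile's configured timezone.
  const hhmm = now
    .toLocaleTimeString("en-GB", { timeZone: win.tz, hour12: false })
    .slice(0, 5);
  return hhmm >= win.start && hhmm <= win.end;
}

function validate(request, profile) {
  const path = new URL(request.url).pathname;
  const gates = [
    () => profile.methods.includes(request.method),
    () => profile.paths.some((p) => path.startsWith(p)),
    () => Object.entries(profile.headers).every(([name, pattern]) =>
      new RegExp(pattern).test(request.headers.get(name) || "")),
    () => new RegExp(profile.ua_pattern).test(request.headers.get("User-Agent") || ""),
    () => profile.geo_allow.includes(request.headers.get("CF-IPCountry")),
    () => insideWindow(profile.time_window, new Date()),
  ];
  // All six must pass; the first failure sends the request to the decoy.
  return gates.every((gate) => gate());
}
```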
## The profile system
Here's where things get interesting. Most redirectors have their filtering rules baked into config files that require a redeploy to change. Oblique Relay stores its profile in Cloudflare KV, which means you can update it at runtime through an authenticated API call. No redeploy, no downtime, no re-running your CI pipeline.
The profile is a JSON document that controls the entire validation pipeline:
```json
{
  "paths": ["/api/v1/status", "/api/v1/data"],
  "methods": ["GET", "POST"],
  "headers": {"X-Request-ID": "^[a-f0-9-]{36}$"},
  "ua_pattern": "Mozilla/5\\.0",
  "geo_allow": ["US", "CA"],
  "geo_deny": ["CN", "RU"],
  "time_window": {"start": "08:00", "end": "22:00", "tz": "America/New_York"},
  "jitter_ms": 100,
  "backends": {"/content": "https://alt-teamserver.example.com"}
}
```
Geo-fencing uses the CF-IPCountry header that Cloudflare attaches to all requests for free. Time windowing lets you restrict callbacks to business hours so your implant traffic blends with normal web activity. Jitter adds a random delay before responding to make timing analysis harder. And the backends map lets you route different path prefixes to different teamservers, which is useful when you're running multiple C2 channels on one redirector.
The profile lives in Cloudflare KV and is read on each request. You update it through the operator API, and the worker picks up changes immediately with no redeploy.
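An operator-side update could look something like this. The X-Auth-Token header matches the curl examples elsewhere in the post, but the /__profile endpoint path and the PUT method are assumptions, since the post only shows the import and sessions endpoints:

```javascript
// Hypothetical runtime profile update; endpoint and method are assumed.
async function pushProfile(relayUrl, secret, profile) {
  const res = await fetch(`${relayUrl}/__profile`, {
    method: "PUT",
    headers: {
      "X-Auth-Token": secret,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(profile),
  });
  if (!res.ok) throw new Error(`profile update failed: ${res.status}`);
  return res.json();
}
```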
But the feature I'm most proud of is the profile import system.
## Importing C2 profiles directly
If you've ever set up a redirector for Cobalt Strike, you know the pain. You write your malleable profile, get your URIs and headers and user-agent dialed in, and then you have to manually replicate all of that in your redirector config. It's tedious, error-prone, and every mismatch between the two configs is a potential detection opportunity.
Oblique Relay has parsers that read native C2 profile formats and translate them into the relay's normalized schema. You POST your malleable profile to the import endpoint and it extracts the URIs, methods, headers, and user-agent into the validation pipeline automatically.
```bash
curl -X POST -H "X-Auth-Token: $SECRET" \
  -d @malleable.profile \
  "https://relay.example.com/__profile/import"
```
It supports Cobalt Strike malleable profiles, Sliver HTTP configs, Mythic JSON profiles, Havoc's yaotl format, and PoshC2's config format. Each parser only extracts request headers, never response headers, because the relay only needs to validate what the implant sends, not what the teamserver returns.
It also has a dry run mode. Pass ?dry_run=true and it shows you what the parsed profile would look like without actually saving it. Useful for sanity checking before you commit to a config change mid-engagement.
All extracted strings get run through regex escaping before they become part of the validation rules. This matters because C2 profiles can contain arbitrary strings, and you don't want a path like /api/v1/status(check) accidentally becoming a regex capture group in your redirector. It's one of those boring defensive coding things that only matters when it prevents something terrible.
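A typical escape helper looks like this; the project's actual implementation may differ:

```javascript
// Backslash-escape every character that regex engines treat as special,
// so extracted C2 strings always match literally.
function escapeRegex(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// "/api/v1/status(check)" now matches literally instead of
// becoming a capture group:
new RegExp("^" + escapeRegex("/api/v1/status(check)") + "$")
  .test("/api/v1/status(check)"); // true
```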
## Session tracking
One of the problems with traditional redirectors is that they're stateless. A request comes in, it either passes or bounces, and that's it. You have no visibility into patterns across requests. Is this the same implant calling back repeatedly? Did the request count from a particular IP just spike in a way that suggests someone is probing your infrastructure?
Oblique Relay uses Cloudflare Durable Objects to maintain per-session state. Sessions are keyed by an X-Session-ID header if present, falling back to the source IP. Each session tracks request count, first and last seen timestamps, and the last 100 paths accessed with timestamps.
If a session exceeds 1,000 requests, it trips an auto-flag. This is a simple heuristic, but it catches the obvious case where someone is fuzzing your redirector trying to find valid paths. Sessions expire after 24 hours of inactivity via Durable Object alarms, and a KV-based index lets you list all active sessions without having to enumerate every DO instance.
```bash
# list active sessions
curl -H "X-Auth-Token: $SECRET" https://relay.example.com/__sessions

# drill into a specific session
curl -H "X-Auth-Token: $SECRET" https://relay.example.com/__sessions/abc-123
```
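Under the hood, the per-session Durable Object might look roughly like this. Only the 1,000-request flag, the 100-path history, and the 24-hour alarm come from the behavior described above; the class shape, storage keys, and method bodies are illustrative:

```javascript
// Sketch of a session-tracking Durable Object (exported from the Worker
// module in practice). All names here are assumptions.
class Session {
  constructor(state) {
    this.state = state;
  }

  async fetch(request) {
    const data = (await this.state.storage.get("session")) || {
      count: 0,
      firstSeen: Date.now(),
      paths: [],
    };
    data.count += 1;
    data.lastSeen = Date.now();
    data.paths.push({ path: new URL(request.url).pathname, at: data.lastSeen });
    if (data.paths.length > 100) data.paths.shift(); // keep the last 100 paths
    data.flagged = data.count > 1000;                // probing heuristic
    await this.state.storage.put("session", data);
    // Push the 24h inactivity alarm forward on every request.
    await this.state.storage.setAlarm(Date.now() + 24 * 60 * 60 * 1000);
    return new Response(JSON.stringify(data));
  }

  async alarm() {
    // Expire the idle session.
    await this.state.storage.deleteAll();
  }
}
```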
This is the kind of operational awareness that's hard to get from a stateless redirector. When you're running a multi-week engagement, knowing that a new IP started hitting your redirector 30 minutes after the blue team's shift change is the difference between staying operational and getting burned.
## The security model
There are two separate trust boundaries in the system, and keeping them separate was a deliberate decision.
Operator endpoints like health checks, profile management, metrics, and sessions are protected by PROFILE_SECRET. You send the token in an X-Auth-Token header, it's validated using HMAC-SHA256 (timing-safe, not a naive string compare), and you're in. Standard stuff.
Implant traffic is validated by the profile pipeline only. There's no shared secret on this path. This might seem like a gap until you think about how C2 frameworks actually work. Cobalt Strike, Sliver, Mythic, Havoc, PoshC2... none of them inject custom auth headers into their HTTP callbacks by default. If you required an auth token on the implant path, you'd reject real callbacks out of the box.
Instead, the profile itself is the security boundary. An attacker would need to know the exact C2 configuration to craft requests that pass validation. And even if they somehow did, the backend C2 server has its own encryption and key exchange. Proxied garbage is just noise at the next layer.
Operators who want belt-and-suspenders can add required headers to the profile. Set "headers": {"X-Custom": "^secret-value$"} and configure the same header in your C2 framework. It's framework-dependent and requires touching both configs, but the option is there.
## Testing it for real
I don't trust security tools that only have unit tests against mocked responses. Mocks prove your code works against your assumptions. They don't prove your code works against reality.
Oblique Relay has 146 unit tests across 12 files, and those are fine for catching regressions. But the real confidence comes from the E2E suites.
The Sliver E2E suite starts a real Sliver server, generates a real implant, imports the real Sliver C2 profile into the relay, compiles the beacon, runs it through the relay, and verifies that the beacon registers and executes tasks. Not simulated tasks. Real whoami commands and file downloads, with the traffic flowing through the relay the entire time.
The Mythic suite is even more involved. It stands up the full multi-container Mythic stack: postgres, rabbitmq, the Mythic server, Hasura GraphQL, nginx, the HTTP C2 profile container, and the Poseidon agent. It authenticates via the REST API, generates a Poseidon payload with the relay as the callback host, runs the agent, and verifies task execution through the database. This takes a few minutes to spin up, but when those 33 tests pass, you know the relay actually works with Mythic in production, not just in theory.
## Running it yourself
The whole project is plain JavaScript. No TypeScript, no build step, no bundler. You can read the entire worker script and understand exactly what it does. That was intentional. When your redirector is sitting between your implants and your teamserver during a live engagement, you need to be able to audit the code quickly and trust it completely.
```bash
git clone https://github.com/errantpacket/Oblique-Relay.git
cd Oblique-Relay
npm install

# set your secrets
wrangler secret put BACKEND_URL
wrangler secret put DECOY_URL
wrangler secret put PROFILE_SECRET

# deploy
npm run deploy
```
The repo also includes a local HTML dashboard in tools/dashboard.html for managing the relay from your browser. Five tabs covering health, profile management, metrics with charts, filterable logs, and session details with path history. Zero dependencies. Open the file, point it at your relay URL, and you're operational.
## Why Cloudflare Workers
Why not a VPS with nginx? Why not a Lambda function? Why Workers specifically?
Three reasons. First, Workers run at the edge, which means your implant traffic terminates at whatever Cloudflare POP is closest to the implant, not at a single server with a single IP that can be blocked. Second, the execution model is perfect for a redirector. Each request is an independent invocation with a 30-second wall clock timeout. There's no long-running process to crash, no state to corrupt, no server to patch. Third, the ecosystem gives you KV for config and logging, Durable Objects for session tracking, and native access to Cloudflare headers like CF-IPCountry, all without running any infrastructure.
The tradeoff is that Workers can only make HTTP/HTTPS requests. You can't proxy raw TCP or DNS through a Worker. This means Oblique Relay is specifically an HTTP redirector. If your C2 framework uses something other than HTTP callbacks, this isn't the right tool. But for the vast majority of engagements where the implant is calling back over HTTPS, which is almost all of them, it fits perfectly.
## Adding your own C2 parser
The parser system was designed from the start to be extended. Every parser is a single JS file that exports a name and a parse function. The name maps to the ?format= query parameter on the import endpoint. The parse function takes raw config text and returns a normalized profile object. That's the entire contract.
The repo includes a detailed guide at docs/adding-a-parser.md that walks through the full process: creating the parser module, registering it in the index, writing the required test coverage, and the security requirements around regex escaping and request-vs-response header separation. That last point trips people up more than anything else. C2 configs define headers in both directions, but the relay only cares about what the implant sends, not what the teamserver responds with. Include response headers in your parser output and the relay rejects every request, because implants don't send those headers. The guide calls this out explicitly because it's the most common parser bug.
The repo includes a Claude Code slash command at .claude/commands/ that scaffolds a new parser for you. If you're using Claude Code for development, you can run the command and it will generate the parser file, test file, and registration boilerplate following all of the project's conventions. It knows about the security requirements, the test coverage categories, and the normalized schema. It's the fastest way to go from "I want to add support for framework X" to a working parser with tests.
The checklist for a new parser is straightforward: create the parser module with escapeRegex() on all extracted strings, register it in the index, write tests covering the happy path, path normalization, header escaping, invalid input rejection, and dry run mode. Run npm test and npm run lint. Update the CLAUDE.md. That's it.
## Wrapping up
Red team infrastructure doesn't need to be complicated, but it does need to be deliberate. The days of pointing your implant at a bare VPS and hoping nobody looks too closely are over. Defenders are smarter, detection is faster, and the margin for sloppy OPSEC keeps shrinking.
Oblique Relay is one piece of that puzzle. It handles the part where your callbacks hit the internet and something has to decide whether to let them through or send back a convincing nothing. It does that job on infrastructure you don't have to manage, behind IP space you can't be singled out on, with a validation pipeline that matches your actual C2 config instead of a hand-rolled approximation of it.
If you're running red team engagements and your redirector is still an Apache mod_rewrite rule, give Oblique Relay a look. If you find bugs, open an issue. If you write a parser for a framework I haven't covered, send a PR.