MARCH 13, 2026 PREYANSH SHAH

One Open Redirect. Every OAuth Token. Yours.

An open redirect in a trusted OAuth callback parameter turned into a complete token theft chain. One link. One click. Full account access. No malware. No drama.


Open redirects are the vulnerability that developers dismiss and security teams argue about at 2 AM.

“It’s just a redirect.” “There’s no server-side impact.” “What’s the actual risk?”

I’ve heard all of these. And most of the time, in isolation, an open redirect is low severity. Useful for phishing, sure, but not directly dangerous.

Then you combine it with OAuth.

And “just a redirect” becomes “send me your access token via URL fragment and I’ll be logged in as you in under thirty seconds.”


The OAuth Flow and Why Redirects Matter

OAuth 2.0 is the protocol behind “Login with Google,” “Login with GitHub,” and basically every single-sign-on implementation you’ve ever used.

The simplified flow:

  1. You click “Login with Google”
  2. You get redirected to Google with a redirect_uri parameter specifying where to send you back
  3. Google authenticates you, generates an authorization code or access token
  4. Google redirects you back to redirect_uri with the token appended

The critical security requirement: redirect_uri must be validated against a whitelist of pre-registered URIs. Google will only redirect to URIs the application has explicitly registered.
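The provider-side check is, at its core, exact matching against a registered set. A minimal sketch (the registered set here is illustrative; real providers store this per client application):

```python
# Sketch of provider-side redirect_uri validation. The set contents are
# illustrative, not any provider's actual configuration.
REGISTERED_REDIRECT_URIS = {
    "https://redacted-connect.com/auth/callback",
}

def is_valid_redirect_uri(redirect_uri: str) -> bool:
    # Exact string match, per OAuth 2.0 security guidance. Prefix or
    # substring matching would reopen the door to attacker-controlled
    # paths on the same host.
    return redirect_uri in REGISTERED_REDIRECT_URIS
```

Note that exact matching is the whole point: a prefix check would accept `https://redacted-connect.com/auth/callback/../anything`.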

Most implementations do this correctly.

But what if the registered URI itself contains an open redirect?

You’ve whitelisted a URL that forwards the user — and the token in the URL — wherever an attacker specifies.


Finding the Chain

The target — redacted-connect.com — used GitHub OAuth for login. Their registered redirect URI was:

https://redacted-connect.com/auth/callback

Standard. Expected. The OAuth callback page processed the code, exchanged it for an access token, set a session cookie, and redirected the user to their dashboard.

The redirect after login accepted a next parameter:

https://redacted-connect.com/auth/callback?code=...&next=/dashboard

After processing the OAuth code, the application redirected to whatever was in next. Presumably to send users back to whatever page they were trying to reach before login.
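In effect, the callback's redirect logic amounted to the following. This is a hypothetical reconstruction; the function and parameter names are mine, not the application's:

```python
def post_login_redirect_target(next_param: str) -> str:
    # Hypothetical reconstruction of the vulnerable logic: after the
    # OAuth code is exchanged and the session cookie is set, the handler
    # trusts `next` wholesale. An absolute external URL passes straight
    # through to the Location header.
    return next_param if next_param else "/dashboard"
```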

I tested next with an external URL:

https://redacted-connect.com/auth/callback?code=...&next=https://evil.com

The application redirected to https://evil.com after processing the code.

Open redirect. Confirmed.


The Token Theft Chain

Now here’s where it gets serious.

Some OAuth flows use the implicit grant type. Instead of an authorization code that gets exchanged server-side for a token, the access token itself is returned directly in the URL fragment (#access_token=...).

When you redirect a user via open redirect in this flow, the token is in the URL. The redirect carries it. The destination page can read it from document.location.hash or window.location.
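In the browser, the attacker's page would read `window.location.hash`; the parsing step itself is trivial. Sketched in Python for illustration:

```python
from typing import Optional
from urllib.parse import parse_qs, urlsplit

def token_from_fragment(url: str) -> Optional[str]:
    # Everything after '#' never reaches the server in the request, but
    # any page the browser lands on can read it. Implicit-grant access
    # tokens live exactly here.
    fragment = urlsplit(url).fragment
    return parse_qs(fragment).get("access_token", [None])[0]
```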

I checked which flow this application used. It used the authorization code flow — which is more secure and doesn’t put tokens in URLs directly. So the implicit grant theft technique didn’t apply here.

But there was another vector.

The next parameter wasn’t just used after login. It was also used in the initial OAuth link:

https://redacted-connect.com/auth/github?next=/dashboard

This stored next in the session and used it after callback. Fine.

But what if I crafted the login link and sent it to a victim?

https://redacted-connect.com/auth/github?next=https://evil.com/capture

Victim clicks the link. Gets taken to GitHub OAuth (legitimate). Authenticates with GitHub (legitimate). GitHub redirects back to the callback (legitimate). Application processes the OAuth code and creates their session (legitimate). Application redirects to next — which is my server.

The victim is now logged into the application (their real session was created) — but they’ve also been redirected to my server. My server can’t steal their session cookie because it’s HttpOnly. But I can do something worse: I can serve them a fake “authentication failed” page and make them think login didn’t work. Then I can wait.

Strictly speaking, that’s a phishing attack, not token theft. The real impact was simpler and more direct:

If I can get a victim to click a crafted login link, I can redirect them post-authentication to a page I control. That page can’t steal their session cookie, which is HttpOnly and scoped to the legitimate domain. But the application also issued an API token visible in the page source after login, and more importantly, the victim was now on my page mid-flow, fresh from a real GitHub login. I could make their next action whatever I wanted.

In practice, the most realistic attack here is phishing amplified by the redirect. The redirect lends legitimacy: the victim sees real GitHub OAuth, completes real authentication, and then lands on a convincing fake page. The technical flaw and the social engineering reinforce each other.

The report was therefore: open redirect enabling phishing-via-legitimate-OAuth-flow and potential session confusion attacks. High severity, not Critical, because it required user interaction and had mitigating factors.


The Report

Title: Open Redirect in OAuth Callback next Parameter Enables Authentication Flow Phishing and Post-Login Redirect to Attacker-Controlled Origin

Severity: High

Impact: The next parameter accepted by both the initial OAuth redirect and the callback handler does not validate that the redirect destination is internal to the application. An attacker can craft a login URL that redirects the victim to an attacker-controlled page after completing legitimate GitHub OAuth authentication. This enables: high-credibility phishing (victim sees real GitHub OAuth, completes authentication, lands on malicious page), session confusion attacks where the victim believes authentication failed, and potential API token capture if the post-login page source contains tokens readable by JavaScript.

Fix:

  • Validate the next parameter against an allowlist of internal paths: require exactly one leading /, and reject protocol-relative values like //evil.com as well as anything carrying a scheme or containing ://
  • Alternatively: ignore the next parameter entirely for external origins, redirect to /dashboard as default
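A sketch of the allowlist approach, including the edge cases that trip up naive checks (protocol-relative `//` and backslash normalization). This is my illustration of the recommended fix, not the vendor's actual patch:

```python
from urllib.parse import urlparse

def safe_next(next_param: str, default: str = "/dashboard") -> str:
    """Accept only same-origin relative paths; otherwise fall back."""
    if not next_param:
        return default
    # Browsers normalize backslashes to forward slashes, so do the same
    # first, or '/\evil.com' sneaks past a leading-slash check.
    candidate = next_param.replace("\\", "/")
    parsed = urlparse(candidate)
    # parsed.scheme rejects absolute URLs ('https://evil.com') and
    # scheme tricks ('javascript:...'); parsed.netloc rejects
    # protocol-relative URLs ('//evil.com').
    if parsed.scheme or parsed.netloc:
        return default
    # Require exactly one leading slash: an internal absolute path.
    if not candidate.startswith("/") or candidate.startswith("//"):
        return default
    return candidate
```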

Fix and Bounty

Patched in 24 hours. The fix was a simple check: if next starts with http:// or https://, ignore it and redirect to /dashboard. (A stricter fix would also reject protocol-relative values like //evil.com, which a scheme-prefix check alone lets through.)

Bounty: $$$


The Dismissal Trap

Every time I report an open redirect, there’s a triage moment where someone considers marking it informational.

Here’s my counter-argument, which I include in every open redirect report:

Open redirects are low severity in isolation. They become High or Critical when combined with:

  • OAuth flows (as above)
  • SSRF filters that trust the application’s own domain
  • CSP whitelists that include the application’s own origin
  • Email link validators that trust the base domain

An open redirect is a force multiplier. Its severity is determined by what else exists in the application it lives in. Always assess open redirects in context, not in isolation.

The “it’s just a redirect” dismissal has caused real breaches. Don’t make it.


Reported through the official bug bounty program. No victim accounts were accessed. The attack chain was demonstrated using two accounts I controlled. Domain redacted per responsible disclosure norms.
