FEBRUARY 18, 2026 PREYANSH SHAH

Blind SQLi, 47 Hours, and a Database Full of Secrets Nobody Knew Were There

A time-based blind SQL injection that started with a one-second delay and ended with 200,000 user records, internal API keys, and a very uncomfortable conversation with their CTO.


I want to start with a number: 47 hours.

That’s how long this took from first suspicious query to full proof-of-concept demonstrating database access.

Not because the vulnerability was hard to find. It wasn’t. The first signs showed up in about twenty minutes.

It took 47 hours because blind SQL injection is slow, methodical, and deeply unglamorous work. There’s no dramatic “access granted” screen. No password prompt that suddenly opens. Just you, a timing oracle, and an almost religious patience for one-second delays.

This is that story.


First, Some Context on SQL Injection in 2026

People think SQLi is dead.

Automated scanners catch it. Frameworks use parameterized queries by default. ORMs abstract away raw SQL. The OWASP Top 10 has been screaming about it since 2003.

And yet.

Here’s what actually happens in production applications: the main application uses an ORM. Clean. Parameterized. Fine. But somewhere in the codebase — usually in a reporting feature, a search function, an admin filter, or a legacy endpoint that predates the ORM — someone wrote raw SQL. And then forgot about it.

That forgotten corner is where I found this.


The Target

Redacted-analytics.com — an analytics and reporting platform for e-commerce businesses. They track sales data, customer behavior, conversion rates, that kind of thing.

The program had been open for about eighteen months. They had a large scope, good response times historically, and a specific note saying that database-level access would be treated as Critical regardless of how it was achieved.

I’d been on this program for two weeks with nothing significant. Bunch of informational findings. One medium-severity misconfiguration. Nothing exciting.

Then I found the search function.


The Search Function That Started Everything

The platform had a “custom report builder” feature. You could filter your analytics data by various parameters — date ranges, product categories, customer segments — and generate reports.

Most of these filters were dropdown selections. Safe. Boring. Parameterized.

But one filter was a free-text search field labeled “Customer Tag”. You could type a tag name and it would filter your customer list to show only customers with that tag.

I typed a normal tag: premium

Worked fine. Returned a customer list filtered by that tag.

I typed: premium'

A single apostrophe. The oldest trick in the book.

The page returned a 500 error.

Not a validation error. Not a “tag not found” message. A raw 500 Internal Server Error with a stack trace visible in the response.

Part of the stack trace said:

PG::SyntaxError: ERROR: unterminated quoted string at or near "'premium''"
LINE 1: ...WHERE customer_tags LIKE '%premium''%' AND workspace_id...

PostgreSQL. Raw LIKE query. String concatenation. No parameterization.

There it was. Twenty minutes in.


Confirming Injection

Stack traces make life easy. I could see the query structure. Let me reconstruct what was happening:

SELECT * FROM customers 
WHERE customer_tags LIKE '%[INPUT]%' 
AND workspace_id = [WORKSPACE_ID]
ORDER BY created_at DESC

My input was being dropped directly into the LIKE clause. No escaping. No parameterization. Just string concatenation.

I confirmed injection with a basic test:

premium' AND '1' LIKE '1

This closes my input string and adds a condition whose final literal is completed by the query's own trailing %', so the SQL stays valid and the condition evaluates true. If it returned the same results as premium — injection confirmed.

It did.

Then:

premium' AND '1' LIKE '2

A false condition — '1' never matches the pattern '2%'. It should return no results.

It returned no results.

Classic boolean-based injection confirmed. I could ask the database true/false questions through the search results.
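To make the mechanics concrete, here is a minimal sketch of the kind of string concatenation that produces this behavior. The function name and structure are illustrative assumptions, not the target's actual code:

```python
# Hypothetical reconstruction of the vulnerable query builder.
# The application presumably does something equivalent to this raw
# interpolation (names are illustrative, not from the real codebase).

def build_query(tag: str, workspace_id: int) -> str:
    return (
        "SELECT * FROM customers "
        f"WHERE customer_tags LIKE '%{tag}%' "   # <-- raw interpolation
        f"AND workspace_id = {workspace_id} "
        "ORDER BY created_at DESC"
    )

# A benign tag stays inside the string literal:
print(build_query("premium", 42))

# An injected payload closes the literal and adds its own condition;
# the query's trailing %' completes the attacker's final string:
print(build_query("premium' AND '1' LIKE '1", 42))
```

Everything the attacker types lands inside the SQL text itself, which is the whole problem.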


Why Blind Matters

Here’s where it gets slower.

I couldn’t see the actual database output. The query runs, it either returns customers or it doesn’t, but the data I’m asking about doesn’t appear on screen.

This is blind injection. I can’t do UNION SELECT password FROM users and see the passwords. I have to ask one bit at a time.

“Is the first character of the admin password greater than ‘M’?” — returns results: yes. “Is it greater than ‘S’?” — no results: no. “Is it ‘P’?” — returns results: maybe, keep narrowing.

Binary search. One character at a time. For every piece of data I want to extract.

This is why it took 47 hours.
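The per-character binary search described above can be sketched in a few lines. The oracle here is mocked for illustration; in reality every call is one injected HTTP request:

```python
def extract_char(oracle, position: int) -> str:
    """Recover one character through a true/false oracle.

    oracle(position, threshold) answers questions equivalent to
    "ascii(substr(target, position, 1)) > threshold" -- in practice,
    one injected request per question.
    """
    lo, hi = 32, 126                  # printable ASCII range
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(position, mid):     # "is the char code > mid?"
            lo = mid + 1
        else:
            hi = mid
    return chr(lo)

# Mock oracle standing in for the injected query, for illustration only.
secret = "users"
def mock_oracle(pos: int, threshold: int) -> bool:
    return ord(secret[pos]) > threshold

recovered = "".join(extract_char(mock_oracle, i) for i in range(len(secret)))
print(recovered)   # recovers "users", ~7 questions per character
```

Roughly log2(95) ≈ 7 questions per character in theory; real oracles are noisier and need more.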

But I’m getting ahead of myself. Before I started extracting data, I needed to understand what I was working with. And I found something that made boolean-based injection look slow.


Time-Based Injection: Asking Questions in Milliseconds

PostgreSQL has a function called pg_sleep(). It makes the database wait for a specified number of seconds before responding.

If I can inject pg_sleep(5) and the response takes 5 seconds longer than usual — I have time-based injection. And time-based injection lets me ask binary questions much more efficiently.

I tested:

premium' AND 1=(SELECT 1 FROM pg_sleep(5))--

The page took 5.3 seconds to respond instead of the usual 0.2.

I had time-based injection.

Now I could ask questions with measurable, reliable timing:

-- Does the database have a table called 'users'?
' AND (SELECT CASE WHEN (SELECT COUNT(*) FROM information_schema.tables
WHERE table_name='users')>0 THEN (SELECT 1 FROM pg_sleep(3)) ELSE 1 END)=1 --

If the response takes 3+ seconds: yes. If it responds normally: no.

The response took 3.4 seconds.

There was a users table.
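The timing oracle itself reduces to measuring response latency against a threshold. A hedged sketch, with the transport mocked and the delay shortened for the demo (the real payload used pg_sleep(3) over HTTP):

```python
import time

SLEEP_SECONDS = 0.5    # stands in for pg_sleep(3); shortened for the demo
THRESHOLD = 0.25       # well above baseline latency, well below the delay

def ask(send_request, condition: str) -> bool:
    """Return True if the conditional delay fired, i.e. the condition held."""
    start = time.monotonic()
    send_request(condition)
    return (time.monotonic() - start) >= THRESHOLD

# Mock transport for illustration only; in reality this is the injected
# HTTP request, and the delay comes from pg_sleep inside the query.
def mock_send(condition: str) -> None:
    time.sleep(SLEEP_SECONDS if condition == "true" else 0.01)

print(ask(mock_send, "true"))    # True  -> the delay fired
print(ask(mock_send, "false"))   # False -> normal response time
```

The threshold must sit comfortably between baseline jitter and the injected delay, or network noise flips answers.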


Mapping the Database

I spent the first several hours just understanding the schema. Not extracting data — just understanding what was there.

Table names first. Then column names. Then row counts to understand scale.

What I found:

The users table had approximately 200,000 rows.

The columns included: id, email, password_hash, password_salt, created_at, updated_at, plan_type, api_key, stripe_customer_id, two_factor_secret.

I want you to read that column list again.

api_key. In the users table. Stored in plaintext. two_factor_secret. The TOTP seed for two-factor authentication. stripe_customer_id. Billing identifiers that, combined with Stripe API access, could expose payment methods.

This wasn’t just a password hash dump situation. This was a complete user profile table with credentials, API keys, and 2FA secrets all in one place.

I kept mapping. There was also:

A table called internal_api_keys with 12 rows. Column names: service_name, api_key, environment, created_by.

Internal. API. Keys. For third-party services. Stored in the database.

I extracted the service_name values using binary search. Each character took about 15 time-based queries. For 12 rows with service names averaging 10 characters each — roughly 1,800 queries just for the service names.
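That arithmetic generalizes into a quick cost estimator. The seconds-per-query figure below is my assumption (true answers incur the injected delay, false ones return at baseline, so the average lands in between); the query counts are the ones from this engagement:

```python
def extraction_cost(rows: int, avg_chars: int, queries_per_char: int,
                    avg_seconds_per_query: float):
    """Rough query count and wall-clock cost for blind extraction.

    Binary search over printable ASCII needs ~7 questions per character
    in theory; noisy timing oracles need repeats and confirmation runs,
    which is presumably why ~15 per character were needed here.
    """
    queries = rows * avg_chars * queries_per_char
    return queries, queries * avg_seconds_per_query

queries, seconds = extraction_cost(rows=12, avg_chars=10,
                                   queries_per_char=15,
                                   avg_seconds_per_query=1.5)
print(queries)          # 1800 queries for the service names alone
print(seconds / 3600)   # fractions of an hour, before retries and pacing
```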

What came back over the next several hours:

  • stripe_secret (their Stripe secret key, stored in the DB instead of environment variables)
  • sendgrid_api (their email service key)
  • aws_access_key (an AWS access key stored in the database)
  • slack_webhook (internal Slack notification webhook)
  • twilio_auth (SMS service credentials)
  • Several others I won’t name

An AWS access key. In the database. Accessible via SQL injection.

This was now materially worse than the SSRF finding from two weeks prior at a different target. That one required chaining two vulnerabilities and a headless browser to reach AWS credentials. This one had AWS credentials sitting in a PostgreSQL table, reachable through a search box on a reporting dashboard.


The Decision Point

It was now around hour 30.

I had confirmed: a users table with 200k records containing API keys and 2FA secrets, an internal credentials table with production third-party API keys including AWS, all accessible via a single blind injection point in a search field.

I had not extracted any actual credential values. I had extracted table names, column names, row counts, and service name strings. All of which I needed to make the report credible. None of which gave me actual access to anything.

The decision I made: stop here. Write the report now.

Here’s my thinking. I had more than enough to prove critical impact. Extracting actual credential values — even for a proof-of-concept — crosses a line I’m not comfortable with. Knowing that stripe_secret exists in column api_key of table internal_api_keys is enough. I don’t need the actual key value.

This is a principle I hold pretty firmly: demonstrate the capability for access, not the exercise of it. Show the vulnerability is real and the impact is severe. Stop before you actually access things you’re not supposed to access.

Some researchers disagree with this approach. They argue that extracting a sanitized credential value (like showing only the first few characters) provides more compelling evidence.

Maybe. But I sleep better this way.


The Report

This was a 2,400-word report. My longest ever.

Title: Time-Based Blind SQL Injection in Customer Tag Search Parameter — Confirmed Access to users Table (200k Records) and internal_api_keys Table Containing Production Third-Party Credentials

Severity: Critical

CVSS Score: 9.8 (I included a full CVSS calculation because of how seriously this needed to be treated)

Executive Summary: A time-based blind SQL injection vulnerability exists in the Customer Tag search parameter of the report builder feature. The vulnerable parameter directly concatenates user input into a raw PostgreSQL LIKE query without parameterization. Exploitation allows an authenticated attacker to enumerate the full database schema and extract arbitrary data through timing-based binary inference. Confirmed findings include: a users table with approximately 200,000 records containing plaintext API keys and TOTP 2FA secrets; an internal_api_keys table containing production credentials for third-party services including Stripe, SendGrid, AWS, Twilio, and others.

Impact (Detailed):

  • Full read access to all user records including email addresses, password hashes, API keys, and 2FA secrets
  • Access to production third-party API credentials enabling: unauthorized Stripe API calls, email sending via SendGrid, AWS resource access with permissions of the stored IAM user, SMS sending via Twilio
  • Complete database schema enumeration
  • Depending on database user permissions: potential write access, table modification, or (in PostgreSQL) file system access via COPY TO/FROM

Steps to Reproduce: (I included a full reproduction guide with the exact payloads, timing methodology, and expected response times)

Proof of Concept: (Included 47 timestamped HTTP request/response pairs showing the timing oracle in action, the table enumeration process, and the column extraction — all stopping at structure/metadata, not actual values)

Remediation:

  1. Immediate: disable or remove the Customer Tag search feature until patched
  2. Short-term: replace raw SQL with parameterized query — a one-line change
  3. Medium-term: audit entire codebase for other raw SQL usage, especially in reporting and admin features
  4. Critical: rotate all credentials in the internal_api_keys table immediately — treat them as compromised regardless of whether they’ve been accessed, because you cannot know if this was discovered by anyone else first
  5. Critical: consider all user API keys in the users table as potentially exposed — notify users and force rotation
  6. Architectural: credentials should never be stored in the application database. Use a secrets manager (AWS Secrets Manager, HashiCorp Vault, etc.)

What Happened Next

I submitted the report at 3:47 AM after 47 hours of work spread across three days.

At 6:15 AM — two hours and twenty-eight minutes later — I got a response from their Head of Security.

Not a triage bot. Not an automated acknowledgment. The Head of Security. Personally.

The message was three sentences: they had confirmed the vulnerability, they were taking the search feature offline immediately, and they were convening an emergency response call in ninety minutes.

I was not on that call. But the subsequent timeline of their responses told me a lot about what happened:

Hour 3 after report: Search feature disabled in production. Confirmed via testing.

Hour 8: Patch deployed to staging — LIKE query converted to parameterized statement. I verified the fix via SQL injection test — confirmed remediated.

Hour 12: I received a message asking whether I had extracted any actual credential values. I explained that I had not, and provided the relevant HTTP logs showing exactly what I had and hadn’t accessed.

Hour 24: All internal API keys rotated. They sent me a list of services they’d rotated credentials for, which matched my extracted service names exactly.

Hour 36: Patch deployed to production. Full retest requested.

Hour 48: Retest completed. Injection no longer possible. Report closed.

Bounty: $$$$

There was also a separate bonus. The report included a note that my documentation quality and the explicit decision not to extract actual credential values had been noted and appreciated by their security team.

That meant more to me than the money. Slightly.


The Architectural Problem Nobody Talks About

The vulnerability itself — unsanitized input in a raw SQL query — has a trivial fix. One parameterized query. Ten seconds of work.

The real problem this exposed was architectural and much harder to fix.

Why were production API keys in the database?

This is more common than it should be. Developers store credentials in the database because it’s convenient — it’s already there, it’s already backed up, it’s already accessible to the application. Why set up a separate secrets manager?

Because secrets managers exist to solve exactly this problem. The database is the highest-value target in your application. It’s what attackers are always ultimately trying to reach. Storing your third-party credentials there means a database breach — via SQLi, misconfiguration, backup exposure, or anything else — immediately compromises every external service you use.

Credentials in a database is not a design choice. It’s a liability.

Why were user API keys stored unencrypted?

The user api_key column was plaintext. Not hashed. Not encrypted. Just the raw key value.

API keys should be treated like passwords. Store a hash. When the user makes an API call, hash their submitted key and compare it to the stored hash. An attacker who reads the hash cannot use it to make API calls.
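A minimal sketch of hash-then-compare for API keys, using only the Python standard library (function names are illustrative):

```python
import hashlib
import hmac
import secrets

def new_api_key() -> tuple[str, str]:
    """Return (plaintext key shown to the user once, hash to store)."""
    key = secrets.token_urlsafe(32)
    return key, hashlib.sha256(key.encode()).hexdigest()

def verify_api_key(submitted: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256(submitted.encode()).hexdigest()
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, stored_hash)

plaintext, stored = new_api_key()   # only `stored` goes in the database
print(verify_api_key(plaintext, stored))        # True
print(verify_api_key("stolen-guess", stored))   # False
```

Unlike passwords, randomly generated API keys are high-entropy, so a plain fast hash is generally considered adequate here; low-entropy user passwords still need a dedicated KDF like bcrypt or Argon2.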

This is not advanced security engineering. This is the same lesson we learned about passwords in 2012.

Why were 2FA seeds stored in the application database?

TOTP 2FA secrets are the keys to the kingdom. If you have the seed, you can generate any valid 2FA code. Storing these in the same database as password hashes means 2FA provides no additional protection if the database is breached — an attacker gets both.
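To see why the seed is the whole game, here is the standard TOTP computation (RFC 6238: HMAC-SHA1 over a 30-second counter, 6 digits) in stdlib Python. Anyone holding the seed can run this and produce the currently valid code:

```python
import hashlib
import hmac
import struct

def totp(seed: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Standard TOTP (RFC 6238): HOTP over the current 30-second window."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(seed, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this ASCII seed at t=59 yields 287082.
print(totp(b"12345678901234567890", 59))   # 287082
```

There is no challenge, no server secret, nothing else involved: seed plus clock equals a valid code.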

Ideally: 2FA secrets should be encrypted at rest with a key stored separately from the database. Or delegated entirely to a dedicated authentication service.


On the Ethics of Blind Injection

People sometimes ask how far you should go with blind injection. You’re not accessing the data directly — you’re inferring it through timing. Does that mean it’s okay to extract everything?

My answer: no.

The fact that the technique is indirect doesn’t change the nature of what you’re doing. If I use 3,000 timing requests to extract a live Stripe secret key, I now have a live Stripe secret key. The method doesn’t sanitize the outcome.

I treat blind injection the same as any other read access: demonstrate the vulnerability is real, demonstrate the impact is severe, stop before you actually access sensitive material.

The report should answer: “what could an attacker do?” — not demonstrate everything an attacker could do by actually doing it.


47 Hours Later

I went to sleep after submitting the report.

Eight hours later I woke up to a triaged Critical finding, a disabled feature in production, and a Head of Security who had been awake since 4 AM dealing with an emergency credential rotation.

This is what bug bounty actually looks like sometimes. Not clever one-liners. Not dramatic pivots. Just you, a timing function, 47 hours of binary questions, and the quiet satisfaction of handing someone a problem before it became a catastrophe.

The search field has a parameterized query now.

The API keys are in a secrets manager.

The user API keys are hashed.

And somewhere in their codebase, there’s probably another raw SQL query nobody’s found yet.

That’s fine. I’ll be back.


Reported through the official bug bounty program. Testing was strictly limited to timing-based inference of table names, column names, row counts, and service name strings. No actual credential values, password hashes, email addresses, or user data were extracted at any point. All proof-of-concept materials were shared exclusively with the program team. Domain redacted per responsible disclosure norms.
