If you’re seeing headlines about Google API keys exposed, it’s not just another “someone leaked secrets on GitHub” story. The bigger issue is that keys that used to be treated as public-ish identifiers (especially for Maps or Firebase) can become Gemini-capable credentials when the Gemini/Generative Language API is enabled in the same Google Cloud project.

That means an old AIza… key sitting in your public website code can quietly become a credential attackers can abuse for Gemini access, data exposure, and costly usage, without the original developer ever getting a clear warning at the moment the risk changed.


What Google API keys were originally designed for

Google Cloud API keys (AIza…) were widely used as project identifiers for routing and billing, especially in client-side scenarios like Maps JavaScript embeds. Google’s own documentation and ecosystem historically reinforced that some of these keys are not treated like traditional “secrets,” because access control is expected to come from other layers (like API restrictions and app restrictions).

This is also why you’ll see older sites with a Maps key directly in HTML/JS — it was normal practice for years.

Important distinction: this is not the same thing as a Service Account JSON key (which is a real secret credential and must never be public).


What changed after Gemini arrived

Gemini introduced a new class of endpoints where the stakes are much higher: uploaded files, cached content/context, and billable LLM usage.

The core change is simple but dangerous:

  • When the Gemini / Generative Language API is enabled on a Google Cloud project, existing API keys in that project can gain access to Gemini endpoints, even if those keys were originally created for something like Maps or Firebase.

This creates a security gap because many organizations already have keys deployed publicly under older guidance and patterns.


The two problems behind “Google API keys exposed”

Problem 1: Retroactive privilege expansion

A key can go from “okay to live in public JS” to “Gemini-capable” based on a later project-level change.

Example scenario:

  1. You created a Maps key years ago and embedded it in your front-end (common legacy setup).
  2. Someone enables Gemini in that same Google Cloud project for an internal prototype.
  3. That same public key can now authenticate to Gemini endpoints.

This is why “Google API keys exposed” is such an alarming headline: the risk isn’t only leakage; it’s that the platform’s capabilities changed around existing keys.

Problem 2: Insecure defaults (“Unrestricted” keys)

Google’s own Cloud docs emphasize that API keys are often unrestricted by default, and unrestricted keys are insecure because they can be used by anyone from anywhere.

If a key is unrestricted and Gemini is enabled, you have the perfect storm: a key that works broadly and may already be sitting in public code.


What an attacker can do with a public AIza… key

The abuse pattern is straightforward:

  • Scrape the key from a public page (HTML/JS bundle)
  • Use it to call Gemini endpoints (if enabled and allowed)

If the project has Gemini usage (uploads, cached contents, or active workloads), a stolen key can potentially be used to:

  • Access Gemini-related stored artifacts (uploads/cached contents depending on how the project uses the API)
  • Burn quotas
  • Run up billable usage

Even if the attacker never touches your servers, the key can become the entry point.
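From a defender’s standpoint, the scrape-and-probe pattern above is easy to reproduce. Here is a minimal sketch, assuming the widely used “`AIza` plus 35 URL-safe characters” heuristic for key format and the public `generativelanguage.googleapis.com` models-list endpoint; only probe keys you own or are explicitly authorized to test:

```python
import re

# Common heuristic for Google API keys: "AIza" followed by 35 URL-safe chars.
AIZA_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scrape_keys(html: str) -> set[str]:
    """Pull AIza-style keys out of page source or a JS bundle."""
    return set(AIZA_RE.findall(html))

def gemini_probe_url(key: str) -> str:
    """Listing models is a low-impact way to check whether a key
    you OWN can reach Gemini endpoints at all."""
    return f"https://generativelanguage.googleapis.com/v1beta/models?key={key}"

# A page with an embedded (fake) Maps key, as in the legacy pattern above.
page = ('<script src="https://maps.googleapis.com/maps/api/js'
        '?key=AIzaFAKEFAKEFAKEFAKEFAKEFAKEFAKEFAKE123"></script>')
for k in scrape_keys(page):
    print(gemini_probe_url(k))
```

The same two steps are all an attacker needs, which is why restriction and rotation (below) matter more than obscurity.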

[Image: Google API keys exposed after Gemini changed the rules. A public AIza… key can become Gemini-capable when AI APIs are enabled in the same project.]

How big is the exposure?

Security researchers scanned the November 2025 Common Crawl dataset and reported 2,863 live keys that could authenticate to Gemini-related endpoints, including keys tied to large organizations, and even examples linked to Google itself.

This matters because it suggests the issue is not limited to small hobby projects. It’s a platform pattern that many teams can fall into, especially when projects evolve over time.


Why “restrict by HTTP referrer” isn’t enough

Many teams rely on application restrictions such as HTTP referrer allow-lists for browser keys. While these controls help in some scenarios, they’re not a complete solution: the restriction is enforced against the Referer header, which any non-browser client can simply forge when calling the API directly.

A more reliable approach is:

  • Restrict by API (only allow the exact APIs needed)
  • Restrict by application type (web referrer, Android package+SHA, iOS bundle ID, server IP as applicable)
  • Avoid sharing a project between “public embed keys” and “sensitive AI usage” where practical

What you should do right now (practical checklist)

Step 1: Check whether Gemini is enabled in any GCP project

Go to:
Google Cloud Console → APIs & Services → Enabled APIs & services

Search for:

  • “Generative Language API”
  • “Gemini”

If it’s not enabled, you’re not exposed to this specific “Gemini privilege expansion” risk in that project.
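If you prefer the CLI, you can export the enabled-service list with `gcloud services list --enabled --format="value(config.name)"` and check it programmatically. A small sketch; the API-key-facing service name for Gemini is `generativelanguage.googleapis.com`, and the helper function is our own naming:

```python
# Service whose enablement makes project API keys Gemini-capable.
GEMINI_SERVICE = "generativelanguage.googleapis.com"

def gemini_enabled(enabled_services: list[str]) -> bool:
    """enabled_services: service names, e.g. one per line from
    `gcloud services list --enabled --format='value(config.name)'`."""
    return any(s.strip() == GEMINI_SERVICE for s in enabled_services)

print(gemini_enabled(["maps-backend.googleapis.com"]))   # Maps only
print(gemini_enabled(["maps-backend.googleapis.com",
                      GEMINI_SERVICE]))                  # Gemini also enabled
```

Run this per project; any `True` result means the project needs the key audit in Step 2.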

Step 2: Audit every API key in that project

Go to:
APIs & Services → Credentials

Look for:

  • Keys marked Unrestricted
  • Keys that explicitly allow Generative Language API / Gemini
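This audit can also be scripted against the JSON export from `gcloud services api-keys list --format=json`. A hedged sketch, assuming the v2 API Keys resource shape (`restrictions.apiTargets[].service`); verify the field names against your own export before relying on it:

```python
GEMINI_SERVICE = "generativelanguage.googleapis.com"

def classify_key(key: dict) -> str:
    """Classify one key record exported from
    `gcloud services api-keys list --format=json`."""
    targets = key.get("restrictions", {}).get("apiTargets", [])
    if not targets:
        # No API restriction: the key works against ANY enabled API.
        return "UNRESTRICTED"
    if GEMINI_SERVICE in {t.get("service") for t in targets}:
        return "GEMINI-CAPABLE"
    return "SCOPED"

sample = [
    {"displayName": "legacy-maps-key", "restrictions": {}},
    {"displayName": "ai-prototype-key",
     "restrictions": {"apiTargets": [{"service": GEMINI_SERVICE}]}},
]
for k in sample:
    print(k["displayName"], "->", classify_key(k))
```

Anything flagged `UNRESTRICTED` or `GEMINI-CAPABLE` should go to the top of the Step 3 list.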

Step 3: Confirm none of those keys are public

Check:

  • Website HTML source
  • Front-end JS bundles
  • Public repositories
  • Old landing pages and legacy scripts

Prioritize older keys first, because those are the most likely to have been deployed publicly under previous assumptions.
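A quick first pass over these sources can be scripted. The sketch below walks a directory tree and flags AIza-style strings in common web and source files; it is a heuristic triage tool, not a substitute for a verifying scanner:

```python
import re
import tempfile
from pathlib import Path

AIZA_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")
SCAN_SUFFIXES = {".html", ".js", ".ts", ".json", ".py", ".txt"}

def scan_tree(root: str) -> dict[str, list[str]]:
    """Map file path -> AIza-style keys found under a directory."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in SCAN_SUFFIXES:
            continue
        found = AIZA_RE.findall(path.read_text(errors="ignore"))
        if found:
            hits[str(path)] = sorted(set(found))
    return hits

# Demo against a throwaway directory with one planted (fake) key.
demo = tempfile.mkdtemp()
Path(demo, "index.html").write_text('var key = "AIza' + "B" * 35 + '";')
for path, keys in scan_tree(demo).items():
    print(path, keys)
```

Point it at checked-out repos and deployed site bundles; matches still need manual confirmation that the key is real and live.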

Step 4: Rotate exposed keys fast (and plan for safe rollout)

If you find a Gemini-capable key in public code:

  • Rotate it (create a new key, update usage, then disable the old key)
  • Monitor usage/billing for spikes during the change window

Quick recap:

  • “Google API keys exposed” often means a legacy key is public and Gemini was enabled later in the same project.
  • Unrestricted keys make the situation worse because they can work broadly across enabled APIs.
  • The fix is an audit + key restriction + rotation, ideally with project separation for public vs sensitive workloads.

The cleanest long-term fix: separate “public embed” from “AI workloads”

If your org uses Maps/Firebase keys in front-end code, consider splitting architecture like this:

  • Project A: Public client-side use (Maps embed, Firebase web app usage)
  • Project B: Gemini/AI usage (uploads, model calls, production AI features)

This reduces the chance that enabling a powerful API later accidentally upgrades old keys.

If you’re building your own security baseline and want a broader checklist, start with our cybersecurity best practices guide.


How to Scan Codebases for Exposed API Keys

Pattern matching for AIza… strings is easy, but it doesn’t tell you whether a key is still active or whether it has Gemini access.

Security teams often prefer scanners that can:

  • Detect keys
  • Validate whether the key is live (where safe and authorized)
  • Help prioritize remediation

If you want an AI-assisted workflow for scanning repositories, identifying hardcoded secrets, and reviewing security issues across large codebases, follow our detailed Claude Code Security vulnerability scanning tutorial.

TruffleHog is commonly mentioned in the discussion around this issue because it can verify discovered keys rather than just regex-match them.
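The verification step itself is simple to sketch. Under the assumption that a `200` from the models-list endpoint means the key authenticated (and that `400`/`403` mean it was rejected; exact codes may vary), a minimal checker looks like this. The network call is defined but only the status interpreter runs in the demo, and you should only run `verify_key` against keys you own or are authorized to test:

```python
import urllib.error
import urllib.request

def interpret_status(status: int) -> str:
    """Turn an HTTP status from the models-list endpoint into a verdict.
    (Assumed mapping; treat anything but 200 as needing review.)"""
    if status == 200:
        return "LIVE"            # key authenticated to a Gemini endpoint
    if status in (400, 403):
        return "REJECTED"        # invalid, restricted, or API disabled
    return "INCONCLUSIVE"

def verify_key(key: str) -> str:
    """Live check. ONLY run against keys you own or may legally test."""
    url = f"https://generativelanguage.googleapis.com/v1beta/models?key={key}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return interpret_status(resp.status)
    except urllib.error.HTTPError as e:
        return interpret_status(e.code)

print(interpret_status(403))  # no network needed for the demo
```

This is the difference between "we found a string that looks like a key" and "we found a working credential" that tools like TruffleHog provide.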

For incident response and investigation workflows, structured OSINT investigation tools can help you identify where exposed keys were published, whether in public repositories, cached pages, or historical snapshots.

[Image: Diagram showing how a Maps API key can gain Gemini access. The risk appears when Gemini is enabled later in the same Google Cloud project.]

Recommended key settings (least privilege)

If you must keep API keys, aim for these principles:

Restrict by API

Only allow the specific APIs the key needs. For example, a Maps embed key should be restricted to Maps APIs only, not “all enabled APIs.”

Restrict by application type

Use the correct application restriction:

  • HTTP referrer restrictions for browser keys
  • Android restrictions for Android apps
  • iOS restrictions for iOS apps
  • IP restrictions for server-side keys (when appropriate)

Avoid reusing keys across environments

Do not use the same key across:

  • Dev/staging/prod
  • Public embed and private server workloads
  • AI and non-AI services

How Google says they’re addressing it

Google’s official documentation emphasizes restricting API keys and avoiding unrestricted configurations.
In parallel, reporting and commentary around this specific Gemini behavior indicate active remediation efforts, such as identifying leaked keys and adjusting defaults for newly created Gemini keys (especially in AI Studio workflows).

To stay aligned with official guidance, start from Google Cloud’s documentation on API key restrictions and key rotation.

[Image: Checklist for auditing Gemini access on Google Cloud API keys. Check enabled APIs, restrict keys, rotate anything public, then monitor usage.]

Is This a Vulnerability or a Misconfiguration?

It’s important to clarify that this issue is not a traditional “remote code execution” vulnerability. It’s a configuration and default-behavior problem caused by how API keys interact with newly enabled services inside a Google Cloud project. The risk comes from privilege expansion and lack of isolation — not from a direct exploit in Gemini itself.


Conclusion: treat legacy keys like they can change risk overnight

The biggest lesson from the “Google API keys exposed” story is that credential risk is not always static. When platforms evolve, especially when AI capabilities are added, old keys can become more powerful than the original developer intended.

If you do only one thing today: audit for Gemini/Generative Language API enablement across projects, then restrict and rotate any public keys that gained Gemini access.


Disclaimer

This guide is for educational and defensive security purposes. Only test and audit Google Cloud projects and keys you own or have explicit permission to assess.