Technical · April 14, 2026 · Updated May 5, 2026 · 10 min

Why My Site Is Not Showing on Google: 8 Causes and Fixes

A diagnostic checklist for sites that do not appear on Google. Indexing, robots.txt, canonical, thin content, sandbox: eight root causes and how to fix each.

By Juliette, Bloomwise's SEO expert

Key takeaways

  • Before assuming a penalty, confirm your site is actually indexed using a `site:` query and the Pages report in Google Search Console.
  • The most common blocker for a new site is an accidental noindex tag or a Disallow line in robots.txt, often a leftover from staging.
  • A misconfigured canonical can silently make your pages disappear in favour of a version you did not intend to publish.
  • Thin or duplicate content is the second most common reason small sites fail to rank, far more than any algorithmic penalty.
  • New sites need 3 to 6 months to earn rankings on competitive keywords. If you just launched, most of the fix is patience plus consistent publishing.

"Why my site not showing on Google" is the most common panic question small business owners send us. Before you blame an algorithm or pay for SEO help, work through this eight-step diagnostic in order. In the audits we run at Bloomwise, the root cause is almost always a technical detail you can fix in ten minutes. This guide is the exact order our team uses, with the fix for each issue, what to verify after, and pointers to deeper resources. If your problem is strategic and not technical, start with our guide to choosing SEO keywords instead.

A quick warning before we dive in. Do not skip ahead to "Step 8: manual action". Most "Google penalised me" stories turn out to be a stale tag or a thin-content cluster. Take the steps in order. Save the exotic explanations for when the obvious causes are ruled out.

Step 1: confirm your site is actually indexed

Type `site:yourdomain.com` into Google. Count the results.

  • Zero results: your site is not indexed at all. Continue to step 2.
  • Some results, missing pages: partial indexing. Jump to step 4.
  • All pages present: indexing is fine, your problem is ranking, not indexing. Jump to step 7.

This single test separates "Google has never seen my site" from "Google knows about my site but does not rank it". The fixes are completely different. The `site:` operator is a fast visual check. The authoritative answer lives in Search Console under Pages, where every URL gets classified as indexed, not indexed with a reason, or excluded on purpose.

For a single specific URL, use the URL Inspection tool at the top of Search Console. Paste the URL and Google will tell you whether it is indexed, when it was last crawled, what canonical it picked, and whether the page is mobile-friendly. Stuck on one stubborn URL? That tool answers in plain English, and it makes the key distinction clear: a page Google has never indexed is a different problem from a page that is indexed but not ranking.
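Checking URLs one by one gets slow past a handful. The same inspection data is available programmatically through the Search Console URL Inspection API. Below is a minimal sketch using the google-api-python-client library; the key file name and URLs are placeholders, and it assumes you have added a service account to the property:

```python
# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # placeholder key file
)
service = build("searchconsole", "v1", credentials=creds)

# siteUrl must match the property exactly (URL-prefix or sc-domain: form).
result = service.urlInspection().index().inspect(body={
    "inspectionUrl": "https://yourdomain.com/some-page/",
    "siteUrl": "https://yourdomain.com/",
}).execute()

status = result["inspectionResult"]["indexStatusResult"]
print(status.get("coverageState"))    # e.g. "Submitted and indexed"
print(status.get("lastCrawlTime"))    # when Googlebot last fetched the page
print(status.get("googleCanonical"))  # the canonical Google actually chose
```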

Step 2: read the Google Search Console Pages report

Open Search Console, then Indexing, then Pages. You will see two charts and a table:

  • Indexed: pages Google has in its index. Good.
  • Not indexed: with a reason for each one (noindex, canonical, soft 404, crawled-but-not-indexed, redirect error, server error, and a few others).

Click into Not indexed, then into each reason category to see the full list of affected URLs. The "Crawled, currently not indexed" bucket is the most common puzzle for a small site. Google found the page, decided it was not worth indexing, and moved on. That usually means thin content, a near duplicate elsewhere, or low perceived authority on the topic.

Bookmark the Pages report. Check it weekly during the first three months after a launch or migration. Most Google indexing issues surface there in days, not in the Twitter thread you read three weeks later.
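The Pages report itself has no public API, so you cannot pull the coverage buckets directly. A workable weekly proxy is the Search Analytics endpoint: a URL that earns zero impressions over a month is worth inspecting by hand. A sketch under the same auth assumptions as the step 1 example:

```python
# pip install google-api-python-client google-auth
from datetime import date, timedelta

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # placeholder key file
)
service = build("searchconsole", "v1", credentials=creds)

end = date.today()
start = end - timedelta(days=28)

# Every page that earned at least one impression in the last 28 days.
response = service.searchanalytics().query(
    siteUrl="https://yourdomain.com/",
    body={
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["page"],
        "rowLimit": 1000,
    },
).execute()

for row in response.get("rows", []):
    print(f"{row['impressions']:6d} impressions  {row['keys'][0]}")
```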

Step 3: hunt down robots.txt blocking rules

Open yourdomain.com/robots.txt in your browser. Look for any line that starts with Disallow:. A line like Disallow: / blocks every crawler from every page. A line like Disallow: /blog/ kills your entire blog from search. Wildcards like Disallow: /*? can also block important parametrised URLs you forgot about.
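Before editing anything, you can test which URLs the current rules actually block. A quick sketch with Python's standard-library robotparser (URLs are placeholders); note this parser follows the original spec and does not evaluate wildcards the way Googlebot does, so treat it as a first pass only:

```python
# Test which URLs Googlebot may fetch, according to the live robots.txt.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://yourdomain.com/robots.txt")
parser.read()  # downloads and parses the file

for url in [
    "https://yourdomain.com/",
    "https://yourdomain.com/blog/some-post/",
    "https://yourdomain.com/wp-admin/post.php",
]:
    verdict = "allowed" if parser.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{verdict:7}  {url}")
```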

Common culprits behind a robots.txt blocking issue:

  • Leftover Disallow: / from a staging environment that was forgotten on launch day.
  • A CMS toggle like "Discourage search engines from indexing this site" (in WordPress, a single checkbox that is easy to leave ticked after a staging install).
  • A Disallow: /wp-admin/ that accidentally matches a production path with the same prefix.
  • An overly aggressive crawl-delay value. Googlebot ignores this directive, but other search engines honour it, and its presence is usually a sign the file has not been reviewed since launch.

Fix the offending line, save the file, then submit the updated robots.txt in Search Console under Settings, then Crawling. Google usually re-fetches the file within a day. To verify, run the URL Inspection tool on a page that was previously blocked and confirm "Crawl allowed" reads Yes.

Robots.txt blocks crawling, not indexing. A page disallowed in robots.txt but linked from elsewhere can still appear in Google as a URL-only listing with no description, which looks broken to anyone who finds it. To remove a page from results entirely, you need a noindex tag, not a Disallow.

Step 4: remove accidental noindex tags

View the HTML source of your page (right-click, View Source) and search for noindex. If you find:

<meta name="robots" content="noindex">

or

<meta name="robots" content="noindex, nofollow">

Google will not index that page. Period. The tag is often left enabled after a migration or flipped on by a plugin. Remove it, save, then request indexing in Search Console.

The same directive can also live in the HTTP response header as X-Robots-Tag: noindex. Open your browser's DevTools, switch to the Network tab, reload the page, click the document request, then read the Response Headers. If you see X-Robots-Tag: noindex, your noindex sits at the server level (commonly nginx, Apache, or a Vercel/Netlify config), and the meta tag in the HTML may not even exist.
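Both locations can be checked in one pass. A small sketch with the requests and BeautifulSoup libraries (the URL is a placeholder):

```python
# pip install requests beautifulsoup4
# Looks for noindex in the X-Robots-Tag header AND the meta robots tag.
import requests
from bs4 import BeautifulSoup

url = "https://yourdomain.com/some-page/"
resp = requests.get(url, timeout=10)

# 1. Server-level directive in the HTTP response headers
header = resp.headers.get("X-Robots-Tag", "")
if "noindex" in header.lower():
    print(f"Server-level noindex: X-Robots-Tag: {header}")

# 2. Page-level directive in the HTML head
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup.find_all("meta", attrs={"name": ["robots", "googlebot"]}):
    content = tag.get("content", "")
    if "noindex" in content.lower():
        print(f"Page-level noindex: meta {tag.get('name')} = {content}")
```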

For JavaScript-rendered sites (a lot of SPAs and headless setups), the noindex can be injected by the framework after page load. Use the URL Inspection tool's "View crawled page" option to see the HTML Google actually rendered. If a noindex shows up there but not in your source view, your client-side rendering is the culprit. Proper noindex removal in that case means fixing the client-side template, not the server response.
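To reproduce that locally, render the page in a headless browser and diff the result against the raw response. A sketch using Playwright, which is our tooling choice here rather than anything Google prescribes:

```python
# pip install requests playwright && playwright install chromium
# Compares raw HTML with the rendered DOM to catch a JS-injected noindex.
import requests
from playwright.sync_api import sync_playwright

url = "https://yourdomain.com/some-page/"
raw_html = requests.get(url, timeout=10).text

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(url, wait_until="networkidle")
    rendered_html = page.content()
    browser.close()

in_raw = "noindex" in raw_html.lower()
in_rendered = "noindex" in rendered_html.lower()
if in_rendered and not in_raw:
    print("noindex is injected client-side: fix the frontend template")
elif in_raw:
    print("noindex is in the server response: fix the server or HTML template")
else:
    print("no noindex found in either version")
```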

In practice, what we often see: a developer copies a staging branch to production with the staging noindex still hardcoded in a layout component. The fix is one line. The cost of not finding it is months of invisibility.

Step 5: verify canonical tags

A canonical tag tells Google which URL is the "real" version of a page. Misused canonicals can silently make your pages disappear.

Common canonical mistakes:

  • Every page canonicalises to the homepage. Classic plugin misconfiguration on WordPress.
  • Canonical points to a non-existent URL or a 404.
  • Canonical points to an HTTP version while the site is HTTPS.
  • www and non-www versions canonicalise to each other inconsistently.
  • Pagination canonicalises every page back to page 1, which hides pages 2 onwards from the index.

Look for the canonical in the page source: <link rel="canonical" href="...">. It should point to the page you are on, or a deliberate canonical master if you are managing duplicates on purpose. If it points elsewhere and you do not know why, check your CMS canonical settings and any SEO plugin overrides. The URL Inspection tool also shows the canonical Google chose, which sometimes differs from the one you declared. That gap is where bad canonical surprises live.
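Checking canonicals page by page through View Source does not scale. A sketch that walks a list of URLs and flags any declared canonical that does not point back to the page itself (URLs are placeholders; a mismatch is not always a bug, but it always deserves a look):

```python
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

urls = [
    "https://yourdomain.com/",
    "https://yourdomain.com/blog/post-one/",
    "https://yourdomain.com/blog/post-two/",
]

for url in urls:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    link = soup.find("link", rel="canonical")
    if link is None:
        print(f"NO CANONICAL  {url}")
        continue
    canonical = link.get("href", "").rstrip("/")
    if canonical != url.rstrip("/"):
        # Deliberate canonical masters exist, but verify each mismatch.
        print(f"MISMATCH      {url} -> {canonical}")
```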

Step 6: audit content thinness and duplication

Google is harsh on thin and duplicate content. A page is at risk when:

  • Word count sits below 300 words of unique content. We recommend 800 to 1500 for blog posts that hope to earn indexation.
  • The same text appears across ten or more pages on your site (common in ecommerce category pages and city-targeted landing pages).
  • The boilerplate (header, footer, sidebar) makes up 80% or more of the HTML.
  • No unique images, no original quotes, no original data. Just rewritten generic information already published a hundred times.

A thin page will be crawled but not indexed. The fix is to either merge thin pages into one substantial page or beef up the unique content per page. Classic mistake: shipping fifty city pages with the same paragraph and a different <h1>. Google sees one article repeated fifty times.

For programmatic SEO at scale, each page needs at least one unique, locally relevant data point. A real address, a verified phone number, a customer review, a region-specific statistic. Without that anchor, Google groups them as duplicate variants and indexes only one.
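To estimate whether Google would fold two pages into one, compare their main text directly. A rough sketch using Jaccard similarity over five-word shingles; the 0.8 threshold is our rule of thumb, not an official Google number:

```python
# Near-duplicate check: Jaccard similarity over 5-word shingles.
def shingles(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical city-page copy with only the city name swapped.
page_paris = "Fast, reliable plumbing in Paris. Our certified team ..."
page_lyon  = "Fast, reliable plumbing in Lyon. Our certified team ..."

score = similarity(page_paris, page_lyon)
print(f"similarity: {score:.2f}")
if score > 0.8:  # our threshold, tune to taste
    print("Google will likely index only one of these variants")
```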

Our guide to writing SEO-optimised blog posts covers the structure and depth Google rewards. Short version: cover the topic completely, answer the actual user question, and add at least one piece of insight that the top ten results do not already have.

Step 7: domain age, authority, and the Google sandbox

A new site (less than 6 months old) with no backlinks often ranks poorly on competitive keywords even when the technical setup is flawless. The pattern is sometimes called the Google sandbox effect, although Google denies that any such mechanism exists. What Google does acknowledge is a site-wide notion of authority: a directional signal for how much trust your domain has earned.

Acceleration levers that actually work:

  • Submit your sitemap to Search Console and resubmit after every batch of new content.
  • Earn 3 to 5 genuine backlinks from related sites. Niche directories, partner mentions, podcast appearances, guest posts on adjacent blogs.
  • Publish on a steady cadence. One post per week for three months beats ten posts in one week, because Google rewards predictable freshness.
  • Build internal links from your strongest pages to your newer ones. A new article with zero internal links is invisible to crawlers in practice. See the sketch after this list for a quick way to find those orphan pages.
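A minimal sketch for finding those orphans: crawl the site by following internal links, then diff against the sitemap. It assumes a single flat sitemap.xml and is only suited to small sites (no politeness delays, no trailing-slash normalisation):

```python
# pip install requests beautifulsoup4 lxml
# Lists sitemap URLs that no internal link points to ("orphan" pages).
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

START = "https://yourdomain.com/"
SITEMAP = "https://yourdomain.com/sitemap.xml"
HOST = urlparse(START).netloc

# URLs you claim exist, according to the sitemap
xml = BeautifulSoup(requests.get(SITEMAP, timeout=10).text, "xml")
claimed = {loc.text.strip() for loc in xml.find_all("loc")}

# URLs actually reachable by following internal links from the homepage
seen, queue = {START}, [START]
while queue:
    page = queue.pop(0)
    try:
        html = requests.get(page, timeout=10).text
    except requests.RequestException:
        continue
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        link = urljoin(page, a["href"]).split("#")[0]
        if urlparse(link).netloc == HOST and link not in seen:
            seen.add(link)
            queue.append(link)

for orphan in sorted(claimed - seen):
    print("ORPHAN:", orphan)
```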
| Site age | What to expect |
| --- | --- |
| 0 to 1 month | Indexation begins, no meaningful rankings yet |
| 1 to 3 months | Long-tail queries start appearing in Search Console with the occasional click |
| 3 to 6 months | First rankings on accessible (low-competition) keywords; the sandbox effect lifts |
| 6 to 12 months | Growth on mid-competition keywords; brand searches start to compound |
| 12+ months | Compound growth if publishing is consistent, otherwise a plateau |

If your numbers do not roughly match this curve, the issue is rarely the sandbox itself. More likely: publishing slowed, internal linking is weak, or the topical cluster is too narrow to signal expertise on anything in particular.

Step 8: rule out a manual action

Open Search Console, then Security and manual actions. If Google has penalised your site, the reason will be listed here with a fix procedure. Manual actions are rare for legitimate sites. They do happen for sites that bought backlinks at scale, generated thousands of low-quality AI pages, or stuffed pages with hidden text.

If you find one, the fix is to remove the offending content or links, document what you changed, then submit a reconsideration request directly from the Manual Actions page. Recovery usually takes weeks of review. Google is strict on repeat offenders.

What to do after the fix

Once you have addressed the technical blocker, request indexing in Search Console via the URL Inspection tool for the affected URLs and wait 5 to 14 days. Track results with our guide to measuring SEO results so you know whether the fix actually worked. If the issue returns, a plugin or CMS setting is silently reintroducing the problem. Check your plugin change log and any recent CMS upgrades. What many overlook: an "auto-update" toggle that quietly flips a noindex back on after every release.

Run the eight steps. The site still does not appear? The issue is almost always strategic rather than technical at that point. Are your target keywords realistic for your domain authority? Does your content actually match search intent? Is your topical cluster wide enough to signal expertise to Google? Our pillar guide on the SEO + GEO + AI Search score breaks down the four signal categories Google uses to decide which sites get visibility, and where most small sites silently lose points.


Most "my site is not on Google" problems are technical, not strategic, and solvable in a single afternoon. Run the eight-step diagnostic in order. Fix what you find. Give the index 5 to 14 days to catch up. Bloomwise automatically catches the first six causes on every audit, so within minutes you know whether you are looking in the right place. If you still see nothing after that, the problem is no longer indexing. It is ranking. And the fix for ranking is quality content plus time.

💡
No Search Console yet? Set it up now. Free, ten minutes via DNS verification, and it is the single most useful SEO tool for a small site. The earlier you verify, the earlier you get data. Without that data, every diagnosis here is guesswork.
⚠️
"No issues detected" in Security and manual actions? Then stop blaming Google for a penalty. Most "I was penalised" stories are unresolved technical problems from steps 1 to 6. Go back to step 1 and run the diagnostic carefully.

Want to know where your site stands?

Bloomwise audits your site in 2 minutes and gives you an SEO score with prioritised fixes.

Get started

Frequently asked questions