
Crawled — currently not indexed is a Google Search Console status that indicates a given page has been visited (crawled) by Googlebot, but it’s currently not indexed by Google. This issue is resolved once Google indexes the affected URLs.



If your page is crawled but not indexed, it won’t appear in search results, and it won’t get any organic traffic from Google. 


This article presents the possible causes of the Crawled — currently not indexed status and ways of fixing them.



Where can you find the Crawled — currently not indexed status?


You can find pages on your site that are Crawled — currently not indexed in the Index Coverage (Page indexing) report and the URL Inspection Tool in Google Search Console.

Index Coverage (Page indexing) report
Crawled — currently not indexed belongs in the Not indexed category of the Page indexing report. This report contains all unindexed pages on your site, regardless of whether Google considers them unindexed because of an error or intentionally.




After clicking on the Crawled — currently not indexed status below the chart, you’ll see a list of affected URLs. You should examine it and prioritize fixing the issue for pages most valuable to you. 

The report is also available for export. However, you can export only up to 1000 URLs. If more pages are affected, you can increase the number of exported URLs by filtering the report by sitemap. For example, if you have two sitemaps, each with 1000 URLs, you can export both of them separately.



URL Inspection Tool



The URL Inspection Tool in Google Search Console can also inform you about URLs that are Crawled — currently not indexed.




The top section of the tool informs you whether the URL can be found on Google. If the inspected URL belongs in the Not indexed category of the Index Coverage (Page indexing) report, the URL Inspection Tool will report the following: “This page is not indexed. Pages that aren’t indexed can’t be served on Google.”

Lower down in the URL Inspection tool, you can find more specific information about the current Coverage status of the inspected URL. In the case above, the URL was Crawled — currently not indexed.


Reporting bug: your page might actually be indexed



After noticing the Crawled — currently not indexed status, the first thing you should do is investigate if your page is really not indexed.

It’s common to see a page marked as Crawled — currently not indexed in the Index Coverage (Page indexing) report, while the URL Inspection tool shows that the page is actually indexed.

The URL Inspection tool lets you check details about a specific URL, including:

Indexing issues,
Structured data errors,
Mobile Usability,
Loaded resources (e.g., JavaScript).

You can also request indexing for a URL or view a rendered version of the page.
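If you have many pages to check, you can retrieve the same details programmatically through the URL Inspection API in the Search Console API. Below is a minimal sketch using Python and the requests library; the access token, property URL, and inspected page are placeholder assumptions you’d replace with your own values.

```python
import requests

# Placeholders: a valid OAuth 2.0 access token with the Search Console
# scope, the Search Console property you own, and the page to inspect.
ACCESS_TOKEN = "ya29.your-oauth-token"
SITE_URL = "https://www.example.com/"
PAGE_URL = "https://www.example.com/some-page/"

ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"inspectionUrl": PAGE_URL, "siteUrl": SITE_URL},
)
response.raise_for_status()

# The index status section mirrors what the URL Inspection Tool shows,
# including coverage states such as "Crawled - currently not indexed".
index_status = response.json()["inspectionResult"]["indexStatusResult"]
print(index_status.get("verdict"))
print(index_status.get("coverageState"))
```

Scripting this check makes it easy to re-inspect your most valuable URLs on a schedule instead of clicking through the tool one page at a time.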


Causes and solutions for the Crawled — currently not indexed status

Now, let’s get to the bottom of the issue: what makes the status appear and how you can fix it.

Google doesn’t give a clear answer as to why a given page was crawled but not indexed, but there are a few possible reasons why the status could appear, including:

  • Indexing delay,
  • Page doesn’t meet quality standards,
  • Page got deindexed,
  • Website architecture issue,
  • Duplicate content issues.

Indexing delay

It’s common for Google to visit a page, but a significant amount of time may pass before it gets indexed. The web is endlessly large, and Google needs to prioritize which pages get indexed to save resources.

If you’ve just published your page, it may be completely normal that it’s not indexed yet, and you simply need to wait for Google to index your content.

Solution

You can’t influence the crawling and indexing of your page in the short term, but there are a few things you can do to help your website in the long run:

Create an indexing strategy to help Google prioritize the right pages on your site. To do so, you need to decide which pages should be indexed and the best method to communicate it to Google. 
Ensure there are internal links to the pages you care about. It will help Google find the pages and learn more about their context.
Create a well-optimized sitemap. It’s a simple file that lists your important URLs. Google will use it as a map to find the pages faster (see the sketch below).
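To illustrate the sitemap tip above, here’s a minimal sketch that generates a basic XML sitemap with Python’s standard library; the URL list and output filename are illustrative placeholders.

```python
from xml.etree.ElementTree import Element, SubElement, ElementTree

# Placeholder list of the pages you want Google to discover and index.
urls = [
    "https://www.example.com/",
    "https://www.example.com/important-page/",
]

# Build the <urlset> root with the standard sitemap namespace.
urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in urls:
    entry = SubElement(urlset, "url")
    SubElement(entry, "loc").text = url

# Write sitemap.xml with an XML declaration, ready to upload to your
# site and submit in Google Search Console.
ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```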


Page doesn’t meet quality standards


Google can’t index all of the pages on the Internet. Its storage space is limited, and that’s why it needs to filter out the low-quality content. 

Google’s goal is to provide the highest quality pages that best answer users’ intent. It means that if a page is of lower quality, Google will most likely ignore it to leave the storage space available for higher quality content. What’s more, we can expect the quality standards to only get stricter in the future.

Solution

As a site owner, you should ensure your page provides top-notch content. Check whether it’s likely to satisfy your users’ intent and add good quality content if necessary.

Furthermore, you can use the tips on content quality from Google’s Quality Raters Guidelines. Even though the document is meant mainly for Search Quality Raters to assess the quality of a website, webmasters can use it to get some insights on how to improve their own sites. If you want to learn more, check out our guide on Quality Raters Guidelines.

User-generated content
User-generated content might be a problem from the standpoint of quality.

For example, let’s assume you have a forum, and someone asks a question. Even though there might be many valuable replies in the future, at the time of crawling, there were none, so Google may classify the page as low-quality content.

Page got deindexed

A URL can suffer from the Crawled — currently not indexed status because it was indexed before, but Google decided to deindex it over time.

If you wonder why some pages might disappear from the index, most likely they were simply replaced by higher-quality content.

Moreover, you should pay attention to algorithm updates. It’s possible a new algorithm rolled out, and your page was affected by it.

Unfortunately, deindexing can also be caused by a bug on Google’s side. For example, Search Engine Land once got deindexed because Google wrongly assumed the site was hacked.

Solution

The solution to deindexed pages is closely related to their quality. You should always ensure your page serves the best quality content and is up to date. Don’t assume that once a page is indexed, you don’t need to do anything with it ever again. Keep monitoring it and implement changes and improvements if necessary.


Website architecture issue

When John Mueller was asked about possible reasons a page was marked with the Crawled — currently not indexed status, he mentioned another possible cause — poor website structure.

Let’s imagine a situation where you have a good quality page, but the only way Google found it is because you put it in your sitemap.

Google might look at the page and crawl it, but since there are no internal links, it would assume the page has less value than other pages. There’s no semantic or structural information to help it evaluate the page. That might be one of the reasons why Google decided to focus on other pages and leave this one out of the index after crawling it.

Solution

Good website architecture is key to helping you maximize the chances of getting indexed. It allows search engine bots to discover your content and better understand the relation between pages. 

That’s why it’s crucial to provide a good website architecture and ensure there are internal links to the pages you want to be indexed (a simple orphan-page check is sketched below).
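One way to act on this advice is to look for orphan pages: URLs that sit in your sitemap but receive no internal links. The sketch below is a deliberately simplified illustration; the sitemap and start page URLs are placeholders, it assumes absolute URLs in href attributes, and a real audit would crawl the whole site rather than a single page.

```python
import re
from urllib.request import urlopen

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
START_PAGE = "https://www.example.com/"              # placeholder

# Collect every <loc> entry from the sitemap.
sitemap_xml = urlopen(SITEMAP_URL).read().decode("utf-8")
sitemap_urls = set(re.findall(r"<loc>(.*?)</loc>", sitemap_xml))

# Collect href targets from the start page. A real audit would follow
# links recursively and resolve relative URLs as well.
html = urlopen(START_PAGE).read().decode("utf-8", errors="replace")
linked_urls = set(re.findall(r'href="([^"#]+)"', html))

# Sitemap URLs that nothing links to are candidates for the
# "found via sitemap only" situation described above.
for url in sorted(sitemap_urls - linked_urls):
    print("No internal link found for:", url)
```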

If you want to learn more about website structure, check out our article on How To Build A Website That Ranks And Converts. 

Duplicate content

Google wants to present unique and relevant content to users. That’s why, when it realizes during crawling that some pages are identical or nearly identical, it may index only one of them.

Usually, the other one gets marked as “Duplicate” in the Index Coverage (Page indexing) report. However, that’s not always the case, and sometimes Google assigns the Crawled — currently not indexed status instead.

It’s not entirely clear why Google might choose Crawled — currently not indexed over a dedicated status for duplicate content. One of the possible explanations is that the status will change later, once Google decides there’s a more suitable one for the page.

Another possibility is a reporting bug. Google may simply make a mistake while assigning the status. Unfortunately, this situation is more difficult because Crawled — currently not indexed doesn’t give you as much information as a dedicated status for duplicate content.

How to check if a duplicate page is showing up in the search results?

Go to the page that is not indexed and copy a random text fragment.
Paste the text into Google Search in quotes (see the sketch below).
Analyze the results. If a different URL with your copied text shows up, it may mean that your page isn’t indexed because Google chose a different URL to index.
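If you run this check regularly, you can build the exact-match query URL programmatically and open it in your browser. Here’s a small sketch; the text fragment is a placeholder for the passage you copied.

```python
from urllib.parse import quote_plus

# Placeholder: a fragment copied from the page that isn't indexed.
fragment = "a random text fragment copied from your page"

# Wrapping the fragment in quotes asks Google for an exact match.
query_url = "https://www.google.com/search?q=" + quote_plus(f'"{fragment}"')
print(query_url)
```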
Solution

First of all, you should ensure you create unique pages. If necessary, add unique content.

Unfortunately, duplicate content is sometimes unavoidable (e.g., you have a mobile and a desktop version). You don’t have much control over what appears in search results, but you can give Google some hints about the original version.

If you notice a lot of duplicate content indexed, evaluate the following elements:

Canonical tags: these HTML tags tell search engines which versions are the original ones.
Internal links: ensure internal links point to your original content. Google may use them as a signal of which page is more important.
XML sitemaps: ensure only the canonical version is in your sitemap.

Remember that these are just hints, and Google isn’t obligated to follow them. In the case described by Adam Gent, Google chose the RSS feed version to index, even though numerous canonicalization signals pointed to a different original URL. Adam solved the issue by setting up a 404 to ensure only the original version remained. He also suggested that setting an X-Robots-Tag HTTP header on all feed URLs would prevent them from being indexed.
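As an illustration of that last suggestion, here’s a minimal sketch of how such a header could be set for feed responses in a Flask app; the /feed.xml route and placeholder feed body are assumptions for the example, not Adam’s original setup.

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/feed.xml")
def feed():
    # Placeholder body; in practice you'd render your real RSS feed here.
    body = '<rss version="2.0"></rss>'
    resp = Response(body, mimetype="application/rss+xml")
    # The noindex directive tells search engines not to index the feed
    # URL, so it can't be chosen over the canonical HTML version.
    resp.headers["X-Robots-Tag"] = "noindex"
    return resp

if __name__ == "__main__":
    app.run()
```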
