Crawl Errors Rising

lazer

Well-known member
Hey :)

So, I was looking at GWT today and noticed a steady increase in crawl errors. The weird thing is that the URLs it is "failing" on haven't been physically present on my server for over a year (we converted from vB to XF in April 2012).

Also, since installing the Resources add-on, we closed off the forum that was doing that job up until then. Google is still trying to index that forum and is listing thousands of "Access Denied" errors, as the forum is no longer open to the public.

Can you spot when we installed RM and closed the related forum? :p
[Attachment: rise.webp (crawl errors graph)]

Why is Google still looking at these URLs (and subsequently returning soft 404 or "Access Denied" errors), and how do I tell it to stop?
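In case it helps anyone else reading, this is roughly how I've been spot-checking what these URLs actually hand back (a soft 404 shows up as a 200 on a page that just says the content isn't there). It's only a sketch, and the URL below is a placeholder rather than one of my real paths:

```python
# Sketch: check what status code an old, removed URL actually returns.
# The URL is a placeholder - substitute a path GWT is complaining about.
import urllib.request
import urllib.error

url = "http://example.com/forums/old-vb-thread.12345/"  # placeholder

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        # A 200 here on a page that just says "not found" is a soft 404.
        print(url, "->", resp.getcode())
except urllib.error.HTTPError as e:
    # A real 404 (gone) or 403 (access denied) lands here instead.
    print(url, "->", e.code)
except urllib.error.URLError as e:
    print(url, "-> request failed:", e.reason)
```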

Cheers!
 
Hey Shaun
I'm not sure you need to do anything; Google knows they are 404s and should eventually drop them from its index, although it hasn't done so yet even after over a year. The graph doesn't show the number of URLs (it could be 50 or 5,000), but if you'd like to tell Google to stop, have a read here: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663419

The figure is around 2,000 URLs. Thanks for the link; I already knew about that, but was hoping to find something a little less "manual" :)
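On the auditing side at least, Crawl Errors lets you download the list, so something like the sketch below could tally what those ~2,000 URLs actually return. The filename and column layout are guesses on my part; adjust them to whatever the export really looks like.

```python
# Sketch: bulk-check a list of URLs exported from the Crawl Errors page.
# Assumes a CSV whose first column is the URL (adjust to the real export).
import csv
import urllib.request
import urllib.error
from collections import Counter

def status_of(url, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.getcode()
    except urllib.error.HTTPError as e:
        return e.code
    except urllib.error.URLError:
        return None  # DNS or connection failure

counts = Counter()
with open("crawl_errors.csv", newline="") as f:  # placeholder filename
    reader = csv.reader(f)
    next(reader, None)  # skip the header row, if there is one
    for row in reader:
        if row:
            counts[status_of(row[0])] += 1

# Prints a tally of status codes, i.e. how many 404s vs 403s vs soft-404 200s.
print(counts)
```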
 
I've got 404s for page links that are two years old and more. When you click the link (in Crawl Errors) to analyse them, you get a tab that says "Linked from"; presumably these links back to the URL from other sites are what keep encouraging Google to try and index it?

If it's a soft/hard 404 issue, it might be worth opening a discussion here on XF regarding the presentation of 404s to search engines; maybe something could be done to make it a "harder" 404 error and encourage them to drop the links sooner?
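Before that, it might be worth confirming what status line the board actually sends for a missing page, because the browser shows the same friendly error page whether the server says 200 or 404. A quick sketch; the host and path below are placeholders:

```python
# Sketch: print the raw HTTP status line for a missing page, to tell a
# "soft" 404 (200 OK plus a friendly error page) from a hard 404/403.
import http.client

conn = http.client.HTTPConnection("example.com", timeout=10)
conn.request("GET", "/threads/some-deleted-thread.99999/")  # placeholder path
resp = conn.getresponse()
print(resp.status, resp.reason)        # e.g. "404 Not Found" vs "200 OK"
print(resp.getheader("X-Robots-Tag"))  # noindex header, if the board sets one
conn.close()
```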
 