
For this reason, when a website has technical errors that are going to be resolved the same day, it is preferable to force the robots.txt file to return a 503 error and thus pause crawling and indexing of the entire site until the problem is fixed. This is much better than blocking crawling outright, since blocking has more severe implications, while a 503 is completely temporary.
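As an illustration of how that temporary 503 might be served, here is a minimal sketch using only the Python standard library. The MAINTENANCE_MODE flag, the port, and the fallback robots.txt content are assumptions made for the example; in production you would more likely configure this in the web server or CDN.

```python
# Minimal sketch: return a temporary 503 for /robots.txt while the site is
# under same-day maintenance, and the normal file otherwise.
from http.server import BaseHTTPRequestHandler, HTTPServer

MAINTENANCE_MODE = True  # flip to False once the technical errors are fixed


class RobotsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt" and MAINTENANCE_MODE:
            # 503 + Retry-After signals a temporary outage, so crawlers
            # pause instead of treating pages as gone.
            self.send_response(503)
            self.send_header("Retry-After", "3600")
            self.end_headers()
            self.wfile.write(b"Service temporarily unavailable\n")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"User-agent: *\nAllow: /\n")


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), RobotsHandler).serve_forever()
```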

No response

Another possibility is that the server returns nothing at all, or takes too long to respond (due to configuration problems or because the machine is saturated). In these cases Google falls back, for a while, to the cached copy of the file it already has: it interprets the last robots.txt it managed to fetch and behaves as if that were the one currently published.
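The sketch below illustrates that fallback behaviour: fetch robots.txt with a short timeout and, if the server hangs or errors out, reuse the last copy that was retrieved successfully. The cache file path and the timeout value are assumptions made for the example.

```python
# Fetch robots.txt; fall back to a locally cached copy when the server
# does not answer in time, mimicking the behaviour described above.
import urllib.request
import urllib.error
from pathlib import Path

CACHE = Path("robots_cache.txt")  # hypothetical local cache location


def get_robots_txt(url: str, timeout: float = 5.0) -> str:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        CACHE.write_text(body, encoding="utf-8")  # refresh the cache
        return body
    except (urllib.error.URLError, TimeoutError):
        # Server is silent or too slow: reuse the last robots.txt we got.
        if CACHE.exists():
            return CACHE.read_text(encoding="utf-8")
        raise


print(get_robots_txt("https://example.com/robots.txt"))
```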
9. Blocking JS and CSS files can cause problems and is even frowned upon by the search engine

Google recommends not blocking CSS and JS files

In the past, these files were blocked because they were of no use to spiders. But Google's robots are now able to render the HTML and place the content in context: they know whether text is large or small, what the background color is, what position an element occupies in the design, and how visible the content is to users. So Google asks us to let it access these files so it can evaluate the whole website.

If we do not give them access to these files, they start sending us notifications and, in practice, the authority and quality they perceive in our website decreases.

This doesn't mean we can never block a JS file (we all know why), but we do have to avoid this kind of blocking in general.
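As a quick way to audit this, here is a short sketch using Python's standard urllib.robotparser to check whether Googlebot is allowed to fetch a site's CSS and JS assets. The domain and asset paths are hypothetical placeholders.

```python
# Check whether the live robots.txt lets Googlebot fetch the CSS/JS files
# it needs in order to render the page.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # download and parse the live robots.txt

assets = [
    "https://example.com/assets/app.js",
    "https://example.com/assets/styles.css",
]

for url in assets:
    allowed = rp.can_fetch("Googlebot", url)
    print(("OK     " if allowed else "BLOCKED"), url)
```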

Google accesses 4xx content, but not if it is blocked

4xx content (404 not-found pages, 401 pages that usually sit behind a login, etc.) is still accessed by spiders. Google tries these URLs, finds that the pages do not respond with valid content, and therefore does not index them.
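To make this concrete, here is a small sketch that reports the HTTP status codes a crawler would receive for a few URLs, so you can confirm that error pages really answer with 404/401 rather than a misleading 200. The URLs and expected codes are illustrative assumptions.

```python
# Print the status code each URL returns, as a crawler would see it.
import urllib.request
import urllib.error

urls = [
    "https://example.com/",
    "https://example.com/old-page",       # expected 404
    "https://example.com/account/stats",  # expected 401 (behind login)
]

for url in urls:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(resp.status, url)
    except urllib.error.HTTPError as err:
        # 4xx/5xx responses land here; the code is what Google reacts to.
        print(err.code, url)
    except urllib.error.URLError as err:
        print("no response:", err.reason, url)
```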
