Server Failures

If there is a server failure just when the bot goes to read our robots.txt and it cannot understand the instructions we give it there, it may end up crawling one of the pages we had blocked. However, Google will try to fetch robots.txt again and, once it understands the instructions, it will stop crawling that page. For example, when Google visits our website and requests robots.txt, it may find:

Status code 200: Googlebot reads robots.txt without problems and obeys the rules stated there.
Status code 300: the bot follows the redirect without any problems and reads the redirected file as if it were the robots.txt at the original route.
Status code 40X: the bot lands on an empty page, with no rules, so it will do whatever it sees fit, crawling every page regardless of your crawl strategy.
Status code 50X: several things can happen here. If robots.txt returns a 50X error continuously, after 30 days Googlebot will use the last cached copy of robots.txt, and if that is not available, Google assumes there are no crawl restrictions.
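As a rough illustration only (this is not Google's implementation), the decision logic described above could be sketched as follows; the function name fetch_robots_policy, the mode labels and the 30-day threshold are assumptions made for the example:

```python
import urllib.error
import urllib.request


def fetch_robots_policy(robots_url, cached_copy=None, days_failing=0):
    """Return (mode, rules_text) describing how to treat the site.

    mode is one of:
      "obey"        - parse and follow the rules in rules_text
      "crawl_all"   - behave as if there were no restrictions
      "retry_later" - hold off and try fetching robots.txt again
    """
    try:
        # urllib follows 3xx redirects automatically, mirroring the idea
        # that the redirected file is read as if it were the original.
        with urllib.request.urlopen(robots_url, timeout=10) as resp:
            return ("obey", resp.read().decode("utf-8", errors="replace"))
    except urllib.error.HTTPError as err:
        if 400 <= err.code < 500:
            # 40X: an "empty" robots.txt -- no rules, everything is crawlable.
            return ("crawl_all", "")
        if err.code >= 500:
            # 50X: keep retrying; after a prolonged outage (30 days in the
            # behaviour described above) fall back to the last cached copy,
            # or assume no restrictions if none exists.
            if days_failing < 30:
                return ("retry_later", cached_copy or "")
            return ("obey", cached_copy) if cached_copy else ("crawl_all", "")
        raise
```

From there, the returned rules_text could be fed into a parser such as Python's urllib.robotparser to decide, URL by URL, whether a fetch is allowed.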
Crawling is not Indexing

Something that we SEOs never tire of repeating is that there is no indexing without crawling, but there can be crawling without indexing.