
Robots.txt on Root Domain Not Needed, Says Gary Illyes

A robots.txt file is not required to live on a site's root domain, according to a recent LinkedIn post by Google's Gary Illyes.

Illyes, an analyst on the Google Search team, recently shared a post pushing back on the notion that the robots.txt file must be placed at the root domain. He also highlights a lesser-known aspect of the Robots Exclusion Protocol (REP).

Furthermore, Illyes says a site can have two different robots.txt files: one on the main website and one on the CDN (Content Delivery Network). A single, central robots.txt containing all of the rules can be hosted on the CDN, giving the website's developers one place to keep and manage them. This centralised management makes it easier for developers to keep track of crawl directives.

It also reduces the risk of inconsistent directives between the CDN and the primary website, and it allows more flexible setups for sites with a more complex structure or sites spread across multiple CDNs and subdomains.
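As an illustration of how such a setup might look, the sketch below shows a main site redirecting its robots.txt path to a central copy hosted on a CDN. The domain names, the use of Flask, and the redirect approach are assumptions for demonstration only; they are not details taken from Illyes' post.

```python
# Hypothetical sketch: the main site points crawlers at a central
# robots.txt hosted on a CDN. All domain names are placeholders.
from flask import Flask, redirect

app = Flask(__name__)

# The central, CDN-hosted robots.txt that holds all the crawl rules.
CENTRAL_ROBOTS_TXT = "https://cdn.example.com/robots.txt"

@app.route("/robots.txt")
def robots_txt():
    # Compliant crawlers follow the redirect and apply the rules found
    # in the CDN-hosted file, so the rules only need to be edited in
    # one place.
    return redirect(CENTRAL_ROBOTS_TXT, code=301)

if __name__ == "__main__":
    app.run()
```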

Robots.txt Turning 30

Illyes has also mentioned in his post that the Robots Exclusion Protocol is turning 30 years old in 2024. 

The purpose of robots.txt is to tell crawlers which URLs they may access on a website, which helps keep sites from being overloaded with requests.
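For context, the short sketch below shows how a well-behaved crawler might consult a site's robots.txt before fetching a URL, using Python's standard urllib.robotparser module. The URLs are placeholders, and the snippet is purely illustrative rather than a description of how Google's crawler works.

```python
# Illustrative only: checking robots.txt rules before crawling a URL.
# The URLs below are placeholders.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

# can_fetch() reports whether the given user agent is allowed to
# request the URL under the site's declared rules.
allowed = parser.can_fetch("*", "https://www.example.com/private/page")
print("Allowed to crawl:", allowed)
```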


Written by Yibeni Tungoe
Journalism & Mass Communication student at North Eastern Hill University.
