Our Beginner’s Guide to robots.txt
If you’re new to robots.txt, it can seem quite foreign and confusing, but the truth is it isn’t as frightening as it seems. It is, however, worth getting right: a misconfigured robots.txt file can harm your rankings in the SERPs. With that said, if you own a website, it’s worth learning about robots.txt; it plays a fundamental role in how search engines crawl your site. Read on for some further information regarding robots.txt so that you can incorporate it into your own website…
What is robots.txt Used for?
Simply put, robots.txt is a text file that website owners or developers place on a web server to instruct search engine robots and other crawlers how to crawl the pages on their site. To elaborate, it tells crawlers which pages or files on a site they can or cannot crawl. If some pages on your website are less significant than others, you can use robots.txt to steer crawlers away from them, focusing their attention on the more important content that you’d prefer to appear in the search results.
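As a rough illustration, here is what a simple robots.txt file might look like; the paths and domain are hypothetical examples, not recommendations for any particular site:

```
# Apply these rules to all crawlers
User-agent: *
# Ask crawlers not to crawl the admin area or internal search results
Disallow: /admin/
Disallow: /search/
# Everything else may be crawled
Allow: /

# Optionally, point crawlers at your sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Each `User-agent` line states which crawler the rules beneath it apply to, with `*` matching all of them, while `Disallow` and `Allow` rules are matched against URL paths on your site.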
Please be aware that robots.txt is not a tool designed to keep a page of your website out of Google. A blocked URL can still be discovered through external links and still show up in the SERPs; to keep a page out of the index, you will need to use a noindex tag instead. If there is private content on your website that you do not want to be found, we would recommend password protection as well as robots.txt. Also note that if your website is built on a platform such as Wix or Drupal, you may not be able to edit your robots.txt file directly.
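For reference, a noindex rule is typically added as a meta tag in the `<head>` of the page you want kept out of search results:

```
<meta name="robots" content="noindex">
```

Bear in mind that a crawler can only see this tag if it is allowed to fetch the page, so the page must not also be blocked in robots.txt.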
Creating a robots.txt file
You can use practically any text editor to create a robots.txt file. A website can have only one robots.txt file, which must be located at the root of the website and cannot be placed in a subdirectory. For example:
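Using a hypothetical domain:

```
https://www.example.com/robots.txt          # valid – at the root of the site
https://www.example.com/pages/robots.txt    # not valid – in a subdirectory
```

Crawlers only look for the file at the root, so a robots.txt placed anywhere else will simply be ignored.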
There are plenty of resources online covering robots.txt in more depth, should you wish to learn more about what it is or how to create it. Alternatively, you’re welcome to get in touch and find out how we can help you with the design, build and optimisation of your website.