Can I stop parts of my website from showing in Google search results?
In short, yes. By implementing a plain text file known as a robots.txt file, you can ask Google’s search engine crawlers not to crawl certain pages of your website, which in most cases keeps those pages out of Google and other search engines’ results. Be aware, though, that robots.txt is a request rather than a lock: it does not make blocked pages private, and a blocked page can still be visited directly or, occasionally, appear in results if other websites link to it.
Why stop Google from displaying web pages in search results?
If you have sensitive or premium-value information on your website that you want to be available to paying customers or subscribers but not freely discoverable through search, a robots.txt file is a useful tool. It should, however, be combined with proper access controls such as a login, because the robots.txt file itself is publicly readable and does not stop anyone from visiting the listed pages directly.
How do I use a robots.txt file?
A robots.txt file is a text file that can be created using a standard text editing tool such as Notepad. There are also various websites that will generate the file for you, including Google’s Webmaster Tools (now known as Google Search Console), which offers free robots.txt file generation via the following process:
1. In your Google Webmaster Tools account, on the left hand ‘Dashboard’ toolbar, click on ‘Site Configuration’. (If you don’t have a Webmaster Tools account, register for one; it’s free.)
2. Under ‘Site Configuration’ click on ‘Crawler Access’.
3. Within the Crawler Access page, click the ‘Generate robots.txt’ tab.
4. Follow the four-step process, which includes downloading your robots.txt file and uploading it to your web server.
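Whether you generate the file or write it yourself in Notepad, the result is a short set of plain text rules. A minimal example might look like this (the ‘/private/’ directory and ‘members-only.html’ page are placeholders; substitute the paths on your own site):

```text
# Apply the rules below to all crawlers
User-agent: *
# Block everything under the /private/ directory
Disallow: /private/
# Block a single page
Disallow: /members-only.html
```

A blank `Disallow:` line, by contrast, blocks nothing and allows the whole site to be crawled.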
The robots.txt file should be placed in the root directory of your site, so search engines such as Google can find it by navigating to http://www.nameofmywebsite.com.au/robots.txt.
If implemented correctly, the robots.txt file will stop the relevant web pages from being displayed in Google search results. It is important to ensure that the file applies only to the pages you want excluded, and not to pages you are trying to direct search traffic towards.
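One quick way to sanity-check your rules before uploading the file is Python’s standard-library robots.txt parser. The sketch below uses a hypothetical rule set and domain; replace them with your own file’s contents and URLs:

```python
# Check which URLs a given robots.txt would block, using Python's
# standard-library parser. The rules and domain here are hypothetical.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Pages under /private/ are disallowed for all crawlers...
print(parser.can_fetch("*", "http://www.nameofmywebsite.com.au/private/report.html"))  # False
# ...while the rest of the site remains crawlable.
print(parser.can_fetch("*", "http://www.nameofmywebsite.com.au/index.html"))  # True
```

If a page you rely on for traffic comes back `False`, your Disallow rules are broader than intended.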