For your website to be found by other people, search engine crawlers, also known as bots or spiders, crawl each page on your website looking for changes to text and links so they can update their search indexes.
How to control search engine crawlers with a robots.txt file
Website owners can use a file called robots.txt to define instructions for how search engine crawlers should crawl their website. When a search engine starts crawling a site, it should request the robots.txt file first and follow the rules within it. We say "should" because there is nothing to force crawlers to look for this file and obey its instructions; most "bad" crawlers either don't read the file at all or simply ignore any rules they don't like.
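As an illustration, a minimal robots.txt might look like the sketch below. The /admin/, /tmp/ and /search/ paths and the Bingbot rule are hypothetical, shown only to demonstrate the syntax; crawlers expect to find the file at the root of the site, for example https://example.com/robots.txt:

    # Rules for all crawlers
    User-agent: *
    # Paths crawlers are asked not to visit (hypothetical examples)
    Disallow: /admin/
    Disallow: /tmp/

    # Rules aimed at one specific crawler
    User-agent: Bingbot
    Disallow: /search/

Each User-agent line names the crawler that the rules beneath it apply to (* matches all crawlers), and each Disallow line lists a path that crawler is asked not to visit.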