Slickplan

Custom robots.txt Generator: Create a Google Robots txt File


About our custom robots.txt generator

Take control of where search engine crawlers go with a robots.txt file, a tiny file with big SEO power. Put simply, creating this file and adding it to your root directory tells Google what to index and what to skip. Our custom robots.txt generator makes it easy to quickly generate a robots.txt file that's error-free and does the job.

Why and how to generate a robots.txt file with our tool

Before jumping into how the robots.txt file generator works, let's dig a little deeper into why you'd want one in the first place. Not all pages on your site represent SEO value. Think checkout confirmation pages, login pages, duplicate content, admin and staging areas of a site, etc. It's not just that these sorts of pages don't improve SEO if they're included in a crawl; they can actively work against your search engine optimization efforts by taking up precious crawl budget. That may mean crawlers miss genuinely valuable content in favor of pages that don't really matter. Moreover, it's not just Google that crawls your site; excluding other third-party crawlers can help keep your site speedy.
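To give a quick sense of what that looks like in practice, the sketch below blocks one third-party crawler from an entire site (AhrefsBot is used here purely as an illustration; substitute the documented user-agent token of whichever bot you want to exclude):

    # Block one third-party crawler from the whole site
    User-agent: AhrefsBot
    Disallow: /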

How to use the robots.txt generator

A robots.txt generator lets you block portions of your site from being crawled so Google goes exactly where you want it to. Without further ado, here's how to create a robots.txt file.

1.
Select “allowed” or “refused”. The default for our robots.txt file generator is that all robots, or crawlers, are allowed.
2.
Individually select which search robots you'd like to refuse access to
3.
Set your crawl delay
4.
Type in any directories you want to exclude from crawling, being very careful with both letter case and symbols.
5.
Add your sitemap URL
6.
Download the file and, with your robots.txt file in hand, add it to your root directory. Alternatively, you can copy the content and paste it into an existing robots.txt file (see the sample file below).
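Following those steps, the downloaded file might look something like this sketch (the blocked directories and the sitemap URL are placeholders; yours will reflect whatever you entered in the generator):

    # Rules for all crawlers
    User-agent: *
    # Keep low-value pages out of the crawl
    Disallow: /checkout/
    Disallow: /admin/
    # Ask crawlers to wait 10 seconds between requests (Google ignores this)
    Crawl-delay: 10
    # Tell crawlers where the sitemap lives
    Sitemap: https://www.example.com/sitemap.xml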

Explanation of our robots.txt generator output

After you create a robots.txt file, you might find yourself wondering what all the jargon in those groups of text means. Let's break down the output directives of our online robots.txt generator.

User-agent: The search engine crawler that the following lines of text apply to. There are tons of user-agents out there, but some of the most common are Googlebot, Bingbot, Slurp and Baiduspider (all case-sensitive). The one exception is *; the asterisk applies to all crawlers (except AdsBot crawlers, which need to be named individually).
Disallow: Always the second thing you'll see in each grouping, Disallow lists what you don't want a crawler to access or index. Leaving this blank means you're not disallowing anything from that user-agent's crawler, and it can crawl your entire site. On the flip side, if you want your entire site blocked from that crawler, you'll see a “/”. You can also list particular directories or pages here, each of which must go on its own line.
Allow: This essentially lets you create exceptions to the Disallow directive for particular directories, subdirectories or pages.
Crawl-delay: Exactly what it sounds like: the number here is the delay, in seconds, that a crawler should wait between requests, in an attempt to save bandwidth and avoid traffic spikes. Google no longer supports this directive (Googlebot ignores it), but other search engines may still follow it.
Sitemap: While it's smart to submit your sitemap directly to Google Search Console, there are other search engines out there, and this directive tells their crawlers where to find your sitemap.
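Putting those directives together, a single grouping might look like the hypothetical example below; the directory and file names are placeholders, chosen to show Allow carving an exception out of a broader Disallow:

    User-agent: Bingbot
    # Block the whole /private/ directory...
    Disallow: /private/
    # ...except for this one page
    Allow: /private/annual-report.html
    # Wait 5 seconds between requests (Googlebot would ignore this)
    Crawl-delay: 5
    Sitemap: https://www.example.com/sitemap.xml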

Best practices on how to create a robots.txt file for SEO purposes

This is all stuff we handle for you when creating a robots.txt file, but it's still good to know some best practices in case you need to make changes down the road or want to know how to make a robots.txt file that gets the job done on your own.

1.
The file must live in the top-level (root) directory of your site.
2.
Keep in mind when you generate robots.txt files that everything is case-sensitive. If you create a robots.txt directive to block the “Photo” directory, for example, but enter “photo”, the “Photo” directory will still be crawled (see the snippet after this list).
3.
Pay careful attention to slashes and other symbols, both in domains and when populating directive fields like Disallow. Accidentally leaving Disallow completely blank, for instance, means you're allowing that crawler to access your entire site.
4.
Each directive must be on its own line.
5.
The directives created by a robots.txt generator don't remove a page, domain or directory from Google's index. If you want something to not appear in search results at all, use a “noindex” tag rather than the robots.txt file.
6.
Avoid conflicting rules, as they may lead to crawling issues that cause important content to be skipped.
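To make the case-sensitivity and one-directive-per-line points concrete, here's a short hypothetical snippet; note that the first rule alone would not stop /photo/ from being crawled:

    User-agent: *
    # Blocks /Photo/ but NOT /photo/ (paths are case-sensitive)
    Disallow: /Photo/
    # Each directive goes on its own line
    Disallow: /photo/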


Check out our other SEO tools

XML sitemap generator

XML sitemap validator

HTTPS/HTTP header checker

Robots.txt generator

Redirect checker

UTM Builder