Every website has a front door. When Google visits your site, the very first thing it looks for is your robots.txt file. This tiny file tells search engines which pages to visit and which ones to skip.
If you do not have one — or if yours has a mistake — search engines may waste time on the wrong pages. That means your best content gets ignored.
Our free Robots.txt Generator fixes this in seconds. No coding skills needed. Just fill in the form, hit Generate, and copy your file. It is that simple.
What Is a Robots.txt File?
A robots.txt file is a plain text file. You place it at the root of your website. Its job is simple: it tells search engine bots what they can and cannot crawl.
Think of it as a note on your front door. It might say "Welcome, please come in" to Google. Or it might say "Sorry, the basement is off limits" for some folders you do not want anyone to see in search results.
The file is also called the Robots Exclusion Protocol. It has been used on the web since 1994. Almost every search engine — Google, Bing, Yahoo, DuckDuckGo — reads it before crawling your pages.
📌 Quick Fact: Your robots.txt file must always be at https://yourdomain.com/robots.txt — never anywhere else.
The file itself is very short. A basic one might be just two lines. But the right two lines can make a big difference for how Google sees your site.
Why Does Robots.txt Matter for SEO?
Search engines do not have unlimited time on your site. Google gives each website a crawl budget: a limit on how many of its pages bots will crawl in a given period.
If you have 500 pages but 200 of them are junk — old test pages, admin panels, duplicate content — Google might spend all its budget on those. Your real pages get left out.
A good robots.txt file guides bots straight to your best content. Here is what it can do for your SEO:
| Benefit | What It Does |
| --- | --- |
| Saves crawl budget | Stops bots wasting time on pages that do not matter |
| Blocks duplicate content | Keeps Google from indexing the same content twice |
| Hides private pages | Keeps admin, login, and staging pages out of search results |
| Points to your sitemap | Helps Google find all your important pages faster |
| Controls bot access | You can block some bots but let others in |
💡 Good to Know: Robots.txt is not a security tool. It asks bots to stay out, but it does not lock them out. For real security, use passwords or server-side rules.
How to Use the Robots.txt Generator — Step by Step
Our generator makes the whole process fast and easy. You do not need to know any code. Just follow these steps:
1. Choose your default setting. Decide if you want all bots to be allowed or blocked by default. Most sites choose "Allowed" here.
2. Set a crawl delay (optional). If your server is slow, you can ask bots to wait a few seconds between page visits. This stops them from slowing your site down.
3. Add your sitemap URL. Paste the link to your sitemap here. This helps bots find all your pages. If you do not have one, leave this blank.
4. Choose settings for each search robot. You can allow or block Google, Bing, Yahoo, Baidu, and others one by one. Leave them as "Same as Default" if you are not sure.
5. Add restricted folders. Type in any folder paths you want to hide. For example: /wp-admin/ or /private/.
6. Click Generate. Your robots.txt file will appear instantly. Copy it and upload it to your website's root folder.
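For example, if you allow all bots by default, set a 10-second crawl delay, add a sitemap, and restrict /wp-admin/ and /private/, the generated file would look something like this (the exact output and your domain will differ):

```
User-agent: *
Allow: /
Crawl-delay: 10
Disallow: /wp-admin/
Disallow: /private/

Sitemap: https://yourdomain.com/sitemap.xml
```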
📁 Where to Upload: Save the file as robots.txt (all lowercase). Upload it so it lives at yourdomain.com/robots.txt.
Key Directives — The Words Inside Your Robots.txt
A robots.txt file uses simple commands called directives. Each line does one job. Here are the main ones you need to know:
| Directive | What It Does | Example |
| --- | --- | --- |
| User-agent | Names the bot this rule is for. Use * for all bots. | User-agent: * |
| Allow | Lets a bot visit this page or folder. | Allow: /blog/ |
| Disallow | Stops a bot from visiting this page or folder. | Disallow: /wp-admin/ |
| Crawl-delay | Tells bots to wait before loading the next page. | Crawl-delay: 10 |
| Sitemap | Points bots to your XML sitemap. | Sitemap: https://example.com/sitemap.xml |
Each set of rules starts with a User-agent line. Below it, you list the Allow or Disallow rules for that bot. Each rule goes on its own line. No two rules share a line.
⚠️ Remember: File paths in robots.txt are case-sensitive. /Admin/ and /admin/ are two different folders. Always use lowercase to be safe.
Sample Robots.txt Files You Can Copy
Here are some ready-to-use examples. Pick the one that fits your site best.
Allow everything (most common):

```
# robots.txt — Allow All
# Allow all bots to crawl everything
User-agent: *
Allow: /
Sitemap: https://yourdomain.com/sitemap.xml
```
Block your admin area (WordPress users):

```
# robots.txt — WordPress
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-login.php
Sitemap: https://yourdomain.com/sitemap.xml
```
Block a specific bot only:

```
# robots.txt — Block One Bot
# Block Bingbot from one page
User-agent: Bingbot
Disallow: /special-offer.html

# Let everyone else in
User-agent: *
Allow: /
```
Block your whole site (use with care):

```
# robots.txt — Block All (staging sites)
# Keep everything hidden — use on staging only!
User-agent: *
Disallow: /
```
Common Robots.txt Mistakes That Can Hurt Your SEO
One small typo in your robots.txt can stop Google from seeing your whole site. Here are the most common mistakes — and how to avoid them:
- ❌ Blocking your whole website by accident. If you write Disallow: / under User-agent: *, no bot can crawl any page. Your site will vanish from Google. Always double-check this line.
- ❌ Blocking your CSS and JavaScript files. Google needs to see your site exactly as a visitor does. If you block style and script files, Google cannot render your pages properly. This can hurt your rankings.
- ❌ Using the wrong case. /Admin/ is not the same as /admin/. If you block the wrong version, the folder stays open to bots.
- ❌ Forgetting a trailing slash. Disallow: /private blocks every path that starts with /private, including /private-page.html. Disallow: /private/ blocks only that folder. Be precise about which you mean.
- ❌ Leaving Disallow empty. A blank Disallow: line means "allow everything." This is fine, but make sure it is what you want.
- ❌ Not testing the file. After uploading, always check your file in Google Search Console using the robots.txt Tester tool.
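You can also sanity-check rules locally before uploading. This sketch uses Python's built-in urllib.robotparser; one caveat is that Python applies rules in file order (first match wins), unlike Google's longest-match behavior, so put narrow Allow lines before broader Disallow lines when you want both parsers to agree:

```python
from urllib.robotparser import RobotFileParser

# A typical WordPress rule set. The Allow line comes first so that
# Python's first-match parser agrees with Google's longest-match rule.
rules = """\
User-agent: *
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-admin/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Ask which URLs the "*" user-agent is allowed to fetch
print(parser.can_fetch("*", "https://example.com/wp-admin/"))                # False
print(parser.can_fetch("*", "https://example.com/wp-admin/admin-ajax.php"))  # True
print(parser.can_fetch("*", "https://example.com/blog/post-1/"))             # True
```

A quick script like this catches the classic "Disallow: / blocked everything" mistake before any search engine ever sees the file.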
Robots.txt for WordPress — What You Should Know
WordPress is the most popular website platform in the world. If your site runs on WordPress, there are a few things to keep in mind when writing your robots.txt file.
By default, WordPress creates a virtual robots.txt file for you. But it is very basic. You can do much better with a custom one.
Here are the most important folders to block for a WordPress site:
- /wp-admin/ — your dashboard. Block this, but keep /wp-admin/admin-ajax.php open.
- /wp-login.php — your login page. No need for Google to index it.
- /?s= — search results pages. These create thousands of duplicate pages.
- /tag/ and /category/ (optional) — tag and category archive pages can create duplicate content.
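Putting those rules together, a solid starting robots.txt for a WordPress site might look like this (swap in your own domain; the /tag/ and /category/ lines are optional and depend on your site's structure):

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-login.php
Disallow: /?s=
Disallow: /tag/
Disallow: /category/

Sitemap: https://yourdomain.com/sitemap.xml
```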
🔌 Using Yoast SEO or Rank Math? These plugins let you edit your robots.txt file from inside WordPress. Go to your SEO plugin settings and look for the robots.txt editor. This is the easiest way to manage the file on a WordPress site.
Remember: after making any changes, go to Google Search Console and use the robots.txt Tester. It will show you if any important pages are accidentally blocked.
Frequently Asked Questions About Robots.txt
What is a robots.txt file?
A robots.txt file is a small text file at the top level of your website. It tells search engine bots which pages they can visit and which ones to skip. Think of it as a welcome sign — or a "Do Not Enter" sign — for web crawlers.
Do I really need a robots.txt file?
Yes, especially if your site has more than a few pages. Without one, search engines will crawl everything — including pages you may not want indexed, like login pages, admin panels, or test pages. A robots.txt file keeps bots focused on your best content.
Does robots.txt affect my SEO?
Yes. A good robots.txt file saves your crawl budget, prevents duplicate content issues, and helps Google find your most important pages faster. A bad one can accidentally hide your site from search engines entirely.
Where do I put my robots.txt file?
Always at the root of your website. The full URL should be yourdomain.com/robots.txt. If you put it anywhere else, search engines will not find it.
Can I block just one search engine?
Yes. Use the bot's specific name in the User-agent line. For example, write User-agent: Bingbot to set rules for Bing only, while leaving Google and others unaffected.
Does blocking a page in robots.txt delete it from Google?
Not always. If other websites link to that page, Google may still know it exists and show its URL in search results — just without a description. To fully remove a page from search, use a noindex meta tag instead.
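For instance, to keep a page out of the index entirely, add this standard tag inside the page's <head> — and make sure robots.txt does not block that page, or crawlers will never see the tag:

```html
<!-- Tells all crawlers not to index this page -->
<meta name="robots" content="noindex">
```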
How do I test my robots.txt file?
The easiest way is to use the robots.txt Tester in Google Search Console. Paste your file, type in a URL, and click Test. It will tell you if that URL is allowed or blocked.
Can bad bots ignore my robots.txt file?
Yes. Trusted search engines like Google and Bing follow robots.txt rules. But spam bots, scrapers, and malware crawlers often ignore them. For real security, use server-side tools like .htaccess rules or a firewall.
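As a sketch of such a server-side rule, an Apache .htaccess file could refuse these requests outright (this assumes mod_rewrite is enabled, and "BadBot" stands in for the scraper's real User-Agent string):

```
# Return 403 Forbidden to any client whose User-Agent contains "BadBot"
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} BadBot [NC]
RewriteRule .* - [F,L]
```

Unlike robots.txt, this is enforced by the server itself, so the bot cannot simply ignore it.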