# Robots.txt Manager
Customize your site’s robots.txt file directly from the WordPress admin panel. Control which parts of your site search engines can crawl, add sitemap references, and manage crawler behavior—all without FTP access or file editing.
## Use Cases
- Block search engines from crawling admin, login, and private areas
- Add your XML sitemap location for better search engine discovery
- Control crawl rate to prevent server overload from aggressive bots
- Block specific user agents (bad bots, scrapers)
- Manage staging site visibility during development
## Where to Find It
Configure your robots.txt content in the Switchboard module settings. Your changes take effect immediately at yoursite.com/robots.txt.
## How It Works
- Open the Robots.txt Manager module settings
- Edit the robots.txt content in the textarea
- Save your changes
- WordPress serves your custom content at `/robots.txt`
The module overrides WordPress’s default virtual robots.txt generation with your custom content.
## Basic Syntax
Robots.txt uses a simple directive format:
```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://yoursite.com/sitemap.xml
```

### User-agent
Specifies which crawler the rules apply to.
| Value | Meaning |
|---|---|
| `*` | All crawlers |
| `Googlebot` | Google’s crawler only |
| `Bingbot` | Bing’s crawler only |
### Disallow
Tells crawlers not to access a path.
| Directive | Effect |
|---|---|
| `Disallow: /` | Block the entire site |
| `Disallow: /private/` | Block the `/private/` directory |
| `Disallow: /page.html` | Block a specific file |
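The matching behavior in the table can be sanity-checked locally with Python’s standard-library robots.txt parser. This is a minimal sketch; the rules and the `example.com` URLs are illustrative placeholders, not your site’s actual content:

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules mirroring the Disallow table above.
rules = """\
User-agent: *
Disallow: /private/
Disallow: /page.html
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/private/doc.html"))  # False: /private/ is blocked
print(rp.can_fetch("*", "https://example.com/page.html"))         # False: specific file blocked
print(rp.can_fetch("*", "https://example.com/blog/"))             # True: no rule matches
```

`Disallow` values are path prefixes, which is why anything under `/private/` is rejected.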
### Allow
Explicitly allows access (useful for exceptions within disallowed areas).
```
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```

### Sitemap
Points crawlers to your XML sitemap for better indexing.
```
Sitemap: https://yoursite.com/sitemap.xml
```

## Common Configurations
### Standard WordPress Setup
```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-includes/
Sitemap: https://yoursite.com/sitemap.xml
```

### With Feed and Trackback Blocking
```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-includes/
Disallow: /feed/
Disallow: /trackback/
Disallow: /comments/feed/
Sitemap: https://yoursite.com/sitemap.xml
```

### Block Specific Directories
```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /private/
Disallow: /members-only/
Disallow: /staging/
Disallow: /test/
Sitemap: https://yoursite.com/sitemap.xml
```

### Block Specific Bots
```
# Allow all normal crawlers
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

# Block bad bots
User-agent: AhrefsBot
Disallow: /

User-agent: SemrushBot
Disallow: /

User-agent: MJ12bot
Disallow: /

Sitemap: https://yoursite.com/sitemap.xml
```

### Staging Site (Block Everything)
```
User-agent: *
Disallow: /
```

This blocks all search engine crawling. Use only on staging or development sites, never in production.
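Whether a given crawler is covered by a per-agent group, like the bot-blocking configuration above, can also be checked locally. A sketch with Python’s standard-library parser; the rules and URLs are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Trimmed-down version of the "Block Specific Bots" example above.
rp = RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /wp-admin/

User-agent: AhrefsBot
Disallow: /
""".splitlines())

print(rp.can_fetch("AhrefsBot", "https://example.com/blog/"))    # False: AhrefsBot is blocked everywhere
print(rp.can_fetch("SomeOtherBot", "https://example.com/blog/"))  # True: falls back to the * group
```

A crawler that has its own `User-agent` group uses only that group; everything else falls back to the `*` rules.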
## Sitemap Reference
Always include your sitemap location:
```
Sitemap: https://yoursite.com/sitemap.xml
```

If using multiple sitemaps, list each one:
```
Sitemap: https://yoursite.com/post-sitemap.xml
Sitemap: https://yoursite.com/page-sitemap.xml
Sitemap: https://yoursite.com/category-sitemap.xml
```

This helps search engines discover and index your content efficiently.
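To confirm that parsers actually pick up your `Sitemap` lines, Python 3.8+ exposes them via `RobotFileParser.site_maps()`. A minimal sketch with illustrative rules:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content with two Sitemap lines.
rp = RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /wp-admin/
Sitemap: https://yoursite.com/post-sitemap.xml
Sitemap: https://yoursite.com/page-sitemap.xml
""".splitlines())

print(rp.site_maps())  # list of the declared sitemap URLs
```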
## Testing Your Robots.txt

### View Live File

Visit `yoursite.com/robots.txt` in your browser to see the active content.
### Google’s Testing Tool
- Go to Google Search Console
- Select your property
- Use the robots.txt tester to check for issues
### Test URL Blocking
In Search Console’s robots.txt tester:
- Enter a URL path
- See if it’s blocked or allowed
- Identify which rule affects it
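The same URL checks can be run locally with Python’s standard-library parser, a sketch using the `Allow` exception from earlier (URLs are placeholders). One caveat, noted in the comments: Python applies rules in file order, so the `Allow` line is listed before the broader `Disallow` here, whereas Google uses longest-path matching and ignores order:

```python
from urllib.robotparser import RobotFileParser

# Python's parser uses first-match-wins in file order, so the narrow
# Allow exception must precede the broad Disallow for this local check.
# (Googlebot itself matches by longest path, making order irrelevant.)
rp = RobotFileParser()
rp.parse("""\
User-agent: *
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-admin/
""".splitlines())

for path in ("/wp-admin/", "/wp-admin/admin-ajax.php", "/blog/"):
    verdict = "allowed" if rp.can_fetch("*", "https://example.com" + path) else "blocked"
    print(path, verdict)
```

This answers the same question as the Search Console tester: which rule wins for a given path.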
## Important Considerations

### It’s a Suggestion, Not Security
Robots.txt is a convention that well-behaved crawlers follow. It does not:
- Prevent access to files
- Hide content from determined visitors
- Provide security or privacy
- Block malicious bots that ignore rules
For actual access control, use password protection or login requirements.
### Don’t Block Important Resources
Avoid blocking CSS, JavaScript, or images that crawlers need to render your pages:
```
# Bad - may hurt SEO
Disallow: /wp-content/
```

Search engines need to access these files to properly understand and rank your content.
### Check for Physical File
If you have a physical robots.txt file in your WordPress root directory, it takes precedence over this virtual file. Delete the physical file to use this module’s content.
## FAQ

### Does this file actually exist on my server?

No, WordPress generates it dynamically. This module overrides WordPress’s default generation with your custom content. No actual file is created.

### Will changes affect my SEO immediately?
Changes take effect immediately at your `/robots.txt` URL. However, search engines cache robots.txt and may take time to re-crawl it. Major changes can take days to weeks to fully take effect.

### I blocked a section but Google still shows those pages?
Robots.txt prevents crawling, not indexing. If pages were already indexed or have external links, they may remain in search results. Use a meta noindex tag to remove indexed pages, and leave those pages crawlable so search engines can actually see the tag.

### Should I block `/wp-content/uploads/`?
Generally no. This blocks images from appearing in image search and can hurt SEO. Only block it if you have specific private files in uploads.

### How do I know if bots are following my rules?
Check your server logs for crawler activity. You can also use Google Search Console’s Coverage report to see how Google is crawling your site.

### What if I make a mistake?
Robots.txt errors won’t break your site. At worst, you might temporarily block content from search engines. Fix the error and search engines will see the updated file on their next crawl.