Robots.txt Analyzer
Parse and test robots.txt rules against URLs
About Robots.txt Analyzer
Paste your robots.txt content and test whether specific URLs are allowed or blocked for different user agents. Identify syntax errors and common configuration mistakes.
How It Works
The tool parses robots.txt according to the Robots Exclusion Protocol (RFC 9309). It groups rules by user agent, resolves * wildcards and $ end-of-URL anchors, and tests each URL against the longest (most specific) matching rule, with Allow winning ties.
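The matching logic described above can be sketched in Python. This is a minimal illustration of RFC 9309-style longest-match resolution, not the tool's actual implementation; the helper names are made up for the example:

```python
import re

def rule_matches(pattern: str, path: str) -> bool:
    """Translate a robots.txt path pattern ('*' wildcard, '$' end anchor)
    into a regular expression and test it against a URL path."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then restore '*' as '.*'
    regex = "^" + re.escape(pattern).replace(r"\*", ".*")
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

def is_allowed(rules: list[tuple[str, str]], path: str) -> bool:
    """rules: ('allow'|'disallow', pattern) pairs for one user-agent group.
    The longest matching pattern wins; on a tie, Allow wins.
    If nothing matches, the URL is allowed by default."""
    best_len, verdict = -1, True
    for directive, pattern in rules:
        if pattern and rule_matches(pattern, path):
            if len(pattern) > best_len or (
                len(pattern) == best_len and directive == "allow"
            ):
                best_len = len(pattern)
                verdict = directive == "allow"
    return verdict

rules = [("disallow", "/private/"), ("allow", "/private/public*")]
print(is_allowed(rules, "/private/secret.html"))  # False: blocked
print(is_allowed(rules, "/private/public-page"))  # True: longer Allow wins
```

The longest-match rule is why `Allow: /private/public*` overrides `Disallow: /private/` for matching URLs: its pattern is more specific.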
Step by Step
1. Paste your robots.txt content into the input area
2. Enter a URL to test in the options panel
3. Click Analyze to see parsed rules and test results
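The same check can be run programmatically with Python's standard-library `urllib.robotparser`. One caveat: it evaluates rules in file order rather than by longest match, so on overlapping Allow/Disallow patterns its verdict can differ from the stricter RFC 9309 behavior (the sample file and URLs below are hypothetical):

```python
from urllib import robotparser

robots_txt = """\
User-agent: *
Disallow: /admin/
Sitemap: https://example.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch(user_agent, url) answers the allow/block question
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True

# Sitemap directives are collected too (Python 3.8+)
print(rp.site_maps())
```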
Tips
- Always include a Sitemap directive pointing to your sitemap.xml
- Use 'User-agent: *' for rules that apply to all crawlers
- Be careful with Disallow: / — it blocks your entire site
- Test critical pages like /blog/ and /products/ individually
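A hypothetical robots.txt that follows these tips (domain and paths are placeholders) might look like:

```
User-agent: *
Disallow: /admin/
Allow: /blog/
Allow: /products/

Sitemap: https://example.com/sitemap.xml
```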
Frequently Asked Questions
What is robots.txt?
Robots.txt is a text file at your site's root that tells search engine crawlers which pages they can and cannot access. It follows the Robots Exclusion Protocol.
Does robots.txt prevent indexing?
No. Robots.txt controls crawling, not indexing. If other sites link to a page blocked by robots.txt, Google can still index it and show just the URL in results. Use a 'noindex' meta tag to prevent indexing.
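For a page to be reliably kept out of the index, crawlers must be allowed to fetch it so they can see the noindex signal:

```
<!-- In the page's <head>: crawlers may fetch the page but should not index it -->
<meta name="robots" content="noindex">
```

For non-HTML resources such as PDFs, the equivalent signal is the `X-Robots-Tag: noindex` HTTP response header.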