Create a robots.txt file to control how search engines and AI bots crawl your website. Use presets or build custom rules.
User-agent: *
Allow: /
A robots.txt file is a plain text file placed at the root of your website (e.g. yoursite.com/robots.txt) that tells search engine crawlers which pages or sections they are allowed or not allowed to access. It follows the Robots Exclusion Protocol standard.
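As a sketch, a typical robots.txt combines per-bot rules with a sitemap pointer (the paths and URL below are illustrative):

```
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://yoursite.com/sitemap.xml
```

Rules are grouped by User-agent line; a blank line separates groups, and the Sitemap directive applies to the whole file.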
No — robots.txt prevents crawling, not indexing. If other sites link to a page you've disallowed in robots.txt, Google may still index it based on those external signals. To prevent indexing, use a 'noindex' meta robots tag instead — and make sure that page is not also blocked in robots.txt, or crawlers will never fetch the page and see the tag.
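The noindex directive goes in the page's HTML head (or, equivalently, in an X-Robots-Tag HTTP response header for non-HTML files):

```html
<meta name="robots" content="noindex">
```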
It depends on whether you want AI companies to use your content for training. Bots like GPTBot (OpenAI), Google-Extended (Gemini training), and CCBot (Common Crawl) can be blocked if you want to opt out of AI training while still allowing regular search crawlers.
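Opting out of AI training while staying visible in search might look like the following, using the bot names mentioned above:

```
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

The final wildcard group keeps regular search crawlers like Googlebot and Bingbot unaffected.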
The robots.txt file must be placed in the root directory of your website so it's accessible at yoursite.com/robots.txt. In Next.js, you can create a robots.ts file in your app directory that generates it automatically.
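A minimal sketch of that Next.js approach, assuming the App Router's metadata file convention — in a real project you would import `type { MetadataRoute } from 'next'` and annotate the return value as `MetadataRoute.Robots`; the import is omitted so the snippet runs standalone, and the sitemap URL and GPTBot rule are illustrative:

```typescript
// app/robots.ts — Next.js serves the returned object at /robots.txt.
export default function robots() {
  return {
    rules: [
      { userAgent: '*', allow: '/' },         // allow all regular crawlers
      { userAgent: 'GPTBot', disallow: '/' }, // example: opt out of AI training
    ],
    sitemap: 'https://yoursite.com/sitemap.xml', // placeholder URL
  }
}
```

Next.js converts each entry in `rules` into a User-agent group in the generated file.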
Crawl-delay tells bots to wait a specified number of seconds between requests. Google ignores this directive (use Google Search Console's crawl rate setting instead), but Bing and other bots respect it. Use it if your server is struggling with crawl load.
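For bots that honor it (Google does not), the directive sits inside that bot's User-agent group; the 10-second value here is just an example:

```
User-agent: Bingbot
Crawl-delay: 10
```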