Why is there a difference between URLs analyzed and URLs found?
Written by Marlon Dean Neuhuber

If the Dashboard shows that more URLs have been found than have been analyzed, it’s usually due to the project settings. Here are the main reasons for the discrepancy:

  1. Insufficient URL volume (the analysis limit set for the project is lower than the number of URLs found)

  2. Subdomains are excluded

  3. URLs are excluded via robots.txt

  4. URLs are excluded via the blacklist

  5. Only certain URLs are crawled according to the whitelist

  6. A subfolder has been specified

You can check all of these in your project settings. At the end of the project settings, the “Previous analyses” tab shows whether the discrepancy has always existed or only affects newer crawls. This can help you determine whether it is caused by a recent change to the settings.

To see whether the robots.txt is being followed or whether subdomains have been excluded from the crawl, please review the section “What should be analyzed?” in your project settings.
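For example, if the crawl respects the robots.txt, a rule like the following (the “/internal/” path is just a placeholder) would keep every URL under that folder out of the analysis, even though those URLs may still be found via links:

    User-agent: *
    Disallow: /internal/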

A bit further down you will find the “Advanced analysis” tab. Here you can verify that you haven’t limited the analysis to a single subfolder (“Advanced analysis” -> “What to analyze” -> “Analyze subfolder”) and check whether your subdomains are listed (“Analyze Subdomains”).

You can also check your blacklist and whitelist for any entries that influence the crawl (“Advanced analysis” -> “Ignore/Include URLs”).
