Undeniably, Screaming Frog SEO Spider is one of the most popular, effective, and easy-to-use all-around SEO tools for technical audits and much more. If you know where to look, you can identify 99% of the issues that might affect your SEO and hinder your website’s performance and rankings.
As such, it has become something of an industry standard today, and I guarantee you won’t meet an SEO who hasn’t used it or at least heard of it. I want to quickly state that I am in no way affiliated with Screaming Frog or the company behind it. I have been using Screaming Frog for a number of years and it has proven an invaluable tool, which is why I recommend it as the go-to web crawler. At the end of the article, I’ll also provide suggestions for other, similar tools. Finally, please note that Screaming Frog is free to use for up to 500 pages; if you want to crawl more (and enjoy some additional features), there is an annual fee of $149.
Without further ado, here are 7 methods to improve your onsite SEO with Screaming Frog (with practical guidelines):
1. JavaScript Mobile SEO Audit
Mobile-first indexing has been in effect for some time now and Google has been notifying webmasters about the shift as they continue to roll out the update for more and more websites.
As mobile users are now responsible for the majority of searches on Google, it’s essential that your mobile SEO strategy is optimized to avoid missing out on valuable organic traffic.
Fortunately, Screaming Frog SEO Spider provides all the tools you need to conduct a technical SEO audit with a mobile-first approach. Before using the tool, it’s important to configure it correctly. By default, JavaScript rendering is disabled, so it needs to be enabled, as the crawler must be able to read the Document Object Model (DOM) after JavaScript has been loaded and executed on the page.
To do so, go to “Configuration” > “Spider” > “Rendering”, and from the dropdown select “JavaScript”. Leave “Enable Rendered Page Screen Shots” selected, set “AJAX Timeout” to 5 seconds, and finally, select “Googlebot Mobile: Smartphone” from the “Window Size” dropdown.
Then enter the URL you want to crawl in the address field and hit “Start”. If you have ever run a “simple” crawl before, you’ll notice that with this configuration it takes a lot longer to crawl the entire website (up to 5x longer for each URL); that’s quite normal, so just let Screaming Frog SEO Spider do its job.
Once the crawl is completed, there are a number of checks you can perform to identify underlying JavaScript rendering issues:
Check for “Blocked Resources”
From the tabs at the top, select “Response Codes” and then “Blocked Resource” via the dropdown below.
You can also monitor any blocked resources via the bottom menu in the “Rendered Page” tab. If any resources are blocked by your robots.txt, you need to allow the relevant user-agent to crawl them. (Alternatively, you can configure Screaming Frog to ignore robots.txt.)
Once you have selected “Blocked Resource”, you can export everything via “Bulk Export” > “Response Codes” > “Blocked Resource Inlinks” / “Blocked by Robots.txt Inlinks”.
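To illustrate the robots.txt fix, here is a minimal before-and-after sketch; the /assets/ paths are hypothetical, so substitute the blocked paths Screaming Frog actually reports for your site:

# Before: a blanket rule blocks the JS and CSS Google needs to render pages
User-agent: *
Disallow: /assets/

# After: keep private files blocked, but allow the rendering resources
User-agent: *
Disallow: /assets/private/
Allow: /assets/js/
Allow: /assets/css/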
Manually Scan Rendered Pages
You can check the rendered version of any page via the bottom panel by selecting the “Rendered Page” tab. As we have configured the crawler to simulate a smartphone, you’ll see what Google’s smartphone crawler sees. Check whether any parts of the page are not rendering properly. If there are issues, cross-check your “Blocked Resources” to see if those elements are reported there.
Compare Page Source Code and DOM-processed / JavaScript Rendered Code
What’s being rendered after JavaScript is loaded could be completely different from the page source code, and there’s no point in crawling JavaScript-heavy websites without processing the DOM and any dynamically generated content. To compare the source code and the DOM with Screaming Frog, go to “Configuration” > “Spider” > “Advanced” and check both “Store HTML” and “Store rendered HTML”.
Then the “View Source” tab in the bottom panel will be populated.
You’ll find discrepancies between the two panels; that’s quite normal and nothing alarming. What you need to ensure is that all content is properly rendered and present in the right-hand panel. If it isn’t, you need to pinpoint and fix the elements that aren’t being picked up by the crawler.
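As a simplified, hypothetical illustration, a container that is empty in the raw source but populated in the rendered HTML would look like this:

<!-- Raw page source (left panel): the container is empty before JavaScript runs -->
<div id="product-grid"></div>
<script src="/assets/js/render-products.js"></script>

<!-- Rendered HTML (right panel): content injected by JavaScript -->
<div id="product-grid">
  <h2>Example product</h2>
  <p>This copy only exists in the DOM after rendering.</p>
</div>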
Note: you can perform the exact same audit for a desktop client just by selecting “Googlebot Desktop” in the “Rendering” configuration of Screaming Frog.
// Additional resources:
How Does Mobile-First Indexing Work, and How Does It Impact SEO
Best practices for mobile-first indexing
2. Find Redirect Chains
Redirect chains can prove quite problematic, as they put extra strain on search crawlers trying to reach the end page and needlessly eat up your crawl budget. Usually, a website accrues redirect chains as users publish new content that deprecates old content, and then redirect the old page to the new one. Multiply that by tens, hundreds, if not thousands of pages and by multiple users, and things get chaotic if you don’t have proper control and log everything (and even then, redirect chains are unavoidable).
Another common case is products and services that are no longer sold by a company: they are removed from the website, and the old URL 301-redirects to a new page with a similar product (or a newer model, for example). As time goes by, more and more redirects are implemented, and the chances increase that you end up with redirect chains.
To better illustrate what a redirect chain is, imagine that you have a Page A that features a product you’re currently selling. After it goes out of stock, you decide not to restock it and stop selling said product, but you launch a new, similar product in its place. The new product has its own URL (Page B) containing its features, price, etc., and perhaps even a similar name. Some webmasters will choose to redirect Page A to Page B to provide customers looking for the discontinued product with a better alternative. This case routinely repeats itself, only next time Page B will be redirected to Page C. You then have the following redirect structure:
Page A > Page B > Page C
This is a very common situation after a website migration to a new URL structure or after transitioning to HTTPS.
Redirect chains should be broken down and simplified. In the previous example, where Page A redirects to Page B and, in turn, Page B to a new Page C, we should break down the chain and point both Page A and Page B directly to Page C (see the .htaccess sketch after the list). So instead of one “2-level” redirect, there are two single-step redirects:
Page A > Page C
Page B > Page C
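In .htaccess terms (Apache; the file names and domain below are hypothetical), flattening the chain simply means both old URLs point straight to the final destination:

# Both legacy pages 301 straight to the current page, with no intermediate hop
Redirect 301 /page-a.html https://www.example.com/page-c.html
Redirect 301 /page-b.html https://www.example.com/page-c.html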
Once again, Screaming Frog has a nifty little report that digs out every redirect chain on your website and generates a helpful spreadsheet that maps everything neatly.
All you have to do is perform a simple website scan and then click on “Reports” > “Redirect Chains”.
After you get your list with all the redirect chains, it’s time to break them down and implement the right redirections in your .htaccess file (Apache server) by adding the following line for each and every redirect you need:
Redirect 301 /old-file.html http://www.mydomain.com/new-file.html
As an aside: there are more ways to implement 301 redirects, and several free plugins can take care of redirects on WordPress, Magento, etc.
// Additional resources:
Change page URLs with 301 redirects – Google
How to create a 301 redirect in WordPress – Yoast
Beginner’s Guide to Creating Redirects in WordPress
How to Properly Implement a 301 Redirect
3. Fix Missing or Duplicate Meta Tags, H1s, Titles
Another common issue that 90% of websites out there encounter is duplicate or missing meta descriptions, page titles, H1s, etc. As you pump out more and more content and new pages are created on a steady basis, it’s unavoidable that some of them will be missing the necessary meta tags or end up with duplicate metadata.
How to fix duplicate and missing metadata:
With Screaming Frog, it’s extremely easy to identify all pages suffering from any of these issues. Simply perform a normal crawl of your website, and via the panel on the top right-hand side, select the element you want (H1, Page Titles, H2, Images, etc.) and then click either “Missing” or “Duplicate” (which are nested underneath).
As you can see, Screaming Frog doesn’t just identify the missing/duplicate issues; it also flags elements that exceed the optimal character length. From there, you can simply extract the affected pages via the “Export” option and prioritize the onsite fixes, starting from the most critical pages (e.g. homepage, services page, product category pages, etc.).
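For reference, this is roughly what a clean page head looks like, with a unique title and meta description kept within the commonly cited length guidelines (around 50-60 characters for titles, 150-160 for descriptions); all values below are placeholders:

<head>
  <!-- Unique per page, roughly 50-60 characters -->
  <title>Blue Widgets for Home Offices | Example Store</title>
  <!-- Unique per page, roughly 150-160 characters -->
  <meta name="description" content="Browse our range of blue widgets for home offices. Free shipping, a two-year warranty and 30-day returns on every order.">
</head>
<body>
  <!-- Exactly one H1 per page -->
  <h1>Blue Widgets for Home Offices</h1>
</body>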
4. Find & Fix Broken Links (404s)
Broken links are bound to happen, and checking for them is one of those ongoing tasks you need to perform on a weekly/monthly basis (depending on the website size and your workload). While broken links don’t directly hurt your SEO, they provide a poor user experience, which in turn might drive away confused or annoyed users who can’t find what they want, or who might even think your website is broken or untrustworthy. User experience is key when it comes to SEO (and pretty much all matters digital), and God knows how much weight Google puts on UX, so you should too!
How to find and fix broken links with Screaming Frog SEO Spider:
Commence a normal crawl with Screaming Frog, then select the “Response Codes” tab from the top menu, and from the dropdown select “Client Error (4xx)”. The results will be filtered so that only pages returning a 4xx error are listed. From there, export the list of pages and prioritize fixing them.
5. Create an XML Sitemap
Using Screaming Frog to create an XML sitemap is not, in all honesty, the fastest or most efficient method, but you do have the ability to do so. Most modern CMSs have their own tools to generate XML sitemaps, which you can then submit to Google and other search engines for indexing, and there are numerous plugins that can do the same thing quickly and easily (e.g. Yoast for WordPress).
That said, Screaming Frog is a comprehensive tool that offers a wide range of options. To create an XML sitemap with Screaming Frog, commence a site audit, and once it’s completed, go to “Sitemaps” > “Create XML Sitemap”. Save the sitemap to your local drive and then proceed to submit it to Google Search Console and any other search engine you want.
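The generated file follows the standard sitemap protocol; a minimal example (with placeholder URLs and dates) looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/example-post/</loc>
    <lastmod>2020-01-10</lastmod>
  </url>
</urlset>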
6. Audit Backlinks
As you know, backlinks have always been one of the most important ranking factors, and this doesn’t seem to be changing any time soon. As such, it’s important to monitor the status of your hard-earned links. You can always do that manually, by visiting each backlink and checking whether it’s still there, but that’s rather laborious and tedious.
Screaming Frog SEO Spider features a handy bulk backlink checker. The first step is to save the referring URLs in a .txt or .csv file, including the full URL string. Then go to “Configuration” > “Custom” > “Search”, select “Does Not Contain” and enter your URL:
Now click “Mode” > “List” and select “Upload” > “From a File” and upload the aforementioned list (.txt or .CSV):
Click “OK” and commence the crawl. Ideally, you want a blank list with no entries, meaning that all target pages still contain your URL. If Screaming Frog returns addresses that do not contain your URL, you can reach out to those websites, inquire as to why your link was removed, and try to have it reinstated.
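For clarity, the upload file is simply one full referring URL per line; for example (hypothetical pages that link to you):

https://www.referring-site-one.com/resources/
https://www.another-blog.com/best-seo-tools/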
7. Identify Trailing Slash Issues
Slashes (“/”) are the forward-leaning bars you see in URLs, and they denote the folder/directory structure of the website. For example, gmkdigital.com/blog/post-name indicates that we are reading the “post-name” file, which is located in the /blog directory, which in turn is a subdirectory of the main domain “gmkdigital.com”. Slashes also provide a type of navigation: a more experienced user can easily pinpoint where they “are” within a website’s structure by looking at the URL string (and sometimes even navigate back and forth by editing it).
What are trailing slashes?
Trailing slashes are forward slashes appended right at the end of a URL string, e.g. www.gmkdigital.com/. Historically, trailing slashes were introduced to indicate that the URL is a directory/folder and not a file. But with the prevalence of modern CMSs like WordPress, Magento, Drupal, etc., things got more streamlined, and there was no longer much need to do anything like that manually, as the content management system would take care of everything automatically.
Should you be using trailing slashes or not?
It doesn’t matter! You can go with whichever version you prefer; neither will negatively affect your rankings or your SEO. Google treats both versions equally, so do whatever your heart desires. That being said, several webmasters recommend the trailing-slash version, as it allegedly helps with page speed (I haven’t tested that myself, so I can’t attest to its credibility). The only rule is that you have to be consistent: either use trailing slashes across the whole domain or don’t.
Note: your root domain is excluded from this rule, as the trailing-slash and non-trailing-slash versions are the same, and servers, crawlers, and browsers know how to resolve the correct address.
How to find trailing slash issues
Until now, everything sounds dandy, so why are we discussing how to “fix trailing slash issues”? Well, even though Google (and most search engines) treat the slash and non-slash versions equally, they also treat them separately. This means that, in the eyes of Google, the following two URLs are two different entities:
- www.gmkdigital.com/blog/example-post
- www.gmkdigital.com/blog/example-post/
In 99% of cases, though, these pages are the same, with the exact same content, and both end up accessible and indexed due to some kind of error during a page launch, a website redesign, a domain migration, etc.
What this means in practical terms is that if you have several pages whose trailing-slash and non-trailing-slash URLs are both indexed and return a 200 response code, that counts as duplicate content, and Google won’t know which version to focus on and rank.
To find out whether your website has both URL versions indexed, fire up the old Screaming Frog and perform a normal scan, then sort the URLs by name and check for duplicates. If there are duplicates and both return a 200 response code, that could be an issue: check whether there is a canonical tag on one page and the corresponding tag on the other. If not, you have a problem. If either of them returns a 301 redirect pointing to the other page, you’re good.
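For reference, a proper canonical setup means both URL versions declare the same preferred URL (the address below just reuses the article’s earlier example), so Google consolidates signals onto one version:

<!-- Placed in the <head> of BOTH /blog/example-post and /blog/example-post/ -->
<link rel="canonical" href="https://www.gmkdigital.com/blog/example-post/">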
How to fix trailing slash issues and force redirects
To fix the trailing slash issue, you need to “force redirects” so that all pages redirect to the same URL version. To do that, edit the .htaccess file on your server and add the following lines:
# Enforce a no-trailing-slash policy (apply only ONE of the two policies, not both)
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [R=301,L]

# Enforce a trailing-slash policy
RewriteEngine on
RewriteRule ^([^.]*[^/])$ $1/ [R=301,L,NE]
Now, some may say that implementing canonical tags is enough to signal to Google which page is the master and which contains the duplicate content. While that holds some water, it’s more of a bandage than the best solution. If you have the option, I would suggest opting for the redirect route.
// Additional resources:
Best Practices for Speeding Up Your Web Site – Yahoo developer network
Google On Trailing Slashes & How It Impacts SEO & Search Rankings
To slash or not to slash – Google Webmasters Blog
Subdirectories and subdomains – Matt Cutts
—
Screaming Frog SEO Spider Alternatives
As promised at the beginning of this guide, here are some excellent alternatives to Screaming Frog that, more or less, provide the same functionality. I have used some of them (not all), and even though they are excellent, I still prefer Screaming Frog (at this point, probably because I’m used to it and have never felt there was something specific I couldn’t do with it).
Xenu Link Sleuth
BeamUsUp
Sitebulb
DeepCrawl
Scrutiny
Netpeak Spider
Spotibo