When it comes to enterprise-level websites, relatively small issues can quickly be replicated across millions of URLs and have a severe impact on organic performance. This was the case for TheKnowledgeAcademy.com (TKA) when we identified an issue with the way their site was displaying key pieces of content and information.
The Knowledge Academy provides training solutions and courses to thousands of businesses and professionals around the world. The website lists tens of thousands of courses across multiple countries and, as such, has over 100,000 URLs indexed in the UK alone.
Before we found the issue described below, the site was already ranking well for thousands of keywords thanks to the high-quality links built through regular Digital PR campaigns, effective onsite optimisation and useful content. However, something still seemed to be holding the site back, so we decided to undertake another thorough technical audit to see if we could find any potential issues.
Whilst carrying out technical audits, we try to take advantage of all the data and tools available. This means making use of not only third-party marketing tools, but also the powerful tools Google makes available to webmasters through the Google Search Console platform. In this instance, the (now deprecated) Fetch and Render tool was used to find an issue with the site's robots.txt setup which meant Google was potentially missing valuable content and context across hundreds of thousands of URLs.
By experimenting with the tool across the various page types published by TKA (course lists, course pages, training location pages etc.), we found that a handful of Disallow rules in the robots.txt file meant Google was missing vital information about the dates and prices of every course on offer. Not only did this mean less unique, up-to-date content from each page was being indexed, but it could also signal to Google that all of the pages involved lacked important information that key competitors made readily available.
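We can't share the client's actual file, but the pattern looked broadly like the sketch below: broad Disallow rules aimed at internal endpoints that also happened to cover the resources which supplied course dates and prices to the rendered page. The paths shown here are hypothetical and purely for illustration.

```
# Hypothetical robots.txt sketch - example paths only, not TKA's real rules
User-agent: *
Disallow: /ajax/
Disallow: /api/course-dates/
Disallow: /api/pricing/
```

Because rules like these covered the paths the pages relied on for their date and price information, Google's crawler and renderer never got to see that content.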
We worked with the development team to update the robots.txt file so that Google could now find, crawl and index all this date and pricing information.
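A quick way to sanity-check a change like this, both before and after deployment, is to run candidate URLs through a robots.txt parser. The sketch below uses Python's standard-library urllib.robotparser; the rule sets and the pricing URL are again hypothetical stand-ins rather than TKA's real configuration.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rule sets - illustrative paths only, not TKA's real file.
OLD_RULES = """
User-agent: *
Disallow: /ajax/
Disallow: /api/course-dates/
Disallow: /api/pricing/
"""

NEW_RULES = """
User-agent: *
Disallow: /ajax/tracking/
"""

def googlebot_can_fetch(robots_txt: str, url: str) -> bool:
    """Return True if the given robots.txt rules allow Googlebot to fetch the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch("Googlebot", url)

# Hypothetical endpoint that supplies course dates and prices to the page.
url = "https://www.theknowledgeacademy.com/api/pricing/example-course"

print(googlebot_can_fetch(OLD_RULES, url))  # False - dates and prices hidden from Google
print(googlebot_can_fetch(NEW_RULES, url))  # True  - content can now be crawled and indexed
```

Checks like this, run against a sample of URLs from each page template, make it easy to confirm the new rules behave as intended before Google re-crawls the site.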
Considering that this issue affected so many URLs, we knew results would take time. We had to allow time for Google to re-crawl thousands of URLs, find the new content and information on each page, and take the updates into account when ranking the site across thousands of keywords.
The first sign that our updates were having a positive effect came in the number of indexed URLs reported in Google Search Console. The months following the update saw tens of thousands of new URLs indexed, suggesting Google was viewing more of the pages it crawled as relevant and useful. On one occasion, we saw an additional 20,000 pages indexed overnight.
Next, we saw dramatic increases in organic traffic and broad ranking improvements across the site. The new information available to Google and its algorithms signalled that the pages were more valuable, more useful and offered a better user experience. The fix was implemented in December 2018 and organic traffic has increased consistently since then. To date, the site receives almost 50,000 organic users each month.