Find Resources Bigger Than 15 MB For Better Googlebot Crawling

Googlebot is an automated and always-on web crawling system that keeps Google’s index refreshed.

Google’s index is estimated to contain more than 62 billion web pages.

Google’s search index is “well over 100,000,000 gigabytes in size.”

Googlebot and its variants (smartphone, news, images, and so on) have certain constraints on the frequency of JavaScript rendering and the size of resources.

Google uses crawling constraints to protect its own crawling resources and systems.

For instance, if a news website refreshes its recommended articles every 15 seconds, Googlebot might start to skip the frequently refreshed sections, since they won’t be relevant or valid after 15 seconds.

Years ago, Google announced that it does not crawl or use resources bigger than 15 MB.

On June 28, 2022, Google republished this blog post, stating that it does not use the part of a resource beyond 15 MB for crawling.

To emphasize that this rarely happens, Google stated that the “median size of an HTML file is 500 times smaller” than 15 MB, or roughly 30 KB.

Screenshot from the author, August 2022

The screenshot above shows the median desktop and mobile HTML file sizes. Thus, most websites should not run into the 15 MB crawling constraint.

However, the web is a big and chaotic place.

Understanding the nature of the 15 MB crawling limit, and the ways to analyze it, is important for SEOs.

An image, video, or bug can cause crawling problems, and this lesser-known SEO knowledge can help projects protect their organic search value.


Is The 15 MB Googlebot Crawling Limit Only For HTML Documents?


No. The 15 MB Googlebot crawling limit applies to all indexable and crawlable documents, including Google Earth files, Hancom Hanword (.hwp), OpenOffice text (.odt), and Rich Text Format (.rtf), as well as other Googlebot-supported file types.

Are Image And Video Sizes Summed With The HTML Document?

No, each resource is evaluated individually against the 15 MB crawling limit.

If the HTML document is 14.99 MB, and the featured image of the HTML document is also 14.99 MB, both will be crawled and used by Googlebot.

The HTML document’s size is not summed with the resources that are linked via HTML tags.

Does Inlined CSS, JS, Or A Data URI Bloat The HTML Document Size?

Yes, inlined CSS, JS, and Data URIs count toward the HTML document size.

Thus, if the document exceeds 15 MB due to inlined resources and directives, it will affect the crawlability of the HTML document itself.
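As a minimal sketch of this effect, the snippet below (using a hypothetical 1 MB image payload) compares the byte size of an HTML document that references an image externally with one that inlines it as a base64 Data URI:

```python
import base64

# Hypothetical 1 MB image payload, a stand-in for a real image file.
image_bytes = b"\x00" * 1024 * 1024

# Referencing the image externally keeps the HTML document small;
# the image is a separate resource with its own 15 MB budget.
external_html = '<html><body><img src="/hero.jpg"></body></html>'

# Inlining the same image as a Data URI adds its base64 form
# (about 4/3 of the raw bytes) directly to the HTML document's size.
data_uri = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode()
inlined_html = f'<html><body><img src="{data_uri}"></body></html>'

print(len(external_html.encode()))  # a few dozen bytes
print(len(inlined_html.encode()))   # over 1.3 MB, all counted as HTML
```

Ten such inlined images would already push the document past the 15 MB limit, even though the same images linked externally would not.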

Does Google Stop Crawling A Resource If It Is Bigger Than 15 MB?

No, Google’s crawling systems do not stop crawling resources that are bigger than the 15 MB limit.

They continue to fetch the file, but use only the part below 15 MB.

For an image bigger than 15 MB, Googlebot can fetch the image in chunks up to 15 MB with the help of “Content-Range.”

Content-Range is a response header that helps Googlebot, or other crawlers and requesters, perform partial requests.
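The mechanics of a partial request can be sketched end-to-end with Python’s standard library. The local server and payload below are illustrative stand-ins: a client sends a Range header, and a server that supports ranges answers 206 Partial Content with a Content-Range header describing which bytes were returned.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

PAYLOAD = b"x" * 20_000  # stand-in for a large resource

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Honor "Range: bytes=start-end" with a 206 + Content-Range reply.
        range_header = self.headers.get("Range")
        if range_header and range_header.startswith("bytes="):
            start, _, end = range_header[len("bytes="):].partition("-")
            start, end = int(start), int(end)
            chunk = PAYLOAD[start:end + 1]
            self.send_response(206)
            self.send_header("Content-Range", f"bytes {start}-{end}/{len(PAYLOAD)}")
            self.send_header("Content-Length", str(len(chunk)))
            self.end_headers()
            self.wfile.write(chunk)
        else:
            # No Range header: reply 200 with the full body.
            self.send_response(200)
            self.send_header("Content-Length", str(len(PAYLOAD)))
            self.end_headers()
            self.wfile.write(PAYLOAD)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A partial request for only the first 1,024 bytes, as a crawler could issue it.
req = Request(f"http://127.0.0.1:{server.server_port}/big-image.jpg",
              headers={"Range": "bytes=0-1023"})
with urlopen(req) as resp:
    body = resp.read()
    print(resp.status, resp.headers["Content-Range"], len(body))

server.shutdown()
```

By repeating such requests with successive ranges, a crawler can fetch a file in chunks up to whatever budget it allows per resource.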

How To Audit Resource Size Manually?

You can use Google Chrome Developer Tools to audit resource size manually.

Follow the steps below in Google Chrome.

  • Open a web page document via Google Chrome.
  • Press F12.
  • Go to the Network tab.
  • Refresh the web page.
  • Order the resources according to the Waterfall.
  • Check the Size column on the first row, which shows the HTML document’s size.

Below, you can see an example of a homepage HTML document, which is bigger than 77 KB.

Search Engine Journal homepage HTML results. Screenshot by author, August 2022
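If you prefer a terminal over DevTools, the same one-off check can be sketched with Python’s standard library (example.com is a stand-in URL; the reported size is roughly what the DevTools Size column shows for the first row):

```python
from urllib.request import urlopen

# Fetch a single HTML document and measure its size in bytes.
with urlopen("https://example.com/") as response:
    html_bytes = response.read()

size_kb = len(html_bytes) / 1024
print(f"{size_kb:.1f} KB")
```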

How To Audit Resource Sizes Automatically And In Bulk?

Use Python to audit HTML document sizes automatically and in bulk. Advertools and Pandas are two useful Python libraries for automating and scaling SEO tasks.

Follow the instructions below.

  • Import Advertools and Pandas.
  • Collect all the URLs in the sitemap.
  • Crawl all the URLs in the sitemap.
  • Filter the URLs by their HTML size.
import advertools as adv

import pandas as pd

# The sitemap URL was truncated in the original; replace it with your own.
df = adv.sitemap_to_df("https://example.com/sitemap.xml")

adv.crawl(df["loc"], output_file="output.jl", custom_settings={"LOG_FILE": "output_1.log"})

df = pd.read_json("output.jl", lines=True)

df[["url", "size"]].sort_values(by="size", ascending=False)

The code block above extracts the sitemap URLs and crawls them.

The last line of the code only creates a data frame sorted in descending order by size.

URLs and sizes. Picture created by author, August 2022

You can see the sizes of the HTML documents as above.

The biggest HTML document in this example is around 700 KB, which is a category page.

So, this website is safe with respect to the 15 MB constraint. But we can check beyond this.
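Going one step further than sorting, the crawl output can be filtered against the 15 MB threshold directly. The sketch below uses a small stand-in data frame; in practice, the frame would come from the `pd.read_json("output.jl", lines=True)` call above.

```python
import pandas as pd

LIMIT = 15 * 1024 * 1024  # Googlebot's 15 MB limit, in bytes

# Stand-in crawl output; real data comes from pd.read_json("output.jl", lines=True).
df = pd.DataFrame({
    "url": [
        "https://example.com/",
        "https://example.com/category",
        "https://example.com/huge",
    ],
    "size": [120_000, 700_000, 16_000_000],
})

# Keep only documents that exceed Googlebot's crawling limit.
oversized = df[df["size"] > LIMIT]
print(oversized["url"].tolist())  # ['https://example.com/huge']
```

An empty result means every crawled HTML document is within the limit.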

How To Check The Sizes Of CSS And JS Resources?

Puppeteer is used to check the sizes of CSS and JS resources.

Puppeteer is a NodeJS package for controlling Google Chrome in headless mode for browser automation and website tests.

Most SEO pros use Lighthouse or the PageSpeed Insights API for their performance tests. But, with the help of Puppeteer, every technical aspect and simulation can be analyzed.

Follow the code block below.

const puppeteer = require("puppeteer");

const XLSX = require("xlsx");

const path = require("path");

(async () => {
    // The crawled URL was truncated in the original; replace it with your own.
    const url = "https://www.example.com";
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(url);

    // Resource timing entries carry transferSize, encodedBodySize, and decodedBodySize.
    const perfEntries = JSON.parse(
        await page.evaluate(() => JSON.stringify(performance.getEntries()))
    );
    await browser.close();

    const domainName = url.replace("https://www.", "").replace(".com", "") + ".xlsx";
    const workSheetName = "Users";
    const filePath = `./${domainName}`;
    const userList = perfEntries;

    const exportPerfToExcel = (userList) => {
        const data = userList.map((entry) => {
            return [entry.name, entry.transferSize, entry.encodedBodySize, entry.decodedBodySize];
        });

        const workBook = XLSX.utils.book_new();
        const workSheetData = [
            ["name", "transferSize", "encodedBodySize", "decodedBodySize"],
            ...data,
        ];
        const workSheet = XLSX.utils.aoa_to_sheet(workSheetData);
        XLSX.utils.book_append_sheet(workBook, workSheet, workSheetName);
        XLSX.writeFile(workBook, path.resolve(filePath));
        return true;
    };

    exportPerfToExcel(userList);
})();

If you don’t know JavaScript or haven’t finished any kind of Puppeteer tutorial, these code blocks might be a little harder to understand. But it is actually easy.

It basically opens a URL, collects all of the resources, and records their “transferSize,” “encodedBodySize,” and “decodedBodySize.”

In this example, “decodedBodySize” is the size we need to focus on. Below, you can see the result in the form of an XLSX file.

Resource sizes. Byte sizes of the resources from the website.

If you want to automate these processes for every URL, you will need to use a for loop around the navigation step.

Depending on your preferences, you can put every web page into a different worksheet, or append them all to the same worksheet.


The 15 MB Googlebot crawling constraint is a rare possibility that could block your technical SEO processes for now, but median video, image, and JavaScript sizes have increased in the last few years.

The median image size on desktop has exceeded 1 MB.

Timeseries of image bytes. Screenshot by author, August 2022

Video bytes exceed 5 MB in total.

Timeseries of video bytes. Screenshot by author, August 2022

In other words, from time to time, these resources, or some parts of these resources, might be skipped by Googlebot.

Thus, you should be able to audit them automatically, with bulk methods, to save time and not skip anything.
