28 min read
Aug 29, 2022

As an SEO specialist, you don’t really need to tap into all of the intricacies of website development. But you do need to know the basics, since the way a website is coded has a great impact on its performance and, therefore, its SEO potential. In the post on HTML tags, we’ve gone through the HTML basics you need to understand to do website SEO efficiently. This time, I invite you to dig into the other coding languages developers use to make a website look good and interactive.

The languages are CSS and JavaScript, and in this post, I will introduce you to each of the two. Then, we’ll go through the major CSS and JS-related errors you can face when auditing your website. I’ll explain why each of the errors matters and how they can be fixed. The fixing part is something you’ll probably assign to your website developers. It’s just that this time you’ll be speaking the same language because after reading this guide you’ll know what external CSS is and why JS files returning a 302 response code is a problem. 

What is CSS

CSS stands for Cascading Style Sheets, and as the name suggests, it lets you create websites with style. CSS is always used alongside HTML. It’s the wrapping paper that gives a gift box its merry look. A plain HTML web page would look like this, minus the set width and left-side alignment.

Page with minimum CSS

The thing is, today CSS is used on virtually every website, even on a rather dull-looking page from the RFC series of technical notes on how the Internet works.

The HTML markup sets the structure of a web page and defines its elements in a way Google can understand. CSS, in turn, styles the website header, footer, and navigation, making them visually appealing and user-friendly. With CSS, you can do a bunch of cool things:

  • Set the color, font, and size of the text
  • Define spacing between elements
  • Control the way the elements are laid out on the page
  • Add background images or background colors

CSS can be implemented in 3 ways (all three are sketched right after the list):

  • Inline. This is when the style attributes are added individually to every HTML element you want to style. This method is rarely used as it is too time-consuming.
  • Internally. To set styles for the whole web page, the <style> element is added to the <head> section of the page. This method is used when you need to give some landing pages a unique look. 
  • Externally. The most common way of implementing CSS is via an external stylesheet in the .css format. The file is linked to from the <head> section of the page. This method is the most popular one because it allows you to define the style of a whole website using a single document. The common practice, though, is to use separate CSS files for different types of pages (e.g. category pages, blog, about us, etc.) as it improves the page loading speed.
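
Here’s a minimal sketch of what the three approaches look like in practice (the file path and style rules are purely illustrative):

<!-- Inline: the style attribute is added to an individual element -->
<h1 style="color: #333; font-size: 32px;">Page title</h1>

<!-- Internal: a <style> block in the <head> applies to the whole page -->
<style>
  h1 { color: #333; font-size: 32px; }
</style>

<!-- External: the page links to a separate .css file from the <head> -->
<link rel="stylesheet" href="/css/styles.css">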

How Google handles CSS

In the past, Google didn’t care too much about CSS and only interpreted the HTML markup of the page. This all changed with the mobile-friendly update back in 2015. In response to the growing popularity of mobile search, Google decided to reward websites that offered a seamless user experience on mobile devices. And to make sure a page is mobile-friendly, Google had to render it the way browsers do, which meant loading and interpreting CSS and JavaScript files.

The Page Layout algorithm also relies on CSS. It is meant to determine whether users can easily find the content on the page, and CSS helps Google understand how the page is laid out both on desktop and mobile, and where exactly within the page every piece of content resides: front and center, in the sidebar, or at the bottom of the page way below the fold.

What is JavaScript 

While CSS adds style to a website, JavaScript adds dynamics. It brings content to life by updating webpage elements in real time in response to a user’s action. Pop-up forms, interactive maps, animated graphics, and websites with continually updated content (e.g. weather forecasts, exchange rates) are all examples of JavaScript at work.

Here at SE Ranking, we use JavaScript as well. Many of the cool effects you see on the SE Ranking platform are also powered by JS. Perhaps you remember our 2019 Christmas Wishes landing page with falling snowflakes that created the holiday mood. Our developers used JavaScript to choreograph the snowflake dance. Besides, we have recently updated our Page Changes Monitoring landing page that now features a JavaScript-powered animation designed to explain how the tool works. 

Just to give you a better feel for how JavaScript transforms websites, let me show you the same section of the landing page with JavaScript disabled.

Page Changes Monitoring landing page with JS disabled

At the same time, in spite of its powerful capabilities, JavaScript is not as ubiquitous as HTML and CSS when it comes to website development. And one of the reasons for that is SEO concerns.

  • If not implemented properly, JavaScript can lead to a part of your content not getting indexed. Google needs to render JS files to see your page the way users do, and if it fails to do so, it will index your page without the JS-powered elements. This, in turn, can hurt your rankings if the non-indexed content is crucial for fulfilling user intent.
  • Page sections injected with JavaScript may contain internal links. Again, if Google fails to render the JavaScript, it won’t follow those links, so whole pages may not get indexed unless they are linked to from other pages or the sitemap. Besides, with JavaScript there’s a chance you’ll code your links in a way Google can’t understand, and as a result, it won’t follow them.
  • JavaScript files are rather heavy, and adding them to a page can significantly slow down its loading speed. In turn, this can lead to higher bounce rates and lower rankings.

Surely, all these issues can be avoided as long as you understand how Google renders JavaScript and how to make your JS code search-friendly.

How Google handles JavaScript

Over the past six years, Google has leaped a hundredfold in its ability to crawl and index JavaScript. But the process is still rather complicated, and many things can go wrong. 

With regular HTML pages, it is all plain and simple. Googlebot crawls a page and parses its content, including internal links. Then the content gets indexed while the discovered URLs are added to the crawl queue, and the process starts all over again. 

When JavaScript is added to the equation, the process gets a bit more cumbersome. To see what’s hidden within the JavaScript, which normally looks like a single link to the JS file, Googlebot needs to first parse, compile, and execute it. This is called JS rendering, and only after this stage is complete can Google see all the content of a page in HTML tags—the language it can understand. At this point, Google can proceed with indexing the JS-powered elements and adding the URLs hidden within the JS to the crawl queue.

How Googlebot processes JavaScript

Now, are these complications something you should worry about SEO-wise? Just one year ago they were.

JavaScript rendering is very resource-demanding and costly, and until recently Google wouldn’t immediately render JavaScript. It would first index the readily-available plain HTML parts of the page and then during the so-called second wave of indexing Google would get the JavaScript processed. Back in 2018, John Mueller claimed that it took a few days to a few weeks for the page to get rendered. Therefore, websites that heavily relied on JavaScript could not expect to have their pages indexed fast. Besides, they could have issues with new pages making their way into the crawl queue because Google couldn’t follow internal links immediately.

In one of the JavaScript SEO office hours, Martin Splitt reassured webmasters that the rendering queue is now moving much faster and a page is normally rendered within minutes or even seconds. Nevertheless, coding JavaScript in a search-friendly manner is still rather tricky, and the weekly JavaScript SEO office hours sessions clearly prove this. The cases users share demonstrate that things can go terribly wrong if JavaScript is not coded properly. Let me illustrate my point with a few common issues.

Content revealed on click/scroll

JavaScript is often used to implement an infinite scroll or to hide some content for the sake of UX and let users reveal additional info by clicking a button.

The problem here is that Googlebot doesn’t click or scroll the way users do in their browsers.

It surely has its own workarounds, but they may or may not work depending on the technology you use. And if you implement things in a way Google can’t figure out, you’ll end up with a part of your JS-powered content not getting indexed.

Google can see and index hidden content as long as it appears in the DOM—the in-memory representation of the page that the browser builds from the source HTML code before rendering it. At this stage, JavaScript can be used to modify the content.

Now, let’s say your initial HTML contains the page’s content in full, and then you use CSS properties to hide some parts of the content and JS to let users reveal these hidden parts. In this case, you’re all good as the content is still there within the HTML code and only hidden from users—Google can still see what the CSS hides.

If, on the other hand, your initial HTML does not contain some content pieces, and they only get loaded into the DOM by click-triggered JavaScript, Google won’t see this kind of content because Googlebot can’t click. This problem can be solved, however, by implementing server-side rendering—this is when JS is executed on the server and Google gets the ready-made final HTML code. Dynamic rendering can also be a way out.
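
To make the difference more tangible, here’s a rough sketch of both scenarios (the element names and the API URL are made up for illustration):

<!-- Indexable: the text is already in the initial HTML, CSS merely hides it, -->
<!-- and JS only toggles its visibility, so Google still finds it in the DOM -->
<div id="details" style="display: none;">Full product details</div>
<button onclick="document.getElementById('details').style.display = 'block'">
  Show details
</button>

<!-- Risky: the text only enters the DOM after a click-triggered request, -->
<!-- which Googlebot will never make, so this content stays invisible to it -->
<div id="details"></div>
<button onclick="fetch('/api/details').then(r => r.text()).then(html => {
  document.getElementById('details').innerHTML = html;
})">Show details</button>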

JavaScript Rendering

For infinite scroll, avoid relying on scroll events, as they require Googlebot to actually scroll the page to trigger the JavaScript code—something Google cannot handle. Instead, you can implement infinite scroll and lazy loading using the Intersection Observer API (see the sketch below) or enable paginated loading alongside infinite scroll.
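
Here’s a minimal lazy-loading sketch built on the Intersection Observer API (the image path and class name are illustrative):

<img class="lazy" data-src="/images/photo.jpg" alt="Product photo">

<script>
  // Load the real image only when it approaches the viewport. No scroll events
  // are involved, so the same code works for Googlebot, which doesn't scroll.
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src; // swap in the real source
        obs.unobserve(entry.target);                 // stop watching once loaded
      }
    });
  });
  document.querySelectorAll('img.lazy').forEach(img => observer.observe(img));
</script>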

The way you code your JS links matters

Links allow Google to understand the site structure and discover content across the website. What you need to remember when implementing JavaScript is that Google will only follow JS-injected links if they are coded properly. An HTML <a> tag with an href attribute pointing to a proper URL is the golden rule you need to follow. As long as the tag is there, you’re all good even if you add some JS to the link code.

<a href="/page" onclick="goTo('page')">your anchor text</a>

Meanwhile, all other variations like links with a missing <a> tag or href attribute or without a proper URL will work for users but Google won’t be able to follow them.

<a onclick="goTo('page')">your anchor text</a>
<span onclick="goTo('page')">your anchor text</span>
<a href="javascript:goTo('page')">your anchor text</a>
<a href="#">no link</a>

Surely, if you use a sitemap, Google should still discover your website pages even if it can’t follow the internal links leading to these pages. However, the search engine still prefers links over a sitemap, as they help it understand your website structure and the way pages are related to each other. Besides, internal linking allows you to spread link juice across your website.

Testing JavaScript code

As you can see, the way JavaScript is coded can make or break your website. I won’t go into every possible case where JavaScript can impede Google from crawling and indexing your website.

Let me just share a small piece of advice with you: Always test your code using Google’s tools such as the Mobile-Friendly Test or the URL Inspection Tool in your Google Search Console to see how Google renders your pages. 

Preferably, testing should be done at the early development stage when things are easier to fix. While the URL Inspection Tool can only be used after the feature is live on the website, the Mobile-Friendly Test can help you catch bugs early on—just ask your developers to expose their localhost server at a public URL using special tools (e.g. ngrok).

Another option would be to use Chrome DevTools, which also includes Lighthouse, for debugging. The tool is built into the Chrome browser—press Command+Option+J (Mac) or Control+Shift+J (Windows, Linux, Chrome OS) to open it.

Debugging JavaScript in Chrome DevTools

Here in the Sources tab, you can find your JS files and inspect the code they inject. Then you can pause JS execution at a point where you believe something went wrong using one of the Event Listener Breakpoints and further inspect the piece of code. Once you believe you’ve detected a bug, you can then edit the code live to see in real time if the solution you came up with fixes the issue. 

The great thing about Chrome DevTools is that all the changes are applied in your own browser and don’t affect other users. They’ll be gone once you hit the Refresh button. The tool can be used to debug any code error, not just those related to JavaScript. So, should you have any issues with your website’s CSS, the tool will come in handy as well.

Common CSS and JavaScript errors

Now that you know what CSS and JavaScript are and the way Google interprets them, you may be wondering what the two have in common. And the answer is they are both resources that are stored separately as files and linked to the page from the <head> section. 

CSS and JS files in the head section

Browsers and Google need to fetch these resources to fully render the content of the page. Sometimes, Google and browsers—or just Google—fail to load the files, and the reasons for this happening are common both for CSS and JavaScript. 

In less drastic cases, Google and browsers can fetch the files, but they load too slowly, which negatively affects the user experience and can also slow down website indexing. 

An easy way to discover all CSS and JS-related errors on your site is to launch a website audit. If you are an SE Ranking user, you’ll need to check several sections of the Website Audit’s Issue Report: JavaScript, CSS and HTTP Status Codes.

JavaScript and CSS issues
JavaScript and CSS HTTP errors

SE Ranking not only lists all the errors detected on your site but also provides tips on how to fix them.

It points to the exact files that caused an error so that you know which files to adjust to make things right. A single click on the column showing the number of pages where the error occurs opens a complete list of the files’ URLs.

JavaScript files with issues

Test the tool on your own by launching a comprehensive website audit.


And if you’re up for studying every error type in greater detail, keep on reading.

Google can’t crawl CSS and JS files

To fetch your CSS and JavaScript files, Googlebot needs to have permission to do so. In the past, it was common practice to block Google from accessing these files with the robots.txt file since Google wouldn’t use them anyway. Now that the search giant relies on CSS and JavaScript to understand website content, it encourages webmasters to “allow all site assets that would significantly affect page rendering to be crawled: for example, CSS and JavaScript files that affect the understanding of the pages.” If you don’t unblock the files, Google won’t be able to render them and index JavaScript-powered content. 
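
For instance, blocking rules like these in robots.txt are exactly what prevents Google from rendering your pages properly (the folder names are illustrative):

User-agent: *
Disallow: /assets/css/
Disallow: /assets/js/

Removing such Disallow lines, or explicitly allowing the resource files as sketched below, keeps your CSS and JS crawlable:

User-agent: *
Allow: /*.css$
Allow: /*.js$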

Blocking CSS and JavaScript files isn’t a negative ranking factor per se, but with mobile-first indexing and the ranking boost websites get for being mobile-friendly, it’s better to let Google access your CSS. Speaking of JavaScript, if it is only used for embellishment or if for some reason you don’t want to have the JS-injected content indexed, you can keep JavaScript files blocked. In all other cases, let Google render your files.

Another thing that can prevent Google from rendering your JavaScript is the noindex directive in the robots meta tag. If Google encounters the directive before running JavaScript, it won’t render the page. For this reason, using JS to change or remove the robots meta tag usually doesn’t work because Google won’t execute the JavaScript in the first place.
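
Here’s a rough sketch of that trap (the selector is illustrative): since Google sees the noindex directive in the initial HTML, it may skip rendering altogether, so the script never gets a chance to remove the tag.

<meta name="robots" content="noindex">
<script>
  // This line is meant to lift the noindex directive, but Google won't run it
  document.querySelector('meta[name="robots"]').remove();
</script>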

Google and browsers can’t load CSS and JS files

After reading the robots.txt file to check if you allow crawling, Googlebot makes an HTTP request to access your CSS and JavaScript URLs. For it to proceed with rendering the files, it should get the 200 OK response code. Sometimes though, other status codes like 4XX or 5XX are returned. 

CSS and JavaScript with 4XX status

4XX response codes mean that the requested resource does not exist. Speaking of CSS and JavaScript files, it means that Googlebot followed the URLs indicated in the <head> section of the page, but did not find your files at the designated locations. When a page returns a 4XX, it usually means that the page was deleted. With CSS and JavaScript, the error often occurs because the path to the file is not indicated correctly. It may also be a permission issue.
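
For example, a simple typo in the file path is enough to produce a 404 (both paths are illustrative):

<link rel="stylesheet" href="/css/stlyes.css">  <!-- typo in the file name: 404 -->
<link rel="stylesheet" href="/css/styles.css">  <!-- correct path: 200 OK -->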

The bad thing about 4XX response codes is that Google is not the only one experiencing issues rendering your CSS and JavaScript files. Browsers won’t be able to execute such files either, which means your website won’t look that great and will lose its interactivity.

You remember from the image in the first section of this post how a page with no CSS applied looks. And if JS is used to load content onto the website (e.g. stock exchange rates on a financial website), all the dynamically rendered content will be missing if the code isn’t running properly.

To fix the error, your developers will have to first figure out what’s causing it, and the reasons will vary depending on the technologies used.

CSS and JavaScript with 5XX status

5XX response codes indicate that there’s a problem on your web server’s end. It means that a browser or Googlebot sends an HTTP request and locates your CSS/JavaScript file, but then your server fails to return it.

In the worst-case scenario, the error occurs because your whole website is down. That happens when your server cannot cope with the amount of traffic. The abrupt increase in traffic may be natural, but in most cases it is caused by aggressive parsing software or a malicious bot flooding your server with the specific purpose of taking it down (a DDoS attack).

The server may also fail to deliver a CSS/JavaScript file within the set time span, causing a 504 timeout error. This can happen if the file bundle is too big or if a user has a slow Internet connection.

To prevent this, you can configure your web server to be less impatient by increasing its timeout. But making the server wait for too long is not recommended either.

The thing is, loading a huge JS bundle takes a lot of server resources, and if all your server resources are busy loading the file, the server won’t be able to fulfill other requests. As a result, your whole website is put on hold until the file loads.

CSS and JavaScript files are not loading fast enough

In this section, let’s dig deeper into the issues that make your CSS and JavaScript files load for ages or just a bit longer than desired. Even if both browsers and Googlebot manage to load and render your CSS and JS files, you should still be concerned if it takes them a while to do so.

The faster a browser can load page resources, the better experience users get, and if the files are loading slowly, users have to wait for a while to have the page rendered in their browser. 

CSS and JavaScript with 3XX status

Similarly to 4XX response codes, a 3XX status code means you’re not using the proper URL to tell Googlebot and browsers where your file resides. It’s just that in this case you’re not using a wrong address, but rather an old one—a 3XX status code indicates that you’ve moved your CSS/JS file to a different address but failed to update the URL in the website code.

Googlebot and browsers will still fetch the files since your server will redirect them to the proper address—it’s just that they’ll have to make an additional HTTP request to reach the destination URL, and that’s no good for the loading time. The performance impact won’t be drastic if we’re talking about a single URL or a couple of files, but at a larger scale redirects can significantly slow down the page loading time.

The solution here is evident—you simply need to replace every old CSS and JavaScript URL in the website code with up-to-date destination URLs.
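
Here’s what that looks like in practice (the URLs are illustrative):

<!-- The old URL still works, but only through an extra redirect hop -->
<script src="/js/app-old.js"></script>  <!-- returns 301, redirecting to /js/app.js -->

<!-- Pointing straight at the file's current location removes the extra request -->
<script src="/js/app.js"></script>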

Caching is not enabled 

A great way to minimize the number of HTTP requests to your server is to allow caching in a response header. You’ve surely heard about caching—you often get a suggestion to clear your browser cache when information on a website is not displayed properly.

What caching actually does is save a copy of your website resources when a user visits your site. Then, the next time this user comes to your website, the browser will no longer have to fetch the resources—it will serve the saved copy instead. Caching is essential for website performance as it reduces latency and network load.

The Cache-Control HTTP header is used to specify the caching rules browsers should follow: it indicates whether a particular resource can be cached or not, who can cache it, and how long the cached copy can last. It is highly recommended to allow caching of CSS and JS files: otherwise, browsers have to download these files every time users visit a website, so having them stored in the cache can significantly boost the page loading time.

Here’s an example (in Apache .htaccess syntax) of setting caching for CSS and JS files to one day (86,400 seconds) and public access.

<FilesMatch "\.(css|js)$">
Header set Cache-Control "max-age=86400, public"
</FilesMatch>

It is worth noting, though, that Googlebot normally ignores the Cache-Control HTTP header because following the directives websites set would put too much load on its crawling and rendering infrastructure.

Therefore, whenever you update your CSS or JS files and want Google to take notice, it is recommended to rename the file and serve it from a different URL. That way, Google will refetch the file because it will treat it as a totally new resource it hasn’t encountered before.
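
A common way to do this is to put a version number or content hash right into the file name (the hashes below are made up):

<!-- Before the update -->
<link rel="stylesheet" href="/css/styles.a1b2c3.css">

<!-- After the update, the file is served under a new name, so Google and browsers -->
<!-- treat it as a brand-new resource and fetch the fresh copy -->
<link rel="stylesheet" href="/css/styles.d4e5f6.css">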

The number of files matters

Using multiple CSS and JavaScript files may be convenient from the developer’s perspective, but it is not great performance-wise. Browsers send a separate HTTP request to load every file, and the number of simultaneous network connections a browser can handle is limited. As a result, all the CSS and JS resources of a page will load one by one, decreasing the rendering speed.

For this reason, it is recommended to bundle your CSS and JavaScript files to keep the number of files a browser has to load down to the minimum.

From Google’s perspective, having too many CSS and JavaScript files is not a problem as such, meaning that it will render them anyway. But the more files you have, the more crawl budget you spend on having the JS and CSS files loaded. For huge websites with millions of pages, this may be critical, as it means Google won’t index some pages in a timely manner because it’s wasting the crawl budget on a bazillion JS and CSS files.

File size matters as well 

The problem with bundling CSS and JavaScript files is that as your website grows, new lines of code are added to the files, and eventually they may grow so large that they become a performance issue.

Depending on the way your website is structured, it may be reasonable not to bundle all your CSS and JavaScript files together, but instead to group them into several smaller files, like a separate file for your blog JavaScript, another one for the forum JavaScript, etc.

Another reason for splitting one huge JS/CSS bundle is caching. If you have it all in one file, every time you change something in your JS/CSS code, browsers and Google will have to recache the whole bundle. This is not great both for indexing and for the user experience. 

In terms of indexing, it can go two ways depending on the caching technologies used: you’ll either force Googlebot to constantly recache your JS/CSS bundle, or Google may fail to notice in time that the cache is no longer valid, and you will end up with Google seeing outdated content.

Speaking of user experience, whenever you update some JS code within the bundle, browsers can no longer serve cached copies to any of your users. So even if you only change the JS code for your blog, all your users, including those who never visit your blog, will have to wait for the browser to load the whole JS bundle before accessing any page on your website.

Compressing and minifying CSS and JavaScript

To keep your JavaScript and CSS files light, you’ll want to compress and minify them. Both practices are meant to reduce the size of your website resources by editing the source code, but they are distinctly different.

Compression is the process of replacing repetitive strings within the source code with pointers to the first instance of that string. Since any code has lots of repetitive parts (think of how many times the same tag names and property names appear), and pointers take up less space than the strings they replace, compression allows you to reduce the file size by up to 70%. Browsers cannot read the compressed code as-is, but as long as the browser supports the compression method, it will be able to uncompress the file before rendering.

The great thing about compression is that developers don’t need to do it manually. All of the heavy lifting is done by the server, provided it was configured to compress resources. For example, on Apache servers, a few lines of code are added to the .htaccess file to enable compression.
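
As a rough sketch, enabling gzip compression for CSS and JS on an Apache server with mod_deflate could look something like this:

<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/css application/javascript
</IfModule>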

Minification is the process of removing white space, non-required semicolons, unnecessary lines, and comments from the source code. As a result, you get code that is not quite human-readable but still valid. Browsers can run such code perfectly well, and they’ll even parse and load it faster than raw code. Web developers have to take care of minification on their own, but with plenty of dedicated tools available, it shouldn’t be a problem.
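
To give you an idea of what that means, here’s a tiny before-and-after illustration of a minified CSS rule:

/* Before minification */
.header {
    color: #333;
    margin-top: 16px; /* space below the logo */
}

/* After minification: the same rule, with whitespace and comments stripped */
.header{color:#333;margin-top:16px}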

Speaking of reducing the file size, minification won’t give you the staggering 70%. If you already have compression enabled on your server, further minifying your resources can help you reduce their size by an additional few percent, up to 16%, depending on how your resources are coded. For this reason, some web developers believe minification is obsolete. However, the smaller your CSS and JS files are, the better, so a good practice is to combine both methods.

Using external CSS and JavaScript files

Many websites tend to use external CSS and JavaScript files hosted on third-party domains. Reusing an open-source code that solves your problem perfectly well may seem like a great idea—after all, there’s no point in reinventing the wheel. There’s really nothing wrong with using a ready-made solution as long as it is copied and uploaded to the website’s server. At the same time, using third-party CSS and JS files hosted externally is associated with numerous risks. 

First and foremost, we’re talking about security risks. If a website that hosts the files you use gets hacked, you may end up running malicious code injected into the external JS file. Hackers may steal your users’ private data, including their passwords and credit card details.

Performance-wise, think of all the errors discussed above. If you have no access to the server where the CSS and JS files are hosted, you won’t be able to set up caching, compression, or debug 5XX errors.

If the website that hosts the files removes one of them at some point and you fail to notice it in a timely manner, your website will not work properly, and you won’t be able to quickly replace the 404 JS or CSS file with a valid one.

Finally, if the website hosting the JS or CSS files sets up a 3XX redirect to a (slightly) different file, your webpage may not look and work exactly as expected.

If you do use third-party CSS and JS files, my advice is to keep a close eye on them. Still, a way better solution is not to use external CSS and JS at all.
