The most common technical SEO issues you should avoid
Wise people learn from other people’s SEO mistakes. And believe it or not, this learning process doesn’t have to be complicated.
To make it easier to figure out the top errors made by many sites, we’ve scanned over 40,000 websites with our Website Audit tool. It allows us to detect common SEO problems like broken pages, broken links, duplicate content issues, low page speed, redirect chains, etc. We divided the issues we detected into 10 categories and described them by their frequency of occurrence and severity.
In this article, we’ll discuss technical SEO problems related to:
- Images (missing alt texts in images, too big images)
- Meta tags (missing descriptions and titles, duplicate content in titles and descriptions, too long or too short titles and descriptions)
- Missing internal inbound links
- Headings (missing, duplicate, or multiple H1 tags, missing H2 tags)
- CSS files (not compressed, not minified, or not cached CSS files, too big CSS files)
- 4XX HTTP (4XX HTTP status codes, 4XX images)
- Website speed and performance (LCP and CLS values exceed the limits)
- XML sitemaps (no link to their XML sitemap in the robots.txt file, missing XML sitemap, noindex pages in the XML sitemap, HTTP URLs in the XML sitemap)
- Missing redirects between www and non-www
You’ll find an additional section at the end where we talk about less critical but still frequently occurring technical SEO errors that also deserve your attention. So we strongly encourage you to read to the end.
By the way, you can check any website’s SEO and see how well-optimized it is for both search engines and people using our SEO Analyzer.
What is technical SEO?
Technical SEO refers to optimizing a site’s technical infrastructure and server’s mechanics. This includes ensuring the site’s crawlability and indexability, improving its speed, setting proper redirects, managing error pages and meta tags, etc.
Technical SEO usually doesn’t cover content creation, keyword research, link building, website analytics, and other digital marketing practices.
Now, without further ado, let’s dive into the biggest technical SEO mistakes we discovered during our analysis.
Problems with images
Our list of common website technical problems starts with images. They make a website more aesthetically pleasing and help users perceive the content better. Plain text, without a single image to visualize the information, is unlikely to grab the reader’s attention.
We definitely recommend using them (but carefully) because unoptimized images can lead to website speed issues, affecting your SEO and UX.
Our research has shown two main image issues: missing alt text and oversized image files. Make sure to check your site to see whether you have them, too.
Missing alt text
The most common SEO problem related to images was missing alt text, with 83.87% of websites having this issue.
The alt text (alternative text) is an HTML element designed to explain the meaning of images to search robots and make them more accessible to users. It’s one of the main elements of image optimization that helps search engines like Google better understand what your picture is about and rank it.
The alt attributes for images also come in handy for users when (for whatever reason) the picture doesn’t load up. They’re also useful for people with impaired vision who have trouble perceiving visual information.
How to fix
Run website audits regularly to identify images with missing alt text. If you want to make the most out of your images, add helpful, informative, and contextual alt text to each of them. You can also use keywords but avoid keyword stuffing as it causes a negative experience for your users and signals spam to search bots.
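As a quick illustration (the file name and wording below are made up), descriptive alt text reads naturally and matches the image, while keyword-stuffed alt text does not:

```html
<!-- Good: descriptive, informative, contextual -->
<img src="/images/red-running-shoes.jpg"
     alt="Pair of red lightweight running shoes on a wooden floor">

<!-- Bad: keyword stuffing signals spam to search bots -->
<img src="/images/red-running-shoes.jpg"
     alt="running shoes buy running shoes cheap best running shoes sale">
```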
Too big images
Another SEO issue with images is that they’re often too resource-heavy. Our inspection showed that this problem exists on 35.44% of sites.
The image file size can affect how fast the page containing it loads. The chain of logic here is simple: the bigger the file, the longer it takes to load; the longer the user has to wait, the worse the user experience and the lower the page’s position in search results.
How to fix
One way to reduce the image file size is to compress it while maintaining image quality. There’s no perfect level of compression because it depends on the image’s format, dimensions, and content, but you should still try to keep images under 100 KB whenever possible.
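As a rough sketch of the idea, here’s how you might downscale and re-encode an image with the Pillow library (the 1,200 px width and the quality setting are illustrative defaults, not universal rules):

```python
from PIL import Image  # third-party: pip install Pillow

def compress_image(src, dst, max_width=1200, quality=70):
    """Resize an image to a web-friendly width and re-save it with lossy compression."""
    img = Image.open(src)
    if img.width > max_width:
        # Scale the height proportionally so the image keeps its aspect ratio
        ratio = max_width / img.width
        img = img.resize((max_width, round(img.height * ratio)))
    # JPEG can't store an alpha channel, so normalize to RGB first
    img.convert("RGB").save(dst, "JPEG", quality=quality, optimize=True)
```

Run the result through your audit tool afterward to confirm the files dropped below the size threshold.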
Meta tag issues
Meta tags are critical for search engine optimization because they tell search engines and users important information about your web page. They also help search engines better understand how to present your pages in the SERPs. Meta tags are core elements of effective optimization, which is why you don’t want meta tag errors on your pages. But some websites do have them, so below is a list of the most common ones.
Missing meta description
This issue occurs on 71.11% of websites. But why is it so critical, given that Google neither confirms nor refutes that meta descriptions affect search rankings?
The answer is that Google can still use them for search result snippets. If you don’t specify a page description, Google will use the available content on that page to generate a description. Are you really willing to rely on the search engine in this matter? Probably not.
How to fix
SE Ranking’s Website Audit tool will help you identify pages with missing descriptions. Just go to the Issues Report and check out the Description section. You’ll see the number of pages where a description should be added, and by clicking on that number, you’ll get the list of pages. Add unique descriptions to each page, explaining what your page is about to people and search engines.
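For instance, a page-specific description sits in the page’s <head> (the title and content here are made-up examples):

```html
<head>
  <title>How to Fix Missing Meta Descriptions | Example Blog</title>
  <meta name="description"
        content="Learn why missing meta descriptions hurt your snippets and how to write a unique, relevant description for every page.">
</head>
```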
Duplicate content in page titles and descriptions
More than half of the analyzed websites have duplicate content in title tags (52.59%) and duplicate descriptions (50.17%).
Multiple pages on your website with duplicate titles and descriptions confuse Google and other search engines because they can’t quickly determine which page is relevant to a particular search query. Such pages are less likely to rank well, so try to make titles and descriptions as unique as possible.
How to fix
The already-mentioned Issue report will help you identify pages with duplicate titles and descriptions. All you need to do is rephrase duplicate content using your target keywords. If you’re struggling, you can use the AI Rewrite feature available in SE Ranking’s new Content Marketing Module. It’ll help you reword your titles and descriptions. You can also generate new titles and descriptions from scratch, using AI Writer. This tool helps you to generate different types of content and generally allows you to build SEO-friendly content faster.
Title and description length issues
One of the biggest SEO issues is the title and description length. 22.82% of websites have title tags that are too short, and 21.75% make their descriptions too long.
This SEO mistake is two-sided:
- A too-short title can’t fully describe your page.
- A too-long description can be cut by the search engine in the snippet.
How to fix
The general recommendation is to keep your page titles between 40 and 60 characters and page descriptions under 155 characters.
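If you want to spot-check lengths yourself, a small script built on Python’s standard-library HTML parser can flag titles and descriptions outside these ranges (the 40–60 and 155 thresholds follow the recommendation above; this is a simplified sketch, not a full audit tool):

```python
from html.parser import HTMLParser

class MetaLengthChecker(HTMLParser):
    """Collects the <title> text and the meta description from raw HTML."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def check_lengths(html):
    """Return a list of human-readable length issues for one page."""
    parser = MetaLengthChecker()
    parser.feed(html)
    issues = []
    if not 40 <= len(parser.title) <= 60:
        issues.append("title length out of the 40-60 character range")
    if len(parser.description) > 155:
        issues.append("description longer than 155 characters")
    return issues
```

Feed it the HTML of each page from your crawl and collect the pages that return a non-empty issue list.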
Missing title tag
A rare but still existing issue (9.04%) is not including the title tag at all. This way, you lose an opportunity to tell search engines and users what your page is about. Instead, Google will create the title using the available page content. And it may not always match your goals or your keywords.
How to fix
The approach to fixing this error is pretty obvious—just add unique and relevant titles to your pages based on your target keywords. Identify such pages with the help of website audit tools and then write titles yourself or use content tools.
Missing internal inbound links
It’s worrisome to see that nearly two-thirds of the analyzed websites (63.09%) have pages with no links connecting them to other website pages.
This is a major issue because inbound internal links enable users and search bots to navigate your website. Links structure the website content hierarchy and help distribute link juice.
There are some cases when a page without internal links is acceptable, like a landing page for a particular promo. Such pages only exist during the period of the sale and can be accessed via a link posted on social media or sent in a newsletter. Other than that, having isolated pages is simply wrong from the SEO perspective: search engines won’t find them while scanning your website, and users won’t reach them when browsing your site.
How to fix
If you have isolated pages on your website, decide whether the page is valuable. If so, revise your website structure and figure out the best way to build links to this page from the other sections and pages of your website. By ‘the best way,’ we mean the way users and Google can get to the page without applying extra effort.
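Conceptually, finding isolated pages boils down to comparing the set of known pages against the set of internal link targets. A minimal sketch, assuming you already have crawl data as a mapping of each page to the pages it links to:

```python
def find_orphan_pages(link_graph, all_pages):
    """Return pages no other page links to.

    link_graph: {source_page: set of internally linked target pages}
    all_pages: every page known to exist on the site (e.g., from the sitemap)
    """
    linked_to = set()
    for source, targets in link_graph.items():
        linked_to |= targets
    # The homepage is reachable by definition, so exclude it
    return {page for page in all_pages if page not in linked_to and page != "/"}
```

Any page this returns either needs an internal link from a relevant section or a conscious decision to keep it deliberately unlinked (like a promo landing page).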
Heading tag issues
HTML heading tags help people and search engines quickly understand page content. They outline the structure of the content and can even affect your page’s ranking.
Our research has shown that websites often have technical SEO problems related to H1 and H2 tags, so let’s dive into the essence of these issues.
H1 tag issues
The H1 tag is the top-level page heading that briefly describes page content and helps the reader understand whether they’ll find what they’re looking for here. That’s why we start with it.
Missing H1 tag
62.85% of websites have pages with no H1 tag. They lack the most important heading on the page that usually serves as the title for that piece of content (don’t mix it up with the title tag!). These pages then lose the chance to provide a better reading experience for users and a better content-scanning experience for Google crawlers.
Duplicate content in H1
56.6% of websites have pages with duplicated H1 tags. Like all duplicate content, non-unique H1 headings make it more challenging for search engines like Google to figure out which site page to display in search results for a given query. They can also lead to several of your pages competing for the same keyword, making duplicated H1 tags one of the most insidious errors in SEO.
Multiple H1 tags
50.25% of websites use multiple H1 tags on their pages. Given that Google’s John Mueller said that you could use H1 tags on a page as often as you want, you might be wondering what the issue is here.
Let’s look at it this way. If your page has multiple H1 headings, which one should the search engine give more weight to? You might think the first one, but you can’t be 100% sure that the search engine will follow your logic. They have theirs. What’s more, having too many H1 tags on a single page can look spammy if you use them to place keywords.
How to fix
Apply the same approach as with meta tag errors. Run a website audit to identify problems with H1 headings. Add H1s if they are missing, update H1s if they contain duplicate content, and eliminate redundant H1 headings if you have several of them on the same page.
H2 tag issues
One of the most common problems related to H2 tags is that 68.28% of websites omit them from their content. There’s no evidence that skipping the H2 tag has a direct negative effect on your rankings, but it definitely worsens the user’s and the search bot’s experience on the page. H2 tags help structure the content and make it easier to scan the page within seconds.
How to fix
If you have missing H2 tags, add them to your pages and ensure they’re brief and clear about the block they’re referring to. You can have several H2 tags as long as your content is logically structured and they aren’t placed higher than H1 tags.
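A well-structured page uses a single H1 followed by H2s for each logical block (the headings and copy below are placeholders):

```html
<h1>Technical SEO Audit Checklist</h1>  <!-- one H1: the page's main topic -->

<h2>Fix missing alt text</h2>
<p>...</p>

<h2>Compress oversized images</h2>
<p>...</p>
```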
The main takeaway from the heading tags section is the following: everything that makes your pages better for users makes a difference to Google.
Problems with CSS files
CSS files make websites visually appealing and user-friendly. They also help Google understand how the page works on both desktop and mobile devices. But when they aren’t optimized correctly, your page speed can suffer.
Here are the most critical issues related to CSS file optimization revealed by our research:
Not compressed CSS files
Using uncompressed CSS files is one of the most widespread technical SEO problems, as proven by 60.53% of websites. Big CSS files slow down page loading, worsening user experience and negatively affecting search engine rankings.
How to fix
By compressing CSS files (e.g., with gzip, which replaces repeated sequences in the code with references to their first occurrence), you reduce the amount of data transferred, so the browser can render CSS faster. Set up compression on your server. If you’re using uncompressed CSS files from an external resource and they slow down the page’s load time, reach out to the resource’s owner and ask them to configure compression.
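On nginx, for example, enabling gzip for CSS takes a couple of directives (Apache and other servers have equivalents):

```nginx
# Compress text-based assets, including CSS, before sending them to the browser
gzip on;
gzip_types text/css application/javascript image/svg+xml;
```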
CSS is too big
SE Ranking shows this error if the CSS file size exceeds 150 KB. This issue occurs in 56.7% of cases. As mentioned above, a CSS file that’s too big directly affects page loading.
How to fix
Use compression or minification (more on this below) to reduce the CSS file size and eliminate the problem.
Not minified CSS files
55.59% of websites serve CSS that hasn’t been minified. The extra characters make the files heavier than they need to be, which slows your website’s speed and degrades the user experience.
How to fix
Minify your own CSS files to reduce their size, or contact the owner of the website whose files you’re using and ask them to do the same. Minification means deleting unnecessary line breaks, semicolons, white space, and comments from the source code. This approach alone won’t give you a dramatic CSS file size reduction, but when combined with compression, you can get significant results.
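To see what minification does, here’s a deliberately naive minifier that strips comments and collapses whitespace (real tools like cssnano or clean-css go much further; this is only an illustration of the idea):

```python
import re

def minify_css(css):
    """A toy CSS minifier: removes comments and unnecessary whitespace."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop /* ... */ comments
    css = re.sub(r"\s+", " ", css)                   # collapse runs of whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)     # tighten around punctuation
    return css.strip()
```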
Not cached CSS files
According to our inspection, 22.06% of sites don’t enable CSS file caching. This puts an additional load on the server because the browser has to send extra requests to re-download the files every time it renders the page.
How to fix
Use caching, as it reduces the load on the site server. If you store cached copies of your CSS files, the next time the user visits that page, the browser will serve them the saved copy instead of sending additional requests to the server. If the CSS files you’re using are hosted on an external resource, ask the website owner to configure browser caching.
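On nginx, a simple way to let browsers cache CSS files is an expires rule on the file extension (the 30-day lifetime is an arbitrary example; pick one that matches how often your styles change):

```nginx
# Serve CSS with caching headers so returning visitors reuse the local copy
location ~* \.css$ {
    expires 30d;
    add_header Cache-Control "public";
}
```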
4XX HTTP Errors
HTTP status codes are three-digit code messages that the server generates in response to a client’s request. The 200 code is what every page should return. It means that the request was successful. The 4XX status codes, though, are what you should avoid for the sake of SEO.
Below are the most common 4XX HTTP errors that our analysis revealed.
4XX HTTP status codes
More than two fifths of the analyzed websites (41.34%) have pages with a 4XX response.
In most cases, a 4XX response appears for a 404 error, and having pages with the 404 error isn’t that terrible in itself. However, you should take care of every internal link to such pages because linking out to dead pages provides a poor user experience. In addition, it drains your crawl budget.
How to fix
As a solution, you can delete internal links to 404 pages and/or set up a 301 redirect. Also, check that you don’t have backlinks to such pages.
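A 301 redirect for a removed page can be set up in a single line, for example in an Apache .htaccess file (the paths are hypothetical):

```apache
# Permanently redirect a deleted page to its closest living replacement
Redirect 301 /old-page/ https://example.com/new-page/
```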
4XX images (Not Found)
Pages aren’t the only elements that can return 4XX HTTP errors. It also applies to images and files. We noticed that 21.2% of websites have broken images that can’t be loaded. Besides a negative impact on the user experience, Google can’t index such images.
How to fix
Replace the URLs of all the broken images with working ones, or remove the links to the broken images from your website.
Website speed and performance issues
39.02% of websites have problems with page loading speed (it’s too slow). These websites are likely to notice negative behavioral signals. For example, users might go to other sites instead of waiting for the page to load fully.
Google has recently confirmed that it no longer uses any previous page speed estimation algorithms. Page load speed is now evaluated based only on Core Web Vitals. Below are the most common CWV-related issues we detected when analyzing website audits.
Largest Contentful Paint
The Largest Contentful Paint (LCP) measures the main content loading speed: how fast the largest image, text block, video, etc., becomes visible. LCP is measured in real-world conditions based on data from Chrome users and in a lab environment based on data from the Lighthouse report.
You should strive for the largest elements to be loaded in under 2.5 seconds. However, our research shows that for some websites, it’s still a goal rather than an achieved result.
- 31.01% of websites have LCP higher than 2.5 seconds when measured in a lab environment.
- 12.56% of websites have LCP higher than 2.5 seconds when measured in real-world conditions.
How to fix
Common fixes include compressing and properly sizing the largest image or video on the page, improving server response times, and deferring render-blocking CSS and JavaScript so the main content renders sooner.
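One common tactic is to tell the browser about the LCP element early, for example by preloading the hero image (the file path is a placeholder):

```html
<!-- In <head>: fetch the largest above-the-fold image as early as possible -->
<link rel="preload" as="image" href="/images/hero.jpg">
```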
Cumulative Layout Shift
The Cumulative Layout Shift (CLS) measures how visually stable your web page is while it loads. In other words, it captures how much the elements on the page unexpectedly shift around, for example when an image that loads later pushes other content out of place. It’s also measured in real-world and lab environments. The goal here is to maintain a CLS score of 0.1 or less.
Our research shows that:
- 30.59% of websites exceed this value in a lab environment.
- 10.91% of websites exceed this value in real-world conditions.
How to fix
One possible solution to this issue is to use size attributes for images and videos. They let the browser reserve space for these elements in the final layout before the files finish loading.
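In practice, that means declaring the intrinsic width and height on media elements (the dimensions and file paths below are examples):

```html
<!-- The browser reserves a 16:9 box before the file loads, so nothing shifts -->
<img src="/images/chart.png" width="1200" height="675" alt="Distribution of SEO errors">
<video src="/media/demo.mp4" width="1280" height="720" controls></video>
```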
XML sitemap issues
An XML sitemap is a file with a list of a website’s URLs and information about how they are related to each other. It helps search engines like Google spot every page on your website, even isolated ones or those without links from other sites.
It plays a vital role in crawling, but still, some XML-related issues occur pretty often.
- 35.87% of websites don’t include a link to their XML sitemap in the robots.txt file. But they should include one because it helps Google figure out where the sitemap is located.
- 14.44% of websites have their XML sitemaps missing. This causes crawlability problems. Without a sitemap, search bots can’t find and access isolated pages of the site.
- 11.77% of websites have pages with the noindex meta tag in their XML sitemaps. This is confusing to Google because your XML sitemap should only include URLs you want search engines to crawl and index.
- 3.36% of websites have HTTP URLs in their XML sitemaps. This problem can appear in two ways: you’ll find HTTP URLs in your sitemap if you’re still using the HTTP protocol, or, after switching to HTTPS, if you haven’t updated the URLs in your sitemap.
How to fix
This is what you can do to eliminate XML sitemap errors:
- Create an XML sitemap and add it to your website. Then, submit its location to search engines. You can also create separate XML sitemaps for URLs, images, videos, news, and mobile content. Finally, add a link to your XML sitemap file to the robots.txt file.
- Remove pages with the noindex meta tag from your XML sitemap, or remove the noindex tag from these pages. The decision depends on your goals.
- Switch to HTTPS and replace the old URLs with new ones in your sitemap.
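Putting these fixes together, a robots.txt sitemap reference and a minimal HTTPS-only sitemap look like this (example.com and the page URL are placeholders):

```
# robots.txt
User-agent: *
Sitemap: https://example.com/sitemap.xml
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/technical-seo/</loc>
  </url>
</urlset>
```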
Missing redirect between www and non-www
Some websites contain www in their addresses, and others don’t. Our research shows that 13.31% of websites use both and don’t apply redirects. This means these websites have at least two versions of the same page, which can lead to duplicate content problems that, as you know, can negatively affect your SEO efforts and result in lower page rankings.
How to fix
Set up 301 redirects from the www version to the non-www version, or vice versa if your main website version uses the www prefix.
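On nginx, for example, redirecting the www host to the bare domain takes one small server block (swap the hosts if www is your main version; example.com is a placeholder):

```nginx
# Catch requests to the www host and 301-redirect them to the non-www domain
server {
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}
```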
Other equally important technical SEO issues to avoid
Once you solve all the SEO errors from the previous 10 categories, your site will definitely say ‘thank you,’ and it will eventually reach the top SERP positions.
Solving the next (albeit less critical) SEO technical errors will help you really get there.
Blocked by noindex
The noindex directive added to a page’s <head> section prevents it from being indexed by Google. The search bot will drop this page and won’t display it in the SERPs even if other sites link out to it. While it’s okay to block some pages from indexing (like pages with filtered results for ecommerce sites or pages with personal user data or checkout pages), it’s not okay if search engines ignore your important pages. Such indexability issues are the most critical.
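The directive itself is a single meta tag in the page’s <head>:

```html
<!-- Tell search engines not to index this page -->
<meta name="robots" content="noindex">
```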
Blocked by robots.txt
Robots.txt provides crawling recommendations. If it blocks your page, search bots won’t get access to it and index it. It’s similar to how the noindex directive works, but there’s one caveat. If other pages or resources link out to the page blocked by robots.txt, it can still be indexed. Make sure it blocks only unnecessary parts of your site, not important pages.
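A healthy robots.txt blocks only utility sections while leaving the rest of the site crawlable, for example (the paths are illustrative):

```
User-agent: *
Disallow: /checkout/
Disallow: /account/
```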
Canonical chains
A canonical tag tells search engines that a specific URL is the main version of a page. A canonical chain occurs when one page specifies a canonical URL, but that URL in turn defines a different page as canonical. The search engine then has a fair question: which page is the canonical one after all? Although a canonical chain isn’t the most severe of technical SEO errors, it’s always better to point every page directly to its canonical version instead of making search engines like Google follow the chain.
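To avoid chains, every variant of a page should point straight to the final canonical URL (the URL below is a placeholder):

```html
<!-- In the <head> of every variant of the page -->
<link rel="canonical" href="https://example.com/blog/technical-seo/">
```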
Redirect chains
Hardly any site can operate without redirects, but if they aren’t set up correctly, they can be detrimental to SEO. Redirect chains (i.e., redirects from one page to a second, then a third, a fourth, and so on) can make your website difficult to index and can even slow it down. Remember that the longer the chain, the slower the target page will load.
302, 303, 307 temporary redirects
Temporary redirects send users and search bots to a different page because the one they’re looking for is unavailable right now but will be back soon. Such redirects won’t transfer the link juice of the old page to the new one. Given their temporary nature, it’s best to avoid using them for extended periods of time.
External links to 3XX
Your site may have links to other resources that redirect users to new pages (not the ones you initially linked out to). If this new page is thematically related to the old one, or the transition is just as logical, there’s no problem. But if the link leads visitors to a page that doesn’t contain the necessary information, it’s worth removing this redirect and reviewing the link logic.
External links to 4XX
In this case, your site has broken links to other resources. When following a link, the user expects to see the information they need but instead lands on a non-existent page and gets a bad user experience.
3XX HTTP status code
The 3XX status code means pages have redirects. And it’s okay to have such pages. But it’s not okay if there are too many of them. At SE Ranking, we recommend that their number on the site doesn’t exceed 10%. It’s even better to stay under 5%. If you have such a problem, consider removing some of the redirects.
Internal links to 3XX redirect pages
This problem is similar to external links to 3XX redirected pages, but here it all happens within your site. The negative effect in both cases is the same: if the page isn’t relevant, you give your user a worse UX, which can be a bad signal for Google. Also, your pages may not work properly if the redirects are misconfigured, and the site may run slower if there are too many of them.
Now you know all about the top technical SEO issues preventing other sites from performing at full capacity. You also have a clearer picture of what technical issues to look out for on your site.
Remember that the biggest errors in SEO are always the ones you don’t know about. With this in mind, our final advice would be to audit your website regularly. This way, you’ll catch even minor health problems and prevent them from escalating into something more.