On-page search engine optimization, often shortened to on-page SEO, can be a convoluted subject and difficult to navigate. That said, it’s also an incredibly important tool for any marketer or website owner competing in the internet landscape.
Here, I’ll go through the basics of on-page SEO and provide tips for how to evaluate and improve your site’s SEO performance.
What is on-page SEO?
In simple terms, the objective of SEO is to demonstrate to search engines that your web content is relevant to what people are searching for in order to increase your organic traffic.
This is done by identifying keywords that are relevant both to your website and to what users are typing into search engines, then incorporating them strategically, where relevant, into your website’s copy and meta data. The result is your website appearing on the search engine results page, or SERP.
What is the main goal of SEO?
Search engines crawl billions of webpages every day, pulling a staggering amount of HTML code to determine whether a page should appear in the SERP for a given query, in what order, and how it should be displayed. Appearing on the first page of the SERP for targeted keywords is the goal because it delivers a huge boost to website traffic, but getting there is complicated and takes time.
A note on website development: On-page SEO can sometimes be confused with website development because it involves a heavy amount of editing and inserting HTML code. The main difference between the two is:
- Web development creates the infrastructure that holds your content
- On-page SEO shapes the content itself and shows search engines what that content is about
How to evaluate your on-page SEO
The first place to start is with an on-page SEO audit. This will tell you where your website stands and what work needs to be done to create measurable improvement.
Your on-page SEO audit should include:
- Measuring your website speed, especially for mobile devices
- Evaluating meta data
- Evaluating keyword density
- Identifying duplicated content, if any
- Eliminating indexing errors
Working through this checklist can take a lot of time, so there are two approaches: a quick method that gives you a rough idea of how your on-page SEO is performing, and a more in-depth method that provides detailed information.
The quick audit
The shorter method uses Lighthouse, an auditing tool built into Google Chrome’s developer tools (it’s also available as a browser extension). From Chrome, press Ctrl+Shift+I to open the developer tools, scroll to the far right of the top bar, click “Lighthouse,” and click “Analyze page load.” After a short wait, you will see the results of the audit scored across four metrics:
- Performance: This score shows the speed of your site on mobile and/or desktop devices. It identifies how long each element took to load and whether the load time was excessive.
- Accessibility: This score reflects how easy your website is to access, navigate, and read, as well as how well it is organized. Examples include images having alt text, links having accurate descriptors, and fonts being a legible size.
- Best practices: This scores how well you are following SEO best practices. It’s a tricky one because best practices change as search engines adapt their algorithms to best answer search queries; because of this, old “best practices” can become quite harmful to your SEO. I highly recommend reading Google’s SEO Starter Guide, as it is regularly updated with current best practices.
- SEO: Some might argue that this metric seems redundant, and in some ways it is. It scores very high-level SEO practices while the other three metrics go more in depth. The SEO score measures general things: whether meta data and a robots.txt file are present, whether the webpages are crawlable, whether mobile site speed is adequate, whether links have descriptive text, and so on. If your website is falling behind in any of the other three metrics, this one will drop by an amount depending on the issue.
When you click on each metric, you will see the specific issues dragging that score down, so you’re not left wondering. Keep in mind, however, that while the quick audit captures most issues, it is not as thorough as an in-depth audit, especially if the website in question has indexing errors.
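If you run the same audit from the Lighthouse command line instead (for example, `lighthouse https://example.com --output=json`, which assumes Node and the `lighthouse` package are installed), the four category scores can be pulled out of the JSON report programmatically. A minimal Python sketch, using a small stand-in for a real report:

```python
def category_scores(report: dict) -> dict:
    """Pull the 0-100 scores for each Lighthouse category from a JSON report."""
    return {
        name: round(cat["score"] * 100)
        for name, cat in report["categories"].items()
        if cat.get("score") is not None
    }

# Stand-in for a real report; actual Lighthouse JSON output has the same
# "categories" shape, with scores expressed as fractions of 1.0.
sample_report = {
    "categories": {
        "performance": {"score": 0.82},
        "accessibility": {"score": 0.95},
        "best-practices": {"score": 0.9},
        "seo": {"score": 1.0},
    }
}

print(category_scores(sample_report))
# {'performance': 82, 'accessibility': 95, 'best-practices': 90, 'seo': 100}
```

Extracting scores like this makes it easy to track the four metrics over time rather than eyeballing them in the browser panel.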
The in-depth audit
An in-depth audit is certainly more manual, but there are some excellent tools, low cost or even free to start, that do a lot of the heavy lifting for you. Another benefit of a manual audit is that it can be as narrow or as broad as you like, from a handful of specific pages to the entire website.
Mobile and desktop site speed
- No piece of content should take longer than four seconds to load. The longer users wait for a site to load, the higher your site’s bounce rate will be. Keep load times as close to two seconds as possible.
- Lighthouse is currently my favorite tool for measuring site speed because it’s fast and in-depth.
- If your site is struggling, run its code through a minifier such as minifier.org. Make sure you are only entering the specific content that is taking too long to load.
Evaluating meta data, indexability, and keyword density
- Screaming Frog is a great tool for analyzing meta data. It’s free to download, but you must pay a fee to crawl more than a certain number of pages.
- Once you’ve entered your URLs or domain into the tool, you will have a clear view of your site's meta data. Here is what to look for:
- Meta title
- Does the character count range between 50 and 60?
- Is there at least one keyword in the meta title?
- Is there a call to action?
- Meta description
- Does the character count range between 150 and 160?
- Does the description include at least one keyword, but no more than three?
- Is there a call to action?
- H1 tags
- Are the tags under 70 characters?
- Is there at least one keyword?
- Is the content of the tag relevant to the page?
- Are any pages showing as non-indexable?
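The checklist above is easy to script once Screaming Frog (or any crawler) has exported your meta data. A minimal Python sketch that flags the common problems; the length ranges and keyword counts are the guidelines from this checklist, not hard limits:

```python
def audit_meta(title: str, description: str, keywords: list[str]) -> list[str]:
    """Flag common meta data problems for one page, per the checklist above."""
    issues = []
    if not 50 <= len(title) <= 60:
        issues.append(f"meta title is {len(title)} chars (aim for 50-60)")
    if not 150 <= len(description) <= 160:
        issues.append(f"meta description is {len(description)} chars (aim for 150-160)")
    if not any(k.lower() in title.lower() for k in keywords):
        issues.append("no target keyword in the meta title")
    hits = sum(k.lower() in description.lower() for k in keywords)
    if hits == 0:
        issues.append("no target keyword in the meta description")
    elif hits > 3:
        issues.append("more than three keywords in the meta description")
    return issues

# Hypothetical page: flags title length, description length,
# and the missing description keyword.
for issue in audit_meta("Blue Widgets", "Tiny.", ["blue widgets"]):
    print("-", issue)
```

Run this across every URL in your crawl export and you have the meta data portion of the audit done in one pass.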
Identifying duplicate content
- Now it’s time to identify any keywords repeated on multiple pages and contextually similar or identical content.
- There are a few reasons why duplicate content is harmful, but the main one is this: if you have content repeated throughout your site, it becomes very difficult, if not impossible, for a search engine to determine which piece of content to show for a matching search query. This is called keyword cannibalization.
- As a result, search engines may rank the competing pages poorly or show neither of them at all.
- Screaming Frog is good for going line by line and identifying repeated keywords in both the meta data and URLs.
- Moz and Ahrefs are great tools for finding duplicate content.
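For a rough do-it-yourself check, you can approximate what these tools do with word-shingle overlap: two pages whose shingle sets mostly coincide are near-duplicates. A simplified Python sketch (real crawlers use more robust variants of this idea, such as minhashing):

```python
def jaccard_similarity(text_a: str, text_b: str, n: int = 3) -> float:
    """Score how similar two pages' copy is, from 0.0 (distinct) to 1.0 (identical),
    by comparing their sets of word n-grams ("shingles")."""
    def shingles(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    a, b = shingles(text_a), shingles(text_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical page copy: a high score here signals likely cannibalization.
page_a = "our blue widgets are the best blue widgets on the market"
page_b = "our blue widgets are the finest blue widgets on the market"
print(round(jaccard_similarity(page_a, page_b), 2))
```

A score near 1.0 between two of your own URLs is a strong hint that they are competing for the same queries and should be consolidated.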
How to solve issues to improve on-page SEO
Once you have a completed audit, some solutions to the findings are self-explanatory and easy to fix, such as meta data character count or site speed. To address more difficult findings like duplicate content or a lack of keywords, you’ll want to use that information to develop an on-page SEO strategy. This strategy will vary greatly with the size and age of the site.
Here are some examples of what you may find and possible solutions to help kickstart your on-page strategy:
- It’s important to connect your website to Google Search Console. This tool will highlight major issues that impact your SEO, such as page indexing errors.
- Once connected, click on “pages,” and you will see a graph of pages that are and aren’t indexed. Scrolling down further will show why they aren’t indexed.
Duplicate content
- For a large (think 1,000-plus pages), older site, duplicate content can cripple your SERP rankings. These are my recommendations:
- Remove the duplicated content and keywords. If this involves removing webpages from your site to consolidate content, a 301 redirect to a relevant page will be necessary.
- Ensure every URL in your XML sitemap uses your canonical domain format (for example, with “www.” if that is the version you’ve indexed), or you may run into indexing errors.
- I recommend removing duplicated content rather than relying on canonical tags, because those tags are only suggestions to Google. A website of that size may still run into duplication issues regardless of canonical tags.
- For a newer or smaller website, this issue will be much easier to fix with a canonical tag.
- A canonical tag tells search engines which version of a page is the preferred one to index.
- Joshua Hardwick at Ahrefs wrote an excellent article on canonical tags and how to implement them.
- Keep in mind that non-canonical pages should never be included in the sitemap, and that canonical tags are only suggestions: Google does not have to index the page the tag points to, so content removal may still be necessary, though that is less likely than on a much larger site.
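For reference, a canonical tag is a single `<link>` element in a page’s `<head>`. The Python sketch below shows what one looks like and extracts it with the standard library’s HTML parser (example.com is a placeholder domain):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Pull the rel="canonical" URL out of a page's markup, if present."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

# A canonical tag as it appears in a page's <head> (placeholder domain):
page = """
<head>
  <title>Blue Widgets | Example Store</title>
  <link rel="canonical" href="https://www.example.com/blue-widgets" />
</head>
"""

finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)  # https://www.example.com/blue-widgets
```

A quick script like this lets you crawl your own pages and confirm that every duplicate points its canonical tag at the one version you want indexed.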
Lacking or irrelevant keywords
- This is a problem for sites of all shapes and sizes but can be quite fun to fix through keyword research.
- Google Trends and Answer the Public are great free tools for keyword research. They show what users are searching for on any given topic, along with relative search volume and where users are searching.
- Once you have found search terms that are relevant to your site and what users are searching for, compile them in a list.
- Now that you have a solid list of keywords, you need to identify the competition of those keywords, as you likely aren’t the only one using them and search engines aren’t going to automatically rank you on SERP just for being relevant.
- Ubersuggest and Moz are good tools for identifying the competition and difficulty of ranking for the keywords you’ve just researched. They will even suggest different or longer-tail permutations of your keywords that are easier to rank for, and they can track the progress of your keyword rankings.
Google Search Console and indexing errors
- A common indexing error is the dreaded “404 page not found.” Simply put, it means a URL points to a page that no longer exists, so Google cannot index it. Either remove the URL or redirect it to a live page.
- “Discovered – Currently Not Indexed” is a frustrating error because it can happen for several reasons and none of them are obvious. In my experience, this has happened with duplicate content.
- The application of canonical tags and/or removal of redundant pages/content will help clear the issue.
- Click “validate fix” at the top of the page once your work is complete.
- After 48 to 72 hours, you will receive an email from Google letting you know whether any issues remain with the affected URLs.
- “Crawled – Currently Not Indexed” is an even more ambiguous error because it means Google crawled the page and knows it exists but chose not to include it in its index.
- First, clean up any duplicate content and make sure meta data is in order.
- Look at your XML sitemap to ensure you are using your canonical URL format (e.g., https://www.). Copy the sitemap and upload it into Screaming Frog to surface further on-page errors. I’ve seen sitemaps where the non-www versions of URLs were 301 redirected to the www versions, triggering this index error.
- Sometimes this error is itself an error! URLs may show up under this category as not indexed when they are in fact indexed. Do a branded Google search that includes one of the page’s targeted keywords. If you see the affected URLs in the SERP, they’re indexed.
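The sitemap check mentioned above can be scripted: parse the sitemap XML and flag any URL that doesn’t start with your preferred (canonical) prefix. A small Python sketch with a toy sitemap (example.com is a placeholder):

```python
import xml.etree.ElementTree as ET

# Toy sitemap with one URL missing the "www." version of the domain.
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""

def non_canonical_urls(sitemap_xml: str, prefix: str) -> list:
    """Return sitemap URLs that don't use the preferred (canonical) prefix."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(sitemap_xml)
    return [
        loc.text for loc in root.findall(".//sm:loc", ns)
        if not loc.text.startswith(prefix)
    ]

print(non_canonical_urls(SITEMAP, "https://www.example.com/"))
# ['https://example.com/about']
```

Any URL this flags is a candidate for the redirect-into-the-sitemap problem described above and is worth checking in Search Console.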
After all this effort, you will want to see the fruits of your labor. That’s why it’s imperative to regularly monitor KPIs for both success and failure. I recommend reading Roger Montti’s article in Search Engine Journal, where he goes into detail about which KPIs to watch in GA4.