Complete SEO Guide for Web Developers

SEO checklist

Complete SEO guide for meta tags, URLs, robots, sitemaps, social tags, multi-language sites, local search, mobile, page speed and more!

SEO (Search Engine Optimization) is an awkward topic for most web developers. Unlike web development, SEO is not an exact science, and the impact of specific SEO actions may become visible after days, weeks, months, or sometimes never. SEO is quite different from other sorts of web development, where you make a code change, switch to the browser tab, refresh, and voila: your changes are instantly visible.

Search engines’ ranking algorithms change constantly, almost as fast as new JavaScript frameworks pop up. These changes are not documented, and at best they are merely hinted at. Luckily, there are a number of SEO experts who run all sorts of experiments to figure out what’s working and what’s not, and share their findings with the world.

Searching for SEO best practices often returns contradictory and outdated information or topics related to keyword research, content optimization, and link building. It can be really easy to get lost in all the constant updates, and dismiss SEO as something only SEO experts should do. This SEO guide jumps right to the meat of it:

  • it provides current, regularly updated SEO best practices
  • it focuses only on SEO topics relevant to web developers
  • it skips SEO topics which are better suited for SEO experts, marketers and business owners (keyword research, content optimization, link building)

SEO practices can be divided into two main categories: on-site SEO and off-site SEO. On-site SEO is the practice of optimizing both the content and the HTML source code of a page. Off-site SEO refers to external signals and links elsewhere on the Internet. In this guide, I’ll focus exclusively on on-site SEO, and mainly on source code and web server optimizations.

title tag

The title tag (<title></title>) is a strong SEO signal and one of the first impressions users get from the website. It is used in several very important places:

  • search engine results pages (SERPs)
  • social networks
  • web browser tabs
  • bookmarks
Example of title tag and meta description display in SERPs

 

Example of title tag display in browser tab and bookmarks

 

The exact length of the title displayed in SERPs is based on container width (600px), not on the number of characters. Google will typically display 50–70 characters, but try to keep titles under 60 characters.

<title></title> is a required HTML element and should not be blank. Also, each title should be unique; don’t use the same default for multiple pages!

Put the most important and specific keywords first and add the brand name at the end of the title, e.g.:

  • Nike LunarGlide 8 Women’s Running Shoe. Nike.com
  • Apple MacBook Pro 13-Inch (2017) Review & Rating | PCMag.com
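In markup, the first example above would simply look like:

```html
<head>
  <!-- Most specific keywords first, brand name last -->
  <title>Nike LunarGlide 8 Women’s Running Shoe. Nike.com</title>
</head>
```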

Resources:

https://moz.com/learn/seo/title-tag
https://support.google.com/webmasters/answer/35624

meta description

The meta description is an HTML meta tag that provides a short summary of a web page. Search engines often display it under the blue clickable links in search engine results pages (SERPs), although they sometimes ignore it.

Example of title tag and meta description display on Facebook

 

Although the meta description doesn’t influence rankings directly, it can impact click-through rates, which do influence rankings.

Google typically displays around 160 characters, but try to keep descriptions under 155 characters.

Like title tags, each meta description should be unique. However, unlike the title tag, the meta description is not required and can be omitted. Write a description when you are targeting one to three heavily searched terms, or when you expect users to share a page on social networks. You can leave it blank when you are targeting long-tail traffic (three or more keywords).

Never use double quotation marks in a meta description (or any other HTML attribute), because search engines will cut off your description when they encounter a quote character.
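A minimal example, with no double quotation marks inside the description text (the wording is illustrative):

```html
<head>
  <meta name="description" content="A short, unique summary of the page, under 155 characters, written to earn the click in SERPs.">
</head>
```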

Resources:

https://moz.com/learn/seo/meta-description
https://yoast.com/meta-descriptions/

 

img tag

Use the alt attribute on an img tag to provide alternative text describing the image. It will be used by screen readers, it will be displayed when the image cannot be loaded, and it will help search engines index the image properly.

Example of img tag alt and src attributes

Give your images detailed, informative file names and descriptive alt text to help search engines index images properly.

When width and height attributes are specified, a web browser can begin to render a page even before images are downloaded.

The most popular screen readers cut off alt text at around 125 characters.
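Putting the points above together, a well-optimized image tag might look like this (the file name, alt text and dimensions are illustrative):

```html
<!-- Descriptive file name, descriptive alt text under ~125 characters,
     explicit dimensions so the browser can reserve layout space -->
<img src="/images/nike-lunarglide-8-womens-running-shoe.jpg"
     alt="Nike LunarGlide 8 women's running shoe, side view"
     width="640" height="480">
```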

Resources:

https://support.google.com/webmasters/answer/114016
https://moz.com/learn/seo/alt-text

a tag (hyperlinks)

Use descriptive keywords in anchor text that give a sense of the target page’s topic. Anchor text should be succinct, relevant to the linked-to page, not overly keyword-heavy, and not generic (e.g. “click here”).

<a href="http://five.agency/">Mobile design & development agency</a>

Use rel="nofollow" for paid links and untrusted content (e.g. user-submitted links). Search engines will discount any link value that would be passed through a normal link.
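For example, a paid or user-submitted link would be marked up like this (the URL and anchor text are placeholders):

```html
<a href="https://example.com/" rel="nofollow">Sponsored partner</a>
```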

Resources:

https://support.google.com/webmasters/answer/96569
https://moz.com/learn/seo/anchor-text
https://ahrefs.com/blog/anchor-text/

 

URLs

A well-crafted URL is human-readable, semantically accurate, and gives both humans and search engines an indication of the content on the destination page. Instead of using IDs in URLs, use descriptive keywords which reflect the category hierarchy.

Ideally, a website should have a pyramid structure — all pages should be accessible with 3–4 clicks from the homepage. If applicable, link back from articles to their category and subcategory. This will help to establish site architecture and spread the link juice.

One of the common problems with URLs is duplicate content. Duplicate content is content that appears at more than one URL. It can present multiple issues for search engines and dilute visibility of each of the duplicates (including original!). To fix duplicate content issues, two methods are used most commonly: 301 redirects and rel="canonical".

A 301 redirect is a permanent redirect which passes the link juice to the redirected page. The rel="canonical" link element is part of the HTML head and contains the URL of the original page. It should be added to all duplicate pages, as well as to the original page.

The HTML link elements rel="next" and rel="prev" should be used to indicate the relationship between URLs in a paginated series. This hints search engines to send users to the most relevant page, typically the first page of the series.
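As a sketch, the head of the second page in a paginated series might contain the following elements (the URLs are illustrative):

```html
<head>
  <!-- Canonical URL of this page (added on the original page as well) -->
  <link rel="canonical" href="https://www.example.com/shoes?page=2">
  <!-- Relationship to the neighbouring pages in the series -->
  <link rel="prev" href="https://www.example.com/shoes?page=1">
  <link rel="next" href="https://www.example.com/shoes?page=3">
</head>
```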

Whenever possible, place content on the same subdomain to preserve authority — don’t use subdomains to separate website content into sections.

Resources:

https://moz.com/learn/seo/url
https://moz.com/learn/seo/internal-link
https://moz.com/learn/seo/duplicate-content
https://moz.com/learn/seo/canonicalization
https://support.google.com/webmasters/answer/66359
https://support.google.com/webmasters/answer/139066
https://webmasters.googleblog.com/2011/09/pagination-with-relnext-and-relprev.html

 

www vs non-www

From an SEO perspective there is no difference between the two, but it is important to choose one and stick to it. Additionally, 301 redirects should be used to redirect from the non-preferred domain to the preferred domain.
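As an illustration, on nginx the redirect from a non-preferred bare domain to the preferred www domain can be configured with a dedicated server block (the domain name is a placeholder, and an HTTPS site would need an equivalent block for port 443):

```nginx
# 301-redirect the non-preferred (bare) host to the preferred (www) host
server {
    listen 80;
    server_name example.com;
    return 301 http://www.example.com$request_uri;
}
```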

non-www has two big technical disadvantages:

  • it cannot be used reliably with cloud providers like Heroku, which commonly use DNS CNAME records so that they can dynamically change the IP address of the web server. A bare (non-www) domain cannot have a CNAME record.
  • cookies set on a non-www domain are sent to all subdomains. If the site uses a subdomain to serve static content, this slows down access to static content and can cause problems with caching.

www should be preferred for large websites, websites with a large number of subdomains, and websites hosted in the cloud.

Resources:

https://support.google.com/webmasters/answer/44231
https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Choosing_between_www_and_non-www_URLs
http://www.yes-www.org/why-use-www/

 

robots.txt

robots.txt is a text file which instructs search engine robots how to crawl pages on the website. While Google doesn’t crawl or index content blocked by robots.txt, it might still find disallowed content elsewhere on the web and index it. To prevent indexing of a page, use the robots meta tag instead.

Indicate the location of any sitemaps associated with this domain at the bottom of the robots.txt file:

Sitemap: https://www.wired.com/sitemap.xml

Google has sophisticated algorithms to determine the optimal crawl speed for a site and recommends against limiting the crawl rate unless the crawler is causing server load problems.
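A minimal robots.txt combining disallow rules with the sitemap location might look like this (the paths and domain are illustrative):

```
User-agent: *
Disallow: /admin/
Disallow: /search

Sitemap: https://www.example.com/sitemap.xml
```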

Resources:

https://moz.com/learn/seo/robotstxt
https://support.google.com/webmasters/answer/6062608
https://support.google.com/webmasters/answer/48620
https://developers.google.com/search/reference/robots_txt

 

meta robots tag

Use the meta robots tag with the noindex parameter to instruct crawlers not to show the page in search results. Use it for pages like internal search results, login pages, and other authentication-related pages.

<meta name="robots" content="noindex">

Use the meta robots tag with the nofollow parameter to instruct crawlers not to follow the links on the page.

<meta name="robots" content="nofollow">

The noindex and nofollow directives can be combined:

<meta name="robots" content="noindex, nofollow">

An alternative to the meta robots tag is the X-Robots-Tag HTTP response header.

Resources:

https://moz.com/learn/seo/robots-meta-directives
https://support.google.com/webmasters/answer/93710
https://developers.google.com/search/reference/robots_meta_tag
https://support.google.com/webmasters/answer/7424835

 

sitemap.xml

Although crawlers can usually discover most of a well-interlinked site, a sitemap can improve the crawling of the site, especially when you:

  • use the priority tag to signify the importance of a page relative to other URLs on the site
  • use the changefreq tag to hint how frequently the page is likely to change
  • use the lastmod tag to signal the date of the page’s last modification.
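A sketch of a sitemap.xml entry using all three optional tags (the URL and values are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/blog/seo-guide</loc>
    <lastmod>2017-08-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```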

Resources:

https://www.sitemaps.org/protocol.html
https://support.google.com/webmasters/answer/156184
https://support.google.com/webmasters/answer/183668

 

Social meta tags

Social meta tags are not a direct ranking signal, but correctly set social tags help content spread, which often leads to increased links and mentions. Open Graph has become the de facto standard for social meta tags: Facebook, Twitter, LinkedIn, Google+ and a lot of other major platforms recognize it. Twitter has its own Cards meta tags, but if they are not present on the page, Twitter falls back to using Open Graph tags.

If you are using social sharing widgets, add og:image:width and og:image:height. These two tags enable Facebook to correctly load the image in its sharing pop-up the first time a page is shared.

Optional additional properties:

  • og:locale — the locale of the resource
  • fb:app_id — used by Facebook Domain Insights (traffic analytics)
  • og:image:type — MIME type of the image: image/jpeg, image/gif, image/png

If you want Twitter to display a full-width, prominent image alongside a tweet, use the twitter:card tag with the summary_large_image value.

The minimum image dimension for such a card is 300x157px, and the maximum is 4096x4096px.
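Putting the tags from this section together, the head of a shareable article might contain (all values are illustrative):

```html
<meta property="og:title" content="Complete SEO Guide for Web Developers">
<meta property="og:description" content="SEO checklist covering meta tags, URLs, robots, sitemaps and more.">
<meta property="og:image" content="https://www.example.com/images/seo-guide.png">
<meta property="og:image:width" content="1200">
<meta property="og:image:height" content="630">
<meta property="og:locale" content="en_US">

<!-- Twitter reads its own Cards tags and falls back to Open Graph for the rest -->
<meta name="twitter:card" content="summary_large_image">
```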

Example of summary_large_image card display on Twitter

Resources:

http://ogp.me/
https://developers.facebook.com/docs/sharing/webmasters
https://dev.twitter.com/cards/overview
https://dev.twitter.com/cards/types/summary-large-image
https://moz.com/blog/meta-data-templates-123

Structured Data (Schema.org)

Include structured data on the page to provide explicit clues about its meaning to Google and other search engines. Google also uses structured data to enable special search result features and enhancements, like images, breadcrumbs, carousels, social profile links, etc.

Example of Google Search features: images, breadcrumbs, recipe cook time and calories, carousel.

Schema.org has become the de facto standard vocabulary for structured data. Structured data can be added to a page in three formats: JSON-LD, Microdata and RDFa. Google recommends JSON-LD, which doesn’t interleave structured data markup with the HTML, but keeps it nicely separated.

At a minimum, add the site’s name, logo, contacts and social links. See the resources below for more info.
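A sketch of such a snippet, covering the site’s name, logo and social links in the JSON-LD format Google recommends (all values are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/images/logo.png",
  "sameAs": [
    "https://www.facebook.com/example",
    "https://twitter.com/example"
  ]
}
</script>
```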

If your website’s content can be described by any of the supported schema.org content types, add those as well. Examples include articles, books, events, music, products, recipes, movies, videos, etc.

Structured data doesn’t necessarily improve rankings, but it improves how pages are represented in SERPs, which should improve click-through rates.

Resources:

https://moz.com/learn/seo/schema-structured-data
http://schema.org/
https://developers.google.com/search/docs/guides/intro-structured-data
https://developers.google.com/search/docs/data-types/sitename
https://developers.google.com/search/docs/data-types/logo
https://developers.google.com/search/docs/data-types/corporate-contacts
https://developers.google.com/search/docs/data-types/social-profile-links
https://developers.google.com/search/docs/data-types/articles

 

HTML5 Semantic elements

HTML5 semantic elements are probably not an SEO signal, but they help crawlers better understand the content on the website.

Use header, main and footer tags to divide body content into three sections.

Use header element inside the article to encapsulate h1, main image, category, etc. Use footer element inside the article to encapsulate author and tags.

Use aside element for social sharing icons and related articles.

Use section or aside element for related articles below the main article.

Use time element to represent article publication date.

Full example for a typical article page.
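A rough skeleton of such an article page, using the elements described above (the structure is illustrative):

```html
<body>
  <header><!-- site logo, navigation --></header>
  <main>
    <article>
      <header>
        <h1>Article title</h1>
        <time datetime="2017-08-01">August 1, 2017</time>
      </header>
      <p>Article content…</p>
      <footer><!-- author, tags --></footer>
    </article>
    <aside><!-- social sharing icons, related articles --></aside>
  </main>
  <footer><!-- site footer --></footer>
</body>
```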

Resources:

https://www.w3.org/WAI/tutorials/page-structure/regions/
http://blog.teamtreehouse.com/use-html5-sectioning-elements
https://developer.mozilla.org/hr/docs/Web/HTML/Element/main
http://html5doctor.com/the-main-element/
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/time
Moz, Kissmetrics, HTML5 doctor source code

HTTPS everywhere

HTTPS is already a ranking signal, and will only become stronger in the future.

In January 2017, Google started displaying “Not secure” warnings for HTTP pages that collect passwords or credit cards. Long term, Google will mark all HTTP sites as non-secure. This warning will lead to increased bounce rates and shorter user engagement, which are negative SEO signals.

Moreover, HTTPS (over HTTP/2) is faster than HTTP, and faster pages rank and convert better.

Since the rise of Let’s Encrypt, you can set up HTTPS for free, and what’s even better, you can fully automate certificate renewals.

Resources:

https://webmasters.googleblog.com/2014/08/https-as-ranking-signal.html
https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html
https://www.httpvshttps.com/

HTTP Status Codes

HTTP responses can signal to search engines that a page has moved permanently (301 Moved Permanently) or temporarily (302 Found) to a new URL. Until recently (mid-2016), a 301 redirect resulted in around a 15% loss of PageRank, and a 302 redirect didn’t pass any PageRank to the new URL. Since then, Google has stopped caring which redirection method you use: they all pass full PageRank. If you care about other search engines, the 301 redirect should still be the preferred method for permanent redirects.

Generally, 404 Not Found should be returned when a web server cannot find the requested URL. There are a couple of exceptions to that rule, but only if there is an alternative, highly relevant page available for a 301 redirect:

  • valuable links from external sources. You should first try to get the external sources to change the link, but that’s not always possible.
  • mistyped URLs
  • URLs that receive a large number of users

A 404 page should clearly notify the user that the page they were trying to reach doesn’t exist. It should also provide a link to the homepage, a clearly visible search box and easy-to-use navigation, so that users can potentially find what they were originally looking for.

Great example of 404 page which provides all the necessary information to users

Redirecting 404 pages to the homepage is a bad practice because most users will not realize that the web page they got is not the one they were trying to reach.

Resources:

https://moz.com/learn/seo/http-status-codes
https://moz.com/blog/301-redirection-rules-for-seo
https://support.google.com/webmasters/answer/6033049
https://moz.com/blog/are-404-pages-always-bad-for-seo

 

Mobile Optimization

Mobile devices typically have much slower connections and higher latency than desktops, so page load speed is crucial. On top of all other page speed optimizations, make sure you are serving images resized for smaller mobile resolutions.

Don’t use popups: they can be difficult and frustrating to close, which might lead to a higher bounce rate. Instead, use simple banners inline with the page’s content. Design for the fat finger: make sure there is enough space between links and buttons so that they are not tapped accidentally.

Search results with rich snippets are even more likely to stand out on mobile than on desktop. Also, there are rich snippet enhancements, like Carousels, that are available only on mobile.

Google recommends using responsive web design and serving both desktop and mobile on the same URL, instead of using separate mobile URLs.

Most mobile browsers do not render Flash. Use HTML5 instead.

Use the meta viewport tag to instruct the browser how to adjust the dimensions and scaling of the page to the width of the device:

<meta name="viewport" content="width=device-width, initial-scale=1">

Google recommends serving the same HTML, CSS and JavaScript to all devices, and this is a setup Google can detect automatically.

If the website has a local element, optimize mobile content for local search — see Optimize for local search section for more details.

Resources:

https://moz.com/learn/seo/mobile-optimization
https://developers.google.com/search/mobile-sites/

 

Optimize for local search

Local results appear for people who search for businesses and places near their location. To improve local ranking on Google and enhance presence in Search and Maps, provide complete and detailed business information in Google My Business.

Add accurate and appealing pictures to your listings to show people your goods and services.


Add the name, address and phone number to the website, and also to other websites like Yelp, Foursquare and social media sites. Make sure the information is consistent; watch for typos. Add the city and state to the title tag, H1 or other headings, URL, content, image alt tags and meta description. Embed a Google Map that points to your Google My Business listing.

High quality, positive reviews improve website visibility and increase the likelihood that a potential customer will visit the location. Encourage customers to leave a review and respond to reviews.

Resources:

https://support.google.com/business/answer/7091
https://moz.com/blog/everybody-needs-local-seo
http://searchengineland.com/local-seo-rank-local-business-218906

 

Multi-Language Websites

You can help Google determine the website language correctly by using a single language for content and navigation on each page, and by avoiding side-by-side translations. Keep the content for each language on separate URLs. Most often, you’ll want to use subdirectories or subdomains to separate multilingual content. A subdirectory structure is easier to set up and maintain, but less flexible: it uses a single server (or single cloud deployment) and makes it harder to separate the sites. If you want to target countries instead of languages, use country-specific top-level domains (.co.uk, .de, .es…).

Cross-link each language version of a page so that users can switch the language with a click or two.

Add the lang attribute to the html tag to declare the language of a web page. Google ignores this attribute, but Bing uses it, the W3C recommends it, and it helps some screen readers handle pronunciation properly.

Avoid automatic redirection based on the user’s perceived language. Instead, use the hreflang attribute to specify the relationship between translated pages; Google will use that information to serve the correct language URL in search results. Each language page should identify all the different language versions, including itself.

The value of the hreflang attribute identifies the language and, optionally, the region (e.g. en, en-gb, en-au). For language/country selector pages, or homepages shown for all languages, use hreflang="x-default".
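For example, an English/German site with a language selector page as the x-default would add these elements to the head of every language version (the URLs are illustrative):

```html
<link rel="alternate" hreflang="en" href="https://www.example.com/en/">
<link rel="alternate" hreflang="de" href="https://www.example.com/de/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
```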

Resources:

https://support.google.com/webmasters/answer/182192
https://support.google.com/webmasters/answer/189077
https://moz.com/learn/seo/hreflang-tag
https://moz.com/learn/seo/international-seo
https://moz.com/blog/the-international-seo-checklist
https://moz.com/blog/hreflang-behaviour-insights
https://blogs.bing.com/webmaster/2011/03/01/how-to-tell-bing-your-websites-country-and-language/

AJAX

Google’s crawler is getting better and better at crawling JavaScript content, but it still needs a little help finding its way around sites which use JavaScript for navigation. If possible, build your website with progressive enhancement in mind: first build your site’s structure and navigation using only HTML, then use AJAX to spice up the appearance and interface (Hijax).

Hash fragments that represent unique page states should begin with an exclamation mark, e.g.:

https://www.example.com/ajax.html#!mystate

You can add hash fragments to your link rel="canonical" and sitemap.xml if that is the preferred URL you want to appear in search results.

Resources:

https://support.google.com/webmasters/answer/81766
https://support.google.com/webmasters/answer/174993
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157

Page Load Speed

Page load speed is one of the confirmed signals that Google uses to rank pages.

According to some research, it seems that Google uses Time to First Byte (TTFB) as the page load time. Page load speed is also important from a user experience perspective: pages that load faster have lower bounce rates and higher average time on page. Therefore, both time to first byte and total page load time should be optimized.

Page speed optimization deserves its own article (I’ll write one soon, so stay tuned! :-)), but in the meantime check out the resources below.

Resources:

https://moz.com/learn/seo/page-speed
https://webmasters.googleblog.com/2010/04/using-site-speed-in-web-search-ranking.html
https://developers.google.com/speed/docs/insights/rules
https://moz.com/blog/how-website-speed-actually-impacts-search-ranking
https://moz.com/blog/15-tips-to-speed-up-your-website

This Is the End

Yay! You’ve made it to the end. Congratulations!

This guide is also available as Web Developer’s SEO Checklist — a handy interactive checklist you can use while you implement all these actions on your latest project.

As I have mentioned in the introduction, SEO is changing extremely fast. If you think that anything is missing from this guide, or that some of the recommendations in this guide are outdated, I would love to hear it. Just drop me a comment below.

Thank you for reading. If you like the article, please click the👏 below so others can find it too — it would really mean a lot to me. :)

 

Who are Five and Shoutem? Well, we are all about Mobile Apps. Five is a mobile design and development agency with offices in Croatia and NYC; and Shoutem is a platform for building apps.