Last week I polled the twitterverse about their knowledge of SEO (Search Engine Optimization).
> Given everything happening today, I'm genuinely wondering whether developers at large have a strong understanding of SEO. If you're a dev, do you feel you understand it? Canonical-urls? Duplicate content? Backlinks?
>
> — Laurie, 28 May 2019
With survey options of "yes", "no", and "some" the least common answer was yes! It seems that developers, for the most part, don't feel comfortable with the concepts at play in SEO.
Throughout the web, you'll find plenty of articles about how to handle SEO on your site. So instead, I wanted to talk about the way search engines work and what those SEO "rules" and recommendations are derived from.
The first thing to understand is that search engines crawl the internet looking for site pages. When a crawler finds something new, the search engine indexes the page content so that it can be part of search results. Once content is indexed, it can be ranked against other content for similar searches.
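Crawlers typically start by checking a site's robots.txt file to learn which pages they may visit, and many sites also point crawlers at a sitemap listing the pages they want indexed. As a sketch (the paths here are made up for illustration):

```text
# robots.txt — lets all crawlers visit everything except a private area
User-agent: *
Disallow: /admin/

# Points crawlers at a sitemap listing the pages you want indexed
Sitemap: https://example.com/sitemap.xml
```

This file lives at the root of the domain (e.g. `https://example.com/robots.txt`), and well-behaved crawlers fetch it before crawling anything else.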
The term SEO, or search engine optimization, refers to getting content to rank higher for given search terms. You're focused on the specific keywords someone would use to find your site. The higher the rank, the more traffic a site is likely to see, which can be a source of marketing, growth, or even income.
When creating content, you want it to be seen by the largest audience possible. That may mean posting the same content on different sites. By doing that you can hit multiple audiences.
But here is the snag. You want all of that traffic to count toward a single piece of content so that it ranks higher! This is where a canonical_url comes in. (This post is actually using one right now, pointing to my company blog, Ten Mile Square.)
If you look at analytics, you can still see the different sources of traffic. But for SEO purposes, there is a single "home" that should continue to rank higher in search results. This is particularly relevant if your content is republished on a high-ranking domain that would otherwise outrank your home domain. I'll dive into this more below.
If you're looking for how to implement this on your site, check out the specific methods.
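If you're implementing this by hand, the most common method is a link tag in the page head of each duplicate copy. Here's a minimal sketch, with example.com standing in for your "home" domain:

```html
<head>
  <!-- Tells search engines that the original version of this page
       lives at the URL below, so ranking signals consolidate there -->
  <link rel="canonical" href="https://example.com/original-post" />
</head>
```

Many blogging platforms expose this as a setting or front-matter field and generate the tag for you.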
You may have heard of backlinks and how critical they are to improving your content's ranking. It's awesome to have multiple sites referencing your post! The more there are, the more search engines are inclined to rank that post higher.
This is considered a "vote of confidence" from another site. Backlinks can be especially valuable if the site they appear on is highly ranked. Having a "home" for your page means that all of that internet goodwill is passed to one specific location, instead of being split across multiple copies of your content that compete with each other.
To dive more into backlinks and see how to benefit from them on your own site, check out this resource.
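For context, a backlink is just an ordinary link from someone else's page to yours. One wrinkle worth knowing: links marked `rel="nofollow"` are a hint to search engines not to pass along that vote of confidence. A sketch (URLs are illustrative):

```html
<!-- A plain link: counts as a "vote of confidence" for the target -->
<a href="https://example.com/original-post">Great write-up on SEO</a>

<!-- A nofollow link: hints that no ranking credit should be passed -->
<a href="https://example.com/original-post" rel="nofollow">Great write-up on SEO</a>
```

Many sites with user-generated content (comments, forums) add `nofollow` automatically, which is why not every link to your post carries the same weight.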
Now we get into the trickiest part of this post. It's important to know that there are different rules for duplicate content within a domain versus across different domains. I'm going to focus on the latter issue.
As we mentioned above, just like pages can rank, domains can rank as well.
There are a lot of factors that go into how domains rank, including load times, working links, etc. That could be a whole other post.
The more often a particular domain satisfies user queries, the higher it will rank. This is really important if content exists on two different sites. If you don't have that canonical_url, the search engine may decide that the higher ranking domain is the original source of the information, and not even rank your version of the content!
This may seem unfair, but it does make sense. Search engines are trying to match your query with unique, valuable content, and that means handling content that shows up in multiple places. It isn't useful to show you three results that all contain the same content.
Over the years, the way this is handled has gotten pretty sophisticated. There are even domains made up of nothing but duplicate content. In those cases, the search engine may decide that all of the content on the site can be found elsewhere and that it isn't worth indexing at all.
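If you intentionally host duplicate content — say, a mirror or a print-friendly version — and don't want it competing in search results at all, you can explicitly ask crawlers to leave it out of the index. A minimal sketch:

```html
<head>
  <!-- Asks search engines not to include this page in their index -->
  <meta name="robots" content="noindex" />
</head>
```

Unlike a canonical_url, which consolidates ranking signals onto a "home" page, `noindex` removes the page from search results entirely, so use it only when you truly don't want the page found.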
When people talk about "penalizing" duplicate content, this is normally what they mean. Some will say search engines don't penalize duplicate content but rather reward unique content. That's mostly semantics. Either way, it's important to understand that search engines are looking for valuable, unique information.
For a real deep dive into the complexities of this issue, this is a great post.
Over time search engines get better at judging content. This is usually a net positive, but sometimes it can have unwanted side effects. If you're looking to improve your SEO, or even understand it better, keep track of the latest rules. In fact, they just changed yesterday!
> Tomorrow, we are releasing a broad core algorithm update, as we do several times per year. It is called the June 2019 Core Update. Our guidance about such updates remains as we've covered before. Please see this tweet for more about that: twitter.com/searchliaison/…
>
> — Google SearchLiaison (@searchliaison), 2 Jun 2019

> Here's an update about updates -- updates to our search algorithms. As explained before, each day, Google usually releases one or more changes designed to improve our results. Most have little noticeable change but help us continue to incrementally improve search…
>
> — Google SearchLiaison (@searchliaison)