Duplicate Content and Content Assist

What is duplicate content?

Duplicate content is content that appears, in identical or very similar form, at more than one URL. When people link to different versions of the same content, search engines have to decide which version to index and rank.

Is Twenty Over Ten’s Content Assist library considered duplicate content?

Absolutely not. The content within your Content Assist library is completely customizable, and we highly encourage all of our advisors to tailor the posts to their target audience, geographic location, and business.

You have full editing rights to ANY piece of content in the library.

For more information on best practices for tailoring your Content Assist articles to improve SEO, watch our video tutorial or read this article.

What if I don’t edit the Content Assist articles from the library? Will the content then be considered “duplicate”?

First, keep this in mind: Unlike some of our competitors, Twenty Over Ten does not automatically push out new pieces of content to ANY advisor’s website. This means that every single advisor/website owner must actively log in to the library and choose pieces of content to add to their own site.

Every advisor targets a different audience, so the content they choose will be specific to that audience. Of the 400+ pieces of content in the library, no single piece has ever been shared more than 200 times.

So, if I don’t edit the content pieces from Content Assist, will it hurt my website ranking?

According to Google, “duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results.”

In other words, if you are deliberately duplicating content across your website (both blog and main pages) in an attempt to manipulate search engine rankings or to win more traffic, then yes, Google may decide to de-rank your site. Read more on this from Google here.

The bottom line: Google's bots visit most sites every single day. If Googlebot finds a copied version of something a week later on another site, it is smart enough to know where the original first appeared. It doesn't get angry and penalize the site; it simply moves on.

I’ve decided to use duplicate content from a third-party provider. Should I block Google from indexing my duplicate content?

According to Google, they "do not recommend blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods."

Instead, Google recommends allowing its bots to crawl the duplicate URLs but marking them as duplicates by using the rel="canonical" link element. You can also consider adjusting the crawl rate setting in your Google Search Console.
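As a minimal sketch, here is what a rel="canonical" link element looks like. It goes inside the <head> of the page that carries the duplicate content and points to the URL you want search engines to treat as the original; the address below is a hypothetical placeholder, not a real page:

    <!-- In the <head> of the duplicate page; href is a placeholder URL -->
    <link rel="canonical" href="https://www.example.com/blog/original-post" />

With this in place, Google can still crawl the duplicate page but knows which version to show in search results.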
