The Future of SEO: 2015 Ranking Factors Expert Survey Deep Dive

Posted by Cyrus-Shepard

Recently, Moz announced the results of our biennial Ranking Factors study. Today, we’d like to explore one of the most vital elements of the study: the Ranking Factors survey.

2015 Ranking Factors Expert Survey

Every two years, Moz surveys the brightest minds in SEO and search marketing with a comprehensive set of questions meant to gauge the current workings of Google’s search algorithm. This year’s panel of experts possesses a truly unique set of knowledge and perspectives. We’re thankful on behalf of the entire community for their contribution.

In addition to asking the participants about what does and doesn’t work in Google’s ranking algorithm today, one of the most illuminating groups of questions asks the panel to predict the future of search – how the features of Google’s algorithm are expected to change over the next 12 months.

Amazingly, almost all of the factors that are expected to increase in influence revolve around user experience, including:

  • Mobile-friendliness
  • Perceived value
  • Readability
  • …and more

The experts predicted that more traditional ranking signals, such as those around links and URL structures, would largely remain the same, while the more manipulative aspects of SEO, like paid links and easily manipulated anchor text, would decrease in influence.

The survey also asks respondents to weight the importance of various factors within Google’s current ranking algorithm (on a scale of 1-10). Understanding these areas of importance helps webmasters and marketers decide where to invest time and energy to improve the search presence of their websites.

On-page keyword features

These features describe use of the keyword term/phrase in particular parts of the HTML code on the page (title element, H1s, alt attributes, etc).

Highest influence: Keyword present in title element, 8.34
Lowest influence: Keyword present in specific HTML elements (bold/italic/li/a/etc), 4.16

Titles are still very powerful. Overall, it’s about focus and matching query syntax. If your post is about airplane propellers but you go on a three paragraph rant about gorillas, you’re going to have a problem ranking for airplane propellers.

AJ Kohn

Keyword usage is vital to making the cut, but we don’t always see it correlate with ranking, because we’re only looking at what already made the cut. The page has to be relevant to appear for a query, IMO, but when it comes to how high the page ranks once it’s relevant, I think keywords have less impact than they once did. So, it’s a necessary but not sufficient condition to ranking.

Peter Meyers

In my experience, most problems with organic visibility are related to on-page factors. When I look for an opportunity, I try to check for two strong things: presence of the keyword in the title and in the main content. Having both can speed up your visibility, especially on long-tail queries.

Fabio Ricotta


Domain-level keyword features

These features cover how keywords are used in the root or subdomain name, and how much impact this might have on search engine rankings.

Highest influence: Keyword is the exact match root domain name, 5.83
Lowest influence: Keyword is the domain extension, 2.55

The only domain/keyword factor I’ve seen really influence rankings is an exact match. Subdomains, partial match, and others appear to have little or no effect.

Ian Lurie

There’s no direct influence, but an exact match root domain name can definitely lead to a higher CTR within the SERPs and therefore a better ranking in the long term.

Marcus Tandler

It’s very easy to link keyword-rich domains with their success in Google’s results for the given keyword. I’m always mindful of other signals that align with the domain name and may have contributed to its success. These include inbound links, mentions, and local citations.

Dan Petrovic


Page-level link-based features

These features describe link metrics for the individual ranking page (such as number of links, PageRank, etc).

Highest influence: Raw quantity of links from high-authority sites, 7.78
Lowest influence: Sentiment of the external links pointing to the page, 3.85

High-quality links still rule rankings. The way a brand can earn links has become more important over the years, whereas link schemes can hurt a site more than ever before. There is a lot of FUD slinging in this respect!

Dennis Goedegebuure

Similar to my thoughts on content, I suspect link-based metrics are going to be used increasingly with a focus on verisimilitude (whether content is actually true or not) and relationships between nodes in Knowledge Graph. Google’s recent issues with things, such as the snippet results for “evolution,” highlight the importance of them only pulling things that are factually correct for featured parts of a SERP. Thus, just counting traditional link metrics won’t cut it anymore.

Pete Wailes

While anchor text is still a powerful ranking factor, using targeted anchor text carries a significant amount of risk and can easily wipe out your previous success.

Geoff Kenyon


Domain-level brand features

These features describe elements that indicate qualities of branding and brand metrics.

Highest influence: Search volume for the brand/domain, 6.54
Lowest influence: Popularity of business’s official social media profiles, 3.99

This is clearly on deck to change very soon with the reintegration of Twitter into Google’s Real-Time Results. It will be interesting to see how this affects the “Breaking News” box and trending topics. Social influencers, quality and quantity of followers, RTs, and favorites will all be a factor. And what’s this?! Hashtags will be important again?! Have mercy!

Marshall Simmonds

Google has to give the people what they want, and if most of the time they are searching for a brand, Google is going to give them that brand. Google doesn’t have a brand bias, we do.

Russ Jones

It’s already noticeable; brands are more prominently displayed in search results for both informational and commercial queries. I’m expecting Google will be paying more attention to brand-related metrics from now on (and certainly more initiatives to encourage site owners to optimize for better entity detection).

Jason Acidre

Page-level social features

These features relate to third-party metrics from social media sources (Facebook, Twitter, Google+, etc) for the ranking page.

Highest influence: Engagement with content/URL on social networks, 3.87
Lowest influence: Upvotes for the page on social sites, 2.7

Social ranking factors are important in a revamped Query Deserves Freshness algorithm. Essentially, if your content gets a lot of natural tweets, shares, and likes, it will rank prominently for a short period of time, until larger and more authoritative sites catch up.

Dev Basu

Social popularity has several factors to consider: (1) Years ago, Google and Bing said they take into account the authority of a social profile sharing a link and the popularity of the link being shared (retweets/reshares), and there was more complexity to social signals that was never revealed even back then. (2) My experience has been that social links and shares have more power for newsy/fresh-type content. For example, a lot of social shares for a dentist’s office website wouldn’t be nearly as powerful (or relevant to consider) as a lot of social shares for an article on a site with a constant flow of fresh content.

Laura Lippay

Honestly, I do not think that the so-called “social signals” have any direct influence on the Google Algorithm (that does not mean that a correlation doesn’t exist, though). My only doubt is related to Twitter, because of the renewed contract between Google and Twitter itself. That said, as of now I do not consider Twitter to offer any ranking signals, except for very specific niches related to news and “news-able” content, where QDF plays a fundamental role.

Gianluca Fiorelli


Page-level keyword-agnostic features

These elements describe non-keyword-usage, non-link-metrics features of individual pages (such as length of the page, load speed, etc).

Highest influence: Uniqueness of the content on the page, 7.85
Lowest influence: Page contains Open Graph data and/or Twitter cards, 3.64

Now that Google has branched mobile search off of its core ranking algorithm, having a “mobile-friendly” website is probably less important for desktop search rankings. Our clients are seeing an ever-increasing percentage of organic search traffic coming from mobile devices, though (particularly in retail), so this is certainly not an excuse to ignore responsive design – the opposite, in fact. Click-through rate from the SERPs has been an important ranking signal for a long time and continues to be, flagging irrelevant or poor-quality search listings.

Rob Kerry

I believe many of these will be measured within the ecosystem, rather than absolutely. For example, the effect of bounce rate (or rather, bounce speed) on a site will be relative to the bounce speeds on other pages in similar positions for similar terms.

Dan Barker

I want to answer these a certain way because, while I have been told by Google what matters to them, what I see in the SERPs does not back up what Google claims they want. There are a lot of sites out there with horrible UX that rank in the top three. While I believe it’s really important for conversion and to bring customers back, I don’t feel as though Google is all that concerned, based on the sites that rank highly. Additionally, Google practically screams “unique content,” yet sites that more or less steal and republish content from other sites are still ranking highly. What I think should matter to Google doesn’t seem to matter to them, based on the results they give me.

Melissa Fach


Domain-level link authority features

These features describe link metrics about the domain hosting the page.

Highest influence: Quantity of unique linking domains to the domain, 7.45
Lowest influence: Sentiment of the external links pointing to the site, 3.91

Quantity and quality of unique linking domains at the domain level is still among the most significant factors in determining how a domain will perform as a whole in the organic search results, and is among the best SEO “spot checks” for determining if a site will be successful relative to other competitor sites with similar content and selling points.

Todd Malicoat

Throughout this survey, when I say “no direct influence,” this is interchangeable with “no direct positive influence.” For example, I’ve marked exact match domain as low numbers, while their actual influence may be higher – though negatively.

Kirsty Hulse

Topical relevancy has, in my opinion, gained much ground as a relevant ranking factor. Although I find it most at play when at page level, I am seeing significant shifts at overall domain relevancy, by long-tail growth or by topically-relevant domains linking to sites. One way I judge such movements is the growth of the long-tail relevant to the subject or ranking, when neither anchor text (exact match or synonyms) nor exact phrase is used in a site’s content, yet it still ranks very highly for long-tail and mid-tail synonyms.

Rishi Lakhani


Domain-level keyword-agnostic features

These features relate to the entire root domain, but don’t directly describe link- or keyword-based elements. Instead, they relate to things like the length of the domain name in characters.

Highest influence: Uniqueness of content across the whole site, 7.52
Lowest influence: Length of time until domain name expires, 2.45

Character length of domain name is another correlative yet not causative factor, in my opinion. They don’t need to rule these out – it just so happens that longer domain names get clicked on less, so they get ruled out quickly.

Ross Hudgens

A few points: Google’s document inception date patents describe how Google might handle freshness and maturity of content for a query. The “trust signal” pages sound like a site quality metric that Google might use to score a page on the basis of site quality. Some white papers from Microsoft on web spam signals identified multiple hyphens in subdomains as evidence of web spam. The length of time until the domain expires was cited as a potential signal in Google’s patent on information retrieval through historic data, and was refuted by Matt Cutts after domain sellers started trying to use that information to sell domain extensions to “help the SEO” of a site.

Bill Slawski

I think that page speed only becomes a factor when it is significantly slow. I think that having error pages on the site doesn’t matter, unless there are so many that it greatly impacts Google’s ability to crawl.

Marie Haynes


The future of search

To bring it back to the beginning, we asked the experts for any final comments, and for alternative signals they think will become more or less important over the next 12 months.

While I expect that static factors, such as incoming links and anchor text, will remain influential, I think the power of these will be mediated by the presence or absence of engagement factors.

Sha Menz

The app world and webpage world are getting lumped together. If you have the more popular app relative to your competitors, expect Google to notice.

Simon Abramovitch

Mobile will continue to increase, with directly-related factors increasing as well. Structured data will increase, along with more data partners and user segmentation/personalization of SERPs to match query intent, localization, and device-specific need states.

Rhea Drysdale

User location may have more influence in mobile SERPs as (a) more connected devices like cars and watches allow voice search, and (b) sites evolve accordingly to make such signals more accurate.

Aidan Beanland

I really think that over the next 12-18 months we are going to see a larger impact of structured data in the SERPs. In fact, we are already seeing this. Google has teams that focus on artificial intelligence and machine learning. They are studying “relationships of interest” and, at the heart of what they are doing, are still looking to provide the most relevant result in the quickest fashion. Things like schema that help “educate” the search engines as to a given topic or entity are only going to become more important as a result.

Jody Nimetz


For more data, check out the complete Ranking Factors Survey results.

2015 Ranking Factors Expert Survey

Finally, we leave you with this infographic created by Kevin Engle which shows the relative weighting of broad areas of Google’s algorithm, according to the experts.

What’s your opinion on the future of search and SEO? Let us know in the comments below.


 

Announcing the 2015 Search Engine Ranking Factors Study

Posted by Cyrus-Shepard

We’re excited to announce the results of Moz’s biennial Search Engine Ranking Correlation Study and Expert Survey, a.k.a. Ranking Factors.

Moz’s Ranking Factors study helps identify which attributes of pages and sites have the strongest association with ranking highly in Google. The study consists of two parts: a survey of professional SEOs and a large correlation study.

This year, with the help of Moz’s data scientist Dr. Matt Peters, new data partners, and over 150 search marketing professionals, we were able to study more data points than in any year past. Altogether, we measured over 170 correlations and collected over 15,000 data points from our panel of SEO experts.

Ready to dig in?

2015 Ranking Factors Study

We want to especially thank our data partners. SimilarWeb, Ahrefs, and DomainTools each gave us unparalleled access, and their data was essential to helping make this study a success. It’s amazing and wonderful when different companies—even competitors—can come together for the advancement of knowledge.

You can see all of our findings within the study now. In the coming days and weeks we’ll dive into deeper analysis as to what we can learn from these correlations.

Search Engine Ranking Correlation Study

Moz’s Ranking Correlation Study measures which attributes of pages and websites are associated with higher rankings in Google’s search results. This means we look at characteristics such as:

  • Keyword usage
  • Page load speed
  • Anchor text
  • …and over 170 other attributes

To be clear, the study doesn’t tell us if Google actually uses these attributes in its core ranking algorithm. Instead, it shows which features of pages and sites are most associated with higher rankings. It’s a fine, but important, distinction.

While correlation studies can’t prove or disprove which attributes Google considers in its algorithm, they do provide valuable hints. In fact, many would argue that correlation studies are even more important than causation when working with today’s increasingly complex algorithms.

For the study, Dr. Peters examined the top 50 Google results of 16,521 search queries, resulting in over 700,000 unique URLs. You can read about the full methodology here.

Here’s a sample of our findings:

Example: Page-Level Link-Based Features

The features in the chart below describe link metrics to the individual ranking page (such as number of links, PageRank, etc.) and their correlation to higher rankings in Google.

Despite rumors to the contrary, links continue to show one of the strongest associations with higher rankings out of all the features we studied. While this doesn’t prove how Google uses links in its algorithm, this information combined with statements from Google and the observations of many professional marketers leads us to strongly believe that links remain hugely important for SEO.

Link-based features were only one of the feature categories we examined. The complete correlation study includes 12 different categories of data.

10 Ranking Factors summary findings

  1. We continue to see lower correlations between on-page keyword use and rankings. This could likely be because Google is smarter about what pages mean (through related keywords, synonyms, close variants, and entities) without relying on exact keyword phrases. We believe matching user intent is of the utmost importance.
  2. While page length, hreflang use, and total number of links all show moderate association with Google rankings, we found that using HTTPS has a very low positive correlation. This could indicate it’s the “tie-breaker” Google claims. Negatively associated factors include server response time and the total length of the URL.
  3. Despite rumors to the contrary, the data continues to show some of the highest correlations between Google rankings and the number of links to a given page.
  4. While there exists a decent correlation between exact-match domains (domains where the keyword matches the domain exactly, e.g. redwidgets.com) and rankings, this is likely due to the prominence of anchor text, keyword usage, and other signals, instead of an algorithmic bias in favor of these domains.
  5. Our study showed little relationship between the type of top-level domain (.com, .org, etc.) and rankings in Google.
  6. While not quite as strong as page-level link metrics, the overall links to a site’s root and subdomain showed a reasonably strong correlation to rankings. We believe links continue to play a prominent role in Google’s algorithm.
  7. Use of anchor text was another prominent feature of high-ranking results, with the number of unique domains linking with partial-match anchor text leading the way.
  8. Always controversial, the number of social shares a page accumulates tends to show a positive correlation with rankings. Although there is strong reason to believe Google doesn’t use social share counts directly in its algorithm, there are many secondary SEO benefits to be gained through successful social sharing.
  9. Time until domain registration expiration was moderately correlated with higher rankings, while private registration showed a small negative correlation.
  10. Engagement metrics from SimilarWeb showed that pages with lower bounce rates, higher pageviews, and better time-on-site metrics were associated with higher rankings.

Ranking Factors Expert Survey

While correlation data can provide valuable insight into the workings of Google’s algorithm, we often learn much more by gathering the collective wisdom of search marketing experts working at the top of their game.

For this reason, every two years we conduct the Ranking Factors Expert Survey.

The survey itself is famously grueling: over 100 questions covering every aspect of Google’s ranking algorithm. This year, we sent the invitation-only survey to 150 industry professionals.

Stay tuned for a deeper dive into the Expert Survey later this week. We’re honored to have the participation of so many knowledgeable professionals.

In the meantime, you can freely view all the findings and results right now:

2015 Ranking Factors Study

Ranking Factors wouldn’t be possible without the contribution of dozens of very talented people, but we’d especially like to thank Dr. Matt Peters, Kevin Engle, Rand Fishkin, Casey Coates, Trevor Klein, and Kelly Cooper for their efforts, along with our data partners and all the survey participants.

What ranking factors or correlations stand out to you? Leave your thoughts in the comments below.


 

Moz Ranking Factors Preview

Posted by EricEnge

How do the SERPs for commercial queries differ from those for informational queries? Moz is about to publish its new Search Engine Ranking Factors study, and was kind enough to provide me with access to their raw ranking data. Today I am going to share some of what I found.

In addition, I am going to compare it against raw ranking data pulled by my company, Stone Temple Consulting (STC). What makes this so interesting is that the Moz data is based on commercial queries across 165,177 pages, and the STC data is based on informational queries across 182,340 pages (347,517 result pages in total). Let’s dive in!

Mobile-friendliness

Google rolled out their Mobile-Friendly Update on April 21 to much fanfare. We published our study results on how big that impact was here, and in that test, we tracked a set of 15,235 SERPs both before and after the update.

The following chart shows the percentage of the top 10 results in the SERPs that are mobile-friendly for the Moz (commercial) queries and the STC (informational) queries, before and after the mobile update:

Clearly, the commercial queries are returning a much larger percentage of mobile-friendly results than the informational queries. Much of this may be because having a mobile-friendly site matters more to people running e-commerce sites.

What this suggests to us is that publishers of E-commerce sites have been faster to adopt mobile friendliness than publishers of informational sites. That makes sense. Of course, our friends at Google know this is more important for commercial queries, too.

Your actionable takeaway

Regardless of query type, you can see that more than 60% of the results meet Google’s current definition for mobile friendliness. For commercial queries, it’s nearly 3/4 of them. Obviously, if you are not currently mobile friendly, then solve that, but that’s not the whole story.

Over time, I believe that what is considered mobile friendly is going to change. The mobile world will become much more than just viewing your current desktop site with a smaller screen and a crappier keyboard. What are some more things you can expect in the long term?

  1. Different Site Architectures for Desktop and Mobile: I am of the opinion that the entire workflow may be different for many mobile sites.
  2. Voice navigation: We will stop seeing the keyboard as the primary navigation option for a mobile site.
  3. Continuing Rise of Apps: The prior two points may be a large factor in driving this.

My third point is an item that is already in progress, and the first two are really not for most people at this time. However, I put them out there to stimulate some thinking that much more is going to happen in this space than meets the eye. In the short term, what can you do?

My suggestion is that you start looking at the mobile version of your site as more than a different rendering of your desktop site. What are the different use cases between mobile and desktop? Consider running two surveys of your users, one of desktop users and one of smartphone users, and ask them what they are looking for, and what they would like to see. My bet is that you will quickly see that the use cases are different in material ways.

In the near term, you can leverage this information to make your mobile site optimization work better for users, probably without re-architecting it entirely. In the longer term, collecting this type of data will prepare you for considering more radical design differences between your desktop and mobile sites.

HTTP vs. HTTPS

Another one of the newer ranking factors is whether or not a site uses HTTPS. Just this past July 22, Google’s Gary Illyes again clarified that this is a minor ranking signal that acts like a tiebreaker in cases where the rankings of two competing pages are “more or less equal.”

How has that played out in the SERPs? Let’s take a look:

As with mobile-friendliness, we once again see far more HTTPS results for the commercial queries than for the informational queries. Yet the penetration levels are clearly far lower than they are for mobile-friendliness. So should I care about this, then?

Yes, it matters. Here are three reasons why:

  1. At SMX Advanced, Google’s Gary Illyes indicated that they plan to increase the strength of HTTPS as a ranking factor over time.
  2. Google’s Chrome browser is making moves toward warning users who visit non-secure websites.
  3. Mozilla has also laid plans to follow suit.

Yes, I know there is much debate about whether or not you need to have HTTPS if all you are doing is running a content site. But a lot of big players out there are taking a simple stance: that it’s time for the plain text web to come to an end.

The big thing that HTTPS helps prevent is man-in-the-middle attacks. Do read the linked article if you don’t know what that is. Basically, though, when you communicate with a non-secure website, it’s pretty trivial for someone to intercept the communication and monitor or alter the information flowing between you and the sending website.

The most trivial form of this can occur any time you connect to a third-party Wi-Fi network. People can inject ads you don’t want, or simply monitor everything you do and build a profile about you. Is that what you want?

Let me offer a simple example: Have you ever connected to Wi-Fi in a hotel? What’s the first thing that happens? You try to go to a website, but instead you get a login screen asking for your room number and last name to sign in – and most times they charge you some fee.

That’s the concept – you tried to go to a website, and instead got served different content (the Wi-Fi login screen). The hotel can do this at any time. Even after you log in and pay their fee, they can intercept your communication with other websites and modify the content. A simple application of this is to inject ads. They can also monitor and keep a record of every site you visit. They can do this because they are in the middle.

In an HTTPS world, they will still be able to intercept the initial connection, but once you are connected, they will no longer be able to see the content going back and forth between you and the HTTPS websites you choose to access.

Your actionable takeaway

Eventually, the plain text web will come to an end. As this movement grows, more and more publishers will make the switch to HTTPS, and Google will dial up the strength of this signal as a ranking factor. If you have not made the switch, then get it into your longer term plans.
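
When you do make the switch, the core technical step is a site-wide 301 redirect from HTTP to HTTPS, so that links and rankings consolidate on a single version of each URL. Here is a minimal sketch, assuming a Node/Express front end behind a proxy or CDN (neither is implied by the article; most hosts and CDNs expose an equivalent setting):

```typescript
// Hypothetical Express app that 301-redirects all HTTP traffic to HTTPS.
import express from "express";

const app = express();

// Behind a load balancer or CDN, trust X-Forwarded-Proto so req.secure is accurate.
app.set("trust proxy", true);

app.use((req, res, next) => {
  if (!req.secure) {
    // Permanent redirect consolidates signals on the HTTPS version of each URL.
    res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
    return;
  }
  next();
});

app.get("/", (_req, res) => {
  res.send("Served over HTTPS");
});

app.listen(3000);
```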

Summary

Both mobile-friendliness and HTTPS support appear to matter more to commercial sites today. I tend to think that this is less the impact of the related Google algorithms and more a result of e-commerce publishers having made these conversions faster than publishers of informational sites. Regardless, the importance of both of these factors will grow, and it would be wise to prepare aggressively for the future.


 

Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking – Whiteboard Friday

Posted by randfish

There’s no doubt that quite a bit has changed about SEO, and that the field is far more integrated with other aspects of online marketing than it once was. In today’s Whiteboard Friday, Rand pushes back against the idea that effective modern SEO doesn’t require any technical expertise, outlining a fantastic list of technical elements that today’s SEOs need to know about in order to be truly effective.

Why Effective, Modern SEO Requires Technical, Creative, and Strategic Thinking - Whiteboard Friday

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week I’m going to do something unusual. I don’t usually point out these inconsistencies or sort of take issue with other folks’ content on the web, because I generally find that that’s not all that valuable and useful. But I’m going to make an exception here.

There is an article by Jayson DeMers, who I think might actually be here in Seattle — maybe he and I can hang out at some point — called “Why Modern SEO Requires Almost No Technical Expertise.” It was an article that got a shocking amount of traction and attention. On Facebook, it has thousands of shares. On LinkedIn, it did really well. On Twitter, it got a bunch of attention.

Some folks in the SEO world have already pointed out some issues around this. But because of the increasing popularity of this article, and because I think there’s, like, this hopefulness from worlds outside of kind of the hardcore SEO world that are looking to this piece and going, “Look, this is great. We don’t have to be technical. We don’t have to worry about technical things in order to do SEO.”

Look, I completely get the appeal of that. I did want to point out some of the reasons why this is not so accurate. At the same time, I don’t want to rain on Jayson, because I think that it’s very possible he’s writing an article for Entrepreneur, maybe he has sort of a commitment to them. Maybe he had no idea that this article was going to spark so much attention and investment. He does make some good points. I think it’s just really the title and then some of the messages inside there that I take strong issue with, and so I wanted to bring those up.

First off, some of the good points he did bring up.

One, he wisely says, “You don’t need to know how to code or to write and read algorithms in order to do SEO.” I totally agree with that. If today you’re looking at SEO and you’re thinking, “Well, am I going to get more into this subject? Am I going to try investing in SEO? But I don’t even know HTML and CSS yet.”

Those are good skills to have, and they will help you in SEO, but you don’t need them. Jayson’s totally right. You don’t have to have them, and you can learn and pick up some of these things, and do searches, watch some Whiteboard Fridays, check out some guides, and pick up a lot of that stuff later on as you need it in your career. SEO doesn’t have that hard requirement.

And secondly, he makes an intelligent point that we’ve made many times here at Moz, which is that, broadly speaking, a better user experience is well correlated with better rankings.

If you make a great website that delivers a great user experience, that provides the answers to searchers’ questions and gives them extraordinarily good content, way better than what’s out there already in the search results, then generally speaking you’re going to see happy searchers, and that’s going to lead to higher rankings.

But not entirely. There are a lot of other elements that go in here. So I’ll bring up some frustrating points around the piece as well.

First off, there’s no acknowledgment — and I find this a little disturbing — that the ability to read and write code, or even HTML and CSS, which I think are the basic place to start, is helpful or can take your SEO efforts to the next level. I think both of those things are true.

So being able to look at a web page, view source on it, or pull up Firebug in Firefox or something and diagnose what’s going on and then go, “Oh, that’s why Google is not able to see this content. That’s why we’re not ranking for this keyword or term, or why even when I enter this exact sentence in quotes into Google, which is on our page, this is why it’s not bringing it up. It’s because it’s loading it after the page from a remote file that Google can’t access.” These are technical things, and being able to see how that code is built, how it’s structured, and what’s going on there, very, very helpful.

Some coding knowledge also can take your SEO efforts even further. I mean, so many times, SEOs are stymied by the conversations that we have with our programmers and our developers and the technical staff on our teams. When we can have those conversations intelligently, because at least we understand the principles of how an if-then statement works, or what software engineering best practices are being used, or they can upload something into a GitHub repository, and we can take a look at it there, that kind of stuff is really helpful.

Secondly, I don’t like that the article overly reduces all of this information that we have about what we’ve learned about Google. So he mentions two sources. One is things that Google tells us, and the other is SEO experiments. I think both of those are true. Although I’d add that there’s sort of a sixth sense of knowledge that we gain over time from looking at many, many search results and kind of having this feel for why things rank, and what might be wrong with a site, and getting really good at that using tools and data as well. There are people who can look at Open Site Explorer and then go, “Aha, I bet this is going to happen.” They can look, and 90% of the time they’re right.

So he boils this down to, one, write quality content, and two, reduce your bounce rate. Neither of those things are wrong. You should write quality content, although I’d argue there are lots of other forms of quality content that aren’t necessarily written — video, images and graphics, podcasts, lots of other stuff.

And secondly, that just doing those two things is not always enough. So you can see, like many, many folks look and go, “I have quality content. It has a low bounce rate. How come I don’t rank better?” Well, your competitors, they’re also going to have quality content with a low bounce rate. That’s not a very high bar.

Also, frustratingly, this really gets in my craw. I don’t think “write quality content” means anything. You tell me. When you hear that, to me that is a totally non-actionable, non-useful phrase that’s a piece of advice that is so generic as to be discardable. So I really wish that there was more substance behind that.

The article also makes, in my opinion, the totally inaccurate claim that modern SEO really is reduced to “the happier your users are when they visit your site, the higher you’re going to rank.”

Wow. Okay. Again, I think broadly these things are correlated. User happiness and rank is broadly correlated, but it’s not a one to one. This is not like a, “Oh, well, that’s a 1.0 correlation.”

I would guess that the correlation is probably closer to like the page authority range. I bet it’s like 0.35 or something correlation. If you were to actually measure this broadly across the web and say like, “Hey, were you happier with result one, two, three, four, or five,” the ordering would not be perfect at all. It probably wouldn’t even be close.

There’s a ton of reasons why sometimes someone who ranks on Page 2 or Page 3 or doesn’t rank at all for a query is doing a better piece of content than the person who does rank well or ranks on Page 1, Position 1.

Then the article suggests five and sort of a half steps to successful modern SEO, which I think is a really incomplete list. So Jayson gives us:

  • Good on-site experience
  • Writing good content
  • Getting others to acknowledge you as an authority
  • Rising in social popularity
  • Earning local relevance
  • Dealing with modern CMS systems (which he notes most modern CMS systems are SEO-friendly)

The thing is there’s nothing actually wrong with any of these. They’re all, generally speaking, correct, either directly or indirectly related to SEO. The one about local relevance, I have some issue with, because he doesn’t note that there’s a separate algorithm for sort of how local SEO is done and how Google ranks local sites in maps and in their local search results. Also not noted is that rising in social popularity won’t necessarily directly help your SEO, although it can have indirect and positive benefits.

I feel like this list is super incomplete. Okay, I brainstormed just off the top of my head in the 10 minutes before we filmed this video a list. The list was so long that, as you can see, I filled up the whole whiteboard and then didn’t have any more room. I’m not going to bother to erase and go try and be absolutely complete.

But there’s a huge, huge number of things that are important, critically important for technical SEO. If you don’t know how to do these things, you are sunk in many cases. You can’t be an effective SEO analyst, or consultant, or in-house team member, because you simply can’t diagnose the potential problems, rectify those potential problems, identify strategies that your competitors are using, be able to diagnose a traffic gain or loss. You have to have these skills in order to do that.

I’ll run through these quickly, but really the idea is just that this list is so huge and so long that I think it’s very, very, very wrong to say technical SEO is behind us. I almost feel like the opposite is true.

We have to be able to understand things like:

  • Content rendering and indexability
  • Crawl structure, internal links, JavaScript, Ajax. If something’s post-loading after the page and Google’s not able to index it, or there are links that are accessible via JavaScript or Ajax, maybe Google can’t necessarily see those or isn’t crawling them as effectively, or is crawling them, but isn’t assigning them as much link weight as they might be assigning other stuff, and you’ve made it tough to link to them externally, and so they can’t crawl it.
  • Disabling crawling and/or indexing of thin or incomplete or non-search-targeted content. We have a bunch of search results pages. Should we use rel=prev/next? Should we robots.txt those out? Should we disallow from crawling with meta robots? Should we rel=canonical them to other pages? Should we exclude them via the protocols inside Google Webmaster Tools, which is now Google Search Console?
  • Managing redirects, domain migrations, content updates. A new piece of content comes out, replacing an old piece of content, what do we do with that old piece of content? What’s the best practice? It varies by different things. We have a whole Whiteboard Friday about the different things that you could do with that. What about a big redirect or a domain migration? You buy another company and you’re redirecting their site to your site. You have to understand things about subdomain structures versus subfolders, which, again, we’ve done another Whiteboard Friday about that.
  • Proper error codes, downtime procedures, and not found pages. If your 404 pages turn out to all be 200 pages, well, now you’ve made a big error there, and Google could be crawling tons of 404 pages that they think are real pages, because you’ve made it a status code 200, or you’ve used a 404 code when you should have used a 410, which is a permanently removed, to be able to get it completely out of the indexes, as opposed to having Google revisit it and keep it in the index.

Downtime procedures. So there’s specifically a… I can’t even remember. It’s a 5xx code that you can use. Maybe it was a 503 or something that you can use that’s like, “Revisit later. We’re having some downtime right now.” Google urges you to use that specific code rather than using a 404, which tells them, “This page is now an error.”

Disney had that problem a while ago, if you guys remember, where they 404ed all their pages during an hour of downtime, and then their homepage, when you searched for Disney World, was, like, “Not found.” Oh, jeez, Disney World, not so good.
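
To make those status codes concrete, here is a minimal sketch, assuming a hypothetical Node/Express app (the transcript doesn’t name any particular stack; any server or CDN can be configured the same way): permanently removed content answers with a 410, genuinely missing URLs answer with a real 404, and planned downtime answers with a 503 plus a Retry-After header so crawlers know to come back.

```typescript
// Hypothetical Express app illustrating the status codes discussed above.
import express from "express";

const app = express();
const MAINTENANCE_MODE = false; // flip to true during planned downtime

// Planned downtime: answer 503 with Retry-After so crawlers come back later
// instead of treating every URL as an error.
app.use((_req, res, next) => {
  if (MAINTENANCE_MODE) {
    res.status(503).set("Retry-After", "3600").send("Down for maintenance. Back soon.");
    return;
  }
  next();
});

// Permanently removed content: 410 tells crawlers to drop the URL from the index.
app.get("/retired-campaign-page", (_req, res) => {
  res.status(410).send("This page has been permanently removed.");
});

// Everything else that doesn't exist must return a real 404,
// never a "soft 404" page served with status 200.
app.use((_req, res) => {
  res.status(404).send("Not found");
});

app.listen(3000);
```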

  • International and multi-language targeting issues. I won’t go into that. But you have to know the protocols there. Duplicate content, syndication, scrapers. How do we handle all that? Somebody else wants to take our content, put it on their site, what should we do? Someone’s scraping our content. What can we do? We have duplicate content on our own site. What should we do?
  • Diagnosing traffic drops via analytics and metrics. Being able to look at a rankings report, being able to look at analytics connecting those up and trying to see: Why did we go up or down? Did we have less pages being indexed, more pages being indexed, more pages getting traffic less, more keywords less?
  • Understanding advanced search parameters. Today, just today, I was checking out the related parameter in Google, which is fascinating for most sites. Well, for Moz, weirdly, related:oursite.com shows nothing. But for virtually every other site, well, most other sites on the web, it does show some really interesting data, and you can see how Google is connecting up, essentially, intentions and topics from different sites and pages, which can be fascinating, could expose opportunities for links, could expose understanding of how they view your site versus your competition or who they think your competition is.

Then there are tons of parameters, like inurl: and inanchor:, and da, da, da, da. Inanchor: doesn’t work anymore, never mind about that one.

I have to go faster, because we’re just going to run out of these. Like, come on. Interpreting and leveraging data in Google Search Console. If you don’t know how to use that, Google could be telling you, you have all sorts of errors, and you don’t know what they are.

  • Leveraging topic modeling and extraction. Using all these cool tools that are coming out for better keyword research and better on-page targeting. I talked about a couple of those at MozCon, like MonkeyLearn. There’s the new Moz Context API, which will be coming out soon, around that. There’s the Alchemy API, which a lot of folks really like and use.
  • Identifying and extracting opportunities based on site crawls. You run a Screaming Frog crawl on your site and you’re going, “Oh, here’s all these problems and issues.” If you don’t have these technical skills, you can’t diagnose that. You can’t figure out what’s wrong. You can’t figure out what needs fixing, what needs addressing.
  • Using rich snippet format to stand out in the SERPs. This is just getting a better click-through rate, which can seriously help your site and obviously your traffic.
  • Applying Google-supported protocols like rel=canonical, meta description, rel=prev/next, hreflang, robots.txt, meta robots, x robots, NOODP, XML sitemaps, rel=nofollow. The list goes on and on and on. If you’re not technical, you don’t know what those are, you think you just need to write good content and lower your bounce rate, it’s not going to work.
  • Using APIs from services like AdWords or Mozscape, or Ahrefs or Majestic, or SEMrush, or the Alchemy API (see the hypothetical sketch after this list). Those APIs can do powerful things for your site. There are some powerful problems they could help you solve if you know how to use them. It’s actually not that hard to write something, even inside a Google Doc or Excel, to pull from an API and get some data in there. There’s a bunch of good tutorials out there. Richard Baxter has one, Annie Cushing has one, I think Distilled has some. So really cool stuff there.
  • Diagnosing page load speed issues, which goes right to what Jayson was talking about. You need that fast-loading page. Well, if you don’t have any technical skills, you can’t figure out why your page might not be loading quickly.
  • Diagnosing mobile friendliness issues
  • Advising app developers on the new protocols around App deep linking, so that you can get the content from your mobile apps into the web search results on mobile devices. Awesome. Super powerful. Potentially crazy powerful, as mobile search is becoming bigger than desktop.
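
To give a flavor of the API point above, here is a minimal sketch that pulls link metrics for a handful of URLs. The endpoint, parameters, and response shape are invented purely for illustration; they are not any real provider’s API, and each real provider has its own authentication and response format.

```typescript
// Hypothetical example: fetch link metrics for a list of URLs from an imaginary endpoint.
// The endpoint, query parameters, and response fields below are NOT a real provider's API.
interface LinkMetrics {
  url: string;
  linkingDomains: number;
  totalLinks: number;
}

async function fetchLinkMetrics(urls: string[], apiKey: string): Promise<LinkMetrics[]> {
  const results: LinkMetrics[] = [];
  for (const url of urls) {
    const endpoint =
      `https://api.example-link-data.com/v1/metrics?target=${encodeURIComponent(url)}&key=${apiKey}`;
    const response = await fetch(endpoint);
    if (!response.ok) {
      throw new Error(`Request for ${url} failed: ${response.status}`);
    }
    results.push((await response.json()) as LinkMetrics);
  }
  return results;
}

// Usage: compare one of your pages against a competitor's before an outreach push.
fetchLinkMetrics(["https://example.com/guide", "https://competitor.com/guide"], "YOUR_KEY")
  .then((rows) => rows.forEach((r) => console.log(r.url, r.linkingDomains, r.totalLinks)))
  .catch(console.error);
```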

Okay, I’m going to take a deep breath and relax. I don’t know Jayson’s intention, and in fact, if he were in this room, he’d be like, “No, I totally agree with all those things. I wrote the article in a rush. I had no idea it was going to be big. I was just trying to make the broader points around you don’t have to be a coder in order to do SEO.” That’s completely fine.

So I’m not going to try and rain criticism down on him. But I think if you’re reading that article, or you’re seeing it in your feed, or your clients are, or your boss is, or other folks are in your world, maybe you can point them to this Whiteboard Friday and let them know, no, that’s not quite right. There’s a ton of technical SEO that is required in 2015 and will be for years to come, I think, that SEOs have to have in order to be effective at their jobs.

All right, everyone. Look forward to some great comments, and we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


 

The 2015 #MozCon Video Bundle Has Arrived!

Posted by EricaMcGillivray

The bird has landed, and by bird, I mean the MozCon 2015 Video Bundle! That’s right, 27 sessions and over 15 hours of knowledge from our top notch speakers right at your fingertips. Watch presentations about SEO, personalization, content strategy, local SEO, Facebook graph search, and more to level up your online marketing expertise.

If these videos were already on your wish list, skip ahead:

If you attended MozCon, the videos are included with your ticket. You should have an email in your inbox (sent to the address you registered for MozCon with) containing your unique URL for a free “purchase.”

MozCon 2015 was fantastic! This year, we opened up the room to fit a few more attendees and our growing staff, which meant 1,600 people showed up. Each year we work to take our programming one step further, with incredible speakers, diverse topics, and tons of tactics and tips for you.


What did attendees say?

We heard directly from 30% of MozCon attendees. Here’s what they had to say about the content:

What percentage of the presentations did you find interesting? 53% found 80% or more of the presentations interesting for their work.

Did you find the presentations to be advanced enough? 74% found them to be just perfect.

Wil Reynolds at MozCon 2015


What do I get in the bundle?

Our videos feature the presenter and their presentation side-by-side, so there’s no need to flip to another program to view a slide deck. You’ll have easy access to links and reference tools, and the videos even offer closed captioning for your enjoyment and ease of understanding.

For $299, the 2015 MozCon Video Bundle gives you instant access to:

  • 27 videos (over 15 hours) from MozCon 2015
  • Stream or download the videos to your computer, tablet, phone, phablet, or whatever you’ve got handy
  • Downloadable slide decks for all presentations


Bonus! A free full session from 2015!

Because some sessions are just too good to hide behind a paywall. Sample what the conference is all about with a full session from Cara Harshman about personalization on the web:


Surprised and excited to see these videos so early? Huge thanks is due to the Moz team for working hard to process, build, program, write, design, and do all the necessaries to make these happen. You’re the best!

Still not convinced you want the videos? Watch the preview for the Sherlock Christmas Special. Want to attend the live show? Buy your early bird ticket for MozCon 2016. We’ve sold out the conference for the last five years running, so grab your ticket now!


 

A Practical Guide to Content and Its Metrics

Posted by gfiorelli1

A small disclaimer:

Before you start reading, I want to say that I am not an analytics expert per se, but a strategic SEO and digital marketing consultant. On the other hand, in my daily work of auditing and designing holistic digital marketing strategies, I deal a lot with Analytics in order to understand my clients’ gaps and opportunities.

For that reason, what you are going to read isn’t an “ultimate guide,” but instead my personal and practical guide to content and its metrics, filled with links to useful resources that helped me solve the big content metrics mystery. I happily expect to see your ideas in the comments.

The difference between content and formats

One of the hardest things to measure is content effectiveness, mostly because there is great confusion about its changing nature and purpose. One common problem is thinking of “content” and “formats” as synonyms, which leads to frustration and, with the wrong scaling processes in place, may also lead to Google disasters.

What is the difference between content and formats?

  1. Content is any message a brand/person delivers to an audience;
  2. Formats are the specific ways a brand/person can deliver that message (e.g. data visualizations, written content, images/photos, video, etc.).

Just to be clear: we engage with and eventually share the ideas and emotions that content represents, not its formats. Formats are just the clothing we choose for our content and, to keep the fashion metaphor going, some ways of dressing are better than others at making a message explicit.

Strategy, as in everything in marketing, also plays a very important role when it comes to content.

It is during the strategic phase that we attempt to understand (thanks both to analysis of our own site and to competitive analysis of others’ sites) whether our content responds to our audience’s interests and needs, and what metrics we must choose in order to assess its success or failure.

Paraphrasing an old Pirelli commercial tagline: Content without strategy is nothing.

Strategy: Starting with why/how/what

When we are building a content strategy, we should ask ourselves (and our clients and CMOs) these classic questions:

  1. Why does the brand exist?
  2. How does the brand solidify its “why?”
  3. What specific tactics will the brand use for successfully developing the “how?”

Only when we have those answers can we understand the goals of our content, what metrics to consider, and how to calculate them.

Let’s use an example every Mozzer can understand.

Why does Moz exist?

The answer is in its tagline:

  • Inbound marketing is complicated. Moz’s software makes it easy.

How does Moz solidify its “why?”

  • Moz produces a series of tools that help marketers audit, monitor, and make insightful decisions about their web marketing projects.
  • Moreover, Moz creates and publishes content that aims to educate marketers to do their jobs better.

If you notice, we can already pick out a couple of generic goals here:

  1. Leads > subscriptions;
  2. Awareness (that may ultimately drive leads).

What specific tactics does Moz use for successfully achieving its main goals?

Considering the nature of the two main goals we clarified above, we can find content tactics covering all the areas of the so-called content matrix.

Some classic content matrix models are the ones developed by Distilled (in the image above) and Smart Insights and First 10, but it is a good idea to develop your own based on the insights you may have about your specific industry niche.

The things Moz does are many, so I am presenting an incomplete list here.

In the “Purchase” side and with conversion and persuasion as end goals:

  • Home page and “Products” section of Moz.com (we can define them as “organic landing pages”);
  • Content about tools
    • Free tools;
    • Pro tools (which are essentially free for a 30-day trial period).
  • CPC landing pages;
  • Price page with testimonials;
  • “About” section;
  • Events sponsorship.

On the “Awareness” side and with educational and entertainment (or pure engagement) purposes:

  • The blogs (both the main blog and UGC);
  • The “Learn and Connect” section, which includes the Q&A;
  • Guides;
  • Games (The SEO Expert Quiz can surely be considered a game);
  • Webinars;
  • Social media publishing;
  • Email marketing
  • Live events (MozCon and LocalUp, but also the events where Moz Staff is present with one or more speakers).

Once we have the content inventory of our web site, we can relatively easily identify the specific goals for the different pieces of content, and of the single type of content we own and will create.

I will usually not consider content like tools, sponsorships, or live events, because even though content surely plays a role in achieving their goals, other factors like user satisfaction and serendipity are also involved that are not directly related to the content itself or cannot be easily measured.

Measuring landing/conversion pages’ content

This may be the easier kind of content to measure, because it is deeply related to the more general measures of leads and conversions, and it is also strongly related to everything CRO.

We can measure the effectiveness of our landing/conversion pages’ content easily with Google Analytics, especially if we remember to implement content grouping (here’s the official Google guide) and follow the suggestions Jeff Sauer offered in this post on Moz.
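
To give a flavor of what that implementation can look like on-page, here is a minimal sketch using the classic analytics.js tracking-code method. The content group slot, group names, and URL rules are hypothetical; the official guide linked above also covers defining groups by extraction or by rules instead of in the tracking code.

```typescript
// Classic analytics.js content grouping: assign the group before sending the pageview.
// "contentGroup1" is one of the available group slots; the names and URL rules are hypothetical.
declare const ga: (...args: unknown[]) => void; // global provided by the GA snippet

function trackPageview(path: string): void {
  if (path === "/" || path.startsWith("/landing/") || path.startsWith("/products/")) {
    ga("set", "contentGroup1", "Landing & Conversion Pages");
  } else if (path.startsWith("/blog/")) {
    ga("set", "contentGroup1", "Editorial Content");
  } else {
    ga("set", "contentGroup1", "Other");
  }
  ga("send", "pageview", path);
}

trackPageview(window.location.pathname);
```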

We can find another great resource and practical suggestions in this older (but still valid) post by Justin Cutroni: How to Use Google Analytics Content Grouping: 4 Business Examples. The example Justin offers about Patagonia.com is particularly interesting, because it is explicitly about product pages.

On the other hand, we should always remember that the default conversion rate should not be taken as the only metric to incorporate into decision-making; the same is true when it comes to content performance and optimization. In fact, as Dan Barker once said, the better we segment our analysis, the better we can understand the performance of our money pages, give proper meaning to the conversion rate value and, therefore, correct and improve our sales and leads.

Good examples of segmentation are:

  • Conversions per returning visitor vs new visitor;
  • Conversions per type of visitor based on demographic data;
  • Conversions per channel/device.

These segmented metrics are fundamental for developing A/B tests with our content.

Here are some examples of A/B tests for landing/conversion pages’ content:

  • Title tags and meta description A/B tests (yes, title tags and meta descriptions are content too, and they have a fundamental role in CTR and “first impressions”);
  • Prominent presence of testimonials vs. a more discreet one;
  • Tone of voice used in the product description (copywriting experiment);
  • Product slideshow vs. video.
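
To decide whether a variant actually wins, you also need a basic statistical check. Here is a minimal sketch in TypeScript, using made-up session and conversion counts, of how you might compare two content variants with a simple two-proportion z-test; it is only an illustration of the arithmetic, not a replacement for a proper testing tool.

// Minimal two-variant comparison for a landing page content test.
// The variant names and numbers below are illustrative.
interface Variant {
  name: string;
  sessions: number;
  conversions: number;
}

function conversionRate(v: Variant): number {
  return v.conversions / v.sessions;
}

// Two-proportion z-test: a rough signal of whether the difference is just noise.
function zScore(a: Variant, b: Variant): number {
  const pooled = (a.conversions + b.conversions) / (a.sessions + b.sessions);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.sessions + 1 / b.sessions));
  return (conversionRate(a) - conversionRate(b)) / se;
}

const control: Variant = { name: "testimonials-prominent", sessions: 4200, conversions: 189 };
const challenger: Variant = { name: "testimonials-discreet", sessions: 4150, conversions: 231 };

console.log(conversionRate(control), conversionRate(challenger), zScore(challenger, control));
// As a rule of thumb, |z| above ~1.96 corresponds to roughly 95% confidence (two-sided).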

Here are a few additional sources about CRO and content, which will surely do a better job than I can of inspiring you in this specific field:

Measuring on-site “editorial” content

Here is where things start getting a little more complicated.

Blog posts, guides, white papers, and similar content usually do not have a conversion/lead nature, at least not directly. Usually their goals are more intangible ones, such as creating awareness, likability, trust, and authority.

In other cases, this kind of content also serves the objective of creating and maintaining an active community, as it does in the case of Moz. I tend to consider this a subset, though, because in many niches creating a community is not a top priority. Or, even if it is, the community does not offer a reliable flow of “signals” with which to appropriately measure the effectiveness of our content, simply because of a lack of statistical evidence.

A good starting place is measuring the so-called consumption metrics.

Again, the ideal is to implement content grouping in Google Analytics (see the video above), because that way we can segment every different kind of editorial content.

For instance, if we have a blog, not only can we create a group for it, but we can also create:

  • As many groups as there are categories and tags on our blog;
  • Groups by the average length of the posts;
  • Groups for the kind of prominent format used (video posts like Moz’s Whiteboard Fridays, infographics, long-form, etc.).

These are just three examples; think about your own measurement needs and the nature of your content, and you will come up with other ideas for content groupings.
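
If you implement content grouping through the tracking code (rather than through rules in the GA admin), the snippet below shows what that can look like with the classic analytics.js ga() command queue. The group slot numbers and the example values are assumptions for illustration; they must match the groups you define in your view settings.

// Assumes the standard analytics.js snippet is already on the page, exposing the global ga() queue.
declare const ga: (...args: unknown[]) => void;

// Illustrative values, e.g. pulled from your CMS template.
const postCategory = "Analytics";   // content group slot 1: blog category
const postFormat = "Long-form";     // content group slot 2: prominent format

ga("set", "contentGroup1", postCategory);
ga("set", "contentGroup2", postFormat);
ga("send", "pageview"); // groups must be set before the pageview is sent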

The following are basic metrics that you’ll need to consider when measuring your editorial content:

  1. Pageviews / Unique Pageviews
  2. Pages / Session
  3. Time on Page

Ideally, you will analyze these metrics with at least these secondary dimensions:

  • Source / Medium, so you can understand which channels contributed the most to your content’s visibility. Remember, though, that dark search/social is a reality that can skew your metrics (check out Marshall Simmonds’ deck from MozCon 2015);
  • User Type, to see what percentage of pageviews comes from returning visitors (a good indicator of the level of trust and authority our content has) versus new ones (which indicates our content’s ability to attract new, potentially long-lasting readers);
  • Mobile (Device Category), which is useful for understanding the environments in which our users mostly interact with our content and how we have to optimize the experience for each device, which in turn helps make our content more memorable.

You can also have fun analyzing your content’s performance by segmenting it with demographic indicators. For instance, it may be interesting to see which affinity categories your readers fall into for each of the categories used in your blog (and replicated in your content grouping). This can help us better understand the personas composing our audience, and so refine the targeting of our content.

As you can see, I did not mention bounce rate as a metric to consider, and there is a reason for that: Bounce rate is tricky, and its misinterpretation can lead to bad decisions.

Instead of bounce rate, when it comes to editorial content (and blog posts in particular), I prefer to consider scroll completion, a metric we can retrieve using Tag Manager (see this post by Optimize Smart).
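
As a rough illustration of the idea, the TypeScript sketch below pushes a one-time dataLayer event once the reader passes a scroll threshold; a Custom Event trigger in Tag Manager listening for that event name can then fire a GA event tag. The threshold and event name are assumptions, and current versions of Tag Manager also ship a built-in Scroll Depth trigger that does this without any code.

// Push a one-time dataLayer event once the reader scrolls past 90% of the page.
const dl: Record<string, unknown>[] = (window as any).dataLayer || ((window as any).dataLayer = []);

let fired = false;
const THRESHOLD = 0.9; // illustrative: treat 90% depth as "read to completion"

window.addEventListener("scroll", () => {
  if (fired) return;
  const scrolled = window.scrollY + window.innerHeight;
  const total = document.documentElement.scrollHeight;
  if (scrolled / total >= THRESHOLD) {
    fired = true;
    dl.push({ event: "scroll_completion", scrollDepth: "90%" });
  }
});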

Finally, especially if you also grouped content by the prominent format used (video, embedded SlideShare, etc.), you will need to retrieve users’ interactions through Tag Manager. However, if you really want to dig into the analysis of how that content is consumed by users, you will need to export your Analytics data and then combine it with data from external sources, like YouTube Analytics, SlideShare Analytics, etc.

The more we share, the more we have. This is also true in Marketing.

Consumption metrics, though, are not enough to understand the performance of your content, especially if you rely heavily on a community and one of your content objectives is creating and growing a community around your brand.

I am talking about the so-called sharing metrics:

  1. Social shares (Likes, Tweets, Pins, etc.);
  2. Inbound links;
  3. Un-linked mentions;
  4. Email forwarding.

All of this can be tracked and measured (e.g., social shares, and mentions on websites or on social media).

I usually add comments to these metrics because of the social nature comments have. Again, thanks to Tag Manager, you can easily tag when someone clicks on the “add comment” button.
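
Tag Manager’s built-in click triggers can capture this without code; the TypeScript sketch below simply shows the equivalent dataLayer push, with an assumed button selector and event name.

// Illustrative: "#add-comment" and the event name are assumptions for your own markup.
const commentButton = document.querySelector<HTMLButtonElement>("#add-comment");

commentButton?.addEventListener("click", () => {
  const dl: Record<string, unknown>[] = (window as any).dataLayer || ((window as any).dataLayer = []);
  dl.push({ event: "comment_click", page: location.pathname });
});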

A final metric we should always consider is Page Value. As Google itself explains in its Help documentation:

Page value is a measure of influence. It’s a single number that can help you better understand which pages on your site drive conversions and revenue. Pages with a high Page Value are more influential than pages with a low Page Value [Page Value is also shown for groups of content].

The combined analysis of consumption and social metrics can offer us a very granular understanding of how our content is performing, and therefore of how to optimize our strategy and/or how to start conducting A/B tests.

On the other hand, such a granular view is not ideal for reporting, especially if we have to report to a board of directors and not to our in-house or in-agency counterpart.

In that case, being able to roll up all these metrics (or the most relevant ones) into just one metric is very useful.

How to do it? My suggestion is to follow (and adapt to your own needs) the methodology used by the Moz editorial team and described in this post by Trevor Klein.

What about the ROI of editorial content? Don’t give up; I’ll talk about it below.

Measuring the ROI of content marketing and content-based link building campaigns

Theoretically, measuring the ROI of something is relatively easy:

(Return – Investment) / Investment = ROI.

However, the difficulty is not in the formula itself, but in the values used in it.

How to calculate the investment value?

Usually we have a given budget assigned for our content marketing and/or content-based campaigns. If that is the case, perfect! We have a figure to use for the investment value.

A completely different situation is when we must present a budget proposal and/or assign part of the budget to each campaign in a balanced and considered way.

In this post by Caroline Gilbert for Siege Media you can find great suggestions about how to calculate a content marketing budget, but I would like to present mine, too, which is based on competitive analysis.

Here’s what I do (a rough code sketch of steps 3 to 5 follows the list):

  1. Identify the distinct competitors that created content related to what we will target with our campaign. I rely on both SERP analysis (e.g., using the Keyword Difficulty Tool by Moz) and the information we can retrieve with a “keyword search” on BuzzSumo.
  2. Retrieve all meaningful content metrics:
    • Links (another reason why I use the Keyword Difficulty Tool);
    • Social shares per kind of social network (these are available from BuzzSumo). Remember that some of these social shares can be driven by sponsored content (check this Social Media Explorer post about how to do Facebook competitive analysis);
    • Estimated traffic to the content’s URL (data retrieved via SimilarWeb).
  3. Assign a monetary value to the metrics retrieved.
  4. Calculate the competitors’ potential investment value.
  5. Calculate the median investment value of all the competitors.
  6. Consider the delta between what the client/company previously invested in content marketing (or link building, if it is moving from classic old link building to modern link earning) and the median investment value of the competitors.
  7. Calculate and propose the content marketing / content-based campaign’s value in a range which goes from “minimum viable budget” to “ideal.”
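
Here is a rough TypeScript sketch of steps 3 to 5. The per-link, per-share, and per-visit values are assumptions for illustration, not benchmarks; swap in whatever monetary equivalents make sense for your niche.

// Turn competitor content metrics into an estimated investment figure,
// then take the median across competitors (steps 3-5 above).
interface CompetitorContent {
  name: string;
  links: number;          // linking root domains to the piece
  socialShares: number;   // total shares across networks
  estimatedVisits: number;
}

const VALUE_PER_LINK = 150;   // assumed cost of earning a comparable link
const VALUE_PER_SHARE = 0.5;  // assumed cost of a comparable paid social engagement
const VALUE_PER_VISIT = 0.35; // assumed CPC for comparable paid traffic

function estimatedInvestment(c: CompetitorContent): number {
  return c.links * VALUE_PER_LINK + c.socialShares * VALUE_PER_SHARE + c.estimatedVisits * VALUE_PER_VISIT;
}

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const competitors: CompetitorContent[] = [
  { name: "competitor-a", links: 40, socialShares: 3200, estimatedVisits: 25000 },
  { name: "competitor-b", links: 12, socialShares: 900, estimatedVisits: 8000 },
  { name: "competitor-c", links: 85, socialShares: 7400, estimatedVisits: 60000 },
];

console.log("Median competitor investment estimate:", median(competitors.map(estimatedInvestment)));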

Reality teaches us that the proposed investment is not the same as the actual investment, but at least we have some data for proposing it, not just a gut feeling. However, we must be prepared to work with budgets that are more on the “minimum viable” side than on the ideal one.

How to calculate revenue?

You can find a good number of ROI calculators, but I particularly like the Fractl one, because it is very easy to understand and use.

Their general philosophy is to calculate ROI in terms of how much traffic, how many links, and how many social shares the content itself has generated organically, and hence how much it helped save in paid promotion.

If you look at it, it resembles the methodology I described above (points 1 to 7).
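
In the same spirit (and reusing the kind of per-unit assumptions from the budget sketch above), here is a small TypeScript illustration of that philosophy. This is not Fractl’s actual formula; it simply values organic visits, links, and shares as avoided paid-promotion spend and then applies the ROI formula from the beginning of this section.

// Estimate the "return" as the paid promotion you did not have to buy,
// then apply ROI = (return - investment) / investment.
interface CampaignResults {
  investment: number;    // what the campaign cost
  organicVisits: number;
  linksEarned: number;
  socialShares: number;  // include shares of the pages linking to the piece, too
}

const CPC_EQUIVALENT = 0.4;  // assumed cost per comparable paid visit
const COST_PER_LINK = 150;   // assumed cost per comparable earned link
const COST_PER_SHARE = 0.5;  // assumed cost per comparable paid engagement

function contentRoi(r: CampaignResults): number {
  const estimatedReturn =
    r.organicVisits * CPC_EQUIVALENT + r.linksEarned * COST_PER_LINK + r.socialShares * COST_PER_SHARE;
  return (estimatedReturn - r.investment) / r.investment;
}

console.log(contentRoi({ investment: 12000, organicVisits: 40000, linksEarned: 60, socialShares: 9000 }));
// ~1.46 here, i.e. an estimated 146% return on the investment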

However, when it comes to social shares, you should avoid the classic mistake of considering only the social shares directly generated by the page on which your content has been published.

For instance, let’s take the Idioms of the World campaign that Verve Search did for HotelClub.com, which won at the European Search Awards.

If we look only at its own social share metrics, we will have just a partial picture:

Instead, if we look at the social share metrics of the pages that linked to it and talked about it, we will have the complete picture.

We can (again) use BuzzSumo to retrieve this data (also using its Content Analysis feature), or use URL Profiler.

As you can imagine, you can calculate the ROI of your editorial content using the same methodology.

Obviously, the Fractl ROI calculator is far from perfect, as it does not consider the offline repercussions a content campaign may have (the Idioms of the World campaign organically earned an outstanding placement in The Guardian’s print edition, for instance), but it is a solid base for crafting your own ROI calculation.

Conclusions

So, we have arrived at the end of this personal guide about content and its metrics.

Remember these important things:

  • Don’t be data driven, be data informed;
  • Think strategically, act tactically;
  • Content metrics vary depending on the goals of the content itself.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Source: moz

 

Distance from Perfect

Posted by wrttnwrd

In spite of all the advice, the strategic discussions and the conference talks, we Internet marketers are still algorithmic thinkers. That’s obvious when you think of SEO.

Even when we talk about content, we’re algorithmic thinkers. Ask yourself: How many times has a client asked you, “How much content do we need?” How often do you still hear “How unique does this page need to be?”

That’s 100% algorithmic thinking: Produce a certain amount of content, move up a certain number of spaces.

But you and I know it’s complete bullshit.

I’m not suggesting you ignore the algorithm. You should definitely chase it. Understanding a little bit about what goes on in Google’s pointy little head helps. But it’s not enough.

A tale of SEO woe that makes you go “whoa”

I have this friend.

He ranked #10 for “flibbergibbet.” He wanted to rank #1.

He compared his site to the #1 site and realized the #1 site had five hundred blog posts.

“That site has five hundred blog posts,” he said, “I must have more.”

So he hired a few writers and cranked out five thousand blog posts that melted Microsoft Word’s grammar check. He didn’t move up in the rankings. I’m shocked.

“That guy’s spamming,” he decided, “I’ll just report him to Google and hope for the best.”

What happened? Why didn’t adding five thousand blog posts work?

It’s pretty obvious: My, uh, friend added nothing but crap content to a site that was already outranked. Bulk is no longer a ranking tactic. Google’s very aware of that tactic. Lots of smart engineers have put time into updates like Panda to compensate.

He started like this:
a few posts, no rankings

And ended up like this:
more posts, no rankings

Alright, yeah, I was Mr. Flood The Site With Content, way back in 2003. Don’t judge me, whippersnappers.

Reality’s never that obvious. You’re scratching and clawing to move up two spots, you’ve got an overtasked IT team pushing back on changes, and you’ve got a boss who needs to know the implications of every recommendation.

Why fix duplication if rel=canonical can address it? Fixing duplication will take more time and cost more money. It’s easier to paste in one line of code. You and I know it’s better to fix the duplication. But it’s a hard sell.

Why deal with 302 versus 404 response codes and home page redirection? The basic user experience remains the same. Again, we just know that a server should return one home page without any redirects and that it should send a ‘not found’ 404 response if a page is missing. If it’s going to take 3 developer hours to reconfigure the server, though, how do we justify it? There’s no flashing sign reading “Your site has a problem!”
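
If you want a quick, concrete way to check those two behaviors, a sketch like the following works with Node 18+ (the domain is a placeholder): the home page should answer 200 without redirect hops, and a nonsense URL should come back as a real 404 rather than a soft 200.

// Quick sanity check of home page redirects and 404 handling (Node 18+).
async function checkBasics(origin: string): Promise<void> {
  const home = await fetch(origin, { redirect: "follow" });
  console.log(`home: status ${home.status}, redirected: ${home.redirected}`); // want 200, redirected: false

  const missing = await fetch(`${origin}/this-page-should-not-exist-12345`);
  console.log(`missing page: status ${missing.status}`); // want 404, not 200 or a redirect to the home page
}

checkBasics("https://www.example.com").catch(console.error);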

Why change this thing and not that thing?

At the same time, our boss/client sees that the site above theirs has five hundred blog posts and thousands of links from sites selling correspondence MBAs. So they want five thousand blog posts and cheap links as quickly as possible.

Cue crazy music.

SEO lacks clarity

SEO is, in some ways, for the insane. It’s an absurd collection of technical tweaks, content thinking, link building and other little tactics that may or may not work. A novice gets exposed to one piece of crappy information after another, with an occasional bit of useful stuff mixed in. They create sites that repel search engines and piss off users. They get more awful advice. The cycle repeats. Every time it does, best practices get more muddled.

SEO lacks clarity. We can’t easily weigh the value of one change or tactic over another. But we can look at our changes and tactics in context. When we examine the potential of several changes or tactics before we flip the switch, we get a closer balance between algorithm-thinking and actual strategy.

Distance from perfect brings clarity to tactics and strategy

At some point you have to turn that knowledge into practice. You have to take action based on recommendations, your knowledge of SEO, and business considerations.

That’s hard when we can’t even agree on subdomains vs. subfolders.

I know subfolders work better. Sorry, couldn’t resist. Let the flaming comments commence.

To get clarity, take a deep breath and ask yourself:

“All other things being equal, will this change, tactic, or strategy move my site closer to perfect than my competitors?”

Breaking it down:

“Change, tactic, or strategy”

A change takes an existing component or policy and makes it something else. Replatforming is a massive change. Adding a new page is a smaller one. Adding ALT attributes to your images is another example. Changing the way your shopping cart works is yet another.

A tactic is a specific, executable practice. In SEO, that might be fixing broken links, optimizing ALT attributes, optimizing title tags or producing a specific piece of content.

A strategy is a broader decision that’ll cause change or drive tactics. A long-term content policy is the easiest example. Shifting away from asynchronous content and moving to server-generated content is another example.

“Perfect”

No one knows exactly what Google considers “perfect,” and “perfect” can’t really exist, but you can bet a perfect web page/site would have all of the following:

  1. Completely visible content that’s perfectly relevant to the audience and query
  2. A flawless user experience
  3. Instant load time
  4. Zero duplicate content
  5. Every page easily indexed and classified
  6. No mistakes, broken links, redirects or anything else generally yucky
  7. Zero reported problems or suggestions in each search engine’s webmaster tools, sorry, “Search Consoles”
  8. Complete authority through immaculate, organically-generated links

These 8 categories (and any of the other bazillion that probably exist) give you a way to break down “perfect” and help you focus on what’s really going to move you forward. These different areas may involve different facets of your organization.

Your IT team can work on load time and creating an error-free front- and back-end. Link building requires the time and effort of content and outreach teams.

Tactics for relevant, visible content and current best practices in UX are going to be more involved, requiring research and real study of your audience.

What you need and what resources you have are going to impact which tactics are most realistic for you.

But there’s a basic rule: If a website would make Googlebot swoon and present zero obstacles to users, it’s close to perfect.

“All other things being equal”

Assume every competing website is optimized exactly as well as yours.

Now ask: Will this [tactic, change or strategy] move you closer to perfect?

That’s the “all other things being equal” rule. And it’s an incredibly powerful rubric for evaluating potential changes before you act. Pretend you’re in a tie with your competitors. Will this one thing be the tiebreaker? Will it put you ahead? Or will it cause you to fall behind?

“Closer to perfect than my competitors”

Perfect is great, but unattainable. What you really need is to be just a little perfect-er.

Chasing perfect can be dangerous. Perfect is the enemy of the good (I love that quote. Hated Voltaire. But I love that quote). If you wait for the opportunity/resources to reach perfection, you’ll never do anything. And the only way to reduce distance from perfect is to execute.

Instead of aiming for pure perfection, aim for more perfect than your competitors. Beat them feature-by-feature, tactic-by-tactic. Implement strategy that supports long-term superiority.

Don’t slack off. But set priorities and measure your effort. If fixing server response codes will take one hour and fixing duplication will take ten, fix the response codes first. Both move you closer to perfect. Fixing response codes may not move the needle as much, but it’s a lot easier to do. Then move on to fixing duplicates.

Do the 60% that gets you a 90% improvement. Then move on to the next thing and do it again. When you’re done, get to work on that last 40%. Repeat as necessary.

Take advantage of quick wins. That gives you more time to focus on your bigger solutions.

Sites that are “fine” are pretty far from perfect

Google has lots of tweaks, tools and workarounds to help us mitigate sub-optimal sites:

  • Rel=canonical lets us guide Google past duplicate content rather than fix it
  • HTML snapshots let us reveal content that’s delivered using asynchronous content and JavaScript frameworks
  • We can use rel=next and prev to guide search bots through outrageously long pagination tunnels
  • And we can use rel=nofollow to hide spammy links and banners

Easy, right? All of these solutions may reduce distance from perfect (the search engines don’t guarantee it). But they don’t reduce it as much as fixing the problems.
Just fine does not equal fixed

The next time you set up rel=canonical, ask yourself:

“All other things being equal, will using rel=canonical to make up for duplication move my site closer to perfect than my competitors?”

Answer: Not if they’re using rel=canonical, too. You’re both using imperfect solutions that force search engines to crawl every page of your site, duplicates included. If you want to pass them on your way to perfect, you need to fix the duplicate content.

When you use Angular.js to deliver regular content pages, ask yourself:

“All other things being equal, will using HTML snapshots instead of actual, visible content move my site closer to perfect than my competitors?”

Answer: No. Just no. Not in your wildest, code-addled dreams. If I’m Google, which site will I prefer? The one that renders for me the same way it renders for users? Or the one that has to deliver two separate versions of every page?

When you spill banner ads all over your site, ask yourself…

You get the idea. Nofollow is better than follow, but banner pollution is still pretty dang far from perfect.

Mitigating SEO issues with search engine-specific tools is “fine.” But it’s far, far from perfect. If search engines are forced to choose, they’ll favor the site that just works.

Not just SEO

By the way, distance from perfect absolutely applies to other channels.

I’m focusing on SEO, but think of other Internet marketing disciplines. I hear stuff like “How fast should my site be?” (Faster than it is right now.) Or “I’ve heard you shouldn’t have any content below the fold.” (Maybe in 2001.) Or “I need background video on my home page!” (Why? Do you have a reason?) Or, my favorite: “What’s a good bounce rate?” (Zero is pretty awesome.)

And Internet marketing venues are working to measure distance from perfect. Pay-per-click marketing has the quality score: a codified financial reward for reducing the distance from perfect in as many elements of your advertising program as possible.

Social media venues are aggressively building their own forms of graphing, scoring and ranking systems designed to separate the good from the bad.

Really, all marketing includes some measure of distance from perfect. But no channel is more influenced by it than SEO. Instead of arguing one rule at a time, ask yourself and your boss or client: Will this move us closer to perfect?

Hell, you might even please a customer or two.

One last note for all of the SEOs in the crowd. Before you start pointing out edge cases, consider this: We spend our days combing Google for embarrassing rankings issues. Every now and then, we find one, point, and start yelling “SEE! SEE!!!! THE GOOGLES MADE MISTAKES!!!!” Google’s got lots of issues. Screwing up the rankings isn’t one of them.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Source: moz

 

Stop Ghost Spam in Google Analytics with One Filter

Posted by CarloSeo

The spam in Google Analytics (GA) is becoming a serious issue. Due to a deluge of referral spam from social buttons, adult sites, and many, many other sources, people are starting to become overwhelmed by all the filters they are setting up to manage the useless data they are receiving.

The good news is, there is no need to panic. In this post, I’m going to focus on the most common mistakes people make when fighting spam in GA, and explain an efficient way to prevent it.

But first, let’s make sure we understand how spam works. A couple of months ago, Jared Gardner wrote an excellent article explaining what referral spam is, including its intended purpose. He also pointed out some great examples of referral spam.

Types of spam

The spam in Google Analytics can be categorized into two types: ghosts and crawlers.

Ghosts

The vast majority of spam is this type. They are called ghosts because they never access your site. It is important to keep this in mind, as it’s key to creating a more efficient solution for managing spam.

As unusual as it sounds, this type of spam doesn’t have any interaction with your site at all. You may wonder how that is possible since one of the main purposes of GA is to track visits to our sites.

They do it by using the Measurement Protocol, which allows people to send data directly to Google Analytics’ servers. Using this method, and probably randomly generated tracking codes (UA-XXXXX-1) as well, the spammers leave a “visit” with fake data, without even knowing who they are hitting.
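
For the curious, this is roughly what such a hit looks like. The sketch below (TypeScript, runnable with Node 18+) sends a single fake pageview via the Universal Analytics Measurement Protocol; the property ID, client ID, hostname, and page are all made up, which is exactly why GA cannot tell that no real visit ever happened.

// A bare Measurement Protocol hit: the same endpoint legitimate server-side
// tracking uses, just pointed at a guessed UA-XXXXX-Y property ID.
const params = new URLSearchParams({
  v: "1",                                       // protocol version
  tid: "UA-XXXXX-1",                            // tracking ID (spammers guess or iterate these)
  cid: "35009a79-1a05-49d7-b876-2b884d0f825b",  // arbitrary client ID
  t: "pageview",
  dh: "any-hostname-they-like.com",             // hostname is self-reported, hence easily faked
  dp: "/spammy-landing-page",
});

fetch("https://www.google-analytics.com/collect", { method: "POST", body: params })
  .then((res) => console.log("Hit sent, status:", res.status));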

[Image: measurement-protocol.png]

Crawlers

This type of spam, the opposite of ghost spam, does access your site. As the name implies, these spam bots crawl your pages, ignoring rules like those found in robots.txt that are supposed to stop them from reading your site. When they exit your site, they leave a record on your reports that appears similar to a legitimate visit.

Crawlers are harder to identify because they know their targets and use real data. But it is also true that new ones seldom appear. So if you detect a referral in your analytics that looks suspicious, researching it on Google or checking it against this list might help you answer the question of whether or not it is spammy.

Most common mistakes made when dealing with spam in GA

I’ve been following this issue closely for the last few months. According to the comments people have made on my articles and conversations I’ve found in discussion forums, there are primarily three mistakes people make when dealing with spam in Google Analytics.

Mistake #1. Blocking ghost spam from the .htaccess file

One of the biggest mistakes people make is trying to block Ghost Spam from the .htaccess file.

For those who are not familiar with this file, one of its main functions is to allow/block access to your site. Now we know that ghosts never reach your site, so adding them here won’t have any effect and will only add useless lines to your .htaccess file.

Ghost spam usually shows up for a few days and then disappears. As a result, sometimes people think that they successfully blocked it from here when really it’s just a coincidence of timing.

Then when the spammers later return, they get worried because the solution is not working anymore, and they think the spammer somehow bypassed the barriers they set up.

The truth is, the .htaccess file can only effectively block crawlers such as buttons-for-website.com and a few others since these access your site. Most of the spam can’t be blocked using this method, so there is no other option than using filters to exclude them.

Mistake #2. Using the referral exclusion list to stop spam

Another error is trying to use the referral exclusion list to stop the spam. The name may confuse you, but this list is not intended to exclude referrals in the way we want to for the spam. It has other purposes.

For example, when a customer buys something, sometimes they get redirected to a third-party page for payment. After making a payment, they’re redirected back to your website, and GA records that as a new referral. It is appropriate to use the referral exclusion list to prevent this from happening.

If you try to use the referral exclusion list to manage spam, however, the referral part will be stripped since there is no preexisting record. As a result, a direct visit will be recorded, and you will have a bigger problem than the one you started with, since you will still have spam, and direct visits are harder to track.

Mistake #3. Worrying that bounce rate changes will affect rankings

When people see that the bounce rate changes drastically because of the spam, they start worrying about the impact that it will have on their rankings in the SERPs.

[Image: bounce.png]

This is another mistake commonly made. With or without spam, Google doesn’t take into consideration Google Analytics metrics as a ranking factor. Here is an explanation about this from Matt Cutts, the former head of Google’s web spam team.

And if you think about it, Cutts’ explanation makes sense: although many people have GA, not everyone uses it.

Assuming your site has been hacked

Another common concern when people see strange landing pages coming from spam on their reports is that they have been hacked.

[Image: landing page report]

The page that the spam shows on the reports doesn’t exist, and if you try to open it, you will get a 404 page. Your site hasn’t been compromised.

But you do have to make sure the page doesn’t exist, because there are cases (unrelated to spam) where sites suffer a security breach and get injected with pages full of bad keywords intended to defame the website.

What should you worry about?

Now that we’ve discarded security issues and their effects on rankings, the only thing left to worry about is your data. The fake trail that the spam leaves behind pollutes your reports.

It might have greater or lesser impact depending on your site traffic, but everyone is susceptible to the spam.

Small and midsize sites are the most easily impacted – not only because a big part of their traffic can be spam, but also because usually these sites are self-managed and sometimes don’t have the support of an analyst or a webmaster.

Big sites with a lot of traffic can also be impacted by spam, and although the impact can be insignificant, invalid traffic means inaccurate reports no matter the size of the website. As an analyst, you should be able to explain what’s going on even in the most granular reports.

You only need one filter to deal with ghost spam

Usually it is recommended to add the referral to an exclusion filter after it is spotted. Although this is useful for a quick action against the spam, it has three big disadvantages.

  • Making filters every week for every new spam referrer detected is tedious and time-consuming, especially if you manage many sites. Plus, by the time you apply the filter and it starts working, you already have some affected data.
  • Some of the spammers use direct visits along with the referrals.
  • These direct hits won’t be stopped by the filter, so even if you are excluding the referral, you will still be receiving invalid traffic, which explains why some people have seen an unusual spike in direct traffic.

Luckily, there is a good way to prevent all these problems. Most of the (ghost) spam works by hitting random GA tracking IDs, meaning the offender doesn’t really know who the target is; for that reason, either the hostname is not set or a fake one is used. (See the report below.)

[Image: Ghost-Spam.png]

You can see that they use some weird names or don’t even bother to set one. Although there are some known names in the list, these can easily be faked by the spammer.

On the other hand, valid traffic will always use a real hostname. In most cases, this will be your domain. But it can also come from paid services, translation services, or any other place where you’ve inserted your GA tracking code.

[Image: Valid-Referral.png]

Based on this, we can make a filter that will include only hits that use real hostnames. This will automatically exclude all hits from ghost spam, whether it shows up as a referral, keyword, or pageview; or even as a direct visit.

To create this filter, you will need to find the report of hostnames. Here’s how:

  1. Go to the Reporting tab in GA
  2. Click on Audience in the lefthand panel
  3. Expand Technology and select Network
  4. At the top of the report, click on Hostname

[Image: Valid-list]

You will see a list of all hostnames, including the ones that the spam uses. Make a list of all the valid hostnames you find, as follows:

  • yourmaindomain.com
  • blog.yourmaindomain.com
  • es.yourmaindomain.com
  • payingservice.com
  • translatetool.com
  • anotheruseddomain.com

For small to medium sites, this list of hostnames will likely consist of the main domain and a couple of subdomains. After you are sure you got all of them, create a regular expression similar to this one:

yourmaindomain.com|anotheruseddomain.com|payingservice.com|translatetool.com

You don’t need to put all of your subdomains in the regular expression. The main domain will match all of them. If you don’t have a view set up without filters, create one now.
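
Before saving the filter, it can help to sanity-check the expression against your own hostname list. The quick TypeScript sketch below does that; the hostnames are illustrative, and darodar.com is a well-known spam referrer that should not match. Note that GA filter patterns are regular expressions, so the unescaped dots in the expression above act as single-character wildcards, which is harmless here, though you can escape them as \. for a stricter match.

// Check which hostnames the include filter would keep.
const pattern = /yourmaindomain\.com|anotheruseddomain\.com|payingservice\.com|translatetool\.com/i;

const sampleHostnames = [
  "yourmaindomain.com",
  "blog.yourmaindomain.com",  // subdomains match because the main domain is a substring
  "translatetool.com",
  "darodar.com",              // known ghost-spam source: should NOT match
  "(not set)",                // ghost hits often carry no hostname at all
];

for (const host of sampleHostnames) {
  console.log(`${host} -> ${pattern.test(host) ? "kept" : "filtered out"}`);
}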

Then create a Custom Filter.

Make sure you select INCLUDE, then select “Hostname” on the filter field, and copy your expression into the Filter Pattern box.

[Image: filter setup]

You might want to verify the filter before saving to check that everything is okay. Once you’re ready, set it to save, and apply the filter to all the views you want (except the view without filters).

This single filter will get rid of future occurrences of ghost spam that use invalid hostnames, and it doesn’t require much maintenance. But it’s important that every time you add your tracking code to a new service, you also add that service’s hostname to the end of the filter expression.

Now you should only need to take care of the crawler spam. Since crawlers access your site, you can block them by adding these lines to the .htaccess file:

## STOP REFERRER SPAM
RewriteCond %{HTTP_REFERER} semalt.com [NC,OR]
RewriteCond %{HTTP_REFERER} buttons-for-website.com [NC]
RewriteRule .* - [F]

It is important to note that this file is very sensitive, and misplacing a single character in it can bring down your entire site. Therefore, make sure you create a backup copy of your .htaccess file prior to editing it.

If you don’t feel comfortable messing around with your .htaccess file, you can alternatively build an expression with all the crawlers and add it to an exclude filter on Campaign Source.

Implement these combined solutions, and you will worry much less about spam contaminating your analytics data. This will have the added benefit of freeing up more time for you to spend actually analyzing your valid data.

After stopping spam, you can also get clean reports from the historical data by using the same expressions in an Advanced Segment to exclude all the spam.

Bonus resources to help you manage spam

If you still need more information to help you understand and deal with the spam on your GA reports, you can read my main article on the subject here: http://www.ohow.co/what-is-referrer-spam-how-stop-it-guide/.

Additional information on how to stop spam can be found at these URLs:

In closing, I am eager to hear your ideas on this serious issue. Please share them in the comments below.

(Editor’s Note: All images featured in this post were created by the author.)

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Source: moz

 

The Linkbait Bump: How Viral Content Creates Long-Term Lift in Organic Traffic – Whiteboard Friday

Posted by randfish

A single fantastic (or “10x”) piece of content can lift a site’s traffic curves long beyond the popularity of that one piece. In today’s Whiteboard Friday, Rand talks about why those curves settle into a “new normal,” and how you can go about creating the content that drives that change.

Linkbait Bump Whiteboard

For reference, here’s a still of this week’s whiteboard. Click on it to open a high resolution image in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about the linkbait bump, classic phrase in the SEO world and almost a little dated. I think today we’re talking a little bit more about viral content and how high-quality content, content that really is the cornerstone of a brand or a website’s content can be an incredible and powerful driver of traffic, not just when it initially launches but over time.

So let’s take a look.

This is a classic linkbait bump, viral content bump analytics chart. I’m seeing over here my traffic and over here the different months of the year. You know, January, February, March, like I’m under a thousand. Maybe I’m at 500 visits or something, and then I have this big piece of viral content. It performs outstandingly well from a relative standpoint for my site. It gets 10,000 or more visits, drives a ton more people to my site, and then what happens is that that traffic falls back down. But the new normal down here, new normal is higher than the old normal was. So the new normal might be at 1,000, 1,500 or 2,000 visits whereas before I was at 500.

Why does this happen?

A lot of folks see an analytics chart like this, see examples of content that’s done this for websites, and they want to know: Why does this happen and how can I replicate that effect? The reasons why are it sort of feeds back into that viral loop or the flywheel, which we’ve talked about in previous Whiteboard Fridays, where essentially you start with a piece of content. That content does well, and then you have things like more social followers on your brand’s accounts. So now next time you go to amplify content or share content socially, you’re reaching more potential people. You have a bigger audience. You have more people who share your content because they’ve seen that that content performs well for them in social. So they want to find other content from you that might help their social accounts perform well.

You see more RSS and email subscribers because people see your interesting content and go, “Hey, I want to see when these guys produce something else.” You see more branded search traffic because people are looking specifically for content from you, not necessarily just around this viral piece, although that’s often a big part of it, but around other pieces as well, especially if you do a good job of exposing them to that additional content. You get more bookmark and type in traffic, more searchers biased by personalization because they’ve already visited your site. So now when they search and they’re logged into their accounts, they’re going to see your site ranking higher than they normally would otherwise, and you get an organic SEO lift from all the links and shares and engagement.

So there’s a ton of different factors that feed into this, and you kind of want to hit all of these things. If you have a piece of content that gets a lot of shares, a lot of links, but then doesn’t promote engagement, doesn’t get more people signing up, doesn’t get more people searching for your brand or searching for that content specifically, then it’s not going to have the same impact. Your traffic might fall further and more quickly.

How do you achieve this?

How do we get content that’s going to do this? Well, we’re going to talk through a number of things that we’ve talked about previously on Whiteboard Friday. But there are some additional ones as well. This isn’t just creating good content or creating high quality content, it’s creating a particular kind of content. So for this what you want is a deep understanding, not necessarily of what your standard users or standard customers are interested in, but a deep understanding of what influencers in your niche will share and promote and why they do that.

This often means that you follow a lot of sharers and influencers in your field, and you understand, hey, they’re all sharing X piece of content. Why? Oh, because it does this, because it makes them look good, because it helps their authority in the field, because it provides a lot of value to their followers, because they know it’s going to get a lot of retweets and shares and traffic. Whatever that because is, you have to have a deep understanding of it in order to have success with viral kinds of content.

Next, you want to have empathy for users and what will give them the best possible experience. So if you know, for example, that a lot of people are coming on mobile and are going to be sharing on mobile, which is true of almost all viral content today, FYI, you need to be providing a great mobile and desktop experience. Oftentimes that mobile experience has to be different, not just responsive design, but actually a different format, a different way of being able to scroll through or watch or see or experience that content.

There are some good examples out there of content that does that. It makes a very different user experience based on the browser or the device you’re using.

You also need to be aware of what will turn them off. So promotional messages, pop-ups, trying to sell to them, oftentimes that diminishes user experience. It means that content that could have been more viral, that could have gotten more shares won’t.

Unique value and attributes that separate your content from everything else in the field. So if there’s like ABCD and whoa, what’s that? That’s very unique. That stands out from the crowd. That provides a different form of value in a different way than what everyone else is doing. That uniqueness is often a big reason why content spreads virally, why it gets more shared than just the normal stuff.

I’ve talked about this a number of times, but content that’s 10X better than what the competition provides. So unique value from the competition, but also quality that is not just a step up, but 10X better, massively, massively better than what else you can get out there. That makes it unique enough. That makes it stand out from the crowd, and that’s a very hard thing to do, but that’s why this is so rare and so valuable.

This is a critical one, and I think one that, I’ll just say, many organizations fail at. That is the freedom and support to fail many times, to try to create these types of effects, to have this impact many times before you hit on a success. A lot of managers and clients and teams and execs just don’t give marketing teams and content teams the freedom to say, “Yeah, you know what? You spent a month and developer resources and designer resources and spent some money to go do some research and contracted with this third party, and it wasn’t a hit. It didn’t work. We didn’t get the viral content bump. It just kind of did okay. You know what? We believe in you. You’ve got a lot of chances. You should try this another 9 or 10 times before we throw it out. We really want to have a success here.”

That is something that very few teams invest in. The powerful thing is because so few people are willing to invest that way, the ones that do, the ones that believe in this, the ones that invest long term, the ones that are willing to take those failures are going to have a much better shot at success, and they can stand out from the crowd. They can get these bumps. It’s powerful.

Not a requirement, but it really, really helps to have a strong engaged community, either on your site and around your brand, or at least in your niche and your topic area that will help, that wants to see you, your brand, your content succeed. If you’re in a space that has no community, I would work on building one, even if it’s very small. We’re not talking about building a community of thousands or tens of thousands. A community of 100 people, a community of 50 people even can be powerful enough to help content get that catalyst, that first bump that’ll boost it into viral potential.

Then finally, for this type of content, you need to have a logical and not overly promotional match between your brand and the content itself. You can see many sites in what I call sketchy niches. So like a criminal law site or a casino site or a pharmaceutical site that’s offering like an interactive musical experience widget, and you’re like, “Why in the world is this brand promoting this content? Why did they even make it? How does that match up with what they do? Oh, it’s clearly just intentionally promotional.”

Look, many of these brands go out there and they say, “Hey, the average web user doesn’t know and doesn’t care.” I agree. But the average web user is not an influencer. Influencers know. Well, they’re very, very suspicious of why content is being produced and promoted, and they’re very skeptical of promoting content that they don’t think is altruistic. So this kills a lot of content for brands that try and invest in it when there’s no match. So I think you really need that.

Now, when you do these linkbait bump kinds of things, I would strongly recommend that you follow up, that you consider the quality of the content that you’re producing. Thereafter, that you invest in reproducing these resources, keeping those resources updated, and that you don’t simply give up on content production after this. However, if you’re a small business site, a small or medium business, you might think about only doing one or two of these a year. If you are a heavy content player, you’re doing a lot of content marketing, content marketing is how you’re investing in web traffic, I’d probably be considering these weekly or monthly at the least.

All right, everyone. Look forward to your experiences with the linkbait bump, and I will see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Source: moz

 

​​Measure Your Mobile Rankings and Search Visibility in Moz Analytics

Posted by jon.white

We have launched a couple of new things in Moz Pro that we are excited to share with you all: Mobile Rankings and a Search Visibility score. If you want, you can jump right in by heading to a campaign and adding a mobile engine, or keep reading for more details!

Track your mobile vs. desktop rankings in Moz Analytics

Mobilegeddon came and went with slightly less fanfare than expected, somewhat due to the vast ‘Mobile Friendly’ updates we all did at super short notice (nice work everyone!). Nevertheless, mobile rankings visibility is now firmly on everyone’s radar, and will only become more important over time.

Now you can track your campaigns’ mobile rankings for all of the same keywords and locations you are tracking on desktop.

For this campaign my mobile visibility is almost 20% lower than my desktop visibility and falling;
I can drill down to find out why

Clicking on this will take you into a new Engines tab within your Keyword Rankings page where you can find a more detailed version of this chart as well as a tabular view by keyword for both desktop and mobile. Here you can also filter by label and location.

Here I can see Search Visibility across engines including mobile;
in this case, for my branded keywords.

We have given an extra engine to all campaigns

We’ve given customers an extra engine for each campaign, increasing the number from 3 to 4. Use the extra slot to add the mobile engine and unlock your mobile data!

We will begin to track mobile rankings within 24 hours of adding the engine to a campaign. Once you are set up, you will notice a new chart on your dashboard showing Desktop vs. Mobile Search Visibility.

Measure your Search Visibility score vs. competitors

The overall Search Visibility for my campaign

Along with this change we have also added a Search Visibility score to your rankings data. Use your visibility score to track and report on your overall campaign ranking performance, compare to your competitors, and look for any large shifts that might indicate penalties or algorithm changes. For a deeper drill-down into your data you can also segment your visibility score by keyword labels or locations. Visit the rankings summary page on any campaign to get started.

How is Search Visibility calculated?

Good question!

The Search Visibility score is the percentage of clicks we estimate you receive based on your rankings positions, across all of your keywords.

We take each ranking position for each keyword, multiply by an estimated click-thru-rate, and then take the average of all of your keywords. You can think of it as the percentage of your SERPs that you own. The score is expressed as a percentage, though scores of 100% would be almost impossible unless you are tracking keywords using the “site:” modifier. It is probably more useful to measure yourself vs. your competitors rather than focus on the actual score, but, as a rule of thumb, mid-40s is probably the realistic maximum for non-branded keywords.
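
For clarity, here is a small TypeScript sketch of the mechanics described above. The click-through-rate curve is purely illustrative (Moz does not publish the exact estimates it uses); the point is simply position to estimated CTR, averaged across every tracked keyword.

// Position -> estimated CTR, averaged over all tracked keywords, expressed as a percentage.
const ESTIMATED_CTR: Record<number, number> = {
  1: 0.31, 2: 0.14, 3: 0.1, 4: 0.07, 5: 0.05,
  6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018,
};

// rank = null means "not ranking in the tracked positions" and contributes 0.
function searchVisibility(ranks: Array<number | null>): number {
  const clickShares = ranks.map((r) => (r === null ? 0 : ESTIMATED_CTR[r] ?? 0));
  const average = clickShares.reduce((sum, v) => sum + v, 0) / ranks.length;
  return average * 100;
}

console.log(searchVisibility([1, 3, 7, null, 12])); // ~8.8 for these five keywords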

Jeremy, our Moz Analytics TPM, came up with this metaphor:

Think of the SERPs for your keywords as villages. Each position on the SERP is a plot of land in SERP-village. The Search Visibility score is the average amount of plots you own in each SERP-village. Prime real estate plots (i.e., better ranking positions, like #1) are worth more. A complete monopoly of real estate in SERP-village would equate to a score of 100%. The Search Visibility score equates to how much total land you own in all SERP-villages.

Some neat ways to use this feature

  • Label and group your keywords, particularly when you add them – As visibility score is an average of all of your keywords, when you add or remove keywords from your campaign you will likely see fluctuations in the score that are unrelated to performance. Solve this by getting in the habit of labeling keywords when you add them. Then segment your data by these labels to track performance of specific keyword groups over time.
  • See how location affects your mobile rankings – Using the Engines tab in Keyword Rankings, use the filters to select just local keywords. Look for big differences between Mobile and Desktop where Google might be assuming local intent for mobile searches but not for desktop. Check out how your competitors perform for these keywords. Can you use this data?

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Source: moz