Local Listings: How Canadian Retailers Stack Up and What You Can Learn

Posted by rMaynes1

Local online listings are an essential component of any strategy to drive customers into local stores. Ranking highly in search results, and dominating the top positions with your listings, is critical if customers are to find you in an online search.

Mediative, one of North America’s leading digital marketing and advertising agencies, partnered with Placeable to take twenty-five of Canada’s top retail brands (those with multiple local stores, spread nationally) and analyze how they’re faring in local digital marketing compared to their American counterparts. The analysis found that 80% of the online listings for these twenty-five retailers are inconsistent, inaccurate, or missing information, and that the top twenty-five US retailers are outperforming them by over 28%.

The retailers’ digital presence was analyzed across four dimensions, and brands received a score from 0 to 100 for each of the four dimensions. The dimension scores were weighted and combined into a single overall score: the NatLo™ Score (“National-to-Local”). This overall score is an authoritative measure of a company’s local digital marketing performance:

Depth

Depth and accuracy of published location content, as well as the richness and completeness of site information. Some examples include name, address, phone number, descriptions, services, photos, calls-to-action, and more.

Brands that achieve exceptional depth deliver a better customer experience with richer content about their locations and offerings. Greater Depth also produces higher online to offline conversion rates and supports other marketing calls-to-action.

Visibility

Website effectiveness in search/discoverability. Some examples include site structure, page optimization, and web and mobile site performance.

Strong visibility produces higher search engine rankings and greater traffic. It also enables brands to achieve multiple listings in search results. Brands with poor Visibility surrender more traffic to directories and competitors.

Reach

Data consistency and coverage across third-party sites. Some examples include presence, completeness, and accuracy of location data on third-party sites such as Google, Facebook, Factual, Foursquare, and the Yellow Pages directory site (YellowPages.ca).

Brands with outstanding reach can be found by consumers across a range of search engines, social sites and apps. Poor Reach can lead to consumer confusion and misallocated marketing investments.

Precision

Geographic accuracy of location data. For example, the pin placement of each location based on latitude and longitude, or the dispersion of pins on third-party sites (pin spread).

Superior precision enables customers to efficiently navigate to a brand’s location. Failure to ensure Precision damages customer trust and increases the risk of competitive poaching.

Key findings of the report

  • Across twenty-five of Canada’s top retailers, on average, 80% of all third-party site listings are inconsistent, inaccurate, or missing information.
  • The top twenty-five US retailers outperformed retailers in Canada by over 28%. Canadian retailers are weak in comparison.
  • In the analysis of twenty-five of Canada’s top retail brands, seven (or 28%) did not have local landing pages on their website, a key component of local SEO strategy that’s needed to rank higher in search engines and get more traffic.
  • Of those that included local landing pages, 72% failed to provide any content over and above the basics of name, address, hours, and phone number on their location pages. Engaging local content on local pages increases a brand’s visibility on the search engine results page, and ultimately drives more traffic to the website and in-store.
  • When it comes to Facebook location data, 90% was either inconsistent or missing, and 75% of Google+ location data was either inconsistent or missing. Inadequate syndication of location data across the third party ecosystem leads to poor placement in search engine results and the loss of online site visits.
  • Only 8% of map pin placements were “good,” with 92% being “fair” or “poor.” Inaccurate pin placements lead to customer frustrations, and a stronger chance customers will visit competitors’ locations.

Canada’s best-performing retail brands

Brands with NatLo™ scores of over 70 have positioned themselves digitally to perform effectively within their local markets, drive consumer awareness, achieve online and mobile visibility, and capture the most web traffic and store visits. Brands achieving a score in the 60s are still doing well and have a good grasp of their local strategy; however, there are still areas that can be improved to maintain competitiveness. Scores below 60 indicate that there are areas in their local strategy that need improvement.

The average score across the Canadian retailers that were analyzed was just under 48—not a single one of the brands was excelling at maintaining accurate and consistent information across multiple channels. Ultimately, the listings accuracy of Canadian brands is weak.

The five top-performing brands analyzed, with their overall NatLo™ scores and their scores in each of the four dimensions, were as follows:

Brand | NatLo™ Score | Depth | Visibility | Reach | Precision
Jean Coutu | 60 | 80 | 66 | 37 | 56
Walmart | 60 | 70 | 64 | 63 | 42
Rona | 58 | 50 | 86 | 41 | 56
Lululemon Athletica | 57 | 40 | 74 | 60 | 54
Real Canadian Superstore | 55 | 50 | 51 | 72 | 45

Deep dive on depth: Jean Coutu

Jean Coutu performed exceptionally well in the dimension of depth, achieving the highest score across the retailers (80). What does Jean Coutu do to achieve a good depth score?

  1. The brand publishes the name, address, phone number, and hours for each location on its website.
  2. Additional location information is provided, including local flyers and offers, directions, services provided, brands offered, photos, and more (see image below). This delivers a much better customer experience.

Deep dive on visibility: Rona

Rona performed exceptionally well in terms of visibility, with a score of 86.

Rona achieves superior digital presence by using optimized mobile and web locators, plus page optimization tactics such as breadcrumb navigation. The website has a good store locator (as can be seen in the image below), with clean, location-specific URLs that are easy for Google to find (e.g. http://www.rona.ca/en/Rona-home-garden-kelowna-kelowna):

In this study of Canadian retailers, many brands struggled with visibility due to the absence of specific, indexable local landing pages and a lack of engaging local content.

Deep dive on reach: Real Canadian Superstore

Real Canadian Superstore performed exceptionally well in the dimension of reach with a score of 72. The brand has claimed all of its Facebook pages, and 69% of location data matches what is listed on their website. Real Canadian Superstore also performed well in terms of its Google+ Local pages, with 49% of location data matching. The brand has a good local presence on Facebook, YellowPages.ca, Factual, Foursquare, and Google+ Local. By claiming these pages, the brand is extending its online reach.

Across all third-party sites measured, Real Canadian Superstore was missing location data 0% of the time on average, compared to an average of 20% across all brands. Many of the brands struggled with external reach, either missing important location information on Facebook and Google+ or listing information that didn’t match the same location information on the brand’s own website.

Deep dive on precision: Rexall Pharmacy

Interestingly, none of the top-scoring retailers performed very well in terms of precision (accuracy and consistency of pin placements). Rexall Pharmacy was the top performer, with a score of 62. At the time of writing, their precision scores were as follows:

  • 39% were good.
  • 35% were fair.
  • 26% were poor.

In the retail industry, competing businesses can be located very close to one another, so if the location of your business on a map and the directions provided through the map are accurate, there’s less chance of your customers visiting your competitors’ locations instead of yours.

In conclusion

All in all, Canada’s top retail brands excel at brand advertising at the national level. But when it comes to driving local customers into local stores, a different strategy is required, and not all of the top retailers are getting it right. A solid local strategy requires addressing all four dimensions in order to achieve the synergies of an integrated approach. By emulating the tactics implemented by the four retailers highlighted above, even the smallest local business can make significant improvements to its local online strategy.

Rebecca Maynes, Manager of Content Marketing and Research with Mediative, was the major contributor on this report. The full report, including which brands in Canada are performing well and which need to make some improvements, is available for free download.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Source: moz

 

​Announcing MozCon Local 2016!

Posted by EricaMcGillivray

Looking to level up your local marketing and SEO skills? Join us in Seattle for MozCon Local, Thursday and Friday, February 18-19. With both an all-day conference and a half-day workshop, you’ll hear from top local speakers on topics critical to local marketing, including local link building, app search, mobile optimization, content creation, and so much more. For those who attended LocalUp Advanced last year, this is its newest iteration.

For Friday’s main show, we’ll have a full day of speakers giving in-depth presentations with tons of actionable tips. Whether you’re an in-house or agency marketer, a Yellow Pages publisher, or a consultant, you’ll come away with a long to-do list and new knowledge. Plus, you’ll be able to interact directly with speakers both during Q&A sessions and around the conference, and spend time getting to know your fellow local marketers.


We’ve teamed up with our friends at Local U again to bring you in-depth workshops on Thursday afternoon. If you have specific questions and needs for your clients or your local marketing, they’ll be able to dive into the details, give advice, and address issues unique to your business. For those of you who attended last year’s LocalUp, you’ll remember the great Q&A sessions. Local U is planning a couple of different tracks, from agency management to recommended tools, which will be a blast.

Buy your MozCon Local 2016 ticket!


Some of our great speakers (more coming!)

Darren Shaw

Whitespark

Darren Shaw is the President and Founder of Whitespark, a company that builds software and provides services to help businesses with local search. He’s widely regarded in the local SEO community as an innovator, one whose years of experience working with massive local data sets have given him uncommon insights into the inner workings of the world of citation-building and local search marketing. Darren has been working on the web for over 16 years and loves everything about local SEO.

David Mihm

Moz

David Mihm is one of the world’s leading practitioners of local search engine marketing. He has created and promoted search-friendly websites for clients of all sizes since the early 2000s. David co-founded GetListed.org, which he sold to Moz in November 2012.

Ed Reese

Sixth Man Marketing

Ed Reese leads a talented analytics and usability team at his firm Sixth Man Marketing, is a co-founder of LocalU, and an adjunct professor of digital marketing at Gonzaga University. In his free time, he optimizes his foosball and disc golf technique and spends time with his wife and two boys.

Emily Grossman

MobileMoxie

Emily Grossman is a Mobile Marketing Specialist at MobileMoxie, and she has been working with mobile apps since the early days of the app stores in 2010. She specializes in app search marketing, with a focus on strategic deep linking, app indexing, app launch strategy, and app store optimization (ASO).

Lindsay Wassell

Keyphraseology

Lindsay Wassell’s been herding bots and wrangling SERPs since 2001. She has a zeal for helping small businesses grow with improved digital presence. Lindsay is the CEO and founder of Keyphraseology.

Mary Bowling

Ignitor Digital

Mary Bowling’s been in SEO since 2003 and has specialized in Local SEO since 2006. When she’s not writing about, teaching, consulting and doing internet marketing, you’ll find her rafting, biking, and skiing/snowboarding in the mountains and deserts of Colorado and Utah.

Mike Ramsey

Nifty Marketing

Mike Ramsey is the president of Nifty Marketing and a founding faculty member of Local University. He is a lover of search and social with a heavy focus in local marketing and enjoys the chess game of entrepreneurship and business management. Mike loves to travel and loves his home state of Idaho.

Rand Fishkin

Moz

Rand Fishkin uses the ludicrous title, Wizard of Moz. He’s founder and former CEO of Moz, co-author of a pair of books on SEO, and co-founder of Inbound.org.

Robi Ganguly

Apptentive

Robi Ganguly is the co-founder and CEO of Apptentive, the easiest way for every company to communicate with their mobile app customers. A native Seattleite, Robi enjoys building relationships, running, reading, and cooking.


MozCon Local takes place at our headquarters in Seattle, which means you’ll be spending the day in the MozPlex. In addition to all the learning, we’ll be providing great swag. Thursday’s workshops will have a snack break and networking time, and for Friday’s show your ticket includes breakfast, lunch, and two snack breaks. Additionally, on Friday evening, we’ll have a networking party so you can meet your fellow attendees, some of whom you may have only met on Twitter. Face-to-face ftw!

We’re expecting around 200 people to join us, including speakers, Mozzers, and Local U staff. LocalUp Advanced sold out last year, and we expect MozCon Local to sell out, too, so you’ll want to buy your ticket now!

Our best early-bird prices:

Ticket | Normal price | Early-bird price
Friday conference, Moz or LocalU subscriber ticket | $599 | $399
Friday conference, GA ticket | $899 | $699
Thursday workshop, Moz or LocalU subscriber ticket | $399 | $299
Thursday workshop, GA ticket | $549 | $399

In order to attend both the conference and workshop, you must purchase tickets to each. Or you may choose to attend one or the other, depending on your needs.

Buy your MozCon Local 2016 ticket!


Source: moz

 

One Size Does Not Fit All: Driving Conversions Through Audience Analysis

Posted by SarahGurbach

“We need more content.”
– Every brand ever, at some point in the history of their company

Having worked as a digital consultant over the past few years, I have been exposed to a good number of brands in various industries. Some had content teams that consisted of one freelance copywriter, while others had a full-blown crew stocked with designers, videographers, and a slew of writers. Regardless of size, though, when discussing their content needs, there was always one common theme: they thought they needed more of it.

And honestly, my reaction would be something like:

“More content?! Easy! I know just the strategy to get you ranking for all the long-tail keywords surrounding your head term. I’ll do a keyword gap analysis, some competitive research, maybe a little trend reporting and come up with 15–20 content ideas for you to send to your copywriter. We’ll optimize those bad boys with title tags, H1s, and some not-so-secretly hidden CTAs, and we’re done. We’ll rank in the SERPs and get the masses to your site. Oh! And we can share this on social, too.”

Seriously, I won’t lie. That’s what I used to do. But then I got sick of blindly going into these things or trying to find some systematic way of coming up with a content strategy that could be used for any brand, of any size, in any industry, that would appeal to any consumer.

So instead of immediately saying yes, I started asking them “why”… roughly 5 times (h/t Wil Reynolds):

1. Why do you want more content?

“Because I want rankings.” (Well, at least they aren’t trying to hide it.)

2. Why do you want rankings?

“Because I want more traffic.” (Okay, we’re getting there.)

3. Why do you want more traffic?

“Because I want more brand awareness.” (Closer…)

4. Why do you want more brand awareness?

“Because I want people to buy my product.” (Ah, here we go.)

5. Why do you want people to buy your product?

“Because I want money.” (Bingo!)

Suddenly, it’s no longer just “we need more content,” but actually “we need the right kind of content for the right kind of audience at the right time in their journey.” And that may seem leaps and bounds more complicated than their original statement, but we aren’t dealing with the same kind of digital atmosphere anymore—and we sure aren’t dealing with the same consumers. Think With Google’s Customer Path to Purchase perfectly visualizes just how complex our consumers have become online.


And it doesn’t just stop there. At each of these interactions, the consumer will be at a different point in their journey, and they are going to need different content to help build their relationship with your brand. Now more than ever, it is imperative that you understand who your audience is and what is important to them…and then be where they are every step of the way.

Super easy, right? Let’s break it down. Here are some ways you can better understand your audience.

Who is your (right) audience?

“If your content is for everybody, then your content really is for nobody.”
Kristina Halvorson, MozCon 2015

While Kristina’s entire presentation was gold, that was probably my favorite line of this past MozCon. Knowing who your audience is (and who your audience isn’t) is pivotal in creating a successful content strategy. When you’re a brand, you have a tendency to fall into the trap of wanting to make everyone your audience. But you aren’t right for everyone, which is why you have a conversion rate of 0.02%. You don’t need to be the best brand for everyone; you just need to be the best brand for someone…and then see if they have friends.

But I’m not saying you have to go out and do more focus groups, consumer surveys, and personas (although it wouldn’t hurt to do a revamp every now and again). Let’s work with what you’ve got.

Analytics

As stated before, it’s all about targeting the right audience. Let’s say, in this case, the most important people for my business are those that complete a specific goal. Well, I want to find out everything I can about those people and what is bringing them to my site.

To do this, set up a segment in Google Analytics to see only the traffic that resulted in that goal completion:

  • Add Segment
    • Conditions
      • Find your specific goal
      • Change to > or = 1 per session
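
If you prefer to pull the same view programmatically, here is a minimal sketch using the Google Analytics Reporting API v4 (the Universal Analytics reporting API). It assumes a service-account key with read access and a converters segment that has already been built in the GA interface; the view ID, key file path, segment ID, and chosen dimensions below are placeholders, not values from this article.

```python
# Minimal sketch: sessions from a "converters" segment, broken down by
# age bracket and device category. All IDs and paths are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
KEY_FILE = "service-account.json"   # hypothetical key file
VIEW_ID = "12345678"                # hypothetical GA view ID
SEGMENT_ID = "gaid::abc123"         # hypothetical ID of the goal-completion segment

creds = service_account.Credentials.from_service_account_file(KEY_FILE, scopes=SCOPES)
analytics = build("analyticsreporting", "v4", credentials=creds)

report = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "365daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"}],
        # ga:segment is requested alongside the applied segment
        "dimensions": [{"name": "ga:segment"},
                       {"name": "ga:userAgeBracket"},
                       {"name": "ga:deviceCategory"}],
        "segments": [{"segmentId": SEGMENT_ID}],
    }]
}).execute()

for row in report["reports"][0]["data"].get("rows", []):
    print(row["dimensions"], row["metrics"][0]["values"])
```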

From there, you can use the demographics functionality in GA to take a deeper dive into that audience in particular:

You can look at age, gender, location, device, and more. You can even look at their interests:

I would also recommend doing this for particular groups of pages to better understand what kind of content brings in users that will convert. You can create groupings based on the type of content (e.g. help articles, branded content, top-of-the-funnel content) or you can just look at specific folders.

You can also use this segment to better analyze which sites are sending referral traffic that results in a goal completion, as this would be a strong indicator that those sites are speaking to an audience that is interested in your brand and/or product.

Twitter followers

While analyzing your current followers may only help you understand the audience you already have, it will absolutely help you find trends among people who are interested in your brand and could help you better target future strategies.

Let’s start with Twitter. I am a huge fan of Followerwonk for so many reasons. I use it for everything from audience analysis, to competitor research, to prospecting. But for the sake of better understanding your audience, throw your Twitter handle in and click “analyze their followers.”

Followerwonk will give you a sample size of 5,000, which still gives you a pretty good overview of your followers. However, if you export all of the data, you can analyze up to 100,000 followers. As a cheap beer enthusiast myself, I analyzed people following Rainier beer and was pleasantly surprised to see that I am in good company (hello, marketers).

You can also use Followerwonk to better understand when your audience is most active on Twitter, so you can prioritize when you’ll post the content you crafted specifically for those people when they’re most active.

Additionally, I am a big fan of Followerwonk’s ability to search Twitter bios for specific keywords. Not only is it useful for finding authorities in a specific space, it allows you to find all of the additional words that your audience is using to describe themselves.

Search Twitter bios for a keyword that is important to your business or a word that describes your target audience. Once you do that, export all the bios, throw those bad boys into a word-cloud tool, and see what you get.
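
If you’d rather skip the word-cloud tool, a few lines of Python will surface the same top terms from the exported bios. This is only a sketch: the CSV file name and the "bio" column are assumptions about what your Followerwonk export looks like, so adjust them to match your file.

```python
# Count the most common words across exported Twitter bios.
import re
from collections import Counter

import pandas as pd

STOPWORDS = {"the", "and", "of", "a", "an", "to", "in", "for", "i", "my", "on", "with"}

bios = pd.read_csv("followerwonk_bios.csv")["bio"].dropna()  # hypothetical export

words = Counter()
for bio in bios:
    for word in re.findall(r"[a-z']+", bio.lower()):
        if word not in STOPWORDS and len(word) > 2:
            words[word] += 1

# The top terms here are what a word-cloud tool would render biggest.
for word, count in words.most_common(25):
    print(f"{count:>5}  {word}")
```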

Obviously, “cheap beer” leads the way, but look at the other words: craft, wine, whiskey, expensive, connoisseur. Maybe, just maybe, cheap beer enthusiasts also know how to enjoy a fine craft beer every now and then. Would I love to read a cheap beer enthusiast’s guide to inexpensive craft beer? Why, yes, I would. And something tells me that those people on Twitter wouldn’t mind sharing it.

Is this mind blowing? Not necessarily. Does it take 5 minutes, help you better understand your audience, and give you some content ideas? Absolutely.

Facebook fans

Utilize Facebook insights as much as possible for figuring out which audience engages with your posts the most—that’s the audience you want to go after. Facebook defaults to “Your Fans,” but check out the “People Engaged” tab to see active fans.

At SearchLove last year, Simon Penson talked about how you can use Facebook to see whether your audience has a greater affinity for a certain product/brand/activity than the rest of their cohort, and I highly recommend you play around with that function on Facebook as well.

What do they need?

Internal site search

I like to look at site search data for two reasons: to find out what users are looking for, and to find out what users are having a hard time finding. I’ll elaborate on the latter and then go into detail about the former. If you notice that a lot of users are using internal site search to find content that you already have, chances are that content is not organized in a way that is easy to find. Consider fixing that, if possible.

I usually like to look at a year’s worth of data in GA, so change the dates to the past year and take a look at what is searched for most often on your site. In the example below, this educational client can easily tell that the most important things to their prospective students are tuition prices and the academic calendar. That may not be a surprise, but who knows what gems you may find in your own internal site search? If I were this client, I would definitely be playing into the financial aspects of their school, as it’s proven to be important.
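
If you want to slice that same data outside of the GA interface, a short script over an exported Site Search report does the job. The file name and column headers below are assumptions; match them to whatever your export actually contains.

```python
# Aggregate a year of exported internal site search data.
import pandas as pd

searches = pd.read_csv("site_search_last_year.csv")  # hypothetical export

top_terms = (searches.groupby("Search Term")["Total Unique Searches"]
                      .sum()
                      .sort_values(ascending=False))

print(top_terms.head(20))  # the topics your visitors care about most
```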

Questions

Similar to site search, it’s important to understand what questions your customers have about your product or industry. Being there to answer those questions allows you to be present at the beginning of their path to purchase, while being an authority in the space.

Don’t hit enter

This is an oldie but a serious goodie, and I still use it to this day. Start with the 5 Ws + your head term and see what pops up in Google Autocomplete. This isn’t the end-all be-all, but it’s a good starting point.
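
You can also collect those suggestions in bulk. The sketch below hits Google’s unofficial, undocumented suggest endpoint (the same data that powers Autocomplete), so treat it as a research convenience rather than a supported API; the head term is just an example.

```python
# Pull Autocomplete suggestions for the 5 Ws (plus "how") around a head term.
import requests

SUGGEST_URL = "https://suggestqueries.google.com/complete/search"
QUESTION_WORDS = ["who", "what", "when", "where", "why", "how"]

def autocomplete(query):
    # client=firefox returns a plain JSON array: [query, [suggestions, ...]]
    resp = requests.get(SUGGEST_URL, params={"client": "firefox", "q": query}, timeout=10)
    resp.raise_for_status()
    return resp.json()[1]

head_term = "sushi rice"  # example head term, swap in your own
for word in QUESTION_WORDS:
    for suggestion in autocomplete(f"{word} {head_term}"):
        print(suggestion)
```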

Use a handy tool

I haven’t been able to play around with all of Grepwords’ tools and functionalities, but I love the question portion. It basically helps you pull in all of the questions surrounding one keyword and provides the search volume.

Forums

This is a fun one. If you know that there are popular forums where people talk about your industry, products, and/or services, you can use advanced search queries to find anyone asking questions about your product or service. Example:

site:stackoverflow.com inurl:”brand name” AND “product name”

You can get super granular and even look for ones that haven’t been answered:

site:stackoverflow.com inurl:”brand name” AND “product name” -inurl:”answer”

From there, you can scrape the results in the SERPs and sift through the questions to see if there are any trends or issues.

Ask them

And sometimes, if you want to reach the human behind the computer, you have to actually talk to the human. If you are a B2B that has a sales department, have someone on the marketing team sit in on 10–15 of those calls to see if there are any trends in regards to the types of questions they ask or issues they have. If you are a B2C, try offering a small incentive to have your customer take a survey or chat with someone for ten minutes about their experience.

If you are not comfortable reaching out to your current customers, consider utilizing Google Consumer Surveys. After collecting data from GA and other social platforms, you can use that information to hyper-focus your audience segment or create some form of a qualifier question to ensure you are targeting the right audience for your questions.

While Consumer Surveys has its issues, overall it can be a great way to collect data. This is not the platform to ask fifty questions so you can create a buyer persona; instead, pick some questions that are going to help you understand your audience a bit more. Example questions are:

  • Before purchasing [product], what is your research process?
  • Are you active on social? If so, which channels?
  • What prevents you from purchasing a product?
  • What prevents you from purchasing from a specific brand?
  • What are your favorite sites to browse for articles?

Side note: I am also a huge fan of testing potential headlines before publishing content. Obviously, this is not something you will do for every blog post, but if I was Zulily and I was considering posting a major thought leadership piece, I would probably want to set up a 2-question survey:

  • Question #1: Are you a mom?
  • If yes, question #2: Which of these articles looks most interesting to you?

The great thing about that is you only get charged for the 2nd question if they pass the qualifier round.

Give ’em what they want

Now that you have a better understanding of the kind of people you want to target, it’s important that you spend the time creating content that will actually be of value to them. Continuously revisit these research methods as your audience grows and changes.

I rambled on about my favorite techniques, but I would love to hear how you go about better understanding your own audience. Sound off in the comments below, or shoot me a tweet @TheGurbs.


Source: moz

 

The Impact of Queries, Long and Short Clicks, and Click Through Rate on Google’s Rankings – Whiteboard Friday

Posted by randfish

Through experimentation and analysis of patents that Google has submitted, we’ve come to know some interesting things about what the engine values. In today’s Whiteboard Friday, Rand covers some of what Google likely learns from certain user behavior, specifically queries, CTR, and long vs. short clicks.

The Impact of Queries Whiteboard

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about click-through rate, queries and clicks, long versus short clicks, and queries themselves in terms of how they impact the ranking.

So recently we’ve had, in the last year, a ton of very interesting experiments, or at least a small handful of very interesting experiments, looking at the impact that could be had by getting together kind of a sample set of searchers and having them perform queries, click on things, click on things and click the back button. These experiments have actually shown results. At least some of them have. Others haven’t. There are interesting dichotomies between the two that I’ll talk about at the end of this.

But because of that and because of some recent patent applications and some research papers that have come to light from the search engines themselves, we started to rethink the way user and usage data are making their way into search engines, and we’re starting to rethink the importance of them. You can see that in the ranking factor survey this year, folks giving user and usage data a higher than ever sentiment around the importance of that information in search rankings.

3 elements that (probably) impact your rankings

So let me talk about three different elements that we are experiencing and that we have been talking about in the SEO world and how they can impact your rankings potentially.

1) Queries

This has to do primarily with a paper that Google wrote or a patent application that they wrote around site quality and the quality of search results. Queries themselves could be used in the search results to say, “Hey, wait a minute, we, Google, see a lot of searches that are combining a brand name with a generic term or phrase, and because we’re seeing that, we might start to associate the generic term with the brand term.”

I’ll give you an example. I’ve done a search here for sushi rice. You can see there’s Alton Brown ranking number one, and then norecipes.com, makemysushi.com, and then Morimoto. Morimoto’s recipe is in Food & Wine. If lots and lots of folks are starting to say like, “Wow, Morimoto sushi rice is just incredible,” and it kind of starts up this movement around, “How do we recreate Morimoto sushi rice,” so many, many people are performing searches specifically for Morimoto sushi rice, not just generic sushi rice, Google might start to see that and say, “You know what? Because I see that hundreds of people a day are searching for this particular brand, the Morimoto sushi rice recipe, maybe I should take the result from Morimoto on foodandwine.com and move that higher up in the rankings than I normally would have them.”

Those queries themselves are impacting the search results for the non-branded version, just the sushi rice version of that query. Google’s written about this. We’re doing some interesting testing around this right now with the IMEC Labs, and maybe I’ll be able to report more soon in the future on the impact of that. Some folks in the SEO space have already reported that they see this impact as their brand grows, and as these brand associations grow, their rankings for the non-branded term rise as well, even if they’re not earning a bunch of links or getting a lot of other ranking signals that you’d normally expect.

2) Clicks and click through rate

So Google might be thinking about whether there’s a result that’s significantly over-performing the ordinary performance for its position. So, for example, let’s look at the third result. Here’s “How to make perfect sushi rice.”

This is from makemysushi.com. Let’s imagine that, in this set of search results, the position three result normally gets about an 11% click-through rate on average, but Google is seeing that makemysushi.com is getting a 25% click-through rate, much higher than that normal 11%. Well, Google might kind of scratch their head and go, “You know what? It seems like whatever is showing here, the snippet or the title, the domain, the meta description, is really interesting to folks. So perhaps we should rank them higher than they rank today.”

Maybe the click-through rate is a signal to Google of, “Gosh, people are deeply interested in this. It’s more interesting than the average result in that position. Let’s move them up.” This is something I’ve tested, that IMEC Labs have tested, and we’ve seen results. At least when it’s done with real searchers and enough of them to have an impact, you can kind of observe this. There was a post on my blog last year, and we did a series of several experiments, several of which have shown results time and time again. That’s a pretty interesting one, that click-through rate can work like that.

3) Long versus short clicks

So this is essentially if searchers are clicking on a particular result, but they’re immediately clicking the back button and going back to the search results and choosing a different result, that could tell the search engine, could tell Google that, “You know, maybe that result is not that great. Maybe searchers are deeply unhappy with that result for whatever reason.”

For example, let’s say Google looked at number two, norecipes.com, and they looked at number four from Food & Wine, and they said, “Gosh, the number two result has an average time on site of 11 seconds and a bounce-back-to-the-SERPs rate of 76%. So 76% of searchers who click on No Recipes from this particular search come back and choose a different result. Clearly they’re very disappointed.

But number four, the Food & Wine result, from Morimoto, time on site average is like 2 minutes and 50 seconds. That’s where we see them, and of course they can get this data from places like Chrome. They can get it from Android. They are not necessarily looking at the same numbers that you’re looking at in your Analytics. They’re not taking it from Google Analytics. I believe them when they say that they’re not. But certainly if you look at the terms of use in terms of service for Chrome and Android, they are allowed to collect that data and use it any way they want.

The return to SERPs rate is only 9%. So 91% of the people who are hitting Food & Wine, they’re staying on there. They’re satisfied. They don’t have to search for sushi rice recipes anymore. They’re happy. Well, this tells Google, “Maybe that number two result is not making my searchers happy, and potentially I should rank number four instead.”

There are some important items to consider around all this…

Because if your gears turn the way my gears turned, you’re always thinking like, “Wait a minute. Can’t black hat folks manipulate this stuff? Isn’t this really open to all sorts of noise and problems?” The answer is yeah, it could be. But remember a few things.

First off, gaming this almost never works.

In fact, there is a great study published on Search Engine Land. It was called, I think, something like “Click-through rate is not an organic ranking signal. It doesn’t work.” It talked about a guy who fired up a ton of proxy servers, had them click a bunch of stuff, faking traffic essentially by using bots, and didn’t see any movement at all.

But you compare that to another report that was published on Search Engine Land, again just recently, which replicated the experiment that I and the IMEC Labs folks did using real human beings, and they did see results. The rankings rose rather quickly and kind of stayed there. So real human beings searching, very different story from bots searching.

Look, we remember back in the days when AdWords first came out, when Omniture was there, that Google did spend a ton of time and a lot of work to identify fraudulent types of clicks, fraudulent types of search activity, and they do a great job of limiting that in the AdWords account. I’m sure that they’re doing that on the organic SEO side as well.

So manipulation is going to be very, very tough if not impossible. If you don’t get real searchers and a real pattern that looks like a bunch of people who are logged in, logged out, geographically distributed, distributed by demographic profile, distributed by previous searcher behavior, look like they’re real normal people searching, if you don’t have that kind of a pattern, this stuff is not going to work. Plenty of our experiments didn’t work as well.

What if I make my site better for no gain in rankings?

Even if none of this is a ranking factor. Even if you say to yourself, “You know what? Rand, none of the experiments that you ran or IMEC Labs ran or the Search Engine Land study published, none of them, I don’t believe them. I think they’re all wrong. I find holes in all of them.” Guess what? So what? It doesn’t matter.

Is there any reason that you wouldn’t optimize for a higher click-through rate? Is there any reason you wouldn’t optimize for longer clicks versus shorter clicks? Is there any reason that you wouldn’t optimize to try and get more branded search traffic, people associating your brand with the generic term? No way. You’re going to do this any way. It’s one of those wonderful benefits of doing holistic, broad thinking SEO and broad organic marketing in general that helps you whether you believe these are ranking signals or not, and that’s a great thing.

The experiments have been somewhat inconsistent.

But there are some patterns in them. As we’ve been running these, what we’ve seen is if you get more people searching, you tend to have a much better chance of getting a good result. The test that I ran on Twitter and on social media, that had several thousand people participating, up, up, up, up, rose right up to the top real fast. The ones that only had a few hundred people didn’t seem to move the needle.

Same story with long tail queries versus more head of the demand curve stuff. It’s harder to move more entrenched rankings just like it would be with links. The results tended to last only between a few hours and a few days. I think that makes total sense as well, because after you’ve inflated the click signals or query signals or long click signals or whatever it is with these experimental results, over time those are going to fall away and the norm that existed previously is going to return. So naturally you would expect to see those results return back to what they were prior to the experiments.

So with all that said, I’m looking forward to some great discussion in the Q&A. I know that plenty of you out there have been trying and experimenting on your own with this stuff, and some of you have seen great results from improving your click-through rates, improving your snippets, making your pages better for searchers and keeping them on it longer. I’m sure we’re going to have some interesting discussion about all these types of experiments.

So we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Source: moz

 

Case Study: Can You Fake Blog Post Freshness?

Posted by anthonydnelson

Over the years, you’ve certainly read something about how Google loves fresh content. Perhaps you’ve read that sometimes it takes its love of freshness too far.

Now it’s the middle of 2015. Does freshness still play a significant role in how Google ranks search results?

To find out, I decided to conduct a small experiment on a blog. Specifically, I wanted to see if my test could answer the following questions:

  1. If you update a blog post’s date, will it receive a boost in the search engine results pages (SERPs)?
  2. Can you fake freshness?
  3. Do you have to make changes to the content?
  4. If there is a boost present, how long does it last?

Details of the test

  • This test was performed on 16 blog posts on the same site
  • All posts were originally published between September 2010 and March 2014. Each post was at least one year old at the time of this experiment.
  • Each post (except No. 16) received organic traffic throughout 2014, showing an ability to consistently rank in the SERPs
  • URLs for these posts did not change
  • The content was not edited at all
  • The content focused on evergreen topics (not the type of queries that would be an obvious fit for Query Deserves Freshness (QDF))
  • Only the publishing date was changed. On April 17th, the dates of these posts were set to either April 16th or April 15th, making them all look like they were one to two days old.
  • Each blog post shows the publishing date on-page
  • Posts were not intentionally shared on social media. A few of the more trafficked posts likely received a couple of tweets/likes/pins, but nothing out of the ordinary.
  • Google Search Console, Ahrefs and Open Site Explorer (OSE) did not show any new external links pointed at the posts during the time of testing

Baseline organic traffic

Before starting the test, I took a look at how the test posts were performing in organic search.

The graph below shows the organic traffic received by each of the 16 test posts for the four full weeks (March 15 – April 11) prior to the test beginning.

The important thing to note here is the organic traffic received by each page was relatively static. These posts were not bouncing around, going from 200 visits to 800 visits each week. There is little variation.

baseline organic traffic

The blue line and corresponding number highlights the weekly average for each post, which we will compare to the graph below.

Turning the test on

This one was pretty easy to implement. It took me about 15 minutes to update all of the publishing dates for the blog posts.

All posts were updated on April 17th. I began collecting traffic data again on April 26th, giving Google a week to crawl and process the changes.
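
The post doesn’t say which platform the blog runs on. If it were a WordPress site with the REST API enabled and an application password set up, the same 15-minute job could be scripted along the lines below; the site URL, credentials, and post IDs are placeholders.

```python
# Sketch: bulk-update only the publish date of a set of WordPress posts.
import requests

SITE = "https://example-blog.com"               # hypothetical site
AUTH = ("editor-user", "application-password")  # hypothetical credentials
NEW_DATE = "2015-04-16T09:00:00"

post_ids = [101, 102, 103]  # hypothetical IDs of the test posts

for post_id in post_ids:
    resp = requests.post(
        f"{SITE}/wp-json/wp/v2/posts/{post_id}",
        auth=AUTH,
        json={"date": NEW_DATE},  # only the date changes; content is untouched
        timeout=30,
    )
    resp.raise_for_status()
    print(post_id, resp.json()["date"])
```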

Organic traffic after republishing

All 16 posts received a boost in organic traffic.

This graph shows the average organic traffic that each post received for the first four full weeks (April 26 through May 23) after republishing.

organic traffic after republishing blog posts

I expected a lift, but I was surprised at how significant it was.

Look at some of those posts, doubling in average traffic over a one month period. Crazy.

Faking the date on a blog post had a major impact on my traffic levels.

Post No. 16 received a lift as well, but was too small to register on the graph. The traffic numbers for that post were too low to be statistically significant in any way. It was thrown into the test to see if a post with almost no organic traffic could become relevant entirely from freshness alone.

Percentage lift

The graph below shows the percentage lift each post received in organic traffic.

organic lift from updating blog dates

Post No. 14 above actually received a 663% lift, but it skewed the visibility of the chart data so much that I intentionally cut it off.

The 16 posts received 3,601 organic visits in the four weeks beginning March 15 and ending April 11 (an average of 225 organic visits per post over those four weeks). In the four weeks following republishing, these 16 posts received 6,003 organic visits (an average of 375 organic visits per post).

Overall, there was a 66% lift.
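
As a quick sanity check, the lift arithmetic works out from the aggregate numbers above:

```python
# Aggregate organic visits for the 16 posts, four weeks before vs. after.
visits_before = 3601
visits_after = 6003
posts = 16

lift = (visits_after - visits_before) / visits_before
print(f"Overall lift: {lift:.1%}")   # 66.7%, the ~66% reported above

# Average organic visits per post over each four-week window.
print(visits_before / posts, visits_after / posts)  # ~225 vs. ~375
```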

Search impressions (individual post view)

Below you will find a few screenshots from Google Search Console showing the search impressions for a couple of these posts.

Note: Sixteen screenshots seemed like overkill, so here are a few that show a dramatic change. The rest look very similar.

Google Search Console screenshots showing the lift in search impressions for four of the test posts

What surprised me the most was how quickly their visibility in the SERPs jumped up.

Keyword rankings

It’s safe to assume the lift in search impressions was caused by improved keyword rankings.

I wasn’t tracking rankings for all of the queries these posts were targeting, but I was tracking a few.

Keyword ranking graphs for three tracked queries, each showing a boost after republishing

The first two graphs above show a dramatic improvement in rankings, both going from the middle of the second page to the middle of the first page. The third graph appears to show a smaller boost, but moving a post that is stuck around No. 6 up to the No. 2 spot in Google can lead to a large traffic increase.

Organic traffic (individual posts view)

Here is the weekly organic traffic data for four of the posts in this test.

You can see an annotation in each screenshot below on the week each post was republished. You will notice how relatively flat the traffic is prior to the test, followed by an immediate jump in organic traffic.

Google Analytics organic traffic graphs for four of the republished posts

These only contain one annotation for the sake of this test, but I recommend that you heavily annotate your analytics accounts when you make website changes.

Why does this work?

Did these posts all receive a major traffic boost just from faking the publishing date alone?

  • Better internal linking? Updating a post date brings a post from deep in the archive closer to your blog’s home page. Link equity should flow through to it more easily. While that is certainly true, six of the 16 posts above were linked sitewide from the blog sidebar or top navigation. I wouldn’t expect those posts to see a dramatic lift from moving up in the feed because they were already well linked from the blog’s navigation.
  • Mobilegeddon update? In the Search Console screenshots above, you will see the Mobilegeddon update highlighted just a couple of days after the test began. It is clear that each post jumped dramatically before this update hit. The blog that it was tested on had been responsive for over a year, and no other posts saw a dramatic lift during this time period.
  • Google loves freshness? I certainly think this is still the case. Old posts that rank well appear to see an immediate boost when their publishing date is updated.

Conclusions

Let’s take a second look at the questions I originally hoped this small test would answer:

  1. If you update a blog post’s date, will it receive a boost in the SERPs? Maybe.
  2. Can you fake freshness? Yes.
  3. Do you have to make changes to the content? No.
  4. If there is a boost present, how long does it last? In this case, approximately two months, but you should test!

Should you go update all your post dates?

Go ahead and update a few blog post dates of your own. It’s possible you’ll see a similar lift in the SERPs. Then report back in a few weeks with the results in the comments on this post.

First, though, remember that the posts used in my test were solid posts that already brought in organic traffic. If your post never ranked to begin with, changing the date isn’t going to do much, if anything.

Don’t mistake this as a trick for sustained growth or as a significant finding. This is just a small test I ran to satisfy my curiosity. There are a lot of variables that can influence SEO tests, so be sure to run your own. Instead of blindly trusting that what you read about working for others on SEO blogs will work for you, draw your own conclusions from your own data.

For now, though, “fresh” content still wins.


Source: moz

 

Content, Shares, and Links: Insights from Analyzing 1 Million Articles

Posted by Steve_Rayson

This summer BuzzSumo teamed up with Moz to analyze the shares and links of over 1m articles. We wanted to look at the correlation of shares and links, to understand the content that gets both shares and links, and to identify the formats that get relatively more shares or links.

What we found is that the majority of content published on the internet is simply ignored when it comes to shares and links. The data suggests most content is simply not worthy of sharing or linking, and also that people are very poor at amplifying content. It may sound harsh but it seems most people are wasting their time either producing poor content or failing to amplify it.

On a more positive note we also found some great examples of content that people love to both share and link to. It was not a surprise to find content gets far more shares than links. Shares are much easier to acquire. Everyone can share content easily and it is almost frictionless in some cases. Content has to work much harder to acquire links. Our research uncovered:

  • The sweet spot content that achieves both shares and links
  • The content that achieves higher than average referring domain links
  • The impact of content formats and content length on shares and links

Our summary findings are as follows:

  • The majority of posts receive few shares and even fewer links. In a randomly selected sample of 100,000 posts, over 50% had 2 or fewer Facebook interactions (shares, likes, or comments) and over 75% had zero external links. This suggests there is a lot of very poor content out there and also that people are very poor at amplifying their content.
  • When we looked at a bigger sample of 750,000 well-shared posts, we found that over 50% of these posts still had zero external links. This suggests that while many posts acquire shares, and in some cases large numbers of shares, they find it far harder to acquire links.
  • Shares and links are not normally distributed around an average. There are high-performing outlier posts that get a lot of shares and links, but most content is grouped at the low end, with close to zero shares and links. For example, over 75% of articles from our random sample of 100,000 posts had zero external links and 1 or fewer referring domain links.
  4. Across our total sample of 1m posts there was NO overall correlation of shares and links, implying people share and link for different reasons. The correlation of total shares and referring domain links across 750,000 articles was just 0.021.
  5. There are, however, specific content types that do have a strong positive correlation of shares and links. This includes research backed content and opinion forming journalism. We found these content formats achieve both higher shares and significantly more links.
  6. 85% of content published (excluding videos and quizzes) is less than 1,000 words long. However, long form content of over 1,000 words consistently receives more shares and links than shorter form content. Either people ignore the data or it is simply too hard for them to write quality long form content.
  7. Content formats matter. Formats such as entertainment videos and quizzes are far more likely to be shared than linked to. Some quizzes and videos get hundreds of thousands of shares but no links.
  8. List posts and videos achieve much higher shares on average than other content formats. However, in terms of achieving links, list posts and why posts achieve a higher number of referring domain links than other content formats on average. While we may love to hate them, list posts remain a powerful content format.

We have outlined the findings in more detail below. You can download the full 30 page research report from the BuzzSumo site:

Download the full 30-page research report

The majority of posts receive few shares and even fewer links

We pulled an initial sample of 757,000 posts from the BuzzSumo database. 100,000 of these posts were pulled at random and acted as a control group. As we wanted to investigate certain content formats, the other 657,000 were well-shared videos, ‘how to’ posts, list posts, quizzes, infographics, and why posts. The overall sample therefore had a specific bias towards well-shared posts and specific content formats. However, despite this bias towards well-shared articles, 50% of our 757,000 articles still had 11 or fewer Twitter shares and 50% of the posts had zero external links.

By comparison, 50% of the 100,000 randomly selected posts had 2 or fewer Twitter shares, 2 or fewer Facebook interactions, 1 or fewer Google+ shares, and zero LinkedIn shares. 75% of the posts had zero external links and 1 or fewer referring domain links.

75% of randomly selected articles had zero external links

Shares and links are not normally distributed

Shares and links are not distributed normally around an average. Some posts go viral and get very high numbers of shares and links. This distorts the average; the vast majority of posts receive very few shares or links and sit at the bottom of a very skewed distribution curve, as shown below.

This chart is cut off on the right at 1,000 shares; in fact, the long thin tail would extend a very long way, as a number of articles received over 1m shares and one received 5.7m shares.

This long tail distribution is the same for shares and links across all the domains we analyzed. The skewed nature of the distribution means that averages can be misleading due to the long tail of highly shared or linked content. In the example below we show the distribution of shares for a domain. In this example the average is the blue line but 50% of all posts lie to the left of the red line, the median.
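
The mean-versus-median gap is easy to reproduce with any long-tailed data. The snippet below uses synthetic, lognormally distributed share counts (not the study’s data) purely to illustrate why the average lands so far to the right of the median:

```python
# Synthetic long-tailed "share counts": the mean is dragged up by outliers.
import numpy as np

rng = np.random.default_rng(42)
shares = rng.lognormal(mean=3.0, sigma=1.8, size=100_000).astype(int)

print("mean:  ", round(float(shares.mean())))   # pulled up by a few viral posts
print("median:", int(np.median(shares)))        # where the typical post actually sits
```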

There is NO correlation of shares and links

We used the Pearson correlation coefficient, a measure of the linear correlation between two variables. The result ranges from 1 (a total positive correlation) through 0 (no correlation) to −1 (a total negative correlation).

The overall correlations for our sample were:

  • Total shares and referring domain links: 0.021
  • Total shares and sub-domain links: 0.020
  • Total shares and external links: 0.011

The results suggest that people share and link to content for different reasons.
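
For reference, the correlation itself is a one-liner with SciPy. The sketch below assumes the per-article metrics have been exported to a CSV; the file and column names are placeholders, not BuzzSumo’s actual export format.

```python
# Pearson correlation between shares and referring domain links per article.
import pandas as pd
from scipy.stats import pearsonr

articles = pd.read_csv("articles_sample.csv")  # hypothetical export
r, p_value = pearsonr(articles["total_shares"],
                      articles["referring_domain_links"])
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
```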

We also looked at different social networks to see if there were more positive correlations for specific networks. We found no strong positive correlation of shares to referring domain links across the different networks as shown below.

  • Facebook total interactions: 0.0221
  • Twitter: 0.0281
  • LinkedIn: 0.0216
  • Pinterest: 0.0065
  • Google+: 0.0058

Whilst there is no correlation by social network, there is some evidence that very highly shared posts have a higher correlation of shares and links. This can be seen below.

Content sample | Average total shares | Median shares | Average referring domain links | Median referring domain links | Correlation of total shares to referring domains
Full sample of posts (757,317) | 4,393 | 202 | 3.77 | 1 | 0.021
Posts with over 10,000 total shares (69,114) | 35,080 | 18,098 | 7.06 | 2 | 0.101

The increased correlation is relatively small; however, it does indicate that very popular sites, other things being equal, would have slightly higher correlations of shares and links.

Our finding that there is no overall correlation contradicts previous studies that have suggested there is a positive correlation of shares and links. We believe the previous findings may have been due to inadequate sampling as we will discuss below.

The content sweet spot: content with a positive correlation of shares and links

Our research found there are specific content types that have a high correlation of shares and links. This content attracts both shares and links, and as shares increase so do referring domain links. Thus whilst content is generally shared and linked to for different reasons, there appears to be an overlap where some content meets the criteria for both sharing and linking.


The content that falls into this overlap area, our sweet spot, includes content from popular domains such as major publishers. In our sample the content also included authoritative, research backed content, opinion forming journalism and major news sites.

In our sample of 757,000 well shared posts the following were examples of domains that had a high correlation of shares and links.

| Site | Number of articles in sample | Referring domain links – total shares correlation |
| --- | --- | --- |
| The Breast Cancer Site | 17 | 0.90 |
| New York Review of Books | 11 | 0.95 |
| Pew Research | 25 | 0.86 |
| The Economist | 129 | 0.73 |

We were very cautious about drawing conclusions from this data as the individual sample sizes were very small. We therefore undertook a second, separate sampling exercise for domains with high correlations. This analysis is outlined in the next section below.

Our belief is that previous studies may have sampled content disproportionately from popular sites within the area of overlap. This would explain a positive correlation of shares and links. However, the data shows that the domains in the area of overlap are actually outliers when it comes to shares and links.

Sweet-spot content: opinion-forming journalism and research-backed content

In order to explore further the nature of content on sites with high correlations we looked at a further 250,000 random articles from those domains.

For example, we looked at 49,952 articles from the New York Times and 46,128 from the Guardian. These larger samples had a lower correlation of links and shares, as we would expect due to the samples having a lower level of shares overall. The figures were as follows:

| Domain | Number of articles in sample | Average total shares | Average referring domain links | Correlation of total shares to domain links |
| --- | --- | --- | --- | --- |
| Nytimes.com | 49,952 | 918 | 3.26 | 0.381 |
| Theguardian.com | 46,128 | 797 | 10.18 | 0.287 |

We then subsetted various content types to see if particular types of content had higher correlations. During this analysis we found that opinion content from these sites, such as editorials and columnists, had significantly higher average shares and links, and a higher correlation. For example:

| Opinion content | Number of articles in sample | Average total shares | Average referring domain links | Correlation of total shares to domain links |
| --- | --- | --- | --- | --- |
| Nytimes.com | 4,143 | 3,990 | 9.2 | 0.498 |
| Theguardian.com | 19,606 | 1,777 | 12.54 | 0.433 |

The higher shares and links may be because opinion content tends to be focused on current trending areas of interest and because the authors take a particular slant or viewpoint that can be controversial and engaging.

We decided to look in more detail at opinion-forming journalism. For example, we looked at over 20,000 articles from The Atlantic and New Republic. In both cases we saw a high correlation of shares and links, combined with a high number of referring domain links, as shown below.

| Domain | Number of articles in sample | Average total shares | Average referring domain links | Correlation of total shares to domain links |
| --- | --- | --- | --- | --- |
| TheAtlantic.com | 16,734 | 2,786 | 18.82 | 0.586 |
| NewRepublic.com | 6,244 | 997 | 12.8 | 0.529 |

This data appears to support the hypothesis that authoritative, opinion shaping journalism sits within the content sweet spot. It particularly attracts more referring domain links.

The other content type that had a high correlation of shares and links in our original sample was research backed content. We therefore sampled more data from sites that publish a lot of well researched and evidenced content. We found content on these sites had a significantly higher number of referring domain links. The content also had a higher correlation of links and shares as shown below.

| Domain | Number of articles in sample | Average total shares | Average referring domain links | Correlation of total shares to domain links |
| --- | --- | --- | --- | --- |
| FiveThirtyEight.com | 1,977 | 1,783 | 18.5 | 0.55 |
| Priceonomics.com | 541 | 1,797 | 11.49 | 0.629 |
| PewResearch.com | 892 | 751 | 25.7 | 0.4 |

Thus whilst overall there is no correlation of shares and links, there are specific types of content that do have a high correlation of shares and links. This content appears to sit close to the center of the overlap of shares and links, our content sweet spot.

The higher correlation appears to be driven by this content achieving a higher level of referring domain links. Shares are generally much easier to achieve than referring domain links; you have to work much harder to earn links, and research-backed content and authoritative, opinion-shaping journalism appear to be better at earning them.

Want shares and links? Create deep research or opinion-forming content

Our conclusion is that if you want to create content that achieves a high level of both shares and links, then you should concentrate on opinion-forming, authoritative content on current topics or well-researched and evidenced content. This post falls very clearly into the latter category, so we shall see if this proves to be the case here.

The impact of content format on shares and links

We specifically looked at the issue of content formats. Previous research had suggested that some content formats may have a higher correlation of shares and links. Below are the details of shares and links by content format from our sample of 757,317 posts.

| Content type | Number in sample | Average total shares | Average referring domain links | Correlation of total shares & referring domain links |
| --- | --- | --- | --- | --- |
| List post | 99,935 | 10,734 | 6.19 | 0.092 |
| Quiz | 69,757 | 1,374 | 1.6 | 0.048 |
| Why post | 99,876 | 1,443 | 5.66 | 0.125 |
| How-to post | 99,937 | 1,782 | 4.41 | 0.025 |
| Infographic | 98,912 | 268 | 3.67 | 0.017 |
| Video | 99,520 | 8,572 | 4.13 | 0.091 |

What stands out is the high level of shares for list posts and videos.

By contrast, the average level of shares for infographics is very low. Whilst the top infographics did well (there were 343 infographics with more than 10,000 shares), the majority of infographics in our sample performed poorly. Over 50% of infographics (53,000 in our sample) had zero external links, and 25% had fewer than 10 shares in total across all networks. This may reflect a recent trend of turning everything into an infographic, leading to many poor pieces of content.

What also stands out is the relatively low number of referring domain links for quizzes. People may love to share quizzes but they are less likely to link to them.

In terms of the correlation of total shares and referring domain links, Why posts had the highest correlation of all content types at 0.125. List posts and videos also had a higher correlation than the overall sample correlation of 0.021.

List posts appear to perform consistently well as a content format in terms of both shares and links.

Some content types are more likely to be shared than linked to

Surprising, unexpected and entertaining images, quizzes and videos have the potential to go viral with high shares. However, this form of content is far less likely to achieve links.

Entertaining content such as Vine videos and quizzes often had zero links despite very high levels of shares. Here are some examples.

| Content | Total shares | External links | Referring domain links |
| --- | --- | --- | --- |
| Vine video (https://vine.co/v/O0VvMWL5F2d) | 347,823 | 0 | 0 |
| Vine video (https://vine.co/v/O071IWJYEUi) | 253,041 | 0 | 1 |
| Disney Dog Quiz (http://blogs.disney.com/oh-my-disney/2014/06/30/qu…) | 259,000 | 0 | 1 |
| Brainfall Quiz (http://www.brainfall.com/quizzes/how-bitchy-are-yo…) | 282,058 | 0 | 0 |

Long form content consistently receives more shares and links than shorter-form content

We removed videos and quizzes from our initial sample to analyze the impact of content length. This gave us a sample of 489,128 text-based articles, which broke down by content length as follows:

| Length (words) | Number in sample | Percent |
| --- | --- | --- |
| <1,000 | 418,167 | 85.5% |
| 1,000-2,000 | 58,642 | 12% |
| 2,000-3,000 | 8,172 | 1.7% |
| 3,000-10,000 | 3,909 | 0.8% |

Over 85% of articles had fewer than 1,000 words.

We looked at the impact of content length on total shares and domain links.

| Length (words) | Average total shares | Average referring domain links |
| --- | --- | --- |
| <1,000 | 2,823 | 3.47 |
| 1,000-2,000 | 3,456 | 6.92 |
| 2,000-3,000 | 4,254 | 8.81 |
| 3,000-10,000 | 5,883 | 11.07 |

We can see that long form content consistently gets higher average shares and significantly higher average links. This supports our previous research findings, although there are exceptions, particularly with regard to shares. One such exception we identified is IFL Science, which publishes short form content shared by its 21m Facebook fans. The site curates images and videos to explain scientific research and findings. This article examines how they create their short form viral content. However, IFL Science is very much an exception. On average, long form content performs better, particularly when it comes to links.

When we looked at the impact of content length on the correlation of shares and links, we found that content of over 1,000 words had a higher correlation, but the correlation did not increase further beyond 2,000 words.

| Length (words) | Correlation of shares to links |
| --- | --- |
| <1,000 | 0.024 |
| 1,000-2,000 | 0.113 |
| 2,000-3,000 | 0.094 |
| 3,000+ | 0.072 |

The impact of combined factors

We have not undertaken any detailed linear regression modelling or built any predictive models but it does appear that a combination of factors can increase shares, links and the correlation. For example, when we subsetted List posts to look at those over 1,000 words in length, the average number of referring domain links increased from 6.19 to 9.53. Similarly in our original sample there were 1,332 articles from the New York Times. The average number of referring domain links for the sample was 7.2. When we subsetted out just the posts over 1,000 words the average number of referring domain links increased to 15.82. When we subsetted out just the List posts the average number of referring domain links increased further to 18.5.
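The kind of subsetting described above is straightforward to reproduce on any article-level dataset. Here is a minimal sketch assuming Python with pandas; the rows and column names are invented for illustration and are not our actual data:

```python
import pandas as pd

# Hypothetical per-article data: domain, format, word count, referring domain links.
df = pd.DataFrame({
    "domain": ["nytimes.com", "nytimes.com", "nytimes.com", "other.com"],
    "format": ["list", "how-to", "list", "list"],
    "words": [1450, 600, 2100, 800],
    "referring_domain_links": [22, 3, 15, 4],
})

nyt = df[df["domain"] == "nytimes.com"]
print(nyt["referring_domain_links"].mean())  # baseline average for the domain

# Subset again: long-form list posts only.
long_lists = nyt[(nyt["words"] > 1000) & (nyt["format"] == "list")]
print(long_lists["referring_domain_links"].mean())
```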

The combined impact of factors such as overall site popularity, content format, content type and content length is an area for further investigation. However, the initial findings do indicate that shares and/or links can be increased when some of these factors are combined.

You can download the full 30-page research report from the BuzzSumo site:

Download the full 30-page research report

Steve will be discussing the findings at a Mozinar on September 22, at 10.30am Pacific Time. You can register and save your place here https://attendee.gotowebinar.com/register/41189941…


Source: moz

 

Clean Your Site’s Cruft Before It Causes Rankings Problems – Whiteboard Friday

Posted by randfish

We all have it. The cruft. The low-quality, or even duplicate-content pages on our sites that we just haven’t had time to find and clean up. It may seem harmless, but that cruft might just be harming your entire site’s ranking potential. In today’s Whiteboard Friday, Rand gives you a bit of momentum, showing you how you can go about finding and taking care of the cruft on your site.

Cleaning the Cruft from Your Site Before it Causes Pain and Problems with your Rankings Whiteboard


Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about cleaning out the cruft from your website. By cruft what I mean is low quality, thin quality, duplicate content types of pages that can cause issues even if they don’t seem to be causing a problem today.

What is cruft?

If you were to, for example, launch a large number of low quality pages, pages that Google thought were of poor quality, that users didn’t interact with, you could find yourself in a seriously bad situation, and that’s for a number of reasons. So Google, yes, certainly they’re going to look at content on a page by page basis, but they’re also considering things domain wide.

So they might look at a domain and see lots of these green pages, high quality, high performing pages with unique content, exactly what you want. But then they’re going to see like these pink and orange blobs of content in there, thin content pages with low engagement metrics that don’t seem to perform well, duplicate content pages that don’t have proper canonicalization on them yet. This is really what I’m calling cruft, kind of these two things, and many variations of them can fit inside those.

But one issue with cruft for sure it can cause Panda issues. So Google’s Panda algorithm is designed to look at a site and say, “You know what? You’re tipping over the balance of what a high quality site looks like to us. We see too many low quality pages on the site, and therefore we’re not just going to hurt the ranking ability of the low quality pages, we’re going to hurt the whole site.” Very problematic, really, really challenging and many folks who’ve encountered Panda issues over time have seen this.

There are also other, probably non-directly-Panda-related things, like site-wide algorithmic analysis of engagement and quality. So, for example, there was a recent analysis of the Phantom II update that Google did, which hasn’t really been formalized very much and Google hasn’t said anything about it. But one of the things that analysis looked at was the engagement of pages on the sites that got hurt versus the engagement of pages on the sites that benefited, and you saw a clear pattern. Engagement on sites that benefited tended to be higher; on those that were hurt, it tended to be lower. So again, it could be not just Panda but other things that will hurt you here.

It can waste crawl bandwidth, which sucks. Especially if you have a large site or complex site, if the engine has to go crawl a bunch of pages that are cruft, that is potentially less crawl bandwidth and less frequent updates for crawling to your good pages.

It can also hurt from a user perspective. User happiness may be lowered, and that could mean a hit to your brand perception. It could also drive down better converting pages. It’s not always the case that Google is perfect about this. They could see some of these duplicate content, some of these thin content pages, poorly performing pages and still rank them ahead of the page you wish ranked there, the high quality one that has good conversion, good engagement, and that sucks just for your conversion funnel.

So all sorts of problems here, which is why we want to try and proactively clean out the cruft. This is part of the SEO auditing process. If you look at a site audit document, if you look at site auditing software, or step-by-step how-to’s, like the one from Annie that we use here at Moz, you will see this problem addressed.

How do I identify what’s cruft on my site(s)?

So let’s talk about some ways to proactively identify cruft and then some tips for what we should do afterwards.

Filter that cruft away!

One of those ways for sure that a lot of folks use is Google Analytics or Omniture or Webtrends, whatever your analytics system is. What you’re trying to design there is a cruft filter. So I got my little filter. I keep all my good pages inside, and I filter out the low quality ones.

What I can use is one of two things. First, a threshold for bounce or bounce rate or time on site, or pages per visit, any kind of engagement metric that I like I can use that as a potential filter. I could also do some sort of a percentage, meaning in scenario one I basically say, “Hey the threshold is anything with a bounce rate higher than 90%, I want my cruft filter to show me what’s going on there.” I’d create that filter inside GA or inside Omniture. I’d look at all the pages that match that criteria, and then I’d try and see what was wrong with them and fix those up.

The second one is basically I say, “Hey, here’s the average time on site, here’s the median time on site, here’s the average bounce rate, median bounce rate, average pages per visit, median, great. Now take me 50% below that or one standard deviation below that. Now show me all that stuff, filters that out.”
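If your analytics tool can export page-level engagement data, both of those scenarios are easy to script. A minimal sketch, assuming Python with pandas and a hypothetical CSV export whose column names ("page", "bounce_rate", "avg_time_on_page") are placeholders for whatever your tool actually exports:

```python
import pandas as pd

# Hypothetical page-level export from your analytics tool, with columns
# "page", "bounce_rate" (0-1) and "avg_time_on_page" (seconds).
df = pd.read_csv("pages_engagement.csv")

# Scenario 1: a hard threshold, e.g. bounce rate above 90%.
cruft_by_threshold = df[df["bounce_rate"] > 0.90]

# Scenario 2: relative to your own site's norm, e.g. time on page more
# than one standard deviation below the mean.
cutoff = df["avg_time_on_page"].mean() - df["avg_time_on_page"].std()
cruft_by_deviation = df[df["avg_time_on_page"] < cutoff]

print(cruft_by_threshold["page"].tolist())
print(cruft_by_deviation["page"].tolist())
```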

This process is going to capture thin and low quality pages, the ones I’ve been showing you in pink. It’s not going to catch the orange ones. Duplicate content pages are likely to perform very similarly to the thing that they are a duplicate of. So this process is helpful for one of those, not so helpful for other ones.

Sort that cruft!

For that process, you might want to use something like Screaming Frog or OnPage.org, which is a great tool, or Moz Analytics, which comes from some company I’ve heard of.

Basically, in this case, you’ve got a cruft sorter that is essentially looking at filtration, items that you can identify in things like the URL string or in title elements that match or content that matches, those kinds of things, and so you might use a duplicate content filter. Most of these pieces of software already have a default setting. In some of them you can change that. I think OnPage.org and Screaming Frog both let you change the duplicate content filter. Moz Analytics not so much, same thing with Google Webmaster Tools, now Search Console, which I’ll talk about in a sec.

So I might say like, “Hey, identify anything that’s more than 80% duplicate content.” Or if I know that I have a site with a lot of pages that have only a few images and a little bit of text, but a lot of navigation and HTML on them, well, maybe I’d turn that up to 90% or even 95% depending.

I can also use some rules to identify known duplicate content violators. So for example, if I’ve identified that everything with a URL parameter like a question mark ref equals bounce, or a partner parameter, is a duplicate, well, okay, now I just need to filter for that particular URL string, or I could look for titles. So if I know that, for example, one of my pages has been heavily duplicated throughout the site, or a certain type has, I can look for all the titles containing those and then filter out the dupes.

I can also do this for content length. Many folks will look at content length and say, “Hey, if there’s a page with fewer than 50 unique words on it in my blog, show that to me. I want to figure out why that is, and then I might want to do some work on those pages.”
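Pulled together, these URL-pattern and content-length rules amount to a simple flagging function. A minimal sketch in Python; the patterns and the 50-word threshold are illustrative, not a recommendation:

```python
import re

def looks_like_cruft(url, body_text, min_unique_words=50):
    """Flag pages that match known duplicate URL patterns or are very thin."""
    # Illustrative duplicate-content URL patterns; replace with your own.
    dupe_patterns = [r"[?&]ref=", r"[?&]partner=", r"/print/"]
    if any(re.search(pattern, url) for pattern in dupe_patterns):
        return True
    # Thin content: fewer than min_unique_words distinct words on the page.
    unique_words = set(re.findall(r"[a-z']+", body_text.lower()))
    return len(unique_words) < min_unique_words

print(looks_like_cruft("https://example.com/page?ref=bounce", "short text"))
```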

Ask the SERP providers (cautiously)

Then the last one that we can do for this identification process is Google and Bing Webmaster Tools/Search Console. They have existing filters and features that aren’t very malleable. We can’t do a whole lot with them, but they will show you potential site crawl issues, broken pages, sometimes dupe content. They’re not going to catch everything though. Part of this process is to proactively find things before Google finds them and Bing finds them and start considering them a problem on our site. So we may want to do some of this work before we go, “Oh, let’s just shove an XML sitemap to Google and let them crawl everything, and then they’ll tell us what’s broken.” A little risky.

Additional tips, tricks, and robots

A couple additional tips, analytics stats, like the ones from GA or Omniture or Webtrends, they can totally mislead you, especially for pages with very few visits, where you just don’t have enough of a sample set to know how they’re performing or ones that the engines haven’t indexed yet. So if something hasn’t been indexed or it just isn’t getting search traffic, it might show you misleading metrics about how users are engaging with it that could bias you in ways that you don’t want to be biased. So be aware of that. You can control for it generally by looking at other stats or by using these other methods.

When you’re doing this, the first thing you should do is, any time you identify cruft, remove it from your XML sitemaps. That’s just good hygiene, good practice. Oftentimes that alone is a preventative measure that’s enough to keep you from getting hurt here.

However, there’s no one-size-fits-all methodology beyond “don’t include it in your XML sitemap.” If it’s a duplicate, you want to canonicalize it. I don’t necessarily want to delete all these pages. Maybe I want to delete some of them, but I need to be careful about that. Maybe they’re printer-friendly pages. Maybe they’re pages that have a specific format. It’s a PDF version instead of an HTML version. Whatever it is, you want to identify those and probably canonicalize.

Is it useful to no one? Like literally, absolutely no one. You don’t want engines visiting. You don’t want people visiting it. There’s no channel that you care about that page getting traffic to. Well you have two options — 301 it. If it’s already ranking for something or it’s on the topic of something, send it to the page that will perform well that you wish that traffic was going to, or you can completely 404 it. Of course, if you’re having serious trouble or you need to remove it entirely from engines ASAP, you can use the 410 permanently delete. Just be careful with that.
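How those status codes look in practice depends entirely on your stack. As one hedged example, here is what the 301 and 410 cases might look like in a small Python/Flask app; Flask and the route names are assumptions for the sketch, not something from the post:

```python
from flask import Flask, abort, redirect

app = Flask(__name__)

@app.route("/old-cruft-page")
def old_cruft_page():
    # Still ranks or gets some traffic: 301 it to the page you wish
    # that traffic was going to.
    return redirect("/high-quality-page", code=301)

@app.route("/useless-page")
def useless_page():
    # Useful to literally no one and needs to drop out of the index fast: 410.
    abort(410)
```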

Is it useful to some visitors, but not search engines? Like you don’t want searchers to find it in the engines, but if somebody goes and is paging through a bunch of pages and that kind of thing, okay, great, I can use noindex, follow for that in the meta robots tag of the page.

If there’s no reason bots should access it at all, like you don’t care about them following the links on it, this is a very rare use case, but there can be certain types of internal content that maybe you don’t want bots even trying to access, like a huge internal file system that particular kinds of your visitors might want to get access to but nobody else, you can use the robots.txt file to block crawlers from visiting it. Just be aware it can still get into the engines if it’s blocked in robots.txt. It just won’t show any description. They’ll say, “We are not showing a site description for this page because it’s blocked by robots.”

If the page is almost good, like it’s on the borderline between pink and green here, well just make it good. Fix it up. Make that page a winner, get it back in the engines, make sure it’s performing well, find all the pages like that have those problems, fix them up or consider recreating them and then 301’ing them over if you want to do that.

With this process, hopefully you can prevent yourself from getting hit by the potential penalties, or being algorithmically filtered, or just being identified as not that great a website. You want Google to consider your site as high quality as they possibly can. You want the same for your visitors, and this process can really help you do that.

Looking forward to the comments, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Source: moz

 

New Study: Data Reveals 67% of Consumers are Influenced by Online Reviews

Posted by Dhinckley

Google processed over 1 trillion search queries in 2014. As Google Search continues to further integrate into our normal daily activities, those search results become increasingly important, especially when individuals are searching for information about a company or product.

To better understand just how much of an impact Google has on an individual’s purchasing decisions, we set up a research study with a group of 1,000 consumers through Google Consumer Surveys. The study investigates how individuals interact with Google and other major sites during the buying process.

Do searchers go beyond page 1 of Google?

We first sought to understand how deeply people went into the Google search results. We wanted to know if people tended to stop at page 1 of the search results, or if they dug deeper into page 2 and beyond. A better understanding of how many pages of search results are viewed provides insight into how many result pages we should monitor related to a brand or product.

When asked, 36% of respondents claimed to look through the first two pages or more of search results. But actual search data makes it clear that results below the top five on the first page are viewed in less than 2% of searches. From this, it is clear that actual consumer behavior differs from self-reported search activity.

Do Searchers Go Beyond Page 1 of Google?

Takeaway: People are willing to view as many as two pages of search results but rarely do so during normal search activities.

Are purchasing decisions affected by online reviews?

Google has integrated reviews into the Google+ Local initiative and often displays these reviews near the top of search results for businesses. Other review sites, such as Yelp and TripAdvisor, will also often rank near the top for search queries for a company or product. Because of the prevalence of review sites appearing in the search results for brands and products, we wanted a better understanding of how these reviews impacted consumers’ decision-making.

We asked participants, “When making a major purchase such as an appliance, a smart phone, or even a car, how important are online reviews in your decision-making?”

The results revealed that online reviews impact the purchasing decisions of 67.7% of respondents. More than half of the respondents (54.7%) said that online reviews are a fairly, very, or absolutely important part of their decision-making process.

Purchasing Decisions and Online Reviews

Takeaway: Companies need to take reviews seriously. Restaurant review stories receive all the press, but most companies will eventually have pages from review sites ranking for their names. Building a strong base of positive reviews now will help protect against any negative reviews down the road.

When do negative reviews cost your business customers?

Our research also uncovered that businesses risk losing as many as 22% of customers when just one negative article is found by users considering buying their product. If three negative articles pop up in a search query, the potential for lost customers increases to 59.2%. Have four or more negative articles about your company or product appearing in Google search results? You’re likely to lose 70% of potential customers.

Negative Articles and Sales

Takeaway: It is critical to keep page 1 of your Google search results clean of any negative content or reviews. Having just one negative review could cost you nearly a quarter of all potential customers who began researching your brand (which means they were likely deep in the conversion funnel).

What sites do people visit before buying a product or service?

Google Search is just one of the sites that consumers can visit to research a brand or product. We thought it would be interesting to identify other popular consumer research sites.

Interestingly, most people didn’t seem to remember visiting any of the popular review sites. Instead, the source that got the most attention was Google+ Local reviews. Another noteworthy finding was that Amazon came in second, with half the selections that Google received. Finally, the stats show that more people look to Wikipedia for information about a company than to Yelp or TripAdvisor.

Prepurchase Research Sources

Takeaway: Brands should invest time and effort into building a strong community on Google+, which could lead to receiving more positive reviews on the social platform.

Online reviews impact the bottom line

The results of the study show that online reviews have a significant influence on the decision-making process of consumers. The data supports the fact that Internet users are generally willing to look at the first and second page of Google search results when searching for details about a product or company.

We can also conclude that online review sites like Google+ Local are heavily visited by potential customers looking for information, and the more negative content they find there, the less likely they will be to purchase your products or visit your business.

All this information paints a clear picture that what is included in Google search results for a company or product name will inevitably have an impact on the profitability of that company or product.

Internal marketing teams and public relations (PR) firms must consider the results that Google displays when people search for their company name or merchandise. Negative reviews, negative press, and other damaging feedback can have a lasting impact on a company’s ability to sell its products or services.

How to protect your company in Google search results

A PR or marketing team must be proactive to effectively protect a company’s online reputation. The following tactics can help prevent a company from suffering from deleterious online reviews:

  • First, identify if negative articles already exist on the first two pages of search results for a Google query of a company name or product (e.g., “Walmart”). This simple task should be conducted regularly. Google often shifts search results around, so a negative article—which typically attracts a higher click-through rate—is unfortunately likely to climb the rankings as individuals engage with the piece.
  • Next, monitor and analyze the current sentiment of reviews on popular review sites like Google+ and Amazon. Other sites, like Yelp or Trip Advisor, should also be checked often, as they can quickly climb Google search results. Do not attempt to artificially alter the results, but instead look for best practices on how to improve Yelp reviews or other review sites and implement them. The goal is to naturally improve the general buzz around your business.
  • If negative articles exist, there are solutions for improvement. A company’s marketing and public relations team may benefit by highlighting and/or generating positive press and reviews about the product or service through SEO and ORM efforts. By gaining control of the search results for your company or product, you will be in control of the main message that individuals see when looking for more information about your business. That’s done by working to ensure prospects and customers enjoy a satisfying experience when interacting with your brand, whether online or offline.

Being proactive with a brand’s reputation, as viewed in the Google search results and on review sites, does have an impact on the bottom line.

As we see in the data, people are less likely to make a purchase as the number of negative reviews in Google search results increases.

By actively ensuring that honest, positive reviews appear, you can win over potential customers.


Source: moz

 

A Beginner’s Guide to Google Search Console

Posted by Angela_Petteys

If the name “Google Webmaster Tools” rings a bell for you, then you might already have an idea of what Google Search Console is. Since Google Webmaster Tools (GWT) has become a valuable resource for so many different types of people besides webmasters—marketing professionals, SEOs, designers, business owners, and app developers, to name a few—Google decided to change its name in May of 2015 to be more inclusive of its diverse group of users.

If you aren’t familiar with GWT or Google Search Console, let’s head back to square one. Google Search Console is a free service that lets you learn a great deal of information about your website and the people who visit it. You can use it to find out things like how many people are visiting your site and how they are finding it, whether more people are visiting your site on a mobile device or desktop computer, and which pages on your site are the most popular. It can also help you find and fix website errors, submit a sitemap, and create and check a robots.txt file.

Ready to start taking advantage of all that Google Search Console has to offer? Let’s do this.

Adding and verifying a site in Google Search Console

If you’re new to Google Search Console, you’ll need to add and verify your site(s) before you can do anything else. Adding and verifying your site in Search Console proves to Google that you’re either a site’s owner, webmaster, or other authorized user. After all, Search Console provides you with all sorts of incredibly detailed information and insights about a site’s performance. Google doesn’t want to hand that kind of information over to anybody who asks for it.

Adding a site to Search Console is a very simple process. First, log into your Search Console account. Once you’re logged in, you’ll see a box next to a red button which says “Add Property.”

Add a Site to Search Console.png

Enter the URL of the site you’re trying to add in the box and click “Add Property.” Congratulations, your site is now added to your Search Console account!

Next, you will be asked to verify your site. There are a few different ways you can go about this. Which method will work best for you depends on whether or not you have experience working with HTML, if you have access to upload files to the site, the size of your site, and whether or not you have other Google programs connected to your site. If this sounds overwhelming, don’t worry—we’ll help you figure it out.

Adding an HTML tag

This verification method is best for users and site owners who have experience working with HTML code.

Manage Property.png

From the Search Console dashboard, select “Manage Property,” then “Verify this property.” If the “HTML Tag” option does not appear under “Recommended method,” then you should click on the “Alternate methods” tab and select “HTML tag.” This will provide you with the HTML code you’ll need for verification.

Verify HTML Tag Edit.png

Copy the code and use your HTML editor to open the code for your site’s homepage. Paste the code provided in the <Head> section of the HTML code. If your site already has a meta tag or other code in the <Head> section, it doesn’t matter where the verification code is placed in relation to the other code; it simply needs to be in the <Head> section. If your site doesn’t have a <Head> section, you can create one for the sake of verifying the site.

Once the verification code has been added, save and publish the updated code, and open your site’s homepage. From there, view the site’s source code. The verification code should be visible in the <Head> section.
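If you’d rather not eyeball the source by hand, a quick scripted check works too. A minimal sketch in Python using only the standard library; the URL is a placeholder, and it assumes the tag Google gives you uses the google-site-verification meta name:

```python
import urllib.request

# Fetch the live homepage and confirm the verification tag sits in <head>.
url = "https://www.example.com/"
html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
head = html.lower().split("</head>")[0]

print("google-site-verification" in head)  # True means the tag is in place
```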

Once you’re sure the code is added to your site’s homepage, go back to Search Console and click “Verify.” Google will then check your site’s code for the verification code. If the code is found, you will see a screen letting you know the site has been verified. If not, you will be provided with information about the errors it encountered.

When your site has been verified by Search Console, do not remove the verification code from your site. If the code is removed, it will cause your site to become unverified.

Uploading an HTML file

To use this method, you must be able to upload files to a site’s root directory.

From the Search Console dashboard, select “Manage site,” then “Verify this site.” If “HTML file upload” is not listed under “Recommended method,” it should be listed under the “Alternate method” tab.

HTML File Method.png

When you select this method, you will be asked to download an HTML file. Download it, then upload it to the specified location. Do not make any changes to the content of the file or the filename; the file needs to be kept exactly the same. If it is changed, Search Console will not be able to verify the site.

After the HTML file has been uploaded, go back to Search Console and click “Verify.” If everything has been uploaded correctly, you will see a page letting you know the site has been verified.

Once you have verified your site using this method, do not delete the HTML file from your site. This will cause your site to become unverified.

Verifying via domain name provider

The domain name provider is the company you purchased a domain from or where your website is hosted. When you verify using your domain name provider, it not only proves you’re the owner of the main domain, but that you also own all of the subdomains and subdirectories associated with it. This is an excellent option if you have a large website.

From the Search Console dashboard, select “Manage site,” then “Verify this site.” If you don’t see the “Domain name provider” option listed under “Recommended method,” look under the “Alternate method” tab.

Domain Name Provider Method.png

When you select “Domain name provider,” you will be asked to choose your domain name provider from a list of commonly used providers, such as GoDaddy.com. If your provider is not on this list, choose “Other” and you will be given instructions on how to create a DNS TXT record for your provider. If a DNS TXT record doesn’t work for your provider, you will have the option of creating a CNAME record instead.

Adding Google Analytics code

If you already use Google Analytics (GA) to monitor your site’s traffic, this could be the easiest option for you. But first, you’ll need to be able to check the site’s HTML code to make sure the GA tracking code is placed within the <Head> section of your homepage’s code, not in the <Body> section. If the GA code is not already in the <Head> section, you’ll need to move it there for this method to work.

From the Search Console dashboard, select “Manage site,” then “Verify this site.” If you don’t see the “Google Analytics tracking code” option under the “Recommended method,” look under the “Alternate method” tab. When you select “Google Analytics tracking method,” you’ll be provided with a series of instructions to follow.

Google Analytics Code Method 2.png

Once your site has been verified, do not remove the GA code from your site, or it will cause your site to become unverified.

Using Google Tag Manager

If you already use Google Tag Manager (GTM) for your site, this might be the easiest way to verify your site. If you’re going to try this method, you need to have “View, Edit, and Manage” permissions enabled for your account in GTM. Before trying this method, look at your site’s HTML code to make sure the GTM code is placed immediately after your site’s <Body> tag.

From the Search Console dashboard, select “Manage site,” then “Verify this site.” If you don’t see the “Google Tag Manager” option listed under “Recommended method,” it should appear under “Alternate method.”

Google Tag Manager Method.png

Select “Google Tag Manager” and click “Verify.” If the Google Tag Manager code is found, you should see a screen letting you know your site has been verified.

Once your site is verified, do not remove the GTM code from your site, or your site will become unverified.

How to link Google Analytics with Google Search Console

Google Analytics and Google Search Console might seem like they offer the same information, but there are some key differences between these two Google products. GA is more about who is visiting your site—how many visitors you’re getting, how they’re getting to your site, how much time they’re spending on your site, and where your visitors are coming from (geographically speaking). Google Search Console, in contrast, is geared more toward internal information—who is linking to you, whether there is malware or other problems on your site, and which keyword queries your site is appearing for in search results. Analytics and Search Console also do not treat some information in exactly the same ways, so even if you think you’re looking at the same report, you might not be getting the exact same information in both places.

To get the most out of the information provided by Search Console and GA, you can link the two accounts together. Linking them integrates the data from both sources and gives you additional reports that you can only access once the accounts are connected. So, let’s get started:

Has your site been added and verified in Search Console? If not, you’ll need to do that before you can continue.

From the Search Console dashboard, click on the site you’re trying to connect. In the upper righthand corner, you’ll see a gear icon. Click on it, then choose “Google Analytics Property.”

Google Analytics Property.jpg

This will bring you to a list of Google Analytics accounts associated with your Google account. All you have to do is choose the desired GA account and hit “Save.” Easy, right? That’s all it takes to start getting the most out of Search Console and Analytics.

Adding a sitemap

Sitemaps are files that give search engines and web crawlers important information about how your site is organized and the type of content available there. Sitemaps can include metadata, with details about your site such as information about images and video content, and how often your site is updated.

By submitting your sitemap to Google Search Console, you’re making Google’s job easier by ensuring it has the information it needs to crawl your site more efficiently. Submitting a sitemap isn’t mandatory, though, and your site won’t be penalized if you don’t submit one. But there’s certainly no harm in submitting one, especially if your site is very new and not many other sites are linking to it, if you have a very large website, or if your site has many pages that aren’t thoroughly linked together.
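For reference, a basic sitemap is just an XML file listing your URLs. Here is a minimal sketch that writes one with Python’s standard library; the URLs and dates are placeholders for your own pages, and most CMSs or plugins will generate this file for you, so treat it only as an illustration of what the file contains:

```python
from xml.sax.saxutils import escape

# Placeholder URLs and last-modified dates for your own pages.
pages = [
    ("https://www.example.com/", "2015-09-01"),
    ("https://www.example.com/about", "2015-08-15"),
]

entries = "".join(
    f"  <url><loc>{escape(loc)}</loc><lastmod>{lastmod}</lastmod></url>\n"
    for loc, lastmod in pages
)

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + entries +
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```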

Before you can submit a sitemap to Search Console, your site needs to be added and verified in Search Console. If you haven’t already done so, go ahead and do that now.

From your Search Console dashboard, select the site you want to submit a sitemap for. On the left, you’ll see an option called “Crawl.” Under “Crawl,” there will be an option marked “Sitemaps.”

Crawl Sitemap.png

Click on “Sitemaps.” There will be a button marked “Add/Test Sitemap” in the upper righthand corner.

Add Test Sitemap 4.png

This will bring up a box with a space to add text to it.

Add Test Sitemap Submit.png

Type “system/feeds/sitemap” in that box and hit “Submit sitemap.” Congratulations, you have now submitted a sitemap!

Checking a robots.txt file

Having a website doesn’t necessarily mean you want all of its pages or directories indexed by search engines. If there are certain things on your site you’d like to keep out of search engines, you can accomplish this by using a robots.txt file. A robots.txt file placed in the root of your site tells search engine robots (i.e., web crawlers) what you do and do not want crawled, by using commands known as the Robots Exclusion Standard.

It’s important to note that robots.txt files aren’t necessarily guaranteed to be 100% effective in keeping things away from web crawlers. The commands in robots.txt files are instructions, and although the crawlers used by credible search engines like Google will accept them, it’s entirely possible that a less reputable crawler will not. It’s also entirely possible for different web crawlers to interpret commands differently. Robots.txt files also will not stop other websites from linking to your content, even if you don’t want it indexed.
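Before reaching for the tester in Search Console, you can also sanity-check a live robots.txt file locally. A minimal sketch using Python’s standard-library parser; the domain, user-agents, and paths are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Check what a live robots.txt allows for a given crawler.
rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

print(rp.can_fetch("Googlebot", "https://www.example.com/private/report.html"))
print(rp.can_fetch("*", "https://www.example.com/blog/"))
```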

If you want to check your robots.txt file to see exactly what it is and isn’t allowing, log into Search Console and select the site whose robots.txt file you want to check. Haven’t already added or verified your site in Search Console? Do that first.

Search Console Crawl Robots 2.png

On the lefthand side of the screen, you’ll see the option “Crawl.” Click on it and choose “robots.txt Tester.” The robots.txt Tester tool will let you look at your robots.txt file, make changes to it, and alert you to any errors it finds. You can also choose from a selection of Google’s user-agents (names for robots/crawlers), enter a URL you wish to allow/disallow, and run a test to see whether the URL is recognized by that crawler.

Robots txt Tester Tool.png

If you make any changes to your robots.txt file using Google’s robots.txt tester, the changes will not be automatically reflected in the robots.txt file hosted on your site. Luckily, it’s pretty easy to update it yourself. Once your robots.txt file is how you want it, hit the “Submit” button underneath the editing box in the lower righthand corner. This will give you the option to download your updated robots.txt file. Simply upload that to your site in the same directory where your old one was (www.example.com/robots.txt). Obviously, the domain name will change, but your robots.txt file should always be named “robots.txt” and the file needs to be saved in the root of your domain, not www.example.com/somecategory/robots.txt.

Back on the robots.txt testing tool, hit “Verify live version” to make sure the correct file is on your site. Everything correct? Good! Click “Submit live version” to let Google know you’ve updated your robots.txt file and they should crawl it. If not, re-upload the new robots.txt file to your site and try again.

Fetch as Google and submit to index

If you’ve made significant changes to a website, the fastest way to get the updates indexed by Google is to submit it manually. This will allow any changes done to things such as on-page content or title tags to appear in search results as soon as possible.

The first step is to sign into Google Search Console. Next, select the page you need to submit. If the website does not use the ‘www.’ prefix, then make sure you click on the entry without it (or vice versa.)

On the lefthand side of the screen, you should see a “Crawl” option. Click on it, then choose “Fetch as Google.”

Fetch as Google Edit.png

Clicking on “Fetch as Google” should bring you to a screen that looks something like this:

Fetch as Google 2.png

If you need to fetch the entire website (such as after a major site-wide update, or if the homepage has had a lot of remodeling done) then leave the center box blank. Otherwise, use it to enter the full address of the page you need indexed, such as http://example.com/category. Once you enter the page you need indexed, click the “Fetch and Render” button. Fetching might take a few minutes, depending on the number/size of pages being fetched.

After the indexing has finished, there will be a “Submit to Index” button that appears in the results listing at the bottom (near the “Complete” status). You will be given the option to either “Crawl Only This URL,” which is the option you want if you’re only fetching/submitting one specific page, or “Crawl This URL and its Direct Links,” if you need to index the entire site.

Click this, wait for the indexing to complete, and you’re done! Google now has sent its search bots to catalog the new content on your page, and the changes should appear in Google within the next few days.

Site errors in Google Search Console

Nobody wants to have something wrong on their website, but sometimes you might not realize there’s a problem unless someone tells you. Instead of waiting for someone to tell you about a problem, Google Search Console can immediately notify you of any errors it finds on your site.

If you want to check a site for internal errors, select the site you’d like to check. On the lefthand side of the screen, click on “Crawl,” then select “Crawl Errors.”

Site Errors Tool.png

You will then be taken directly to the Crawl Errors page, which displays any site or URL errors found by Google’s bots while indexing the page. You will see something like this:

Errors Page.png

Any URL errors found will be displayed at the bottom. Click on any of the errors for a description of the error encountered and further details.

Error Details.png

Record any encountered errors, including screenshots if appropriate. If you aren’t responsible for handling site errors, notify the person who is so they can correct the problem(s).

We hope this guide has been helpful in acquainting you with Google Search Console. Now that everything is set up and verified, you can start taking in all the information that Google Search Console has for you.


Source: moz

 

Traffic and Engagement Metrics and Their Correlation to Google Rankings

Posted by Royh

When Moz undertook this year’s Ranking Correlation Study (Ranking Factors), there was a desire to include data points never before studied. Fortunately, SimilarWeb had exactly what was needed. For the first time, Moz was able to measure ranking correlations with both traffic and engagement metrics.

Using Moz’s ranking data on over 200,000 domains, combined with multiple SimilarWeb data points—including traffic, page views, bounce rate, time on site, and rank—the Search Ranking Factors study was able to measure how these metrics corresponded to higher rankings.

These metrics differ from the traditional SEO parameters Moz has measured in the past in that they are primarily user-based metrics. This means that they vary based on how users interact with the individual websites, as opposed to static features such as title tag length. We’ll find these user-based metrics important as we learn how search engines may use them to rank webpages, as illustrated in this excellent post by Dan Petrovic.

Every marketer and SEO professional wants to know if there is a correlation between web search ranking results and the website’s actual traffic. Here, we’ll examine the relationship between website rankings and traffic engagement to see which metrics have the biggest correlation to rankings.

You can view the results below:

Traffic correlated to higher rankings

For the study, we examined both direct and organic search visits over a three-month period. SimilarWeb’s traffic results show that there is generally a high correlation between website visits and Google’s search rankings.

Put simply, the more traffic a site received, the higher it tended to rank. Practically speaking, this means that you would expect to see sites like Amazon and Wikipedia higher up in the results, while smaller sites tended to rank slightly worse.

This doesn’t mean that Google uses traffic and user engagement metrics as an actual ranking factor in its search algorithm, but it does show that a relationship exists. Hypothetically, we can think of many reasons why this might be the case:

  • A “brand” bias, meaning that Google may wish to treat trusted, popular, and established brands more favorably.
  • Possible user-based ranking signals (described by Dan here) where users are more inclined to choose recognizable brands in search results, which in theory could push their rankings higher.
  • Which came first—the chicken or the egg? Alternatively, it could simply be the case that high-ranking websites become popular simply because they are ranking highly.

Regardless of the exact cause, it seems logical that the more you improve your website’s visibility, trust, and recognition, the better you may perform in search results.

Engagement: Time on site, bounce rate, and page views

While not as large as the traffic correlations, we also found a positive correlation between a website’s user engagement and its rank in Google search results. For the study, we examined three different engagement metrics from SimilarWeb.

  • Time on site: 0.12 is not considered a strong correlation by any means within this study, but it does suggest there may be a slight relationship between how long a visitor spends on a particular site and its ranking in Google.
  • Page views: Similar to time on site, the study found a small correlation of 0.10 between the number of pages a visitor views and higher rankings.
  • Bounce rate: At first glance, with a correlation of -0.08, the correlation between bounce rate and rankings may seem out-of-whack, but this is not the case. Keep in mind that lower bounce rate is often a good indication of user engagement. Therefore, we find as bounce rates rise (something we often try to avoid), rankings tend to drop, and vice-versa.

This means that sites with lower bounce rates, longer time-on-site metrics, and more page views—some of the data points that SimilarWeb measures—tend to rank higher in Google search results.

While these individual correlations aren’t large, collectively they do lend credence to the idea that user engagement metrics can matter to rankings.

To be clear, this doesn’t mean to imply that Google or other search engines use metrics like bounce rate or click-through rate directly in their algorithm. Instead, a better way to think of this is that Google uses a number of user inputs to measure relevance, user satisfaction, and quality of results.

This is exactly the same argument the SEO community is currently debating over click-through rate and its possible use by Google as a ranking signal. For an excellent, well-balanced view of the debate, we highly recommend reading AJ Kohn’s thoughts and analysis.

It could be that Google is using Panda-like engagement signals. The negative correlation for bounce rate suggests that healthier websites tend to have lower bounce rates; similarly, when users spend more time on a site and view more pages, that site also tends to appear higher in Google’s SERPs.

Global Rank correlations

SimilarWeb’s Global Rank is calculated by data aggregation, and is based on a combination of website traffic from six different sources and user engagement levels. We include engagement metrics to make sure that we’re portraying an accurate picture of the market.

A website with a lower Global Rank on SimilarWeb generally has more visitors and better user engagement.

As Global Rank is a combination of traffic and engagement metrics, it’s no surprise that it was one of the highest correlated features of the study. Again, even though the correlation is negative at -0.24, a low Global Rank is actually a good thing. A website with a Global Rank of 1 would be the highest-rated site on the web. This means that the lower the Global Rank, the better the relationship with higher rankings.

As a side note, SimilarWeb’s Website Ranking provides insights for estimating any website’s value and benchmarking your site against it. You can use its tables to find out who’s leading per industry category and/or country.

Methodology

The Moz Search Engine Ranking Factors study examined the relationship between web search results and links, social media signals, visitor traffic and usage signals, and on-page factors. The study compiled datasets and conducted search result queries in English with Google’s search engine, focusing exclusively on US search results.

The dataset included a list of 16,521 queries taken from 22 top-level Google Adwords categories. Keywords were taken from head, middle, and tail queries. The searches ranged from infrequent (less than 1,000 queries per month), to frequent (more than 20,000 per month), to enormously frequent with keywords being searched more than one million times per month!

The top 50 US search results for each query were pulled from the datasets in a location- and personalization-agnostic manner.

SimilarWeb checked the traffic and engagement stats of more than 200,000 websites, and we have analytics on more than 90% of them. After we pulled the traffic data, we checked for a correlation using keywords from the Google AdWords tool to see what effect metrics like search traffic, time on site, page views, and bounce rates—especially with organic searches—have upon Google’s rankings.

Conclusion

We found a positive correlation between websites that showed highly engaging user traffic metrics on SimilarWeb’s digital measurement platform, and higher placement on Google search engine results pages. SimilarWeb also found that a brand’s popularity correlates to higher placement results in Google searches.

With all the recent talk of user engagement metrics and rankings, we’d love to hear your take. Have you observed any relationship, improvement, or drop in rankings based on engagement? Share your thoughts in the comments below.


Source: moz