Digital Marketing News: Facebook’s Sponsored Results, Google+ Shutdown, Consumer Brand Trust, Instagram’s Ad Spend Up

Digital Marketing News: Facebook’s Sponsored Results, Google+ Shutdown, Consumer Brand Trust, Instagram’s Ad Spend Up was posted via Internet Marketing

E-commerce SEO: guide on when to create and optimise new category pages

Setting aside the homepage, category pages generate most of an e-commerce site's organic traffic – does that statement surprise any of you? If it comes as a shock, I have bad news: you might need to reconsider your information architecture. If you have done your job right, you have nothing to worry about.

Curious about how much organic traffic category pages actually account for, I decided to dig into the Google Search Console data of a Distilled client that has run a very successful e-commerce site for several years. These were my findings over the past six months (at the time of writing, November 2018).

Bear in mind this is just an example that shows a fictitious URL structure for an average e-commerce site – the level of category and subcategory pages often differs between sites.

Type of page                  Proportion of clicks   Example URL
Category pages                5.0%                   example.co.uk/mens-jeans
1st level subcategory pages   25.0%                  example.co.uk/mens-jeans/skinny-jeans
2nd level subcategory pages   16.5%                  example.co.uk/mens-jeans/skinny-jeans/ripped
Homepage                      40.0%                  example.co.uk
Non-transactional pages       5.0%                   example.co.uk/about-us
Product pages                 8.5%                   example.co.uk/black-ripped-skinny-jeans-1

This simple exercise very much confirms my thesis: category & subcategory pages on e-commerce sites do account for the biggest chunk of organic traffic outside the homepage – in our case, about 50% of the total.
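
For those who want to run the same check on their own site, here is a minimal sketch of how such a breakdown could be produced from a Google Search Console "Pages" export. The file name, column names and URL heuristics are assumptions that mirror the fictitious structure above, not the exact analysis used for the client data.

```python
# Minimal sketch: estimate click share by page type from a Search Console
# "Pages" export. The CSV name, its "page"/"clicks" columns and the URL
# heuristics below are assumptions mirroring the fictitious site above.
import csv
from collections import defaultdict
from urllib.parse import urlparse

def classify(url: str) -> str:
    """Bucket a full URL into one of the page types used in the table."""
    path = urlparse(url).path.strip("/")
    if not path:
        return "Homepage"
    segments = path.split("/")
    if segments[0] in ("about-us", "contact", "blog"):  # illustrative list
        return "Non-transactional pages"
    if len(segments) == 1:
        # crude heuristic: product slugs on this fictitious site end in a digit
        return "Product pages" if segments[0][-1].isdigit() else "Category pages"
    if len(segments) == 2:
        return "1st level subcategory pages"
    return "2nd level subcategory pages"

clicks = defaultdict(int)
with open("gsc_pages_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        clicks[classify(row["page"])] += int(row["clicks"])

total = sum(clicks.values()) or 1
for page_type, n in sorted(clicks.items(), key=lambda kv: -kv[1]):
    print(f"{page_type}: {n / total:.1%} of clicks")
```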

So, we have now shown an example of how important these pages are from an organic standpoint without answering the question of why. Let's take a step back and look at the bigger picture.

Why are category pages so important for SEO?

Put simply, users are more likely to search for generic, category-like keywords with strong intent rather than very specific product names. If I want to buy a new jumper, chances are I will start searching for broad search queries, such as “men’s jumpers”  or “men’s jumpers 2018” with the potential addition of “brand/retailer” to my query, instead of a very precise and long tail search. Who really searches for “Tommy Jeans icon colorblock cable knit jumper in navy multi” anyway unless you are reading from the label, right? For such specific searches, it is your product pages’ job to ‘capture’ that opportunity and get the user as close to a conversion as possible.

Having optimised category and subcategory pages that match users’ searches makes their life much easier and helps search engines better crawl and ‘understand’ your site’s structure.

Image source: Moz

Sitting close to the top of the hierarchy, category pages also benefit from more internal link equity than deeper or isolated pages (more on this later). And let’s not forget about backlink equity: in most instances, category and subcategory pages receive the highest amount of external links pointing to them. As far as we know in 2018, links remain one of the most important off-site ranking factors in the SEO industry, even according to reputable sources like SEMrush.

By now three main elements should be clear: for e-commerce sites, category pages are key from a traffic, information architecture and linking point of view. Pretty important, right? At this stage, the next question is simple: how do we go about creating new pages then?

Creating new category pages: when and why

Before starting with the process, ask yourself the following questions:

  1. What is my main objective when opening a new page? What am I trying to achieve?

If your intent is to capitalise on new keywords which show an opportunity from a search volume standpoint and/or to improve the hierarchical structure of your site so users can find products more easily, then well done – move on to the next question.

  2. Do I have enough products to support new categories? Are my current category pages thin already or do they contain enough items?

In order for category pages to be relevant and carry enough SEO weight, they should be able to contain ‘some’ products. Having thin or empty category pages makes no sense and Google will see it the same way: both the SEO and UX value associated with them would drag the page’s rankings down. There is no magical minimum number I would recommend, just use your logic and think about the users first here (please at least show more than 2 products per category page though).

  3. When should I think of opening new category pages? Which instances are recommended?

Generally speaking, you should always keep an eye on your categorisation, so it is not a one-time task. It is vital that you regularly monitor your SEO performance and spot new opportunities that can help you progress.

As for specific instances, some of the following situations might be a good time for you to evaluate your category pages. Marketing or branding are pushing for new products? Definitely a good time to think about new category pages. A new trend/term has gone viral? Think about it. 2019 is approaching and you are launching a new collection? Surely a good idea. A site migration is another great chance to re-evaluate your category (and subcategory) pages. Whatever form of migration you are going through (URL restructuring, domain change, platform change, etc.), it is vital to have a plan for your category pages, and re-assessing your full list is a good idea.

Always have a purpose when you create a new page; don't do it for the sake of it or because of some internal pressure that might encourage you to do so: refer to points 1 and 2 and prove the value of SEO when making this decision. You might soon end up with more risks than benefits if you don't have a clear idea in mind.

How to identify the opportunity to open new categories

After having touched on some key considerations before opening new category pages, let’s now go through the process of how to go about it.

Keyword research, what else?

Everything starts with keyword research, the backbone of any content and SEO strategy.

When you approach this task, try and keep an open mind. Use different tools and methodologies, don’t be afraid to experiment.

Here at Distilled, we love Ahrefs so check out a post on how to use it for keyword research.

Here is my personal list of things I use when I want to expand my keyword research a bit further:

  • Keyword Planner (if you have an AdWords account with decent spending, otherwise data ranges are a downer)
  • Ahrefs: see the post to know why it is so cool
  • SEMrush: particularly interesting for competitive keyword research (more on that later)
  • Keyword Tool: particularly useful to provide additional suggestions, and also provides data for many platforms other than Google
  • Answer The Public: great tool to find long tail keywords, especially among questions (useful for featured snippets), prepositions and comparisons

(Some data is obscured unless you pay for a pro version.)

If you find valuable terms with significant search volume, then bingo! That is enough to support the logic of opening new category pages. If people are searching for a term, why not have a dedicated page about it?

Look at your current rankings

Whatever method or tool you are using to track your existing organic visibility, dig a bit deeper and try to find the following instances:

  • Are my category pages ranking for any unintentional terms? For example, if my current /mens-jumpers page is somehow ranking (maybe not well) for the keyword “cardigans”, this is clearly an opportunity, don’t you think? Monitor the number of clicks those unintentional keywords are bringing and check their search volume before making a decision – a rough sketch of how to pull these terms out of a Search Console export follows this list.
  • Is my category page ranking for different variations of the same product? Say your /mens-jumpers page is also ranking (maybe not well) for “roll neck jumpers”; this might be an opportunity to create a subcategory page and capitalise on the interest in that specific product type.
  • Are my product pages ranking for category-level terms? This is clearly an indication I might need a category page! Not only will I be able to capitalise on the search volume of that particular category-level keyword, but I would be able to provide a better experience for the user who will surely expect to see a proper category page with multiple products.
  • Last but not least: are my category pages being outranked by my competitors’ subcategory pages for certain keywords? For instance, you dig into your GSC or tracking platform of choice and see that, for a set of keywords, your /mens-jeans page is outranked not by their equivalent category page, but by more refined subcategory pages such as /slim-fit-jeans or /black-jeans. Chances are your competitors have done their research and targeted clear sets of keywords by opening dedicated subcategory pages while you have not – keep reading to learn how to quickly capitalise on competitors’ opportunities.
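
To make the first check above concrete, here is a rough sketch of one way to surface "unintentional" queries from a Search Console queries-by-page export. The file name, column names and page-to-term mapping are hypothetical; treat the output as a starting point for manual review, not a verdict.

```python
# Rough sketch: surface "unintentional" queries from a Search Console
# queries-by-page export. The file name, its columns and the page-to-term
# mapping are hypothetical; flagged queries need manual review.
import csv
from collections import defaultdict

TARGET_TERMS = {
    "/mens-jumpers": "jumper",  # hypothetical mapping: category path -> core term
    "/mens-jeans": "jeans",
}

candidates = defaultdict(lambda: {"clicks": 0, "impressions": 0})
with open("gsc_queries_by_page.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects page, query, clicks, impressions
        page, query = row["page"], row["query"].lower()
        for path, term in TARGET_TERMS.items():
            if path in page and term not in query:
                candidates[(path, query)]["clicks"] += int(row["clicks"])
                candidates[(path, query)]["impressions"] += int(row["impressions"])

# Queries with real impressions that never mention the page's core term are
# candidates for new (sub)category pages, once checked against search volume.
top = sorted(candidates.items(), key=lambda kv: -kv[1]["impressions"])[:20]
for (path, query), stats in top:
    print(path, "|", query, "|", stats)
```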

Check your competition

Most of the time your competitors have already spotted these opportunities, so what are you waiting for? Auditing your competition is a necessary step to find categories you are not covering.

Here is my advice when it comes to analysing competitors’ category and subcategory pages:

  1. Start by checking their sites manually. Their top navigation combined with the faceted navigation menu will give you a good idea of their site structure and keyword coverage.

  2. Use tools like Ahrefs or SEMrush to audit your competitors. If interested in how to use SEMrush for this purpose, check out this post from another Distiller.

  3. Do a crawl of competitor sites to gather information in bulk about their category URLs and meta data: page titles, meta descriptions and headings. Their most important keywords will be stored there, so it is a great place to investigate for new opportunities. My favourite tool in this regard is Screaming Frog (a rough scripted alternative is sketched below).
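
As referenced in point 3, here is a small sketch of how the same metadata could be gathered with a script rather than a desktop crawler. The competitor URLs are placeholders, and in practice you should respect robots.txt and crawl politely.

```python
# Small sketch: gather competitor category-page metadata in bulk with a
# script. The URLs are placeholders; in practice a crawler like Screaming
# Frog does this at scale, and you should respect robots.txt and crawl politely.
import time
import requests
from bs4 import BeautifulSoup

COMPETITOR_CATEGORY_URLS = [
    "https://www.example-competitor.co.uk/mens-jeans",               # assumed
    "https://www.example-competitor.co.uk/mens-jeans/skinny-jeans",  # assumed
]

for url in COMPETITOR_CATEGORY_URLS:
    resp = requests.get(url, timeout=10, headers={"User-Agent": "category-research"})
    soup = BeautifulSoup(resp.text, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    desc = soup.find("meta", attrs={"name": "description"})
    description = desc.get("content", "") if desc else ""
    h1 = soup.h1.get_text(strip=True) if soup.h1 else ""
    print(f"{url}\n  title: {title}\n  description: {description}\n  h1: {h1}")
    time.sleep(1)  # be polite between requests
```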

Content on category pages

Different SEOs have different views on this as it is quite a controversial topic.

Just take a look at an internal Slack conversation on the topic – we surely like to debate SEO at Distilled!

Some say that descriptions on category pages are purely done for SEO purposes and have very little value for the user. As with many other things, it depends on how it is done, in my opinion. Many ecommerce sites out there tend to have a poorly written 150 to 250 character description of the category, often stuffed with keywords, and either placed at the top or the bottom of the page.

Look at the example below from a fashion retailer: the copy is placed at the bottom of a handbags and purses page, so the user would need to scroll all the way down just to see it, but most importantly it does not add any value as it is extremely generic and too keyword-rich.

My preference is the following:

  • short but unique description which can be expanded/collapsed by the user (especially convenient on mobile where space is precious)
  • keyword-informed description written in a way that is useful to the user and provides additional information compared to the meta data (yes, make that extra effort)
  • placement at the top of the page and not at the bottom so it gets noticed by the user and the search engine’s bot

By using the description as a useful addition to our page’s meta data, we are helping Google understand what the page is about – especially for new pages that the search engine is crawling for the first time.

Also, let’s not forget about internal linking opportunities such content may be able to offer, especially for weaker/newer pages we may have on the website (more on this later).

Looking closely at Next's Men's Knitwear page, we can see how the copy is used for internal linking purposes.

Have you considered your Quality Score?

Also, a very important note: a description hugely helps an underrated element of our digital marketing – PPC Quality Score, which is an aggregated estimate of the quality of your ads. Since category pages tend to be the main destination for PPC ads, we should do whatever is in our power to improve the quality and efficiency of our work.

Landing page experience is “Google Ads’ measure of how well your website gives people what they’re looking for when they click your ad” and is one of the most important elements of a keyword’s Quality Score. By using the category page’s content to cover some of the keywords we are targeting in our ad copy, we are heavily impacting our overall Quality Score, which directly impacts our CPC – hence the whole efficiency of our account!

What about the risks of creating new pages?

Creating new category pages is a delicate decision that should be thought through carefully as it does have its risks.

Be aware of keyword cannibalisation

You are at the stage where you have decided to create a new category page and are about to focus on writing great meta data and a description for your new page, off the back of the keyword research and other tips provided above – great! Before rushing into copywriting, take a minute to evaluate the potential risk of keyword cannibalisation. This often-forgotten check will save you a lot of time further down the line: make sure your new page’s meta data does not cannibalise your existing pages.

The risk of cannibalisation is very real: having pages which are too closely related from an SEO standpoint, especially when it comes to on-page elements (title tags, headings in particular) and content on the page, can cause some serious setbacks. As a result, the pages suffering from this problem will not live up to their full organic potential and will compete for the same keywords. Search engines will struggle to identify which page to favour for a certain keyword / group of keywords and you will end up being worse off.

An example of minor keyword cannibalisation can be seen on this website: https://www.mandmdirect.com/

Their Men’s Knitwear page title is the following:

Mens Jumpers, Cardigans & Knitwear, Cheap V-Neck Cable Knit Jumpers, UK Sale | MandM Direct

Not only is it overly optimised and too long, but it clashes with its subcategory pages, which are optimised for most of the terms already included in the parent page’s title.

Their Men’s V Neck Jumpers page title is the following:

Cheap Mens V-Neck Jumpers | MandM Direct

When opening a subcategory page such as Men’s V-Neck Jumpers, I personally would have tried to de-optimise the parent page’s title in order to allow the subcategory page to live up to its full potential for its key terms:

De-optimised Men’s Knitwear page title:

Mens Jumpers, Cardigans & Knitwear | UK Sale | MandM Direct

How do you prevent this from happening? Do your research, monitor your keywords and landing pages and make sure to write unique meta data & on-page content. Also, don’t be afraid to re-optimise and experiment with your existing meta data when opening new categories. Sometimes it will take you more than one attempt to get things right.
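
If you want a quick, automated first pass at spotting this kind of clash, here is a naive sketch that flags pairs of pages whose title tags share a large proportion of terms. The stopword list, the overlap threshold and the example titles (taken from the MandM Direct pages above) are assumptions; flagged pairs are prompts for manual review, nothing more.

```python
# Naive sketch: flag pairs of pages whose title tags share a large share of
# terms. Stopwords, the 0.5 threshold and the example titles (from the MandM
# Direct pages above) are assumptions; flagged pairs need a manual look.
from itertools import combinations

STOPWORDS = {"cheap", "uk", "sale", "and", "the", "for", "mens", "|"}

def tokens(title: str) -> set:
    return {t.strip(",&|").lower() for t in title.split()} - STOPWORDS - {""}

page_titles = {
    "/mens-knitwear": "Mens Jumpers, Cardigans & Knitwear, Cheap V-Neck Cable Knit Jumpers, UK Sale",
    "/mens-v-neck-jumpers": "Cheap Mens V-Neck Jumpers",
}

for (page_a, title_a), (page_b, title_b) in combinations(page_titles.items(), 2):
    a, b = tokens(title_a), tokens(title_b)
    overlap = len(a & b) / max(1, min(len(a), len(b)))  # share of the smaller set
    if overlap >= 0.5:
        print(f"Possible cannibalisation: {page_a} vs {page_b} "
              f"(shared terms: {sorted(a & b)})")
```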

Crawl budget concerns

Google defines crawl budget as “the number of web pages or URLs Googlebot can or wants to crawl from your website”.

One of the arguments against opening new category pages might be crawl budget consumption. For large e-commerce sites with millions of pages, opening many new category pages carries the risk that some parts of your site may no longer be crawled, or may be crawled less often.

In my opinion, this is a concern only for (very) large e-commerce sites which are not necessarily well-maintained from an SEO point of view. Gary Illyes from Google seems to be on my side:

Source: https://www.searchenginejournal.com/gary-illyes-whats-new-in-google-search-pubcon-keynote-recap/274273/?ver=274273X3

In particular, a well-structured and optimised faceted navigation is vital to avoid running into crawl budget issues, so I recommend reading this Moz post written by another Distiller.

By following overall SEO guidelines and regularly checking Google Search Console and server logs, it is possible to determine if your site has a crawl budget issue.

If interested, learn more about server logs.
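
As a rough illustration of the server-log approach, the sketch below counts Googlebot requests per top-level section of a site from a standard combined-format access log. The log path is an assumption, and in a real audit you would verify Googlebot by reverse DNS rather than trusting the user-agent string.

```python
# Rough sketch: count Googlebot requests per top-level site section from a
# combined-format access log. The log path is an assumption, and a real audit
# would verify Googlebot via reverse DNS rather than trusting the user agent.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+) HTTP/[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

section_hits = Counter()
parameter_hits = 0
with open("access.log") as f:
    for line in f:
        m = LOG_LINE.search(line)
        if not m or "Googlebot" not in m.group("ua"):
            continue
        raw_path = m.group("path")
        if "?" in raw_path:
            parameter_hits += 1  # faceted/parameter URLs eating crawl budget?
        path = raw_path.split("?")[0].strip("/")
        section_hits["/" + path.split("/")[0] if path else "/"] += 1

print("Googlebot hits on parameterised URLs:", parameter_hits)
for section, hits in section_hits.most_common(15):
    print(f"{section}: {hits}")
```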

Internal linking equity

This is more of a real problem than crawl budget, in my modest opinion, and here is why: creating additional pages means that the internal linking equity across your site gets re-distributed. If not closely monitored, you might end up diluting it without a clear process in mind or, worse, wasting it across the wrong pages.

My colleague Shannon wrote a great piece on how to optimise website internal linking structure.

When creating new pages, make sure to consider how your internal link equity gets impacted: needless to say that opening 10 pages is very different than opening 1000! Focus on creating more inlinks for important pages by exploring options such as navigation tabs (main and side navigations) and on-page content (remember the paragraph above?).

The rule of thumb here is simple: when approaching new category pages, don’t forget to think about your internal linking strategy.
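
One simple way to sanity-check this is to count internal inlinks per URL from a crawler's "all inlinks" export and compare your new category pages against established ones. The file name, column names and example URLs below are assumptions for the sake of the sketch.

```python
# Quick sketch: count internal inlinks per destination URL from a crawler's
# "all inlinks" export, then check how your new category pages compare.
# The file name, column names and URLs are assumptions for the example.
import csv
from collections import Counter

inlinks = Counter()
with open("all_inlinks.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects "source" and "destination" columns
        if row["source"] != row["destination"]:
            inlinks[row["destination"]] += 1

new_category_pages = [
    "https://example.co.uk/mens-jeans/ripped-jeans",         # hypothetical
    "https://example.co.uk/mens-jumpers/roll-neck-jumpers",  # hypothetical
]
for url in new_category_pages:
    print(f"{url}: {inlinks.get(url, 0)} internal inlinks")
```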

Conclusion

Category pages are the backbone of e-commerce sites, hence they should be closely monitored by SEOs and webmasters. They are vital from an information architecture and internal (and external) linking point of view, and attract the most traffic (beyond the homepage). By following the above tips, it becomes easier to identify opportunities where new category pages can be ‘opened’ in order to capitalise on additional organic traffic.

I am curious to hear other people’s opinions on the topic, so please get in touch with me or Distilled by using the comments below or my email: samuel.mangialavori@distilled.net.

E-commerce SEO: guide on when to create and optimise new category pages was posted via Internet Marketing

Measuring Content Marketing Success: Analytics Advice & Insight from the Experts

Blue tunnel of numbers image.

Welcome to the seventh installment in our “Collective Wisdom” series of content marketing strategy and tactics articles.

Previously we’ve covered planning ahead for content marketing success, the art of crafting powerful content, and an array of strong promotion tactics.

So, what’s up next? Measurement and analytics. Measuring and analyzing your content’s performance is critical so you can uncover new opportunities, make improvements, and determine whether you’re meeting your goals.

Here we share some tidbits of analytics wisdom and insights from some famous figures inside and outside the digital marketing realm.

Having the Right Data is Crucial for Measuring Success

How do you gauge success in content marketing?

At the most basic level, you want to learn how much online traffic is coming to see your content. But what should you count and what can be ignored?

Albert Einstein offered insight that I think applies to marketing analytics, and on a greater scale to life itself.

“Not everything that can be counted counts, and not everything that counts can be counted.” — Albert Einstein


Measuring this traffic has always been rather tricky, and nearly as long as the Web has been around there have been tools dedicated to helping track and understand just how successful a piece of content is.

Even as we’ve seen the growth of big data, extracting the right information is still a challenge, as Tamara McCleary, CEO at Thulium.co, points out.

“Everyone likes to talk about Big Data. The truth is, what we really need is Smart Data.” — @TamaraMcCleary

Regardless of your industry, there’s no sure-fire substitute for common sense, as digital transformation speaker and Forbes contributor Daniel Newman has noted.

“While I do love big data, I know there is no substitute for good old-fashioned common sense. If the data really doesn’t fit, question it. Ask deeper questions.” — Daniel Newman @danielnewmanUV

Besides server-based software, hundreds of companies also offer specialized Web analytics tools that operate using remote access to your content, the most popular of these being Google Analytics.

Here’s an image of Google Analytics from “How to Prove the Value of Content Marketing to Your CMO in 3 Easy Steps,” by our own Content Strategist Anne Leuman.

Snapshot of TopRank Marketing Google Analytics

No matter which software, online tool, or service you use, the pursuit is by its very nature a complex one. It requires a good deal of dedication and learning to deeply understand which bits of data you really care about, so you can find the best ways to accurately gather and analyze them.

Choosing The Right Analytics KPIs

No matter which tools you choose, a primary decision you’ll have to make is what to measure to gauge success or failure, and these will be your Key Performance Indicators (KPIs) — those signals that matter the most to you and your business.

KPIs vary widely, so it’s important to find the ones that will provide you with the data that’s the most relevant to your own content measurement situation.
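
As a toy illustration of making KPIs measurable, the sketch below defines explicit targets per KPI and compares them against actual figures pulled from your analytics exports. Every metric name and number here is a placeholder, not a recommendation.

```python
# Toy sketch: make KPIs explicit by pairing each with a target, then compare
# against actuals pulled from your analytics exports. Every metric name and
# number here is a placeholder, not a recommendation.
KPI_TARGETS = {
    "organic_sessions": 25_000,   # per month
    "newsletter_signups": 400,
    "mql_conversions": 120,
}

actuals = {  # would normally come from your analytics tool or a CSV export
    "organic_sessions": 27_350,
    "newsletter_signups": 310,
    "mql_conversions": 131,
}

for kpi, target in KPI_TARGETS.items():
    actual = actuals.get(kpi, 0)
    status = "on track" if actual >= target else "behind"
    print(f"{kpi}: {actual} / {target} ({status}, {actual / target:.0%} of goal)")
```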

Setting S.M.A.R.T. goals — those that are Specific, Measurable, Attainable, Relevant, and Time-Based — is especially pertinent in the realm of data analytics, as Danica Benson, Global Product Marketing Manager SMB at SAP Concur and former Marketing Communications Manager at Rival IQ, has pointed out.

“Digital marketing analytics software can aggregate and report on a wide array of metrics, many of which are fun to know, but without context don’t tell you much about how to move forward.” — Danica Benson

The dangers of relying too heavily on statistics, however, have been espoused for over 100 years, including the famous quote popularized by Mark Twain in 1906, “There are lies, damned lies, and statistics.”

Knowing how to wisely interpret statistical data — especially what information to set aside — is a bit of a combination of art and science that takes dedication and research.


Jay Acunzo, founder, host and writer at Unthinkable Media, explores two methods of finding this data measurement wisdom in “There Are 2 Ways to Approach Data. Which Helps vs. Hurts Creativity?,” offering both an Aristotelian and a Galilean take on the art and science.

“When we’re data informed, we isolate variables, test, and learn. We insert our own sense of taste and our intuition into the process.” — @jayacunzo

Taking the time to learn and choose the right KPIs for extracting actionable insight from your analytics data is well worth the effort, as Kyle Harper, Marketing Analyst at Harvard University points out in his recent “Comprehensive Guide to Connecting Content Marketing KPIs to Business Goals.”

“Well-chosen content marketing KPIs are more essential than ever. Knowing the right KPIs to track can help sort out the most important performance information from the noise.” — Kyle Harper @TheyWereFlying

Analytics Can’t Simply Be an Afterthought

Deciding how to measure success is critical to any content marketing strategy, so measurement and analytics need to be included from the very beginning of your planning process.

Large numbers of technical professionals worldwide make a career from trying to gauge the success of online content. And while it isn’t our intent to teach you the detailed and ever-changing intricacies of analytics, we would like to share some fundamental truths with you.

Longtime technology consultant Joel Snyder, Ph.D. points out in a recent article that analytics is far from new, and that even the smallest firms now gather heaps of statistical data.

“Data analytics isn’t some new magic bullet. It’s a way of leveraging the data that most every business has been quietly accumulating for years to deliver insights that lead to better decisions.” — Joel Snyder, Ph.D. @joelsnyder

In some ways, some aspects of analytics have gotten easier. With the massive growth of the internet at large, social media, and other platforms, some tracking methods have become nearly universal—such as the ubiquitous Facebook “like” or Twitter “heart.”


Whether you’re just starting out or a seasoned pro, a considerable challenge is wading through the vast number of analytics tools and finding the ones that mesh well with your needs, including how you prefer to visually see your analytics data.

Some tools offer only simple text-based lists, while most make a point to present analytics information in ways that are visually easier to understand.

Using just one analytics utility was commonplace in the Web’s early years. But today, it’s not unusual for savvy digital marketers to use five, 10, or even more tools, all from different firms — some free, some commercial, and others custom-made.

Accurate and relevant measurement can also help with influencer marketing campaigns, as our CEO Lee Odden looks at in “B2B Marketers: Is Your Influencer Marketing Mechanical or Meaningful?”

“Platforms like Traackr, Onalytica, GroupHigh and BuzzSumo (to name a few) can be instrumental for the most effective (vs. subjective) influencer identification, engagement, measurement and program management.” — @LeeOdden

Analytics Tools Can Be Your Best Friend

Old-world measurement and mapping tools image.

The scope and complexity of analytics tools varies greatly, but where can you start when looking to begin a new campaign, or when you just want to keep up on the new players that are entering the market in greater numbers than ever?

To help you learn more about some of the analytics tools available, we’ve put together a list of 11 helpful resources exploring several of the more popular analytics tools and services.

Four Takes On Google Analytics

SEMrush

Trust Insights

2018 #MPB2B Influencers Network

Ahrefs

Quintly

Twitter

Facebook

BuzzSumo

There are hundreds of other excellent data analytics tools and services available, including those from Sprout Social, RivalIQ, Traackr, Internet Marketing Ninjas, Screaming Frog, SpyFu, Moz, and so many more.

Additional Resources to Put You on the Path to Informed Measurement

Smiling girl measuring her height against a chalkboard image.

As a final parting bonus list, here’s a collection of recent helpful additional resources to boost your analytics knowledge.

7 Recent Data Analytics Trends

“Only 30% of B2B marketers use data to inform decision-making. That’s because harnessing data is hard. Over 2.5 quintillion bytes of data are generated every day.” — Alexis Hall @Alexis5484

Next Up: More Measurement Tactics For Your Campaigns

By learning the fundamentals of data analytics, choosing the tools that best fit your own particular needs, and keeping up on the latest industry news, your content marketing will be set to have measurable advantages over those who skip or only pay lip service to the art of metrics.

Next up in our “Collective Wisdom” series we’ll take a look at additional measurement and analytics tools and how to use the data you gather.

If you haven’t yet caught our previous episodes in this series, hop back and study up.

Measuring Content Marketing Success: Analytics Advice & Insight from the Experts was posted via Internet Marketing

TopRank Marketing’s Top 6 SEO Predictions & Trends for 2019

SEO Trends & Predictions 2019

The year-end hustle and bustle is on, marketers. We’re all finalizing next year’s tactical mix and strategy, refining targets, and setting goals—all with the intention of driving bigger, badder, and better results in the new year.

When it comes to setting your SEO strategy for 2019, here’s an important stat to keep in mind: 61% of marketers say improving SEO and growing their organic presence is their top inbound marketing priority.

For more than two decades, SEO has been a foundational digital marketing tactic. And as algorithms have been refined, content has proliferated, and innovation and technology have changed how we search—competition in the organic search landscape has hit an all-time high.

What does 2019 have in store for us in the SEO realm? Here are our top SEO predictions and trends marketers should know now and keep an eye on into the new year.

#1 – The Mobile-Friendly Flag Will Fly Higher Than Ever

After more than a year of experimenting, Google released its mobile-first indexing in March 2018. With over half of all web traffic coming from mobile devices, this move reflects Google’s continued commitment to serving the best quality content to searchers when and where they’re searching.

Mobile-first indexing simply means that Google is now using the mobile version of a given page for crawling, indexing, and ranking systems—rather than the desktop version, which had previously been the default. According to Google, mobile-first indexing doesn’t provide a ranking advantage in and of itself, and is separate from the mobile-friendly assessment.

However, as mobile web traffic has begun to dominate the search landscape, sites need to be mobile-friendly to remain competitive and consistently show up in mobile search results. A poor mobile experience can also drag down other signals tied to rankings: bounce rate, for example, climbs sharply as page load times increase, as illustrated below.

Page load times and bounce rate (Source: Think with Google)

While many search marketers have seen this shift coming, Google’s research showed that “for 70% of the mobile landing pages we analyzed, it took more than five seconds for the visual content above the fold to display on the screen, and it took more than seven seconds to fully load all visual content above and below the fold.” The mobile benchmark they’re setting for load time is under three seconds.

All this means that 2019 is absolutely the time to firmly plant your flag in the mobile-friendly camp. This will mean evaluating your web presence, SEO strategy, and content to ensure you’re able to provide the best possible mobile experience. If you’re unsure where you stand, you can start with Google’s Mobile-Friendly Test tool to test how easy it is for your audience to visit pages on your website.
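
If you want to check load performance programmatically as well, one option is the PageSpeed Insights API (v5 at the time of writing), which returns Lighthouse data for the mobile experience of a URL. The sketch below is a hedged example: the response structure may change over time, and the URL is a placeholder.

```python
# Hedged sketch: query the PageSpeed Insights API (v5 at the time of writing)
# for a URL's mobile Lighthouse performance score. The response structure may
# change over time; the URL below is a placeholder, and an API key is optional
# for light usage.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def mobile_performance(url: str) -> None:
    resp = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": "mobile"}, timeout=60)
    resp.raise_for_status()
    data = resp.json()
    score = data["lighthouseResult"]["categories"]["performance"]["score"]
    print(f"{url}: mobile performance score {score:.0%}")

mobile_performance("https://www.example.com/")  # placeholder URL
```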

#2 – Voice Search Will Continue to Raise the Content Stakes

The metaphorical cat is out of the bag when it comes to the ease of voice search. One in six Americans now owns a smart speaker, according to TechCrunch. By 2020, Gartner predicts that 30% of web browsing sessions will be done without a screen. Finally, there were over one billion voice searches per month as of January 2018. And with voice search platforms recording an error rate of under 5% with natural language processing (in English, at least) it stands to reason this trend will continue to grow as users find more reliable results.

However, the switch to voice search will come with a new set of challenges for marketers—and that’s natural language. As of May 2017, almost 70% of requests to the Google Assistant are expressed in natural language versus typical keyword-based searches like those typed into a search bar.

As a result, in 2019 and beyond it will be increasingly important for marketers to optimize and create content that lends itself to voice search. From a technical perspective, the usual suspects of page speed, site security, and domain authority will play an important role here. But at the end of the day, it’s all about ensuring your site content can be easily found via voice search.

What will that content need to look like? Backlinko found that the average word count of a voice search result page is a whopping 2,312 words—and those words are written at a ninth grade level.
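
As a rough way to benchmark your own pages against those figures, the sketch below pulls a page's visible text and reports its word count and Flesch-Kincaid grade using the third-party textstat package. The extraction is deliberately crude and the URL is a placeholder.

```python
# Rough sketch: check a page's word count and reading grade against the
# figures cited above. Uses the third-party textstat package; the extraction
# is deliberately crude and the URL is a placeholder.
import requests
import textstat
from bs4 import BeautifulSoup

def content_stats(url: str) -> None:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # drop non-content elements before measuring
    text = " ".join(soup.get_text(" ").split())
    words = len(text.split())
    grade = textstat.flesch_kincaid_grade(text)
    print(f"{url}: ~{words} words, Flesch-Kincaid grade {grade:.1f}")

content_stats("https://www.example.com/some-long-form-guide")  # placeholder
```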

In addition, considering and striving to match search intent will be more important than ever. Marketers will need to focus on what we like to call “being the best answer.” This means focusing on answering those questions your ideal audience is and will be asking—whether they’re speaking to a smart speaker, smartphone, or web browser.

“Google is essentially an answer engine,” TopRank Marketing CEO Lee Odden said not long ago. “If companies want to be the ‘best answer’ for what their potential customers are looking for, they’ll want to invest in content that is comprehensive and engaging on the topic.”

“If companies want to be the ‘best answer,’ they’ll want to invest in #content that is comprehensive and engaging on the topic.” – @leeodden #SEO #SearchMarketing

#3 – Increasing Privacy Demands Will Tip the Search Scales

From the two recent Google Plus data leaks affecting over 50 million users to massive data breaches at some of the world’s largest companies, we’re all increasingly aware of the amount of personal data floating about the digital realm.

This coupled with an innate distrust in marketing messages—not to mention the “creep factor” of being followed around by ads—consumers and B2B buyers alike are looking for more privacy and protection on the web.

For several years, HTTPS has been considered a ranking signal. And Google made their stance on HTTPS encryption well known this year. Ahead of the release of Chrome 68, Google strongly advocated websites make the HTTPS switch by July 2018—or risk their site being stamped “not secure” in the browser.
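
A quick way to act on this is to spot-check that your key pages actually resolve to HTTPS. The sketch below follows redirects from plain-HTTP URLs and flags anything that does not end up on an https:// address; the URLs are placeholders.

```python
# Simple sketch: spot-check that key pages resolve to HTTPS by following
# redirects from their plain-HTTP versions. The URLs are placeholders.
import requests

PAGES_TO_CHECK = [
    "http://www.example.com/",
    "http://www.example.com/pricing",
]

for url in PAGES_TO_CHECK:
    final_url = requests.get(url, timeout=10, allow_redirects=True).url
    secure = final_url.startswith("https://")
    print(f"{url} -> {final_url}  [{'OK (HTTPS)' if secure else 'NOT redirecting to HTTPS'}]")
```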

In 2019 and beyond, marketers can expect Google and perhaps other browsers to double down on this. In addition, with new data protection laws like GDPR in the European Union, marketers can expect new privacy and security requirements to take shape. This will certainly continue to impact paid search efforts, as new rules and restrictions will cause platform targeting changes. And that means that smart organic SEO will see a revival.

Of course, while GDPR doesn’t technically affect US-based customers, following data protection guidelines can only help your cause in building trust and keeping Google happy.

#4 – Expanding Market of Alternative Search Platforms

Google is still the king of search. But its market share is being challenged by more traditional search engines with a twist, as well as “non-traditional” search platforms. Case in point: Amazon.

A recent eMarketer report shows that Amazon is now the third-largest digital advertising platform, behind Google and Facebook. In addition, according to Kenshoo, a whopping 72% of shoppers now use Amazon to find products, and 56% shared that they typically look on Amazon before any other sites. So, as Amazon search continues to find its legs in the digital advertising market, it’s worth considering their audience size and growth as you finalize your 2019 budget.

As for those engines that resemble Google, Bing will continue to be a key player in SEO and paid search marketing in 2019. It accounts for about 22% of the desktop search market in the US and 4.1% of the mobile search market. With their recent rollout of LinkedIn profile targeting, their offerings are becoming increasingly attractive to the B2B market.

Finally, alternative search platforms such as DuckDuckGo, StartPage, and Mojeek are growing in adoption—and you can bet that trend will continue in 2019. In fact, DuckDuckGo will hit record traffic by the end of 2018, according to AdWeek. At the time of this post’s publishing, the “internet privacy company” had recorded more than 8.5 billion direct queries in 2018.

DuckDuckGo example

While Google still reigns supreme, boasting well over half of the search market, marketers need to take note and consider additional platforms when designing their SEO and search marketing (and content) strategies—and not just because usage is rising. If you’re looking to get the most bang for your paid search buck, competition on alternative platforms is much lower right now—making it ripe with opportunity.

#5 – Raise The Bar on Content—Or Your Competitors Will E-A-T Your Lunch

While it makes a delightful pun, E-A-T is a serious concept in the SEO game. Google has told us many, many, many times that quality content will help shield you from algorithmic changes and updates. Your content simply needs to follow three basic principles: Expertise, Authority, and Trustworthiness.

In 2019, this means that it’s time to double down on quality content creation. As we mentioned earlier, that quality content needs to meet relevant search intent and strive to provide the best answer for the searcher. But it shouldn’t simply be a concern for brands that are creating content; E-A-T also applies to individual authors.

Creating quality content isn’t just a question of long-form or short-form. It’s content that’s created with the end-user in mind. High quality content should inform, entertain, or otherwise provide value to those reading it. That’s what ultimately ends up being shared socially, which is another factor in how Google views your content’s trustworthiness.

#6 – ‘Internetization’ Offers New Opportunities, But Requires Smart SEO Strategies

Our world is more connected than ever, thanks to what Constantine Passaris, Professor of Economics at University of New Brunswick, calls “internetization.”

“Globalization is not an accurate descriptor of the 21st century and the internet-driven transformational change sweeping the international economic landscape,” he wrote in a World Economic Forum article. “Internetization is the contemporary face of globalization. It includes the modern tools of electronic globalization and embraces the digital connectivity and empowerment of the internet and the World Wide Web.”

And as internetization continues to proliferate, B2B brands of all sizes have the opportunity to broaden their global footprint. But when it comes to reaching new audiences whenever and wherever they’re searching, you’ll need a smart global SEO strategy in 2019 and beyond.

“Serving a global audience begins with understanding them,” Eli Schwartz, Director of SEO & Organic Product at SurveyMonkey, told us in an interview earlier this year. “By gaining insights on your audience through People Powered Data, you can create an SEO strategy that matters to them and reaches them in the vernacular in which they speak.”

He added: “Depending on the potential value of these global users, it may not be prudent to translate the full site or offer free global shipping, but translating that one page that targets the most important international keywords is not that complicated. Additionally, companies can take the very first step towards global SEO by just having a look at where and how their website ranks on Google internationally. They may very well find some low hanging fruit worth building a strategy around.”

“By gaining insights on your audience through People Powered Data, you can create an #SEOStrategy that matters to them and reaches them in the vernacular in which they speak.” – @5le

A Little Reminder to Take the SEO “Basics” into 2019

There are plenty of new and flashy trends to keep us all busy in the coming year, but that doesn’t mean that we should forget about the foundational elements of SEO. The SEMrush Ranking Factors Study 2.0 provides an excellent reminder of what truly matters to Google: domain authority, direct traffic, content quality and website security.

The SEMrush study shows one clear winner in the ranking factors category—direct traffic. This metric is typically a measure of brand awareness, and thus domain authority. Focusing on direct traffic as a KPI for your overall marketing awareness isn’t likely to go out of style any time soon.

Another key factor along the lines of domain authority is the amount of backlinks to your site.

“Every domain that ranks for a high-volume keyword has on average three times more backlinks than the domains from the three lower-volume groups on the same position,” says SEMrush in the same study.

Along with having an authoritative domain, it’s also important to provide quality content. Time on site, pages per session, and bounce rate remain in the top 5 ranking factors this year. Content length is also a factor, as the same study shows that there’s a 45% difference in content length between the top 3 and the 20th SERP position. If you want your content to rank, make it worth reading and engaging with.

“The data is there,” Lee said not too long ago. “Customers are telling you what they want. The question is, how to connect those dots of data to understand and optimize customer experiences?”

Using data to understand customer preferences for search discovery and intent will help you optimize content to become the best answer buyers are looking for.

Ready. Set. Let’s Go, 2019

As you gear up for 2019, keep these trends—and the basics of SEO and search marketing—in mind. Providing the right information quickly, and in a way that is easy to digest, will always be in style. The ways we get there may change with time (or algo updates), but the focus remains the same.

Content is SEO’s beautiful stepsister. What’s on tap for 2019 in the content marketing realm? Check out our picks for the top content marketing trends and predictions to watch in 2019.

TopRank Marketing’s Top 6 SEO Predictions & Trends for 2019 was posted via Internet Marketing

Interviewing Google’s John Mueller at SearchLove: domain authority metrics, sub-domains vs. sub-folders and more

I was fortunate enough to be able to interview Google’s John Mueller at SearchLove and quiz him about domain authority metrics, sub-domains vs. sub-folders, and how bad rank tracking really is.

I have previously written and spoken about how to interpret Google’s official statements, technical documentation, engineers’ writing, patent applications, acquisitions, and more (see: From the Horse’s Mouth and the associated video as well as posts like “what do dolphins eat?”). When I got the chance to interview John Mueller from Google at our SearchLove London 2018 conference, I knew that there would be many things that he couldn’t divulge, but there were a couple of key areas in which I thought we had seen unnecessary confusion, and where I thought that I might be able to get John to shed some light. [DistilledU subscribers can check out the videos of the rest of the talks here – we’re still working on permission to share the interview with John].

Mueller is Webmaster Trends Analyst at Google, and these days he is one of the most visible spokespeople for Google. He is a primary source of technical search information in particular, and is one of the few figures at Google who will answer questions about (some!) ranking factors, algorithm updates and crawling / indexing processes.

New information and official confirmations

In the post below, I have illustrated a number of the exchanges John and I had that I think revealed either new and interesting information, or confirmed things we had previously suspected, but had never seen confirmed before on the record.

I thought I’d start, though, by outlining what I think were the most substantial details:

Confirmed: Google has domain-level authority metrics

We had previously seen numerous occasions where Google spokespeople had talked about how metrics like Moz’s Domain Authority (DA) were proprietary external metrics that Google did not use as ranking factors (this, in response to many blog posts and other articles that conflated Moz’s DA metric with the general concept of measuring some kind of authority for a domain). I felt that there was an opportunity to gain some clarity.

“We’ve seen a few times when people have asked Google: “Do you use domain authority?” And this is an easy question. You can simply say: “No, that’s a proprietary Moz metric. We don’t use Domain Authority.” But, do you have a concept that’s LIKE domain authority?”

We had a bit of a back-and-forth, and ended up with Mueller confirming the following (see the relevant parts of the transcript below):

  1. Google does have domain level metrics that “map into similar things”
  2. New content added to an existing domain will initially inherit certain metrics from that domain
  3. It is not a purely link-based metric but rather attempts to capture a general sense of trust

Confirmed: Google does sometimes treat sub-domains differently

I expect that practically everyone around the industry has seen at least some of the long-running back-and-forth between webmasters and Googlers on the question of sub-domains vs sub-folders (see for example this YouTube video from Google and this discussion of it). I really wanted to get to the bottom of this, because to me it represented a relatively clear-cut example of Google saying something that was different to what real-world experiments were showing.

I decided to set it up by coming from this angle: by acknowledging that we can totally believe that there isn’t an algorithmic “switch” at Google that classifies things as sub-domains and ranks them deliberately lower, but that we do regularly see real-world case studies showing uplifts from moving, and so asking John to think about why we might see that happen. He said [emphasis mine]:

in general, we … kind of where we think about a notion of a site, we try to figure out what belongs to this website, and sometimes that can include sub-domains, sometimes that doesn’t include sub-domains.

Sometimes that includes sub-directories, sometimes that doesn’t include specific sub-directories. So, that’s probably where that is coming from where in that specific situation we say, “Well, for this site, it doesn’t include that sub-domain, because it looks like that sub-domain is actually something separate. So, if you fold those together then it might be a different model in the end, whereas for lots of other sites, we might say, “Well, there are lots of sub-domains here, so therefore all of these sub-domains are part of the main website and maybe we should treat them all as the same thing.”

And in that case, if you move things around within that site, essentially from a sub-domain to a sub-directory, you’re not gonna see a lot of changes. So, that’s probably where a lot of these differences are coming from. And in the long run, if you have a sub-domain that we see as a part of your website, then that’s kind of the same thing as a sub-directory.

To paraphrase that, the official line from Google is:

  1. Google has a concept of a “site” (see the discussion above about domain-level metrics)
  2. Sub-domains (or even sub-folders) can be viewed as not a part of the rest of the site under certain circumstances
  3. If we are looking at a sub-domain that Google views as not a part of the rest of the site, then webmasters may see an uplift in performance by moving the content to a sub-folder (that is viewed as part of the site)

Unfortunately, I couldn’t draw John out on the question of how one might tell in advance whether your sub-domain is being treated as part of the main site. As a result, my advice remains the same as it used to be:

In general, create new content areas in sub-folders rather than sub-domains, and consider moving sub-domains into sub-folders with appropriate redirects etc.

The thing that’s changed is that I think that I can now say this is in line with Google’s official view, whereas it used to be at odds with their official line.

Learning more about the structure of webmaster relations

Another area that I was personally curious about going into our conversation was about how John’s role fits into the broader Google teams, how he works with his colleagues, and what is happening behind the scenes when we learn new things directly from them. Although I don’t feel like we got major revelations out of this line of questioning, it was nonetheless interesting:

I assumed that after a year, it [would be] like okay, we have answered all of your questions. It’s like we’re done. But there are always new things that come up, and for a lot of that we go to the engineering teams to kind of discuss these issues. Sometimes we talk through with them with the press team as well if there are any sensitivities around there, how to frame it, what kind of things to talk about there.

For example, I was curious to know whether, when we ask a question to which John doesn’t already know the answer, he reviews the source code himself, turns to an engineer, etc. It turns out:

  1. He does not generally attend search quality meetings (timezones!) and does not review the source code directly
  2. He does turn to engineers from around the team to find specialists who can answer his questions, but does not have engineers dedicated to webmaster relations

For understandable reasons, there is a general reluctance among engineers to put their heads above the parapet and be publicly visible talking about how things work in their world. We did dive into one particularly confusing area that turned out to be illuminating – I made the point to John that we would love to get more direct access to engineers to answer these kinds of edge cases:

Concrete example: the case of noindex pages becoming nofollow

At the back end of last year, John surprised us with a statement that pages that are noindex will, in the long run, eventually be treated as nofollow as well.

Although it’s a minor detail in many ways, many of us felt that this exposed gaps in our mental model. I certainly felt that the existence of the “noindex, follow” directive meant that there was a way for pages to be excluded from the index, but have their links included in the link graph.

What I found more interesting than the revelation itself was what it exposed about the thought process within Google. What it boiled down to was that the folk who knew how this worked – the engineers who’d built it – had a curse of knowledge. They knew that there was no way a page that was dropped permanently from the index could continue to have its links in the link graph, but they’d never thought to tell John (or the outside world) because it had never occurred to them that those on the outside hadn’t realised it worked this way [emphasis mine]:

it’s been like this for a really long time, and it’s something where, I don’t know, in the last year or two we basically went to the team and was like, “This doesn’t really make sense. When people say noindex, we drop it out of our index eventually, and then if it’s dropped out of our index, there’s canonical, so the links are kind of gone. Have we been recommending something that doesn’t make any sense for a while?” And they’re like, “Yeah, of course.”

More interesting quotes from the discussion

Our conversation covered quite a wide range of topics, and so I’ve included some of the other interesting snippets here:

Algorithm changes don’t map easily to actions you can take

Googlers don’t necessarily know what you need to do differently in order to perform better, and especially in the case of algorithm updates, their thinking about “search results are better now than they were before” doesn’t translate easily into “sites that have lost visibility in this update can do XYZ to improve from here”. My reading of this situation is that there is ongoing value in the work SEOs do to interpret algorithm changes and the longer-running directional themes of Google’s changes in order to guide webmasters’ roadmaps:

[We don’t necessarily think about it as] “the webmaster is doing this wrong and they should be doing it this way”, but more in the sense “well, overall things don’t look that awesome for this search result, so we have to make some changes.” And then kind of taking that, well, we improved these search results, and saying, “This is what you as a webmaster should be focusing on”, that’s really hard.

Do Googlers understand the Machine Learning ranking factors?

I’ve speculated that there is a long-run trend towards less explainability of search rankings, and that this will impact search engineers as well as those of us on the outside. We did at least get clarity from John that at the moment, they primarily use ML to create ranking factors that feed into more traditional ranking algorithms, and that they can debug and separate the parts (rather than an ML-generated SERP, which would be much less inspectable):

[It’s] not the case that we have just one machine learning model where it’s like oh, there’s this Google bot that crawls everything and out comes a bunch of search results and nobody knows what happens in between. It’s like all of these small steps are taking place, and some of them use machine learning.

And yes, they do have secret internal debugging tools, obviously, which John described as:

Kind of like search console but better

Why are result counts soooo wrong?

We had a bit of back-and-forth on result counts. I get that Google has told us that they aren’t meant to be exact, and are just approximations. So yeah, sure, but when you sometimes get a site: query that claims 13m results, you click to page 2 and find that there are only actually 11 – not 11m, actually just 11, you say to yourself that this isn’t a particularly helpful approximation. We didn’t really get any further on this than the official line we’ve heard before, but if you need that confirmed again:

We have various counters within our systems to try to figure out how many results we approximately have for some of these things, and when things like duplicate content show up, or we crawl a site and it has a ton of duplicate content, those counters might go up really high.

But actually, in indexing and later stage, I’m gonna say, “Well, actually most of these URLs are kinda the same as we already know, so we can drop them anyway.”

So, there’s a lot of filtering happening in the search results as well for [site: queries], where you’ll see you can see more. That helps a little bit, but it’s something where you don’t really have an exact count there. You can still, I think, use it as a rough kind of gauge. It’s like is there a lot, is it a big site? Does it end up running into lots of URLs that are essentially all eliminated in the end? And you can kinda see little bit there. But you don’t really have a way of getting the exact count of number of URLs.

More detail on the domain authority question

On the domain authority question that I mentioned above (not the proprietary Moz Domain Authority metric, but the general concept of a domain-level authority metric), here’s the rest of what John said:

I don’t know if I’d call it authority like that, but we do have some metrics that are more on a site level, some metrics that are more on a page level, and some of those site wide level metrics might kind of map into similar things.

the main one that I see regularly is you put a completely new page on a website. If it’s an unknown website or a website that we know tends to be lower quality, then we probably won’t pick it up as quickly, whereas if it’s a really well-known website where we’ll kind of be able to trust the content there, we might pick that up fairly quickly, and also rank that a little bit better.

it’s not so much that it’s based on links, per se, but kind of just this general idea that we know this website is generally pretty good, therefore if we find something unknown on this website, then we can kind of give that a little bit of value as well.

At least until we know a little bit better that this new piece of thing actually has these specific attributes that we can focus on more specifically.

Maybe put your nsfw and sfw content on different sub-domains

I talked above about the new clarity we got on the sub-domain vs. sub-folder question and John explained some of the “is this all one site or not” thinking with reference to safe search. If you run a site with not safe for work / adult content that might be filtered out of safe search and have other content you want to have appear in regular search results, you could consider splitting that apart – presumably onto a different sub-domain – and Google can think about treating them as separate sites:

the clearer we can separate the different parts of a website and treat them in different ways, I think that really helps us. So, a really common situation is also anything around safe search, adult content type situation where you have maybe you start off with a website that has a mix of different kinds of content, and for us, from a safe search point of view, we might say, “Well, this whole website should be filtered by safe search.”

Whereas if you split that off, and you make a clearer section that this is actually the adult content, and this is kind of the general content, then that’s a lot easier for our algorithms to say, “Okay, we’ll focus on this part for safe search, and the rest is just a general web search.”

John can “kinda see where [rank tracking] makes sense”

I wanted to see if I could draw John into acknowledging why marketers and webmasters might want or need rank tracking – my argument being that it’s the only way of getting certain kinds of competitive insight (since you only get Search Console for your own domains) and also that it’s the only way of understanding the impact of algorithm updates on your own site and on your competitive landscape.

I struggled to get past the kind of line that says that Google doesn’t want you to do it, it’s against their terms, and some people do bad things to hide their activity from Google. I have a little section on this below, but John did say:

from a competitive analysis point of view, I kinda see where it makes sense

But the ToS thing causes him problems when it comes to recommending tools:

how can we make sure that the tools that we recommend don’t suddenly start breaking our terms of service? It’s like how can we promote any tool out there when we don’t know what they’re gonna do next.

We’ve come a long way

It was nice to end with a shout out to everyone working hard around the industry, as well as a little plug for our conference [emphasis mine, obviously]:

I think in general, I feel the SEO industry has come a really long way over the last, I don’t know, five, ten years, in that there’s more and more focus on actual technical issues, there’s a lot of understanding out there of how websites work, how search works, and I think that’s an awesome direction to go. So, kind of the voodoo magic that I mentioned before, that’s something that I think has dropped significantly over time.

And I think that’s partially to all of these conferences that are running, like here. Partially also just because there are lots of really awesome SEOs doing awesome stuff out there.

Personal lessons from conducting an interview on stage

Everything above is about things we learned or confirmed about search, or about how Google works. I also learned some things about what it’s like to conduct an interview, and in particular what it’s like to do so on stage in front of lots of people.

I mean, firstly, I learned that I enjoy it, so I do hope to do more of this kind of thing in the future. In particular, I found it a lot more fun than chairing a panel. In my personal experience, chairing a panel (which I’ve done more of in the past) requires a ton of mental energy on making sure that people are speaking for the right amount of time, that you’re moving them onto the next topic at the right moment, that everyone is getting to say their piece, that you’re getting actually interesting content etc. In a 1:1 interview, it’s simple: you want the subject talking as much as possible, and you can focus on one person’s words and whether they are interesting enough to your audience.

In my preparation, I thought hard about how to make sure my questions were short but open, and self-contained enough to be comprehensible to John and the audience, and to allow John to answer them well. I think I did a reasonable job, but I can definitely keep practicing to get my questions shorter; looking at the transcript, I did too much of the talking. Having said that, my preparation was valuable. It was worth understanding John’s background and history first, gathering my thoughts, and giving him enough information about my main lines of questioning that he could go looking for information he might not have had at his fingertips. I think I got that balance roughly right: enabling him to prep a reasonable amount while keeping a couple of specific questions back for the day itself.

I also need to get more agile at asking follow-ups and continuation questions. This is hard because you are having to think on your feet, but I think I did it reasonably well in areas where I’d deliberately prepped to do it. That was mainly the more controversial areas where I knew what John’s initial line might be and also knew what I ultimately wanted to get out of it or dive deeper into. It was harder in areas where I hadn’t expected to need a follow-up and only realised mid-answer that I hadn’t quite got 100% of what I was looking for. It’s surprisingly hard to parse everything that’s just been said and figure out on the fly whether it’s interesting, new, and complete.

And that’s all from the comfort of the interrogator’s chair. It’s harder to be the questioned than the questioner, so thank you to John for agreeing to come, for his work in the prep, and for being a good sport as I poked and prodded at what he’s allowed to talk about.

I also got to see one of his 3D-printed Googlebot-in-a-skirt characters – a nice counterbalance to the gender assumptions that are too common in technical areas.

Things John didn’t say

There are a handful of areas where I wish I’d thought quicker on my feet or where I couldn’t get deeper than the PR line:

“Kind of like Search Console”

I don’t know if I’d have been able to get more out of him even if I’d pushed, but looking back at the conversation, I think I gave up too quickly, and gave John too much of an “out” when I was asking about their internal toolset. He said it was “kind of like Search Console” and I put words in his mouth by saying “but better”. I should have dug deeper and asked for some specific information they can see about our sites that we can’t see in Search Console.

John can “kinda see where [rank tracking] makes sense”

I promised above to get a bit deeper into our rank tracking discussion. I made the point that “there are situations where this is valuable to us, we feel. So, yes we get Search Console data for our own websites, but we don’t get it for competitors, and it’s different. It doesn’t give us the full breadth of what’s going on in a SERP, that you might get from some other tools.”

We get questions from clients like, “We feel like we’ve been impacted by update X, and if we weren’t rank tracking, it’s very hard for us to go back and debug that.” And so I asked John “What would your recommendation be to consultants or webmasters in those situations?”

I think that’s kinda tricky. I think if it’s your website, then obviously I would focus on Search Console data, because that’s really the data that’s actually used when we showed it to people who are searching. So, I think that’s one aspect where using external ranking tracking for your own website can lead to misleading answers. Where you’re seeing well, I’m seeing a big drop in my visibility across all of these keywords, and then you look in Search Console and it’s like, well nobody’s searching for these keywords, who cares if I’m ranking for them or not?

From our point of view, the really tricky part with all of these external tools is they scrape our search results, so it’s against our terms of service, and one thing that I notice kind of digging into that a little bit more is a lot of these tools do that in really sneaky ways.

(Yes, I did point out at this point that we’d happily consume an API).

They do things like they use proxies on mobile phones. It’s like you download an app, it’s a free app for your phone, and in the background it’s running Google queries, and sending the results back to them. So, all of these kind of sneaky things where in my point of view, it’s almost like borderline malware, where they’re trying to take users’ computers and run queries on them.

It feels like something that’s like, I really have trouble supporting that. So, that’s something, those two aspects, is something where we’re like, okay, from a competitive analysis point of view, I kinda see where it makes sense, but it’s like where this data is coming from is really questionable.

Ultimately, John acknowledged that “maybe there are ways that [Google] can give you more information on what we think is happening”, but I felt like I could have done a better job of pushing for the need for this kind of data on competitive activity and on the market as a whole (especially when there is a Google update). It’s perhaps unsurprising that I couldn’t dig deeper than the official line here, nor could I have expected a new product announcement about a whole new kind of competitive insight data, but I remain a bit unsatisfied with Google’s perspective. Tools that aggregate the shifts in the SERPs when Google changes its algorithm, and tools that let us understand the SERPs where our sites are appearing, are both valuable; Google seems fixated on the ToS without acknowledging the ways this data is needed.

Are there really strong advocates for publishers inside Google?

John acknowledged being the voice of the webmaster in many conversations about search quality inside Google, but he also claimed that the engineering teams understand and care about publishers too:

the engineering teams, [are] not blindly focused on just Google users who are doing searches. They understand that there’s always this interaction with the community. People are making content, putting it online with the hope that Google sees it as relevant and sends people there. This kind of cycle needs to be in place and you can’t just say “we’re improving search results here and we don’t really care about the people who are creating the content”. That doesn’t work. That’s something that the engineering teams really care about.

I would have liked to have pushed a little harder on the changing “deal” for webmasters as I do think that some of the innovations that result in fewer clicks through to websites are fundamentally changing that. In the early days, there was an implicit deal that Google could copy and cache webmasters’ copyrighted content in return for driving traffic to them, and that this was a socially good deal. It even got tested in court [Wikipedia is the best link I’ve found for that].

When the copying extends so far as to remove the need for the searcher to click through, that deal is changed. John managed to answer this cleverly by talking about buying direct from the SERPs:

We try to think through from the searcher side what the ultimate goal is. If you’re an ecommerce site and someone could, for example, buy something directly from the search results, they’re buying from your site. You don’t need that click actually on your pages for them to actually convert. It’s something where when we think that products are relevant to show in the search results and maybe we have a way of making it more such that people can make an informed choice on which one they would click on, then I think that’s an overall win also for the whole ecosystem.

I should have pushed harder on the publisher examples – I’m reminded of this fantastic tweet from 2014. At least I know I still have plenty more to do.

Thank you to Mark Hakansson for the photos [close-up and crowd shot].

So. Thank you John for coming to SearchLove, and for being as open with us as you were, and thank you to everyone behind the scenes who made all this possible.

Finally: to you, the reader – what do you still want to hear from Google? What should I dig deeper into and try to get answers for you about next time? Drop a comment below or drop me a line on Twitter.

Interviewing Google’s John Mueller at SearchLove: domain authority metrics, sub-domains vs. sub-folders and more was posted via Internet Marketing

5 Common Digital Marketing Data & Analytics Challenges and How to Start Solving Them

The volume and velocity of the data at our fingertips today have the power to transform the way we do marketing. Armed with the right data about our target audience, we can reach them at the right time, in the right place, with the best-tailored messages. Given the deluge of marketing messages inundating consumers and B2B buyers at every moment, it’s critical that yours are the most relevant in order to break through the clutter.

However, many of us still aren’t using data to its full potential. Only 30% of B2B marketers use data to inform decision-making. That’s because harnessing data is hard. Over 2.5 quintillion bytes of data are generated every day, across so many different people, channels, devices, and technologies. And nearly 50% of marketers say they don’t have the right people, processes, and technologies in place to make use of all that data and make an impact.

To continue to thrive in a crowded marketplace, and to truly show the impact of marketing as a revenue generator, it will be critical to get the people, process, and technology in place to make your data work for you. Of course, it won’t happen overnight. But regardless of where you are in your journey to data sophistication, you can start solving your challenges now.

Below, we dive into five frequent data challenges and how you can put yourself on a path to overcome them.

Challenge #1 – The data you need doesn’t exist.

Despite all of that data being generated and captured, you could still be experiencing gaps in your data reporting. Typical data holes include:

  • Lack of attribution
  • Incomplete contact records
  • Untracked marketing and buying activities

These data holes are usually caused by a lack of (or non-adherence to) process on both the sales and marketing teams. The result is an incomplete picture, which can lead to inaccurate data analysis.

Unsurprisingly, if your marketing activities aren’t properly tracked, you’re not able to truly measure the result of one marketing activity over another. But the good news is that this is one of the easiest challenges to overcome.

Start Solving The Data Gap Challenge

First, examine the process and governance around your tracking and database. If you don’t have any, create a data governance policy, focusing on getting top-down buy-in on the importance of collecting and maintaining accurate data.

For your database:

  • Ensure new records are complete by reviewing data input requirements with the sales team, so everyone knows how and why complete records are critical. Also check that your CRM and website forms are set up properly, so that mandatory data is collected and entered.
  • Consider a major scrub if you have a lot of bad data. There are a variety of CRM services or add-on tools to help clean up inaccurate or duplicate records. Manually fixing thousands of records will be nearly impossible in most cases, so consider bringing in outside help.

For marketing activities:

  • Create and enforce a process within your marketing team so tracking is in place on every activity possible.
  • If you’re already using Google Analytics, make the most of it by:

    • Ensuring goals and events are set up to track major conversions (like a contact form completion) and micro-conversions (like a video play).
    • Using Google’s Campaign URL Builder for improved campaign tracking (see the example after this list). Unique URLs can help you pinpoint which marketing activities are driving the most activity.
  • Identify other key data points that you’re not currently able to track with your existing set of tools. From there, research free tools to help you fill the gaps, or set aside budget to invest in new technology.
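
As an example of the campaign tracking point above, a URL built with Google’s Campaign URL Builder might look something like this (the domain and parameter values are made up for illustration):

    https://www.example.co.uk/spring-sale?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale&utm_content=hero_button

Because the utm_source, utm_medium and utm_campaign values are reported against sessions in Google Analytics, you can compare, say, the email send and the paid social post promoting the same campaign and see which one actually drove more conversions.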

Challenge #2 – You have data silos.

Many of you are dealing with multiple legacy systems, perhaps put in place by different teams, that don’t necessarily work together. Between your CRM, analytics platform, marketing automation, and social listening tools, data and platform integration may not be happening—and it’s holding you back.

Finding a solution to this challenge will enable your data to become incredibly powerful. Integrating data across systems gives you the opportunity to create a more complete view of a customer or prospect, connecting activity throughout the buying journey and enabling you to reach them with the most relevant messages.

Start Solving the Data Silo Challenge

Start breaking down data silos by creating a list of data collection processes and tools across departments. Once you have the list, you can start to identify if any systems can be merged together.

Many standalone CRM, marketing automation, and analytics tools offer integration capabilities with other common platforms. Or you may already have a tool in place for one function that can be used for other jobs. If merging isn’t possible, consider an alternative tool that can integrate with your other systems or do multiple things.

This is also a good time to evaluate communication and processes across departments. Opening up silos between teams will help prevent new silos from forming and open up communication and access to data, which can improve the effectiveness of marketing activities.

Read: How to Become a Better Data-Informed Content Marketer

Challenge #3 – The data is tough to analyze.

Anyone who has ever attempted to analyze thousands of rows of marketing data within an Excel spreadsheet can attest that it can be cumbersome and time consuming. For many of us, data volumes have accelerated much more quickly than our tools and abilities to analyze that data.

Even if you have the most accurate, complete data, if you don’t have the right skills and strategies in place to analyze it, you can’t make an impact.

Start Solving the Data Analysis Challenge

First, ask yourself if you have the right people in place to solve this problem. Data and technology have likely opened up the need for new positions within your team. An experienced analyst (or team of analysts) can manipulate large volumes of data and serve up insights to help your content marketing, social, advertising, and other teams make more informed decisions and show the impact of your work.

Data scientist is another in-demand title within marketing. The right person in this role can help you evaluate tools, manage data sources, and create the processes and strategies to turn formless data into a powerhouse of insight, changing how your team uses data.

From there, assess your technology stack to determine if you have the right tools in place to enable your current or future analysts to understand and visualize the data.

Challenge #4 – You don’t trust your data.

It’s safe to say that you and your organization believe data and analytics are critical. In fact, a survey of some of the world’s leading businesses showed that 97% were making big investments in data and analytics this year.

However, despite the need and the investment, the degree of confidence in data could be low. According to a recent survey by KPMG and Forrester Consulting, just 38% of respondents said they have a high level of confidence in their customer insights. Furthermore, only a third seem to trust the analytics they generate from their business operations.

This gap in trust can be a result of a lack of transparency or governance around data sourcing and analysis. It also presents a significant opportunity for organizations to create, refine, and circulate policies for data and analytics management.

Start Solving the Data Trust Challenge

If you found yourself nodding along to one of the first three challenges presented, you likely need to start there. A legacy of incomplete and inaccurate data and analytics has driven the current lack of trust. KPMG recommends taking a systematic approach to building trust in data and analytics by examining it across four pillars:

  1. Quality (Are your tools and data of sufficient quality?)
  2. Effectiveness (Is the data analysis useful and accurate?)
  3. Integrity (Are data and analytics being used in an acceptable and ethical way?)
  4. Resilience (Are long-term operations optimized?)

Challenge #5 – You can’t make the predictive leap.

If you have the right people, processes, and tools in place to effectively report on and analyze data across channels, that’s fantastic. You’re likely leveraging your historical data along with human insight to create more effective messaging and showcase the ROI of your marketing activities.

But the question is: Are you in a position to get ahead of your audience’s needs?

More than likely, you’re “guessing” at what your audience needs and wants based on what’s already happened, and you haven’t made the predictive leap to uncover deeper trends that will require changes in your mix.

Start Solving the Guessing Game Challenge

Start thinking about how machine learning (ML) and artificial intelligence (AI) can be implemented to help you predict future outcomes. ML and AI technologies not only have the ability to automate data crunching, but they can also create models using your multi-channel data to determine what is likely to happen if you stop or start using a tactic.
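
To make that less abstract, here is a deliberately tiny, purely illustrative sketch in Python. The channel names and numbers are invented, and real marketing-mix and vendor AI models are far more sophisticated; the point is only the shape of the idea: fit a model of past performance against channel spend, then ask it what to expect if one tactic is paused.

    # Hypothetical illustration only: predict weekly conversions from channel spend,
    # then see what the model expects if paid social were paused.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Columns: email spend, paid search spend, paid social spend (per week)
    X = np.array([
        [1000, 5000, 2000],
        [1200, 4500, 2500],
        [ 900, 6000, 1500],
        [1100, 5500, 3000],
        [1000, 5200, 2200],
    ])
    y = np.array([130, 128, 124, 150, 135])  # weekly conversions

    model = LinearRegression().fit(X, y)

    planned = np.array([[1000, 5000, 2500]])  # next week's planned spend
    paused = planned.copy()
    paused[0, 2] = 0                          # what if we stopped paid social?

    print("Predicted conversions as planned:", model.predict(planned)[0])
    print("Predicted conversions without paid social:", model.predict(paused)[0])

Real tools layer much more on top of this (seasonality, interactions between channels, diminishing returns), but the basic move of modelling the past and then simulating a change to the mix is the idea being described here.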

For many marketers, AI and ML are daunting solutions to implement, as they are new and can require a significant investment. But the good news is that you can dip your toe in the waters by outsourcing to an established vendor.

If you want to get something going in-house, Microsoft, IBM, and Amazon all have machine learning solutions you can research, consider, and test.

Read: This Changes Everything: How AI Is Transforming Digital Marketing

Overcome Your Data & Analytics Challenges in 2019

Without a doubt, the importance of data and analytics will continue to increase as we go forward. Start now to identify what challenges are holding your marketing team back from making the most out of your data.

With the right people, process, and tools and technologies in place, you can solve current challenges and evolve a powerful data and analytics operation—ultimately setting you up for more success now and into the future.

What are some of the specific strategies and tactics for optimizing performance with data? Our CEO Lee Odden dives into three ways content marketers can leverage data right now.

5 Common Digital Marketing Data & Analytics Challenges and How to Start Solving Them was posted via Internet Marketing

An introduction to HTTP/2 for SEOs

In the mid 90s there was a famous incident where an email administrator at a US university fielded a phone call from a professor who was complaining that his department could only send emails 500 miles. The professor explained that whenever they tried to email anyone farther away, their emails failed — it sounded like nonsense, but it turned out to actually be happening. To understand why, you need to realise that the speed of light has more impact on how the internet works than you may think. In the email case, the timeout for connections worked out to a little over 3 milliseconds – if you do the maths, that is about the time it takes light to travel 500 miles.
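
If you want to check that maths yourself, here it is as a tiny Python sketch (it uses the speed of light in a vacuum; signals in real cables travel somewhat slower):

    # Back-of-the-envelope: how far can light travel in roughly 3 milliseconds?
    SPEED_OF_LIGHT_MILES_PER_SECOND = 186_282   # in a vacuum
    timeout_seconds = 0.003                     # the roughly 3ms connection timeout

    distance_miles = SPEED_OF_LIGHT_MILES_PER_SECOND * timeout_seconds
    print(round(distance_miles), "miles")       # -> 559 miles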

We’ll be talking about trucks a lot in this blog post!

The time that it takes for a network connection to open across a distance is called latency, and it turns out that latency has a lot to answer for. Latency is one of the main issues that affects the speed of the web, and was one of the primary drivers for why Google started inventing HTTP/2 (it was originally called SPDY when they were working on it, before it became a web standard).

HTTP/2 is now an established standard and is seeing a lot of use across the web, but it is still not as widespread as it could be across most sites. It is an easy opportunity to improve the speed of your website, but it can be fairly intimidating to try to understand it.

In this post I hope to provide an accessible top-level introduction to HTTP/2, specifically targeted towards SEOs. I do brush over some parts of the technical details and don’t cover all the features of HTTP/2, but my aim here isn’t to give you an exhaustive understanding, but instead to help you understand the important parts in the most accessible way possible.

HTTP 1.1 – The Current Norm

Currently, when you request a web page or other resource (such as images, scripts, CSS files etc.), your browser speaks HTTP to a server in order to communicate. The version most of the web still runs on is HTTP/1.1, which has been the standard for the last 20 years, essentially unchanged.

Anatomy of a Request

We are not going to drown in the deep technical details of HTTP in this post, but we are going to quickly touch on what a request looks like. There are a few bits to a request:
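
A minimal example of what such a request looks like as plain text (the Host and User-Agent values here are just illustrative placeholders):

    GET /anchorman/ HTTP/1.1
    Host: www.example.com
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/63.0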

The top line here is saying what sort of request this is (GET is the normal sort of request, POST is the other main one people know of), and what URL the request is for (in this case /anchorman/) and finally which version of HTTP we are using.

The second line is the mandatory ‘Host’ header, which is a part of all HTTP/1.1 requests and covers the situation where a single webserver may be hosting multiple websites and needs to know which one you are looking for.

Finally, there will be a variety of other headers, which we are not going to get into. In this case I’ve shown the User-Agent header, which indicates which sort of device and software (browser) you are using to connect to the website.

HTTP = Trucks!

In order to help explain and understand HTTP and some of the issues, I’m going to draw an analogy between HTTP and … trucks! We are going to imagine that an HTTP request being sent from your browser is a truck that has to drive from your browser over to the server:

A truck represents an HTTP request/response to a server

In this analogy, we can imagine that the road itself is the network connection (TCP/IP, if you want) from your computer to the server:

The road is a network connection – the transport layer for our HTTP Trucks

Then a request is represented by a truck, that is carrying a request in it:

HTTP Trucks carry a request from the browser to the server

The response is the truck coming back with a response, which in this case is our HTML:

HTTP Trucks carry a response back from the server to the browser

“So what is the problem?! This all sounds great, Tom!” – I can hear you all saying. The problem is that in this model, anyone can stare down into the truck trailers and see what they are hauling. Should an HTTP request contain credit card details, personal emails, or anything else sensitive, anybody can see your information.

HTTP Trucks aren’t secure – people can peek at them and see what they are carrying

HTTPS

HTTPS was designed to combat the issue of people being able to peek into our trucks and see what they are carrying.

Importantly, HTTPS is essentially identical to HTTP – the trucks and the requests/responses they transport are the same as they ever were. The response codes and headers are all the same.

The difference all happens at the transport (network) layer; we can imagine it as a tunnel over our road:

In HTTPS, requests & responses are the same as HTTP. The road is secured.

In the rest of the article, I’ll imagine we have a tunnel over our road, but won’t show it – it would be boring if we couldn’t see our trucks!

Impact of Latency

So the main problem with this model is related to the top speed of our trucks. In the 500-mile email introductory story we saw that the speed of light can have a very real impact on the workings of the internet.

HTTP Trucks cannot go faster than the speed of light.

HTTP requests and many HTTP responses tend to be quite small. However, our trucks can only travel at the speed of light, and so even these small requests can take time to go back and forth from the user to the website. It is tempting to think this won’t have a noticeable impact on website performance, but it is actually a real problem…

HTTP Trucks travel at a constant speed, so longer roads mean slower responses.

The farther the distance of the network connection between a user’s browser and the web server (the length of our ‘road’) the farther the request and response have to travel, which means they take longer.

Now consider that a typical website is not a single request and response, but is instead a sequence of many requests and responses. Often a response will mean more requests are required – for example, an HTML file probably references images, CSS files and JavaScript files:

Some of these files then may have further dependencies, and so on. Typically websites may be 50-100 separate requests:

Web pages nowadays often require 50-100 separate HTTP requests.

Let’s look at how that may look for our trucks…

Send a request for a web page:

We send a request to the web server for a page.

Request travels to server:

The truck (request) may take 50ms to drive to the server.

Response travels back to browser:

And then 50ms to drive back with the response (ignoring time to compile the response!).

The browser parses the HTML response and realises there are a number of other files that are needed from the server:

After parsing the HTML, the browser identifies more assets to fetch. More requests to send!

Limit of HTTP/1.1

The problem we now encounter is that there are several more files we need to fetch, but with HTTP/1.1 each road (TCP connection) can only handle a single truck at a time. Each truck can only carry one request and its response, so if we want several requests in flight at once, each one needs its own road.

Each truck (request) needs its own road (network connection).

Furthermore, building a new road (opening a new network connection) also requires a round trip. In our world of trucks we can liken this to needing a steamroller to first lay the road and then add our road markings. This is another whole round trip, which adds more latency:

New roads (network connections) require work to open them.

This means another whole round trip to open new connections.

Typically browsers open around 6 simultaneous connections at once:

Browsers usually open 6 roads (network connections).

However, if we are looking at 50-100 files needed for a webpage we still end up in the situation where trucks (requests) have to wait their turn. This is called ‘head of line blocking’:

Often trucks (requests) have to wait for a free road (network connection).
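
To get a feel for the numbers, here is a deliberately crude, hypothetical model in Python. The 60 requests and 100ms round trip are invented figures, and it ignores bandwidth, server time, connection setup and the initial HTML fetch entirely; it is only meant to show why queueing behind 6 roads hurts:

    # Crude illustration of head of line blocking with 6 connections.
    import math

    ROUND_TRIP_MS = 100   # assumed time for one truck to get there and back
    REQUESTS = 60         # assumed number of assets on the page

    # HTTP/1.1-ish: 6 roads, one truck per road at a time, so requests queue up
    rounds = math.ceil(REQUESTS / 6)
    print("HTTP/1.1-ish estimate:", rounds * ROUND_TRIP_MS, "ms")   # 10 rounds -> 1000 ms

    # HTTP/2-ish: one road, but all the trucks can be on it at once (multiplexing)
    print("HTTP/2-ish estimate:", 1 * ROUND_TRIP_MS, "ms")          # -> 100 ms

Real pages never see a neat 10x improvement, because bandwidth, server response time and the order in which the browser discovers assets all still matter, but the queueing effect is the thing to take away.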

If we look at the waterfall diagram for a simple page that has a CSS file and a lot of images (this example is this HTTP/2 site), you can see this in action:

Waterfall diagrams highlight the impact of round trips and latency.

In the diagram above, the orange and purple segments can be thought of as our steamrollers, where new connections are made. You can see that initially there is just one connection open (line 1), with another connection being opened. Line 2 then re-uses the first connection, and line 3 is the first request over the second connection. When those complete, lines 4 & 5 are the next two images.

At this point the browser realises it will need more connections, so four more are opened, and then we can see requests going in batches of 6 at a time, corresponding to the 6 roads (network connections) that are open.

Latency vs Bandwidth

In the waterfall diagram above, each of these images may be small, but each requires a truck to come and fetch it. This means lots of round trips, and given we can only run 6 at a time, there is a lot of time spent with requests waiting.

It is sometimes difficult to understand the difference between bandwidth and latency. Bandwidth can be thought of as the load capacity of our trucks: how much each truck can carry per trip. More capacity often doesn’t help with page load times, though, because each request and response cannot share a truck with another request. This is why increasing bandwidth has been shown to have a limited impact on the load time of pages, in research conducted by Mike Belshe at Google which is discussed in this article from Googler Ilya Grigorik:

The conclusion was clear: in order to improve the performance of the web, the issue of latency would need to be addressed. That research is what led to Google developing the SPDY protocol, which later turned into HTTP/2.

Improving the impact of latency

In order to improve the impact that latency has on website load times, there are various strategies that have been employed. One of these is ‘sprite maps’ which take lots of small images and jam them together into single files:

Sprite maps are a trick used to reduce the number of trucks (requests) needed.

The advantage of sprite maps is that they can all be put into one truck (request/response) as they are just a single file. Then clever use of CSS can display just the portion of the file that corresponds to the desired image. One file means only a single request and response are required to fetch all of those images, which reduces the number of round trips required.

Another thing that helps to reduce latency is using a CDN platform, such as CloudFlare or Fastly, to host your static assets (images, CSS files etc. – things that are not dynamic and the same for every visitor) on servers all around the world. This means that the round trips for users can be along a much shorter road (network connection) because there will be a nearby server that can provide them with what they need.

CDNs have servers all around the world, which can make the required roads (network connections) shorter.

CDNs also provide a variety of other benefits, but latency reduction is a headline feature.

HTTP/2 – The New World

So hopefully you have now realised that HTTP/2 can help reduce the impact of latency and dramatically improve the performance of pages. So how does it go about it?

Introducing Multiplexing – More trucks to the rescue!

With HTTP/2 we get multiplexing, which essentially means we are allowed to have more than one truck on each road at the same time:

With HTTP/2 a road (network connection) can handle many trucks (requests/responses).

We can immediately see the change in behaviour on a waterfall diagram – compare this with the one above (note the change in the scale too – this is a lot faster):

We now only need one road (connection) then all our trucks (requests) can share it!

The exact speed benefit you see will depend on a lot of other factors, but by removing the problem of head of line blocking (trucks having to wait) we get an immediate improvement, for almost no cost to us.

Same old trucks

With HTTP/2 our trucks and their contents stay essentially the same as they always were; we can just imagine we have a new traffic management system.

Requests look as they did before:

The same response codes exist and mean the same things:

Because the content of the trucks doesn’t change, this is great news for implementing HTTP/2 – your web platform or CMS does not need to be changed and your developers don’t need to write any code! We’ll discuss this below.

Server Push

A much anticipated feature of HTTP/2 is ‘Server Push’, which allows a server to respond to a single request with multiple responses. Imagine a browser requests an HTML file, and the server knows that the browser will also need a specific CSS file and a specific JS file. The server can just send those straight back, without waiting for them to be requested:

Server Push: A single truck (request) is sent…

Server Push: … but multiple trucks (responses) are sent back.

The benefit is obvious: it removes another whole round trip for each resource that the server can ‘anticipate’ the client will need.
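
For context, one common pattern (support varies by server and CDN, so treat this as an illustrative sketch rather than a universal recipe, and the file paths are placeholders) is for the application to add preload Link headers to the HTML response, which an HTTP/2-capable server may then turn into pushed responses:

    Link: </css/main.css>; rel=preload; as=style
    Link: </js/app.js>; rel=preload; as=script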

The downside is that at the moment this is often implemented badly, and it can mean the server sends trucks that the client doesn’t need (because it has cached the response from earlier), which means you can make things worse.

For now, unless you are very sure you know what you are doing you should avoid server push.

Implementing HTTP/2

Ok – this sounds great, right? Now you should be wondering how you can turn it on!

The most important thing is to understand that because the requests and responses are the same as they always were, you do not need to update the code on your site at all. You need to update your server to speak HTTP/2 – and then it will do the new ‘traffic management’ for you.
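
To give a flavour of what ‘update your server’ means in practice: on nginx, for example (assuming version 1.9.5 or newer built with the HTTP/2 module, and an HTTPS setup already in place), it can be as small as adding http2 to the listen directive. The domain and certificate paths below are placeholders:

    server {
        listen 443 ssl http2;
        server_name www.example.com;

        ssl_certificate     /etc/ssl/certs/example.com.crt;
        ssl_certificate_key /etc/ssl/private/example.com.key;

        # the rest of your existing configuration stays as it was
    }

Apache, IIS and most other modern servers have equivalent switches; check which version of your server first added HTTP/2 support before relying on it.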

If that seems hard (or if you already have one) you can instead use a CDN to help you deploy HTTP/2 to your users. Something like CloudFlare or Fastly (my favourite CDN – it requires more advanced knowledge to set up but is super flexible) would sit in front of your webserver and speak HTTP/2 to your users:

A CDN can speak HTTP/2 for you whilst your server speaks HTTP/1.1.

Because the CDN will cache your static assets, like images, CSS files, Javascript files and fonts, you still get the benefits of HTTP/2 even though your server is still in a single truck world.

HTTP/2 is not another migration! 

It is important to realise that to get HTTP/2 you will need to already have HTTPS, as all the major browsers will only speak HTTP/2 when using a secure connection:

HTTP/2 requires HTTPS

However, setting up HTTP/2 does not require a migration in the same way as HTTPS did. With HTTPS your URLs were changing from http://example.com to https://example.com and you required 301 redirects, and a new Google Search Console account and a week long meditation retreat to recover from the stress.

With HTTP/2 your URLs will not change, and you will not require redirects or anything like that. For browsers and devices that can speak HTTP/2 they will do that (it is actually the guy in the steamroller who communicates that part – but that is a-whole-nother story..!), and other devices will fall back to speaking HTTP/1.1 which is just fine.

We also know that Googlebot does not speak HTTP/2 and will still use HTTP/1.1:

https://moz.com/blog/challenging-googlebot-experiment

However, don’t despair – Google will still notice that you have made things better for users, as we know they are now using usage data from Chrome users to measure site speed in a distributed way:

https://moz.com/blog/google-chrome-usage-data-measure-site-speed

This means that Google will notice the benefit you have provided to users with HTTP/2, and that information will make it back into Google’s evaluation of your site.

Detecting HTTP/2

If you are interested in whether a specific site is using HTTP/2 there are a few ways you can go about it.

My preferred approach is to turn on the ‘Protocol’ column in the Chrome developer tools. Open up the dev tools, go to the ‘Network’ tab and if you don’t see the column then right click to add it from the dropdown:

Alternatively, you can install this little Chrome Extension, which will indicate whether a site is using HTTP/2 (but won’t give you the per-connection breakdown that you get from the approach above):

https://dis.tl/showhttp2
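
A third option, if you are comfortable on the command line and your copy of curl was built with HTTP/2 support, is something like the following (example.com is a placeholder):

    curl -sI --http2 https://www.example.com/

If the server negotiated HTTP/2, the status line of the response will be reported as HTTP/2 200 rather than HTTP/1.1 200 OK.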

Slide Deck

If you would prefer to consume this as a slide deck, then you can find it on Slideshare. Feel free to re-use the deck in part or in its entirety, provided you give attribution (@TomAnthonySEO):

Wrap Up

Hopefully you found this useful. I’ve found the truck analogy makes something that can seem hard to understand somewhat more accessible. I haven’t covered a lot of the intricate details of HTTP/2 or some of the other functionality, but this should help you understand things a little bit better.

I have, in discussions, extended the analogy in various ways, and would love to hear if you do too! Please jump into the comments below for that, or to ask a question, or just hit me up on Twitter.

An introduction to HTTP/2 for SEOs was posted via Internet Marketing