Friday, May 26, 2023

Addressing SEOs’ Concerns about Google’s AI Experience

I was so excited to get access to Search Labs today to start playing around with Google’s Search Generative Experience (SGE). SGE is Google’s response to Bing Chat and was demoed at Google I/O on May 10th. Search Labs is rolling out access to people who joined the waitlist, starting today.

The I/O demo didn’t include very many responses, but it’s not like they could show many in such a short time. Still, that demo had a lot of search engine optimization professionals (SEOs) reeling. So far, I’ve felt pretty good about what I’m seeing in SGE from both an SEO and a consumer standpoint, so I want to try to put some of those initial concerns to rest.

Concern 1: This will only make zero-click searches even worse

When Google’s chatbot Bard rolled out, SEOs were concerned about the lack of citations, especially since Bing Chat included links to sources. You can read more about that in a post I did for my master’s program’s blog. So during the I/O demo, when article thumbnails were shown in the AI response, many were happy.

I’m here to tell you that I’m even happier with what I’m seeing in the SGE myself! In some cases, I’ve seen carousels of 8 articles shown. Will users scroll through all of those? Maybe not, but it does give more opportunities to be seen “above the fold,” especially on mobile.

Google search box on mobile has the query ‘best electric suvs’ the Generative AI response lists several vehicles and then shows a carousel of links with images below the results

Even better, the SGE offers different ways to view the response. The default view uses the carousels, but what I’m calling a “grid” view (named for the icon in the top right of the AI response used to toggle it) is way better! I like it as a consumer because it gives a better idea of which resources supplied each piece of information shared in the SGE response. Plus, it pulls your potential link out of the carousel for potentially more visibility.

the ‘best electric suvs’ Google search query in mobile shows a generative AI response where after a few vehicles are mentioned links to resources are shown below and then another set of listed vehicles followed by more links to those resources

I think this “grid” view is even better with local results because it shows articles that mention the local business in whatever categorization Google has provided to it as well.

a desktop Google search for ‘karaoke places in cincinnati’ returns a generative AI response that lists one result called Tokyo Kitty categorized as ‘with private rooms’ followed by a carousel of links that likely explains more about Tokyo Kitty. The same treatment is applied to another business result with two articles referencing it

If we get Google Search Console data on this, the carousel vs. grid toggle will completely muddle impressions for the same query. Links hidden in a carousel are not counted as impressions unless they are actually shown, so if the carousel is swapped for the grid, an impression would register. Could the toggle create duplicate impressions the way it does in the Job/Event lists?

So how frequently does the SGE appear?

As with anything in SEO, it depends. Device type seems to come into play and of course intent/query does as well.

I rapid-fired queries beyond what is mentioned or pictured in this post, testing 22 queries on both desktop and mobile. Desktop didn’t show SGE at times when mobile did.

One query, “secure a car loan,” returned no AI response on either desktop or mobile. This doesn’t come as a surprise, as Google said it would treat “Your Money or Your Life” topics very carefully. However, for “is keto healthy” a response was given. On mobile there was a clear disclaimer at the top indicating this is best discussed with a doctor, but the disclaimer was missing on desktop.

Two other queries had mixed results between mobile and desktop. “Toyota Corolla configurations” on desktop did not offer an SGE at all, but it auto-populated on mobile. “Current deals on Toyota Corolla” did not auto-generate on desktop but did on mobile. I’ll share more about a pattern for when an SGE is not auto-populated under the next concern.

Concern 2: Featured snippets or the Knowledge Panel will go away

Featured snippets exist! And when it takes a while for an AI response to show up, they are pretty prominently shown.

a desktop Google search for ‘how tall is the eiffel tower’ shows a generative AI response generating with blue bars stretching across the would-be response box but just below it is the answer already rendered in a featured snippet

It does seem like Google is not automatically triggering the AI response when a Knowledge Panel (KP) is available. See below several different KPs where the SGE is not automatically triggered. I tried many national sports teams from different sports and those results didn’t even offer an SGE option.

mobile and desktop search results for various queries show the current state of those queries with a prompt for ‘get an AI-powered overview for this search-Generate.’ The queries are for Prince, toyota corolla cross, eiffel tower france, and the book surrounded by idiots

Concern 3: Publishers are doomed on transactional terms

While, yes, as shown in the I/O demo, products make up the bulk of any shopping AI response, I’m also seeing a carousel of articles, like the Interesting Finds that appeared on shopping terms in the past.

a desktop search for ‘sustainable women’s shoes’ shows a generative AI response with a blurb about some popular brands and a carousel of links next to that blurb seemingly where the response got its information

However, I’m also seeing these responses generate empty citations. This could mean there is an opportunity for a publisher to create content to be featured here, or the experience is still struggling to surface proper sources.

a mobile search for ‘full coverage swim suits’ generates several sentences in the AI response all followed by a box indicating ‘this overview includes information provided by brands and stores’

Concern 4: Ads will take all of the clicks

During the I/O demo, it appeared that shopping ads were showing ABOVE the AI response box. Personally, I felt that would make Ads even more sought after, thus more competitive. But I’m seeing them pushed down, as pictured in the two images in the previous concern.

Other Thoughts about SGE

Local Results

On mobile, local results lack a map. I hate this. As a consumer, I like seeing the landmarks and cross streets to get a better sense for where something is in relation to other things I’m aware of. I first started testing on mobile, so I was happy to see the map does show up on desktop. However, when the grid view is toggled the map disappears (as shown above).

a desktop search for ‘karaoke places in cincinnati’ now shows just the suggested places along with a map to the right of it and a carousel of links above the map

Lists Within Responses

Sometimes the AI response makes these nice lists, like the ones in the local results. Those results link over to the local knowledge panel for that business. However, other lists seem like they should be clickable and are not. The following image was disappointing because the entries did not link over to the websites. Instead, the images opened the image search result, and many images were not even from the site mentioned.

in Converse mode of Google's search generative experience, a follow up question from 'sell my car online' was asked of 'what is the best online portal to sell used cars' which returned a list of sites with the site name, a brief description of what it offers, and an image of the site's homepage or logo next to it.

Mixed Intent

Where Google has always had mixed intent, SGE makes it even worse. General queries, like Ford, generated a local SGE for Ford dealers. Basically, Google jumped from a likely navigational intent to a transactional one, which didn’t seem fitting.

Conversation Mode

I was quite pleased to see that conversation mode kept a similar feel to the SGE. After your initial prompt, a response with links to sources was generated, plus a few organic results showed underneath; if you kept scrolling, a prompt to see more search results appeared.


Don’t be a fool and spend time pondering the meaning behind the colors of the response box. After far too long, I realized they are based on the colors visible in the normal search results, like an image or logo.

Caveats to accessing SGE

I’m not able to get SGE to show up in search in mobile Chrome; I have to use the Google app on mobile. That also means you can’t use the mobile emulator on desktop to get the experience (I tested).

As with the evolution from 10 blue links to the current rich-results heavy SERPs, Google will continuously adjust the SGE. What we are seeing now will likely not be exactly how it actually launches. But what I’ve seen has felt less “doom and gloom” than some may have expected. Plus, I see any change like this as an opportunity to keep learning and growing!

Sunday, January 8, 2023

A Prioritization Framework Specifically for Content Planning

The last time I attended a Women in Tech SEO virtual meetup, I had promised a blog post on a prioritization framework I had built for content planning. But I was still finishing up grad school and never got around to posting many of the things I had hoped to on my blog.

Now that a new semester is gearing up where I’m not enrolled (I graduated in December!), I don’t really have an excuse. If you were in that WITSEO meetup and asked for this, thanks for your patience; hopefully this is worth the wait.

While SEO conferences were in full swing over the summer and fall, I saw mentions of content planning presentations recommending the RICE prioritization method. RICE is a nice overarching scoring process developed by Intercom for product management; it stands for Reach, Impact, Confidence, and Effort. The Impact factor leaves a lot of subjectivity, though, and the method lacks some nuance for content marketing. I think my scoring process takes out the subjectivity and is built specifically for content planning.

In my influencer marketing series posts back in 2019, I mentioned using a scoring process for selecting influencer campaigns. I used that prioritization matrix as the basis of this process as well. The method has you pick several factors and assign each a value of 0, 3, 6, or 9. Higher numbers mean a higher score and higher priority. However, you can also choose to weight factors, which multiplies the score and gives that factor more value than others. Thus, choosing the factors and how to score them is what tailors the method specifically to content marketing. Be sure to decide on the factors, scoring, and weighting before you start scoring any content; otherwise, you might allow bias to creep in as you analyze a specific project.
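To make the mechanics concrete, here is a minimal sketch in Python of the weighted scoring described above. The factor names and the doubled weight are my own illustrative choices, not part of the framework itself:

```python
# Minimal sketch of the weighted prioritization score.
# Factor names and the example weighting are hypothetical.

def priority_score(scores, weights=None):
    """Sum factor scores (each 0, 3, 6, or 9), multiplying any weighted factor."""
    weights = weights or {}
    for name, value in scores.items():
        if value not in (0, 3, 6, 9):
            raise ValueError(f"{name}: scores must be 0, 3, 6, or 9")
    return sum(value * weights.get(name, 1) for name, value in scores.items())

idea = {
    "search_volume": 3,
    "content_gap": 9,
    "topical_authority": 6,
    "reusability": 9,
    "timeliness": 3,
    "business_alignment": 6,
}

# Hypothetical weighting decision: business alignment counts double.
print(priority_score(idea, weights={"business_alignment": 2}))  # 42
```

The validation step enforces the 0/3/6/9 scale so a stray score of, say, 5 can’t sneak subjectivity back in.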

Average Monthly Search Volume

Each content idea should have an associated topic, and some initial keyword research should be completed around it. I’m not here to tell you how to go about doing your keyword research (although I’ve got some tips from an old process I used on a budget). Use your trusted keyword research tool to determine an average monthly search volume for the topic.

Before using the scoring method, work with your team to determine the thresholds you want to use for the scores. You’ll need four ranges, typically with the last being X and above. Deciding together eliminates decisions made in a bubble and some bias. Don’t worry, those of you (like me) who believe in the power of zero-search-volume keywords: the other factors can still allow an idea to score well, even if the search volume is low.
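For example, the four volume ranges might map to scores like this. The thresholds below are purely hypothetical; your team’s agreed-on numbers will differ:

```python
# Hypothetical volume thresholds; the top range is "X and above".
def volume_score(avg_monthly_searches):
    if avg_monthly_searches >= 1000:
        return 9
    if avg_monthly_searches >= 250:
        return 6
    if avg_monthly_searches >= 50:
        return 3
    return 0  # zero-search-volume ideas can still win on the other factors
```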

Content Gap

This factor may be my favorite and, along with the next, requires some competitive research. The Content Gap factor looks at how much is being written about the topic both by you and your competitors. The scoring I use is:

  • 0 if the topic is saturated on your site and your competitors
  • 3 if the topic is saturated on your site but not others
  • 6 if the topic is saturated on competitor sites but not yours
  • 9 if the topic shows a gap in the market


Competitiveness and Topical Authority

Your selected keyword research tool might include a competitiveness indicator, but this factor looks more broadly at whether there is already a lot written about the topic and whether your site has the authority to compete. If a topic has already been covered by everyone, you may simply be rehashing all of that without showing any expertise on it. I score this by:

  • 0 if the topic is overly competitive and your site has little topical authority
  • 3 if the topic is competitive, but your site has high topical authority
  • 6 if there is low competition and you have low topical authority
  • 9 if there is low competition and you have high topical authority

Content Reusability

I always hated having “content for content’s sake.” This phenomenon tends to happen when companies share to social media, or even worse the blog, a meaningless message tied to an upcoming holiday. Those silly, random holidays for every day of the year are how much of this is drummed up. I’ll admit I’ve fallen prey to this quantity-over-quality tactic in the past. This type of content has a short life, even if it resurfaces for a couple of days each year.

This reusability factor makes sure you aren’t just writing content for the sake of having content. I refer to that as “fleeting” content. Fleeting could also relate to newsjacking tactics that might work for a short amount of time and then be completely irrelevant very quickly.

Another take on reusability is also how well the content can be used on different media. Does it make sense in short form video? Could it be a podcast? How can it be best used for each social media platform your company uses to target its audience? Is there a compelling way to tease out the content in your email newsletter?

Scoring for this looks at reuse across platforms/media and if the content is fleeting or evergreen:

  • 0 if the content is fleeting and not able to be reused across platforms (such as a “Happy New Years!” message on social media)
  • 3 if the content is evergreen but not easily used across platforms (i.e., product specs)
  • 6 if the content is fleeting but easily used across platforms
  • 9 if the content is evergreen and can easily be used across platforms


Timeliness

Timeliness is pretty self-explanatory; however, fitting it into the four levels for scoring might not be as obvious.

  • 0 if it is not timely
  • 3 if it is an ongoing topic/trend
  • 6 if it is timely
  • 9 if it is breaking and you can be ahead of others

Again, the overall score is more important than any one factor. Being the first to report something isn’t always the best route if it leads to low-quality work.

Alignment to Business Objectives

The factor about aligning to business objectives is a necessary factor for all prioritization matrices. If you aren’t working towards the bigger picture, then what even is the point of prioritizing the campaign/project/etc.? That doesn’t mean don’t include an idea on your scoring matrix (I use a spreadsheet with the factors as the column headings and then each idea as a row with the scoring for each column). For the Alignment to Business Objectives factor, I score a 0 if “it is not currently a priority,” yet will keep the idea alive as business priorities potentially shift. A low-level priority earns a 3, mid-level a 6, and high priority a 9.

a spreadsheet set up for a content prioritization scoring framework
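To show how the spreadsheet rolls up, here is a toy version with the factors as “columns” and each idea as a “row.” The idea names and scores are made up for illustration, and equal weights are assumed:

```python
# A tiny stand-in for the scoring spreadsheet: factor "columns", idea "rows".
# Ideas and scores below are invented for illustration only.

FACTORS = ["search_volume", "content_gap", "competitiveness",
           "reusability", "timeliness", "business_alignment"]

ideas = {
    "EV charging guide":    [6, 9, 6, 9, 3, 9],
    "New Year social post": [0, 0, 3, 0, 6, 3],
    "Product spec refresh": [3, 3, 6, 3, 0, 6],
}

# Rank ideas by total score, highest priority first (equal weights assumed).
ranked = sorted(ideas.items(), key=lambda kv: sum(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{sum(scores):>2}  {name}")
```

Here the evergreen, high-gap guide outranks the fleeting holiday post, which is exactly the behavior the factors are designed to produce.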

There we have it! What other factors would you add and how would you score each?