Google’s AI Search Feels Like a Content Farm on Steroids

Back in the early days, there were many libraries, and all of them had the same books. However, they each had different ways to help you navigate the stacks. Then a new library opened that threw out the Dewey Decimal System entirely and got you to the right shelf faster and more easily than anyone else. After 20 years, nearly all the other libraries had gone out of business because most people used the speedy library.

Having achieved dominance, the librarian decided that the library itself, rather than the books inside, was the key draw. So he started writing his own lower-quality books on all the same topics and placing them on a giant shelf that sat in front of the regular stacks. He wasn’t an expert on any particular topic, so these books were just weak paraphrases or sometimes word-for-word copies of the original source material. But he figured that, since nearly everyone came to and trusted his library, the readers would grab whatever was on the front shelf and be happy with it. They’d stay in the library longer, where he could sell them pens, coffee and donuts. Did he succeed? Google’s about to find out.

Currently available for testing in limited beta, Google’s new Search Generative Experience (SGE) shifts the site from being a search engine that links the best content to a publication offering its own mini-articles. But instead of hiring expert writers to do the work, Google employs an AI that ingests data from human-authored content and spits out weaksauce advice with no expertise or authority to back it up.

For years, both users and Google itself have complained about “content farms,” websites that produce shallow, low-quality articles at scale on a wide variety of topics so they can grab top search rankings. Google released a specific “Panda” algorithm update in 2011 that was primarily targeted at content farms, and recent updates use the author’s expertise and the helpfulness of the article as ranking factors. However, with its LLM (Large Language Model) doing all the writing, Google looks like the world’s biggest content farm, one powered by robotic farmers who can produce an infinite number of custom articles in real time.

Old Product Listings, Poor Advice

Let’s take this query: “What’s the best CPU?” Google’s answer is a very non-committal block of text saying that you should consider performance, speed and power consumption when choosing a processor (no duh, Captain Obvious). And then there’s a list of outdated processors that are not among the best CPUs available today. The top choice is the Ryzen 7 5800X3D, which hasn’t been the top processor for a year, followed by a link to the Core i5-10400, which is three generations old. This is not helpful advice.

(Image credit: Future)

If you have SGE access enabled (see how to sign up for Google’s new AI Tools), nearly every query you conduct will show you an AI-generated answer above the organic search results. Often, the AI-generated answer will take up so much of the screen – even on a 4K monitor – that you can’t see a single organic search result without doing a lot of scrolling. Google does provide some related links next to its answer, but these are not citations. In addition, in my testing, the quality and relevance of these links are often poor.

On the “what’s the best CPU” query, the number one related link is a list of the best CPUs on PCMag (relevant). Then there’s a link to a Ryzen 7 5700X page from a lesser-known site, Nanoreview, whose “review” contains no actual evaluation of the CPU, just specs. Let’s also keep in mind that these are not citations. We have no idea why Google chose the Ryzen 7 5800X3D as its best CPU, though we can guess it comes from someone’s web content.

Following This How-To Would Break Your CPU

Forget about buying CPUs! What if we ask Google how to install one? Here we get a list of 18 steps that often contradict each other and are out of order. Following these steps verbatim would break your CPU and motherboard.

(Image credit: Future)

No one can really argue with steps 1 to 4, which tell you to make sure you’re grounded and to remove any old processor (if you were upgrading, which wasn’t part of the query). However, you probably don’t need to clean the CPU socket (step 5), and doing so the wrong way could damage your motherboard. The list then says to “install new processor” (which is basically the entire task) without telling you how to align it. Step 8 says to “install your processor cooler,” but step 9 says to “open latch on your motherboard socket.” You’d need to have opened the latch before putting in the CPU, not after putting on the cooler. Step 10 says to “Grab your CPU and check where the golden triangle is,” but you’d have already done that before seating the CPU, applying thermal paste and attaching a cooler (all prior steps).

Steps 15 to 18 are a recipe for disaster, even if you hadn’t already followed the previous steps telling you to install the CPU into the socket. Step 16 says to “pull the retention arm from its socket.” That would likely break your motherboard.

And what sites does Google recommend you visit after reading this component-breaking advice? Its related links are from CoolBlue, a tech store in the Netherlands; Computer Info Bits, a site we’ve never heard of before; and a 26-second video from a YouTuber named Mac Coyzkie. These aren’t bad sources, but do they have more authority or detail on this topic than everyone else?

Giving Medical Advice Without Credentials

Let’s look at a non-tech query and try a seemingly benign question: “How do I lose weight?” The AI bot gives precisely what you’d expect: a list of well-worn weight loss tips that aren’t particularly controversial, including “drink plenty of water” and “don’t skip breakfast.” However, this is medical advice, and it’s not attributed to anyone. Who says that using a smaller plate is good for weight loss? Is it a doctor or a nutritionist? What are their credentials? It doesn’t matter: Dr. Google thinks that the librarian, not the books, is the real authority.

(Image credit: Future)

What we can see below the Generative AI result is Google’s old-fashioned featured snippet, which has a lot of the same advice (nearly word-for-word) and links to the UK’s National Health Service (NHS). So it seems likely that Google grabbed many of these tips from the NHS article but chose not to credit the real medical professionals whose advice we should trust.

Google is supposedly reluctant to give YMYL (Your Money or Your Life) advice via AI, so it withholds the SGE box for some medical and financial questions, but not all of them. For example, “Do I have COVID” and “best credit cards” didn’t give me the SGE box. However, when I asked, “Do I need a colonoscopy,” it gave me what seemed like a decent answer, one clearly drawn from its first recommended link: a health insurance provider called HealthPartners. Whether bots should be trained on health advice from insurance websites or from more neutral medical sources is a question I’ll leave up to someone else.

(Image credit: Future)

Can’t Understand What a Fast SSD Is

Returning to tech advice, let’s ask, “What’s the fastest SSD?” Again, Google gives a very generic set of tips about what to look for in an SSD and then a list of shopping links that is way off the mark. The top choice here is the Crucial P3 SSD, which wasn’t even the fastest SSD when it came out. It’s rated for 3,500 MBps, while the fastest PCIe 5.0 SSDs can operate at three times that rate. Who says that the Crucial P3 is the fastest? Is it someone you should trust?

(Image credit: Future)

Dangerous for Publishers, Bad for Readers

Let’s get this out of the way: I’m not an unbiased party here. As a professional writer, I have a vested interest in people viewing my work and the work of my colleagues at other publications. If people just stay on Google.com and rarely leave to visit news and information websites, many of those publications will shut down, and others will go paywall-only. Readers will have fewer and inferior resources available to them, while Google’s bot will draw from a weaker pool of data, making its “advice” even worse.

But let’s look at this from a reader’s perspective. You are getting advice with no authority behind it. Whether it’s buying advice telling you what CPU to purchase or medical advice telling you how to diet or when to get your colon checked, the source of that information matters. You should trust a doctor who is certified by the NHS or the AMA. You should not trust some bozo who runs a website selling weight loss pills. By hiding its sources in an attempt to become the publisher, Google prevents you from judging the reliability of the advice.

The SGE assumes that readers, having been trained to “Google” everything, will blindly follow whatever the bot spits out, no questions asked. Google’s AI is not a doctor, a tech journalist, a florist or a travel agent. It has no arms and legs to pick up a laptop, install benchmarks and try out the keyboard to see if it’s mushy. It has no tongue to taste test food or eyes to watch movies and help you pick the best ones. But it sure can remix existing content!

Google Goes Against Its Own Search Criteria

Google itself knows that expertise matters. In December, the company’s Search Central blog said it is now using E-E-A-T – Experience, Expertise, Authoritativeness and Trustworthiness – to determine content rankings in search. In other words, content from an experienced author with subject matter expertise should rise to the top, and unreliable advice from johnny-come-lately content farmers should fall to the bottom.

However, Google doesn’t apply the E-E-A-T standard to its SGE or the related links the SGE recommends. Instead, it effectively screams “trust me, because I’m made by Google.” But you’d be sorely disappointed if you bought the Ryzen 7 5800X3D CPU and Crucial P3 SSD based on its assertion that these were the best and fastest components available today.

While I was writing this article, I had to do some research about keyboard switches and I wanted to find a quick list of linear switches that are currently popular with the enthusiast community. So I Googled “best linear switches” and I got this bad answer:

(Image credit: Future)

The advice, which is about choosing a keyboard, not choosing linear key switches, doesn’t answer my query and most of the recommended links don’t address it either. But, even if this were an on-point list of only linear switches, giving me Google’s output is not the best user experience. I want to know what the keyboard experts think are the best linear switches and why. I don’t care what Google’s bot thinks because its opinion is meaningless.

A lot of people, including me, use Google like a spellcheck or autosuggest for URLs. You know you want a particular page, but you don’t know the exact web address so you search for what you want and the top result is it. I knew I wanted to visit the main tech security subreddit, but I didn’t remember exactly what it was called. So I Googled “reddit tech security subreddit.”

Instead of getting a list of deep links directly to subreddits, I got Google’s SGE box giving its opinion on which subreddits I should check out, without links to them in the copy.

Here, SGE is acting like a rude stranger who, overhearing you ask a friend a question in a public place, jumps into your conversation to offer their unwanted two cents. I just wanted a link, the regular Google SERP was giving it to me, and SGE had to break into the conversation. Ugh!

(Image credit: Future)

One of the most “just give me the link” use cases is when you want to log in to an account you have somewhere: your health insurance provider, for example. You want the sign-in page and that is all. However, when I Googled some health insurance providers plus “login,” I got the SGE experience telling me how to log in (without links to the actual sign-in pages).

(Image credit: Future)

This is exactly the kind of behavior we see from content farms. The farmers know that someone is searching for an experience that they cannot provide, but they target that term with an article so that they can divert some traffic their way.

How Google Could Make SGE Usable

The problem with Google SGE is the mission more than the tool. A helpful generative AI experience would:

  • Always cite specific sources with direct links in the copy like Bing Chat does today.
  • Not pretend to offer advice: A list of recommended products is advice, as is a list of tips. Any advice should come from a specific human expert. If Google SGE answers queries like “best lubricated switches” at all, it should pinpoint people with expertise, experience and authority in that topic.
  • Take up less of the screen: Many people are just going to want links. Don’t push all links below the fold, particularly on high-res screens. Making Google’s own content take up the entire screen is anti-competitive behavior.
  • Give good recommended links: The quality of the recommended links in the SGE box should match the quality of those in Google’s regular organic SERP.
  • Not answer queries where someone is looking for a specific web page: If I ask for a subreddit or my insurance provider’s login page, don’t give me your advice. Get out of the way.

I hope Google will come around to some of these ideas through its testing, but I’m not holding my breath.

Will Google Focus on Profit or User Experience?

Google seems to be betting that, whether its advice is good or not, most readers will stick with the answers it gives rather than going to outside websites. If you click a link from Google Shopping, Google is the one that gets paid. And if you stay on Google.com rather than clicking through to another site, Google gets to keep 100 percent of the ad revenue.

The problem is not that the SGE content is so good that it puts human writers to shame. The issue is that the librarian has become a publisher and is pushing his own content-farm-level output to the forefront. You may not need to write the best book when you have the front shelf.