Have you noticed lately that some of your Google searches have steered you wrong? There’s a reason for that.

Spammers are using artificial-intelligence tools to create an ocean of content, and Google’s algorithms are ranking some of those robot-generated pages ahead of the information you actually need.

This adds a new layer to the tricks that already spoil your searches, including misleading targeted ads and low-quality websites built to appear atop the results page. At best, this clickbait is annoying. At worst, it can lead you to scams intended to get your credit-card number and other personal information.

Here’s a quick example: When I wanted to switch the Google account I use for Gmail, I searched “how to change default Google account.” The top result, with large highlighted text, led to an article posted to LinkedIn.

The author was Morgan Mitchell, content manager at Adobe. Mitchell has bylined 150 articles, all of them written in search-friendly Q&A format. Many of those articles include customer-service phone numbers, the go-to solution for more complex problems—and for less tech-savvy users.

Trouble is, Mitchell doesn’t exist. And the phone number in the article didn’t belong to Google or Adobe. Likely, Mitchell is just a figment of some AI’s imagination, and the number is a way to con unsuspecting users.

In independent analyses, Google's results are of higher quality than those from competing search engines, a Google spokesman said. “Our spam-fighting systems help keep search 99% spam free,” he said, adding that the company reviews user feedback and updates the search tool thousands of times a year.

I use Google every day, and so do most of us—it accounts for over 91% of searches globally. I find it more reliable than the alternatives. Still, there’s a lot of muck out there. Generative AI only adds to the mess.

Here’s what to know about googling now, and how to search smarter.

Flooding the zone

To rank highly in search results, spammers are now publishing posts on established sites that Google tends to favor, such as LinkedIn, Reddit and Quora. Generative text chatbots make producing this “parasite” content easier, said Mark Williams-Cook, a search-engine specialist and director at marketing agency Candour.

“Web spam is not new, but the tools are, and they have lowered the barrier for entry,” he said.

Google prohibits mass-produced content aimed at hijacking search results. It’s on the lookout for spammers using software to generate keyword-filled gibberish or scrape text from other websites. Still, AI-generated content doesn’t necessarily violate Google’s spam rules. If an AI’s content is good enough, it could surface in search.

But what looks good enough to Google’s search-ranking systems might not be helpful to you, said Williams-Cook. To both machines and users, the content “reads well,” he said, but can contain glaring mistakes that nonexperts might miss.

Now back to Mitchell, supposed author of 150 LinkedIn posts: Adobe confirmed that no one by that name is affiliated with the company. Williams-Cook suspects the profile and its posts were created using AI. The Mitchell profile image might be AI-generated, too, based on my image searches.

“We’ve long been clear that if content is produced solely for ranking purposes—including AI-generated content—that would violate our spam policies and we’d address it,” the Google spokesman said. (After I shared the profile with LinkedIn, the company removed it for violating the platform’s fake-account and spam policies.)

Here’s an example of an AI-powered service that gets past Google’s filters but still adds noise to the web. With the help of OpenAI’s ChatGPT, Eightify generates text-based summaries and Q&As from YouTube videos. It went from no traffic to 1.2 million monthly Google referrals over the past six months, according to Williams-Cook.

The site ranks highly on hundreds of searches such as “Capital One credit increase hack” and “safe places to sleep in your car.” Eightify’s founder, Alex Kataev, said the company doesn’t fact-check every article, but it does check its most popular ones, as well as those flagged by users.

“Unfortunately, the spread of misinformation isn’t due to the use of AI,” said Kataev. “We rely on YouTube’s moderation, and it often falls short.” A YouTube spokeswoman said the company does not allow misleading or deceptive content that poses a “serious risk of egregious harm” and removes videos that violate this policy.

Snippet sniping

When you search, you often get a featured “snippet”—a highlighted excerpt at the top of the page from a source that Google’s algorithm deems authoritative. No need to click any further.

If you search “flu symptoms,” up pops information from the Centers for Disease Control and Prevention website, followed by a link to more information from the CDC. But the algorithm sometimes gives this spot to less desirable sources.

In January, Luan Santos realized his wife’s name was misspelled on an upcoming Delta Air Lines reservation. A quick Google search yielded a snippet containing a customer-service number, so he called it.

The first red flag: An agent picked up immediately. He had expected a robotic greeting followed by a half-hour wait. He provided the booking confirmation number and was asked for his wife’s last name.

At that point, he noticed the URL in Google’s snippet didn’t point to Delta’s website. Fearing a scam, he hung up. He said he now has less trust in Google. (For the record, here is Delta’s official contact information.)

I attempted my own desperate-traveler search: “Southwest real person.” The resulting snippet also had a phone number that didn’t belong to the airline. When I called it, an agent immediately answered. He said this was for a “consolidated airlines help desk” and requested a credit-card number to make flight changes.

A Southwest Airlines spokesman confirmed the airline isn’t part of any consolidated help desk, and recommended reaching out via official channels.

More-common search terms are likely to point to official information, while less common terms might pick up these lower-quality pages, the Google spokesman said. Sure enough, I found the correct hotline by typing in “Southwest customer service.” After I showed the malicious featured snippet to Google, it no longer appeared.

Bad ads

Google search results usually begin with a layer of ads, sometimes so many that you have to scroll to get to nonsponsored links. Paying for ad placement is another way bad actors can lure customers away from their intended link.

Take the search “ChatGPT Plus.” That’s OpenAI’s $20-a-month premium chatbot service. It’s easy to sign up, especially if you already have the ChatGPT app on your phone.

The first sponsored link on Google reads “ChatGPT 4.0 now available” and takes you to a website that isn’t run by OpenAI. This service also charges a $20 monthly fee, but serves up older, freely available software.

Users of the review site Trustpilot say they haven’t been able to obtain refunds from the site. I tried contacting the site’s operators and didn’t receive a response. The Google spokesman said the company prohibits ads that distribute malware or seek to scam users, and removes billions of ads each year that violate its terms.

Whatever you’re searching for, remain vigilant. Check whether the link is an authority on the subject. If you’re looking for customer service or product information, start with the company’s website. You’ll likely get the best information straight from the source.

—For more WSJ Technology analysis, reviews, advice and headlines, sign up for our weekly newsletter.

Write to Nicole Nguyen at nicole.nguyen@wsj.com