As a professional writer, Sarah Suzuki Harvard says she isn’t inclined toward overtly exuberant prose. But these days, she finds herself going rogue.

“I’ll use aggressively casual language, like, ‘hey yo, for real,’ or drop a bunch of exclamation points,” said Harvard, a 32-year-old copywriter in Brooklyn, regarding her posts and essays. “It feels so icky to do this, but it’s what you have to do to sound human.”

Call it a reverse Turing Test. As AI-generated writing floods the internet, more people are trying to detect which creators are using such tools to spin up copy. That means writers penning all their own work—and people who acknowledge using chatbots for help—are trying to master something they never worried about before: how to sound human.

Like many writers, Harvard fears being accused of wielding machine-made material. She’s seen it happen to others and is proactively trying to prove her human bona fides.


“It’s like the new McCarthyism,” Harvard said. “It’s just crazy. People are demanding proof of something that can’t be proven.”

AI is an able writer because it’s trained on vast troves of human writing, from literary classics to contemporary op-eds. Models are also taught to be clear and avoid unnecessary complexity, goals famously enshrined in Strunk & White’s “The Elements of Style.” But too much polish can be a giveaway, as can repeated use of flourishes: lists of three, for example, or punchy line breaks.

Controversy over authors using AI has made regular headlines, including in March when publisher Hachette pulled the book “Shy Girl” over allegations that it was partly AI-composed. (The author said she didn’t personally use AI.) Schools and teachers have used AI-detecting software for years to catch students cutting corners, often with mixed success.

The ranks of armchair detectors have also grown as Americans embrace AI for all sorts of writing tasks, from reference letters to LinkedIn missives. Suspicions pop up frequently on the busy “isthisAI” subreddit.

Garrett Marcy believes he can spot AI’s house style. The financial account coordinator in Jacksonville looks for a distinctly staccato sentence cadence, heavy use of the em dash (—) or phrases such as “it’s not x, but y.”

Another telltale sign: people who suddenly become voluble online. “There are kids I know from college who couldn’t write a paper, and it’s like, you have a thesis statement now?” said Marcy, 28.

He isn’t a purist: he uses AI to help craft his writing, then edits to retain his own voice. Marcy eliminates em dashes and sometimes swaps in sentences of his own, even though he knows he tends toward the run-on variety. He’ll even leave in an accidental typo.

“I’m not innocent, we’re all using this stuff,” Marcy said. “If it helps you write an email, that’s not stolen valor.”

Chicago-based Sean Chou, 54, co-founder of multiple tech startups including an AI company last year, uses artificial intelligence to draft LinkedIn posts. But he said he’ll replace em dashes with two smaller dashes, hoping they’ll look more handmade.

“It’s like my artisanal craftsmanship,” Chou said.

He tries to rein in overtly bold statements, too. “[Large language models] get their content from TED Talk transcripts and Reddit opinions, so it has a self-selection bias there, it tends to sound very confident,” he said.

Andy O’Bryan, who co-founded an online group for entrepreneurs interested in AI, said he’s seen more people trying to scuff up AI-generated prose with typos or run-on sentences.

“You’ll be reading someone’s Substack or blog post, and all of a sudden in the middle of a perfect paragraph, there’ll be a mistake sitting out there like a sore thumb,” said O’Bryan, 62. “It’s like, try harder.”

As AI models advance, figuring out what’s written by bots has gotten harder, said Ivan Jackson, 27, founder of the startup Writehuman. His company’s software edits AI-generated text to make it sound more human, with monthly updates to capture evolving trends.

Its analysis suggests current hallmarks include overreliance on phrases such as “rather than” and “essential for.” At the same time, Jackson said text rewritten by humans is increasingly flagged by AI detectors, a trend he chalks up to people unconsciously starting to copy AI’s style.

Two years ago in Grand Rapids, Mich., Ryan Johnson, 33, started a blog to help promote his business advising high-earning young families. A few months later, he began using AI to draft his posts. As someone who minored in journalism, he’d always loved writing, but ChatGPT’s efficiency gains were hard to turn down.

Still, he quit using it last fall, worried its smooth output was eliminating what once made his blog sound distinct. “It’s like the restaurant that starts to water down the soup,” he said. “People don’t leave immediately, but eventually they’re like, eh, it doesn’t have the same kick.”

He likes to litter his text with references that AI wouldn’t suggest, including obscure quotes from “The Office.” Even so, he still gets messages from people asking if he’s using AI.

It’s an impulse he understands.

“I was reading the Bible the other day, and there were em dashes in the book of James, and I was like, hmm, is this AI-generated?” Johnson said. “And it’s just funny, of course, no—good writers do this, so chill out.”

Write to Te-Ping Chen at Te-ping.Chen@wsj.com