SAN FRANCISCO—Online actors linked to the Chinese government are increasingly leveraging artificial intelligence to target voters in the U.S., Taiwan and elsewhere with disinformation, according to new cybersecurity research and U.S. officials.

The Chinese-linked campaigns laundered false information through fake accounts on social-media platforms, seeking to identify divisive domestic political issues and potentially influence elections. The tactics identified in a new cyber-threat report published Friday by Microsoft are among the first uncovered that directly tie the use of generative AI tools to a covert state-sponsored online influence operation against foreign voters. They also demonstrate more-advanced methods than previously seen.

Accounts on X—some of which were more than a decade old—began posting last year about topics including American drug use, immigration policies, and racial tensions, and in some cases asked followers to share opinions about presidential candidates, potentially to glean insights about U.S. voters’ political opinions. In some cases, these posts relied on relatively rudimentary generative AI for their imagery, Microsoft said.

U.S. officials see China’s rising clout in global influence operations as a concern because of the evolving tradecraft and ample state resources. Last fall, for example, the U.S. State Department accused the Chinese government of spending billions of dollars annually on a global campaign of disinformation, using investments abroad and an array of tactics to promote Beijing’s geopolitical aims and stifle criticism of its policies.

In an interview, Tom Burt, Microsoft’s head of customer security and trust, said China’s disinformation operations have become much more active in the past six months, mirroring a rise in cyberattacks linked to Beijing.

“We’re seeing them experiment,” Burt said. “I’m worried about where it might go next.”

Separately, Microsoft said it detected a surge of more-sophisticated AI tools in the January presidential election in Taiwan, including an AI-generated fake audio clip of a former presidential candidate endorsing one of the remaining candidates. It marked the first time the technology giant’s threat researchers had seen a nation-state actor use AI to attempt to influence a foreign election.

The posts have so far failed to achieve much traction, Microsoft said, but they offer a preview of state-backed election-influence operations to come. Western intelligence officials have said they have growing concerns about how AI tools could be used to flood elections this year with misleading videos or other content, including in the 2024 U.S. presidential contest. Security experts have said fake AI-generated audio clips pose an especially acute threat because they are relatively easy to manufacture and have been shown to dupe audiences easily.

Chinese government operators “have increased their capabilities to conduct covert influence operations and disseminate disinformation,” according to an annual worldwide threats report released recently by the U.S. intelligence community. “Even if Beijing sets limits on these activities, individuals not under its direct supervision may attempt election influence activities they perceive are in line with Beijing’s goals.” The report also said China was “experimenting with generative AI” and intensifying efforts to mold U.S. discourse on issues including Hong Kong and Taiwan.

Beijing has repeatedly said that it opposes the production and spread of false information and that U.S. social media is inundated with disinformation about China.

The Microsoft report is the latest in a series of published research shedding light on disinformation operations linked to Beijing. A new report from the Institute for Strategic Dialogue, a London-based research organization, identified a small number of accounts on X it said were linked to China that were impersonating supporters of former President Donald Trump and attempting to denigrate President Biden.

In one example from November spotlighted in the Microsoft report, China’s online army pounced on a train derailment in Kentucky, spreading conspiracy theories on social media that falsely accused the U.S. government of being responsible. The accounts linked the derailment to long-discredited theories that Pearl Harbor and the 9/11 attacks were coverups.

In another example, Microsoft said China sought to spread conspiratorial, false narratives across several platforms by alleging that the U.S. government had deliberately started the wildfires along the coast of Maui, Hawaii, by testing a “weather weapon.” That effort produced posts in at least 31 languages across dozens of websites and used AI-generated images of burning coastal roads and residences, apparently to make the content more eye-catching to audiences, Microsoft said.

The threat actor that Microsoft calls Storm-1376, also known as Spamouflage and Dragonbridge, was responsible for the disinformation campaigns, the report said. It has been tracked by Western cyber-threat researchers since at least 2019. Meta Platforms took down thousands of accounts last year linked to Spamouflage, in what it said at the time was the largest known online covert influence operation in the world.

Some of the most intensive uses of AI targeted audiences in Taiwan. AI-generated news anchors, created with a tool from the Chinese company ByteDance, appeared in a variety of videos featuring Taiwanese officials, according to the Microsoft research. Spamouflage has experimented with AI-generated news anchors since early last year, but the volume of that content has expanded in recent months, Microsoft said.

Microsoft’s Burt said that Russian state actors still exhibited more-impressive disinformation tactics than China overall but that China was rapidly improving, in part because of the size of its investment.

The election in Taiwan “is where we saw the outcome of what they were learning from utilizing AI,” Burt said. “It significantly upgraded the quality of the images and the information they were using in those operations.”

Microsoft didn’t disclose the identity of the accounts it tracked in the alleged disinformation campaign. A spokeswoman said that was standard practice for the company.

Write to Dustin Volz at dustin.volz@wsj.com