
The Problem With TikTok



Last month’s congressional TikTok hearing was both a symbol of deteriorating United States–China relations and an anxious reaction to a future in which the U.S. may no longer enjoy exclusive dominance over the technological landscape. TikTok is portrayed as the vanguard of a China-led world in which four of the five top mobile apps in the U.S. have Chinese origins, and China increasingly challenges the U.S. for AI supremacy and rivals it as an innovation-centric society.

The congressional hearing was essentially a five-hour monologue criticizing the Chinese government, interspersed with commentary about the toxic state of the Internet. At one point, TikTok was preemptively admonished not to argue that its platform was no worse than other platforms, with Rep. Jan Schakowsky offering, “I really don’t wanna go by that standard.”

The criticisms leveled at TikTok – concerning everything from data privacy to misinformation to harm to children – differ little from those directed for years at American platforms. Given how little Congress has done about those platforms, we should be skeptical about how much change to expect now. Yet the big question remains: Just how much of a threat does TikTok really pose?

TikTok’s singular popularity stems from the power of its recommendation algorithm to learn its users’ interests with uncanny accuracy and feed them an intimately personalized stream of content. Algorithmic recommendation is at the heart of all major social platforms, but TikTok’s algorithms, coupled with its short-form, video-centric content, have proven far more intoxicating than those of U.S.-based legacy platforms. The dangers of such hyper-personalized recommendation streams range from encouraging harmful behaviors in teens to the concern that China could secretly use the platform to foment division and unrest and nudge the West toward ideas and beliefs that favor China’s interests.
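To make the mechanism concrete, here is a deliberately simplified sketch of such a recommendation loop in Python. It is illustrative only – the data structures, topic labels, and the 0.1 learning rate are invented for the example, not drawn from TikTok’s actual system:

```python
# A toy personalized feed: score each candidate video against a per-user
# interest profile, then update the profile after every interaction.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    topics: dict  # topic -> strength in [0, 1], e.g. {"cooking": 0.9}

def score(interests: dict, video: Video) -> float:
    """Weight the video's topics by how much this user cares about each."""
    return sum(interests.get(t, 0.0) * s for t, s in video.topics.items())

def update(interests: dict, video: Video, watch_fraction: float) -> None:
    """Nudge the profile toward topics the user actually watched."""
    for topic, strength in video.topics.items():
        interests[topic] = interests.get(topic, 0.0) + 0.1 * watch_fraction * strength

user = {"cooking": 0.2}
candidates = [Video("v1", {"cooking": 0.9}), Video("v2", {"politics": 0.8})]
feed = sorted(candidates, key=lambda v: score(user, v), reverse=True)
update(user, feed[0], watch_fraction=1.0)  # every swipe sharpens the profile
```

Every swipe and every second of watch time feeds back into the profile, which is why the feed converges on a user’s interests after remarkably few interactions.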

TikTok has gone to great lengths to emphasize its independence, but the idea that the Chinese government could leverage tiny algorithmic tweaks to manipulate what Americans are interested in has real merit. The company has previously confirmed that its employees do manually override its algorithm to prioritize topics and individuals that it wishes to promote, though it strenuously denies that the Chinese government could influence its decisions.
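As a purely hypothetical illustration of why such overrides are hard to spot from the outside, a small bonus layered on top of a video’s organic score is all it takes – the creator names and numbers below are invented:

```python
# Hypothetical sketch of a manual override layer: a small additive boost for
# chosen creators or topics rides on top of the organic ranking score, and
# from the outside it looks like the content simply performed well.
PROMOTED = {"favored_creator": 0.15}  # invented override table

def final_score(organic_score: float, creator_id: str) -> float:
    return organic_score + PROMOTED.get(creator_id, 0.0)

print(final_score(0.60, "favored_creator"))   # 0.75 – now outranks the post below
print(final_score(0.70, "ordinary_creator"))  # 0.70
```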

From a data-privacy perspective, there is a risk that TikTok could monetize the richly intimate portraits it creates of Americans by selling their data. Then again, American platforms have long quibbled over what it means to “sell” data and who should have access to the insights they derive from it, and the U.S. already has a vibrant landscape of shadowy data brokers from whom even more intimate data – medical, financial, even sexual – can be purchased in bulk today.

Even if TikTok doesn’t sell Americans’ data (which it promises it will not), some warn that China’s spy agencies could use the data to gain a better understanding of the American population. Facebook’s own use of its mobile app in 2019 to track users it deemed “threats” suggests how TikTok could turn its billion-plus users, including 150 million Americans, into real-time location beacons to catalog patterns of life in the U.S. TikTok’s admission last year that it accessed the data of two journalists to investigate a leak lends further credence to these concerns.

At a more basic level, TikTok could simply adjust its content-moderation policies to silence criticism of Chinese interests. In 2019, the Guardian revealed that TikTok had been quietly censoring topics like Tiananmen Square, Tibetan independence, and Falun Gong. While the company subsequently claimed that it was no longer suppressing such content, nothing is stopping it from quietly imposing hard or soft bans on topics of interest to Beijing, much as American companies allow governments around the world to influence their moderation decisions. As content moderation becomes increasingly algorithmic, such adjustments might not even be readily apparent to oversight boards.
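To see how a soft ban could work without leaving any audit trail, consider this hypothetical sketch: nothing is removed and no moderation decision is logged; a per-topic visibility multiplier inside the ranking pipeline simply starves the topic of distribution. The topic names and figures are invented:

```python
# Hypothetical soft ban: a visibility multiplier quietly applied at ranking time.
# Content stays online, so there is no takedown record for anyone to review.
VISIBILITY = {"sensitive_topic": 0.02}  # invented internal config

def adjusted_score(base_score: float, topic: str) -> float:
    return base_score * VISIBILITY.get(topic, 1.0)

print(adjusted_score(0.9, "sensitive_topic"))  # effectively invisible in feeds
print(adjusted_score(0.9, "cooking"))          # untouched
```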

Would requiring TikTok to use American servers and submit to audits of its data access solve these issues? Ring-fencing TikTok’s user data could mitigate some of the dangers of at-scale exploitation by the Chinese government. Even so, the dangers posed by its algorithm would remain unchanged. Modern AI algorithms are black boxes that even their own creators understand only poorly.

Simply requiring TikTok to share the source code of its AI algorithms with American auditors would not do much to mitigate these risks. The real power of AI comes not from the source code but from the underlying “model” that encodes the AI’s behavior. Subtle tweaks to the model, or even innocuous-looking changes to the parameters used to train it, could invisibly shift its behavior to reinforce Chinese government interests. This could be done in ways that would be very difficult for American auditors to detect, even if they had access to the source code and trained the model themselves using the TikTok-provided parameters.
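A simplified illustration makes the point. In the hypothetical sketch below (not TikTok’s code), the auditable ranking function is byte-for-byte identical across deployments; only the weights it loads differ, and those weights are the product of training data and parameters an auditor cannot meaningfully read line by line:

```python
# The "source code" of a ranker can be trivially simple and pass any audit;
# the behavior is determined entirely by the learned weights it loads.
def rank(videos, weights):
    return sorted(
        videos,
        key=lambda v: sum(weights.get(t, 0.0) * s for t, s in v["topics"].items()),
        reverse=True,
    )

videos = [
    {"id": "protest_coverage", "topics": {"politics": 0.9}},
    {"id": "dance_trend", "topics": {"entertainment": 0.9}},
]

weights_a = {"politics": 0.8, "entertainment": 0.5}  # surfaces political content
weights_b = {"politics": 0.1, "entertainment": 0.9}  # quietly buries it

print([v["id"] for v in rank(videos, weights_a)])  # ['protest_coverage', 'dance_trend']
print([v["id"] for v in rank(videos, weights_b)])  # ['dance_trend', 'protest_coverage']
```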

The societal dangers that TikTok poses are no different from those of American social media platforms that already collaborate with foreign governments to censor, surveil, and manipulate users around the world, often in ways hostile to U.S. interests. Requiring TikTok to use U.S.-based servers will mitigate some of these risks, but even an outright ban will do little to curtail TikTok’s use by American teens already well-versed in circumventing parental controls. In the end, there is only one real solution to the TikTok problem: American companies must develop viable competitors.

This article was originally published by RealClearPolitics and made available via RealClearWire.