How an “AI-tocracy” emerges

Many scholars, analysts, and other observers have suggested that resistance to innovation is an Achilles' heel of authoritarian regimes. Such governments can fail to keep up with technological changes that help their opponents; they may also, by stifling rights, inhibit innovative economic activity and weaken the long-term condition of the country.

But a recent study co-led by an MIT professor suggests something quite different. In China, the research finds, the government has increasingly deployed AI-driven facial-recognition technology to suppress dissent; has been successful at limiting protest; and, in the process, has spurred the development of better AI-based facial-recognition tools and other types of software.

"What we found is that in regions of China where there is more unrest, that leads to greater subsequent government procurement of facial-recognition AI by local government units such as municipal police departments," says MIT economist Martin Beraja, a co-author of a new paper detailing the findings.

What follows, as the paper notes, is that "AI innovation entrenches the regime, and the regime's investment in AI for political control stimulates further frontier innovation."

The scholars call this state of affairs an "AI-tocracy," describing the connected cycle in which increased deployment of AI-driven technology quells dissent while also boosting the country's innovation capacity.

The open-access paper, also titled "AI-tocracy," appears in the August issue of the Quarterly Journal of Economics. The co-authors are Beraja, the Pentti Kouri Career Development Associate Professor of Economics at MIT; Andrew Kao, a doctoral candidate in economics at Harvard University; David Yang, a professor of economics at Harvard; and Noam Yuchtman, a professor of management at the London School of Economics.

To conduct the study, the scholars drew on multiple kinds of evidence spanning much of the last decade. To catalogue instances of political unrest in China, they used data from the Global Database of Events, Language, and Tone (GDELT) Project, which records news feeds globally. The team turned up 9,267 incidents of unrest between 2014 and 2020.

The researchers then examined records of nearly 3 million procurement contracts issued by the Chinese government between 2013 and 2019, from a database maintained by China's Ministry of Finance. They found that local governments' procurement of facial-recognition AI services and complementary public-security tools, such as high-resolution video cameras, jumped significantly in the quarter following an episode of public unrest in that area.

Given that Chinese government officials were clearly responding to public dissent by ramping up their use of facial-recognition technology, the researchers then examined a follow-up question: Did this approach actually suppress dissent?

The scholars believe that it did, although, as they note in the paper, they "cannot directly estimate the effect" of the technology on political unrest. As a way of getting at that question, they studied the relationship between weather and political unrest in different areas of China. Certain weather conditions are conducive to political unrest. But in prefectures that had already invested heavily in facial-recognition technology, those same weather conditions were less likely to produce unrest than in prefectures that had not made the same investments.

In so doing, the researchers also accounted for factors such as whether greater relative wealth in some areas might have produced larger investments in AI-driven technologies regardless of protest patterns. Even so, the scholars reached the same conclusion: facial-recognition technology was deployed in response to past protests, and then reduced further protest levels.

“It suggests that the technology is effective in chilling unrest,” Beraja says.

Finally, the research team studied the effects of increased AI demand on China's technology sector and found that the government's greater use of facial-recognition tools appears to be driving the country's tech sector forward. For instance, firms that are granted procurement contracts for facial-recognition technologies subsequently produce about 49 percent more software products in the two years after gaining the government contract than they had beforehand.

“We examine if this results in greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Such data, from China's Ministry of Industry and Information Technology, also indicate that AI-driven tools are not necessarily "crowding out" other forms of high-tech innovation.

Adding it all up, the case of China indicates how autocratic governments can potentially reach a near-equilibrium state in which their political power is enhanced, rather than upended, when they harness technological advances.

"In this age of AI, when the technologies not only generate growth but are also technologies of repression, they can be very useful" to authoritarian regimes, Beraja says.

The finding also bears on larger questions about forms of government and economic growth. A significant body of scholarly research shows that rights-granting democratic institutions do generate greater economic growth over time, in part by creating better conditions for technological innovation. Beraja notes that the current study does not contradict those earlier findings, but in examining the effects of AI in use, it does identify one avenue through which authoritarian governments can generate more growth than they otherwise would have.

"This may lead to cases where more autocratic institutions develop side by side with growth," Beraja adds.

Other experts in the societal applications of AI say the paper makes a valuable contribution to the field.

"This is an excellent and important paper that improves our understanding of the interaction between technology, economic success, and political power," says Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management at the University of Toronto. "The paper documents a positive feedback loop between the use of AI facial-recognition technology to monitor and suppress local unrest in China and the development and training of AI models. This paper is pioneering research in AI and political economy. As AI diffuses, I expect this research area to grow in importance."

For their part, the scholars are continuing to work on related aspects of this issue. One forthcoming paper of theirs examines the extent to which China is exporting advanced facial-recognition technologies around the world, highlighting a mechanism through which government repression could grow globally.

Support for the research was provided in part by the U.S. National Science Foundation Graduate Research Fellowship Program; the Harvard Data Science Initiative; and the British Academy's Global Professorships program.