The world in 2025:
The greatest threat to democracy isn’t an opposing political ideology, but widely available low-cost surveillance technologies. The COVID-19 pandemic led to the rapid development and deployment of surveillance technologies, which are now primarily developed and sold by authoritarian countries. The use of surveillance technologies varies significantly depending on where you live. In the United States, you’ll notice relatively little overt surveillance, but in China, you’ll experience cutting-edge surveillance built on AI technologies, with public messaging campaigns about their efficacy.
Authoritarian countries gained a competitive advantage in the underlying technologies by seizing the growing market for surveillance. To secure economic and political advantage, they have been aggressively exporting surveillance technologies to developing countries. As economies of scale continue to drive the price of these technologies down, democratic countries face increased pressure to follow suit. The challenge for the second half of the decade will be resisting the seemingly unstoppable creep of surveillance-led authoritarian governance.
How did we get here?
First, increasing government interest in AI-augmented surveillance systems to help counter COVID-19 led to a rapid acceleration in capabilities and increasingly broad applications. The initial surge in interest focused on better analyzing, identifying, and protecting against carriers of the virus.1 The growing demand from governments led to a greater supply of AI capabilities from industry and academia. This happened in two ways: first, by encouraging more research on COVID-specific systems, and second, by promoting discussion of general surveillance systems for COVID-specific purposes. Businesses and technology startups sought to develop surveillance systems for governments, making up for business lost elsewhere to the economic effects of the pandemic. Similarly, the AI research community shifted more attention toward surveillance-related computer vision research as funding in other application areas dried up. Initially designed for COVID-19-related applications, surveillance systems became more advanced and were increasingly repurposed for other uses, such as:
- Monitoring the behaviors of large crowds.
- Detecting “abnormal behaviors” in public transit and other locations.
- Tracking the movements of people and vehicles throughout cities.
- Automatically mapping the “social graph” of people by observing their day-to-day interactions.
- Tracking people at the entrances and exits of public buildings.
Second, different societal reactions to surveillance led to a fracturing of the AI industry and a growing competitive advantage for authoritarian countries. In democratic countries, consistent pushback from the public led to legislation actively constraining the development and deployment of surveillance technologies. The scientific community in these countries shifted from developing surveillance capabilities to understanding their social impacts and developing techniques to mitigate potential harms, such as adversarial clothing, “fair” or “representative” datasets to improve the efficacy of those uses of surveillance deemed acceptable, and privacy-preserving machine learning techniques.
By contrast, authoritarian countries—most significantly, China—embraced the new technologies with little regard for the objections from the public, scientific, and international communities. Broader usage provided a competitive advantage in several respects. It allowed these nations to collect ever-larger datasets that were then used to develop more sophisticated surveillance systems. By securing their competitive advantage, these countries attracted the best companies and talent in the field, further increasing their share of the market and relevant research output. As they started to export their technologies, they constructed “data for capability” deals to create a compounding advantage in data over time. Their surveillance technologies also became cheaper as the state of the art of AI technology progressed, further lowering the barriers to widespread adoption. Lastly—and ironically, because surveillance-relevant datasets disproportionately comprised people from these countries—bias was less of a problem when these nations applied the technology against their own citizens. As a consequence, surveillance technologies were more effective and less controversial when applied in authoritarian countries.
How will we know if we’re heading toward this world?
Below are indicators to monitor over the next year to help us understand whether we might be approaching such a future. I break each indicator down into one or more metrics to be forecast on Foretell. You can propose additional metrics for these indicators here.
- AI R&D more focused on surveillance-related topics
  - Publications in the field of computer vision
  - Money raised by surveillance companies
- Governments increase funding for surveillance-related R&D
  - U.S. government spending on surveillance-related R&D
  - Seeking suggestions: non-U.S. funding for surveillance-related R&D
- Privacy concerns increase in democratic countries
  - U.S. media interest in privacy-security topics
  - Sentiment of U.S. media coverage of facial recognition technology
  - Seeking suggestions: privacy concerns in European countries
  - Seeking suggestions: privacy- or surveillance-related legislation in the U.S.
- Increase in China’s global share of surveillance-related markets, talent, and research output
  - China’s share of facial recognition-related publications [Forthcoming]
  - Chinese exports of surveillance-related technologies [Forthcoming]
- Increasingly expansive application of surveillance technologies by governments
  - Use of facial recognition technology in U.S. cities [Forthcoming]
  - Seeking suggestions: use of facial recognition technology by non-U.S. governments
  - Seeking suggestions: use of facial recognition technology in China
1. An AI-augmented camera, for instance, might use AI to judge whether people are carrying out social distancing. An un-augmented camera would simply transmit the feed to a monitoring station, where a human would visually inspect the camera outputs.
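The social-distancing judgment the footnote describes can be sketched in a few lines. The sketch below is purely illustrative and assumes an upstream person detector has already produced bounding boxes; the detector, the 2-metre threshold, and the pixel-to-metre scale are all hypothetical parameters, not part of the original text.

```python
# Illustrative sketch of the "AI-augmented camera" logic: flag pairs of
# detected people whose centroids are closer than a distancing threshold.
from itertools import combinations
from math import hypot

def box_centroid(box):
    """Centre point of an (x1, y1, x2, y2) bounding box, in pixels."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def distancing_violations(boxes, min_distance_m=2.0, pixels_per_metre=50.0):
    """Return index pairs of detected people closer than min_distance_m.

    `pixels_per_metre` is an assumed fixed ground-plane scale; a real
    system would calibrate this per camera.
    """
    violations = []
    for (i, a), (j, b) in combinations(enumerate(boxes), 2):
        (ax, ay), (bx, by) = box_centroid(a), box_centroid(b)
        distance_m = hypot(ax - bx, ay - by) / pixels_per_metre
        if distance_m < min_distance_m:
            violations.append((i, j))
    return violations

# Two people about 1 m apart (50 px) and one far away: one flagged pair.
boxes = [(0, 0, 20, 60), (50, 0, 70, 60), (400, 0, 420, 60)]
print(distancing_violations(boxes))  # [(0, 1)]
```

An un-augmented camera, by contrast, contributes no logic like this at all: it simply streams frames for a human to inspect.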
Author: Jack Clark, OpenAI