Always Watching: The Surveillance State, Scaled by AI
Data collection by big tech and the state has become the norm. Most of us sign our data away to social media, accept facial scans at the airport, and some even hand over their health and sleep data to Whoop or Eight Sleep. Doorbell cameras like Ring are now common, and Flock Safety cameras are appearing in more and more public places. Algorithms already know our preferences, habits, and what makes us tick. This once would have been seen as a gross violation of privacy, but it has slowly become the normal cost of entertainment, efficiency, or even healthy living. What changes with AI is not just the amount of data being collected, but the ability to interpret and monitor it continuously at scale.
The US government has been quietly weaving AI into its operations for years, but it was unclear how it would be used. A story broke recently that is worthy of attention, even if you don’t track model releases or follow lab drama. The Pentagon designated Anthropic a “supply chain risk” after Anthropic refused to loosen two restrictions on Claude: no mass domestic surveillance of Americans and no fully autonomous lethal decisions. This does not tell us everything about how the government intends to use AI, but it does give us a glimpse.
While there are many ethical dilemmas with AI’s military use, I want to focus on mass surveillance. Yes, our online data is tracked, yes there are cameras in most public places, but is anyone really monitoring all of that data and footage? In the absence of a criminal investigation, the answer has generally been that nobody is watching. AI changes this equation completely by allowing large-scale pattern detection on messy datasets at speeds and scale impossible for humans.
On March 28, Rep. Clay Higgins responded to the day’s “No Kings” protests with this post on X:
Yes, the tone was ugly, but the phrase that matters is “AI eats that stuff.” He is talking about everyday people exercising their First Amendment rights as raw material for surveillance. His point is clear: it doesn’t matter if what you are doing is legal or even protected by the Constitution. We will be watching.
That mindset did not begin with AI. Edward Snowden’s leaks in 2013 revealed the collection infrastructure: the NSA’s PRISM program, a surveillance apparatus built in secret, deployed on US citizens, and justified in the name of safety. The Anthropic story reveals the next phase: interpretation infrastructure. The shock of the Snowden leaks was less about the desire to surveil than about how much capability already existed.
Reporting this month showed that the FBI has been buying commercially available data that can be used to track people’s movements, a reminder that agencies don’t need to rely solely on their own infrastructure. Usually, the government needs a warrant to access this kind of information, but data brokers offer an alternative path with serious privacy implications.
However, data collection was always only half the problem. Cameras and microphones have been everywhere for years. Phones throw off location trails, and purchases, travel, public records, and online activity leave behind a constant stream of data. For most of that time, the real bottleneck was human attention. There were only so many analysts, so many hours in the day, and so much material that any institution could realistically review, connect, and act on.
What AI changes is the state’s ability to make use of what it already has. Software that can identify faces, trace movements, connect records, and turn scattered databases into a usable map changes the nature of what surveillance means. The state has been collecting the pieces for years but AI has given it the ability to make sense of them and act on them at scale. The major shift is not just more collection, but continuous AI interpretation of our lives.
This is where it should start to feel concrete, even for people who do not follow AI closely. Picture being pulled aside at an airport because a system flagged you as an elevated risk. You are told the decision was informed by travel patterns, location history, purchases, online associations, and whatever else the model ingested. A human reviewed it, you are told, but the reasoning cannot be fully disclosed because the system is proprietary or the process is classified. You are standing in front of the output of a machine-assisted judgment that you cannot inspect and cannot meaningfully contest. Nothing about that scenario requires a dramatic break with American life. It just requires existing surveillance powers, weak accountability, and tools that make broad interpretation finally possible. Versions of this already exist in watchlists or traveler risk assessments where the person affected often can’t see or challenge how the judgment was made.
This is the new normal I keep thinking about. Snowden’s leaks were an early warning about what data the state wanted to collect. The world we are entering is about what it can do with all of that data once AI makes surveillance cheaper and continuous.
I doubt our government will voluntarily limit its power by holding back a tool that makes surveillance and control easier. More likely, these systems will continue to be implemented, narrowly at first, then creeping into more and more of daily life and left in place long enough to become ordinary. By the time they feel routine, they will be almost impossible to undo. However, that is not a fixed outcome.
I should say plainly that I think the upside of AI is real. The spread of technical ability is one of the most exciting shifts I have seen in my lifetime. But lines must be drawn now, while this technology is still in its infancy, and we can’t rely on the government or corporations to protect us. Freedom is fundamentally incompatible with constant, pervasive surveillance. Anthropic took a stand this time, and its contract was immediately filled by OpenAI without the same protections. Even if one company draws a line, market and political pressure make it almost impossible for the industry as a whole to hold that line for long.
While it may seem futile, advocacy and community involvement can be effective. Implementing these systems still depends on local approvals and public tolerance. Denver just removed all 110 Flock cameras around the city after pressure from the ACLU of Colorado and community advocacy. The fight is happening now in city councils and procurement decisions.
If there is hope of preserving freedom, privacy, and human control, it starts with paying attention when the government reveals how it intends to use AI. That means noticing the signals early, before these systems are fully embedded in everyday life, and refusing to trade those rights away for convenience.
I have listed some other organizations that are fighting against AI surveillance right now:
The Electronic Frontier Foundation