Surveillance Science: What the British Royal Air Force, WWII U-Boats, and a Broken Clock Have Taught Us About Monitoring Feeds

Jan 2, 2026

Billy F.

In 1943, the British Royal Air Force had a submarine problem. German U-boats were terrorizing Atlantic convoys, and the RAF's Coastal Command had deployed state-of-the-art airborne radar to find them. The equipment worked. The operators were well-trained. And yet, targets were being missed at an alarming rate. 

Command ran through the usual suspects (suspects still familiar to any Surveillance Director today). Equipment malfunction? No, the radar was functioning perfectly. Vision problems? The operators had passed their eye exams. Laziness? These were disciplined military personnel scanning for enemy vessels while the Battle of the Atlantic hung in the balance. They were, by all accounts, trying very hard.

The pattern that emerged was stranger than any of those explanations. Detection rates were high at the beginning of a patrol and collapsed as the hours wore on. It wasn't that operators couldn't see the blips. It was that something happened to their ability to notice them. 

The RAF brought in a psychologist named Norman Mackworth to figure out what. 

The Clock That Changed Everything 

Mackworth suspected the problem wasn't motivation or training. It was architecture. Specifically, the architecture of the human brain. 

To test this, he built one of the most boring devices in the history of science: a large clock face with a pointer that moved in small, rhythmic jumps, one per second, for two hours straight. The operator's job was to watch the pointer and press a button whenever it made a "double jump," skipping ahead two positions instead of one. 

That was it. No explosions, no enemy combatants, no stakes whatsoever. Just a pointer, moving in circles, for two hours. 

The genius was in the sterility. By removing every other variable, Mackworth isolated the one thing he wanted to measure: what happens to human attention over time when you ask it to wait for something that might never come? 

The answer was brutal. 

For the first thirty minutes, operators caught about 85-90% of the double jumps. Then, around the half-hour mark, something shifted. Detection rates dropped 10-15% and stayed there for the rest of the session. Mackworth called it the "vigilance decrement," and it would become one of the most replicated findings in the history of cognitive psychology. 

The RAF's submarine problem wasn't about effort. It was about the biological limits of sustained attention. After thirty minutes of watching for rare events, the human brain starts to check out, whether you want it to or not. 

Your Brain Evolved for a Different Job 

Here's what Mackworth stumbled onto, though it would take decades of neuroscience to fully explain: the human visual system is spectacularly good at noticing change. A rustle in the grass. A shadow moving at the edge of your vision. A face in a crowd that looks familiar. For the 200,000-odd years humans spent as hunter-gatherers, this is what survival required. Sudden motion meant predator or prey. Noticing it fast meant living another day. 

What the human visual system is not designed for is staring at a static environment waiting for something that might never happen. That's not a flaw. It's just a different job than the one evolution optimized for. 

When researchers looked at what was happening in the brain during prolonged monitoring tasks, they found that the neural systems responsible for sustained attention essentially run out of fuel. It's not a decision to stop paying attention. It's not zoning out. It's more like a battery draining. The resources required to maintain that state of "readiness to respond" are genuinely finite, and after about thirty minutes, they start to deplete. 

This isn't unique to surveillance work. TSA baggage screeners, air traffic controllers, nuclear plant monitors, casino surveillance agents, radiologists reading scans: anyone whose job involves waiting for rare but critical signals experiences the same decay curve. The thirty-minute cliff is a human universal. 

The Wall of Screens Issue 

Time isn't the only variable working against the brain. There's also the question of how much you're asking it to process at once. 

Walk into almost any surveillance operation, whether it's a cruise ship's monitoring room or a casino's surveillance room, and you'll see some version of "The Wall": an array of monitors displaying dozens of camera feeds simultaneously. It looks like total situational awareness. It creates a sense of complete coverage.

But research by Jim Aldridge at the UK's Police Scientific Development Branch suggests The Wall has limits that aren't obvious from looking at it. Aldridge tested how well operators could spot a specific target (a person walking with an umbrella) across different numbers of screens. 

The results were not subtle. 


Monitors    Detection Rate    Drop from Baseline
1           85%               —
4           74%               -11%
6           58%               -27%
9           53%               -32%


With a single monitor, operators caught 85% of targets. With nine monitors, they caught 53%. That's barely better than a coin flip. 

The mechanism here is something called saccadic masking. Your eye only captures high-resolution detail in a tiny central area called the fovea, roughly the size of your thumbnail held at arm's length. To actually see nine screens, your eyes have to constantly jump from one to another. During those jumps (saccades), your brain briefly stops processing visual information to prevent motion blur. 

If you're watching nine screens and an event happens on screen three while your eyes are moving toward screen seven, you simply won't see it. By the time your eyes cycle back to screen three, the visual scene looks unchanged, and your brain assumes nothing happened. The information hit your retina, but the wiring between your eyes and your brain can only process so much at once. 
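
To put rough numbers on that scan cycle, here is a back-of-envelope sketch. The dwell time per screen, the saccade duration, and the event length are assumed values for illustration, not measurements from Aldridge's study, and the result is a toy geometric estimate rather than a real detection rate.

```python
# Back-of-envelope sketch: chance a brief event lands inside an operator's
# gaze window when attention is shared serially across N screens.
# All parameters below are illustrative assumptions, not Aldridge's data.

def catch_probability(num_screens: int,
                      dwell_seconds: float = 1.5,
                      saccade_seconds: float = 0.05,
                      event_seconds: float = 1.0) -> float:
    """Probability that an event overlaps the operator's fixation on its screen.

    Assumes the operator cycles through the screens in a fixed order, fixating
    each for `dwell_seconds` and spending `saccade_seconds` per jump (during
    which visual processing is suppressed). The event starts at a uniformly
    random point in the scan cycle and is only caught if some part of it
    overlaps a fixation on the correct screen.
    """
    cycle = num_screens * (dwell_seconds + saccade_seconds)
    # Window of start times (within one cycle) for which the event overlaps
    # the fixation on its screen: fixation length plus event length, capped
    # at the full cycle length.
    favorable = min(dwell_seconds + event_seconds, cycle)
    return favorable / cycle

for n in (1, 4, 6, 9):
    print(f"{n} screens: ~{catch_probability(n):.0%} chance the event is even fixated")
```

Even this simplistic model shows the same shape as the measured data: the share of any one feed that actually gets foveal attention falls steeply as the wall grows.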

The Myth and the Reality 

If you've been in physical security long enough, you've probably heard a more alarming version of this research: "After 20 minutes, an operator misses 95% of screen activity." It's a great line for selling software. It's also a distortion. 

The 95% figure appears to conflate the time-based vigilance decrement (10-15% drop) with the screen-load problem (32% drop at nine screens), then rounds up for dramatic effect. The real numbers are sobering enough without exaggeration. 

What's true: after thirty minutes of continuous monitoring, detection accuracy measurably declines. What's also true: asking one person to actively monitor more than four screens simultaneously is asking them to beat their own biology. 

The Gap Where Threats Hide 

The combination of time-based fatigue (Mackworth) and screen-based overload (Aldridge) creates what might be called the Vigilance Gap: the difference between perceived security coverage and actual cognitive coverage. 
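
As a rough way to see that gap in numbers, the sketch below stacks the two published effects into a single estimate. Treating the drops as simply additive is an assumption made for illustration, not a validated model; the point is only the direction and rough size of the gap between feeds on the wall and feeds being cognitively covered.

```python
# Illustrative estimate of the Vigilance Gap: perceived coverage versus a
# rough guess at cognitive coverage. Stacking the two published effects is an
# assumption for illustration, not a validated model.

MACKWORTH_DECREMENT = 0.125   # midpoint of the ~10-15% drop after ~30 minutes

# Aldridge-style detection rates by number of actively watched screens.
DETECTION_BY_SCREENS = {1: 0.85, 4: 0.74, 6: 0.58, 9: 0.53}

def effective_coverage(num_screens: int, minutes_on_task: float) -> float:
    """Rough share of events likely to be caught, combining both effects."""
    base = DETECTION_BY_SCREENS[num_screens]
    fatigue = MACKWORTH_DECREMENT if minutes_on_task >= 30 else 0.0
    return max(base - fatigue, 0.0)

# Perceived coverage: "we have eyes on 9 feeds." Estimated cognitive coverage:
print(f"Fresh operator, 9 screens: ~{effective_coverage(9, 10):.0%}")
print(f"45 minutes in, 9 screens:  ~{effective_coverage(9, 45):.0%}")
```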

This matters because sophisticated threats don't need to understand the science to exploit it. They just need to understand that operators are watching a lot of feeds and that subtle actions are harder to catch than obvious ones. Here's how specific threats tend to exploit the biology: 

The Slow Burn. A card counter doesn't walk in betting big. They play flat for 15-20 minutes, blending into the rhythm of the table, waiting for the deck to turn favorable. By the time they spread their bets, the operator watching that table may be past the thirty-minute cliff. The subtle change in chip stack height is exactly the kind of low-salience signal that a fatigued brain is likely to miss. 

The Split Second. Moves like past posting or capping happen in fractions of a second. They require foveal vision to catch. If an operator is scanning a wall of screens when a cheat adds chips to a winning bet, saccadic masking means their visual processing is blocked during the eye movement. The cheat doesn't need to be faster than the camera. They just need to be faster than the operator's scan cycle. 

The Boring Theft. Internal theft rarely looks like the movies. A count room employee palming a single bill during a sort is a repetitive, low-contrast action against a background of identical repetitive actions. Mackworth's research showed that these low-salience signals are the first things the brain stops processing when fatigue sets in. The more boring the theft looks, the safer the thief is. 

The Regulators Noticed

While few regulations explicitly cite Mackworth or Aldridge by name, some regulatory bodies have intuitively built around these biological limits. 

The UK Home Office explicitly advises that vigilance tasks suffer a decrement after 20 to 30 minutes, recommending task changes at that interval. In the US, TSA protocols rotate baggage screeners every 20-30 minutes for the same reason. The EU takes it further, mandating that airport screeners change tasks or take breaks after 20 minutes to mitigate what they call the "hypnotic effect" of the X-ray screen. 

These aren't arbitrary policies. They're directly informed by the science of attention. 

Working With the Brain Instead of Against It 

The most effective interventions don't try to overcome the vigilance decrement. They try to avoid triggering it in the first place. 

Task rotation is the simplest: if attention degrades after thirty minutes, don't let anyone hit that wall on a critical monitoring task. The UK and the TSA figured this out. The 20-minute rotation isn't about babying operators. It's about keeping them in the high-sensitivity zone where their brains actually work, as the sketch below illustrates.
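
For illustration, here is a minimal sketch of what such a rotation can look like on paper. The three posts, three operators, and 20-minute block length are assumptions chosen to match the guidance above, not a prescription from the Home Office or the TSA.

```python
# Minimal sketch of a rotation roster that keeps any one operator on the
# live-monitoring post for no more than one 20-minute block at a time.
# Post names, operator names, and block length are illustrative assumptions.

from itertools import cycle

def build_rotation(operators: list[str], posts: list[str],
                   shift_minutes: int = 120, block_minutes: int = 20):
    """Yield (start_minute, {post: operator}) assignments for one shift."""
    order = cycle(range(len(operators)))
    for start in range(0, shift_minutes, block_minutes):
        offset = next(order)
        assignment = {
            post: operators[(offset + i) % len(operators)]
            for i, post in enumerate(posts)
        }
        yield start, assignment

operators = ["Avery", "Blake", "Casey"]
posts = ["live monitoring", "review / playback", "dispatch & logs"]

for start, assignment in build_rotation(operators, posts):
    print(f"t+{start:>3} min: " + ", ".join(f"{p} -> {o}" for p, o in assignment.items()))
```

With three operators and three posts, each person spends 20 minutes on live monitoring followed by 40 minutes on lower-vigilance work, which keeps every stint well inside the pre-decrement window.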

The Aldridge data points to a similar intervention. If detection accuracy drops from 85% to 53% when you go from one screen to nine, then any time a specific situation requires real scrutiny, it should get moved to a single dedicated monitor. The Wall has its place for general situational awareness, but active investigation needs the focused setup that gives the brain a fighting chance. 

And then there's the question of what technology can do. Not to replace human judgment, but to change the nature of the task. 

Cognitive psychology distinguishes between two types of work: search and verification. Search is exhausting. It requires constantly scanning for a needle in a haystack, draining cognitive fuel rapidly. Verification is easier. It requires evaluating a specific event that's already been flagged. 

When Autonomous Vision AI Agents handle the search layer, monitoring feeds and flagging anomalies, the operator's job shifts from the thing humans are worst at (endless vigilant waiting) to the thing humans are best at (investigating, verifying, making judgment calls). The AI handles the waiting. The human handles the thinking.
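
To make that division of labor concrete, here is a schematic sketch of the pattern in plain Python. The event fields, the review queue, and the confidence threshold are all invented for illustration; this is not EagleSight's API or any vendor's, just the shape of a workflow where software does the searching and a person does the verifying.

```python
# Schematic sketch of the search-vs-verification split. The automated layer
# does the exhausting part (continuous search across every feed); the operator
# only sees flagged events to verify. All names and fields are illustrative.

from dataclasses import dataclass
from queue import Queue

@dataclass
class FlaggedEvent:
    feed_id: str
    timestamp: str
    description: str
    confidence: float

review_queue: "Queue[FlaggedEvent]" = Queue()

def ai_search_layer(detections: list[FlaggedEvent], threshold: float = 0.6) -> None:
    """Search: scan everything, enqueue only what clears the flagging threshold."""
    for event in detections:
        if event.confidence >= threshold:
            review_queue.put(event)

def operator_verification_loop() -> None:
    """Verification: the human evaluates specific flagged events, one at a time."""
    while not review_queue.empty():
        event = review_queue.get()
        print(f"[{event.timestamp}] feed {event.feed_id}: {event.description} "
              f"(confidence {event.confidence:.0%}) -> verify on a dedicated monitor")

ai_search_layer([
    FlaggedEvent("cam-03", "21:14:02", "bet added after cards shown", 0.82),
    FlaggedEvent("cam-07", "21:15:40", "loitering near cage door", 0.55),
])
operator_verification_loop()
```

In this arrangement the operator never scans; they only ever evaluate something specific, on a single dedicated monitor, which is exactly the kind of work the research says humans sustain far better.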

The Operators Were Right All Along 

Surveillance professionals know the feeling. You're forty minutes into a shift, staring at the same feeds, and you realize you've been looking but not seeing. There's a moment of guilt, maybe a shot of caffeine, a deliberate refocusing of attention. And then, inevitably, it happens again. 

What Mackworth's research gave us, nearly eighty years ago, is permission to stop treating that experience as a personal failure. The operators watching for U-boats in 1943 weren't missing blips for lack of skill or effort. The operators watching gaming floors, stadium crowds, or airport terminals today aren't either. They're doing a job that runs directly counter to how their brains evolved, and the fact that they do it as well as they do is a testament to training and discipline.

The thirty-minute cliff isn't an indictment of the people doing the work. It's a design constraint that the work needs to account for. And once you know it's there, you can stop trying to power through it and start building systems that work with human attention instead of against it. 

Your brain wasn't built for this. But with the right structure, the right tools, and the right understanding of its limits, it can still do remarkable things. 


 Sources & Further Reading
  • Mackworth, N.H. (1948). The breakdown of vigilance during prolonged visual search. Quarterly Journal of Experimental Psychology. 

  • Aldridge, J. (1994). The reliability of CCTV systems. Police Scientific Development Branch (PSDB).

  • UK Home Office / NPSA. Human Factors in CCTV Control Rooms: A Best Practice Guide.

  • Donald, F. & Donald, C. (2015). Task disengagement and implications for vigilance performance in CCTV surveillance. 

  • Header Image: WAAF radar operator Denise Miley at Bawdsey Chain Home station, May 1945. Image: IWM (CH 15332) 

Billy F.

Billy F. is Business Operations & GTM Systems Lead at EagleSight.ai.