📅 30 January 2025 ⏱️ 6 min read

A Computer Can't Take the Blame

An observation from a 1979 IBM training manual that still feels uncomfortably relevant. Automation executes decisions relentlessly, but accountability never belongs to the system.

Automation · Engineering · Governance

This line comes from a 1979 IBM training manual:

A computer can never be held accountable, therefore a computer must never make a management decision.

It's not a philosophy quote and it's not meant to be clever. It reads like practical guidance, and that's probably because it is. Someone, even back then, had already worked out that pushing responsibility into systems creates problems that only surface later, usually at the worst possible time.

Fast forward a few decades and most environments we work in are packed with automated decision-making. Security tooling blocks activity automatically. Backup platforms prune data automatically. Cloud services scale, shut down, and clean up automatically. None of that is accidental, and none of it is wrong. The issue is that the decisions behind those behaviours often outlive the circumstances they were created for.

You can usually trace a failure back to a rule that once made complete sense. Storage was tight, so retention was aggressive. Accounts were few, so lockout thresholds were strict. A service was non-critical, so its alerts were noisy and routinely ignored. The environment changed. The rule didn't. Automation is very good at preserving old logic long after everyone has forgotten why it exists.
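To make that concrete, here's a minimal sketch in Python of how an old assumption gets frozen into automation. The job name, dates, and numbers are all invented; only the pattern matters.

```python
from datetime import datetime, timedelta, timezone

# Set in 2019 because the backup volume was nearly full. Storage was
# expanded long ago; nobody has revisited this number since.
RETENTION_DAYS = 7

def expired(created: datetime) -> bool:
    """True if a snapshot is older than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return created < cutoff

def prune(snapshots: dict[str, datetime]) -> list[str]:
    """Return what the nightly job would delete, quietly and reliably,
    whether or not the 2019 assumption still holds."""
    return [name for name, created in snapshots.items() if expired(created)]

# Anything older than a week is gone, business-critical or not.
now = datetime.now(timezone.utc)
print(prune({"daily-01": now - timedelta(days=10),
             "daily-02": now - timedelta(days=1)}))  # ['daily-01']
```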

Systems don't pause for context. They don't recognise that today is different. If a job is configured to delete data, it will do it quietly and reliably whether that data is disposable or suddenly business-critical. If an account is flagged, it will be locked regardless of whether that account belongs to someone on leave or the only person who can fix an outage. The system isn't wrong. It's just executing a decision that was made elsewhere.

Most real-world incidents aren't especially interesting from a technical point of view. They don't involve novel attacks or clever exploitation. They're usually the result of old assumptions being enforced automatically. Backup jobs that have never been restored from. Access controls that grew by exception. Alerting that trained everyone to ignore it. All perfectly logical, right up until the day they aren't.

When things break, accountability reappears very quickly. Nobody interrogates the software. They interrogate the configuration. Why was this allowed to happen? Who owned this rule? Why wasn't it reviewed? At that point it becomes obvious that the computer didn't make a management decision at all. It just enforced one that had effectively been abandoned.

The way we try to approach this internally is straightforward. Automation stays, but ownership stays too. If a system can delete data, deny access, or take a service offline, then someone needs to understand that behaviour and be willing to own the outcome when it causes pain. Not just when it works as intended.
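The post doesn't prescribe tooling, so purely as an illustration, here's one shape that idea can take in Python. The field names and the review window are invented: a destructive rule carries a named owner and a review date, and the job refuses to run when either lapses.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DestructiveRule:
    name: str
    owner: str           # a person or team, never "the system"
    last_reviewed: date  # when a human last confirmed this still makes sense

def assert_owned(rule: DestructiveRule, max_age_days: int = 180) -> None:
    """Refuse to execute a destructive rule that is unowned or stale."""
    if not rule.owner:
        raise RuntimeError(f"{rule.name}: no owner on record; refusing to run")
    if date.today() - rule.last_reviewed > timedelta(days=max_age_days):
        raise RuntimeError(f"{rule.name}: last reviewed {rule.last_reviewed}; refusing to run")

# The check runs before the deletion, not after the incident.
rule = DestructiveRule(name="prune-old-backups", owner="platform-team",
                       last_reviewed=date.today())
assert_owned(rule)  # passes today; starts failing ~180 days after the review
# delete_snapshots(...)  # only reached while ownership is current
```

The mechanism isn't the point. What matters is that the refusal is automatic while the judgement behind it stays attached to a named human.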

That's the part the 1979 manual got right. Computers are excellent at execution. They're terrible at judgement. The moment judgement disappears from the loop, responsibility doesn't disappear with it. It just waits.