
AI adoption and the drift zone

2025

There is a recognisable pattern in the way organisations are adopting AI right now. They are moving fast. They are putting systems into production. They are building workflows around tools they have not stress-tested. And most of them believe they still have time to sort out the reliability question later.

This is the drift zone.

The consequence of relying on an AI system that fails at the wrong moment is real. It could be a wrong decision fed by bad output. It could be a customer-facing error. It could be a compliance failure driven by a model that hallucinated something plausible. These are not theoretical risks. They are happening now, in small ways, and being absorbed.

That absorption is the normalisation of deviance in action. Each small failure is treated as a one-off. Each near-miss is rationalised as an edge case. The narrative builds: "It works well enough for what we need." "The team checks the output." "We have not had a major incident."

None of this means the risk is under control. It means the consequence has not arrived yet. The horizon is still out there, but it moves closer every time an organisation increases its dependency on a system it has not properly tested.

The organisations that will do best are the ones that recognise the drift for what it is. Not recklessness. Not ignorance. Just distance. The consequence does not feel real yet. And until it does, the behaviour will not change.

The question is whether they can close that distance deliberately, through honest testing and reliability research, or whether they wait for the horizon to arrive on its own terms.

This is exactly the kind of problem Bot Research exists to address.

Morgan Sheldon