When The Signal Drops
Future Tense #3
I landed in Seoul several years ago with my iPhone fully charged and a confident blue dot. What I didn’t know then was that Google Maps barely functioned in Korea. Naver was the local substitute, but I didn’t have Naver. After dinner, my hosts dropped me off at the metro. I got off after five stops, turned on Maps and noticed that the blue dot was guessing. The subway exit signs were in Korean. The street signs didn’t match the version on my screen. I was holding a device that could translate many languages and navigate most of the planet, but I was lost in downtown Seoul that night.
An older woman watched me repeatedly rotate my phone like a compass. She didn’t speak English. And I couldn’t speak Korean. She took my elbow and walked me three blocks through a market, past a pharmacy, around a corner, and pointed at my hotel lobby. Then she waved and left.
She didn’t use her phone. She knew the neighbourhood. It’s a way of knowing that can’t be found on an app.
I think about her these days.
We’ve been delegating one task after another to our devices—navigation, memory, multiplication, the ability to estimate, to read a map, to remember a phone number. Each handover is small. Each one makes sense. And in each case, you’re handing over a task to something that does it better and faster while the capacity to do it yourself fades in the background.
But something is shifting. Delegation is handing over a task. What’s starting to happen now looks more like abdication—handing over judgment. Software that monitors a factory floor so the manager doesn’t have to walk there. A digital twin that simulates a supply chain so the planner can make calls from a screen. AI that suggests a diagnosis so the doctor can confirm with a click. The work looks the same from the outside. But the person in the chair is no longer deciding; they’re just ratifying.
And here’s the danger. A doctor’s instinct for when a patient is about to crash isn’t learned from a textbook. It is built from hundreds of hours of watching patients, noticing symptoms, getting it wrong occasionally. The feel for a supply chain—which supplier is unreliable, which delay is a signal, which reading on the dashboard is misleading—comes from years of walking the floor, working with vendors, dealing with crises. Doing the routine tasks was how you learned the purpose of the system. Remove the tasks and the purpose cannot transfer—it simply fades away. And keeping a human in the loop doesn't help if the loop has been empty for so long that the human has forgotten what to look for.
Airline pilots have a name for this. It’s called automation complacency—the tendency to trust the autopilot so completely that you stop scanning the instruments yourself. The plane flies fine. Until it doesn’t, and the pilot who hasn’t hand-flown in months reaches for instincts that have disappeared.
Work with the simulation long enough without operating, and you stop noticing when the simulation is wrong. The drift is silent, you feel competent, the dashboards are green. The AI agrees with your hunches. And then one day the GPS drops and you’re stranded in a foreign city, rotating your phone, unable to find the way.
Will these newfangled AI tools make us more efficient? Yes, of course. The harder question is where delegation ends and abdication begins—and whether we notice, and make our choices, before the signal drops.
The good Samaritan in Seoul didn’t need her phone to show me the way. She had the neighbourhood in her bones.
P.S. This is the third issue of Future Tense, a monthly series on what’s arriving and what questions we should be asking before the answers become obvious. The future is already here, just unevenly distributed. Previous editions here and here.



It so happens that I too have been thinking about the loss of ability to do things when you delegate them to AI, but it was hazy and unclear in my head. You have absolutely nailed it and put it in perspective.
My only question: has my thinking now been delegated to you?
Once again, Mr. Rajesh Achanta reframes my view of “Technology” in general and “AI” in particular with a new perspective, using the real-life example of simple Google Maps (used by even illiterate auto/taxi/truck drivers). Through the Google Maps metaphor, Mr. Rajesh distinguishes between intelligence and judgment. He tells us we all like life to be simpler. But we also don’t want to sacrifice our options and capabilities.
Tesler’s law of the conservation of complexity, a rule from design, explains why we can’t have both.
Stewart Brand, the Whole Earth Catalog creator and author of Maintenance: Of Everything, says the most important work of our time isn’t building new things. It’s the patient, unglamorous business of keeping the things we already have from falling apart. And I totally agree with Mr. Rajesh that the AI pilots—such as Google Maps—lead to a new problem: tool proliferation.
In Living with Complexity, Donald A. Norman writes that “A conceptual model is the underlying belief structure held by a person about how something works . . . Conceptual models are extremely important tools for organizing and understanding otherwise complex things.” A useful analogy here is the blade without a handle (Greywoods)—and Google Maps without consciousness (Seoul).
Edward Tenner argues in Why Things Bite Back: Technology and the Revenge of Unintended Consequences that we often have to deal with “revenge effects.” Tenner coined this term to describe the ways in which technologies can solve one problem while creating worse problems, new types of problems, or shifting the harm elsewhere. In short, they bite back. Mr. Rajesh’s “When The Signal Drops” shows the recomplicating effects of technology.