
Unlearning for Success in an AI-Driven World: Why Past Wins Can Hold You Back

AI is breaking boundaries and dismantling old ways of thinking. It has given many of us a rather impolite but firm introduction to irrelevance. Leaders today must prioritise unlearning for success in an AI-driven world, or risk being left behind.

AI is rewriting the rules of work, creativity, and competition. Every day, new breakthroughs make yesterday’s expertise obsolete. The old playbooks? No longer enough. The rate of change is massive. And it’s not slowing down.

The real question is: How fast can you adapt?

I clicked the picture above somewhere in Ladakh, where our car had been halted by an avalanche. Workers were labouring to clear the road, knowing full well that another could strike at any moment. That’s the nature of avalanches—sudden, disruptive, and unforgiving.

AI is that avalanche. In the real world, avalanches block roads. In the metaphorical world of fast change, they bury careers, industries, and entire ways of working. The only way to survive? Move, adapt, and find your slope.

Slope and Intercept

A professor whose work I follow is Mohanbir Sawhney. He wrote a piece titled “Slope, Not Intercept: Why Learning Beats Experience” on LinkedIn. The piece resonated and helped me refresh my high school coordinate geometry 🙂

I have been thinking about it ever since. So, indulge me for the next couple of minutes. Here we go.

Equation of a straight line: y = mx + c

m: The slope—indicating how fast you’re learning.
c: The intercept—representing your starting point or existing knowledge.

Imagine three learners. Mr. Red starts ahead (high intercept) but learns slowly (low slope, small ‘m’). Mr. Purple starts lower (low intercept) and progresses steadily (moderate slope, medium ‘m’). 

Ms. Blue starts behind (low intercept) but picks up new skills quickly (steep slope, large ‘m’), eventually overtaking both. Over time, Ms. Blue’s higher slope (greater ‘m’) allows her to progress faster, proving that the speed of learning (slope) matters more than where one begins (intercept).

That’s Prof. Sawhney’s point. In a world moving at breakneck speed, slope beats intercept every time.
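The equation makes the point concrete. Here is a small sketch in Python; the numbers are illustrative assumptions, not from Prof. Sawhney's piece. The overtake time comes from solving m1·x + c1 = m2·x + c2 for x.

```python
# Illustrative learners on y = m*x + c, where x is time,
# m is the learning rate (slope) and c the starting knowledge (intercept).
# The specific m and c values below are made up for illustration.
learners = {
    "Mr. Red":    {"m": 1, "c": 50},  # starts ahead, learns slowly
    "Mr. Purple": {"m": 3, "c": 20},  # starts lower, steady progress
    "Ms. Blue":   {"m": 6, "c": 10},  # starts behind, learns fastest
}

def skill(name, x):
    """Skill level y = m*x + c after x units of time."""
    p = learners[name]
    return p["m"] * x + p["c"]

def overtake_time(fast, slow):
    """Time at which the faster learner catches the slower one:
    m1*x + c1 = m2*x + c2  =>  x = (c2 - c1) / (m1 - m2)."""
    a, b = learners[fast], learners[slow]
    return (b["c"] - a["c"]) / (a["m"] - b["m"])

for laggard in ("Mr. Purple", "Mr. Red"):
    t = overtake_time("Ms. Blue", laggard)
    print(f"Ms. Blue overtakes {laggard} at t = {t:.1f}")
```

With these numbers, Ms. Blue passes Mr. Purple early and Mr. Red soon after; past that point, no head start in intercept matters.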

It’s a neat explanation that accentuates the importance of learning over past experience. Which brings me to the point of this post: past experience can interfere with future learning.

What gets in the way of learning and change? Three things stand out for me.

1. Past Success is a Sneaky Obstacle

What got you here won’t get you there. Yet, we cling to past knowledge like a badge of honour. The problem? Yesterday’s wins can become today’s blind spots.

The best learners stay humble. They don’t assume what worked before will work again. Instead, they ask, “What do I need to unlearn to make space for what’s next?”

This isn’t just opinion—it’s backed by another favourite professor, Clay Christensen, in his classic work, The Innovator’s Dilemma.

Christensen showed how successful companies often fail when disruption hits. Why? Because their past success locks them into old ways of thinking. They keep optimising what worked before instead of adapting to what’s coming next. That’s how giants lose to scrappy newcomers unburdened by legacy thinking.

Exhibit A: BlackBerry

Once a leader in mobile technology, BlackBerry clung to its physical keyboard design, convinced loyal customers would never give it up. Meanwhile, Apple and Samsung bet on full-touchscreen smartphones. BlackBerry’s refusal to move beyond its own past success led to its decline.

Exhibit B: Zomato

Contrast that with Zomato. It started as a restaurant discovery platform but saw the market shifting. It let go of its original success model and pivoted to food delivery. Then to restaurant supplies. Then to quick commerce. By unlearning what had worked before, Zomato stayed ahead.

The same applies to individuals. If you define yourself by what has worked before, you risk missing what could work next. Adaptation isn’t about forgetting your strengths; it’s about not letting them become limitations.

2. Fear Kills Growth

New learning requires trying. Trying involves failing. And failure—especially when experience has given you relevance—can feel uncomfortable.

Many don’t fear learning itself; they fear looking foolish while learning. That’s why kids learn faster than adults. They don’t care if they fall; they just get up. Adults, on the other hand, hesitate. They protect their image, avoid risks, and stick to what keeps them looking competent.

This isn’t just instinct—it’s backed by research. In The Fear of Failure Effect (Clifford, 1984), researchers found that people with a high fear of failure avoid learning opportunities—not because they can’t learn, but because they don’t want to risk looking bad.

Think of it this way: If you’re only playing to avoid losing, you’re never really playing to win. The antidote? Make experimentation a habit. Small experiments create room for both success and failure—without the fear of high stakes. They provide just enough space to try, adapt, and grow.

Reflections on Rahul Dravid

Rahul Dravid’s career is an interesting study in adaptation. Once labelled a Test specialist, he gradually refined his game for ODIs, taking up wicketkeeping to stay relevant. Later, he experimented with T20 cricket and, post-retirement, started small in coaching—mentoring India A and U-19 teams before stepping into the senior coaching role. His evolution wasn’t overnight; it was a series of calculated experiments.

3. New Minds, New Paths

Left to ourselves, we reinforce what we already know, surrounding ourselves with the same familiar circles—colleagues, family, and close friends. That’s exactly why new perspectives matter. We don’t have enough of them. Our past experiences shape our networks, and over time, we rely on the same set of strong connections, limiting exposure to fresh ideas.

Sociologist Mark Granovetter’s research on The Strength of Weak Ties (1973) found that casual acquaintances (weak ties) expose us to new ideas and opportunities far more than close friends or colleagues (strong ties). Why? Because strong ties often operate in an echo chamber, reinforcing what we already believe. Weak ties, on the other hand, bring in fresh perspectives, unexpected insights, and access to new fields.

A few years ago, an MD I know took up cycling. What started as a fitness and lifestyle activity became something more. As he grew more integrated with his diverse cycling community, I saw firsthand how it influenced him—not just physically, but mentally. He hasn’t just learned new skills; he has unlearned old assumptions. His outlook, I realised, has changed simply by being around people who think and live differently.

He has transformed without realising it and is thriving professionally. I’ve been working on the sidelines with him and can see the transformation firsthand. I am not discounting his professional challenges and successes, but I cannot help but see the changes his cycling community has brought about in him.

The world is moving fast. The only way to keep up? Have more unexpected conversations, seek out people who challenge your views, and surround yourself with thinkers from different worlds.

Sometimes, seeing others take risks in adjacent spaces is all the permission we need to start experimenting ourselves.

Opportunity for Change

The ability to learn, unlearn, and adapt has never been more critical. In a world shaped by AI, rapid disruption, and shifting industries, clinging to past successes is the surest way to fall behind. The real competitive edge lies not in what you know today, but in how quickly you can evolve for tomorrow. Unlearning for success in an AI-driven world is mandatory.

So, ask yourself: What am I absolutely sure about? Because that’s often where the biggest opportunity for growth lies.

The world belongs to those who can learn fast, forget fast, and adapt even faster.

AI in Academia: The Grind, The Gain, and the Great Recalibration

A few months ago, I was teaching a bright MBA class when a student raised his hand in the middle of a lecture. He said he had misgivings about my arguments. And then, right there in class, he told me he had been using an AI tool to critique my points.

I wasn’t prepared for the question, and I’ll admit—I felt mildly threatened.

But I learned a thing or two that day. Not just about the subject, but about how AI was changing the very nature of learning. I left the class thinking not about how students should avoid AI, but about how I could use AI to prepare better.

Now, my parents were both professors. I’ve been teaching a paper at a top-tier business school for over a decade, in addition to my other work. I’ve seen academia up close—the passions, the programmes, and the politics. So when I came across the California Faculty Association (CFA) resolution on AI, I paid attention.

California, after all, is at the heart of the tech world. If any faculty association could chart the future of AI in academia, I thought it would be this one.

But what the CFA put out was quite the contrary.

The CFA is pushing for strict rules on AI in universities, raising concerns that AI might replace roles, undermine hiring processes, and compromise intellectual property. As they put it:

“AI will replace roles at the university that will make it difficult or impossible to solve classroom, human resources, or other issues since it is not intelligent.”

I respect their concerns. But I also believe the real challenge isn’t what AI should do—it’s what humans should still do in a world where AI can do so much.

And that leads to some fundamental dilemmas.

A Moment to Recalibrate

The goal of education was always to teach thinking—knowledge was simply a measure of that thinking. Somewhere along the way, we confused the measure with the goal.

Instead of focusing on fostering deep thought, we turned education into a test of memory. AI now forces a reckoning. If AI can retrieve, process, and even generate knowledge faster, more accurately, and with greater depth than most students, what does that mean for education?

AI offers an opportunity not to restrict learning, but to recalibrate it—to return to the real goal: teaching students how to think, question, and navigate complexity.

Three Dilemmas Academia Must Confront

1. Who Does the Work—Humans or AI?

AI can grade essays, draft research papers, and provide instant feedback. It’s efficient. But efficiency isn’t learning.

Law firms now use AI for contract analysis. Junior lawyers “supervise” the process. The result? Many don’t develop the deep reading skills that once defined great legal minds. If universities follow the same path—letting AI mark essays and summarise concepts—students may pass courses but never truly engage with ideas.

Douglas Adams once said, “We are stuck with technology when what we really want is just stuff that works.” AI works—but at what cost?

2. Who Owns the Work?

Professors spend years developing course material. AI scrapes, reuses, and repackages it. Who owns the content?

The entertainment industry has been fighting this battle. Writers and musicians pushed back against AI-generated scripts and songs trained on their work. Academia isn’t far behind. If AI creates an entire course based on a professor’s lectures, who gets the credit? The university? The AI? Or the human who originally built it?

The CFA resolution warns about this:

“AI’s threat to intellectual property including use of music, writing, and the creative arts as well as faculty-generated course content without acknowledgement or permission.”

The same battle playing out in Hollywood is now knocking on academia’s door.

3. Does Efficiency Kill Learning? Or Is That the Wrong Question?

It is easy to assume that efficiency threatens deep learning. The grind—rewriting a paper, wrestling with ideas, receiving tough feedback—has long been seen as an essential part of intellectual growth.

AI makes everything smoother. But what if the rough edges were the point?

A medical student who leans on AI for diagnoses might pass exams. But will they develop the instincts to catch what AI misses? A student who lets AI refine their essay may get a better grade. But will they learn to think?

Victoria Livingstone, in an evocative piece for Time magazine, described why she quit teaching after nearly 20 years. AI, she wrote, had fundamentally altered the classroom dynamic. Students, faced with the convenience of AI tools, were no longer willing to sit with the discomfort of not knowing—the struggle of writing, revising, and working their way into clarity.

“With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.” – Victoria Livingstone

And therein lies the real challenge.

The problem isn’t efficiency itself—it is what is being optimised for.

If learning is about acquiring knowledge, AI makes that easier and more efficient. But if learning is about developing the ability to think, question, and synthesise complexity, then efficiency is irrelevant—because deep thinking requires time, struggle, and iteration.

So maybe the question isn’t “Does efficiency kill learning?” but rather:

What kind of learning should be prioritised in an AI-enabled world?

If efficiency removes barriers to learning, then we must ask:

What should learning look like when efficiency is no longer a limitation?

A Complex Problem Without Simple Answers

It is tempting to look for quick fixes—ban AI from classrooms, tweak assessments, introduce AI literacy courses. But this is not a simple or even a complicated problem. It is a complex one.

Dave Snowden, through his Cynefin framework, would call this a complex problem—one that cannot be solved with predefined solutions but requires sense-making, experimentation, and adaptation.

Livingstone’s frustration is understandable. AI enables students to sidestep the very struggle that shapes deep learning. But banning AI will not restore those lost habits of mind. Universities cannot rely on rigid policies to navigate a world where knowledge is instantly accessible and AI tools continue to evolve.

Complex problems do not have rule-based solutions. They require adaptation and iteration. The real response to AI isn’t restriction—it is reimagination.

Engage with AI, rather than fight it. Encourage students to think critically about AI’s conclusions. Reshape assessments to focus on argumentation rather than recall.

In a complex system, progress does not happen through control. It happens through learning, adaptation, and deliberate experimentation.

Reimagination, Not Regulation

Saying no to AI is a false choice. AI will seep into academia like a meandering tsunami that doesn’t respect traffic lights at the shore. The real challenge is not limiting AI, but reimagining education.

The CFA is right to demand a conversation about AI in education. But academia must go beyond drawing lines in the sand. It must reinvent itself.

AI is not the threat. The real danger is holding on to learning models that worked well in an earlier time.

That time is past.

It is time to unlearn. And recalibrate.

AI Natives Are Here: Are You Keeping Up?

It’s a question that used to be common. “What’s your native place?” It was a way of asking where you were from, where your roots lay. The word native carried warmth. It evoked childhood memories, a sense of belonging, and the unmistakable comfort of home.

The word native, I have since learned, comes from the Latin nativus, meaning “born” or “innate.” It later travelled through Old French as natif and reached Middle English, where it took on meanings tied to birthplace and inherent qualities.

Years later, in 2001, Marc Prensky introduced me to a new kind of native—the digital native. His essay Digital Natives, Digital Immigrants described those who had grown up in the digital world, instinctively fluent with technology, unlike the digital immigrants who had to painstakingly learn it. The metaphor was compelling until David White and Alison Le Cornu refined it further. They suggested that digital engagement was less about birth year and more about behavior—some were Visitors, using technology as needed, while others were Residents, living deeply within it.

For the first time, I understood what it meant to be an immigrant—not just in a country but in a way of thinking. To be a native was to belong effortlessly; to be an immigrant was to adapt, often clumsily.

And then, last week, I read about HudZah.

A New Native

Meet Hudhafaya Nazoorde aka HudZah. HudZah is changing how people interact with knowledge. He built a nuclear fusor—a device that accelerates ions to create nuclear fusion. And he did it with the help of an AI assistant, Claude, right inside his rented house in San Francisco.

Using AI, he gathered information from fusor.net, spoke to experts, and studied diagrams. AI refused to help at first. But HudZah found a way. He asked better questions, breaking big problems into smaller ones. Slowly, AI started guiding him. Piece by piece, he built the fusor.

It’s a fascinating story. (Read more here).

The AI Native

The part of HudZah that really caught my attention in that piece is this:

“I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.

HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.

It’s not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.”

Managing the Shift

Change is never easy. Some people jump in eagerly, others hold back until they have no choice. Everett Rogers’ Diffusion of Innovations model explains this well. There are innovators, the risk-takers who embrace the new before anyone else. Then come the early adopters, who follow closely behind. The majority waits and watches, taking time to adjust. And at the very end are the laggards—those who resist until change is unavoidable.

HudZah is an innovator. He hasn’t waited for AI to become mainstream. He has explored, experimented, and pushed boundaries, using AI to do what few would even attempt—build a nuclear fusor in his bedroom. His approach isn’t just about technology; it is about mindset. He sees AI not as a tool to be feared but as an ally to be mastered. That’s what sets innovators apart.

The question is, where do you stand? Are you adapting, exploring, or waiting for change to push you forward?

The Immigrant Elephant

Even as the world debates immigration and NIMBYism, an elephant grows in the room. Borders are tightening, and immigrants are being sent back. Yet, at the same time, a new kind of nativity is emerging—AI natives, like HudZah, who navigate the digital world with an ease that others struggle to match. And then there’s the rest of us—the AI immigrants, trying to find our place in this rapidly changing landscape.

But here’s the real question: if the world is sending back immigrants, where do AI immigrants go? What happens to those who can’t—or won’t—adapt? That’s the elephant in the room, and it’s only getting bigger.

I am an optimist. Yes, there are realities that can’t be ignored: the pace of AI development is rapid, and the concerns are legitimate. At the same time, we cannot underestimate the prowess of the human mind and of humankind. We have adapted to every technological shift in history, and we will do so again.

AI is not something to be feared. It is something to be embraced. Perhaps the best way forward is to experiment—to incorporate AI into our daily rhythms, much like HudZah does. Of course, this will greatly change how we all work and, most importantly, who we become. As Marshall McLuhan said, man shapes the tools, and then the tools shape the man!

If the world belongs to the young, AI might just be the elixir that helps the rest of us stay young at heart—and in deed. More importantly, it can help us engage with the world in new ways, rather than being stuck in old paradigms.

Perhaps the only thing required? A willingness to experiment and take to it.

Story power!

Oxfam is betting on a new way.

Imagine having to sell second-hand goods. Say, used furniture. Or other items of daily use. Like sunglasses. Or combs. Or radios. Whatever.

That effort is not going to fetch anything more than a small sum, unless, of course, the items belonged to a celebrity.

Of course, the celebrity quotient comes from the story that can be told.

“This hair strand is from Elvis Presley.”

“This coffee cup was used by Sachin Tendulkar”.

Surely, the strand of hair is not worth much if it’s not associated with Elvis. Nor the coffee cup with Tendulkar. These are stories that give life to random inanimate objects.

So here is Oxfam’s very interesting game plan.

Second-hand goods gain meaning when they come with a story. If there were a way of sharing the story behind a second-hand product with a prospective buyer, well, the chances of a purchase improve. (Every item on sale will carry a story, tagged to the item using a QR code. Any prospective buyer can scan the code and learn the story behind the item on sale.)

“Someone might donate a record and add to its tag that it was the song they danced to at their wedding.” The chances of a purchase brighten with the story! (Not that it would result in a purchase every time.)
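The mechanics of such a tag are simple: each item is registered with its story, and the QR code on the tag just encodes a URL pointing to that story. A minimal sketch in Python, with a hypothetical domain and ID scheme (this is not Oxfam's actual system):

```python
import uuid

# Hypothetical host for the story pages; a real deployment would own this domain.
STORY_BASE = "https://shop.example.org/stories"

catalogue = {}  # item_id -> {"name", "story", "url"}

def tag_item(name, story):
    """Register an item with its story; return the URL the QR code would encode."""
    item_id = uuid.uuid4().hex[:8]  # short unique tag id
    url = f"{STORY_BASE}/{item_id}"
    catalogue[item_id] = {"name": name, "story": story, "url": url}
    return url

url = tag_item("Vinyl record", "The song we danced to at our wedding.")
print(url)  # feed this URL into any QR generator to print the item's tag
```

The QR code itself carries no story; it is only a pointer, which is why the tag stays cheap to print while the story behind it can be as long as the donor likes.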

Stories have great power in them. Almost magical. Every individual carries his or her own stories, and that makes it easy to relate to the stories others tell.

The humdrum of everyday corporate life makes it difficult for us to take the time to listen to stories or narrate our own. But when we do narrate or when we find a patient ear, what a difference it makes.

Methodologies like Appreciative Inquiry inherently rely on storytelling and can create organisation-wide energy. Every story holds significance, and the very act of telling or listening to a story can be a source of great energy.

Unfortunately, language creates its own complications, and the word ‘story’ can sometimes lead to the narrative being dismissed as a flippant waste of time. Call them what you will, stories have an inherent quality that brings people alive.

Grandma and her tales!

Many of us grew up with stories; as children, they fascinated us. For many years, I grew up with the stories my grandmother used to tell me. They gave a huge fillip to my imagination and, in retrospect, brought a contextual understanding of the morals and values the family held dear. The best thing about them was that I always looked forward to hearing those ‘stories’!

In the corporate world the power of stories is often underrated. Grossly.

There are exceptions, though. Coca-Cola is one that I know. Coca-Cola Conversations, the blog that Coca-Cola runs, is a fine example of how corporate stories build or augment a brand. In fact, Coca-Cola has a historian and archivist with them: Phil Mooney.

In modern times, technology has given consumers the opportunity to contribute their own stories to the brand. That is not only more interesting, it is as authentic as it can get.

Blogs, wikis, and tweets are all available for imaginative use.

Within the organisation, stories from its past (accounts of successes, failures, decision points, and so on), when told with a degree of authenticity and simplicity, not only aid a great deal in building a culture; they are also non-invasive and engaging for employees.

So much for stories! And by the way, they work. Very nicely!