Hello! 👋

It's Thursday, 30th April 2026. Welcome back to Bold Efforts!

This week’s piece is about a strange moment in the AI story. Buckle up because this is a long rant that I have been thinking about for months.

For the last two years, companies have been trying to answer a simple question: how much human work can we replace with software? The math looked almost too good to ignore. A salary is expensive. A model call is cheap. A junior employee needs time, feedback, context, mistakes, management, and patience. AI gives you something in seconds.

On a spreadsheet, this looks like progress. In real life, work is not that clean.

Because companies did not use juniors only for output. They used output to create judgment. That difference sounds small, but it changes the whole story.

A junior developer fixing a small bug is not only fixing a small bug. She is learning how the system breaks, why one shortcut is harmless and another one quietly poisons the codebase, why a senior engineer says “not like this” even when the code technically works. She is learning the difference between producing something and owning something.

A junior analyst building a messy model is not only building a messy model. He is learning which assumption carries the conclusion, why neat numbers can still be nonsense, and why one wrong input can make an entire strategy deck feel convincing and still be wrong.

A young marketer writing a bad campaign is not only writing a bad campaign. She is learning what people ignore, why cleverness often loses to clarity, and why the market does not care how much effort went into a sentence.

This is how people become useful. Not through training modules. Not through leadership offsites. Through repeated contact with real work, real mistakes, and real feedback.

AI changes that contact.

It gives beginners an answer before they understand the question. It gives them polish before they have taste. It gives them speed before they have standards. That is useful, but it is also risky because it makes weak work look acceptable.

A senior person can use AI and become sharper. They know when the answer is wrong. They can sense the lazy shortcut. They have scars in their memory. They have seen the same problem arrive in different costumes. For them, AI is leverage.

For a beginner, AI can become camouflage. The work looks better than the understanding behind it. The code runs, but they do not know why. The deck is clean, but the argument is hollow. The email sounds confident, but there is no real thinking underneath it.

This is the part many companies are missing.

When AI improves the output, it can also hide the learning process. The manager no longer sees the beginner struggle with the first attempt. The senior engineer no longer sees the rough version that reveals how someone thinks. The analyst no longer fights with the blank sheet long enough to understand the model. The marketer no longer sits with the bad sentence until the better one appears.

Everything arrives already dressed up. For a while, the organization feels faster. Maybe it really is faster. Then, quietly, people stop building the muscles that created the old experts.

This is why the current debate around junior developers matters beyond software. Software is just where the contradiction became visible first. Companies cut or freeze entry-level hiring because AI can do more entry-level tasks. Then they realize they still need humans who can inspect, debug, question, integrate, and own what AI produces.

But those humans do not arrive fully formed. You cannot buy ten years of judgment from a model subscription. You cannot prompt someone into becoming senior. You cannot skip the beginner stage and still expect mastery to show up on schedule.

That is the full-circle moment. Companies wanted to replace junior humans because AI could do junior tasks. Now they need junior humans because someone still has to become senior. The spreadsheet did not include that line.

It counted salary. It counted software. It counted cost per token. It did not count the slow conversion of confusion into competence. That conversion is expensive. It always was.

The difference is that companies used to hide the cost inside delivery. Juniors did useful work while learning, so training looked like output. AI separates the two. It says: if all you wanted was the output, I can give you something that looks like it.

That forces a harder question.

Did companies actually value learning, or did they only tolerate it when it came bundled with cheap labor?

I think many organizations will get this wrong. They will look more productive on the surface while becoming more fragile underneath. They will ship more code but understand less of it. They will produce more documents but make weaker decisions. They will move faster through tasks and slower through reality.

There is a difference between a company that has more output and a company that has more capability. AI increases output by default. Capability still has to be built.

That means the best companies will not simply replace juniors with AI. They will redesign the beginner stage around AI.

A junior’s job will change. They will not spend as much time on boilerplate. They will not manually do every repetitive task. They will use AI from day one, as they should. But they still need friction. They need to make predictions before seeing the answer. They need to explain what the model did. They need to find the flaw. They need to test the output against reality. They need seniors who review not only the final work, but the thinking behind it.

The old model of learning at work was often accidental. Sit near good people. Do small tasks. Make mistakes. Get corrected. Repeat until your instincts improve.

That worked better than most companies realized. Not because it was perfectly designed, but because it exposed beginners to the texture of work. The confusion, the constraints, the politics, the customer, the legacy system, the edge case, the awkward trade-off nobody wrote down.

AI removes a lot of the texture. It smooths the first attempt. It compresses the struggle. It makes work look more finished earlier than it is.

So the new model has to be more intentional. The question has become “How do people learn when machines can produce the first answer?”

This question will shape the next decade of work more than most leaders realize. Every company wants more people with judgment. More people who can own ambiguous problems. More people who can decide what matters when the tool gives ten plausible answers.

But the supply of those people depends on whether companies protect the conditions that create them.

You do not get senior engineers without junior engineers. You do not get sharp strategists without confused analysts. You do not get good editors without bad drafts. You do not get taste without repeated contact with mistakes.

A world that removes the beginner stage does not become more advanced. It becomes a world full of polished amateurs. That is the risk.

Not that AI takes all the work. The deeper risk is that AI takes away the struggle through which people learn to do work well.

Of course companies should use AI. It is too useful not to. It removes waste. It speeds up dull tasks. It gives small teams the reach of much larger ones. Used well, it is one of the best learning tools ever made.

But there is a wrong way to read the moment. If a company uses AI to remove drudgery, good. If it uses AI to remove human development, dangerous. The first makes people better. The second makes the company dumber.

This is why the human cost versus token cost comparison is incomplete. Tokens are cheap because they do not have a future. People are expensive because they do.

That future is easy to destroy because it does not complain on this quarter’s dashboard. A company can cut junior hiring and still look efficient for a year. Maybe two. The seniors carry more. AI fills the gaps. Output stays high. Investors like the story.

Then the cracks appear. No one understands the older systems. No one has patience for messy learning. Seniors become bottlenecks. The company has more tools than judgment. Every project needs adult supervision, but fewer adults are being made.

At that point, the cost saving was not a saving. It was debt.

The strange thing about AI is that it makes human development more important, not less. When answers become cheap, the ability to judge answers becomes scarce. When production becomes easy, taste becomes valuable. When everyone can create, knowing what should exist matters more.

This is not a sentimental argument for protecting old jobs. Many roles should change. Some should disappear. A lot of corporate work was fake, bloated, political, and pointless.

But we should be careful not to confuse bad work with beginner work. Bad work deserves to die. Beginner work deserves to evolve.

There is dignity in being bad at something while trying to become good. Every expert has lived there. Every serious craft depends on it. Every company that forgets this will eventually run out of people who know what they are doing.

So yes, compare the salary to the token cost. But compare the whole thing.

Compare the cost of training someone with the cost of having no one trained. Compare the slow pain of developing people with the later panic of capability collapse. Compare the visible expense of juniors with the invisible value of future experts.

The companies that understand this will still automate aggressively. They will still use AI everywhere. But they will treat human learning as infrastructure, not charity.

They will know that a junior employee is not just a unit of labor. They are a bet against organizational decay. And in the AI era, that may become one of the most important bets a company makes.

Thank you for reading this long piece.

Best,
Kartik

I write Bold Efforts every week to think clearly about where work and life are actually headed, not where headlines say they are. If you want these essays in your inbox, you can subscribe here.