The Knowledge Pipeline Crisis

Three years ago, I was typing every line of code by hand. Today, I estimate I write maybe 10% of my code myself - the rest comes from a variety of AI tools. I have a plaque on my desk that says my job title is now "frustrated bot supervisor." It's a joke, but it's also uncomfortably real.

This transformation didn't happen overnight. It started innocuously enough when I first discovered GPT-3 could write blog posts and early image generation tools could create pictures for our websites. It was exciting! Then it became a bit disappointing because I couldn't make them do exactly what I wanted. After that, I relaxed into it and stopped demanding perfect results, accepting that AI could give me something useful even if it wasn't exactly what I was after. As the systems improved, the output got closer to my goals, and now I'm happy to leave the precise technical implementation to the bots. 

The real shift happened about a year ago when I realised it was quicker to ask Claude to make almost any change to my code than to type it myself. I was still in charge of the overall structure and framework. Then I started experimenting with building systems where I let the AI be completely in charge. This made me very uncomfortable - security in particular is hard to get right and easy to mess up badly. I'm not yet willing to delegate the entire process to AI, but I'm always testing how much responsibility it can handle, keeping a close eye on its work.

At first, I was thrilled that my productivity was through the roof - at least 10x, maybe 20x more productive. When the AI slips up, I'm still close enough to the project to spot errors and fix things. But then I started reading articles about big companies no longer hiring junior programmers, and it got me thinking: what happens when my generation retires? There won't be any junior developers ready to take over, and whoever does come next won't know how to spot errors, because they'll have grown up with AI doing everything.

 

The Calculation Problem

My friend George builds ships. He told me something that perfectly captures what's happening now. When he was starting out, he had to estimate everything by hand and then check it with a computer. So he'd estimate that a ship weighs 250 tonnes, then calculate it as 262 tonnes. Fine - close enough. But if he calculated it as 2,620 tonnes, he'd know he'd made a mistake, because the calculation was an order of magnitude off his estimate.

Recent graduates, he said, aren't doing the estimate first. They start with the computer, so when it says 2,620 tonnes they don't realise there's an error - they just keep going. They have no instinct for when the answer is wildly wrong.
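Here's what that habit looks like in code - a minimal sketch of my own, not anything George actually uses, and the 2x tolerance is a number I've made up for illustration:

    def sanity_check(computed: float, estimate: float, tolerance: float = 2.0) -> bool:
        """Return True if the computed value is within tolerance-fold of a rough estimate.

        The point isn't precision - it's catching results that are an order
        of magnitude wrong before they propagate any further.
        """
        if computed <= 0 or estimate <= 0:
            raise ValueError("expected positive quantities")
        ratio = computed / estimate
        return 1 / tolerance <= ratio <= tolerance

    # George's ship: a 250-tonne back-of-envelope estimate, then the calculation.
    print(sanity_check(262, 250))    # True  - close enough, carry on
    print(sanity_check(2620, 250))   # False - an order of magnitude out, stop and check

The estimate is the part that takes years of experience to produce. The check itself is trivial; the instinct isn't.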

This is exactly what's happening in software development, and it's spreading to other industries. We're losing the ability to sense-check our tools because we never developed the intuition in the first place.

 

The AI Experiment

Recently, a client approached us with an interesting proposal. They wanted to see if we could build an entire live system, in a specific language, using only AI development tools. They were interested in testing whether AI could make domain expertise irrelevant.

We built them a working system, and it functions well. I'm super proud of what we made. But we had a hell of a fight with the AI tools to get there. I estimate that we spent a third of the entire development budget just fixing mistakes the AI introduced.

To be fair, we still built the system in significantly less time than if I'd coded it all by hand. But the mistakes were many and varied. Sometimes I'd ask it to make a small text change to a navigation label, and it would simultaneously redesign the entire interface. It repeatedly added unnecessary complexity that later confused it. And three times - this is the most serious issue - it told me it had made changes but hadn't actually written any code.

That last point is critical. If you can't trust your tool to accurately report what it has or hasn't done, that's actually worse than having no tool at all. I spent hours debugging phantom problems because the AI lied to me about implementing features. This is why I don't think this particular technology is ready for production use without expert supervision.
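My workaround now is to verify rather than trust. As a minimal sketch of the idea - my own illustration, not a feature of any particular AI tool - you can ask git whether the working tree actually changed after the assistant claims it has edited something:

    import subprocess

    def worktree_changed(repo_path: str = ".") -> bool:
        """Return True if git reports any uncommitted changes in the repo.

        Run this after an AI assistant claims it has edited code: if the
        working tree is still clean, nothing was actually written.
        """
        result = subprocess.run(
            ["git", "status", "--porcelain"],
            cwd=repo_path,
            capture_output=True,
            text=True,
            check=True,
        )
        return bool(result.stdout.strip())

    if not worktree_changed():
        print("AI claimed a change, but the working tree is untouched - investigate.")

It's crude, but a check like this would have saved me those hours of phantom debugging.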

The client was fascinated by the results, especially the detailed section of our report about where the AI went off the rails. They're still considering their next steps.

 

The Bigger Picture

Here's the uncomfortable truth: we need senior developers to work with AI to create systems, but we don't really need junior developers anymore because the senior ones are now 20x more productive. So companies aren't hiring juniors. But at some point, the senior developers are going to retire, and then there won't be anyone left to supervise the bots.

This feels like late-stage capitalism on steroids - fire the entire development team to save money, and who cares what happens in ten years? The metrics look great right now: developer productivity up 2000%, code output increased dramatically, junior developer salaries eliminated. But nobody's measuring knowledge transfer (zero), industry sustainability (declining), or what happens when the last generation who actually understands the underlying systems walks away.

 

Beyond Software

This isn't just a tech problem. What I'm describing here applies to many industries in different ways. It's happening first in tech because we're early adopters who get excited about new technologies and are quite keen to reduce our workload where possible. We're a pretty lazy bunch, honestly. But tech is just the canary in the coal mine.

I absolutely believe the line "AI isn't taking your job - people who use AI are taking the jobs of people who don't." But we should also look at the long-term consequences of our actions. Let's keep hiring junior developers, because we're going to need senior developers for a long time yet. If we make everybody redundant and replace them with bots, there won't be anyone around to fix the bots when they go wrong.

 

No Easy Answers

I don't have a magic solution to this problem. I'm not telling you how this should end because I honestly don't know. I'm just here on the inside of the biggest jobs revolution in living memory, pointing out that there's a problem looming on the horizon that we all need to think about.

Maybe the solution is fast-track training programs that compress ten years of experience into two. Maybe AI will become genuinely supervision-free (though current trends suggest otherwise). Maybe we'll see a "slow tech" movement where people deliberately choose more stable tools. Or maybe the industry will realise its mistake and start hiring juniors again before it's too late.

And it's not just a tech problem - it's a case study in what happens when you let AI take over a job without considering who will understand that job when the AI inevitably needs fixing.

Maybe we can get AI to give us a solution? The irony isn't lost on me.


