You're Using Generative AI Wrong: Stop Cracking Nuts with a Sledgehammer

14 Jun 2024

The trough of disillusionment

You are using Gen AI the wrong way.

The trough of disillusionment is upon us. Almost everyone is using Generative AI in the wrong way. That’s the bad news.

The good news is that this will make things infinitely easier for the rest of us, especially those of us who have been around long enough to see the world go around before this proverbial silver bullet went mainstream.

Generative AI is misunderstood as a sentient tool capable of expert output. "We are about to hit AGI!" they cry. The truth is, Generative AI is capable of performing enormous amounts of work, to an intermediate (at best) level. Given the speed of the output, we forgive Generative AI for errors and inaccuracies that we wouldn't begin to tolerate in any other walk of life, i.e. from absolutely anybody who actually knew what they were doing. "Sorry sir, it wasn't the correct arm I amputated, but it was still an arm!"

Is AGI really imminent? Probably, but only by a specific and flawed definition. There is an important distinction here: intelligence is not a synonym for competence. Einstein writing his first lines of code would have been an awful software engineer. We are conflating intelligence with expertise and competence. This is a common problem among hyper-intelligent people. But I want you to think: is the "best" person you know at any given profession also the "most intelligent"? Possibly, but rarely.

These fancy algorithms may well reach superhuman levels of intelligence. Does it matter? Regarding actual expertise and output, they are mediocre at best. To a beginner in any field, intermediate is impressive. Unparalleled speed alongside something that resembles (but is not equal to) competence has the effect that those who are starting in any field are in awe of this tool. They are the confident graduate, unknowingly ruining the professional competence of your business.

There is a deep-rooted fallacy that given LLMs have already reached a certain level of competence, it is simply a matter of time and optimization before they render us mere mortals redundant. This could not be further from the truth. Our logic is flawed. Simply because someone has a superhuman understanding of nutrition and exercise, does not mean they will beat the world’s strongest man in an arm-wrestle or be a good personal trainer. AGI does not equal expertise, repeat after me.


It’s still an algorithm silly...

LLMs are algorithms that have been trained on enormous amounts of data. Given the quantity of data that has been analyzed, they can predict patterns, recognize common problems, and formulate logical interpretations of what should come next. Frankly, given their billions of dollars of training, I'd be worried if they couldn't fool researchers into believing they were a 20-year-old girl. Gen AI can be trained on past data to recognize patterns and predict what might happen afterward. Having met enough 20-year-old girls previously, it can now imitate one.

This is fantastically useful because it means that repetitive, boring, and frankly inane tasks can now be automated in a way that was a pipe dream a few short years ago. But there is no deep understanding of problems, simply a prediction of what might happen next given an extremely large past dataset. We are so obsessed with measuring intelligence, we have decoupled the very concept from "being useful".

People see this ultra-capable machine, and assume that expert knowledge is no longer necessary to solve complex problems because we can simply ask this “sentient AI”.

New AI projects fall into the same trap time after time. Identify a problem. Being unable to solve this problem, we assume that “AI” will fix it. Therefore we build “Project X” which outsources the complexity of problem-solving to AI, and enthusiastically announce to the world how we have fixed “Problem X”.

Through our excitement and the speed with which we reap the fruits of mediocrity, we forgive the incredibly poor quality of what we can see with our own eyes: "Given how quickly we've gotten this far, it is just a matter of time before it 'does' work". Self-driving cars want a word…

Many folks have already gone deep, pulling a few of these grandiose projects to shreds.

Debunking Devon [YouTube]

Debunking Devon [Medium]


Unlike any other tool I’ve seen introduced in my professional life?

It may sound like I’m skeptical of Generative AI itself. Nothing could be further from the truth. It is a fantastic tool, the likes of which will revolutionize how we work and the quality of what we produce, unlike any other tool I’ve seen introduced in my professional life.

But here is the thing.

Generative AI is a power tool that can be used to leverage the skills of the wielder, not replace them. It is superhuman intelligence in the same way that a forklift is superhuman strength to a builder. As programmers we see this in our daily lives; the quantity and quality of output when using LLMs to help us produce code is the single most important change to our working processes that I've ever seen.

** BUT **

I’ve been doing this for 15 years. This gives me an important skill; the ability to wield this powerful tool. I’ve engaged in the crafting of my code at a deep level, I’ve suffered and I’ve learned. I’m now able to recognize good structures and good practices, and most importantly, I’m able to see very quickly and clearly when my “Sentient AI” is going off on a tangent and getting things completely wrong.

I know when this tool is accidentally churning out inaccurate code, or anti-patterns. But this knowledge took me 15 years to acquire. This is the same way that a forklift truck driver knows how to move items, how much to carry, and the systems to ensure safety.

I worry for the kids; what about the Software Engineers (or professionals in any walk of life) who are starting now? The availability of instant mediocrity will destroy their motivation to ever become experts. "We must become wielders of AI, knowledge is redundant!" It's such a dangerous fallacy. "Look how strong this crane is; why do I need arms at all?" Without expertise you are unable to leverage this tool for anything other than churning out enormous amounts of rubbish.

This is a worrying trend that we are experiencing right now, filling up the trough of disillusionment with utter, utter rubbish. Nowhere is this more apparent than in big business, where people are using LLMs to generate documents and presentations. Blogs and documents are created at lightning speed, unseen by humans in their generation, and unread by humans in their consumption.

One little sentence of value goes into the algorithm, to produce pages and pages of mediocrity. It’s an impressive and wonderful demonstration of the inefficiencies in our businesses and economies.

I feel we are already reaching a tipping point where we’d rather receive a single human sentence of value instead of a 50-page document of fluff.

Please, dear reader. Focus here. This is value. Condense, don't inflate. The real bit is in the middle. Cream will always rise to the top, and there has never been so much dreary sludge in the middle and the bottom.

I fear we have already become skeptical and unable to consume long-form content out of concern that we are consuming nothing more than the ramblings of an algorithm. This is not yet the golden age we envisaged.


I worry for the professionals growing up with this tool.

While I worry for the professionals growing up with this tool, rather selfishly I also see an opportunity. These two tendencies, massive quantity and reduced quality, are probably the greatest opportunity of recent times.

Every single word of this blog has been written by hand. Every external tendency towards mediocrity provides a unique opportunity for people actually willing to invest the time and effort to produce something of higher quality to stand out.

The bar for standing out has never been so low, because the competition is increasingly with an algorithm and its unengaged, inexpert master. Professional life has never been so uncompetitive, once you surpass the barrier of incompetence set by LLMs. Remember the lesson. AGI doesn't equal expertise. A G I doesn't equal E X P E R T I S E. I'm going to print it on a t-shirt.

The rebound has already started and is building faster and faster. I'm seeing tension but expecting a tsunami. As generated content and poor products get more and more frustrating, there will be an inevitable reaction and a bounce towards quality. We are on the verge of the organic food trend for knowledge economies.

Given how easy it has become to mass-produce poor quality, there has never been a better time to invest in being an artisan.

The same logic we applied to the quality of what we produce applies to professionals. Far from "Software Engineers will be replaced", I actually think the inverse: this tendency towards inexpertise will make genuine Software Engineers more in demand than ever.

The line of expertise has been drawn: you are competing with an algorithm. Have more competence than the robot and you will be in increasing demand. Only those who never reached excellence in their craft will suffer. The robot will never reach true expertise, because there will never be a sufficient dataset of genuine experts to crowdsource from; and crowdsourcing takes the average result, not the best one. The robot doesn't think. It repeats. Beat the mob.

Of course, this does make traversing from beginner to an actual expert more challenging than ever, simply because there is little value in being a beginner at anything anymore. But become an expert, and the rewards will be richer than ever.

To competently wield Generative AI, you must first have done the hard yards to understand what excellence is. Then you can correct and guide this power tool until it reaches an output of quality. It is a tool to augment professionals, not replace them. It will simply magnify your competence (or, in many cases, your incompetence). You can drive the forklift truck, or fall in love with it and worship it. The choice is yours; be careful, because the rewards are greater than ever. Some will control their destiny; others will be enthralled and become weaker.

Welcome to the state of the world in 2024!

How to use it

Assume that generative AI is a tool for augmentation.

So what do I mean by augmentation? Simple really: I think we need to separate the expertise from the superhuman tool. If we assume that generative AI is a tool for augmentation, we need to spend more time and effort than ever to understand and deeply engage with the problem. This is the value.

One analogy is a luxury car factory. The generative AI is the factory process: it can take the plans and processes for each part and use robotics to put them together. It can do so at amazing speed and efficiency. This industrialization of the process provides amazing potential. But the factory itself doesn't understand anything; it just repeats.

First, it does not know the 'whole': the role of the wheels with the engine, or how the chassis fits with the interior. Before we hand production over to the LLM, we need to go deeper into the details than ever before. Be more careful, precise, and detailed with planning, not less. Robots and factories can only be precise and produce excellence with very detailed and knowledgeable instructions; binary instructions. Binary instructions from an expert (and this expert is rarer than ever!).

The wrong way to use an LLM is to say, "Write me a blog about Generative AI". Almost everybody is currently doing this because they have neither the knowledge nor the will to actually do the hard work.

The right way is to recognize what value is.

Value is hand-crafted, researched content with thought behind every word. Automation is the work of distributing this content. Value is what is hidden in grandma’s recipe, but we absolutely should use the factory to bring this to the world. If we use sludge for the input, we churn out rubbish for the output. Focus on the value. Excellence can be mass-produced, but it takes a lot of work to get there.
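To make the contrast concrete, here's a toy sketch in Python. Every field in the spec below is invented purely for illustration (there is no real API here): the lazy prompt outsources the thinking to the machine, while the expert prompt encodes a hand-crafted plan and hard constraints, leaving only the mechanical assembly to the factory.

```python
# The "wrong" way: a one-liner that delegates all the thinking.
lazy_prompt = "Write me a blog about Generative AI"

# The "right" way: the expert does the hard work first, producing a
# precise, binary-style spec (all fields hypothetical, for illustration).
spec = {
    "thesis": "LLMs augment experts; they do not replace them",
    "audience": "senior engineers skeptical of AI hype",
    "structure": ["hook", "forklift analogy", "how-to", "call to action"],
    "constraints": ["<= 800 words", "no generated filler", "one concrete example"],
}

# Only now do we hand the assembly over to the factory.
expert_prompt = (
    "Draft a blog post arguing: " + spec["thesis"] + ".\n"
    "Audience: " + spec["audience"] + ".\n"
    "Follow this exact structure: " + " -> ".join(spec["structure"]) + ".\n"
    "Hard constraints: " + "; ".join(spec["constraints"])
)
```

The point is not the particular fields, but where the effort lands: the value lives in the spec, written by a human who understands the whole; the tool merely executes it.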


I'm using Generative AI to solve real problems that I actually encounter, but I'm doing so at a zoomed-in level. I'm planning, meticulously and carefully. And then using the power tool to execute.

The worst case scenario is that I've found another way not to use Gen AI. And one thing is for sure: we certainly don't need more of those... Importantly, I've also learned a lot about the how and the when. The best case scenario is that we are at the vanguard of a new era. I'm putting in the hard yards. I trust this will pay dividends.

I create the value, and use Generative AI to distribute it. This is the way. This is using a robot for automation, and human expertise for value.

Most people.

Cracking nuts with a sledgehammer.

And let's hope they aren't barefoot, crushing the broken shells on the floor…