Have you ever watched one of those viral videos where someone types "build me a Tetris clone" into an AI chat window, and within seconds, a fully working game just… appears? It looks like magic. And honestly? It kind of is, but not in the way you think.
That magic has limits. Big ones. And if you've ever tried to use AI to build something more complex than a demo, you've probably already bumped into them.
There's a growing belief, especially among newer developers, that you can just describe what you want to an AI, sit back, and let it do the work. I call this "vibe coding." And while it sounds amazing, it almost always ends in frustration.
Why Tetris Works (And Most Things Don't)
Here's what's actually happening when AI nails a simple request like "build Tetris."
Tetris isn't a mystery. It's one of the most well-documented games in history. The rules, the mechanics, the source code: it's all over the internet. When an AI model was trained, it absorbed millions of examples related to Tetris. So when you ask for it, the model isn't really thinking. It's more like… recognizing a familiar pattern and filling it in.
Think of it like asking someone to draw a smiley face from memory. They've seen thousands of them. Easy.
Now ask them to draw a completely new character they've never seen before, with your specific proportions, your specific style, and rules that exist only in your head.
Thatâs a very different challenge.
When It All Falls Apart
Here's a real example. Try typing this into your favorite AI tool:
"Build me a clone of the 'Grow a Garden' Roblox game that runs on iPhone."
What do you get back? Probably a lot of confident-sounding code that doesn't actually work. The AI will invent game mechanics that don't match the original. It'll reference tools and libraries that don't exist. It'll make assumptions about how the game works, and most of those assumptions will be wrong.
Why? Because "Grow a Garden" is a niche game with rules that live in the heads of its players, not scattered across the internet in a form the AI could have learned from.
The result? You spend hours fixing AI mistakes that wouldn't have existed if you'd just written a clear description of what you wanted in the first place.
The AI isn't stupid; it just doesn't have access to information it was never trained on. It's doing its best to fill in the blanks. And that's the problem.
So What Should You Do Instead?
The short answer: stop expecting the AI to read your mind, and start giving it a real blueprint to follow.
In the next post, I'll walk through exactly what that looks like, and how thinking like an architect (instead of a magician) changes everything about how you work with AI tools.
Spoiler: it's not about finding the perfect prompt. It's about doing a little bit of planning first.