Have you ever watched one of those viral videos where someone types “build me a Tetris clone” into an AI chat window, and within seconds, a fully working game just… appears? It looks like magic. And honestly? It kind of is — but not in the way you think.

That magic has limits. Big ones. And if you’ve ever tried to use AI to build something more complex than a demo, you’ve probably already bumped into them.

There’s a growing belief — especially among newer developers — that you can just describe what you want to an AI, sit back, and let it do the work. I call this “vibe coding.” And while it sounds amazing, it almost always ends in frustration.

Why Tetris Works (And Most Things Don’t)

Here’s what’s actually happening when AI nails a simple request like “build Tetris.”

Tetris isn’t a mystery. It’s one of the most well-documented games in history. The rules, the mechanics, the source code — it’s all over the internet. When an AI model was trained, it absorbed millions of examples related to Tetris. So when you ask for it, the model isn’t really thinking. It’s more like… recognizing a familiar pattern and filling it in.

Think of it like asking someone to draw a smiley face from memory. They’ve seen thousands of them. Easy.

Now ask them to draw a completely new character they’ve never seen before — with your specific proportions, your specific style, and rules that exist only in your head.

That’s a very different challenge.

When It All Falls Apart

Here’s a real example. Try typing this into your favorite AI tool:

“Build me a clone of the ‘Grow a Garden’ Roblox game that runs on iPhone.”

What do you get back? Probably a lot of confident-sounding code that doesn’t actually work. The AI will invent game mechanics that don’t match the original. It’ll reference tools and libraries that don’t exist. It’ll make assumptions about how the game works — and most of those assumptions will be wrong.

Why? Because “Grow a Garden” is a niche game with rules that live in the heads of its players, not scattered across the internet in a form the AI could have learned from.

The result? You spend hours fixing AI mistakes that wouldn’t have existed if you’d just written a clear description of what you wanted in the first place.

The AI isn’t stupid — it simply can’t know what it was never trained on. So it fills the gaps with plausible-sounding guesses. And that’s the problem.

So What Should You Do Instead?

The short answer: stop expecting the AI to read your mind, and start giving it a real blueprint to follow.

In the next post, I’ll walk through exactly what that looks like — and how thinking like an architect (instead of a magician) changes everything about how you work with AI tools.

Spoiler: it’s not about finding the perfect prompt. It’s about doing a little bit of planning first.
