Hi friends,
A few weeks ago I posted about an experiment I ran. I let AI design a hero page for a fictional cleaning brand called Pura Home. My take was simple. The output was clean and modern, but completely soulless. It looked like every other DTC brand I have ever scrolled past.
Someone pushed back in the comments almost immediately.
They asked why I expected a different result when I never told the AI who the customer was. They were not being rude about it. They were genuinely asking a fair question. And for a second I sat with it and thought, did I actually miss something here?
So I went back and tried again. This time with a real prompt.
Here is what I gave it the first time:
“make a hero page for a cleaning product company named ‘Pura home’ that specializes in high-quality, high-end cleaning products.”
That was essentially it. A direction with no destination.
Here is what I gave it the second time:
“Make a hero page for a cleaning product company named Pura Home that specializes in high-quality, high-end cleaning products. Pura Home is not a tech startup. The owners created these products because they could not use what was already on the market, and they built a service for clients who care just as deeply about what they clean with. The brand was built over years, rooted in care. The feeling is lux, warm, and trusted. Think Aesop meets a family-owned apothecary. Earthy, rich tones. Serif or elegant sans-serif typography. Generous white space but not cold. The customer is a homeowner who is willing to pay more for products they trust and who reads the ingredient label. The primary action on the page is to shop the collection. Do not make this look like a generic DTC brand. Avoid the sterile white grid aesthetic.”
The second output was better. The hierarchy made sense for someone who was not just browsing but deciding whether to trust a brand they had never heard of.
So was the person in my comments right?
Yes and no.
Where they were right
A better prompt produces a better result. That is just true. If you give AI more context about the brand, the customer, the tone, and the things you explicitly do not want, the output improves. I knew this before the experiment. I just did not apply it carefully the first time, and that was a fair thing to call out.
The second prompt worked because it gave the model something real to work with. A name. An origin story. A customer with a specific mindset. A feeling described in concrete terms. Specific things to avoid. That is not magic. That is just good direction, the same direction you would give a human designer in a proper brief.
If I had walked into a design agency and said make me a hero page for a cleaning brand, I would have deserved whatever generic thing came back. That is on me, not the tool.
Where I still stand
But here is the thing I was pointing at in my original post, the thing I did not say precisely enough.
AI does not ask you for that context.
It takes what you give it and runs. It does not stop mid-generation and say wait, who is this actually for. It does not ask what makes this brand feel different from the one next to it on the shelf. It does not push back on your brief or tell you that your direction is contradicting itself. It does not sit across from you and say I am not sure I understand what you mean by warm, can you show me a reference.
A good designer does all of those things. A good developer does too. That conversation, that back and forth, that friction, is part of how a brief becomes something real. It is how vague intentions become actual decisions. And it is the part of the process that AI skips entirely.
Which means the quality of your output is entirely dependent on the quality of thinking you bring to the prompt. The prompt does not write itself. The brief does not write itself. The judgment about whether the output is actually right for this brand and this customer does not write itself.
That is still yours.
So here is what this actually means for how you use AI.
If your outputs feel generic, the problem is almost never the tool. It is that you handed the tool a vague intention and expected it to fill in the gaps with judgment it does not have. AI is not going to tell you that your brief is incomplete. It is not going to slow you down and make you think harder. It is going to execute confidently against whatever you gave it, and the output is going to reflect exactly how much thinking you did before you typed.
That is the real workflow shift. Not learning to prompt better as a technical skill. Learning to think more clearly before you prompt at all. Get specific about who the customer is. Get specific about what you do not want. Know what you are trying to make someone feel before you ask anything to help you make them feel it.
The person who commented on my post was right that my first prompt was incomplete. What I wanted them to see is that my second prompt only got better because I had done the thinking between the two attempts. That thinking is the part that will never be automated away, not because AI is not capable of more, but because knowing your own intentions clearly enough to articulate them is a human problem before it is a technical one.
AI is a powerful executor. You still have to be the thinker.
Let’s Build It Beautifully,
Fab