Many of us feel fluent in prompting AI by now, yet we’re still frustrated when the results fall short. The issue usually isn’t the model. It’s how we talk to it.
Too often, we fall into what developer Mitchell Hashimoto calls “blind prompting”: treating the AI like a helpful colleague instead of instructing it with purpose and structure. In one 2023 study, for example, participants who got a decent result early on assumed their prompt was “done” and never refined it further.