But does it resonate?

Written by Vincent Bruijn

By accident, I came up with a pretty good resonance prompt today. A resonance prompt can be used as a get-to-know-the-model-quickly prompt: a way to get a quick glance at its capabilities, not only formally but also creatively.

The idea

I want to give my (paper) Bullet Journals unique three-letter names so I can refer to other notebooks quickly by writing a name followed by a page number, making cross-references between journals easy. Alongside that, I want to make a portable inlay listing the names of all my journals so far (8 of them).

I was thinking of short names with no meaning, inspired by how one-page game walkthroughs used to use groups of letters to denote a section in the walkthrough, like ac07 or something.

GameFAQs

I want to be able to write ZOG:32 to refer to page 32 of the journal named ZOG. These conditions make a nice list of constraints for a prompt.

Can you make a list of words following these conditions:

1. All words should be 3 characters long
2. None of the words should be regular English; no “dot”, “god”, “rag”, etc.
3. All words should start with a consonant, followed by a vowel and end with a consonant
4. All consonants of the Latin alphabet can only be used once as the first letter.
5. The words should be unique enough so no “zak” and “sak” in the list
6. Examples are “qeg”, “kiv”, “sux”

I posted this prompt to Claude in thinking mode. It took around 20-30 seconds of crunching the constraints, and then Claude came up with a nice list of words.

Claude list

bex, caz, dox, fek, giz, hov, jux, kiv, loz, miz, nez, pey, qeg, ruz, sux, tef, vix, woz, xul, yox, zaf

They’re unique, clear and follow my rules. Just for the fun of it, I then posted the exact same prompt to ChatGPT. It crunched a little longer, around 1 minute and 43 seconds, quite long actually, but came up with a similar list exactly following the rules.

ChatGPT list

beq
ciq
duq
foj
guz
hiq
juv
kiv
loq
miv
nuj
pij
qeg
rij
siv
tiv
vij
wiv
xiq
yiq
zuq

This made me wonder: what about Gemini? Oh ouch… Gemini did not do too well. The eventual list contained at least four out of twenty words that are regular English. When I gave this as feedback, it started outputting an endless list of words; I had to break its endless loop. It was a word puke that just didn’t stop. Not good.

Gemini list

 * Ben
 * Can
 * Der
 * Fim
 * Gov
 * Hip
 * Jat
 * Ked
 * Lam
 * Mut
 * Nax
 * Por
 * Qex
 * Rig
 * Sul
 * Tin
 * Vug
 * Wex
 * Xok
 * Yal
 * Zot

This anomaly made me curious: is the prompt that hard, then? Does this simple rule set challenge a model’s internals? Asking Mistral led to a nice list fulfilling all the rules, but it wasn’t ordered alphabetically the way Claude’s and ChatGPT’s lists were.

Mistral list

| Word  |
|-------|
| qef   |
| wiv   |
| ryx   |
| taz   |
| puk   |
| bam   |
| jod   |
| nif   |
| luh   |
| mox   |
| gec   |
| dap   |
| kif   |
| huz   |
| vep   |
| zot   |
| cex   |
| shm   |
| fyb   |
| xak   |

Findings

Chatting further with ChatGPT and Claude, it became clear that the constraints I set up demand quite different capabilities from a model within a single prompt:

  • be accurate
  • be creative
  • think systematically

Apparently being precise and creative in one prompt response is a challenge.

For a human, the list is easy to verify. That makes the prompt ideal for resonance checking: does the model obey the constraints, is it somewhat creative, and what do I “feel” when reading the list? Are any English words present? How does the model reflect on its own list? How do I judge the output? Great, reasonably good or not that good: you get an immediate feel for it. Great to see these different outputs.
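
For what it’s worth, rules 1 through 4 are also trivial to check in code. Below is a minimal Python sketch, not part of the original experiment; the ENGLISH_WORDS set is just a tiny stand-in for a real dictionary, and rules 5 and 6 (distinctiveness and the style of the examples) stay with the human.

```python
VOWELS = set("aeiou")
CONSONANTS = set("bcdfghjklmnpqrstvwxyz")
ENGLISH_WORDS = {"dot", "god", "rag", "can", "hip", "tin", "gov"}  # stand-in only

def check(words):
    """Return a list of rule violations (rules 1-4) for the given words."""
    problems = []
    seen_first_letters = set()
    for word in (w.strip().lower() for w in words):
        if len(word) != 3:                                    # rule 1: 3 characters
            problems.append(f"{word!r}: not 3 characters")
            continue
        if word in ENGLISH_WORDS:                             # rule 2: no regular English
            problems.append(f"{word}: regular English word")
        if not (word[0] in CONSONANTS and word[1] in VOWELS and word[2] in CONSONANTS):
            problems.append(f"{word}: not consonant-vowel-consonant")  # rule 3
        if word[0] in seen_first_letters:                     # rule 4: first letter reused
            problems.append(f"{word}: first letter {word[0]!r} already used")
        seen_first_letters.add(word[0])
    return problems

claude_list = ("bex caz dox fek giz hov jux kiv loz miz nez "
               "pey qeg ruz sux tef vix woz xul yox zaf").split()
print(check(claude_list) or "rules 1-4 satisfied")
```

Run against the Claude list above, it reports that rules 1-4 are satisfied, which matches the manual check.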

Conclusion

So that’s what makes this a good prompt for quickly checking whether a model resonates with me. A go-to prompt that I will keep using to test a model as soon as it’s released or available to me: what’s its vibe?

My conclusion so far is that Claude 4 and ChatGPT 5 resonate with me best. I like Claude most.