Prof. Ethan Mollick has a helpful post on “Thinking like an AI.” Three salient notes:
(1) Large Language Models do “next token” prediction. They predict the next word that follows our prompt (and so on). That doesn’t mean they’re dumb autocomplete systems. But it does help develop the intuition for why the prompt matters: the pattern of words in a prompt shapes which tokens the model predicts next.
For example, if I write “The best type of pet is a” the LLM predicts that the most likely tokens to come next, based on its model of human language, are either “dog,” “personal,” “subjective,” or “cat.”
If I change the word “type” to “kind” in the sentence, the probabilities of all the top tokens drop and I am much more likely to get an exotic answer like “calm” or “bunny.” If I add an extra space after the word “pet,” then “dog” isn’t even in the top three predicted tokens!
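The idea above can be sketched with a toy model. This is not a real LLM: the prompts and probability tables below are invented for illustration, but the mechanism — rank candidate next tokens by conditional probability, where a one-word change in the prompt shifts the whole distribution — is the one being described.

```python
# Toy illustration of next-token prediction. The probability tables are
# invented; a real LLM computes them from billions of parameters.
TOY_MODEL = {
    "The best type of pet is a": {
        "dog": 0.35, "personal": 0.20, "subjective": 0.15, "cat": 0.12,
    },
    # Changing "type" to "kind" changes the conditioning context,
    # so every candidate token's probability shifts.
    "The best kind of pet is a": {
        "dog": 0.18, "cat": 0.15, "calm": 0.12, "bunny": 0.10,
    },
}

def top_tokens(prompt, k=3):
    """Return the k most probable next tokens for a given prompt."""
    dist = TOY_MODEL.get(prompt, {})
    return sorted(dist, key=dist.get, reverse=True)[:k]

print(top_tokens("The best type of pet is a"))  # ['dog', 'personal', 'subjective']
print(top_tokens("The best kind of pet is a"))  # ['dog', 'cat', 'calm']
```

The point of the sketch: the model never “decides” on an answer; it ranks continuations, and the ranking is exquisitely sensitive to the exact wording of the prompt.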
(2) LLMs make predictions based on their training data. So, the better the training data, the better the predictions.
Contrary to some people’s beliefs, the AI is rarely producing substantial text from its training data verbatim. The sentences the AI provides are usually entirely novel, extrapolated from the language patterns it learned. Occasionally, the model might reproduce a specific fact or phrase it memorized from its training data, but more often, it’s generalizing from learned patterns to produce new content.
(3) LLMs have a limited memory. They can only attend to a fixed-size context window, so once a conversation grows past it, the earliest parts fall out of view.
Prof Mollick ends his post with a useful caveat.
Understanding token prediction, training data, and memory constraints gives us a peek behind the curtain, but it doesn’t fully explain the magic happening on stage. That said, this knowledge can help you push AI in more interesting directions. Want more original outputs? Try prompts that veer into less common territory in the training data. Stuck in a conversational rut? Remember the context window and start fresh.
But the real way to understand AI is to use it. A lot. For about 10 hours, just do stuff with AI that you do for work or fun. Poke it, prod it, ask it weird questions. See where it shines and where it stumbles. Your hands-on experience will teach you more than any article ever could (even this long one). You’ll figure out a remarkable amount about how to use AI effectively, and you might even surprise yourself with what you discover.
While that caveat is true, I think his post is an excellent starting point.