
Learning with an LLM

Asking an LLM to explain stuff is a new learning technique for me. An LLM is a fine rubber duck: setting out a clear question requires me to engage with the subject carefully and precisely. Reading my questions helps my own imagination to pop up insightful answers. Critically parsing any response makes me think about what is plausible and consistent, which makes me think about the underlying models that I'm using, and how they might change.
However, the quality of the answers is... unreliable. While the words generally fit together into a clear explanation, the information explained can be false. Key points might be left out, irrelevancies included. I've seen answers that purport to be about shell but include syntax for Python, legal advice with no legislative support, and opinions offered as gospel but as reliable as gossip. Of course I have – an LLM is a very polished random walk through a lossily-compressed and limited repository.
To limit the loopiness, I find it useful to ask for an example. LLMs are, by training, plausible and confident – in the necessarily shifting sands of new knowledge, an example is more solid than the LLM's ramblings. Executable code and tests are easy examples to work with: running them will give you immediate definitive feedback. In other areas, I've found it helpful to ask for relevant legislation when learning about legal stuff, or for specific quotes when asking about books. Most of the time, requests for specifics will either give you a source for what you need, or indicate that the LLM's response is down to the opinions in the work it has ingested, and not necessarily supported.
It helps to get chatting, primarily because a conversation engages me more with the topic: an LLM has no beliefs, so feels refreshingly open to changing its mind, particularly if there's more in the training data to (stochastically) parrot. I've recently heard "That's a great discovery!", "thank you for that hint!" and "You're right", as well as the regular "I apologise for the confusion" as the LLM spins on its metaphorical heels. What an encouraging rubber parrot it is.
A less woolly reason to converse is that your corrections and hints become part of the prompt, shifting the perspective for the rest of the conversation. This is a subpattern of something that we should be familiar with, as testers: don't take the first answer. Following that principle, I've found it useful when learning to:
- ask the same thing in a new conversation, or with a different system prompt
- ask for alternatives
- ask several LLMs
- ask the LLM to take apart its own answer.
I find that learning with an LLM is best when I already have insight: I learn better on subjects where I already know enough to tell good ideas from dumb. I'm pretty happy learning from an LLM about code patterns and syntax, or around the application of a tiny and specific subset of UK education law. I'd not want to dig into Nelson's naval tactics, or the biochemistry of pollens. And there are some things where written language means not an awful lot, so I think I'd avoid using an LLM to discover Gabber or Hyperrealism. I imagine that most of us pick our learning materials to suit our expertise and learning style, so this is not a novel principle. Nonetheless, an LLM's fluency and speed can make it appear temptingly useful as a starting point or as an expert, and it can be misleading in both circumstances.
With all that, an LLM is a delight to learn with – open, informed, encouraging, helpful, tireless, available. I've never had a study buddy like it.
As an example, here's what I learned in a recent single conversational thread with Claude. I was requesting information around shell scripts (whether built, suggested or generated). I learnt masses (particularly where the scripts themselves were obscure) and the information was swiftly found and for the most part well explained.
Claude reminded me of (sketched just below):
- the difference between `''` and `""`,
- spaces around `=`,
- the absence of booleans,
- `[` as a 'test' command.
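Here's a minimal sketch of those basics – the variable name is my own invention, not from the conversation:

```bash
name="world"           # no spaces around = in an assignment
echo 'hello $name'     # '' suppresses expansion -> hello $name
echo "hello $name"     # "" allows expansion     -> hello world

# There are no booleans: commands succeed (exit 0) or fail (non-zero),
# and [ is itself a command ('test'), so it needs spaces around it.
if [ "$name" = "world" ]; then
  echo "matched"
fi
```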
Claude helped me to use approaches which were new to me (a sketch follows the list), including:
- functions in scripts,
- here-documents and here-strings,
- moving parameters around,
- do-nothing with `:`,
- `set -x` and `set -e`,
- comparing `[` with `[[`,
- expanding with `${parameter[@]}`,
- how to set up a config file,
- running executables with `./«name»` compared with `sh ./«name»` or just `«name»`.
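To ground those, here's a minimal sketch that strings several of them together. Everything specific in it – the script name, the `app.conf` file and its contents, the names – is my own invention, not something from the chat:

```bash
#!/usr/bin/env bash
set -e   # stop at the first failing command
set -x   # trace each command as it runs

greet() {                      # a function in a script
  local who="${1:-world}"      # $1 is the first positional parameter;
  echo "greeting: $who"        # shift would move the rest along
}

# A here-document feeds several lines to a command's stdin...
cat <<EOF
two lines of text,
delivered as one stream
EOF

# ...while a here-string feeds a single string.
tr 'a-z' 'A-Z' <<< "shout this"

# [[ is shell syntax, richer than the [ command; : is the do-nothing
# command, useful as a placeholder.
if [[ -r ./app.conf ]]; then
  . ./app.conf                 # read settings from a config file
else
  :                            # nothing to configure yet
fi

names=(ada brian grace)
for n in "${names[@]}"; do     # "${names[@]}" expands to every element
  greet "$n"
done

# Run it as ./sketch.sh (after chmod +x, honouring the #! line), as
# sh ./sketch.sh (forcing sh, ignoring the #! line), or as plain
# sketch.sh only if its directory is on $PATH.
```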
And, aside from explaining, Claude could
- plausibly pick out idiomatic oddities and root causes of syntax errors,
- offer options for how to do things more clearly,
- unpick regex,
- spot antipatterns,
- give plausible pros and cons, and
- propose ways to strengthen the code against common failures (illustrated below).
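To illustrate that last point, here's the flavour of hardening involved – a hypothetical fragile line and a sturdier version, not a fix Claude actually produced:

```bash
# Fragile: unquoted expansions split on spaces, and failures pass silently.
#   cp $src $dest; echo done

# Sturdier: fail fast and loudly, demand arguments, quote every expansion.
set -euo pipefail
src="${1:?usage: copy.sh SRC DEST}"
dest="${2:?usage: copy.sh SRC DEST}"
cp -- "$src" "$dest"
echo "copied '$src' to '$dest'"
```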
The LLM still put forward bad ideas, so its dreams were tempered (from the deterministic side) by executing the code, and (from the thoughtful side) by James' insight into code.