" A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures."
-- Alan Perlis
I am seeing the claim everywhere online that LLMs are a higher level of abstraction. If you claim that you haven’t seen this claim, then you had better stop reading now - this blog post is not for you.1
Specifically, I am seeing the claim that LLMs are the next step in the ladder of abstractions we have climbed, going from programming in binary to programming in assembly to programming in C to programming in Python.
Now, I am told, programming with LLMs is the next abstraction. Apparently the people who program with LLMs believe that it is a move to a higher abstraction similar, if not identical, to the previous moves we have seen.
This is wrong! Even when the people telling me these things qualify their authority with “I’ve been programming for 30 years, and now programming is fun again”, it still remains wrong.
But that’s just an opinion. The counter, however, is not an opinion - it’s a fact.
Each move from one layer of the tech stack to a higher one involved a function:
f(x) -> y
Given a specific x, you always get a specific y as the artifact being generated.
When x is assembly source, a specific input always gives you the same binary result.
When x is C source, a specific input always results in the same binary artifact being generated (assuming the same compiler, version, and flags).
When x is Python source, a specific input always results in the same bytecode artifact being generated.
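Here is a minimal sketch of that determinism, using CPython’s built-in compile and marshal modules (the source string is just an example): compiling the same source twice under the same interpreter yields a byte-identical artifact.

```python
import hashlib
import marshal

source = "def add(a, b):\n    return a + b\n"

# Same x, same toolchain -> same y: compiling identical source
# under one interpreter yields byte-identical bytecode.
artifact1 = marshal.dumps(compile(source, "<x>", "exec"))
artifact2 = marshal.dumps(compile(source, "<x>", "exec"))

print(hashlib.sha256(artifact1).hexdigest())
print(hashlib.sha256(artifact2).hexdigest())
assert artifact1 == artifact2  # f(x) -> y, every time
```

Run it as many times as you like under a fixed interpreter version: the two hashes never differ.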
With LLMs the function’s output is not a value, it’s a probability distribution over values! That is, your input x doesn’t result in y, it results in some probability of getting y.
f(x) -> P(y)
Actually, it’s worse - an LLM always produces some artifact, so whatever probability does not land on y lands on other artifacts, and the function actually looks like this:
f(x) -> P(y) ∪ P(z1) ∪ P(z2) ∪ ... ∪ P(zN)
which means, roughly, you have a chance of getting y (i.e. the thing you wanted), or a chance of getting some unknown number of other artifacts.
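To make that concrete, here is a toy stand-in for the LLM - a minimal sketch in which the outcomes and their weights are invented purely for illustration. The same x, run repeatedly, yields y on some runs and one of the z’s on others.

```python
import random

# A toy stand-in for an LLM: the same x maps to a *distribution*
# over artifacts, not to a single artifact. These outcomes and
# weights are made up for illustration.
ARTIFACTS = [
    ("y: the artifact you asked for", 0.7),
    ("z1: some artifact you never asked for", 0.2),
    ("z2: some other artifact you never asked for", 0.1),
]

def f(x):
    outcomes, weights = zip(*ARTIFACTS)
    return random.choices(outcomes, weights=weights)[0]

# Same x every run, different outputs across runs:
for _ in range(5):
    print(f("Gimme a TODO webapp"))
```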
But if you think about it, it’s even worse than that - in reality, with LLMs you have the chance of getting y together with a number of other things you never asked for, so the actual function is:
f(x) -> P( y | z1 | z2 | ... | zN )
IOW, if you run a test on the output looking for y, the test can succeed even though you did not get only y - you also got all that other stuff in z1..zN.
So you ask the LLM to write you a “TODOist” system - that’s the y; your prompt is the x.
f('Gimme a TODO webapp') -> P( 'A TODO WebApp' | z1 | z2 | z3 | ... )
You only check that it gave you the TODO WebApp. Your tests did not check for the existence of z1, which could be “expose my credentials to the net”, or z2, which could be “share my hosted server with the world using public RW FTP access”, or z3, which could be… well, you get the idea!
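Here is that blind spot as a sketch, with hypothetical file names standing in for the artifacts: the acceptance test asserts the presence of y and looks at nothing else, so z1 and z2 ship right past it.

```python
# A naive acceptance test: it asserts that y is present and never
# looks for anything else - so z1..zN sail straight through.
def test_output(artifacts):
    assert "todo_app.py" in artifacts  # checks for y only

# Hypothetical LLM output: y plus extras nobody asked for.
generated = {"todo_app.py", "credentials_exposed.cfg", "public_rw_ftp.cfg"}

test_output(generated)  # passes!
print("extras that shipped anyway:", generated - {"todo_app.py"})
```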
If, in 2026, someone is still making the nonsensical abstraction claim, then send them a link to this post!
If you are the one making this claim, ask yourself why this claim is so important to you.
We need programmers who are self-aware, and not ones who are merely a channel for AI artifacts to enter the world.
Or maybe just keep reading; you will eventually see this claim.↩︎