More on the Capabilities of Current-Gen “AI”

Eric Raymond, another bright star in the programming universe, weighed in on the actual capability of current-gen “AI.” He echoed DHH and Carmack, reiterating my own opinion that LLMs cannot replace humans at (non-trivial) programming. Yet. Sure, an LLM can produce a single function or a web page, but even then you’ll have to fix its output so that errors don’t accumulate in the project.

Maybe better “meta-LLMs,” with more specialist subsystems, will do better, but we effectively already have those, and they haven’t closed the gap. What’s needed is not a difference in degree but a difference in kind. We will need to come up with some other technology before AI supplants humans at programming. Maybe the next step is AGI; maybe there are a couple more intermediate developments before that becomes a reality.

At this point, it should be clear that the people who are breathlessly bullish about how AI is going to replace all the programmers at your company are grifting. As the line from The Princess Bride goes, “Anyone who says differently is selling something.”

Capabilities of Current-Gen “AI”

There are two camps on Twitter when it comes to using AI in programming. One states emphatically that it is producing fully realized projects through nothing but “vibe coding”; the other says, well, what DHH says here.

John Carmack had this summary, and he should know.

This put into words my feeling that LLMs are just another tool — an advanced tool, to be sure — but “just” another tool, like source code managers, diff tools, IDEs, debuggers, and linters. In fact, writing code is the least interesting and least important part of creating software that does something non-trivial and useful. The magical part is understanding that need and translating it into an application, and it’s my contention that LLMs will never be able to fill that role. If you can also make the program work well, run fast, and look nice, that’s the fun part. Maybe a future version of AI built on a different technology will be able to do these things, but not this one.

CoPilot Having a Normal One

Sigh.

I mean, even if you can’t recall the ASCII characters for a hex value (like me), you should be able to realize that 0x51 is one less than 0x52, so the “R” and the “3” should be right next to each other. Whether the “R” should be a “4” or the “3” should be a “Q,” you can see at first glance that this is just plain wrong. LLMs can’t. I get it, of course: CoPilot interpreted the 0x51 in the second position as decimal instead of hex (unlike all the others), and decimal 51 does indeed translate to a “3.”
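
If you want to sanity-check the arithmetic yourself, a few lines in a Python shell (my choice here; any language with an ASCII lookup would do the same) show the two readings side by side:

    # Reading the byte values as hex vs. decimal ASCII codes
    print(chr(0x51))  # 'Q' -- 0x51 read as hex, one code point below 'R'
    print(chr(0x52))  # 'R' -- 0x52 read as hex
    print(chr(51))    # '3' -- 51 read as decimal, which is what CoPilot produced
    print(chr(52))    # '4' -- 52 read as decimal, what the 'R' would become under that reading

Either reading is internally consistent; mixing them partway through a table is what produces the nonsense.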

That’s the thing I find about CoPilot and ChatGPT so far: they have quick answers and suggestions for every line as I’m typing, and half of what looks right at first glance turns out to be wrong. I actually started to argue with CoPilot after fruitlessly trying to use it to track down a bug for half an hour. What am I doing with my life?

But sure, tell me how we’re all going to lose our jobs this year because of this technology.