Little Notes & Short Posts

AI, Microsoft, & ‘Signs of Human Reasoning’

I am physically incapable of rolling my eyes as hard as this idea deserves.

Microsoft’s research paper, provocatively called “Sparks of Artificial General Intelligence,” goes to the heart of what technologists have been working toward — and fearing — for decades. If they build a machine that works like the human brain or even better, it could change the world. But it could also be dangerous.

And it could also be nonsense. Making A.G.I. claims can be a reputation killer for computer scientists. What one researcher believes is a sign of intelligence can easily be explained away by another, and the debate often sounds more appropriate to a philosophy club than a computer lab. Last year, Google fired a researcher who claimed that a similar A.I. system was sentient, a step beyond what Microsoft has claimed. A sentient system would not just be intelligent. It would be able to sense or feel what is happening in the world around it.
New York Times: Microsoft Says New A.I. Shows Signs of Human Reasoning

I don’t know what it is about ChatGPT and its various clones that drives people to such histrionics, but these fucking things aren’t sentient, or thinking like humans, or reasoning like humans, or turning into HAL 9000, or whatever fevered, bed-wetting fantasy folks’ll dream up next. Anyone who says so probably needs to go outside for a bit.

These things are complicated, interesting, and possibly useful bits of programming. But they aren’t magic. They aren’t thinking. They aren’t alive. They aren’t even reliable – they lie at the drop of a hat. Actually, you can’t even say they’re lying, because “lying” implies they intended to obscure the truth for some reason, and they aren’t capable of intention.

ChatGPT and its clones (other large language models, or LLMs) are basically extremely advanced text predictors. You know how your phone offers a few choices to finish a word as you type? Like that, but vastly more complicated.
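
If you want the idea stripped down to its dumbest possible form, here’s a toy sketch in Python. This is a made-up word-pair counter with a made-up training string, nothing like the real models, which predict sub-word tokens with a neural network holding billions of learned parameters instead of a lookup table – but the basic job is the same: given what came before, guess what usually comes next.

```python
from collections import Counter, defaultdict

# Toy "text predictor": count which word tends to follow which,
# then suggest the most common followers. (Illustrative only.)
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
)

followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word, n=3):
    """Return the n most common words seen after `word`."""
    return [w for w, _ in followers[word].most_common(n)]

print(predict_next("the"))   # e.g. ['cat', 'dog', 'mat']
print(predict_next("sat"))   # ['on']
```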

When you ask ChatGPT a question, it generates something that looks like an answer. The program has seen thousands of answers to similar questions, so it knows what an authoritative answer should look like, and it generates text to match. It matches an authoritative tone. It matches the kinds of words in the kinds of orders that you would expect to see in an answer to your question. It even manages to be right frequently, because it has a store of information to draw from. It’s not thinking. It’s regurgitating.
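
To see why that produces confident-sounding nonsense, here’s the same toy idea run in a loop, spitting out an “answer” one plausible word at a time. Again, this is a made-up sketch with a made-up three-sentence corpus, not how any real model is built – but the failure mode carries over: the loop only ever asks “what usually comes next?”, never “is this true?”.

```python
import random
from collections import Counter, defaultdict

# Toy generator: repeatedly pick a likely next word until we hit
# a period. (Illustrative only; real models sample sub-word tokens
# from a learned probability distribution, not a count table.)
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
)

followers = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def generate(prompt_word, max_words=8):
    """Keep appending a plausible next word until '.' or the limit."""
    out = [prompt_word]
    for _ in range(max_words):
        options = followers[out[-1]]
        if not options:
            break
        choices, weights = zip(*options.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

print(generate("the"))
# Might print "the capital of france is rome ." -- perfectly fluent,
# confidently wrong, and the generator has no way to tell the difference.
```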

Maybe that’ll change in the future. I don’t know. Maybe by next year, or in a couple of years, or in ten years, these things will be thinking. But they aren’t right now, so everyone needs to settle down. Get off the computer for a while and go talk to some real humans. Touch grass, as the kids say.