I am physically incapable of rolling my eyes as hard as this idea deserves.

I don’t know what it is about ChatGPT and its various clones that drives people to such histrionics, but these fucking things aren’t sentient, or thinking like humans, or reasoning like humans, or turning into HAL 9000, or whatever fevered, bed-wetting fantasy folks’ll dream up next. Anyone who says so probably needs to go outside for a bit.
These things are complicated, interesting, and possibly useful bits of programming, and they do some genuinely interesting things. But they aren’t magic. They aren’t thinking. They aren’t alive. They aren’t even reliable – they lie at the drop of a hat. Actually, you can’t even say they’re lying, because “lying” implies they intended to obscure the truth, and they aren’t capable of intention.
ChatGPT and its clones (other large language models, or LLMs) are basically extremely sophisticated text predictors. You know how your phone offers to finish a word with a few choices as you type? Like that, but far more complicated.
When you ask ChatGPT a question, it generates something that looks like an answer. The program has seen thousands of answers to similar questions, so it knows what an authoritative answer should look like, and it generates text to match. It matches the authoritative tone. It matches the kinds of words in the kinds of orders you would expect to see in an answer to your question. It even manages to be right a lot of the time, because it has a huge store of text to draw on. But it’s not thinking. It’s regurgitating.
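If you want to see what “text prediction” means at its most bone-simple, here’s a toy sketch in Python: a little bigram model that counts which word tends to follow which, then strings together likely next words. This is nothing like a real LLM’s scale or math (the tiny corpus and everything else here is made up for illustration), but the basic move – predict the next word from what came before – is the same idea in miniature.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "text predictor": count which word tends to follow which,
# then generate text by repeatedly picking a statistically likely next word.
# Illustrative only -- a real LLM does this over sub-word tokens with
# billions of learned parameters, not a lookup table of counts.

corpus = (
    "the model predicts the next word the model does not think "
    "the model matches patterns in text it has seen"
).split()

# Count, for each word, how often each other word follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        # Sample the next word in proportion to how often it was seen.
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the model predicts the next word the model matches patterns in"
```

Run it a few times and you get plausible-looking word salad, which is exactly the point: it produces something shaped like language without anything behind it.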
Maybe that’ll change in the future. I don’t know. Maybe by next year, or in a couple of years, or in ten years, these things will be thinking. But they aren’t right now, so everyone needs to settle down. Get off the computer for a while and go talk to some real humans. Touch grass, as the kids say.