The AI industry is trying to brainwash you into believing that a malfunction caused by a programming error is a "hallucination," as if AI were a person with free will.
When a calculator gives you a wrong result, you do not say it is trying to deceive you or that it is hallucinating. You simply say it is defective.
When the Challenger shuttle blew up, no one said that it committed suicide.
A semiconductor chip manufacturing process can sometimes produce as many as 90% defective chips. The semiconductor industry does not say its manufacturing process is hallucinating. The customer still gets 100% working chips.
The industry tests its chips before shipping them to customers, and it also makes sure that the test system itself is working correctly. A small random sample from a day's production is retested on a different test machine to confirm that the rate of defective chips slipping through stays below a set threshold. This is called quality control.
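To make the comparison concrete, here is a minimal sketch of that kind of acceptance sampling in Python. It is purely illustrative: the lot size, sample size, and defect threshold are made-up numbers, not real fab parameters.

```python
import random

# Illustrative acceptance-sampling check (made-up numbers, not real fab parameters).
# A small random sample from the day's production is retested; the lot is held back
# if the observed defect rate in the sample exceeds the allowed threshold.

def lot_passes_qc(lot, sample_size=200, max_defect_rate=0.01, retest=None):
    """Return True if a random sample of the lot meets the defect-rate threshold."""
    retest = retest or (lambda chip: chip["works"])  # stand-in for the second test machine
    sample = random.sample(lot, min(sample_size, len(lot)))
    defects = sum(1 for chip in sample if not retest(chip))
    return defects / len(sample) <= max_defect_rate

# Example: a lot of 10,000 chips in which 0.5% are secretly defective.
lot = [{"id": i, "works": random.random() > 0.005} for i in range(10_000)]
print("ship the lot" if lot_passes_qc(lot) else "hold the lot for rework")
```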
In fact, every manufacturing process produces defective products. But the customers always get perfectly working products. The manufacturers never say that the manufacturing process is hallucinating.
So why does the AI industry use the term "hallucination" when its products give wrong answers?
So that they do not have to blame their own programmers. They can simply say that their AI went rogue, as if it were a person with free will.
If anyone is hallucinating, it is the programmers who believe that AI has free will. More likely, they are fooling you.
ChatGPT is simply a multi-billion-dollar defective computer, defective because of its software, not because of its hardware.
The hardware behind ChatGPT is just lifeless semiconductor chips, processors no different in kind from the CPU in your own computer.
A processor does not have free will. It faithfully executes the billions of instructions given to it by the programmers.
It does not skip or refuse an instruction because it does not like it. It does not add an instruction of its own because it thinks it has a better idea.
When ChatGPT produces a wrong answer, the entire blame should go to the programmers.
The best way to regulate the AI industry is to make product liability laws even more stringent for it than for other industries.
This is because the AI industry will keep shipping defective products as long as it is allowed to do so with impunity.
In fact, the software is so complicated that the programmers themselves do not understand how it works. This means they might not even be able to fix the problem of hallucinations anytime soon.
Right now OpenAI, the creator of ChatGPT, is knowingly shipping a defective product. It does not even try to Google its answers to check whether the books it recommends actually exist.
If ChatGPT has the IQ of Einstein, as OpenAI claims, surely the Einstein AI can be taught to Google.
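The kind of check being described is not exotic. Below is a minimal sketch of how a book recommendation could be verified before it reaches a user, assuming the public Open Library search API; the function name and surrounding logic are my own, illustrative only, and a real system would need rate limiting and fuzzier title matching.

```python
import requests

def book_seems_to_exist(title: str, author: str = "") -> bool:
    """Ask the public Open Library search API whether any record matches the title."""
    params = {"title": title}
    if author:
        params["author"] = author
    resp = requests.get("https://openlibrary.org/search.json", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

# Example: verify a recommendation before passing it along to the user.
if not book_seems_to_exist("The Old Man and the Sea", "Ernest Hemingway"):
    print("Do not recommend this title: no record of it was found.")
```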
The AI industry should not be allowed to blame the users for trusting their product.
If someone loses a job because they depended on the answers given by AI, the AI company should be held liable.
If AI causes a death, the people responsible should go to jail.
A disclaimer should not absolve those involved in selling knowingly defective AI products.
Of course, the best solution is for everyone to stop using ChatGPT until OpenAI fixes the hallucination bug, a euphemism for defective software.