There’s Nothing Artificial About Our Intelligence
Since OpenAI’s public introduction of ChatGPT, the Internet has been abuzz with talk about artificial intelligence and the implications of machines that can think and write and carry on conversations at least as well as some humans.
People in a wide variety of industries are wondering whether their jobs are safe, and if so, for how long. After all, the latest version of ChatGPT has passed the bar exam and the U.S. medical licensing exam. It has scored 5s on numerous AP exams, and it does better on the SAT than most human students. So, many people are beginning to wonder how long it will be before computers are good enough to replace teachers, lawyers, doctors, accountants, computer programmers and (gasp!) journalists.
If you haven’t yet played around with ChatGPT, you should. Just go to chat.openai.com, create an account and start asking questions. At first, you might be surprised at how thorough, well-written and comprehensive the responses are. But with a little more digging, you might also be surprised at how misleading or even dangerous those responses can be.
I asked the chatbot things like “Explain the best way to design a spur gear” and “How do you choose a coupling for an industrial application?”
The answers I got weren’t very deep or nuanced, but they didn’t seem terribly far off, either. To a layman, they might even sound authoritative.
And that can be a real problem if you rely on those answers too readily. OpenAI admits that its chatbot can’t reliably distinguish truth from fiction. ChatGPT is a bit like a clever politician: it’s very good at telling convincing lies. Everything it says sounds reasonable, and if you’re not a subject matter expert, you might not know the difference.