Why ChatGPT is dumb as a brick, but it doesn’t matter
Everybody will probably have heard of ChatGPT by now, and those without a programming background have been wowed by its ‘almost lifelike’ interface. As with any new thing that comes along (and it’s really not new), the end of programmers and IT people has already been predicted in various media.
Intelligence (or rather, complex behavior) can be deceptively simple. An insect that cross-links the sensor data from its antennae to its ‘move forwards’ routine can flawlessly find food without any actual intelligence making an appearance. But it does look smart. And when you start mixing several of these behaviors, it soon becomes very hard to distinguish between real intelligence and just good plumbing of nerves.
Chatbots are not new; take ELIZA. These simplistic versions depended on static phrases being spat at the user, no matter the input. The next step was to take input from the user and simply regurgitate it back at them. And while no human would be fooled into thinking they were conversing with another human, it did already look a little ‘intelligent’. The next step comes in the form of taking in data and building a model. This allows the chatbot to actually start bringing new (to the current user) information into the conversation. This model building can start as simply as ‘which word often appears next to this other word’. In this way you build a complex web of words and their ‘relatedness’ to one another, as the sketch below illustrates.
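To make that idea concrete, here is a minimal sketch of such a ‘word relatedness’ model in Python. To be clear, this is not how ChatGPT works internally (that is a large neural network); it only illustrates the simplest form of ‘learn which word follows which, then babble’:

```python
from collections import defaultdict
import random

def build_model(text):
    """Count how often each word is followed by every other word."""
    words = text.lower().split()
    counts = defaultdict(lambda: defaultdict(int))
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def babble(model, start, length=10):
    """Generate a 'sentence' by repeatedly picking a plausible next word."""
    word, out = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        # Pick the next word, weighted by how often it followed this one.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat"
print(babble(build_model(corpus), "the"))
```

Fed with a large enough corpus, even something this crude starts to produce sentences that look vaguely sensible, without any understanding behind them.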
This makes ChatGPT look a lot more polished, but it basically works the same way. The difference is that you’re ‘talking’ to the written word of millions of other people. That makes the lack of intelligence in the chatbot a lot harder to detect, since ChatGPT’s knowledge is based on actual intelligence.
But does this mean ChatGPT is worthless? Not at all. But instead of heralding it as the coming of artificial intelligence (which it is not), you should see it as a look into the future of searching. Because that’s what it currently is: a search interface. Where today you enter questions into Google and then sift through the results looking for an answer that matches your specific situation, ChatGPT has already done this for you. You can ask it for things and it will combine, re-order and convert information to give you a relevant answer. This means you can ask it for the high-level program flow of a piece of source code, give it a list of ingredients and ask for a recipe, or have it collect and order facts for you.
The cracks start to show when you ask it to do something that requires some creativity (besides the obvious problem of ‘garbage in = garbage out’). For example, I give it two datasets and ask ChatGPT how I can convert one into the other using a specific tool. While sites like StackExchange are certainly in its dataset, every such question is unique and, worse, it requires a 100% correct answer. When you chat with ChatGPT ‘socially’, responses don’t have to be 100% correct. For example, asking it for “the best movie” gives some results, but one movie is no better than another, since that is a personal preference. A question like that doesn’t let you judge the intelligence of a chatbot.
But back to converting datasets. With its knowledge of StackExchange and what other people have asked before, it now starts to ‘blend’ together an answer that might work. But since ChatGPT has no access to tools like ‘awk’ or ‘jq’, it can’t test its answers before giving them to you. This leads to answers that are grammatically wrong, syntactically wrong, or that simply produce the wrong result. Here you start to see that ChatGPT is a wonderful start for a search interface but not an actual artificial intelligence. Your job as a programmer is safe for now.
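To give a concrete (and entirely made-up) example of the kind of conversion being asked for, here is a small Python sketch; the field names and data are hypothetical. The point is that an answer like this is only ‘correct’ once you actually run it against your data, which is exactly the step ChatGPT cannot do:

```python
import json

# Hypothetical conversion task: turn a list of {"name": ..., "score": ...}
# records into a single {name: score} lookup table.
records = [
    {"name": "alice", "score": 7},
    {"name": "bob", "score": 3},
]

lookup = {rec["name"]: rec["score"] for rec in records}

print(json.dumps(lookup, indent=2))
# {
#   "alice": 7,
#   "bob": 3
# }
```

A human answering on StackExchange can run a snippet like this before posting it; ChatGPT can only produce something that looks like it would run.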
So is that a bad thing? Not at all. We need better search engines as much as we need real AI. But most important of all, it looks like the spark that will spawn a new genre. In gaming, for example, ‘Doom’ or ‘Dune 2’ were in themselves good games, but they both had predecessors; they were not invented in a vacuum. And looking at them now, you can see glaring flaws and graphically they are kind of sucky. But that doesn’t matter. Both of them were important because they gathered enough interest to start a new genre. ChatGPT will do the same. In a number of years, looking back, we’ll see the glaring flaws and obvious shortcomings, because we will have something better. And that only happens because of those that start a new genre.
And as a sidenote, gaming might be the place where we should be most excited about the coming of good AI. While you already see algorithmically generated content appear in games, this is a far cry from what could be. AI-generated video based purely on text prompts is already showing up. Imagine a game where you can feed in text prompts and it generates a world for you to explore and an adventure to follow. Suppose you feed it your favorite book?