Everyone reading this has heard of ChatGPT. If you haven’t, then you might be on Mars.
AI has existed as a field since the 1950s, but it has only recently become a noticed part of our culture. We might not think it, or feel like it, but AI was in use long before ChatGPT. Take the YouTube algorithm. It takes in signals like watch time, topics, and like/dislike ratio to build your recommended video feed. Google's search engine works the same way. AI essentially has the task of doing what humans typically do: no human looks at what you ask Google and adjusts the results by hand, a computer does it on its own.
Now, with ChatGPT, people are starting to act like generative AI suddenly became a big technology. But in reality, it has been creeping up on us for a very long time. Unfortunately, this means that a lot of people aren't very familiar with how AI actually works. AI uses a process called machine learning. In a very broad sense, that means a computer taking data and using it to come up with a conclusion, or a prediction. Back to YouTube's recommendations: how do you think it knows what you like to watch? If I click on one chess video, it will soon recommend me another. If I ignore those recommendations and click on other videos instead, YouTube stops suggesting chess. But if I keep clicking on them, YouTube learns that I like to watch chess, and it fills my feed with chess videos. It's simple pattern logic, or inductive reasoning.
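To make that concrete, here is a minimal sketch of that inductive loop in Python. The topics and click history are made up, and a real recommender weighs far more signals (watch time, like/dislike ratio, and so on), but the core idea is the same: count what you clicked before, predict what you'll want next.

```python
from collections import Counter

# Hypothetical click history: each entry is the topic of a video I clicked.
clicks = ["chess", "cooking", "chess", "chess", "music", "chess"]

def recommend(clicks, top_n=2):
    """Recommend the most-clicked topics: pure inductive reasoning,
    treating past clicks as evidence of future interest."""
    counts = Counter(clicks)
    return [topic for topic, _ in counts.most_common(top_n)]

print(recommend(clicks))  # ['chess', 'cooking']: chess dominates the feed
```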
However, there's a lot more nuance to the subject. People have speculated about whether computers could ever surpass the human brain, and in some ways they already have: they can perform impressive calculations at extremely high speed. But we need to distinguish computers from the human brain in one important way: programming versus machine learning. Humans don't exactly use machine logic, but we use somewhat similar logic, so let's go with it. When a computer is told to do something, it follows predefined instructions. When I use an application, like a video game, the computer takes button or screen inputs, and each input triggers a specific set of actions in a specific order. This is programming: essentially deductive reasoning. If this button is pressed, do this thing. That's also the cause of glitches in apps: an input the programmer never thought of can send the computer down a set of instructions that produces an unintended output.
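Here's a tiny illustration of that deductive style in Python. The buttons and actions are hypothetical, but the structure is exactly what the paragraph describes: every known input maps to hard-coded instructions, and anything outside the map is where glitches come from.

```python
# A hypothetical, hard-coded input handler: every known input maps to a
# predefined action. If this button is pressed, do this thing.
ACTIONS = {
    "A": "jump",
    "B": "attack",
    "START": "pause",
}

def handle_input(button):
    if button in ACTIONS:
        return ACTIONS[button]
    # An input the programmer never anticipated. The machine still follows
    # its rules, but no rule was written with this case in mind: a glitch.
    return "undefined behavior"

print(handle_input("A"))    # jump
print(handle_input("A+B"))  # undefined behavior, the 'glitch' case
```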
With AI, there is still programming, but the interesting part uses a model called machine learning. The programming of AI like ChatGPT is built around language. When you ask ChatGPT a question, a few hard-coded scripts still run; it has to know when you hit certain buttons, obviously. But each answer is not built into the app. If I ask for a recipe for a smoothie (I don't know, that's just what came to mind), ChatGPT doesn't have a specific programmed smoothie recipe sitting somewhere. Instead, it was trained ahead of time on an enormous amount of text that people have written, learning the patterns in how words follow one another, and human feedback during training shaped which kinds of answers it prefers. When my question comes in, it uses those learned patterns to generate a new answer for me, word by word, rather than looking one up. That is the crucial difference between conventional computers and AI.
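As a very rough sketch, here is a toy next-word predictor in Python. ChatGPT's actual model is incomparably larger and predicts "tokens" rather than whole words, and this two-sentence "corpus" is obviously made up, but the basic move is the same: learn which words tend to follow which from past text, then generate new text from those patterns instead of retrieving a stored answer.

```python
import random
from collections import defaultdict

# A tiny stand-in for training data; the real model saw vastly more text.
corpus = ("blend the banana with milk and honey . "
          "blend the berries with yogurt and honey .").split()

# Learn which words tend to follow which. The 'patterns' live here,
# not in any hand-written recipe.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(word, length=8):
    """Generate text by repeatedly predicting a plausible next word."""
    out = [word]
    for _ in range(length):
        out.append(random.choice(following[out[-1]]))
    return " ".join(out)

print(generate("blend"))  # e.g. 'blend the berries with milk and honey . blend'
```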
This might be obvious to you, but it matters for epistemology. A lot of people say that humans have a sort of sixth sense that computers don't, and that for this reason AI will never be able to replicate us. But this misunderstands both how the human brain works and how AI works. There are multiple types of machine learning, one of which is deep learning, and deep learning uses what is called a neural network: a learning structure loosely modeled on the human brain, hence the name. These machines take in more data and, in turn, improve their answers. That is also what humans do; we need more and more information on a subject to build a better and more thorough understanding of it. The main difference is that humans use a very different, far more efficient mechanism. AI runs on electricity and stores most of its data as sequences of electrical charges (binary), whereas the human brain relies on chemical signaling as well as electrical impulses, storing its information in the connections between neurons.
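To give a feel for what an artificial "neuron" actually is, here is a minimal, untrained network in Python (assuming NumPy is available; the layer sizes and input values are arbitrary). The point is that the "knowledge" lives in the connection weights, loosely echoing how the brain stores information in connections between neurons, and that learning means adjusting those weights from data.

```python
import numpy as np

# A minimal, untrained neural network with one hidden layer of
# artificial 'neurons'. All numbers here are arbitrary.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # input -> hidden connection weights
W2 = rng.normal(size=(4, 1))  # hidden -> output connection weights

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def forward(x):
    hidden = sigmoid(x @ W1)  # each hidden neuron 'fires' on weighted input
    return sigmoid(hidden @ W2)

x = np.array([0.5, -1.0, 2.0])  # a made-up 3-feature input
print(forward(x))               # some output; training would adjust W1, W2
```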
In that sense, the human brain operates a lot like a computer, and we process many more types of information than AI does. Each of our five senses is a different channel of information gathering. Your vision may look buttery smooth, but it's really just your neurons firing off light information at very high speed. Basically, it's a really fast animation: the optic nerve keeps sending fresh signals for your brain to assemble into an updated image.
So, in conclusion, humans and AI aren't so different, and AI isn't some ridiculous new technology at all. People might still disagree, as it is a complex subject, but I think this conclusion holds up.
But regardless of what happens, one thing is for sure: ChatGPT is definitely not smarter than a human. It’s going to take a long time before that happens.