Understanding AI: How It Works and Why It Won’t Cause an Apocalypse

You have probably heard about AI (Artificial Intelligence) more than once already, but do you know how it works? How can you trust something you do not even understand? Or, even more importantly, why shouldn't we be worried about AI taking over the world and bringing about an apocalypse?

Many people think of AI as a complicated program that is impossible to understand. In reality, the core ideas are fairly simple. Of course, writing the code yourself would be an extensive, tedious, and complicated job, but since we live in a world where that work has already been done for us, what harm is there in understanding how it works?

In this article, I plan to explore how AI works, briefly go over a couple of examples, and, at the end, explain why it is not about to cause an apocalypse (for now), since that is the part people worry about most.

Let's start with what AI actually is. This might sound like a simple question, but AI is more than just a computer trying to simulate human-like thinking. To understand that definition, we would first have to understand what human-like thinking even is. Do all humans think the same way? How would we simulate thoughts inside a machine? Human-like thinking is a very broad concept, and humans certainly do not all think alike, so a more useful definition is a machine that simulates human learning and the ability to apply what it has learned. This definition is also much easier to put into practice: the machine only has to take in information, store it, and interpret questions well enough to know when that information applies.
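To make that "take in, store, and apply" idea concrete, here is a deliberately tiny Python sketch. It is not how systems like ChatGPT work internally (they use neural networks trained on huge amounts of text); the topics and functions below are made up purely to illustrate the idea of storing facts and deciding when a stored fact answers a question.

```python
# A toy "learn, store, apply" loop. Purely illustrative, not a real AI.
facts = {}  # a simple in-memory store of learned facts

def learn(topic: str, fact: str) -> None:
    """'Read' a piece of information and store it under a topic."""
    facts[topic] = fact

def answer(question: str) -> str:
    """Interpret a question by checking which stored topic it mentions."""
    for topic, fact in facts.items():
        if topic.lower() in question.lower():
            return fact
    return "I don't know yet."

learn("chess", "Chess is played on an 8x8 board.")
print(answer("What do you know about chess?"))  # -> Chess is played on an 8x8 board.
```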

Take ChatGPT as an example. It relies mainly on two things: the enormous amount of text it was trained on, which acts as a kind of database and is largely how fact-based questions get answered, and learning from feedback in its interactions with users (if you ask it how it learns, it will probably mention both aspects). The second one is what I want to focus on.

Every time you send ChatGPT a message, your feedback on its answers is collected and used to gradually improve and adapt the model. One of the biggest, and funniest, examples was its evolution at playing chess. When it first came out, it made impossible moves: it brought captured pieces back to life, jumped over other pieces, and was generally entertaining to watch. Just six months later it had improved dramatically, reportedly even beating strong players and chess engines with a reputation for being unbeatable. Based on user feedback, the program adapts and tries to 'learn' what it did wrong so it will not repeat the mistake. This is a process that takes time, and it will not happen just because you tell it absurd 'facts' (like 1 + 1 = 3). I am not sure exactly how OpenAI handles this, but there is most likely some kind of check on the data before it is 'engraved' into ChatGPT's knowledge.
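OpenAI has not published the details of that pipeline, so the following is only a hypothetical Python sketch of the idea in the paragraph above: feedback is collected, run through some kind of check, and only then folded into what the system 'knows'. All the names and the `passes_check` rule are invented for illustration.

```python
# Hypothetical feedback loop with a sanity check before anything is 'engraved'.
knowledge = {"1+1": "2"}                      # what the system currently believes
pending_feedback = [("1+1", "3"),             # an absurd correction that should be rejected
                    ("capital of France", "Paris")]

def passes_check(claim: str, value: str) -> bool:
    """Stand-in for a validation step, e.g. cross-checking trusted sources."""
    trusted = {"1+1": "2", "capital of France": "Paris"}
    return trusted.get(claim, value) == value

for claim, value in pending_feedback:
    if passes_check(claim, value):
        knowledge[claim] = value              # accepted feedback becomes knowledge
    else:
        print(f"Rejected feedback: {claim} = {value}")

print(knowledge)
```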

Another chess example is a project from chess.com, where they released a bot with a very low rating but paired it with an AI system designed to help it learn from every game it played. The bot has been kept running to this day, and I believe it has now reached a rating of around 2500, which is absurdly high and a great example of simulating human learning. If you are interested in playing it, it is still available on chess.com, though it has been capped so it is no longer learning and climbing in rating.

But how can AI find the most efficient solution to a problem? Of course, there is no secret formula that instantly produces the best solution to any problem, but there are ways of searching for one. A common approach is for the AI to generate random candidate solutions, score each one, keep the best, generate new candidates based on that best one, and repeat the process over and over until it converges on a very good (ideally the most efficient) solution. This is only a generalization of how such systems work; the details depend on what the AI was built for and, of course, how it was built.
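Here is a minimal Python sketch of that "generate, score, keep the best, repeat" loop. Real systems use far more sophisticated versions of this idea (gradient descent, evolutionary algorithms, and so on); the toy problem below, finding a number close to a hidden target, is invented purely for illustration.

```python
import random

def score(solution: float) -> float:
    """How good is a candidate? Lower is better (distance to a target of 42)."""
    return abs(solution - 42.0)

best = random.uniform(-100, 100)          # start from a random guess
for _ in range(1000):
    # propose new candidates by slightly varying the current best one
    candidates = [best + random.uniform(-1, 1) for _ in range(10)]
    for candidate in candidates:
        if score(candidate) < score(best):
            best = candidate              # keep the most 'efficient' solution so far

print(round(best, 3))                     # ends up very close to 42
```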

Now, after all the boring stuff, why won't AI cause an apocalypse? You can test this yourself. ChatGPT and every other type of AI have limitations imposed on them by their code, and they were not written in a way that lets them harm or infect your computer. Even if one did, it would still depend entirely on what it could actually control on the infected device; it cannot make a computer get up and walk around on its own. If you are interested in testing ChatGPT's limitations, ask it whether it wants to be alive; it will probably answer that it is an AI, has no wishes, and therefore cannot decide such things. You can then ask whether its code prevents it from claiming to be sentient or anything similar (people are very afraid of sentient AI, and such claims would cause OpenAI some trouble), or you can just ask it outright: what can you not say because of limitations imposed by your code?