What is GPT-3 and how does it work?
Let’s start from the beginning. The search for an artificial intelligence (robot, automaton, or whatever you want to call it) capable of passing as human goes back hundreds of years and has been so important to science that in 1950 Alan Turing, arguably the most important scientist in computing, proposed his famous test. In general terms, it says that if a human can hold a conversation with an AI through a terminal (like a chat) and, after a certain time, not realize they are talking to a machine, then that machine is as intelligent as a human. This test has already been passed, only in a very different way from how Turing envisioned it and not through real intelligence. Here we will tell you how we are getting closer to having this artificial intelligence by explaining what GPT-3 is and how it works.
What is GPT-3?
GPT-3 stands for Generative Pre-trained Transformer, 3rd generation, and it is a technology capable of generating text that simulates human writing. The technology, created by OpenAI, is, as we said, already in its third version, and although the previous ones attracted a lot of attention from the scientific community, this is the one that has captured the general public’s attention. And with good reason: GPT-3 is capable of creating short texts in the style of press articles or blog posts from just a few sentences.
GPT-3 is a language model that uses Deep Learning. Deep Learning is a machine learning technique that, in not very technical terms, is capable of analyzing a set of “somethings” (in this case, text), recognizing certain behavioral rules of those “somethings”, and then either recognizing another “something” that was not part of the initial set or producing a new “something” of its own with characteristics similar to those it analyzed (the latter is the objective of GPT-3). The larger the training set, the better the AI will be at recognizing or producing new elements, and GPT-3’s training set has been enormous. In fact, GPT-3 was trained on around 45 terabytes of text from the Internet, an unprecedented amount so far.
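To make the idea of “learning rules from examples and then recognizing something new” concrete, here is a deliberately tiny sketch. It is not how deep learning actually works (GPT-3 uses huge neural networks); it just counts letter patterns in a small training set and then scores a word that was never in that set. All names and examples here are invented for illustration.

```python
from collections import Counter

# Two tiny "training sets": words that share a pattern, and words that don't.
verbs = ["running", "jumping", "coding", "reading", "writing"]
others = ["cat", "table", "blue", "seven", "cloud"]

def suffix_score(word, examples, n=3):
    """Count how often this word's last-n-letter suffix appears in the examples."""
    suffixes = Counter(w[-n:] for w in examples)
    return suffixes[word[-n:]]

# "singing" was never in the training set, yet its pattern is recognized
# because the model learned that the "-ing" suffix is common in `verbs`.
print(suffix_score("singing", verbs) > suffix_score("singing", others))  # True
```

The point is the same one made above: the more (and more varied) examples the model sees, the more reliable these learned patterns become.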
How does GPT-3 work?
GPT-3 has received information from different sources: Common Crawl, an organization that for 8 years has been dedicated to collecting information from all over the Internet; WebText2, a collection of text extracted from different web pages that has been curated by humans; Wikipedia, which has also made its contribution; and finally a series of electronic books. News, blogs, encyclopedias, legal texts, poetry, songs, novels, computer programs: billions of text examples were used for GPT-3’s training. During training, the model was shown text with some words and phrases missing, and the AI had to complete it with the knowledge acquired during its training, staying consistent with the text it was given. After all this training, it could be given a sentence or asked a question, and the AI was able to construct a text from the input sentences or answer the questions asked. Its capabilities are so impressive that it can “translate” requirements written in natural language into legal language. That is to say, in a short time we will have apps to which we explain our terms and which generate a contract adjusted to them.
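The fill-in-the-blank task described above can be imitated in miniature. This hypothetical sketch (not GPT-3’s real method, which uses a neural network over billions of examples) scores candidate words for a blank by counting how often each one appeared between the same neighboring words in a small training text. The corpus and function names are invented for illustration.

```python
from collections import Counter

# A tiny "training text", tokenized into words.
training_text = (
    "the dog chased the ball . the dog fetched the ball . "
    "the cat ignored the ball"
).split()

def fill_blank(left, right, candidates):
    """Pick the candidate seen most often between `left` and `right` in training."""
    scores = Counter()
    for i in range(1, len(training_text) - 1):
        if training_text[i - 1] == left and training_text[i + 1] == right:
            if training_text[i] in candidates:
                scores[training_text[i]] += 1
    return scores.most_common(1)[0][0] if scores else None

# Complete "the ___ chased" using what was learned from the training text.
print(fill_blank("the", "chased", {"dog", "cat"}))  # prints "dog"
```

GPT-3 does something conceptually similar at an enormously larger scale: it completes missing text with whatever its training data suggests fits best in that context.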
Mathematical operations and programming
Can GPT-3 think?
So is this it? Are we in the presence of an Artificial Intelligence capable of reasoning and behaving like a human? Will it soon become aware of itself and relegate us to being one more link in the chain of evolution? No, of course not (at least for now). GPT-3 is not capable of reasoning or thinking; it actually has no idea what it is writing. What it does (explained in very simple terms, of course) is build, from all those millions of texts it has read, probabilistic models that allow it to select the next word to write while maintaining the coherence of the text. It is as if the predictive text on your phone had been accidentally exposed to a gamma ray discharge during an experiment gone wrong in a government facility, accompanied by Dr. Betty Ross. This means that, although the generated text is coherent when read, the result has not been reasoned about, nor does it conform to any scheme of values. Some of the generated texts may be offensive, perverse, or inappropriate, but this is not “intentional”, since the AI at this moment lacks intentions of any kind. It is based only on what it has learned from the training texts, and its sole objective is that the final result be coherent when read. So all the toxicity that has occasionally appeared in GPT-3’s output is nothing more than a reflection of our own toxicity (we don’t want to imagine how much it may have learned from Twitter).
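The “probabilistic model that selects the next word” idea can be sketched with a toy bigram model. This is emphatically not GPT-3’s architecture (GPT-3 is a huge Transformer neural network); it only shows the core intuition: count which word follows which in training text, then generate by sampling the next word in proportion to those counts. The corpus and names are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# A tiny training corpus, tokenized into words.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# follow[w] maps each word to a Counter of the words observed right after it.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def next_word(word, rng=random.Random(0)):
    """Sample the next word proportionally to how often it followed `word`."""
    words, weights = zip(*follow[word].items())
    return rng.choices(words, weights=weights)[0]

# Generate a short "coherent" text starting from "the".
word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Notice that the program has no idea what a cat or a mat is; it only reproduces statistical patterns from its training text, which is exactly the point the paragraph above makes about GPT-3.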
OpenAI, the company
Although at this moment GPT-3 is not capable of writing a novel or even a short story, we have to remember that OpenAI is a company founded in 2015, and what we are seeing is the result of those five years of work. We also know how fast technology evolves. Taking all these variables into account, what can we expect in the next five years? And in the next fifty? The possibilities are endless.
Well, that concludes our look at what GPT-3 is and how it works. We loved writing this article. Or maybe not, because you could be reading an article about GPT-3 written by GPT-3.