Mistakes involving Artificial Intelligence
October 21, 2020
We hear constantly that technology is doing wonderful things, improving and automating processes in our daily lives. The industry has advanced enormously worldwide.
A question many people ask is: how do machines gain the ability to understand and learn what they must do? Does a machine always do exactly what it was programmed to do, or can unforeseen behavior occur?
Let’s look at cases where Artificial Intelligence (AI) technology has failed. To do that, let’s first understand how its execution logic works.
How do algorithms work?
In principle, a machine executes the commands programmed for it through lines of code. However, there is also something called machine learning, which is a branch of AI.
With this kind of learning, the machine can effectively teach itself: as more information arrives, programmers do not need to write new code for the learning to continue, because the machine adapts on its own.
Programmers use specific techniques so that algorithms can learn autonomously. This code makes the machine’s behavior AI-driven: the combination of code and training data is what makes the machine capable of learning on its own.
These algorithms let AI solve problems both simple and complex. For example, it becomes possible to recognize shapes and objects in images.
At first, images with more detail may be harder to recognize. Over time, however, and with more data, the system recognizes images more easily and quickly, without any user having to contribute anything.
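The idea that a program classifies new inputs from accumulated examples, rather than from hand-written rules, can be shown with a toy sketch. This is a minimal nearest-neighbor classifier in plain Python; the feature values and labels are invented for illustration and do not represent any real image-recognition system.

```python
import math

def nearest_neighbor(train, point):
    """Classify a point by the label of its closest training example."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Pick the training example whose features are closest to the query.
    closest = min(train, key=lambda ex: dist(ex[0], point))
    return closest[1]

# Hypothetical training data: (feature vector, label) pairs, e.g.
# crude measurements extracted from shapes in an image.
train = [
    ((1.0, 1.0), "circle"),
    ((1.2, 0.9), "circle"),
    ((5.0, 0.2), "line"),
    ((4.8, 0.1), "line"),
]

print(nearest_neighbor(train, (1.1, 1.0)))  # prints "circle"
print(nearest_neighbor(train, (5.1, 0.3)))  # prints "line"
```

Adding more labeled examples to `train` improves the answers without changing a single line of the classification code, which is the point the text makes about continuous learning.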
This type of technology is also used on the web by large companies, notably in search engines and online services. That is how companies that stream films and series know which titles you are likely to want to watch.
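The "titles you are likely to want to watch" behavior can be sketched with a simple co-occurrence heuristic: recommend what was watched by other users who share titles with you. This is a toy illustration with made-up viewing histories, not the algorithm of any real streaming service.

```python
from collections import Counter

def recommend(histories, user_history, top_n=2):
    """Toy 'people who watched X also watched Y' recommender:
    score each unseen title by how often it appears in the
    histories of users who share at least one title with us."""
    scores = Counter()
    watched = set(user_history)
    for history in histories:
        if watched & set(history):  # only learn from similar users
            for title in history:
                if title not in watched:
                    scores[title] += 1
    return [title for title, _ in scores.most_common(top_n)]

# Hypothetical viewing histories of other users.
histories = [
    ["Drama A", "Thriller B", "Comedy C"],
    ["Drama A", "Thriller B"],
    ["Comedy C", "Documentary D"],
]

print(recommend(histories, ["Drama A"]))  # prints ['Thriller B', 'Comedy C']
```

Real services use far richer signals (ratings, watch time, embeddings), but the principle is the same: predictions come from patterns in other users' data.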
Of course, with any technology, however advanced it may be, failures and unforeseen events can always occur.
Google search failures
Everyone has surely used Google for some kind of research, and let’s face it: it is much easier to surf the net and find what we need with this tool.
As simple as Google’s interface is, the algorithm behind the screen is enormously complex, with a wide variety of conditions in the code for analyzing, ranking, and rejecting keywords.
As stated, the system learns from the information it receives. In this case, that information comes from many different users, which can lead to failures in search results.
One of the most-visited parts of Google is the image results. Sites and pages that have a lot of traffic are likely to have their images displayed.
A curious case arose from searches for camps in South Africa: the image results showed mostly white people, which caused controversy because, statistically, the vast majority of the people living there are black.
Users can also influence searches directly. For example, several users started a kind of online manifesto so that searches for the word “idiots” would surface the presidents of the United States and Brazil, Donald Trump and Jair Bolsonaro.
A prejudiced Chatbot
Twitter can be a powerful tool for improperly manipulating artificial intelligence. Microsoft’s Tay chatbot proved this when it launched.
Tay learned the common language of teenagers and built its repertoire of information from that format. However, the data it received came loaded with Nazi statements and racial slurs.
Fanatical users ended up turning the bot’s own learning against it, and its interactions started to sound like those of a fanatic, which practically forced Microsoft to shut Tay down.
Failures in recognizing the face
Facial recognition AI is very common and well known, both in cell phones and in environments where security needs to be reinforced. Its algorithms are very complex and can raise serious concerns, as several facial recognition incidents have shown.
In 2015, users noticed that Google Photos had analyzed some photos of black people and classified them as gorillas.
Three years later, an analysis by the ACLU found that Amazon’s Rekognition facial identification program had falsely identified more than 25 members of the US Congress as police fugitives; the information was false, and the errors disproportionately harmed black members.
Another incident occurred with Apple’s Face ID: two different Chinese women were identified as the same person, and for that reason one of them could unlock the other’s iPhone.