Meta bets on AI
March 7, 2023
Meta to develop ‘AI personas’
On Tuesday afternoon, February 28, Meta announced that it is dedicating part of its staff to developing tools focused on artificial intelligence (AI) and that, going forward, it will focus on creating ‘AI personas’ that can help users make use of its services. Along those lines, the company recently presented its language AI model, which works similarly to OpenAI’s ChatGPT and is called LLaMA, short for ‘Large Language Model Meta AI’.
Meta commented that, for the time being, LLaMA is a research project in development and that it expects to continue training it over the coming months. The company added that the model will be available under a non-commercial license to researchers as well as entities and organizations affiliated with government, academia, and civil society. The company’s CEO, Mark Zuckerberg, explained that part of his team has focused its efforts on developing tools based on generative AI, with the aim of applying them to Meta’s products.
The executive stressed that this team will soon focus “on developing AI personas that can help users in different ways” and commented that it is currently exploring “experiences with text, such as chat in WhatsApp and Messenger”; with images, as demonstrated by Instagram’s creative filters and other advertising formats; with video; and with “multimodal experiences”.
Meta scrutinizes intimate and illegal content on social networks with the development of a new platform.
Meta announced the launch of a tool to tackle child sexual abuse content and explicit images, which are banned and illegal on social networks. “We announce that Facebook and Instagram are founding members of a new platform, also available in Spanish, designed to proactively prevent intimate images of young people from being disseminated on the Internet,” the company announced on its website. With the support of the National Center for Missing and Exploited Children (NCMEC), Meta aims to combat the trafficking and dissemination of this type of content, which threatens the integrity of minors, violates users’ privacy, and undermines a safe experience on the Internet.
It should be noted that the project builds on the success of the StopNCII platform, in which Meta also participated. StopNCII was launched in 2021 with the support of South West Grid for Learning (SWGfL) and more than 70 NGOs around the world. Like this new tool, that platform worked to prevent the spread of intimate images online.
Google, seeing a sudden threat to its search engine dominance, quickly announced it would soon launch its own language AI, known as Bard. But reports of disturbing exchanges with Microsoft’s Bing chatbot, including threats and a stated desire to steal nuclear codes, went viral, setting off alarms that the technology was not ready.
Meta said these problems, sometimes called hallucinations, could be better remedied if researchers had greater access to this expensive technology. OpenAI and Microsoft strictly limit access to the technology behind their chatbots, prompting criticism that such restrictions forgo the potential benefit to society of improving the technology more quickly. “By sharing the LLaMA code, other researchers can more easily test new approaches to limit or eliminate these problems,” Meta said.
Facebook is exploring its own AI
According to the application’s vice president, Tom Alison, Meta has a research group dedicated to high-value studies of the technology behind artificial intelligence: “They have been developing models like ChatGPT and have even been conducting research that goes beyond that. We have published what they have been doing on our blog, so we are actively doing research. We will be developing our own models as part of our journey,” he said. The executive also stressed that Meta’s goal is not to integrate a new service for users without first thinking about how the new technology can genuinely serve people.
How does LLaMA work?
Trained on text in 20 different languages, with a focus on those using Latin and Cyrillic alphabets, LLaMA takes sequences of words as input and predicts the next word, generating text recursively. Unlike other AI language models, Meta explains, LLaMA is a small model that requires less computing power and fewer resources, yet it is sufficient to produce competitive results across a wide range of applications. Throughout the model’s development, several sizes have been trained, depending on the number of tokens used.
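The generation loop Meta describes can be illustrated with a minimal sketch. The toy predictor below is not LLaMA (a real model conditions on the whole sequence with a neural network); a simple lookup table stands in for the model, purely to show how next-word prediction, applied recursively, produces text:

```python
# Toy illustration of autoregressive text generation: given a sequence
# of words, a "model" predicts the next word, and the prediction is
# appended and fed back in, recursively extending the text.
# The lookup table below is a hypothetical stand-in for LLaMA.

NEXT_WORD = {
    "the": "model",
    "model": "predicts",
    "predicts": "the",
}

def generate(prompt: str, steps: int) -> str:
    """Extend a prompt word by word using the next-word predictor."""
    words = prompt.split()
    for _ in range(steps):
        next_word = NEXT_WORD.get(words[-1])
        if next_word is None:  # no prediction available: stop early
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the", 3))  # -> "the model predicts the"
```

A real language model replaces the lookup table with a probability distribution over its entire vocabulary, conditioned on all preceding tokens, but the outer loop is the same.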
Like any other artificial intelligence launched in recent months, LLaMA is not exempt from errors and untruthful information, so Meta explains that it is currently working to minimize problems such as bias, toxic comments, and other problematic outputs. Thanks to the model’s non-commercial license and research focus, the company hopes other teams can also collaborate in this mission.
Meta’s AI functions
What the company is pursuing with this technology is not a consumer product but the systems behind such products. LLaMA is a deep learning model that can recognize, summarize, translate, predict, and generate text. Meta has already tested its artificial intelligence, which has “shown great promise in generating text, having conversations, summarizing written material and more complicated tasks such as solving mathematical theorems or predicting protein structures.”
It will thus serve as the engine behind scientific research built on it, broadening access to a type of technology that until now required extremely powerful computing infrastructure to train and run. With LLaMA, Meta claims to offer a smaller base model that is easier to train and requires less computing power to run experiments, explore approaches, and validate work.
By opening the doors to researchers, the company hopes its artificial intelligence can demonstrate its intended versatility: being less demanding, it can absorb more information, adapt to more use cases, and avoid being limited to specific tasks, as has happened with GPT. For the time being, this artificial intelligence will be licensed for non-commercial use and only for scientific purposes, so access will be evaluated case by case for academic researchers and for organizations in government and civil society.