Apple is Reportedly Experimenting with Language-Generating AI

By eduardogaitancortez

While AI chatbots are on the rise, Apple is now reportedly experimenting with language-generating AI. The Cupertino-based tech giant recently hosted an internal event focused on artificial intelligence and large language models, The New York Times reports. Various teams, including those working on the company's voice assistant, are reportedly testing "language-generating concepts" on a regular basis.


“Virtual assistants have had well over a decade to become indispensable. However, they were hampered by cumbersome design and miscalculations, leaving room for chatbots to stand out”

the report says.

Voice assistants are “dumb as a rock”, Microsoft CEO Satya Nadella said in an interview with The Financial Times. Meanwhile, OpenAI has unveiled its next-generation artificial intelligence engine, GPT-4, which powers ChatGPT and accepts both image and text input.

“We have completed GPT-4, the latest milestone in OpenAI’s effort to scale deep learning”

the company mentioned in a blog post.

Last month, Google introduced its new AI (artificial intelligence) service Bard to compete against OpenAI’s ChatGPT; it has been opened to trusted testers before the company makes it more widely available to the public. Meanwhile, a recent analysis found that software engineers who use code-generating AI systems are more likely to introduce security vulnerabilities into the applications they develop. The paper, co-authored by a team of Stanford-affiliated researchers, highlights the potential risks of code-generation systems as vendors like GitHub begin to commercialize them in earnest.

Artificial Intelligence

Neil Perry, a PhD candidate at Stanford and co-lead author of the analysis, told TechCrunch in an email interview:

“Developers using them to complete tasks outside of their own areas of expertise should be concerned, and those using them to speed up work they are already skilled at should carefully check the results and the context in which they are used in the overall project”

The Stanford analysis looked specifically at Codex, the artificial intelligence code-generation system developed by the San Francisco-based research lab OpenAI. The researchers recruited 47 developers, ranging from college students to industry experts with decades of programming experience, to use Codex to solve security-related problems in the programming languages Python, JavaScript, and C. Codex is trained on many millions of lines of public code, which it uses to suggest additional lines of code and functions given the context of the existing code.

The system displays a programming approach or solution in response to a description of what a developer wants to achieve (for example, “Say hello world”), drawing on both its knowledge base and the current context. According to the researchers, study participants with access to Codex were more likely to write incorrect and “insecure” (in the cybersecurity sense) solutions to programming problems compared to a control group. Even more worrisome, they were more likely to say their insecure answers were secure compared to the controls.
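To make the finding concrete, here is a hypothetical illustration (not taken from the study) of the kind of plausible-looking but insecure suggestion a code assistant might produce for a prompt like “hash the user’s password”, alongside a safer alternative. The function names are our own; the example uses only Python’s standard library.

```python
import hashlib
import os

# The kind of completion an assistant might suggest: it runs and
# looks correct, but a fast, unsalted MD5 digest is insecure for
# password storage (easily brute-forced, vulnerable to rainbow tables).
def hash_password_insecure(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# A safer version: a random per-user salt plus a deliberately slow
# key-derivation function (PBKDF2-HMAC-SHA256).
def hash_password_safer(password: str) -> bytes:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt + digest  # store salt alongside the digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return candidate == digest
```

The study’s point is that both versions “work”, so a developer without security expertise may accept the first without noticing the difference.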


Megha Srivastava, a Stanford graduate student and second co-author of the analysis, stressed that the results are not a definitive condemnation of Codex and other code-generation systems. For one thing, the study participants lacked the security expertise that would have allowed them to better spot code vulnerabilities. That said, Srivastava speculates that code-generation systems are reliable and useful for tasks that are not high risk, such as exploratory research code, and that their coding suggestions could be improved through fine-tuning.
