AI in cybercrime and countering threats: what the NSA thinks

January 13, 2024
Artificial Intelligence has become an increasingly relevant component in the world of cybersecurity, both for criminal organizations and nation-states. According to Rob Joyce, director of cybersecurity at the U.S. National Security Agency (NSA), the use of AI by criminals and state hackers is a well-established reality. However, Joyce points out that U.S. intelligence is itself exploiting AI technologies to detect malicious activity.
AI in the hands of criminals and state hackers
According to Joyce, both criminals and state hackers make extensive use of the generative AI models offered by large companies in the industry. These tools allow them to create artificial texts and images that can be used to conduct cyber attacks and espionage campaigns. Joyce did not provide specific details about AI attacks or attribute particular activities to a specific state or government.
Joyce cited recent efforts by China-backed hackers to target critical U.S. infrastructure as an example of why AI-based detection gives U.S. intelligence an advantage. Instead of deploying traditional malware that could be easily detected, Chinese hackers exploit vulnerabilities and implementation flaws to infiltrate networks while appearing to be authorized users.
AI in the service of U.S. intelligence
Joyce pointed out that AI, machine learning and big data are helping to improve the ability of U.S. intelligence to detect malicious activity. With these technologies, anomalous behavior that does not match that of legitimate critical infrastructure operators can be detected. This makes it possible to identify suspicious activity and give law enforcement operations an edge.
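To make the idea concrete, here is a minimal, purely illustrative sketch of baseline-based anomaly detection: normal activity for an account is summarized statistically, and behavior that deviates sharply from that baseline is flagged. All data, function names, and thresholds below are hypothetical and are not drawn from any NSA tooling.

```python
# Illustrative sketch only: flag activity that deviates from a baseline
# of legitimate operator behavior using a simple z-score test.
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize normal activity (e.g., daily commands issued by an account)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical daily command counts for a legitimate operator account
normal_activity = [98, 105, 101, 97, 110, 103, 99, 102]
baseline = build_baseline(normal_activity)

print(is_anomalous(104, baseline))  # typical volume: False
print(is_anomalous(450, baseline))  # sudden spike: True
```

Real systems use far richer features (login times, lateral movement, command patterns) and learned models rather than a single statistic, but the principle is the same: detection rests on deviation from an established norm, which is what makes "living off the land" intrusions by apparently authorized users detectable at all.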
The use of AI in cybersecurity is not without risk. Joyce said that AI is not a “super tool” that can turn an incompetent person into a competent one, but it does make those who use it more effective and dangerous. For example, attackers can use AI to create increasingly convincing phishing messages or to improve the technical aspects of attacks that they would otherwise be unable to execute on their own.
The challenges of AI in cybersecurity
The use of AI in the context of cybersecurity raises important security and privacy issues. The U.S. government, aware of these challenges, has introduced an executive order to establish new security and reliability standards for AI. The goal is to protect society from abuse and error by ensuring responsible use of these technologies.
The Federal Trade Commission (FTC) recently warned of potential threats related to the use of AI. For example, tools such as ChatGPT can be used to increase the reach of fraud and scams. It is therefore critical to develop control mechanisms and rules to prevent their misuse.