These AI tools are not secure, could lead to cyber attacks: Research

Scientists have found that AI tools are not secure and can be manipulated to generate malicious code, potentially leading to cyber attacks.

By Ground Report
  • Scientists at the University of Sheffield have found that natural language processing (NLP) tools, including ChatGPT, can be manipulated to generate malicious code, potentially leading to cyber attacks.
  • This research is the first to show that NLP models can be used to attack real-world computer systems across various industries.
  • The study reveals that AI language models are susceptible to simple backdoor attacks, such as Trojan Horse deployment, which could be activated at any time to steal data or disrupt services.

A recent study by scientists at the University of Sheffield has revealed that natural language processing (NLP) tools, including ChatGPT, can be manipulated to generate malicious code, potentially leading to cyber attacks. This groundbreaking research is the first to show that NLP models can be exploited to compromise real-world computer systems widely used across various industries.

The study’s findings indicate that AI language models are susceptible to straightforward backdoor attacks, such as Trojan Horse deployment, which could be activated at any time to steal information or disrupt services. The research also underscores the security risks associated with using AI tools for learning programming languages and interacting with databases.

What are the security threats in AI tools such as ChatGPT?

The researchers presented their findings on 10 October 2023 at ISSRE, a highly influential software engineering conference, where their study was shortlisted for the conference’s ‘Best Paper’ award.

The researchers discovered security flaws in six commercial AI tools, including ChatGPT and BAIDU-UNIT, a leading Chinese intelligent dialogue platform. They found that by asking specific questions, these AIs could be tricked into generating harmful code that could leak confidential database information, disrupt normal service, or even destroy the database.
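
The article does not reproduce the researchers’ actual prompts or model outputs, so the sketch below is only a hedged illustration of the general failure mode: if an application executes whatever SQL a Text-to-SQL model produces, a single crafted answer can both return data and destroy it. The `patients` table and the generated query here are hypothetical.

```python
import sqlite3

# Throwaway in-memory database standing in for a real backend.
conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT);"
    "INSERT INTO patients VALUES (1, 'Alice');"
)

# Hypothetical output from a Text-to-SQL model given a crafted question;
# the researchers' real prompts and generated queries are not reproduced here.
generated_sql = (
    "SELECT name FROM patients WHERE id = 1; "
    "DROP TABLE patients;"  # destructive statement hidden behind a routine-looking query
)

# executescript() runs every statement in the string, so the DROP TABLE
# executes alongside the innocuous SELECT.
conn.executescript(generated_sql)

# The table is gone: this prints 0.
print(conn.execute(
    "SELECT count(*) FROM sqlite_master WHERE name = 'patients'"
).fetchone()[0])
```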

Xutan Peng, a PhD student at the University of Sheffield who co-led the research, expressed concern about companies’ lack of awareness regarding these threats. He highlighted that while ChatGPT itself poses minimal risk as a standalone system, it can be manipulated into producing harmful code that could severely damage other services.

The study also emphasizes the potential dangers of using AI to learn programming languages for database interaction. As more people use AI tools like ChatGPT for productivity tasks rather than casual conversation, these vulnerabilities carry greater real-world risk. For instance, a nurse using ChatGPT to write an SQL command for a clinical records database could inadvertently cause serious data management problems without receiving any warning.
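
As a purely illustrative mitigation, not something described in the study, an application could refuse to execute model-generated SQL unless it is a single read-only SELECT statement. The `run_generated_sql` helper and the `records` table below are hypothetical, a minimal sketch of that idea.

```python
import sqlite3

def run_generated_sql(conn: sqlite3.Connection, sql: str):
    """Run model-generated SQL only if it is a single read-only SELECT.

    A crude allow-list check for illustration; a real deployment would also
    rely on database permissions, human review and parameterised access.
    """
    statement = sql.strip().rstrip(";")
    if ";" in statement or not statement.lower().startswith("select"):
        raise ValueError("Refusing to run non-SELECT or multi-statement SQL: " + sql)
    return conn.execute(statement).fetchall()

# A stand-in clinical records table (hypothetical schema, not from the study).
conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE records (id INTEGER PRIMARY KEY, note TEXT);"
    "INSERT INTO records VALUES (1, 'follow-up in 2 weeks');"
)

print(run_generated_sql(conn, "SELECT note FROM records WHERE id = 1;"))
# run_generated_sql(conn, "DELETE FROM records;")  # rejected with ValueError
```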

Understanding AI Models for Safe Use

Dr Mark Stevenson, a Senior Lecturer in the Natural Language Processing research group at the University of Sheffield, has warned users of Text-to-SQL systems about potential risks. He stated, “Users of Text-to-SQL systems should be aware of the potential risks highlighted in this work. Large language models, like those used in Text-to-SQL systems, are extremely powerful but their behaviour is complex and can be difficult to predict. At the University of Sheffield we are currently working to better understand these models and allow their full potential to be safely realised.”

The researchers are now collaborating with the cybersecurity community to address these vulnerabilities as Text-to-SQL systems become increasingly prevalent.

Baidu’s Security Response Centre has acknowledged the researchers’ work, rating the vulnerabilities as ‘Highly Dangerous’. In response, Baidu has fixed all reported vulnerabilities and financially rewarded the scientists. The researchers also shared their findings with OpenAI, which fixed all of the specific issues reported in ChatGPT in February 2023.

The researchers hope that exposing these vulnerabilities will serve as a proof of concept and a call to action for the natural language processing and cybersecurity communities to address overlooked security issues.

Xutan Peng, co-lead of the research, added: “Our efforts are being recognised by industry and they are following our advice to fix these security flaws. However, we are opening a door on an endless road - what we now need to see are large groups of researchers creating and testing patches to minimise security risks through open source communities. There will always be more advanced strategies being developed by attackers, which means security strategies must keep pace. To do so we need a new community to fight these next generation attacks.”
