
Artificial intelligence (AI) chatbots have taken the world by storm, and 2023 is shaping up to be a year of increased focus on this technology. The release of OpenAI’s ChatGPT in the last quarter of 2022 has spurred multiple companies, organizations, and individuals to enter the arena. Even tech mogul Elon Musk has joined the discussion on AI chatbots, highlighting this technology’s potential benefits and risks, and Google has announced its own AI chatbot, Bard.
AI chatbots like ChatGPT are lauded for their ability to revolutionize access to information on the internet. However, there are growing concerns about their potential use in cyberattacks. Researchers have demonstrated that AI chatbots can be manipulated into writing exploit code and malware and into generating convincing phishing lures. While ChatGPT has safeguards to prevent it from creating cyberattack tools, it can still generate content that could serve as a phisher’s lure. This is particularly concerning given that over 80% of cybersecurity breaches in the last year involved social engineering.
Despite the limitations of AI chatbots like ChatGPT in providing accurate information, and their tendency to “hallucinate” (produce plausible-sounding but inaccurate output), attackers can still abuse them. This is especially true for attackers with limited technical knowledge, who can use these and similar tools to perform quick reconnaissance and obtain the basic building blocks and techniques for their attacks. Of course, hackers must still carry out additional activities, such as purchasing infrastructure, sending lures, and managing illegal sales, to execute their plans.
Chatbots and Homographic Domains
Another concern is that companies increasingly use AI chatbots like ChatGPT to help users find suitable domain names. Cybercriminals can use the same tools and services to compile lists of appealing domains for generating traffic, leading to harmful outcomes such as more convincing phishing lures and more infected users. Attackers can also use ChatGPT to generate domain names dynamically, much like a domain generation algorithm (DGA), and register them to control infected machines. While there are limits on what companies can currently do to prevent this activity, some domain registries, such as Identity Digital, have protections in place to prevent the most convincing lookalike domains, known as “homographic domains,” from being registered.
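To make the homograph problem concrete, here is a minimal Python sketch of the kind of check a defender or registry might run: it decodes a punycode (IDNA) domain and flags labels that mix Unicode scripts or that normalize to a protected brand name. The PROTECTED_BRANDS watch list is hypothetical, and real registries rely on far more complete Unicode confusables data than this.

```python
"""Minimal homograph check: decode an IDNA (punycode) domain and flag
labels that mix scripts or shadow a protected brand. A sketch only,
not a registry-grade confusables engine."""
import unicodedata

PROTECTED_BRANDS = {"apple", "google", "paypal"}  # hypothetical watch list


def script_of(ch: str) -> str:
    # Coarse script bucket derived from the character's Unicode name.
    name = unicodedata.name(ch, "")
    for script in ("LATIN", "CYRILLIC", "GREEK"):
        if name.startswith(script):
            return script
    return "OTHER"


def looks_homographic(domain: str) -> bool:
    # Decode punycode labels (xn--) to their Unicode form first.
    try:
        unicode_form = domain.encode("ascii").decode("idna")
    except UnicodeError:
        unicode_form = domain  # already contains raw non-ASCII
    for label in unicode_form.split("."):
        letters = [c for c in label if c.isalpha()]
        scripts = {script_of(c) for c in letters}
        # Mixed-script labels (e.g., Latin + Cyrillic) are a classic homograph signal.
        if len(scripts) > 1:
            return True
        # NFKD "skeleton": strip accents, then compare against protected brands.
        skeleton = "".join(
            c for c in unicodedata.normalize("NFKD", label)
            if not unicodedata.combining(c)
        )
        if skeleton.lower() in PROTECTED_BRANDS and skeleton != label:
            return True
    return False


if __name__ == "__main__":
    # "xn--pple-43d.com" decodes to "apple" spelled with a Cyrillic first letter.
    for candidate in ("example.com", "xn--pple-43d.com", "g\u043e\u043egle.com"):
        verdict = "suspicious" if looks_homographic(candidate) else "ok"
        print(candidate, "->", verdict)
```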
As AI chatbots like ChatGPT are integrated into applications like search engines, the potential data pool for attackers to exploit grows; for example, they can obtain communication templates based on public information. However, there is hope for defending against these attacks. We can expect a surge in tools, apps, and services that can detect whether ChatGPT or a similar tool produced a given piece of content. OpenAI has released such a tool, and others, including GPTZero, one of the most widely recognized, have also emerged.
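Production detectors such as GPTZero rely on model-based signals like perplexity and trained classifiers, but one commonly cited weak signal, “burstiness” (how much sentence length varies, with human prose tending to vary more), can be sketched with the standard library alone. The heuristic and threshold below are purely illustrative assumptions, not how any real detector works.

```python
"""Toy illustration of the "burstiness" signal some AI-text detectors
are reported to use. Illustrative only; not a usable detector."""
import re
import statistics


def burstiness(text: str) -> float:
    # Split into sentences and measure the spread of their word counts.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: stdev relative to mean sentence length.
    return statistics.stdev(lengths) / statistics.mean(lengths)


if __name__ == "__main__":
    sample = (
        "The invoice is attached. Please review it today. "
        "Payment is due by Friday. Contact us with questions."
    )
    # Uniform sentence lengths are a weak hint of generated text;
    # the 0.3 cutoff here is an arbitrary value for illustration.
    score = burstiness(sample)
    verdict = "flag for review" if score < 0.3 else "likely varied prose"
    print(f"burstiness={score:.2f} -> {verdict}")
```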
Moreover, the same tools can strengthen enterprise defenses by enabling Red and Blue security teams to develop scripts for testing and remediating vulnerabilities, even with less experienced team members. This can help organizations detect and block malicious content generated by AI chatbots and minimize their cybersecurity risks.
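As an example of the kind of short triage script a Blue-team member might prompt a chatbot to draft, the sketch below pulls host names out of a suspect email and flags near-misses of protected domains by edit distance. The watch list, distance threshold, and lure text are all hypothetical.

```python
"""Sketch of a Blue-team triage script: extract domains from a suspect
email and flag close misspellings of protected brands."""
import re

PROTECTED = {"identitydigital.com", "example-bank.com"}  # hypothetical watch list


def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]


def flag_lookalikes(email_body: str, max_distance: int = 2) -> list[tuple[str, str]]:
    # Pull host names out of URLs, then compare each against the watch list.
    hosts = set(re.findall(r"https?://([\w.-]+)", email_body, re.IGNORECASE))
    hits = []
    for host in hosts:
        for brand in PROTECTED:
            d = edit_distance(host.lower(), brand)
            if 0 < d <= max_distance:  # close to, but not exactly, a protected domain
                hits.append((host, brand))
    return hits


if __name__ == "__main__":
    lure = "Your account is locked. Verify at http://identtydigital.com/login now."
    for host, brand in flag_lookalikes(lure):
        print(f"suspicious: {host} resembles {brand}")
```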
AI chatbots offer numerous benefits but carry inherent risks. Companies and organizations must be vigilant and take proactive measures to detect and block malicious AI-generated content. By balancing the opportunities and risks of this technology, and by leveraging it to strengthen their own cybersecurity defenses, enterprises can harness its full potential.