
AI's Dark Side: How ChatGPT is Redefining Phishing Attacks



Phishing is not a new topic. In fact, it has been around since the early days of the internet.


Remember dial-up? Yep, that long.


You would think that by now, nearly all organizations would be better equipped and know how to properly spot a phishing attempt, right? Well, think again.


Despite organizations being armed with all the best practices and knowledge, 90% of corporate security breaches are the result of phishing attacks. And that’s just the beginning.


Flash forward to 2023, and nearly everyone is using ChatGPT to speed through a task. That includes hackers, who exploit its capabilities to craft intricate, convincing social engineering attacks, ushering in a new era of AI-fueled cyber threats.


Are we ready to combat this new wave of AI-fueled phishing attacks? In this post, I want to explore how AI and ChatGPT can be used to generate sophisticated phishing attacks, and how they can also be used to prevent them.


A Double-Edged Sword: AI and Phishing Attacks


There’s no doubt that AI is a game-changer in the cybersecurity field, and we are only at the start of this revolution.


AI has many positive aspects when it comes to cybersecurity, such as advanced threat detection and protection: AI-powered algorithms can swiftly analyze massive amounts of data to identify patterns indicative of cyber threats.


AI can also predict potential attack vectors and vulnerabilities, allowing organizations to proactively address any weak points in their security armor.
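
To make that concrete, here is a minimal sketch of the kind of pattern analysis involved, assuming Python with scikit-learn and a made-up feature set (login hour, failed attempts, megabytes uploaded). Real systems use far richer telemetry and tuned thresholds; this only illustrates the idea.

```python
# A minimal sketch of pattern-based threat detection, assuming scikit-learn
# and synthetic login telemetry. Illustrative only, not a production detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline: normal logins cluster around business hours,
# with few failed attempts and modest data transfer.
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),   # login hour of day
    rng.poisson(1, 500),      # failed attempts before success
    rng.normal(5, 1, 500),    # MB uploaded in the session
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login with many failures and a huge upload stands out.
suspicious = np.array([[3, 12, 250]])
print(model.predict(suspicious))  # -1 => flagged as anomalous
```

The model learns what "normal" looks like and flags whatever deviates from it, which is exactly the proactive weak-point hunting described above.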


Now for the bad news.


AI can contribute to the sophistication of phishing attacks.


Here's how.


AI can mimic the writing style of actual senders, making it tricky for recipients to detect whether an email is genuinely from a trusted source or a malicious actor.


Attackers can manipulate human emotions to their advantage as well: AI can analyze emotional triggers and language patterns to generate text that plays on them. This psychological manipulation makes recipients more likely to open the email without thoroughly evaluating the sender’s legitimacy.


AI-generated content can also circumvent traditional email security filters.


Email security filters are often based on known patterns. AI can produce text that doesn't fit established patterns or signature characteristics of known phishing attacks.
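
As a toy illustration (a hypothetical keyword filter, not any real product), consider how a signature-style check catches a classic phish but misses the same request in fresh wording:

```python
# A toy signature-based filter. Filters keyed to known phishing phrases
# miss novel, AI-generated wording that carries the same intent.
import re

KNOWN_PHISHING_PATTERNS = [
    r"verify your account",
    r"your password has expired",
    r"click here immediately",
]

def flags_as_phishing(body: str) -> bool:
    return any(re.search(p, body, re.IGNORECASE) for p in KNOWN_PHISHING_PATTERNS)

classic = "Click here immediately to verify your account."
rewritten = "Per this morning's call, could you re-confirm your sign-in details before 2 pm?"

print(flags_as_phishing(classic))    # True  - matches a known signature
print(flags_as_phishing(rewritten))  # False - same intent, novel phrasing slips past
```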


So, it shouldn't come as a surprise to learn that 25% of phishing emails bypass Office 365's default security.


AI has also given defenders a tool we are all familiar with: ChatGPT itself. ChatGPT can be integrated into email security systems to assist in identifying potential phishing emails. Leveraging its natural language processing (NLP) capabilities, it can analyze the content of incoming emails for various indicators of malicious intent.
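
Here is a minimal sketch of what such an integration could look like, assuming the openai Python package (v1+) and an OPENAI_API_KEY in the environment. The model name, prompt, and labels are illustrative assumptions, not a vendor recipe.

```python
# A minimal sketch of LLM-assisted email triage. Assumes the openai
# package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_email(subject: str, body: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap for whatever you have access to
        messages=[
            {"role": "system",
             "content": "Label the email PHISHING or LEGITIMATE and give one reason. "
                        "Look for urgency cues, credential requests, and sender mismatches."},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(classify_email(
    "Urgent wire transfer",
    "This is the CEO. I need you to wire $40,000 today. Keep this confidential.",
))
```

Treating the model's verdict as one signal among many, rather than an automatic block, keeps false positives from burying legitimate mail.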


Despite these capabilities, phishing attacks continue to rise.


ChatGPT: A New Breed of Phishing Attacks


Phishing attacks have soared by 135% as attackers have figured out how to exploit ChatGPT’s capabilities to craft increasingly persuasive phishing emails. Cybercriminals have gotten really, really good at it too, going as far as employing sophisticated language that closely aligns with the characteristics of the targeted organization, mirroring its voice and communication patterns.


And there’s no better place than the dark web for aspiring cybercriminal entrepreneurs to find what they’re looking for without exerting much effort. The offerings range from Ransomware as a Service to FraudGPT and WormGPT, both powered by large language models (LLMs) just like ChatGPT, except built for malicious purposes. That means advanced phishing scripts can be written in a matter of seconds with striking accuracy.


These are the thoughts that keep security professionals up at night, and as AI tools continue to evolve, they leave organizations even more vulnerable to phishing attacks.


Oh, and just in case you were wondering, subscription fees range from $200 per month to $1,700 annually.



Spotting AI-Powered Phishing Attacks: Don’t Fall for That Hook


Before someone in your organization accidentally clicks on that suspicious AI-generated email link, here are a few tactics for recognizing AI-generated phishing attempts.


  • Start Running Phishing Simulations - Phishing simulations leverage real-world scenarios and give employees the tools they need to identify various types of threats, including those that exploit AI-generated content. They also reveal where employees are most prone to falling for AI-powered phishing attempts, insight that lets organizations run targeted training and awareness campaigns, which should be at the top of your list for employee cybersecurity education. Training should also be tailored per department. The finance department, for example, might get a scenario where an urgent email arrives out of the blue from a “senior executive” or even the “CEO” requesting an immediate wire transfer due to a “confidential matter.” Finance personnel should be trained to verify the sender’s identity through multiple communication channels and to recognize urgency-based scams.

  • Enforce Stronger Authentication Mechanisms - One of the most effective defenses against phishing is multi-factor authentication (MFA), which prevents 96% of bulk phishing attempts and 76% of targeted attacks. In the event that a user's credentials are compromised, MFA significantly limits the potential damage: even with stolen credentials, attackers can't access the account without the additional authentication factor. MFA should be part of your cybersecurity resilience strategy (a minimal sketch of the TOTP factor behind many MFA setups follows this list).

  • Incorporate Advanced AI-Powered Threat Detection Tools - Advanced threat detection systems leverage natural language processing (NLP) to analyze email content for signs of AI-generated manipulation. They can detect linguistic inconsistencies and odd phrasing that may indicate an AI-generated phishing attempt, and they can help identify suspicious behavior and anomalies, such as unusual access patterns or deviations in the content of emails.

  • Make Employee Training a Priority - Sending out an annual phishing quiz simply isn’t enough; attackers keep finding new and creative ways to launch AI-based phishing attacks. Training should extend beyond onboarding and be a continuous process. Add gamification techniques to make things more engaging and to encourage active participation, rather than relying on mandatory company training. Set up a leaderboard and award prizes, even virtual ones, to keep phishing awareness top of mind.
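
As promised above, here is a minimal sketch of the time-based one-time password (TOTP) factor behind many MFA setups, assuming the pyotp package. Enrollment flows, secret storage, and rate limiting are deliberately omitted.

```python
# A minimal TOTP sketch using pyotp. Illustrative only: real deployments
# add secure secret storage, enrollment UX, and rate limiting.
import pyotp

# At enrollment: generate a per-user secret and share it (e.g., via QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

# At login: stolen credentials alone are not enough; the attacker also
# needs the current 6-digit code from the user's device.
code = totp.now()                          # stands in for the code the user types
print("Accepted:", totp.verify(code))      # True
print("Accepted:", totp.verify("000000"))  # almost certainly False
```

The point is simple: even if a phishing email harvests a password, the attacker still needs the current six-digit code from the user's device.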


Final Thoughts


Organizations will have to work together to combat AI-powered phishing and social engineering attacks as the threat landscape continues to evolve. As we’ve seen with the ChatGPT knockoffs on the dark web, attackers have only just begun to understand the potential of LLMs.

Similar to ChatGPT and other generative AI models, these malicious knockoffs will only get smarter as the data accumulates. Attackers will have all the tools they need to refine their techniques, making their phishing attempts even more convincing and laser-targeted. This is why organizations must invest the time and resources to properly educate their employees.


This starts from the top down, with executive leadership setting the bar and emphasizing the importance of phishing awareness. Does this mean that the occasional phishing email won’t slip through the cracks? The data suggests otherwise: with an estimated 3.4 billion phishing emails sent per day, the odds that one of them finds its mark are high. The question is whether organizations will be better equipped to recognize the attempts and have a plan of action to combat them.


As attackers continue to refine their tactics, your readiness to detect and respond becomes increasingly pivotal in mitigating evolving phishing attacks and defending against new, persistent threats.


