Novel Social Engineering Attacks Increase by 135% with Generative AI Uptake

By John Greenwood

Cybersecurity firm Darktrace reported a 135% increase in novel social-engineering attack emails during the first two months of 2023. The firm’s research team found that the emails targeted thousands of its customers in January and February.

Darktrace noted that this increase matches the adoption rate of generative AI tools such as ChatGPT. The firm believes these social-engineering attacks are using “sophisticated linguistic techniques,” including increased text volume, longer sentences, and heavier punctuation in emails.
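
To make those signals concrete, the short Python sketch below computes rough proxies for the linguistic markers Darktrace describes: overall text volume, average sentence length, and punctuation density of a single email body. It is purely illustrative and is not Darktrace's detection logic; any thresholds a real system would apply are assumptions.

```python
# Illustrative proxies for the linguistic markers described above: text volume,
# average sentence length, and punctuation density of an email body.
# Not Darktrace's detection logic; values here would need real-world tuning.
import re

def linguistic_profile(body: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    punctuation = re.findall(r"[^\w\s]", body)
    return {
        "word_count": len(words),                                    # text volume
        "avg_sentence_length": len(words) / max(len(sentences), 1),  # words per sentence
        "punctuation_per_100_words": 100 * len(punctuation) / max(len(words), 1),
    }

if __name__ == "__main__":
    sample = ("Dear colleague, please review the attached invoice at your earliest "
              "convenience; payment is now overdue, and immediate action is required!")
    print(linguistic_profile(sample))
```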

Generative AI Becomes the Powerhouse of Novel Social Engineering Attacks

Darktrace also observed a decrease in the number of malicious emails that contain an attachment or link. The firm speculates that generative AI could be used by malicious actors to construct targeted attacks rapidly.

The firm’s chief product officer, Max Heinemeyer, warns that email is the key vulnerability for businesses today, and defenders are up against sophisticated generative AI attacks and entirely novel scams that use techniques and reference topics never seen before.

Heinemeyer emphasizes that in a world of increasing AI-powered attacks, the onus can no longer be placed on humans to determine the veracity of communications they receive, and this is now a job for artificial intelligence.

Employee Survey Results

An employee survey conducted by Darktrace revealed that 82% of employees are concerned about hackers using generative AI to create scam emails that are indistinguishable from genuine communication.

The survey also found that 30% of employees have fallen for a scam email or text in the past. When asked which characteristics most suggest an email is a phishing attempt, 68% of respondents pointed to being invited to click a link or open an attachment, 61% to an unknown sender or unexpected content, and 61% to poor spelling and grammar.

In the last six months, 70% of employees reported an increase in the frequency of scam emails, and 79% said that their organization’s spam filters prevent legitimate emails from entering their inbox. 87% of employees were also worried about the amount of their personal information online, which could be used in phishing or email scams.
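
As a rough illustration of how the indicators respondents cited could be checked automatically, the sketch below flags an email that asks the reader to click a link or open an attachment, comes from an unknown sender, or contains common misspellings. The phrase and misspelling lists are invented for the example and are not drawn from Darktrace's research.

```python
# Toy triage check based only on the three indicators respondents cited:
# a prompt to click a link or open an attachment, an unknown sender, and poor
# spelling or grammar. Phrase and misspelling lists are illustrative assumptions.
import re

SUSPICIOUS_PHRASES = ("click the link", "open the attachment", "verify your account")
COMMON_MISSPELLINGS = ("recieve", "acount", "pasword", "urgnet")

def phishing_indicators(sender: str, body: str, known_senders: set) -> list:
    text = body.lower()
    hits = []
    if any(p in text for p in SUSPICIOUS_PHRASES) or re.search(r"https?://", text):
        hits.append("asks to click a link or open an attachment")
    if sender.lower() not in known_senders:
        hits.append("unknown or unexpected sender")
    if any(w in text for w in COMMON_MISSPELLINGS):
        hits.append("poor spelling or grammar")
    return hits

if __name__ == "__main__":
    print(phishing_indicators(
        sender="accounts@unfamiliar-domain.example",
        body="Please click the link to verify your acount: http://unfamiliar-domain.example",
        known_senders={"colleague@mycompany.example"},
    ))
```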

The Threat of AI to Cybersecurity

The threat of AI to cybersecurity has been feared for some time and extends beyond generative AI alone. AI-driven malware, for example, was conceptualized years ago: code that could install itself, analyze its host environment, and change its payload to exploit that environment most effectively. In reality, such attacks have been few and far between.

There are also fears about what deepfake technology could achieve in the phishing space. One possible attack could see a CEO’s likeness abused to send video or audio instructions to employees in the finance department, for example, encouraging them to make payments to accounts under the attackers’ control.

Defending Against AI Social Engineering Attacks

Email services have always been one of the primary vectors through which attackers breach an organization. One of the most common ways to install malware on a victim’s machine is to embed malicious code inside a Microsoft Office document, such as an Excel file.

Microsoft has implemented a number of measures in recent years to help minimize the abuse of its software in phishing attacks. The company has moved to block emails sent from potentially vulnerable Exchange servers, which hackers had abused for years to launch highly convincing email campaigns. Most notably, in 2022 it began blocking VBA macros by default in Office files downloaded from the internet; these macros had long been abused to automatically load malware via tampered documents.

However, some in the industry said they had been calling for such action against VBA macros for years, and that Microsoft could have prevented an untold number of attacks if it had acted faster.
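
Defenders can also apply a simple triage of their own on inbound mail. The sketch below, a rough illustrative check rather than anything Microsoft ships, flags macro-enabled Office attachments by looking for the vbaProject.bin part that modern macro-carrying Office files contain.

```python
# Rough mail-gateway style triage for macro-enabled Office attachments. Modern Office
# formats (.docm, .xlsm, .pptm) are ZIP containers, and files carrying VBA macros
# include a vbaProject.bin part. Legacy binary formats (.doc, .xls) need a dedicated
# parser such as oletools instead.
import zipfile

def contains_vba_macros(path: str) -> bool:
    """Return True if the Office file at `path` contains a VBA macro project."""
    if not zipfile.is_zipfile(path):
        return False  # not an OOXML container; handle legacy formats separately
    with zipfile.ZipFile(path) as archive:
        return any(name.endswith("vbaProject.bin") for name in archive.namelist())
```

A mail pipeline could run contains_vba_macros() over each saved attachment and quarantine or flag any message whose files return True.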

On the detection side, Intel’s FakeCatcher system aims to detect deepfakes by analyzing the blood flow in faces. At present, Intel reports a 96% success rate in identifying deepfake footage, and the technology could be embedded within video conferencing software to prevent deepfake phishing and social engineering attacks in the near future.

Novel social engineering attacks are on the rise, and their success is likely due to the use of generative AI techniques, which enable attackers to construct targeted attacks rapidly.

The adoption of AI is expected to increase in the coming years, and with it, the sophistication and prevalence of social engineering attacks. In response, organizations must implement new defense mechanisms that rely on AI to prevent and detect such attacks. This is no longer just a job for humans; artificial intelligence is now a critical component of cyber defense.

Source: The Cybersecurity Times

John Greenwood has been working in the cybersecurity and information security market for more than 12 years. He is passionate about AI, cybersecurity, information security, blockchain, and machine learning. When he is not occupied with cybersecurity, he likes to go on bike rides!
