AI-driven cyberattacks, deepfakes and shadow AI expose gaps in business awareness—demanding urgent training, robust policies and vigilant employee behaviour

Business professionals are half as worried about AI-powered cyber threats as their technical colleagues, creating a dangerous blind spot that attackers are already exploiting. New research from Social Links reveals that just 27.8% of business professionals identify AI-generated fake messages as a top cyber threat, compared to 53.3% of technical staff.
A similar divide appears with deepfake technology, where 46.7% of technical professionals express concern but only 27.8% of business staff flag it as a risk. These numbers matter because business professionals are prime targets for sophisticated AI-driven attacks designed to bypass traditional security measures.
Survey data from 237 professionals across industries reveals that whilst phishing and email fraud still top threat lists at 69.6%, AI-driven attacks are rapidly gaining ground. Nearly 40% of respondents now identify AI-crafted fake messages as a major concern, with 32.9% pointing to deepfakes and synthetic identities.
The departments most vulnerable to these attacks – Finance and Accounting (24.1%), IT and Development (21.5%), HR and Recruitment (15.2%), and Sales and Account Management (13.9%) – often house the very employees showing lower awareness levels.
This awareness gap has already led to costly real-world incidents. In early 2024, UK engineering firm Arup lost $25 million when an employee at its Hong Kong office transferred funds following a video call in which senior managers were convincingly deepfaked. Similarly, finance staff in Singapore lost $449,000 after taking part in what they believed was an authentic executive meeting but was in fact a coordinated deepfake attack built from multiple AI-generated elements.
Employee behaviour compounds the threat considerably. The Social Links research shows that 60.8% of respondents admit employees use corporate accounts for personal activities like posting on forums, engaging on social media or updating public profiles. Nearly as many – 59.5% – directly link publicly available employee data to actual cyber incidents.
‘You can’t really stop people from using work accounts or data when they’re active online,’ explains Ivan Shkvarun, CEO of Social Links. ‘But all this activity leaves digital traces. And those traces can make it easier for scammers to find and target employees.’
The scale of AI-powered attacks has surged dramatically. Industry data shows that around 40% of all cyberattacks are now AI-driven, with deepfake use in impersonation fraud rising by 50-60%. These automated, personalised scams exploit the very human vulnerabilities that traditional technical controls struggle to address.
As UK cybersecurity experts have warned, AI advancements are creating unprecedented threats to global cybersecurity systems that require urgent attention.
The unauthorised use of AI tools – dubbed ‘shadow AI’ – creates additional security risks that most organisations struggle to manage. The Social Links survey found that whilst over 82% of companies allow employees to use AI tools at work, only 36.7% have formal policies controlling their use.
This policy gap has proven costly. Corporate data input into AI tools surged by 485% between March 2023 and March 2024, with the share of that data classed as sensitive rising from 10.7% to 27.4%. Around 38% of employees share confidential data with AI platforms without approval, risking breaches of regulations such as GDPR and potentially triggering fines of up to 4% of global annual turnover.
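What such a policy can look like in practice is a pre-submission check that screens text before it reaches an external AI tool. The sketch below (in Python, with illustrative detection patterns of our own devising, not anything prescribed by the Social Links research) shows the basic idea:

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# DLP engine with patterns tuned to the organisation's own data.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality marker": re.compile(
        r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE
    ),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of all sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Per our CONFIDENTIAL roadmap, contact jane.doe@example.com first."
    hits = flag_sensitive(draft)
    if hits:
        print("Blocked before submission:", ", ".join(hits))
    else:
        print("No sensitive patterns detected")
```

In a real deployment this logic would sit in a browser extension or network gateway; the point is that the control operates automatically rather than depending on each employee remembering the rules.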
High-profile breaches illustrate the danger. Samsung employees inadvertently leaked proprietary code and trade secrets by submitting confidential information to ChatGPT. Such incidents demonstrate how shadow AI can cause substantial reputational harm and competitive disadvantage.
Rather than attempting to restrict AI tool usage entirely, security experts advocate for comprehensive employee education. The Social Links research shows that 72.2% of survey respondents view employee training on safe AI use as the most effective way to reduce shadow AI risks, followed by internal policy development at 46.8%.
‘What actually helps is teaching people how to spot the risks and giving them the right tools to stay safe, instead of just saying don’t do it,’ Shkvarun notes. This approach recognises that employees will continue using AI tools regardless of restrictions.
Best practices for AI-era cybersecurity training include regular, role-specific instruction on AI-powered phishing and social engineering, interactive training tools that simulate real attack scenarios, and fostering a security-conscious culture where employees feel comfortable reporting suspicious activity.
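To make the role-specific, simulated element concrete, here is a toy sketch of how a training platform might rotate phishing drills by department (the departments and scenarios below are hypothetical, not taken from the survey):

```python
import random

# Hypothetical department-to-scenario mapping, for illustration only;
# a real programme would model the attacks each team actually faces.
SCENARIOS = {
    "finance": ["urgent wire request from a deepfaked 'CFO' video call",
                "supplier invoice with quietly changed bank details"],
    "hr": ["CV attachment carrying a malicious macro",
           "payroll-update phish harvesting credentials"],
    "it": ["spoofed vendor patch notification",
           "fake OAuth consent request for a developer tool"],
    "sales": ["AI-cloned voice note from a 'key account' contact",
              "counterfeit contract-signing portal link"],
}

def next_drill(department: str) -> str:
    """Pick a simulated attack scenario matched to the employee's role."""
    options = SCENARIOS.get(department.lower(), ["generic credential phish"])
    return random.choice(options)

if __name__ == "__main__":
    for dept in ("Finance", "HR", "Legal"):  # 'Legal' falls back to the generic drill
        print(f"{dept}: {next_drill(dept)}")
```

Unrecognised departments fall back to a generic drill, so no team is left out of the rotation.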
Companies must also address the broader issue of AI-generated misinformation that makes it increasingly difficult for employees to distinguish between genuine and fraudulent communications.
The challenge isn’t just technical – it’s organisational. As Shkvarun emphasises: ‘Traditional threats like phishing and malware still dominate the charts. But what we’re seeing now is that AI isn’t replacing these risks, it’s supercharging them, turning generic scams into tailored operations – fast, cheap and more convincing.’
Companies must bridge the awareness gap between technical and business teams before attackers exploit it further. This requires moving beyond policy documents to focus on practical, everyday security habits that all employees can implement.
The window for action is narrowing. With unauthorised technology use accounting for about 11% of cyber incidents globally and AI attack sophistication increasing rapidly, organisations cannot afford to leave any department behind in their cybersecurity efforts.
Building consumer trust in AI-powered systems requires transparency about both the benefits and risks of these technologies, something that extends to internal corporate communications about AI threats.
