Increasingly sophisticated artificial intelligence, audio, and video technologies, along with the wealth of personal user data available on social media, have made deepfake phishing an emerging attack vector that CISOs should be concerned about.
Deepfake technology uses AI to craft deceptive audio, video and images. To date, deepfakes have primarily served entertainment and political purposes, both innocuous and malicious. Experts warn, however, that deepfake technology also poses a variety of corporate IT risks. Deepfake phishing, for example, involves using deepfake content to trick users into making unauthorized payments or willfully providing sensitive information that cybercriminals can use to their advantage.
In a high-profile example from 2019, cybercriminals used deepfake phishing to trick the CEO of a UK-based energy company into transferring $243,000 to them, according to The Wall Street Journal. Using AI-based voice-spoofing software, the criminals impersonated the chief executive of the firm's parent company, convincing the CEO he was speaking with his boss.
As technology evolves, these deepfake phishing campaigns will almost certainly become more common and more effective. CISOs can prepare enterprise users to fend off these attacks by teaching them what deepfake phishing is and how it works.
Types of deepfake phishing attacks
Deepfake phishing attacks fall into the following categories:
- Real-time attacks. In a successful real-time attack, the audio or video deepfake is so sophisticated that it tricks the victim into believing the person on the other end of a call is who they claim to be – a colleague or a client, for example. In these interactions, attackers typically create a strong sense of urgency, inventing deadlines, penalties and other consequences for delay to make victims panic and comply.
- Non-real-time attacks. In non-real-time attacks, a cybercriminal impersonates someone in fake audio or video messages, which they then distribute through asynchronous communication channels, such as chat, email, voicemail, mobile messaging or social networks. This approach removes the pressure to respond credibly in real time, giving criminals the chance to perfect a deepfake clip before distributing it. As a result, a non-real-time attack can be quite sophisticated and less likely to arouse user suspicion. When distributed via email, a deepfake video or audio clip may also be more likely to slip past security filters than a traditional text-based phishing campaign.
Non-real-time attacks also allow attackers to cast a wide net. A person posing as a CFO, for example, could send the same audio or video memo to everyone in the finance organization, with the goal of soliciting sensitive information from as many people as possible.
In both types of attacks, social media fingerprints usually provide enough information for attackers to strike strategically when targets are most likely to be distracted or overwhelmed.
How to fight deepfake phishing
Train, train, train
Security managers should educate end users about this and other emerging attack vectors through ongoing training. Security awareness training fatigue is real, but making lessons fun, competitive and rewarding can help keep them fresh and top of mind.
Fortunately, employees will likely find deepfake phishing awareness training to be particularly interesting, engaging and educational. Try sharing compelling deepfake videos, for example, and ask users to spot suspicious visual cues, such as unblinking eyes, inconsistent lighting and unnatural facial movements. An exercise in detecting deepfake attacks is sure to make an impression.
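To make the "unblinking eyes" cue concrete, blink-detection demos often compute the eye aspect ratio (EAR), a simple geometric measure that drops sharply whenever an eye closes; a video whose EAR never dips may be a deepfake that never blinks. Below is a minimal sketch under the assumption that six eye-landmark coordinates come from an external face-landmark detector (not shown); the sample coordinates are hypothetical.

```python
import math

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) landmarks.

    Landmark ordering follows the common convention: p1/p4 are the
    horizontal eye corners, p2/p3 the upper lid, p6/p5 the lower lid.
    EAR stays roughly constant while the eye is open and falls toward
    zero during a blink.
    """
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(p2, p6) + dist(p3, p5)   # lid-to-lid distances
    horizontal = dist(p1, p4)                # corner-to-corner distance
    return vertical / (2.0 * horizontal)

# Hypothetical landmark coordinates for an open and a closed eye.
open_eye = [(0, 5), (3, 8), (7, 8), (10, 5), (7, 2), (3, 2)]
closed_eye = [(0, 5), (3, 5.5), (7, 5.5), (10, 5), (7, 4.5), (3, 4.5)]

print(eye_aspect_ratio(open_eye))    # higher value: eye open
print(eye_aspect_ratio(closed_eye))  # much lower value: eye closed
```

In a real demo, the detector would feed landmarks frame by frame, and the absence of periodic EAR dips over, say, 30 seconds would be flagged for a closer look. Newer deepfake models do blink, so this is a teaching aid, not a reliable detector.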
Healthy skepticism should be a cornerstone of ongoing security awareness training, and every manager and leader should continually reinforce its importance. Cybercriminals try to rush victims into making ill-advised decisions, so a sense of urgency in any interaction should immediately set off alarm bells. If someone – even the CEO or a high-profile customer – requests an immediate bank transfer or product shipment, for example, users should stop and verify the authenticity of the request before taking further action.
Train employees to respond to urgent real-time requests by politely explaining that, due to an increase in phishing attacks, they must confirm the requester's identity through a separate, trusted channel. The same principle applies to non-real-time requests.
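The verify-through-a-separate-channel rule can also be encoded in workflow tooling so that risky requests are automatically held for an out-of-band callback. The sketch below illustrates one such policy check; the `PaymentRequest` fields, channel names and dollar threshold are illustrative assumptions, not taken from any specific product.

```python
from dataclasses import dataclass

# Hypothetical channels over which an impersonation attempt might arrive
# in real time.
REAL_TIME_CHANNELS = {"voice_call", "video_call"}

@dataclass
class PaymentRequest:
    requester: str      # claimed identity, e.g. "CFO"
    amount_usd: float
    channel: str        # e.g. "email", "voice_call"
    marked_urgent: bool

def needs_out_of_band_verification(req: PaymentRequest,
                                   threshold_usd: float = 10_000) -> bool:
    """Return True if the request must be confirmed via a separate,
    trusted channel (e.g. a callback to a known number) before acting.

    The rules mirror the training advice above: urgency, large amounts
    and real-time channels all trigger a verification step. The
    threshold is illustrative, not prescriptive.
    """
    if req.marked_urgent:
        return True
    if req.amount_usd >= threshold_usd:
        return True
    if req.channel in REAL_TIME_CHANNELS:
        return True
    return False

print(needs_out_of_band_verification(
    PaymentRequest("CFO", 243_000, "voice_call", marked_urgent=True)))
print(needs_out_of_band_verification(
    PaymentRequest("vendor", 500, "email", marked_urgent=False)))
```

The point of automating the check is that the pause no longer depends on an employee overriding a convincing voice under pressure; the workflow itself demands the callback.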
Challenge the other party
This is not a mitigation technique that employees often learn as part of security awareness training, but it is very effective. If an interaction seems suspicious, a user can ask the person on the other end of a call, email or message to provide information that both parties should know, such as when they started working together. A close associate can even ask more personal questions, such as how many pets the other person has or the last time they shared a meal.
It’s uncomfortable and takes practice, but it’s a powerful and effective mechanism for identifying imposters before they cause damage.