Deepfakes are becoming easier to create with advancements in generative AI. This challenges identity management systems, but effective countermeasures are already in place.
For as long as the Internet has existed, cybercriminals have been trying to steal money from other people. In the past, they often invented false identities to find new victims, usually via spam emails. The success rate was extremely low, but if even a single recipient transfers $2,000, the effort has already paid off.
Fortunately, such messages are relatively easy to recognize because they are usually poorly crafted and not specifically tailored to the recipient. Most people are unlikely to fall for a spam email claiming that their bank account will be blocked, especially when the email references a bank where they don't even have an account. Yet there is a purpose behind these crudely constructed emails: they are designed to filter for only the most gullible or receptive individuals, minimizing the effort the criminals have to invest.
In recent years, cybercriminals have improved the language and design of their scam emails. In response, users have become more aware and approach suspicious emails with greater caution.
On the flip side, the risk of targeted attacks is increasing. A well-known example is CEO fraud, in which attackers impersonate executives. Such attacks generally require more effort, but they achieve success rates of 10 to 20 percent.
The rise of artificial intelligence has reshuffled the deck. With AI chatbots, fraudsters can now easily generate new spam texts that read convincingly and contain few, if any, linguistic errors. Cybercriminals circumvent the safeguards built in by LLM providers with constantly adapted tricks and jailbreaks, or they simply set up their own AI models. There are now a number of open-source models that may not yet perform as well as ChatGPT, Perplexity.ai, Claude, or DeepSeek, but they are constantly improving.
The next evolutionary stage is deepfakes, known on social media as artificially generated videos of celebrities or politicians doing silly things. A much bigger problem, however, is deceptively realistic fake voices: today's AI needs only a few spoken words to imitate a voice.
A completely new situation arises when, for example, an accountant no longer receives an email with an urgent request for payment from a criminal pretending to be their boss. Instead, the accountant receives a phone call that appears to come from their boss, directly ordering them to make the payment. Deepfakes are developing into social engineering attacks on steroids.
How can companies react to this?
Traditional identity management is no longer enough. The enormous advances in generative AI are pushing Identity and Access Management (IAM) systems to their limits. In the near future, no one will be able to rely on visual or vocal features alone to verify an identity.
One of the most effective tools against deepfakes is security awareness training, which educates employees about new risks using examples and simulations. Employees also learn how to behave in the event of unusual requests – even a simple callback via a number known to them can prevent serious damage.
AI can also be used to defend against AI-supported attacks: it can analyze facial expressions, voice patterns, or even artifacts in an image to expose fraud attempts. Additionally, AI can work in conjunction with modern authentication methods such as FIDO2, WebAuthn, or passkeys. These can be combined with multi-factor authentication (MFA), which typically involves SMS codes or authenticator apps, to provide an extra layer of security.
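To make that concrete, here is a minimal sketch of how a browser registers a passkey via the standard WebAuthn API. The relying-party name and domain ("Example Corp" / example.com) and the user details are placeholder assumptions; in a real deployment, the challenge comes from your server, and the resulting credential is sent back there for verification and storage.

```ts
// Minimal sketch of passkey (WebAuthn) registration in the browser.
// All names and IDs below are placeholders, not a real deployment.
async function registerPasskey(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // In production, this random challenge is generated by the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Corp", id: "example.com" }, // credential is bound to this domain
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential = passkey
      userVerification: "required", // biometrics or PIN on the device
    },
  };

  const credential = await navigator.credentials.create({ publicKey });
  // Send `credential` to the server for verification and storage.
  console.log("New passkey created:", credential?.id);
}
```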
Passkeys generate a separate key pair for each service and application, bound to its domain. This prevents attackers from harvesting a user ID and password on a fake site and then logging in remotely. Other recommended security measures are device binding, location checks, and Zero Trust, where trust must be re-established on every access attempt.
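The domain binding becomes visible at login. In the following sketch (again with example.com as a placeholder), the browser only offers the passkey if the rpId matches the site's actual origin, so a lookalike phishing domain never even sees the credential.

```ts
// Sketch of a passkey login. The browser only releases credentials whose
// rpId matches the current origin, so a lookalike phishing domain
// (e.g. examp1e.com) can never trigger the real example.com passkey.
async function loginWithPasskey(serverChallenge: Uint8Array): Promise<void> {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: serverChallenge, // random value issued by the server
      rpId: "example.com",        // must match the domain the passkey was created for
      userVerification: "required",
    },
  });
  // The server verifies the signature against the stored public key;
  // the private key never leaves the user's device.
  console.log("Assertion received:", assertion?.id);
}
```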
Until recently, deepfake attacks were still considered a future scenario, but reality has already caught up with theory. A particularly dramatic case of fraud shook the security industry in January 2024: an employee in the Hong Kong office of the engineering firm Arup transferred around 25 million US dollars after a video conference in which the supposed CFO and colleagues turned out to be deepfakes.
Modern large language models (LLMs), combined with advanced text-to-speech technology and increasingly realistic avatars, could soon make "one-click CEO fraud" a reality. The best defense against such deepfake attacks is a combination of smart technology and human awareness: while Zero Trust architectures ensure that every access attempt is technically verified, employees must also stay alert, especially when receiving unusual calls or requests from executives.
These systems, together with solid endpoint security, will also ward off increasingly sophisticated AI fraud. Companies will need a well-thought-out Zero Trust concept that includes both technical and organizational measures.
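As an illustration of the technical side, here is a hedged sketch of such a Zero Trust gate as Express middleware in TypeScript. All checks (verifyToken, isKnownDevice, isPlausibleLocation) are illustrative stubs, not a real IAM integration; the point is that identity, device, and location are re-verified on every single request rather than only at login.

```ts
import express, { Request, Response, NextFunction } from "express";

// Placeholder checks — in practice these call your IAM, MDM, and geo services.
async function verifyToken(token?: string): Promise<{ userId: string } | null> {
  return token === "demo-token" ? { userId: "alice" } : null; // stub
}
async function isKnownDevice(req: Request, userId: string): Promise<boolean> {
  return req.header("X-Device-Id") === "registered-device"; // stub: device binding
}
async function isPlausibleLocation(ip: string, userId: string): Promise<boolean> {
  return ip.length > 0; // stub: real systems compare against known locations
}

// Zero Trust gate: trust is re-established on every single request.
async function zeroTrustGate(req: Request, res: Response, next: NextFunction) {
  const identity = await verifyToken(req.header("Authorization"));
  if (!identity) return res.status(401).send("identity not verified");

  if (!(await isKnownDevice(req, identity.userId)))
    return res.status(403).send("unknown device");

  if (!(await isPlausibleLocation(req.ip, identity.userId)))
    return res.status(403).send("implausible location");

  next();
}

const app = express();
app.use(zeroTrustGate);
app.get("/payments", (_req, res) => res.send("access granted"));
app.listen(3000);
```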