Generative AI is far more than just a toy. If used correctly, the technology can increase security in companies. But cyber attackers also benefit from the new possibilities.
No online platform has grown as fast as ChatGPT: After just two months, the service already had 100 million users. It took Facebook four and a half years to do that. Many IT security experts are now wondering whether the breakthrough success of generative artificial intelligence (AI) is a boon or a curse for their industry.
A legitimate question. AI applications and services such as ChatGPT present both opportunities and risks for corporate IT security. Generative AI can be used both to better protect IT infrastructure and to mount increasingly sophisticated attacks on precisely these systems. The following article looks at the most important advantages and disadvantages from an IT security perspective.
How can generative AI benefit IT security?
First, let's look at the benefits of generative artificial intelligence. Most IT security experts will have encountered AI in the presentations and marketing brochures of many manufacturers. Some of this is exaggerated, but there are certainly tangible benefits, for example in pentesting or the analysis of forensic data.
Responding to threats
AI detects suspicious patterns in large datasets far more efficiently than human analysts and spots anomalies that indicate a cyber attack more quickly. To do this, however, it first needs training. Algorithms are usually trained over several weeks or months by monitoring routine network traffic; based on this data, they detect events that fall outside the routine. This not only frees up resources for security experts: depending on the configuration, AI can also initiate countermeasures immediately and actively fight threats before further damage occurs.
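The principle behind this kind of anomaly detection can be illustrated with a deliberately simplified sketch: learn a statistical baseline from routine traffic, then flag values that deviate strongly from it. The function names, sample data and threshold below are illustrative assumptions, not any vendor's actual implementation, and real systems use far richer models than a simple standard-deviation test.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn the mean and standard deviation of routine traffic volumes."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Requests per minute observed during normal operation (the "training" phase)
routine_traffic = [120, 135, 128, 140, 122, 131, 138, 125, 129, 133]
baseline = build_baseline(routine_traffic)

print(is_anomalous(130, baseline))   # within the learned routine -> False
print(is_anomalous(900, baseline))   # far outside the routine -> True
```

In practice, the "baseline" would cover many signals at once (ports, protocols, login times, data volumes), which is exactly where machine learning outperforms hand-written thresholds.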
A good example is Security Copilot, recently introduced by Microsoft. Based on OpenAI's GPT-4, it is a chatbot that answers questions about the company's current cybersecurity situation, analyzes threat data and malware, and creates reports for management. Security Copilot even produces PowerPoint presentations describing security incidents and the attack vectors used. Microsoft trains the service daily with around 65 trillion new signals.
AI also plays an important role in the Zero Trust model, which is based on the principle of prohibiting everything that is not explicitly allowed. For example, a user may not automatically access a server just because they are currently logged in to the network as an administrator. Zero Trust systems use contextual data to decide whether to grant access, and artificial intelligence increases the efficiency of such decisions.
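To make the "deny by default" idea concrete, here is a minimal, hypothetical sketch of a context-based access decision. The attribute names and checks are illustrative assumptions, not the policy engine of any real Zero Trust product:

```python
def access_decision(context):
    """Deny by default; grant only when every contextual check passes."""
    checks = [
        context.get("device_compliant", False),   # managed, patched device
        context.get("location_known", False),     # usual network location
        context.get("mfa_verified", False),       # recent multi-factor auth
        context.get("working_hours", False),      # request at a typical time
    ]
    # Zero Trust: everything not explicitly allowed is forbidden
    return "grant" if all(checks) else "deny"

# An admin login alone is not enough -- a missing MFA check leads to denial
request = {"device_compliant": True, "location_known": True,
           "mfa_verified": False, "working_hours": True}
print(access_decision(request))  # deny
```

In a real deployment, AI would come in by scoring these contextual signals continuously (risk-based authentication) instead of evaluating fixed booleans.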
Artificial intelligence can also simulate various attack scenarios and uncover gaps in a company's cyber defense strategy before real-world attackers exploit them.
Does generative AI have negative consequences for IT security?
It wasn't long before cybercriminals and covert state actors recognized the benefits of AI for their own purposes. The same or similar techniques used to improve IT security can also serve less honorable goals, making attacks increasingly sophisticated and even more difficult to detect and defend against.
For example, ChatGPT can easily be used to revolutionize cyber fraud and social engineering. Whereas phishing emails have so far often stood out due to awkward wording, spelling mistakes and a generally poorly thought-out structure, generating personalized messages at scale to lure potential victims is now almost child's play. It won't be long before spear phishing and CEO fraud skyrocket.
Attacking AI interfaces
More and more companies are integrating artificial intelligence into their collaboration and business solutions. However, this quickly becomes risky when attackers gain access to these interfaces. Then they can manipulate data or deliberately inject information into a supposedly trustworthy environment.
Fake data is not a new phenomenon. Photos have been manipulated and details removed or added before, but artificial intelligence makes the task much easier. Modern AI fakes not only photos but also videos and even voices. It won't be long before an unknown caller sounds just like a colleague or supervisor. IT is facing a new wave of phishing attacks and disinformation campaigns.
The fact that ChatGPT can also program relatively well surprised many. Complete websites and applications can be produced with little effort, and of course this can be abused as well. OpenAI has taken precautions to prevent illegal or harmful uses, but resourceful Internet users have already found ways to trick the AI, manipulating ChatGPT into roles that no longer adhere to its guardrails. This is the next cat-and-mouse game between cybercriminals and IT manufacturers.
Increasing AI dependency
The use of generative artificial intelligence poses further risks if these systems fail, become unavailable or are manipulated. Large Language Models (LLMs) such as ChatGPT can be manipulated and fed with prepared data that leads to results other than those intended.
In addition, too much trust in the capabilities of AI leads to a false sense of security. This phenomenon is already known from antivirus programs: their accuracy and reliability are often overestimated, leading some users to take greater risks under the motto "We are well protected".
In a nutshell:
The use of AI in the field of IT security undoubtedly facilitates the search for the needle in the haystack. The benefits range from improving and accelerating cyber threat detection to proactively simulating threats and unburdening security professionals, especially in the face of a growing shortage of skilled workers. On the other hand, caution is also required, because the currently rapidly developing technologies are also exploited by cybercriminals.
It certainly won’t be boring. Many organizations are now planning IT security through a Zero Trust strategy to deal with evolving attack vectors. Learn more about Zero Trust security from NCP now.