Artificial Intelligence and Cybersecurity : …
Generative AI is transforming the business world and driving the creation of new business models. However, these systems introduce new risks that need to be mitigated through a dedicated framework.
2024 was marked by the rise of the Artificial Intelligence, a technology defined by the European Parliament as “the machine ability to replicate human-like behaviors, such as reasoning, planning and creativity”.
Therefore, the conversational agent ChatGPT, created by OpenAI, garnered immediate enthusiasm upon its release to the general public. The major promises of Artificial Intelligence, notably its potential to enhance operational performance through capabilities in analysis, decision-making and interaction with unstructured data, make this technology a strategic tool for organizations and institutions that adopt it on a large scale. In fact, productivity gains associated with AI usage are estimated to be between 11% and 37% by 2035.
It is important to remember that Artificial Intelligence models are not infallible. They can extensively perpetuate and spread certain biases, especially if the training data is historically biased or not representative of the populations or use cases. Developers may also inadvertently pass their own biases to the models through decisions they make, the features they select, or the objectives they set for the algorithm. Moreover, algorithms themselves can introduce biases due to their design and functioning (such as optimization bias) or amplify an existing bias by reproducing it on a large scale. In March 2024, UNESCO expressed concerns about the gender biases in generative AI models, recommending that companies in the sector diversify their development teams and training data. This initial shortcoming has paved the way for broader questions about the security of Artificial Intelligence models, particularly data protection within companies, 40% of which reported having already adopted AI by the end of 2023.
Naturally, the widespread deployment of Artificial Intelligence in professional environments appeals to users. They sometimes tend to rely exclusively on generative AI systems to perform their tasks, often at the expense of best practices in confidentiality and data protection. Leaders are thus compelled to question the intellectual property ownership of IA-generated outputs, in addition to ethical and regulatory considerations. Additionally, it is part of their responsibility to regulate AI usage and to identify opportunities that provide real value to their operations. Among the most common and feared risks are data leaks, intellectual property reuse and violation of GDPR regulations, which can lead to severe compliance breaches and harsh penalties. A notable example is CLEARVIEW AI, the American facial recognition solution, which received the highest penalty applicable by the CNIL in 2022. The fine imposed amounted to 4% of the company’s global turnover, or twenty million euros.
Furthermore, several cases of intentional malicious use of Artificial Intelligence models have been reported worldwide, particularly involving large-scale misinformation campaigns relying on deepfakes. Last June, the California-based start-up OpenAI disclosed that it had already prevented five influence operations based on its AI models aimed at manipulating public opinion in Russia and China. The near-universal adoption of Artificial Intelligence by a broad audience has also inspired malicious actors to develop solutions for cybercrime. Thus, the malicious conversational agents FraudGPT and WormGPT were reportedly identified in 2024 on forums popular among the hacking community. These two solutions, heavily inspired by ChatGPT, disregard the legal and ethical constraints of the latter. They specifically assist users in generating phishing content and drafting malware through Python scripts. More broadly, the computational power of Artificial Intelligence models enables greater sophistication, increased speed of attack execution, and an expanded target scope. In the absence of appropriate regulations and oversight, AI can facilitate the automation and industrialization of cyberattacks through malicious scripts, self-learning malware, evasion attacks, exploitation of Zero Day vulnerabilities, automated phishing campaigns, and more.
Securing Artificial Intelligence, which must consider various threats as outlined by OWASP in its LLM AI Cybersecurity and Governance checklist, is a fundamental effort to ensure a sustainable future for this transformative technology. This task encompasses a dual challenge: on one hand, it involves securing the systems themselves and, by extension, the data they handle. On the other hand, it aims to mitigate risks to individuals. Thus, the security of systems is based on the principles of Security by Design, which are sometimes adapted into MLSecOps and AISecOps, allowing for the integration of testing from the design phase of algorithms. In Europe and in the United States, safety authorities such as ANSSI, NIST and BSI are addressing the issue to support initiatives. Here are some of the best practices to be follow:
In conclusion, the rapid and widespread adoption of Artificial Intelligence necessitates increased vigilance against cyber threats and potential abuses.The promising benefits of AI will only be sustainable if rigorous security measures and ethical practices are implemented through collaborative efforts to establish trust on a global scale. With a deep understanding of the industry and an excellent mastery of the technologies and issues associated with AI, Sia Partners experts possess a unique position to guide companies toward a transparent and fair use of Artificial Intelligence.