AI-generated image, video and audio content is increasingly being used for disinformation, manipulation and identity fraud.
Artificial intelligence is the central theme of this year's European Cyber Week, held from 19 to 21 November in Rennes, Brittany. In a challenge organised to coincide with the event by France's Defence Innovation Agency (AID), Thales teams have successfully developed a metamodel for detecting AI-generated images. As AI technologies gain traction, and at a time when disinformation is becoming increasingly prevalent in the media and affecting every sector of the economy, the deepfake detection metamodel offers a way to combat image manipulation in a wide range of use cases, such as the fight against identity fraud.
AI-generated images are created using AI platforms such as Midjourney, Dall-E and Firefly. Several studies predict that within a few years, the use of deepfakes for identity theft and fraud could cause substantial financial losses. Gartner has estimated that around 20% of cyberattacks in 2023 likely included deepfake content as part of disinformation and manipulation campaigns. Its report highlights the growing use of deepfakes in financial fraud and advanced phishing attacks.
"Thales's deepfake detection metamodel addresses the problem of identity fraud and morphing techniques," said Christophe Meyer, Senior Expert in AI and CTO of cortAIx, Thales's AI accelerator. "Aggregating multiple methods using neural networks, noise detection and spatial frequency analysis helps us better protect the growing number of solutions requiring biometric identity checks. This is a remarkable technological advance and a testament to the expertise of Thales's AI researchers."
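The spatial frequency analysis mentioned in the quote can be illustrated with a toy sketch: image generators often leave statistical traces in an image's frequency spectrum, so one simple cue is the share of spectral energy sitting in the high-frequency coefficients. The function below is an illustrative assumption for explanation only, not Thales's implementation; it applies a naive 1-D DCT-II to a row of pixel values.

```python
import math

def dct(signal):
    """Naive DCT-II: converts a signal into frequency coefficients.
    Coefficient k=0 captures the average; higher k capture finer detail."""
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n)]

def high_freq_ratio(signal):
    """Share of spectral energy in the upper half of the coefficients.
    A detector might flag images whose ratio deviates from natural photos."""
    energy = [c * c for c in dct(signal)]
    return sum(energy[len(energy) // 2:]) / sum(energy)

smooth = [1.0, 1.0, 1.0, 1.0]    # flat region: energy stays in low frequencies
noisy = [1.0, -1.0, 1.0, -1.0]   # alternating pattern: energy moves to high frequencies
print(high_freq_ratio(smooth) < high_freq_ratio(noisy))  # True
```

A production system would run a 2-D transform over image blocks and feed the resulting statistics to a classifier rather than applying a fixed threshold.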
The Thales metamodel uses machine learning techniques, decision trees and evaluations of the strengths and weaknesses of each model to analyse the authenticity of an image. It combines several complementary models, including neural network classifiers, noise detection and spatial frequency analysis.
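One way such a metamodel can aggregate its component detectors is a reliability-weighted vote over each model's fake-probability score. The sketch below is a hedged illustration under that assumption; the detector names, scores, weights and threshold are all invented for the example and are not Thales's actual models.

```python
def aggregate_scores(scores, weights):
    """Weighted average of per-detector fake-probabilities (0..1)."""
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

def is_likely_fake(scores, weights, threshold=0.5):
    """Simple decision rule on the aggregated score."""
    return aggregate_scores(scores, weights) >= threshold

# Illustrative detector outputs for one image (all values assumed):
scores = {"cnn": 0.92, "noise": 0.70, "frequency": 0.55}
# Reliability weights, e.g. derived from each model's validation accuracy:
weights = {"cnn": 0.5, "noise": 0.3, "frequency": 0.2}

print(round(aggregate_scores(scores, weights), 2))  # 0.78
print(is_likely_fake(scores, weights))              # True
```

In practice the aggregation step is itself learned (for example with decision trees, as the release mentions), so that the combiner exploits each detector's strengths on different kinds of generated images.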
The Thales team behind the invention is part of cortAIx, the Group's AI accelerator, which has over 600 AI researchers and engineers, 150 of whom are based at the Saclay research and technology cluster south of Paris and work on mission-critical systems. The Friendly Hackers team has developed a toolbox called BattleBox to help assess the robustness of AI-enabled systems against attacks designed to exploit the intrinsic vulnerabilities of different AI models (including Large Language Models), such as adversarial attacks and attempts to extract sensitive information. To counter these attacks, the team develops advanced countermeasures such as unlearning, federated learning, model watermarking and model hardening.
In 2023, Thales demonstrated its expertise during the CAID challenge (Conference on Artificial Intelligence for Defence) organised by the French defence procurement agency (DGA), which involved finding AI training data even after it had been deleted from the system to protect confidentiality.
About Thales
Thales (Euronext Paris: HO) is a global leader in advanced technologies specialising in three business domains: Defence & Security, Aeronautics & Space and Cybersecurity & Digital Identity.
The Group develops products and solutions that help make the world safer, greener and more inclusive.
Thales invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.
Thales has 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.
Developing AI systems we can all trust | Thales Group
2023 Gartner Report on Emerging Cybersecurity Risks.
Morphing involves gradually changing one face into another in successive stages by modifying visual features to create a realistic image combining elements of both faces. The final result looks like a mix of the two original appearances.