Technical Security Risks in the Era of Deepfakes and Generative AI
The session, presented by Stephanie Itimi, focuses on the security risks associated with deepfakes and generative AI, emphasising their impact on individuals, businesses, and society. It covers the dangers posed by manipulated media, the vulnerabilities of AI models to attacks like data poisoning and model inversion, and outlines strategies for mitigating these risks. Additionally, it explores how generative AI can be leveraged for both cyber defence and offence, highlighting the importance of adopting best practices and techniques to safeguard against these emerging threats.
• Deepfakes and generative AI pose significant security risks: they can be used to spread misinformation and commit fraud at scale.
• Attacks such as data poisoning can corrupt a model's training data and lead to biased or incorrect predictions, while model inversion can reconstruct sensitive training data from a model's outputs.
• Mitigation strategies include using high-quality data, adversarial training, continuous monitoring, and employing a multi-layered defence strategy.
• Generative AI offers potential for cyber defence, such as detecting deepfakes and generating decoy data, but it can also be turned to offensive use, for example crafting targeted social engineering attacks.
• Businesses and individuals must take these threats seriously and adopt recommended practices and technologies to protect against the vulnerabilities introduced by deepfakes and generative AI.
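To make the data-poisoning risk mentioned above concrete, here is a minimal, self-contained sketch. The toy dataset, the nearest-centroid classifier, and the label-flipping strategy are all illustrative assumptions, not material from the session; real poisoning attacks target far larger models, but the mechanism is the same: an attacker who can tamper with training labels shifts the model's decision boundary.

```python
# Illustrative label-flipping data poisoning against a toy
# nearest-centroid classifier (hypothetical data, not from the talk).

def centroid(points):
    """Mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: list of (features, label) -> per-class centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

# Toy data: class 0 clusters near (0, 0), class 1 near (10, 10).
clean = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
         ((10, 10), 1), ((9, 10), 1), ((10, 9), 1)]

# The attacker flips the labels of two class-0 training samples.
poisoned = [(x, 1 - y) if i < 2 else (x, y)
            for i, (x, y) in enumerate(clean)]

clean_model = train(clean)
poisoned_model = train(poisoned)

print(predict(clean_model, (4, 4)))     # → 0 (correct side of the boundary)
print(predict(poisoned_model, (4, 4)))  # → 1 (boundary shifted by poisoning)
```

The poisoned model misclassifies a point the clean model handles correctly, which is exactly why the mitigation bullets above stress high-quality, provenance-checked training data and continuous monitoring of model behaviour.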