Defending Against the Use of Deepfakes for Cyber Exploitation
Cybercrime is on the rise, including an elevenfold increase in ransomware attacks, a surge in attacks on high-profile targets, and the emergence of new methodologies. Cybersecurity researchers have cautioned for years about artificial intelligence and the rise of deepfake technology, and it has now officially arrived. Experts warn that cybercriminals are increasingly sharing, developing and deploying deepfake technologies to bypass biometric security protections and to commit crimes including blackmail, identity theft and social engineering attacks.
Right now, the researchers said, discussions among threat actors about deepfake products and technologies are concentrated largely in English- and Russian-language criminal forums, though related topics have also been observed on Turkish-, Spanish- and Chinese-language forums. Much of the chatter in these underground forums focuses on how-tos and best practices, according to Recorded Future, suggesting a widespread effort across the cybercriminal ecosystem to sharpen deepfake tools.
The technology uses artificial intelligence to superimpose and combine real and AI-generated images, video and audio so that the results look almost indistinguishable from the real thing, and their apparent authenticity is rapidly reaching disturbing levels. The most common deepfake-related topics on dark-web forums included services (editing videos and pictures), how-to methods and lessons, requests for best practices, free software downloads and photo generators, general interest in deepfakes, and announcements of advances in deepfake technology. In the cybersecurity world, deepfakes are a growing concern because they use artificial intelligence to imitate human activity and can be used to augment social engineering attacks.
“The progressive uptick in synthetic identity fraud is likely due to multiple factors, including data breaches, dark web data access and the competitive lending landscape,” the Experian “Future of Fraud Forecast” said. “As methods for fraud detection continue to mature, Experian expects fraudsters to use fake faces for biometric verification. These ‘Frankenstein faces’ will use AI to combine facial characteristics from different people to form a new identity, creating a challenge for businesses relying on facial recognition technology as a significant part of their fraud prevention strategy.”
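One way defenders can reason about such blended identities is that a "Frankenstein face" tends to partially resemble several distinct enrolled people without strongly matching any single one of them, whereas a genuine face usually matches at most one identity strongly. The sketch below is purely illustrative and not drawn from the Experian report: the toy embedding vectors, similarity thresholds and the `flag_synthetic_candidate` helper are all hypothetical stand-ins for a real face-recognition pipeline.

```python
# Illustrative sketch only: flagging a possible "Frankenstein face" at
# enrollment time by cross-matching its embedding against known identities.
# Embeddings, thresholds and function names are hypothetical; a real system
# would use vectors produced by a face-recognition model.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_synthetic_candidate(new_face, enrolled, match=0.9, partial=0.6):
    """Heuristic: flag when the new face strongly matches no one but
    partially resembles two or more distinct enrolled identities."""
    sims = [cosine_similarity(new_face, e) for e in enrolled.values()]
    strong = sum(1 for s in sims if s >= match)
    partial_hits = sum(1 for s in sims if partial <= s < match)
    return strong == 0 and partial_hits >= 2

# Toy embeddings standing in for model output.
enrolled = {
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.1, 0.9, 0.0],
}
# A "blend" of alice and bob: moderately similar to both, identical to neither.
blend = [0.6, 0.6, 0.1]
print(flag_synthetic_candidate(blend, enrolled))  # prints True for this toy case
```

The thresholds here are arbitrary; in practice they would be tuned against the false-accept and false-reject rates of the specific recognition model in use.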