The Dark Side of Innovation: Identity Theft, Fraud and the Rise of Generative AI
In recent years, technological advancements in artificial intelligence have revolutionized various fields, making remarkable progress in creating incredibly realistic content. We’ve all been amazed by the possibilities and progress. It’s no wonder that within two months of its launch in late November 2022, ChatGPT had 100 million monthly active users, a number that took Instagram two and a half years to reach and TikTok nine months.
At the same time, one cannot ignore the privacy and societal implications these technologies raise. For example, while many alarm bells are going off around the rise of voice cloning and deep fakes, some artists are embracing voice cloning and offering to split royalties with any person who comes up with a successful song that uses their voice. From a security standpoint, we already understand where many of the dangers lurk. Data is limited, but there are growing reports of AI-powered scams using cloned audio clips that are then tied to ransom demands. To illustrate how accessible these tools are: some providers offer free trial periods and then charge monthly fees as low as $9.99 for their service, hardly a barrier for any enterprising cybercriminal.
Understanding Deep Fakes, Identity Theft and Generative AI
Deep fakes are AI-generated images, videos, or audio that convincingly manipulate or replace a person's likeness in existing content, often making it challenging to discern between real and fake. The generative models behind many deep fakes are generative adversarial networks (GANs), consisting of two neural networks: the generator, responsible for creating realistic content, and the discriminator, tasked with differentiating between real and fake. As training progresses, each network improves against the other, leading to the generation of increasingly authentic and deceptive deep fakes.
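To make the generator-versus-discriminator dynamic concrete, here is a minimal, illustrative sketch in Python with NumPy. The single-layer "networks", the toy one-dimensional data, and the single training step are all assumptions for illustration only; real deep-fake models use far larger networks and many thousands of such adversarial steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Generator:
    """Maps random noise to 'fake' samples (a single linear layer here)."""
    def __init__(self, noise_dim=4, out_dim=1):
        self.W = rng.normal(0.0, 0.1, (noise_dim, out_dim))
    def sample(self, n):
        z = rng.normal(size=(n, self.W.shape[0]))  # random noise in
        return z @ self.W                          # fake samples out

class Discriminator:
    """Scores how 'real' a sample looks (logistic regression here)."""
    def __init__(self, in_dim=1):
        self.w = rng.normal(0.0, 0.1, (in_dim,))
        self.b = 0.0
    def score(self, x):
        return sigmoid(x @ self.w + self.b)  # probability of "real"

gen, disc = Generator(), Discriminator()
real = rng.normal(2.0, 0.5, size=(8, 1))  # stand-in for real training data
fake = gen.sample(8)

# One adversarial step: the discriminator ascends the log-likelihood of
# labelling real samples 1 and fakes 0; the generator would then be
# updated to push its fakes toward higher discriminator scores.
# Iterating this loop is what makes the fakes progressively convincing.
d_real, d_fake = disc.score(real), disc.score(fake)
lr = 0.1
disc.w += lr * ((real.T @ (1 - d_real)) - (fake.T @ d_fake)).ravel() / 8
disc.b += lr * float(np.mean(1 - d_real) - np.mean(d_fake))
```

The same two-player structure, scaled up to deep convolutional networks and image or audio data, is what produces the deep fakes discussed above.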
The problem from an identity theft point of view is twofold: a deep fake can present as a legitimate person, AND it can have the information to act like that legitimate person. This means that the deep fakes used to create synthetic identities, drive impersonation and account takeover attacks, and exacerbate money laundering schemes will be more effective when “armed” with legitimate information to bypass security controls. By generating fake documents, images, or even video footage of individuals, and combining them with real social security numbers, credit card numbers and other sensitive information, it becomes far easier to impersonate people and commit fraud.
Even before the advent of Generative AI, fraud trends were heading in an alarming direction. According to market research firm Javelin Strategy & Research, there were $43 billion in identity fraud related losses in 2022, and the latest data breach numbers from the Identity Theft Resource Center suggest we are on pace for another record-breaking year for fraud. While traditional methods of identity theft relied primarily on hacking databases or phishing emails, Generative AI introduces an even more insidious element. So if we think the numbers are bad now, the fraud prevention problem will only get exponentially worse if we don’t address the root cause of how we manage identity and cybersecurity.
Simply put, the root cause of fraud boils down to two primary elements:
- Personal data is stored inside central honeypots that are impossible to protect;
- We allow the use of this data for access into networks and personal accounts.
Besides the data breaches themselves, the problems manifest through phishing attacks, fake websites, stolen OTPs and other well-known fraud techniques. Passwordless authentication is poised to help address some of these challenges, but for the most part the solutions on the market are disjointed and hard for enterprises to integrate and deploy. That leaves fraudsters plenty of room to operate successfully, as witnessed by a whopping 84% increase in walk-in check cashing fraud in the last year and the contact center channel remaining a favorite for fraudsters to reset accounts, change account details, and take out new loans. The point is that securing the digital channel with passwordless approaches alone is nowhere near enough to combat the problem.
5 Steps to Combating the Risks of AI-Generated Identity Theft
Before we get into how to combat AI-generated identity fraud, it is important to appreciate the situation we are in. Generative AI may not necessarily create new types of attacks; what it will do is make fraudsters even more effective in their work. Bland statements like “fight bad AI with good AI,” “make sure you have good multi-factor authentication,” or “it is critical to enhance awareness” will ultimately do nothing to combat the problem.
As stated earlier, to make a dent in fraud prevention, all stakeholders will need to rethink how we manage identity and cybersecurity risks. No industry and no individual is immune.
Here are 5 concrete steps that can be taken:
- Eliminate central honeypots of personal data: Using new Privacy Enhancing Technologies (PETs) like Zero-Knowledge Proofs and Multi-Party Computation, it is possible to fully protect and secure personal data of all types, including biometrics, transaction data, health data and other sensitive information. There is a parallel discussion about verifiable credentials and ensuring individual control over the use and transfer of personal information, but the fact remains that there are still plenty of use cases where enterprises will need to manage large amounts of personal information, and it is important that this data is secured in the best possible manner.
- Ensure a consistent, persistent biometric across the user journey: Today’s identity management systems are disjointed. While digital onboarding continues to grow exponentially, many organizations do not store the data collected during onboarding for fear of a data breach. This puts any downstream authentication activity at risk, with fraudsters using stolen information to bypass controls. Securely storing that data, especially the user biometrics collected in this process, allows the enterprise to close the gaps that attackers currently exploit.
- Use liveness detection to ensure the “realness” of the biometric that is presented: Since the advent of biometric technologies, there has been the threat of gummy fingers, photo and video presentation attacks and other techniques to trick the biometric system. A class of technologies called liveness detection has been developed to securely detect these types of attacks, and as a result, today’s leading providers report near-100% success rates in detecting these types of presentation attacks.
- Apply injection detection techniques to make sure that a session has not been compromised: The biometrics industry has been reporting increasing attacks that use emulators to spoof device metadata and digitally inject biometric data, with such injection attacks now reportedly five times more frequent than traditional presentation attacks. These attacks are well known in the fraud prevention space, which has developed techniques that combine advanced device fingerprinting with other methods, such as velocity checks, to collect a combination of data points that provide confidence in the integrity of a session.
- Augment static authentication mechanisms with dynamic fraud prevention and risk detection mechanisms to enhance accuracy and maintain a good user experience: One question that comes up a lot with biometrics is the impact on the user experience. With adaptive authentication, low-risk activities can be made less burdensome, while detected high risk triggers increased or enhanced security measures; this can include raising the biometric authentication threshold and/or requiring more than one biometric modality to be presented for authentication.
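As a concrete illustration of the first step, here is a minimal sketch of additive secret sharing, one of the basic building blocks behind Multi-Party Computation. The modulus, share count, and sample values are illustrative assumptions; production MPC systems add much more (authenticated shares, secure channels, malicious-party protections).

```python
import secrets

P = 2**61 - 1  # large prime modulus; all shares live in Z_P

def share(value, n=3):
    """Split an integer into n additive shares mod P. Any n-1 shares
    are jointly uniform random, so a breach of a single store
    (or even n-1 of them) reveals nothing about the secret."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

ssn = 123456789          # stand-in for a sensitive value
shares = share(ssn)      # each share stored with a different party
assert reconstruct(shares) == ssn

# Additive sharing is homomorphic under addition: parties add their
# shares locally, and the sum reconstructs correctly, enabling
# computation on data no single party ever sees in the clear.
a, b = share(20), share(22)
summed = [(x + y) % P for x, y in zip(a, b)]
assert reconstruct(summed) == 42
```

Instead of one database holding every record, an attacker would need to compromise all of the share-holding parties simultaneously, which removes the central honeypot.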
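The last two steps can be sketched together: a toy risk engine that folds in a velocity check alongside a device signal, then maps the resulting score to authentication requirements. All signal names, weights, and thresholds here are invented for illustration and are not taken from any real product.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    new_device: bool                 # device fingerprint not seen before
    failed_attempts_last_hour: int   # simple velocity check
    high_value_action: bool          # e.g. changing account details

def risk_score(s: SessionSignals) -> int:
    """Combine session signals into a 0-100 risk score (toy weights)."""
    score = 40 if s.new_device else 0
    score += min(s.failed_attempts_last_hour * 10, 30)
    score += 30 if s.high_value_action else 0
    return min(score, 100)

def auth_policy(score: int) -> dict:
    """Adapt the biometric match threshold, and step up to a second
    modality, as risk increases (illustrative cut-offs)."""
    if score < 30:
        return {"biometric_threshold": 0.80, "modalities": 1}
    if score < 70:
        return {"biometric_threshold": 0.90, "modalities": 1}
    return {"biometric_threshold": 0.95, "modalities": 2}

low = auth_policy(risk_score(SessionSignals(False, 0, False)))
high = auth_policy(risk_score(SessionSignals(True, 3, True)))
```

A routine login from a known device gets the lighter policy, while a high-value action from a new device with repeated failures is stepped up to a stricter threshold and a second modality, which is the user-experience trade-off adaptive authentication is meant to manage.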
Identity Theft, Deep Fakes and Generative AI: The Discussion Continues
Generative AI offers remarkable potential for innovation, but we must be vigilant about its dark side. As technology evolves, so do the tactics of cybercriminals, but the important thing to note is that we are dealing with a recognizable playbook, and we have the tools to meet the challenge. By adopting proactive fraud prevention and strong authentication measures and fostering a culture of awareness, we can strive to harness the full potential of generative AI while protecting ourselves from its misuse. Together, we can create a safer digital landscape for everyone.
Join me in a conversation with Alexey Khitrov from IDR&D as we explore this topic further in a virtual chat on July 27 at 11:00AM Eastern. Register here.