The story so far: You may have received multiple warnings from your banks and other financial institutions about a surge in AI-driven crimes. In December 2024, the U.S. FBI issued a warning, stating that criminals are increasingly exploiting generative artificial intelligence (AI) to commit fraud on a larger scale, making their schemes more convincing.
Enterprising criminals exploit generative AI-powered text, images, videos, and audio to trap victims who lack the technological proficiency, time, or energy to thoroughly assess potentially hazardous content.
Today, a variety of unregulated or even illegal generative AI tools are available on the web. These tools enable the creation of scams from start to finish, resulting in a web of multimedia tricks that separates victims from their money. Attackers can use text generators alone to craft grammatically correct messages that threaten or deceive victims in their native languages. They can also generate malicious code to build websites that compromise victims’ systems.
Criminals can exploit AI images to create convincing deepfakes of victims, fooling their loved ones. They can also fabricate false photos of crucial documents, produce sexually explicit images for extortion, establish fake social media or dating app profiles, and even portray celebrities endorsing services and scams they would never support in real life, as per the FBI’s recent release.
Malicious users can exploit the voice-cloning capabilities of generative AI audio tools to create fake recordings of real people in distress. These recordings can be sent as voice messages, or even used in elaborate fake telephone calls, to pressure the victims’ contacts into transferring money.
When AI-generated videos enter the scene, attackers can orchestrate intricate scenarios or even circumvent liveness checks designed to verify that a user is human. For instance, by capturing just a few seconds of video from your social media account, an attacker could create a convincing deepfake video call in which you appear to have been involved in an accident and are desperately seeking financial assistance.
It is best to be cautious of unexpected requests for money, even from your loved ones. Additionally, be wary of surprise requests to carry out various financial activities, including redeeming gift cards, claiming prize money, paying fines, repaying loans, paying customs officials, or paying bail.
Be extra cautious when receiving texts or media files from unfamiliar users. Unless you have a secure device or environment, refrain from opening such files.
When video scams, like the prevalent ‘digital arrest’ scheme, target a victim, the caller will likely resort to aggressive or intimidating tactics to force compliance. They might create a sense of urgency, claiming that time is running out or that the victim must act immediately. They might also insist that the matter be kept secret from others. To avoid falling for such scams, never share financial information or transfer money through unsecured channels.
Instead of believing a caller who shows you a uniform and a police ID (both of which could be deepfaked), cease all communication, contact the person supposedly involved directly, and reach out to the police for guidance.
One way to safeguard yourself and your family from AI-generated financial fraud is a family password: a unique word known only to you and your immediate family members, which you can use to verify someone’s identity whenever needed. For instance, if you receive a phone call from your child or parent requesting a large sum of money, ask them for the family password to confirm that their voice hasn’t been cloned by a stranger. Agree on a strong, difficult-to-guess password and update it regularly.
If your family includes minors or vulnerable elders, ensure that their devices are safeguarded. Lock their devices and set their social media accounts to private to prevent the misuse of their personal photos and audio. Educate your children about online safety and help them distinguish between genuine and AI-generated content.
When using dating apps, users should report any matches who use AI-generated media. Never share personal financial information with romantic matches, and never agree to transfer money, invest in cryptocurrency, pay customs charges for parcels, or accept gift cards. Until you completely trust the person and have met them in person, stay on the app’s messaging platform and avoid switching to other apps like WhatsApp or email.
When donating to charities, ensure that the featured images of different causes and team members are genuine. Give to groups or individuals you trust deeply, those whom you know personally, or those with high transparency levels. Additionally, you can use Gen AI image detectors to review photos.
Finally, do not implicitly trust calls from users whose profile pictures include police or military-related accessories, as these can be easily generated using AI. Real police officers never arrest or make demands of you through video calls. If you encounter such a call, record it and submit the evidence to a genuine police station.
Alternatively, you can report malicious content generated by AI through the national cyber-crime portal.
Published - March 07, 2025 02:03 pm IST