Romance fraud has long thrived on online dating apps. Cybercriminals go to great lengths to steal money, personal information, and explicit images, and their fake profiles are everywhere.
The rise of generative AI tools is making romance scams even easier to pull off, lowering the barrier to entry for would-be fraudsters. Here are the seven most common ways romance scammers use AI to their advantage, along with ways to protect yourself.
1. Sending AI-Generated Emails en Masse
Spam emails are becoming harder to filter. Romance scammers use generative AI tools to spin up multiple accounts and craft convincing, deceptive messages at scale, reaching hundreds of targets in no time.
AI-generated spam appears on many platforms beyond your email inbox. Take the wrong-number scam, for example: scammers send cute selfies or sexually suggestive photos to random numbers, and if anyone responds, they pass the message off as an innocent mistake.
Once someone takes the bait, the scammer moves the conversation to a different messaging platform (such as WhatsApp or Telegram). Most schemes play out over weeks: scammers gradually earn their victims' trust before asking them to join investment schemes, cover bills, or pay for trips.
Keep yourself safe by ignoring spam messages entirely. Don't engage with strangers, no matter how attractive they look or what they offer.
2. Responding to More Conversations Quickly
Bots are spreading across the internet like wildfire. According to Imperva, bad bots accounted for roughly 30 percent of all web traffic in 2022. Swipe through Tinder matches and you'll encounter one within seconds.
The proliferation of generative AI tools is one factor behind this sudden surge. These tools churn out bots en masse: enter the right prompt, and they will supply a complete, working bot-generation code snippet.
Learn to recognize when a bot is talking to you. Although AI adopts a conversational tone, its dialogue still sounds dull and repetitive. Chatbots, after all, merely follow patterns, so they tend to produce similar responses to different questions, statements, and requests.
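That repetitiveness can even be checked mechanically. The sketch below is a minimal, hypothetical illustration (not a production detector): it compares a chat partner's replies pairwise with Python's standard-library `difflib` and flags the conversation when most reply pairs are near-duplicates. The function name, threshold, and sample replies are all illustrative assumptions.

```python
from difflib import SequenceMatcher

def looks_repetitive(replies, threshold=0.8):
    """Flag a chat partner whose replies are near-duplicates of each other.

    A crude stand-in for the pattern-following behavior described above:
    bots often answer different prompts with almost identical text.
    """
    # All unordered pairs of replies
    pairs = [
        (a, b)
        for i, a in enumerate(replies)
        for b in replies[i + 1:]
    ]
    if not pairs:
        return False
    # Count pairs whose text similarity exceeds the threshold
    similar = sum(
        SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
        for a, b in pairs
    )
    # Repetitive if at least half of all reply pairs are near-identical
    return similar / len(pairs) >= 0.5

# Three near-identical canned answers to different questions
bot_replies = [
    "I love travel and good food! What about you?",
    "I love travel and nice food! What about you?",
    "I love travel and good food! How about you?",
]
print(looks_repetitive(bot_replies))  # True
```

A real detector would need far more signal (response timing, topical relevance, profile metadata), but the core intuition matches the advice above: varied, context-aware replies are hard for pattern-following bots to fake.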
3. Creating Multiple Identities From Stolen Images
AI art generators can alter images. Take the demonstration below: we fed Playground AI a candid photo of a famous singer, and the platform produced three different versions within seconds.
Yes, they are imperfect. But keep in mind that we used a free tool with an outdated text-to-image model. With more advanced iterations, fraudsters produce far more realistic output, quickly rendering hundreds of altered, customized photos from just a few samples.
Unfortunately, AI images are hard to identify. Your best bet is to run a reverse image search and sift through the relevant results.
4. Building Deceptively Authentic-Looking Profiles
While bots approach victims in bulk, romance scammers who prefer a targeted scheme create just one or two authentic-looking profiles, and they use AI to appear convincing. Generative AI tools can write descriptions that sound real and natural; grammar errors are no longer a giveaway.
Here's a list of dating-profile hobbies suggested by ChatGPT.
ChatGPT can even write a complete dating-profile biography.
Because this approach takes more time, it also demands a bigger payoff, so scammers frequently ask for more. Once they have gained your trust, they will request help with various "problems," such as hospital bills, loan payments, or tuition fees. Some will even claim they'll visit you if you pay for their ticket.
These online criminals are adept at manipulating victims. The best strategy is to refuse to engage from the start. Don't let them draw you into conversation; otherwise, you might gradually fall for their tricks and gaslighting.
5. Exploiting Deepfake Technology for Sexual Extortion
AI has developed deepfake tools at an alarming rate. New technologies minimize the telltale flaws of deepfake videos: unnatural blinking, uneven skin tones, distorted audio, and inconsistent elements.
Sadly, these improvements also raise concerns. As tools erase those flaws, it becomes harder to tell genuine videos from deepfakes.
Bloomberg demonstrates how anyone with a basic understanding of technology can impersonate others by manipulating their voice and likeness.
Beyond creating authentic-looking dating profiles, scammers exploit deepfake tools for sexual extortion. They splice public photos and videos into pornography, then use the manipulated content to blackmail victims, demanding money, personal information, or sexual favors.
If you are targeted, do not give in. Should you find yourself in this predicament, you can call the FBI at 1-800-CALL-FBI, submit a tip online, or visit your local FBI field office.
6. Integrating AI Models With Brute-Force Hacking Systems
Open-source language models drive many AI advancements, but they can also be abused; criminals will exploit anything available. Don't expect them to ignore the algorithms powering sophisticated language models like LLaMA and OpenAssistant.
In romance scams, hackers often combine language models with password cracking. AI's NLP and machine-learning capabilities let brute-force systems generate password combinations quickly and efficiently; given enough context about a target, they can even make informed guesses.
You have no control over what scammers do, so safeguard your accounts by creating truly secure passwords: more than 14 characters, combining letters, numbers, and special characters.
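Those criteria are easy to check programmatically. The snippet below is a minimal sketch that tests a password against exactly the rules suggested above (length over 14, plus letters, digits, and special characters); the function name is an illustrative assumption, and a real checker should also screen against breached-password lists.

```python
import string

def meets_article_criteria(password: str) -> bool:
    """Check a password against the rules suggested above:
    more than 14 characters, mixing letters, digits,
    and special characters.
    """
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_special = any(c in string.punctuation for c in password)
    return len(password) > 14 and has_letter and has_digit and has_special

print(meets_article_criteria("correct-horse-battery-9!"))  # True
print(meets_article_criteria("password123"))               # False: too short, no special character
```

Length matters most here: each extra character multiplies the search space a brute-force system must cover, which is why the 14-character floor does more work than any single special character.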
7. Imitating Real People With Voice Cloning
At first, AI voice generators were novelty toys: users would cover or even create new songs from sample tracks of their favorite artists. Take "Heart on My Sleeve," for example. TikTok user Ghostwriter977 produced a strikingly realistic track imitating Drake and The Weeknd, even though neither artist sang on it.
Despite the jokes and memes surrounding it, speech synthesis is extremely dangerous, enabling sophisticated criminal attacks. Romance scammers, for instance, use voice-cloning software to call victims and leave fake recordings. People who have never encountered speech synthesis may notice nothing out of the ordinary.
To avoid AI voice-clone scams, learn how synthesized output sounds by experimenting with these generators. Because they produce only near-identical clones, you can still pick out flaws and inconsistencies.
Protect Yourself Against AI Dating Scammers
As generative AI tools improve, romance scammers will find new ways to exploit them, and developers alone can't stop these criminals. Rather than relying solely on security measures, take an active role in protecting yourself. You can still use dating apps, but make sure you know the person on the other side of the screen before trusting them.
Also watch out for other AI-aided schemes. Beyond romance scams, criminals use AI for identity theft, cyber extortion, blackmail, ransomware attacks, and brute-force hacking. Learn to deal with those threats, too.