We're steadily breaking down all the mechanisms ordinary people can use to trust the information they receive, such as getting a video call from a family member and recognising their face and voice.
All previous scams relied on con artists using various means to pretend either to be trustworthy in their own right despite being strangers, or to represent some trustworthy institution. But having people able to impersonate your family members is a whole other issue. You can't claim that this is no different from other scams.
In Argentina they've been scamming people by pretending to be relatives for years now. No AI required. Somebody calls you in the middle of the night, distraught, crying "mom?" or "dad?". Then someone else interrupts, claims your son or daughter has been abducted (by now you've probably given away the name while responding to the initial plea), and demands money be dropped at some location where a motorcycle picks it up. They keep you on the line so you can't call the police or the allegedly abducted person.
These calls are usually made from prison, with an outside accomplice. The scammers are rarely caught. Victims are sleepy and frightened, and swear the voice they heard was their child's.
Another popular no-AI scam works by stealing a WhatsApp account (e.g. by cloning the SIM) and then contacting a friend or relative to ask for a quick cash transfer for something urgent, to be repaid the next day.
Deepfakes might make these scams more believable, but the core causes of the issue and the solutions have not changed.
That sort of scam has been possible, sure, but it depends on the victim being startled and half asleep, and unable to reach the supposed abductee, or anyone near them, with a follow-up call. You're right that I shouldn't have been so absolutist in saying 'all' previous scams, but this is an edge case that doesn't equate to what real-time deepfakes are making possible.
What this sort of tech enables is scamming where even someone who is fully awake, reasonably alert, and not normally prone to falling for scams can nonetheless be tricked by a video call.
It's taking us to a point where the only safe way to be certain that the person you're speaking to is who they say they are, in all circumstances, is to meet in person. That's not possible for people living far from the rest of their family.
Do you not see how fundamentally this breaks the trust models we have built over the past few decades?