A large percentage of my work is peripheral to info security (ISO 27001, CMMC, SOC 2), and I've been building internet companies and software since the '90s (so I have a technical background as well), which makes me think I'm qualified to have an opinion here.
And I completely agree that LLMs (the way they have been rolled out for most companies, and how I've witnessed them being used) are an incredibly underestimated risk vector.
But on the other hand, I'm pragmatic (some might say cynical?), and I'm just left here thinking "what is Signal trying to sell us?"
I didn't mean to imply a conflict of interest, I'm wondering what product or service offering (or maybe feature on their messaging app) prompted this.
No other major tech leaders are saying the quiet parts out loud, right? About the efficacy, the cost to build and operate, or the security and privacy nightmares created by the way we have adopted LLMs.
Whittaker’s background is in AI research. She talks a lot (and has been for a while) about the privacy implications of AI.
I’m not sure of any one thing that could be considered to have prompted it. But a large one is the wide deployment of models on devices with access to private information (Signal potentially included).
Maybe it's not about gaining something, but rather about not losing anything. Signal seems to operate from a kind of activism mindset, prioritizing privacy, security, and ethical responsibility, right? By warning about agentic AI, they’re not necessarily seeking a direct benefit. Or maybe the benefit is appearing more genuine and principled, which already attracted their userbase in the first place.
Exactly. If the masses cease to have "computers" any more (deterministic boxes solely under the user's control), then it matters little how bulletproof Signal's ratchet protocol is, sadly.
Signal is conveying a message of wanting to be able to trust your technology/tools to work for you and work reliably. This is a completely reasonable message, and it's the best kind of healthy messaging: "apply this objectively good standard, and you will find that you want to use tools like ours".
Since Signal lives and dies on having trust of its users, maybe that's all she is after?
Saying the quiet thing out loud because she can, and feels like she should, as someone with a big audience. She doesn't have to do the whole "AI for everything and the kitchen sink!" cargo-culting to keep stock prices up, or any of that nonsense.
How can a service like Signal live and die by the trust of its users when they openly lie to them? Signal refuses to update their privacy policy to warn users that they store sensitive information in the cloud (and more recently, even the contents of users' messages in some cases).
Lying to users by saying that Signal doesn't collect or store anything when they actually do doesn't sound like something a company that expected you to trust them would do. It sounds like something a company might do if they needed a way to warn people away from a service that isn't safe to use while under a gag order.
Signal has been trying to tell us for years now that their service is already compromised. That's why they've refused to update their privacy policy after they started keeping sensitive data in the cloud and even after they started keeping message content for some users.
I'd argue that Signal is trying to sell sanity at their own direct expense, during a time when sanity is in short supply. Just like "Proof of Work" wasn't going to be the BIG THING that made crypto the new money, the new way to program, "agents" are another damp squib. I'm not claiming that they're useless, but they aren't worth the cost within orders of magnitude.
I'm really getting tired of people who insist on living in a future fantasy version of a technology at a time when there's no real significant evidence that their future is going to be realized. In essence this "I'll pay the costs now for the promise of a limitless future" is becoming a way to do terrible things without an awareness of the damage being done.
It's not hard: any "agent" that you need to double-check constantly to keep it from doing something profoundly stupid that you would never do isn't going to fulfill the dream/nightmare of automating your work. It will certainly not be worth the trillions already sunk into its development and the cost of running it.