zimmerfrei's comments

> Nvidia released the first Shield Android TV in 2015

> it took about 18 months to [create] an entirely new security stack [...] Android updates aren’t actually that much work compared to DRM security, and some of its partners weren’t that keen on re-certifying older products.

> In February 2025, Nvidia released Shield Patch 9.2 [...] That was the Tegra X1 [security] bug finally being laid to rest on the 2015 and 2017 Shield boxes.

This is a real engineering marvel. Everybody else would have just given up entirely a long time ago. DRM bugs are in most cases practically unrecoverable for products that have already shipped (and are physically in the hands of the adversary). The incentive to tell consumers "Ditch that product you bought from us 2 years ago, and buy the more recent hardware revision or successor" is extremely strong.

This really feels like a platform that is maintained with pride and love by the Nvidia engineering teams (regardless of one's opinion about DRM per se).


They added auto-playing, full screen video ads to the home screen. I threw mine in the garbage.

Pride and love, lol…


> They added auto-playing, full screen video ads to the home screen.

I'm pretty sure this is actually Google's fault (even Sony televisions suffer from this bullcrap). Unlike phone Android, Google TV (yes, that's the official name now) enforces certain "standards", one of which is this bullcrap.


> I'm pretty sure this is actually Google's fault

Who cares who delivered the actual bytes or who initiated the change; the fact of the matter is that people buy a device from one company, and that company is responsible for the experience it delivers while the device is supported. Since they chose Android, they're responsible for the experience you get when using the stuff you buy from them.

I'd never complain to the maker of a compressor when it dies in a fridge; I'll complain to the company I bought the fridge from. Not sure why we're so adamant about thinking differently when it comes to computers. NVIDIA might blame Google internally, but consumers are right to be pissed off about NVIDIA changing (or being OK with someone else changing) the experience of a product they bought from NVIDIA.


Because you're on a forum called Hacker News, and we pride ourselves on being smart enough to understand the details of systems and use that to our advantage. Nvidia as a corporation isn't some nobody compared to Google, but do you really think its engineering team has the clout to get Google's advertising arm to bend to its will?


What the commenter is saying pertains to the _decision_ to use Android. That is why this is happening. That is NVidia.


Such decisions cannot be reversed on a whim.


I use an old Amazon FireTV Stick on an old LG LED TV (semi-smart), and neither of them bugs me with such fullscreen ads, unless I opt to watch MX content on Amazon Prime (MX is basically third-party, ads-funded free OTT content; Amazon Prime requires a subscription, and even its standard subscription has occasional ads for Prime content, though Amazon Prime also has a premium pricing tier for ads-free content).

I don't face such third-party ads nonsense on Netflix and Disney+ (yet), at least on this old FireTV and old LG TV.

Unskippable, irrelevant, annoying ads and privacy concerns are the main reasons I still steer clear of "smart" TVs.


My Sony TV doesn't do this, thankfully


My Samsung Frame TV shows ads in the app bar and you cannot disable/remove them. They can’t even use the Google excuse because the TV runs Samsung’s OWN TizenOS.

They’re lucky it’s a good (beautiful) TV..


This doesn't really bother me. I just use my TV to watch Netflix.

Of course Samsung has "sponsored" apps; everything does.


You could have just downloaded a different home screen... sad.


I did end up switching to Flauncher for a while before getting an Apple TV.


Does Apple TV have ads for Apple shows in its UI?


No. The Apple TV _service_ does, and you can configure that service to be some kind of weird god service if you want. But you can also treat that service like any other normal service, one that only comes up if you launch it. In that case, the home screen is just a straight icon grid with no kerfuffle.


Yes, one or two, and not annoying (not trying to grab your attention). No ads for toothpaste or cars.

Apple TV is not the solution for purists who cannot handle anything that can be construed as an ad. It’s a great solution for those who just want to browse and watch content without distracting ads everywhere.


Amazon started this trend among major streaming services - showing self promotion ads.

Apple took notes and decided to outdo them. The F1 movie ads famously popped up in inappropriate places.


The Apps in the home row on Apple TV will have fullscreen promotions when the home row is along the bottom of the screen. If you set your home row apps with care, the fullscreen previews will not be ads (i.e. Photos will do a slideshow of your photos, Jellyfin just pulls random images from its/your own movie library metadata, etc.).


You can make them still images by going into the accessibility settings


Full screen video ads on the home screen ? I don’t see this on mine.


Yep. I had to switch to an alternative launcher to get rid of them.


This is Google. Just change the default launcher and you're good.


Nova Launcher just added advertisements, unless you buy Pro. Ads come for everyone.


Try https://github.com/spocky/miproja1, it's awesome and will never get any ads.


Can confirm, it works very well. You can set it as the default launcher, and never have an issue.


That's because Nova Launcher was sold to new owners (whose presumed only goal is to serve ads).


The only parties who care about DRM are the suppliers, not the users. Prevent users from playing DRM content, and they'll end up pirating.

Furthermore, I never demanded a new Android TV version. All I wanted was security fixes, not Google's new shitty launcher. I'd never have bought the product if it contained the current launcher.


This is the story I’m really interested in. How have they prevented MBAs from ruining this product?


Infinite money.

Like Apple, SpaceX or Tesla.

(though I suspect that Apple hired some MBAs to work on Liquid Ass)


They did this with the Switch 1 too; it's just less well remembered because that console subsequently got re-hacked. They lost the ARM TrustZone keys and rebuilt the entire DRM stack on the HDCP keys, which had been provisioned but which they were not using.


Nvidia doesn't make money on hardware, they make money on ecosystems.


I don't think that a 100% anonymous attestation protocol is what most people need and want.

It would be sufficient to be able to freely choose who you trust as proxy for your attestations *and* the ability to modify that choice at any point later (i.e. there should be some interoperability). That can be your Google/Apple/Samsung ecosystem, your local government, a company operating in whatever jurisdiction you are comfortable with, etc.


Most businesses do not need origin attestation; they need history attestation.

I.e., attestation from the moment they buy from a trusted source and initialize the device.


As mentioned a few days ago, this post mainly covers a gpg problem, not a PGP problem.

I recommend people spend some time and try out Sequoia (sq) [0][1], which is a sane, clean-room re-implementation of OpenPGP in Rust. For crypto, it uses the backend you prefer (including OpenSSL; no more libgcrypt!), and it isn't just a CLI application but also a library you can invoke from many other languages.

It does signing and/or encryption with modern crypto, including AEAD, Argon2, and PQC.

Sure, it still implements OpenPGP/RFC 9580 (which is not the ideal format most people would define from scratch today), but it throws out the dirty water (SHA-1, old cruft) while keeping the baby (interoperability, the fine bits).

[0] https://sequoia-pgp.org/

[1] https://archive.fosdem.org/2025/events/attachments/fosdem-20...


But if you use the modern crypto stuff, you lose interoperability, right? What is the point of keeping the cruft of the format if you still won't have compatibility when you use the modern crypto? The article mentions this:

> Take AEAD ciphers: the Rust-language Sequoia PGP defaulted to the AES-EAX AEAD mode, which is great, and nobody can read those messages because most PGP installs don’t know what EAX mode is, which is not great.

Other implementations also don't support stuff like Argon2.

So it feels like the article is on point when it says

> You can have backwards compatibility with the 1990s or you can have sound cryptography; you can’t have both.


When you encrypt something, you are the one deciding which level of interoperability you want, and you can select the crypto primitives matching the capabilities you know your recipients reasonably have. I don't see anything special about this: when you run a web service, you also decide whether you want to talk to TLS 1.0 clients (hopefully not).
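To make the TLS analogy concrete, here is a minimal sketch using Python's standard `ssl` module: the server operator explicitly picks the oldest protocol version they are willing to interoperate with, just as an OpenPGP sender picks which features their recipients must support.

```python
import ssl

# A server deciding its own interoperability floor: clients that only
# speak TLS 1.0/1.1 are simply refused, by explicit operator choice.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

The trade-off is identical in both ecosystems: raising the floor cuts off old peers, keeping it low keeps old crypto alive.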

Sequoia's defaults are reasonable, as far as I remember. It's also a bit strange that the post found it defaulting to AEAD in 2019, when AEAD was standardized only in 2024 with RFC 9580.

But the elephant in the room is that gpg famously decided NOT to adopt RFC 9580 (which Sequoia and Proton do support) and to stick with a variant of the older RFC (LibrePGP), officially because the changes to the crypto were seen as too "ground-breaking".


I think GP’s point isn’t that you don’t have the freedom to decide your own interoperability (you clearly do), but that the primary remaining benefit of PGP as an ecosystem is that interoperability. If you’re throwing that away, then there’s very little reason to shackle yourself to a larger design that the cryptographic community (more or less) unanimously agrees is dangerous and antiquated.


It is not a coincidence that most of the various proposed alternatives to PGP (Signal, wormhole, age, minisign, etc.) are led by a single golden implementation and neither support nor promote community-driven specifications (e.g., at the IETF).

Over the decades, PGP has already transitioned out of old key formats and old crypto. None of us expects to receive messages encrypted with BassOmatic (the original encryption algorithm by Zimmermann), I assume? The process has been slow, arguably way slower than it should have been given the advancements in attacks over the past 15 years (and that is exactly the crux behind the LibrePGP/OpenPGP schism). Nonetheless, here we are, pointing at the current gpg as "the" interoperable (yet flawed) standard.

In this age, when implementations are expected (sometimes by law) to be ready to update more quickly, the introduction of new crypto can take into account adoption rates and the specific context one operates in. And still, that happens within the boundaries of a reasonably interoperable protocol.

TLS 1.3 is a case in point: from certain points of view, it has been a total revolution and a break with the past. But from many others, it is still remarkably similar to the TLS that came before; lots of concepts are reused, and it can be deemed an iteration of the same standard. Nobody questions its level of interoperability, and nobody is shocked by the fact that older clients can't connect to a TLS 1.3-only server.


You're right, it's not a coincidence. The track record of standards-body-driven cryptography is wretched. It's why we all use WireGuard and not IPsec. TLS 1.3 is an actually good protocol, but it took for-ev-er to get there, and part of that process involved basically letting the cryptographers seize the microphones and make decisions by fiat in the 1.2->1.3 shift (TLS 1.3 also follows a professionalization at CFRG). It's the exception that proves the rule. Its contemporaneous sibling is WPA3 and Dragonfly, and look how that went.


I wrote the post and object to the argument that it primarily covers GnuPG issues.

But stipulate that it does, and riddle me this: what's the point? You can use Sequoia set up for "modern crypto including AEAD", yes, but now you're not compatible with the rest of the installed base of PGP.

If you're going to surrender compatibility, why on Earth would you continue to use OpenPGP, a design mired in 1990s decisions that no cryptography engineer on the planet endorses?


If you use AEAD, you clearly expect your recipients to use a recent client. Same as if you want to use PQC or any other recent feature.

If your audience is wider, don't use AEAD, but make sure to sign the data too.

With respect to the 90s design: yes, it is not pretty and it could be simpler. But it is also not broken and not too difficult to understand.


You're missing my point. I agree that you can use Sequoia to communicate between peers also using Sequoia. But you're no longer compatible with the overwhelming majority of PGP deployments. So what's the point? Why not just use a modern tool with that same group of peers?


This is the right answer.

The problem mostly concerns the oldest parts of PGP (the protocol), which gpg (the implementation) doesn't want or cannot get rid of.


Yes, there are methods to combine multiple, different key exchange algorithms so that you need to break all, like in:

https://datatracker.ietf.org/doc/rfc9370/

https://datatracker.ietf.org/doc/draft-ounsworth-cfrg-kem-co...

https://www.etsi.org/deliver/etsi_ts/103700_103799/103744/01...
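The core idea shared by these combiners can be sketched in a few lines. This is purely illustrative, not the construction from any of the documents above; the function and parameter names (`combine_shared_secrets`, `ss_ecdh`, `ss_mlkem`) are made up for the example. The session key is derived from the concatenation of both shared secrets, so an attacker must break both exchanges to recover it:

```python
import hashlib
import hmac

def combine_shared_secrets(ss_ecdh: bytes, ss_mlkem: bytes,
                           transcript: bytes) -> bytes:
    # Toy hybrid combiner: feed BOTH shared secrets through one KDF
    # step. Recovering the output requires knowing both inputs, so
    # breaking only the classical (or only the PQC) exchange is not
    # enough. Real combiners also bind protocol transcripts and
    # identities, and use a standardized KDF rather than bare HMAC.
    return hmac.new(transcript, ss_ecdh + ss_mlkem,
                    hashlib.sha256).digest()
```

A real design also has to specify ordering, lengths, and context separation precisely, which is exactly where the standards above spend most of their text.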

For other security mechanisms, like PKI, things are more complicated (and inefficient).

And one can argue that even if, in theory, the above gives you a better security margin, the whole system becomes more complicated and may be practically less secure because of the additional moving parts. That is why there is no unanimous consensus: agencies in Europe recommend it, but the NSA does not.

Finally, note that the third standard (SLH-DSA) is PQC but is based on old and well-understood primitives (SHA-2/SHA-3), so it can arguably be used by itself.


I like it, because it is indeed nice to have a NIST-backed construction.

But at the same time, it is disappointing that you get locked out of several niceties of NIST KDFs, such as label and context. I get that they are sacrificed to minimize the number of AES calls, but I would still prioritize strong cryptographic separation over a few saved AES calls, especially for messages longer than a few hundred bytes.

Finally, *random* GCM nonces longer than 96 bits are widely misunderstood and actually bring better guarantees than 96-bit nonces [1]. But of course, if you can derive a fresh key for every message, that is definitely preferable.

[1] https://neilmadden.blog/2024/05/23/galois-counter-mode-and-r...
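The "fresh key for every message" idea can be sketched as follows. This is a hedged illustration, not any standardized scheme: the function name is made up, and the SHA-256-based derivation stands in for a proper KDF (e.g. one of the NIST SP 800-108 constructions mentioned above). Drawing a larger random value and deriving a per-message key from it sidesteps the 96-bit-nonce collision worries entirely:

```python
import hashlib
import secrets

def per_message_key(master: bytes) -> tuple[bytes, bytes]:
    # Draw a 192-bit random value per message (collision risk is
    # negligible at this size) and derive a fresh 256-bit key from
    # master key + random value. The random value is transmitted in
    # the clear alongside the ciphertext, like a nonce would be.
    n = secrets.token_bytes(24)
    key = hashlib.sha256(master + n).digest()
    return n, key
```

With a fresh key per message, the AEAD nonce itself can then be a constant, since no key is ever used twice.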


You assume that boolean operations are constant time, and whether that holds depends on the microarchitecture and how sophisticated the optimization layers are (e.g., nothing prevents the compiler, or even the CPU, from short-circuiting the OR as soon as the first XOR is non-zero).

Computing a MAC of the two input values under a once-off random key is actually much stronger.

In fact, this highlights that the goal is to lower the SNR for the attacker, and constant-time computation is only one of two non-exclusive ways to achieve that; the other is adding sufficient noise to the attacker's measurements.
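The MAC-based comparison mentioned above can be sketched with Python's stdlib `hmac` module (the function name is made up for the example). Because the once-off random key makes the tag bytes unpredictable, even a timing leak in the final comparison tells the attacker nothing about where the original inputs differ:

```python
import hashlib
import hmac
import secrets

def mac_then_compare(a: bytes, b: bytes) -> bool:
    # Fresh random key for every comparison: an attacker who can
    # time the final equality check only learns where the *tags*
    # differ, and the tags are unpredictable without the key.
    key = secrets.token_bytes(32)
    tag_a = hmac.new(key, a, hashlib.sha256).digest()
    tag_b = hmac.new(key, b, hashlib.sha256).digest()
    return hmac.compare_digest(tag_a, tag_b)
```

This is sometimes called the "double HMAC" trick; it trades two hash computations for much weaker assumptions about the comparison routine itself.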


As it is written in the parent article, the verification of the code generated by the compiler is mandatory whenever a constant-time algorithm is written in a high-level language.

The short-circuiting could be prevented, e.g., by storing the result in a volatile variable before testing whether it is zero, because the compiler cannot assume that the tested value is the same as the one written previously. Nevertheless, it is better to just check the generated assembly code and disable optimizations for that function, or use inline assembly if necessary.
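The comparison shape being discussed, XOR each byte pair and OR the differences into a single accumulator that is tested only once at the end, looks like this. The sketch is in Python purely to show the structure; Python itself makes no timing guarantees, and in C the accumulator would be the value routed through a volatile variable:

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    # Accumulate all byte differences before any branch: the amount
    # of work depends only on the length, not on where (or whether)
    # the inputs differ. The result is inspected once, after the loop.
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y
    return acc == 0
```

Note the only data-dependent branch is the final `acc == 0`, which is exactly the point at which the C version would read the accumulator back through a volatile to stop the compiler from short-circuiting the loop.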

The binary Boolean operations have been executed in constant-time in all electronic computers, since those made with vacuum tubes until now.

It is impossible to make them execute in variable time (because they are independent for all bit pairs and they must be executed for all of them), unless you do this on purpose, by inserting unnecessary delays that are dependent on the operands.

For no other operation implemented in hardware is it as certain that it is done in constant time. Even for word additions, it would be possible to make an implementation where the time to propagate the carry to the most significant bit is variable, depending on the pattern of "1" bits in the operands, but no mainstream CPU has used such adders in the last half century. Such adders were used only in some early vacuum-tube computers, which used serial adders instead of the parallel adders found in all modern CPUs.


Your argument boils down to "no hardware implementation in history has ever optimized word-level boolean operations, so future implementations won't either".

I think that is just an assumption, and I would not take that risk for high-security applications, except for specific CPU models where the behavior has been measured in practice (certainly not by just auditing the assembly, which is itself already too high-level). After all, never in history have we had such a level of sophistication.

Right now, all-zero register values can lead to speed gains, so they could be observable. And both ARM and Intel's latest ISAs have introduced flags that permit future CPUs to perform operations with data-dependent timing; Boolean operations are officially marked as potentially affected by that flag.


No, my argument is that it is impossible to optimize the binary Boolean operations in a way in which they will have variable execution time.

Moreover, they are the only commonly encountered operations that are implemented in hardware and for which this is unconditionally true.

Therefore they are the first choice for the implementation of any constant-time algorithm.

Most modern CPUs have many other operations that are executed in constant time, like additions and subtractions, but for those, variable-time implementations are also possible.

As I have already said, for any bitwise binary boolean operation, the elementary operations having a pair of bits as operands are independent, so there exists no way of performing the complete operation without doing all the sub-operations, unlike for other simple operations, like additions, shifts or rotations, where it is possible to detect conditions that allow an earlier completion of the operation.

The hardware implementation can do the sub-operations sequentially or concurrently, in any order, but it always must execute all of them, regardless of the values of the input operands, so they will take the same time.

Only in an asynchronous CPU and only for certain kinds of logic gate implementations it is possible to have a completion time for a binary Boolean operation that varies with the values of the input bits, but for the most common asynchronous circuit synthesis methods, which use dual rail encoding for bits, i.e. each bit is encoded as a pair of complementary bits, the binary boolean operations remain constant-time, like in the normal synchronous CPUs.


Let's assume that you have a simple XOR between two registers.

If the CPU can pre-label a register as having no bits set (and CPUs can, or can speculate on it), then during scheduling it could theoretically simply drop the XOR and transfer or rename the relevant register, which may lead to a tiny but measurable timing difference that can be exploited.

That is just one simple counter-example showing that the assumptions you present are not necessarily valid on modern, complex CPUs. Many more examples are possible, which is of course not to say that they are implemented today. But they could be implemented in the future (without us knowing; again, both ARM and Intel are explicit about that), so the security of a security-sensitive piece of code should not rely solely on that assumption.


That's still described as a kernel for the TEE (like OPTEE is), it doesn't look like a replacement for Linux, which runs in the REE.


But then, the vast majority of the affected libraries on that page don't use GMP at all, but their own custom implementation (including OpenSSL).

In reality, RSA signing with blinding makes any implementation (including those based on GMP) resistant to side-channel attacks targeted at the private key.

What most of these libraries tripped over in that case is the treatment of the plaintext in a side-channel-safe way after the private-key operation. For instance, even the simple conversion of an integer to a byte string can be targeted.
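RSA base blinding can be sketched as follows. This is a toy illustration with textbook RSA and made-up function names; real implementations blind inside their bignum code, cache and update the blinding factor across calls, and also blind the exponent:

```python
import secrets
from math import gcd

def blinded_rsa_sign(m: int, d: int, n: int, e: int) -> int:
    # Base blinding: multiply the input by r^e before the private
    # exponentiation, then strip r afterwards. The value the
    # side-channel-sensitive operation actually processes is then
    # unpredictable to an attacker measuring timing or power.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n
    s_blinded = pow(blinded, d, n)          # the sensitive step
    # blinded^d = m^d * r^(ed) = m^d * r (mod n), so divide out r.
    return (s_blinded * pow(r, -1, n)) % n
```

The output is the same signature as unblinded signing would produce; only the intermediate values the exponentiation touches change from call to call.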


More interestingly, Cavium (now Marvell) also designed and manufactured the HSMs which are used by the top cloud providers (such as AWS, GCP, possibly Azure too), to hold the most critical private keys:

https://www.prnewswire.com/news-releases/caviums-liquidsecur...


Ayup. We use AWS CloudHSM to hold our private signing keys for deploying field upgrades to our hardware. And when we break the CI scripts I see Cavium in the AWS logs.

Now I gotta take this to our security team and figure out what to do.


I'd be surprised if you get anything more than generic statements about how they take security very seriously and they are open to suggestions, but avoid addressing the mentioned concerns directly (and this applies to all cloud providers out there, not just AWS).

I'm sure a few others here would like to see their response as well.


We've had other issues with our CloudHSM instance, especially with the PKCS#1 v1.5 deprecation on January 1. And their support has been pretty dismal. Not expecting much from them at this point.


AWS support is pretty fucking terrible generally. We’re a very high rolling enterprise customer and it’s pretty obvious that some of their shit is being managed by two guys in a shed somewhere who don’t talk to each other.


As someone who was IN AWS premium support, I got the distinct impression they had no idea what they were doing.

I was a Linux Sysadmin for a decade. They initially hired me to work on the "BigData" support team

Then after hiring, they threw me into CI/CD instead. I told them I didn't know Python or Ruby and would be a terrible fit.

I asked if I can join the Linux team. EC2 is bread and butter, that's easy stuff

"Oh we're actually shutting that team down soon. I'll move you into containers instead"

Spoiler: they didn't "shut down" the Linux group


Thank you for this. Next time AWS tries to tempt me over to them, I'll tell them to literally fuck off. Not up for those games.


Another satisfied user of AWS Glue, I see. On a scale of 10 to “I have no mouth and I must scream” how much do you hate their error messages?


The famous one poke bowl team. Saved costs on pizzas.


Have you had the pleasure of working with Azure? I'll take AWS any day over that dumpster fire.


As someone who is deciding between AWS, Google, and Azure: could you give an outline of some of the Azure pain points? Are there any blogs or other articles that outline what your concerns would be?

I'm pretty aware of how painful it can be to configure AWS well: IAM roles, the overly large ecosystem that we won't need, and the unmitigated complexity of configuring it all. It's not comforting to think Azure is worse yet.


They’re just different. People like the devil they know.

The Azure Resource Manager system is much easier to use than the fragmented mess that is AWS.

The problem with Azure is that they’re still catching up to AWS. They have fewer products and the quality is worse.

Really basic issues will remain unaddressed for years.


I work on and off with both. AWS may be more feature-complete in some areas, but Azure is frankly easier for me to work with: I can actually get support on my issues from Microsoft. And while I've generally only seen it from the large-enterprise-account perspective, Microsoft is way more open to feature requests/enhancements than Amazon is. I don't have any experience with GCP, so I can't speak to that.


We selected AWS for very modest needs, but sometimes I glance over at Azure and wonder if the grass is greener. I'll take your word on it though.


We work with Azure and don't have any major complaints about it - what were your issues?


AWS Client VPN and Ubuntu 22.04... Need I say more?


What issues are you having?


the required old version of libssl is no longer in Ubuntu's repos


Using AWS Greengrass?


Greengrass was so bad we built an entire edge platform.


Never even heard of that one!


It's a cloud to edge system. Like hosting some of your stuff on the edge, think like a cloud that lives inside your factory.

It confused me when researching it.


Imagine doing a job interview where they ask whether you know AWS. Sure, I know AWS; you explain what you built with Greengrass, Lambdas, RDS, etc., and then get rejected for not knowing AWS lol


Hate Greengrass; Love joy.


Wouldn't such a backdoor invalidate all promises made by external audits, e.g., https://cloud.google.com/security/compliance/offerings, and, more importantly, wouldn't it violate the Safe Harbor agreement with the EU, or whatever sham Safe Harbor was replaced with?


As you say, a sham: as long as the Patriot Act is still effectively in force, everyone else is still trying really hard to look the other way (especially while the war is still ongoing!), ignoring the CJEU, which has no choice but to shoot down one agreement after another, since they automatically violate the EU Charter of Fundamental Rights: https://en.wikipedia.org/wiki/Max_Schrems#Schrems_I


I mean, if you can detect it.


And you’re allowed to notice it without dudes in suits and dark sunglasses convincing you it’s a bad idea to do so.


  The Intel Management Engine always runs as long as the motherboard is 
  receiving power, even when the computer is turned off. This issue can be 
  mitigated with deployment of a hardware device, which is able to disconnect 
  mains power.

  Intel's main competitor AMD has incorporated the equivalent AMD Secure 
  Technology (formerly called Platform Security Processor) in virtually all of 
  its post-2013 CPUs.

https://en.wikipedia.org/wiki/Intel_Management_Engine

  Ylian Saint-Hilaire, principal Engineer working on remote management software 
  including hardware manageability:
https://youtu.be/1seNMSamtxM?feature=shared

https://github.com/Ylianst


I think Ylian Saint-Hilaire hasn’t been with Intel for about a year now, after some layoffs. As a result, the software ecosystem around AMT/vPro is lagging these days.

Hardware-wise, nothing has changed; it’s just even harder for the actual owner of the hardware to use the legitimate management features, while presumably easier for whoever could illegitimately abuse them.


Nothing?

I mean, you are already in a US-based cloud, so if the NSA is interested, they will just request the information directly; no backdoors needed.

(This is a good test for your security team, btw: if they say anything other than "we do nothing", you know it's all security theater.)


But being able to request it and having a built-in backdoor for anyone with a key are different things. It has happened before that the Chinese government figured out network-equipment backdoors that were put in for the US government. All your company secrets are there for the taking by anyone with the resources to figure out that backdoor, especially now that people know it exists. Shouldn't this at least start the clock on retiring this hardware?


Considering the scale of Amazon and Google, and their involvement with US government agencies, I think it is fair to suspect that there is a lot we don't know about...


Very good point. That was the consensus from our team, so I think we're okay.

Ironically, the data we're securing is because of US government requirements. So if the government wants to spy on itself, who are we to say?


The fact that this backdoor could leak and be used by a foreign government needs to be taken seriously.


Nobody cares, if caring gets in the way of easy money. Spoiler... it does.


More accurately, nobody (with sufficient agency to act) cares.

You wouldn't be cynical if you didn't care, or if you felt able to do anything about it.


future you will care and facepalm


Is there anyone here who actually thought cloud provider HSMs were secure against the provider itself or whatever nation state(s) have jurisdiction over it?

It would never occur to me to even suspect that. I assume that anything I do in the cloud is completely transparent to the cloud provider, unless it's running under homomorphic encryption, which is still too slow and limited to do much that is useful.

I would trust them to be secure against the average "hacker" though, so they do serve some purpose. If your threat model includes nation states then you should not be trusting cloud providers at all.


Lots of people believe that. They genuinely believe you can get to the level of AWS, Microsoft, Google, Facebook, or Apple while standing up to the nations that host those companies. I've walked past government employees in the hallways of tiny ISPs; I see no reason at all to believe that larger companies are any different, except where easier backdoors have been installed.


The really concerning part is to STILL believe that after the Snowden scandal, after everybody has seen the slides that explain in detail how the NSA sends an FBI team to gather data from (as of 2013) Microsoft, Yahoo, Google, Facebook, PalTalk, YouTube, Skype, AOL, and Apple (with Dropbox being planned).

Also how Yahoo first refused but was forced to comply by the Foreign Intelligence Surveillance Court of Review.

https://www.electrospaces.net/2014/04/what-is-known-about-ns...

(Note that supposedly, "the companies prefer installing their own monitoring capabilities to their networks and servers, instead of allowing the FBI to plug in government-controlled equipment.")


And for Yahoo, this was the reason Alex Stamos resigned: https://arstechnica.com/tech-policy/2016/10/report-fbi-andor...


I don’t know how much of it is genuine belief and how much is willful ignorance. The big cloud providers make big mistakes, but how many organizations trust themselves to do better against a nation-state-level actor?

The underlying architectures of our systems are not secure and much of the abstractions built on top of them make that insecurity worse, not better.

For nation-state-level issues, the solution likely isn’t technical; that is a game of whack-a-mole. It will take a nation deciding that digital intrusions are as dangerous as, or more dangerous than, physical ones, and drawing a line in the sand. The issue is that every nation is doing it and doesn’t want to cut off its own access.


I always just tell people to look up “Lavabit” to learn everything you need to know.


To save others a goog: https://en.wikipedia.org/wiki/Lavabit

> Lavabit is an open-source encrypted webmail service, founded in 2004. The service suspended its operations on August 8, 2013 after the U.S. Federal Government ordered it to turn over its Secure Sockets Layer (SSL) private keys, in order to allow the government to spy on Edward Snowden's email


> He also wrote that in addition to being denied a hearing about the warrant to obtain Lavabit's user information, he was held in contempt of court. The appellate court denied his appeal due to no objection, however, he wrote that because there had been no hearing, no objection could have been raised. His contempt of court charge was also upheld on the ground that it was not disputed; similarly, he was unable to dispute the charge because there had been no hearing to do it in.

Land of the free...


That’s scary


> If your threat model includes...

At my Fortune 250, our threat model apparently includes -- rather conveniently and coincidentally -- everything! Well, everything they make an off-the-shelf product for, anyway. It makes new purchasing decisions easy:

"Does your product make any thing, in any way, more secure?"

"Uh... Yes?"

"You son of a bitch. We're in. Roll it out everywhere. Now."


This reminds me of our own security team, who as far as I can tell do nothing but run POC's of new security tools. And then maybe once a year actually buy one, generating a ton of work (for others) to replace the very similar tool they bought last year. Seems like a good gig.


And the sad/funny thing is that said tool would probably do diddly squat if one employee falls for a social engineering/phishing attack.


Occasionally security products turn into malware delivery platforms as well, because they run very privileged, are sometimes more shoddily developed than what they’re protecting, and have fewer eyeballs on them than the vanilla operating system.

Not to mention they may be another Crypto AG.


> Occasionally

Much more frequently than that if you lump 'anti virus software' in with security products.


As someone who's company just suffered this exact issue, all I can say is yes.

They gave me a laptop with 8 GB of RAM. The laptop runs invisible security software that nominally takes 6 to 6.8 GB of it.

We just got penetrated by two attackers in the last 40 days.


> We just got penetrated by two attackers in the last 40 days.

* that you know of


And then when there is a security issue, you ask them to share the log files from all their spyware, and suddenly half the stuff needed is not there because we did not get that module.


Or ‘oh, that feature hasn’t been rolled out yet, expect it in 6 quarters.’.


Ahh, I've been there. I'm sure no concern is given for usability of the result.

Welding your vault shut may make it harder for thieves to break in, but if your business model requires making deposits and withdrawals, it's somewhat less helpful.


Luckily, all but a tiny portion of security products have a door you didn't know about before, which you can open if you ask support nicely enough. So you can still get your stuff after you weld the main door shut.


There's no thought given to whether the cost to secure the thing outweighs the risk of exposure?


I’m not privy to those discussions, but it certainly doesn’t feel like they’re happening. We implement every security “best practice,” for every project, no matter how big or small. We have committees to review, but not to assess scope, only to make sure everything is applied to everything. Also, we have multiple overlapping security products on the corporate desktop image. It feels EXACTLY like no one has ever tried to gauge what a compromise might cost.


It's interesting to consider the people who, with the very same set of facts, come to completely opposite conclusions about security.

For instance, Amazon has a staff of thousands or tens of thousands. To me, that means they can't possibly have a good grasp on internal security, that there's no way to know if and when data has been accessed improperly, et cetera. To others, the fact that they're a mega-huge company means they have security people, security processes and procedures, and they are therefore even more secure than smaller companies.

For one of the two groups, the generalized uncertainty of the small company is greater than the generalized uncertainty of the large. For the other, the size of the large makes certain things inevitable, where the security of smaller companies obviously depends on which companies we're talking about and the people involved. More often than not, people want to generalize about small companies but wouldn't apply the same criteria to larger companies like Amazon.

There's a huge emotional component in this, which I think salespeople excel at exploiting.

It fascinates me, even though it's a never-ending source of frustration.


If your threat model includes the nation state where you physical infrastructure is, you're hosed.


> If your threat model includes the nation state where you physical infrastructure is, you're hosed.

True. But even if you trust your nation state 100%, having a backdoor means you now have to worry about it falling into the wrong hands.


Even if you trust your nation state 100% having a backdoor means it has already fallen into the wrong hands. That's because 'nation state' is not synonymous with 'people running the nation state'.


Literally hosed. There's a funny jargon term "rubber hose cryptography" that's used to refer to the cryptanalysis method where you beat someone with a rubber hose until they give you the key. It's 100% effective against all forms of cryptography including even post-quantum algorithms.


You would be surprised; for some percentage of people this would not work. Some even like it. Some have a death wish and want to be a martyr. Some people blow themselves up to further a cause. Also, memories of keys sometimes cannot be recalled under heavy stress.

It's probably slightly less effective than threatening to kill family members, but probably more effective than the threat of jail time.

Either way, you require someone alive and with mental awareness. The mind-reading tools found in science fiction haven't been developed yet.


It doesn't matter, something will be found that will coerce them into talking. Nobody is an island. Everyone has a breaking point, if it's not rubber hoses, it's socks full of rocks, or it's bottles of mineral water, or any number of methods. Don't think for a second that someone hasn't thought of a better way to get information out of somebody else.


Yep... read up on interrogation resistance.


We're talking about normal people, not psychopaths.


Terrorists are generally highly altruistic, not psychopaths.

It’s a lot easier to blow yourself up (or to spread ideology which encourages it) for a cause that you believe is helping people, in particular _your_ people.


The terrorists that blow themselves up and that blow other people up are usually misguided brainwashed angry young men. It's nothing to do with ideology, everything to do with power. Or did you think blowing up schools full of girls is something people genuinely believe helps their people, to give just one example?

Ordinary people just want to be left alone. Old guys wishing for more power will use anything to get it, including sacrificing the younger generations.


> did you think blowing up schools full of girls is something people genuinely believe helps their people

It absolutely is something that they think helps their people, yes.


No, it's something that a bunch of old guys with issues told them helps their people.

Beliefs stop when they are no longer about yourself but about how other people should live. Especially when those other people loudly protest that this is how you think they should be living. Killing them is just murder, not the spreading of ideas.

But hey, those human rights are just for decoration anyway.


> it's something that a bunch of old guys with issues told them helps their people

I don’t understand why you said “no” before this; I believe this agreed with what I’m saying.


We're back to what psychopathy is all about:

https://en.wikipedia.org/wiki/Psychopathy#Signs_and_symptoms


The old men persuade the would-be suicide bomber that educating women will liberate and liberalize them, and that this is counter to the interests of those who prefer the traditional order of society. Are they even lying?


Yes, they're lying.

The 'traditional order of society' is a society run by psycho pathological individuals and benefits nobody except for those individuals.

But you already knew that, didn't you?


You're deeply mistaken if you think there aren't men who don't genuinely prefer the traditional order of women being subjugated by men.

1. Not everybody shares your values.

2. People who don't share your values are not necessarily brainwashed.

3. People may do things that are irrational under your system of values, but rational under their own.

And BTW, there is no a single fighting force in the world that doesn't have old men persuading young men to sign up and risk throwing away their lives. There's not a whole lot of difference between regular soldiers persuaded to participate in a forlorn hope or banzai charge attacking a defended position and a suicide bomber or kamikaze.


Are you saying that liberalizing the society is not counter to the interests of those who prefer traditional society?

I think it clearly is.


Who makes that determination ? And by what justification ?


That's actually not true. It can do nothing about M-of-N cryptography. (That's when a key is broken up into N parts such that at least M of them, with M less than N, are required to decrypt.) It doesn't matter how many rubber hoses you have: any one person can fully divulge or give access to their share and the key is still safe.
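The M-of-N splitting described above is exactly what Shamir secret sharing provides. A minimal sketch, with a toy field prime and illustrative values only, not a production implementation:

```python
# Toy Shamir secret sharing over a prime field: any M of N shares
# reconstruct the secret; fewer than M reveal nothing about it.
import random

P = 2**127 - 1  # a Mersenne prime, plenty for this demo

def split(secret, m, n):
    """Split `secret` into n shares; any m of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):  # random degree-(m-1) polynomial with f(0) = secret
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, m=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 suffice
assert recover(shares[1:4]) == 123456789
```

Note the caveat in the sibling comments still applies: the math protects the split key itself, not the people holding the shares.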


I always giggle a little when really smart people forget thugs exist and do what they’re told. If that includes breaking the knees of M people to get what they’re after, then M pairs of knees are gonna get destroyed.

This isn’t hard to understand, but it’s easy to forget our civilization hangs by a thread more often than any of us care to admit.


I don't remember the provenance of the quip, but somewhere at a def con or a hope, I heard, "The point of cryptography is to force the government to torture you."


They're perfectly ok with that, and depending on where you live this may happen in more or less overt ways. If the government wants your information, they will get your information. Your very best outcome is to simply rot in detention until you cough up your keys.


Now that I think about it, I'm pretty sure it was a session about root zone security, and Adam Langley was in the room. I was thinking, damn, kinda sucks to be the guy that holds Google's private keys. They want someone's information, so they let you rot...


power in numbers

can't torture us all!


Are we deep enough in the thread for the customary reminder that each measure makes it incrementally harder to attack a system?

(Including a system of people.)

Even nation state adversaries don’t have infinite resources to allocate for all opponents.


I think you can probably get away with only breaking one pair of knees and sending a video of it to the other people.


Youtube would delist that before they could all see it though.


You know there are other ways to have a video and send it to people than YouTube, right? You can just email a link from dropbox or gdrive, or an attachment, or send a WhatsApp/Telegram/etc. message, send a letter with a USB drive, etc.


Yes. It was just a dumb joke :/


> You can just email a link from dropbox or gdrive, or an attachment, or send a WhatsApp/Telegram/etc. message

Why do you think governments are demanding those services give them access to quickly remove "misinformation"?


Any organization that is really, really serious about security will obviously keep at least N-M+1 folks, along with their families, in other countries.

Which is a much, much higher bar to clear for any would-be rubber hose attackers.


Your secrets aren't really safe unless Xi and Putin each have part of your key personally memorized.


That’s hyperbole


Let's say, for example:

Bob, Jon, and Tom have pieces of the key. Bob and Jon are in the US; they are arrested and commanded by a court to give up their pieces. Tom is the holdout. The US will issue an international arrest warrant, and now Tom can never safely fly again, or the plane will be diverted to the nearest US-friendly airport where he will be extradited. So, yeah, "safe" is very situational here.


Doesn't Tom's key fragment have to be on a disk somewhere for things to work?

That's the actual weak link to attack.


That situation just requires a longer hose


Or M hoses.


and more beatings.


Sure, so you hit all of the people that have all of the pieces. Problem solved.


Or you publicly announce you're hitting 1 of the N people with the rubber hose until M-1 of the other people send you their key fragments.

It's not like these keys are shared among disinterested strangers who have no attachment to each other.


Somehow, somewhere you've just influenced a megacorp's internal crypto process.


This probably works if each person has a cyanide+happy drug pill or a grenade and is willing to sacrifice themselves and the rubber-hoser(s). I think that requires a rare level of devotion. This process must also disable a simple and fragile signalling device to let the others know what's coming.


This would not work well, because you can’t do it in a secret manner. Overuse of the rubber hose cryptography will become known, and there will be public backlash.


Seems like the NSA is threatening everyone with arrest (= state-organized violence) if they don't secretly hand over keys, and Snowden revealed it, and there has been no public backlash.


Hose-resistant cryptography is possible. Secret sharing comes to mind, or a system by which even the principals can only compromise a key slowly.


I mean in the end everything is people just like Logan Roy said in Succession. Cryptography or any software protections are the same. It's a great quote that is very true:

> "Oh, yes... The law? The law is people. And people is politics. And I can handle of people."


“I can handle of people”? Cannot parse.


I think that was a mobile typo. The quote is just "I can handle people"


i feel like "typo" should mean "typing error" and not "autocorrect fubar"

mixing the two implicates humans for the errors of machines

edit:

unless failure to disable autocorrect is counted as a user error


That's exactly what happened!


Addendum: if your threat model includes any nation state that has significant ties to the nation state that hosts your physical or transit infrastructure, you're hosed.


How might this apply or what are the implications of Signal given its US jurisdiction?


The US authorities can make the same orders that they made with LavaBit (i.e. ordering them to produce a backdoored build and replace yours with it), and they can make them secretly. Given that Signal by design requires you to use it with auto-update enabled (and, notably, goes to some effort to take down ways of using it without auto-update), and has no real verification of those auto-updated builds, I would consider it foolish to rely on the secrecy of Signal if your threat model includes the US authorities or anyone who might be able to call in a favour with them.


How odd. I have, and continue, to use Signal without auto-update enabled.

I have been prompted, twice in three years to update though.

Perhaps the requirement depends on your country?


Ya, does it do that thing banking apps do where it insists on the most recent version in order to even be usable?

Otherwise, that's more of an iOS option that can be easily altered:

Settings > App Store > Automatic Downloads > App Updates


Signal started keeping sensitive user data in the cloud a while ago. All the information they brag about previously not being able to turn over because they don't collect it in the first place, well they collect it now. Name, photo, phone number, and worst of all a list of all your contacts is stored forever.

It's not stored very securely either. I wouldn't doubt that three letter agencies have an attack that lets them access the data, but even if they didn't they can just brute force a pin to get whatever they need.

https://community.signalusers.org/t/proper-secure-value-secu...
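To make the brute-force point concrete, here is a sketch of why a short numeric PIN can't cryptographically protect stored data, whatever the KDF. All values (salt, PIN, iteration count) are hypothetical:

```python
# Illustrative only: with a 4-digit PIN there are just 10,000
# candidates, so anyone who obtains the stored verifier can brute
# force it offline. A slower KDF only scales the cost linearly.
import hashlib

def derive(pin: str, salt: bytes) -> bytes:
    # Assumed KDF for the sketch; real systems use Argon2/scrypt
    # with far higher work factors, but the search space is the same.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 1000)

salt = b"per-user-salt"
stolen_verifier = derive("0042", salt)  # what an attacker exfiltrates

# Try every 4-digit PIN until one matches the verifier.
recovered = next(
    pin for pin in (f"{i:04d}" for i in range(10_000))
    if derive(pin, salt) == stolen_verifier
)
assert recovered == "0042"
```

This is why schemes like Signal's rely on hardware enclaves to rate-limit guesses rather than on the PIN's entropy; the question is whether you trust the enclave.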


Signal relies on the client program to not be compromised to keep conversations secret


I believe this is why the government of Singapore appears to fund a lot of work on homomorphic encryption.

Even when you are a nation state, you still have to worry about other nation states.
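As a toy illustration of the homomorphic idea: textbook (unpadded) RSA is multiplicatively homomorphic, so you can multiply two ciphertexts and the result decrypts to the product of the plaintexts. A sketch only; never use unpadded RSA or toy primes in practice, and real homomorphic encryption schemes go far beyond this:

```python
# Multiplicative homomorphism of textbook RSA: computation on
# ciphertexts without ever decrypting them.
p, q = 61, 53            # toy primes; real keys are 2048+ bits
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n   # multiply ciphertexts only
assert dec(c) == a * b      # decrypts to the product, 42
```

Fully homomorphic schemes extend this so that arbitrary computations (additions and multiplications) can be run on encrypted data, which is why they're attractive when you can't trust the host.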


Especially when you are a nation state.


I feel the same and Snowden kinda said as much regarding phones. To assume each phone is compromised by state level actors.


I mean, there's a reason that the government was involved with setting up the first cell networks. No assumptions need to be involved. They ARE all compromised.


Lawful intercept has always existed in phone networks. Just that one cannot use that in non-allied nations.


You’re missing the point. It was designed to be transparent to interception efforts up front, so you can’t tell if you’re being surveilled, lawfully or not.


For analog Gen0 and Gen1 networks I'd make the claim that it was just as much about technical limitations of the era.

But for 2G export crypto it definitely was about keeping it weak enough to break on demand.


Cloud HSM services have always been understood as a convenience with limited real world security, without even considering nation state threats.


I think there’s such a thing as plausible deniability here. We didn’t know for certain so we weren’t culpable, but now that it’s public record, we really have to do something about it or risk liability with our customer data.


See the Cryptographic Control Over Data Access [0] section here for one answer to this problem.

[0] https://cloud.google.com/blog/products/identity-security/new...


That's nice, but the only reasons that public clients would use a well known bad actor from a rogue state is laziness / incompetence.


You don't need to think about this in a binary fashion. You can split your trust across multiple entities. Different clouds, different countries, or a mix of cloud and data centers you own.


The cloud act ensures this


This breeds the familiar scenario where a group will start saying the link between the two is so clear that there must be a connection. Then you’ll get another group calling the first group conspiracy theorists, and say it’s just a coincidence of probability.

Narrative control and information modeling is so powerful it’s scary.


Post Snowden the first group has some formidable ammunition.


Now apply that to every other "conspiracy.."


That's not how this works. Plenty of conspiracies are just that: idiots pretending they have special knowledge or that believe that behind everything that doesn't quite mesh with their worldview there is someone pulling invisible strings. Those people have a mental issue. The big trick is to be able to tell the two apart, not to categorically assume that because some conspiracies that had a whole bunch of evidence to go with them turned out to be true that all conspiracies, even those that have no evidence to go with them are true as well. That's just faulty logic.


Now get yourself some half-decent psyops and contaminate the first group with supporting voices that emphasize weaker evidence, use poor logic, name-drop socially questionable sources, and go out of their way to sound ridiculous.


Bingo


…which is really weird. At least Google and Microsoft are quite outspoken about their in-house secure element technology.

If nothing else, at Google/Amazon scale, I’d be concerned about a third-party HSM losing data.


It's not surprising because who wants to make their own FIPS 140-2 level 3 compliant key store device?

Also, the Cavium one was the fastest one on the market the last time I looked at this. Thales, Safenet, and IBM also had them.


Google? Titan appears to meet FIPS 140-2 level 1.

I find the levels bizarre. Chromebooks are highly exposed to physical attack. Keys in the cloud are not nearly as exposed. Yet people seem okay with level 1 for chromebooks but apparently want level 3 in the cloud?

I’d rather see a level 1 or level 2 auditable cloud solution, with at least source available.


Level 1 is pretty easy to meet, IIRC. It's 2-4 that are hard, with pretty much no Level 4 certified ones on the market, I believe?


The IBM one for z was level 4 I think..

Yes: https://www.ibm.com/docs/en/cryptocards?topic=4768-overview


This is so weird. The idea of an adversary covertly walking off with an IBM Mainframe or covertly bringing an electronics lab, a microscope, logic analyzers, glitching hardware, etc to the aforementioned mainframe is rather strange. Whereas someone doing that to a phone or a laptop or a game console is very likely.

If I wanted to store an important long term key in a secure facility, I would worry, first and foremost, about software attacks, attacks doable over a network, malicious firmware attacks, and maybe passively observed side channel attacks. Physical attacks would be a rather distant second.


It's not weird.

The adversary will show up and badge in just like everyone else. They might have worked there for 20 years, or they might be an outside repair person or external consultant.

They will definitely fit in. They're supposed to be there.

It will be the most normal thing in the world. And you may never know their real purpose.


Evil maid attack applies to data centers too doesn’t it?


Sure. But the attacker needs to actually get in, which is considerably harder than getting into a hotel room. But more relevantly, the kinds of countermeasures that get you from level 1 to a higher level don’t seem likely to help at all: if someone evil-maids or otherwise fully compromises a machine hosting a FIPS 140-2 level 4 HSM, they likely get the unrestricted ability to perform cryptographic operations using keys protected by that HSM, but they get this by using the HSM’s normal API. If they can convince the HSM to export its keys to another HSM (oops) or to otherwise leak the key material, they get the key material. But this doesn’t seem like it has much to do with physical attacks against the HSM.

Now if someone evil-maid attacks the HSM itself, that’s a different story. Any good HSM should resist this, especially one found in a portable device. And this is because you can steal an entire important corporate laptop or other portable device without necessarily raising a quick alarm, whereas I have trouble imagining someone walking off with the HSM out of an IBM mainframe or with an AWS HSM without the loss being noticed immediately.

(To be fair, in the mainframe case, some crusty corporations seem to have a remarkable ability to fail to notice obvious crypto problems like their public facing certificates expiring. But a loss of an entire HSM from a secure large cloud datacenter will, at the very least, immediately trigger “elevated failure rates” or whatever they like to call it…)


> Sure. But the attacker needs to actually get in, which is considerably harder than getting into a hotel room.

It depends who is the attacker. There are countries (western democracies) where the police regularly "visits" datacenters.


Gotta be better than Utimaco HSM cards. I've worked with them, and have issues with them throwing false low power alarms, and wiping for no reason.

And tech support is horrible, incompetent.


Wiping for no reason: that could well be a difference between the firmware's view of the world and your view, and I guess they just decided to err on the side of caution?

And low power alarms may well be a variation on that theme. Glitching the power supply has been a tool in the arsenal of reverse engineers for a long time so that sort of sensitivity may well make sense. Voltage spikes and drops can be very short, short enough for you not to see them on a DVM but on a memory scope with a trigger value set much lower than you might expect they'd show up with alarming regularity in some hardware that I've worked on. And that explained some pretty weird instability issues. Good power is rare enough that really sensitive hardware usually has power conditioning circuitry right up close to the consumer.


> Wiping for no reason: that could well be a difference between the firmware's view of the world and your view and I guess they just decided to err on the side of caution?

No. As I said, I've been in touch with technical support, and the manuals, docs, and their support are clear: it should not be wiping. It has a backup battery too.

We've spent hours and hours testing, to validate the issue, and cause.

They likely have a firmware bug, or bad board design. And we've seen this from cards from different batches, bought years apart.

Their support is incompetent, and I say that with 30+ years of dealing with, and providing tech support. They fail to read tickets, and even spend (supposedly) weeks running tests, while ignoring vital data in tickets, and conveyed in support calls.

They. Are. Incompetent.

In terms of "issues with power", no. Not over dozens of servers, in different datacentres, and even just with the card at rest, out of server, on battery.

Understand, their job is to provide stable. HSM cards are useless, if they randomly wipe when in use, while under power "just cause".

I find it weird that you're playing devil's advocate here, describing how hard this is. This is an enterprise-grade card, and people have been making reliable and safe HSMs for decades.

The problem is 100% them, their design.

And even more so, their incompetent tech support.

Did I mention their tech support is incompetent?


Hehe, ok! Clear case of faulty product then. Thanks for the extra context.

I'm not so much playing devils advocate as that I'm aware how hard making such devices is and the difference between 'user error' and 'incompetent staff/faulty product' can be hard to distinguish in a comment.


Time to leverage the IBM Cloud KYOK (Keep Your Own Key) model. You need Level 4, especially if you're using a third party: a FIPS 140-2 Level 4 certified HSM.

https://cloud.ibm.com/docs/hs-crypto?topic=hs-crypto-faq-bas...


In-house stuff is for security.

HSMs are mainly for compliance, where a customer needs to check a regulatory box, because some rules says you must use a HSM. The more standard it is, the easier it is to demonstrate to the auditor that you've checked the box.


Not Google.



I'm not saying you are wrong but I can make a website which claims some cloud provider uses my hardware too. Their website is irrelevant. Do we have a Google (or AWS/...) page regarding this?


> Note: Currently, all Cloud HSM devices are manufactured by Marvell (formerly Cavium). "Cavium" and "HSM manufacturer" are currently interchangeable in this topic.

https://cloud.google.com/kms/docs/attest-key


Thanks.

Also, not great; I hope the hyperscalers can diversify this.

