My knee-jerk reaction was, why didn't WD sign the code and use on-chip fuses and a secure boot path to verify the code before transferring control to anything outside their boot ROM? (Many ARM-based systems-on-a-chip are capable of doing this).
Adds cost, for one thing. But you can arrange for the unit to never run a byte of code (even one loaded from the platter) that didn't come from WD.
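The verified-boot idea above can be sketched in a few lines. This is a simplified model, not WD's or any vendor's actual scheme: it assumes the boot ROM's trust anchor is a digest burned into one-time-programmable fuses, and all names here are hypothetical.

```python
import hashlib

# Hypothetical trust anchor: in a real SoC this digest would be burned
# into on-chip fuses, inside the boot ROM's trust boundary.
FUSED_DIGEST = hashlib.sha256(b"vendor-stage2-image").hexdigest()

def boot_next_stage(image: bytes) -> bool:
    """Transfer control only if the loaded image hashes to the fused value."""
    return hashlib.sha256(image).hexdigest() == FUSED_DIGEST

# The genuine image boots; a tampered one (even loaded from the platter)
# is rejected before a single byte of it runs.
assert boot_next_stage(b"vendor-stage2-image") is True
assert boot_next_stage(b"backdoored-image") is False
```

Real implementations verify an asymmetric signature over the image rather than a raw hash, so the vendor can ship firmware updates without re-fusing; the fuses then typically hold a hash of the vendor's public key instead.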
Your typical motherboard's BIOS code is not signed. Your video card's BIOS is not signed. Your network card's firmware is not signed. Your optical disc drive's firmware is not signed. Etc. This threat vector exists with each of these devices.
As always security is a trade-off. The threat vector of flashing a backdoored BIOS/firmware is irrelevant for 99% of the market: most people will never be targets of such highly-technical attacks.
PS: I tip my hat to Sprite_TM; fascinating research! I love to disassemble firmware myself :) I liked how you were able to reverse-engineer the data structures in RAM.
Adding security to a system imposes costs on the use, maintenance, and support of that system. Can you imagine the scale issues associated with maintaining a PKI across millions of deployed devices? How about hundreds of millions?
TPM is present in many, many laptops, yet most IT departments leave it unconfigured. Why? Because when you replace the hard drive and it changes the measured boot parameters, the machine will no longer boot, and you have to work within the TPM infrastructure's auth mechanisms just to do a simple hard drive replacement. (nb: I'm not an IT guy so I might be a bit off here, but you get the gist)
So the reasons for NOT including security are pretty damn big compared to the risk. As security guys are fond of saying: "If I can get physical access to your machine, all bets are off." Keyloggers, evil-maid BIOS attacks, HID attacks, firmware attacks on peripherals, etc. are all possible ways to compromise a device. If you're wondering how Stuxnet got into an air-gapped facility in Iran, I'd bet that a method like this was the prime candidate. That's how I'd do it.
There's not much maintenance involved in the security of an embedded system. Once you do it, assuming you do it right, you're done.
Now, security is a process. You can expect breaches of high value hardware, and need to react to them.
The security model for consoles STARTS with the assumption that the attacker has physical possession of the hardware. This makes things interesting; it's certainly a big differentiator for PCs versus tablets, and one of the reasons why the Windows group at Microsoft has had a lot of trouble making their stuff secure on non-PCs.
But for a hard drive, things should be pretty contained. I estimated it would have taken a couple of weeks to secure one major embedded system I worked on; assuming the interfaces are limited, it's not a huge deal.
Come to think of it, isn't this how Xbox (360?) piracy protections were breached for a long while? People were flashing hacked DVD-ROM firmware to make burned discs appear like legit ones.
You had to duplicate some key material, IIRC. Also, not as trivial as you might think.
This lets you play "backups" of games, enabling piracy. It doesn't let you run unsigned code (those required other cracks) or sign anything (such as save files).
In Secure Boot environments, this is all untrue - all option ROMs are signed, and the firmware should only accept signed updates. That pushes the problem out to ancillary devices that we've traditionally thought of as safe, but this research demonstrates that assumption really needs to be reconsidered.
I said "typical". Most desktop and server computers do not have or do not fully enable Secure Boot (or else you would not be able to plug in a random 5-year old NIC whose firmware is not signed).
Pretty much all desktop systems shipped in the past 8 months have fully enabled Secure Boot by default. Yes, this means that you can't plug in a random 5-year old NIC and PXE, just like it means you can't plug in an older graphics card and get any output before the OS starts.
The latest generation of SAS enterprise drives do exactly this. All firmware is signed and there is extra hardware to ensure unsigned code is never run. They also disable the JTAG port before the drives leave the factory so there's no opportunity for shenanigans.
These features are required by enterprise customers to prevent just this sort of tampering.
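The drive-side check those enterprise drives perform looks roughly like this. As a simplification, this sketch uses an HMAC with a shared key as a stand-in for the signature; real drives use asymmetric crypto so that the verification key stored on the drive can't be used to forge signatures. All names here are illustrative.

```python
import hashlib
import hmac

# Stand-in for the vendor's signing key. In a real drive this would be an
# asymmetric keypair: private key at the factory, public key on the drive.
SIGNING_KEY = b"vendor-signing-key"

def sign_firmware(image: bytes) -> bytes:
    """Factory side: produce a signature over the firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def accept_update(image: bytes, signature: bytes) -> bool:
    """Drive side: reject any update whose signature doesn't verify."""
    return hmac.compare_digest(sign_firmware(image), signature)

official = b"official-fw-v2"
assert accept_update(official, sign_firmware(official))
# A patched image shipped with the official image's signature is refused:
assert not accept_update(b"patched-fw", sign_firmware(official))
```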
For PCs and smartphones, which can have higher-level security structures, the community is violently against secure boot. For firmware-based embedded components, most people aren't so strongly opposed to it.
Not that their opinion is the community's, but the FSF is not against secure boot even for PCs and smartphones, only against "restricted boot" - that is, secure boot without giving the keys to the user.
Here you said "knee jerk" where you meant to say "logically principled and wise". Also "common sense" would work in certain circles.
Secure boot isn't a technical solution to a software/firmware update problem. It's a control mechanism to solve a management problem. Crazy idea: don't put the interface that has access to firmware on the standard interface. Give it its own interface, and let motherboard manufacturers support it on enterprise/datacenter-geared systems.
The iPhone is a /lot/ more complex than a disk drive, with a bunch more ways to talk to the world. (Also, I don't know if the iPhone has a secure boot ROM. It might, whereupon Apple blew it somewhere else).
This stuff /is/ hard. I sat next to a bunch of folks doing this on a console platform and the techniques and exploits were, in a word, breathtaking. But if you know what you're doing and your scope is limited -- to a smallish device, for instance -- you can make it very hard for someone to crack.