That seems really out of proportion with the experience of others; you may want to get it checked out. Do you have an older model with resistive heat and no seat heaters?
My car has a heat pump. I am basing the numbers off of the range estimates the car provides when I turn off cabin climate. The range is still lower, but I assume another chunk of that is from heating the battery when it's 10°F outside.
My EV's range is absolutely terrible in cold weather. It is EPA rated at 220 miles, and I only see that when the temperature is at or above 80°F.
Most of the winter it tells me I can only do between 100 and 120 miles. It is definitely half the EPA range with climate controls disabled at 0°F. (Ask me how I know.)
I love driving it in the winter.
I don't have a pressing need to go long distances, so that is not a current concern. Not having to stand outside in the bitter cold to fuel up is absolutely awesome.
There are EVs on the market that do much, much better than mine in cold weather, and I now know what to look for.
To really penetrate the Midwest, it will take a car that can realistically do a winter road trip to Florida from, say, Duluth, MN or Michigan's UP.
Because not only do folks in the Midwest drive long distances without a second thought, they sometimes do it in the cold of winter so they can get a break from the snow.
So yes, still getting 90% of the range at -40°C does sound attractive.
That right there is a big problem to begin with. The headline EPA number only reflects reality if you have a mix of city and highway driving. The problem is that people only care about range when driving 75 mph, and I think the headline EPA number should reflect that reality.
There appear to be relatively few possibilities.
* The reporter lied.
* The reporter forgot.
* Apple devices share fingerprint matching details and another device had her details (this is supposed to be impossible, and I have no reason to believe it isn't).
* The government hacked the computer such that it would unlock this way (probably impossible as well).
* The fingerprint security is much worse than years of evidence suggests.
Mainly it was buried at the very end of the article, and I thought it worth mentioning here in case people missed it.
My opinion is that she set it up, it didn't work at first, she didn't use it, forgot that it existed, and here we are.
> Apple devices share fingerprint matching details and another device had her details
I looked into this quite seriously for Windows ThinkPads; unless Apple does it differently, you cannot share fingerprints. They live in a local chip and never move.
The reporter lying or forgetting seems to be the clear answer, there's really no reason to believe it's not one of those. And the distinction between the two isn't really important from a technical perspective.
Fingerprint security being poor is also unlikely, because that would only apply if a different finger had been registered.
When backing up to a local system it is extremely usable and reliable. It creates separate snapshot volumes for each backup that can be navigated in the Finder or via the fancy outer-space restore interface.
Also, backups over the network are possible and have worked well for me for a few years.
It's reliable except when it's not. I'm using Mojave, and currently fighting a bug where a local snapshot gets stuck. When I list the local snapshots, I see the old one, then a gap of several days, and then additional snapshots.
From what I can tell, this snapshot is preventing space reclamation. For the last month or so, I've constantly run out of disk space even when not doing anything special. As in, actually run out of disk space: apps start to become unresponsive or crash, and I get warning boxes about low disk space. When you run low, the OS is supposed to reclaim the space used by snapshots, but I guess that isn't happening.
The stuck snapshot can't be deleted with tmutil; I get a generic "failed to delete" error. The snapshot is actually mounted by the backup daemon, but unmounting it also fails. The only solution I've found is to reboot. Then I get 200-300 GB back and the cycle starts again, with snapshots getting stuck again.
I'm considering updating to Tahoe just because there's a chance they fixed it in that release.
Yeah, I just had to re-pave an M1 MBP with Monterey. That was an adventure. Got the installer. Ran through part one, "This is no longer supported, click here to run in reduced security mode, or cancel the install?" Reduced security mode. "The installation of reduced security mode failed." Cool.
My journey to figure it out found me a Monterey IPSW image. Try to install it via DFU and a second Mac. "Nah, you can't do that, I won't even let you try."
ChatGPT hinted that I needed to do it from a similar vintage OS. "Even an Intel Mac running Ventura could work for this." As luck would have it, my partner still had her old MBP Core i5 running Ventura!
Alright, install Apple Configurator on the Ventura Mac.
"Nah. You need a Mac running 15.7 to install Apple Configurator."
Chicken and egg.
I mean, this OS (Monterey) only came out FOUR YEARS AGO. Ventura was three.
I got lucky with a Reddit post where someone asked for and got a zip file of an old version of Configurator.
I was then able to DFU re-image the M1 Mac with Monterey.
(Why do I need Monterey on it? Because someone else abandoned their software.)
So this Kafkaesque process to even get a four-year-old OS on a four-year-old Mac laptop means we shouldn't just be slobberingly praising Apple.
(I realize you, personally, weren't. Just when you said 10.12, I got flashbacks.)
If they say they don't, and they do, then that's fraud, and they could be held liable for any damages that result. And, if word got out that they were defrauding customers, that would result in serious reputational damage to Apple (who uses their security practices as an industry differentiator) and possibly a significant customer shift away from them. They don't want that.
The government would never prosecute a company for fraud where that fraud consists of cooperating with the government after promising to a suspected criminal that they wouldn't.
That's not the scenario I was thinking of. There are other possibilities here, like providing a decryption key (even if by accident) to a criminal who has stolen a business's laptop, or a business having made contractual promises to its customers based on Apple's promises to them. The actions would be private (civil) ones, not a criminal fraud prosecution.
Besides, Apple's lawyers aren't stupid enough to forget to carve out a law-enforcement demand exception.
Terrible security... compared to what? Some ideal state that exists in your head, or a real-world benchmark? Do you expect them to ignore lawful orders from governments as well?
Cooperating with law enforcement cannot be fraud. Fraud is lying to get illegal gains, and I think it's legally OK to lie if the goal is to catch a criminal and help the government.
For example, in the 20th century, a European manufacturer of encryption machines (Crypto AG [1]) built a backdoor at the request of governments and was never punished; instead it received generous payments.
None of these really match the scenario we're discussing here. Some are typical big-company stuff, some are technical edge cases, but none are "Apple lies about a fundamental security practice, consistently and with malice."
That link you provided is a "conspiracy theory," even by the author's own admission. That article is also outdated; OCSP is as dead as a doornail (no doubt in part because it could be used for surveillance) and they fixed the cleartext transmission of hardware identifiers.
Are you expecting perfection here? Or are you just being argumentative?
> That link you provided is a "conspiracy theory," even by the author's own admission.
"Conspiracy theory" is not the same as a crazy, crackhead theory. See: Endward Snowden.
Full quote from the article:
> Mind you, this is definitionally a conspiracy theory; please don’t let the connotations of that phrase bias you, but please feel free to read this (and everything else on the internet) as critically as you wish.
> and they fixed the cleartext transmission of hardware identifiers
Have you got any links for that?
> Are you expecting perfection here? Or are you just being argumentative?
I expect basic things people should expect from a company promoting themselves as respecting privacy. And I don't expect them to be much worse than GNU/Linux in that respect (but they definitely are).
It was noted at the bottom of the article as a follow up.
> I expect basic things people should expect from a company promoting themselves as respecting privacy. And I don't expect them to be much worse than GNU/Linux in that respect (but they definitely are).
The problem with the word “basic” is that it’s entirely subjective. What you consider “basic,” others consider advanced. Plus the floor has shifted over the years as threat actors have become more knowledgeable, threats more sophisticated, and technologies advanced.
Finally, the comparison to Linux doesn’t make a lot of sense. Apple provides a solution of integrated hardware, OS, and services. Linux has a much smaller scope; it’s just a kernel. If you don’t operate services, then by definition, you don’t have any transmitted data to protect. Nevertheless, if you consider the software packages that distros package alongside that kernel, I would encourage you to peruse the CVE databases to see just how many security notices have been filed against them and which remain open. It’s not all sunshine and roses over in Linux land, and never has been.
At the end of the day, it's all about how you weigh the evidence. If those examples are sufficient to tip the scales for you, that's your choice. However, Apple's overall trustworthiness, particularly when it comes to protecting people's sensitive data, remains high in the market. Even the examples you posted aren't especially pertinent to that (except for iCloud Keychain, where the complaint isn't whether Apple is securely storing it, but the fact that it got transmitted to them in the first place, and there exists some unresolved ambiguity about whether it is appropriately deleted on demand).
> Apple's solution is iCloud Keychain which is E2E encrypted, so would not be revealed with a court order.
Nope. For this threat model, E2E is a complete joke when both E's are controlled by the third party. Apple could be compelled by the government to insert code in the client to upload your decrypted data to another endpoint they control, and you'd never know.
This is a wildly unrealistic viewpoint. This would assume that you somehow know the language of the client you’re building and have total knowledge over the entire codebase and can easily spot any sort of security issues or backdoors, assuming you’re using software that you yourself didn’t make (and even then).
This also completely disregards the history of vulnerability incidents like XZ Utils, the infected NPM packages of the month, and even for example CVEs that have been found to exist in Linux (a project with thousands of people working on it) for over a decade.
You're conflating two orthogonal threat models here.
Threat model A: I want to be secure against a government agency in my country using the ordinary judicial process to order engineers employed in my country to make technical modifications to products I use in order to spy on me specifically. Predicated on the (untrue in my personal case) idea that my life will be endangered if the government obtains my data.
Threat model B: I want to be secure against all nation state actors in the world who might ever try to surreptitiously backdoor any open source project that has ever existed.
I'm talking about threat model A. You're describing threat model B, and I don't disagree with you that fighting that is more or less futile.
Many open source projects are controlled by people who do not live in the US and are not US citizens. Someone in the US is completely immune to threat model A when they use those open source projects and build them directly from the source.
We're talking about a hypothetical scenario where a state actor getting the information encrypted by the E2E encryption puts your life or freedom in danger.
If that's you, yes, you absolutely shouldn't trust US corporations, and you should absolutely be auditing the source code. I seriously doubt that's you though, and it's certainly not me.
The sub-title from the original Forbes article (linked in the first paragraph of TFA):
> But companies like Apple and Meta set up their systems so such a privacy violation isn’t possible.
...is completely, utterly false. The journalist swallowed the marketing whole.
Okay, so yes I grant your point that people where governments are the threat model should be auditing source code.
I also grant that many things are possible (where the journalist says "isn't possible").
However, what remains true is that Microsoft appears to store this data in a manner that can be retrieved through "simple" warrants and legal processes, compared to Apple where these encryption keys are stored in a manner that would require code changes to accomplish.
These are fundamentally different in a legal framework, and while it doesn't make Apple the most perfect, amazing company ever, it shames Microsoft for not putting in the technical work to erect these basic barriers to retrieving data.
> retrieved through "simple" warrants and legal processes
The fact it requires an additional engineering step is not an impediment. The courts could not care less about the implementation details.
> compared to Apple where these encryption keys are stored in a manner that would require code changes to accomplish.
That code already exists at Apple: the automated CSAM reporting Apple does subverts its iCloud E2E encryption. I'm not saying they shouldn't be doing that; it's just proof they can, and already do, effectively bypass their own E2E encryption.
A pedant might say "well, that code only runs on the device, so it doesn't really bypass E2E." What that misses is that the code running on the device is under the complete and sole control of Apple, not the device's owner. That code can do anything Apple cares to make it do (or is ordered to do) with the decrypted data, including exfiltrating it, and the owner will never know.
> The courts could not care less about the implementation details
That's not really true in practice, by all public evidence.
> the automated CSAM reporting Apple does
Apple does not have a CSAM reporting feature that scans photo libraries; it never rolled out. They only have a feature that can blur sexual content in Messages and warn the reader before viewing.
We can argue all day about this, but yeah - I guess it's true that your phone is closed source so literally everything you do is "under the complete and sole control of Apple."
That just sends you back to the first point and we can never win an argument if we disagree about the level the government might compel a company to produce data.