> The OS could require the parent to manually update it.
How is their age verified?
At some point one of two things is required:
1) A promise that the user is a certain age
- Which puts us exactly where we are
2) Official identification is used to verify age
- Which creates a PII nightmare
That's it. There are only those two options. You may not believe #2 is going to be a privacy nightmare, but we're already seeing it happen with Discord/OpenAI/LinkedIn and everyone else that uses Persona[1]. They aren't taking even the minimal security measures and already aren't doing what they claimed (processed on device, then deleted). This "hack"[0] couldn't happen if that were true.
The difference here is that it can be set by the parent at the OS level and locked, requiring sudo-equivalent privileges to change.
The way it is now, there's nothing stopping an under-18 user from logging out of a 'parental control enabled' account and making a new account without those controls on any service, from Facebook to Steam. So the only effective option at that point is to block the app or service entirely.
This gives more power to parental-control software. And yes, it moves the responsibility from the service to the parents, which is what the services want because of COPPA and other similar laws.
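To make the "set at the OS level, locked behind sudo" idea concrete, here's a rough sketch in Python. To be clear, this is purely hypothetical: the file path, bracket names, and functions are my own stand-ins, not any real OS interface.

    # Hypothetical sketch of an OS-level age bracket (not a real API).
    # The parent sets it once with sudo; apps can only read it, and all they
    # get is a coarse category, never the child's identity or birth date.
    import os
    from pathlib import Path

    BRACKET_FILE = Path("/etc/age_bracket")          # assumed location, root-owned
    VALID_BRACKETS = {"under13", "13-17", "18plus"}

    def read_age_bracket() -> str:
        """Any service queries this instead of asking the user to promise an age."""
        try:
            value = BRACKET_FILE.read_text().strip()
        except FileNotFoundError:
            return "under13"  # fail closed: unknown means most restrictive
        return value if value in VALID_BRACKETS else "under13"

    def set_age_bracket(value: str) -> None:
        """Only works as root/sudo, because /etc isn't writable otherwise."""
        if value not in VALID_BRACKETS:
            raise ValueError(f"unknown bracket: {value}")
        if os.geteuid() != 0:
            raise PermissionError("changing the age bracket requires sudo")
        BRACKET_FILE.write_text(value + "\n")

The point being that a kid can't just log out and route around it the way they can with per-account controls, and the services only ever see a coarse bracket, never any PII.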
But you do bring up another issue people aren't discussing. That the default setting is under 18.
So we protect the children from adults by... having no way to actually verify someone is a child?
The problem is less kids getting access to porn and more pedos getting accounts in spaces designed for children: places like Club Penguin or, very famously, Roblox.
Here's the problem: you can't verify children. They don't have identification the way adults do. And worse, if we gave them that, it would only make them more vulnerable!
Then we have the whole problem of a global internet. VPN usage is already skyrocketing to circumvent these policies.
So the only real "solution" to this is global identification systems where essentially everyone is carrying around some dystopian FIDO key (definitely your phone) that has all your personal information on it and you sign every device you touch. Because everything from your fridge to your car is connected to the Internet.
But that's a cure worse than the poison. I mean what the fuck happens to IoT devices? Do we just not allow them on the internet? Or are they just assumed 18+? So all kids need to do is get a raspberry pi? All they need to do is install a VM on their phone? On their computer? You might think that kids won't do this but when I was in high school 20 years ago we all knew how to set up proxies. That information spread like wildfire, and you bet it got easier as the smarter kids put in the legwork.
This is a losing battle. It's not a cat-and-mouse game; it's Wile E. Coyote vs. the Road Runner.
We're on HN, FFS. If there's anywhere on the Internet where the average user is going to understand how impossible this is, it should be here. We haven't even talked about hacking! And yes, teenage script kiddies do exist.
These policies don't protect kids, they endanger them. On top of that they endanger the rest of us. Seriously, just try to work it out. Try to create a solution and then actually try to defeat your solution. Don't be fucking Don Quixote.
> But you do bring up another issue people aren't discussing. That the default setting is under 18.
Some things do that. This law doesn't have a default. If the admin sets all the user accounts to 18+, then the users are stuck with the setting being 18+.
> I mean what the fuck happens to IoT devices? Do we just not allow them on the internet?
Sounds pretty good to me.
But yeah, they need some kind of different handling. Maybe a "give no access to anything age-gated" category, though is that really different from under-13 in practice?
> So all kids need to do is get a raspberry pi? All they need to do is install a VM on their phone? On their computer? You might think that kids won't do this but when I was in high school 20 years ago we all knew how to set up proxies.
Just delaying unrestricted access until high school would already solve most of the problem.
> These policies don't protect kids, they endanger them. On top of that they endanger the rest of us.
They do not. Some totally different system could endanger people, but this one doesn't.
> They employ some of the best security analysts in the world and have $10-30B/yr revenue
I'll never not be impressed by how many people will defend trillion-dollar organizations and say that things are too expensive. Especially when open-source projects (including forks!) implement such features.
I'm completely with you: they could do these things if they wanted to. They have the money. They have the manpower. It's just a matter of priority. And let's be honest, they're spending more on slop than on actual fixes, or even on making their products better (for the user).
“Priorities” is far too soft a term in this context. These are anti-priorities: not just things they choose not to work on, but things they’ll spend big money to prevent, up to and including bribing, uh I mean lobbying, lawmakers.
Neither of the big players has refined enough permissions. This sets users up to give away more data than they realize.
One clear example is needing a permission once for setup and then having that grant remain persistent forever.
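A minimal sketch of what I mean (made-up names, toy Python, not any real permission framework): today's model is effectively a grant that never expires, even when the app only needed it once during setup.

    # Toy permission store: today's model is effectively ttl_seconds=None (forever),
    # even when the app only needed the permission once during setup.
    import time

    class PermissionStore:
        def __init__(self):
            self.grants = {}  # (app, permission) -> expiry timestamp, or None = forever

        def grant(self, app, permission, ttl_seconds=None):
            self.grants[(app, permission)] = (
                time.time() + ttl_seconds if ttl_seconds else None
            )

        def is_allowed(self, app, permission):
            if (app, permission) not in self.grants:
                return False
            expiry = self.grants[(app, permission)]
            return expiry is None or time.time() < expiry

    store = PermissionStore()
    store.grant("photo_app", "contacts", ttl_seconds=300)  # setup-only, expires
    store.grant("photo_app", "camera")                      # persistent, as today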
An easy demonstration is just looking at what Graphene has done. It's open source, and you want to say Google can't protect their users better? Certainly Graphene has some advanced features, but not everything can be dismissed so easily. Besides, just throw advanced features behind a hidden menu (which they already have!). There's no reason you can't make most users happy while also catering to power users (they'll always complain, but that's their job).
> Why not? Why can't faster typing help us understand the problem faster?
Can it help? Of course! But I think the question is too vague here and you're (presumably) unintentionally creating a false dichotomy. I'll clarify in the responses below.
> Why can't we figure out the right thing faster by building the wrong thing faster?
The main problem is that solution spaces are very large. There are two general ways to narrow the solution space: directly and indirectly. Directly by things like thinking hard, digging down, and "zooming in". Indirectly by things such as figuring out what not to do (ruling things out).
You can build a lot of wrong things that don't help you narrow that solution space. The most effective way to "build the wrong thing" in an informative way is to first think hard and understand your solution space. You want to build the right wrong thing: the thing that helps you rule out lots of stuff. But if you do it randomly, you aren't doing this effectively and are probably wasting a lot of time. You're probably already doing this without thinking about it explicitly, but making it explicit will help you do it better.
> Presumably we were gonna build the wrong thing either way
You always build the "wrong" thing. But it is about how wrong you are. Despite being about physics, I think Asimov's Relativity of Wrong[0] (short essay) is pretty relevant here and says everything I want to say but better. It is worth the read and I come back to it frequently.
> I often build something to figure out what I want
Yes! But this is not quite the same thing. I do this too! I never know the full details of the thing I want before I start building. I'm not sure that's even possible tbh. You're always going to discover more things as you get into the details and nuance. But that doesn't mean foresight is useless either.
Analogy
-------
Let's say I'm somewhere in the middle of America and I want to get to NYC. Analogous to your framing, you are saying "why can't moving faster help us get there faster?" Obviously it can! BUT speed is meaningless without direction. You don't want speed, you want velocity. If you start driving as fast as possible in a random direction, you're as likely to head in a direction that increases your distance as one that decreases it, and you are very unlikely to head in a good direction. Driving fast in the wrong direction does significantly more harm than driving slowly in the wrong direction.
So what's the optimal strategy? Find a general direction (e.g. use the sun or stars/moon) to figure out where "east"(ish) is, start driving relatively slowly, refine your direction as you get more information about the landscape, and speed up as you gain more information. If you can't find a general direction, you should slowly meander, carefully taking in how the landscape/environment is changing. If it is very unchanging, then yeah, speed up, but only until you find a region that becomes more informative, then repeat the process.
If we already had perfect information about how to get to NYC then just drive as fast as fucking possible. But if we don't have that information we need a completely different strategy. Thus, t̶y̶p̶i̶n̶g̶ driving speed isn't the bottleneck.
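If you want the analogy as actual arithmetic, here's a toy Monte Carlo I threw together (nothing rigorous, just illustrating the point): it estimates how much of your speed counts as progress toward the destination for a given heading error.

    # How much of your speed is useful progress toward the goal?
    # Expected progress per unit speed = E[cos(heading error)].
    import math, random

    def useful_fraction(max_error_deg, trials=100_000):
        total = 0.0
        for _ in range(trials):
            err = math.radians(random.uniform(-max_error_deg, max_error_deg))
            total += math.cos(err)  # component of velocity aimed at the destination
        return total / trials

    print(useful_fraction(180))  # random direction: ~0.0, fast driving gets you nowhere
    print(useful_fraction(45))   # roughly-right heading: ~0.9 of your speed is progress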
> This course covered CLIs/git/Unix/shell/IDEs/vim/emacs/regex/etc.
Fwiw I just graduated grad school and our lower division courses taught most of this stuff, though not as the main subject. Most upper division classes required you to submit your git repo. Most of this was fairly rudimentary but it existed. Though we didn't cover vim/emacs and I'd argue shell and bash were very lacking.
That said, several of us grad students and a few professors lived in the terminal. The students that wanted to learn this stuff frequented our offices, even outside office hours. I can certainly say every single one of those students was consistently at the top of the class (though they weren't the only ones at the top). The students who lived in the terminal but didn't frequent office hours tended to do well in class, but honestly I think several were bored so didn't get straight A's, though I could generally tell they knew more than most. Though I'm biased. I think more people should live in the shell.
A warrant usually isn't a free pass to search everything. Warrants are often narrow.
The warrant is the receipt. Even if you believe it's fine most of the time, I'm pretty certain most people would feel uncomfortable if they went to the grocery store and weren't offered one. You throw it away most of the time, but have you never needed it? Mistakes happen.
The stakes are a lot higher here. The cost of mistakes is higher. The incentives for abuse are higher. The cost of abuse is lower.
And what's the downside of the person being searched having the warrant? Why does it need to be secret?
You used a conditional so I assume you also know how such a system can fail. It's not hard to figure out how that can be exploited, right? You can't rely on that conditional being executed perfectly every time, even without adversarial actors. But why ignore adversarial actors?
Honestly I think we're just becoming more aware of this way of thinking. It's certainly exacerbated now that everyone has "an expert" in their pocket.
It's no different from conspiracy theorists. We saw a lot more of them with the rise in access to the internet. Not because they didn't put in the work to find answers to their questions, but because they don't know how to properly evaluate things, and because they think that if they're wrong then it's a (very) bad thing.
But the same thing happens with tons of topics, and it's way more socially acceptable. Look how everyone has strong opinions on topics like climate, rockets, nuclear, immigration, and all that. The problem isn't having opinions or thoughts, but the strength of them compared to the level of expertise. How many people think they're experts after a few YouTube videos or just reading the intro to the wiki page?
Your PM is no different. The only difference is the things they believed in, not the way they formed beliefs. But they still had strong feelings about something they didn't know much about. It became "their expert" vs "your expert" rather than "oh, thanks for letting me know". And that's the underlying problem. It's terrifying to see how common it is. But I think it also leads to a (partial) solution, or at least a first step. Then again, domain experts typically have strong self-doubt. It's a feature, not a bug, but I'm not sure how many people are willing to be comfortable with being uncomfortable.
I'm not Canadian, but this seems written similarly to the US laws that have been exploited to spy on Americans. And despite not being Canadian, as an American I have a horse in this race, as the OP notes...
| many of these rules appear geared toward global information sharing
I see a lot of people arguing that these bounds are reasonable so I want to make an argument from a different perspective:
Investigative work *should* be difficult.
There is a strong imbalance of power between the government and the people. My limited understanding of Canadian law suggests that Canada, like the US, was influenced by Blackstone[0]. You may have heard his ratio (or one of its many variations):
| It is better that ten guilty persons escape than that one innocent suffer.
What Blackstone was arguing about was the legal variant of "failure modes" in engineering. Or you can view it as the impact of Type I (false positive) and Type II (false negative) errors. Most of us here are programmers, so this should be natural thinking: when your program fails, how do you want it to fail? Or think of it like a locked door. Do you want the lock to fail open or closed? In a bank you probably want the safe to fail closed: if the lock fails, the safe stays shut and has to be broken into to get access again. But in a public building you probably want doors to fail open (so people can escape from a fire or whatever other emergency is likely the reason for the failure).
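In code the lock analogy looks something like this. The policy/sensor services here are stand-ins I made up; the only point is the deliberate choice of default when the check itself fails.

    # Same failure, two deliberately different failure modes.
    def can_open_vault(user, policy_service):
        """Bank vault: fail closed. An outage should never unlock the safe."""
        try:
            return policy_service.is_authorized(user, "vault")
        except Exception:
            return False

    def can_open_fire_exit(sensor_service):
        """Public exit: fail open. An outage should never trap people inside."""
        try:
            return not sensor_service.lockdown_active()
        except Exception:
            return True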
This frame of thinking is critical with laws too! When a law fails, how do you want it to fail? So you need to think about that when evaluating this (or any other) law. When it is abused, how does it fail? Are you okay with that failure mode? How easy is it to abuse? Even if you believe your current government is unlikely to abuse it, do you believe a future government might? (If you don't believe a future government might... look south...)
A lot of us strongly push against these types of measures not because we have anything to hide nor because we are on the side of the criminals. We generally have this philosophy because it is needed to keep a government in check. It doesn't matter if everyone involved has good intentions. We're programmers; this should be natural too! It doesn't matter if we have good intentions when designing a login page: we still have to think adversarially and about failure modes, because good intentions are not enough to defend against those who wish to exploit it. Even if the number of exploiters is small, the damage is usually large, right?
This framework of thinking is just as beneficial when thinking about laws as it is in the design of your programs. You can be in favor of the intent (spirit of the law), but you do have to question if the letter of the law is sufficient.
I wanted to explain this because I think it'll help facilitate these types of discussions. I think they often break down because people are interpreting from very different mental frameworks. Disagree with me if you want, but I hope making the mental framework explicit can at least improve your arguments :)
> A lot of us strongly push against these types of measures not because we have anything to hide nor because we are on the side of the criminals.
I had this view as well until I realized it’s predicated on living in a high trust society. At some point you reach a critical mass of crime that is so rampant, and the rule of law has so broken down that it’s basically Mad Max out there, and then these idealistic philosophies start to fall apart.
You can look to parts of SE Asia or the Middle East to see some examples where that happened, and where it was eventually reined in with extreme measures (usually broad and indiscriminate capital punishment).
I know your comment is about fixing failure modes in the legal system, and I’m not defending government surveillance, or the idea of considering someone innocent until proven guilty, but what happens when the entire system fails due to misplaced idealism? Much worse things are waiting on the other end of the spectrum when people don’t feel like the government is adequately protecting them.
I think a practical argument against what you're saying here is simply that solving the Mad Max stuff doesn't require anything at all like this. The type of crime that's scary and impactful (e.g. terrorism is scary, but so extremely rare that it can't really be considered impactful) is generally trivial to bust.
Are you of the opinion that peoples' default state is a Mad Max-like existence?
The question isn't about idealism or the realistic possibility of said idealism. The question, in my opinion, is whether we can only succeed as a species if a small number of people are entrusted with creating and enforcing laws by force when necessary.
That isn't to say we never need some level of hierarchy, or that laws, social norms, etc. aren't important. It's to say that we need to keep a tight rein on it and only push authority and enforcement up the ladder when absolutely necessary.
It will end poorly if we continue down the road of larger and larger governments under the fear of Mad Max, and this idea many people have that "someone has to be in charge."
>I had this view as well until I realized it’s predicated on living in a high trust society.
Tearing down these high-trust conditions has been the consequence of active policies. You don't just miss trends and correlations like these. Not to this extent.
The Mad Max stuff is occurring at scale more because of unchecked governments, and governments that don't work for society, than because of insufficient surveillance.
>I had this view as well until I realized it’s predicated on living in a high trust society. At some point you reach a critical mass of crime that is so rampant, and the rule of law has so broken down that it’s basically Mad Max out there, and then these idealistic philosophies start to fall apart.
I see "High Trust Society" so much as a weird racist dogwhistle, but feel free to disabuse me of that notion.
I live in an extremely high-crime area. Because cops abuse the law to keep their numbers up. If someone checked, they would see that my local McDonald's car park is one of the biggest crime hotspots in the country because of administrative detections made on minor drug deals there.
It just so happens that my area is also where the government dumps migrants, refugees, and poor people. It's also the case that they test welfare changes here.
I haven't had a single incident here in 6 years. We often forget to lock our doors. My wife takes my toddler walking around the neighborhood at night. I wave hello to the guy across the road who I have like 99% certainty is dealing drugs (Or just has a lot of friends with nice cars who visit to see how long it has been since he trimmed his lawn).
That said, if you turn on the TV, two things are apparently happening: 1. We are under attack by hordes of immigrants tearing the country apart. 2. We are under attack by kids on e-bikes mowing kids down in a rampage of terror.
Politicians, in order to be seen to be doing things, bring laws in to counter these threats. People bash their chests and demand more be done.
But the issue is that it's just not happening. My suburb is great. The people are generally lovely, even those in meth-related occupations.
When you complain about the trustiness of the society, consider that your lack of trust might actually be the problem. Nothing is necessarily going to break down because you didn't make your neighbor's life worse by supporting another dumb-as-shit law. "Oh no, crime is so rampant": buddy, you need to get over yourself. Societies don't fail because of socially defined crime; they fail because people prioritise their perceived safety over everyone's freedom.
> I’m not defending government surveillance, or the idea of considering someone innocent until proven guilty
Exactly what you are defending.
>what happens when the entire system fails due to misplaced idealism?
It's at threat from the idealism that says you can just pass one more law to fix society.
>don’t feel like the government is adequately protecting them.
They come up with a bunch of dumbshit laws like the one in the OP. That's the result.
Re: "high trust society" generally means people are pointing to some implicit, unwritten structures that stop something from happening.
Collective notions of shame, actual networks of friends and families that reinforce correct behaviour or issue corrections.
Simply think about how credit networks form and function, and why visiting a food truck or a medieval travelling doctor for your vial of ointment is different from buying special products from a brick-and-mortar establishment.
Basically, if you or the network have a harder time propagating defaults and bad credit back through the network in a way that prevents future bad outcomes, then that is a loss of high trust.
This isn't about race, really, unless you are operating at the level of some biological or genetic connection to behaviour ... But that is a pretty strange place to be, as there are a whole host of confounding factors that are much more obvious and believable, and I seriously doubt that even a motivated racist could ever credibly do empirical studies showing causal links between any given genetic population cluster and emergent societal behaviour. These are such high-dimensional systems that it just seems insane to even think one could measure this effect.
The invisible substrate is the society unfortunately ... And we are all bad at writing it down and measuring it.
It seems to me that "society" isn't anything but a stick to beat against one's hobby horse. "Society is bad because of the thing that happened to me, save society by changing things my way!!!" etc. Where really, if you turn off the TV and go to the shops, it's fine.
> until I realized it’s predicated on living in a high trust society.
I don't think it's predicated on that. It's based on low trust of authority. Not necessarily even current authority. And low trust of authority is not equivalent to high trust in... honestly anything else.
> You can look to parts of SE Asia or the Middle East to see some examples where that happened
These are regions known for high levels of authoritarianism, not democracy, not anarchy (I'm not advocating for anarchy btw). These regions often have both high levels of authoritarianism AND low levels of trust. Though places like China, Japan, Korea etc have high authoritarianism and high trust (China obviously much more than the other two).
> but what happens when the entire system fails due to misplaced idealism?
It's a good question and you're right that the results aren't great. But I don't think it's as bad as the failure modes of high authoritarian countries.
High authority + low trust + abuse gives you situations like we've seen in Russia, Iran, North Korea. These are pretty bad. The people have no faith in their governments and the governments are centered around enriching a few.
High authority + high trust + abuse is probably even worse though. That's how you get countries like Nazi Germany (and cults). The government is still centered around enriching a few, but they create more stability by narrowing the targeting. Or rather by having a clearer scale where everyone isn't abused equally. (You could look at the famous quote by a famous US president about keeping the white population in check by making them believe that at least they're not black.)
None of the outcomes are good but I think the authoritarian ones are much worse.
> when people don’t feel like the government is adequately protecting them.
But this is also different from what I'm talking about. You can have my framework and still trust your government. If you read carefully, you'll find that they are not mutually exclusive.
The road to hell is paved with good intentions, right? That implies that the road to hell isn't paved only by evil people. It can be paved even by good, well-intentioned ones. Just like I suggested about programming: we don't intend to create bugs or flaws (at least most of us don't), but they still exist. They still get created even when we're trying our hardest not to create them, right? But being aware that they happen unintentionally helps you make fewer of them, right? I'm suggesting something similar, but about governments.
The quote refers to a Faustian bargain offered by the Penns. They'd bankroll securing a township, as long as the township gave up the ability to tax them. The quote points out that by giving up the liberty to tax for short-term protection, the township would ultimately end up with neither the freedom to tax to fund further defense nor long-term security, so it might as well hold onto the ability to tax and just figure out the security issue.
Moral: don't give up freedoms for temporary gains. It never balances out in the end.
It's become a shorthand for saying much more, though the original context differs from how it is used today (common with many idioms).
People do not generally believe a seat belt limits your liberty, but you're not exactly wrong either. But maybe, in order to understand what they mean, it's better not to play devil's advocate. So try an example like the NSA's mass surveillance. It was instituted under the pretext of keeping Americans safe, a liberty people were willing to temporarily sacrifice for safety. But not only did we find the pretext was wrong (no WMDs were found...), we also never got that liberty back, now did we?
That's the meaning, or at least what people use it to mean. But if you try to tear down any saying, it's not going to be hard to. Natural languages' utility isn't in their precision; it's in their flexibility. If you want precision, well, I for one am not going to take all the time necessary to write this in a formal language like math, and I doubt you'd have the patience for it either (who would?). So let's operate in good faith instead. It's far more convenient and far less taxing.
People are let off all the time. Not because of the law, but because who needs the work of chasing and punishing every lawbreaker in the land? In your own workplace, family, and friend circle, count how many times you have seen someone do something dumb (forget illegal) that has caused a loss or pain to someone else. And then count how many times you have done something about it.
I use the speed chime in my Model 3 car to alert me if I'm more than 2 km/h over the posted speed limit, which it infers from its database with the autopilot camera providing overrides.
If I'm over that when passing a speed camera in Victoria, AUS, I'll be pinged with a decent fine to arrive shortly.
Imagine if instead of a chime I got fined every single time, everywhere? All this new monitoring makes it a bit like that, at an extreme. I don't want to live in such a society.
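What makes it uncomfortable is how small the gap is between a nudge and enforcement. A toy sketch (the 2 km/h tolerance is from above; everything else is made up):

    # The detection logic is identical; only the consequence differs.
    TOLERANCE_KMH = 2

    def over_limit(current_kmh, posted_limit_kmh):
        return current_kmh > posted_limit_kmh + TOLERANCE_KMH

    def on_speed_update(current_kmh, posted_limit_kmh, enforcement_mode=False):
        if over_limit(current_kmh, posted_limit_kmh):
            return "issue fine" if enforcement_mode else "play chime"
        return "do nothing"

    print(on_speed_update(63, 60))                         # play chime
    print(on_speed_update(63, 60, enforcement_mode=True))  # issue fine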
There were two commenters that responded 15 minutes prior to your comment. I'd suggest starting there if you want to understand. Then if you disagree with those, you can comment and actually contribute to the conversation ;)
[0] https://cybernews.com/privacy/persona-leak-exposes-global-su...
[1] https://withpersona.com/customers