They gave 16 year olds the vote, and 16 year olds can leave home, marry, join the army, and so on. Why should they not vote?
They didn't run pointless elections by request of the very councils that were due for them, because those areas are being redrawn and would have to have fresh elections almost immediately, making the results meaningless.
They also gave all the conservative hereditary peers lifetime peerages so they will keep their seats.
Your framing of all three of these is obviously intended to mislead.
> 16 year olds can leave home, marry, join the army, and so on. Why should they not vote?
That's a separate argument.
My point is Labour's change to the rules is very politically convenient for themselves. In the most recent polling, 32% of 16-17-year-olds would vote Labour, while only 17% of the overall electorate would vote Labour.
> They didn't run pointless elections by request of the very councils that were due for them, because those areas are being redrawn and would have to have fresh elections almost immediately, making the results meaningless.
They allowed individual incumbent councillors to choose whether elections were cancelled. This was politically convenient for the Labour and Tory parties because the Reform Party is new, and while it's polling well ahead of Labour, it doesn't have many incumbent council seats.
When a court challenge loomed, Labour quickly u-turned on the latest round of cancellations. Funny how something can seem sensible one day, and can then be u-turned at the slightest whiff of legal scrutiny.
> They also gave all the conservative hereditary peers lifetime peerages so they will keep their seats.
Can you name a single Conservative hereditary peer that will be given a lifetime peerage in Starmer's reform plan?
No, you can do things that benefit you electorally, but are also just the right thing to do. Changing the voting system from FPTP would obviously benefit parties other than the major ones, but that doesn't mean it'd be wrong for those parties to do it if they got into power. So the question is whether it's good policy, and I argue it is: if someone can live by themselves, serve in the army or work as a full-time apprentice, marry, and have a child, they should be able to vote.
> When a court challenge loomed, Labour quickly u-turned on the latest round of cancellations. Funny how something can seem sensible one day, and can then be u-turned at the slightest whiff of legal scrutiny.
Yes, it's absolutely bad that the government isn't making sure these things are legal before doing them, just as with the Palestine Action proscription. But it's hardly a sign of gerrymandering: why would they bother, when it would give them basically zero advantage? All it would achieve is a council with no time to actually do anything. The obvious conclusion is they thought holding the elections was a waste of money and effort, but if you have to fight a legal battle over it, the savings disappear, even if the cancellation turns out to be legal.
> Can you name a single Conservative hereditary peer that will be given a lifetime peerage in Starmer's reform plan?
> The BBC understands ministers have offered the Conservatives the chance to retain 15 hereditary members of the House of Lords as life peers.
So it's not specific names as it hasn't been finalised, but 15 of them. I accept I misremembered when I said "all", but the point stands: not gerrymandering.
> No, you can do things that benefit you electorally, but are also just the right thing to do. Changing the voting system from FPTP would obviously benefit parties other than the major ones, but that doesn't mean it'd be wrong for those parties to do it if they got into power
You're reinforcing my point.
Minor parties (who might collectively be popular with the electorate) will never be able to change the voting methodology to their advantage because FPTP keeps the incumbents in place, and only the incumbents have the power to choose the voting system. So democracy suffers and the incumbents benefit.
Similarly, in this case, allowing children to vote helps the incumbents stay in place despite their party and their leader being deeply unpopular with the electorate overall. So democracy suffers and the incumbents benefit.
This "logic" doesn't track at all. Enfranchising women may have benefited a particular party; does that mean we shouldn't have given women the vote, or that doing so hurt democracy? Of course not.
Just because something benefits a singular party doesn't make it antidemocratic. Expanding the franchise is more democratic, not less. A party being rewarded electorally for doing something good is the system working, not failing.
There are reasonable arguments to be made (in my opinion) that 16 is too young, but you aren't making that argument; the one you are making is completely invalid.
Yeah, my setup is purely for my own security reasons and interests, so there's very little downside to my scorched earth approach.
I do, however, think that if there was a more widespread scorched earth approach then the issues like those mentioned in the article would be much less common.
In such a world you can say goodbye to any kind of free Wi-Fi, anonymous proxy etc., since all it would take to burn an IP for a year is to run a port scan from it, so nobody would risk letting you use theirs.
Fortunately, real network admins are smarter than that.
Pretty much. I think there's also a responsibility on the part of the network owner to restrict obviously malicious traffic. Allow anonymous people to connect to your network and then perform port scans? I don't really want any traffic from your network then.
Yes, there are less scorched-earth ways of looking at this, but this works for me.
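To make the "port scans get you banned" rule concrete, here's a rough sketch in Python. The thresholds and class name are made up for illustration, not from any real product; in practice you'd feed this from firewall logs and push the banned set into your firewall's drop list.

```python
import time
from collections import defaultdict

# Illustrative thresholds: an IP touching many distinct ports in a
# short window is treated as a scanner. Tune these for your context.
SCAN_PORT_THRESHOLD = 10   # distinct ports
SCAN_WINDOW_SECONDS = 5.0

class ScanDetector:
    def __init__(self, threshold=SCAN_PORT_THRESHOLD, window=SCAN_WINDOW_SECONDS):
        self.threshold = threshold
        self.window = window
        self.hits = defaultdict(list)   # ip -> [(timestamp, port), ...]
        self.banned = set()

    def observe(self, ip, port, now=None):
        """Record one connection attempt; return True if ip is now banned."""
        if ip in self.banned:
            return True
        now = time.monotonic() if now is None else now
        self.hits[ip].append((now, port))
        # Keep only events inside the sliding window.
        self.hits[ip] = [(t, p) for t, p in self.hits[ip] if now - t <= self.window]
        if len({p for _, p in self.hits[ip]}) >= self.threshold:
            self.banned.add(ip)
            return True
        return False
```

A lower threshold makes the ban more scorched-earth; a higher one tolerates sloppy but benign clients.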
As always, any of this stuff is heavily context specific. Like you said: network admins need to be smart, need to adapt, need to know their own contexts.
This is how you get really annoying restrictions on public networks, because some harmless traffic will inevitably be miscategorized by an overeager firewall/DPI system.
I’m not saying that there should be zero consequences for allowing bad traffic from your network, but there’s a balance, and I would hate a world in which your policy were more common.
Arguably we are already partially living in that world, as some companies are already blanket-banning entire countries, VPNs etc., rather than coming up with more fine-grained strategies or improving their authentication systems to make brute force login attempts harder. It’s incredibly annoying.
Not all of us have cell plans with hotspots ($$$), hotspots often have data caps, cell is often slower or congested, and there are some areas without cell signal. It's also kind of silly from a wider perspective to shove everyone onto the cellular network when most businesses have perfectly decent fiber internet nowadays.
Sure, I'm usually on hotspot, but I personally appreciate when businesses have wifi. Either way, there are always going to be shared networks somewhere.
What we should actually be doing is WiFi using SIM cards as authentication.
Have it count against your data cap (but make it much cheaper than cellular data). Pay part of that revenue to hotspot-owning businesses. If something bad happens, use the logs that telecoms are already required to keep.
It's very strange to me that we don't have something like this already.
How about we don't? We really don't need to tie even more things to SIM cards and phone numbers.
Criminals have more than enough ways to still get anonymous SIM cards (at least until every country on the planet makes KYC mandatory for prepaid SIMs), and legitimate users are greatly inconvenienced by this.
> Pay part of that revenue to hotspot-owning businesses.
To subsidize a network connection they probably already need for their business operations, e.g. their payment terminal or POS? Why should I? The marginal cost of an incremental byte on wired Internet connections is basically zero, these days. It's literally too cheap to meter, so why bother?
Besides the centralization and tracking concerns, far from every device has a SIM card. Why does my laptop not deserve to access a coffee shop Wi-Fi, my Kindle to use an in-flight connection, or my smartwatch to use the gym's network for podcasts?
It's very strange to me that people keep trying to willingly ruin the open Internet.
I live in a country that has mandatory SIM registration, and it's stopping exactly zero organized criminals – they can just pay a tiny bit more and buy burner phones and use out-of-country SIM cards – while it's making life more complicated and expensive for the average citizen.
Expensive because KYC isn't cheap, and guess who pays for that in the end... And that is assuming that your form of ID is even accepted as a foreigner. In a different country, I literally just spent two days sending back and forth selfies holding my passport(!) to little success. And I guess the customer support reps could now just use the same photos to impersonate me elsewhere, since passport photos provide absolutely zero domain binding and are just about the dumbest thing still seeing widespread adoption.
I don't often use registration-free public Wi-Fis, but I love that they exist, and I would hate if they'd be taken away too. I also just transited at an airport that requires passport scans for Wi-Fi usage, and it feels so backwards.
Thanks for being honest about this, though. I was always wondering who all these people were that are seriously in favor of all this dystopian stuff. Would love to hear why you think that it's a net positive for society.
> What an incredibly short-sighted, dystopian view.
You do recognize that the person I kept replying to was not asking these questions in earnest, right? They were all carefully directed questions, specifically designed to confirm their world view. I played into it, because I think they're pitiful and hilarious. Serves them right. Their latest question about government criticisms completes the caricature perfectly. All they're missing is referencing or quoting Orwell.
> I live in a country that has mandatory SIM registration, and it's stopping exactly zero organized criminals – these can just pay a tiny bit more and buy burner phones and use out-of-country SIM cards – while it's making life more complicated and expensive for the average citizen.
Pretty much the same here, to my understanding. There's no credible evidence I'm aware of that would suggest criminal use of phone networks decreased significantly thanks to these rules. It might have improved the exhaustion rate of the numbering pool, but I don't think we were particularly close to exhausting it anyway. The most benefit I can think of is a chance at traceability, but how well that is realized versus abused, no idea. Just like with the IP leasing described in the article above, enlisting the help of SIM mules has a long-standing tradition, after all.
Any addressing system that relies on non-cryptographic identifiers will be prone to all kinds of mass misuse. There's no amount of lawmaking, honest or not, that could be implemented to counteract these. It's just like email.
> Thanks for being honest about this, though.
Except I really wasn't, and I find it both remarkably funny but also extremely concerning how on board you guys are with it. Propaganda and culture sure are powerful.
The current ways of identity verification are broken, and are prone to enable surveillance: this is something I fully recognize. What I refuse to recognize however is that the concept of identity verification would be wrong wholesale. There was another thread on here a few days ago that I did comment on, but the bottom line is, in my understanding there's no mathematical reason that things would have to be this way. Its shortcomings, including its enablement of mass surveillance, are an implementation issue, not something fundamental to the idea per se.
Being able to trust that a stranger you're talking to is
- an actual specific person
- actually a stranger
are bottom-of-the-barrel human expectations that communications technology has completely shattered. Technologically guaranteeing these, to the extent the analog hole problem allows for it, does not require dystopian practices. I'm confident that the lack of these guarantees is the root of many societal problems we see at large today. For better or for worse, a lot of people live a lot of their lives on the internet these days, but the internet is no hospitable place for them, not least for these exact reasons.
Accountability is a good thing. I refuse to let it be monkey-pawed by people who mean ill into being recognized as a tool for evil, and I think you should too. Trust being abused by a centralized system does not mean trust is wrong. It means there are abusers at the wheel. The solution is not mistrust, or even necessarily systems that require less trust, although both can be useful. The solution is reworking the system to get more trustworthy people into the leading positions, and to make it so that those who have demonstrated they don't deserve them are thrown out more readily. It is most unfortunate that this list is ordered exactly by difficulty, from easiest to hardest. Trust is easily broken, and human systems are impossibly hard to get right. I don't think that justifies giving up, though.
My profile is not blank. You can page through all my comments, posts, and favorites to your liking.
Did you actually bother to understand what I said by the way? Are you able to formulate a post that isn't just a bare minimum asinine rhetorical question?
> The current ways of identity verification are broken, and are prone to enable surveillance: this is something I fully recognize. What I refuse to recognize however is that the concept of identity verification would be wrong wholesale. There was another thread on here a few days ago that I did comment on, but the bottom line is, in my understanding there's no mathematical reason that things would have to be this way. Its shortcomings, including its enablement of mass surveillance, are an implementation issue, not something fundamental to the idea per se.
Put into more exact terms, your way of wanting to verify my identity is the same one you criticize governments and businesses for. It is not one I think is a good idea either, despite how you're trying to present this. I just remain open to there being other, better ways, whereas you don't.
Mind you, there's no reason to think that those who do publish such information do it because they're here to champion accountability. Note the type of forum this was originally supposed to be. It's in part a place for self-advertising. Many contact details you find on bios are visibly and explicitly HN specific.
Haha, nice, I run something similar, but more manually managed, and I make those bans permanent. Currently there are 1360 blocks in the drop list, and growing.
I never really remove them, because even those leased blocks move from one spam/abuse operator to another, so no big loss.
And indeed, if people fought spam and abuse better and more aggressively, the problem would be much smaller. I don't care anymore; in my opinion the Internet is done. Time to start building overlay networks with services for the good guys...
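For anyone curious, the core of a drop list like that is tiny. Here's a minimal Python sketch using the stdlib `ipaddress` module; the CIDR blocks are documentation ranges, not anyone's actual list.

```python
import ipaddress

# Illustrative drop list keyed by CIDR blocks, in the spirit of the
# ~1360-entry list described above. These are RFC 5737 example ranges.
DROP_LIST = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_dropped(ip: str) -> bool:
    """Return True if ip falls inside any blocked CIDR."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DROP_LIST)
```

A real deployment would load the blocks from a file and hand them to the firewall (e.g. an nftables set) rather than check in userspace, but the membership logic is the same.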
If you actually wanted your site or service to be accessible, you'd run into issues immediately, since one IP could have cycled between hundreds of homes in a year.
It's crazy to me that you'd trust the output of an LLM for that. It's something where if you do it wrong it could cause major damage, and LLMs are literally famous for creating plausible-sounding but wrong output.
If you wanted to use an LLM to identify it, sure, you can validate that, and then find the manufacturer instructions and use those. Just following what it says about the cables without any validation it's correct is just wild to me. These are products with instruction manuals made for them specifically designed for this.
> It's crazy to me that you'd trust the output of an LLM for that. It's something where if you do it wrong it could cause major damage,
With critical tasks you need to cross reference multiple AI, start by running 4 deep reports, on Claude, ChatGPT, Gemini and Perplexity, then put all of them into a comparative - critical analysis round. This reduces variance, the models are different, and using different search tools, you can even send them in different directions, one searches blogs, one reddit, etc.
Or you can ask for a link to the manual. I genuinely can't tell if your post is real advice or sarcasm intended to highlight the insanity of trying to fit square pegs in round holes of using LLMs for everything.
It doesn't matter, because any process that seems right most of the time but occasionally is wrong in subtle, hard to spot ways is basically a machine to lull people into not checking, so stuff will always slip through.
It's just like cars that drive themselves but require you to jump in if there's a mistake: humans are not going to react as fast as if they were driving, because they aren't going to be engaged, and no one can stay as engaged as they were when doing the task themselves.
We need to stop pretending we can tell people they "just" need to check things from LLMs for accuracy, it's a process that inevitably leads to people not checking and things slipping through. Pretending it's the people's fault when essentially everyone using it would eventually end up doing that is stupid and won't solve the core problem.
What's the core problem, though? Because if the core problem is "using AI", then it's an inevitable outcome - AI will be used, and there are always incentives to cut costs maximally.
So realistically, the solution is to punish mistakes. We do this for bridges that collapse, for driver mistakes on roads, etc. The "easy" fix is to make punishment harsher for mistakes - whether it's LLM or not, the pedigree of the mistake is irrelevant.
The core problem is that the tool provides output that looks right and is right a lot of the time, but also slips in incorrect stuff in a hard to notice way.
Punishment isn't a solution, because it doesn't work. If you create a system that lulls people into a sense of security, no punishment will stop them, because they aren't doing it thinking "it's worth the risk"; they don't see the risk. There are so many examples of this, it's weird people still think it actually works.
Furthermore, it becomes a liability-washing tool: companies will tell employees they have to take the time to check things, but then not give them the time required to actually check everything, and then blame employees when they do the only thing they can: let stuff slip.
If you want to use LLMs for this kind of thing, you need to create systems around them that make it hard to make the mistakes. As an example (obviously not a complete solution, just one part): if they cite a source, there should be a mandated automatic check that goes to that source, validates it exists, and that the cited text is actually there, not using LLMs. Exact solutions will vary based on the specific use case.
An example from outside LLMs: we told users they should check the URL bar as a solution to phishing. In theory a user could always make sure they were on the right page and stop attacks. In practice people were always going to slip up. The correct solution was automated tooling that validates the URL (e.g: password managers, passkeys).
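To sketch what the automated citation check mentioned above could look like (the function names and the whitespace normalization are mine, purely illustrative; a real system would also handle HTML extraction, redirects, and paywalls):

```python
import re
import urllib.request

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase, so line wrapping can't hide a match."""
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_appears(page_text: str, quote: str) -> bool:
    """True if the cited quote occurs verbatim (modulo whitespace) in the page."""
    return normalize(quote) in normalize(page_text)

def check_citation(url: str, quote: str, timeout: float = 10.0) -> bool:
    """Fetch the cited source and verify the quoted text is actually there."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return quote_appears(body, quote)
```

The point is that none of this involves an LLM: existence of the source and presence of the quote are mechanically checkable, so that class of error can be caught without relying on the user's vigilance.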
> The correct solution was automated tooling that validates the URL
that's because this particular problem has a solution.
The issue here is that there's no such tool to automatically validate the output of the LLM - at least, not yet, and I don't see a theoretical way to do it either.
And you're making the punishment as being getting fired from the job - which is true, but the company making the mistake also gets punished (or should be, if regulatory capture hasn't happened...). This results in direct losses for the company and shareholders (in the form of a fine, recalls and/or replacements etc).
> The issue here is that there's no such a tool to automatically validate the output of the LLM - at least, not yet, and i don't see the theoretical way to do it either.
Yeah, it's never going to be possible to validate everything automatically, but you may be able to make the tool valuable enough to justify using it if you can make errors easier to spot. In all cases you need to ask if there is actually any gain from using the LLM and checking it, or if doing the checking well enough takes so much time that it loses its value. My point is that just blaming the user isn't a good solution.
> And you're making the punishment as being getting fired from the job - which is true, but the company making the mistake also gets punished (or should be, if regulatory capture hasn't happened...). This results in direct losses for the company and shareholders (in the form of a fine, recalls and/or replacements etc).
Yes, regulation needs to be strong because companies can accept these things as a cost of doing business and will do so, but people losing their jobs can be life destroying. If companies are going to not give people the time and tools to check this stuff, then the buck should stop with them not the employees that they are forcing to take risks.
The human is responsible. That's the fix. I don't care if you got the results from an LLM or from reading cracks in the sidewalk; you are responsible for what you say, and especially for what you say professionally. I mean, that's almost the definition of a professional.
And if you can't play by those rules, then maybe you aren't a professional, even if you happened to sneak your way into a job where professionalism is expected.
This doesn't solve the problem, because companies will force people to use these tools and demand they work faster, eventually resulting in people slipping.
People will have to choose between being fired for being "too slow", or taking the risk they end up liable. Most people can't afford to just lose their job, and will end up being pressured into taking the risk, then the companies will liability-wash by giving them the responsibility.
You need regulation that ensures companies can't just push the risk onto employees who can be rotated out to take the blame for mistakes.
Right, but companies routinely accept fines as costs of doing business, while losing your job can destroy your life. If a company has not taken appropriate measures to ensure employees can reasonably catch errors at the rate they are required to work, then the company should take all the blame, because they are choosing to push employees to take risks.
To be fair, that's a problem with human authors too. Wikipedia is really well-cited, but it's common to check a citation and find it only says half of what a sentence does, while the rest seemingly has no basis in fact. Judges are supposed to actually read the citations to not only confirm the case exists and says what's being claimed, but often to also compare & contrast the situations to ensure that principle is applicable to the case at hand.
Yup. The issue with LLMs is not that any specific thing they do is unique. Rather, they do it at previously unimaginable volume, scale, and accessibility.
Even disregarding self driving features, it seems like the smarter we make cars the dumber the drivers are. DRLs are great, until they allow you to drive around all night long with no tail lights and dim front lighting because you’re not paying enough attention to what’s actually turned on.
The default behaviour for self-hosted on Android is to have a foreground service which holds a websocket open, so it does get pushed from the server and doesn't rely on your phone being awake.
This isn't true, self-hosted Android push notifications in ntfy are provided using a "foreground service" by default (i.e: the app keeps a websocket open and listens), unless you set up firebase for yourself and build a custom version of the app with the cert baked in.
I think you misread, the delays are if you don't use instant delivery. I use it and it's extremely consistently delivered instantly, which makes sense, it's a websocket.
As to battery drain, I'm sure it technically does consume more, but according to my phone it's an insignificant amount: <1% of usage which is the lowest stat it gives you. Their docs suggest the same thing:
> the app has to maintain a constant connection to the server, which consumes about 0-1% of battery in 17h of use (on my phone). There has been a ton of testing and improvement around this. I think it's pretty decent now.
Honestly it's a good solution that works well with few downsides, the only real one is that iOS doesn't support doing it, but personally I don't have any apple phones so I do get an essentially free lunch.
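For the curious, you don't even need the app to see how delivery works: ntfy also exposes a line-delimited JSON stream per topic, so a minimal self-hosted listener is a few lines. This is a sketch; "mytopic" is a placeholder topic name and reconnect/error handling is omitted.

```python
import json
import urllib.request

def parse_event(line: bytes):
    """Parse one line of the ntfy JSON stream; return the message text or None."""
    event = json.loads(line)
    if event.get("event") == "message":
        return event.get("message")
    return None  # "open" and "keepalive" events carry no message

def listen(topic: str, server: str = "https://ntfy.sh"):
    """Block on the stream and print each delivered message as it arrives."""
    with urllib.request.urlopen(f"{server}/{topic}/json") as stream:
        for line in stream:
            msg = parse_event(line)
            if msg is not None:
                print(msg)
```

The Android app's foreground service does essentially this over a websocket: hold one connection open and react the moment a message line arrives, which is why delivery is instant.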
Google doesn't have any magic way to do instant notification that nobody else has access to. The only thing they have access to in this regard is disabling any battery optimisations without triggering warnings.
Notification and battery performance is on par with google's solution except when an android build does dumb things to prevent the background activity, in which case notification performance gets worse and battery draw gets worse (not sure why exactly, it's just a common issue in these regards).
Well, there is an advantage: if everything is using the one service, then you only need to keep one thing alive to check it, so each new app is "free" if you already have push enabled (assuming push notifications are rare enough that the delivery activity isn't the cost), whereas each app doing it itself is going to cause more battery use, so it isn't directly equivalent.
However, it also isn't a big deal, at least in my experience, at least for ntfy.sh.
Listening on a socket doesn't drain any battery when no data arrives unless the app does other things that actually use CPU. That's just what Google/Apple want you to believe so you depend on their proprietary lock in services.
Also like, how else would the Google / Apple services do it? Probably via sockets right? I guess you could do it in a pull-based approach on a timer, but that doesn't seem more efficient to me.
A single process waiting on multiple sockets is basically no more expensive than a single socket, but if each app has its own background process then that is more expensive. So for best performance you really want to delegate all the push-notification-listening for all the apps on a device to a single background process owned by the OS, but it'd be fine for each app to use its own push server (though of course most apps do not actually want to self-host this).
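To make the "single process, many sockets" point concrete, here's a small Python sketch using the stdlib `selectors` module (which wraps epoll/kqueue where available). The helper is illustrative, not how any real push daemon is written.

```python
import selectors
import socket

def wait_for_data(socks, timeout=1.0):
    """Wait on all sockets at once; return payloads from whichever are readable."""
    sel = selectors.DefaultSelector()
    for s in socks:
        s.setblocking(False)
        # Registering an extra socket adds almost no idle cost: the
        # process sleeps in one kernel wait regardless of the count.
        sel.register(s, selectors.EVENT_READ)
    ready = sel.select(timeout=timeout)
    out = [key.fileobj.recv(1024) for key, _ in ready]
    sel.close()
    return out

# Simulate two "push servers" with local socket pairs; only one sends.
a_recv, a_send = socket.socketpair()
b_recv, b_send = socket.socketpair()
a_send.sendall(b"new message")
print(wait_for_data([a_recv, b_recv]))  # only the socket with data shows up
```

This is why an OS-level push service scales so well: the expensive part is keeping a process alive at all, not the number of idle connections it watches.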
I guess it's been a while, but using µ in a name isn't exactly unprecedented, μTorrent was the BitTorrent software to use at the time, and so that discussion played out then. Everyone just called it uTorrent because it was the easier thing to type and made the pronunciation obvious and singular.
Until just now I would have sworn uBlock Origin used μ. It's the first thing I've installed after the OS since it existed, but your lack of mentioning it made me check.
This is actually something where you are often better off outside of cities. The areas serviced by newer providers who are using the government grants to offer fibre to places without it and are actually running new fibre tend to offer much better prices and speeds.
E.g: One of them offers 900Mbps symmetric for £40/month (with a deal for £30/month for the first year). Meanwhile the legacy providers via OpenReach will only give you 700 down/100 up for more money, and require a two year contract.
The only real downside is most of them will CGNAT you, but most do offer IPv6 too, and mine offers a static IPv4 for £5/month more.