Hacker News | softg's comments

Why just teens though? Getting manipulated by algorithms crafted to maximize screentime and ad revenue is bad for anyone.

These platforms rely on ads to survive, which means it should be easy to regulate them. You can prevent them from selling ads, at which point they will be forced to comply. If they don't, someone else will get the ad revenue. Europe is already hostile towards American tech giants anyway.

The possibilities are endless. Pass a law that forces all social media platforms with more than x users to drop infinite scrolling, make their ranking algorithms open source, let people use their own algorithms, employ robust moderation, etc.

Instead we have a blanket ban that requires ID checks but leaves the manipulation machine intact so it can prey on adults. Mental health is not the real issue here. They want to be able to track people and destroy anonymity online. Children are a convenient excuse.


Probably because:

- Banning things for non-adults is easier: they don't vote.

- In the eyes of the law, social media is now seen like cigarettes and alcohol: something teenagers should not be allowed to have access to, but fair game once you are an adult.

I don't fully agree with the ban for the same reasons you cited, but knowing that social media companies (mostly Facebook and YouTube) have known for years that their platforms have been used for spreading radicalization (source: the book "The Chaos Machine") and have dodged any accountability, maybe it is a good solution after all. It is hard to say...


Because the effects are much worse on developing brains.


I share your vision about what ideal regulation looks like.

I don't share your cynicism pertaining to motives. Well, I am cynical about it, but in a non-conspiratorial way.

Politicians are feckless trend followers, cowardly in their disposition, preferring to follow the path of least resistance, and they lack any substantial vision or imagination themselves.

That explains why nothing bold is happening. And that lack of boldness is not unique to social media regulation.

That puts me squarely in the "this isn't perfect, but it's a good step" camp.


If the main purpose of this was to satisfy people's morbid curiosity, that makes a lot of sense. Maybe they even made up some juicy deaths in slow news weeks.


I don't see how selecting the Lion, the Robot or the Scarecrow at random is going to help with any of the issues you mentioned. Now some rando (or group of randos) that you didn't even know existed gets power based on pure luck. You will still need media to learn about them and they could still be made up.

At least elections have a veneer of consent since people are asked which of the available options they prefer. Can you imagine anyone going to war because people chosen by a lottery wheel asked for it?

This is a problem of scale. The Greeks back then lived in small city-states where random selection meant that every able-bodied male had a good shot at holding an important office at least once in his lifetime. You didn't need to hatch devious schemes to come to power. You couldn't abuse your fellow men because they would be in charge tomorrow. That's the true power of random selection, and it's completely inapplicable to today's society at large.


> Now some rando (or group of randos) that you didn't even know existed gets power based on pure luck.

Being chosen at random could be better than being chosen by elites who are actively trying to oppress you. You get the median thing instead of the below-median thing.

> At least elections have a veneer of consent since people are asked which of the available options they prefer. Can you imagine anyone going to war because people chosen by a lottery wheel asked for it?

Exactly. It would remove the false veneer of consent. That's a feature, not a cost.

> The Greeks back then lived in small city-states where random selection meant that every able bodied male had a good shot at holding an important office at least once in their lifetime.

Re-apply the intended principles of federalism so that only decisions of insurmountable national relevance are made at the national level and the large majority of decisions are made at the local level.


The Greeks were choosing randomly from among members of the ruling elite.


Are you suggesting that is required in order for the system to operate?


There's also the simple fact that in a regular electoral system there is a mechanism for figuring out whether you're voting for the Lion, the Robot or the Scarecrow: the previous track record of that individual or the faction they're affiliated with. And the Lion, Robot or Scarecrow, or at least their party, usually intends to get reelected, so while they always overpromise, they have some incentive to deliver something the electorate wants.

The solution to "candidates don't always deliver what the electorate wanted them to deliver and the electorate doesn't always hold them accountable" isn't "let's put people who never promised anything in the first place and aren't accountable for anything in charge, and somehow assume that they're going to be more benign".


at that point you could just buy cheap drones yourself and ram those into your neighbor's (oops)


Doesn't really sound like a viable solution, as the 10 neighbors have on average 10 times the amount of cash to throw at it.


I really think this is only a small step from neighbors firebombing each other's homes, and all ending up homeless or in prison.


I think the point they're making is "Walter White is a well-behaved chemistry teacher who resorts to manufacturing and selling drugs after he gets cancer" would still be true even if he stopped dealing drugs after he was offered money.


The Czech Republic changed its official name but many places still use the old name. Same with Eswatini, Côte d'Ivoire, Cabo Verde, etc. I suspect name changes aren't that reliable for dating globes since some of them probably have modern borders and former official names.


A Czech here with a small correction. Countries often have a long and a short version of their name. For example "Federal Republic of Germany" aka "Germany".

We didn't change the name, rather we adopted "Czechia" as an official short variant, since we previously didn't have one. So both are correct.

Otherwise I agree with your point. This is just a bit of a pet peeve of mine.


This document was made by a specific globe manufacturer, so it probably isn't meant to be used to date arbitrary globes, but only theirs.


You're dating the data source used by the globe, not the globe itself. Still, that places an upper bound on any globe's age: a globe can't be older than the data it was printed from.


What's even worse would be enforcing the wrong solution and causing more damage. The French have a saying, fuite en avant (lit. "flight forward"), for when someone insists on doing something knowing full well that it will not work, but does it anyway because it feels better than inaction.

Considering the rise of the far right in the last EU elections, anyone who's seriously considering weakening public encryption must be out of their minds.


Bears can't use their strength to make even stronger bears so we're safe for now.

The Unabomber was clearly an intelligent person. You could even argue that he was someone worth listening to. But he was also a violent individual who harmed people. Intelligence does not prevent people from harming others.

Your analogy falls apart because what prevents a human from becoming an emperor of the world doesn't apply here. Humans need to sleep and eat. They cannot listen to billions of people at once. They cannot remember everything. They cannot execute code. They cannot upload themselves to the cloud.

I don't think AGI is near; I'm not qualified to speculate on that. I am just amazed that decades of dystopian science fiction did not inoculate people against the idea of thinking machines.


I know nothing about physics. If I came across some magic algorithm that occasionally poops out a plane that works 90 percent of the time, would you book a flight in it?

Sure, we can improve our understanding of how NNs work but that isn't enough. How are humans supposed to fully understand and control something that is smarter than themselves by definition? I think it's inevitable that at some point that smart thing will behave in ways humans don't expect.


> I know nothing about physics. If I came across some magic algorithm that occasionally poops out a plane that works 90 percent of the time, would you book a flight in it?

With this metaphor you seem to be saying we should, if possible, learn how to control AI? Preferably before anyone endangers their lives due to it? :)

> I think it's inevitable that at some point that smart thing will behave in ways humans don't expect.

Naturally.

The goal, at least for those most worried about this, is to make that surprise be not a… oh, I've just realised a good quote:

""" the kind of problem "most civilizations would encounter just once, and which they tended to encounter rather in the same way a sentence encountered a full stop." """ - https://en.wikipedia.org/wiki/Excession#Outside_Context_Prob...

Not that.


Excession is literally the next book on my reading list so I won't click on that yet :)

> With this metaphor you seem to be saying we should, if possible, learn how to control AI? Preferably before anyone endangers their lives due to it?

Yes, but that's a big if. Also that's something you could never ever be sure of. You could spend decades thinking alignment is a solved problem only to be outsmarted by something smarter than you in the end. If we end up conjuring a greater intelligence there will be the constant risk of a catastrophic event just like the risk of a nuclear armageddon that exists today.


Enjoy! No spoilers from me :)

I agree it's a big "if". For me, simply reducing the risk to less than the risk of the status quo is sufficient to count as a win.

I don't know the current chance of us wiping ourselves out in any given year, but I wouldn't be surprised if it's 1% with current technology; on the basis of that entirely arbitrary round number, an AI taking over that's got a 63% chance of killing us all in any given century is no worse than the status quo.
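For the curious, that 63% figure is just the comment's arbitrary 1%-per-year risk compounded over a century. A quick sketch (the 1% is the comment's own made-up round number, not data):

```python
# Compound an assumed 1%/year extinction risk over 100 years.
# annual_risk is the arbitrary round number from the comment above.
annual_risk = 0.01
century_risk = 1 - (1 - annual_risk) ** 100
print(f"{century_risk:.1%}")  # prints "63.4%"
```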


Why wouldn't it be? A lot of super-intelligent people are/were also "destructive and evil". The greatest horrors in human history wouldn't be possible otherwise. You can't orchestrate the mass murder of millions without intelligent people, and they definitely saw things as a zero-sum game.


A lot of stupid people are destructive and evil too. And a lot of animals are even more destructive and violent. Bacteria are totally amoral and they’re not at all intelligent (and if we’re counting they’re winning in the killing people stakes).

