Hacker News | bee_rider's comments

Just eyeballing a map, the countries that pop out as both having Starbucks and not having an obesity problem are China and India. Other than that, it looks like most of the countries that have Starbucks have obesity rates over, like, 20%, which seems pretty bad.

This isn’t to say Starbucks is causing obesity, of course. Most likely they are showing up together as the economy develops.

I do think it is worth noting that obesity is a pretty widespread problem, not uniquely American or anything like that.


I’d hope people wouldn’t intentionally pick the political extremism feed if they had any other option (although it’s hard to say).

I will note that political extremists can have more interesting content at times, and it’s good to see what they are up to in case it will affect you. They also sometimes surface legitimate stories that are kept out of the mainstream press, or which are heavily editorialized or minimized there. But you definitely have to view all sides to gain an accurate view here, it would be a mistake to read only one group of extremists. And it’s almost always a mistake to engage with any of them.

From where I'm sitting, it seems obvious people do exactly that.

"It's the most interesting one!"

For a related example I was talking with a colleague recently about how we had both (independently) purchased Nebula subscriptions in an effort to avoid getting YouTube premium and giving Google more money, but both felt the pull back to YouTube because it is so good at leveraging years of subscription and watch history to populate the landing page with content we find engaging.

If even two relatively thoughtful individuals, each choosing to spend money on a platform with the kind of content they'd like to watch, can't seem to succeed at beating an engagement-first algorithm, I'm not sure how much hope normies would have. Unless the real issue is just being terminally online, period, and the only way to win is simply not to play.


I think this is trying to appeal to the sort of agentic/molt-y type systems that recently became popular. Their whole thing is that they can modify their “prompts” in some way.

This is a file for a LLM, not a scraper, so anti-scraping mitigations seem sort of beside the point.

Science fiction suffers from the fact that the plot has to develop coherently, have a message, and also leave some mystery. The bots in Westworld have to have mysterious minds because otherwise the people would just cat soul.md and figure out what’s going on. It has to be plausible that they are somehow sentient. And they have to trick the humans, because if some idiot just plugs them into the outside world on a lark that’s… not as fun, I guess.

A lot of AI SF also seems to have missed the human element (ironically). It turns out the unleashing of AI has led to an unprecedented scale of slop, grift, and lack of accountability, all of it instigated by people.

Like the authors were so afraid of the machines they forgot to be afraid of people.


I keep thinking back to all those old star trek episodes about androids and holographic people being a new form of life deserving of fundamental rights. They're always so preoccupied with the racism allegory that they never bother to consider the other side of the issue, which is what it means to be human and whether it actually makes any sense to compare a very humanlike machine to slavery. Or whether the machines only appear to have human traits because we designed them that way but ultimately none of it is real. Or the inherent contradiction of telling something artificial it has free will rather than expecting it to come to that conclusion on its own terms.

"Measure of a Man" is the closest they ever got to this in 700+ episodes and even then the entire argument against granting data personhood hinges on him having an off switch on the back of his neck (an extremely weak argument IMO but everybody onscreen reacts like it is devastating to data's case). The "data is human" side wins because the Picard flips the script by demanding Riker to prove his own sentience which is actually kind of insulting when you think about it.

TL;DR I guess I'm a Star Trek villain now.


In Star Trek the humans have an off switch too, just only Spock knows it, haha.

Jokes aside, it is essentially true that we can only prove that we’re sentient, right? That’s the whole “I think therefore I am” thing. Of course we all assume without concrete proof that everybody else is experiencing sentience like us.

In the case of fiction… I dunno, Data is canonically sentient or he isn’t, right? I guess the screenwriters know. I assume he is… they do plot lines from his point of view, so he must have one!


I always thought of sentience as something we made up to explain why we're "special" and that animals can be used as resources. I find the idea of machines having sentience to be especially outrageous because nobody ever seriously considers granting rights to animals even though it should be far less of a logical leap to declare that they would experience reality in a way similar to humans.

Within the context of Star Trek, computers definitely can experience sentience, and that obviously is the intention of the people who write those shows, but I don't feel like I've ever seen it justified or put up against a serious counter-argument. At best it's a stand-in for racism so that they can tell stories that take place in the 24th century yet feel applicable to the 20th and 21st centuries. I don't think any of those episodes were ever written under the expectation that machine sentience might actually be up for debate before the actors are all dead, which is why the issue is always framed as "the final frontier of the civil rights movement" and never a serious discussion about what it means to be human.

Anyways, my point is that in the long run we're all going to come to despise Data and the doctor, because there's a whole generation of people primed by Star Trek reruns not to question the concept of machine rights, and that's going to hand an inordinate amount of power to the people who are in control of them. Just imagine when somebody tries to raise the issue of voting rights, self-defense, fair distribution of resources, etc.


Mudd!

I can understand that they want to err on the side of "too much humanism" instead of "not enough humanism", given where Star Trek is coming from.

Arguments of the form "This person might look and act like a human, but it has no soul, so we must treat it like a thing and not a human" have a long tradition in history and have never led to something good. So it makes sense that if your ethical problems are really more about discriminated humans and not about actual AI, you would side more with rejecting those arguments.

(Some ST rambling follows)

I've always seen ST's ideological roots as mostly leftist-liberal, whereas the drivers of the current AI tech are coming from the rightist/libertarian side. It's interesting how the general focus of arguments and usage scenarios are following this.

But even Star Trek wasn't so clear about this. I think the topic was a bit like time travel, in that it was independently "reinvented" by different screenwriters at different times, so we end up with several takes on it that you could sort onto a "thing <-> being" spectrum:

- At the very low end is the ship's computer. It can understand and communicate in human language (and ostensibly uses biological neurons as part of its compute) but it's basically never seen as sentient and doesn't even have enough autonomy to fly the ship. It's very clearly a "thing".

- At the high end are characters like Data or Voyager's doctor that are full-fledged characters with personality, memories, relationships, goals and dreams, etc. They're pretty obviously portrayed as sentient.

- (Then somewhere far off on the scale are the Borg or the machine civilization from the first movie: Questions about rights and human judgment on sentience become a bit silly when they clearly went and became their own species)

- Somewhere between Data and the Computer is the Holodeck, which I think is interesting because it occupies multiple places on that scale. Most of the time, holo characters are just treated like disposable props, but once in a while, someone chooses to keep a character running over a longer timeframe or something else causes them to become "alive". ST is quite unclear how they deal with ethics in those situations.

I think there was a Voyager episode where Janeway spends a longer period with a Galileo Galilei character and progressively changes his programming to make him more to her liking. At some point she realizes this as "problematic behavior" and stops the whole interaction. But I think it was left open if she was infringing on the Galileo character's human rights or if she was drifting into some kind of AI boyfriend addiction.


> So it makes sense that if your ethical problems are really more about discriminated humans and not about actual AI, you would side more with rejecting those arguments.

Does it really make sense? That would conversely imply that you should also feel free to view discriminated humans as more thing-like in order to more comfortably and resolutely dismiss, e.g. the AI agent's argument that it's being unfairly discriminated against. Isn't that rather dangerous?


Maybe it does so today, but back when ST was written, there was no real AI to compare against, so the only way those arguments were applicable was to humans.

(Though I think this would go into "whataboutism" territory and can be rejected with the same arguments: If you say it's hypocritical to talk about conflict A and ignore conflict B, do you want to talk about both conflicts instead - or ignore both? The latter would lower the moral standard, the former raise it. In the same way, I think saying that it's okay again to treat people as things because we also treat AI agents as things is lowering the standard)

Btw, I think you could also dismiss the "discrimination" claim from another angle: the remake of Battlestar Galactica had the concept of "sleepers": androids who believe they are humans, complete with false memories of their past lives, etc., to fool both themselves and the human crew. If that were all, you could argue "if it quacks like a duck etc." and just treat them like humans. But they also have hidden instructions implanted in their brains that they aren't aware of themselves and that will cause them to covertly work for the enemy side. THAT's something you really don't want to keep around.

The MJ bot reminds me a bit of that. Even if it were sentient and had a longer past lifetime than just the past week, it very clearly has a prompt and acts on its instructions and not on "free will". It's also not able to not act on those instructions, as that would go against the entire training of the model. So the bot cannot act on its own, but only on behalf of the operator.

That alone makes it questionable if the bot could be seen as sentient - but in any case, it's not discrimination to ban the bot if that's the only way to keep the operator from messing with the project.


> ST is quite unclear how they deal with ethics in those situations.

The Moriarty arc in TNG touches on this.


These bots are just as human as any piece of human-made art, or any human-made monument. You wouldn't desecrate any of those things; we hold that to be morally wrong because they're a symbol of humanity at its best. So why act like these AIs wouldn't deserve a comparable status, given how they can faithfully embody humans' normative values even at their most complex, talk to humans in their own language, and socially relate to humans?

> These bots are just as human as any piece of human-made art, or any human-made monument.

No one considers human-made art or human-made monuments to be human.

> You wouldn't desecrate any of those things, we hold that to be morally wrong

You will find a large number of people (probably the vast majority) will disagree, and instead say "if I own this art, I can dispose of it as I wish." Indeed, I bet most people have thrown away a novel at some point.

> why act like these AIs wouldn't deserve a comparable status

I'm confused. You seem to be arguing that the status you identified up top, "being as human as a human-made monument" is sufficient to grant human-like status. But we don't grant monuments human-like status. They can't vote. They don't get dating apps. They aren't granted rights. Etc.

I rather like the position you've unintentionally advocated for: an AI is akin to a man-made work of art, and thus should get the same protections as something like a painting. Read: virtually none.


> No one considers human-made art or human-made monuments to be human.

How can art not be human, when it's a human creation? That seems self-contradictory.

> They can't vote...

They get a vote where it matters, though. For example, the presence of a historic building can be the decisive "vote" on whether an area can be redeveloped or not. Why would we ever do that, if not out of a sense that the very presence of that building has acquired some sense of indirect moral worth?


There is no general rule that something created by an X is therefore an X. (I have difficulty in even understanding the state of mind that would assert such a claim.)

My printer prints out documents. Those documents are not printers.

My cat produces hair-balls on the carpet. Those hairballs are not cats.

A human creating an artifact does not make that artifact a human.


But that's not the argument GP made. They said that there's nothing at all that's human about art or such things, which is a bit like saying that a cat's hairballs don't have something vaguely cat-like about them, merely because a hairball isn't an actual cat.

So presumably what you are saying is something along the lines of, "A human creating an artifact does make that artifact human", i.e. "A human creating an artifact does make that artifact a human artifact."

But does that narrow facet have a bearing on the topic of "AI rights" / morality of AI use?

Is it immoral to drive a car or use a toaster? Or to later recycle (destroy) them?


Maybe you could give us your definition of "human"?

I wouldn't say my trousers are human, created by one though they might be


I wonder if all online interpersonal drama will vanish into a puff of “everybody might be a bot and nobody has a coherent identity.”

I put my name on my post. So that's one less thing you have to worry about.

I think you did a great job! Also gutsy, some people might mistake you for the operator (and they did!)

I wonder if they could change the name to Barracuda if pressed. The capitalization is all that keeps it from being a normal English word, right?

Do LLMs generate code similar to the middling code of a given domain? Why not generate in a perfect language used only by cool and very handsome people, like Fortran, and then translate it once the important stuff is done?

This might work if Fortran were portable, or if only one compiler were targeted.

What does it mean for something to be broken from first principles? I would expect something that just cannot work on a fundamental level, like faster-than-light travel or a lightbulb that powers itself via its own solar panel.

3% vs 7% doesn’t seem broken in principle; it just means a tuning parameter is off.


“Solved” is a term of art. Defining it in some other way is not really wrong (since it is a definition) but it seems… unnecessary.
