falcor84's comments

>Also, family account?

My understanding is that Google banned all users that had been logged into that family device.


Why not ban only the offending account? Is there logic I'm missing?

From Google's point of view, there's very little downside to banning those other accounts. And the upside is that they reduce their legal risk: "We automatically blocked the account that was generating the kiddie porn, as well as the other accounts that had been logged into that device."

It's probably as simple as that…


Yes, the logic you're missing is "protect the children"

What exactly is the joke? These individuals have a lot of interests in play that might not coincide with those of the US public, but it seems to me that they are all very experienced and knowledgeable.

> The following individuals have been appointed:

> Marc Andreessen, Sergey Brin, Safra Catz, Michael Dell, Jacob DeWitte, Fred Ehrsam, Larry Ellison, David Friedberg, Jensen Huang, John Martinis, Bob Mumgaard, Lisa Su, Mark Zuckerberg


Are any of them scientists?

I don't recognize all of the names, but of those I do, every single one is a tech magnate. Most of them had, at best, one technological idea 30 years ago and have been running the business ever since.

Googling the others turns up exclusively "investors". None of them appear to know anything about science.

There's room for technologists on the science advisory council, but surely at least somebody in the room should know something about chemistry, biology, etc.


And their relationship with science is... anything but their own commercial interest?

Cross reference with political donations

Ah, well, that is an issue, but not one that I'm laughing about. This is the natural outcome of the Citizens United decision.

We need some constitutional amendments, now that we've seen the holes and how much relied on the powerful not abusing them (re: Trump & the Heritage Foundation).

Well, obviously if we have a better prior, then that's better. But assuming no other knowledge, and especially if we think that other people's priors could be intentionally misleading, this rule seems to offer the best estimate.

Generally speaking, you never truly have "no prior knowledge". Some relevant past experience, or "common sense", or something tips you away from "all probabilities are equally likely". I think this rule is rarely a best estimate.
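If the "rule" under discussion is Laplace's rule of succession (an assumption on my part, since the thread doesn't name it), the "best estimate with no prior knowledge" can be sketched as follows: under a uniform prior, after seeing k successes in n trials, estimate the success probability as (k+1)/(n+2).

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: the posterior mean of a Bernoulli
    probability under a uniform (Beta(1,1)) prior."""
    return (successes + 1) / (trials + 2)

# With no observations at all, the estimate is 1/2, the uniform prior's mean.
# As evidence accumulates, the estimate is pulled toward the observed rate,
# which is why a better-informed prior (as the parent notes) can beat it.
```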

Taking a related quote from Dollhouse: "That is their business, but that is not their purpose."

> detecting whether the user is using the exact same system prompt and tool definition as Claude Code

Why would it be the exact same one? Now that we have the code, it's trivial to have it randomize the prompt a bit on different requests.
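The kind of randomization meant here can be sketched in a few lines. This is a hypothetical example, not Claude Code's actual prompt or detection logic: the base prompt and paraphrase table are made up, and the point is only that exact-string fingerprinting breaks once the surface form varies per request.

```python
import random

# Made-up stand-in for a system prompt; not the real Claude Code prompt.
BASE_PROMPT = "You are a coding assistant. Follow the user's instructions."

# Semantically equivalent substitutions, applied at random per request.
PARAPHRASES = {
    "coding assistant": ["programming assistant", "software assistant"],
    "Follow": ["Adhere to", "Comply with"],
}

def randomize_prompt(prompt: str) -> str:
    """Return a semantically equivalent prompt with a randomized surface form,
    so no two requests carry byte-identical prompt text."""
    for phrase, alts in PARAPHRASES.items():
        if phrase in prompt and random.random() < 0.5:
            prompt = prompt.replace(phrase, random.choice(alts))
    # Trailing whitespace is a second, invisible perturbation.
    return prompt + " " * random.randint(0, 3)
```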


> Every message from every channel flows into one session. The agent knows who said what, where, and when.

> Your agent doesn't just remember the last 40k tokens. It remembers everything.

I don't get it - it seems to assume that LLM context is not only unlimited, but also somehow benefits from intermixing of unrelated tasks. This is unfortunately not the case.


I don't know if this is what the GP meant, but obviously classrooms do scale, in the sense that almost every kid in the world has access to a classroom. But arguably, for better or worse, this scaling was enabled by "cramming" many of those kids into relatively large classes, with little 1:1 attention from the teachers, and with standardized (mostly multiple-choice) testing, rather than old-school oral testing.

As I see it, it is a very salient question - what would the economics of global schooling look like if we decided that it's imperative for every student to get personal pedagogy and regular individual professional oral examinations for their schoolwork?


I'm very embarrassed to say that, given the number of tech books I've already bought and haven't read, I'm starting to think it might be a good investment for me to hire someone to read them for me.

Is there anything actually bad with that writing (other than implying that theirs is the first system to solve this)?

LLMs have been RLHF post-trained to generate text that people find clear to read. Are you now looking to reject clear writing just to spite the AI labs?


Pieces of writing don’t really exist in isolation. Your opinion of a given chunk is formed not only by it, but by everything else you have read.

So in one part the negative reaction is to staleness. Everything sounds the same.

If it was all the same but dry, terse, and to the point (like technical writing), it wouldn’t be so bad.

But it’s repetitive with an annoying, breathless, get-ready-to-be-impressed voice that many of us find grating.


I agree in principle, but this is a press release, and I personally find AI-assisted marketing copy much nicer and easier to read than copy written by human copywriters.

I don't get what semantic value you're getting by pasting this. It's almost like saying "VC-funded tech = bad", which is an ironic stance to take on this platform.

Is there anything that bitwarden did that is actually bad for you as a customer of theirs?

