From Google's point of view, there's very little downside to banning those other accounts. And the upside is that it reduces their legal risk: "We automatically blocked the account that was generating the kiddie porn, as well as the other accounts that had been logged into that device."
What exactly is the joke? These individuals have a lot of interests in play that might not coincide with those of the US public, but it seems to me that they are all very experienced and knowledgeable.
> The following individuals have been appointed:
> Marc Andreessen,
> Sergey Brin,
> Safra Catz,
> Michael Dell,
> Jacob DeWitte,
> Fred Ehrsam,
> Larry Ellison,
> David Friedberg,
> Jensen Huang,
> John Martinis,
> Bob Mumgaard,
> Lisa Su,
> Mark Zuckerberg
I don't recognize all of the names, but of those I do, every single one is a tech magnate. Most of them had at best one technological idea 30 years ago and have been running the business ever since.
Googling the others turns up exclusively "investors". None of them appear to know anything about science.
There's room for technologists on the science advisory council, but surely at least somebody in the room should know something about chemistry, biology, etc.
Well, obviously if we have a better prior, then that's better. But assuming no other knowledge, and especially if we think that other people's priors could be intentionally misleading, this rule seems to offer the best estimate.
Generally speaking, you never truly have "no prior knowledge". Some relevant past experience, or "common sense", or something tips you away from "all probabilities are equally likely". I think this rule is rarely a best estimate.
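To make the disagreement concrete, here's a minimal sketch (my own illustration, not from the thread) of a Bayesian update under a uniform "know nothing" prior versus an informed one. The coin example and the specific numbers are assumptions chosen purely for illustration:

```python
def posterior(prior, likelihoods):
    """Bayes' rule over a discrete hypothesis space."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two hypotheses about a coin: fair (P(heads) = 0.5) vs biased (P(heads) = 0.9).
# We observe a single heads; these are P(heads | hypothesis).
likelihood_heads = [0.5, 0.9]

uniform_prior = [0.5, 0.5]     # "no prior knowledge": all hypotheses equally likely
informed_prior = [0.95, 0.05]  # common sense: almost all coins are fair

print(posterior(uniform_prior, likelihood_heads))   # the biased hypothesis jumps to ~0.64
print(posterior(informed_prior, likelihood_heads))  # the fair hypothesis still dominates at ~0.91
```

The same evidence moves the two posteriors to very different places, which is the point both commenters are circling: the uniform prior is only "best" if you genuinely have nothing better.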
> Every message from every channel flows into one session. The agent knows who said what, where, and when.
> Your agent doesn't just remember the last 40k tokens. It remembers everything.
I don't get it - it seems to assume that LLM context is not only unlimited, but also somehow benefits from intermixing of unrelated tasks. This is unfortunately not the case.
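To illustrate the objection: with a fixed context budget, a session that naively accumulates every message has to drop (or summarize) older ones. A minimal sketch of recency-based truncation, with the function name, token counting, and budget all being illustrative assumptions rather than any real product's behavior:

```python
def fit_context(messages, budget_tokens, count_tokens=lambda m: len(m.split())):
    """Keep only the most recent messages that fit within the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget_tokens:
            break                           # everything older is discarded
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["msg %d: %s" % (i, "word " * 10) for i in range(100)]
window = fit_context(history, budget_tokens=500)
print(len(window))  # far fewer than 100 -- the older messages are simply gone
```

"Remembers everything" therefore requires something beyond the raw context window (retrieval, summarization, etc.), and funneling unrelated channels into one session spends that scarce budget faster.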
I don't know if this is what the GP meant, but obviously classrooms do scale, in the sense that almost every kid in the world has access to a classroom. But arguably, for better or worse, this scaling was enabled by "cramming" many of those kids into relatively large classes, with little 1:1 attention from the teachers, and with standardized (mostly multiple-choice) testing, rather than old-school oral testing.
As I see it, it is a very salient question - what would the economics of global schooling look like if we decided that it's imperative for every student to get personal pedagogy and regular individual professional oral examinations for their schoolwork?
I'm very embarrassed to say that, given the number of tech books I've already bought and haven't read, I'm starting to think it might be a good investment to hire someone to read them for me.
Is there anything actually wrong with that writing (other than implying that theirs is the first system to solve this)?
AI has been RLHF post-trained to generate text that people find clear to read. Are you now looking to reject clear writing just to spite AI labs?
I agree in principle, but this is a press release, and I personally find AI-assisted marketing copy much nicer and easier to read than copy written by human copywriters.
I don't get what semantic value you're getting by pasting this. It's almost like saying "VC-funded tech = bad", which is an ironic stance to take on this platform.
Is there anything that bitwarden did that is actually bad for you as a customer of theirs?
My understanding is that Google banned all users that had been logged into that family device.