It's already illegal to threaten journalists. In America we generally make bad things illegal, not activities that could become motivation for bad things. Someone threatened me on League of Legends last week. Should we ban the game?
>In America we generally make bad things illegal, not activities that could become motivation for bad things.
Not really, even in America. Like, take alcohol regulation. Your model would be "drunken bar fights are already illegal, so just prosecute that, problem solved."
Except that, historically, there was so much of it that it overwhelmed law enforcement's ability to keep up. So we try to remove the driving factors: "Okay, you can drink in public, but only[1] at these licensed places that are heavily incentivized to prevent fights before they start."
I'm not advocating any particular position, I'm just saying that if there's a persistent situation that heavily incentivizes violence, then it's not unreasonable to push back on that mechanism rather than just try to mop up the violence after the fact. Which specific situations merit that is up for debate, but it shouldn't be controversial that some situations should be handled this way.
[1] Yes, I'm simplifying, just focus on the general point here.
This isn't useful information without also knowing how common it is for newly-created accounts to place and lose bets around that size. Polymarket is a large platform with a lot of accounts being created per day. If two accounts made large bets and won and eight accounts made large bets and lost, you haven't discovered anything interesting.
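The base-rate point can be made concrete with a quick binomial calculation (the account counts here are hypothetical, not actual Polymarket data):

```python
from math import comb

# Hypothetical base-rate check: if n fresh accounts each place one large
# 50/50 bet, how likely is it that at least k of them end up as winners?
def p_at_least(k, n, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(p_at_least(2, 10), 3))  # 2+ winners among 10 bettors: ~0.989
```

So finding a couple of big new-account winners is close to guaranteed by chance alone; it only becomes evidence of anything once you know how many comparable accounts lost.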
He doesn't include the best solution in the 'what actually works' section: Give your LLM the same level of permissions that you would give a human you just hired in the same role. The examples given, tricking the customer support LLM into sending text messages to all users, or into transferring money, are not things that you would ever give a human customer support agent the tools to do. At some businesses that employ humans, you have to demonstrate good judgement for months before they even let you touch the keys to the case that has the PS5 games in it.
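One way to sketch that "new hire" permission model (the role names and tools here are hypothetical, not from the article): the harness, not the model, enforces an allow-list of tools per role, so a prompt-injected instruction has nothing dangerous to call.

```python
# Hypothetical sketch: scope an LLM agent's tools the way you'd scope a
# new hire's permissions -- an allow-list per role, enforced outside the model.
ROLE_TOOLS = {
    "support_agent": {"lookup_order", "refund_under_50"},
    "support_lead":  {"lookup_order", "refund_under_50", "refund_any"},
}

def dispatch(role, tool, handler, *args):
    """Run a tool call only if the role's allow-list permits it."""
    if tool not in ROLE_TOOLS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    return handler(*args)

# A support agent can look up orders...
print(dispatch("support_agent", "lookup_order", dict, [("order", 42)]))
# ...but mass-texting simply isn't in its tool set, so an injected
# instruction to do it fails at the dispatch layer, not in the prompt:
try:
    dispatch("support_agent", "send_sms_to_all_users", print, "spam")
except PermissionError as e:
    print(e)
```

The design choice is the same as with human staff: you don't rely on the agent's judgement to refuse; you make the dangerous action impossible to invoke at that trust level.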
I haven't encountered a support person so locked down that they couldn't do anything impactful. Even simple things like booking or canceling appointments have financial consequences.
This is just counting PyPI packages. Why would I go to the effort of publishing a library or CLI tool that took me ten minutes to create? Especially in an environment where open-source contributions from strangers are useless. If anything, I'd expect useful AI to reduce the number of new PyPI packages.
I don't see how this is more degenerate than betting on roulette at a casino. Prediction markets usually offer more efficient odds than casinos because the house profits from trading volume instead of from the spread, so they're essentially a way to bet on a game of complete chance with a much lower average loss than games of pure chance used to carry. If people want to bet on coin flips, it seems objectively better that they have access to a venue that only fleeces them for 1% of their bet rather than 5%+ of it.
For sporting events, for example, the alternative to prediction markets 5-10 years ago was a website where you bet against the house directly; they'd usually take around a 15-20% spread, and they'd ban you and keep your account funds if they decided you were winning too much. Now you can bet on the same events on prediction market sites with around a 1-5% spread, and the house doesn't care how much you win (so there's actually an argument that you're playing a game of skill, compared to the old format where you definitely weren't, since you'd be banned for being too skilled).
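A rough worked example of that average-loss comparison (the pricing model here is a simplification I'm assuming, not from the comment: a fair 50/50 event where the quoted price absorbs the venue's spread):

```python
# Hypothetical arithmetic: expected loss of a $100 bet on a fair 50/50
# event, when the quoted price bakes in the venue's spread/fee.
def expected_loss(stake, spread):
    price = 0.5 + spread / 2     # each side pays roughly half the spread
    payout = stake / price       # payoff units bought at the marked-up price
    return stake - 0.5 * payout  # stake minus expected winnings

print(round(expected_loss(100, 0.01), 2))  # ~1% venue: lose about $0.99
print(round(expected_loss(100, 0.15), 2))  # ~15% book: lose about $13.04
```

Under these assumptions the old-style book costs the bettor roughly an order of magnitude more per wager than a low-fee market.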
The embedded programs can be connected to the other weights during training, in whatever way the training process finds useful. It doesn't just have to be arithmetic calculation. You can put any hard-coded algorithm in there, make the weights for that algorithm static, and let the training process figure out how to connect the other trillion weights to it.
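A toy version of that idea, assuming a hand-rolled training loop (nothing here reflects any real model): the embedded program is a hard-coded exact multiply whose behavior never changes, and gradient descent trains only the weights wired around it.

```python
import random

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(64)]
ys = [6.0 * x * x for x in xs]   # target the surrounding weights must hit

def embedded_program(a, b):
    return a * b  # static hard-coded algorithm; training never touches it

w_in, w_out = 1.0, 1.0           # the only trainable parameters
lr = 0.005
n = len(xs)
for _ in range(2000):
    g_in = g_out = 0.0
    for x, y in zip(xs, ys):
        pred = w_out * embedded_program(w_in * x, x)
        err = pred - y
        # hand-derived gradients of mean squared error; no gradient
        # flows into the frozen embedded program itself
        g_in += 2 * err * w_out * x * x / n
        g_out += 2 * err * w_in * x * x / n
    w_in -= lr * g_in
    w_out -= lr * g_out

print(round(w_in * w_out, 2))  # learned scale around the program: ~6.0
```

The training process figures out how to route inputs into, and scale outputs from, the fixed program, which is the general shape of connecting trainable weights to a frozen embedded algorithm.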
I don't think anyone is manually using LLMs for those conversations; a lot of those replies are bots. There's a market for Reddit accounts with a solid human-looking reply/post history, to be used for astroturf marketing, so some organizations run bots to grow such accounts. There probably are also just people who overuse "Honestly? [statement]" sentences; I spoke to such people in person before LLMs existed.
> borrowing ordinary mannerisms of speech that aren't necessarily egregious
That's how a trope starts. When a minority of writers are using a particular pattern, it's personalized style. When a majority of writers in a genre adopt the same personalized style, it's a trope.
We find AI tropes especially annoying because, lately, three frontier LLMs have been producing a sizable chunk of the text we read (maybe even a majority, for some people). It would be just as annoying if a clique of three humans were producing most of the text we read; we'd start to find their personal styles overdone. Even before LLMs, this happened in some "slop" fiction genres, where a particularly active author would churn out dozens of novels per year in one style (often via ghostwriters, but still with a single style and a repetitive plot pattern).