Plastic is made from the same stuff as gasoline.

Drain cleaner and hydrochloric acid make salt water. Water is made of highly explosive hydrogen. Salt is made of toxic chlorine and explosive sodium.

Are there good ESP32-based starter kits with printed manuals a kid can learn from? I was looking for an Arduino-like kit as a Christmas gift, and it seems that Arduino kits are unbeatable: the starter kit is available in 10 languages and comes with a project booklet. All ESP32-based products seem to be aimed at more advanced users and to have a steeper learning curve.
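
(For reference, the canonical first project in these kits is blinking an LED, and on an ESP32 running MicroPython it is about as short as the Arduino version. A minimal sketch; the GPIO number of the onboard LED is an assumption, it varies by board:)

    # Minimal MicroPython blink for an ESP32 dev board.
    # Assumption: the onboard LED is wired to GPIO 2 (board-dependent).
    from machine import Pin
    import time

    led = Pin(2, Pin.OUT)

    while True:
        led.value(1)   # LED on
        time.sleep(0.5)
        led.value(0)   # LED off
        time.sleep(0.5)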


No, not like there are with Arduino. Arduino is definitely the best I know of for learning embedded programming, and it has its place there, but I stopped trying to do real work with it.


At some point this year I was getting 14-20 spam calls per hour. I'm in Italy.


Wow. Did you have to get a new number?


Pro Spotify: existing playlists and history, better artist info, better UI.

YouTube Music is both better and worse: the UI has some usability issues, and unfortunately it shares likes and playlists with the normal YouTube account. As a library it has lots of crap uploaded by YouTube users, often with wrong metadata, but thanks to that it also has some niche artists and recordings which are not available on other platforms.


> Pro Spotify: existing playlists and history

It doesn't address the other reasons, but there are some free tools for moving Spotify playlists to YouTube.
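
Most of those tools boil down to the same two halves: read the track list via the Spotify Web API, then search for each title on YouTube Music. A sketch of the export half, assuming a valid OAuth token (TOKEN and PLAYLIST_ID are placeholders):

    # Export half of a playlist migration: list the tracks in a Spotify
    # playlist via the Web API. TOKEN and PLAYLIST_ID are placeholders;
    # the token needs the playlist-read-private scope for private lists.
    import requests

    TOKEN = "..."        # hypothetical OAuth bearer token
    PLAYLIST_ID = "..."  # hypothetical playlist ID

    url = f"https://api.spotify.com/v1/playlists/{PLAYLIST_ID}/tracks"
    headers = {"Authorization": f"Bearer {TOKEN}"}

    tracks = []
    while url:
        page = requests.get(url, headers=headers).json()
        for item in page["items"]:
            t = item["track"]
            if t:  # unavailable tracks can come back as None
                tracks.append(f'{t["artists"][0]["name"]} - {t["name"]}')
        url = page.get("next")  # full URL of the next page, or None

    print("\n".join(tracks))   # feed each line into a YouTube Music search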


So, basically, another smart speaker skinned as a toy robot.


That's probably driven by some kind of AR headset. AR can't properly render solids, so it is stuck with everything being transparent. This way it won't look worse than everything else.


Because everything else looks worse instead. That's one way to solve it, I guess.


> most LLM users will ~always choose the smartest model

Most LLM users will choose the cheapest model which is good enough.

I think that LLMs' performance is already "good enough" for a lot of applications. We're in the diminishing returns part of the curve.

There are two other concerns:

1. being able to run the model locally on trusted infrastructure, so some jerk won't turn it off on a whim, and the data remains safe and compliant with local data protection laws and policies (see the sketch after this list)

2. having good tools to create AI applications (like how easy it is to fine-tune a model to customer needs)
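
On point 1, running an open-weights model locally is already mundane. A minimal sketch with Hugging Face transformers; the model name is just an example of an open model, and the hardware (tens of GB of RAM or a GPU for a 7B model) is the real constraint:

    # Minimal local inference with an open-weights model.
    # The model name is only an example; the weights are downloaded once,
    # after which nothing leaves the machine.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",
    )

    out = generator("Summarize our data retention policy:", max_new_tokens=100)
    print(out[0]["generated_text"])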

> how much the addition of copyrighted material affects how smart the resulting model is

Copyrighted material improves the models, not by making them smarter, but by making them more factually correct, because they will be trained on reputable, reliable, and up-to-date sources.


Copyrighted material includes works by authors from outside the US. Under the Berne Convention, the exceptions which any country may introduce must not "conflict with a normal exploitation of the work" and must not "unreasonably prejudice the legitimate interests of the author". So if at least one French author does license their work for AI training, then any exception of this kind will harm their legitimate interests and rob them of potential income from the normal exploitation of the work.

If the US can harm authors from other countries, then other countries may be willing to reciprocate against American copyright holders and introduce exceptions which allow free use of US copyrighted material for whatever specific purposes they deem important.

IANAL, but it is a slippery slope, and it may hurt everyone. Who has more to lose?

And I hope that Mistral.AI takes note.


> then any exception of this kind will harm their legitimate interests

Pray tell, what legitimate interest of the author is harmed by an LLM training on that work? No one is publishing the author's book.


The legitimate interest that there does not exist a tool that allows any random person to create art in the same style as they do? Which could arguably devalue their offering?


No such interest has been granted by copyright. You can create a painting today in the style of any trending artist without issues.


What I think the parent meant is the interest in selling licenses to others to train on their data.


Exactly. Some copyright holders do license their work for AI training. It certainly happens in the music industry, but I don't see why texts would be any different. The exception would harm their business.


Example, please? It has always been fair use to train on accessible data. It's how, for example, so much research has been done for decades.


My first reaction was like -- wow, a site that runs on a reverb pedal.


this website has the toan


I would also like to point out that English spelling is obsolete and should be abolished (/s). The text of the CRLF abolition proposal itself contains more digraphs, trigraphs, diphthongs, and silent letters than line-ending sequences. The last letter of the word "obsolete" is not necessary. "Should" can be written as only three letters in Shavian "𐑖𐑫𐑛".

According to ChatGPT, the original proposal had:

Number of sentences: 60
Number of diphthongs: 128 (pairs of vowels in the same syllable, like "ai", "ea", etc.)
Number of digraphs: 225 (pairs of letters representing a single sound, like "th", "ch", etc.)
Number of trigraphs: 1 (three-letter combinations representing a single sound, like "sch")
Number of silent letters: 15 (common silent-letter patterns like "kn", "mb", etc.)

For all intents and purposes, CRLF is just another digraph.
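
For anyone who wants to sanity-check such counts without asking ChatGPT, a naive sketch (simple substring matching, so it overcounts; "proposal.txt" is a hypothetical local copy of the text):

    # Naive digraph/CRLF counting by substring matching; the results are
    # rough, since e.g. the "th" in "hothouse" is not a real digraph.
    DIGRAPHS = ["th", "ch", "sh", "ph", "wh", "ck", "ng"]  # a small sample

    def count_patterns(text, patterns):
        text = text.lower()
        return {p: text.count(p) for p in patterns}

    # newline="" stops Python from translating \r\n to \n on read
    proposal = open("proposal.txt", encoding="utf-8", newline="").read()
    print(count_patterns(proposal, DIGRAPHS))
    print("CRLF sequences:", proposal.count("\r\n"))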


I'm a big fan of English spelling reform, and I know Shavian and sometimes write in it, but I feel Shavian is limited by how heavily it uses letter rotation. Dyslexics already have trouble with b, d, p, and q; having most letters have a rotated form would be challenging.

