Hacker News: mandeepj's comments

> a couple people literally spat out the food they were enjoying and threw their plates in the trash

That was an unnecessarily extreme reaction; it's not as if an AI 3D-printed the ingredients.


> I discovered the OneDrive app doesn’t have any of the document scanning tools.

Loved Office Lens! The closest thing they have now is the + icon, then Document.



> Not the first time they couldn't keep to a ceasefire for even a day,

They are obsessed with wars, murders, and chaos.


Yes, and they're actively monitoring this site to flag and bulk downvote anything that sheds light on their crimes (like this whole thread).

I call the orange guy many things! I believe he's an accidental president. The DNC screwed up big time, both times. The stakes were higher than ever, so they could have played it safe by looking at past elections, but nope. They wanted to write history, but got the other guy to write it instead.

Bush (reminder: a Republican) screwed things up so badly that the country opened up to something that had never happened before: a Black president.

Now the orange guy (again, a Republican; see the pattern) has screwed up, and I'm not sure where his bottom is. That may set the country up to accept something else that hasn't happened before: a woman president, maybe a Black one. There's still time until the 2028 general election.

Also, what do conservatives conserve? They conserve their brains by not using them. Don't take my word for it; just look at the history of what they've done so far. They're the same everywhere, be it the US or India: the same hate-mongering lunatics!


> I had 15,000 hours of audio data

Do you really need that much data for fine-tuning?


More data -> better, faster on-device models

The actual plan was to distill Gemini 2.5 Pro into the best on-device voice dictation model.

Pretty sure it would have worked. Alas.


Reasons for running local aside...

What practical latency difference do you see between on-device and, say, Whisper in streaming mode over the internet? Comparable? It seems internet latency would be mostly negligible (assuming reasonable internet/cell coverage), or at least compensated for by the higher-end hardware on the other side.
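One way to frame the question is as a simple latency budget: local pays only for inference, while remote pays for (usually faster) inference plus a network round trip. The sketch below uses purely illustrative numbers, not measurements; the ~300 ms local figure and the remote assumptions are hypothetical.

```python
# Rough latency-budget comparison: on-device vs. streaming over the network.
# All numbers are illustrative assumptions, not measurements.

def total_latency_ms(inference_ms: float, network_rtt_ms: float = 0.0) -> float:
    """End-to-end latency = model inference time + any network round trip."""
    return inference_ms + network_rtt_ms

# Assumed figures: a small on-device model at ~300 ms per chunk, versus a
# larger server-side model at ~150 ms inference plus a ~60 ms round trip.
local = total_latency_ms(inference_ms=300)
remote = total_latency_ms(inference_ms=150, network_rtt_ms=60)

print(f"local:  {local:.0f} ms")   # 300 ms
print(f"remote: {remote:.0f} ms")  # 210 ms
```

Under these assumptions the remote path can actually win, which matches the intuition that beefier server hardware can absorb the network cost; the picture flips as soon as the connection gets flaky.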


depends on the model!

If you run a smaller distil-whisper variant AND optimize the decoder to run on the Apple Neural Engine, you can get latency down to ~300 ms without any backend infra.

The issue is that the smaller models tend to suck, which is why the fine-tuning is valuable.

My hypothesis is that you can distill a giant model like Gemini into a tiny Whisper-sized model.

But it depends on the machine you're running on, which is why local AI is a PITA.
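The core of that distillation idea can be sketched as a loss term: train the small student to match the big teacher's softened output distribution. The toy example below (NumPy, scalar "logits", temperature 2.0) is a minimal illustration of the standard KL-based distillation objective, not the thread author's actual training setup; real ASR distillation operates over token sequences.

```python
import numpy as np

# Minimal sketch of a knowledge-distillation objective: a small student is
# trained to match a large teacher's temperature-softened distribution.
# Shapes and values here are toy assumptions.

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

teacher = np.array([4.0, 1.0, 0.5])   # confident teacher distribution
aligned = np.array([3.8, 1.1, 0.4])   # student close to the teacher
off     = np.array([0.5, 4.0, 1.0])   # student far from the teacher

# A well-aligned student incurs a much smaller distillation loss.
assert distillation_loss(aligned, teacher) < distillation_loss(off, teacher)
```

Minimizing this term (usually mixed with a hard-label loss) is what lets the tiny model inherit behavior from the giant one.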


> Payment of compensation to Iran

Fox News is still singing in chorus about the billion-dollar payment to Iran by Obama.


> effectively $20/$200 in credits for codex

So, 1.3ish million tokens for Codex? Going by the token pricing here: https://openai.com/api/pricing/
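The back-of-the-envelope arithmetic behind "1.3ish million" looks like this; the blended per-million-token price below is a hypothetical round number (actual Codex rates vary by model and by the input/output mix, so check the linked pricing page):

```python
# Hypothetical blended price; see https://openai.com/api/pricing/ for real rates.
price_per_million_tokens = 15.00  # USD, assumed for illustration
credits = 20.00                   # USD of credits

tokens = credits / price_per_million_tokens * 1_000_000
print(f"{tokens:,.0f} tokens")  # 1,333,333 tokens -> "1.3ish million"
```

The $200 tier scales linearly under the same assumption: ten times the credits, ten times the tokens.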


It's not a product; it's an enabler, or a feature! Just like a 'Pro' label :-)

> America is at near full employment [2]

That couldn't be further from the truth.

