Hacker News | new | past | comments | ask | show | jobs | submit | numbers's comments | login

Yeah, I have a Facebook account that's about 2-3 years old now, and I use it mainly for Marketplace. But man, if I accidentally land on the feed, it's just a bunch of spam and some sort of bait, whether it's rage bait or thirst traps or anything like that. Facebook is maybe trying to see if I'll engage with it, but since I mainly use the app for Marketplace, it just keeps recommending garbage.

Next time someone is confused about the meaning of the word "Enshittification", just pull up Facebook.

please add in the keyboard shortcuts to navigate, that's one of my favorite things about native desktop apps

I will look into this for the next release. Thanks for the idea!

I love TUIs and I love the way this looks and the concept behind it, but often I'm doing household stuff on my phone because I'm walking around checking on things or just taking photos of things.

Yes, one of the other comments alluded to this as well. I am also in this boat, so other than bizarroland LLM ingest stuff, I'll probably work on this next. Having never written a mobile app, I'm sure it'll be fine.

I remember this was back in 2023, when ChatGPT had first launched, and I had a manager whose English was not very good. He started sending emails that felt like they were written by a copywriter. And the messaging was so hard to parse because there was so much ChatGPT fluff around it. Very quickly we realized that what he was actually saying was usually somewhere in the middle, but we'd have to read through the intro and the ending of each email so that we wouldn't miss anything. It felt like wasting 2-3 extra minutes per team member.

I have long believed that LLMs will herald a new corporate data transfer format. Unlike most new formats, which boast efficiency gains and compression, this one will be incredibly wasteful and will bloat transmission sizes.

I'll want to communicate something to my team. I'll write 4 bullet points, plug it into an LLM, which will produce a flowing, multi-paragraph e-mail. I'll distribute it to my co-workers. They will each open the e-mail, see the size, and immediately plug it into an LLM asking it to make a 4-bullet summary of what I've sent. Somewhere off in the distance a lake will dry up.
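The round trip is easy to parody in code. A toy sketch, where `expand` and `summarize` are deterministic stand-ins for the real LLM calls and everything else is made up for illustration:

```javascript
// Toy model of the "LLM exchange format": bullets -> bloated email -> bullets.
// expand() and summarize() are stand-ins for real (paid, per-token) LLM calls.
const FLUFF = "I hope this message finds you well. ";
const LEAD = "I wanted to touch base regarding the following: ";

function expand(bullets) {
  // Wrap each bullet in a paragraph of filler, inflating the payload.
  return bullets.map((b) => FLUFF + LEAD + b + ".").join("\n\n");
}

function summarize(email) {
  // Recover the bullets by stripping the boilerplate back off.
  return email.split("\n\n").map((p) => p.split(LEAD)[1].replace(/\.$/, ""));
}

const points = ["ship Friday", "freeze the API", "update the docs", "no meetings"];
const email = expand(points);
// summarize(email) hands each reader back the same four bullet points,
// after the text made its round trip at several times the original size.
```

The only thing the "format" guarantees is that the payload grows on the way out and shrinks on the way back in.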


> I'll want to communicate something to my team. I'll write 4 bullet points, plug it into an LLM, which will produce a flowing, multi paragraph e-mail. I'll distribute it to my co-workers. They will each open the e-mail, see the size, and immediately plug it into an LLM asking it to make a 4 bullet summary of what I've sent. Somewhere off in the distance a lake will dry up.

All while both sides are charged per token for processing. This is _the dream_ of these AI firms.


And hopefully they're the same four bullet points...

Ah, yes, the LLM Exchange Protocol.

I believe it's already in place, making the internet a bit more wasteful.


HypoText Transfer Protocol

the solution is simple: ask ChatGPT to summarize it

a large part of the business model of these systems is going to consist of dealing with these systems... it's a wonderful scheme


Why bother fixing existing problems if you can just create new problems and then fix those? /s

Man, I want to support something like Zulip; I would even want to work on a product like this. But one thing I'd say is that you have to go back and study why Slack beat HipChat and the others. It's so simple in hindsight: it was the marketing and the UI/UX of Slack that made it so much easier to use. If you'd like, I have a ton of ideas and experience building UIs and would love to give you some of my input. Too much typing for a comment at the moment.

You should stop by #feedback in chat.zulip.org and share your ideas!

Regarding the history: Slack had very effective marketing, powered by a lot of venture capital. And HipChat was a weak product that suffered an embarrassing hack, which did not leave customers with confidence that their data was safe there.

Zulip is not venture-funded, so we're reliant on people sharing it with others to get the word out.

As a side note, I don't think Slack could have succeeded if it launched today. Microsoft Teams has far, far more users than Slack, and it's slopware. You can thank the end of antitrust enforcement for that.


Fun fact: shortly after MS Teams launched, I created an internal "reconstituted" desktop Teams client for myself and the poor souls in my org who had MS Teams thrust upon them. It extracted resources from the (unminified!) Electron app, as well as the JS and CSS files from their web version, then repackaged everything via Electron, wrapping it into a standalone executable. Think of it like a really complicated Greasemonkey/Tampermonkey script.

My fork at the time replaced their criminal whitespace use and offered a more compact, information-dense alternative using CSS and JavaScript, all injected post-rendering. Ah, the silly things one is capable of when faced with a minor inconvenience and a wandering mind...
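For the curious, the post-render injection is the least exotic part. A rough sketch of the pattern; the selectors here are invented placeholders, not Teams' real class names (those changed with every release anyway):

```javascript
// Greasemonkey-style density patch: build a CSS override and inject it
// into the document after the page renders. Selectors are hypothetical.
function buildCompactCss() {
  const rules = [
    [".message-body", "padding: 2px 6px; line-height: 1.25;"],
    [".thread-gap", "margin: 0;"],
    [".avatar", "width: 20px; height: 20px;"],
  ];
  return rules.map(([sel, decl]) => `${sel} { ${decl} }`).join("\n");
}

function injectCss(doc, cssText) {
  // Append a <style> node, exactly as a userscript would after page load.
  const style = doc.createElement("style");
  style.textContent = cssText;
  doc.head.appendChild(style);
  return style;
}
```

Because the override is appended last, it wins the cascade against same-specificity rules shipped by the app, which is all a density patch needs.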


Does anyone know if there are any desktop tools I can use this transcription model with? E.g. something like Wispr Flow/WillowVoice but with custom model selection.


There is Handy, an open source project meant to be a desktop tool, but I haven’t installed it yet to see how you pick your model.

Handy – Free open source speech-to-text app https://github.com/cjpais/Handy


Try https://ottex.ai/

I recently added support for the Mistral provider; the model is actually a very good one, and I personally switched to it as my default model.

p.s. the app is free for personal use, has support for both local models and BYOK with OpenRouter, Groq, Mistral, Fireworks, and more coming soon.


I want a bright lamp like this but not for $1200...any suggestions?


The links to Cloudflare and Vercel are broken. Who's writing these...


Well, given the subject...


Fixed now!


I'm seeing the image on zen which is a firefox fork but not on firefox itself :/

even with `image.jxl.enabled` I don't see it on firefox


Checking the Firefox bugs on this, it seems they decided to replace the C++ libjxl with a Rust version, which is a WIP, to address security concerns with the implementation. All this started a few months ago.

Maybe the Zen fork is a bit older and still using the C++ one?


... update: after reading the comments in the Rust migration security bug, I saw they mentioned "only building in nightly for now".

I grabbed the nightly Firefox, flipped the JXL switch, and it does indeed render fine, so I guess the Rust implementation is functioning, just not enabled in stable.

... also, I see no evidence that it was ever enabled in the stable builds, even for the C++ version, so I'm guessing Zen just turned it on. Which... is fine, but maybe not very cautious.
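Which is also why pages shipping JXL today generally shouldn't rely on the client having the pref flipped. One common pattern is a `<picture>` fallback, letting the browser pick a format it can decode; a small helper to generate the markup (filenames here are placeholders):

```javascript
// Build a <picture> element that offers JXL first and falls back to PNG
// in browsers that don't support image/jxl (i.e. most of them, today).
function jxlPicture(base, alt) {
  return [
    "<picture>",
    `  <source srcset="${base}.jxl" type="image/jxl">`,
    `  <img src="${base}.png" alt="${alt}">`,
    "</picture>",
  ].join("\n");
}
```

The browser skips any `<source>` whose `type` it can't decode, so stable Firefox quietly gets the PNG while Zen or Nightly-with-the-pref gets the JXL.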


Zen browser is pretty much vibe-coded


Do you have any proof of / more info about this? I've never heard this claim and I'd like to know more.


1. Zen Browser had remote debugging enabled by default and disabled the security prompt for it. Extreme incompetence or malice? https://github.com/zen-browser/desktop/pull/927

2. Social trackers are selectively allowed, unsigned extensions are enabled by default, and Enhanced Tracking Protection isn't fully implemented.

There's just a theme of incompetence, of trying to cover it up, and of being generally clueless about security.


Good. Image parsing has produced so many bad RCEs.


Google Chrome is using a Rust implementation. The existence and sufficient maturity of it is the reason they were willing to merge support in the first place.


Hmmm, check the jxl-rs repository. I wouldn’t call it mature. Not to say it’s buggy, but most of its code is very fresh.


Flipping `image.jxl.enabled` made it work for me after refreshing the page. I'm using LibreWolf 146.0.1-1, but I guess it works just fine in Firefox 146.
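If you want the flip to persist rather than toggling it in about:config by hand, Firefox (and forks like LibreWolf) read a `user.js` file from the profile directory at startup, and prefs set there are reapplied every launch. A minimal sketch, assuming your build actually ships the decoder:

```js
// user.js in the profile directory; applied on every startup.
user_pref("image.jxl.enabled", true);
```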


I'm going to start doing this with Rufus too

