I’ve been thinking about this a lot while building Murmel (https://murmel.social). One thing we wanted to avoid from day one was the “infinite engagement machine” model, so instead of pushing algorithmic slop, we just surface links that are already being shared by people you follow on Bluesky and Mastodon.
It ends up feeling much closer to “what’s interesting in my corner of the web right now?” and much less like a system trying to keep you trapped inside it.
Small scope, obviously, but I think more social tools should feel like utilities, not casinos.
Don't give them ideas please. They'll ask for more investment to do exactly this.
I miss the days when open source was a way to get your product into developers' hands and build trust. Stuff like this shows the tide has shifted to a primary focus on shareholders and a potential hold on patents and trademarks.
Me too. I also miss the days when I was proud of my little open source projects. Now I just regret donating fuel, even a minuscule amount in the grand scheme of things, to the soulless lawnmower that has already chopped down so much of my joy in work and promises to eventually shred the paycheck, too.
I hear ya, especially knowing that AI crawlers just don't respect robots.txt or anything similar, but there's still nothing wrong with writing code for fun. No need to lose that!
I’m old enough to remember and have toyed with the now long-dead XNA. It was lots of fun, and gave a lot of us students versed in C# our first hands-on exposure to .NET. If only (the old) Microsoft hadn’t been so stupid, short-sighted, and selfish at the time.
If only it were that rosy. I tested a few of the top open-source coding models on a beefy GPU machine, and they all flailed at everything, simply going around in circles and wasting electricity.
Has anyone used this in earnest with something like OpenCode? Over the past few months I’ve tested a dozen models that were claimed to be nearly as good as Claude Code or Codex, but the overall experience when using them with OpenCode was close to abysmal. Not a single one was able to do a decent code-editing job on a real-world codebase.
With M2, yes - I’ve used it in Claude Code (i.e. native tool calling), Roo/Cline (i.e. custom tool parsing), etc. It’s quite good, and was for some time the best model to self-host. At 4-bit it can fit on 2x RTX 6000 Pro (i.e. ~200GB VRAM) with about 400k context at fp8 KV cache. It’s very fast due to low active params, stable at long context, and quite capable in any agent harness (its training specialty). M2.1 should be a good bump beyond M2, which was undertrained relative to even much smaller models.
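For what it's worth, a setup like that can be sketched as a vLLM launch command. This is a minimal sketch, not the commenter's actual config: the checkpoint path is a placeholder, and it assumes a vLLM-style server where these flags (tensor parallelism across the two GPUs, fp8 KV cache, long max context) are the standard knobs:

```shell
# Hedged sketch of serving a 4-bit M2 checkpoint across 2 GPUs with vLLM.
# /models/m2-4bit is a placeholder path, not a real model identifier.
#   --tensor-parallel-size 2  -> shard weights across the 2x RTX 6000 Pro
#   --kv-cache-dtype fp8      -> fp8 KV cache, as described above
#   --max-model-len 400000    -> ~400k token context window
vllm serve /models/m2-4bit \
  --tensor-parallel-size 2 \
  --kv-cache-dtype fp8 \
  --max-model-len 400000
```

Whether 400k actually fits depends on the quantization scheme and vLLM's memory headroom settings, so treat the numbers as a starting point rather than a guarantee.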