
Does that apply to subagents?

I keep hearing OpenClaw runs on pi?

EDIT: confused by downvotes. In this thread people are saying it runs on top of `claude -p` and others saying it's on pi.

The `claude -p` option is allowed per https://x.com/i/status/2040207998807908432 so I really don't understand how they're enforcing this.


It runs on pi, not claude -p

That's my understanding too, though I haven't checked it. Running claude -p would be horribly inefficient. I wouldn't be surprised if OpenClaw added some compatibility layer to brute-force prompts through claude -p as a workaround. This isn't the first time OpenClaw has been "banned" from Claude subscriptions.

Don't have a GPU, so I tried the CPU option and got 0.6 t/s on my old 2018 laptop using their llama.cpp fork.

Then found out they hadn't implemented AVX2 for their Q1_0_g128 CPU kernel. Added it and I'm now getting ~12 t/s, which isn't shabby for this old machine.

Cool model.


Are you getting anything besides gibberish out of it? I tried their recommended command line and it's dog slow, even though I built their llama.cpp fork with AVX2 enabled. This is what I get:

    $ ./build/bin/llama-cli     -hf prism-ml/Bonsai-8B-gguf -p "Explain quantum computing in simple terms." -n 256 --temp 0.5 --top-p 0.85 --top-k 20 -ngl 99
    > Explain quantum computing in simple terms.

     \( ,

      None ( no for the. (,./. all.2... the                                                                                                                                ..... by/

EDIT: It runs fine in their Colab notebook. Looking at that, you have to run `git checkout prism` (in the llama.cpp repo) before you build. That instruction is missing if you go straight to their fork of llama.cpp. Works fine now.

UPDATE: I was using the llama.cpp CPU backend and was still getting gibberish. On Google Colab they're running with CUDA. I turned Claude loose on the problem and it found a bug in the llama.cpp CPU backend code where a float was being converted to an int and effectively going to 0. Now it runs fine locally with the CPU backend.

Mind sharing the fix as a patch? I would like to run it this way, too.


"Not shabby" is a big understatement.

Why so?

Because it's the opposite of shabby

What are the reasons?

Virtual memory doesn't matter at all. It's virtual. You can take 2TB of address space, use 5MB of it, and nothing on the system cares.

Have a read through everything that's needed for a full uninstall: https://gist.github.com/banteg/1a539b88b3c8945cd71e4b958f319...

Minimalist alternative with no hooks or dependencies for the curious: https://github.com/wedow/ticket


Ticket looks great, thanks!

I think we're a long way from that.

But with that said, those who learn the underlying mechanisms will always be able to solve more problems than the folks who don't. When you know the lower pieces, your mental model tells you when and where the higher level pieces are likely to break. Legit superpower.


> But with that said, those who learn the underlying mechanisms will always be able to solve more problems than the folks who don't

I would define that as being "seriously hamstrung"


seems similar to a couple of simonw's recent tools?

https://simonwillison.net/2026/Feb/10/showboat-and-rodney/


Simon's tools are really great. Showboat is more for static screenshots though. ProofShot is the full session: recording, error capture, action timeline, PR upload. Different scope, I'd say.

Factories benefit from economies of scale that favour centralization.

I think smaller groups handling more complexity is on point. But that's because each group will build their own bespoke factory catered to their exact needs.

I fully expect a mass proliferation of custom programs rather than standardizing on a common set that groans under the weight of being general enough to support all use cases.


I miss the Unix philosophy


Wayland is far more aligned with the Unix philosophy than Xorg ever was. Xorg was a giant, monolithic, do everything app.

The Unix philosophy is fragmentation into tiny pieces, each doing one thing and hoping everyone else conforms to the same interfaces. Piping commands between processes and hoping for the best. That's exactly how Wayland works, although not in plain text because that would be a step too far even for Wayland.

Some stuff should not follow the Unix philosophy; PID 1 and the compositor are chief examples. For those processes it's better to have everything centralized.


In X you have the server, window manager, compositing manager, and clients, all coupled by a very flexible protocol. This seems nicely split and aligned with the Unix philosophy to me. It also works very well, so I don't think this should be monolithic.


Zig also makes this trivial


I believe you. Can you provide a simple example? It would be helpful for my understanding.

