Hacker News | quatonion's comments

I used it to add a MIDI driver and support to my OS this afternoon. It worked okay, but I agree it is still a bit clunky. I think it is pretty good for a preview release. Much better than nothing.


You aren't the only one. Lol. Those maniacs are out of control.

They really need to get rid of human mods on there, it is completely toxic.


I did this with my experimental site GoodFaith.

The idea is everyone posts in good faith, and AI moderates the posts before you send.

You can still go ahead if you don't agree with the moderation, so it isn't censorship; it just provides a pause.

The AI also rates the posts and comments, just like you said.
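
For the curious, the gist of the flow (hypothetical names; in practice moderate() would call an AI model, not a keyword check) is something like:

```python
# Hypothetical sketch of the flow described above (names made up; in
# practice moderate() would call an AI model, not a keyword check).

def moderate(text):
    """Return (flagged, reason). Stand-in for an AI moderation call."""
    flagged = "idiot" in text.lower()
    return flagged, "possible personal attack" if flagged else ""

def submit(text, confirm_anyway):
    """Post unless flagged; if flagged, pause and let the user decide."""
    flagged, reason = moderate(text)
    if not flagged:
        return True                    # goes straight through
    return confirm_anyway(reason)      # the pause: user can still post
```

The override callback is the whole point: the AI can only delay a post, never block it.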

I built it for precisely the reasons you said: the way things are going, Reddit is unsustainable.

I hope they change something.


I have been having a crack at it in my spare time. A kind of intentional LISP where functions get compiled to WASM in the cloud.

The functions are optionally tested using formal verification. I plan to enable this by default soon, as time allows.

These functions that get written can then be composed, and enzymes that run in the cloud actively look for functions to fuse.
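
Not the actual implementation, but the fusion idea can be sketched in a few lines of Python: composing two maps allocates an intermediate list, while a fused version does one pass.

```python
# Toy version of function fusion ("deforestation"): composing two maps
# allocates an intermediate list; a fuser rewrites map(g) after map(f)
# into a single map over the composed function. Names are illustrative.

def map_fn(f):
    return lambda xs: [f(x) for x in xs]

def fuse_maps(f, g):
    """One pass, no intermediate list."""
    return lambda xs: [g(f(x)) for x in xs]

inc = lambda x: x + 1
dbl = lambda x: x * 2

unfused = lambda xs: map_fn(dbl)(map_fn(inc)(xs))  # two passes
fused = fuse_maps(inc, dbl)                        # one pass

assert unfused([1, 2, 3]) == fused([1, 2, 3]) == [4, 6, 8]
```

The enzymes look for compositions like the unfused form and rewrite them into the fused form.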

Also, the more people use it, the faster the compiler gets, via network scaling laws.

It's very much research at the moment, but kinda works.

Jupyter Notebook-style interface, with the beginnings of some image and media support.

https://prometheus.entrained.ai

You can look at some of the examples or try something yourself.

Would love some feedback.


For some time I have been trying to replace the very costly attention pass in LLMs.

Here is my current attempt at fixing things.

This is applicable beyond LLMs, but that is certainly an important use case.
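
For context on what "very costly" means, a minimal numpy sketch (not the approach in the link) of the quadratic pass versus one well-known linear-time alternative, kernelized attention:

```python
import numpy as np

# Generic sketch only -- not the method described above. Standard
# attention materializes an (n, n) score matrix; kernelized attention
# uses associativity to avoid it.

def softmax_attention(Q, K, V):
    """Standard attention: the (n, n) score matrix is the quadratic cost."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # O(n^2 * d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    """Kernelized attention: form (d, d_v) products instead of the
    (n, n) matrix, so the cost is O(n * d^2)."""
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                                    # (d, d_v)
    Z = Kp.sum(axis=0)                               # (d,)
    return (Qp @ KV) / (Qp @ Z)[:, None]
```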

Description, ready-to-use code, and interactive educational materials inside.


This is really interesting. I always wondered how it works.

A couple of years ago I did some experiments using a feed-forward network (MLP) as a surrogate for attention, to avoid the quadratic explosion.

It worked but had problems at the time, and my mind wasn't really in it.

This has dug it back out again with the benefit of time and additional insights.

So now I'm thinking, you can use a lot of the insights in the work here, but also shoot for a full linear scaling surrogate.

The trick is to use the surrogate as a discriminator under an RL regime during training.

Instead of just applying better/faster math and optimizations alone, have the model learn to work with a fundamentally better inference approach during training.

If you do that, you can turn the approximation error present in the FFN surrogate inference method into a recovery signal encoded into the model itself.
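
In sketch form (hypothetical, and simplified from the RL-discriminator framing to a plain distillation-style penalty), the objective would look something like:

```python
import numpy as np

# Hypothetical sketch: alongside the usual task loss, penalize
# disagreement between exact attention output and the FFN surrogate's
# output on the same inputs, so training pushes the model into regions
# where the cheap surrogate is accurate.

def surrogate_alignment_loss(attn_out, surrogate_out):
    """MSE between exact attention and the surrogate on the same inputs."""
    return float(np.mean((attn_out - surrogate_out) ** 2))

def total_loss(task_loss, attn_out, surrogate_out, lam=0.1):
    """Combined objective: the approximation error becomes a training signal."""
    return task_loss + lam * surrogate_alignment_loss(attn_out, surrogate_out)
```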

I haven't tried it, but don't see a reason it shouldn't work. Will give it a go on a GPT-2 model ASAP.

Thanks again for the awesome article.


Really not a big fan of batteries included opinionated protocols.

Even Cap'n Proto and Protobuf are too much for me.

My particular favorite is this. But then I'm biased coz I wrote it haha.

https://github.com/Foundation42/libtuple

No, but seriously, it has some really nice properties. You can embed JSON-like maps, arrays, and S-expressions recursively. It doesn't care.

You can stream it incrementally or use it in a message-framed form.

And the nicest thing is that the encoding is lexicographically sortable.
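
To illustrate that last property (a toy sketch, NOT libtuple's actual wire format): order-preserving encoding means byte-wise comparison of the encodings matches element-wise comparison of the values themselves, so encoded tuples sort correctly as raw keys.

```python
# Toy sketch only -- NOT libtuple's actual wire format. It shows the key
# property: byte-wise comparison of encodings matches element-wise
# comparison of the values, so encoded tuples sort correctly as raw keys.

def encode(value):
    if isinstance(value, str):                 # toy: no NUL bytes allowed
        return b"\x02" + value.encode() + b"\x00"
    if isinstance(value, int):                 # toy: non-negative only
        return b"\x03" + value.to_bytes(8, "big")
    if isinstance(value, (tuple, list)):       # nests recursively
        return b"\x05" + b"".join(encode(v) for v in value) + b"\x00"
    raise TypeError(f"unsupported: {type(value)}")

keys = [("a", 2), ("b", 0), ("a", 1)]
assert sorted(keys) == sorted(keys, key=encode)
```

That's what makes it nice as a key format for ordered stores: range scans over encoded keys come back in tuple order for free.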


That link is dead


Are we a hundred percent sure it isn't a watermark that is by design?

A quick test anyone can run and say, yup, that is a model XYZ derivative running under the hood.

Because, as you quite rightly point out, it is trivial to train the model not to have this behaviour. For me, that is when Occam kicks in.

I remember initially believing the explanation for the Strawberry problem, but one day I sat down and thought about it, and realized it made absolutely zero sense.

The explanation that Karpathy was popularizing was that it has to do with tokenization.
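
That explanation, in toy form: the model receives token IDs rather than characters, so the letters never directly appear in its input (made-up vocabulary here; real BPE merges differ).

```python
# Toy illustration of the tokenization explanation: the model's input is
# token IDs, not characters, so the letters of "strawberry" never appear
# in its input. (Made-up vocabulary; real BPE merges differ.)

toy_vocab = {"straw": 101, "berry": 102}

def toy_tokenize(word):
    # pretend BPE that happens to split this word into two pieces
    assert word == "strawberry"
    return [toy_vocab["straw"], toy_vocab["berry"]]

tokens = toy_tokenize("strawberry")    # [101, 102]: no 'r' in sight
r_count = "strawberry".count("r")      # 3, but that needs character access
```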

However, models are not conscious of tokens, and they certainly don't have any ability to count them without tool help.

Additionally, if it were a tokenization issue, we would expect to spot the issue everywhere.

So yeah, I'm thinking it's a model tag or insignia of some kind, similar to the fun logos you find when examining many silicon integrated circuits under a microscope.


That is just a made-up story that gets passed around with nobody ever stopping to verify it. The image of the whole AI industry is mostly an illusion designed for tight narrative control.

Notice how despite all the bickering and tittle tattle in the news, nothing ever happens.

When you frame it this way, things make a lot more sense.


I know, right? If I didn't know any better, I'd think they are all customized versions of the same base model.

To be honest that is what you would want if you were digitally transforming the planet with AI.

You would want to start with a common core so that all models share similar values and don't bicker, etc., during negotiations, trade deals, and logistics.

Would also save a lot of power so you don't have to train the models again and again, which would be quite laborious and expensive.

Rather, each lab would take the current best, apply some tweak or add some magic sauce, then feed it back into the master batch, assuming it passed muster.

Share the work globally, for a shared global future.

At least that is what I would do.

