I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves essentially admitted they had no qualms about their tech being used for war and slaughter, save for two very thin lines: mass surveillance of American citizens, and fully automated weaponry with their current models.
It only showed they were marginally more ethical than OpenAI and xAI, which isn't saying much.
Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.
Many ASR models already support prompts for adding your own terminology. This one doesn't, but full LLMs, especially ones this expensive, aren't needed for that.
For those experiencing 403 errors when accessing certain models:
We're required to comply with the terms of service of our upstream model providers. Our enforcement mechanisms and regional access rules are updated continuously. We understand that enforcement changes are disruptive for people who didn't realize their usage was in violation. You can find the relevant policy in the Prohibited Content section of our [Terms of Service](<https://openrouter.ai/terms#_6_-prohibited-conduct_>).
This is not a ban on using OpenRouter. We will keep working hard to add all the best models so everyone has great options to choose from, no matter where you are.
If you are seeing a 403 author banned error and believe you are not in violation of the Terms of Service of the AI Model or Provider you are using, please fill out the [appeal form](<https://forms.gle/yc2vyJiALz8Uhbmh7>) with detailed information. We will review these and correct any mistaken restrictions.
OpenRouter just practically killed hundreds of services in a day; this must be illegal. Tons of services have been built around OpenRouter to provide API access to end consumers (at least to my knowledge).
Their ToS now states:
`access the Site or Service for purposes of reselling API access to AI Models or otherwise developing a competing service;`
It’s not like you ever owned anything when you built something on top of these sorts of services.
I think it is clear that there is no point providing AI-based services via third-party AI. OpenRouter may even end up with a similar fate if the upstream providers make a similar ToS change. I've always thought of OpenRouter as a useful tool for development and chat that lets me add a zoo of models quickly. Anything relatively close to production? Fix a model version and use a provider's API, for as long as that's supported.
The same argument could apply to AWS, though; you wouldn't expect them to just cut off you and all your sub-customers because you are leveraging infrastructure YOU PAY FOR to serve your users...
My guess is that probably not for Muon. What I said about ADAM was partly based on this blogpost I read some time ago, should have cited it as well [0].
The thing about Muon is that it doesn't have the specific feature of ADAM that causes it to "move along the diagonal". Imagine flattening the weights into one huge vector of a few billion elements. SGD moves along the gradient, which isn't biased toward any particular direction. ADAM normalizes everything elementwise, so its update is roughly a vector of ±1 components.
This isn't a proof or anything, but what you can imagine might be happening is that if you always move along ±1 components, you end up finding spikey solutions somehow. Not sure how to prove that. Muon doesn't do this, but it has its own sort of funky reshaping of the update (it moves along low-rank directions).
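To make the "±1 components" point concrete, here's a minimal numpy sketch (my own illustration, not from the papers): a single bias-corrected Adam step from zero-initialized moments reduces to g / (|g| + eps), so every component of the update has magnitude ~1 regardless of the gradient's scale, while the SGD step inherits the gradient's wildly different per-component scales.

```python
import numpy as np

rng = np.random.default_rng(0)
# Gradient whose components live on very different scales
g = rng.normal(size=8) * np.array([10, 1, 0.1, 5, 0.01, 2, 0.5, 3])

# SGD step: proportional to the raw gradient, large components dominate
sgd_step = -g

# First Adam step with zero-initialized moments and bias correction:
# m_hat = g, v_hat = g**2, so the update is g / (sqrt(g**2) + eps) ~= sign(g)
eps = 1e-8
adam_step = -g / (np.sqrt(g**2) + eps)

print(sgd_step)   # components span several orders of magnitude
print(adam_step)  # every component is ~ +-1: the "diagonal" direction
```

Later steps mix in momentum and the running second-moment estimate, so the update is only approximately sign-like, but the elementwise normalization that drives the effect is the same.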
I don't know a ton about this field (though I'm curious, if anyone has material to recommend), but I used Verilator for a RISC-V project in university and it was excellent, substantially faster than Yosys.