Yes, Marble (from World Labs) feels like it's generating Gaussian splats or something similar. I guess that makes it more compatible and easier to use for 3D asset generation and for reuse in other software.
Very exciting times ahead!
Yes, I also see that (also using dark mode on Chrome without Dark Reader extension). I sometimes use the Dark Reader Chrome extension, which usually breaks sites' colours, but this time it actually fixes the site.
I just wanted to check whether there is any information about the pricing. Is it the same as Qwen Max? Also, I noticed on the pricing page of Alibaba Cloud that the models are significantly cheaper within mainland China. Does anyone know why? https://www.alibabacloud.com/help/en/model-studio/models?spm...
There’s a domestic AI price war in China, and pricing in mainland China also benefits from lower cost structures and very substantial government support, e.g. local compute-power vouchers and subsidies designed to make AI infrastructure cheaper for domestic businesses and to drive widespread adoption.
https://www.notebookcheck.net/China-expands-AI-subsidies-wit...
All of this is true and credit assignment is hard, but the brutal competition between Chinese firms, especially in manufacturing, sets them apart from and pushes them ahead of Western economies. It makes investment hard because profits are competed away, which is blasphemy in Thiel's worldview, but it is excellent for consumers, both local and global.
The main costs for Netflix and Spotify are licensing. Offering the subscription at half price to additional users is non-cannibalizing and a way to get more revenue from the same content.
The main cost of LLMs is infrastructure. Unless someone can buy, power, or run compute more cheaply (Google with TPUs, locales with cheap electricity, etc.), there won't be a meaningful difference in costs.
That assumes inference efficiency is static, which isn't really the case. Between aggressive quantization, speculative decoding, and better batching strategies, the cost per token can vary wildly on the exact same hardware. I suspect the margins right now come from architecture choices as much as raw power costs.
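To make that concrete, here's a rough back-of-envelope model of cost per million tokens as a function of serving throughput on the same GPU. All the dollar figures and throughput numbers are made-up assumptions for illustration, not real vendor data.

```python
# Rough sketch: cost per million output tokens on hardware billed per
# GPU-hour. All figures below are hypothetical, chosen only to show how
# much serving strategy moves the cost on identical hardware.

def cost_per_million_tokens(gpu_hour_usd: float, tokens_per_second: float) -> float:
    """Dollar cost to generate 1M tokens at a given sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hour_usd / tokens_per_hour * 1_000_000

# Same hypothetical $2/hr GPU, three serving strategies (assumed throughputs):
naive = cost_per_million_tokens(2.0, 50)       # small batches, fp16
batched = cost_per_million_tokens(2.0, 400)    # continuous batching
quantized = cost_per_million_tokens(2.0, 900)  # batching + int4 quantization

print(f"naive:     ${naive:.2f}/M tokens")
print(f"batched:   ${batched:.2f}/M tokens")
print(f"quantized: ${quantized:.2f}/M tokens")
```

Under these assumed numbers the spread is well over an order of magnitude, which is the point: two providers paying identical power and hardware bills can land at very different per-token costs.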
Slightly off-topic, but "surveillance pricing" is a term being used more often: even hotel room prices vary based on where you're booking from, what terms you searched for, and so on.
I think we are just very close to the peak of a typical Gartner hype cycle around LLMs. They are useful but overhyped. There will be more posts about fuckups that happen because people run things on autopilot and cannot keep up with reviewing AI generated code.
Do not get me wrong. I use AI all day to speed things up. But I believe that there is only a small group, maybe 5 percent or less, that actually knows how to use AI properly (I'd count myself not yet in that 5%), which I see as potentially dangerous. The other issue I see is inexperienced software engineers writing software. Although I see this as a great value add and productivity boost for prototyping, I am afraid of the “I do not know much about coding but can also make PRs to our codebase” mentality.
For those of you that run things on autopilot, how do you keep code quality under control? And how do you handle refactoring? I am really curious, because one option now is also to just YOLO your LLMs to write code based on the maturity of the product. You can refactor an app or parts of it pretty fast again with LLMs. While tech debt accumulates faster, we also have the opportunity to rebuild faster.
Is the price here correct? https://openrouter.ai/moonshotai/kimi-k2-thinking
That would be $0.60 per million input tokens and $2.50 per million output tokens. If the model is really that good, it's 4x cheaper than comparable models. Is it hosted at a loss, or do the others have a huge margin? I might be missing something here.
Would love some expert opinion :)
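Not an expert opinion, but here's what the gap looks like in practice. The first pair of rates is the listed OpenRouter pricing from above; the token counts and the competitor's "4x" rates are assumptions for illustration only.

```python
# Sketch: what a typical agentic request might cost at the listed prices
# vs. a hypothetical competitor at ~4x the rates. Token counts and the
# competitor's prices are illustrative assumptions, not real figures.

def request_cost(in_tokens: int, out_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Cost in dollars; prices are per 1M tokens."""
    return in_tokens / 1e6 * in_price + out_tokens / 1e6 * out_price

# Assumed request: 50k input tokens, 5k output tokens.
kimi = request_cost(50_000, 5_000, 0.60, 2.50)   # listed OpenRouter rates
other = request_cost(50_000, 5_000, 2.40, 10.00)  # hypothetical 4x-pricier model

print(f"kimi:  ${kimi:.4f} per request")
print(f"other: ${other:.4f} per request")
```

The absolute numbers per request are small either way; the difference only becomes dramatic once an agent makes thousands of such requests.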
Somehow that article totally ignores the steep pricing of cached input tokens set by Anthropic and OpenAI. For agentic coding, typically 90-95% of the inference cost is attributed to cached input tokens, and a scrappy Chinese company can serve them almost for free: https://api-docs.deepseek.com/news/news0802
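To see why cached reads can dominate the bill: an agentic session resends the whole conversation history on every turn, so cached-input volume dwarfs everything else. All token counts and per-million prices below are hypothetical round numbers for illustration, not any vendor's actual rates.

```python
# Illustration of how cached-input reads can dominate an agentic coding
# bill. All token counts and prices are hypothetical, for illustration only.

def bill(cached_in: float, fresh_in: float, out: float,
         p_cached: float, p_fresh: float, p_out: float) -> dict:
    """Per-category cost in dollars; prices are per 1M tokens."""
    return {
        "cached": cached_in / 1e6 * p_cached,
        "fresh": fresh_in / 1e6 * p_fresh,
        "output": out / 1e6 * p_out,
    }

# Assumed long session: 500M cached reads, 2M fresh input, 1M output.
costs = bill(500e6, 2e6, 1e6, p_cached=0.30, p_fresh=3.00, p_out=15.00)
total = sum(costs.values())
print(f"cached share of total bill: {costs['cached'] / total:.0%}")
```

Even at a cache-read price that is a tenth of the fresh-input price, the sheer volume of cached reads makes them the bulk of the bill, so a provider that prices cache reads near zero changes the economics far more than its headline input/output rates suggest.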
Yes, you can assume that open-source models hosted on OpenRouter are priced at roughly bare hardware cost; in practice some providers there may even run on subsidized hardware, so there is still money to be made.
I can only agree with your experience in Europe. I do not get how they do that, but Tesla Superchargers are more reliable. The occupancy information works better, they are easier to use, and they almost always offer a more competitive price. I often see other chargers that are 50 to 100 percent more expensive and only very rarely see offers that are within 10 to 50 percent.
What strikes me is that this difference can make EVs more expensive per kilometer if you only compare energy cost with fuel cost.
Here is the math with numbers.
Tesla chargers in Switzerland and Germany are usually at most CHF 0.50 or EUR 0.60 per kilowatt hour at the more expensive locations, along highways for example. They offer fast charging of 150 kW or more.
Alternative providers often start at around CHF 0.75 for 50 kW or CHF 1.00 for more than 250 kW fast charging.
If your electric car consumes 20 kWh (Model 3 is at around 15 I think) per 100 km you end up with costs of CHF 10.00, CHF 15.00, or CHF 20.00 per 100 km at CHF 0.50, CHF 0.75, or CHF 1.00 per kilowatt hour. If you drive a petrol car that uses 8 l per 100 km and the cost per liter is CHF 1.70 you pay CHF 13.60 per 100 km.
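The arithmetic above, spelled out. Consumption and price figures are the ones quoted in the comment.

```python
# The comparison from the comment above: EV charging cost vs. petrol cost
# per 100 km, using the quoted consumption and price figures.

def cost_per_100km_ev(kwh_per_100km: float, price_per_kwh: float) -> float:
    return kwh_per_100km * price_per_kwh

def cost_per_100km_petrol(l_per_100km: float, price_per_l: float) -> float:
    return l_per_100km * price_per_l

# EV at 20 kWh/100 km across the three charging prices:
for price in (0.50, 0.75, 1.00):
    print(f"EV at CHF {price:.2f}/kWh: CHF {cost_per_100km_ev(20, price):.2f} per 100 km")

# Petrol car at 8 l/100 km and CHF 1.70/l:
print(f"Petrol: CHF {cost_per_100km_petrol(8, 1.70):.2f} per 100 km")
```

So at CHF 0.75/kWh and above, the EV's energy cost per 100 km already exceeds the petrol car's CHF 13.60, which is the point of the comparison.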
In Slovakia superchargers cost around 0.30-0.37 €/kWh, while the competitors are priced around 0.45-0.60, so yes there is a major price difference as well.
To be fair the others offer subscription plans which lower the price, but such plans don't suit me, so I pay the full prices.
While the site says they are available 24/7, they sit in the gated parking lot of a shopping mall that is closed between 8 PM and 9 AM and on Sundays. Fun to watch Danish tourists desperately trying to reach them while very low on juice.