Maybe this is a naive question, but why wouldn't there be a market for this even for frontier models? If Anthropic wanted to burn Opus 4.6 into a chip, wouldn't there theoretically be a price point where this would lower inference costs for them?
Because we don't know if this would scale well to high-quality frontier models. If you need to manufacture dedicated hardware for each new model, that adds a lot of expense and causes a lot of e-waste once the next model releases. In contrast, even this current iteration seems like it would be fantastic for low-grade LLM work.
For example, searching a database of tens of millions of text files. Very little "intelligence" is required, but cost and speed are very important. If you want to know something specific on Wikipedia but don't want to figure out which article to search for, you can just have an LLM read the entire English Wikipedia (7,140,211 articles) and compile a report. Doing that would be prohibitively expensive and glacially slow with standard LLM providers, but Taalas could probably do it in a few minutes or even seconds, and it would probably be pretty cheap.
The demo was so fast it highlighted a UX component of LLMs I hadn’t considered before: there’s such a thing as too fast, at least in the chatbot context. The demo answered with a page of text so fast I had to scroll up every time to see where it started. It completely broke the illusion of conversation, where I can usually interrupt if we’re headed in the wrong direction. At least in some contexts, it may become useful to artificially slow down the delivery of output, or somehow tune it to the reader’s speed based on how quickly they reply. TTS probably does this naturally, but for text-based interactions it’s still a thing to think about.
I tend to agree, this has been my experience with LLM-powered coding, especially more recently with the advent of new harnesses around context management and planning. I’ve been building software for over ten years so I feel comfortable looking under the hood, but it’s been less of that lately and more talking with users and trying to understand and effectively shape the experience, which I guess means I’m being pushed toward product work.
I spent some very enjoyable time browsing courses and tutorials in the Santa Fe Institute’s complexity explorer![1]
I wish I had encountered complexity science earlier in life. It touches on so many of the questions that have sparked my imagination over the years; I’m so pleased to find such an accessible introduction.
The ending of the article left me feeling he had more of an axe to grind here. The mostly unspoken ideological background is that classical art is often appropriated by proponents of Western chauvinism to demonstrate their supposed innate cultural superiority. Poorly painted reconstructions undermine that image, but it does not mean this was done intentionally. I agree that a more neutral observer would have been interested in learning the thought process of those researchers.
> Poorly painted reconstructions undermine that image, but it does not mean this was done intentionally
If I'm understanding you right, you're suggesting the author thinks that researchers are intentionally doing poor reconstructions to undermine public perception of classical art as part of some sort of culture war? I don't see anything in the article to suggest this.
> The enormous public interest generated by garish reconstructions is surely because of and not in spite of their ugliness. It is hard to believe that this is entirely accidental. One possibility is that the reconstructors are engaged in a kind of trolling.
It's towards the end of the article. He doesn't directly mention culture war stuff but he does talk about it being "iconoclastic." I think it's a reasonable interpretation of what he was saying.
I don't think it's reasonable. If there's context I'm missing and this guy has written about culture war stuff before, fair enough, but based on this article alone, I'm not seeing any indication of that.
That phrase suggests more that the author believes this is done for spectacle, knowing that it will attract attention to the researcher far more than a nice-looking painted statue would. Basically he seems to be accusing these researchers of doing flame-bait for clicks, like those kitchen-top meal TikTok videos designed to get engagement by making people angry.
Maybe my brain is oversaturated with culture war nonsense from too much doomscrolling but that’s where my train of thought went too, even if it wasn’t directly implied.
By claiming our ancient predecessors had terrible taste you can make them look like primitive fools, and make our own modernity appear superior in comparison.
When boiled down to culture war brainrot the poor coloring in the reconstructions becomes a woke statement that the brutish patriarchal empires of antiquity have nothing to teach our sophisticated modern selves and that new is good and old is bad. A progressive hit-piece on muh heritage.
Anything you don’t like is a purple haired marxist if you squint hard enough.
Idk why my brain went there. I’m guessing the years of daily exposure to engagement-farming ragebait had something to do with it.
Interesting. Like many people here, I've thought a great deal about what it means for LLMs to be trained on the whole available corpus of written text, but real-world conversation is a kind of dark matter of language as far as LLMs are concerned, isn't it? I imagine there is plenty of transcription in training data, but the total amount of language used in real conversation surely far exceeds any available written output and is qualitatively different in character.
This also makes me curious about the degree to which this phenomenon manifests when interacting with LLMs in languages other than English. Which languages have less tendency toward sycophantic confidence? More? Or does it exist at a layer abstracted from the particular language?
I think the point here is that objecting to AI data center water use and not to, say, alfalfa farming in Arizona, reads as reactive rather than principled. But more importantly, there are vast, imminent social harms from AI that get crowded out by water use discourse. IMO, the environmental attack on AI is more a hangover from crypto than a thoughtful attempt to evaluate the costs and benefits of this new technology.
On the flip side, the crypto hype machine pretty seamlessly flipped to the AI hype machine, so it makes sense the same anti crowd shifted along with it. Given that the practical applications of crypto were minimal and the externalities were mostly crime and pollution, I’m not at all surprised that many people expect the same for AI.
The anti-crypto people were correct, though. Why should we not push back when we’re seeing the same type of baseless hype that surrounded crypto being cultivated around the AI space?
They were and we should push back and yes, there is a mountain of baseless hype. But if you train your fire on the wrong thing, you risk not addressing the actual problem.
But if I say "I object to AI because <list of harms> and its water use", why would you assume that I don't also object to alfalfa farming in Arizona?
Similarly, if I say "I object to the genocide in Gaza", would you assume that I don't also object to the Uyghur genocide?
This is nothing but whataboutism.
People are allowed to talk about the bad things AI does without adding a 3-page disclaimer explaining that they understand all the other bad things happening in the world at the same time.
Because your argument is more persuasive to more people if you don't expand your criticism to encompass things that are already normalized. Focus on the unique harms IMO.
If you take a strong argument and throw in an extra weak point, that just makes the whole argument less persuasive (even if that's not rational, it's how people think).
You wouldn't say the "Uyghur genocide is bad because of ... also the disposable plastic crap that those slave factories produce is terrible for the environment."
Plastic waste is bad but it's on such a different level from genocide that it's a terrible argument to make.
Adding a weak argument is a red flag for BS detectors. It's what prosecutors do to hoodwink a jury into stacking charges over a singular underlying crime.