
Related question: how do we resolve the problem that we sign a blank cheque for the autonomous agents to use however many tokens they deem necessary to respond to your request? The analogy from team management: you don't just ask someone in your team to look into something only to realize three weeks later (in the absence of any updates) that they got nowhere with a problem that you expected to take less than a day to solve.
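
To make it concrete, the kind of guardrail I have in mind is a hard per-request token cap that fails loudly with partial work instead of quietly burning more. Rough sketch in Python; the client API here is entirely made up:

    class BudgetExceeded(Exception):
        pass

    def run_agent(task, client, max_tokens=50_000):
        # Agent loop with a hard cap on cumulative token spend.
        spent = 0
        messages = [{"role": "user", "content": task}]
        while True:
            resp = client.chat(messages)  # hypothetical API
            spent += resp.usage.total_tokens
            if spent > max_tokens:
                # Surface partial work; don't silently keep spending.
                raise BudgetExceeded(f"{spent} tokens spent, cap was {max_tokens}")
            if resp.done:
                return resp.content
            messages.append({"role": "assistant", "content": resp.content})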

EDIT: fixed typo


We'll have to solve for that sometime soon-ish, I think. Claude Code has at least some sort of token estimation built into it now. I asked it to kick off a large agent team (~100 agents) to rewrite a bunch of SQL queries, one per agent. It did the first 10 or so, then reported back that it would cost too much to do it this way... so it "took the reins" without my permission, abandoned the teams, and tried to convert each query using only the main agent. The results were bad.
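
What I'd have wanted is for it to stop and report back once the budget runs out, not to improvise a new strategy. Something like this (rough sketch; spawn_agent stands in for whatever launches a sub-agent and returns the rewritten SQL plus tokens used):

    def rewrite_queries(queries, spawn_agent, budget_tokens):
        # One sub-agent per query; halt when the budget is exhausted.
        done, remaining = [], list(queries)
        while remaining and budget_tokens > 0:
            sql, used = spawn_agent(remaining[0])
            budget_tokens -= used
            done.append(sql)
            remaining.pop(0)
        # Hand leftovers back to the human instead of switching approach.
        return done, remaining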

But in any case, we're definitely coming up on the need for that.


> blank cheque

The Bing AI summary tells me that AI companies invested $202.3 billion in AI last year. Users are going to have to pay that back at some point. This is going to be even worse as a cost control situation than AWS.


Didn't you hear? Ads are coming! (well not to Claude, because I guess they plan to somehow get unlimited SV funding?!)


> Users are going to have to pay that back at some point.

That’s not how VC investments work. Just because something costs a lot to build doesn’t mean that anyone will pay for it. I’m pretty sure I haven’t worked for any startup that ever returned a profit to its investors.

I suspect you are right in that inference costs currently seem underpriced, so users will get nickel-and-dimed for a while until the providers extract a better margin per user.

Some of the players are aiming for AGI. If they hit that goal, the cost is easily worth it. The remaining players are trying to capture market share and build a moat where none currently exists.


I'm so glad Airbnb, Uber, Netflix, etc. aren't both hiking their prices and enshittifying via ads, dark patterns, etc.

LLMs are not AGI and everyone is starting to see it. We need new basic research for that. Think fusion reactors.


What planet are you living on, and how do I get there?

Yes, currency is at times exchanged at a loss for power, but rarely without the expectation of more currency down the road.


An AI product manager agent trained on all the experience of product managers setting budgets for features and holding teams to it. Am I joking? I do not know.


This seems pretty in line with how you'd manage a human: you give them a time constraint. A human isn't guaranteed to fix a problem either, and humans are paid by time.
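
In code, that can be as blunt as a wall-clock deadline on the whole run. Sketch only; agent_fn is whatever your agent runner exposes:

    import concurrent.futures

    def run_with_deadline(agent_fn, task, seconds):
        # Give the agent a deadline, the way you'd give a teammate one.
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(agent_fn, task)
        try:
            return future.result(timeout=seconds)
        except concurrent.futures.TimeoutError:
            return None  # deadline hit: check in / escalate, don't keep waiting
        finally:
            # Doesn't kill the worker thread; real cancellation needs
            # the agent's cooperation.
            pool.shutdown(wait=False)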


Just experienced this and came here to check, because even their website is down. The referenced link also returns a 500.


Hmm, (whatever is in execs' heads about) AI appears to amplify the same thinking fallacies that are discussed in the eternal Mythical Man-Month essay, which was written about half a century ago. Funny how some things don't change much...


This trend of overengineering is apparent now in cars, too. An innocent failure, like a headlight going out, can turn into a major systemic issue, like the engine refusing to start, through a chain reaction inside an inadequately tested software control system.

I wonder if this is a one-way street, that is, if a realization will come at some point that simple solutions to simple problems can be more robust...


> This trend of overengineering

I'd dispute it being over-engineering: media keys tend to control a mix of hardware and software (OS) features (looking at ASUS keyboards on the internet, I see audio volume, mic mute, fan speed / perf governor, display off, brightness, projection mode, touchpad off, sleep, and airplane mode).

Given this, an OS driver is a requirement, and the OS further needs to access the hardware for obvious reasons.

This means you can either implement everything uniformly in the driver (just bouncing from the interrupt to a hardware operation in the case of hardware features), or you can split the implementation between firmware and driver.

Unless you have a very good justification to do so (which I'd very much dispute the existence of for gaming-oriented ASUS laptops) the latter seems like the over-engineering.
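
To sketch what "uniformly in the driver" looks like (Python standing in for driver code; every name here is illustrative, not a real kernel API):

    MEDIA_KEYS = {
        0x10: ("hw", "fan_boost"),        # bounces straight to a hardware op
        0x20: ("os", "toggle_touchpad"),  # handled purely on the OS side
        0x30: ("os", "airplane_mode"),
    }

    def on_media_key(scancode, hw, os_api):
        # One dispatch point; the action's backing is an implementation detail.
        kind, action = MEDIA_KEYS[scancode]
        if kind == "hw":
            hw.write(action)       # hypothetical embedded-controller write
        else:
            os_api.invoke(action)  # hypothetical OS-side handler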


I think in many respects these problems are actually _under_-engineering. It's possible to treat software as an artefact with a measurable level of quality, and to use frankly not especially ambitious tools (programming languages with memory safety and rich type systems, unit and integration tests, etc) to build them. It's also possible to have a strong sense of user experience and taste as far as what makes a product, not just a pile of parts.

But you have to take software seriously as something that can improve a system, not just a cost centre to be minimised where possible, and an embarrassing source of problems that will ultimately end up in the newspaper or worse.


Some of the "proudly-open" laptops have open-source EC firmware. I don't have one and haven't looked deeply enough to know, but maybe they have these features sanely implemented there.

On the other hand, I'm not as optimistic about open-source BIOSes like Coreboot, whose only reason for existence seems to be "it's open-source!": that project has been around since the last century, yet it still lacks any actual GUI/TUI for configuration, which every other BIOS has had since the late 80s.


The UI is a payload issue, not a Coreboot issue - various vendors ship Coreboot based firmware with a configuration interface, usually based on the Tiano payload. But for my EC issues I simply took the approach of reverse engineering the EC firmware, binary patching it, flashing that back, and getting on with life. Skill issue.


> I simply took the approach of reverse engineering the EC firmware, binary patching it, flashing that back, and getting on with life. Skill issue.

There is no "simply" here.

You can't list a litany of niche skills and then imply that's just life and it's everyone's fault they don't have the time and knowledge to just, you know, casually reverse engineer and patch a binary.


It was a sarcastic joke ;)


Hard to tell in writing. Still not convinced.


They call the cherries cascara, and I have come across them in some specialty coffee shops packaged just like the beans. You can pour hot (not boiling) water over them and prepare a tea-like infusion. It tastes sweet-ish without adding anything else. It gives a pretty noticeable kick to me when I drink it, even though I am a regular coffee drinker. I think it is worth a try, if you haven't done so yet.


"If the human brain were so simple that we could understand it, we would be so simple that we couldn’t." - without trying to defend such business practice, it appears very difficult to define what are necessary and sufficient properties that make AGI.


What if the human brain were so complex that we could be complex enough to understand it?


To update this excellent quote to 2025, change minutes to seconds and you just described TikTok.


Yeah, I was thinking that while modern social media has lowered the "cost of entry" and everyone can theoretically reach more people than ever, it's hard to even describe most of it as "fame" anymore. I mean, does content even "go viral" anymore, with users subdivided into the tiniest niche communities or audiences? Even if things get wider traction for a while, there's so much competition with so much other content that everything seems to get quickly drowned out and then can't even be found again later through search.


There's a saying on Twitter that every day there is a main character, and the goal of Twitter is to not be it.


"The real problem is the ROI on AI spending is.. pretty much zero. The commonly asserted use cases are the following: Chatbots Developer tools RAG/search"

I agree with you that the ROI on _most_ AI spending is indeed poor, but AI is more than LLMs. Alas, what used to be called AI before the onset of the LLM era isn't deemed sexy today, even though it can still deliver very good ROI when it is the appropriate tool for solving a problem.


AI is a term that changes year to year. I don't remember where I heard it, but I like the definition that "as soon as computers can do it well, it stops being AI and just becomes standard tech." Neural networks were "AI" for a while, but if I use a NN for risk underwriting nobody will call that AI now. It is "just ML" and not exciting. Will AI = LLM forever now? If so, what will the next round of advancements be called?



There is a video on the page in which Bret Victor explains what it is all about. I find it very difficult to summarize, but my best attempt would be something like: transforming computation into an activity that a community of people performs by manipulating real-world objects.


This reminds me of what I learned about myself during my years at university. I observed that in the morning my brain is better at understanding new concepts. Mornings were the best time for me to practice and improve problem solving, but I tended to remember fewer details of what I came across. At about 2pm, however, my brain appears to switch to memorizing mode, where I struggle with problem solving compared to the morning, but I remember a lot more of what I read. I structured my learning activity around this observation. Even to this day (I'm 46) I can feel the same tendency, e.g., if a problem seems somewhat difficult, I just wait until the next morning, if I can, only to find it easy to come up with a solution that seemed out of reach the previous evening. Also, I try to do most of my reading at night (well, life with a family doesn't leave a whole lot of options for timing anyway).

