Hacker News | cpard's comments

Building TUIs might be easy now, but building a good user experience in a TUI feels harder to me than it ever has been. The modern libraries make a lot of things easy, but we are currently pushing terminals far beyond what they were designed for.

Claude Code et al. are good examples of that. Diffs, user approval flows, non-linear flows in general, and a ton of buffered text are all elements that we know really well how to handle in web interfaces but that are challenging in the terminal.


It's important for a book treating an emerging field (data eng for LLMs) to mention emerging categories related to it, such as storage formats purpose-built for the full ML lifecycle.

Lance[1] (the format, not just LanceDB) is a great example, where you have columnar storage optimized for both analytical operations and vector workloads together with built-in versioning for dataset iteration.

Plus (very important) random access, which matters for things like sampling and efficient filtering during curation, but also for working with multimodal data, e.g. videos.
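
A rough sketch of what that looks like with the Lance Python bindings (the dataset path and column names here are made up, and the exact API may differ slightly across versions):

    import lance
    import pyarrow as pa

    # Toy dataset: write once, then read back selectively.
    tbl = pa.table({
        "id": list(range(10_000)),
        "caption": [f"frame {i}" for i in range(10_000)],
    })
    lance.write_dataset(tbl, "frames.lance", mode="overwrite")

    ds = lance.dataset("frames.lance")

    # Random access: fetch arbitrary rows by index without a full scan,
    # e.g. for sampling during curation.
    sample = ds.take([7, 4242, 9999])

    # Versioning: every write creates a new dataset version you can go back to.
    lance.write_dataset(tbl.slice(0, 100), "frames.lance", mode="append")
    old = lance.dataset("frames.lance", version=1)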

Lance is not alone: Vortex[2] is another one, Nimble[3] from Meta yet another, and I might be missing a few more.

[1] https://github.com/lance-format/lance [2] https://vortex.dev [3] https://github.com/facebookincubator/nimble


AI can be an amazing productivity multiplier for people who know what they're doing.

This result reminded me of the C compiler case that Anthropic posted recently. Sure, agents wrote the code for hours, but there was a human there giving them directions, scoping the problem, finding the test suites needed for the agentic loops to actually work, and so on; in general, making sure the output actually works and that it's a story worth sharing with others.

The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding. It works great for creating impressions and building brand value but also does a disservice to the actual researchers, engineers and humans in general, who do the hard work of problem formulation, validation and at the end, solving the problem using another tool in their toolbox.


>AI can be an amazing productivity multiplier for people who know what they're doing.

>[...]

>The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.

You're sort of acting like it's all or nothing. What about the humans that used to be that "force multiplier" on a team with the person guiding the research?

If a piece of software required a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.

For a more current example: do you think all the displaced Uber/Lyft drivers aren't going to think "AI took my job" just because there's a team of people in a building somewhere handling the occasional Waymo low confidence intervention, as opposed to being 100% autonomous?


Where I work, we're now building things that were completely out of reach before. The 90% job loss prediction would only hold true if we were near the ceiling of what software can do, but we're probably very, very far from it.

A website that cost hundreds of thousands of dollars in 2000 could be replaced by a WordPress blog built in an afternoon by a teenager in 2015. Did that kill web development? No, it just expanded what was worth building.


> If a piece of software required a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.

Yes, but this assumes a finite amount of software that people and businesses need and want. Will AI be the first productivity increase where humanity says ‘now we have enough’? I’m skeptical.


> Yes, but this assumes a finite amount of software that people and businesses need and want.

A lot of software exists because humans are needy and kinda incompetent, but still needed to be able to process data at scale. Like, would you build SAP as it is today for LLMs?


This is all inevitable with the trajectory of technology, and has been apparent for a long time. The issue isn't AI, it's that our leaders haven't bothered to think or care about what happens to us when our labor loses value en masse due to such advances.

Maybe it requires fundamentally changing our economic systems? Who knows what the solution is, but the problem is most definitely rooted in lack of initiative by our representatives and an economic system that doesn't accommodate us for when shit inevitably hits the fan with labor markets.


There's 90% job loss only if you assume this is a zero-sum type of thing where humans and agents compete for a fixed amount of work.

I'm curious why you think I'm acting like it's all or nothing. What I was trying to communicate is the exact opposite: that it's not all or nothing. Maybe it's the way I articulate things; I'm genuinely interested in what makes it sound like this.


Fully agree with your og comment and I didn’t get the same read as the person above at all.

This is a bizarre time to be living in. On one hand, these tools are capable of doing more and more of the tasks any knowledge worker handles today, especially when used by a person experienced in the given field.

On the other, it feels like something is about to give. All the Super Bowl ads, AI in what feels like every single piece of copy coming out these days, AI CEOs hopping from one podcast to another warning about the upcoming career apocalypse… I’m not fully buying it.


The optimistic case is that instead of a team of 10 people working on one project, you could have those 10 people using AI assistants to work on 10 independent projects.

That, of course, assumes that there are 9 other projects that are both known (or knowable) and worth doing. And in the case of Uber/Lyft drivers, there's a skillset mismatch between the "deprecated" jobs and their replacements.


Well those Uber drivers are usually pretty quick to note that Uber is not their job, just a side hustle. It's too bad I won't know what they think by then since we won't be interacting any more.

> The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.

It's also a legitimate concern. We happen to be in a place where humans are needed for that "last critical 10%," or the first critical 10% of problem formulation, and so humans are still crucial to the overall system, at least for most complex tasks.

But there's no logical reason that needs to be the case. Once it's not, humans will be replaced.


The reason there is a marketing opportunity is because, to your point, there is a legitimate concern. Marketing builds and amplifies the concern to create awareness.

When the systems turn into something trivial to manage with the new tooling, humans build more complex ones or add more layers on top of the existing systems.


The logical reason is that humans are exceptionally good at operating at the edge of what the technology of the time can do. We will find entire classes of tech problems which AI can't solve on its own. You have people today with job descriptions that even 15 years ago would have been unimaginable, much less predictable.

To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.


> To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.

AGI means fully general, meaning everything the human brain can do and more. I agree that currently it still feels far (at least it may be far), but there is no reason to think there's some magic human ingredient that will keep us perpetually in the loop. I would say that is delusional.

We used to think there was human-specific magic in chess, in poker, in Go, in code, and in writing. All of those have fallen; the latter two admittedly only in part, but even that part was once thought to be the exclusive domain of humans.


When I refer to AI, I mean the "AI" that has materialized thus far - LLMs and their derivatives. AGI in the sense that you mean is science fiction, no less than it was 50 years ago. It might happen, it might not, LLMs are in all likelihood not a pathway to get there.

I'm not sure you can call something an optimizing C compiler if it doesn't optimize or enforce C semantics (well, it compiles C but also a lot of things that aren't syntactically valid C). It seemed to generate a lot of code (wow!) that wasn't well-integrated and didn't do what it promised to, and the human didn't have the requisite expertise to understand that. I'm not a theoretical physicist but I will hold to my skepticism here, for similar reasons.

Sure, I won't argue this point; it did manage to deliver the marketing value they were looking for. In the end, their goal was not to replace gcc but to make people talk about AI and Anthropic.

What I said in my original comment is that AI delivers when it's used by experts. In this case the person driving it was definitely not a C compiler expert; what would happen if a real expert were doing this?



Actually, the results were far worse and way less impressive than what the media said.

The C compiler results or the physics results this post is about?

The C compiler.

Of course the results were much worse than what was communicated in the media; it was content marketing, not an attempt to build a better C compiler.

His point is going to be some copium like since the c compiler is not as optimized as gcc, it was not impressive.

You probably don’t know what you’re talking about.

Why wasn't the C compiler it made impressive to you?

Like everything genAI, it was amazing yet surprisingly crappy.

Yes, the bear is definitely dancing.

But a few feet away there's a world-class step dancer doing intricate rhythms they've perfected over twenty years of hard work.

The bear's kind of shuffling along to the beat like a stoner in a club.

It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.


>It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.

No one has made that assertion; however, the fact that it can create a functioning C compiler with minimal oversight is the impressive part, and it shows a path to autonomous GenAI use in software development.


OK, but don't you see where this is going? The trajectory that we're on?

It didn’t work without gcc and it was significantly worse than gcc with gcc optimizations disabled.

I found this was the least impressive bit about it https://github.com/anthropics/claudes-c-compiler/issues/1

>I found this was the least impressive bit about it https://github.com/anthropics/claudes-c-compiler/issues/1

So, I just skimmed the discussion thread, but I am not seeing how this shows that CCC is not impressive. Is the point you're making that the person who opened the issue is not impressive?


AI is indeed an amazing productivity multiplier! Sadly that multiplier is in the range [0; 1).

> for people who know what they're doing.

I worry we're not producing as many of those as we used to.


We will be producing even fewer of them. I fear for future graduates, hell, even for school children, who are now uncontrollably using ChatGPT for their homework. Next-level brainrot.

Right. If it hadn't been Nicholas Carlini driving Claude, with his decades of experience, there wouldn't be a Claude C compiler. It still required his expertise and knowledge for it to get there.

Every time I see an RL startup, a data startup or even a startup focused on a specific vertical, I think this exact same thing about LLMs.

As others said, Vortex is complementary to the table formats you mentioned.

There are other formats though that it can be compared to.

The Lance columnar format is one: https://github.com/lance-format/lance

And Nimble from Meta is another: https://github.com/facebookincubator/nimble

Parquet is so core to data infra and so widespread that removing it from its throne is a really, really hard task.

The people behind these projects who are willing to try to do this have my total respect.


Many times I read something on HN and come back to find it after a few days or weeks, and the current keyword-based search has consistently given me a hard time, so I played around with LLMs as an alternative way of searching and finding information on HN.


ServeTheHome[1] does a bit of a better job describing what Maverick-2 is and why it makes sense.

[1]https://www.servethehome.com/nextsilicon-maverick-2-brings-d...


That's a fairly specialized chip and requires a bunch of custom software. The only way it can run apps unmodified is if the math libraries have been customized for this chip. If the performance is there, people will buy it.

For a minute I thought maybe it was RISC-V with a big vector unit, but it's way different from that.


The quote at the end of the posted Reuters article (not the one you’re responding to) says that it doesn’t require extensive code modifications. So is the “custom software” standard for the target customers of NextSilicon?


Companies often downplay the amount of software modifications necessary to benefit from their hardware platform's strengths because quite often, platforms that cannot run software out of the box lose out compared to those that can.

In the past, by the time special chips were completed and mature, the developers of "mainstream" CPUs had typically caught up speed-wise, which is why we do not see any "transputers" (e.g. Inmos T800), LISP machines (Symbolics XL1200, TI Explorer II), or other odd architectures like the Connection Machine CM-2 around anymore.

For example, when Richard Feynman was hired to work on the Connection Machine, he had to write a parallel version of BASIC first before he could write any programs for the computer they were selling: https://longnow.org/ideas/richard-feynman-and-the-connection...

This may also explain failures like Bristol-based CPU startup Graphcore, which was acquired by Softbank, but for less money than the investors had put in: https://sifted.eu/articles/graphcore-cofounder-exits-company...


XMOS (spiritual successor to Inmos) is still kicking around, it’s not without its challenges though, for the reasons you mention.


It's a bit more complicated: you need to use their compiler (an LLVM fork with clang+fortran). This in itself is not that special, as most accelerators (ICC, nvcc, aoc) already require this.

Modifications are likely on the level of: does this clang support my required C++ version? Actual work is only required when you want to bring something else, like Rust (AFAIK not supported).

However, to analyze the efficiency of the code and how it is interpreted by the card you need their special toolchain. Debugging also becomes less convenient.


>> says that it doesn’t require extensive code modifications

If they provide a compiler port and update things like BLAS to support their hardware then higher level applications should not require much/any code modification.
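
That's the usual pattern with accelerators. As a rough illustration, with NumPy standing in for any BLAS-backed application code (nothing here is specific to their hardware):

    import numpy as np

    # Application code like this never mentions the hardware: the matmul is
    # dispatched to whatever BLAS the library was built against (OpenBLAS,
    # MKL, or a vendor port), so swapping the library underneath accelerates
    # it without touching this source at all.
    a = np.random.rand(4096, 4096)
    b = np.random.rand(4096, 4096)
    c = a @ b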


The article says they are also developing a RISC-V CPU.


I've also found their "Technology Launch" video[1] that goes somewhat deeper into the details (they also have code examples.)

[1] https://www.youtube.com/watch?v=krpunC3itSM


They've got a "Mill Core" in there; is the design related to the Mill Computing design?


Yeah, it's an unfortunate overlap. The Mill-Core in NextSilicon terminology is, so to speak, the software-defined "configuration" of the chip: it represents the swaths of the application that are deemed worthy of acceleration, as expressed on the custom HW.

So really, the Mill-Core is in a way the expression of the customer's code.


They are completely different designs, but the name is inspired by the same source: the Mill component in Charles Babbage's Analytical Engine.


A framework for optimizing LLM agents, including but not limited to RL. You can even do fine-tuning; they have an example with Unsloth in there.

The design of this is pretty nice: it's based on very simple instrumentation that you add to your agent, and the rest happens in parallel while your workload runs, which is awesome.

You can probably also do what DSPy does for optimizing prompts, but without having to rewrite your agent against the DSPy API, which can be a big win.
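
For context on that last point, this is roughly what "rewriting against the DSPy API" means: every step becomes a Signature/Module that you hand to one of DSPy's optimizers. A minimal sketch; the model name, metric and training example below are placeholders:

    import dspy

    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

    class AnswerQuestion(dspy.Signature):
        """Answer the question concisely."""
        question: str = dspy.InputField()
        answer: str = dspy.OutputField()

    program = dspy.Predict(AnswerQuestion)

    # Placeholder metric and training data.
    def exact_match(example, prediction, trace=None):
        return example.answer == prediction.answer

    trainset = [dspy.Example(question="2 + 2?", answer="4").with_inputs("question")]

    optimizer = dspy.BootstrapFewShot(metric=exact_match)
    optimized = optimizer.compile(program, trainset=trainset)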


We are very excited about this integration with HF datasets. Datasets have huge potential to deliver some much-needed developer experience when it comes to working with data and LLMs/agentic architectures. Happy to answer any questions and also to hear what the community thinks.


LLMs have the potential to compress the cost of learning new programming models. The current moats built around that cost will start to dissolve and that's a good thing.


That’s the whole reason for the existence of Iceberg, Delta and Hudi, right?

Not as easy as just appending metadata to a Parquet file, but on the other hand, Parquet was never designed with that functionality in mind, and probably shouldn't be.
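
A minimal sketch of that layering with the delta-rs Python bindings (Iceberg and Hudi do the analogous thing with their own metadata layers; the path and schema here are made up):

    import pandas as pd
    from deltalake import DeltaTable, write_deltalake

    df = pd.DataFrame({"user_id": [1, 2, 3], "score": [0.1, 0.5, 0.9]})

    # The data itself still lands on disk as plain Parquet files...
    write_deltalake("./scores_delta", df)
    # ...but the table format maintains a transaction log next to them,
    # which is what buys you ACID appends, schema enforcement and time travel.
    write_deltalake("./scores_delta", df, mode="append")

    dt = DeltaTable("./scores_delta")
    print(dt.version())  # 1 after the append
    print(dt.files())    # the underlying Parquet files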

