> I have never heard of analog CDs.

Laserdiscs are analog.


NTSC video & film (35mm movie film) restoration.

> I don't think very many people predicted that it simply wouldn't matter when photorealistic compromising images of whoever you don't like

This goes hand-in-hand with the widespread death of belief in absolute truth in the US and other Western nations.

If this technology were released during the height of the Monica Lewinsky scandal, I'd wager it would have had the impact most of us expected it to have, at least for a little while.


> AI and LLMs have changed one thing very quickly: competent output is now cheap.

If you're working on something not truly novel, sure.

If you're using LLMs to assist in e.g. Mathematics work on as-yet-unproven problems, then this is hardly the case.

Hell, if we just stick to the software domain: Gemini3-DeepThink, GPT-5.4pro, and Opus 4.6 all perform pretty "meh" when writing CUDA C++ code for Hopper & Blackwell.

And I'm not talking about poorly-spec'd problems. I'm talking about mapping straightforward mathematics in annotated WolframLanguage files to WGMMA with TMA.
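To give a sense of what "straightforward" has to survive contact with: just encoding the TMA descriptor on the host looks roughly like this (a minimal sketch for 2-D fp32 tiles; the function name and dims are illustrative, error handling elided):

    #include <cuda.h>    // Driver API: cuTensorMapEncodeTiled (CUDA >= 12.0)
    #include <cstdint>

    // Descriptor for 32x128-element tiles of a row-major MxK fp32 matrix.
    // Assumes cuInit() has already been called.
    CUtensorMap make_tma_desc(void* gmem, uint64_t M, uint64_t K) {
        CUtensorMap desc{};
        const cuuint64_t global_dim[2]    = { K, M };               // dim 0 = innermost
        const cuuint64_t global_stride[1] = { K * sizeof(float) };  // bytes; dim 0 implied
        const cuuint32_t box_dim[2]       = { 32, 128 };            // 32 floats = 128 B row
        const cuuint32_t elem_stride[2]   = { 1, 1 };               // dense
        cuTensorMapEncodeTiled(&desc, CU_TENSOR_MAP_DATA_TYPE_FLOAT32,
                               /*rank=*/2, gmem, global_dim, global_stride,
                               box_dim, elem_stride,
                               CU_TENSOR_MAP_INTERLEAVE_NONE,
                               CU_TENSOR_MAP_SWIZZLE_128B,          // must match smem layout
                               CU_TENSOR_MAP_L2_PROMOTION_L2_128B,
                               CU_TENSOR_MAP_FLOAT_OOB_FILL_NONE);
        return desc;  // passed to the kernel as __grid_constant__ const CUtensorMap
    }

The swizzle mode and box dims here have to agree with the shared-memory layout the WGMMA descriptors expect, and it's exactly that kind of cross-cutting constraint the models trip over.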


I am not sure you set it up right. Did you have a runnable WolframLanguage file so it can compare results? Did you give it H100 / H200 access to compile and then iterate?

My experience is that once you have these two, it does amazing kernel work (Codex-5.4).


> Did you have a runnable WolframLanguage file so it can compare results?

Yes.

> Did you give it H100 / H200 access to compile and then iterate?

Yes, via Lambda.ai. Also, FWIW, I run claude with --dangerously-skip-permissions and codex with the equivalent flag.

> it does amazing kernel work (Codex-5.4)

Specifically with WGMMA + TMA?

---

Once TMA gets involved, both Claude and Codex spin endlessly until they dump TMA for a slower fallback.

I've observed this with Claude Code running Opus 4.6 with reasoning set to medium, high, and max; with "adaptive thinking" both enabled and disabled; and with thinking tokens maxed out.

I've also observed this with Codex running GPT-5.4 as well as GPT-5.3-Codex, at reasoning efforts from medium through xhigh.

---

I've also observed this on the web, as mentioned in my OP, with GPT-5.4pro (Extended Pro), Gemini3-DeepThink, and Opus 4.6.
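
---

For reference, the device-side loader they keep bailing out of is roughly the pattern from NVIDIA's own programming guide. A sketch (sm_90+, CUDA 12; tile dims chosen to match the descriptor sketch above, WGMMA body elided):

    #include <cuda.h>
    #include <cuda/barrier>
    #include <utility>
    using barrier = cuda::barrier<cuda::thread_scope_block>;
    namespace cde = cuda::device::experimental;

    __global__ void kernel(const __grid_constant__ CUtensorMap tensor_map, int x, int y) {
        // TMA destinations must be 128-byte aligned.
        __shared__ alignas(128) float smem[128][32];
        #pragma nv_diag_suppress static_var_with_dynamic_init
        __shared__ barrier bar;
        if (threadIdx.x == 0) {
            init(&bar, blockDim.x);
            cde::fence_proxy_async_shared_cta();  // make barrier visible to the async proxy
        }
        __syncthreads();
        barrier::arrival_token token;
        if (threadIdx.x == 0) {
            // One thread issues the bulk copy and declares the bytes in flight.
            cde::cp_async_bulk_tensor_2d_global_to_shared(&smem, &tensor_map, x, y, bar);
            token = cuda::device::barrier_arrive_tx(bar, 1, sizeof(smem));
        } else {
            token = bar.arrive();
        }
        bar.wait(std::move(token));  // tile is now in smem
        // ... WGMMA over smem ...
    }

They can usually reproduce this skeleton; it's wiring the barrier phases and the smem layout into a full WGMMA pipeline where they spin and eventually retreat to the slower fallback.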


That is informative, thanks! Yes, I observe the same thing: the model tends to give up (like you said, "dump TMA for a slower fallback") and needs active steering to get good results. But it does get further than one-shotting from the chat interface, and it knows much more about profiling and kernel coding than the chat models do.

It doesn't have to be anything as extreme as novel work. Frontier models still struggle when faced with moderately complex semantics. They've gotten quite good at gluing dependencies together, but it was a rather disappointing nothingburger watching Claude choke on a large xterm project I gave it: it spent a month getting absolutely nowhere, just building stuff out until the codebase was so broken it had to be reset so it could start over from square one.

We've come a long way in certain respects, but honestly we're just as far from the silver bullet as we were three years ago (for the shit I care about). I'm already bundling up for the next winter.

> I solve that in a hilarious way: by uninstalling the app when I’m not using it.

Ha, I do the same thing!


> but that was not a replacement of understanding what multiplication was

You're conflating an algorithm in N or Z with inherent meaning.

Let's shift over to R: Expand e*pi to repeated addition.

Think exponentiation is "repeated multiplication"? Try 2^pi.
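Spelled out, the standard definitions on R:

    % multiplication of reals is defined via limits, not iterated addition:
    e \cdot \pi = \lim_{n \to \infty} e \cdot q_n, \quad q_n \in \mathbb{Q}, \; q_n \to \pi

    % real exponentiation is defined via exp and log:
    b^x := \exp(x \ln b), \qquad 2^{\pi} = e^{\pi \ln 2} \approx 8.82498

No amount of "repeated" anything gets you there; the limit (or exp/log) is doing all the work.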


> our country's identity has basically become "we're all immigrants", but that's not a new phenomenon

This is a very "New World" perspective that won't cut it in the "Old World" until those countries hit their own crisis points.


> laundry [is] already automated

Partially. Ironing/steaming is only partially automated. Folding/hanging is not.


> How do you view HTML/Code/JSONs in other applications?

Not GP, but I'll be forever thankful to have been able to make my career focused on embedded software.

In my line of work there's nothing to view because there's no visual component at all. If my user(s) "see" the results of my work, then it means I've catastrophically fucked up.

I spend 90% of my time working in vim within XTerm.

The closest I get to UI/UX is a UART debugging interface.


> very specific edge cases

Mathematics is hardly an edge case, but SOTA models differ wildly in their ability to write proofs for unsolved problems.

Models also differ wildly in tasks like decompilation for reverse engineering.

Also, so far, the only model I've found that can competently write PTX for SM100 CUDA devices is GPT-5.4pro, though I'll admit this is more of an edge case than the examples above.

AFAICT, the extent to which someone finds models interchangeable is inversely proportional to the novelty of their work.

