Hacker News | burntoutgray's comments

Some ARM licensees might switch to RISC-V if ARM decides to compete by making its own chips. With the possibility of a recession looming, the build-out of data centres is likely to stall. That could be a negative outcome for ARM.

I have a single pillar, admittedly for in-house PWAs: upgrade to the current version of Chrome, then if your problem persists, we'll look into it.

This is how it should be for internal stuff! Corporate IT wants everyone to update anyway so there really isn’t a downside.

One thing I kinda understand is users who want to use a more performant browser (Safari really does sip memory compared to Chrome, I've found), but that's a side point. If your company decides these are the browsers we support, then it makes sense and is the right way to go about it.


Keeping it simple usually saves the day.

To use an analogy: back in the days of film cameras, before 1-hour labs, the "craftsman" photographer would carefully frame the shot, setting the exposure, aperture and focus. The most meticulous would take notes in a notebook. There were only 36 frames to a roll of film, and all going well, the photographer had to wait a couple of days to get back the proof sheet. Those were the days when expert photographers were commissioned to take photos for special events, etc.

These days, everybody is an expert photographer, taking thousands of irrelevant photos with their smartphones. The volume of photos has exploded, but the quality of the best has changed minimally (i.e. before being photoshopped, etc.).

The current crop of AI-aided tools is comparable to the early digital cameras in phones.


> These days, everybody is an expert photographer

No, everybody is a photographer, and a mediocre one at that. That's the point.

AI won't turn laypeople into expert programmers. Mediocrity might be just enough for the problem they need to solve, but quality and craftsmanship comes from dedication and hard work, not one-click solutions.


> These days, everybody is an expert photographer

If that were true, there would be no wedding photographers or any sales of high-end DSLRs. The barrier to entry may have fallen, but the need for real experts and tools still exists.

I expect AI will cause a similar shift: a lower barrier to entry, but still requiring the hand of an expert in critical situations.


This fails to take into account a pervasive, global lowering of standards on all fronts.

Who will retain the good taste to keep paying a premium for professional photographers, etc?


We shot our wedding (2021) entirely on film. Some medium format, some 35mm, some instant film (Polaroid and Instax Mini).

Just wait long enough

An excellent analogy. Everyone is an expert at taking the photo, but this does not make them a photographer. Even that expert claim isn't fully true; the phone camera is woefully inadequate in many ways. But the main difference between a photographer and a layman like myself is the ability to produce output strongly linked with clear artistic intent.

Writing code is not the hard part and never has been. The hard part is having a clear understanding of how to solve a specific complex problem and being able to express that intent in code. Getting a decently exposed image was never the hard part.

Finally, there are no scaling issues with cameras: you just make them better until it stops making economic sense. This is not true with code. To make LLMs better, good human-made code is needed for training; better LLMs lead to less human-made code being available. This means there's not exponential growth in quality but an S-curve with a balance point. I'd say we are already there: innovation is shifting from the models to the ways of harnessing the models.


And you could say similar for the transition between painting and photography.

ETA: It's interesting how the bottleneck may reveal the real skill in the thing. Architecting the code. Having an eye for interestingness in creating an image / painting of something, etc.


The way some people wield LLMs, etc. is like using a chainsaw to cut a dovetail because it's faster.

You're definitely going to get people using LLMs running on 8x $50K GPUs in a datacenter to do the job of a bash script.
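
For contrast, the "bash script" alternative being alluded to might look like this sketch: plain git, no LLM. The file name and message convention are made up for illustration, and it runs in a throwaway repo so it's self-contained.

```shell
# Hypothetical sketch: draft a commit subject from the staged file list
# using plain git, no LLM involved. Uses a throwaway repo so the
# example can run anywhere.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo 'hello' > notes.txt
git add notes.txt
# Build a one-line subject from whatever is actually staged.
subject="update: $(git diff --staged --name-only | paste -sd, -)"
git commit -q -m "$subject"
git log -1 --pretty=%s   # prints: update: notes.txt
```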

I already see people using an agent to write a git commit

What's wrong with that? The agent session had all the business context, knows what changed, and how we verified it. It takes 5s to turn that into a PR description vs 10-100x that by hand.

Because it's not perfect and it still fabricates things from time to time.

I have coworkers who do this and it sucks to be on the receiving end of. It means I now need to read every commit message with skepticism.

It's an example of using AI to save energy for yourself while simultaneously increasing the energy expenditure of your coworkers.


100 × 5s is nearly 10 minutes. If it takes 10 minutes to write a PR description, there may be a "skill issue". The bottom end of that range, 1-2 minutes, makes more sense.

How much productivity do we really need? Even at senior dev payscale, 2 minutes is like a dollar. The tokens and calls involved in a 5s commit could close in on 10¢, depending on your contract, the model, etc., and that's at today's costs. And remember that my salary is on top of the rates for the LLM, so if the 5s response takes 10s for me to prompt, that's 15s (10 for me, 5 for the LLM) that the boss is paying for.
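
As a rough back-of-envelope check on those numbers (the ~$30/hr rate and 10¢ token cost are assumptions derived from the comment's own figures, not measured data):

```python
# Back-of-envelope comparison of a hand-written vs. LLM-assisted
# PR description. All rates are hypothetical, taken from the rough
# figures above: ~$30/hr of dev time ("2 minutes is like a dollar")
# and ~10 cents of tokens per call.
DEV_RATE_PER_SEC = 30 / 3600                 # senior dev rate, in $/s

hand_written = 120 * DEV_RATE_PER_SEC        # ~2 min by hand
llm_assisted = 15 * DEV_RATE_PER_SEC + 0.10  # 15 s of dev time + tokens

print(f"hand-written: ${hand_written:.2f}")  # hand-written: $1.00
print(f"llm-assisted: ${llm_assisted:.2f}")  # llm-assisted: $0.23
```

So per commit the LLM route does come out cheaper in raw dollars; the comment's real objections are the API-call quota and the downstream verification cost, which this arithmetic doesn't capture.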

This starts to feel like a billionaire eating ramen noodles just so he can reach his second billion dollars.

Where I work our contract limits API calls, so doing this could result in not being able to use the model when I need it later for something more sophisticated (planning, debugging etc.) than using tooling I'm paid to already know.


I'm not even talking about the description, but "commit this to git with description X" type prompts.

I figured there must be a mix of dovetails and chainsawing. Beautiful:

https://dovetailcabintools.com/videos


The void isn't about AI. It is the ever-present machination of management to extract maximum profit and reduce expenses. AI is simply the latest fad by which to lay off personnel. Technical debt can be deferred by focusing on making this quarter's profit forecasts.

In my experience, every job is a compromise between earnings (or even just having a job) and personal values.


How about you put your contact details into your profile, so that an interested developer can make contact, etc?

I've worked on construction software systems. Wouldn't mind an off-line chat.


Added to my profile; please reach out.

I thought I read somewhere that Z CPUs run at 5 GHz?


ISAs fail to gain traction when the sufficiently smart compilers don't eventuate.

x86-64 is a dog's breakfast of features, but due to its widespread use, compiler writers make the effort to create compilers that optimize for its quirks.

Itanium's hardware designers expected compiler writers to cater to its unique design. But Intel is a semiconductor company: as good as some of their compilers are, internally they invested more in their biggest seller, and Itanium never got the level of compiler support that was anticipated at the outset.


I am a firm believer that if AMD hadn't been in a position to come up with the AMD64 architecture, those Itanium issues would eventually have been sorted out. Windows XP was already there, and there was no other way forward for 64-bit.


It has never happened that a compiler was able to do static scheduling of general purpose instructions over the long term.

Every CPU changes the cycles it takes for many instructions, adds new instructions etc.

Out-of-order execution is a huge dividing line in performance for a reason. The CPU itself needs to figure these things out to hide memory and cache latency, and to handle pipelining, prefetching and all that stuff.


I didn't say that; I said that I am a firm believer that Itanium would have prevailed without AMD being able to push their AMD64 alternative.

Maybe compilers would have got better, maybe Itanium would have needed some redesign; after all, it isn't as if Raptor Lake Refresh's execution units are the same as a Xeon Nocona's, yet both execute x64 instructions.


I don't know anything about Itanium in particular, but AMD's NPU uses a VLIW architecture and they had to break backwards compatibility in the ISA for the second generation NPU (XDNA2) to get better performance.


+1 -- misinformation is best corrected quickly. If not, AI will propagate it and many will believe the erroneous information. I guess that would be viral hallucinations.


One can quickly correct misinformation without being rude. It's not hard, and does not lessen the impact of the correction to do so. There's no reason to tolerate the kind of rudeness the parent post exhibits.


The military is owned by the MIC. Trump is merely following the orders from the Epstein era buddies. Such beautiful guys, those bankers. </sarcasm>

