Hacker News | hackit2's comments

I spend more time correcting LLMs or agentic systems than I would just learning the domain and doing the coding myself. I mainly leave LLMs the boring work of writing tedious, repetitive code.

If I give it anything in a domain I'm not an expert on, it will make a mess of things.


Yeah the old adage "what you put in is what you get out" is highly relevant here.

Admittedly I'm knowledgeable in most of the domains I use LLMs for, but even so, my prompts are much longer now than they used to be.

LLMs are token-happy, especially Claude, so if you give one a short one- or two-sentence prompt, your results will be wildly variable.

I now spend a lot of mental energy on my prompting, and resist the urge to use less-than-professional language.

Instead of "build me an app to track fitness" it's more like:

> "We're building a companion app for novice barbell users, roughly inspired by the book 'Starting Strength.' The app should be entirely local, with no back-end. We're focusing on iOS, and want to use SwiftUI. Users should [..] Given this high-level description, let's draft a high-level design doc, including implementation decisions, open questions, etc. Before writing any code, we'll review and iterate on this spec."

I've found success in this method for building apps/tools in languages I'm not proficient in (Rust, Swift, etc.).


Yup. It makes me think that the whole bubble/marketing story, that AI is going to revolutionize business and managers can fire or make redundant 80% of their developers because a single Claude subscription replaces them, is hyperbolic and very short-sighted. Even from a business standpoint, most of the cost of running a business isn't in the people but in marketing and material costs; developers are probably the least costly part of a business. Developers have such a high ROI that it is silly to argue they are a significant cost factor for running your business.

That being said, it may get to that stage. However, there are still a lot of growing pains ahead for LLMs/AI before they reach that point, if they ever do.


That can be so far from the truth it hurts to think about. Governments have passed laws mandating that businesses legally comply with DOJ or government investigations into persons of interest; otherwise they will be blocked in those countries. No users = no money. Most governments consider that they're extending you the privilege of doing business with their citizens, and by granting you those rights they burden you with complying with the country's laws, security requirements, and/or audits.


I'm not concerned; they're accelerating research and development into hardware and more optimal models. People forget that you can locally host some of the early models, quantized to 4 bits, with reasonable inference speed on a 4080 and 64 GB of RAM. Tools are released daily that are simple click-and-run, with little hassle beyond downloading the model, and you're off and running.

Yes, there is a mad dash by Google, Oracle, Microsoft, Meta, and China not to cede their position to each other. It actually isn't about who will buy or pay for the service; it's more a strategic business position, using their massive cash reserves to obtain critical mass in a new market. The users right now are insignificant to that goal; they probably aren't even given a second thought.


I've always asked managers: can you kindly disclose all confidential business information? They obviously respond with condescending remarks. Then I respond: how am I going to give you an answer without knowing how the business runs and operates? You can go away and figure out what will work for the business, then delegate what you want me to do; that is the reason you pay me.


Not surprised. I work in academia, and there is a push from the business side to start marking essays and delivering lectures with ChatGPT/AI.

I have my own personal reservations about it all.


I predict the future of the web will be no-bots-allowed spaces, a bit like Discord is right now. Once you know what you are reading was created by a bot, it loses all its appeal, except when it's purely informational.


I recently went down that rabbit hole; just look up how often embedded devices use fixed-point arithmetic to compensate for the lack of an FPU on the chip.
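The trick is simple enough to sketch in a few lines. Here's a minimal illustration (in Python, emulating what a chip without an FPU does in pure integer registers), using the common Q16.16 format as an example; the format choice is mine, not from any particular chip:

```python
# Q16.16 fixed-point: 16 integer bits, 16 fractional bits, all stored
# in a plain integer. A chip without an FPU stores 1.5 as
# 1.5 * 2**16 = 98304 and does only integer adds, multiplies, shifts.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 65536, the fixed-point representation of 1.0

def to_fixed(x: float) -> int:
    """Encode a real number as a Q16.16 integer."""
    return int(round(x * ONE))

def to_float(f: int) -> float:
    """Decode a Q16.16 integer back to a real number."""
    return f / ONE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values: the double-width product carries
    32 fractional bits, so shift back down by 16."""
    return (a * b) >> FRAC_BITS

a = to_fixed(1.5)   # 98304
b = to_fixed(2.25)  # 147456
print(to_float(fixed_mul(a, b)))  # 3.375
```

Addition and subtraction work on the raw integers directly; only multiply and divide need the corrective shift.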


It isn't that hard. I'm currently reverse engineering an old flight simulator game called A-10 Cuba. I had to teach myself x86 assembly and understand basic calling conventions, then C++ vtables, struct alignment, and struct layout. You do need this basic understanding of the core fundamentals to help you along when the tools you use, IDA or Ghidra, turn the assembly code back into C pseudocode.

So there is a big hurdle to get over in the initial stages, but you soon find out that a lot of the higher-level code structure/scaffolding isn't wiped out by the compiler. For example, the generated assembly very closely mirrors the C/C++ function boundaries. This lets you infer the overall original code structure/layout basically from the call chain, and then you can manually step through and figure out what the original programmer was trying to achieve. The order of execution does get shuffled by the compiler, but it isn't that bad.

In my project with A-10 Cuba, I successfully reverse engineered its file format, the overall module layout, and the engine and rendering engine during my three-week break. I still need to work out the AI logic and mission design, but one thing builds on another. What do I mean by that? Well, when you first start you have no types, no structs. So in the first days you think you're making absolutely no progress, because you're trying to calculate pointer offsets and struct layouts in IDA. I highly recommend Google Gemini or Claude Code for this heavy lifting, because you can get away with a lot by asking it, "for this IDA pseudocode, infer what the struct layout is and tell me what it is doing."

The first stage of getting those first struct layouts is painstaking, but then you can soon branch off one struct, or struct pointer, to another. It becomes a feedback loop, because programmers are lazy, and you soon have a large part of the struct/code-flow layout figured out.

You then take the structs, code flow, and pseudocode and rewrite them for a modern C/C++ compiler until you have a working version.
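One handy way to check a guessed layout is to decode raw bytes from the binary at the inferred offsets and see if the values make sense. A sketch of that idea in Python, using the standard `struct` module; the record shape (an entity with an int32 id, three int32 coordinates, and a uint16 flags field) is a made-up example, not A-10 Cuba's actual format:

```python
import struct

# Hypothetical layout guessed from decompiler pseudocode:
#   int32 id; int32 x, y, z; uint16 flags; 2 bytes padding
# "<" = little-endian with no implicit alignment; "2x" is the
# explicit pad that keeps the record a multiple of 4 bytes.
ENTITY_FMT = "<iiiiH2x"
ENTITY_SIZE = struct.calcsize(ENTITY_FMT)  # 20 bytes

def parse_entity(blob: bytes, offset: int) -> dict:
    """Decode one record at `offset` using the guessed layout."""
    ident, x, y, z, flags = struct.unpack_from(ENTITY_FMT, blob, offset)
    return {"id": ident, "pos": (x, y, z), "flags": flags}

# Fake record standing in for bytes dumped from the game binary:
blob = struct.pack(ENTITY_FMT, 7, 98304, 0, -32768, 0x0003)
print(parse_entity(blob, 0))
```

If the decoded fields come out as plausible ids, coordinates, and flags across many records, the layout guess is probably right; garbage values mean a wrong offset, size, or endianness.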


It would likely be enormously useful here to have a development wiki for these games, where devs and people who played them could document the engines, file formats, save-file formats, compilers, languages, etc.

Old devs of the games could enlighten the game preservation community anonymously.

Game dev was a frontier, hardware-pushing activity even in the DirectX era. The magic bit-shift code from... Quake? was just the tip of the iceberg.


When I started in the gaming industry (Operation Flashpoint), I was coming in near the end of fixed-point arithmetic, when floating-point units were becoming more common as part of the PI/PII era. It was around the mid-1990s; I'm actually in my early 40s now. I knew a little about floating-point units but never really dived into the area. At the time I also wasn't really mathematically minded, and we didn't really have access to the internet in Australia.

A-10 Cuba! came out around 1996, and only now am I getting to know its internal engine. For example, it uses a signed Q15.16 fixed-point representation for its X and Z axes: a raw value of 98,304 (0x00018000) decodes to 1.5 units, where a unit is defined as 6,607.92 feet, which translates to 9,911.88 feet in the game world. Then, to top it off, it uses a different scaling convention for its up axis: the up-axis resolution is 1/10th of a foot, so a raw value of 10,000 is actually 1,000 feet. There are also other discrete exponent scaling factors the game applies to maintain numerical precision and accuracy.
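Decoding those two conventions is mechanical once you know them. A sketch in Python using the constants described above (the function and constant names are mine, not the game's):

```python
Q16_ONE = 1 << 16          # Q15.16: 16 fractional bits, so 65536 = 1.0
FEET_PER_UNIT = 6607.92    # horizontal world scale: one unit in feet
UP_FEET_PER_TICK = 0.1     # up axis stores tenths of a foot

def decode_horizontal(raw: int) -> float:
    """X/Z axes: signed Q15.16 units, then scaled to feet."""
    return (raw / Q16_ONE) * FEET_PER_UNIT

def decode_up(raw: int) -> float:
    """Up axis: plain integer count of tenths of a foot."""
    return raw * UP_FEET_PER_TICK

print(decode_horizontal(0x00018000))  # 98,304 raw -> 1.5 units, approx. 9,911.88 ft
print(decode_up(10_000))              # 10,000 raw -> 1,000 ft
```

Mixing up the two conventions is exactly the kind of bug that shows up as aircraft flying at absurd altitudes, so having both decoders side by side is worth it.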

I had a great time learning all this while reverse engineering the game, and I've also come to learn how common it still is for chips to ship without an FPU, and how commonly those chips perform fixed-point arithmetic instead.


I played A-10 Cuba a ton on an old Pentium 3. Are you doing this in a repo somewhere?


The plan is to put everything into a repo on GitHub, including documentation of the file format and the rewrite of the original code in modern C++ with DirectX or Vulkan. I don't see much point in reverse engineering the old rendering engine; I could do it, but I've got everything I need right now to just rewrite the game in the browser.


Awesome. Good luck and I hope to see it.


In his own words, he already got early feedback from his family.

"I tried the local Iranian market. I showed it to friends, family, and potential clients. Their response: "Nobody in Iran will pay $500/month for this. The Persian language quality isn't perfect. We'll use free ChatGPT instead.""

Which should have been free feedback on the risk vs. reward.


Most of the internet still assumes you're using a 96 DPI monitor. Though the rise of mobile phones has changed that, it seems like the vast majority of content consumed on mobile lends itself to being scaled to any DPI, e.g. movies, pictures, YouTube, etc.


Sad to see what happened to the kid, but to point the finger at a language model is just laughable. It shows a complete breakdown of society and of the caregivers entrusted with responsibility.


People are (rightly) pointing the finger at OpenAI, the organization made up of human beings, all of whom made decisions along the way to release a language model that encouraged a child to attempt and complete suicide.

