Hacker News | markjgx's comments

If I had to guess why HN is so attracted to Rust, there are a few things that compound on each other.

Rust occupies a point in the design space that no other language does: statically verified memory safety, strong concurrency guarantees, a world-class toolchain, and compilation to native machine code. Before Rust you always had to sacrifice at least one of these. Go gives you safety and ergonomics, but you're garbage collected and limited in expressiveness. C++ gives you performance, but safety is entirely your problem. Rust is the first mainstream language to credibly offer all of it.

The toolchain is the part people underestimate. As someone coming from C++, I cannot overstate how much this matters. Our toolchain is literally cobbled together from ancient runes. Package management is strange and archaic: you can't just add a package to your project; there are stacks of contrived and obscure rules that must be followed. The language itself is a hodgepodge of inconsistencies layered on each other over decades. Some codebases are hundreds of macro incantations deep. Some prefer overload maxxing, default-constructor shenanigans, uninitialized memory. Macros aren't even part of the language proper; they're handled by a file preprocessor glued on from the side. Headers and implementations are stitched together. System dependency management is hell on earth. Build systems are fragmented: CMake vs Meson vs Bazel vs Make vs Ninja, pick your poison. I could keep going.

Meanwhile Rust just works. You add a package in a few seconds. You can cross-compile to a long list of targets out of the box, including the web via WebAssembly. Type references just work; there's nothing to forward declare. Cargo is your build system, package manager, test runner, doc generator, and publisher in one coherent tool. The entire category of bikeshedding that has plagued C++ for decades just doesn't exist. It's heaven on earth.
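For anyone who hasn't tried it, here's roughly what a whole cross-platform project manifest looks like (crate choices here are just illustrative, not a recommendation):

```toml
[package]
name = "hello"
version = "0.1.0"
edition = "2021"

[dependencies]
# Added in one step with `cargo add serde --features derive`
serde = { version = "1", features = ["derive"] }

# Platform-specific dependencies live in the same file
[target.'cfg(windows)'.dependencies]
windows-sys = "0.52"
```

`cargo build`, `cargo test`, `cargo doc`, and `cargo publish` all read this one file, and targeting the web is just `rustup target add wasm32-unknown-unknown` followed by `cargo build --target wasm32-unknown-unknown`.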


I was wondering how the JVM and V8 stack up. Do you have a source for that claim? Genuinely curious.

Coming from the game dev world, I’ve grown more and more convinced that managed languages are the right move for most code. My reasoning is simple: most game developers don’t have the time or patience to deeply understand allocation strategy, span usage, and memory access patterns, even though those are some of the most performance-critical and time-consuming parts of programming to get right.

Managed languages hide a lot of that complexity. Instead of explaining to someone, “you were supposed to use this specialized allocator for your array and make sure your functions were array-view compatible”—something that’s notoriously tedious to guarantee in game engines given how few developers even think about array views—you just let developers write code and most of those problems go away.
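To make the "array view" point concrete in unmanaged terms, here's a minimal Rust sketch (the function and names are made up for illustration): a function that takes a slice is storage-agnostic, which is exactly the property that's tedious to enforce across a whole engine by hand.

```rust
// Taking a slice (&[f32]) instead of a concrete Vec means the caller can pass
// a view into any backing storage -- a heap Vec, a stack array, or a chunk of
// an arena-allocated buffer -- without copying. This is the "array-view
// compatible" property the parent is describing.
fn average(samples: &[f32]) -> f32 {
    if samples.is_empty() {
        return 0.0;
    }
    samples.iter().sum::<f32>() / samples.len() as f32
}

fn main() {
    let heap: Vec<f32> = vec![1.0, 2.0, 3.0];
    let stack: [f32; 3] = [1.0, 2.0, 3.0];
    // Two different allocation strategies satisfy the same interface.
    assert_eq!(average(&heap), 2.0);
    assert_eq!(average(&stack), 2.0);
    println!("ok");
}
```

A managed runtime makes this kind of decision (and the allocator behind it) mostly invisible, which is the parent's point.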

I’m not saying everything should be managed. Core engine code should still live in the predictable, statically compiled world. But history shows it can work: projects like Jak and Daxter were written primarily in a custom LISPy scripting language, and even Ryujinx (RIP), the excellent Nintendo Switch emulator, is written entirely in C#.

Another strong technical reason is that managed JIT languages can profile at runtime and keep optimizing call sites based on actual usage patterns. Normally, developers would have to do this by hand or rely on PGO, which works but is painful to set up.
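For reference, this is roughly what the by-hand PGO loop looks like with rustc/LLVM (binary name and paths are placeholders; assumes LLVM profiling tools are installed):

```sh
# 1. Build with profile instrumentation
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release

# 2. Run the binary on a representative workload (writes raw profiles)
./target/release/mygame

# 3. Merge the raw profiles into one file
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data

# 4. Rebuild, letting the compiler use the observed profile
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release
```

A JIT effectively runs steps 2-4 continuously at run time, with no manual workload curation.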

Industry standards make this harder to adopt since platforms like Sony still block JIT, but I think this is the direction we should be moving.


> Do you have a source for that claim? Genuinely curious.

Hmm, no real source for the claim, just a general interest in the two VMs: the JVM for work, and V8 partially for work. AFAIK those are two of the VMs receiving the most research investment, because of how they're positioned.

If you want more general information on V8, I suggest reading the TurboFan design docs [1].

The JVM first started doing a lot of these optimizations with HotSpot. Google poached engineers who had worked on HotSpot and put them to work on V8. That's why the two VMs tend to share similar optimizations.

> I’ve grown more and more convinced that managed languages are the right move for most code.

I tend to agree, to an extent.

The JVM is very fast, but it's also memory hungry and doesn't easily give back the memory it claims. That's due to the nature of the GC algorithms it employs and some historical constraints that bloat object size. One thing you get out of non-GC'd languages is much lower memory usage and much better live-memory density (when done correctly).

[1] https://docs.google.com/presentation/d/1sOEF4MlF7LeO7uq-uThJ...


> It's been flawless, more battery and memory efficient than Chrome

Is that actually true?


It was true for me, specifically for my workflow, the websites I use and how I leave some specific tabs in the background.

I used Chrome for many years before Firefox, and it always prioritized JS responsiveness even when the app was in the background and not needed, so it consumed CPU cycles and battery power needlessly. I see now that Chrome enables a Low Power mode by default on battery, and it's unusable, as scrolling gets janky. I don't know if the overall experience on Chrome has gotten better in the last year.

Not sure what's different about memory, though, but Chrome always looked like a memory hog when I tested both browsers side by side on the same set of websites with the same few extensions. Could be that it just caches more and that's what benefits responsiveness.


True for me too, much more memory efficient, especially with content heavy websites.


"Surfer: The World's First Digital Footprint Exporter" is dubious—it's clearly not the first. Kicking off with such a bold claim while only supporting seven major platforms? A scraper like this is only valuable if it has hundreds of integrations; the more niche, the better. The idea is great, but this needs a lot more time in the oven.

I would prefer a CLI tool with partial-gather support: something I could easily set up to run on a cheap instance somewhere, have it scrape all my data continuously at set intervals, and then give me the data in the most readable format possible through an easy access path. I've been thinking of making something like that, but with https://github.com/microsoft/graphrag at the center of it: a continuously rebuilt GraphRAG of all your data.


Take a look at https://github.com/karlicoss/HPI

It builds an entire ecosystem around your data, where everything is programmatic rather than just dumped text files. The point of HPI is to build your own stuff on top of it, and it all integrates seamlessly into one Python package.

The next stop after Karlicoss is https://github.com/seanbreckenridge/HPI_API which creates a REST API on top of your HPI without any additional configuration.

If you want to get fancier / antithetical to HPI, you can use https://github.com/hpi/authenticated_hpi_api or https://github.com/hpi/hpi-graph so you can theoretically expose it to the web (I'm squatting the HPI org; I'm not the creator of HPI). I made the authentication method JWTs, so you can mint JWTs that give access to only certain services' data. (Beware: hpi-graph is very out of date and I haven't touched it lately, but my HPI stuff has been chugging away downloading data.)

Some of the /hpi stuff I made is a bit of a mish-mash because it was ripped out of a project I was making, so you'll see references to "Archivist" or things that aren't local-first and depend on Vercel applications.


The number of built-in platforms isn't necessarily the problem. The best systems are the ones that establish a plugin ecosystem.


While I agree that it's not the first, I think it's unfair to say that it's not valuable without hundreds of integrations.


Yeah, it was honestly more of a marketing statement lol, but we're removing it for sure. Adding daily/interval exporting is one of our top priorities right now, and after that and making the scraping more reliable, we'll add something similar to GraphRAG. Curious to hear what other integrations you'd want built into this system.


Some players will also be convinced that you're cheating if you play the game really, really well, and they will get upset. CS:GO (now CS2) has a fascinating way of determining whether someone is cheating: an ML-based heuristic, constantly being retrained, that can accurately judge whether someone is actually cheating based on replays from its Overwatch (not the game) review system. https://www.youtube.com/watch?v=kTiP0zKF9bc


> Managing build configurations...

In terms of package management, you can apply rules to which crates you include, down to specific platform constraints:

  [target.'cfg(target_os = "linux")'.dependencies]
  nix = "0.5"
On the code side it's pretty much the same as in C++: you have a module that defines an interface and per-platform implementations that are compiled in depending on a configuration-conditional check, the #[cfg(target_os = "linux")] attribute.

https://github.com/tokio-rs/mio/blob/c6b5f13adf67483d927b176...
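A hypothetical stripped-down version of that pattern (the mio code linked above does the real thing; the backend names here are just placeholders):

```rust
// One public interface, with the implementation selected per platform
// at compile time via #[cfg] attributes.
#[cfg(target_os = "linux")]
mod platform {
    pub fn event_backend() -> &'static str {
        "epoll"
    }
}

#[cfg(target_os = "windows")]
mod platform {
    pub fn event_backend() -> &'static str {
        "iocp"
    }
}

#[cfg(not(any(target_os = "linux", target_os = "windows")))]
mod platform {
    pub fn event_backend() -> &'static str {
        "kqueue-or-other"
    }
}

// Callers never mention the OS; they just use `platform::`.
fn main() {
    println!("using {}", platform::event_backend());
}
```

Exactly one `platform` module survives compilation on any given target, so there's no runtime dispatch and no linker tricks.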


Glad this was flagged. A lot of people have the misconception that managed languages are slow compared to your regular ol' binary program; these days that couldn't be further from the truth. In traditional high-performance C/C++ development you have to manually split your code into hot and cold paths, and static optimization can only go so far.

Do you want to inline this function in your loop? Yes and no: inlining removes call overhead, but it might take up valuable registers in the loop, increasing register pressure. Time to pull out the profiler and experiment, wasting your precious time.

Managed languages have the advantage of knowing the landscape of your program exactly: that additional level of managed overhead helps the VM automatically split your code into hot and cold paths, and having access to your program's runtime heuristics allows it to re-JIT your hot paths, inline certain call sites on the fly, etc.
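In AOT land you end up writing those decisions down by hand. A sketch of what that looks like in Rust (the attribute placement here is illustrative, not tuned):

```rust
// Hints an ahead-of-time compiler has to be told explicitly;
// a JIT infers the same hot/cold split from observed execution counts.
#[inline(always)]
fn hot_accumulate(acc: u64, x: u64) -> u64 {
    // Hot path: tiny, called every iteration, worth inlining.
    acc.wrapping_add(x * 2)
}

#[cold]
#[inline(never)]
fn cold_report_overflow(i: usize) -> u64 {
    // Cold path: kept out of line so it doesn't pollute the
    // instruction cache or the loop's register allocation.
    eprintln!("overflow at element {i}");
    0
}

fn sum_doubled(xs: &[u64]) -> u64 {
    let mut acc = 0u64;
    for (i, &x) in xs.iter().enumerate() {
        match x.checked_mul(2) {
            Some(_) => acc = hot_accumulate(acc, x),
            None => return cold_report_overflow(i),
        }
    }
    acc
}

fn main() {
    assert_eq!(sum_doubled(&[1, 2, 3]), 12);
    println!("ok");
}
```

The JIT equivalent is that the VM watches which branch actually fires and recompiles accordingly, with no annotations from the developer.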


hckrnews.com + DarkReader. DarkReader is such a good extension, couldn't live without it!


"and it looks like they've finally improved on what was a terrible, clunky, extremely dated UI" doesn't hold true; it's mostly the same UI with a dark reskin. Certain editor hot paths have been reworked, and that's about it. The editor is almost exclusively written in Slate, Epic's in-house windowing framework/general GUI module. Slate has a pretty interesting nested macro system; it's most definitely not data-driven and pretty much as hard-coded as it gets, so redesigning the editor for real would be difficult, to say the least. In reality, most experienced developers don't want a new design. They're happy with the workflow they have, and I have to agree with them. I'm generally happy with the "if it ain't broke, don't fix it" reskin decision.


Hey there, this looks great. I was wondering: why DeepSpeech 0.6? Why not the latest version, 0.9?


I need to cycle back and update voice2json. Rhasspy (the full voice assistant) supports DeepSpeech 0.9.3.


Awesome, thanks.

