In the long run, https://github.com/google/crubit will very likely solve this for Rust, even if it's a bit specific to Google's use cases right now, as per its README.
For many relevant specs you can find "draft versions" on the open web that are essentially the final version without the official stamp, so there isn't that much of a need.
Funny, another commenter on this post was saying the opposite, that Rust was likely being used to just port existing features and that was easier because there were probably good tests for it already.
If you've actually written considerable amounts of Rust and C++, these statistics don't require justification. In my opinion it's completely expected that Rust code is easier to write correctly.
As a relatively novice programmer who's worked in tech for decades, but not as a software developer: I take issue with the idea that you need to write considerable amounts of Rust and C++ for these statistics to be expected. In fact, despite Rust's initial vertical learning curve, I'd say that any junior developer trying to implement anything with any degree of complexity at all in both Rust and C++ would see the benefits.
At the very least, the fact that IDE integration can tell you all kinds of stuff about what you're doing/doing wrong and why accelerates things greatly when you're starting out.
The problem with junior developers is that Rust will be incredibly frustrating to learn by perturbation, because the compiler will reject most random changes to the code. That is the point, of course, but C++ will compile programs which then crash, giving you a very misguided feeling that you’re making progress; and that feeling of progress is very important in the process of gaining new skills.
I don’t see a way around it: programming without garbage collection is hard, and Rust makes that very clear very quickly. That is also the point, but it’s at odds with making the learning curve accessible.
> The problem with junior developers is that Rust will be incredibly frustrating to learn by perturbation
Yes, this is the biggest issue with Rust that I've seen; most languages will let you do something wrong, and then as you learn you get better. Rust will refuse to compile if you're not doing things correctly (and normally I would put 'correctly' in quotes, but correctness in Rust is well defined).
The first time I tried to experiment with learning Rust was a disaster. I just wanted to decode some JSON and filter it, but -- oops! -- I don't own that variable. Okay, well I can pass it somewhere else mutably, right? But then that function does the work and returns something that... what's a lifetime? What's a 'a mean? How do I... screw it, I'll go back to Python.
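For anyone who hasn't hit that wall, here's a rough sketch of the kind of thing I mean (it uses the real serde_json crate, with `serde_json = "1"` in Cargo.toml; the JSON and field names are made up for illustration). Filtering hands you borrowed values straight out of the parsed document, and the compiler makes you decide up front whether to keep borrowing or clone into owned data:

    use serde_json::Value;

    fn main() -> Result<(), serde_json::Error> {
        // Toy input; in my case it was a larger API response.
        let data = r#"[{"name": "a", "count": 3}, {"name": "b", "count": 0}]"#;
        let parsed: Value = serde_json::from_str(data)?;

        // The filtered items are references into `parsed`, not owned values.
        // Returning them from a function means thinking about lifetimes --
        // or calling .cloned() and paying for owned copies instead.
        let nonzero: Vec<&Value> = parsed
            .as_array()
            .map(|items| {
                items
                    .iter()
                    .filter(|item| item["count"].as_i64().unwrap_or(0) > 0)
                    .collect()
            })
            .unwrap_or_default();

        println!("{:?}", nonzero);
        Ok(())
    }

None of this is hard once it clicks, but it's exactly the sort of question Python never makes you answer up front.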
Eventually, after the tooling and the tutorials got better I came back to it and really enjoyed what I've seen so far and even rewrote one of my own personal tools in Rust[1] to experiment with. It's nothing impressive, but it was fun to do.
The logic in my comment wasn't that you need to have written considerable amounts of code to be expecting this, just that not expecting this would make me think you hadn't. If that makes sense.
On your second point, I think IDE integration for C++ is similar to what it is for Rust; it's just that Rust's errors and tooling are a million times better regardless of IDE.
Oh, the more junior the developers, the quicker they will get any benefit. That's common for any language that enforces correctness, but the C++ vs. Rust comparison isn't even fair; C++ is an incredibly hard language to use.
Apple should have modernized ObjC instead of making Swift the lingua franca. Both speed of iteration and flexibility (on which web-stack-rivaling productivity features could have been built) are gone forever.
Swift Concurrency is a tire fire that not even their async-algorithms team can use completely correctly, and useful features like typed throws are left half finished. The enormous effort that the constant further bastardization of Swift takes is at least in part the reason for the sorry state the dev tooling is in. Not even a $4T company can make a reliable SwiftUI preview work in their own IDE. Variadic generics (a seemingly pure compiler feature) crash at runtime if you look at them the wrong way. Actors, the big lighthouse of their structured concurrency, are unusable because calls to them are unordered. They enforce strict concurrency checking now, but the compiler is too dumb to infer common valid send patterns; and their solution to make this abomination work in real codebases? Introduce a default that lets _everything_ in a module run on the main thread.
Swift has so many issues they would honestly be better off just moving to Rust rather than fix Swift. Seriously. The fact that it's so easy to get the compiler to spend exponential time resolving types that it very often just shits the bed and begs you to rewrite your code for it to stand a chance is shameful coming from, as you say, a $4T company. Points to deep problems with Swift.
While the C calling convention continues to rule operating systems and FFIs, I think it’ll continue to limp along. Hopefully one day that can be fixed, it’s annoying that C is what I have to reach for to call SomeLib no matter what language I’m using
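As a rough sketch of what that looks like in practice (Rust here, but the shape is the same from most languages; the C standard library's strlen stands in for "SomeLib" so the example actually links): you declare the C signature by hand and call it in an unsafe block, because the C ABI carries no ownership, lifetime, or thread-safety information.

    use std::os::raw::c_char;

    // Hand-written declaration of the C symbol -- this is what "reaching
    // for C" means, regardless of what language the library was written in.
    extern "C" {
        fn strlen(s: *const c_char) -> usize;
    }

    fn main() {
        let msg = b"hello\0";
        // Unsafe because the C calling convention tells the compiler nothing
        // about who owns the pointer or how long it has to stay valid.
        let len = unsafe { strlen(msg.as_ptr() as *const c_char) };
        println!("{len}");
    }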
I remember that quite well. However, the backlash was very specific; as far as I remember it was never directed at the company as a whole, let alone the person of, say, Eric Schmidt.
Eric Schmidt didn’t present as a creepy weirdo.
He also didn’t make the company a reflection of himself. That kept the glasshole backlash compartmentalized.
Strange things happen when a leader merges the company brand with his personal brand. It can strengthen the company brand (in the case of a plucky can-do technologist), but the company brand starts to get colored by the personality of the person (in the case of a person who goes off the deep end and starts saying weird and inflammatory stuff).
Why do Hunyuan, OpenAI 4o and Qwen get a pass for the octopus test? They don't cover "each tentacle", just some. And Midjourney covers 9 of 8 arms with sock puppets.
Good point. I probably need to adjust the success pass ratios to be a bit stricter, especially as the models get better.
> midjourney covers 9 of 8 arms with sock puppets.
Midjourney is shown as a fail so I'm not sure what your point is. And those don't even look remotely close to sock puppets, they resemble stockings at best.
I think that might be true of the language committee, but there's presumably a huge crowd of people with existing C++ code bases who would like a different path forward than just hoping that the committee changes priorities.
That is what many of us have done moving into managed languages, with native libraries when required to do so.
The remaining people driving where the language goes have other priorities in mind like reflection.
The profiles that were supposed to be so much better than the Safe C++ proposal? None of them made it into C++26, and it remains to be seen whether we will ever see a sensible preview implementation for C++29.
C++26 doesn't have the technology, but it wouldn't matter anyway, because what's crucial about Rust isn't the technology, it's the culture.
If WG21 were handling Rust instead, f64 would implement Ord, and people would just write unsafe blocks with no explanation in the implementation of supposedly "safe" functions. Rust's technology doesn't care, but its culture does.
Beyond that though, the profiles idea is dead in the water because it doesn't deliver composition. Rust's safety composes. Jim's safe Activity crate, Sarah's safe Animals crate and Dave's safe Networking crate compose to let me work with a safe IPv6-capable juggling donkey, even though Jim, Sarah and Dave have never met and had no idea I would try that.
A hypothetical C++ 29 type safe Activity module, combined with a thread safe Animals module, and a resource leak safe Networking module doesn't even get you something that will definitely work, let alone deliver any particular safety.
> If WG21 were handling Rust instead, f64 would implement Ord, and people would just write unsafe blocks with no explanation in the implementation of supposedly "safe" functions. Rust's technology doesn't care, but its culture does.
I'm sure you think this was somehow succinctly making your point, but I can't see any connection at all, so if you did have an actual point you're going to need to explain it.
OK? I don't see how that's connected. It's not controversial that f32 and f64 are partially ordered; the problem in C++ is that the difference between "Partially Ordered" and "Totally Ordered" is semantic, not syntactic, in their language, and all semantic mistakes are just IFNDR, so it's a footgun.
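A minimal sketch of that distinction in today's Rust (standard library only):

    fn main() {
        let mut xs = vec![3.0_f64, f64::NAN, 1.0];

        // xs.sort(); // does not compile: f64 is PartialOrd but not Ord,
        //            // because NaN breaks total ordering.

        // You have to say explicitly how the partial order is handled:
        xs.sort_by(|a, b| a.total_cmp(b)); // IEEE 754 totalOrder; NaN sorts last
        println!("{:?}", xs);
    }

The equivalent mistake in C++ (std::sort over a range containing NaN, compared with operator<) silently violates the comparator requirements and the behavior is undefined; in Rust it simply doesn't compile.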
WebP lossless is close to state of the art and widely available. It's also not widely used. The takeaway seems to be that absolute best performance for lossless compression isn't that important, or at least it won't get you widely adopted.
I don't know that I have ever used JPEG or PNG lossless in practical usage (e.g. I don't think 99.9% of mobile app or web use cases are for lossless). WebP's lossy performance is just not worth it in practice, which is why WebP never took off, IMO.
Are there use cases for lossless other than archival?
I definitely noticed when the Play Store switched to lossy icons. I can still notice it to this day, though they did at least make it harder to notice (it was especially apparent on low-DPI displays). Fortunately, the apps once installed still seem to use lossless icons.
A lot of images should be lossless. Icons/pictograms/emoji, diagrams and line drawings (when rasterized), screenshots, etc. You can sometimes get away with large-resolution lossy for some of these if you scale it down, but that doesn't necessarily translate into a smaller file size than a lossless image at the intended resolution.
There's another problem with lossy images, which is re-encoding. Any app/site that lets you upload/share an image but also insists on re-encoding it can quickly turn it into pixelated mush.
The only downside is that WebP lossless requires an RGB colorspace, so you can't, for example, save YUV frames from a video directly and losslessly. AVIF lossless does support this, though.
That's a critique of `async`, not of `using` though, right? This doesn't seem to make functions more colored than they already are as far as I understand.