
The point is that Zig basically circumvents the issues surrounding that by introducing "comptime", where you can just write regular Zig to achieve the same things.

The article showcases a nice example of having this very direct power.

But it really comes up more often than you think as soon as you actually have it.

It's easy in Zig to allocate precisely based on computed values and then have the sizes as part of your types etc. It all falls out of some simple ideas and it's all just regular Zig code.

"Types in Zig are values of the type type" from: https://ziglearn.org/chapter-1/

So instead of making it hard to write incorrect programs, Zig makes it easy to write correct programs.



I am certainly sometimes envious of comptime and what it makes practical, but it’s worth noting that it results in dynamically-typed generics, whereas Rust goes for statically-typed generics, which is in keeping with its goals. There are some significant general maintainability improvements in statically-typed generics; when you use dynamic generics, subtle changes in one place can cause obscure compile errors in a completely different and seemingly unrelated place (commonly called post-monomorphisation errors); this doesn’t happen with static generics.

So… I’m not sold on your wording that it’s circumventing issues, as it’s choosing a different set of trade-offs. In shedding types-are-a-language-of-their-own, you also shed confidence about what’s a breaking change, and make call sites more fragile. Decide for yourself whether it’s worth it.


What do you mean by dynamically-typed generics and statically-typed generics here? I've looked up "post-monomorphization errors" and found discussions of assertions about generic types failing because the user passed in types or constants that don't work with the generic code. It seems like Zig libraries can surface those errors in the right place if they put their assertions in the function that returns the type to the user, but in the wrong place if they put them in methods of the type.
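
For example (hypothetical library code), putting the check in the type-returning function makes the error point at the caller's instantiation:

    // Because the assertion sits in the function that returns the type, a bad
    // instantiation is reported where the user writes RingBuffer(u8, 100).
    // Moving the same check into one of the struct's methods would delay the
    // error until that method is first analyzed for the bad instantiation.
    fn RingBuffer(comptime T: type, comptime capacity: usize) type {
        if (capacity == 0 or (capacity & (capacity - 1)) != 0)
            @compileError("RingBuffer capacity must be a power of two");
        return struct {
            items: [capacity]T = undefined,
            head: usize = 0,
            tail: usize = 0,
        };
    }

    comptime {
        _ = RingBuffer(u8, 64); // fine
        // _ = RingBuffer(u8, 100); // compile error reported on this line
    }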

> So… I’m not sold on your wording that it’s circumventing issues, as it’s choosing a different set of trade-offs. In shedding types-are-a-language-of-their-own, you also shed confidence about what’s a breaking change, and make call sites more fragile. Decide for yourself whether it’s worth it.

Client code can just look at all the members of all the structs, so there's not much hope of using compiler-adjacent tooling to enforce that changes can't break any client code.


It’s easiest to see the distinction in otherwise-similar systems, so I’ll choose Rust generics (statically-typed) and C++ templates (dynamically-typed).

In Rust, the generic constraints (the traits that the type must satisfy) are a contract, part of the signature. The caller must satisfy them, and then the callee knows nothing else about the type it has received. Therefore, changes inside the body of the generic method will never† cause any code that uses the function to stop compiling.

In C++, templates don’t have that, so you have to find out some other way what conditions your type must satisfy, and it’s easy to accidentally depend on additional details (since none of it is statically checked). The result is that changes in the template that you thought were harmless can break someone else’s code, somewhere else, that uses your template in ways you didn’t expect.

https://gist.github.com/brendanzab/9220415 has a decent example, though it’s from 2014 and refers to Zero and One traits that were removed from the standard library before Rust 1.0, and the compiler messages would be better today as well.

—⁂—

† In practice there’s at least one way of leaking details, so post-monomorphisation errors that aren’t compiler bugs can actually happen, though it’s very rare: if you return an `impl Trait`, the body leaks whether it implements auto traits like Send.


Is there anything fundamental that prevents using the same language at compile time to generate fully static typing that is then boiled away? I don't know, but it doesn't seem so?


I'm still in the honeymoon phase with this language and learning, but I agree it's a trade-off.

For example, your LSP isn't going to help you as much while you edit code.

However being able to express arbitrary compile time constraints and pre-compute stuff without having to go through a code generation tool is really powerful. You can actually use all the knowledge you have ahead of time as long as you can express it in Zig.
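
A small sketch (my own example) of the precompute side: building a lookup table at compile time, with no separate code generation step:

    const std = @import("std");

    // The table is computed once, at compile time, by ordinary Zig code and
    // ends up as constant data in the binary.
    const squares: [256]u32 = blk: {
        var table: [256]u32 = undefined;
        var i: u32 = 0;
        while (i < table.len) : (i += 1) {
            table[i] = i * i;
        }
        break :blk table;
    };

    pub fn main() void {
        std.debug.print("12 squared is {}\n", .{squares[12]});
    }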

So far it seems like Zig is carving out a very strong niche for itself.


> instead of making it hard to write incorrect programs, Zig makes it easy to write correct programs.

Or maybe rephrasing it (to avoid the word "instead", which may anger Rust supporters):

Rust makes it hard to write incorrect programs; Zig makes it easy to write correct programs.

I think this single sentence captures the philosophical difference between Rust and Zig. And of course there is no right or wrong in philosophy.


I fully agree. It’s an interesting trade-off that is worth thinking about.


The problem is that it ends up being dynamically-typed, like C++ templates. See: https://nitter.net/pcwalton/status/1369114008045772804
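
A tiny sketch (my own, not from the linked thread) of what that looks like in Zig: nothing in the signature says what T must support, so the requirement only shows up when the body is analyzed for a concrete instantiation.

    const std = @import("std");

    fn largest(comptime T: type, values: []const T) T {
        var best = values[0];
        for (values[1..]) |v| {
            // If T doesn't support ">", the error is reported here, inside the
            // generic body, rather than at the signature or the call site.
            if (v > best) best = v;
        }
        return best;
    }

    test "fine for integers, errors inside the body for other types" {
        const xs = [_]u32{ 3, 9, 4 };
        try std.testing.expect(largest(u32, &xs) == 9);
        // const Point = struct { x: i32, y: i32 };
        // _ = largest(Point, &[_]Point{.{ .x = 1, .y = 2 }}); // error points into largest()
    }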

> So instead of making it hard to write incorrect programs, Zig makes it easy to write correct programs.

Well, maybe in theory, but the current state of Zig is that it makes it hard to write programs, however correct, because the compiler keeps crashing ¯\_(ツ)_/¯


What are you talking about? The tagged versions are usually quite stable, and bug-fix follow-ups are planned as well.

If you follow master, you’ll occasionally run into crashes, which is true of any developing language. If you don’t want that, follow tagged versions.


Here is a really good link that shows the power of Zig's comptime:

https://kristoff.it/blog/what-is-zig-comptime/



