This is one of the reasons I find it so silly when people disregard Zig «because it’s just another memory unsafe language»: There’s plenty of innovation within Zig, especially related to comptime and metaprogramming. I really hope other languages are paying attention and steal some of these ideas.
«inline else» is also a very powerful tool to easily abstract away code with no runtime cost.
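For readers who haven't seen the construct, here is a minimal sketch (the `Animal`/`Dog`/`Cat` names are invented for illustration, not from the article) of the kind of thing `inline else` enables: dispatching to a method on every variant of a tagged union without writing a prong per variant, with the dispatch resolved at compile time.

const std = @import("std");

const Dog = struct {
    pub fn speak(_: Dog) void {
        std.debug.print("woof\n", .{});
    }
};

const Cat = struct {
    pub fn speak(_: Cat) void {
        std.debug.print("meow\n", .{});
    }
};

const Animal = union(enum) {
    dog: Dog,
    cat: Cat,

    pub fn speak(self: Animal) void {
        switch (self) {
            // `inline else` generates one prong per variant at compile time;
            // in each generated prong `impl` has the concrete payload type,
            // so the call is resolved statically (no vtable, no function pointers).
            inline else => |impl| impl.speak(),
        }
    }
};

pub fn main() void {
    const pets = [_]Animal{ .{ .dog = .{} }, .{ .cat = .{} } };
    for (pets) |pet| pet.speak();
}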
What I’ve seen isn’t people disregarding Zig because it’s just another memory-unsafe language, but rather disqualifying Zig because it’s memory-unsafe, and they don’t want to deal with that, even if some other aspects of the language are rather interesting and compelling. But once you’re sold on memory safety, it’s hard to go back.
This is really the crux of the argument. I absolutely love the Rust compiler, for example; going back to Zig would feel like a regression to me. There is a whole class of bugs that my brain now assumes the compiler will handle for me.
The problem is, like they say the stock market has predicted nine of the last five recessions, the Rust compiler stops nine of every five memory safety issues. Put another way, while both Rust and Zig prevent memory safety issues, Zig does it with false negatives while Rust does it with false positives. This is by necessity when using the type system for that job, but it does come at a cost that disqualifies Rust for others...
Nobody knows whether Rust and/or Zig themselves are the future of low-level programming, but I think it's likely that the future of low-level programming is that programmers who prefer one approach would use a Rust-like language, while those who prefer the other approach would use a Zig-like language. It will be interesting to see whether the preferences are evenly split, though, or one of them has a clear majority support.
C++ already illustrates this idea you're talking about, and we know exactly where this goes. Rust's false positives are annoying, so programmers are encouraged to further improve the borrowck and language features to reduce them. But the C++ or Zig false negatives just mean your program malfunctions in unspecified ways and you may not even notice, so programmers are encouraged to introduce more and more such cases to the compiler.
The drift over time is predictable: compared to ten years ago, Rust has fewer false positives and C++ has more false negatives.
You are correct to observe that there is no middle choice here, that's Rice's Theorem, non-trivial semantic correctness is Undecidable. But I would argue we already know what you're calling the "false positive" scenario is also not useful, we're just not at the point where people stop doing it anyway.
> C++ already illustrates this idea you're talking about and we know exactly where this goes.
No, it doesn't. Zig is safer than C++ (and it's much simpler, which also has an effect on correctness).
You're making up some binary distinction and then deciding that, because C++ falls on the same side of it as Zig (except it doesn't, because Zig eliminates out-of-bounds access to the same degree as Rust, not C++), what applies to one must apply to the other. There is simply no justification to make that equivalence.
> There is no middle choice here, that's Rice's Theorem, non-trivial semantic correctness is Undecidable.
That's nothing to do with Rice's theorem. Proving some properties with the type system isn't a general algorithm; it's a proof you have to work for in every program you write individually. There are languages (Idris, ATS) that allow you to prove any correctness property using the type system, with no false positives. It's a matter of the effort required, and there's nothing binary about that.
To get a sense of the theoretical effort (the practical effort is something to be measured empirically, over time) consider the set of all C programs and the effort it would take to rewrite an arbitrary selection of them in Rust (while maintaining similar performance and footprint characteristics). I believe the effort is larger than doing the same to translate a JS program to a Haskell program.
> There is simply no justification to make that equivalence.
I explained in some detail exactly why this equivalence exists. I actually have a small hope that this time there are enough people who think it's a bad idea that we don't have to watch this play out for decades before the realisation sinks in, as we did with C and C++.
Yes it's exactly Rice's Theorem, it's that simple and that drastic. You can choose what to do when you're not sure, but you can't choose (no matter how much effort you imagine applying) to always be sure†, that Undecidability is what Henry Rice proved. The languages you mention choose to treat "not sure" the same as "nope", like Rust does, you apparently prefer languages like Zig or C++ which instead treat "not sure" as "it's fine". I have explained why that's a terrible idea already.
The underlying fault, which is why I'm confident this reproduces, is in humans. To err is human. We are going to make mistakes and under the Rust model we will curse, perhaps blame the compiler, or the machine, and fix our mistake. In C++ or Zig our mistake compiles just fine and now the software is worse.
† For general purpose languages. One clever trick here is that you can just not be a general purpose language. Trivial semantic properties are easily decided, so if your language can make the desired properties trivial then there's no checking and Rice's Theorem doesn't apply. The easy example is, if my language has no looping type features, no recursive calls, nothing like that, all its programs trivially halt - a property we obviously can't decidably check in a general purpose language.
> I explained in some detail exactly why this equivalence exists.
No, you assumed that Zig and C++ are equivalent and concluded that they'll follow a similar trajectory. It's your premise that's unjustified.
A problem you'd have to contend with is that Rust is much more similar to C++ than Zig in multiple respects, which may matter more or less than the level of safety when predicting the language trajectory.
> But you can't choose (no matter how much effort you imagine applying) to always be sure
That is not Rice's theorem. You can certainly choose to prove every program correct. What you cannot do is have a general mechanism that would prove all programs in a certain language correct.
> One clever trick here is that you can just not be a general purpose language.
That's not so much a clever trick as the core of all simple (i.e. non-dependent) type systems. Type-safety in those languages then trivially implies some property, which is an inductive invariant (or composable invariant) that's stronger than some desired property. E.g. in Rust, "borrow/lifetime-safety" is stronger than UAF-safety.
However, because an effort to prove any property must exist, we can find it for some language that trivially offers it by looking at the cost of translating a correct program in some other language that doesn't guarantee the property to one that does.
Bounds safety by default, opt-in nullability with checks enforced by the type system, far less "undefined behaviour", less implicit integer casting (the ergonomics could still use some work here), etc.
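To make a couple of those concrete, here is a small sketch (the `find` function is hypothetical, just for illustration) of opt-in nullability and default bounds checking:

const std = @import("std");

// "No result" is expressed in the return type (?usize), not as -1 or some
// sentinel index the caller might forget to check.
fn find(haystack: []const u8, needle: u8) ?usize {
    for (haystack, 0..) |byte, i| {
        if (byte == needle) return i;
    }
    return null;
}

pub fn main() void {
    const data = "hello";

    // The optional must be unwrapped before the payload can be used;
    // the type system enforces the null check.
    if (find(data, 'l')) |index| {
        // Indexing is bounds-checked in the safe build modes: an
        // out-of-range index panics instead of corrupting memory.
        std.debug.print("found at {d}: {c}\n", .{ index, data[index] });
    } else {
        std.debug.print("not found\n", .{});
    }
}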
This is on top of the cultural part, which has led to idiomatic Zig being less likely to heap allocate in the first place, and more likely to consider ownership in advance. This part shouldn't be underestimated.
You presumably intend "shouldn't be underestimated" rather than "can't be". I agree that culture is crucial, but the technology needs to support that culture and in this respect Zig's technology is lacking. I would love to imagine that the culture drives technology such that Zig will fix the problem before 1.0, but Zig is very much an auteur language like Jai or Odin, Andrew decides and he does not seem to have quite the same outlook so I do not expect that.
> Maybe if someone bends over backwards to rationalize it, but not in any real sense.
In a simple, real sense. Zig prevents out-of-bounds access just as Rust does; C++ doesn't. Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html).
> You can't build RAII and moves into zig.
So RAII is part of the definition of memory safety now?
Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with it, since that's the level of the arguments anyway.
We could, of course, argue over which of Rust, Zig, and C++ offers the best contribution to correctness beyond the sound guarantees they make, except these are empirical arguments with little empirical data to make any determination, which is part of my point.
Software correctness is such a complicated topic and, if anything, it's become more, not less, mysterious over the decades (see Tony Hoare's astonishment that unsound methods have proven more effective than sound methods in many regards). It's now understood to be a complicated game of confidence vs cost that depends on a great many factors. Those who claim to have definitive solutions don't know what they're talking about (or are making unfounded extrapolations).
Unless you actually use the simplicity to apply formal methods, I don't think simplicity makes a language safer. The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.
> Unless you actually use the simplicity to apply formal methods, I don't think simplicity makes a language safer.
That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods guy, although formal methods have come a long way since the seventies, when we thought the point was to prove programs correct).
> The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.
I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all).
> That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods person)
I agree that tests and reviews are somewhat effective. That's not the point. The point is that if you look at the history of programming languages, simplicity in general goes against safety. Simplicity also goes against human understanding of code. C and assembly are extremely simple compared to java, python, C#, typescript etc., yet programs written in C and assembly are much harder for humans to understand. This isn't just a PL thing either. Simplicity is not the same as easy; it often is the opposite.
> I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at al
It's the greatest example of taking a simple language, adding a ton of complexity, and having it become safer. You are right that Zig is simpler and safer, but it's a greenfield language. Otherwise I might as well say Rust is safer than Zig and also more complex. The point is to isolate simplicity as the factor as much as possible.
I would even say that Zig willingly sacrifices safety on the altar of simplicity.
> The point is that if you look at the history of programming languages simplicity in general goes against safety... C and assembly are extremely simple compared to java, python, C#, typescript
But Java and Python are simpler yet safer than C++, so I don't understand what trend you can draw if there are examples in both directions.
> It's the greatest example of taking a simple language, adding a ton of complexity, and having it become safer.
But I didn't mean to imply that's not possible to add safety with complexity. I meant that when the sound guarantees are the same in two languages, then there's an argument to be made that the simpler one would be easier to write more correct programs in. Of course, in this case Zig is not only simpler than C++, but actually offers more sound safety guarantees.
So far I think the adoption in critical infrastructure (Linux, AWS, Windows, etc.) is clearly in Rust's favor, but I agree that something at some point will replace Rust. My belief is that more guardrails will end up winning no matter the language, since the last 50 years of programming have shown us we can't rely on humans to write bug-free code, and it is even worse with LLMs.
Would you be satisfied if there were a static safety checker? (Or if it were a compiler plugin that you trigger by running a slightly different command?) Note that Zig compiles as a single object, so if you import a library and the library author does not do safety checking, your program would still do the safety checking as long as it doesn't cross a C ABI boundary.
In practice, almost all memory-safety-related bugs caught by the Rust compiler are caught by the Zig safe build modes at run time. This is strictly worse in isolation, but when you factor in the fact that the rest of the language is much easier to reason about, the better C interop, the simple yet powerful metaprogramming, and the great built-in testing tools, the tradeoffs start to become a lot more interesting.
Catching at compile time is much better, though. There are plenty of strange situations that can happen which you won't reach at runtime (for example, the odds of running into a tripwire increase over time, some things can only happen after a certain amount of memory fragmentation -- maybe you forgot an errdefer somewhere, etc.).
I think the problem with this attitude is the compiler becomes a middle manager you have to appease rather than a collaborator. Certainly there are advantages to having a manager, but if you go off the beaten track with Rust, you will not have a good time. I write most of my code in Zig these days and I think being able to segfault is a small price to pay to never have to see `Arc<RefCell<Foo<Bar<Whatever>>>>` again.
I view it as a wonderful collaborator: it tells me automatically where my code is wrong, and it gets better with every release; I can't complain really. I think a segfault is a big price to pay, but it depends on the criticality of it, I guess.
I can't imagine writing C++ or C these days without static analysis or the various LLVM sanitizers. I would think the same applies to Zig. Rather than needing these additional tools, Rust gives you most of their benefits in the compiler. Being able to write bugs and have the code run isn't really something to boast about.
I would rather rely on a bunch of sanitizers and static analysis because it is more representative of the core problem I am solving: Producing machine code. If I want Rust to solve these problems for me I now have to write code in the Rust model, which is a layer of indirection that I have found more trouble than it's worth.
You can write rust without over-using traits. Regrettably, many rust libs and domains encourage patterns like that. One of the two biggest drawbacks of the rust ecosystem.
As someone who uses D and has been doing things like what you see in the post for a long time, I wonder why other languages would pay attention to these tricks and steal them when they have been completely ignoring them forever when done in D. Perhaps Zig will make these features more popular, but I'm skeptical.
I was trying to implement this trick in D using basic enum, but couldn't find a solution that works at compile-time, like in Zig. Could you show how to do that?
import std.meta : AliasSeq;

enum E { a, b, c }

// Stubs so the example compiles standalone.
void handleAB() {}
void handleA() {}
void handleB() {}
void handleC() {}

void handle(E e)
{
    // Need label to break out of 'static foreach'
    Lswitch: final switch (e)
    {
        static foreach (ab; AliasSeq!(E.a, E.b))
        {
        case ab:
            handleAB();
            // No comptime switch in D
            static if (ab == E.a)
                handleA();
            else static if (ab == E.b)
                handleB();
            else
                static assert(false, "unreachable");
            break Lswitch;
        }
    case E.c:
        handleC();
        break;
    }
}
I've seen a few new languages come along that were inspired by zig's comptime/metaprogramming in the same language concept.
Zig I think has potential but it hasn't stabilized enough yet for broad adoption. That means it'll be a while before it's built an ecosystem (libraries, engines, etc.) that is useful to developers who don't care about language design.
This perspective that many people take on the memory-safety of Rust seems really
"interesting".
Unfortunately for all fanatics, language really doesn't matter that much.
I have been using KDE for years now and it works perfectly well for me. It has no issues/crashes, it has many features in terms of desktop environment, and also many programs that come with it like a music player, video player, text editor, terminal, etc., and they all work perfectly well for me. Almost all of this is written in C++. No need to mention the classics like Linux, Chromium, etc., which are all written in C++/C.
I use Ghostty which is written in zig, it is amazingly polished and works super well as well.
I have built and used a lot of software written in Rust as well and they worked really well too.
At some point you have to admit that what matters is the people writing the software, the amount of effort that goes into it, etc.; it is not the language.
As far as memory-safety goes, it really isn't close to being the most important thing unless you are writing security critical stuff. Even then just using Rust isn't as good as you might think; I encountered a decent amount of segfaults, random crashes, etc. using very popular Rust libraries as well. In the end you just need to put in the effort.
I'm not saying language doesn't matter but it isn't even close to being the most important thing.
> As far as memory-safety goes, it really isn't close to being the most important thing unless you are writing security critical stuff.
Safety is the selling point of Rust, but it's not the only benefit from a technical point of view.
The language semantics force you to write programs in a way that is most convenient for the optimizing compiler.
Not always, but in many cases, it's likely that a program written in Rust will be highly and deeply optimized. Of course, you can follow the same rules in C or Zig, but you would have to control more things manually, and you'd always have to think about what the compiler is doing under the hood.
It's true that neither safety nor performance are critical for many applications, but from this perspective, you could just use a high-level environment such as the JVM. The JVM is already very safe, just less performant.
Also, treating all languages that don't ensure full memory safety as if they're equally problematic is silly. The reason not ensuring memory safety is bad is that memory unsafety is at the root of bugs that are common, dangerous, and hard to catch. But not all kinds of memory unsafety are equally problematic: Zig does ensure the lack of the most dangerous kind of unsafety (out-of-bounds access) while making the other kind (use-after-free) easier to find.
Treating the distinction between "fully memory safe" and "not fully memory safe" as binary is also silly, not just because of the above, but because no language, not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory safe languages.
Furthermore, Zig has (or intends to have) novel features (among low-level languages) that help reduce bugs beyond those caused by memory unsafety.
If you one day write a blog, I would want to subscribe.
Your writing feels accessible. I find it makes complex topics approachable. Or at least, it gives me a feel of concepts that I would otherwise have no grasp on. Other online writing tends to be permeated by a thick lattice of ideology or hyper-technical arcanery that inhibits understanding.
I interpreted his post as saying it's not binary safe/unsafe, but rather a spectrum, with Java safer than C because of particular features that have pros and cons, not because of a magic free safe/unsafe switch. He's advocating for more nuance, not less.
Right but I think people are disappointed because we finally got a language that has memory safety without GC, so Zig seems like a step backwards. Even if it is much much better than C (clearly), it's hard to get excited about a language that "unsolves" a longstanding problem.
> not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory safe languages.
> I think people are disappointed because we finally got a language that has memory safety without GC, so Zig seems like a step backwards
Memory safety (like soundly ensuring any non-trivial property) must come at a cost (that's just complexity theory). You can pay for it with added footprint (Java) or with added effort (Rust). Some people are disappointed that Zig offers more safety than C++ but less than Rust in exchange for other important benefits, while others are disappointed that the price you have to pay for even more safety in Rust is not a price they're happy to pay.
BTW, many Rust programs do use GC (that's what Rc/Arc are), it's just one that optimises for footprint rather than speed (which is definitely okay when you don't use the GC as much as in Java, but it's not really "without GC", either, when many programs do rely on GC to some extent).
> This is a silly point.
Why? It shows that even those who wish to make the distinction seem binary themselves accept that it isn't, and really believe that it matters just how much risk you take and how much you pay to reduce it.
(You could even point out that memory corruption can occur at the hardware level, so not only is the promise of zero memory corruption not necessarily worth any price, but it is also unattainable, even in principle, and if that were truly the binary line, then all of software is on the same side of it.)
I can't take zig as seriously as rust due to lack of data race safety. There are just too many bugs that can happen when you have threads, share state between those threads and manually manage memory. There are so many bugs I've written because I did this wrong for many years but didn't realize until I wrote rust. I don't trust myself or anyone to get this right.
This post shows how versatile Zig's comptime is not only in terms of expressing what to pre-compute before the program ever runs, but also for doing arbitrary compile time bug-checks like these. At least to me, the former is a really obvious use-case and I have no problem using that to my advantage like that. But I often seem to overlook the latter, even though it could prove really valuable.
It's not an optimization. What gets evaluated via the lazy evaluation is well defined. Control flow which has a value defined at comptime will only evaluate the path taken. In the op example, the block is evaluated twice, once for each enum value, and the inner switch is followed at comptime so only one prong is evaluated.
Nope, this is not relying on optimization, it's just how compile time evaluation works. The language guarantees "folding" here regardless of optimization level in use. The inline keyword used in the original post is not an optimization hint, it does a specific thing. It forces the switch prong to be evaluated for all possible values. This makes the value comptime, which makes it possible to have a comptime unreachable prong when switching on it.
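For readers without the article open, here is a rough reconstruction of the kind of pattern being described, mirroring the D translation earlier in the thread (the `handle*` names are placeholders, not the article's exact code):

const E = enum { a, b, c };

fn handleAB() void {}
fn handleA() void {}
fn handleB() void {}
fn handleC() void {}

fn handle(e: E) void {
    switch (e) {
        // `inline` generates a separate prong for .a and for .b, so within
        // each generated prong `tag` is a comptime-known value.
        inline .a, .b => |tag| {
            handleAB();
            switch (tag) {
                .a => handleA(),
                .b => handleB(),
                // Because `tag` is comptime-known, only the matching prong is
                // analyzed; this prong is never instantiated, so the
                // `comptime unreachable` never fires.
                .c => comptime unreachable,
            }
        },
        .c => handleC(),
    }
}

pub fn main() void {
    handle(.a);
    handle(.c);
}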
There are similarities here to C++ if constexpr and static_assert, if those are familiar to you.
Well, for example you may have some functions which accept types and return types, which are not compatible with some input types, and indicate their incompatibility by raising an error so that compilation fails. If the program actually does not pass some type to such a function that leads to this sort of error, it would seem like a bug for the compiler to choose to evaluate that function with that argument anyway, in the same way that it would be a bug if I had said "template" throughout this comment. And it is not generally regarded as a deficiency in C++ that if the compiler suddenly chose to instantiate every template with every value or type, some of the resulting instantiations would not compile.
The code example will work even if `u` is only known at runtime. That's because the inner switch is not matching on `u`, it's matching on `ab`, which is known at compile time due to the use of `inline`.
That may be confusing, but basically `inline` is generating different code for the branches .a and .b, so in those cases the value of `ab` is known at compile time. So, the inner switch is running at compile time too. In the .a branch it just turns into a call to handle_a(), and in the .b branch it turns into a call to handle_b().
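In other words, the generated code behaves roughly as if the expansion had been written out by hand; a hypothetical desugaring (with `E`, `handle_a`, and `handle_b` standing in for the example's names):

const E = enum { a, b };

fn handle_a() void {}
fn handle_b() void {}

// Hypothetical desugaring: the `inline` prongs expand to one copy per value,
// and in each copy the inner switch on the comptime-known `ab` has already
// been resolved to a direct call.
fn handle(u: E) void {
    switch (u) {
        .a => handle_a(),
        .b => handle_b(),
    }
}

pub fn main() void {
    handle(.a);
    handle(.b);
}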
The problem this is meant to solve is that sometimes a human thinking about the logic of the program can see it is impossible to reach some code (ie it is statically certain) but the language syntax and type system alone would not see the impossibility. So you can help the compiler along.
It is not meant for asserting dynamic “unreachability” (which is more like an assertion than a proof).
I have no idea what that's trying to do. A demonstration that rust is a large language with different dialects! A terse statement with multiple things I don't understand:
- Assigning a const conditionally?
- Naming a const _ ?
- () as a type?
- Assigning a panic to a constant (or variable) ?
To me it might as well be:
fn main() {
    match let {
        if ()::<>unimplemented!() -> else;
    }
}
Sure, because it's compile-time code inside a (semantically) run-time check. In recent Rust versions you can do
fn main() {
    const {
        if false {
            let _: () = panic!();
        }
    }
}
which compiles as expected. (Note that if the binding were `const` instead of `let`, it'd still have failed to compile, because the semantics don't change.)
It's fine that we want a constant, and it's fine that this constant would, when being computed at compile time, panic if false were true, because it is not.
> You presumably intend "shouldn't be underestimated" rather than "can't be".
Good call, I've fixed that.
Maybe if someone bends over backwards to rationalize it, but not in any real sense. Zig doesn't have automatic memory management or move semantics.
In C++ you can put bounds checking in your data structures and it is already in the standard data structures. You can't build RAII and moves into zig.
https://www.youtube.com/watch?v=ZY_Z-aGbYm8
Due diligence every single time after the tenth refactor?
> If you one day write a blog, I would want to subscribe.
I did have one once (https://pron.github.io) but I don't know how accessible it is :) (two post series are book-length)
Yeah, by omitting a large swath of nuance. It reeks of "you can approximate a cow with a sphere the size of Jupiter". It's bafflingly ludicrous.
Any rhetorical device that equates the safety of Java/C# (or any memory-safe Turing-complete language) with that of C is most likely a fallacy.
> not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory safe languages.
This is a silly point.
Sure, but you lose the clarity of errors. The error wasn't in `comptime unreachable` but in `inline .a .b .c`.
https://godbolt.org/z/P1r49nTWo
If a dead code elimination pass didn't remove the 'comptime unreachable' statement, you'll now fail to compile (I expect?)
Doesn't mean it's not useful.
That sounds as bad as relying on undefined behaviour in C.
Which is fine for small inputs and uses, but it's not something that would scale well.
Fails to compile in Rust.
In Zig they have one branch const.
In the Rust example from you, the whole control flow is const, which is not equivalent to Zig. So how do you have non-const branches?