Even calling uninitialized data “garbage” is misleading. You might expect that the compiler would just leave out some initialization code and compile the remaining code in the expected way, causing the values to be “whatever was in memory previously”. But no - the compiler can (and absolutely will) optimize by assuming the values are whatever is most convenient for it, even if that would be vanishingly unlikely or even impossible in practice.
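Roughly the kind of function under discussion (my reconstruction from the description; the actual godbolt example linked elsewhere in the thread may differ in names and types):

    struct result { unsigned char a, b; };

    result f(bool x) {
        result r;          // neither field is initialized here
        if (x)
            r.a = 13;      // only a is written on this path
        else
            r.b = 37;      // only b is written on this path
        return r;          // whichever field was not written is copied as-is
    }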
At high enough optimization levels, the function compiles to “mov eax, 9485; ret”, which sets both a=13 and b=37 without testing the condition at all - as if both branches of the test were executed. This is perfectly reasonable because the lack of initialization means the values could already have been set that way (even if unlikely), so the compiler just goes ahead and sets them that way. It’s faster!
The compiler sees that foo can only be assigned in one place (a function that isn't called locally, but could be called from other object files linked into the program) and its address never escapes. Since dereferencing a null pointer is UB, it can legally assume that `*foo` is always 42 and optimizes out the variable entirely.
Compilers can do whatever they want when they see UB, and accessing an unassigned and unassignable (file-local) variable is UB, therefore the compiler can just decide that *foo is in fact always 42, or never 42, or sometimes 42, and all would be just as valid options for the compiler.
(I know I'm just restating the parent comment, but I had to think it through several times before understanding it myself, even after reading that.)
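A minimal sketch of the situation being described (illustrative names, not the original code):

    static int *foo;              // file-local; starts out null, address never escapes

    void set_foo() {              // never called in this file, but another
        static int forty_two = 42;
        foo = &forty_two;         // object file linked into the program could call it
    }

    int read_foo() {
        return *foo;              // if set_foo() never ran, foo is null and this is
                                  // UB, so the compiler may assume set_foo() did
                                  // run and fold this read to 42
    }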
Although it should be noted that that’s not how compilers “reason”.
The way they work things out is to assume no UB happens (because otherwise your program is invalid, and you wouldn't ask to compile an invalid program, would you?) and then work from there.
If I understand it right, in principle the compiler doesn't even need to do that.
It can just leave the result totally uninitialised. That's because both code paths have undefined behaviour: whichever of result.x or result.y is not set is still copied at "return result" which is undefined behaviour, so the overall function has undefined behaviour either way.
It could even just replace the function body with abort(), or omit the implementation entirely (even the ret instruction, allowing execution to just fall through to whatever memory happens to follow). Whether any compiler does that in practice is another matter.
> It can just leave the result totally uninitialised. That's because both code paths have undefined behaviour: whichever of result.x or result.y is not set is still copied at "return result" which is undefined behaviour, so the overall function has undefined behaviour either way.
That is incorrect, per the resolution of DR222 (partially initialized structures) at WG14:
> This DR asks the question of whether or not struct assignment is well defined when the source of the assignment is a struct, some of whose members have not been given a value. There was consensus that this should be well defined because of common usage, including the standard-specified structure struct tm.
As long as the caller doesn't read an uninitialised member, it's completely fine.
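A sketch of the pattern that resolution blesses (illustrative names; the DR itself is about C, and the struct here is a plain C-style aggregate):

    struct Pair { int x; int y; };

    Pair make() {
        Pair p;
        p.x = 1;           // p.y is deliberately left uninitialized
        return p;          // copying the partially initialized struct is fine
    }

    int use() {
        return make().x;   // fine: only the initialized member is ever read
    }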
Things can get even wonkier if the compiler keeps the values in registers, as two consecutive reads could use different registers depending, as you say, on what's most convenient for optimisation (register allocation, code density).
Even the notion that uninitialized memory contains values is kind of dangerous. Once you access them you can't reason about what's going to happen at all. Behaviour can happen that's not self-consistent with any value at all: https://godbolt.org/z/adsP4sxMT
Is that an old godbolt link? Because I noticed it was an old version of Clang, and I tried switching to the latest Clang, which is hilarious: https://godbolt.org/z/fra6fWexM
Oh yeah the classic Clang behaviour of “just stop codegen at UB”. If you look at the assembly, the main function just ends after the call to endl (right before where the if test should go); the program will run off the end of main and execute whatever nonsense is after it in memory as instructions. In this case I guess it calls main again (??) and then runs off into the woods and crashes.
I’ve never understood this behaviour from clang. At least stick a trap at the end so the program aborts instead of just executing random instructions?
The x and y values are funny too, because clang doesn’t even bother loading anything into esi for operator<<(unsigned int), so you get whatever the previous call left behind in that register. This means there’s no x or y variable at all, even though they’re nominally being “printed out”.
Practically speaking, I’d argue that a compiler assuming uninitialized stack or heap memory is always equal to some arbitrary convenient constant is obviously incorrect, actively harmful, and benefits no one.
In this example, the human author clearly intended mutual exclusivity in the condition branches, and this optimization would in fact destroy that assumption. That said, (a) human intentions are not evidence of foolproof programming logic, and often miscalculate state, and (b) the author could possibly catch most or all errors here when compiling without optimizations during the debugging phase.
The compiler is the arbiter of what’s what (as long as it does not run afoul of the CPU itself).
The memory being uninitialised means reading it is illegal for the writer of the program. The compiler can write to it if that suits it, the program can’t see the difference without UB.
In fact the compiler can also read from it, because it knows that it has in fact initialised that memory. And the compiler is not writing a C program and is thus not bound by the strictures of the C abstract machine anyway.
Because a could be 13 even if x is false: the struct isn't initialised, so there is no defined behaviour for what the initial values of a and b need to be.
Same for b. If x is true, b could be 37 no matter how unlikely that is.
I have bumped into this myself, too. It's really annoying. The biggest footgun isn't even discussed explicitly, and it might be how the error got introduced: when the struct goes from POD to non-POD or vice versa, the rules change, so a completely innocent change, like adding a string field, can suddenly create undefined behaviour in unrelated code that was correct previously.
Not the OP, but note that adding a std::string to a POD type makes it non-POD. If you were doing something like using malloc() to make the struct (not recommended in C++!), then suddenly your std::string is uninitialized, and touching that object will be instant UB. Uninitialized primitives are benign unless read, but uninitialized objects are extremely dangerous.
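A sketch of that trap (illustrative names): the same allocation pattern that is merely risky for a trivial struct becomes instant UB once a std::string member is added.

    #include <cstdlib>
    #include <string>

    struct Result {
        bool succeeded;
        bool error;
        // std::string message;   // adding this makes Result non-trivial
    };

    int main() {
        // Risky-but-common pattern: allocating the struct with malloc.
        Result* r = static_cast<Result*>(std::malloc(sizeof(Result)));
        r->succeeded = true;       // members start indeterminate, but this is
                                   // workable while Result stays trivial
        // With the string member added, no constructor ever runs for *r, so
        // touching r->message (or destroying it) would be undefined behaviour.
        std::free(r);
    }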
Even if you fixed the uninitialized data problem, this code is still a bug waiting to happen. It should be a single bool in the struct to handle the state for the function, as there are only two states that actually make sense.
succeeded = true; error = true;   // This makes no sense
succeeded = false; error = false; // This makes no sense
Otherwise, if I'm checking a response, I am generally going to check just "succeeded" or "error" and miss one of the two above states that "shouldn't happen"; or, if I check both, it's a lot of awkward extra code and I'm left trying to output an error for a state that, again, makes no sense.
It often happens when the "error" field is not a bool but a string, aka error_message. It could be an empty string, or _null_, or even _undefined_ if we're in JS.
Then the obvious question is why we need _succeeded_ at all, if we can always check _error_. Sometimes it can be useful when the server itself doesn't know whether the operation succeeded (e.g. an IO/database operation timed out), so it might have succeeded, but it should also show an error message to the user.
Another possibility is that succeeded is not a bool but, say, a "succeeded_at" timestamp. In general, I've noticed that almost any boolean value in a database can be replaced with a timestamp or an error code.
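A sketch of that shape in C++ terms (names and types are illustrative, not from the thread):

    #include <chrono>
    #include <optional>
    #include <string>

    struct Response {
        // empty means "not succeeded"; a value also records when it succeeded
        std::optional<std::chrono::system_clock::time_point> succeeded_at;
        // empty means "no error"
        std::optional<std::string> error_message;
    };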
Yeah, looks pretty straightforward to me, but I used to write C++ for a living. I mean, there are complicated cases in C++ starting with C++11; this one is not really one of them. Just init the fields to false. Most of these cases are just C++ trying to bring in new features without breaking legacy code, and it has become pretty difficult to keep up with it all.
If the two fields in the struct are expected to be false unless changed, then initialize them as such. Nothing is gained by leaving it to the compiler, and a lot is lost.
I think the point is that sometimes variables are defined by the language spec as initialized to zero, and sometimes they aren't.
Perhaps what you mean is, "Nothing is to be gained by relying on the language spec to initialize things to zero, and a lot is lost"; I'd agree with that.
Not trying to be pedantic. When I hear "leave it to the compiler", I normally think, "let the compiler optimize it, rather than optimizing it yourself". The compiler is doing the initialization either way, but in one case you're relying on a correct understanding of the minutiae of the language spec (both for you and all future readers and writers of the code), while in the other you're explicitly instructing the compiler to initialize it to zero.
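A sketch of the minutiae in question, under standard C++ rules (names are illustrative):

    struct Flags { bool succeeded; bool error; };

    static Flags g;         // static storage duration: zero-initialized by the spec

    void f() {
        Flags a;            // default-initialized: members are indeterminate
        Flags b{};          // value-initialized: members are zeroed
        Flags c = Flags();  // also value-initialized: members are zeroed
    }

    // Writing the initializer explicitly means nobody has to remember which
    // of the cases above applies.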
Compilers implement the parts of the standard they agree with, in the way they think is best. They also implement it in the way they understand the standardese.
Read a complex enough project that's meant to be used across compiler vendors and versions, and you'll find plenty of instances where they're working around the compiler not implementing the standard.
Also, if you attended the standards committee, you would hear plenty of complaints from compiler vendors that certain things are not implementable. Sometimes the committee listens and makes changes, other times they put their fingers in their ears and ignore reality.
There are also plenty of places where the standard lets the compiler make its own decision (implementation-defined behavior). You need to know what your compiler vendor(s) chose to do.
tl;dr: With a standard as complex as C++'s, the compilers very much do not just "implement the standard". Sometimes you can get away with pretending that, but others very much not.
Many years ago I had a customer complaint about undefined data changing value in Fortran 77. It turned out that the compiler never allocated storage for uninitialized variables, so the data was aliased to something else.
The compiler was changed to allocate storage for any referenced variables.
Yeah... but I wouldn't characterize the bug itself (in its essential form) as UB.
Even if the implementation specified that the data would be indeterminate depending on what existed in that memory location previously, the bug would still exist.
Even if you hand-coded this in assembly, the bug would still exist.
The essence of the bug is uninitialized data being garbage. That's always gonna be a latent bug, regardless of whether the behavior is defined in an ISO standard.
Yeah I agree. This is a classic “uninitialized variable has garbage memory value” bug. But it is not an “undefined nasal demons behavior” bug.
That said, we all learn this one! I spent like two weeks debugging a super rare desync bug in a multiplayer game with a P2P lockstep synchronous architecture.
Suffice to say I am now a zealot about providing default values all the time. Thankfully it’s a lot easier since C++11 came out and lets you define default values at the declaration site!
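The C++11 feature being referred to is default member initializers, i.e. default values at the declaration site; a minimal sketch (field names are illustrative):

    struct GameState {
        bool desynced = false;   // can never be read uninitialized, even if a
        unsigned frame = 0;      // constructor forgets to set it
        float position = 0.0f;
    };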
I prefer that language constructs define new storage as zero-initialized. It doesn't prevent all bugs (e.g. application logic bugs) but at least gives deterministic results. These days it's zero cost for local variables and near-zero cost for fields. This is the case in Virgil.
That makes things worse if all-zero is not a valid value for the datatype. I'd much prefer a set-up that requires you to initialise explicitly. Rust, for example, has a `Default` trait that you can implement if there is a sensible default, which may well be all-zero. It also has a `MaybeUninit` holder which doesn't do any initialisation, but needs an `unsafe` to extract the value once you've made sure it's OK. But if you don't have a suitable default, and don't want/need to use `unsafe`, you have to supply all the values.
I think it's acceptable to leave an escape hatch for these situations instead of leaving it to easy-to-misunderstand nooks and crannies of the standard.
You don't want to zero out the memory?
Slap a "foo = uninitialized" in there to have that exact behavior and get the here be demons sign for free.
Yeah this issue is super obvious and non-controversial.
Uninitialized state is totally fine as an opt-in performance optimization. But having a well defined non-garbage default value should obviously be the default.
Did C fuck that up 50 years ago? Yeah probably. They should have known better even then. But that’s ok. It’s a historical artifact. All languages are full of them. We learn and improve!
Symbian's way of avoiding this was to use a class called CBase to derive from. CBase would memset the entire allocated memory for the object to binary zeros, thus zeroizing any member variable.
And by convention, all classes derived from CBase would start their name with C, so something like CHash or CRectangle.
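A rough sketch of that pattern (simplified, not actual Symbian source): the base class overloads operator new to zero-fill the whole allocation, so every member of every derived C-class starts out as binary zero.

    #include <cstddef>
    #include <cstring>
    #include <new>

    class CBase {
    public:
        static void* operator new(std::size_t size) {
            void* p = ::operator new(size);
            std::memset(p, 0, size);   // zero-fill the whole object's storage
            return p;
        }
        virtual ~CBase() {}
    };

    class CRectangle : public CBase {
        int iWidth;    // starts at 0 with no explicit initializer
        int iHeight;
    };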
As an example, consider this code: https://godbolt.org/z/TrMrYTKG9
The code says that if x is true then a=13 and if it is false then b=37.
This is the case. It's just that a=13 even if x is false, a thing the code had nothing to say about, and so the compiler is free to do it.
I take issue with the compiler assuming anything about the contents of that memory; it should be a black box.
I think a sanitizer probably would have caught this, but IMHO this is the language's fault.
Hopefully future versions of C++ will mandate default initialization for all cases that are UB today and we can be free of this class of bug.