Is using an unsigned rather than signed int more likely to cause bugs? Why?
35 votes
In the Google C++ Style Guide, on the topic of "Unsigned Integers", it is suggested that
Because of historical accident, the C++ standard also uses unsigned integers to represent the size of containers - many members of the standards body believe this to be a mistake, but it is effectively impossible to fix at this point. The fact that unsigned arithmetic doesn't model the behavior of a simple integer, but is instead defined by the standard to model modular arithmetic (wrapping around on overflow/underflow), means that a significant class of bugs cannot be diagnosed by the compiler.
What is wrong with modular arithmetic? Isn't that the expected behaviour of an unsigned int?
What kind of bugs (a significant class) does the guide refer to? Overflowing bugs?
Do not use an unsigned type merely to assert that a variable is non-negative.
One reason I can think of for using signed int over unsigned int is that if it does overflow (to negative), it is easier to detect.
c++ c google-style-guide
3 – Try to do `unsigned int x = 0; --x;` and see what `x` becomes. Without limit checks, the size could suddenly get some unexpected value that could easily lead to UB. – Some programmer dude, yesterday
21 – At least unsigned overflow has a well-defined behavior and produces expected results. – VTT, yesterday
24 – On an unrelated (to your question but not to Google style guides) note, if you search a little you will find some (sometimes rightful) criticism of the Google style guides. Don't take them as gospel. – Some programmer dude, yesterday
15 – On the other hand, `int` overflow and underflow are UB. You are less likely to experience a situation where an `int` would try to express a value it can't than a situation that decrements an `unsigned int` below zero, but the kind of people who would be surprised by the behavior of `unsigned int` arithmetic are the kind of people who could also write code that causes `int`-overflow-related UB, like using `a < a + 1` to check for overflow. – François Andrieux, yesterday
5 – If an unsigned integer overflows, it's well defined. If a signed integer overflows, it's undefined behaviour. I prefer well-defined behaviour, but if your code can't handle overflowed values, you are lost either way. The difference is: for signed you are already lost at the overflowing operation, for unsigned in the following code. The only point I agree with is that if you need negative values, an unsigned integer type is the wrong choice, obviously. – too honest for this site, yesterday
edited 9 mins ago by einpoklum · asked yesterday by user7586189
6 Answers
17 votes (accepted)
Some of the answers here mention the surprising promotion rules between signed and unsigned values, but this seems more like a problem relating to mixing signed and unsigned values, and doesn't necessarily explain why signed is preferred over unsigned, outside of mixing scenarios.
In my experience, outside of mixed comparisons and promotion rules, there are two primary reasons why unsigned values are big bug producers.
Unsigned values have a discontinuity at zero, the most common value in programming
Both unsigned and signed integers have discontinuities at their minimum and maximum values, where they wrap around (unsigned) or cause undefined behavior (signed). For `unsigned` these points are at zero and `UINT_MAX`. For `int` they are at `INT_MIN` and `INT_MAX`. Typical values of `INT_MIN` and `INT_MAX` on a system with 4-byte `int` values are `-2^31` and `2^31-1`, and on such a system `UINT_MAX` is typically `2^32-1`.
The primary bug-inducing problem with `unsigned` that doesn't apply to `int` is that it has a discontinuity at zero. Zero, of course, is a very common value in programs, along with other small values like 1, 2, 3. It is common to add and subtract small values, especially 1, in various constructs, and if you subtract anything from an `unsigned` value and it happens to be zero, you just got a massive positive value and an almost certain bug.
Consider code that iterates over all values in a vector by index except the last [0.5]:
```cpp
for (size_t i = 0; i < v.size() - 1; i++) { /* do something */ }
```
This works fine until one day you pass in an empty vector. Instead of doing zero iterations, you get `v.size() - 1 ==` a giant number [1], and you'll do 4 billion iterations and almost have a buffer overflow vulnerability.
You need to write it like this:
```cpp
for (size_t i = 0; i + 1 < v.size(); i++) { /* do something */ }
```
So it can be "fixed" in this case, but only by carefully thinking about the unsigned nature of `size_t`. Sometimes you can't apply the fix above because instead of a constant one you have some variable offset you want to apply, which may be positive or negative: so which "side" of the comparison you need to put it on depends on the signedness, and now the code gets really messy.

There is a similar issue with code that tries to iterate down to and including zero. Something like `while (index-- > 0)` works fine, but the apparently equivalent `while (--index >= 0)` will never terminate for an unsigned value. Your compiler might warn you when the right-hand side is literal zero, but certainly not if it is a value determined at runtime.
Counterpoint
Many might argue that signed values also have discontinuities, but they are very far away from zero. I really consider this a separate problem of "overflow"; both signed and unsigned values may overflow at very large values. In many cases overflow is impossible due to constraints on the possible range of the values (and overflow of many 64-bit values may be physically impossible). Even if possible, the chance of an overflow-related bug is often minuscule compared to an "at zero" bug, and overflow occurs for unsigned values too. So unsigned combines the worst of both worlds: potential overflow with very large magnitude values, and a discontinuity at zero. Signed only has the former.
Many will argue "you lose a bit" with unsigned. This is often true - but not always (if you need to represent differences between unsigned values you'll lose that bit anyways: so many 32-bit things are limited to 2 GiB anyways, or you'll have a weird grey area where say a file can be 4 GiB, but you can't use certain APIs on the second 2 GiB half).
Even in the cases where unsigned buys you a bit: it doesn't buy you much: if you had to support more than 2 billion "things", you'll probably soon have to support more than 4 billion.
Logically, unsigned values are a subset of signed values
Mathematically, unsigned values (non-negative integers) are a subset of signed values (just called integers) [2]. Yet signed values naturally pop out of operations solely on unsigned values, such as subtraction. We might say that unsigned values aren't closed under subtraction. The same isn't true of signed values.

Want to find the "delta" between two unsigned indexes into a file? Well, you had better do the subtraction in the right order, or else you'll get the wrong answer. Of course, you often need a runtime check to determine the right order! When dealing with unsigned values as numbers, you'll often find that (logically) signed values keep appearing anyway, so you might as well start off with signed.
Counterpoint
As mentioned in footnote (2) above, unsigned values in C++ aren't actually a subset of signed values of the same size, so unsigned values can represent the same number of results that signed values can.
True, but the range is less useful. Consider subtraction, and unsigned numbers with a range of 0 to 2N, and signed numbers with a range of -N to N. Arbitrary subtractions produce results in the range -2N to 2N in both cases, and either type of integer can only represent half of it. It turns out that the region centered around zero, -N to N, is usually way more useful (contains more actual results in real-world code) than the range 0 to 2N. Consider any typical distribution other than uniform (log, Zipfian, normal, whatever) and consider subtracting randomly selected values from that distribution: way more values end up in [-N, N] than in [0, 2N] (indeed, the resulting distribution is always centered at zero).
64-bit closes the door on many of the reasons to use signed values as numbers
I think the arguments above were already compelling for 32-bit values, but the overflow cases, which affect both signed and unsigned at different thresholds, do occur for 32-bit values, since "2 billion" is a number that can be exceeded by many abstract and physical quantities (billions of dollars, billions of nanoseconds, arrays with billions of elements). So if someone is convinced enough by the doubling of the positive range for unsigned values, they can make the case that overflow does matter and it slightly favors unsigned.

Outside of specialized domains, 64-bit values largely remove this concern. Signed 64-bit values have an upper range of 9,223,372,036,854,775,807: more than nine quintillion. That's enough to count nanoseconds for almost three centuries, that's more money than you'll ever need to track, that's a larger array than would fit in all the RAM in all the computers on earth for the foreseeable future, etc. So maybe 9 quintillion is enough for everybody (for now).
When to use unsigned values
Note that the style guide doesn't forbid or even necessarily discourage use of unsigned numbers. It concludes with:
Do not use an unsigned type merely to assert that a variable is non-negative.
Indeed, there are good uses for unsigned variables:
- When you want to treat an N-bit quantity not as an integer, but simply as a "bag of bits". For example, as a bitmask or bitmap, or N boolean values, or whatever. This use often goes hand-in-hand with the fixed-width types like `uint32_t` and `uint64_t`, since you often want to know the exact size of the variable. A hint that a particular variable deserves this treatment is that you only operate on it with the bitwise operators such as `~`, `|`, `&`, `^`, `>>` and so on, and not with the arithmetic operations such as `+`, `-`, `*`, `/`, etc. Unsigned is ideal here because the behavior of the bitwise operators is well-defined and standardized. Signed values have several problems, such as undefined and unspecified behavior when shifting, and an unspecified representation.
- When you actually want modular arithmetic. Sometimes you actually want 2^N modular arithmetic. In these cases "overflow" is a feature, not a bug. Unsigned values give you what you want here since they are defined to use modular arithmetic. Signed values cannot be (easily, efficiently) used at all since they have an unspecified representation and overflow is undefined.
[0.5] After I wrote this I realized it is nearly identical to Jarod's example, which I hadn't seen, and for good reason: it's a good example!
[1] We're talking about `size_t` here, so usually 2^32-1 on a 32-bit system or 2^64-1 on a 64-bit one.
[2] In C++ this isn't exactly the case, because unsigned values contain more values at the upper end than the corresponding signed type, but the basic problem exists that manipulating unsigned values can result in (logically) signed values, while there is no corresponding issue with signed values (since signed values already include unsigned values).
3 – I agree with everything you've posted, but "64 bits should be enough for everyone" sure seems way too close to "640K ought to be enough for everyone". – Andrew Henle, 6 hours ago
1 – @Andrew - yup, I chose my words carefully :). – BeeOnRope, 6 hours ago
Well, you might want to work a bit on equating defined wrap-around for unsigned types with full undefined behavior for signed types. – Deduplicator, 6 hours ago
2 – "64-bit closes the door on unsigned values" --> Disagree. Some integer programming tasks are simply not a case of counting and do not need negative values, yet need power-of-2 widths: passwords, encryption, and bit graphics benefit from unsigned math. Many ideas here point out why code could use signed math when able, yet they fall very short of making unsigned types useless and closing the door on them. – chux, 4 hours ago
@chux - the title of that section is probably a bit overstated: I really mean what I say in the following text, that it closes the loophole in the arguments above for most types of "counting" and "indexing" values. Unsigned absolutely still has its place for the "bag of bits" type scenarios, where you aren't doing math on it. Can you elaborate a bit on the integer programming case? Are those values used with mathematical expressions? There are certainly exceptions to every rule, and integer programming may be a good one (too obscure to reverse this type of style guide entry, though). – BeeOnRope, 3 hours ago
24 votes
As stated, mixing `unsigned` and `signed` might lead to unexpected behaviour (even if that behaviour is well defined).
Suppose you want to iterate over all elements of a vector except for the last five; you might wrongly write:
```cpp
for (int i = 0; i < v.size() - 5; ++i) foo(v[i]); // Incorrect
// for (int i = 0; i + 5 < v.size(); ++i) foo(v[i]); // Correct
```
Suppose `v.size() < 5`; then, as `v.size()` is `unsigned`, `v.size() - 5` would be a very large number, and so `i < v.size() - 5` would be `true` for a much larger range of values of `i` than expected. And UB then happens quickly (out-of-bounds access once `i >= v.size()`).

If `v.size()` returned a signed value, then `v.size() - 5` would have been negative, and in the above case the condition would be false immediately.
On the other side, an index should be between `[0; v.size()[`, so `unsigned` makes sense.

Signed also has its own issues, such as UB on overflow or implementation-defined behaviour for right shift of a negative signed number, but these are a less frequent source of bugs for iteration.
2 – While I myself use signed numbers whenever I can, I don't think that this example is strong enough. Someone who has used unsigned numbers for a long time surely knows this idiom: instead of `i < size() - X`, one should write `i + X < size()`. Sure, it's a thing to remember, but it is not that hard to get accustomed to, in my opinion. – geza, yesterday
7 – What you are saying is basically that one has to know the language and the coercion rules between types. I don't see how this changes whether one uses signed or unsigned, as the question asks. Not that I recommend using signed at all if there is no need for negative values. I agree with @geza: only use signed when necessary. This makes the Google guide questionable at best; imo it's bad advice. – too honest for this site, yesterday
2 – @toohonestforthissite The point is that the rules are arcane, silent, and major causes of bugs. Using exclusively signed types for arithmetic relieves you of the issue. BTW, using unsigned types for the purpose of enforcing positive values is one of the worst abuses of them. – Passer By, yesterday
2 – Thankfully, modern compilers and IDEs give warnings when mixing signed and unsigned numbers in an expression. – Alexey B., 23 hours ago
4 – @PasserBy: If you call them arcane, you have to call the integer promotions and the UB for overflow of signed types arcane, too. And the very common sizeof operator returns an unsigned anyway, so you do have to know about them. That said: if you don't want to learn the language details, just don't use C or C++! Considering Google promotes Go, maybe that's exactly their goal. The days of "don't be evil" are long gone… – too honest for this site, 22 hours ago
12 votes
One of the most hair-raising examples of an error is when you MIX signed and unsigned values:
```cpp
#include <iostream>

int main()
{
    auto qualifier = -1 < 1u ? "makes" : "does not make";
    std::cout << "The world " << qualifier << " sense" << std::endl;
}
```
The output:
The world does not make sense
Unless you have a trivial application, it's inevitable you'll end up with either dangerous mixes between signed and unsigned values (resulting in runtime errors) or if you crank up warnings and make them compile-time errors, you end up with a lot of static_casts in your code. That's why it's best to strictly use signed integers for types for math or logical comparison. Only use unsigned for bitmasks and types representing bits.
Modeling a type to be unsigned based on the expected domain of the values of your numbers is a Bad Idea. Most numbers are closer to 0 than they are to 2 billion, so with unsigned types a lot of your values are closer to the edge of the valid range. To make things worse, the final value may be in a known positive range, but while evaluating expressions, intermediate values may underflow, and if they are used in that intermediate form they may be VERY wrong values. Finally, even if your values are expected to always be positive, that doesn't mean that they won't interact with other variables that can be negative, and so you end up with a forced situation of mixing signed and unsigned types, which is the worst place to be.
8 – Modeling a type to be unsigned based on the expected domain of the values of your numbers is a Bad Idea *if you don't treat implicit conversions as warnings and are too lazy to use proper type casts.* Modeling your types on their expected valid values is completely reasonable, just not in C/C++ with built-in types. – villasv, yesterday
1 – @user7586189 It's good practice to make invalid data impossible to instantiate, so having positive-only variables for sizes is perfectly reasonable. But you can't fine-tune C/C++ built-in types to disallow by default bad casts like the one in this answer, and the validity ends up being someone else's responsibility. If you're in a language with stricter casts (even between built-ins), expected-domain modeling is a pretty good idea. – villasv, yesterday
1 – Note, I did mention cranking up warnings and setting them to errors, but not everyone does. I still disagree, @villasv, with your statement about modeling values. By choosing unsigned, you are ALSO implicitly modeling every other value it may come into contact with, without having much foresight of what that will be. And almost certainly getting it wrong. – Chris Uzdavinis, yesterday
1 – Modeling with the domain in mind is a good thing. Using unsigned to model the domain is NOT. (Signed vs unsigned should be chosen based on types of usage, not range of values, unless it's impossible to do otherwise.) – Chris Uzdavinis, yesterday
2 – Once your codebase has a mix of signed and unsigned values, when you turn up warnings and promote them to errors, the code ends up littered with static_casts to make the conversions explicit (because the math still needs to be done). Even when correct, it's error-prone, harder to work with, and harder to read. – Chris Uzdavinis, yesterday
5 votes
Why is using an unsigned int more likely to cause bugs than using a signed int?
For certain classes of tasks, using an unsigned type is no more likely to cause bugs than using a signed type.
Use the right tool for the job.
What is wrong with modular arithmetic? Isn't that the expected behaviour of an unsigned int?
Why is using an unsigned int more likely to cause bugs than using a signed int?
If the task is well matched: nothing wrong. No, not more likely.

Security, encryption, and authentication algorithms count on unsigned modular math.

Compression/decompression algorithms, as well as various graphic formats, benefit from and are less buggy with unsigned math.
Any time bit-wise operators and shifts are used, the unsigned operations do not get messed up with the sign-extension issues of signed math.
Signed integer math has an intuitive look and feel readily understood by all, including newcomers to coding. But C/C++ was not originally targeted as an intro language, nor should it be now. For rapid coding that employs safety nets concerning overflow, other languages are better suited. For lean, fast code, C assumes that coders know what they are doing (that they are experienced).
A pitfall of signed math today is the ubiquitous 32-bit `int`, which for so many problems is wide enough for the common tasks without range checking. This leads to complacency, where overflow is not coded against. Instead, `for (int i = 0; i < n; i++)` or `int len = strlen(s);` is viewed as OK because `n` is assumed `< INT_MAX` and strings will never be too long, rather than being fully range-protected in the first case, or using `size_t`, `unsigned`, or even `long long` in the second.
C/C++ developed in an era that included 16-bit as well as 32-bit `int`, and the extra bit that an unsigned 16-bit `size_t` affords was significant. Attention was needed with regard to overflow issues, be it `int` or `unsigned`.
With Google's 32-bit (or wider) applications on platforms that don't have 16-bit `int`/`unsigned`, the ample range of `int` affords a lack of attention to +/- overflow. It makes sense for such applications to encourage `int` over `unsigned`. Yet `int` math is not well protected either.
The narrow 16-bit `int`/`unsigned` concerns still apply today in select embedded applications.
Google's guidelines apply well to the code they write today. They are not a definitive guideline for the larger, wider scope of C/C++ code.
One reason that I can think of using signed int over unsigned int, is that if it does overflow (to negative), it is easier to detect.
In C/C++, signed `int` math overflow is undefined behavior, and so it is not certainly easier to detect than the defined behavior of unsigned math.
As @Chris Uzdavinis's comments point out, mixing signed and unsigned is best avoided by everyone (especially beginners), and otherwise coded carefully when needed.
1 – You make a good point that an `int` doesn't model the behavior of an "actual" integer either. Undefined behavior on overflow is not how a mathematician thinks of integers: there's no possibility of "overflow" with an abstract integer. But these are machine storage units, not a math guy's numbers. – tchrist, 18 hours ago
1 vote
I have some experience with Google's style guide, AKA the Hitchhiker's Guide to Insane Directives from Bad Programmers Who Got into the Company a Long Long Time Ago. This particular guideline is just one example of the dozens of nutty rules in that book.
Errors only occur with unsigned types if you try to do arithmetic with them (see Chris Uzdavinis's example above), in other words if you use them as numbers. Unsigned types are not intended to store numeric quantities; they are intended to store counts, such as the size of containers, which can never be negative, and they can and should be used for that purpose.
The idea of using arithmetical types (like signed integers) to store container sizes is idiotic. Would you use a double to store the size of a list, too? That there are people at Google storing container sizes using arithmetical types and requiring others to do the same thing says something about the company. One thing I notice about such dictates is that the dumber they are, the more they need to be strict do-it-or-you-are-fired rules because otherwise people with common sense would ignore the rule.
While I get your drift, the blanket statements made would virtually eliminate bitwise operations if unsigned types could only hold counts and not be used in arithmetic. So the "Insane Directives from Bad Programmers" part makes more sense.
– David C. Rankin
2 hours ago
@DavidC.Rankin Please don't take it as a "blanket" statement. Obviously there are multiple legitimate uses for unsigned integers (like storing bitwise values).
– Tyler Durden
2 hours ago
Yes, yes -- I didn't, that's why I said "I get your drift."
– David C. Rankin
2 hours ago
add a comment |
up vote
-7
down vote
One of the main issues is that unsigned integers can't be negative. This can lead to buggy behavior with negative numbers. Take for example:
#include <stdio.h>
int main(void) {
    unsigned int myInt = 0;
    myInt -= 1;               /* well-defined: wraps around to UINT_MAX */
    printf("%u\n", myInt);    /* e.g. 4294967295 when unsigned int is 32 bits */
}
Try that and you will see strange results (like the printed value being an extremely large number).
5
5
Gee, but what if my numbers should not be negative (like array indices, for example), and I would like to express that in their type? Or if your int underflows / overflows (which is UB for signed, but not for unsigned)?
– DevSolar
yesterday
3
@DevSolar Using unsigned to express that a number should not be negative is thought by many to be a mistake.
– NathanOliver
yesterday
2
@NathanOliver: [who] [citation needed]. ;-) (Don't. Just pontificating.)
– DevSolar
yesterday
4
@NathanOliver: I still have to meet these "many". The many I know prefer unsigned integers (though not necessarily unsigned) where appropriate. Among others, they have a well-defined overflow behaviour.
– too honest for this site
yesterday
7
Maybe I wasn't clear: "This can lead to buggy behavior with negative numbers" - no, it can't, because unsigned integers cannot be negative. It's the same argument as "a bike is not as useful as a truck, because you can't transport a grand piano with it" - that's just no argument, because a bike is not meant to.
– too honest for this site
yesterday
 |
show 8 more comments
6 Answers
6
active
oldest
votes
up vote
17
down vote
accepted
Some of the answers here mention the surprising promotion rules between signed and unsigned values, but this seems more like a problem relating to mixing signed and unsigned values, and doesn't necessarily explain why signed is preferred over unsigned, outside of mixing scenarios.
In my experience, outside of mixed comparisons and promotion rules, there are two primary reasons why unsigned values are big bug producers.
Unsigned values have a discontinuity at zero, the most common value in programming
Both unsigned and signed integers have discontinuities at their minimum and maximum values, where they wrap around (unsigned) or cause undefined behavior (signed). For unsigned, these points are at zero and UINT_MAX. For int, they are at INT_MIN and INT_MAX. Typical values of INT_MIN and INT_MAX on systems with 4-byte int values are -2^31 and 2^31-1, and on such a system UINT_MAX is typically 2^32-1.
The primary bug-inducing problem with unsigned that doesn't apply to int is that it has a discontinuity at zero. Zero, of course, is a very common value in programs, along with other small values like 1, 2, 3. It is common to add and subtract small values, especially 1, in various constructs, and if you subtract anything from an unsigned value that happens to be zero, you just got a massive positive value and an almost certain bug.
Consider code that iterates over all values in a vector by index except the last [0.5]:
for (size_t i = 0; i < v.size() - 1; i++) // do something
This works fine until one day you pass in an empty vector. Instead of doing zero iterations, you get v.size() - 1 == a giant number [1], and you'll do 4 billion iterations and almost have a buffer overflow vulnerability.
You need to write it like this:
for (size_t i = 0; i + 1 < v.size(); i++) // do something
So it can be "fixed" in this case, but only by carefully thinking about the unsigned nature of size_t. Sometimes you can't apply the fix above because instead of a constant one you have some variable offset you want to apply, which may be positive or negative: so which "side" of the comparison you need to put it on depends on the signedness - now the code gets really messy.
There is a similar issue with code that tries to iterate down to and including zero. Something like while (index-- > 0) works fine, but the apparently equivalent while (--index >= 0) will never terminate for an unsigned value. Your compiler might warn you when the right-hand side is a literal zero, but certainly not if it is a value determined at runtime.
Counterpoint
Many might argue that signed values also have discontinuities, but they are very far away from zero. I really consider this a separate problem of "overflow": both signed and unsigned values may overflow at very large values. In many cases overflow is impossible due to constraints on the possible range of the values, and overflow of many 64-bit values may be physically impossible. Even if possible, the chance of an overflow-related bug is often minuscule compared to an "at zero" bug, and overflow occurs for unsigned values too. So unsigned combines the worst of both worlds: potential overflow with very large magnitude values, and a discontinuity at zero. Signed only has the former.
Many will argue "you lose a bit" with unsigned. This is often true - but not always (if you need to represent differences between unsigned values you'll lose that bit anyways: so many 32-bit things are limited to 2 GiB anyways, or you'll have a weird grey area where say a file can be 4 GiB, but you can't use certain APIs on the second 2 GiB half).
Even in the cases where unsigned buys you a bit: it doesn't buy you much: if you had to support more than 2 billion "things", you'll probably soon have to support more than 4 billion.
Logically, unsigned values are a subset of signed values
Mathematically, unsigned values (non-negative integers) are a subset of signed integers (just called integers) [2]. Yet signed values naturally pop out of operations solely on unsigned values, such as subtraction. We might say that unsigned values aren't closed under subtraction. The same isn't true of signed values.
Want to find the "delta" between two unsigned indexes into a file? Well, you'd better do the subtraction in the right order, or else you'll get the wrong answer. Of course, you often need a runtime check to determine the right order! When dealing with unsigned values as numbers, you'll often find that (logically) signed values keep appearing anyway, so you might as well start off with signed.
Counterpoint
As mentioned in footnote (2) above, unsigned values in C++ aren't actually a subset of signed values of the same size, so unsigned values can represent the same number of results that signed values can.
True, but the range is less useful. Consider subtraction, and unsigned numbers with a range of 0 to 2N, and signed numbers with a range of -N to N. Arbitrary subtractions produce results in the range -2N to 2N in both cases, and either type of integer can only represent half of it. It turns out that the region centered around zero, -N to N, is usually way more useful (contains more actual results in real-world code) than the range 0 to 2N. Consider any typical distribution other than uniform (log, zipfian, normal, whatever) and consider subtracting randomly selected values from that distribution: way more values end up in [-N, N] than in [0, 2N] (indeed, the resulting distribution is always centered at zero).
64-bit closes the door on many of the reasons to use unsigned values as numbers
I think the arguments above were already compelling for 32-bit values, but the overflow cases, which affect both signed and unsigned at different thresholds, do occur for 32-bit values, since "2 billion" is a number that can be exceeded by many abstract and physical quantities (billions of dollars, billions of nanoseconds, arrays with billions of elements). So if someone is convinced enough by the doubling of the positive range for unsigned values, they can make the case that overflow does matter and it slightly favors unsigned.
Outside of specialized domains, 64-bit values largely remove this concern. Signed 64-bit values have an upper range of 9,223,372,036,854,775,807 - more than nine quintillion. That's about the age of the universe measured in nanoseconds. That's more money than you'll need to track, that's a larger array than would fit in all the RAM in all the computers on earth for the foreseeable future, etc. So maybe 9 quintillion is enough for everybody (for now).
When to use unsigned values
Note that the style guide doesn't forbid or even necessarily discourage use of unsigned numbers. It concludes with:
Do not use an unsigned type merely to assert that a variable is non-negative.
Indeed, there are good uses for unsigned variables:
- When you want to treat an N-bit quantity not as an integer, but simply as a "bag of bits": for example, as a bitmask or bitmap, or N boolean values or whatever. This use often goes hand-in-hand with the fixed-width types like uint32_t and uint64_t, since you often want to know the exact size of the variable. A hint that a particular variable deserves this treatment is that you only operate on it with the bitwise operators such as ~, |, &, ^, >> and so on, and not with the arithmetic operations such as +, -, *, / etc. Unsigned is ideal here because the behavior of the bitwise operators is well-defined and standardized. Signed values have several problems, such as undefined and unspecified behavior when shifting, and an unspecified representation.
- When you actually want modular arithmetic. Sometimes you actually want 2^N modular arithmetic. In these cases "overflow" is a feature, not a bug. Unsigned values give you what you want here since they are defined to use modular arithmetic. Signed values cannot be (easily, efficiently) used at all since they have an unspecified representation and overflow is undefined.
[0.5] After I wrote this I realized this is nearly identical to Jarod's example, which I hadn't seen - and for good reason, it's a good example!
[1] We're talking about size_t here, so usually 2^32-1 on a 32-bit system or 2^64-1 on a 64-bit one.
[2] In C++ this isn't exactly the case because unsigned values contain more values at the upper end than the corresponding signed type, but the basic problem exists that manipulating unsigned values can result in (logically) signed values, while there is no corresponding issue with signed values (since signed values already include unsigned values).
3
I agree with everything you've posted, but "64 bits should be enough for everyone" sure seems way too close to "640k ought to be enough for everyone".
– Andrew Henle
6 hours ago
1
@Andrew - yup, I chose my words carefully :).
– BeeOnRope
6 hours ago
Well, you might want to work a bit on equating defined wrap-around for unsigned types with full undefined behavior for signed types.
– Deduplicator
6 hours ago
2
"64-bit closes the door on unsigned values" --> Disagree. Some integer programming tasks are simply not a case of counting and do not need negative values, yet need power-of-2 widths: passwords, encryption, bit graphics all benefit from unsigned math. Many ideas here point out why code could use signed math when able, yet that falls very short of making unsigned types useless and closing the door on them.
– chux
4 hours ago
@chux - the title of that section is probably a bit overstated: I really mean what I say in the following text, that it closes the loophole in the arguments above for most types of "counting" and "indexing" values. Unsigned absolutely still has its place for the "bag of bits" type scenarios, where you aren't doing math on it. Can you elaborate a bit on the integer programming case? Are those values used with mathematical expressions? There are certainly exceptions to every rule, and integer programming may be a good one (too obscure to reverse this type of style guide entry though).
– BeeOnRope
3 hours ago
 |
show 3 more comments
up vote
17
down vote
accepted
Some of the answers here mention the surprising promotion rules between signed and unsigned values, but this seems more like a problem relating to mixing signed and unsigned values, and doesn't necessarily explain why signed is preferred over unsigned, outside of mixing scenarios.
In my experience, outside of mixed comparisons and promotion rules, there are two primary reasons why unsigned values are big bug producers.
Unsigned values have a discontinuity at zero, the most common value in programming
Both unsigned and signed integers have a discontinuities at their minimum and maximum values, where they wrap around (unsigned) or cause undefined behavior (signed). For unsigned
these points are at zero and UINT_MAX
. For int
they are at INT_MIN
and INT_MAX
. Typical values of INT_MIN
and INT_MAX
on system with 4-byte int
values are -2^31
and 2^31-1
, and on such a system UINT_MAX
is typically 2^32-1
.
The primary bug-inducing problem with unsigned
that doesn't apply to int
is that it has a discontinuity at zero. Zero, of course, is a very common value in programs, along with other small values like 1,2,3. It is common to add and subtract small values, especially 1, in various constructs, and if you subtract anything from an unsigned
value and it happens to be zero, you just got a massive positive value and an almost certain bug.
Consider code iterates over all values in a vector by index except the last0.5:
for (size_t i = 0; i < v.size() - 1; i++) // do something
This works fine until one day you pass in an empty vector. Instead of doing zero iterations, you get v.size() - 1 == a giant number
1 and you'll do 4 billion iterations and almost have a buffer overflow vulnerability.
You need to write it like this:
for (size_t i = 0; i + 1 < v.size(); i++) // do something
So it can be "fixed" in this case, but only by carefully thinking about the unsigned nature of size_t
. Sometimes you can't apply the fix above because instead of a constant one you have some variable offset you want to apply, which may be positive or negative: so which "side" of the comparison you need to put it on depends on the signedness - now the code gets really messy.
There is a similar issue with code that tries to iterate down to and including zero. Something like while (index-- > 0)
works fine, but the apparently equivalent while (--index >= 0)
will never terminate for an unsigned value. Your compiler might warn you when the right hand side is literal zero, but certainly not if it is a value determined at runtime.
Counterpoint
Many might argue that signed values also have discontinuities, but they are very far away from zero. I really consider this a separate problem of "overflow", both signed and unsigned values may overflow at very large values. In many cases overflow is impossible due to constraints on the possible range of the values, and overflow of many 64-bit values may be physically impossible). Even if possible, the chance of an overflow related bug is often minuscule compared to an "at zero" bug, and overflow occurs for unsigned values too. So unsigned combines the worst of both worlds: potentially overflow with very large magnitude values, and a discontinuity at zero. Signed only has the former.
Many will argue "you lose a bit" with unsigned. This is often true - but not always (if you need to represent differences between unsigned values you'll lose that bit anyways: so many 32-bit things are limited to 2 GiB anyways, or you'll have a weird grey area where say a file can be 4 GiB, but you can't use certain APIs on the second 2 GiB half).
Even in the cases where unsigned buys you a bit: it doesn't buy you much: if you had to support more than 2 billion "things", you'll probably soon have to support more than 4 billion.
Logically, unsigned values are a subset of signed values
Mathematically, unsigned values (non-negative integers) are a subset of signed integers (just called _integers).2. Yet signed values naturally pop out of operations solely on unsigned values, such as subtraction. We might say that unsigned values aren't closed under subtraction. The same isn't true of signed values.
Want to find the "delta" between two unsigned indexes into a file? Well you better do the subtraction in the right order, or else you'll get the wrong answer. Of course, you often need a runtime check to determine the right order! When dealing with unsigned values as numbers, you'll often find that (logically) signed values keep appearing anyways, so you might as well start of with signed.
Counterpoint
As mentioned in footnote (2) above, signed values in C++ aren't actually a subset of unsigned values of the same size, so unsigned values can represent the same number of results that signed values can.
True, but the range is less useful. Consider subtraction, and unsigned numbers with a range of 0 to 2N, and signed numbers with a range of -N to N. Arbitrary subtractions result in results in the range -2N to 2N in _both cases, and either type of integer can only represent half of it. Well it turns out that the region centered around zero of -N to N is usually way more useful (contains more actual results in real world code) than the range 0 to 2N. Consider any of typical distribution other than uniform (log, zipfian, normal, whatever) and consider subtracting randomly selected values from that distribution: way more values end up in [-N, N] than [0, 2N] (indeed, resulting distribution is always centered at zero).
64-bit closes the door on many of the reasons to use signed values as numbers
I think the arguments above were already compelling for 32-bit values, but the overflow cases, which affect both signed and unsigned at different thresholds, do occur for 32-bit values, since "2 billion" is a number that can exceeded by many abstract and physical quantities (billions of dollars, billions of nanoseconds, arrays with billions of elements). So if someone is convinced enough by the doubling of the positive range for unsigned values, they can make the case that overflow does matter and it slightly favors unsigned.
Outside of specialized domains 64-bit values largely remove this concern. Signed 64-bit values have an upper range of 9,223,372,036,854,775,807 - more than nine quintillion. That's about the age of the universe measured in nanoseconds. That's more money that you'll need to track, that's a larger array than would fit in all RAM in all the computers on each for the foreseeable future, etc. So maybe 9 quintillion is enough for everybody (for now).
When to use unsigned values
Note that the style guide doesn't forbid or even necessarily discourage use of unsigned numbers. It concludes with:
Do not use an unsigned type merely to assert that a variable is non-negative.
Indeed, there are good uses for unsigned variables:
When you want to treat an N-bit quantity not as an integer, but simply a "bag of bits". For example, as a bitmask or bitmap, or N boolean values or whatever. This use often goes hand-in-hand with the fixed width types like
uint32_t
anduint64_t
since you often want to know the exact size of the variable. A hint that a particular variable deserves this treatment is that you only operate on it with with the bitwise operators such as~
,|
,&
,^
,>>
and so on, and not with the arithmetic operations such as+
,-
,*
,/
etc.Unsigned is ideal here because the behavior of the bitwise operators is well-defined and standardized. Signed values have several problems, such as undefined and unspecified behavior when shifting, and an unspecified representation.
- When you actually want modular arithmetic. Sometimes you actually want 2^N modular arithmetic. In these cases "overflow" is a feature, not a bug. Unsigned values give you what you want here since they are defined to use modular arithmetic. Signed values cannot be (easily, efficiently) used at all since they have an unspecified representation and overflow is undefined.
0.5 After I wrote this I realized this is nearly identical to Jarod's example, which I hadn't seen - and for good reason, it's a good example!
1 We're talking about size_t
here so usually 2^32-1 on a 32-bit system or 2^64-1 on a 64-bit one.
2 In C++ this isn't exactly the case because unsigned values contain more values at the upper end than the corresponding signed type, but the basic problem exists that manipulating unsigned values can result in (logically) signed values, but there is no corresponding issue with signed values (since signed values already include unsigned values).
3
I agree with everything you've posted, but "64 bits should be enough for everyone" sure seems way too close to "640k ought to be enough for everyone".
â Andrew Henle
6 hours ago
1
@Andrew - yup, I chose my words carefully :).
â BeeOnRope
6 hours ago
Well, you might want to work a bit on equating defined wrap-around for unsigned types with full undefined behavior for signed types.
â Deduplicator
6 hours ago
2
"64-bit closes the door on unsigned values" --> Disagree. Some integer programming tasks are simple not a case of counting and do not need negative values yet need power-of-2 widths: Passwords, encryption, bit graphics, benefit with unsigned math. Many ideas here point out why code could use signed math when able, yet falls very short of making unsigned type useless and closing the door on them.
â chux
4 hours ago
@chux - the title of that section is probably a bit overstated: I really mean what I say in the following text, that it closes the loophole in the arguments above for most types of "counting" and "indexing" values. Unsigned absolutely still has its place for the "bag of bits" type scenarios, where you aren't doing math on it. Can you elaborate a bit on the integer programming case? Are those values used with mathematical expressions? There are certainly exceptions to every rule, and integer programming may be a good one (too obscure to reverse this type of style guide entry though).
â BeeOnRope
3 hours ago
 |Â
show 3 more comments
up vote
17
down vote
accepted
up vote
17
down vote
accepted
Some of the answers here mention the surprising promotion rules between signed and unsigned values, but this seems more like a problem relating to mixing signed and unsigned values, and doesn't necessarily explain why signed is preferred over unsigned, outside of mixing scenarios.
In my experience, outside of mixed comparisons and promotion rules, there are two primary reasons why unsigned values are big bug producers.
Unsigned values have a discontinuity at zero, the most common value in programming
Both unsigned and signed integers have a discontinuities at their minimum and maximum values, where they wrap around (unsigned) or cause undefined behavior (signed). For unsigned
these points are at zero and UINT_MAX
. For int
they are at INT_MIN
and INT_MAX
. Typical values of INT_MIN
and INT_MAX
on system with 4-byte int
values are -2^31
and 2^31-1
, and on such a system UINT_MAX
is typically 2^32-1
.
The primary bug-inducing problem with unsigned
that doesn't apply to int
is that it has a discontinuity at zero. Zero, of course, is a very common value in programs, along with other small values like 1,2,3. It is common to add and subtract small values, especially 1, in various constructs, and if you subtract anything from an unsigned
value and it happens to be zero, you just got a massive positive value and an almost certain bug.
Consider code iterates over all values in a vector by index except the last0.5:
for (size_t i = 0; i < v.size() - 1; i++) // do something
This works fine until one day you pass in an empty vector. Instead of doing zero iterations, you get v.size() - 1 == a giant number
1 and you'll do 4 billion iterations and almost have a buffer overflow vulnerability.
You need to write it like this:
for (size_t i = 0; i + 1 < v.size(); i++) // do something
So it can be "fixed" in this case, but only by carefully thinking about the unsigned nature of size_t
. Sometimes you can't apply the fix above because instead of a constant one you have some variable offset you want to apply, which may be positive or negative: so which "side" of the comparison you need to put it on depends on the signedness - now the code gets really messy.
There is a similar issue with code that tries to iterate down to and including zero. Something like while (index-- > 0)
works fine, but the apparently equivalent while (--index >= 0)
will never terminate for an unsigned value. Your compiler might warn you when the right hand side is literal zero, but certainly not if it is a value determined at runtime.
Counterpoint
Many might argue that signed values also have discontinuities, but they are very far away from zero. I really consider this a separate problem of "overflow", both signed and unsigned values may overflow at very large values. In many cases overflow is impossible due to constraints on the possible range of the values, and overflow of many 64-bit values may be physically impossible). Even if possible, the chance of an overflow related bug is often minuscule compared to an "at zero" bug, and overflow occurs for unsigned values too. So unsigned combines the worst of both worlds: potentially overflow with very large magnitude values, and a discontinuity at zero. Signed only has the former.
Many will argue "you lose a bit" with unsigned. This is often true - but not always (if you need to represent differences between unsigned values you'll lose that bit anyways: so many 32-bit things are limited to 2 GiB anyways, or you'll have a weird grey area where say a file can be 4 GiB, but you can't use certain APIs on the second 2 GiB half).
Even in the cases where unsigned buys you a bit: it doesn't buy you much: if you had to support more than 2 billion "things", you'll probably soon have to support more than 4 billion.
Logically, unsigned values are a subset of signed values
Mathematically, unsigned values (non-negative integers) are a subset of signed integers (just called _integers).2. Yet signed values naturally pop out of operations solely on unsigned values, such as subtraction. We might say that unsigned values aren't closed under subtraction. The same isn't true of signed values.
Want to find the "delta" between two unsigned indexes into a file? Well you better do the subtraction in the right order, or else you'll get the wrong answer. Of course, you often need a runtime check to determine the right order! When dealing with unsigned values as numbers, you'll often find that (logically) signed values keep appearing anyways, so you might as well start of with signed.
Counterpoint
As mentioned in footnote (2) above, signed values in C++ aren't actually a subset of unsigned values of the same size, so unsigned values can represent the same number of results that signed values can.
True, but the range is less useful. Consider subtraction, and unsigned numbers with a range of 0 to 2N, and signed numbers with a range of -N to N. Arbitrary subtractions result in results in the range -2N to 2N in _both cases, and either type of integer can only represent half of it. Well it turns out that the region centered around zero of -N to N is usually way more useful (contains more actual results in real world code) than the range 0 to 2N. Consider any of typical distribution other than uniform (log, zipfian, normal, whatever) and consider subtracting randomly selected values from that distribution: way more values end up in [-N, N] than [0, 2N] (indeed, resulting distribution is always centered at zero).
64-bit closes the door on many of the reasons to use signed values as numbers
I think the arguments above were already compelling for 32-bit values, but the overflow cases, which affect both signed and unsigned at different thresholds, do occur for 32-bit values, since "2 billion" is a number that can exceeded by many abstract and physical quantities (billions of dollars, billions of nanoseconds, arrays with billions of elements). So if someone is convinced enough by the doubling of the positive range for unsigned values, they can make the case that overflow does matter and it slightly favors unsigned.
Outside of specialized domains 64-bit values largely remove this concern. Signed 64-bit values have an upper range of 9,223,372,036,854,775,807 - more than nine quintillion. That's about the age of the universe measured in nanoseconds. That's more money that you'll need to track, that's a larger array than would fit in all RAM in all the computers on each for the foreseeable future, etc. So maybe 9 quintillion is enough for everybody (for now).
When to use unsigned values
Note that the style guide doesn't forbid or even necessarily discourage use of unsigned numbers. It concludes with:
Do not use an unsigned type merely to assert that a variable is non-negative.
Indeed, there are good uses for unsigned variables:
When you want to treat an N-bit quantity not as an integer, but simply a "bag of bits". For example, as a bitmask or bitmap, or N boolean values or whatever. This use often goes hand-in-hand with the fixed width types like
uint32_t
anduint64_t
since you often want to know the exact size of the variable. A hint that a particular variable deserves this treatment is that you only operate on it with with the bitwise operators such as~
,|
,&
,^
,>>
and so on, and not with the arithmetic operations such as+
,-
,*
,/
etc.Unsigned is ideal here because the behavior of the bitwise operators is well-defined and standardized. Signed values have several problems, such as undefined and unspecified behavior when shifting, and an unspecified representation.
- When you actually want modular arithmetic. Sometimes you actually want 2^N modular arithmetic. In these cases "overflow" is a feature, not a bug. Unsigned values give you what you want here since they are defined to use modular arithmetic. Signed values cannot be (easily, efficiently) used at all since they have an unspecified representation and overflow is undefined.
0.5 After I wrote this I realized this is nearly identical to Jarod's example, which I hadn't seen - and for good reason, it's a good example!
1 We're talking about size_t
here so usually 2^32-1 on a 32-bit system or 2^64-1 on a 64-bit one.
2 In C++ this isn't exactly the case because unsigned values contain more values at the upper end than the corresponding signed type, but the basic problem exists that manipulating unsigned values can result in (logically) signed values, but there is no corresponding issue with signed values (since signed values already include unsigned values).
Some of the answers here mention the surprising promotion rules between signed and unsigned values, but this seems more like a problem relating to mixing signed and unsigned values, and doesn't necessarily explain why signed is preferred over unsigned, outside of mixing scenarios.
In my experience, outside of mixed comparisons and promotion rules, there are two primary reasons why unsigned values are big bug producers.
Unsigned values have a discontinuity at zero, the most common value in programming
Both unsigned and signed integers have discontinuities at their minimum and maximum values, where they wrap around (unsigned) or cause undefined behavior (signed). For `unsigned` these points are at zero and `UINT_MAX`. For `int` they are at `INT_MIN` and `INT_MAX`. Typical values of `INT_MIN` and `INT_MAX` on a system with 4-byte `int` values are -2^31 and 2^31-1, and on such a system `UINT_MAX` is typically 2^32-1.
The primary bug-inducing problem with `unsigned` that doesn't apply to `int` is that it has a discontinuity at zero. Zero is, of course, a very common value in programs, along with other small values like 1, 2 and 3. It is common to add and subtract small values, especially 1, in various constructs, and if you subtract anything from an `unsigned` value that happens to be zero, you just got a massive positive value and an almost certain bug.
Consider code that iterates over all values in a vector by index, except the last[0.5]:
for (size_t i = 0; i < v.size() - 1; i++) // do something
This works fine until one day you pass in an empty vector. Instead of doing zero iterations, you get `v.size() - 1` == a giant number[1], and you'll do 4 billion iterations and all but have a buffer overflow vulnerability.
You need to write it like this:
for (size_t i = 0; i + 1 < v.size(); i++) // do something
So it can be "fixed" in this case, but only by carefully thinking about the unsigned nature of `size_t`. Sometimes you can't apply the fix above because instead of a constant 1 you have some variable offset you want to apply, which may be positive or negative: so which "side" of the comparison you need to put it on depends on the signedness - now the code gets really messy.
There is a similar issue with code that tries to iterate down to and including zero. Something like `while (index-- > 0)` works fine, but the apparently equivalent `while (--index >= 0)` will never terminate for an unsigned value. Your compiler might warn you when the right-hand side is a literal zero, but certainly not if it is a value determined at runtime.
Counterpoint
Many might argue that signed values also have discontinuities, but they are very far away from zero. I really consider this a separate problem of "overflow": both signed and unsigned values may overflow at very large values. In many cases overflow is impossible due to constraints on the possible range of the values, and overflow of many 64-bit values may be physically impossible. Even when possible, the chance of an overflow-related bug is often minuscule compared to an "at zero" bug, and overflow occurs for unsigned values too. So unsigned combines the worst of both worlds: potential overflow with very large magnitude values, and a discontinuity at zero. Signed has only the former.
Many will argue "you lose a bit" with unsigned. This is often true, but not always (if you need to represent differences between unsigned values you'll lose that bit anyway: so many 32-bit things are limited to 2 GiB anyway, or you'll have a weird grey area where, say, a file can be 4 GiB but you can't use certain APIs on the second 2 GiB half).
Even in the cases where unsigned buys you a bit, it doesn't buy you much: if you have to support more than 2 billion "things", you'll probably soon have to support more than 4 billion.
Logically, unsigned values are a subset of signed values
Mathematically, unsigned values (non-negative integers) are a subset of signed values (just called integers)[2]. Yet signed values naturally pop out of operations solely on unsigned values, such as subtraction. We might say that unsigned values aren't closed under subtraction. The same isn't true of signed values.
Want to find the "delta" between two unsigned indexes into a file? Well, you better do the subtraction in the right order, or else you'll get the wrong answer. Of course, you often need a runtime check to determine the right order! When dealing with unsigned values as numbers, you'll often find that (logically) signed values keep appearing anyway, so you might as well start off with signed.
Counterpoint
As mentioned in footnote (2) above, unsigned values in C++ aren't actually a subset of signed values of the same size, so unsigned values can represent the same number of results that signed values can.
True, but the range is less useful. Consider subtraction, and unsigned numbers with a range of 0 to 2N, and signed numbers with a range of -N to N. Arbitrary subtractions result in values in the range -2N to 2N in both cases, and either type of integer can only represent half of it. It turns out that the region centered around zero, -N to N, is usually way more useful (it contains more actual results in real-world code) than the range 0 to 2N. Consider any typical distribution other than uniform (log, zipfian, normal, whatever) and consider subtracting randomly selected values from that distribution: way more values end up in [-N, N] than in [0, 2N] (indeed, the resulting distribution is always centered at zero).
64-bit closes the door on many of the reasons to use signed values as numbers
I think the arguments above were already compelling for 32-bit values, but the overflow cases, which affect both signed and unsigned at different thresholds, do occur for 32-bit values, since "2 billion" is a number that can be exceeded by many abstract and physical quantities (billions of dollars, billions of nanoseconds, arrays with billions of elements). So if someone is convinced enough by the doubling of the positive range for unsigned values, they can make the case that overflow does matter and that it slightly favors unsigned.
Outside of specialized domains, 64-bit values largely remove this concern. Signed 64-bit values have an upper range of 9,223,372,036,854,775,807: more than nine quintillion. That's about the age of the universe measured in nanoseconds. That's more money than you'll ever need to track, and a larger array than would fit in all the RAM in all the computers on earth for the foreseeable future. So maybe 9 quintillion is enough for everybody (for now).
When to use unsigned values
Note that the style guide doesn't forbid or even necessarily discourage use of unsigned numbers. It concludes with:
Do not use an unsigned type merely to assert that a variable is non-negative.
Indeed, there are good uses for unsigned variables:
- When you want to treat an N-bit quantity not as an integer, but simply as a "bag of bits". For example, as a bitmask or bitmap, or N boolean values, or whatever. This use often goes hand-in-hand with the fixed-width types like `uint32_t` and `uint64_t`, since you often want to know the exact size of the variable. A hint that a particular variable deserves this treatment is that you only operate on it with the bitwise operators such as `~`, `|`, `&`, `^`, `>>` and so on, and not with the arithmetic operations such as `+`, `-`, `*`, `/`, etc. Unsigned is ideal here because the behavior of the bitwise operators is well-defined and standardized. Signed values have several problems, such as undefined and unspecified behavior when shifting, and an unspecified representation.
- When you actually want modular arithmetic. Sometimes you actually want 2^N modular arithmetic. In these cases "overflow" is a feature, not a bug. Unsigned values give you what you want here since they are defined to use modular arithmetic. Signed values cannot be (easily, efficiently) used at all since they have an unspecified representation and overflow is undefined.
[0.5] After I wrote this I realized it is nearly identical to Jarod's example, which I hadn't seen - and for good reason, it's a good example!
[1] We're talking about `size_t` here, so usually 2^32-1 on a 32-bit system or 2^64-1 on a 64-bit one.
[2] In C++ this isn't exactly the case, because unsigned values contain more values at the upper end than the corresponding signed type, but the basic problem exists that manipulating unsigned values can result in (logically) signed values; there is no corresponding issue with signed values (since signed values already include unsigned values).
answered by BeeOnRope
I agree with everything you've posted, but "64 bits should be enough for everyone" sure seems way too close to "640k ought to be enough for everyone". – Andrew Henle
@Andrew - yup, I chose my words carefully :). – BeeOnRope
Well, you might want to work a bit on equating defined wrap-around for unsigned types with full undefined behavior for signed types. – Deduplicator
"64-bit closes the door on unsigned values" --> Disagree. Some integer programming tasks are simply not a case of counting and do not need negative values, yet need power-of-2 widths: passwords, encryption, bit graphics all benefit from unsigned math. Many ideas here point out why code could use signed math when able, yet fall very short of making unsigned types useless and closing the door on them. – chux
@chux - the title of that section is probably a bit overstated: I really mean what I say in the following text, that it closes the loophole in the arguments above for most types of "counting" and "indexing" values. Unsigned absolutely still has its place for the "bag of bits" type scenarios, where you aren't doing math on it. Can you elaborate a bit on the integer programming case? Are those values used with mathematical expressions? There are certainly exceptions to every rule, and integer programming may be a good one (too obscure to reverse this type of style guide entry, though). – BeeOnRope
As stated, mixing `unsigned` and `signed` might lead to unexpected behaviour (even if well defined).
Suppose you want to iterate over all elements of a vector except for the last five; you might wrongly write:
for (int i = 0; i < v.size() - 5; ++i) foo(v[i]); // Incorrect
// for (int i = 0; i + 5 < v.size(); ++i) foo(v[i]); // Correct
Suppose `v.size() < 5`; then, as `v.size()` is `unsigned`, `v.size() - 5` would be a very large number, and so `i < v.size() - 5` would be true for a larger-than-expected range of values of `i`. UB then happens quickly (out-of-bounds access once `i >= v.size()`).
If `v.size()` had returned a signed value, then `v.size() - 5` would have been negative, and in the above case the condition would have been false immediately.
On the other side, an index should be in the range `[0; v.size())`, so `unsigned` makes sense. Signed also has its own issues, such as UB on overflow or implementation-defined behaviour for right shift of a negative signed number, but these are less frequent sources of bugs for iteration.
While I myself use signed numbers whenever I can, I don't think that this example is strong enough. Someone who uses unsigned numbers for a long time surely knows this idiom: instead of `i < size() - X`, one should write `i + X < size()`. Sure, it's a thing to remember, but it is not that hard to get accustomed to, in my opinion. – geza
What you are saying is basically that one has to know the language and the coercion rules between types. I don't see how this changes whether one uses signed or unsigned, as the question asks. Not that I recommend using signed at all if there is no need for negative values. I agree with @geza, only use signed when necessary. This makes the Google guide questionable at best. Imo it's bad advice. – too honest for this site
@toohonestforthissite The point is the rules are arcane, silent and major causes of bugs. Using exclusively signed types for arithmetic relieves you of the issue. BTW using unsigned types for the purpose of enforcing positive values is one of the worst abuses for them. – Passer By
Thankfully, modern compilers and IDEs give warnings when mixing signed and unsigned numbers in an expression. – Alexey B.
@PasserBy: If you call them arcane, you have to call the integer promotions and the UB for overflow of signed types arcane, too. And the very common sizeof operator returns an unsigned anyway, so you do have to know about them. That said: if you don't want to learn the language details, just don't use C or C++! Considering google promotes Go, maybe that's exactly their goal. The days of "don't be evil" are long gone… – too honest for this site
answered by Jarod42
One of the most hair-raising examples of an error is when you MIX signed and unsigned values:
#include <iostream>
int main() {
    auto qualifier = -1 < 1u ? "makes" : "does not make";
    std::cout << "The world " << qualifier << " sense" << std::endl;
}
The output:
The world does not make sense
Unless you have a trivial application, it's inevitable you'll end up with either dangerous mixes between signed and unsigned values (resulting in runtime errors) or, if you crank up warnings and make them compile-time errors, a lot of static_casts in your code. That's why it's best to strictly use signed integer types for math or logical comparison. Only use unsigned for bitmasks and types representing bits.
Modeling a type as unsigned based on the expected domain of the values of your numbers is a Bad Idea. Most numbers are closer to 0 than they are to 2 billion, so with unsigned types, a lot of your values are closer to the edge of the valid range. To make things worse, the final value may be in a known positive range, but while evaluating expressions, intermediate values may underflow, and if they are used in intermediate form they may be VERY wrong values. Finally, even if your values are expected to always be positive, that doesn't mean that they won't interact with other variables that can be negative, so you end up with a forced situation of mixing signed and unsigned types, which is the worst place to be.
Modeling a type to be unsigned based on the expected domain of the values of your numbers is a Bad Idea *if you don't treat implicit conversions as warnings and are too lazy to use proper type casts.* Modeling your types on their expected valid values is completely reasonable, just not in C/C++ with built-in types. – villasv
@user7586189 It's a good practice to make invalid data impossible to instantiate, so having positive-only variables for sizes is perfectly reasonable. But you can't fine-tune C/C++ built-in types to disallow by default bad casts like the one in this answer, and the validity ends up being the responsibility of someone else. If you're in a language with stricter casts (even between built-ins), expected-domain modeling is a pretty good idea. – villasv
Note, I did mention cranking up warnings and setting them to errors, but not everyone does. I still disagree @villasv with your statement about modeling values. By choosing unsigned, you are ALSO implicitly modeling every other value it may come into contact with, without having much foresight of what that will be. And almost certainly getting it wrong. – Chris Uzdavinis
Modeling with the domain in mind is a good thing. Using unsigned to model the domain is NOT. (Signed vs unsigned should be chosen based on types of usage, not range of values, unless it's impossible to do otherwise.) – Chris Uzdavinis
Once your codebase has a mix of signed and unsigned values, when you turn up warnings and promote them to errors, the code ends up littered with static_casts to make the conversions explicit (because the math still needs to be done). Even when correct, it's error-prone, harder to work with, and harder to read. – Chris Uzdavinis
up vote
12
down vote
One of the most hair-raising examples of an error is when you MIX signed and unsigned values:
#include <iostream>
int main()
auto qualifier = -1 < 1u ? "makes" : "does not make";
std::cout << "The world " << qualifier << " sense" << std::endl;
The output:
The world does not make sense
Unless you have a trivial application, it's inevitable you'll end up with either dangerous mixes between signed and unsigned values (resulting in runtime errors) or if you crank up warnings and make them compile-time errors, you end up with a lot of static_casts in your code. That's why it's best to strictly use signed integers for types for math or logical comparison. Only use unsigned for bitmasks and types representing bits.
Modeling a type to be unsigned based on the expected domain of the values of your numbers is a Bad Idea. Most numbers are closer to 0 than they are to 2 billion, so with unsigned types, a lot of your values are closer to the edge of the valid range. To make things worse, the final value may be in a known positive range, but while evaluating expressions, intermediate values may underflow and if they are used in intermediate form may be VERY wrong values. Finally, even if your values are expected to always be positive, that doesn't mean that they won't interact with other variables that can be negative, and so you end up with a forced situation of mixing signed and unsigned types, which is the worst place to be.
8
Modeling a type to be unsigned based on the expected domain of the values of your numbers is a Bad Idea *if you don't treat implicit conversions as warnings and are too lazy to use proper type casts.* Modeling your types on their expected valid values is completely reasonable, just not in C/C++ with built-in types.
– villasv
yesterday
1
@user7586189 It's a good practice to make invalid data impossible to instantiate, so having positive-only variables for sizes is perfectly reasonable. But you can't fine tune C/C++ built-in types to disallow by default bad casts like the one in this answer, and the validity ends up being the responsibility of someone else. If you're in a language with stricter casts (even between built-ins), expected-domain modeling is a pretty good idea.
– villasv
yesterday
1
Note, I did mention cranking up warnings and setting them to errors, but not everyone does. I still disagree @villasv with your statement about modeling values. By choosing unsigned, you are ALSO implicitly modeling every other value it may come into contact with without having much foresight of what that will be. And almost certainly getting it wrong.
– Chris Uzdavinis
yesterday
1
Modeling with the domain in mind is a good thing. Using unsigned to model the domain is NOT. (Signed vs unsigned should be chosen based on types of usage, not range of values, unless it's impossible to do otherwise.)
– Chris Uzdavinis
yesterday
2
Once your codebase has a mix of signed and unsigned values, when you turn up warnings and promote them to errors, the code ends up littered with static_casts to make the conversions explicit (because the math still needs to be done.) Even when correct, it's error-prone, harder to work with, and harder to read.
– Chris Uzdavinis
yesterday
 |
show 6 more comments
answered yesterday
Chris Uzdavinis
up vote
5
down vote
Why is using an unsigned int more likely to cause bugs than using a signed int?
With certain classes of tasks, using an unsigned type is no more likely to cause bugs than using a signed type.
Use the right tool for the job.
What is wrong with modular arithmetic? Isn't that the expected behaviour of an unsigned int?
Why is using an unsigned int more likely to cause bugs than using a signed int?
If the task is well-matched: nothing wrong. No, not more likely.
Security, encryption, and authentication algorithms count on unsigned modular math.
Compression/decompression algorithms, as well as various graphics formats, also benefit from and are less buggy with unsigned math.
Any time bit-wise operators and shifts are used, the unsigned operations do not get messed up with the sign-extension issues of signed math.
Signed integer math has an intuitive look and feel readily understood by all, including learners to coding. C/C++ was not targeted originally, nor should be now, as an intro language. For rapid coding that employs safety nets concerning overflow, other languages are better suited. For lean fast code, C assumes that coders know what they are doing (they are experienced).
A pitfall of signed math today is the ubiquitous 32-bit int, which for so many problems is wide enough for the common tasks that range checking is skipped. This leads to complacency, and overflow is not coded against. Instead,
for (int i = 0; i < n; i++)
and
int len = strlen(s);
are viewed as OK, because n is assumed < INT_MAX and strings will never be too long, rather than the first being fully range-protected or the second using size_t, unsigned, or even long long.
C/C++ developed in an era that included 16-bit as well as 32-bit int, and the extra bit an unsigned 16-bit size_t affords was significant. Attention was needed in regard to overflow issues, be it int or unsigned.
With Google's applications running on 32-bit (or wider) platforms rather than ones with 16-bit int/unsigned, the ample range of int affords a lack of attention to +/- overflow. It makes sense for such applications to encourage int over unsigned. Yet int math is not well protected.
The narrow 16-bit int/unsigned concerns still apply today in select embedded applications.
Google's guidelines apply well for the code they write today. They are not a definitive guideline for the much wider scope of C/C++ code.
One reason that I can think of using signed int over unsigned int, is that if it does overflow (to negative), it is easier to detect.
In C/C++, signed int math overflow is undefined behavior, and so not certainly easier to detect than the defined behavior of unsigned math.
As @Chris Uzdavinis commented, mixing signed and unsigned is best avoided by all (especially beginners), and otherwise coded carefully when needed.
1
You make a good point that an int doesn't model the behavior of an "actual" integer either. Undefined behavior on overflow is not how a mathematician thinks of integers: there's no possibility of "overflow" with an abstract integer. But these are machine storage units, not a math guy's numbers.
– tchrist
18 hours ago
add a comment |
edited 10 hours ago
Peter Mortensen
answered 18 hours ago
chux
up vote
1
down vote
I have some experience with Google's style guide, AKA the Hitchhiker's Guide to Insane Directives from Bad Programmers Who Got into the Company a Long Long Time Ago. This particular guideline is just one example of the dozens of nutty rules in that book.
Errors only occur with unsigned types if you try to do arithmetic with them (see Chris Uzdavinis's example above), in other words, if you use them as numbers. Unsigned types are not intended to be used to store numeric quantities; they are intended to store counts such as the size of containers, which can never be negative, and they can and should be used for that purpose.
The idea of using arithmetical types (like signed integers) to store container sizes is idiotic. Would you use a double to store the size of a list, too? That there are people at Google storing container sizes using arithmetical types and requiring others to do the same thing says something about the company. One thing I notice about such dictates is that the dumber they are, the more they need to be strict do-it-or-you-are-fired rules, because otherwise people with common sense would ignore the rule.
While I get your drift, the blanket statements made would virtually eliminate bitwise operations if unsigned types could only hold counts and not be used in arithmetic. So the "Insane Directives from Bad Programmers" part makes more sense.
– David C. Rankin
2 hours ago
@DavidC.Rankin Please don't take it as a "blanket" statement. Obviously there are multiple legitimate uses for unsigned integers (like storing bitwise values).
– Tyler Durden
2 hours ago
Yes, yes -- I didn't, that's why I said "I get your drift."
– David C. Rankin
2 hours ago
add a comment |
answered 2 hours ago
Tyler Durden
up vote
-7
down vote
One of the main issues is that unsigned integers can't be negative. This can lead to buggy behavior around negative numbers. Take for example:

```cpp
#include <cstdio>
int main() {
    unsigned int myInt = 0;
    myInt -= 1;                 // wraps around instead of becoming -1
    std::printf("%u\n", myInt); // prints 4294967295 where int is 32 bits
}
```

Try that and you will see strange results (the printed value is an extremely large number).
5
Gee, but what if my numbers should not be negative (like array indices, for example), and I would like to express that in their type? Or if your `int` underflows / overflows (which is UB for `signed`, but not for `unsigned`)?
– DevSolar
yesterday
3
@DevSolar Using `unsigned` to express that a number should not be negative is thought by many to be a mistake.
– NathanOliver
yesterday
2
@NathanOliver: [who] [citation needed]. ;-) (Don't. Just pontificating.)
– DevSolar
yesterday
4
@NathanOliver: I still have to meet these "many". The many I know prefer unsigned integers (though not necessarily `unsigned`) where appropriate. Among other things, they have well-defined overflow behaviour.
– too honest for this site
yesterday
7
Maybe I wasn't clear: "This can lead to buggy behavior with negative numbers" - no, it can't, because unsigned integers cannot be negative. It's the same argument as "a bike is not as useful as a truck, because you can't transport a grand piano with it" - that's just not an argument, because a bike is not meant to.
– too honest for this site
yesterday
show 8 more comments
edited 22 hours ago
answered yesterday
The Mattbat999
3
Try to do `unsigned int x = 0; --x;` and see what `x` becomes. Without limit checks, the size could suddenly get some unexpected value that could easily lead to UB.
– Some programmer dude
yesterday
21
At least unsigned overflow has a well-defined behavior and produces expected results.
– VTT
yesterday
24
On an unrelated (to your question but not to Google style guides) note, if you search a little you will find some (sometimes rightfully) criticism of the Google style guides. Don't take them as gospel.
– Some programmer dude
yesterday
15
On the other hand, `int` overflow and underflow are UB. You are less likely to experience a situation where an `int` would try to express a value it can't than a situation that decrements an `unsigned int` below zero, but the kind of people that would be surprised by the behavior of `unsigned int` arithmetic are the kind of people that could also write code causing `int`-overflow-related UB, like using `a < a + 1` to check for overflow.
– François Andrieux
yesterday
5
If an unsigned integer overflows, it's well-defined behaviour. If a signed integer overflows, it's undefined behaviour. I prefer well-defined behaviour, but if your code can't handle overflowed values, you are lost either way. The difference is: for signed you are already lost at the overflowing operation, for unsigned only in the code that follows. The only point I agree with is that if you need negative values, an unsigned integer type is the wrong choice - obviously.
– too honest for this site
yesterday