Why does 0.2 + 0.1 equal 0.30000000000000004? And why does "" == false evaluate to true? Let's dig into the reasons behind some of JavaScript's most infamous quirks.
0.1 + 0.2 And The Floating-Point Format
Every JavaScript developer has gone through the rite of passage of typing 0.1 + 0.2 in the console and watching it resoundingly fail to produce 0.3, returning instead the funny-looking 0.30000000000000004.
But what exactly is floating-point arithmetic?
Computers have to represent numbers of all sizes, from the distances between planets down to the distances between atoms. On paper, it’s easy to write a massive number or a minuscule quantity without worrying about the space it takes. Computers don’t have that luxury since they have to store all kinds of numbers in binary, in a small amount of memory.
Take an 8-bit unsigned integer, for example. In binary, it can hold integers ranging from 0 to 255.
The keyword here is integers. It can’t represent any decimals between them. To fix this, we could add an imaginary decimal point somewhere along our 8 bits so the bits before the point are used to represent the integer part and the rest are used for the decimal part. Since the point is always in the same imaginary spot, it’s called a fixed-point decimal. But it comes at a great cost: placing the point in the middle, for example, reduces the range from 0–255 to exactly 0–15.9375.
Having greater precision means sacrificing range, and vice versa. We also have to take into consideration that computers need to please a large number of users with different requirements. An engineer building a bridge doesn’t worry too much if the measurements are off by just a little, say a hundredth of a centimeter. But, on the other hand, that same hundredth of a centimeter can end up costing much more for someone making a microchip. The precision that’s needed is different, and the consequences of a mistake can vary.
Another consideration is the amount of memory numbers take up; dedicating something like a megabyte to a single long number isn’t feasible.
The floating-point format was born from this need to represent both large and small quantities with precision and efficiency. It does so in three parts:
- A sign bit, a single bit that represents whether the number is positive or negative.
- A significand, or mantissa, that contains the number’s digits.
- An exponent that specifies where the decimal (or binary) point is placed relative to the beginning of the mantissa, similar to how scientific notation works. Consequently, the point can move around to any position, hence the floating point.
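We can peek at those three parts from JavaScript itself. Here is a minimal sketch, assuming the 64-bit IEEE 754 doubles JavaScript actually uses (1 sign bit, 11 exponent bits, 52 mantissa bits), that reads the raw bits of 0.1:

const buffer = new ArrayBuffer(8);
const view = new DataView(buffer);
view.setFloat64(0, 0.1);

// Read the eight bytes back as one 64-bit unsigned integer.
const bits = view.getBigUint64(0);
const sign = bits >> 63n; // 1 bit
const exponent = (bits >> 52n) & 0x7ffn; // 11 bits, biased by 1023
const mantissa = bits & 0xfffffffffffffn; // 52 bits

console.log(sign, exponent, mantissa);
// 0n 1019n 2702159776422298n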
An 8-bit floating-point format can represent numbers between tiny fractions and 480 (and their negatives), but notice that it can’t represent all of the numbers in that range. It’s impossible since 8 bits can represent only 256 distinct values. Inevitably, many numbers cannot be accurately represented; there are gaps along the range. Computers, of course, work with more bits to increase accuracy and range, commonly 32 or 64 bits, but it’s still impossible to represent all numbers accurately: a small price to pay if we consider the range we gain and the memory we save.
0.3 is obviously below the Number.MAX_SAFE_INTEGER threshold, so why can’t we get it when adding 0.1 and 0.2? The floating-point format struggles with some fractional numbers, but that isn’t a flaw unique to floating point; it happens in any number system.
To see this, let’s try to represent one-third (1⁄3) in base-10: 0.333333333…. No matter how many digits we write, the result will never be exactly one-third. In the same way, we cannot accurately represent some fractional numbers in base-2, or binary. Take, for example, 0.2. We can write it with no problem in base-10, but if we try to write it in binary we get a recurring 1001 that repeats infinitely:
0.001 1001 1001 1001 1001 1001 10 [...]
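You can reproduce this in any JavaScript console; the built-in toString with a radix of 2 prints the binary expansion of the value that actually gets stored, already cut off and rounded at the mantissa’s 52 bits:

console.log((0.2).toString(2));
// 0.001100110011001100110011001100110011001100110011001101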
We obviously can’t have an infinitely long number, so at some point the mantissa has to be truncated, making it impossible not to lose precision in the process. If we convert 0.2 from double-precision floating-point back to base-10, we can see the actual value saved in memory:

0.200000000000000011102230246251565404236316680908203125
That’s why an operation like 0.2 + 0.2 correctly computes 0.4, while 0.2 + 0.1 fails to compute 0.3. We can see what’s happening under the hood if we sum the actual values of 0.1 and 0.2.
This is the actual value saved when writing 0.1:

0.1000000000000000055511151231257827021181583404541015625
If we manually sum up the actual values of 0.1 and 0.2, we will see the culprit:

0.3000000000000000166533453693773481063544750213623046875
That value can’t be represented either, so it’s rounded to the nearest value the format can hold, which is displayed as 0.30000000000000004. You can check the real values saved at float.exposed.
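You can also print the stored values from any console; toFixed accepts up to 100 digits, enough to spell them out in full (the final zeros are just toFixed padding):

console.log((0.1).toFixed(55));
// 0.1000000000000000055511151231257827021181583404541015625
console.log((0.2).toFixed(55));
// 0.2000000000000000111022302462515654042363166809082031250
console.log((0.1 + 0.2).toFixed(55));
// 0.3000000000000000444089209850062616169452667236328125000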
Floating-point has its known flaws, but its positives outweigh them, and it’s standard around the world. In that sense, it’s actually a relief that all modern systems give us the same 0.30000000000000004 result across architectures. It might not be the result you expect, but it’s a result you can predict.
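That predictability is why the usual workaround is to compare with a tolerance instead of exact equality. Here is a minimal sketch; nearlyEqual is a hypothetical helper, not a built-in, and the right tolerance depends on your use case:

function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  // True when a and b differ by less than the given tolerance.
  return Math.abs(a - b) < epsilon;
}

console.log(0.1 + 0.2 === 0.3); // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true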
Why Does "" == false Evaluate To true?
The issue comes from JavaScript being weakly typed: there are many occasions where the language will try to do an implicit conversion between different types, e.g., from strings to numbers or between falsy and truthy values. This is especially true when using the equality (==) and plus sign (+) operators. The rules for type coercion are intricate, hard to remember, and even plain wrong in certain situations. It’s better to avoid using == and always prefer the strict equality operator (===). Notice how the loose operator coerces the string "2" into a number:
console.log("2" == 2); // true
The inverse applies to the plus sign operator (+), which will try to coerce a number into a string whenever possible:
console.log(2 + "2"); // "22"
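A few more examples, all verifiable in the console, show how far the coercion rules reach:

console.log("" == false); // true: both sides coerce to the number 0
console.log(null == undefined); // true: a special case in the coercion rules
console.log("2" - 1); // 1: the minus operator always coerces to numbers
console.log([] + {}); // "[object Object]": both operands coerce to strings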
“I would have avoided some of the compromises that I made when I first got early adopters, and they said, ‘Can you change this?’”
— Brendan Eich
The most glaring example is that we ended up with two equality operators, and the common advice today is to avoid the loose one (==) at all costs, replacing it with its strict homonym (===).
Why do we have two equality operators in the first place? A lot of factors, but we can point a finger at Guy L. Steele, co-creator of the Scheme programming language. He assured Eich that they could always add another equality operator later, since some Lisp dialects had five distinct equality operators! That mentality is dangerous: we can always add new features, but once they are in the language, they can never be removed, so nowadays every addition has to be rigorously analyzed.
Automatic Semicolon Insertion
A semicolon (;) is required at the end of some statements, including:
- Expression statements;
- Variable declarations with var, let, and const;
- import and export declarations;
- Class field declarations (public or private).
ASI can make code without semicolons work, but the statements it infers aren’t always the ones you intended. Take the following code:
const a = 1
(1).toString()

const b = 1
[1, 2, 3].forEach(console.log)
You can probably see where the semicolons go, and if we formatted it correctly, it would end up as:

const a = 1;
(1).toString();

const b = 1;
[1, 2, 3].forEach(console.log);

But ASI doesn’t insert semicolons at those line breaks, so the engine actually parses the code like this:
const a = 1(1).toString(); // Uncaught TypeError: 1 is not a function
const b = 1[(1, 2, 3)].forEach(console.log); // Uncaught TypeError: Cannot read properties of undefined
In conclusion, know your semicolons.
Why So Many Bottom Values?
JavaScript has two bottom values that represent the absence of a value: undefined and null (plus, arguably, a third one, NaN, which we will get to shortly). One famous quirk is typeof null returning "object". Attempting to get a property value from any of them raises an exception.
Note that, strictly speaking, no primitive values are objects, but only null and undefined aren’t subjected to boxing, which is why property access works on strings and numbers yet fails on the bottom values.
We can even think of undefined as denoting an existing property or variable that doesn’t have a value.
On the other hand, null is used to represent the absence of an object (hence its typeof returning "object", even though it isn’t one). However, having both is considered a design blunder because undefined could fulfill the role of null in almost every case. Still, there are some occasions where only null can be used, as is the case with Object.create, in which we can only create an object without a prototype by passing null.
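For example, here is a small sketch of that asymmetry between the two bottom values:

const dict = Object.create(null); // an object with no prototype at all
console.log(Object.getPrototypeOf(dict)); // null
console.log("toString" in dict); // false: nothing is inherited

Object.create(undefined); // Uncaught TypeError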
At least typeof undefined returns "undefined", as expected. null and undefined both suffer from the path problem: when trying to access a property on a bottom value, as if it were an object, an exception is raised.
// Assuming user is undefined or null:
let userName = user.name; // Uncaught TypeError
let userNick = user.name.nick; // Uncaught TypeError
There is no way around this unless we check each property value before trying to access the next one, using either the logical AND operator (&&) or optional chaining (?.):
let userName = user?.name;
let userNick = user && user.name && user.name.nick;
console.log(userName); // undefined
console.log(userNick); // undefined
Earlier, I said that NaN can be counted as a third bottom value, and it has its own shenanigans because it isn’t equal to itself! To test whether a value is NaN or not, use Number.isNaN().
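For example, note how Number.isNaN() differs from the older global isNaN(), which coerces its argument first:

console.log(NaN === NaN); // false: NaN isn't equal to itself
console.log(Number.isNaN(NaN)); // true
console.log(Number.isNaN("foo")); // false: no coercion happens
console.log(isNaN("foo")); // true: the global isNaN coerces "foo" to NaN first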
We can check for all three bottom values with the following test:

if (bottomValue === undefined || bottomValue === null || Number.isNaN(bottomValue)) {
  // Handle the missing value.
}
The Increment (++) And Decrement (--) Operators
As developers, we tend to spend more time reading code than writing it. Whether we are reading documentation, reviewing someone else’s work, or checking our own, readability will do more for our productivity than brevity. In other words, readability saves time in the long run.
That’s why I prefer using + 1 or - 1 rather than the increment (++) and decrement (--) operators.
It’s illogical to have a dedicated syntax exclusively for incrementing a value by one, on top of having a pre-increment form and a post-increment form depending on where the operator is placed. It is very easy to get the two forms reversed, and that can be difficult to debug.
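To see how easy it is to get the two forms reversed, here is a small sketch:

let i = 5;
console.log(i++); // 5: post-increment returns the old value, then adds 1
console.log(i); // 6

let j = 5;
console.log(++j); // 6: pre-increment adds 1 first, then returns the new value
console.log(j); // 6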
They shouldn’t have a place in your code, or even in the language as a whole, when we consider where they come from: the ++ and -- operators were originally crafted for the specific purpose of advancing or stepping back through memory locations.
While the use of ++ and -- remains standard among developers, an argument for readability can be made: opting for + 1 or - 1 not only aligns with the principles of clarity and explicitness but also spares us from having to deal with the pre-increment and post-increment forms.
Overall, it isn’t a life-or-death situation but a nice way to make your code more readable.
There are plenty more quirks where these came from, such as the this keyword and its multipurpose behavior.