A bit (binary digit) is the smallest unit of data in a computer. A bit has either the value 0 or 1, where 0 represents the electrical signal "off" and 1 represents "on". The binary numeral system uses base 2. Let's take a look at an example. Here we have a byte, which consists of 8 bits. It has these values:
0    0    0    0    1    1    0    1
2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
The 8 bits / 1 byte 00001101 equals the number 13. Because the rightmost bit is 1, we know that we have to calculate 2 to the power of 0, which is 1. So we need to remember the 1. To the left we have a 0, so we know that we don't add 2^1.
Then to the left we have another 1, so we calculate 1 + 2^2, which is 5. Remember the 5 now. Next we have a 1, so we need to add 2^3 to the 5, and 5 + 2^3 is 13.
As all the following bits are set to 0, we don't have to calculate anything there. It would have continued with 2^4, 2^5, 2^6 and 2^7. Easy, right? :) While programming, have you worked with bits yet?
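If you want to try this in code, here is a minimal Python sketch (my own example, just to illustrate the idea) that converts a binary string the same way we did by hand: it walks from the rightmost bit and adds 2^position for every bit that is set.

```python
def binary_to_decimal(bits: str) -> int:
    """Convert a string of 0s and 1s (e.g. "00001101") to its decimal value."""
    total = 0
    # Reverse the string so position 0 is the rightmost bit.
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position  # add 2^position only when the bit is set
    return total

print(binary_to_decimal("00001101"))  # 13
print(0b00001101)                     # Python also understands binary literals: 13
```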