How important is it in languages?
Well, it's pretty fundamental. You really need to learn at least hexadecimal if you want to work with memory much -- not so much to understand what's in it, but so you can figure out addressing correctly.
Binary is important too, particularly if you want to work with shifting, masking, flags, and more.
And it's actually easier than you think. If anything, decimal is harder to work with than hex or binary when you're doing programming things, because decimal _isn't_ suited to programming. It's fine for loop counters, but computers are base-2 machines, not base-10.
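To make the shifting/masking/flags point concrete, here's a minimal sketch in Python (the flag names here are made up for illustration) showing hex and binary literals doing exactly the kind of work described above:

```python
# Hypothetical flag register: hex and binary literals side by side,
# with the masking and shifting operations mentioned in the text.
flags = 0b0101           # bits 0 and 2 are set
READ_BIT = 0x1           # mask for bit 0 (illustrative name)
WRITE_BIT = 0x2          # mask for bit 1 (illustrative name)

print(flags & READ_BIT)   # 1 -- the read flag is set
print(flags & WRITE_BIT)  # 0 -- the write flag is clear
print(flags << 1)         # 10 -- shifting left one bit doubles the value
```

Notice how much easier the masks are to read in hex/binary than they would be as the decimal constants 5, 1, and 2.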
---
Try this:
Code:
Hex Decimal
---------------------
0x0 0
0x1 1
0x2 2
0x3 3
0x4 4
0x5 5
0x6 6
0x7 7
0x8 8
0x9 9
0xA 10
0xB 11
0xC 12
0xD 13
0xE 14
0xF 15
---------------------
Hex is just that simple. In decimal, let's say you roll from 9 to 10. What happened? The right digit went back to zero, and a new digit, the left one, became a 1. Same thing with hex. In hex, rolling from 15 (0xF) to 16 (0x??) means that the rightmost digit goes to 0, and a new leftmost digit is added -- a '1'. So 0x10 = 16 decimal.
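You can watch that rollover happen with Python's built-in `hex()` and `int()`:

```python
# The hex rollover from the text: 0xF + 1 carries into a new digit,
# exactly like 9 + 1 does in decimal.
print(hex(15))         # 0xf
print(hex(15 + 1))     # 0x10 -- rightmost digit resets, a new '1' appears
print(int("10", 16))   # 16   -- and "10" read as base 16 is sixteen
```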
Hexadecimal is base 16 -- which is a power of 2 (so it translates to _binary_ directly). Each hexadecimal digit is equivalent to exactly 4 (four) binary bits.
Code:
Binary Hex
----------------------
0000 0x0
0001 0x1
0010 0x2
0011 0x3
0100 0x4
0101 0x5
0110 0x6
0111 0x7
1000 0x8
1001 0x9
1010 0xA
1011 0xB
1100 0xC
1101 0xD
1110 0xE
1111 0xF
----------------------
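If you want to check the table yourself, Python's `format()` can regenerate it -- a quick sketch:

```python
# Reproduce the binary/hex table above: each of the 16 nibble values,
# printed as four binary bits next to its single hex digit.
for value in range(16):
    print(f"{value:04b}  0x{value:X}")
```

Every hex digit on the right lines up with exactly four bits on the left -- that one-digit-to-four-bits mapping is the whole trick.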
Okay, okay. So the above binary patterns are _nice_ to look at, but how do you _count_ in binary, right? Here's how-- kinda like decimal:
Each bit is worth a certain amount -- this never changes*. When a binary number is written out, the lowest-worth bit goes on the right and the highest-worth bit on the left. You may have seen the acronyms for these:
LSB = least significant bit
MSB = most significant bit.
That is to say, a low-value bit isn't worth much, while a high-value bit is worth a lot.
A related (but separate) idea is 'endianness' -- the order in which the _bytes_ of a multi-byte value are stored in memory. Big-endian processors (Motorola 68000, for example) store the most significant byte at the lowest address, while little-endian processors (Intel x86) store the least significant byte first. Note that endianness is a property of the processor, not the compiler, and it doesn't change how you _write_ a number -- written notation always puts the MSB on the left.
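A quick way to see byte order in action is Python's `int.to_bytes`, which lets you lay out the same value both ways -- a minimal sketch:

```python
# The 16-bit value 0x1234 laid out in memory in both byte orders.
value = 0x1234
print(value.to_bytes(2, "big").hex())     # 1234 -- most significant byte first
print(value.to_bytes(2, "little").hex())  # 3412 -- least significant byte first
```

Same value, same bits -- only the order the bytes land in memory differs.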
Now that we know which end is which-- how do we count the darn things? What are those pesky bits worth?--
Make a table--
Code:
Binary:  0   0   0   0
Value:   8 + 4 + 2 + 1
4 bits is called a 'nibble' (sometimes spelled 'nybble'). 8 bits is called a 'byte'. Above, we have a nibble. As you can see, each bit has a specific value. So, using the table above, what is this binary value worth in decimal?
1101
It should be worth: 8 + 4 + 0 + 1 = 13 decimal. And 13 decimal converts to what in hex? It should convert to 0xD.
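The same 8-4-2-1 weighting can be written as a one-liner, summing each bit times its power of two -- a sketch of the worked example above:

```python
# Apply the 8-4-2-1 weights to the bits of 1101, lowest bit first.
bits = "1101"
total = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(total)       # 13
print(hex(total))  # 0xd
```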
What about this binary number?
01001011
Well, it should be worth: 0 + 64 + 0 + 0 + 8 + 0 + 2 + 1 = 75 decimal. And 75 decimal converts to what in hex?
Code:
0 1 0 0 | 1 0 1 1
------------------+-------------------
8 + 4 + 2 + 1 | 8 + 4 + 2 + 1
------------------+-------------------
4 | B
------------------+-------------------
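That nibble-at-a-time split is exactly what a shift and a mask do -- here's a minimal sketch of the table above:

```python
# Split 75 (binary 01001011) into its two nibbles with a shift and a mask.
value = 75
high = value >> 4   # top nibble: 0100 -> 4
low = value & 0xF   # bottom nibble: 1011 -> 11 (0xB)
print(high, low)              # 4 11
print(f"0x{high:X}{low:X}")   # 0x4B
```

So 75 decimal is 0x4B -- the two nibbles of the binary pattern read off directly as two hex digits.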
Hope that helps. Sorry, can't write more, I think the fan on my video card just failed...
-----
*caveat -- Each bit's value doubles (a power of 2) from LSB to MSB. However, when you convert a multi-digit hex number (say 0xA4, or 0xAC59436D) a nibble at a time, the 1-2-4-8 weights repeat from LSB to MSB within each nibble, even though across the whole number the values keep doubling.
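The caveat can be checked directly: converting 0xA4 one nibble at a time (with the 8-4-2-1 weights applied inside each nibble) gives the same answer as converting the whole thing at once -- a quick sketch:

```python
# 0xA4 nibble by nibble: A -> 1010, 4 -> 0100; glued together they are
# the full 8-bit pattern, worth the same as the original number.
print(f"{0xA:04b} {0x4:04b}")  # 1010 0100
print(0b10100100)              # 164
print(0xA4)                    # 164 -- same value either way
```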