Floating point numbers are represented differently from the values you're used to keeping in registers. They're stored in RAM as nine-byte data structures, in scientific notation. For example, this is what −1337 looks like:
| Sign | Exp | Significand | | | | | | |
|---|---|---|---|---|---|---|---|---|
| | | S0 | S1 | S2 | S3 | S4 | S5 | S6 |
| $80 | $83 | $13 | $37 | $00 | $00 | $00 | $00 | $00 |
In the sign byte, bit 7 (the bit furthest to the left when you write it out) tells you whether the number is positive or negative, while bits 2 and 3 tell you whether it's real or complex. In other words, a valid sign byte is one of the following:

- %00000000 ($00): positive and real
- %10000000 ($80): negative and real
- %00001100 ($0C): positive and complex
- %10001100 ($8C): negative and complex
So in this case, the sign byte would be %10000000, or $80.
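If it helps to see that rule as code, here's a minimal sketch of how the two flags combine into a sign byte. It's plain Python used purely for illustration (the calculator itself works on these bytes in Z80 assembly), and the `sign_byte` helper is a made-up name for this example:

```python
def sign_byte(negative: bool, is_complex: bool) -> int:
    """Hypothetical helper: build a TI-style sign byte from the two flags."""
    byte = 0x00
    if negative:
        byte |= 0x80   # bit 7 set = negative
    if is_complex:
        byte |= 0x0C   # bits 2 and 3 set = complex
    return byte

# -1337 is negative and real, so its sign byte is $80.
assert sign_byte(negative=True, is_complex=False) == 0x80
```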
Next comes the exponent byte, and this is where the fun starts. Floating points are always stored in scientific notation (as in −1.337×10³), so you need to know what power of 10 the number is being multiplied by. In this case, that would be three.
But you don't just store a three here; no, that would be too easy, and remember that TI wants to screw us up. So instead, you add $80 to whatever the exponent is, then store it. So for −1337, or −1.337×10³, it would be 3 + $80, or $83.
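If you want to sanity-check that arithmetic, here's a hedged sketch of the biasing step in plain Python (the `exponent_byte` helper is made up, and `log10` is just a convenient stand-in for "find the power of 10"):

```python
import math

def exponent_byte(value: float) -> int:
    """Hypothetical helper: biased exponent byte for a nonzero value."""
    # Power of 10 in the normalized form d.ddd... x 10^e.
    # (Floating-point log10 can be off by one right at exact powers of ten;
    # good enough for illustration.)
    e = math.floor(math.log10(abs(value)))
    return (0x80 + e) & 0xFF   # add the $80 bias, keep it to one byte

# -1337 = -1.337 x 10^3, so the exponent byte is 3 + $80 = $83.
assert exponent_byte(-1337) == 0x83
```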
The last seven bytes, S0 through S6, are the actual number itself. Each byte holds two digits (hence the 14 digits of accuracy on a TI-83 Plus series calc).
It's stored in BCD (binary-coded decimal) format, in which each nibble (four bits, or half a byte) holds a decimal digit (0–9). So a valid byte in the significand is one whose nibbles are both in the 0–9 range: $00–$09, $10–$19, and so on up to $90–$99.
The first digit (upper nibble of S0) is the characteristic, or the number before the decimal point when written in scientific notation (in this case, the 1 in −1.337×10³).
You could actually store a non-BCD value there (such as $BA). It causes some interesting effects when you try doing math with it. Try it for yourself to see (it shouldn't cause a crash, but I won't be held responsible if it does).
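To tie the three fields together, here's a hedged sketch of packing the digit string into BCD and lining the whole nine bytes up. Again, it's plain Python for illustration only, and `pack_significand` is a made-up name:

```python
def pack_significand(digits: str) -> list[int]:
    """Hypothetical helper: pack up to 14 decimal digits into bytes S0-S6."""
    digits = digits.ljust(14, "0")   # pad out to the full 14 digits
    return [
        (int(digits[i]) << 4) | int(digits[i + 1])   # two digits per byte
        for i in range(0, 14, 2)
    ]

# -1.337 x 10^3 has the digit string "13370000000000":
# S0 = $13, S1 = $37, and the rest are $00.
significand = pack_significand("1337")
assert significand == [0x13, 0x37, 0x00, 0x00, 0x00, 0x00, 0x00]

# Sign byte $80, exponent byte $83, then the significand -- the same nine
# bytes shown in the table for -1337 at the top of this section.
assert [0x80, 0x83] + significand == [0x80, 0x83, 0x13, 0x37,
                                      0x00, 0x00, 0x00, 0x00, 0x00]
```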