Binary Numbers and Computer Speed

Posted by PITHOCRATES - April 2nd, 2014

Technology 101

Computers are Good at Arithmetic thanks to Binary Numbers

Let’s do a fun little experiment.  Get a piece of paper and a pen or a pencil.  And then using long division divide 4,851 by 34.  Time yourself.  See how long it takes to complete this.  If the result is not a whole number take it to at least three places past the decimal point.  Okay?  Ready… start.

Chances are the older you are the faster you did this.  Because once upon a time you had to do long division in school.  In that ancient era before calculators.  Younger people may have struggled with this.  Because the result is not a whole number.  Few probably could do this in their head.  Most probably had a lot of scribbling on that piece of paper before they could get 3 places past the decimal point.  The answer to three places past the decimal point, by the way, is 142.676.  Did you get it right?  And, if so, how long did it take?

Probably tens of seconds.  Or minutes.  A computer, on the other hand, could crunch that out faster than you could punch the buttons on a calculator.  Because one thing computers are good at is arithmetic.  Thanks to binary numbers.  The language of all computers.  1s and 0s to most of us.  But two different states to a computer.  That make information the computer can understand and process.  Fast.
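To see just how trivial this is for a machine, here is a quick sketch in Python (one language of many that would do):

```python
# The same long-division problem.  The computer finishes this
# in a tiny fraction of a second.
quotient = 4851 / 34
print(round(quotient, 3))  # 142.676
```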

A Computer can look at Long Streams of 1s and 0s and make Perfect Sense out of Them

The numbers we use in everyday life are from the decimal numeral system.  Or base ten.  For example, the number ‘4851’ contains four digits.  Where each digit can be one of 10 values (0, 1, 2, 3…9).  And then the ‘base’ part comes in.  We say base ten because each digit is multiplied by 10 to the power of n.  Where n = 0, 1, 2, 3….  So 4851 is the sum of (4 X 10³) + (8 X 10²) + (5 X 10¹) + (1 X 10⁰).  Or (4 X 1000) + (8 X 100) + (5 X 10) + (1 X 1).  Or 4000 + 800 + 50 + 1.  Which adds up to 4851.
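Here is that same base-ten expansion as a small Python sketch, just to make the pattern explicit:

```python
# Expand 4851 into its base-ten place values: digit times 10 to the power n.
digits = [4, 8, 5, 1]                      # the decimal digits of 4851
powers = range(len(digits) - 1, -1, -1)    # exponents 3, 2, 1, 0
terms = [d * 10**n for d, n in zip(digits, powers)]
print(terms)       # [4000, 800, 50, 1]
print(sum(terms))  # 4851
```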

But the decimal numeral system isn’t the only numeral system.  You can do this with any base number.  Such as 16.  What we call hexadecimal.  Which uses 16 distinct values (0, 1, 2, 3…9, A, B, C, D, E, and F).  So 4851 is the sum of (1 X 16³) + (2 X 16²) + (15 X 16¹) + (3 X 16⁰).  Or (1 X 4096) + (2 X 256) + (15 X 16) + (3 X 1).  Or 4096 + 512 + 240 + 3.  Which adds up to 4851.  Or 12F3 in hexadecimal.  Where F=15.  So ‘4851’ requires four positions in decimal.  And four positions in hexadecimal.  Interesting.  But not very useful.  As 12F3 isn’t a number we can do much with in long division.  Or even on a calculator.
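Python’s built-in conversions confirm the arithmetic, if you’d rather not grind it out by hand:

```python
# Decimal to hexadecimal and back.
print(hex(4851))        # 0x12f3
print(int('12F3', 16))  # 4851
print(1*16**3 + 2*16**2 + 15*16 + 3)  # 4851, the expansion above
```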

Let’s do this one more time.  And use 2 for the base.  What we call binary.  Which uses 2 distinct values (0 and 1).  So 4851 is the sum of (1 X 2¹²) + (0 X 2¹¹) + (0 X 2¹⁰) + (1 X 2⁹) + (0 X 2⁸) + (1 X 2⁷) + (1 X 2⁶) + (1 X 2⁵) + (1 X 2⁴) + (0 X 2³) + (0 X 2²) + (1 X 2¹) + (1 X 2⁰).  Or (1 X 4096) + (0 X 2048) + (0 X 1024) + (1 X 512) + (0 X 256) + (1 X 128) + (1 X 64) + (1 X 32) + (1 X 16) + (0 X 8) + (0 X 4) + (1 X 2) + (1 X 1).  Or 4096 + 0 + 0 + 512 + 0 + 128 + 64 + 32 + 16 + 0 + 0 + 2 + 1.  Which adds up to 4851.  Or 1001011110011 in binary.  Which is gibberish to most humans.  And a little too cumbersome for long division.  Unless you’re a computer.  They love binary numbers.  And can look at long streams of these 1s and 0s and make perfect sense out of them.
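And the same check works for binary:

```python
# Decimal to binary and back.
print(bin(4851))                # 0b1001011110011
print(int('1001011110011', 2))  # 4851
```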

A Computer can divide two Numbers in a few One-Billionths of a Second

A computer doesn’t see 1s and 0s.  It sees two different states.  A high voltage and a low voltage.  An open switch and a closed switch.  An on and an off.  Because of this, machines that use binary numbers can be extremely simple.  Computers process bits of information.  Where each bit can be only one of two things (1 or 0, high or low, open or closed, on or off, etc.).  Greatly simplifying the electronic hardware that holds these bits.  If computers processed decimal numbers, however, just imagine the complexity that would require.

If it worked with decimal numbers a computer would need, say, 10 different voltage levels.  Requiring the ability to produce 10 discrete voltage levels.  And the ability to detect 10 different voltage levels.  Greatly increasing the circuitry for each digit.  Requiring far more power consumption.  And producing far more damaging heat that requires more cooling capacity.  As well as adding more circuitry that can break down.  So keeping computers simple makes them cheaper and more reliable.  And if each bit requires less circuitry you can pack in a lot more bits when using binary numbers than you can when using decimal numbers.  Allowing bigger and more powerful number-crunching ability.

Computers load and process data in bytes.  Where a byte has 8 bits.  Which makes hexadecimal so useful.  If you have 2 bytes of data you can break it down into 4 groups of 4 bits.  Or nibbles.  Each nibble is a 4-bit binary number that can be easily converted into a single hexadecimal number.  In our example the binary number 0001 0010 1111 0011 easily converts to 12F3 where the first nibble (0001) converts to hexadecimal 1.  The second nibble (0010) converts to hexadecimal 2.  The third nibble (1111) converts to hexadecimal F.  And the fourth nibble (0011) converts to hexadecimal 3.  Making the man-machine interface a lot simpler.  And making our number crunching easier.
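Here is a small sketch of that nibble-to-hex conversion, padding our number out to two full bytes:

```python
# Split a 16-bit value into four nibbles and show each one as a hex digit.
value = 0b0001001011110011           # 4851, padded to two bytes
for shift in (12, 8, 4, 0):          # most significant nibble first
    nibble = (value >> shift) & 0xF  # isolate 4 bits
    print(format(nibble, '04b'), '->', format(nibble, 'X'))
# 0001 -> 1, 0010 -> 2, 1111 -> F, 0011 -> 3
```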

The simplest binary arithmetic operation is addition.  And it happens virtually instantaneously at the bit level.  We call the electronics that make this happen logic gates.  A typical logic gate has two inputs.  Each input can be one of two states (high voltage or low voltage, etc.).  Each possible combination of inputs produces a defined output (high voltage or low voltage, etc.).  If you change one of the inputs the output can change.  Computers have vast arrays of these logic gates that can process many bytes of data at a time.  All you need is a ‘pulsing’ clock to sequentially apply these inputs.  With the outputs providing an input for the next logical operation on the next pulse of the clock.
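A classic example of gates doing addition is the full adder, which adds one bit position (plus a carry) out of XOR, AND and OR gates.  Here is a minimal behavioral sketch, just the logic, not real hardware:

```python
# A full adder built from the basic gates: one sum bit and one carry bit.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                        # XOR gates form the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # AND/OR gates form the carry
    return s, carry_out

# Ripple the carry through 4 bits: 0110 (6) + 0011 (3) = 1001 (9).
a_bits, b_bits = [0, 1, 1, 0], [0, 0, 1, 1]
carry, result = 0, []
for a, b in zip(reversed(a_bits), reversed(b_bits)):  # least significant bit first
    s, carry = full_adder(a, b, carry)
    result.insert(0, s)
print(result)  # [1, 0, 0, 1]
```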

The faster the clock speed the faster the computer can crunch numbers.  We once measured clock speeds in megahertz (1 megahertz is one million pulses per second).  Now the fastest CPUs run in gigahertz (1 gigahertz is one billion pulses per second).  Because of this incredible speed a computer can divide two numbers to many places past the decimal point in a few one-billionths of a second.  And be correct.  While it takes us tens of seconds.  Or even minutes.  And our answer could very well be wrong.
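You can even measure this yourself, roughly.  The sketch below times a million divisions in Python.  Keep in mind the interpreter adds a lot of overhead, so the raw silicon is faster still:

```python
import timeit

# Time one million divisions, then report the average cost of one.
# (Variables in setup prevent Python from pre-computing the answer.)
total = timeit.timeit('a / b', setup='a, b = 4851, 34', number=1_000_000)
print(f'{total / 1_000_000 * 1e9:.0f} nanoseconds per division')
```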

www.PITHOCRATES.com


Boolean Algebra, Logic Gates, Flip-Flop, Bit, Byte, Transistor, Integrated Circuit, Microprocessor and Computer Programming

Posted by PITHOCRATES - February 1st, 2012

Technology 101

A Binary System is one where a Bit of Information can only have One of Two States 

Parents can be very logical when it comes to their children.  Children always want dessert.  But they don’t always clean their rooms or do their homework.  So some parents make dessert conditional.  For the children to have their dessert they must clean their rooms AND do their homework.  Both things are required to get dessert.  Or you could say this in another way.  If the children either don’t clean their rooms OR don’t do their homework they will forfeit their dessert.  Stated in this way they only need to do one of two things (not clean their room OR not do their homework) to forfeit their dessert. 

This was an introduction to logic.  George Boole created a mathematical way to express this logic. We call it Boolean algebra.  But relax.  There will be no algebraic equations here.

In the above example things had only one of two states.  Room cleaned.  Room not cleaned.   Homework done.  Homework not done.  This is a binary system.  Where a bit of information can only have one of two states.  We gave these states names.  We could have used anything.  But in our digital age we chose to represent these two states with either a ‘1’ or a ‘0’.  One piece of information is either a ‘1’.  And if it’s not a ‘1’ then it has to be a ‘0’.  In the above example a clean room and complete homework would both be 1s.  And a dirty room and incomplete homework would be 0s.  Where ‘1’ means a condition is ‘true’.  And a ‘0’ means the condition is ‘false’.

Miniaturization allowed us to place more Transistors onto an Integrated Circuit

Logic gates are electrical/electronic devices that process these bits of information to make a decision.  The above was an example of two logic gates.  Can you guess what we call them?  One was an AND gate.  The other was an OR gate.  Because one needed both conditions (the first AND the second) to be true to trigger a true output.  Children get dessert.  The other needed only one condition (the first OR the second) to be true to trigger a true output.  Children forfeit dessert. 
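Here is the dessert logic written out in Python, a sketch of those two gates.  Note the two rules always agree with each other, which is De Morgan’s law at work:

```python
# The dessert rules as Boolean expressions.
def gets_dessert(room_clean, homework_done):
    return room_clean and homework_done             # the AND gate

def forfeits_dessert(room_clean, homework_done):
    return (not room_clean) or (not homework_done)  # the OR gate

# Check every combination: forfeiting is always the opposite of getting.
for room in (True, False):
    for homework in (True, False):
        assert forfeits_dessert(room, homework) == (not gets_dessert(room, homework))
```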

We made early gates with electromechanical relays and vacuum tubes.  Claude Shannon used Boolean algebra to optimize telephone routing switches made of relays.  But these devices were big, took up a lot of space, needed lots of wiring, consumed a lot of power and generated a lot of heat.  Especially as we combined more and more of these logic gates together to make more complex decisions.  Think of what happens when you press a button to call an elevator (an input).  Doors close (an action).  When doors are closed (an input) the car moves (an action).  The car slows down when near a floor.  The car stops on the floor.  When the car stops the doors open.  Etc.  If you were ever in an elevator control room you could hear a symphony of clicks and clacks from the relays as they processed new inputs and issued action commands to safely move people up and down a building.  Some Boolean number crunching, though, could often eliminate a lot of redundant gates while still making the same decisions based on the same input conditions.

The physical size constraints of putting more and more relays or vacuum tubes together limited these decision-making machines, though.  But new technology solved that problem.  By exchanging relays and vacuum tubes for transistors.  Made from small amounts of semiconductor material.  Such as silicon.  As in Silicon Valley.  These transistors are very small and consume far less power.  Which allowed us to build larger and more complex logic arrays.  Built with latching flip-flops.  Such as the J-K flip-flop.  Logic gates wired together to store a single bit of information.  A ‘1’ or a ‘0’.  Eight of these devices in a row can hold 8 bits of information.  Or a byte.  When a clock was added to these flip-flops they would check the inputs and change their outputs (if necessary) with each pulse of the clock.  Miniaturization allowed us to place more and more of these transistors onto an integrated circuit.  A computer chip.  Which could hold a lot of bytes of information. 
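Here is a toy behavioral model of a J-K flip-flop, just its logic table, not the gate wiring:

```python
# A J-K flip-flop holds one bit.  On each clock pulse it holds (J=0, K=0),
# sets (J=1, K=0), resets (J=0, K=1) or toggles (J=1, K=1).
class JKFlipFlop:
    def __init__(self):
        self.q = 0  # the stored bit

    def clock(self, j, k):
        if j and k:
            self.q ^= 1   # toggle
        elif j:
            self.q = 1    # set
        elif k:
            self.q = 0    # reset
        return self.q     # J=0, K=0 holds the current value

# Eight flip-flops in a row hold one byte.
byte = [JKFlipFlop() for _ in range(8)]
```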

To Program Computers we used Assembly Language and High-Level Programming Languages like FORTRAN

The marriage of latching flip-flops and a clock gave birth to the microprocessor.  A sequential digital logic device.  Where the microprocessor checks inputs in sequence and, based on the instructions stored in the computer’s memory (those registers built from flip-flops encoded with bytes of binary instructions), executes output actions.  Like the elevator.  The microprocessor notes the inputs.  It then looks in its memory to see what those inputs mean.  And then executes the instructions for that set of inputs.  The bigger the registers and the faster the clock speed the faster this sequence runs.
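Here is a toy sketch of that sequence.  The three-instruction machine below is invented purely for illustration; real instruction sets are far richer:

```python
# A toy fetch-execute loop: each clock tick, read one instruction
# from memory and act on it.  The opcodes are made up for this example.
memory = [('LOAD', 5), ('ADD', 3), ('STORE', 0)]
accumulator, data = 0, [0]

for opcode, operand in memory:   # one instruction per clock pulse
    if opcode == 'LOAD':
        accumulator = operand
    elif opcode == 'ADD':
        accumulator += operand
    elif opcode == 'STORE':
        data[operand] = accumulator

print(data)  # [8]
```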

Putting information into these registers can be tedious.  Especially if you’re programming in machine language.  Entering a ‘1’ or a ‘0’ for each bit in a byte.  To help humans program these machines we developed assembly language.  Where we wrote lines of a program using words we could better understand.  Then used an assembler to convert that programming into the machine language the machine could understand.  Because the machine only looks at bytes of data full of 1s and 0s and compares them to a stored program for instructions to generate an output.  To improve on this we developed high-level programming languages.  Such as FORTRAN.  FORTRAN, short for formula translation, made more sense to humans and was therefore far easier for people to work with.  A compiler would then translate the human-readable code into the machine language the computer could understand.
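An assembler, at its heart, is a translation table.  Here is a toy version for the made-up machine above, with an equally made-up numeric encoding:

```python
# A toy assembler: translate mnemonics a human can read into numeric
# opcodes a machine can store.  The encoding is invented for illustration.
OPCODES = {'LOAD': 0x01, 'ADD': 0x02, 'STORE': 0x03}

def assemble(source):
    machine_code = []
    for line in source:
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

print(assemble(['LOAD 5', 'ADD 3', 'STORE 0']))
# [(1, 5), (2, 3), (3, 0)]
```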

Computing has come a long way from those electromechanical relays and vacuum tubes.  Where once you had to be an engineer or a computer scientist to program and operate a computer.  Through the high-tech revolution of the Eighties and Silicon Valley.  Where chip making changed our world and created an economic boom the likes of which few have ever seen.  To today, where anyone can use a laptop computer or a smartphone to surf the Internet.  And they don’t have to understand any of the technology that makes it work.  Which is why people curse when their device doesn’t do what they want it to do.  It doesn’t help.  But it’s all they can do.  Curse.  Unlike an engineer or computer scientist.  Who doesn’t curse.  Much.

www.PITHOCRATES.com
