Representing an abstract concept such as data in the form of high and low voltages, as a series of 1’s and 0’s, can be quite useful, but the result is still very difficult to read and understand. Imagine having to memorize all of those sequences in order to be an efficient programmer; it would take exorbitant amounts of time to create even the simplest of software programs. Fortunately, the process of representing human-relevant concepts to a computer does not stop at binary code. Since we know that a certain sequence of 1’s and 0’s on a certain type of machine, such as the sequence “00110101”, can be used to represent the number “53”, why not make the sequences “01101000” and “01101001” represent the characters “h” and “i”, put together to make the word “hi”? This is exactly what software developers do: the process of taking a complex set of data and representing it in a more human-readable format is known as abstraction. As you can imagine, this makes it a lot simpler to communicate with a computer. Instead of typing “01101000 01101001” we can simply type “hi”, and thanks to the process of abstraction our computer knows what to display, even though at the simplest level it is still reading a sequence of high and low voltages.
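The mapping described above can be checked directly. Here is a quick sketch in Python (chosen purely for illustration; the same idea applies in any language), using the ASCII character encoding:

```python
# A fixed-width binary sequence can stand for a number...
assert int("00110101", 2) == 53          # the sequence from the text, read as a number

# ...or, via a character encoding such as ASCII, for text.
h = chr(int("01101000", 2))              # 01101000 = 104 = "h" in ASCII
i = chr(int("01101001", 2))              # 01101001 = 105 = "i" in ASCII
print(h + i)                             # prints "hi"
```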
Abstraction is great because it makes the process of communicating with computers easier, so you might be asking yourself: why, then, are programming languages and source code so cryptic and so far removed from a common natural spoken human language such as English? Surely, if we can abstract complex data, why not keep abstracting concepts until a computer can understand a human language like plain and simple English?
For example, if we wanted to draw a circle on a computer screen, why can’t we just say to a computer:
“Computer, could you please draw a circle that has a diameter of 55 pixels and whose center is somewhere close to the top left of my screen, thanks?”
Instead we communicate with our computers like this:
ellipse(56, 46, 55, 55);
There are three main issues to consider here: speed, efficiency and ambiguity.
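The ambiguity problem is easy to see if we model the function call from earlier. Below is a minimal Python sketch with a hypothetical `ellipse` stand-in (the function body and the returned structure are assumptions made purely for illustration, not how a real drawing library works):

```python
def ellipse(x, y, w, h):
    """Hypothetical stand-in: record exactly what would be drawn."""
    return {"center": (x, y), "width": w, "height": h}

# The function call pins every value down unambiguously...
shape = ellipse(56, 46, 55, 55)
assert shape == {"center": (56, 46), "width": 55, "height": 55}

# ...while the English request leaves the machine guessing:
# "somewhere close to the top left" could mean (0, 0), (56, 46),
# (10, 80), and so on -- there is no single correct interpretation.
```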
One of the main reasons we cannot continue to abstract our communications with computers to the point where everyday English is used is the speed at which data can be processed.
As we abstract data to make it more human-readable, that same data becomes less machine-readable; before a machine can do anything useful with it, it must first convert the data back into a machine-readable format.
All of this abstracting and un-abstracting consumes valuable system resources and energy that could be used more effectively elsewhere. Consequently, as we move further away from machine code we require faster, more powerful computers to process the data we create.
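The round trip itself is easy to demonstrate. Here is a small Python sketch of the two directions of translation, using ASCII as in the earlier example:

```python
text = "hi"

# Abstracting down: human-readable text -> the bit patterns the machine stores.
bits = " ".join(f"{byte:08b}" for byte in text.encode("ascii"))
assert bits == "01101000 01101001"

# Abstracting back up: bit patterns -> human-readable text again.
restored = bytes(int(chunk, 2) for chunk in bits.split()).decode("ascii")
assert restored == "hi"

# Every extra layer of abstraction adds translation steps like these,
# and each step costs processor time and energy.
```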
It’s estimated that, under certain circumstances, a consumer-grade Intel Core i7 3.3 GHz processor can perform 147,600 million instructions per second, and if you’ve ever used one of these processors you’ll probably also have noticed that, even with all of that processing power, there are still situations where your computer can lag!
This does not necessarily have anything to do with the processor itself; it is simply due to the massive amounts of data that an average modern computer is expected to cope with. To put this into perspective, consider that 40 years ago, sending a man to the moon relied on machines such as the Apollo Guidance Computer, with a clock speed of 0.043 MHz and 64 Kbytes of RAM, running software of around 6 MB in size. By today’s standards a single modern home computer could be about 77 million times more powerful than the computers used to launch the Apollo spacecraft, navigate it to the moon and return it safely to Earth. Seen in that light, the demands we place on our modern computers are considerable.