Like natural languages, computer languages are shaped by their environments. Human cultures developed languages as a means of communicating their perceptions of internal, conceptual and external environments. However, what exists in one environment might not be part of another. Consequently, one culture might need a whole string of words to describe something uncommon in its environment, whereas another culture, where that thing is common, might use a single word to describe it.
For example, a visitor traveling to Japan in the late 1800s might describe a specific mode of transport as:
“A human-powered, two-wheeled cart that seats one or two people, reserved for the social elite.”
However, a person familiar with this mode of transport might have a specific name for it, such as: “jinrikisha” (a rickshaw).
Obviously, if you are living in an environment where such things are common, using a single word that can stand in for a whole sentence is a more efficient means of communication.
Computer languages are similar in this way: each language is developed to run efficiently under certain conditions and circumstances, yet the same language may perform less efficiently under a different set of circumstances.
What qualifies as an optimal environment for a computer programming language is something that the developer of that language has to define. An environment in this sense could be an operating system, the World Wide Web, a network, or a combination of these. This could also be a large contributing factor in explaining why there are so many different computer languages currently in existence, each with its own optimizations for specific “environments”.
Wikipedia currently lists approximately 661 popular computer languages, and this number is constantly growing, compared to its listing of 540 currently spoken languages (not including their derivatives). Of course, there have been many more spoken languages throughout human history; The Cambridge Encyclopaedia of Language estimates this number at between 3,000 and 10,000, but then again, what actually qualifies as a language is a whole other topic for discussion in itself. Computer languages share in this ambiguity. Throughout the comparatively short history of computer languages, which can arguably be traced as far back as the first programmable computer, the Z1, invented by Germany’s Konrad Zuse in 1938, programmers have seen languages rise to popularity and become virtually obsolete. One such language is LISP. LISP was in many ways synonymous with Artificial Intelligence (AI) from the mid 1950s until the early 1970s, but gradually gave way to a dwindling number of devotees, who were lured away by new programming paradigms such as object-oriented programming and other higher-level programming language concepts, which we will explore in greater detail later. Although the language never really died, some would say it became somewhat antiquated. Nonetheless, LISP started to regain popularity in the mid 1990s in an implementation currently known as Common LISP. Determining whether a programming language closely based on a previous one should be considered a new language able to stand on its own can be a difficult distinction to make, and it adds considerably to the ambiguity around what defines and distinguishes one computer language from another. Processing, too, has been subject to this debate, as its roots in the programming language Java have seen it referred to as a library for Java, while others identify it as a stand-alone programming language.
Regardless of whether a programming language derived from another can stand on its own, one of the key factors that determines its popularity, and ultimately contributes to a following of devotees who develop and maintain the language, is its ability to strike a balance between being fast enough for a computer to process and easy enough for a human to read. It is this balance that currently determines the efficiency of a computer programming language.
Computers tend to be very literal. They accept specific instructions more readily than vague descriptions of what you are hoping to achieve with their help, such that a statement like the one mentioned earlier, repeated below:
“… draw a circle that has a diameter of 55 pixels and whose center is somewhere close to the top left of my screen …”
would not be as effective as the statement:
ellipse(56, 46, 55, 55);
The first statement, although a perfectly acceptable command issued in English, would fail dismally when translated directly into computer-speak, for several reasons relating to the ambiguity associated with it.
Firstly, the longer a command is, the more prone it becomes to syntax errors. Computers are very specific about the syntax you use to communicate with them. Syntax, in terms of the English definition, is the particular arrangement of words making up a sentence. In computer terms, syntax has a very similar meaning but is even more specific in its implementation. Words must follow a particular sequence within a command issued to a computer, determined by the language you are currently programming in, and may not be rearranged into other configurations simply because you find them more meaningful.
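The point about word order can be sketched outside Processing as well. Processing’s real ellipse(x, y, w, h) draws to the screen; the stand-in version below merely records its arguments, so this plain Java example can run anywhere and show that the same four numbers, rearranged, no longer mean the same thing:

```java
public class SyntaxOrder {
    // A stand-in for a drawn shape: just the four values Processing would receive.
    record Ellipse(float x, float y, float w, float h) {}

    // Mimics the fixed parameter order of Processing's ellipse(x, y, w, h).
    static Ellipse ellipse(float x, float y, float w, float h) {
        return new Ellipse(x, y, w, h);
    }

    public static void main(String[] args) {
        Ellipse a = ellipse(56, 46, 55, 55); // centered at (56, 46), 55 wide and 55 tall
        Ellipse b = ellipse(55, 55, 56, 46); // same numbers, different order
        System.out.println(a.equals(b));     // false: the rearranged "sentence" describes a different shape
    }
}
```

Swapping the words of an English sentence often still communicates something; swapping the arguments of a command silently changes its meaning, which is exactly why the sequence matters.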
Computers have truly amazing mathematical capabilities and, predictably, mediocre social skills. Consequently, concepts such as circles and squares, which are simply mathematical concepts abstracted for human convenience, have little or no relevance to a computer. Consider that a circle could be described as an ellipse whose edge points are all equidistant from its center, and that, technically speaking, a square is actually a rectangle with its height equal to its width. Descriptive abstractions such as these, convenient for humans, are in fact very inconvenient for a computer. As a result, Processing refers to a “circle” and an “ellipse” using the same Processing-specific syntax, removing the ambiguity and abstraction of a spoken language by grouping them into the same context. So if we want to describe a circle, we refer to it as an ellipse with equal dimensions in height and width, or more specifically:
ellipse(56, 46, 55, 55);
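The “equidistant” definition above can also be checked numerically. This is a plain Java sketch, not Processing code: it samples points on the parametric edge of an ellipse and shows that when the width equals the height, every edge point sits at the same distance from the center, which is precisely what makes it a circle:

```java
public class CircleAsEllipse {
    // Distance from the center (cx, cy) to the edge point of the ellipse at angle t.
    // Parametric edge point: (cx + w/2 * cos t, cy + h/2 * sin t).
    static double edgeDistance(double cx, double cy, double w, double h, double t) {
        double x = cx + (w / 2) * Math.cos(t);
        double y = cy + (h / 2) * Math.sin(t);
        return Math.hypot(x - cx, y - cy);
    }

    public static void main(String[] args) {
        // Same values as ellipse(56, 46, 55, 55): width == height == 55.
        for (int i = 0; i < 8; i++) {
            double t = 2 * Math.PI * i / 8;
            // Every sampled edge point is 27.5 pixels from the center: a circle.
            System.out.printf("angle %d/8: distance %.1f%n", i, edgeDistance(56, 46, 55, 55, t));
        }
    }
}
```

With unequal width and height the distances vary between w/2 and h/2, and the abstraction “circle” no longer applies; to the computer there was never anything but the four numbers.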
Finally, I’m sure that if computers had a sense of humour, they would find our earlier description of the location of our circle, “somewhere close to the top left of my screen”, to be uninformed and laughable, not to mention that words such as “somewhere” and “my” might provoke something close to an existential dilemma in a computer. If you supply a computer program with specific screen positions in terms of x, y and z, and dimensions in terms of width, height and depth, communicated either explicitly or implicitly through an expression (something we will discuss in more detail later), you will find that your programs produce more predictable and efficient results, and you will remove ambiguities that could lead to errors. Screen space and dimensions are concepts we will deal with in more detail in later chapters. Of course, there are times when randomness and unpredictability are desirable qualities in a program, in which case few resources available to us can produce a series of unpredictable results more efficiently than computers. This is a testament to why people devotedly invest in computers to provide the sequences of random digits that could potentially make casinos lose millions upon millions to gambling patrons.
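As a minimal sketch of that last point, here is plain Java using java.util.Random as a stand-in for Processing’s own random() function. A few lines are enough to produce a run of digits that looks unpredictable to a human reader:

```java
import java.util.Random;

public class RandomDigits {
    // Returns n pseudo-random decimal digits as a string.
    static String digits(Random rng, int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(rng.nextInt(10)); // one digit, 0-9
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // An unseeded Random produces a different sequence on every run.
        System.out.println(digits(new Random(), 20));
    }
}
```

Strictly speaking these digits are pseudo-random: a deterministic algorithm generates them, and seeding the generator with the same value reproduces the same sequence. That is unpredictable enough for a sketch, though a real casino would demand rather stronger guarantees.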