Getting Started Programming Lesson 1 - WTF Am I Doing?
As 2012 begins to wind down, I think I can say that, if not this year, next year will be the year of the programmer. This year, I’ve had numerous friends who are not programmers express an interest in programming, whether it’s wanting to know how to build a web site or wishing they understood computers well enough to figure out why their games always seem to run slowly.
I’ve been wanting to start a course like this for around a year now. I had originally intended to do it in PHP, but recently I’ve been working almost entirely in Ruby, and I find that, once you strip away Rails, it makes a fantastic entry-level language, since it takes care of a lot of the grunt work you don’t want to overwhelm a beginner with.
This course is going to proceed more or less the way I learned programming: from the ground up. Hence why Lesson 1 is entitled “WTF Am I Doing?” I think the most important thing most basic classes miss is explaining exactly what it is you’re doing when you program, and why things have to be done a certain way.
So before we get into installing Ruby and getting you set up to code anything, I want to talk a little bit about history and where we’ve been and why we do things the way we do.
In the beginning of computing, computers were really just circuits. They still are, if we’re perfectly honest. These circuits have grown exponentially more complex and more powerful as time has gone on. Your phone has more computing power than the entire Apollo program had when it landed astronauts on the moon.
The earliest computers, the room-sized ones you might have seen in old photos, took in binary data. At the end of the day, everything we do is converted into electrical signals of either 1 or 0: on or off, power or no power. Computer engineers, the fellows who design the circuits and processors we call computers, have built them to react in certain ways when they see certain patterns of 1’s and 0’s. We call these “commands”, and by combining them, we can do pretty much anything.
The first computers had a sole function: compute numbers. You might be floored to know that’s all they still do! What some smart people did before I was even born was come up with ways to represent more than just numbers in a computer, by agreeing on codes that map numbers to letters (ASCII being a famous example). So the text you are reading is made up of a series of numbers. The browser you are using is a collection of millions if not billions of numbers. And all of these are just collections of 1’s and 0’s. Pretty insane stuff if you stop to think about it!
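You don’t have to take my word for it, either. Once we have Ruby installed (next lesson!), it can show you the numbers hiding behind text. Treat this as a hedged preview rather than something to type in just yet:

    # Every character is secretly a number, and every number has a
    # pattern of 1's and 0's behind it.
    puts "A".ord       # prints 65        (the number behind the letter A)
    puts 65.chr        # prints A         (and back to the letter)
    puts 65.to_s(2)    # prints 1000001   (that same number as 1's and 0's)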
In the old days, computers could only crunch the numbers fed directly into them, and had no way to store the answers. So you punched the equations you needed solved onto punch cards, then fed them into the machine in order. Out the other side popped the answer, and if you made a mistake, you had to start all over and feed the punch cards through again.
As you can imagine, that got old really fast, so again the smart guys and gals who came before invented what we refer to as computer memory. Even today, it comes in two flavors. The first flavor is your hard drive, or semi-permanent memory. This memory is meant to last for a long time, but can be overwritten if you need it to be. The other type is called RAM, random access memory. This is the fast, temporary workspace a computer uses for whatever it’s working on right now; it’s wiped clean when the power goes off. So as you’re reading this, think about the fact that the browser you are using exists as an “application” on your hard drive where you installed it. What happens when you “open” it is that you tell your operating system, itself just a very, very complex program, to load that program into RAM, where it can then be used.
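Here’s a tiny Ruby sketch of those two flavors of memory. Another hedged preview, and the file name note.txt is just something I made up for the example:

    message = "Hello from RAM!"        # this string lives in RAM and
                                       # vanishes when the program ends
    File.write("note.txt", message)    # this copy lives on the hard drive
                                       # ("note.txt" is a made-up name) and
                                       # survives until overwritten
    puts File.read("note.txt")         # pull it off the disk, back into
                                       # RAM, where it can be used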
Now, if you’ve been keeping up, you might notice a small issue here. “Why do you have to put it in RAM at all? Why not just read it off the hard drive?” For that, we have to talk about how computers crunch massive programs like Photoshop or even Notepad. A processor has only a tiny amount of memory built into it, so small you couldn’t do anything useful with just that nowadays, so we need a way to keep feeding it data. The trouble is that your processor can crunch numbers far faster than your hard drive can serve them up, even with the new solid state drives. RAM serves as a much faster holding place for data that’s about to be used. Even moving your mouse takes processor power!
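You can even measure the gap, roughly. This hedged little experiment copies a million bytes that are already sitting in RAM, then reads the same million bytes off the hard drive. The exact numbers will vary wildly by machine, and your operating system caches files behind the scenes, so the true gap to a spinning disk is even bigger than what this prints:

    require "benchmark"

    data = "x" * 1_000_000                  # a million bytes sitting in RAM
    File.write("speed_test.tmp", data)      # the same bytes on the hard drive
                                            # ("speed_test.tmp" is a made-up name)

    puts Benchmark.realtime { 100.times { data.dup } }                     # copy within RAM
    puts Benchmark.realtime { 100.times { File.read("speed_test.tmp") } }  # fetch from disk

    File.delete("speed_test.tmp")           # tidy up the scratch file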
So since we can’t wait forever for the data to come off the hard drive (or, even worse, an old DVD!), we put most or all of the program in RAM. In video games you might be familiar with the dreaded “loading screen.” That’s what you get when the game has to make room for another level or something else that isn’t already in RAM. Games that are smart with their RAM usage stream data in the background, so you rarely see a loading screen!
So to review: we’re writing code that lives on the hard drive and is executed in RAM. Great, now how do we do that? Depending on when you asked, you’d get different answers. In the earliest days, you’d have written Assembly code, which is one step up from entering those 1’s and 0’s I was talking about earlier. Then a language called C was invented, which has proved to be very popular and is still in use today! Since then, more and more languages have come along that make it easier and easier to program. We call this “levels of abstraction.” I also have a personal name for it, “inherent complexity”: how many lines of code it takes to get something done, which also tells you how much you have to know to do it. The higher-level languages usually have lower inherent complexity, so something that might be 100 lines or more in Assembly is just ten lines in C# or Java, and maybe a single line in Ruby.
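To make that concrete, suppose we want to add up all the even numbers from 1 to 100. In Assembly that means juggling registers, a loop counter, compares, and jumps across dozens of lines. In Ruby, it’s essentially one line (my example problem, not anything official):

    # Sum the even numbers from 1 to 100: dozens of lines of Assembly,
    # one expression in Ruby.
    total = (1..100).select(&:even?).sum
    puts total    # prints 2550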
So the question now is: how do we get from high-level code like Java or Ruby down to the 1’s and 0’s I keep going on about? The answer comes in two forms: compilers and interpreters. The first approach is to “compile” your code: translate something easy to read, like print ‘Hello World!’, into something lower, like 110100101010010010101010100010010100101 (note: that’s not what it actually translates to, I just typed 1’s and 0’s because I’m cool like that). Compiling is done before you can actually run your code; it takes what we call “code” and turns it into an “executable”, or a program. These are also called “binaries”, since they are exactly that: binary data. The classic examples of compiled languages are C and C++; Java and C# sit in between, compiling to an intermediate “bytecode” that a virtual machine runs.
The other way is called interpreted code. Rather than being translated ahead of time, interpreted code is fed, as it runs, through a program called an interpreter, which reads your code, figures out what lower-level commands to run, and runs them on the spot. Examples of interpreted languages are PHP, Ruby, and Python.
We’ll get into the pros and cons of each approach in a later lesson, because there are good reasons for choosing either one. For now, just know that they exist, and that for our lessons we will use Ruby, which comes with its own interpreter that our code runs on.
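As one last preview of what’s coming (we’ll install Ruby in the next lesson, so don’t worry about running this yet), the interpreted workflow looks like this. You save plain, human-readable text in a file (the name hello.rb is just my example) and hand it straight to the ruby interpreter, with no separate compile step:

    # hello.rb -- plain text, no compiling required.
    # ("hello.rb" is a made-up name; call the file whatever you like.)
    # Run it from a terminal with:   ruby hello.rb
    puts "Hello World!"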
So, to recap: we write “code”, human-readable text, which we run through a system someone smart developed that translates it (eventually) down to the 1’s and 0’s fed to the processor, and magic happens. Give it a few lessons, though, and it won’t be magic anymore!
Coming up in the next lesson: installing Ruby and your first program!