Learn Programming with TypeScript
So, you’d like to be a programmer! Well, maybe not be one, but just do some programming. Where to begin? This is the place. Here, we’ll look at programming from the perspective of the complete beginner, making no assumptions about anything you might already know. For our exercises, we’ll focus on front end web development in TypeScript, also known as making a web page. Along the way, we’ll look at some of the historical origins of programming and how they play into its constraints and possibilities today. Are you ready? Let’s get started!
My Kid Works on Computers
When one mentions programming in casual company, often someone will say their child (or their neighbor, or their spouse) works on computers, too. Too often, such exclamations are misguided because many aspects of working on computers have nothing to do with programming.
Let’s get them out of the way:
- Setting up and building hardware.
- Setting up software.
- Designing networks.
- Designing software on paper as a set of ideas (architecture).
- Using software of any kind, including connecting various kinds.
- Fixing computer problems and tech support.
- Technical writing.
If those things aren’t programming, then what is? Programming refers to the specific act of writing code that will eventually come to run on a particular piece of hardware. We’ll examine in great depth what that code looks like and how it runs, but suffice it to say that if you are not writing code, you’re not programming. This is why programmers call each other coders, and our work coding. Another term you’ll hear is devs, short for software developers.
So… now how many people do you know who are programmers? Probably very few. It turns out that programming is something of a rare profession. Only a small fraction of the US population, a few percent at most, is employed in programming, and fewer still do it well. Programming is also a thankless job, full of brokenness, bug fixing, and the accompanying customer complaints, while praise and recognition are seldom, if ever, heard.
Where Do I Sign Up?
If it’s so terrible and difficult, why does anyone do it? One reason is surely money. Programming is consistently among the best paying professions, competing with lawyers and doctors for the top income spot. But that’s not the real attraction for most programmers. The lure of programming is that it offers unlimited possibilities: castles in the sky, as I came to think of it in a childhood epiphany. The truth of software development is that you can make absolutely anything with software. There are no limits. How many jobs do you know that can say this?
We can see that it’s true just by looking around us. Software has eaten the world. It’s in everything from agriculture and energy to transportation and video games. Even CRISPR gene editing and mRNA vaccine design are done in software. If you become a programmer, in any language and on any platform, this unlimited world is fully opened to you. It’s pretty heady stuff!
All Computers are Equivalent
A remarkable quality of computers is that, despite the evolution they’ve had since their invention in the 1940s, all computers are computationally equivalent under the hood. And I don’t mean “similar,” I mean any computer that has ever existed can be swapped for any other computer that might exist in the future and, given enough time and memory, it can do the same work.
You’ve heard that your phone has more computing power than the Apollo guidance computer, so clearly your phone could do those computations. But you may not realize that Apollo-era computers could do all the things your phone can do! We know this is true because two brilliant men, Alan Turing and Alonzo Church, independently showed with mathematics that any machine with a few simple characteristics can implement any function that can be computed.
When trying to generalize the brand new idea of computing, Turing theorized about a physical machine that could only perform a tiny set of physical operations:
- Punch a hole in an endless paper tape.
- Advance to another position on the tape.
- Read whether or not a hole had been punched in that position.
The Turing machine was just a mental exercise and didn’t need to be built. Would it surprise you to learn that Turing’s paper punch machine could also launch the Space Shuttle and play Forza? It could.
Why, then, did no one try making video calls or building self-driving cars with 1960s computers? The reason comes down to the two differences between any two machines that are Turing-complete:
- How fast they run, and…
- How easy they are to program.
The original paper tape machine idea sounds like it would be terribly slow and also awful to program. You’d have to write out “data” onto the paper tape in something resembling Morse code — with a series of holes and no-holes to represent each letter or number. You’d have to write all the machine’s “instructions” in terms of very rudimentary operations, and the result of any computation would have to be written at the end of the tape itself. But… it would be possible.
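To make this concrete, here is a toy sketch of such a tape machine in TypeScript, the language we’ll be learning. Everything here is invented for illustration: the tape is a short array of booleans, with `true` standing in for a punched hole.

```typescript
// A toy model of the paper-tape machine: true = hole punched, false = blank.
const tape: boolean[] = new Array(16).fill(false);
let position = 0;

function punch(): void {
  tape[position] = true; // punch a hole at the current position
}

function advance(steps: number): void {
  position += steps; // move the head along the tape
}

function read(): boolean {
  return tape[position]; // was a hole punched here?
}

// Punch "hole, blank, hole" starting at position 0.
punch();
advance(2);
punch();
advance(-2); // the head can move backward, too
console.log(read()); // prints true: the hole we punched at position 0
```

Nothing about this little machine looks powerful, and yet, scaled up, these three operations are all the hardware fundamentally needs.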
Lots of Blinking Lights
Instead of physical Turing machines, thankfully, the computer industry was already onto the idea of building digital computers by the ’40s. Instead of paper tape to hold data and instructions, these machines would hold a voltage (or not) in an electronic circuit. You can think of these voltages as paper punches. If a hole is present in the tape, a voltage is present in the circuit. If not, not. The actual silicon circuits etched into all computer chips have exactly this design under the hood.
Although there’s more that could be said about how this hardware came to be, we don’t need that information to program the computer. In fact, all of programming involves living in a fantasy world of abstractions. Instead of trying to “program” the literal Turing machine, we’ll write our code in a programming language that hides many of the low-level details. And that’s great — but it’s important to know what’s really going on because misunderstandings in your abstractions will lead to real-world errors when your program is run on the hardware.
Input Process Output
All computer operations follow the same pattern, called input-process-output. While many thousands of lines of code are needed to even collect a single keypress from your computer and send it to a program (or to the screen), we’ll abstract over those for now and say that, in general:
- Some input appears. It may be from typing, or from some stored data.
- Some process happens, like doing math or another algorithm.
- The output goes somewhere — like a display, your speakers, or another process.
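Here is the pattern as a tiny TypeScript sketch, with the input hard-coded so we don’t yet have to worry about where it really comes from (the name is invented for illustration):

```typescript
// Input: some data appears (hard-coded here instead of typed or loaded).
const userName: string = "Ada";

// Process: run an algorithm on the input (here, building a greeting).
const greeting: string = `Hello, ${userName.toUpperCase()}!`;

// Output: the result goes somewhere, in this case the console.
console.log(greeting); // prints "Hello, ADA!"
```

Every program we write in this series, however large, will reduce to this same three-step shape.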
Modern operating systems let us handle these steps without worrying about how the input appears or where the output goes. In fact, this is the whole purpose of operating systems. When a programmer writes an application for Windows, for example, he or she is allowing Windows to handle the input and output part — so the app programmer doesn’t have to.
This is also the reason that different operating systems are incompatible with each other. Even if two machines have the same processor, each operating system will have a different way of handling the input and output. That’s why desktop programs written for Windows can’t run on a Mac without significant changes to their code — and why iPhone apps from the App Store won’t run on an Android phone.
The Odometer & The Mailbox
Inside the processor, the chip that does our process step, there are two critical parts that I like to call the odometer and the mailbox.
Everyone knows how a car odometer works. It rolls forward as the car rolls forward, being mechanically linked to the wheel. Even though you’re not supposed to roll your odometer backward, it used to be possible. So imagine an odometer component in our hypothetical computer that can be rolled either direction.
We’ve seen exactly this kind of display on old-time cash registers. The cashier doesn’t need to keep track of the total; it’s on the display, having been “rolled forward” each time an item’s price was entered. Press the button for $1, and the display odometer rolls forward by 1. Have a coupon? That might roll the odometer back by 50¢.
To turn the cash register into a computer, we must first add something else to the circuit mix: storage. I like to think of storage as a series of mailboxes (or cubbyholes from my ’70s childhood). What can we do with these mailboxes? We can keep the values from the odometer!
If the grocery store needed to keep track of 20 people’s purchases, it would need 20 mailboxes, one for each total. Starting with the first customer’s total in the first mailbox, the computer could move to the “next mailbox” after the customer pays. When all the customers had finished checking out, their individual totals would still be in the mailboxes.
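A sketch of those mailboxes in TypeScript, with an array slot standing in for each mailbox (the purchases are invented):

```typescript
// Twenty "mailboxes": one storage slot per customer's total.
const totals: number[] = new Array(20).fill(0);
let currentBox = 0; // which mailbox we're filling now

// First customer buys a $1 item, then uses a 50¢ coupon.
totals[currentBox] += 1.0; // roll the odometer forward...
totals[currentBox] -= 0.5; // ...and back

currentBox += 1;           // customer paid; move to the next mailbox
totals[currentBox] += 2.0; // second customer's candy bar

// Both totals are still sitting in their mailboxes afterward.
console.log(totals[0], totals[1]); // prints 0.5 2
```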
The Grand Total
If the mailboxes and the odometer could move data both directions, then we could read the values from each mailbox and roll the odometer forward by that amount — leaving a grand total in the odometer. And, in fact, this is exactly how these parts work.
This is not only the grand total of the store’s purchases, it is also the grand total of computing because this is all we need to make a computer work. If we give ourselves a few more odometers and some more cubbyholes to work with (and give the real names to these parts) we’ll have a diagram of every modern computer!
The “odometers” in our analogy are really called registers, and the mailboxes are known as storage locations. You can see that, with a few more registers and a lot more storage locations, we can easily do more complex calculations. If any register can address any storage location for both reading and writing, we can write a program of any complexity, limited only by the amount of storage we have.
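In TypeScript, reading each mailbox and rolling one odometer forward is a loop over an array with an accumulator variable (the customer totals are invented):

```typescript
// The individual totals left in the "mailboxes" after checkout.
const customerTotals: number[] = [12.5, 3.0, 47.25];

// One "odometer" (an accumulator) rolls forward by each mailbox's value.
let grandTotal = 0;
for (const amount of customerTotals) {
  grandTotal += amount; // read a mailbox, roll the odometer forward
}

console.log(`Grand total: $${grandTotal.toFixed(2)}`); // prints "Grand total: $62.75"
```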
In reality, the term storage represents two different things. It can mean memory, which is a storage device wired to the processor at the silicon level — or it can mean disk drives or other forms of external storage which are not wired directly to the processor on the computer motherboard.
In practice, data moves back and forth between memory and drive storage all the time. This is what allows us to save a file from a word processor (in memory) onto a disk or other storage medium, then load it later to continue working with the file.
Why can’t we just type directly into the file on disk? A fundamental restriction of the processor is that it can only act on data that’s in its memory, to which it is physically connected at the silicon level. Any data that’s in other kinds of storage must be loaded into that memory before processing can start. Likewise, in order to be preserved, any data that’s in memory must be written out to external storage if it is to survive being overwritten when new data is loaded in that same location — or when the power is lost.
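We can sketch this load-before-process rule in TypeScript, with a `Map` standing in for the disk (everything here is a simulation; real disk access goes through the operating system):

```typescript
// A Map standing in for external storage; in reality, the operating
// system would write these bytes out to a disk for us.
const disk = new Map<string, string>();

// "Save": copy the data from memory (a variable) out to storage.
let letterInMemory = "Dear Sir or Madam,";
disk.set("letter.txt", letterInMemory);

// New data can now safely overwrite that memory location...
letterInMemory = "Shopping list: eggs";

// ..."Load": to work on the old data again, we must first copy it
// from storage back into memory.
const reloaded = disk.get("letter.txt") ?? "";
console.log(reloaded); // prints "Dear Sir or Madam,"
```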
Notice that registers, memory locations, and disk drives each have their own addressing scheme. In order to move data from Location 2 on Drive 1 into Location 1 in memory, we must write a proper address for both the source and destination locations.
Locating things on a hard disk drive is not something you want to deal with yourself. Your data is literally scattered all over the place, across several platters and sectors of the physical drive. Even a single file can span multiple locations. Fortunately, the operating system abstracts that away, allowing us to refer to disk data by file and folder names.
Addressing memory locations has a similar problem. In reality, physical memory is set up in rows called banks and columns called addresses or offsets. But who wants to try and keep track of that? It would be much better if we could label our memory locations with a name, rather than two long numbers. Then we could say in our program, “Move the value from Susan’s ‘mailbox’ (memory) into Steven’s.” Or, “Deduct 10% from the value in Marty’s memory location.”
Variables allow us to abstract over the physical way that data is stored in memory by assigning names to the storage locations, in the same way that file and folder names allow us to write to and read from specific locations on a disk without knowing exactly where they’re physically located.
You might also notice that the registers depicted earlier are completely missing from this last diagram. That’s because they, too, have been abstracted away in high-level programming languages. Registers are used by the processor in computing intermediate results, but we don’t need to worry about them in TypeScript. We only need to concern ourselves with the named variables that we create, not where they are really stored in memory.
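In TypeScript, those named memory locations are simply variables. Here is a sketch using the names from above (the amounts are invented):

```typescript
// Each variable names a storage location; we never see the real address.
let susan: number = 100;
let steven: number = 0;
let marty: number = 80;

// "Move the value from Susan's mailbox into Steven's."
steven = susan;

// "Deduct 10% from the value in Marty's memory location."
marty = marty - marty * 0.1;

console.log(susan, steven, marty); // prints 100 100 72
```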
More Than Words
Up until this last example of employee information, we’d only discussed using registers or storage to keep track of numbers. But what if we want to process another kind of data, like the text that makes up an employee’s name or email? How can letters and words (or asteroids) be depicted on a rotating odometer? The answer is that they can’t.
In order to get from raw numbers into letters and words, we need a kind of cipher, a conversion table where each number stands for one letter. The one we use today is called Unicode. Owing to a long and colorful history which I won’t elaborate upon here, the Unicode code for the capital letter A is 65. The letter B is 66, and so on. All the glyphs used in the world’s languages are thus enumerated in Unicode.
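We can check these code numbers directly in TypeScript, using the standard `codePointAt` and `String.fromCodePoint` functions:

```typescript
// codePointAt gives the Unicode number behind a character...
const codeForA: number | undefined = "A".codePointAt(0); // 65
const codeForB: number | undefined = "B".codePointAt(0); // 66

// ...and fromCodePoint goes the other way, from number to letter.
const letter: string = String.fromCodePoint(65); // "A"

console.log(codeForA, codeForB, letter); // prints 65 66 A
```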
That’s Just How I See It
We said that, in Unicode, the number 65 represents the letter “A”. But obviously it also represents the actual number 65, like when we were adding up those customers’ purchases at the register. We don’t want someone’s purchase for $65 to show up as $A on their receipt. By the same token, if they throw in a candy bar for two bucks, we don’t want to try to add 2 to the letter “A.”

In practice, the number 65 might appear in a variety of different contexts. It could be the temperature on a Saturday morning in Santa Cruz, or it could be the brightness of a red pixel in the RGB display on your monitor. The important takeaway is that, down at the bare metal, only the number 65 is real, because it’s sitting on an odometer somewhere. Whenever we deal with computer values, getting the value right is half the battle, but interpreting it correctly is the other half. The underlying hardware has no idea of what you intended; it can only give you a 65.
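This is exactly the problem TypeScript’s types solve for us: a declared type records which interpretation of the raw value we intend. A small sketch:

```typescript
const price: number = 65;   // 65 interpreted as a quantity
const letter: string = "A"; // the same underlying 65, interpreted as a character

// Adding 2 means something different under each interpretation.
const newPrice = price + 2;  // arithmetic: 67
const newText = letter + 2;  // text joining: "A2", not a sum

console.log(newPrice, newText); // prints 67 A2
```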
Operators & Operands
All of this discussion of data representation is lovely, but we haven’t actually done anything useful with that data. The working part of a processor is called its instruction set. While processors vary in the specific instruction sets they offer, every processor can be used to produce identical results if given the correct instructions, because all of them are Turing-complete.
Each instruction is divided into an operator and an operand (pronounced AH-purr-and). The operator tells what to do and the operand tells what to do it with. If we were trying to add a value in storage to a register (a very common operation), the operator is add and the operands are the name of the register and the address of the storage location.
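The same shape survives in high-level languages. In the TypeScript sketch below (the variable names are invented), `+=` plays the role of the operator, while `total` and `price` are the operands:

```typescript
let total: number = 100;  // like a register holding a running value
const price: number = 25; // like a value read from a storage location

// Operator: +=    Operands: total, price
total += price;

console.log(total); // prints 125
```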
The Leap to Programmability
We said that adding storage locations was one thing that pushed the calculator/cash register into the realm of computing. But the biggest push came from programmability — the idea that one could provide a list of instructions that would operate on the calculator underneath. Given what we know about computer architecture so far, let’s review the capabilities that should be available for us to program:
- A set of storage locations in memory to hold data.
- A list of operations that the processor can perform on memory values.
- A set of input devices that make data appear in memory.
- A set of output devices that display values from memory.
Today, there are many layers of abstraction in between our programming efforts and the physical devices that make this happen. Regardless, only the physical operations we’ve described so far are actually taking place when your programs run, and this concept will become fundamental to the way we write programs in TypeScript.
The only aspect of working with this hardware that we haven’t described is how to code the instructions that make the processor do anything useful. Because each processor family or chip design has its own, burned-in instruction set, programming languages were invented, beginning in the 1950s when John Backus and his team at IBM introduced FORTRAN. FORTRAN and the languages that would follow it allow us to “write once, run anywhere” and make programs that run on various processor hardware.
Compilers & Interpreters
In order to get from the programming language statements that we write in TypeScript (or FORTRAN) to the actual machine language operators burned into the chip, some translation must take place. This translation step is known as compilation and the software that does this job is called a compiler. Regardless of what language we write in, this translation step must take place.
Some languages offer a different kind of translator known as an interpreter. Instead of translating the entire program ahead of time, an interpreter translates and executes the code one statement at a time, as the program runs. The advantage of interpreters is that the programmer doesn’t need to wait for a separate compile step to see the results. The disadvantage is that only traditional compilers can examine the program as a whole in order to further optimize it by choosing the shortest or fastest machine instructions that give the result indicated in the program.
Lacking a view of the whole program from top to bottom, interpreters cannot usually offer this kind of optimization, which is why most programs we’re familiar with, like Microsoft Word or Photoshop, are compiled. In practice, though, modern interpreters, many of which perform just-in-time compilation behind the scenes, have been shown to be nearly as fast as compilers.
JavaScript and its cousin from the future, TypeScript, are interpreted languages that run in a browser, like Chrome, Safari, or Firefox. (Strictly speaking, the browser’s interpreter only understands JavaScript; TypeScript code is first compiled into plain JavaScript by the TypeScript compiler.) Each time you open a browser window, a JavaScript interpreter is available to you with its own set of language statements that allow us to work with data in memory and storage. It is this environment that we’ll use to learn programming.
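You can meet this interpreter right now: open your browser’s developer tools and click the Console tab. Each statement you type there runs the moment you press Enter. (The console itself speaks plain JavaScript; the type annotation below is the TypeScript flavor we’ll normally run through the compiler first.)

```typescript
// Typed one line at a time, each statement runs immediately.
const greeting: string = "Hello from the interpreter!";
console.log(greeting.length); // the interpreter replies with 27
```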
What’s Up, Doc?
In this first part of the series, we looked at how computers store data and operate upon it, taking note of the fact that all computers can perform the same basic operations. We also saw how part of the computer’s storage can be given over to a program or set of instructions that act on the data in memory. Together, these ideas form the basis of the stored program computer.
In Part 2, we’ll set up the tools we need to develop TypeScript apps, including the web server and IDE, and test to make sure we can see our changes in the browser. We’ll also see how console logging can be used to help us debug our program without creating HTML output just for that purpose.
As always, thanks for joining me!
— D