Learn Programming with Typescript

David Bethune
Published in Level Up Coding
15 min read · Aug 3, 2022


So, you’d like to be a programmer! Well, maybe not be one, but just do some programming. Where to begin? This is the place. Here, we’ll look at programming from the perspective of the complete beginner — making no assumptions about anything you might already know. For our exercises, we’ll focus on front end web development in Typescript, also known as making a web page. Along the way, we’ll look at some of the historical origins of programming and how they play into its constraints and possibilities today. Are you ready? Let’s get started!

My Kid Works on Computers

Mention programming in casual company and someone will often say that their child (or their neighbor, or their spouse) works on computers, too. Such claims are usually misguided, because many aspects of working on computers have nothing to do with programming.

Let’s get them out of the way:

  1. Setting up and building hardware.
  2. Setting up software.
  3. Designing networks.
  4. Designing software on paper as a set of ideas (architecture).
  5. Using software of any kind, including connecting various kinds.
  6. Fixing computer problems and tech support.
  7. Technical writing.

If those things aren’t programming, then what is? Programming refers to the specific act of writing code that will eventually come to run on a particular piece of hardware. We’ll examine in great depth what that code looks like and how it runs, but suffice it to say that if you are not writing code, you’re not programming. This is why programmers call each other coders, and our work coding. Another term you’ll hear is devs, short for software developers.

So… now how many people do you know who are programmers? Probably very few. It turns out that programming is something of a rare profession. Only about 2% of the US population is employed in programming, and less than 1% can do it well. Programming is also a thankless job, full of brokenness, bug fixing, and the accompanying customer complaints — while praise and recognition are seldom, if ever, heard.

Everyone has heard of Microsoft Word, but has anyone used it to write a thank you note to its inventor, Charles Simonyi? Probably not. The Hungarian-American software developer whose company was acquired by Microsoft would go on to oversee the creation of MS Office. His estimated $5 billion net worth is evidence of the kind of money that can be made from successful software.

Where Do I Sign Up?

If it’s so terrible and difficult, why does anyone do it? One reason is surely money. Programming is always among the best-paying professions, competing with lawyers and doctors for the top income spot. But that’s not the real attraction for most programmers. The lure of programming is that it offers unlimited possibilities — castles in the sky, as I came to think of it in a childhood epiphany. The truth of software development is that you can make absolutely anything with software. There are no limits. How many jobs do you know that can say this?

We can see that it’s true just by looking around us. Software has eaten the world. It’s in everything from agriculture and energy to transportation and video games. Even new life forms are made with CRISPR, and mRNA vaccines are assembled with software. If you become a programmer — in any language and on any platform — this unlimited world is fully opened to you. It’s pretty heady stuff!

Computer pioneer Alan Turing’s conceptual Turing Machine was never meant to be built; it was a theoretical math exercise on paper — one which became the foundation for all modern computers that would follow. Nonetheless, in 2010, an engineer in Wisconsin built this working physical model. By definition, a Turing Machine has endless storage in the form of paper tape reels at both ends. Clearly, that’s not possible here — nor with any real computer.

All Computers are Equivalent

A remarkable quality of computers is that, despite all the evolution they’ve undergone since their invention in the 1940’s, all computers are absolutely equivalent under the hood. And I don’t mean “similar,” I mean any computer that has ever existed can be swapped for any other computer that might exist in the future — and it can do the same work.

You’ve heard that your phone has more computing power than the Apollo mission, so clearly your phone could do those computations. But you may not realize that Apollo-era computers could do all the things your phone can do! We know this is true because two brilliant men, Alan Turing and Alonzo Church, independently proved with mathematics that any machine with a few simple characteristics can implement any function that can be computed.

When trying to generalize the brand-new idea of computing, Turing theorized about a physical machine that could only perform a tiny set of physical operations:

  1. Punch a hole in an endless paper tape.
  2. Advance to another position on the tape.
  3. Read whether or not a hole had been punched in that position.

The Turing machine was just a mental exercise and didn’t need to be built. Would it surprise you to learn that Turing’s paper punch machine could also launch the Space Shuttle and play Forza? It could.

Home stereo components of the 1980’s were driven by microprocessors (and thus software) but still used custom-built segmented LED displays instead of the pixel-based screens we would use today. While not technically a difference in computing power, advances in display technology led to software looking quite different today than it did on early computers.

Why, then, did no one try making video calls or building self-driving cars with 1960’s computers? The reason comes down to the two differences between any two machines that are Turing-complete:

  1. How fast they run, and…
  2. How easy they are to program.

The original paper tape machine idea sounds like it would be terribly slow and also awful to program. You’d have to write out “data” onto the paper tape in something resembling Morse code — with a series of holes and no-holes to represent each letter or number. You’d have to write all the machine’s “instructions” in terms of very rudimentary operations, and the result of any computation would have to be written at the end of the tape itself. But… it would be possible.
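
In fact, we can sketch that machine in a few lines of Typescript. This is only my illustration — the class and method names are invented for this article — but it shows how little machinery Turing’s three operations really require:

```typescript
// A minimal sketch of Turing's tape machine (illustrative only).
class TapeMachine {
  private tape: boolean[] = []; // true = hole punched, false = no hole
  private position = 0;         // where the punch/read head sits

  // 1. Punch a hole at the current position.
  punch(): void {
    this.tape[this.position] = true;
  }

  // 2. Advance to another position on the tape.
  advance(offset: number): void {
    this.position += offset;
  }

  // 3. Read whether or not a hole has been punched here.
  read(): boolean {
    return this.tape[this.position] ?? false;
  }
}

// "Write" the number 3 in Morse-code fashion: three holes in a row.
const machine = new TapeMachine();
for (let i = 0; i < 3; i++) {
  machine.punch();
  machine.advance(1);
}
```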

The top front panel of an IBM 1401 computer, introduced in 1959, includes a live diagram of the processor’s internal operations. Input, processing, memory, storage, and output are all depicted with connected lines showing how data flows. In this era, screen-based displays like CRTs or today’s flat panels were simply not available, leading to the “blinking lights” seen on large computers in many old TV shows and movies.

Lots of Blinking Lights

Instead of physical Turing machines, thankfully, the computer industry was already onto the idea of building digital computers by the ‘40s. Instead of paper tape to hold data and instructions, these machines would hold a voltage (or not) in an electronic circuit. You can think of these voltages as paper punches. If a hole is present in the tape, a voltage is present in the circuit. If not, not. The actual silicon circuits etched into all computer chips have exactly this design under the hood.

Although there’s more that could be said about how this hardware came to be, we don’t need that information to program the computer. In fact, all of programming involves living in a fantasy world of abstractions. Instead of trying to “program” the literal Turing machine, we’ll write our code in a programming language that hides many of the low-level details. And that’s great — but it’s important to know what’s really going on because misunderstandings in your abstractions will lead to real-world errors when your program is run on the hardware.

Some early arcade video games like Atari Asteroids were made with vector graphics. In this architecture, a series of coordinates is used to energize the cathode ray tube beam into particular illuminated shapes, evident from the jagged edges of the rocks and the text below which lacks any rounded parts. Much later, a more sophisticated form of these graphics would be added to the web browser with the introduction of SVG.

Input Process Output

All computer operations follow the same pattern, called input-process-output. While many thousands of lines of code are needed just to collect a single keypress from your computer and send it to a program (or to the screen), we’ll abstract over those for now and say that, in general:

  1. Some input appears. It may be from typing, or from some stored data.
  2. Some process happens, like doing math or another algorithm.
  3. The output goes somewhere — like a display, your speakers, or another process.
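
Here’s what that pattern looks like in a few lines of Typescript. It’s a sketch only: I’m using the browser’s built-in prompt() as a stand-in input device, the console as the output device, and a made-up 8% tax as the process.

```typescript
// 1. Input appears (from typing, via the browser's prompt dialog).
const entry = prompt("Enter a price:") ?? "0";

// 2. A process happens (here, adding a hypothetical 8% sales tax).
const total = Number(entry) * 1.08;

// 3. The output goes somewhere (here, the browser console).
console.log(`Total: $${total.toFixed(2)}`);
```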

Modern operating systems let us handle these steps without worrying about how the input appears or where the output goes. In fact, this is the whole purpose of operating systems. When a programmer writes an application for Windows, for example, he or she is allowing Windows to handle the input and output part — so the app programmer doesn’t have to.

This is also the reason that different operating systems are incompatible with each other. Even if two machines have the same processor, each operating system will have a different way of handling the input and output. That’s why desktop programs written for Windows can’t run on a Mac without significant changes to their code — and why iPhone apps from the App Store won’t run on an Android phone.

The same hardware can run several operating systems, like this iMac and its Intel processor shown with Mac OS, Windows, and Linux desktop environments. We know the same instruction set must be running under the hood because the hardware is the same. The software applications, however, are incompatible because they rely on the operating system to handle input and output.

The Odometer & The Mailbox

Inside the processor, the chip that does our process step, there are two critical parts that I like to call the odometer and the mailbox.

Everyone knows how a car odometer works. It rolls forward as the car rolls forward, being mechanically linked to the wheels. Even though you’re not supposed to roll your odometer backward, it used to be possible. So imagine an odometer component in our hypothetical computer that can be rolled in either direction.

Mechanical cash registers, like this elaborate art deco model from National Cash Register, would output their totals through an odometer-like set of display wheels. As each lower digit rolled past 9 back to 0, a latch would trigger the next digit to the left to roll up by one position. Electronic computers work the same way with silicon-based storage. NCR would go on to become a full-fledged computer company that still makes computer-based POS systems today.

We’ve seen exactly this kind of display on an old-time cash register. The cashier doesn’t need to keep track of the total — it’s on the display, having been “rolled forward” each time an item’s price was entered. Press the button for $1, and the display odometer rolls forward by 1. Have a coupon? That might roll the odometer back by 50¢.

The IBM 3660 Supermarket System was an early use of the now-ubiquitous UPC barcode scanners, introduced in 1974. IBM also invented the UPC standard, which works by storing each item’s price separately from the barcode number itself. The computerized cash registers would look up each item from a central storage device, then add its price to the accumulator and the total display — the same way they work today.

To turn the cash register into a computer, we must first add something else to the circuit mix: storage. I like to think of storage as a series of mailboxes (or cubbyholes from my 70’s childhood). What can we do with these mailboxes? We can keep the values from the odometer!

If the grocery store needed to keep track of 20 people’s purchases, it would need 20 mailboxes, one for each total. Starting with the first customer’s total in the first mailbox, the computer could move to the “next mailbox” after the customer pays. When all the customers had finished checking out, their individual totals would still be in the mailboxes.

While not as nice as these, my kindergarten did have cubbyholes for each child, labeled with our names. Computer memory is organized exactly the same way. The rows and columns hold real information while the labels or names on the outside can be changed.

The Grand Total

If the mailboxes and the odometer could move data both directions, then we could read the values from each mailbox and roll the odometer forward by that amount — leaving a grand total in the odometer. And, in fact, this is exactly how these parts work.

This is not only the grand total of the store’s purchases, it is also the grand total of computing because this is all we need to make a computer work. If we give ourselves a few more odometers and some more cubbyholes to work with (and give these parts their real names), we’ll have a diagram of every modern computer!

In a real processor, the “odometers” are called registers, and the mailboxes are known as storage locations. You can see that, with a few more registers and a lot more storage locations, we can easily do more complex calculations. If any register can address any storage location for both reading and writing, we can write a program of any complexity, limited only by the amount of storage we have.
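
We can act out this whole arrangement in Typescript. In the sketch below, a plain array plays the mailboxes (storage locations) and a single variable plays the odometer (a register). The names and prices are mine, invented to match the analogy:

```typescript
// The mailboxes: one storage location per customer's total (in cents).
const mailboxes: number[] = [1250, 899, 2310, 425];

// The odometer: a single register that we roll forward.
let odometer = 0;

// Read each mailbox and roll the odometer forward by that amount,
// leaving the grand total sitting in the register.
for (const total of mailboxes) {
  odometer += total;
}

console.log(odometer); // 4884 cents, a grand total of $48.84
```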

Modern memory like today’s DRAM is still made from physical circuits and switches. Originally drawn on film, these circuit designs are today drawn in software at human scale (or generated directly from code) and then etched into silicon at microscopic scale using photolithography.

In reality, the term storage represents two different things. It can mean memory, which is a storage device wired to the processor at the silicon level — or it can mean disk drives or other forms of external storage which are not wired directly to the processor on the computer motherboard.

In practice, data moves freely between memory and drive storage. This is what allows us to save a file from a word processor (in memory) onto a disk or other storage medium, then load it later to continue working with the file.

Why can’t we just type directly into the file on disk? A fundamental restriction of the processor is that it can only act on data that’s in its memory, to which it is physically connected at the silicon level. Any data that’s in other kinds of storage must be loaded into that memory before processing can start. Likewise, any data that’s in memory must be written out to external storage if it is to survive being overwritten when new data is loaded into that same location — or when the power is lost.
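
Front end code follows this same load-work-save cycle. Here’s a hedged sketch using the browser’s localStorage as our “disk” and an ordinary variable as our “memory” (the key name draft-note is my own invention):

```typescript
// Load: copy the data from external storage into memory.
// (localStorage persists across page reloads; variables do not.)
let note = localStorage.getItem("draft-note") ?? "";

// Process: the processor can only work on the in-memory copy.
note += " ...and one more thing.";

// Save: write memory back out, or the edit is lost on reload.
localStorage.setItem("draft-note", note);
```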

Notice that registers, memory locations, and disk drives each have their own addressing scheme. In order to move data from Location 2 on Drive 1 into Location 1 in memory, we must write a proper address for both the source and destination locations.

IBM’s RAMAC was the first mass storage device to use a stack of spinning magnetic platters, indicated here by the engineer. Each platter was further divided into pizza-slice-shaped sectors where data could be accessed by a read-write head hovering over them on a moving arm. We still use spinning drive technology today, but even a solid state storage device like a USB flash drive is organized the same way. Within the operating system, its usage is identical to a spinning drive.

Locating things on a hard disk drive is not something you want to deal with yourself. Your data is literally scattered all over the place, across several platters and sectors of the physical drive. Even a single file can span multiple locations. Fortunately, the operating system abstracts that away, allowing us to refer to disk data by file and folder names.

IBM’s first PC, the model 5150, included two physical storage devices in the form of floppy disk drives, the black rectangles in the system unit on the left. These were lettered “A” and “B.” An optional hard disk drive could be added, called “C.” This led to the “C drive” we have today — and the D, E, and so-on that follow when you plug in a USB stick or external drive.

Addressing memory locations has a similar problem. In reality, physical memory is set up in rows called banks and columns called addresses or offsets. But who wants to try and keep track of that? It would be much better if we could label our memory locations with a name, rather than two long numbers. Then we could say in our program, “Move the value from Susan’s mailbox (memory) into Steven’s,” or, “Deduct 10% from the value in Marty’s memory location.”

Variables allow us to abstract over the physical way that data is stored in memory by assigning names to the storage locations, in the same way that file and folder names allow us to write to and read from specific locations on a disk without knowing exactly where they’re physically located.
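
In Typescript, a variable declaration does exactly this labeling for us. The names below come straight from the mailbox example above:

```typescript
// Each variable name labels a storage location; we never see the
// underlying bank and offset numbers.
let susan = 100;
let steven = 0;
let marty = 80;

steven = susan;      // move the value from Susan's "mailbox" into Steven's
marty = marty * 0.9; // deduct 10% from the value in Marty's location
```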

You might also notice that the registers depicted earlier are completely missing from this last diagram. That’s because they, too, have been abstracted away in high-level programming languages. Registers are used by the processor in computing intermediate results, but we don’t need to worry about them in Typescript. We only need to concern ourselves with the named variables that we create, not where they are really stored in memory.

IBM’s EBCDIC (unfortunately pronounced EBB-suh-dick) was an early code for transforming numbers on punched card storage into letters and words. The letter “A” printed at the top is represented by a punch in the row just below it, plus a punch in the “1” row. The letter “J” is a punch two rows below the text, plus a punch in the “1” row. Later, mechanical brushes could make electrical contact through these holes and read the punches, turning them back into letters and words.

More Than Words

Up until this last example of employee information, we’d only discussed using registers or storage to keep track of numbers. But what if we want to process another kind of data, like the text that makes up the employee name or email? How can letters and words (or asteroids) be depicted on a rotating odometer? The answer is that they can’t.

In order to get from raw numbers into letters and words, we need a kind of cipher, a conversion table where each number stands for one letter. The one we use today is called Unicode. Owing to a long and colorful history which I won’t elaborate upon here, the Unicode code for the capital letter A is 65. The letter B is 66, and so on. All the glyphs used in the world’s languages are thus enumerated in Unicode.
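
Typescript inherits JavaScript’s string functions, which let us peek at this cipher directly:

```typescript
// Converting between letters and their Unicode numbers.
console.log("A".codePointAt(0));       // 65
console.log("B".codePointAt(0));       // 66
console.log(String.fromCodePoint(65)); // "A"
```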

That’s Just How I See It

We said that, in Unicode, the number 65 represents the letter “A”. But obviously it also represents the actual number 65, like when we were adding up those customers’ purchases at the register. We don’t want someone’s purchase for $65 to show up as $A on their receipt. By the same token, if they throw in a candy bar for two bucks, we don’t want to try to add 2 to the letter “A.”

In practice, the number 65 might appear in a variety of different contexts. It could be the temperature on a Saturday morning in Santa Cruz, or it could be the brightness of a red pixel in the RGB display on your monitor. The important takeaway is that, down at the bare metal, only the number 65 is real — because it’s sitting on an odometer somewhere. Whenever we deal with computer values, getting the value right is half the battle, but interpreting it correctly is the other half. The underlying hardware has no idea of what you intended; it can only give you a 65.
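
This is exactly the problem that types solve, and it’s where Typescript earns its name. A short sketch:

```typescript
// The same 65, under two interpretations. The type tells the
// compiler (and the reader) which one we mean.
const price: number = 65;   // sixty-five dollars
const letter: string = "A"; // the text whose Unicode code is 65

console.log(price + 2);  // 67 — arithmetic on a number
console.log(letter + 2); // "A2" — text concatenation, not addition

// const oops: number = letter + 2; // compile error: Typescript won't
// treat text as a number without an explicit conversion.
```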

Operators & Operands

This diagram from the IBM System/370 Principles of Operations shows the format of low-level machine instructions that only do one simple task — like changing a value in a register, in storage, or both. The yellow operators here tell what to do and the pink register operands and orange storage operands are the locations they operate on.

All of this discussion of data representation is lovely, but we haven’t actually done anything useful with that data. The working part of a processor is called its instruction set. While processors vary in the specific instruction sets they offer, every processor can be used to produce identical results if given the correct instructions — because all of them are Turing machines.

Each instruction is divided into an operator and an operand (pronounced AH-purr-and). The operator tells what to do and the operand tells what to do it with. If we were trying to add a value in storage to a register (a very common operation), the operator is add and the operands are the name of the register and the address of the storage location.
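
The same anatomy survives, dressed up, in high-level languages. In this Typescript snippet, + is the operator and the two named values are its operands:

```typescript
const subtotal = 65;
const candyBar = 2;
const receipt = subtotal + candyBar; // operator: +, operands: subtotal and candyBar
```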

The Milton Bradley Big Trak was an early programmable truck, released in 1979 — the year I successfully lobbied my parents for one on my 11th birthday. It wasn’t long before the toy’s small memory of only 16 programmable steps became limiting. One of the steps could be a primitive type of loop, although this machine’s programming language was not Turing complete.

The Leap to Programmability

We said that adding storage locations was one thing that pushed the calculator/cash register into the realm of computing. But the biggest push came from programmability — the idea that one could provide a list of instructions that would operate on the calculator underneath. Given what we know about computer architecture so far, let’s review the capabilities that should be available for us to program:

  1. A set of storage locations in memory to hold data.
  2. A list of operations that the processor can perform on memory values.
  3. A set of input devices that make data appear in memory.
  4. A set of output devices that display values from memory.

Today, there are many layers of abstraction between our programming efforts and the physical devices that make all of this happen. Regardless, only the physical operations we’ve described so far are actually taking place when your programs run — and this concept will become fundamental to the way we write programs in Typescript.

The only aspect of working with this hardware that we haven’t described is how to code the instructions that make the processor do anything useful. Because each processor family or chip design has its own, burned-in instruction set, programming languages were invented to rise above them, beginning with John Backus and his introduction of FORTRAN, first released in 1957. FORTRAN and the languages that would follow it allow us to “write once, run anywhere” and make programs that run on various processor hardware.

This diagram from the Big Trak manual shows how a sequence could be programmed from the keypad. The green buttons are the operations and the yellow buttons are the values supplied to them. Turns were specified in clock-face minutes, with 15 minutes being a quarter turn. The red CM button stood for “clear memory.”

Compilers & Interpreters

In order to get from the programming language statements that we write in Typescript (or FORTRAN) to the actual machine language operators burned into the chip, some translation must take place. This translation step is known as compilation, and the software that does this job is called a compiler. Regardless of which language we write in, or when the translation happens, this compilation step must take place.

Some languages offer a special type of compiler known as an interpreter. Instead of translating the entire program ahead of time, an interpreter translates and executes each line of code as the program runs. The advantage of interpreters is that the programmer doesn’t need to wait for a lengthy compile step to see the results. The disadvantage is that only traditional compilers can examine the program as a whole in order to further optimize it, choosing the shortest or fastest machine instructions that give the result indicated in the program.
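
Typescript itself takes the traditional compiler route: its compiler, tsc, translates our code into plain JavaScript before any browser runs it. A minimal example (the file name is mine):

```typescript
// hello.ts — compiled on the command line with: tsc hello.ts
const greeting: string = "Hello, world!"; // the type annotation exists only at compile time
console.log(greeting);

// With default settings, the emitted hello.js is plain JavaScript
// with the annotation erased:
//   var greeting = "Hello, world!";
//   console.log(greeting);
```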

MIT’s Scratch environment was designed to teach programming by using blocks that strongly resemble the button presses of a Big Trak or other physical device. Each colored block represents one operation and the white values in circles are the data it has to work with. Used mostly for game programming, Scratch abstracts away much of the work of designing a user interface — which limits the types of UIs it can create.

Lacking a view of the whole program from top to bottom, interpreters cannot usually offer these kinds of optimizations — thus most programs we’re familiar with, like Microsoft Word or Photoshop, are compiled. In practice, though, modern interpreters have been shown to be nearly as fast as compilers.

JavaScript is an interpreted language that runs in a browser, like Chrome, Safari, or Firefox. Typescript, its cousin from the future, compiles into JavaScript so that it can run there too. Each time you open a browser window, a JavaScript interpreter is available to you, with its own set of language statements that allow us to work with data in memory and storage. It is this environment that we’ll use to learn programming.

The excellent Microsoft MakeCode platform can also be programmed with blocks — or with Typescript! These programs can target many interesting handheld maker devices as well as small computers and web browsers — like my MathTown Alley game.

What’s Up, Doc?

In this first part of the series, we looked at how computers store data and operate upon it, taking note of the fact that all computers can perform the same basic operations. We also saw how part of the computer’s storage can be given over to a program, or set of instructions, that acts on the data in memory. Together, these ideas form the basis of the stored program computer.

In Part 2, we’ll set up the tools we need to develop Typescript apps, including the web server and IDE — and test to make sure we can see our changes in the browser. We’ll also see how console logging can be used to help us debug our program without creating HTML output just for that purpose.

As always, thanks for joining me!

— D


I'm a 35 year veteran of the software industry and the founder of DTA Games. Visit us at https://dtagames.io.