Stories

Learning Technology

As a junior physics major in college, I took my first programming course, in Fortran.  The first program we were assigned to write was one to find prime numbers.  Apparently there was an algorithm known to the ancient Greeks for doing so, the Sieve of Eratosthenes (see Wikipedia, "Generating primes").  At the time I didn't know that; indeed, I didn't know what an algorithm was.  So I brute forced it.

I punched my program on IBM cards and put the deck into the input bin at the computer center window.  Some time later I returned to get my output and took it across the hall to a room which had an "interpreter," a device which would print what was on an IBM output deck.  I put the deck into the interpreter and backed up aghast as the printer spewed out a list of special symbols that would make a drunken sailor in a cartoon blush (&$#@*&^$#@ etc.).  My God, I thought, my first try and I broke the computer.  I took the deck to one of the student "consultants" with a whimpered "Help!"  She looked at the results, laughed, and said, "Sorry, we apparently didn't run your program.  Let me put it back into the queue."  None of that, I told her, made any sense to me.

So she explained that the first thing the computer had to do was read my Fortran deck (a high-level language which I could understand) and translate it into binary, a string of 1's and 0's which the computer could understand.  The program that did this was called a compiler.  The computer the university used (I remember it as an IBM 1601, but I may be making that up) had only 16 kilowords of core memory, so that when it ran the compiler there was no memory left to run any other programs.  The technique they used was to load the compiler, feed in a source deck (e.g., my Fortran program), and punch an object deck, the code the computer would actually use.  When they had done a bunch of input decks, they would unload the compiler, feed in the object decks one at a time, run them, and punch their output.  The output decks would then be put in the output tray for retrieval and interpreting by the submitters.  Only for my first encounter, they mistakenly put the object deck into the output tray, bypassing running my program.  And of course the interpreter was set up to read ASCII characters, what we knew as numbers and letters, not binary code.  It took its best shot at rendering the binary code by picking the closest ASCII character for any given column on the punched card, and the result was, of course, gibberish.
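
For the curious, here is roughly what "brute forcing it" amounts to: trial division of each candidate against every smaller number.  This is a minimal sketch in modern Python; the original was Fortran on punched cards and none of its details survive here, so this is an illustration, not a reconstruction:

```python
# Brute-force prime finding by trial division: no Sieve of
# Eratosthenes, just checking every candidate the hard way.

def is_prime(n):
    """Return True if n is prime, testing all divisors from 2 to n - 1."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

def primes_up_to(limit):
    """List every prime from 2 through limit."""
    return [n for n in range(2, limit + 1) if is_prime(n)]

if __name__ == "__main__":
    print(primes_up_to(50))   # [2, 3, 5, 7, 11, 13, ..., 47]
```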

With that auspicious beginning, I might have been intimidated enough to walk away and leave computers alone for good.  But instead I was rather intrigued.  The student who assisted me had given me just enough to make me more curious.  I had this mental image of the computer as some sort of magical appliance which understood my program and crafted the answers I had asked for.  Clearly it was much more primitive than that.

I got to discover just how primitive the following year, when I landed a summer job with Geophysical Service Inc. (GSI) running a TIAC 128 (Texas Instruments Automatic Computer).  TI was the parent company of GSI, the firm from which it had originally sprung.  The TIAC 128 had no compiler.  It was programmed in assembly language, a symbolic representation with a one-to-one correspondence to the binary code the machine executed.  So to tell the computer what to do, you had to think like the computer, and of course to run it we had to do the same.  The console display had rows and rows of lights (I'm remembering more than a dozen).  One row showed the instruction register, which held the command the computer was trying to execute.  So if the computer stalled, which it did regularly for various reasons, as operator I would look at the instruction register, decode the binary string into a command, and then ask why it couldn't execute that command.  Sometimes the string didn't represent a legitimate command, but more usually it was something like "read paper tape," and looking down next to the console I'd see that the paper tape reader either was turned off or had no tape in it to read.  I tried more than once to write a simple program in TIAC 128 assembly language; the standard first program even back then was "Print 'Hello World'."  I don't recall that I succeeded in the two summers I spent there.
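
To give a flavor of that front-panel detective work, here is a toy sketch in Python of decoding an instruction register's lights into a mnemonic.  The 12-bit word, the 3-bit opcode field, and the opcode table are all invented for illustration; the story doesn't record the TIAC's actual instruction format:

```python
# Toy front-panel decoder: turn a row of console lights (bits) into
# a mnemonic. The 3-bit opcode field and this opcode table are
# hypothetical, purely for illustration.

OPCODES = {
    0b000: "HALT",
    0b001: "LOAD",
    0b010: "STORE",
    0b011: "ADD",
    0b100: "READ PAPER TAPE",
    0b101: "PUNCH PAPER TAPE",
}

def decode(lights):
    """lights: string of '1'/'0' read off the instruction register."""
    word = int(lights, 2)
    opcode = word >> 9          # assume top 3 bits of a 12-bit word
    operand = word & 0x1FF      # remaining 9 bits are the address
    mnemonic = OPCODES.get(opcode, "ILLEGAL INSTRUCTION")
    return mnemonic, operand

# The operator's ritual: read the lights, decode, then ask why the
# machine balked ("read paper tape" -- and no tape in the reader).
print(decode("100000000101"))   # ('READ PAPER TAPE', 5)
```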

Then it was off to graduate school in physics, where it turned out the minicomputer had just made an appearance in the lab.  I did some work there, but I decided I didn't have time to wait for the older students with prior claim on the lab equipment to finish their thesis work, so I opted to do theory.  The theory I got into involved algebra in which the operators were not commutative; i.e., it made a difference in which order one applied them.  A operating on B operating on some function did not give the same result as B operating on A operating on the same function: A*B ≠ B*A.  In general, A*B − B*A = C, where C was another operator, the commutator of A and B.  I found a programming language recently invented at Bell Labs which was aimed not at crunching numbers but at manipulating symbols, and I quickly demonstrated that I could use it to do the commutator algebra to arbitrarily high order.  I did some quantum mechanics calculations for magnetic metals which provided the basis for my PhD thesis.  Up to that point, one theory explained metals while a totally different approach, based on an insulator model, explained magnetism, even though most known magnets at the time were metals.  We were able to get good results with a model which combined the two, and it was tractable only because the computer did the heavy lifting of the commutator algebra as well as the subsequent number crunching.  Computers were beginning to look more useful, and quite a bit more intriguing.
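
That kind of symbol manipulation is easy to demonstrate today.  Here is a minimal sketch using Python's sympy library (not the unnamed Bell Labs language, whose details aren't given here) showing how noncommuting symbols keep A*B and B*A distinct, so the commutator survives the algebra:

```python
# Noncommutative operator algebra with sympy: A*B and B*A stay
# distinct, so the commutator A*B - B*A does not vanish.
from sympy import symbols, expand

A, B = symbols("A B", commutative=False)

commutator = A * B - B * A
print(commutator)            # A*B - B*A  (not simplified to 0)

# Ordinary commuting symbols, by contrast, collapse to zero:
x, y = symbols("x y")
print(x * y - y * x)         # 0

# Expansion respects operator order, the heart of the bookkeeping
# the machine did for the thesis-scale calculations:
print(expand((A + B) ** 2))  # A**2 + A*B + B*A + B**2
```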

During that same timeframe, graduate school, I was exposed to another intriguing computer application: the PLATO (Programmed Logic for Automatic Teaching Operations) system at the University of Illinois.  I think I saw a demo at a physics conference; new systems always had acronyms, often TLAs, i.e., Three Letter Acronyms, no matter how contrived.  PLATO introduced the concept of programmed learning.  The computer would present textual material, sometimes with graphs and pictures (video hadn't yet made the scene), and then test the student on how much s/he had absorbed using multiple-choice questions.  For answers one got wrong, the computer would present the relevant material again until mastery, or whatever bogey was defined as close enough, was reached.  Seymour Papert at MIT later described this technique as using computers to program learners.
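
The loop at the heart of programmed learning is simple enough to sketch.  Here is a minimal Python rendering of the pattern just described; the lesson content and the simulated student are invented for illustration:

```python
# Programmed-learning loop in miniature: present material, quiz, and
# re-present whatever the student missed until everything is mastered.
# The lesson content below is made up purely for the example.

LESSON = [
    # (material, question, choices, index of the correct choice)
    ("A prime number has exactly two divisors: 1 and itself.",
     "Is 7 prime?", ["yes", "no"], 0),
    ("Operators need not commute: A*B can differ from B*A.",
     "Does A*B always equal B*A?", ["yes", "no"], 1),
]

def run_lesson(items, answer):
    """Drill until mastery: loop until no question is missed.

    `answer(question, choices)` returns the chosen index; it stands in
    for the student at the terminal.
    """
    remaining = list(items)
    while remaining:
        missed = []
        for material, question, choices, correct in remaining:
            print(material)                    # present the material
            if answer(question, choices) != correct:
                missed.append((material, question, choices, correct))
        remaining = missed                     # re-present only the misses

if __name__ == "__main__":
    # A simulated student who knows the answers, so the demo terminates;
    # a real session would read the choice from a person at the terminal.
    key = {"Is 7 prime?": 0, "Does A*B always equal B*A?": 1}
    run_lesson(LESSON, lambda question, choices: key[question])
```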
