The Third Generation

And they'll jerk from their beds and think they're dreamin'.
But they'll pinch themselves and squeal
And know that it's for real,
The hour when the ship comes in.

When the Ship Comes In - Bob Dylan

 

 

Dominic Vautier
updated 04-2012
again 01-2014


I feel that modern data processing began in 1960.

By that year the IBM corporation just about dominated business computing.  This opinion remains firmly fixed in my mind because I found that the only way to get a job in business programming was to learn IBM computers and equipment.  Job opportunities working with other vendors' equipment were quite limited or simply nonexistent.  IBM programmers were a highly sought-after commodity.

The following discussion is centered on the IBM world because, while IBM may not have had the best or coolest technology available at the time, the company unquestionably dominated the business world.  Everything had virtually turned blue, big blue.  It was the world where I lived and worked--and it was totally blue.

Improvements

I noticed some very profound advances with third generation computing technology.  Those quickly changing, critical years showed what a real and dramatic technological breakthrough looks like.  Some advances came almost immediately in the wake of the famous IBM 360 series of computers.  Other advances arrived years later, but they were advances that truly set the third generation up as great.  Here is a list of what I consider the fundamental improvements.

1.  Device Independence.  What I mean here is that computer programs no longer had to be much concerned about what form the data was coming in or going out.  It no longer mattered whether it was on cards or tape or disk, since the form of media was no longer at issue.  Processing the data was the issue.  This advantage was quite significant, because before device independence, programs had to be designed to handle the specific media on which data was read or written.  Coding these different I/O operations could be significant, tedious, and error prone.  Device independence decoupled the I/O function from the programming function.  A small sketch of the idea appears after this list.

2.  Program Relocatability.  Internal addressing (ADDressing--accent on first syllable) in 360 architecture was indirect.  Locations in core memory used a sophisticated base plus displacement technique.  When programs were loaded into memory, they could reside anywhere in core memory, since all the machine had to do was change the value of a base register.  More on this is discussed below.

3. Clear Command Structure Hierarchy.  There was a definite hierarchy in the way the computer functioned.  It was like a general who gave orders to his majors who gave orders to their sergeants who gave orders to their privates, and the privates went home and kicked their dogs.  If a failure occurred at a lower level, it did not damage overall computer function.

4. Operating Systems.  With the third generation came a number of exceptionally good operating systems that maintained overall control of the computer.  Think of Windows 7 or Windows 10.  The point is discussed more here.

 

5. High level languages.  Several very useful higher-level languages had been developed by 1960, in particular COBOL, a business language, and FORTRAN, a scientific language.  These languages allowed much more freedom in software design and construction.  The idea was to get away from the "metal scratchers".  See below for more detail.

6. Program Modularity.  An entirely new concept about how to write programs began to emerge around this time.  The idea of bigger as better was being replaced by modularity.  By making program functions smaller and isolating them, programs became far more manageable.

I do a little more explaining of some of these ideas HERE.

7. Improved programming development techniques.  Radical new ideas such as "go-to-less" logic, top-down design, and bottom-up design became the watchwords.  At this time managers began to realize that large programs were too hard to maintain and that lines-of-code as a measurement tool was counter-productive (like body counts).  Here is an additional discussion on this topic.

8.  Professional Recognition. By as early as 1970 the educational establishment began to recognize the value and importance of computer science and systems programming.  These disciplines were beginning to assume professional significance, whereas before they were merely considered trades.  A computer programmer was like a machine tool operator or a tradesman.

9.  Emergence of modern data base concepts.  A number of new ways to look at, organize, access, and update data were developed during this time, including file structures like SAM, ISAM, HDAM, and HIDAM.  Entire data base systems were developed to handle the movement toward online applications.  The most famous data base system was IBM's IMS (Information Management System), a basically hierarchical system, and later came the relational DB2 system, which was quite similar to what Oracle came up with years later.  Both DB2 and Oracle were relational data bases accessed through SQL.
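
To make item 1 a little more concrete, here is a minimal sketch in modern Python (nothing from that era survives here in code form) of what device independence buys you: the processing routine works on records and never asks whether they came from cards or tape.  The function names and sample data are invented for illustration.

    # Toy illustration of device independence: the program logic only
    # sees "records"; where they come from is someone else's problem.
    def card_reader(deck):
        # pretend each 80-column card is one record
        for card in deck:
            yield card

    def tape_file(blocks):
        # pretend each tape block holds several records
        for block in blocks:
            for record in block:
                yield record

    def process_records(records):
        # the "business logic" neither knows nor cares about the device
        return [r.upper() for r in records]

    cards = ["payroll record 1", "payroll record 2"]
    tape  = [["payroll record 1", "payroll record 2"], ["payroll record 3"]]

    print(process_records(card_reader(cards)))
    print(process_records(tape_file(tape)))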

Other Things Came Along Too

Of course other good functioning computers existed outside of IBM, but these tended to be dedicated to specialty fields.  Scientific and graphical design areas were dominated by CDC, Univac, and a few others.  Honeywell and Burroughs had a small business market segment as well.  RCA at one point attempted to reverse engineer IBM hardware with their Spectra 70, without much success.  Third generation IBM 360 series computers were looked upon as the model, the template to be used as a standard of excellence for just about any serious business or bank.

Batch Processing

Everything was done in "batch mode", that is, programs crunched data in one big batch and were not interrupted by human intervention until EOJ (end of job).  Data was supposed to be prepared in advance so the batch processing could all get done at one time.  Any type of error or exception was to be handled later.  The concept of batch processing comes from the TAB tradition: perform the operation and correct the errors for the next cycle.  In today's world we interface with a program directly, or operate "on-line" or in "real-time".  During those earlier times there was not enough computing power or data base sophistication available, so batch processing was the norm.  Any reference to a "real-time" approach to computing made little sense.

So batch programs (which included about everything on the schedule) were designed to run from start to finish without much, if any, operator intervention, working through the available workload in a given sequence until all computer processing was complete.  Collections of several programs running in a given sequence were called batch jobs or job streams.  Everything was a batch job and put into a job stream.  This was again a direct reflection of the batch operations of TAB systems.

It was only much later that on-line programs using direct user or customer access techniques became popular, but still these direct access programs had to back up their files constantly.

Batch programs followed several standard techniques.  There were edit-type programs that tried to detect errors early.  Then there was the general category of update programs that sequentially matched transaction files against master files and produced updated master files, just as had been done for many years on a collator.
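
The classic sequential update can be sketched in a few lines of modern Python.  This is only an illustration of the matching idea, not anyone's production code; it assumes both files are already sorted by account key, just as the card decks fed to a collator had to be, and the account numbers and amounts are made up.

    # Minimal sketch of a batch master-file update (sorted input assumed).
    def update_master(master, transactions):
        new_master, errors = [], []
        t = iter(transactions)
        txn = next(t, None)
        for key, balance in master:
            # transactions with no matching master record go to the error list
            while txn is not None and txn[0] < key:
                errors.append(txn)
                txn = next(t, None)
            # apply every transaction that matches this master record
            while txn is not None and txn[0] == key:
                balance += txn[1]
                txn = next(t, None)
            new_master.append((key, balance))
        # whatever is left over gets handled in the next cycle
        while txn is not None:
            errors.append(txn)
            txn = next(t, None)
        return new_master, errors

    old  = [("1001", 50.0), ("1002", 75.0), ("1005", 10.0)]
    txns = [("1001", -20.0), ("1002", 5.0), ("1002", 5.0), ("1009", 99.0)]
    print(update_master(old, txns))
    # ([('1001', 30.0), ('1002', 85.0), ('1005', 10.0)], [('1009', 99.0)])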

Businesses, especially banks, probably continue to rely to some extent on batch job processing.  It's very efficient and cheap.  Your check goes through a batch process, and so do all the billing systems and credit card systems.  The entire concept of processing and matching things in file order is lost on most of us today.  It was a kind of reasoning, a pattern of thought that grew out of unit record.

Tape Processing

As unit record files moved to more modern methods of storage, magnetic tape became immensely common.  Data could be processed much faster than cards and could be saved off-site.  Tapes could be reused over and over.  Most of the reels were 2,400 feet and could hold what was at that time a tremendous amount of data.  Records were grouped together into blocks, which greatly improved efficiency because the tape drive could read many records at one time.  The last block on a file was a short block.  It was also possible for a file to span multiple tape volumes.  That was an awfully big file.  The IBM tape drives had a read/write head in the middle and a vacuum column on either side that sucked up slack tape.  That way the head could get to the next block of data without having to fight the inertia of the whole spool of tape.  Tape units were marvelous machines.
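
Blocking is easy to picture with a small sketch in modern Python; the blocking factor of 10 and the dummy record names are assumed numbers, purely for illustration.

    # Group fixed-length records into blocks so the drive reads many records
    # per start/stop; whatever is left over becomes the short last block.
    def block_records(records, blocking_factor=10):
        return [records[i:i + blocking_factor]
                for i in range(0, len(records), blocking_factor)]

    records = ["REC%03d" % n for n in range(25)]
    blocks = block_records(records)
    print([len(b) for b in blocks])   # [10, 10, 5] -- the short last block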

Each tape reel had a write ring that could be removed to protect the file from being overwritten.  The rule was simple: no ring, no write.  Operators often played horseshoes with the write rings.  We had write rings everywhere.

Tape Libraries

DP shops grew as businesses grew, so there was a need to store all these tape files and some kind of order to keep track of them.  That's where our handy tape librarian came in.

 

We speak in many tongues 

Third generation brought a new, very powerful machine-level language, Basic Assembly or "Basic" or BAL (Basic Assembly Language), which was the last of the “metal scratchers” because it had the power to do just about anything the computer was able to do, and it operated at machine level.  BAL (pronounced as the letters Bee-A-El, with a long A) made full use of all capabilities of the machine because it was converted directly into machine language, instruction for instruction.  Higher languages such as COBOL, FORTRAN, and PL/I could not access all computer instructions.  The higher languages also tended to be inefficient, sometimes generating up to ten machine instructions for one higher-level statement.  Higher-level languages used compilers to generate an object deck of machine code, whereas BAL used an assembler.

Basic Assembly Language was by far my favorite because it was so flexible and very efficient.  It operated at the lowest and fastest level that the computer could achieve.  But it required care because there was much more exposure to machine errors.  The big five errors are described below.  No BAL programmer alive forgets these error codes.  They are burned indelibly into the brain:

0C1 - operation exception.  The machine ran into an op code it did not recognize.
0C4 - protection exception.  The machine was told to go to or access storage where it ought not.
0C7 - data exception.  The machine could not handle the data, such as a field that was not in packed format.
813 - open failure.  The data set name did not match the tape label.
D37 - no more room on disk.

The greater variety in computer languages helped to attract and keep good programmers.  Higher-level languages did not require as much machine knowledge and made software transportable.  The programmer did not need to know how the computer worked or be exposed to the actual machine level.  With the development of FORTRAN around 1956 and COBOL a few years later, around 1960, along with RPG and ADPAC, such low-level programming knowledge was no longer necessary.  Most important of all, programs were more maintainable and less sensitive to hardware upgrades.  With the higher-level languages, development times were faster and error exposure was lower.

Here is an overview of some of the common computer languages.

Shooting Dumps.

When programs failed either in development or in the early stages of production, which they often did, diagnostic tools were not readily available, and programmers were presented with huge core dump listings, all in hexadecimal.  These listings were often inscrutable and caused many hours of exasperation.  It was so much fun to deliver a big fat core dump to a young programmer.

The first step was to find the current program listing that had addresses corresponding to what was in the dump.  The next step was to determine the instruction that caused the interrupt or failure.  Higher-level languages usually had trouble with numeric data in the wrong format, causing the dreaded "data exception".  Lower-level languages could do much more evil things, such as "operation exceptions", or exceed memory, or try to access memory outside their partition.
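
The arithmetic behind that second step was simple, even if the dump wasn't.  Here is a rough sketch of the idea with made-up example numbers: subtract the address where the program was loaded from the address of the failing instruction shown in the dump, and the result is the offset you look up in the listing.

    # Sketch of locating the failing instruction; both addresses are invented.
    entry_point    = 0x05A000   # where the program was loaded (from the dump)
    failing_addr   = 0x05A3C6   # address of the interrupt (from the dump)
    listing_offset = failing_addr - entry_point
    print(hex(listing_offset))  # 0x3c6 -- the offset to find in the listing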

I remember with a kind of dreamy fondness my many days helping other people shoot dumps.  It was a time when bonds were formed that could not be easily broken.

Let's get to the Core

It’s true that IBM 360 processing was faster, but it still was not immediately the great breakthrough that was anticipated.  Computers still suffered from problems caused by some unreliable circuitry and were also hampered by lack of sufficient memory.  Memory is like the notepad in the computer where programs and data can work.  Programs residing in one part of memory tell the computer to read data from a file into some other place in memory, perform operations, and write the updated memory back out to some I/O (input/output) device, such as a disk, tape, or punch card.  Memory was more often referred to as “core” because it consisted of little doughnut-shaped iron rings, each with three wires running through it.  Two of the wires were the write wires; they carried the write impulses and ran horizontally and vertically.  When impulses converged on a given “bit”, or small iron oxide ring, it gained a magnetic state and was turned “ON”.  To turn it off, a reverse current converged on the same location and the core lost its magnetic state and was turned “OFF”.  A third wire, the sense wire, ran diagonally and was the read wire.  Programmers were elated to have a mainframe that boasted 8K (kilobytes) of memory.  It was a big bragging point.  “Our computer has more core than your computer. Na...Na..”

Bits and Bytes

Now comes the murky realm of binary codes.  Originally, bits were organized into groups of six, representing 64 states.  Sometimes there was an additional bit, a so-called check bit, to test reliability.  It summed the state of the other bits to odd or even parity, and if this did not match, a memory error was reported.  Groups of cores had six “bits” because designers felt that 64 states was enough to represent any needed combination of letters, numbers, and special characters.
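
The check bit worked like this little sketch (modern Python, purely illustrative): count the one-bits in the character and make the extra bit whatever is needed to reach the agreed parity.

    # Sketch of an odd-parity check bit over the data bits of one character.
    def parity_bit(bits, odd=True):
        ones = sum(bits)
        # choose the check bit so the total count of ones comes out odd (or even)
        return 1 if (ones % 2 == 0) == odd else 0

    data   = [1, 0, 1, 1, 0, 0]        # six data bits of one character
    check  = parity_bit(data)
    stored = data + [check]
    print(sum(stored) % 2 == 1)        # True means the character reads back clean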

Third generation architecture decided that 64 combinations was not enough, so the group was expanded to eight bits, giving a total of 256 states.  Someone began calling this group of iron rings a byte, since it represented a group of eight bits; two bytes made a halfword and four bytes a fullword.  The 32-bit fullword was most often used for floating point in the scientific community.  It carried a characteristic and a mantissa.

The value of each byte was determined by using binary notation.  Since each core could only be on or off, the first core was the units column, the second core the twos column, the third core the fours column, the fourth core the eights column, and so on.  Four bits could have values ranging from 0 to 15, or 16 states, since you start counting at zero in binary.  A complete set of eight core bits could have 256 values, ranging from 0 through 255.
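
The column weights described here (1, 2, 4, 8, and so on) are easy to verify with a couple of lines; the bit pattern below is just an example.

    # Each bit position doubles in weight: 1, 2, 4, 8, 16, 32, 64, 128.
    bits = [1, 1, 0, 0, 0, 0, 1, 0]     # written low-order bit first
    value = sum(bit << position for position, bit in enumerate(bits))
    print(value)                        # 1 + 2 + 64 = 67
    print(2 ** 8)                       # 256 possible values, 0 through 255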

IBM had its own type of floating point using fullwords.  The first bit was the sign, the next 7 bits were the characteristic (a power of 16 in excess-64 notation), and the last 24 bits were the mantissa, or fraction.  FP was never used much in business programming.  Scientific languages such as FORTRAN used FP a lot.
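
For the curious, here is a small sketch of how that System/360 single-precision floating point layout decodes; the sample words are invented for illustration.

    # Decode an IBM System/360 single-precision (hex) floating point word:
    # sign bit, 7-bit excess-64 characteristic, 24-bit fraction.
    def decode_s360_float(word):
        sign = -1 if (word >> 31) & 1 else 1
        characteristic = (word >> 24) & 0x7F        # power of 16, excess 64
        fraction = (word & 0xFFFFFF) / float(1 << 24)
        return sign * fraction * 16 ** (characteristic - 64)

    print(decode_s360_float(0x41100000))   #  1.0
    print(decode_s360_float(0xC1200000))   # -2.0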

I have seen memory boards, and they are an amazing sight, like a silk quilt woven and strung on a frame.  Now it's all solid state, and billions of bytes are contained in a small stick.

Codes and Binary codes

Early programmers were well schooled in binary arithmetic, but it was clumsy to use.  A more convenient method was to add, subtract, multiply, or divide binary using hex (hexadecimal) notation, which made a lot more sense than multiplying and dividing with long strings of zeros and ones.  Hex uses base 16 rather than 10, so the digits range from 0 to F; that is 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F.  After some training the math became second nature to programmers.  Hex discussions were common, although bewildering to the unschooled masses.  Hex was easily converted into binary, if ever it needed to be, but binary was pretty useless.
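
Since each hex digit stands for exactly four bits, converting back and forth is mechanical, as this little sketch shows; the sample value reuses the base-displacement address discussed further down.

    # One hex digit always maps to the same four bits, which is why hex
    # was so much handier than raw binary.
    value = 0xA04C                  # a base-displacement address, see below
    print(bin(value))               # 0b1010000001001100
    print(hex(0b1010000001001100))  # 0xa04c
    print(hex(0xF + 0x1))           # 0x10 -- hex arithmetic carries at sixteen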

Each state of a “byte” stood for something: a letter, a number, or a machine instruction.  IBM had developed a representation called BCDIC or BCD (Bee-Cee-Dee), an acronym for Binary Coded Decimal Interchange Code, which represented 64 combinations.  For the third generation the code was expanded to EBCDIC (pronounced EB-si-dick or even EB-ca-dick), the eight-bit extended version of BCDIC, which represented 256 combinations.  Each combination stood for a letter, a special character, or a machine instruction.  Other vendors used the ASCII (AS-kee) representation, which is the one mostly in use today.
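
Python still ships code pages for EBCDIC, so the difference between the two representations is easy to show; cp037 below is one common EBCDIC variant, used here only as an illustration.

    # The same characters in one EBCDIC code page (cp037) and in ASCII.
    for ch in "A", "Z", "0", "9":
        ebcdic = ch.encode("cp037")[0]
        ascii_ = ch.encode("ascii")[0]
        print(ch, hex(ebcdic), hex(ascii_))
    # 'A' is X'C1' in EBCDIC but X'41' in ASCII, which is why sort order and
    # file transfers between the two worlds needed translation.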

A trademark of early programmers was the famous “green” card (later white) which contained all the machine codes, worn proudly in the right shirt pocket.  An IBM card served just as well, to the consternation of accountants, who only got to sport their pens and pocket protectors.  These card emblems announced your status as a programmer, a guy who talked in bits and bytes and hex, and used strange words like ZAP, pack it, AND-ing and OR-ing bits, setting word marks, and addressing core (that’s ADD-ress-ing, not add-RESS-ing).  An add-RESS is where somebody lives.  An ADD-ress is a location in core.  This distinction was important.

Indirect addressing

In second generation programming each program was designed to run in the same place in memory.  Each instruction went to the same spot.  The program had to be recompiled and retested if it needed to run somewhere else.

A revolutionary concept introduced in third generation IBM computers was indirect addressing of core memory, which allowed programs to be executed anywhere in memory.  All memory addresses were dynamic, consisting of a base plus a displacement.  To find the actual address it was necessary to look up the register value and add the displacement to it.  Registers were special, extra-fast storage locations kept under direct control of the CPU (central processing unit).  Each address was represented in hex, such as A04C.  A, or 10, named the base register and the displacement was 04C.  To find the actual location in memory you took the contents of register A and added the displacement 04C.  If the contents of register A were 103B12, adding 04C would give 103B5E, and that is where you would look in memory.
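
The example works out like this in a short sketch; the register contents are the illustrative numbers from the paragraph above, not anything from a real dump.

    # Base-plus-displacement addressing: A04C names base register A (10)
    # and carries displacement 04C.
    registers = {0xA: 0x103B12}           # contents of base register A
    address_field = 0xA04C
    base = (address_field >> 12) & 0xF    # high four bits name the register
    displacement = address_field & 0xFFF  # low twelve bits are the displacement
    effective = registers[base] + displacement
    print(hex(effective))                 # 0x103b5e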

Emulators

IBM made a smart marketing move.  In order to sell their third generation computers they had to come up with a way to run the older 2nd generation 1401 and 7090 programs on the new computers.  They developed the "emulator" or the ghost that allowed older generation programs to run in the 360 environment.  Of course there was no access to many of the new features that third generation offered.

Banks loved the emulator because it allowed them to run all their old checking and saving systems in emulation, which was to say very inefficiently.


One project I helped work on was modifying the emulator to allow a degree of device independence.
