A History of Data Processing

a perspective

And pluck till time and times are done
The silver apples of the moon,
The golden apples of the sun.

                                                 W. B. Yeats

D. Vautier
updated 06-12
again 1-14


I spent a few years teaching computer history.  The textbooks seemed to distort the narrative and focus on what we like to refer to as "cool" or "neat" stuff dealing with binary code, bits, bytes, secret algorithms, and other black science.  But there was a lot more involved in data processing history than just the catchy stuff--lots more.  It's the whole world of the tabulation industry.  For a long time the computer was treated like a stepchild within the whole data processing picture, or what we today perceive as the IT industry, that is, the processing of large quantities of data and doing wonderful things with that data.  Computers in fact had little to do with data processing at least until after 1960, and then only gradually--a point that should be understood but is not.  Computers were simply an extension of a much larger and more stable technology--the unit record business--less glamorous, very boring, and therefore far less recognized historically.

This is a brief discussion that presents a more accurate view of the history of data processing.  The first computers were not computers at all but ingenious mechanical machines.

Computer history texts contain the usual material: Charles Babbage and his difference and analytical engines, along with his likable collaborator Ada Lovelace, who is often considered the first programmer.  I don't know how that idea came about or even if it is significant, but it does lend charm to an otherwise dull story.

Existing literature is also sure to discuss the typical advances of the heavily funded government ENIAC program, and the more notable achievements of the blessed trinity, John Bardeen, Walter Brattain, and William Shockley, inventors of the transistor.  But these inventions did not come anywhere close to enhancing data processing; they were developments in basic research which only later contributed to computer development.

Missing from this standard picture are some of the critically important but less attractive, below-the-surface realities--not at all glamorous or showy or even particularly newsworthy, but beyond doubt a huge part of the development of large-scale data management.  These realities laid the foundation for all modern information systems.  Computers were only a glitzy sideshow that came much later.


Processing more data faster has always been a big part of industrial growth, and being able to do it quickly with little error was always the goal.  The obvious fact is that it doesn't take a computer to do data processing or large-scale data management.  Data processing can be done, and was done for a very long time, using plain old everyday mechanical devices like wires, coils, solenoids, magnetos, magnets, cardboard, and electrical switches--certainly not fast or flashy, but done all the same and done quite accurately.

Imagine for a second that computers never came along and the internet never happened.  What would we be like as a society?  Probably OK.  There would be more jobs, more boring work, more newspapers, less Wall Street crime, much more reading, and our paychecks would continue to be printed and arrive just as they had since the advent of data processing around 1900.  The stock market would continue to work, the government would continue to run, and we would continue to be fed and clothed.

All modern computer data processing is undoubtedly built on the groundwork of unit record technology.  Unit record operations lasted a long time, well over 70 years, starting around 1900.  There were no computers anywhere to be found.  None.  There was no Babbage or Lovelace or Pascal.  There was no binary notation, or Polish notation, or bubble sorts.  These things were not needed.  It had already been figured out.


UNIT RECORD

Quite expectedly, data processing started in a very physical way with the idea of storing data in a more accessible form than just ledgers.  The first practical system for storing digital data and then manipulating it used cards--not altogether a new concept, since punched cards had been guiding Jacquard looms since the early 1800s.  The introduction of a practical digital card file solved a big problem in data storage.  The other problem, data manipulation, was solved by a number of ingenious machines.  Such storage systems and processing machines appeared as early as 1890, many years before electronic computers were ever practical.  Unlike the theoretical and whiz-bang exploits of early pioneers in the field of bits and bytes and complex operations, this early type of data storage was totally physical.

Herman Hollerith started it all.  He did not use esoteric calculations involving wheels and cogs and difference engines and Enigma codes; instead he employed a very practical way to capture and store data and produce usable results.  His inventions were quickly adopted by business and government at a time when innovation was badly needed and the country was growing like mad.


Hollerith worked for the Census Office during the 1880 census.  He decided that he could develop a better way to collect, store, summarize, and report census data than tediously working through hand-recorded ledgers and adding machines.  So he designed a card that contained little punched holes.  These holes represented digital information, which was electrically counted, or tabulated, by a machine that he also designed.  The census of 1890 was completed in just three years; the 1880 census, by contrast, had taken most of the decade to tabulate.
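
To make the unit record idea concrete, here is a minimal sketch in modern Python of what a card file and a tabulator were doing.  Everything here is illustrative: real cards held punched holes rather than characters, and the field names and values are invented for the example.

    from collections import Counter

    # Each "card" is one record; its data lives in fixed fields (card columns).
    cards = [
        {"sex": "M", "state": "NY"},
        {"sex": "F", "state": "NY"},
        {"sex": "M", "state": "OH"},
    ]

    def tabulate(deck, field):
        # Count cards by the value punched in one field, much as a tabulating
        # machine advanced a counter dial each time an electrical contact
        # closed through a hole in the card.
        return Counter(card[field] for card in deck)

    print(tabulate(cards, "state"))   # Counter({'NY': 2, 'OH': 1})

The point of the sketch is that tabulation is just counting over a deck of fixed-format records; nothing beyond incrementing a counter is required.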

By 1896 Hollerith had left the Census Office and struck out on his own, setting up the Tabulating Machine Company, which by 1924 had become IBM.


As time went on, data processing had to grow to keep up with demand.  In its subsequent evolution there were at least four discernible technological stages, which came to be called generations.


The four generations of data processing