I first started programming in 1963 at UW-Milwaukee. We had an IBM 1620 with 20,000 decimal digits of memory, a typewriter, and a card reader/punch. This machine did all of its arithmetic and addressing in decimal rather than binary. Each decimal digit had 6 bits: 4 bits for the value, 1 flag bit, and 1 parity bit. Memory problems were frequent, so the parity bits were needed. A character was 2 digits. An instruction was a 2-digit opcode and two 5-digit addresses, for a total of 60 bits if you ignore the parity bits. Two addresses were needed because the 1620 had no registers. It used indirect addressing rather than indexing. Execution time was in the milliseconds. There was no hard disk.
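The 6-bit digit format above can be sketched in a few lines. This is an illustrative model only, not the machine's actual bit layout: the function names are hypothetical, and odd parity is assumed here.

```python
# Sketch of a 1620-style digit: 4 value bits, 1 flag bit, 1 parity bit.
# Layout and parity polarity are assumptions for illustration.

def encode_digit(value, flag=False):
    """Pack a decimal digit (0-9) and flag into a 6-bit word with odd parity."""
    assert 0 <= value <= 9
    bits = value | (flag << 4)               # 4 value bits plus the flag bit
    parity = (bin(bits).count("1") + 1) % 2  # set so the total 1-count is odd
    return bits | (parity << 5)

def check_digit(word):
    """Return True if the 6-bit word has odd parity (no single-bit error)."""
    return bin(word & 0x3F).count("1") % 2 == 1
```

A single flipped bit, the kind of memory error the paragraph mentions, makes `check_digit` fail, which is all the hardware check could detect.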
To execute a program, you fed the first pass of the Fortran compiler into the card reader, followed by your Fortran program. It punched an intermediate output deck. You then fed in the second pass of the compiler, followed by the intermediate output, followed by the Fortran library. It punched your executable program. You then fed your program and data into the card reader, and your program punched cards with the results. You took those cards to a 407 tabulating machine to get your results printed.
The IBM 1620 did arithmetic by looking up the result in a table in memory. The addition and multiplication tables were both stored in memory. One could change the tables to do arithmetic in any base less than 10, but then address arithmetic wouldn't work.
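The table-lookup scheme can be sketched as follows. This is a simplified model under stated assumptions, not the 1620's actual microoperation: the table is indexed by two operand digits and yields a result digit plus a carry flag, and multi-digit addition is just repeated lookups.

```python
# Simplified model of table-lookup decimal addition: the sum of any two
# digits is read from a 100-entry table rather than computed by an adder.
ADD_TABLE = {(a, b): ((a + b) % 10, (a + b) >= 10)
             for a in range(10) for b in range(10)}

def add_decimal(x, y):
    """Add two non-negative integers digit by digit using only table lookups."""
    xs = [int(d) for d in str(x)][::-1]    # least significant digit first
    ys = [int(d) for d in str(y)][::-1]
    result, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        a = xs[i] if i < len(xs) else 0
        b = ys[i] if i < len(ys) else 0
        d, c1 = ADD_TABLE[(a, b)]          # first lookup: digit + digit
        d, c2 = ADD_TABLE[(d, carry)]      # second lookup folds in the carry
        result.append(d)
        carry = 1 if (c1 or c2) else 0
    if carry:
        result.append(carry)
    return int("".join(map(str, result[::-1])))

print(add_decimal(758, 467))  # → 1225
```

Rebuilding `ADD_TABLE` with a different modulus would give arithmetic in a smaller base, as described above, though addresses would then no longer be interpreted correctly.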
You might find http://www.computerhistory.org/projects/ibm_1620/ interesting.
Translations of this page: Hungarian, Polish, Portuguese, Romanian, Russian, Ukrainian.
David Wilson / firstname.lastname@example.org