As a freshman at CMU in 1968, I took my first computer programming course - Algol on the Univac 1108. Near the end of the term I decided it would be fun and interesting to write something that simulated parts of the Mercury, Gemini, and Apollo manned space programs. Mercury got into orbit, coasted for a while, and reentered. Gemini added changes to the orbit and rendezvous with another mission or target in orbit. Apollo added travel to lunar orbit and landing on the moon in the Lunar Module. And returning to the Command Module and returning to Earth, of course.
I never got past Mercury, as it didn't take long in the second semester before I was too busy with coursework. Besides, all my orbits were decaying, and one improvement made things worse. So my Mercury program ended in February 1969. NASA's Mercury program flew from 1961 to 1963, Gemini from 1965 to 1966, and Apollo from 1968 to 1972.
Still, it was a landmark program for me and I've kept the listings for (Yikes!) over 50 years. It also became a bit of a benchmark.
Before I started at CMU, Edsger W. Dijkstra wrote a now-famous letter to the Communications of the ACM, "Go To Statement Considered Harmful." I may have heard about it in class; I certainly heard about it later, though partly in the newer context of goto interfering with pattern recognition in optimizing compilers.
Just before my sophomore year, Dijkstra wrote another seminal document, Notes on Structured Programming. I never saw it until, umm, today, but it had an immediate influence on teaching programming at CMU. Based on my quick scan, he basically advocated a "top-down" design where you start with a few lines describing the steps the program will take. Then each step is described again in more detail, and so on, until you wind up with the code. Other phrases that have been attached include "Modular Decomposition" and the "K.I.S.S. Principle - Keep It Simple, Stupid!"
I think all that is involved in many other activities, like building a house. Start with the number of floors, bedrooms, bathrooms, and distinctive features; then determine sizes, plumbing and electrical requirements, etc. Structured programming also advocated limiting the scope of variables to reduce the risk of something way over there breaking something over here. Or, don't let the stove control the refrigerator. Especially don't let the stove, microwave oven, and baby monitor all control the refrigerator at the same time.
Sometime or other, let's say the summer between my sophomore and junior years, I took a look at my Mercury program to have a good laugh at my "spaghetti code" - ill-structured code. I was surprised, pleased, and a bit chagrined that the code was quite well structured. Basically the program prints "ASCII Art" images of the Earth, ellipses, and simulated orbits. Each had its own subroutine, and I had others for printing and clearing the string array that I "drew" into.
I had seen some badly structured code written by other students in that first class. I remember one assignment where someone else's program was two or three times the size of mine and very hard to follow. Eventually, I decided that the people who most strongly advocated for structured programming were the people who didn't innately write structured code.
In my senior year I took a simulation course that touched on several languages and simulation problems. One was to look at the classic multiple queues at bank and store checkout lines vs a single queue style that is sometimes used now. Another was to model a chain of tailgating cars driving down the road and have the lead car slow down. The reaction time delay flowing back from driver to driver led to rear-end crashes as each car had to make a stronger response.
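That tailgating model is easy to sketch today. Here's a minimal point-car version in Python - every constant is made up for illustration and none of it is the course's original code. Each driver simply matches the speed of the car ahead, but only after a fixed reaction delay, so when the lead car brakes, each follower keeps closing on stale information:

```python
DT = 0.5        # time step, seconds
DELAY = 2       # reaction delay, in steps (one second here)
N_CARS = 8
GAP = 10.0      # initial spacing between cars, metres
V0 = 30.0       # initial speed, m/s

pos = [-i * GAP for i in range(N_CARS)]   # car 0 leads
vel = [V0] * N_CARS
history = [vel[:]]                        # past speeds, for delayed reactions

crashed = False
for step in range(200):
    # Each follower sees the speeds from DELAY steps ago
    past = history[max(0, len(history) - 1 - DELAY)]
    new_vel = vel[:]
    new_vel[0] = max(0.0, vel[0] - 3.0 * DT)   # lead car brakes steadily
    for i in range(1, N_CARS):
        new_vel[i] = past[i - 1]               # react to stale information
    vel = new_vel
    history.append(vel[:])
    for i in range(N_CARS):
        pos[i] += vel[i] * DT
    # A "crash": a follower reaches the car ahead
    if any(pos[i] >= pos[i - 1] for i in range(1, N_CARS)):
        crashed = True
        break
```

With a one-second delay and a ten-metre gap, the closing speed during braking exceeds what the gap can absorb, and the sketch ends in a rear-end collision well before the lead car stops.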
At the end of the course I took a look at Mercury again and realized that one thing I had done was to vary the time between computing new points in orbits. I made it inversely proportional to the square of the distance between the satellite and the center of the Earth. It worked very well for orbits of objects like meteors that would follow a path close to the center of the Earth. We didn't even talk about techniques like that in class, nor discuss the long-term accuracy problems of adding small changes to the satellite's location on each step and the loss of precision that entails.
Looking at it now, I see several things I'd do differently today:
The variable names are too short
While some of that comes from using typical algebraic naming style, descriptive variable
names are a lot easier to read and search for when checking code.
No comments
I started learning about comments the next year. I remember Bill
Wulf mentioning that sometimes the best comment is a blank line. It wasn't until I
started reading and changing code that other people wrote that I learned how important
comments are before each subroutine, and well, everywhere.
My expression syntax was questionable.
I would never write "3/5*J" today - I leave forms like this to annoying social media posts
asking what it evaluates to. In general I put spaces on either side of + and - binary
operators and no spaces on higher precedence operators, e.g. "(x + 2)*(x + 3)".
In this case the code wants to convert a vertical character position tracked in tenths of an inch to a string array index for the common six-lines-per-inch line printer output, so I'd be more likely to write "J * 3/5" or "(J * 3) / 5" to ensure J * 3 is computed before the division is done in integer math. It looks like Algol did the calculation in floating point anyway. Or define a C macro like "#define POS_TO_LINE(POS) (((POS) * 3) / 5)" and code "POS_TO_LINE(J)".
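Python's floor-division operator behaves like the integer division at issue, so the trap is easy to demonstrate (J = 7 is just a made-up position, not a value from the program):

```python
J = 7   # a made-up vertical position, in tenths of an inch

# Evaluated left to right, 3 // 5 is 0, so the whole expression collapses:
bad = 3 // 5 * J        # 0, for every J

# Multiply first, then divide: 7 * 3 // 5 = 21 // 5 = 4
good = J * 3 // 5       # the line printer row the position falls on
```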
Or create gnuplot files and make line graphics instead of ASCII Art. Or do interactive graphics, etc. All in Python, except maybe in C at first to try to reproduce the orbital decay. My suspicion is that single precision floating point has trouble with making small adjustments to position values. Perhaps the 1108's single precision floating point format and operations have some odd behaviors.
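That suspicion can be sketched without an 1108: Python's struct module rounds values through modern IEEE single precision (not the 1108's format, but the same "small nudge to a big number" failure mode). Near 6.37×10^6 the spacing between representable single-precision values is 0.5, so centimetre-scale adjustments vanish entirely:

```python
import struct

def to_f32(x):
    """Round a Python float through IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

r = to_f32(6.37e6)       # position in metres, near the Earth's surface
nudge = to_f32(0.01)     # a one-centimetre adjustment per step

# Near 6.37e6 the spacing between single-precision values is 0.5 m,
# so the one-centimetre nudge rounds away to nothing:
nudged = to_f32(r + nudge)
```

In double precision the same nudge survives, which is one reason a straight Python or C double-precision port might not reproduce the old decay at all.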
Other notes on the program.
How neat! In Dijkstra's Notes on Structured Programming he uses an example of plotting 1,000 points of a curve on line printer output. I wish I could say great minds think alike, but it's really more that great minds (and lesser minds) struggle with the same limitations of the technology at hand.
Funky codes for funky Algol syntax
The Algol 60 language specification used mathematical symbols in expressions,
e.g. × for multiplication, ÷ for integer division, or ≤ for less than or equal.
With the advent of ASCII and other character encodings we typically write "*"
and "<=". The "=" sign was a test for
equality, so Algol expected people to use ":=" for assignment statements, e.g. "val :=
truth = consequence;".
Key punches and other data entry devices didn't have many of those symbols, or even common ones like the semicolon. So Univac changed a lot of symbols to what we call reserved words or digraphs today, and I think it accepted ":=" or "=" for assignment and used "eql" for comparison. It likely borrowed a lot from Fortran.
I/O - Mathematicians don't need that
The formal language specification didn't define input/output operations, so Univac came up
with a scheme that included a "format" statement that could do complex formatted I/O,
including loops. I once wrote a program to print all permutations of four letter strings
- most of the computation was processing format statements.
See Algol 60 - Sample Implementation and Examples for later recommendations from the Algol group. They're quite different from what Univac did.
Entier? Huh?
Algol defines "entier" as a built-in procedure to convert real numbers to integers by
dropping the fractional part - err, no, by rounding down to the next lower integer. In the
only place I used it I included "+.5" (" + 0.5" today). Univac included an "integer"
built-in that likely does normal rounding.
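Python's math.floor matches entier's round-down behavior, which makes the distinction from truncation and from rounding easy to show:

```python
import math

# Algol's entier() is a floor - it rounds down, even for negatives,
# rather than truncating toward zero:
floor_pos = math.floor(3.9)       # 3
floor_neg = math.floor(-1.5)      # -2, where truncation would give -1

# Rounding to the nearest integer by adding 0.5 before flooring,
# as the old "+.5" in the program did:
rounded = math.floor(3.9 + 0.5)   # 4
```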
One of Google's excessively inflated (362,000,000 results (0.42 seconds)) sets of references to entier includes:
Entier - an old French word meaning 'whole' or 'entire', is the inspiration behind the restaurant's nose-to-tail culinary approach.
The nose part is okay, at least if it's my nose. I'm not so sure about the tail part, especially if it's my tail part.
About those intermediate numerical data
Those are printed every 25th step in the simulation, not every step!
Units? Some comments would be nice.
I see expressions like "VO = VO * R /
6.37&6". I recall it was different, but "&" must be a power-of-ten indicator. From
a run of a satellite barely skimming the surface of an air-free and spherical Earth, VO
started at 475,000 and R at 10 (character widths). That 6.37&6 is between the Earth's
polar radius (6357 km) and equatorial radius (6378 km), so the circumference is some 40,000
km. If a satellite could orbit at sea level, its period would be some 84 minutes, so the
velocity would be 476 km/min. So I'm using meters and minutes. Gack.
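That sanity check takes only a few lines; the only physics assumed is v = sqrt(g·r) for a circular orbit at radius r:

```python
import math

R_EARTH = 6.37e6   # metres, the program's 6.37&6
G_SURFACE = 9.8    # m/s^2 at the Earth's surface

# Circular-orbit speed at radius r: v = sqrt(g * r), about 7,900 m/s,
# which in the program's units is about 474,000 m/min
v_m_per_min = math.sqrt(G_SURFACE * R_EARTH) * 60

# Orbital period = circumference / speed, about 84 minutes
period_min = 2 * math.pi * R_EARTH / v_m_per_min
```

Both numbers land right on the values recovered from the old run, which is what pins the units down as metres and minutes.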
Then there's "AO = -9.8 * 3600 * R * R * R / SS / 6.37&6". -9.8 is the acceleration of gravity at the Earth's surface in m/(sec^2), so * 3600 makes m/(min^2). Within the rest is R * R / SS. SS is the square of the distance from the center of the Earth in character widths, call it cw, so that subexpression is the dimensionless inverse-square scale factor for the satellite's distance, and what's left, R / 6.37&6, scales the Earth's gravitational acceleration from m/(min^2) to cw/(min^2). I wish I could say it's all coming back to me!
Hmm. Perhaps I should have taken the square root of "R * R / SS" instead of focusing on getting the units right. For a satellite above the Earth, suppose it's an extra R up, so 20 cw from the center: R * R / SS would be 100/400, whereas 10/20 would give a much greater acceleration. Oh yeah - the force of gravity varies with the square of the distance, so the formula is correct.
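Restating that check in code, with the program's R = 10 character widths:

```python
R = 10.0            # Earth's radius in character widths, as in the program
SS = (2 * R) ** 2   # squared distance for a satellite one Earth radius up

# Inverse-square factor: gravity at 2R is a quarter of the surface value
inverse_square = R * R / SS   # 0.25

# What the square-root version would have given - too much acceleration
first_power = R / (2 * R)     # 0.5
```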
Contact Ric Werme or return to his home page.
Written 2023 April 23, last updated 2023 April 23.