Software Developer Tooling: Then and Now

Andrew Glynn
Nov 7, 2017
A “simple” case of development tooling today …

My criticisms of current development tooling are often met with an attitude of “you’re just remembering it differently, things are better now.” But the ability to run up old environments and tools gives the lie to that interpretation: having done exactly that, rather than finding they weren’t as good as I remembered, I found they were better.

The slow degradation of capability over a long period, combined with my own increase in experience and skill, makes those old tools seem almost easy to use, even compared with my recollections.

While some environments have improved, they’re largely niche environments. It’s in the mainstream environments that things have gotten worse.

To give a bit of background, my first language as a child was Forth. I wrote one of the first sequencers in 1981, on an Apple IIe in 6502 assembler, but the lack of any conceptual structure enabled me to completely forget it within a few months of finishing it 😊.

At university I majored in ‘anything and everything’ that looked interesting, winding up with a double major in Philosophy and Mass Communications (the latter allowed me to include things like film production and classical electronic music composition as non-electives), and a B.S. in Mathematics, a title that seemed entirely apropos to the subject.

The languages we studied in the programming courses I did take were Smalltalk and Objective-C, which were a little odd for the late 1980s, but then academic languages are often a little odd. Probably odder was that we were working on NeXT cubes in the first place. Smalltalk felt quite natural having used Forth, but Objective-C seemed strange: it felt older and more primitive than either.

Once done with university, I wasn’t sure what I wanted to do, and my first full time job was at an ad agency. Without really intending it I migrated quickly to managing their AS/400 system, which was incredibly easy to both use and manage. As an advertising person, though, I had issues: the only ad I wrote, in fact, got banned.

My next job was at a hydraulics and production automation firm, which, as it happened, was running production lines using DOS software written in Forth. The problem, especially considering the price of even an 80386-based machine in 1992, was that hydraulic oil tends to float through the air and eventually settle, quite often on the motherboards of such machines, with the inevitable result that they fried rather quickly. We also had problems with our accounting/inventory system (not quite an ERP, but close), a networked DOS program that used the old NetWare DOS-based networking (not NetWare 3.x, which was its own OS). The system tended to crash, usually during backup, forcing the re-entry of an entire week’s work. Seemingly only the Xerox technician could even get it to come back up, since it needed to be reinstalled after every crash.

Since my background, hardware- and OS-wise, was in Macintosh and NeXT, I didn’t know much about Windows, which was becoming popular, or about the then main alternative on PCs, OS/2 2.0. However, the presales attitude of Microsoft versus IBM (“just buy it and see if it works” versus “if it doesn’t, I’ll fly out and make it work”) made the choice of OS/2 rather easy. We bought the requisite hardware and installed OS/2, with a LAN Server machine to run the network; the system’s flexibility allowed that machine to double as the owner’s workstation. The machines in the shop and elsewhere ran the DOS-based program in a full-screen DOS session, while the rest of us used the full OS/2 system. It stopped crashing (in fact it ran the program remarkably well), so that was one issue solved.

Playing around with OS/2 and realizing its multitasking was based on hardware interrupts, it occurred to me that even with the DOS Forth-based PA software we could run at least a dozen instances on an 80486 without problems (hydraulics-based production automation is by no means fast). Ported to OS/2, the software became more reliable and used fewer resources, so we marketed a configuration in which a single machine could run 10 production lines, with a hot backup to keep things moving in case of failure. Unsurprisingly, it did rather well, since it saved the cost of 18 machines on a 10-production-line system (presumably counting a primary and a spare per line: 20 machines reduced to 2).

I was offered a job at a company focused on the prepress area, which used OS/2 as a shared library server over emulated SCSI, automatically archiving the streaming image data and building a database of view files from it. There were some brilliant people in the company, and as a result I learned my trade much more quickly.

Unfortunately, due to an issue with the main salesman’s wife (who had nothing directly to do with the company but just liked interfering), I decided to look around for something else, and am probably the only person ever hired by IBM, Claudio Ciborra excepted, based secondarily on my knowledge of programming and OS/2, but primarily on my understanding of Heidegger’s famous tool analytic from Being and Time.

I was hired for IBM’s large CRM initiative, which was based on that tool analytic, specifically on the idea that a good tool should be transparent: if a carpenter is focused on the hammer, he’ll likely hammer his thumb. Good tools only become overt (present-at-hand rather than ready-to-hand, in Heidegger’s terminology) when they’re broken or unsuitable.

Initially I worked on the connectivity between OfficeVision/VM and Lotus Domino, using a combination of tools I was unfamiliar with: VisualAge COBOL, VisualAge Smalltalk, VisualAge C++, and VisualAge Java, since Domino had just implemented for Java the same API available in LotusScript, Java being a relatively new language at the time.

The interesting thing about the family of tools was that they were all themselves written in Smalltalk. For a developer used to live-object environments, both Forth and Smalltalk being such, there couldn’t have been an easier way to be introduced to the archaic nature of COBOL, the complexity of C++, or the simple weirdness of Java.

All the environments compiled incrementally on the fly, had built-in, reliable deployment tools, and allowed debugging of live code. While the look and feel differed somewhat between them, each felt relatively natural for the language it served. Due to my background in tool development concepts, I eventually moved to working on both VA Java and VA C++, though I must admit that visual design has never been my forte.

Below are a few screenshots of each. VA C++ and VA Java are running on OS/2 in VirtualBox, while VAST, which is still available from a company called Instantiations, is running on Windows. Originally all the environments listed were available on OS/2 and eventually NT, and either could deploy to either. The enterprise versions could also deploy to AIX and, with a little fiddling, to Solaris.

VisualAge for Smalltalk (Windows version)
VisualAge C++ (OS/2 version)
VisualAge for Java (OS/2 version)

Anyone familiar with various Smalltalk dialects will find these environments look somewhat familiar, though VA C++ less so, due to the nature of the language itself.

After leaving IBM I worked on a large project that was being ‘rebooted’ after two years of no progress. Although I was still relatively junior (the company consisted largely of very experienced developers), the client decided who was to be team lead on the C++ and Java sides. Both were necessary since the product was based on an older X Window System C++ application being rebuilt as a “client/server” application in C++ and Java; the scare quotes are because the application requires both client and server on every machine, the only really shared aspect being the back-end database.

I suppose it’s no surprise, when the client is the DoD, that they make those decisions. At the time we presumed that our initial tasks had nothing to do with our backgrounds simply because of the disorganization usual at the beginning of any project, but I realized later that they were intended as a test of our flexibility: I was assigned a task using low-level Solaris RPC code in C, while the best C developer I’ve ever met (he wrote an OO version of C entirely in pragma directives) was given a task to architect and develop a small portion of the Java code.

Apparently I passed, and the C developer mentioned and I became the two leads on the project, he for the C++ side and I for the Java side, the two being integrated via CORBA. Looking back, the speed with which we exchanged knowledge to get the tasks done was probably the key factor in the decision.

I showed him VisualAge for C++ and VisualAge for Java, and unlike most C programmers used to Emacs, he ditched it for VA C++ in a heartbeat. We therefore decided to use the two as the basis for our tooling, much to my relief.

We did have a few disgruntled developers, one of whom, though still a friend of mine, was a command-line-oriented, vi-centric C programmer. We stayed friends by arguing strongly for our points, then going for a beer afterwards and joking about it. My Jesuit upbringing and tendency to debate strongly combined extremely well with his Romanian background.

At one point, though, his complaint that he ‘couldn’t see his files’ (since all the environments noted were image-based) led me to lose patience and respond, “nobody gives a flying f** about your goddamned files”, rather loudly in context. Surprisingly, this pretty much ended the debate without any major resentment, and he eventually admitted that, at least on that project, we couldn’t have accomplished it any other way, given the hassle of CORBA and the fact that both environments largely took care of it automatically.

Apparently, towards the end of the 1990s, perhaps frustrated by the lack of interest in VA Smalltalk (which wasn’t all that surprising given its price tag compared with the others), IBM told many of its largest VA Smalltalk customers that Java was the future direction. Someone online posted about that period from the perspective of an IBM customer: having been told that, the better engineers moved to Java. However, the Java versions of the applications didn’t work, or didn’t work sufficiently well, and to this day many of them remain in VA Smalltalk, despite those developers having initially been less experienced and/or talented than the ones who moved to Java. That post, in fact, was my initial reason for writing this one.

IBM’s intentions, however, were predicated on using VisualAge for Java: since it worked much like a Smalltalk environment, having been written in one (all class objects were live, all debugging was of live objects, and so on), moving from Smalltalk would be no great difficulty. There was one significant issue, though: it was dependent on the JavaBeans spec, and specs with no real tooling to support them (anyone remember Structured C?) tend to be ignored as soon as time pressure starts to increase. This particularly affected the visual development tooling, which allowed any JavaBean to be used in the visual workspace, whether it was itself a visual element or not.
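
To make the dependency concrete, here is a minimal sketch of what the spec actually asked of a class (ThresholdBean is a hypothetical example for illustration, not code from VA Java): a public no-arg constructor, getter/setter naming conventions, and “bound” properties that fire change events. Visual builders discovered all of this purely by introspection, which is exactly what broke once developers under deadline stopped following the conventions.

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.beans.PropertyDescriptor;
import java.io.Serializable;

// Hypothetical bean illustrating the conventions the JavaBeans spec required.
public class ThresholdBean implements Serializable {

    private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
    private int threshold;

    public ThresholdBean() {} // the spec requires a public no-arg constructor

    public int getThreshold() { return threshold; }

    public void setThreshold(int value) {
        int old = threshold;
        threshold = value;
        // a "bound" property notifies listeners (e.g. visual builders) of changes
        changes.firePropertyChange("threshold", old, value);
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        changes.addPropertyChangeListener(l);
    }

    public void removePropertyChangeListener(PropertyChangeListener l) {
        changes.removePropertyChangeListener(l);
    }

    // Tooling discovered properties by introspection rather than declaration:
    public static void main(String[] args) throws IntrospectionException {
        BeanInfo info = Introspector.getBeanInfo(ThresholdBean.class, Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName() + " : " + pd.getPropertyType());
        }
    }
}
```

Running it prints the one discovered property (threshold : int); nothing is declared to the tooling, everything is inferred from the naming conventions.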

Eventually IBM gave up on VisualAge, spinning off the Smalltalk variant to Instantiations and killing the others, though the AIX version of VA C++ did survive for several years. VA Java was replaced by what was initially known as IBM Workbench but was quickly open-sourced as Eclipse.

Eclipse, initially, was nowhere near as good as VA Java. It took over 3x the resources while containing a quarter of the functionality. Although in some ways it improved over time, in others it got worse. The incremental in-memory build system was complicated by the introduction and popularity of Maven; the plugin architecture became unwieldy, leading to the adoption of OSGi as the base for a better plugin architecture, though that too became unwieldy over time; and EMF, as a means of approximating the meta-programming techniques available in Smalltalk and implemented in VA Java, does some of the things its predecessor did, but not nearly all. Even after 16 years of development Eclipse lacks many useful features of its predecessor. The one thing it does have, other than support for current Java standards of course, is that the developer can “see his files”.
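
For what it’s worth, the part of EMF that comes closest to Smalltalk-style metaprogramming is its dynamic, reflective API, where classes can be defined and instantiated at runtime without generated code. A minimal sketch (the Book/library model is hypothetical, purely for illustration):

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class DynamicEmfSketch {
    public static void main(String[] args) {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        // Define a class with one attribute at runtime, no generated code.
        EClass book = f.createEClass();
        book.setName("Book");
        EAttribute title = f.createEAttribute();
        title.setName("title");
        title.setEType(EcorePackage.Literals.ESTRING);
        book.getEStructuralFeatures().add(title);

        // Package it so the built-in factory can instantiate it.
        EPackage pkg = f.createEPackage();
        pkg.setName("library");
        pkg.setNsURI("http://example.org/library");
        pkg.setNsPrefix("lib");
        pkg.getEClassifiers().add(book);

        // Create and manipulate an instance reflectively.
        EObject b = pkg.getEFactoryInstance().create(book);
        b.eSet(title, "Being and Time");
        System.out.println(b.eGet(title));
    }
}
```

Even so, this is class definition through API calls, not a live image in which every class and stack frame is already an inspectable, modifiable object: hence “some of the things its predecessor did, but not nearly all.”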

Meanwhile, back in Eclipse, after 80 CPU hours …

ORMs such as Hibernate, and later the JPA implementations, are still not as good as the data access objects in VA C++ and VA Java (never mind the sometimes obscene overhead of JPA tooling; see the Activity Monitor image from the MacBook Pro below). Automatic connections to CICS, MQSeries and other IBM technologies are lacking, while WindowBuilder in Eclipse is something of a joke compared with the visual development tooling in VA Java or the similar-looking “WindowBuilder” in VisualAge Smalltalk.

And on the MacBook Pro, two instances of Eclipse with the JPA tooling have consumed nearly 197 and 162 CPU hours respectively.
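
For anyone who hasn’t touched it, this is roughly the surface area JPA demands even in the trivial case (an annotated entity, a persistence unit, an EntityManager) before any of the provider machinery behind it starts running. A minimal sketch, assuming a hypothetical Customer entity and a “demo-unit” persistence unit defined in persistence.xml:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
public class Customer {
    @Id @GeneratedValue
    private Long id;
    private String name;

    protected Customer() {}                // JPA requires a no-arg constructor
    public Customer(String name) { this.name = name; }

    public static void main(String[] args) {
        // "demo-unit" is a hypothetical persistence unit from persistence.xml.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-unit");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        em.persist(new Customer("ACME")); // one row, several layers of machinery
        em.getTransaction().commit();
        em.close();
        emf.close();
    }
}
```

None of this is exotic, but each layer (annotation processing, unit parsing, provider wiring) is something the Eclipse JPA tooling also re-validates continuously, which is presumably where those CPU hours go.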

Just as importantly, or more so, no DevOps was needed, even on a team such as the one described above, which had over 90 developers in total.

At a certain point the desire to make things work, the desire that leads one into engineering in the first place, is overwhelmed by the annoyances that come with it. This can be due to a loss of some of that desire, to an increase in the annoyance level, or to both.

I’m at that point myself, but being able, in this case, to go back to how things actually worked rather than relying on memory means I can gauge the increase in the annoyance level much more accurately.

Whether or how much my level of desire has dropped is debatable, but the degree to which the annoyance factors have increased renders it largely inconsequential.

Andrew Glynn

A thinker / developer / soccer fan. Wanted to be Aristotle when I grew up. With a PhD (Doctor of Philosophy) in Philosophy, I could be a meta-physician.