Andrew Glynn
Oct 11, 2017

Building-With Versus Building-On:

Improving Software Development Incrementally

Three articles and a doctoral thesis that I recently came across, or had pointed out to me, deal with the state of the software industry from different angles. However different they are, they do relate, and putting them together gives a clearer, more comprehensive view both of how we got to this state and of how we might get to a better one.

The first article is titled “Disintegrated Development Environments — How Did We Get Here?”, the second “How Developers Stop Learning: Rise of the Expert Beginner”, and the third “The Coming Software Apocalypse” (the title rather overhypes the problem, though the article itself doesn’t); the doctoral thesis, titled “Re-imagining the Programming Experience”, was written by a (successful) doctoral candidate at UC Davis in 2012. Links to the articles and thesis are below:

https://amasad.me/disintegrated

https://www.daedtech.com/how-developers-stop-learning-rise-of-the-expert-beginner/

https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/

http://web.cs.ucdavis.edu/~su/theses/MA-dissertation.pdf

I’m going to start with the first article, primarily because it deals with the beginnings of a phenomenon that underlies the situation (as does the second, though in a different sense), and add some information about the period between those beginnings and the present that I feel the article needs in order for someone unfamiliar with that history to understand how those beginnings are still affecting the situation.

The article mentions a semi-ironic lecture given by Richard Gabriel in 1990 entitled “Worse is Better”. Gabriel was primarily discussing events that occurred in the 1970’s, but continue to shape developers’ perspectives on development practices. The lecture was given at a LISP development conference, and since the ‘worse’ in the lecture title referred to practices not prevalent in the LISP community, it was largely misunderstood. In fact Gabriel has since spent much of his career arguing against his own lecture.

To understand Gabriel’s lecture, how it was misunderstood, and the reactions from people within the LISP community, as well as similar reactions in the Smalltalk community, one has to understand the situation at the end of the 1960’s / beginning of the 1970’s. As it happens, I first touched a computer, and computer programming, in 1972 (no, I’m not a longtime retiree; I happened to be extremely young compared to most people who had access to a computer in 1972).

At the time LISP was (and remains) one of the most powerful languages in software engineering. While defining ‘powerful’ can get into all kinds of argumentation, I’m defining it here in a simple way: how much can be accomplished in a given language with a certain amount of programming effort. Since LISP was one of the earliest programming languages (1958), it was also one of the most developed by 1970, to use an arbitrary date. The paradigm underlying its development, as the author of the article notes, was known as the Right Thing approach, or alternatively the MIT approach. In the early 1970’s a group of programmers with a different approach achieved some successes, given the limitations of the environment they had access to, and the somewhat crude but effective methods they used are what Gabriel was referring to in the lecture title. That the Right Thing approach was described in a rather ponderous way, and was not in keeping with the overall trends in society at the time, is part of the reason it was rejected in favour of an approach with far less real potential. For that reason, as well as to make the difference more tangible, I’m calling the two approaches ‘building-on’ and ‘building-with’.

The ‘other’ developers were working with a new environment called Unix, which at the time lacked most of the capabilities of any modern Unix, but ran on machines that were insufficiently powerful to run the more powerful LISP environments. Rather than developing with a methodology that stressed thinking through a programming issue, they tended to ‘hack’ out code in C using text editors, which were basically all that was available. Although they had some minor successes, none was really sufficient for the public at large, or even the majority of developers, to take all that much notice, until some developers in the military were given a difficult project with very little budget with which to accomplish it.

The project was to design a computing system capable of being sufficiently distributed to have no single point of failure, and thus capable of withstanding an all-out war, including nuclear war. The military developers were not at all the kind of C hackers associated with Unix in its early days, but given the budget and requirements, they had little choice but to use the environment. Since the project was military, though, the personalities and backgrounds of those involved were for the most part unknown, and an assumption that they were the same type of C/Unix hackers became prevalent simply because they used the same environment.

No matter what the environment, though, there are ways and ways of using it. Rather than ‘hacking’ the OS capabilities in C, the military developers wrote their code carefully, elegantly, and reliably, practicing in fact the Right Thing / MIT approach, but in an environment that made that approach significantly more difficult. The result was ArpaNet and the technologies that made it possible — TCP/IP, DNS, etc. ArpaNet was and is a brilliant engineering project, one that had nothing to do with hacking C to try to get something to work without really knowing what was wanted. Its association with Unix, which in reality is what made Unix a capable environment, led many in the development community to conclude that the MIT approach was too cumbersome and not ‘creative’ enough, rather than demonstrating that good engineering can create phenomenal results even in a limited environment.

A few years after ArpaNet, the first microprocessor-based computers came out. These machines were far more limited even than the early Unix machines. An amateur hacker by the name of Bill Gates managed to get one, an Altair machine, to run a very simple programming language called BASIC. The idea that such a (for the time) cheap but limited machine could run any programming language was a surprise to many, and resulted in a further proliferation of the notion spelled out in Gabriel’s lecture. The introduction of the Intel 8080 processor line made microprocessor-based computers capable of running simple but competent operating environments such as CP/M and the Apple 1 OS, and of running simple word processors such as WordStar, but neither of those capabilities exactly set the world on fire (since the Apple 1 cost more than most such machines, and had no software for its OS, it was a complete failure). Steve Jobs wasn’t, like Gates, an amateur programmer; he simply wasn’t a programmer at all. He was a savvy marketer, if he had something worthwhile to market, and more importantly one of those people who are extremely good at picking talented people to surround themselves with. After the failure of the Apple 1 he decided, with his business partner Steve Wozniak, to try again, and it was primarily Wozniak, or ‘Woz’, who came up with the Apple ][ (the square brackets were trademarkable). Because the history of the Apple ][ has been online forever and has been read by many people, I won’t go into its success or the reasons for it here; if you haven’t read the history and are interested, the full history is still online at:

https://apple2history.org/history/

IBM, of course, was still king when it came to the ‘real’ computers that ran your bank. But the success of the Apple ][ and its successors (the II+, IIc, IIe, etc.) caught IBM’s eye. IBM originally intended to use CP/M, but on a newer Intel processor, the 8086; however, the author of CP/M forgot about the meeting and went fishing. Annoyed, IBM looked for another solution and found it in the form of a hacked CP/M lookalike called QDOS (short for Quick and Dirty Operating System), which had been purchased by a certain Bill Gates. QDOS and its 8086 port, MSDOS, were not as good as CP/M, though, and Intel was in financial trouble because its 8080 processor had been outdone commercially by the Zilog Z80, essentially a faster, cheaper 8080 clone. As a result the 8086 was only marginally better than the 8080, not sufficiently better to make up for QDOS’ deficiencies, and certainly not good enough to be a serious challenger to the Apple ][ line, which had taken advantage of falling CPU prices to add features such as color and better I/O. For the first year, sales of the IBM PC were almost non-existent. A cheaper version, the PC jr, didn’t help matters by being not that much less expensive, while appearing much ‘cheaper’ in the derogatory sense due to things like the ‘chiclet’ keyboard, a serious no-no for a company like IBM, a good deal of whose success had been built on the ‘feel’ of the Selectric typewriter keyboard.

Although more expensive, more business-oriented microcomputers such as the IBM PC didn’t initially impinge much on Apple, its market was being chipped away at the lower end by cheaper computers aimed at the home and education markets. These took advantage of the same drop in CPU prices that enabled the improvements to the Apple ][ line, but rather than adding features, simply dropped the price. They included the Commodore PET series in education, the Commodore VIC-20 and 64 in the home market, and machines as far down in price as the Sinclair ZX80, which sold for a mere 99 quid, or about $150. The Sinclair had the kind of ‘chiclet’ keyboard used on the PC jr, but that was far more acceptable at its price point. (The early Unix machines in the early 70’s had used a similar style of keyboard, which is one of the reasons most Unix commands are only two letters: they’re terrible to type on.)

Apple tried initially with an updated, upgraded computer, the Apple III, but it wasn’t sufficiently different to justify the increased price. Eventually Jobs went back to the drawing board, and what he drew on was a computer designed the Right Way in the 1970’s, based on a language that aimed to make the power of a LISP-like language easier to learn and easier to think in: Smalltalk. Engineers who had been monitoring Alan Kay and his language team realized that the graphical nature of the interface, WYSIWYG editing, and other features of Smalltalk (including its ability to use a small bootstrapper and then load itself, so that virtually all of the running code was written in Smalltalk itself, a feature inspired by LISP), and its networking ability, inspired by ArpaNet, could allow them to produce a desktop publishing system that took advantage of Xerox’s photocopier technology, allowing a workgroup of machines to print pages with arbitrary font styles and simple graphics to a single (expensive) photocopier with the scanner removed. To make the screen look more page-like, they inverted the original Smalltalk color scheme to produce black text on a white background. The result was the Xerox Star. Unfortunately, Xerox salespeople were photocopier salespeople; they had no idea how to market and sell the Star, and weren’t all that interested in doing so in any case, given the demand for photocopiers. The Star languished at the PARC laboratories while Alan Kay and his team continued to improve Smalltalk, finally releasing a standardized version, Smalltalk-80, in 1980.

Jobs, while on a tour of the PARC facilities, saw the Star and realized it was the Right Thing to take Apple beyond incrementally improving what they had, and set Apple about creating a similar machine. Since they couldn’t copy both the way the machine worked and the language it was written in, they created a ‘sort of’ object language based on Pascal, and came up with the Lisa. The Lisa, though, was too expensive, and didn’t fare any better than the Apple III. Jobs had been through this before with the Apple 1, though, and was determined to see the vision succeed. By optimizing the OS as much as possible, and through the good fortune of the rapidly declining price of photocopier technology, in 1984 Apple was able to release the Macintosh, a Lisa-like machine with less power and a smaller screen, but still capable of being networked in a workgroup connected to a scannerless photocopier, by then known as a laser printer. The subsequent release of Microsoft Office for Mac, until then a poor runner-up to Lotus on the IBM PC, gave the Macintosh the business case the Apple II lacked (particularly after the release of Lotus 1–2–3 on the IBM PC, which could create charts, while VisiCalc couldn’t).

So from the early 1970’s to the mid-80’s, the ‘mythos’ of ‘worse is better’, of hackers coming up with better results, was in fact a mere myth, neither backed up by reality nor containing the kind of truth that good fiction contains. The reality was that top engineers used similar hardware and environments, because they were more affordable, and managed to create successful products at a lower price point. By 1986, seeing the Macintosh cutting into its PC sales, which had ballooned after the release of Lotus 1–2–3, IBM knew it had to come up with something that could compete with the Macintosh, and with Microsoft began working on a new PC with a new OS, known as the Personal System/2 and Operating System/2, or PS/2 and OS/2 for short.

The problem was that while Microsoft had gotten to the level of being able to write a usable version of MSDOS, and could take advantage of the capabilities of the Macintosh by building on them, they didn’t actually know how to build a Macintosh-like OS. While Jobs, after a dispute with the Apple board, had founded NeXT and had some top engineers busy building a better Mac than the Mac, Gates’ Microsoft was in a quandary. Afraid that IBM would get impatient with the progress of OS/2, they took the simpler parts of it and got them to run on MSDOS. The result couldn’t do much other than run more than one DOS program in different windows, but it ‘sorta kinda’ worked, and it was faster than the then-current version of OS/2 on the PS/2. By and large, though, Windows 1 and 2 were even more ignored than OS/2, and the majority of PS/2 machines, along with the newer, more compatible clone machines, still ran MSDOS. As Microsoft had feared, after the third largely failed release of OS/2, version 1.2, IBM decided that the market was too big a potential one to leave in Microsoft’s hands, brought OS/2 in house, and rewrote it for version 1.3. To avoid a public dispute, IBM agreed that Microsoft would work on the release planned to follow 2.0, a more portable, hardware-independent version of OS/2; version 2.0 itself would be the first targeted at the new 32-bit processors, beginning with the 80386 line.

Unfortunately for IBM and OS/2 1.3, its reputation had sunk low enough that even the degree to which it improved on 1.2 wasn’t sufficient to attract either developers or buyers, and its main use was as the basis for the new LanManager/LanServer local area network that allowed DOS machines to be networked reasonably reliably. The Microsoft-designed UI, while superficially Mac-like, lacked the functionality of the Macintosh interface, and despite the improvements to the underlying system, it was all that potential buyers saw, which made the release look like just a minor update to a failed system.

Meanwhile, although Jobs succeeded with his team of engineers and built the first NeXT systems, which combined the power of newer Unix-based systems with a UI more advanced than the Macintosh’s, the combination of pricing, the success of the Macintosh, and the relative maturity of the microcomputer industry made it more difficult to disrupt the market in the way the Apple II and Macintosh had.

Despite the claim at the beginning of Mr. Masad’s article that, whatever the issues, things have improved dramatically, measuring software productivity is notoriously difficult, and the most respected group in such metrics put the peak of software developer productivity at precisely that point: 1988. That was also the year I happened to get a machine capable of running a version of Alan Kay’s Smalltalk, and the university I was attending took a chance and purchased NeXT cubes for programming students, with Smalltalk and Objective-C, the latter a language with syntax more like C but with an object model taken directly from Smalltalk, as the main languages taught.

Two specific developments in the couple of years directly after that, together with my own and colleagues’ experiences in the industry from 1991 to now, make me suspect that that 1988 date, while perhaps not entirely accurate, is far more accurate than assessments that claim things have gotten progressively better since. A third development, which initially appeared as early as 1983, yet didn’t achieve a first standardization until 1998, was a contributing factor that has affected things, at times more overtly, at times more subtly, in the intervening period.

1988 to Now

The two significant developments that occurred in the couple of years directly after 1988 that cemented the ‘worse is better’ mythos were the invention of HTML and HTTP, and as a result the WWW, and the introduction and popularity of Windows 3.x.

HTML and HTTP took the notion of ‘hyperlinked’ documents, itself an interesting and powerful idea, and implemented that idea over a simplistic protocol with only four commands, two of which, for the most part, are equivalent. The power of hyperlinking had been demonstrated by Apple a couple of years earlier with the introduction of HyperCard, and the HyperCard implementation remains far superior to the latest version of HTML, HTML5. However, the fact that HyperCard was only available on a Macintosh was problematic in terms of creating a public hyperlinked information source. Although HTML was written by amateur programmers in a far more powerful but far more complex markup language called SGML, and was never designed to be a basis for applications, the ease of obtaining HTML browsers on any platform, especially on the newly popular Windows, enabled it to become ubiquitous very quickly. The limitations of HTML also quickly became apparent as companies rushed to try to take advantage of the WWW for commercial purposes. During what was known as the ‘browser war’, Microsoft challenged the success of the most common browser, Netscape, by including its Internet Explorer browser in Windows, which meant users didn’t have to either purchase or install Netscape. In desperation, Netscape designed a platform for client/server applications over HTTP, and quickly wrote a language that could run in the Netscape browser. The language itself was about what one might expect of a language written in a matter of weeks and needing to run in an environment that couldn’t be adapted quickly enough, and it therefore had serious limitations of its own. But the fact that it was the only language available in that environment made it, too, become quickly ubiquitous. At that point, the existence of a huge installed base made any improvements difficult, and truly significant improvements virtually impossible. While Netscape was unable to continue as an independent company and was acquired by AOL, the browser technology and language were open sourced under the Mozilla Foundation and as a result were free, easily obtainable, and widely used. Microsoft’s IE became popular with business since it was pre-installed, although it earned the reputation of being used primarily to download Mozilla Firefox, and later on, Google Chrome.

The first generation of web applications became popular with business both internally and externally, since externally they could reach plenty of potential customers, and internally they were easier to maintain and deploy than client/server applications, which had in any case earned a poor reputation due to the limitations of client machines and the problems of writing complex applications in the most used languages, C and the ‘other’ C, C++. Ericsson tried to write the open telephony protocol in C++ three times, spending millions in the process, before starting from scratch with a more suitable development environment, Erlang. A final reason for their popularity was the ease with which people could be trained, compared with mainframe green-screen applications, while writing the application itself was fundamentally the same as writing a mainframe application.

Web applications had a problem, though, in that the interactivity potential of HTML and JavaScript was fairly limited, not much better than the green screens they often replaced. Although the capabilities on the server side were enhanced by technologies such as J2EE, which could perform distributed transactions for example, the client side couldn’t easily be made more flexible. Java applets were one attempted solution, but had problems with load times, reliability and security. Aside from the poor user experience, writing front ends for web applications in HTML and JavaScript was slow and inefficient. However, since nothing appeared to be immediately capable of replacing them, due to issues with the installed base, some incremental improvements did make web applications a bit more dynamic, particularly the introduction of AJAX, allowing JavaScript to make calls to the server without triggering a screen refresh, and new capabilities in HTML that allowed the screen initially to be divided into different frames, and eventually to support component-based UI elements along with the page routing mechanisms used initially. That being said, writing those UIs remains slow and somewhat painful, and suits neither the mindset nor the capabilities of professional software engineers.
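To make the AJAX pattern mentioned above concrete, here is a minimal sketch in TypeScript of a client updating one part of a page without a full refresh. It uses the modern fetch API rather than the original XMLHttpRequest, and the endpoint URL, element id, and response shape are purely illustrative assumptions, not anything from the article.

```typescript
// Minimal AJAX sketch: ask the server for fresh data and update only the
// relevant element, without reloading the page.
interface OrderStatus {
  id: string;
  status: string;
}

async function refreshOrderStatus(orderId: string): Promise<void> {
  // Call the server asynchronously; the page itself is not reloaded.
  const response = await fetch(`/api/orders/${orderId}/status`);
  if (!response.ok) {
    throw new Error(`Server returned ${response.status}`);
  }
  const data: OrderStatus = await response.json();

  // Update only the element that displays the status.
  const element = document.getElementById("order-status");
  if (element) {
    element.textContent = `Order ${data.id}: ${data.status}`;
  }
}

// Poll every few seconds, as early AJAX applications often did.
setInterval(() => {
  refreshOrderStatus("12345").catch(console.error);
}, 5000);
```

Even this small example shows both the improvement and the limitation: the page becomes more dynamic, but the state of the application is still split awkwardly between the browser and the server.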

The sudden irruption of Windows from the obscurity of versions 1.x and 2.x with the introduction of Windows 3.0, solidified by the reasonably reliable incremental 3.1 version, was the other major development, coming very shortly after the invention of the WWW. Microsoft had managed to make Windows capable, if barely, of running a port of its Macintosh-based Office package, which had graphical versions of the most common programs in use on PCs: word processing, spreadsheet, and presentation applications. Since Microsoft was less than confident that IBM would return development of OS/2 to them after version 2.0, they continued to work on the hardware abstraction and new kernel for 3.0, but made sure the Windows interface ran on it. Eventually this was released as Windows NT, and although it was a failure for nearly the first decade of its existence, it finally replaced the still MSDOS-based versions of Windows with the version known as XP.

Writing Windows applications wasn’t much better than writing web applications, although the problems were the inverse. Windows could display a decent if limited UI, and its capabilities were improved greatly in the Windows 95 version, largely by copying what IBM had done for OS/2 2.0. Unlike OS/2, though, the UI was designed in the build-with rather than the build-on paradigm. For example, a simple utility to allow UI folders to function as FTP clients had been written for OS/2, building on its System Object Model and Workplace Shell object facilities; it took a few pages of code, which compiled to a program size of around 300kb. Due to the popularity of Windows and the similar-looking interface of Windows 95, it was ported to it, or rather was rewritten for it. The Windows version required 12x the amount of code in the same language, and compiled to an executable of over 11MB.

While most of the technology that the mythos of ‘worse is better’ was based on had been created in environments that were constrained by the cost of better base technology, by the mid-1990’s the constraints that were maintaining and cementing the mythos, not simply among the public but more significantly among developers themselves, no longer arose from the cost of reasonably powerful hardware and software. They arose from the popularity of a set of base software technologies: Windows, HTML, and JavaScript, which imposed constraints because they weren’t themselves either well thought through or well executed. Developers were using text editors that, while doing little more than their predecessors, used more resources than full development environments in Smalltalk and other advanced languages. The third technology I mentioned, which helped to create a hidden, underlying issue that maintained this situation, was the ‘other’ C, C++, an attempt to add Smalltalk-like features to C without understanding where the power of Smalltalk came from, and which, to make up for its shortcomings, had everything and the kitchen sink added to it. Java, while not as bad, inherited many of its shortcomings, while simultaneously becoming the most popular language in history, to the extent that more code has been written in it than in all other programming languages combined.

Windows, admittedly, did get somewhat better, as did HTML and JavaScript, but not sufficiently so that they could compete on technical merit even today with OS/2, Smalltalk or InterLisp from 25, 35, 45 years ago.

Along with the technical shortcomings becoming entrenched in the targets that mainstream software development must aim at, the mythos of ‘worse is better’ became entrenched in a second way. A generation of developers obtained a certain, limited, but nevertheless useful success without really knowing how to write an application. Which brings me to the second article I mentioned at the beginning, “How Developers Stop Learning: Rise of the Expert Beginner”.

The phenomenon analyzed in the article is not new, nor is it unique to software development or even technology. What is unique is its prevalence in software development. This prevalence arose from the technical history narrated above, but more crucially from a side effect of this technical history. Competent professional software engineers preferred not to work with the more popular environments and technologies, leaving it as much as possible to “expert beginners” to pick up. In turn, those ‘expert beginners’ who happened to be good at ‘getting stuff to work’, even if their methodology largely consisted of copying and pasting scraps of code from wherever and fiddling with it until it “sort of” met the requirements, were promoted via the tried and true Peter Principle to their level of incompetence, designing and acting as lead developers on complete applications. The first problem is that in the most literal sense they didn’t and don’t “know what they’re doing”, i.e. while they may know how to get something to work, they don’t fully understand what the code they type (or more often, copy/paste) actually does as it cascades from text to bytecode, to assembler and finally executes. The second problem, arising from the first, is that while they may not have achieved the level of competence necessary to know what they don’t yet know, they do know that they don’t know.

This in turn has a couple of results. The first is the extraordinarily high percentage of software projects that are never completed, and the number of completed projects that are scrapped and rewritten shortly after completion. Management, not trusting their best technologists, since those technologists advised them to use technology that failed to become popular and in many cases was orphaned, makes technology decisions based on a combination of fashion show and popularity contest, while lead engineers on projects have no real idea how to accomplish the more complex aspects of an actual application of any kind, aspects made more complex than necessary by the oversimplification of the aspects they do know.

The other result is that the technologies generally available in the industry have suffered a marked decline in capability. One of the initial and continual difficulties in writing an application involves maintaining state. Since an application is, by definition, a state machine, there isn’t in reality any way to avoid this difficulty. Yet numerous technologies have been advanced and become popular precisely by attempting to avoid it, at least in one area of a full application. These include such common and popular technologies as REST-based services and micro-services. At the furthest extreme, technologies such as Electron attempt to give developers who don’t know how to write an application the ability to write something that appears to be an application, without learning how to write one, leaving the complexities to somebody else on the server side.
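As a small illustration of the point that an application is a state machine, and that ‘stateless’ technologies only move the state somewhere else rather than removing it, here is a hedged sketch in TypeScript. The order example and every name in it are hypothetical, chosen only to make the idea concrete.

```typescript
// An application's state machine made explicit: a set of states and the
// transitions the application must enforce between them.
type OrderState = "created" | "paid" | "shipped" | "delivered";

// Allowed transitions; anything else is an error the application must handle.
const transitions: Record<OrderState, OrderState[]> = {
  created: ["paid"],
  paid: ["shipped"],
  shipped: ["delivered"],
  delivered: [],
};

class Order {
  constructor(public id: string, public state: OrderState = "created") {}

  transitionTo(next: OrderState): void {
    if (!transitions[this.state].includes(next)) {
      throw new Error(`Cannot go from ${this.state} to ${next}`);
    }
    this.state = next;
  }
}

// A 'stateless' REST handler still has to load, mutate, and persist exactly
// this state on every request; the state machine hasn't gone away, it has
// merely been pushed into a database or onto the client.
const order = new Order("A-1");
order.transitionTo("paid");
order.transitionTo("shipped");
console.log(order.state); // "shipped"
```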

The combination of day-to-day frustration and seeing incompetent peers promoted ahead of them has led many of the best engineers I’ve worked with to abandon the industry completely. Others just do what’s required, saving their real abilities for personal projects. It’s often occurred to me that, while I’m a competent-to-good developer in the terms of the article, the fact that I’ve both contributed significantly to some of the largest and most complex successful projects in the industry during my career and am still in it is precisely due to the fact that I’m not at the level of many of those I know who’ve left it altogether.

I’m not going to speak much about the third article, since in terms of what it does say, it’s relatively self-explanatory. One thing I would like to note, though, is that while it mentions that newer technologies not only fail to solve the problems but actively try to avoid them, it doesn’t mention what those technologies are. I’ve given a couple of examples, such as REST, micro-services and Electron, not because some of those technologies don’t have a place, but because they attempt to occupy a place they’re incapable of filling well. The other thing to note is that it fails to mention the industry’s almost studied ignorance of technologies that exist, or have existed, that could solve many of the problems, an ignorance that continues to be fueled by the myth that ‘real’ programmers are hackers who use nothing but a command line and a text editor.

Although the ideas contained in the doctoral thesis are not bad ideas, the thing that interested me most about the thesis, particularly as a research paper, is that the author appears to have no idea that the ideas he talks about have been, and had already been, implemented in existing products. Nor does he appear to have any idea that other products that had implemented many if not most of those ideas failed to gain sufficient popularity and were abandoned by the companies, individuals or OSS groups that had written them. At least three development environments that exist today, and existed when the paper was written, implement all but one of his ideas, and at least partially implement the last one. Three other environments that did exist but were abandoned also implemented all but one of those ideas. That three were abandoned, and the existing three are only known in specific niches although they are general programming environments, speaks volumes about the continuing effect of the “worse is better” paradigm. Perhaps the worst aspect of the situation is a specific double bind that competent and better-than-competent developers find themselves in. The day-to-day experience of being a developer is of course generally not at either extreme, but somewhere in between, and this double bind ensures that the specific point in between continues to be closer to ‘worse’ than to ‘better’.

This double bind comes up in many ways. Developers that know better methods of accomplishing things, and better tools, can’t get professional work using them. Developers of such tools remain virtually unknown, while developers of tools that offer nothing new, or are even worse than their predecessors, are on the other hand lauded and promoted by the industry. But I will give a specific case of the worst form of the double bind, where via a combination of chance and necessity, developers do get to use better tools and produce a better application, but the situation still results in a serious problem.

On one large project, one crucial to the customers of the application, I was not simply involved but was one of the two main architects and team leads. Because of the necessity of including specific legacy code and integrating it together, along with the need for the result to work every time, not just most of the time, we were able to choose the best tools available for the languages that were required, since the legacy code had been written in them.

As the project got underway, a difference of opinion regarding our methodology arose between myself and another developer, one who, despite having more experience than I had at the time, was not one of the leads, since the leads were chosen by the primary customer and that customer isn’t accustomed to being questioned. The dispute came down to his frustration that, due to the tooling we were using, he couldn’t “see his files”, i.e. he couldn’t work from a command line in a plain text editor. The dispute may seem silly, given that he could look at his code without any issues within the environment, and the environment contained a perfectly usable command line. But since the environment stored the workspace and the state of all the projects as a repository in a binary format, in order to allow incremental, as-you-save compilation, he couldn’t work with any external text editor or command-line tools; he had to work with those integrated into the environment. At times the dispute broke out into rather heated argument, but fortunately, despite our difference of approach, we respected each other’s ability sufficiently to drop the argument once work was done for the day and enjoy a beer together. Although our paths, geographically, have diverged by most of a continent, we still keep in contact with one another.

He eventually came to the opinion, for that project, that I was correct that we couldn’t have accomplished it any other way. At the same time, subsequent events demonstrated that not using more popular if technically inferior tools carries its own problems. We completed the project, on time and on budget, with all the requirements met, sufficiently well that the product bankrupted its closest competitor. As is not uncommon, the company that owned the product neither wrote it nor paid for its writing. The primary customer paid; their reason for allowing code they paid for to become part of another company’s product was that it ensured their features would be maintained as the product evolved. In a sense, that has taken place, but not because the owning company has maintained their features while enhancing the core product: after the tooling we used was abandoned by the company that produced it, the product was no longer buildable. The product itself was acquired by a larger company than the original owner, and remains without serious competition in its market, but everything added to it has been added externally, communicating with the product in terms of data and command execution via the web services API we included. Neither the successor environments, nor any of the third-party tools available for those languages (which incidentally are C++ and Java, hardly niche languages), are able to build the product, because out of necessity it uses complex things such as CORBA to exchange objects between the two languages; the original development environment handled most of the under-the-hood complexity involved, but no available tools today do. As a result, I’m wary of proposing that we use one of the three remaining environments that have similar capabilities unless they’re absolutely needed, since I have no confidence that they will be around as long as the applications developed with them.

During the period when the ‘worse is better’ paradigm arose, the use of simple tools was primarily dictated by the limitations of the hardware that was affordable. The successor environment to one of the two we used on that project has now had continuous development for over 15 years, is the most used environment for that language, and requires at least 20x the memory resources, never mind disk and CPU resources, yet it remains incapable of building that product. A popular text editor whose only development feature is the ability to do syntax highlighting uses over double the resources of the entire environment we used. In a similar way to the little FTP folder program I mentioned, the original environment was written to be built on, rather than just built with, and the result today is that the resource usage of the ‘simple’ tools is orders of magnitude greater than that of the environments that in fact implement the capabilities talked about in the doctoral paper.

The three existing environments with those capabilities have one thing in common: they are all Smalltalk environments. The three environments that were abandoned, while they were for three other languages, were themselves written in Smalltalk. Smalltalk, like LISP, is designed to be built on, not merely built with. The resulting reuse of code makes it far more efficient on today’s machines than the ‘simple’ environments. The entirety of any of the three Smalltalk environments uses fewer resources than a sample frame for an Electron application, a sample that has no functionality at all other than to display a window outline.

The average day-to-day experience of developers is, of course, neither Electron nor any of the Smalltalk environments. Mainstream tools are usable, but as pointed out in the first article, they are slow, inefficient, and at times simply annoying to deal with. Those whose business is writing software themselves use barely adequate tools, and won’t use or build better tools because they don’t fit the paradigm most developers today have grown up with. If I speak about the capabilities of those environments that do have better tools, I’m rarely believed unless I actually demonstrate the tools, and when I do demonstrate how well they work, they’re still not considered seriously because they’re ‘niche’, even if that niche is a strange one, the niche where things ‘have to work’.

One of those environments is open source and free. As such, I feel freer to recommend that people at least try it out and see whether, in fact, what I’ve said is true. While it may not have a massive user base, it is well proven, both in terms of reliability and performance and in terms of capability. Kendrick, epidemiology software built on it, is being used to fight measles, something less than benign in a number of countries. The problem is not simply a matter of how much engineers enjoy their work; it can be a life-or-death matter, as it was in the case of the Toyota microcode.

The environment that anyone can use, since it is free and open source, is called Pharo Smalltalk. It is a fork of the Squeak Smalltalk that Apple open sourced in the 1990’s, with a more polished, professional interface and tooling that includes refactoring tools, code critique and rewrite tools that work as you write code rather than as a long analysis only useful on complete code bases, message flow tools that can display all existing paths to and from a particular method, project configuration and deployment tools, and tooling to connect to mainstream repositories and issue trackers such as git and Bugzilla. It can be downloaded at pharo.org, and there is a Pharo MOOC, a free online course in developing in Pharo and Smalltalk generally that takes around six weeks on average to complete.

At a minimum, learning Smalltalk and its capabilities has been demonstrated to improve the capabilities of developers in other object based languages, and Pharo provides a means to do that at no cost other than a few weeks of spare time. Beyond that, what’s possible largely depends on the willingness of the industry to acknowledge the issues and the existence of potential solutions. Since in large part we as developers determine what the industry as a whole is like, the only way the industry can become willing is by developers themselves acknowledging the problems, their own part in them, and being open to the fact that they may not already know the solution, but that at least a partial one does exist, and it can be learned.

Pharo is not on its own a silver bullet; there isn’t one. There are capabilities it still lacks, and others that are still somewhat inconvenient, but it is a significant improvement over other OSS environments, and over most proprietary ones, and since the only proprietary environments that do have significant advantages, other than in niche situations, are also Smalltalk environments, whatever one learns in Pharo is valid in all of the best environments available at this moment.

Improving on it requires building tools in a way that they can be built on and improved, and building applications with those tools that can themselves be built on and improved. Rather than sitting down at the beginning of every project with the same or similar tools that accomplish the same or similar things, and proceeding from essentially the same starting point each time, the aim is to build in such a way that once something is done right, it can be reused rather than rewritten when a similar requirement with minor differences comes along, as it always does. If we expect that, rather than simply wishing for it, and make decisions based on whether proposed solutions can be built on further, we can solve the problems.
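To make the contrast concrete, here is a small, entirely hypothetical sketch in TypeScript; none of it comes from the projects described above. The first function is ‘building-with’: a one-off exporter that has to be copied and edited when a slightly different requirement arrives. The class hierarchy after it is ‘building-on’: the common behaviour is designed to be extended, so the new requirement only adds the difference.

```typescript
// 'Building-with': a one-off CSV exporter. The next, slightly different
// requirement (tabs instead of commas) means copying and editing it.
function exportCsvReport(rows: string[][]): string {
  return rows.map((row) => row.join(",")).join("\n");
}

// 'Building-on': the shared behaviour is written once, to be extended.
abstract class ReportExporter {
  render(rows: string[][]): string {
    return rows.map((row) => this.formatRow(row)).join("\n");
  }
  protected abstract formatRow(row: string[]): string;
}

class CsvExporter extends ReportExporter {
  protected formatRow(row: string[]): string {
    return row.join(",");
  }
}

class TsvExporter extends ReportExporter {
  // Only the difference is written; the rest is built on, not rewritten.
  protected formatRow(row: string[]): string {
    return row.join("\t");
  }
}

console.log(exportCsvReport([["a", "b"], ["c", "d"]]));
console.log(new CsvExporter().render([["a", "b"], ["c", "d"]]));
console.log(new TsvExporter().render([["a", "b"], ["c", "d"]]));
```

The point is not the particular class design, but the habit it represents: each new requirement starts from something that already works, rather than from a blank text file.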

Pharo is a means of experiencing building-on rather than merely building-with, and until we’ve experienced what that simple difference accomplishes, better tools, not to mention a more competent industry, will remain as far away for most developers as they are now. That Pharo is as capable as it is, considering the relatively small number of developers who contribute to it and the relatively small number who use it compared to a mainstream, heavily invested-in environment such as Eclipse (to take one example, and not the worst example by any means), is in itself a demonstration of how much capability can arise from building-on what we build, rather than just building-with it.


Written by Andrew Glynn

A thinker / developer / soccer fan. Wanted to be Aristotle when I grew up. With a PhD. (Doctor of Philosophy) in Philosophy, could be a meta-physician.
