Reasons … and Reasons: How the Software Industry Turns its Issues into Subscriptions

Andrew Glynn
12 min read · Oct 28, 2017
Meanwhile, still building …

A while ago I was working on a new project, something being written from scratch (well, at least from the usual starting point of a given set of tools and technologies, largely languages and frameworks). While most of the specific technologies and tools were perhaps not ideal from my technical perspective, they weren’t particularly controversial. In fact, it’s largely because the technically more ideal choices would be more controversial, bringing with them worries about maintainability and the like, that I’ve learned arguing against them is more hassle than it’s worth: you’ll rarely win, and if anything doesn’t work out perfectly you’ll be blamed for it, even though things would likely have gone worse had the mainstream approach been taken.

However, there was one key technology being used that’s very new, rather unstable, and, overall, hard to describe as anything other than perverse. That I did argue against, and the person primarily responsible for implementation could only come up with “because we need to be doing R&D into new things”. Although that’s often true, R&D isn’t equivalent to ‘basing a key component of your main product on something unproven, with obvious and significant shortcomings’. As a result, I had a discussion with the person who is ultimately responsible, though they may let the decision be made by the primary implementer if they have no strong opinion on the matter. The rationale they gave had more validity, as far as it went, and wasn’t dissimilar to the rationale that usually dictates a more mainstream choice. But it also wasn’t as relevant, nor did it dictate this particular technology, and so although I nominally accepted it, I was left dissatisfied by the situation.

Part of the reason for that feeling is that the rationale itself, whatever it’s applied to in software development, belongs to a set of presumptions and heuristics that, although partially valid, are major contributors to the fact that we do start every project at the usual point; in turn, that’s a key reason the industry still does things largely the way it did 58 years ago, something that would be unthinkable in most other industries. The other part of the reason is that the perversity of key aspects of the choice made it, from my perspective, a Very Bad Idea®, no matter the rationale given.

Describing what I mean by perversity in this case, without going into too many specifics, unfortunately requires a bit of background on development in general. Most applications today are at least “double VM’d”, i.e. they run in a virtual machine that itself runs in a virtual machine. The initial virtual machine is known as the operating system: Windows, Mac, Linux etc. The second is part of the language/environment, whether Java, JavaScript, .NET, or more esoteric languages like Python, LISP, or Smalltalk. In many situations they’re deployed via a third VM, which is what most people think of first when they hear the term virtualization, and which may be either full (such as VMware) or lightweight (Solaris containers, Docker); since that’s primarily a deployment rather than a development issue, I’m not going to be too concerned with it.

Although it may seem inefficient to write software that way, it does provide some key advantages, and with fourth- and fifth-generation VMs, such as Google’s V8 and Pharo, VM’d languages can be as fast as, and in some cases faster than, non-VM’d languages (which means, on most platforms, largely C/C++ outside .NET). A well-known example in the OSS community, particularly among Linux-based developers but becoming better known to developers who primarily target Windows, is that Python can use C libraries, such as those needed for OS integration on Linux or Windows, with better performance than C++ can, and in more than a few cases better than C programs. While in many ways that is, and should be, an embarrassment to the authors of C/C++ linkers and other tooling, the fact that a VM’d language is often nearly as fast, while providing things like automated memory management and application garbage collection, holds sufficiently often that the minor loss in performance is more than made up for by the increase in productivity.
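As a small illustration of the mechanism being referred to (a generic sketch only, calling standard libc via ctypes on a Unix-like system, not anything specific to the project or claims above), Python can load and call into a shared C library directly:

    # A minimal sketch of Python calling a C library via ctypes.
    # Assumes a Unix-like system where find_library("c") resolves libc;
    # this illustrates the general mechanism, not the OS-integration code
    # discussed in the article.
    import ctypes
    import ctypes.util

    libc_path = ctypes.util.find_library("c")   # e.g. "libc.so.6" on Linux
    libc = ctypes.CDLL(libc_path)

    # Declare the C signature so ctypes marshals arguments correctly.
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"hello from libc"))      # prints 15

In practice, performance-sensitive bindings tend to go through cffi or hand-written extension modules rather than ctypes, but the principle of reusing existing C libraries from a VM’d language is the same.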

Developers, though, for the most part simply use the VM that comes with the environment as is, the main exceptions being LISP and Smalltalk, where there really isn’t an ‘as is’ in the same sense, because any code written intrinsically becomes part of the VM. Even in those cases, only core language developers work on the messier aspects of building the VM, primarily those to do with linking to the OS libraries needed for file system access and other OS services. Since the same source is largely used cross-platform, while the libraries are different (and, on the Mac, in a different language), it requires some complex and arcane tools with names like Clang and Slang to accomplish that linking.

The perversity I’m speaking of is the combination of having to do that linking oneself for every application that uses the technology with the reality that more than a few people who work in that language have trouble maintaining application state in a single-threaded environment. That isn’t all that difficult once you’ve done it, but in many cases it isn’t part of the work usually performed by code in that language, and therefore not part of the work done by developers who focus on it. Maintaining state can get more complex, for instance in multithreaded lock-free applications, and more complex still if they’re also distributed, but since the language doesn’t support those, that’s not a realistic issue unless you’re a more polyglot developer.
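Since the article deliberately doesn’t name the language, here is a minimal sketch in Python’s asyncio, purely to illustrate what maintaining application state in a single-threaded environment generally looks like: all mutable state lives on one event loop and is only touched from code scheduled on that loop, so no locking is needed.

    # A minimal sketch, using Python's asyncio, of holding application state
    # on a single-threaded event loop. Illustrative only; the language in the
    # article is never named and may differ in detail.
    import asyncio

    class Counter:
        """Mutated only from the event-loop thread, so no locks are required."""
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1

    async def worker(counter, iterations):
        for _ in range(iterations):
            counter.increment()      # safe: coroutines interleave, never overlap
            await asyncio.sleep(0)   # yield control back to the event loop

    async def main():
        counter = Counter()
        await asyncio.gather(*(worker(counter, 1000) for _ in range(5)))
        print(counter.value)         # deterministically 5000

    asyncio.run(main())

The discipline is simply that nothing outside the loop’s own callbacks and coroutines ever mutates the shared objects; the difficulty the author points to arises when developers are used to code that never has to hold such state at all.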

Of course, there are scripts and tools that, at least theoretically, should work most of the time, at least on the platform the code was developed on. Even that is by no means certain at the moment, given how immature the technology is. It does in fact have issues on faster multicore machines, particularly those that also run multiple threads per core, even though it isn’t itself threaded (the problem appears to be that the VM itself misbehaves when it is switched from one CPU thread to another, something that in its most common use case is taken care of by other code). Since core aspects of the technology also don’t work cross-platform due to the difficulties involved, the likelihood of an average developer in that language accomplishing something the core technology developers themselves haven’t properly figured out looks rather low, at least in the short term.

The other inherent limitations only add to this: the technology is single-threaded to begin with; its most common VM is also the origin of most of the bugs involving message-oriented middleware, another key part of the project’s technology stack; and the core language has general performance, reliability and scalability issues, since it simply wasn’t designed with those things in mind. Given all that, the idea of the technology in general, never mind using it for production code at the moment, can’t help but seem perverse.

That said, at the end of the day, while that decision may be one that will be regretted, the decisions that a similar rationale more reasonably dictates in more usual cases are the more problematic ones, precisely because they’re not the ones people regret, and so people keep making them based on the same assumptions and heuristics. This one, though, also demonstrates one of the key reasons those decisions are problematic, and the way that problem contributes to the industry’s overall problematic.

In the event anybody is unaware that the industry has issues, a previous article, Building-with versus Building-on, links to a number of pieces on them. In short, though: from Toyota borrowing Audi’s uncontrolled acceleration 20 years later, to a 70+% failure rate in software projects overall, not to mention an average career span shorter than that of professional soccer players, there are some major issues. Making decisions via problematic presumptions and heuristics is precisely what turns issues into subscriptions.

People have come to expect technology to improve, and at a high pace. Computer technology is odd in that hardware development demonstrates the extreme of that rapid pace, while software development simultaneously stands as its biggest counterexample. Software is not built to be developed further, and development begins each time at essentially the same point. Were software designed to be replaced, that might be justifiable, but it isn’t, and that is a problem only a few of the more aware people in the industry have even begun to think about.

The biggest reason for it, overall, is the urge to not ‘get left behind’, which is somewhat ironic for a pattern of behavior that prevents the industry from ever getting ahead. In this sense, the reason given by the implementer is the truer one, and it is also the reason for the non-ideal nature of the more usual choices, whatever the ostensible reasons.

Software, not only the user software that is the product, so to speak, but also the software comprising the tools with which a developer writes that user software, is most often chosen by people without all that much technical knowledge. While the computer hardware industry, like most others though at a higher pace, builds on a core set of technologies that themselves change more gradually, software development tends to jump to new core technologies more quickly. This has three intertwined effects: core technologies, to gain the kind of market advantage that in other industries is only relevant to end products, are often released long before they’re mature, or even decently stable; libraries, frameworks and applications built with them must change regularly as the core technology changes; and the core technology does change regularly and significantly, because it isn’t really finished, never mind mature, on initial release. With no stable core technology, the ability to build-on previous work, precisely what would enable rapid improvement in the end product, is almost completely lacking.

While the technology in this case simply isn’t ready, and beyond that isn’t all that sensible even as a concept, the situation doesn’t affect only new languages or environments. It prevents building-on previous work even in the seemingly stable (at least relatively stable) languages and environments, since each version carries enough substantial differences, made to ‘keep up’ with competing choices, to make ‘getting ahead’ by building-on impossible.

Developers of libraries, frameworks, tooling and the things built with them spend more time updating existing code, to keep it compatible with the latest changes in the core language and environment, than on making it better. Although in each ecosystem these things do improve, they do so only gradually. By the time they approach maturity, many developers and companies have dropped the entire platform, either because it never became sufficiently stable or sufficiently popular or, more often, because it is now viewed as out of date.

In summary, then, the following is the basic problem being considered:

The expectation that “newer is better” prevents newer from ‘being’ better in any significant way.

In plenty of ways, this isn’t really surprising. Software is disruptive, as most other industries have discovered by now, so it’s not a huge stretch to think it would disrupt itself, to the degree of never stabilizing sufficiently to build on much of what has been accomplished. That it has reached the point of barely advancing at all, though, seems too contradictory to the essence of technology itself. By and large, developers have done their work the same way since the release of the first OS (the first VM, in the sense above) in 1959.

“And tonight I’m gonna code like it’s 1959, do dooo, dooo doo do”.

Another project I’ve been working on demonstrates this quite well, and simultaneously demonstrates the effectiveness, at least technically, of not using this rationale. Since this project is intended to be open sourced, while it is affected by the rationale, it isn’t affected to the same degree. Developers write open source largely because they enjoy it. It doesn’t directly put any money in their wallets, and although it may contribute to their earnings indirectly (by being an opportunity to learn other technologies and techniques, and by demonstrating the developer’s ability), similar unpaid work would accomplish the same in many fields, yet few other fields have many open source products.

In terms of how the rationale does affect open source software, developers who write it and people who organize projects involving more than one developer both want the product to be used; otherwise they wouldn’t plan to make it available in the first place. In large part this means working in mainstream environments (though environments such as Python are more ‘mainstream’ in OSS than they are in proprietary or custom software). However, individual developers may decide to support, or additionally support, less mainstream environments, simply because they prefer them.

On this project, and with what I was particularly doing, both ended up being the case. While the main project targeted a mainstream environment, the ease of simultaneously implementing it in an environment I prefer prompted me to do both. That aspect of the project utilizes relatively simple machine learning, but partly due to its simplicity it’s also relatively effective. In the mainstream environment, however, it’s more difficult to implement. It took me over three months to accomplish the first version, while the secondary implementation took less than 24 hours of development time. Of course part of that comes from having already done it in one environment, but considering the difference in the sheer amount of application code and the configuration of required middleware, probably less than a third of the total difference arose from the sequence in which they were done.

More cogently, the capabilities of functionally the same software are vastly amplified by the existing capabilities of the second environment relative to the mainstream one. The reason for both is that in the niche environment I could build on an analysis toolkit, based on a visualization toolkit and a developer tooling UI kit, themselves based on a UI framework that was itself based on experience with that style of UI framework in general and the problems it most often has. The total time involved in developing the preconditions for my ~22-hour implementation amounts to 46 years.

To bring the capabilities of the more mainstream implementation to the level of the niche one, I would have to write an equivalent to at least the analysis toolkit, the visualization toolkit, and the developer tooling UI kit, as well as modify the core UI framework and certain aspects of the base language itself. Those aspects of the secondary environment account for ~24 of the 46-year total.

The key reasons those things are not available in that or any other mainstream environment are that environments become mainstream only if their core technologies, languages and frameworks are implemented too quickly to be stable prior to release, and that, as a result, they have to be constantly changed to gain stability and/or to add ‘in-vogue’ features. Thus nothing in the environment is stable enough for additions that take a decade or longer to complete to even be attempted in most cases, and certainly not stable enough for any to be completed well.

Which brings me to:

Implementing the equivalent functionality in a mainstream environment won’t happen anytime soon, if ever, and it won’t be me that does it.

Since I’m by no means alone in not wanting to spend 20+ years of my life on something when, in all likelihood, the environment will no longer be ‘fashionable’ by the time it’s done, mainstream environments will never have those capabilities unless the rationale on which decisions are generally based changes radically.

To that end, developers have to stop calling themselves engineers unless they start to act like engineers, and I mean beyond being the second drunkest bunch of students at most universities (after medical students). An engineer is defined by the ability to make things work. Using things that by and large don’t makes calling oneself an engineer a farce at best. Acting like engineers also means limiting disruptions to one’s own work to those that offer a significant advantage, which eliminates 99% of ‘new’ languages and tools. By all means we should test them, do prototypes, see how they work, or at the very least whether they work. But even doing a prototype in many of the recent tools is next to impossible, because they don’t work well enough to be used even for that.

The obvious difficulty is that developers often aren’t the ones making the decisions.

It’s hardly an insurmountable difficulty, though. It simply requires that we use the disruptive nature of the industry to our own benefit. By that I mean creating the inevitable disruption that would occur if we started acting like engineers and refused to use sh*t that itself doesn’t work, no matter how new or “fashionable” it may be.

Management may feel it has the right to make decisions, despite having no expertise. Management can feel how it wants. Without developers willing to follow them, though, those decisions won’t mean all that much.

--

Andrew Glynn

A thinker / developer / soccer fan. Wanted to be Aristotle when I grew up. With a PhD. (Doctor of Philosophy) in Philosophy, could be a meta-physician.