Outsourced Engineering-Software Development


[2019-07-02. Modified to add a working link to the Ladkin-Simons paper on Static Deadlock Analysis.]

Bloomberg has an interesting article by Peter Robison on the difficulties Boeing seems to have been having with outsourced software development. Such outsourcing has been going on for decades in all sorts of software-dependent companies, and is a well-developed model. Sometimes it works; sometimes there are difficulties (I was particularly struck by the anecdote in Robison’s article about sending something back 18 times until the contracted developers finally understood a requirement).

Thirty years ago, I was technical lead, contracted to a small software-analytical company, on a commercial research project to devise mappings from SA/RT designs to processors, essentially the serialisation of a collection of communicating modules, some of which were state machines (“control”) and others of which transformed data. The client was a then-major telecommunications-system manufacturer. As part of this, we were given some sample “specifications”, which up to that point had been written by hand; the client thought, correctly in my view, that much of that work could be automated. Our project was part of that. (It actually became technically rather more complicated. After the project ended, Barbara Simons and I addressed some theoretical problems which had arisen, using the notion of message-flow graphs, and discovered a combinatorial-mathematical condition for when message-passing deadlocks can arise amongst multiple processes conducting handshakes, published in the 1992 ACM Supercomputing conference. Some of the material is also to be found in Chapter 5 of the book Responsive Computer Systems, entitled Static Deadlock Analysis for CSP-Type Communications, available on the ResearchGate site.)
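(For readers who want the general flavour of that kind of static analysis, here is a minimal sketch, assuming nothing beyond the familiar wait-for-graph idea: model each blocked handshake as an edge “P is waiting on Q” and look for a cycle of waits. This is not the combinatorial condition from the Ladkin-Simons paper; the process names and waits below are invented purely for illustration.)

```python
# Minimal sketch: flag a potential message-passing deadlock by finding a cycle
# in a "wait-for" graph.  An edge P -> Q means "process P is blocked waiting to
# complete a handshake with process Q".  This is the standard cycle check, NOT
# the condition from the Ladkin/Simons paper; the example processes are invented.

def has_deadlock(wait_for: dict[str, set[str]]) -> bool:
    """Return True if the wait-for graph contains a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    colour = {p: WHITE for p in wait_for}

    def visit(p: str) -> bool:
        colour[p] = GREY
        for q in wait_for.get(p, ()):
            c = colour.get(q, WHITE)
            if c == GREY:                 # back edge: a cycle of mutual waits
                return True
            if c == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and visit(p) for p in list(wait_for))


# Three processes each waiting to handshake with the next: a classic cycle.
assert has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}})
# A chain of waits with no cycle is not flagged.
assert not has_deadlock({"P1": {"P2"}, "P2": set()})
```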

The “specifications” were in fact extremely detailed pseudocode, including all the usual coding-standards boilerplate (about 90% of it was such boilerplate). That was then packed off to contract programmers to “code”. It must have looked to many software people like code written by people who were just too lazy to use the correct high-level-programming-language (HLPL) syntax. I suspect with hindsight, though, that the absence of any scoping or declarations for the pseudocode quantities (which would have been variables in an HLPL) could have been a stumbling block at implementation – there was just nothing in the pseudocode to indicate scoping, information-hiding, modularisation and so on. It could all have been implemented using just global variables. Nowadays, the better specification languages are much more sensitive to the scoping aspects of programming than they were then.
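(To make the point concrete, here is an invented illustration, not taken from the client’s pseudocode: the same fragment of behaviour written the way the pseudocode implicitly permitted, with everything global, and one way a programmer would have had to invent the missing structure.)

```python
# Illustrative only: the names and the "channel controller" are invented.

# What the pseudocode implicitly permitted: every quantity effectively global,
# readable and writable from anywhere in tens of thousands of lines.
call_count = 0
channel_state = "IDLE"

def handle_setup_request():
    global call_count, channel_state
    call_count += 1
    channel_state = "SETTING_UP"


# The structure the implementer had to supply: state encapsulated behind a
# module/class boundary, with only the intended operations exposed.
class ChannelController:
    def __init__(self) -> None:
        self._call_count = 0      # hidden: no other module touches these directly
        self._state = "IDLE"

    def handle_setup_request(self) -> None:
        self._call_count += 1
        self._state = "SETTING_UP"

    @property
    def state(self) -> str:
        return self._state
```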

I don’t know what the client’s assessment regime was. We didn’t talk at all about testing and test design. Neither did we deal with any coding standards additional to what was implicit in the pseudocode boilerplate. Both those aspects would have been crucial for any reliability properties of the resulting code.

And, of course, this was not safety-critical code in the traditional sense, although it was mission-critical in that, if it wasn’t reliable, the company wouldn’t get to sell its private mobile radio communication systems, which at that point were highly rated for reliability (I don’t know whether they still are) and widely used by fire/police/ambulance emergency services. So the code/system was safety-related in the indirect sense that some emergency systems would have been reliant on it (recall the 1992 London Ambulance Service dispatching-software fiasco for an example of nominally non-safety-related code which in fact was related to safety: see the papers by Adamu et al. and by Dalcher).

Whether this model works, then, still depends to a large extent on the capability of the programmers, if you call translation of the pseudocode into an HLPL (and adding decent scoping rules and devising testing regimes) “programming”. While the programmers I knew/know would consider such a task routine and boring, I can well imagine that neophytes previously exposed only to university-coursework programming tasks could have a real struggle with the “missing bits”, the scoping issues, in trying to implement these tens of thousands of lines of pseudocode. In short, there is an unavoidable management issue.

Which is not by any means to say that all outsourced programming has this nature. A colleague of mine, with whom I have been corresponding for a couple of decades, works for this company, which also builds critical software for aerospace; his book How To Engineer Software, of which I have read parts and which is due out in October, is well worth consulting.

In contrast, the Bloomberg article shows by example how much real-world software development still takes place without effective use of controls such as having a functional specification and demonstrating that your design fulfils the functional specification (refinement), that your pseudocode refines the design, and that your HLPL code refines the pseudocode.
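(For those unfamiliar with the refinement idea, here is a minimal sketch, with invented names: the functional specification is written as an executable predicate and a candidate implementation is checked against it. This only gives the flavour; a real refinement argument covers the whole input space, by proof or a verification tool, not by sampling.)

```python
# Sketch of "implementation refines specification" with invented names.

import random

def spec_is_urgent(age_seconds: int, severity: int) -> bool:
    """Functional-spec fragment (invented): a call is urgent iff its severity
    is at least 8, or it has been waiting more than 300 seconds."""
    return severity >= 8 or age_seconds > 300

def impl_is_urgent(age_seconds: int, severity: int) -> bool:
    """Candidate implementation produced from the design/pseudocode."""
    if severity >= 8:
        return True
    return age_seconds > 300

def impl_matches_spec(trials: int = 10_000) -> bool:
    """Check the implementation against the spec on sampled inputs."""
    for _ in range(trials):
        age = random.randint(0, 1_000)
        sev = random.randint(0, 10)
        if impl_is_urgent(age, sev) != spec_is_urgent(age, sev):
            return False
    return True

assert impl_matches_spec()
```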

In a couple of years there will be an international standard, IEC TS 61508-3-2, stating what assessment of this kind is required in safety-related SW development. The function of such standards is not that everybody suddenly starts conforming as if they were law, because things don’t quite happen like that. But when things go wrong and companies end up in legal arbitration for eight- and nine-figure sums of money, lawyers use such standards documents to evaluate the liability (of the “other side”) in retrospect. “Here it says you gotta do this and you didn’t” is their second-strongest line of argument in general. (The strongest, “the client gave you a contract to produce X to this specification S and you didn’t”, is somewhat mitigated when the argument is about complex SW because, as everyone knows and the client is presumed to be informed enough to know, “all software has bugs”, and, as developers are well aware and a competent lawyer will make an arbitration tribunal aware, specification S often has bugs and lacunae also.)

I found it odd that the Robison article describes Peter Lemme as having been involved in the Boeing 767 “automated flight controls”. The flight controls on the Boeing 767 are of course mechanical. I wonder which bit of them counts as “automated”?

