Code Quality for Safety and Code Quality for Security


Some computer security experts put the majority of extant vulnerabilities down to poor code quality; for example, Martyn Thomas in his keynote at the 2016 IET System Safety and Computer Security conference in London. This was evidently the case in the late 1990's, when some 80% of the newly-formed US CERT's publicly-announced Internet-transmitted vulnerabilities were buffer-overflow vulnerabilities. Buffer-overflow vulnerabilities are completely eliminated by checking the length of all input data against the intended length, something accomplished automatically by compilers for strongly-typed languages, for example, but needing to be coded explicitly in languages such as C.
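As a minimal sketch of the contrast (hypothetical C, with invented names such as handle_unchecked and BUF_LEN, not taken from any reported vulnerability): the first routine copies its input without checking the length and so can overrun its buffer; the second checks the input length against the intended length first.

    #include <stdio.h>
    #include <string.h>

    #define BUF_LEN 64  /* intended maximum length of the input field */

    /* Vulnerable pattern: copies input without checking its length. */
    void handle_unchecked(const char *input)
    {
        char buf[BUF_LEN];
        strcpy(buf, input);             /* overruns buf if input is longer than 63 characters */
        printf("got: %s\n", buf);
    }

    /* Defensive pattern: check the input length against the intended length. */
    int handle_checked(const char *input)
    {
        char buf[BUF_LEN];
        if (strlen(input) >= BUF_LEN)   /* reject over-length input explicitly */
            return -1;
        strcpy(buf, input);             /* safe: length already verified */
        printf("got: %s\n", buf);
        return 0;
    }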

Things had changed somewhat round about the turn of the millennium. The enormous world-wide effort put into detecting and eliminating or mitigating Y2K vulnerabilities, with plausible cost estimates ranging from $200bn to $900bn, surely had something to do with it.

Contrary to some superficial opinion, that effort on Y2K was not wasted. People actively involved in Y2K prophylaxis in critical industries have plenty of examples of things that would have gone wildly, sometimes dangerously, wrong had the vulnerabilities not been discovered and fixed. But, although that effort was demonstrably necessary, I know of no one who can make any kind of case for whether the effort was value for money or not. What would have been value for money was for computer programmers not to have used an inappropriate data type for Gregorian-calendar years in the first place.
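A hypothetical sketch of that underlying defect (invented for illustration, not taken from any particular system): store the year in two digits and compute an interval by naive subtraction, and the answer goes wildly wrong once the calendar rolls over.

    #include <stdio.h>

    /* Hypothetical legacy record storing the Gregorian year in two digits,
       so 1999 is stored as 99 and 2000 as 0. */
    struct record { int yy; };

    /* Years elapsed since the record was opened, by naive subtraction. */
    int years_since(struct record opened, int current_yy)
    {
        return current_yy - opened.yy;         /* 0 - 99 = -99 after the rollover */
    }

    int main(void)
    {
        struct record acct = { 99 };           /* opened in 1999 */
        printf("%d\n", years_since(acct, 0));  /* in 2000: prints -99, not 1 */
        return 0;
    }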

One influential mid-90's attitude to code quality can be seen in a 1995 interview with Microsoft's Bill Gates. It is an industry commonplace that Microsoft realised that the lacklustre dependability of its software was not a long-run advantage, and started offering such products as the .NET PDE in the early 2000's (there is a specific citation of Gates from round about that time saying that code quality was to be the priority at Microsoft, but I can't seem to find it).

Code quality is evidently an issue in safety-critical applications. It is addressed, for example, in almost 60 documentation requirements in IEC 61508-3, the part of the international generic functional-safety standard that governs software (SW).

I mentioned yesterday that various engineers, including some active in standardisation, seem to want to equate safety requirements on software, SILs (safety integrity levels), with security requirements on software, SLs (security levels). I pointed out that conceptually this cannot work. But why would one want to do it? The story might go somewhat as follows.

A SIL 1 safety function implemented in SW is allowed to fail more often than a SIL 2 safety function, which in turn is allowed to fail more often than a SIL 3 safety function, and so on. Code quality concerns the absence of failures, so it is tempting to read a SIL as a requirement on code quality, one that becomes more stringent as the SIL goes up.

An SL is a requirement on the resistance of a piece of code to subversion of its function. The higher the SL, the more resistant the code must be. If resistance to subversion indeed has something to do with code quality, as the observations above indicate, then a higher SL is also correlated with higher code quality. Voilà! On this reasoning we are talking about code quality with both requirements, so they can be correlated.

But not so fast. Consider an example.

Procedure P passes data over a network to critical software S. P always produces data in a specific format and of a specific length; let us assume that it has even been proven through static analysis to do so. S relies on that data, but say it does not recheck the data format when it receives input from P. After all, the system designers know for sure what that format is, in each and every possible instance. Let us further assume that S has been proven through static analysis to fulfil its function in every case, on the precondition that it receives its input from P.
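A minimal sketch of how S's input handling might look in C (the names s_process, P_MSG_LEN, field, msg and msg_len are invented for illustration): S copies the received message into a buffer sized for P's guaranteed format without rechecking the length, because the precondition that the input comes from P appears to make the check redundant.

    #include <string.h>

    #define P_MSG_LEN 32   /* the fixed length P is (assumed) proven always to send */

    /* S's receive routine: relies on the precondition that the message comes
       from P and therefore has exactly the agreed format and length, so the
       length is not rechecked on receipt. */
    void s_process(const char *msg, size_t msg_len)
    {
        char field[P_MSG_LEN];
        memcpy(field, msg, msg_len);   /* fine whenever msg really comes from P;
                                          overruns field for any longer message */
        /* ... act on the contents of field ... */
    }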

From the point of view of safety, the code quality is perfect. P always does what it should, and S, receiving input from P, then always does what it should.

But from the point of view of security, the code quality can be very poor. If the function of S can be subverted through a buffer-overflow attack, anyone who penetrates the network connecting P and S can mount a man-in-the-middle (MITM) attack exploiting the buffer overflow.
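For contrast, a sketch of what a security-minded variant of S might do (again with invented names): recheck the received length against the intended length before using it, the same kind of explicit check mentioned at the start of this post.

    #include <string.h>

    #define P_MSG_LEN 32   /* the fixed length P is (assumed) proven always to send */

    /* Security-minded variant: reverify the received length against the intended
       length before use, so that a message injected by a man-in-the-middle
       cannot overrun the buffer. */
    int s_process_checked(const char *msg, size_t msg_len)
    {
        char field[P_MSG_LEN];
        if (msg_len != P_MSG_LEN)      /* recheck: reject anything not matching P's format */
            return -1;
        memcpy(field, msg, msg_len);
        /* ... act on the contents of field ... */
        return 0;
    }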

From the point of view of safety, the code as originally written is perfect. From the point of view of security, its quality is poor. Code quality is not a one-dimensional property. It has a focus. There is code quality for safety properties and code quality for security properties, and this example shows they may not be the same.

