Coda, Interdisciplinary Work, and Scientific Publishing

It sounds like a mish-mash, doesn’t it? It will probably read like a mish-mash, too.

Because true interdisciplinary work always looks that way, I think. That is one of the main points I wish to get across. But first, let me get there.

Concerning my last post, Leslie noted that the condition he labels “FAA requirement” in his slide 4, for a 10⁻¹⁰ probability of failure per hour, was actually a NASA requirement for the SIFT research. SIFT was the first digital flight control computer, and SRI was supposed formally to verify its operating system. The project didn’t succeed in this original goal over a decade but, as is often the case, we computer people learned far more, and more fruitfully, from this failure than we ever would have had it “succeeded”. For example, I am not aware of any formal proof that such-and-such a non-trivial system S is guaranteed free of Byzantine failures, for any system S that is not artificially constructed just for the proof. And that’s thirty years after the papers were published!

Conclusions: Lamport and company put their fingers on some things that we just can’t do. Not only that, but they classified a cross-disciplinary problem in a new way. Byzantine failures, as spoken of by Driscoll et al., are a system problem, a mixture of phenomena which have to do with the electronic design, as well as the materials, of which system components are made. Transistors get cracks in them and turn into capacitors (a Space Shuttle Byzantine agreement problem). But Lamport et al. turned their efforts to a pure algorithmic problem and published in pure computer-science journals (indeed, the best). Leslie is not a computer scientist who deals with avionics; he is a computer scientist who deals with computer science.
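To ground the algorithmic side a little: Lamport, Shostak and Pease showed that agreement with oral messages requires more than 3f participants to tolerate f Byzantine faults. Here is a minimal sketch, my own toy rendering and not their full recursive OM(m) algorithm, of the one-round majority vote for four generals and a single lying lieutenant (all function and variable names are mine):

```python
# Toy illustration of oral-messages Byzantine agreement, n = 4, one
# traitorous lieutenant (the loyal commander is node 0). Not the full
# OM(m) recursion from Lamport, Shostak and Pease, 1982.

from collections import Counter

def majority(values):
    """Return the most common value; ties resolved arbitrarily."""
    return Counter(values).most_common(1)[0][0]

def om1(commander_value, traitor, n=4):
    """One relay round among n-1 lieutenants. The traitor flips every
    value it relays. Returns the decisions of the loyal lieutenants."""
    lieutenants = range(1, n)
    # Step 1: the (loyal) commander sends its value to every lieutenant.
    received = {i: commander_value for i in lieutenants}
    # Step 2: each lieutenant relays what it received to the others.
    decisions = {}
    for i in lieutenants:
        if i == traitor:
            continue  # we only care what the loyal lieutenants decide
        relayed = []
        for j in lieutenants:
            if j == i:
                continue
            v = received[j]
            if j == traitor:
                v = 1 - v  # the traitor lies when relaying
            relayed.append(v)
        # Step 3: decide by majority over own value plus relayed values.
        decisions[i] = majority([received[i]] + relayed)
    return decisions

# With n = 4 and one traitor (n > 3f), the loyal lieutenants still
# agree on the commander's value despite the lies:
print(om1(commander_value=1, traitor=3))
```

With only three generals and one traitor the same vote can be swung, which is exactly the impossibility half of their result.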

But right on the boundaries also. One of the most insightful (and to my mind, best) pieces of work he ever did was on the collection of issues about arbitration in converting continuous (“analog”) data into discrete (“digital”) data: Buridan’s Principle, whose purely technical contribution rests on a mathematical theorem he proved with Dick Palais, his thesis advisor. You can read Leslie’s account of the odd results of his attempts to publish it. He gave it to me sometime in the 1980s. But since the 1990s, everyone can know about it and read it at will, because he put it on the WWW. Thank heavens for the WWW!
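Buridan’s Principle says, roughly, that a discrete decision based on an input with a continuous range of values cannot be made within a bounded length of time. A caricature of why, in my own illustrative model rather than Lamport’s theorem: treat the arbiter as a bistable element that amplifies its deviation from the balance point at each clock step. The closer the input sits to the boundary, the longer resolution takes, without bound:

```python
# Illustrative (not Lamport's) model of arbiter metastability: a
# bistable element multiplies its deviation from the balance point by
# a fixed gain each step, and has "decided" once the deviation is large.

def decision_steps(x0, gain=2.0):
    """Steps needed to resolve an initial deviation x0 from balance.
    Grows like -log|x0|; for x0 == 0 this loop never terminates,
    which is Buridan's Principle in caricature."""
    x = abs(x0)
    steps = 0
    while x < 1.0:
        x *= gain
        steps += 1
    return steps

# Decision time diverges as the input approaches the boundary:
for x0 in [0.1, 0.01, 1e-6, 1e-12]:
    print(x0, decision_steps(x0))
```

No finite deadline suffices for all inputs, which is exactly the problem faced by hardware that must turn a continuous voltage into a clean 0 or 1 before the next clock edge.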

And that is a point about interdisciplinary work with which I have been struggling now for almost twenty years. One writes a paper on the causal analysis of a computer-related aircraft accident using the Lewis semantics (the Counterfactual Test). One sends it to a computer science journal. Review: “that’s got aeronautics in it, no one in computer science understands aeronautics, better to try an aeronautics journal”. One does. Review: “that’s got logic in it, no one in aeronautics understands logic, better to try a logic journal”. One is not stupid; but if one were, one might try to do so. Anticipated review: “that’s got computer science and aeronautics in it, no one in logic understands computer science and aeronautics, better to try a computer-science-and-aeronautics journal.”

And that’s all true and that’s all reasonable. Indeed no one in computer science reads aeronautics journals. No one in aeronautics reads logic journals, and so on. That’s why many engineers working on avionics bus systems still do not know about Byzantine failures, 30 years on.

The result is that most of what I write gets on the WWW and stays there. One can spend one’s time writing, or chasing one’s tail around such publishing conventions, but doing one takes time and effort away from the other, and I prefer writing.

Just to give an indication, one of the pieces of work I performed in the last year of which I am most proud is the analysis of causal explanations of the Concorde accident and assessments of responsibility, which I wrote about in my post Concorde, Ten Years On, Part 2. I see there a series of pressing social and technical issues and their interplay, which people have not satisfactorily come to grips with, and I regard that piece as some kind of a start. As I said, I’m proud of it. One can’t do that kind of work every day, or at least I can’t. One has to seize the moment, and I did. Actually, that is the way many successful researchers work in math or computer science. Or philosophy, for that matter. You spend most of your time laying some kind of groundwork as best you can, and then you are somehow handed a moment and you seize it: “I can do that!” and you do. Some more than others.

This wasn’t ever different. Disciplines were partitioned, especially academic disciplines. But one would have thought, as I guessed 15 years ago, that the WWW would make everything different. Mais, plus ça change, plus c’est la même chose.

Some more examples.

I recently organised a Workshop on the Fukushima nuclear accident, inviting largely sociologists and computer-system-safety people. People who read my blog know why I laud the sociologists for their insights into technical matters. When I was thinking we could do this, I asked people about funding. The Scientific Board of CITEC, where I am a PI until November 2012, thought it was a cracking good idea and very relevant and offered financial support. My colleagues at the Centre for Software Reliability in Newcastle upon Tyne, when I called them to apologise that we were withdrawing from their exhibition at the Ada Connection in order to put the money in the Workshop, also offered financial support. Thank you all! And I did approach the German central funding agency for scientific research, the DFG, which had circulated an e-mail saying that in the wake of the tsunami there were instruments available to support cooperative research on the matter in the very short term.

Naive as I am, I took this message literally. I contacted the responsible administrator whose address was on the note. He graciously explained that his “instruments” were limited and didn’t support my workshop idea, and passed my request on to, amongst others, the administrator responsible for the support of engineering research, who replied forthwith in one sentence: “from the point of view of engineering, I don’t see any possibility of support.” What? The world has just experienced one of the two most devastating engineering accidents ever, German politicians scrambled over each other to devise our exit from nuclear power, and the prestigious German academic research support agency says it ain’t interested? I put in a carefully worded query asking whether this could really be so, and received no reply.

Now, me being me, I would think they should be ashamed of themselves. But if I said that, I’m sure it would be indicated to me how inappropriate that would be, and really that I don’t understand the formal courtesy structures at play, and so on. Maybe all true. But the fact is that I have an international reputation in accident research, here was a biggie with major political consequences, I invited a bunch of top people to discuss it, they all said yes by return e-mail, and the engineering research support organisation said it wasn’t interested. There is no way around that fact, no matter how pretty the words.

And that illustrates something that I feel has been going more and more wrong with academic-type research over the years in which I have been involved with it. I suspect it is particularly acute in Germany. Academic research here after the first degree is performed by scientific employees, by people in temporary jobs. There are no “graduate students” (although that is beginning to change: there are now narrowly-defined graduate colleges which offer competitive scholarships. We have one in Bioinformatics and Genome Research, another in Situated Communication, which I think is now over, and another in Cognitive Interaction Technology, which I think has absorbed it). You want to offer a research topic in an American university, you do so, to all the graduate students, and someone will be attracted to it and come and talk to you. In Germany, you have to apply for funding (mostly from the DFG) for a temporary position to perform the research, then wait for the job applications, and hire someone on the basis of an interview. It’s a lot more work for the faculty member; there isn’t the same personal connection to the bright young people you already know are capable of the work; it’s less flexible (I got three quarters of the way through two other thesis topics before I hit on the one I could finish, and none of them were connected with each other. You can’t do that in a job. Indeed, it took me three jobs!); and I believe the quality of the product suffers (but then, I was at Berkeley. Unfair comparison? Well, no. No German university makes the top fifty in any of the better-known rankings, and I’m talking about possible reasons for that).

Let me amplify a little on that parenthetical comment. I had a colleague here in Bielefeld with twenty or twenty-five “scientific assistants” in his group, people working at temporary jobs who hoped thereby to get their doctoral degree. At Berkeley, people, even Turing Award winners, had at most four or five doctoral candidates whom they supervised. The key word here is “supervised”. No one person can supervise twenty-five doctoral candidates to anything like the Berkeley norm. Indeed, supervision, such as it was, was mostly delegated to the post-docs. Of which, to achieve the same ratio, one would need five or six or seven (I recall there were three or four). And these doctoral-work supervisors were not Turing Award winners and the like, not even NSF Young Investigator Award winners, such as at Berkeley. They were people who had got their first research qualification and were mostly at the beginning of seeing whether they could make any kind of name for themselves.

A couple of years after I got to Bielefeld, I discovered that somebody in that group had just written a doctoral thesis on temporal reasoning for artificial agents. Temporal reasoning for artificial agents? That’s the very work that I was known for, partly on the basis of which I was hired (here, one is not hired but “called”). This guy had never talked to me. Curious, I looked at the work. After I read the statement of the problem, it was obvious how to solve it. Then I looked at his solution and it wasn’t anywhere near as good. (But there was some program code behind it.) Happens here. Happens quite a lot here. Doesn’t happen at Berkeley, by and large.

I faulted the research structure. The guy had a job, with a job description. He was a nice, friendly and capable guy. At the end of the job was the expectation of a doctoral degree. Which was duly awarded after satisfying the appropriate formal criteria. All very neat and clean. DFG money apparently well spent. But the net addition to the world’s knowledge of how to solve temporal-reasoning problems with artificial agents was essentially nil. His energies, and the funding support, would surely have been better spent had he talked to me, and then worked on a problem of the same level of difficulty, but to which the solution was not known.

This is already a lot of anecdotes. But it is hard to see how to get at the point without recounting lots of anecdotes. For each anecdote has its individual answer: it’s a special case; or I misinterpreted; or I was sour at someone; or I’m just being arrogant; or I’m looking for excuses for something I haven’t done or don’t do. Maybe all true, but it is the number of anecdotes, interpreted as the weight of evidence, that persuades reasonable people that there is something to the set-up which encourages all this.

Indeed, I am convinced that the model in which aspiring researchers pick their own topic from amongst those offered, make personal connections with a senior researcher who is able to judge whether they might be capable of completing the work, are encouraged, indeed required, to correspond with more accomplished others who have worked on and solved similar issues, and have the freedom to change topic completely when the current one won’t work out, is a better way to induce productive research than the research-as-job model.

But this heavy structurally-constrained interpretation of what constitutes effective research goes much further. Recall my anecdote about DFG support for my workshop, above. Along those lines, consider the following. I am a Principal Investigator in CITEC (above), whose charter is coming up for renewal and the proposal is about to be submitted. It turns out that the business of saying what my group (essentially of two: me and my post-doc) has accomplished, and what we will accomplish in the next five-year period, was delegated to a young colleague, whose job is supported by the institute through the five-year cycle, as indeed now are all professorial jobs in Germany (tenure has gone), and is thus dependent on the success of the upcoming proposal.

Despite offers to help, my colleague didn’t talk to me at all. Indeed, it took me a certain amount of effort to find out who was writing what about our work in the proposal, since apparently none of the stuff I wrote was going to make it in. He wrote one sentence about the work my group had accomplished over those four years (with apologies that he couldn’t find more). And he found no relevant publications, despite (he indicates) trawling our publications page. Well, during the course of the last few months I have been asked variously for one key publication; for five key publications; for ten publications not necessarily within the CITEC remit, all by various people, none of whom was he. The Coordinator of CITEC (effectively the director) asked for a meeting, to explain to me that without any publications it didn’t look good for the proposal to include me.

What? People can’t find stuff I’ve written on the safety of mobile automatic devices in the last five years? Well, of course they can, but you see it doesn’t count. The DFG says peer-reviewed journal articles only.

There we go again. Structural constraints. Nobel-memorial-prize-winning economists, and sociologists, and political scientists, and legal scholars all write blogs. Hundreds and thousands of people read them and comment, including their peers, often in their blogs. Peer-reviewed? Most obviously! I just received a copy of a journal article (counts!) written by two colleagues about two essays (cited) I wrote in this blog. Other colleagues read my posts and they comment!

Another example. We started a mailing list in March 2011 on the Fukushima accident and I recently summarised my contributions, which amounted to 117 A4 pages in 12-pt type, for the workshop proceedings. Now, every word I wrote on that mailing list has been read by eminent colleagues on the list, and they have commented, frankly (it is a closed list). And I have commented on their writing in turn. That’s what you do on such a list, if you’re one of the people who do it (others prefer just to read). Peer review? How much more is it possible to get? And more easily?

The WWW has been pervasive for fifteen years and e-mail lists for thirty. And there is still no measure of quality of contributions that is acceptable to the German research funding agency? (It is not the only one with such a view.) Astounding! It is not as if this is a hard problem. It would become a hard problem if what one tried to define were The Definitive Measure of Scientific Quality, because there can’t be one. But judging the quality of blog posts or sustained mailing-list contributions is no easier or harder a job than judging the quality of peer-reviewed journal publications; indeed it’s often easier, because you can ask more people.

Actually, what happened with the CITEC thing is this. Bernd, Jan and I figured a while ago that our textbook on Safety of Computer-Based Systems, which was solicited by a major publisher some years ago, would be written and out by now. And we thought one book would likely suffice to show what we’d done. One book is not five published papers; in this case it’s more like fifteen, and there will be more. But it isn’t out. Since it is a text, we need to be sure that the techniques introduced can actually be used by the target audience, students and engineers, and so some of our contributors belong to that target audience (not all textbook writers do this, but I happen to think it’s a very good idea). They are not necessarily such experienced writers as I am, so it simply takes longer than we’d thought. Quite understandable, one would have thought. But apparently there is no reasonable way to say to the DFG that the book is almost finished. (Someone might even want to say that a textbook isn’t research. But this one is, you know, just like Nancy Leveson’s new text. Read that one too!)

Structural constraints, and how they hinder effective support of effective research. Is everyone convinced by now? At least convinced to look at the issue more closely? Shall I stop here then?

Not quite. One more word, back to the original topic. “Interdisciplinary” is one of the buzz words of the new modes of research support. But the problems indicated above of support, publication and assessment of work which crosses traditional discipline boundaries, or the new boundaries left in place by a country’s Scientific Wise Owls and Funding Agencies, are deeper than a buzz word, or even than a buzz concept. The logicians can’t read aeronautics and the aeronauticists can’t read formal logic and the computer scientists don’t understand aerodynamics and the engineers don’t understand the sociologists and I doubt that is going to change rapidly under the hierarchically-directed research-as-job model, buzz word or no.
