A bit of intellectual biography, prompted by a couple of days’ free time leading me to a paper written 27 years ago by a pal, which I have just read. I say a little of what’s in the paper, to encourage others to read it. And then I comment on a couple of disappointing aspects of the WWW, and of academic work here in Bielefeld.
I am reading a collection of papers on The Law of Non-Contradiction, edited by Priest, Beall and Armour-Garb (Oxford University Press, 2004, reprinted 2011), for a seminar I offer on the subject of paraconsistent logics. Amongst them is a paper by Vann McGee, an MIT logician and philosopher, on Frank Ramsey’s Dialetheism. Dialetheism is the position that there are logically incompatible assertions that are true. In this case, says McGee, “…sometimes Ramsey is willing to count each of two classically logically incompatible theories as true”.
I am interested in such phenomena because I am interested in reasoning in general, and have been induced by a Bielefeld student, Daniel Milne, who has been following such matters for some time, to become interested in reasoning about and reasoning in stories – fiction. For, one could say, one of the ways in which much fiction works is to induce us to reason about situations invented by the author, who may well not be constrained in general by the “laws” of reasoning applying to the physical world. One can imagine a story in which, while I am sitting listening to you present a paper on dialetheism in my seminar, you are simultaneously off waterboarding a tax collector. You cannot be in two different places at once physically, but in a story you can be so without the story appearing to be incoherent. But other stories are incoherent – think of Finnegans Wake. How to mark a difference?
Further, many stories involve people and objects which are not real, which are invented. Do these people and objects “exist” in some way? If so, then certainly not in the way in which you or I exist, for we are “real”, “actual”, or however you might like to describe us, and the invented entities are not – they don’t have an address or an ID card or pay radio licence fees, and nobody is going to go looking for them to insist they sign up for any of these. But we can’t just say “anything goes”, that we can reason about such invented entities any way we like. What about that superficially referring phrase itself, “invented entities”? Does it refer? In one sense, obviously it does: you know exactly what I am talking about, because I told you: things invented by people to occur in stories they write. In Fregean logic, modern formal logic, or on Russell’s interpretation of putatively-referring terms, however, the term doesn’t refer. But singular or plural terms in “classical” formal logic (that is, the post-Frege traditional complete formulations of propositional and predicate logic) must refer. What, then, does this term do and how does it do it? And what logic, if definitely not classical as just noted, is involved in reasoning concerning it? Say, in helping to explain what this very paragraph means?
I was in grad school at Berkeley with Vann McGee, who entered a year later than I did – or was it two years? We were in the Group in Logic and Methodology of Science, started by Tarski and having 15 or so graduate students pursuing PhDs, and about four times that many faculty members, some of whom we never saw, such as the game theorist John Harsanyi, who was to win a Nobel Prize. Vann entered at the same time, I think, as Shaughan Lavine, a loquacious logician interested in physics – at that time Shaughan wanted to solve the riddle of the intellectual incoherence of quantum mechanics, but thought it would take decades and thus the enterprise couldn’t really start until one had tenure, so he considered it wise to pick lower-hanging fruit for PhD and pre-tenure work. Shaughan left Berkeley after a couple of years because he didn’t see how it was actually possible to get a PhD degree in the Group in the environment prevailing in the 1970’s. He worked as an editor for the Physical Review, and came back at the end of the decade to work on a technical problem in mathematical model theory which he thought he could crack in a couple of years (in fact, it took him eight more years, underlining the accuracy of his earlier observation).
I was very interested in mathematical logic. In fact, I came to Berkeley being most interested in the Scott semantics for the lambda calculus, but I found nobody else there interested in them, except a young Japanese scholar, Reiji Nakajima, working with a temporary faculty member in the Computer Science Department who was applying it to programming languages with recursive constructs, or so I thought. My interest in computation was theoretical – Turing machines, recursive functions and the like – and I came from a university which had a research group in programming languages, but no department doing work in computing science: the Oxford Computing Laboratory was largely for people who wanted to solve applied-math problems numerically, and I was utterly uninterested in those at the time (things would change!). I classified the Berkeley Computer Science Department in that shoebox – one of the many and varied intellectual mistakes I have made in my career, and this one took me a decade to correct.
Even then, I think I was equally or more interested in questions in philosophical logic than in set theory or model theory, but there were more people doing the hard math and I didn’t think you could get a job doing philosophical logic. Further, the math seemed “hard” and the philosophical logic “soft”. The math was hard – it proved too hard for me in the end. But I was worried about career prospects in philosophical logic. I knew some physicists in the mid-1970’s who had told me that at that time there was just one tenure-track academic job in theoretical physics offered in the whole of the US. I thought philosophical logic was going the same way. So rather than follow my inclinations away from math, I went into it even more – I even taught myself and taught others numerical analysis (at both the undergrad and grad levels) because I thought I’d have more chance of a job doing something related to what I enjoyed. I didn’t realise that the South Bay was about to explode into Silicon Valley and help logic become one of the largest applications of mathematics after calculus and numerical algebra and analysis. But the varied non-logic mathematical skills I learned have proved invaluable to me; I don’t regret at all the time spent developing them.
Back to Vann. Vann was quiet in classes and conversation, but his observations and conjectures were pertinent and incisive when he made them, and he was obviously both very clever and very able – as well as giving us the impression of being quite other-worldly. None of us at that time in the mid-1970’s knew how to get out of Berkeley with our PhD degree (indeed, the university itself was to recognise the fact that too many clever graduate students were often having too much demanded of them, and was to initiate change), but Vann gave me the impression of not caring about it that much, as long as he could carry on thinking about technical matters in measurement theory and conditionals and all those problems ignored by the logicians in the Math Department. He finished in 1985, having written not only a thesis on Truth and Necessity in Partially Interpreted Languages, but also having done work in the Theory of Measurement (one of Ernie Adams’s interests, as well as one of Pat Suppes’, down the road at Stanford) and in the Logic of Conditionals (the major Adams theme along with Probability Logic). Some of his work on conditionals was published the year he was awarded his PhD, in the Journal of Philosophy, a – some say the – leading journal.
I just read the paper, after 27 years. Which is part of what prompted this note.
Me, I’d gone “applied”, having taught math and computer science at two California State Universities, San Francisco and then Hayward, to try to support myself while working on my degree in the copious free time left to me on a full teaching schedule at a teaching university. I managed to reprove a result of Humberstone in algebraic logic without realising it, as Johan van Benthem noted when I explained my result. My resolution to “stop reading and get down to working!” had been taken two papers too soon. “It shows what you can do!” said Johan helpfully, but it didn’t seem any consolation at the time after that couple of years’ work. I got my first real break in mid-1984, with a temporary job in SRI’s Computer Science Lab. That helped me write half a thesis on eliminating quantifiers in naive set theories, but that effort ended some months later when my job ran out. The second break was at my next job, at the Kestrel Institute starting in late 1985, where I was put to work on devising a computational system for reasoning about time. Cordell Green pointed me at James Allen’s work on intervals in the interpretation of reasoning about time in natural languages, and I recognised a Relation Algebra, which is something I knew something about from algebraic logic, and about which my pal Roger Maddux knew much more. We got some significant new mathematical results (largely his) as well as data structures and algorithms (largely mine, some implemented). I had a book contract with MIT Press Bradford Books together with Pat Hayes (which remains to this day unfulfilled), submitted my thesis and was awarded my PhD degree in 1987. My code, written in the now-defunct language REFINE, was very modular and mostly declarative, and persuaded me of the value of declarative languages with strong typing and rigorous modularity.
I spent six months writing code to perform calendrical calculations according to my data structure (to computer scientists a “model”, but not to logicians), for the Project Manager part of the Knowledge-Based Software Assistant project of the USAF. I gave the code along with its API to the integrator of the KBSA-PM. She spotted one error (a boundary value) within a couple of hours of testing – and then the code ran seamlessly for the demo at AAAI in 1986 and in the KBSA-PM delivered to the Air Force, for the next few years as far as I know. In the last twenty-five years, we have not gone forward much in industrial programming languages. All the issues I was able to avoid seamlessly by using REFINE still occur all the time in the industrial systems I am acquainted with.
Shaughan finished a year later, in 1988. He had solved a major technical problem in admissible model theory and was successful in his job search at the very time that philosophical logic was suffering the fate of physics a decade earlier – I think he got the one tenure-track job in philosophical logic available at the end of the 1980’s. He was at Stanford – although I was in Palo Alto at my job most days in the week, I never met up with him there – and then went to Columbia, where he wrote his book Understanding the Infinite (Harvard University Press, 1994, reprinted 1998). I haven’t seen Shaughan for twenty years, nor Vann for thirty.
Man, what a paper that is which Vann published in 1985! A Counterexample to Modus Ponens. Tim Williamson in The Philosophy of Philosophy (Blackwell, 2007) calls Vann a “distinguished logician” while explaining one of these results (see, for example, this citation).
Let A and B be things you assert (sentences, say, or propositions or statements, if you believe in those and can say what they are). “Assert” means something like “claim to be true”. Modus Ponendo Ponens is the inference rule whereby, from an assertion of A and an assertion that if A then B, you may infer B.
According to the Stanford Encyclopedia of Philosophy, a forerunner of Modus Ponens was discussed by Theophrastus, Aristotle’s pupil, whereby from the premises if something is F, it is G and x is F one may infer x is G. Modus Ponens concerns whole assertions, whereas Theophrastus’s rule is concerned with objects having properties or characteristics, and properly belongs to the logic of predicates rather than to propositional logic.
So, what is an inference rule? What are you doing when you “infer”? One common explanation, the “classical” explanation (although “classical” here means largely the 150-year-old Fregean tradition), is that asserting A and A implies B or if A then B means you are taking these sentences to be true. Inference then means that you take the third sentence B also to be true on the basis of the truth of the first two. The rule is said to “preserve truth”. A rule of inference which preserves truth is said to be valid.
There are two main ways of formulating the logic of whole sentences, propositional logic. One is to give a set of axioms – a collection of logical truths (sentences guaranteed to be true just in virtue of their form, such as A implies A, or (A and B) implies B) – and just two inference rules: Substitution and Modus Ponens. Substitution says you may replace any schematic letter, such as “A” in the two logical truths just given, by any sentence whatever. This is truth-preserving, because the logical truths are true because of their form, not their content, so no matter what “A” is, something of the form “A implies A” will be true. That “no matter what” phrase is another way of expressing Substitution. No one queries Substitution; it is one of the basic mechanisms of logic as truth/assertability according to form and not content. It looks to be significant for this century-and-a-half-old conception of logic that Modus Ponens may not be truth-preserving when “if…then…” is used in natural-language reasoning! The other way of formulating logic gives no axioms, but plenty of rules of inference, indeed some (“introduction” rules and “elimination” rules) for each logical constant. Modus Ponens is the “elimination rule” for the conditional in this formulation. So either way Modus Ponens is key. (The first type of system is popularly ascribed to the German mathematician David Hilbert, the second to the German logician Gerhard Gentzen.)
In fact, when “implies” is taken to be what is called the “material conditional”, Modus Ponens is truth-preserving, as Vann points out. The material conditional is the interpretation of “implies” whereby “A implies B” is taken to be equivalent to saying “either Not-A or B”. One interpretation of logic, one explanation of the meaning, takes the “logical constants” in propositional logic, the connectives “and”, “or”, “implies” and “not”, to be purely functions of the truth or falsity of the sentences they combine. This, along with the claim that every sentence whatever is either true or false, constitutes the basis of what is called classical propositional logic (that is, the common propositional logic since Frege).
It is easy to see that, when “implies” is the material conditional, Modus Ponens is truth-preserving, as follows. You assert A. A is taken to be true. You assert A implies B, that is, either Not-A or B. So this is taken to be true. But you have taken A to be true, so it follows that you cannot take Not-A to be true as well, for you would be contradicting yourself (the so-called Law of Non-Contradiction is another foundational principle of classical logic, but exactly what it means can be questioned – see the more than 240 different variations pointed out by Patrick Grim’s article in the eponymous op. cit.). If the “Not-A” part of the true either Not-A or B isn’t true, then it must be the “B” part that is true. That shows that Modus Ponens is truth-preserving, because B is exactly what Modus Ponens infers from the first two sentences.
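That reasoning can also be made completely mechanical. Here is a small Python sketch (my illustration, not anything from Vann’s paper) which enumerates every assignment of truth values and confirms that there is no case in which A and the material conditional A implies B are both true while B is false:

```python
from itertools import product

def material_implies(a: bool, b: bool) -> bool:
    """The material conditional: 'A implies B' means 'not-A or B'."""
    return (not a) or b

# Search for a valuation making both premises true and the conclusion false.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if a and material_implies(a, b) and not b
]

print(counterexamples)  # [] – no counterexample; Modus Ponens preserves truth here
```

The empty list is just the truth-table argument above, checked case by case by machine.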
People using formal logic in mathematics generally take “implies” to mean the material conditional when they are using logic or talking about it. And they take this to be settled. But they also infer, as a professional activity: they prove theorems from other mathematical “facts” (theorems). It is prima facie apparent that inference of this sort may well not be the same kind of activity as when, looking out from my room, I see your shadow on the street and infer that the sun is shining. For that is defeasible – somebody may have turned a searchlight on you on a cloudy day. Whereas mathematical theorems are not usually taken to be defeasible in the same way – they are taken to be wrong only if their author has made a technical mistake in reasoning, not if the phenomenon they assert is valid but otherwise explained.
When Vann points out apparent counterexamples to Modus Ponens, he is noting that there are conditionals, “if…then…”-statements, in the language we use, and if one is trying to formulate truth-preserving inferences using those notions of implication, then formal Modus Ponens doesn’t preserve truth. His best-known counterexample runs roughly as follows. Opinion polls taken just before the 1980 US presidential election put the Republican Reagan decisively ahead of the Democrat Carter, with the other Republican, Anderson, a distant third. Those apprised of the polls had good reason to believe “If a Republican wins the election, then if it’s not Reagan who wins, it will be Anderson” and “A Republican will win the election” – yet no reason at all to believe the Modus Ponens conclusion “If it’s not Reagan who wins, it will be Anderson”, for had Reagan not won, the winner would have been Carter.
On the face of it, he’s right. “On the face of it” means that the arguments he uses are formally of the Modus Ponens form (except for a couple of minor typographical differences which are assumed to be contingently grammatical and not substantial). The question is how to explain the phenomenon. Vann suggests it is crucial that the “B” part of his counterexamples is itself a conditional. That is, there is an “if…then…” as the “then”-part of an “if…then…” – a so-called “nested conditional”.
There is a substantial amount of work on the logic of conditionals. They seem to be quite tricky, so it is really not surprising that phenomena such as Vann identified remained unnoticed for so long. Ernie Adams wrote an influential book, The Logic of Conditionals, published in 1975. David Lewis addressed the subject in a number of seminal papers as well as a book, Counterfactuals (Harvard U.P./Blackwell’s 1973, reissued Blackwell’s 2001). Jonathan Bennett has an extensive survey of some 380 substantial pages (A Philosophical Guide to Conditionals, Clarendon Press, Oxford, 2003). One locus classicus is a set of papers edited by Frank Jackson (Conditionals, Oxford University Press 1991, unfortunately out of print).
Vann considers also the interpretation of if A then if B then C as if A and B then C and vice versa (he calls this the “law of exportation”, the “law of importation” being the interpretation of the second as the first), and notes that, if these laws are correct interpretations of conditionals, the difficulty is “basic”: that you are stuck with taking “if … then …” to be the material conditional (which it can’t be, because if so there would be no counterexamples to Modus Ponens) or the logically most powerful conditional called “strict implication”, whereby “if A then B” is true only if in every possible world in which A is true, B is also true. Which wouldn’t seem right: “if I have my brown jacket on, then my grey jacket is at the cleaner’s” tells you something about my clothing habits in this world in which we actually live, and tells you nothing about another world, odd but possible, in which I have a pathological hatred specifically of wearing grey jackets and would never do so, even if there were fifty in my closet and I only had my brown one otherwise.
That is a powerful and surprising result.
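The laws of exportation and importation do at least hold for the material conditional, which is again easy to check by brute enumeration. A Python sketch (my illustration, assuming the material reading of every “if…then…” involved):

```python
from itertools import product

def mi(a: bool, b: bool) -> bool:
    """Material conditional: 'not-A or B'."""
    return (not a) or b

# Exportation/importation: 'if A then (if B then C)' and
# 'if (A and B) then C' agree on every assignment of truth values.
for a, b, c in product([True, False], repeat=3):
    assert mi(a, mi(b, c)) == mi(a and b, c)

print("exportation and importation hold for the material conditional")
```

The point of Vann’s dilemma is precisely that the natural-language conditional cannot be this material one, so the equivalence just checked is where the trouble starts rather than where it ends.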
He goes further, in showing that Robert Stalnaker’s account of a certain kind of conditional, the subjunctive or counterfactual conditionals (conditionals in which the antecedent, the part following the “if” and before the “then”, is not actually true but hypothetical), is “inaccurate” (Stalnaker’s account is in A Theory of Conditionals, in Studies in Logical Theory, American Philosophical Quarterly, Monograph 2, 1968, reprinted in Jackson op. cit.). He means wrong, if the law of exportation holds. This is also a significant result, for at the time the Stalnaker and closely-related Lewis semantics for counterfactual conditionals were held to be the best accounts. (They are still the best available for many purposes. Forty years on, we use the Lewis semantics for counterfactual conditionals in my technique for causal analysis of accidents, Why-Because Analysis, where it works very well in the context of complex engineered sociotechnical systems.) The issues with counterfactual conditionals in particular were, I believe, first raised by Nelson Goodman in a paper The Problem of Counterfactual Conditionals, Journal of Philosophy XLIV(5), February 27, 1947, available through JSTOR to those with access. It is also reprinted as Chapter 1 of his book Fact, Fiction and Forecast (Harvard University Press, 1984).
There is much, much more in this short paper. I am so glad I read it finally.
On to my second theme, somewhat distressing. As I have written before, I thought in the mid-1990’s that the advent of the World-Wide Web would render the business models of traditional academic publishing obsolete. That hasn’t happened, to my regret as well as sometimes to my annoyance. But the WWW has led to on-line discussions, and various software packages are available for formatting ongoing discussions of any and all subjects on the WWW. Instead of searching out a bunch of like-minded people to meet to discuss raising blue goldfinches, you can find them right there in the blue-goldfinch forum! What a wonderful enrichment of our lives.
I looked for discussion of Vann’s paper. I found only two discussions in forums on the first few pages of the Google search. The second entry in the Google search for the paper was a discussion on TalkRational: A Republic of Free Thought. A “moderator” brings up McGee’s paper in 2010, a quarter-century after publication. Kudos for drawing attention to it, one might think, but consider his/her comment:
(1) I think that the most obvious problem with McGee’s argument is that he equivocating between two radically different ways of construing the relevant statements. Are there any other problems with the argument that you see?
(2)Is Vann McGee retarded? Seriously, is there any reason whatsoever why his argument should be persuasive?
which is partly personally abusive. He/she says in a later note:
McGee has basically become a rock star in philosophical logic because of this argument, too. It’s a pretty tragic statement on the condition of contemporary philosophy.
The discussion goes downhill from there, quite steeply. Most people seem to want to deprecate McGee personally, as the moderator implicitly does.
Such a combination of incomprehension and abuse is unfortunately rife on WWW forums. It doesn’t seem to happen to anything like the same extent on subscription-only e-mailing lists. This is one area in which e-mail seems to serve a function which the WWW does not, contrary to what one might have anticipated. I regret, and am frustrated by, the low standard of such forum discussion. Recall that this is a discussion which appears high on the Google list responding to the query “vann mcgee modus ponens”.
I wish for a different world, a world in which papers and arguments can be presented and discussed on the WWW the way they are presented and discussed in colloquia, conferences and the better journals. We are unfortunately a long way from that.
On to my third theme.
The first PhD to graduate whom I advised in Bielefeld was Thorsten Scherer. Thorsten built a mobile robot to perform lab assays automatically. I became his advisor after his original advisor left Bielefeld and Thorsten didn’t want to follow. His robot worked in a biotechnology lab. It drew samples from a large (industrial-scale) fermenter, which was producing cells, took them to and installed them in a centrifuge, started the centrifuge, removed them when it stopped and took the results to and installed them in an assay machine. These devices were distributed around the lab. Thorsten had developed the robot to such a degree of reliability that it worked at night when nobody was around. It only spilled stuff one time, near the beginning of development.
I was very impressed by this piece of system engineering. Thorsten had put together algorithms – recognition, motion and control algorithms – some of which he had gleaned from the literature and many of which he had devised himself and had integrated them in a piece of hardware which performed its chosen task to a demonstrated high level of reliability (achieving the task as wished) and safety (avoiding spills, collisions, breakages).
Readers will appreciate that most academic contraptions of this sort are “proof of concept”, that is, their devisers can get it to do what it is supposed to do some of the time, at least once or twice. Adding dependability to such “proof of concept” devices comes out to around ten times as much work, as an industrial rule of thumb. It is very frustrating to those of us who work in the area that, with some notable exceptions, dependability issues are largely ignored in academic computer science, for they are not intellectually trivial. Most of us end up spending far more time talking with industrial engineers than we do with fellow academics.
I thought this superb work, and proposed Thorsten for a summa cum laude designation. So did his second thesis reviewer, his ex-boss. But it was vetoed by the Chair of his committee (as thesis advisor, I could not be Chair) on the basis that he had taken too long – seven years, I think.
Another example. I had an Indonesian scholar in my group, I Made Wiryana. Made’s thesis was on what I would call practical requirements engineering in culturally very different situations from those in the West. Indonesia has many different cultures, and information technology is helpful and very much needed, but some ways we have of engineering these systems just don’t fit local cultures there, which are many and varied. Made devised a means of performing dynamic adjustments to sociotechnical system requirements through causal analysis of cultural issues that came up during initial system development and prototyping. Again, unlike most academic work, this was serious “grown-up” engineering. The examples in his thesis included designing and implementing the system to run the blog of the Indonesian president, whom he had personally advised, and designing and implementing the warning-message function associated with the tsunami early-warning system installed with international help after the December 2004 tsunami.
Again, I thought this work worthy of a summa cum laude designation, as indeed Made’s committee decided. But before the defence, I had a brief chat with one of my colleagues, multiple times Dean of our faculty, known for his very effective fund-raising, and now Rector of my university, who opined strongly that it was inappropriate to consider awarding a summa cum laude to someone who had “taken too long” (Made had been working with my group about a decade).
To my mind, the quality of a PhD lies solely in its achievement. Both of these scholars had achieved way beyond what most German PhDs in computer science achieve, in that they had devised and implemented systems with demonstrated dependability. As I noted, that simply takes longer. Made had to work with a number of organisations, including government, to get his results. Anyone setting a clock ticking on government work anywhere is liable to run out of clock batteries.
Why am I saying this here? By way of contrast. Vann took ten or eleven years to get his PhD. Shaughan took 13, as did I. Was ten years worth that one seminal paper of Vann’s, let alone a PhD? In my view yes, most certainly! Read it, and I bet you’ll agree. But in Germany he would have “taken too long”…