Dewar has written a rich and stimulating paper on Ramsey sentences and Newman's problem. This is a topic I worked on for my Master's and PhD dissertations, but the paper is rather technical, and as I am not very familiar with the formalism used, it took me two or three careful reads to understand what the author was trying to do. It seems to me that the author attempts to formalise, in model theory, some of Melia and Saatsi's arguments about Newman's problem for structural realism. He then suggests that the whole Ramsey approach is inappropriate for extracting the structural content of a theory, and that we should focus on the notion of definition instead.
Let me recall what all this is about, and give a personal exegesis of the paper. Then I will give some comments and criticisms.
A bit of context
First, a bit of context. Recall that the structural realist answers the pessimistic meta-induction by claiming that our theories are, if not true, at least “structurally true”. But what is structure? One way to express it is the Ramsey sentence formalism: structure is what remains of a theory's equations when theoretical predicates are left uninterpreted, and this is expressed by replacing them with predicate variables over which one quantifies existentially. All we say is that theoretical predicates refer to “something out there” that realises the structure of the theory, but that “something” need not be a natural property. In particular, it can be multiply realised by natural properties.
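Schematically, and in my own notation rather than the paper's: if a theory is written with observational predicates O_i and theoretical predicates P_j, its Ramsey sentence replaces each theoretical predicate by an existentially bound predicate variable:

```latex
% A theory with observational predicates O_i and theoretical predicates P_j:
T(O_1, \ldots, O_n;\; P_1, \ldots, P_m)
% Its Ramsey sentence: replace each P_j by a predicate variable X_j
% and quantify existentially over them:
T^R \;=\; \exists X_1 \cdots \exists X_m \; T(O_1, \ldots, O_n;\; X_1, \ldots, X_m)
```

The claim that the theory is “structurally true” then amounts to the claim that T^R is true: something realises the X_j, whatever it is.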
Then comes Newman's objection: if you are too liberal about what counts as “something” (for example, if you accept that any arbitrary set of objects is an admissible property, any collection of pairs a relation, and so on), then any structure you want is realised in the world somehow, given that there are enough objects. We just have to “pick” the right structure by arbitrarily grouping some objects, and any theory is structurally true (provided it matches our observations, at least). So one had better add constraints on which structures are admissible.
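Newman's point can be made vivid with a brute-force sketch (a toy illustration of my own, not anything from the paper): over a small domain of wholly unstructured objects, a Ramsey-style claim of the form “there exists a relation R such that …” comes out true simply because some arbitrary set of pairs realises it.

```python
from itertools import combinations

# Toy illustration of Newman's point (my example, not the paper's):
# if ANY set of pairs counts as an admissible relation, then a
# Ramsey-style claim "there is a relation R such that ..." is realised
# over almost any domain by some arbitrary grouping of objects.

domain = list(range(4))  # four wholly unstructured objects

def satisfies(R):
    """Sample 'structural' claim about R: every object bears R to
    something (seriality), but R does not hold between all pairs."""
    serial = all(any((x, y) in R for y in domain) for x in domain)
    universal = all((x, y) in R for x in domain for y in domain)
    return serial and not universal

pairs = [(x, y) for x in domain for y in domain]
# Enumerate every candidate extension for R, i.e. every set of pairs.
witnesses = [set(R) for r in range(len(pairs) + 1)
             for R in combinations(pairs, r) if satisfies(set(R))]

print(len(witnesses) > 0)  # True: some arbitrary grouping realises the claim
```

The point, of course, is that the witnessing extensions are mostly gerrymandered sets of pairs with no claim to naturalness, which is exactly why constraints on admissible extensions are needed.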
Melia and Saatsi wrote a paper in which they examine various solutions to this problem. In particular, they examine how qualifying the structure we are talking about (we mean “real” relations, not just any structure) could help. They consider limiting it to “natural” properties and relations, or to “qualitative” properties and relations (any arbitrary logical combination of natural properties). They argue that the former is too strict for structural realism, because properties deemed natural do not always survive theory change, and that the latter is too liberal, because if the world is sufficiently asymmetric, any extension we want can be picked out by a unique qualitative property, and so any theory would have its structure realised somehow.
Summary of the paper (1): definitions
Dewar's paper is mostly focused on these arguments, which he addresses in model theory through the angle of definitions and the angle of structural equivalence.
Here is a personal summary of my understanding of the paper (I apologise for any imprecision). The idea is to define a structure (called a frame) that limits the admissible extensions for predicates (the "somethings" theoretical predicates are allowed to refer to). We can imagine that this frame corresponds to a natural structure out there in the world: a domain of objects, and natural groupings of these objects into natural properties and relations. At first, we do not expect any particular constraints on them: they are natural only because nature says so. Now one could say that a theory T is structurally true if one can map its predicates to "natural" elements of this structure, and if T is true under this mapping.
But that seems too strict: we would intuitively think that a theory whose predicates correspond to the negations of natural properties still gets the structure of the world right. So we can liberalise our account by accepting that the predicates of T need not refer to natural properties directly, but could be defined in terms of natural properties. This is done by extending the frame to all extensions that can be "defined" from natural extensions (in technical terms, the frame is closed under definability, and we get Henkin semantics). The idea is that T gets the structure of the world right because, if we interpret its theoretical vocabulary suitably in terms of natural properties, by means of definitions, then T is true (that would allow for multiple realisation if we accept second-order formulas, I guess).
What is interesting about this approach is that it allows you to put more or less liberal constraints on what counts as an admissible “definition from natural properties”. A strict constraint would be that theoretical predicates perfectly match natural properties. More liberal constraints would allow different kinds of definability of theoretical predicates in terms of natural ones (for example, through negation). But if admissible definitions are too liberal, then any theory will come out structurally true for a suitably chosen set of definitions, and we run into Newman's problem.
In sum, the notion of definition helps us consider different degrees between quasi-realism (theoretical predicates refer to natural properties only) and quasi-empiricism (theoretical predicates refer to anything so long as the theory is empirically adequate) in the context of Ramsey sentences.
Now the author is interested not in truth but in theoretical equivalence, so my presentation is not completely faithful; still, we can recover the author's idea from it. The idea would be something like: T1 and T2 are structurally equivalent if, whatever the frame out there is, they come out either both true or both false on this account. Either there is a definition from natural properties that makes them true, or there is not, but they are always both true or both false “in the same worlds”. However, I find this less easy to grasp, and in what follows I will transpose the author's arguments into terms of truth rather than theoretical equivalence, because they work just as well (it is just that the question is different: in one case, whether our theories are structurally true in some meaningful sense; in the other, what the structural content of a theory is). I will also, from time to time, employ a loose, syntactic language (talking of "natural predicates") instead of a more rigorous semantic account, for the sake of simplicity.
Summary of the paper (2): collapse problems
In the paper, the author considers a particular class of admissible definitions: if my understanding of his formal account is correct, it requires that definitions be explicit (a predicate should be strictly equivalent to a formula expressed in "natural" vocabulary and observational vocabulary), that they be first-order formulas, and that they not contain proper names referring to particular objects. As the author explains, this last constraint is sound, because any extension could be defined if we allowed reference to arbitrary particular objects. He then recovers Melia and Saatsi's result: this constraint is insufficient if the world is asymmetric (loosely speaking, if no two objects of the world can have exactly the same properties and relations to other objects), because then any object can be picked out by a qualitative definition: it is identifiable by its place in the structure, so to speak, and in turn any extension can be defined (the author adds precautionary notes that might, I guess, have to do with complications due to his focus on theoretical equivalence rather than truth).
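If I read the formal account correctly, an admissible definition would look something like the following (a hypothetical example in my own notation, where N_1 and N_2 stand for natural predicates, O for an observational one, and a for a proper name):

```latex
% Admissible: explicit, first-order, free of proper names.
\forall x \, \big( P(x) \;\leftrightarrow\; \neg N_1(x) \wedge (N_2(x) \vee O(x)) \big)
% Not admissible: a definition that picks out a particular object a by name.
\forall x \, \big( P(x) \;\leftrightarrow\; x = a \vee N_1(x) \big)
```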
Finally, the author gives another reason why this kind of definition is too liberal: there are theories that are not notationally equivalent (not mere reformulations of one another through explicit definitions) but that would be structurally equivalent on this account, because (in my formulation) they are structurally true in exactly the same worlds. He gives an example of this that I will discuss below.
Could we put further constraints on definitions to avoid the problem? The author gives reasons to be pessimistic: the notion of definition proposed is already quite strict for a structural realist, and yet even with this strict notion we run into trouble. According to the author, the problem is not so much with which definitions we allow as with the fact that opening the domain of admissible extensions to those which are definable immediately gives us more than we wanted (in particular, theories that are mutually interpretable yet not notationally equivalent, because the interpretations differ in one direction or the other, will come out structurally equivalent on this account). In sum, the problem lies with the Ramsey sentence approach and its associated semantics: it cannot capture what we would mean by structural equivalence, so it is not the proper tool for extracting the structural content of our theories, even if we qualify the domain of properties and relations they are talking about. The author finishes by suggesting that we should abandon the idea of extracting structural content through semantic frameworks and Ramsey sentences, in favour of a direct account of structural equivalence in terms of definition.
I wasn't convinced by this last part, as I will explain below.
Let me now give some brief comments. First, I think examples from actual science, or from other literature in philosophy of science, could have made some arguments more palatable, although they (mostly) support the author's points.
Take the idea that a theory expressed with the negation of a natural predicate should still count as structurally true (my formulation). That is very plausible by structural realist standards. For example, think of positrons, which were initially thought of as “holes” in a sea of electrons. If positrons are natural entities, then this old theory somehow uses a “negative” predicate (the absence of an electron). Yet the structural realist should want this theory to be structurally correct, because it was empirically successful and made novel predictions (see Pashby). Whether this holds for any definition, however contrived, remains to be seen, in my opinion.
The notion of definition introduced also reminds me of Nagel's bridge laws, which account for inter-theory reduction, and in some sense structural realists want old theories to “reduce” to new ones. But it is widely recognised that Nagel's account of inter-theory reduction is too strict. Moving to an account such as Kim's, that is, understanding one theory's predicates as functional (higher-order predicates identifying “causal profiles”) rather than strictly definable by bridge laws, would require more liberal definitions: at least second-order formulas, and probably implicit definitions instead of explicit ones. And that is certainly a move the structural realist should make, in particular if they wish to account for special sciences revealing “real patterns” (as Ladyman and Ross do). Structural realists also often mention Post's correspondence principle, according to which a new theory should “degenerate” into the old one in some limit, as Newtonian mechanics is a limiting case of the theory of relativity when the speed of light goes to infinity. But arguably this correspondence principle cannot be captured by the strict definitions considered in the paper, because “degenerating” involves approximations and limits. This supports the author's point, in that even overly strict notions of definition lead us into trouble.
Finally, one could relate the discussion of the world's being asymmetric, which allows for a qualitative identification of any real object, to theoretical symmetries in physics, which structural realists often mention. Theoretical symmetries are not world symmetries: they relate different models, not a single model to itself (so, for example, physics is CPT invariant, but that does not mean there is a twin Earth of anti-matter going backward in time somewhere in the universe). Yet world symmetries could count as a sub-class of theoretical symmetries (permutation symmetry for bosons in quantum mechanics could count as such a “world symmetry”).
There is a parallel to be drawn here with notational equivalence. Newtonian physics is translation invariant, from which we could infer that “having position x” in one model could be translated into “having position y” in another without loss of cognitive content. The two models are notationally equivalent, or structurally equivalent. But the point I wish to make is this: structuralist authors are generally tempted to eliminate the “surplus structure” associated with symmetries, including world symmetries; if they eliminate symmetries from their models, they will ultimately consider that the world's structure, what is really out there, has to be represented by an asymmetric model. Then we are pretty sure to run into the trouble mentioned in the paper: any “real object” must be identifiable qualitatively (and indeed, ontic structural realists emphasise the idea that individuals are identified by their position in the structure!), but that makes any theory structurally true under qualitative definitions. Again, these observations strengthen the author's points.
Now, intuitively, I would say that the way to go for the structuralist is to put restrictions on admissible definitions. The author thinks this is not the right way to go, but I am not really convinced. For sure, the notion of definition proposed is already too strict in some respects, but not in others. Newman's problem is that we can find very contrived and ad hoc mappings to qualitative properties that make any theory structurally true, so perhaps we should consider the virtue of theoretical simplicity: we want our theories to be informative, to describe the structure of the world with few axioms. Then we could introduce different levels of definability, according to how many logical operators definitions use, for example, and with each level would be associated a different degree of structural equivalence: the less contrived our definitions must be, the closer two theories are, structurally speaking. The structuralist could then argue that science progresses toward ever closer structural equivalence to an ideally true theory of the world.
This brings me to my criticism of the final part of the article. Dewar provides a trivial example of two theories that do not seem structurally equivalent, intuitively speaking, yet come out equivalent because each can be interpreted as a qualitative reformulation of the other. Translated into natural language, the first theory T1 says that there is at least one property P in the world (a property P such that any object is either P or not P, which is trivial). The second, T2, says that there are properties Q and R such that any object that is Q is also R.
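In symbols, the Ramseyed forms of the two theories could be rendered roughly as follows (my reconstruction, not the paper's exact formulation):

```latex
T_1:\quad \exists X \, \forall x \, \big( X(x) \vee \neg X(x) \big)
T_2:\quad \exists Y \, \exists Z \, \forall x \, \big( Y(x) \rightarrow Z(x) \big)
```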
How are T1 and T2 structurally equivalent? Well, imagine T1 is true. Then we can make T2 true by defining both Q and R as equivalent to P. Now imagine T2 is true. Then we can make T1 true by defining P any way we want in terms of Q and R (since T1 is trivial). This means that T1 and T2 can always be made true in the same worlds: they are structurally equivalent. But they are not notationally equivalent, and according to the author, they should not be considered structurally equivalent.
Here I disagree. The two theories are not notationally equivalent, but their Ramsey sentences are already structurally equivalent if we assume that P, Q and R are natural properties (in technical terms, using frame semantics, not Henkin semantics). So we do not even need to admit any kind of definition to see them as equivalent. And that makes sense from a structuralist perspective. We want our theories to be true of the “important properties and relations” in the world. Now, saying that there is at least one important property (T1), and saying that there is one important property that implies another (T2), is not saying much: indeed, if T1 is true, so is T2, and conversely, and T2 says nothing important beyond T1 unless we interpret its predicates further. The two theories are inequivalent if they are interpreted, but unless we use observational terms (which is not the case here), what else could we say about their properties that would not count as more structure? That Q and R are distinct properties, perhaps? But that could be translated logically, and then the two theories would no longer be equivalent. So, contrary to the author, I think the Ramsey sentence formalism is the right tool for understanding what we mean by structural equivalence (or resemblance), and notational equivalence is the wrong tool. T2 cannot be seen as a mere reformulation of T1, but it only adds anything to T1 if we assume metaphysical stuff (an “interpretation” of Q and R) that the structuralist should not care about if it cannot be translated into structure.
Here is an example to make my point. Take Galilean and Newtonian physics: the two theories are mutually interpretable (make “position x” equivalent to “metric relation x to the centre of gravity of the universe” and Newtonian physics is true in every Galilean world), although Newtonian physics is not notationally equivalent to Galilean physics, because it has more structure (every object has a “true” position in space-time, which cannot be recovered from Galilean physics). Now, remarking that the two theories are mutually interpretable, hence structurally true in exactly the same worlds under some qualitative reformulation, is certainly of interest to the structural realist. It means that by structuralist standards, absolute space-time in Newtonian physics is a metaphysical component that adds nothing to the structure of the theory.
In light of this, I think the author's complaint could be an artefact of his focus on structural equivalence rather than truth. Notational equivalence is an interesting notion concerning the way our theories are presented, but it is not necessarily so interesting when it comes to knowing which theories are true. Notational equivalence goes both ways, from one theory to the other and conversely, but truth goes only one way: from the world to our theories. Having a definition of our theoretical terms in terms of natural properties is quite enough to think that our theories get part of the structure of the world right. Perhaps we then lose the notion of formal equivalence the author seeks, but all the semantics the author uses were designed for a purpose: getting a formal grasp on the notion of truth, not on theoretical equivalence.