JON WILLIAMSON

Philosophy, SECL, University of Kent, Canterbury, CT2 7NF, UK.
email: j.williamson (at kent.ac.uk)

 

WHAT'S NEW

Project: From objective Bayesian epistemology to inductive logic, AHRC 2012-15.

Brendan Clarke, Donald Gillies, Phyllis Illari, Federica Russo & Jon Williamson: Mechanisms and the Evidence Hierarchy, Topoi doi: 10.1007/s11245-013-9220-9, 2013.

Evidence-based medicine (EBM) makes use of explicit procedures for grading evidence for causal claims. Normally, these procedures categorise evidence of correlation produced by statistical trials as better evidence for a causal claim than evidence of mechanisms produced by other methods. We argue, in contrast, that evidence of mechanisms needs to be viewed as complementary to, rather than inferior to, evidence of correlation. In this paper we first set out the case for treating evidence of mechanisms alongside evidence of correlation in explicit protocols for evaluating evidence. Next we provide case studies which exemplify the ways in which evidence of mechanisms complements evidence of correlation in practice. Finally, we put forward some general considerations as to how the two sorts of evidence can be more closely integrated by EBM.

Brendan Clarke, Bert Leuridan & Jon Williamson: Modelling mechanisms with causal cycles, Synthese doi: 10.1007/s11229-013-0360-7.

Mechanistic philosophy of science views a large part of scientific activity as engaged in modelling mechanisms. While science textbooks tend to offer qualitative models of mechanisms, there is increasing demand for models from which one can draw quantitative predictions and explanations. Casini et al. (2011) put forward the Recursive Bayesian Net (RBN) formalism as well suited to this end. The RBN formalism is an extension of the standard Bayesian net formalism, an extension that allows for modelling the hierarchical nature of mechanisms. Like the standard Bayesian net formalism, it models causal relationships using directed acyclic graphs. Given this appeal to acyclicity, causal cycles pose a prima facie problem for the RBN approach. This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that the problem can be solved by combining two sorts of solution strategy in a judicious way.

Jon Williamson: Deliberation, Judgement and the Nature of Evidence, Economics and Philosophy, in press.

One kind of deliberation involves an individual reassessing the strengths of her beliefs in the light of new evidence. Bayesian epistemology measures the strength to which one ought to believe a proposition by its probability relative to all available evidence, and thus provides a normative account of individual deliberation. This can be extended to an account of individual judgement by treating the act of judgement as a decision problem, amenable to the tools of decision theory. A normative account of public deliberation and judgement can be provided by merging the evidence of the individuals in question and calculating appropriate Bayesian probabilities and judgement thresholds relative to this merged evidence.

But this formal epistemology for deliberation and judgement lacks substance without an account of how evidence can be merged. And in order to provide such an account, we need in turn an account of what the evidence is that grounds Bayesian probabilities. This paper attempts to tackle these two concerns. After finding fault with several views on the nature of evidence (the views that evidence is knowledge; that evidence is whatever is fully believed; that evidence is observationally set credence; that evidence is information), it is argued that evidence is whatever is rationally taken for granted. This view has consequences for an account of merging, and it is shown that standard axioms for merging need to be altered somewhat.

Jürgen Landes & Jon Williamson: Objective Bayesianism and the Maximum Entropy Principle, Entropy 15(9): 3528-3591, 2013. doi: 10.3390/e15093528

Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities, they should be calibrated to our evidence of physical probabilities, and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy.

Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
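
The maximum entropy principle discussed here is easy to compute with in small cases. Below is a minimal Python sketch (my own illustration, using standard entropy and a made-up calibration constraint P(A) = 0.7 over two propositions A and B, rather than the paper's generalised entropies): it selects, from the calibrated probability functions, the one of maximum entropy.

    # Minimal sketch (my illustration): maximum entropy selection over the
    # four atomic states A&B, A&~B, ~A&B, ~A&~B, with the hypothetical
    # calibration constraint P(A) = 0.7 imposed by evidence.
    import numpy as np
    from scipy.optimize import minimize

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)           # avoid log(0)
        return float(np.sum(p * np.log(p)))  # minimising this maximises entropy

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},      # Probability norm
        {"type": "eq", "fun": lambda p: p[0] + p[1] - 0.7},  # Calibration: P(A) = 0.7
    ]
    best = minimize(neg_entropy, x0=np.full(4, 0.25),        # Equivocation
                    bounds=[(0.0, 1.0)] * 4, constraints=constraints)
    print(best.x.round(3))  # [0.35 0.35 0.15 0.15]: maximally equivocal given P(A) = 0.7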

Jon Williamson: How uncertain do we need to be? Erkenntnis, doi: 10.1007/s10670-013-9516-6.

Expert probability forecasts can be useful for decision making (§1). But levels of uncertainty escalate: however the forecaster expresses the uncertainty that attaches to a forecast, there are good reasons for her to express a further level of uncertainty, in the shape of either imprecision or higher order uncertainty (§2). Bayesian epistemology provides the means to halt this escalator, by tying expressions of uncertainty to the propositions expressible in an agent’s language (§3). But Bayesian epistemology comes in three main varieties. Strictly subjective Bayesianism and empirically-based subjective Bayesianism have difficulty in justifying the use of a forecaster’s probabilities for decision making (§4). On the other hand, objective Bayesianism can justify the use of these probabilities, at least when the probabilities are consistent with the agent’s evidence (§5). Hence objective Bayesianism offers the most promise overall for explaining how testimony of uncertainty can be useful for decision making.

Interestingly, the objective Bayesian analysis provided in §5 can also be used to justify a version of the Principle of Reflection (§6).

Jon Williamson: How can causal explanations explain? Erkenntnis 78:257-275, 2013. doi: 10.1007/s10670-013-9512-x

The mechanistic and causal accounts of explanation are often conflated to yield a `causal-mechanical' account. This paper prizes them apart and asks: if the mechanistic account is correct, how can causal explanations be explanatory? The answer to this question varies according to how causality itself is understood. It is argued that difference-making, mechanistic, dualist and inferentialist accounts of causality all struggle to yield explanatory causal explanations, but that an epistemic account of causality is more promising in this regard.

Philosophy of Causality

  • projects: Causality across the levels (BA 2009-11)
    The levels of causality (BA 2008)
    Mechanisms and causality (Leverhulme 2007-10)
  • conferences: Mechanisms and causality in the sciences
  • monograph: Bayesian nets and causality
  • articles: mainly on my epistemic theory of causality

Foundations of Probability

  • project: In defence of objective Bayesianism (Leverhulme 2007-9)
  • conferences: Multiplicity and Unification in Statistics and Probability
    Bayesianism 2000
  • monograph: In defence of objective Bayesianism
  • collection: Foundations of Bayesianism
  • articles: mainly on objective Bayesianism

Logics and Reasoning

  • Centre for Reasoning
  • gazette: The Reasoner
  • The Reasoning Club
  • projects: From objective Bayesian epistemology to inductive logic (AHRC 2012-15)
    progicnet: Probabilistic logic and probabilistic networks (Leverhulme 2006-8)
  • conferences: progic conference series
  • blog: Choice and inference
  • monograph: Probabilistic logics and probabilistic networks 
  • collections: Key terms in logic
    Combining probability and logic I, II, III 
  • articles: mainly on logic and probability

Applications to the sciences

  • projects: Mechanisms and the evidence hierarchy (AHRC 2012)
    Causality and the interpretation of probability in the social and health sciences (BA 2006)

    caOBNET: Objective Bayesian nets for integrating cancer knowledge: a systems biology approach (2006-11)
  • conferences: Causality in the Sciences conference series
  • blog: It's only a theory
  • collections: Causality in the sciences
    Causality and probability in the sciences
  • articles: mainly on applications of causality and probability

 


MONOGRAPHS AND COLLECTIONS

Phyllis McKay Illari, Federica Russo & Jon Williamson (eds): Causality in the sciences, Oxford University Press, 2011.

There is a need for integrated thinking about causality, probability and mechanisms in scientific methodology. Causality and probability are long-established central concepts in the sciences, with a corresponding philosophical literature examining their problems. On the other hand, the philosophical literature examining mechanisms is not long-established, and there is no clear idea of how mechanisms relate to causality and probability. But we need some idea if we are to understand causal inference in the sciences: a panoply of disciplines, ranging from epidemiology to biology, from econometrics to physics, routinely make use of probability, statistics, theory and mechanisms to infer causal relationships.

These disciplines have developed very different methods, where causality and probability often seem to have different understandings, and where the mechanisms involved often look very different. This variegated situation raises the question of whether the different sciences are really using different concepts, or whether progress in understanding the tools of causal inference in some sciences can lead to progress in other sciences. The book tackles these questions as well as others concerning the use of causality in the sciences.

Jon Williamson: In defence of objective Bayesianism, Oxford University Press, 2010.

How strongly should you believe the various propositions that you can express?

That is the key question facing Bayesian epistemology. Subjective Bayesians hold that it is largely (though not entirely) up to the agent as to which degrees of belief to adopt. Objective Bayesians, on the other hand, maintain that appropriate degrees of belief are largely (though not entirely) determined by the agent's evidence. This book states and defends a version of objective Bayesian epistemology. According to this version, objective Bayesianism is characterized by three norms:
· Probability - degrees of belief should be probabilities
· Calibration - they should be calibrated with evidence
· Equivocation - they should otherwise equivocate between basic outcomes
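
Stated a little more formally (a standard rendering of these norms on a finite domain, not a quotation from the book): writing Omega for the set of atomic states and E for the set of probability functions compatible with the agent's evidence, the norms jointly direct the agent to adopt a belief function

    \[
      P \;\in\; \operatorname*{arg\,max}_{Q \in \mathbb{E}} H(Q),
      \qquad
      H(Q) \;=\; -\sum_{\omega \in \Omega} Q(\omega)\,\log Q(\omega),
    \]

that is, a probability function (Probability) lying within E (Calibration) that is otherwise maximally equivocal (Equivocation).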

Objective Bayesianism has been challenged on a number of different fronts. For example, some claim it is poorly motivated, or fails to handle qualitative evidence, or yields counter-intuitive degrees of belief after updating, or suffers from a failure to learn from experience. It has also been accused of being computationally intractable, susceptible to paradox, language dependent, and of not being objective enough.

Especially suitable for graduates or researchers in philosophy of science, foundations of statistics and artificial intelligence, the book argues that these criticisms can be met and that objective Bayesianism is a promising theory with an exciting agenda for further research.

Rolf Haenni, Jan-Willem Romeijn, Gregory Wheeler & Jon Williamson: Probabilistic logics and probabilistic networks, Synthese Library, Springer, 2011.

While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic volume that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences.

Specifically, we show in Part I that there is a natural way to present a question posed in probabilistic logic, and that various inferential procedures provide semantics for that question: the standard probabilistic semantics (which takes probability functions as models), probabilistic argumentation (which considers the probability of a hypothesis being a logical consequence of the available evidence), evidential probability (which handles reference classes and frequency data), classical statistical inference (in particular the fiducial argument), Bayesian statistical inference (which ascribes probabilities to statistical hypotheses), and objective Bayesian epistemology (which determines appropriate degrees of belief on the basis of available evidence).

Further, we argue, there is the potential to develop computationally feasible methods to mesh with this framework. In particular, we show in Part I how credal and Bayesian networks can naturally be applied as a calculus for probabilistic logic. The probabilistic network itself depends upon the chosen semantics, but once the network is constructed, common machinery can be applied to generate answers to the fundamental question introduced in Part I.
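
To give a concrete flavour of the fundamental question in its simplest, standard probabilistic semantics, here is a toy Python sketch (my own example, not drawn from the book): a single hypothetical premise attaches a probability bound to a sentence, and linear programming over the atomic states yields the interval attaching to a conclusion.

    # Toy example (mine, not the book's): premise P(a & b) >= 0.7; question:
    # what probability interval attaches to the conclusion b? Under the
    # standard probabilistic semantics this is linear programming over the
    # atomic states x = (P(a&b), P(a&~b), P(~a&b), P(~a&~b)).
    import numpy as np
    from scipy.optimize import linprog

    A_eq, b_eq = [[1, 1, 1, 1]], [1.0]    # probabilities sum to one
    A_ub, b_ub = [[-1, 0, 0, 0]], [-0.7]  # premise: P(a & b) >= 0.7
    c = np.array([1, 0, 1, 0])            # objective: P(b) = P(a&b) + P(~a&b)

    lo = linprog(c,  A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
    hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
    print(lo.fun, -hi.fun)  # 0.7 1.0, so the interval [0.7, 1] attaches to b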

Jon Williamson & Federica Russo (eds): Key terms in logic, Continuum, 2010.

Key Terms in Logic offers the ideal introduction to this core area in the study of philosophy, providing detailed summaries of the important concepts in the study of logic and the application of logic to the rest of philosophy. A brief introduction provides context and background, while the following chapters offer detailed definitions of key terms and concepts, introductions to the work of key thinkers and lists of key texts. Designed specifically to meet the needs of students and assuming no prior knowledge of the subject, this is the ideal reference tool for those coming to Logic for the first time.

Fabio Cozman, Rolf Haenni, Jan-Willem Romeijn, Federica Russo, Gregory Wheeler & Jon Williamson (eds): Combining probability and logic, Special Issue, Journal of Applied Logic 7(2), 2009.

Federica Russo & Jon Williamson (eds): Causality and probability in the sciences, London: College Publications, Texts in Philosophy Series, 2007.

Causal inference is perhaps the most important form of reasoning in the sciences. A panoply of disciplines, ranging from epidemiology to biology, from econometrics to physics, make use of probability and statistics in order to infer causal relationships. However, the very foundations of causal inference are up in the air; it is by no means clear which methods of causal inference should be used, nor why they work when they do.
This book brings philosophers and scientists together to tackle these important questions. The papers in this volume shed light on the relationship between causality and probability and the application of these concepts within the sciences. With its interdisciplinary perspective and its careful analysis, Causality and probability in the sciences heralds the transition of causal inference from an art to a science.

Jon Williamson (ed.): Combining probability and logic, Special Issue, Journal of Logic, Language and Information 15(1-2), 2006.

Jon Williamson: Bayesian nets and causality: philosophical and computational foundations, Oxford University Press, 2005.

Bayesian nets are widely used in artificial intelligence as a calculus for causal reasoning, enabling machines to make predictions, perform diagnoses, take decisions and even to discover causal relationships. This book, aimed at researchers and graduate students in computer science, mathematics and philosophy, brings together two important research topics: how to automate reasoning in artificial intelligence, and the nature of causality and probability in philosophy.

Jon Williamson & Dov Gabbay (eds): Combining probability and logic, Special Issue, Journal of Applied Logic 1(3-4), 2003; Editorial, pp. 135-138.

David Corfield & Jon Williamson (eds): Foundations of Bayesianism, Kluwer Applied Logic Series, Kluwer, 2001.

Foundations of Bayesianism is an authoritative collection of papers addressing the key challenges that face the Bayesian interpretation of probability today. Some of these papers seek to clarify the relationships between Bayesian, causal and logical reasoning. Others consider the application of Bayesianism to artificial intelligence, decision theory, statistics and the philosophy of science and mathematics. The volume includes important criticisms of Bayesian reasoning and also gives an insight into some of the points of disagreement amongst advocates of the Bayesian approach. The upshot is a plethora of new problems and directions for Bayesians to pursue. The book will be of interest to graduate students or researchers who wish to learn more about Bayesianism than can be provided by introductory textbooks to the subject. Those involved with the applications of Bayesian reasoning will find essential discussion on the validity of Bayesianism and its limits, while philosophers and others interested in pure reasoning will find new ideas on normativity and the logic of belief.

 

 

ARTICLES

Philosophy of Causality

Jon Williamson: Mechanistic theories of causality, Philosophy Compass 6(6): 421-432 (Part I), 433-444 (Part II), 445-447 (teaching and learning guide), 2011.

Part I of this paper introduces a range of mechanistic theories of causality, including process theories and the complex-systems theories, and some of the problems they face. Part II argues that while there is a decisive case against a purely mechanistic analysis, a viable theory of causality must incorporate mechanisms as an ingredient, and describes one way of providing an analysis of causality which reaps the rewards of the mechanistic approach without succumbing to its pitfalls.

Jon Williamson: Probabilistic theories [of causality], in Helen Beebee, Chris Hitchcock & Peter Menzies (eds): The Oxford Handbook of Causation, Oxford University Press, pp. 185-212, 2009.

This chapter provides an overview of a range of probabilistic theories of causality, including those of Reichenbach, Good and Suppes, and the contemporary causal net approach. It discusses two key problems for probabilistic accounts: counterexamples to these theories and their failure to account for the relationship between causality and mechanisms. It is argued that to overcome the problems, an epistemic theory of causality is required.

Jon Williamson: Causal pluralism versus epistemic causality, Philosophica 77(1), pp. 69-96, 2006.

It is tempting to analyse causality in terms of just one of the indicators of causal relationships, e.g., mechanisms, probabilistic dependencies or independencies, counterfactual conditionals or agency considerations. While such an analysis will surely shed light on some aspect of our concept of cause, it will fail to capture the whole, rather multifarious, notion. So one might instead plump for pluralism: a different analysis for a different occasion. But we do not seem to have lots of different concepts of cause - just one eclectic notion. The resolution of this conundrum, I think, requires us to accept that our causal beliefs are generated by a wide variety of indicators, but to deny that this variety of indicators yields a variety of concepts of cause. This focus on the relation between evidence and causal beliefs leads to what I call *epistemic* causality. Under this view, certain causal beliefs are appropriate or rational on the basis of observed evidence; our notion of cause can be understood purely in terms of these rational beliefs. Causality, then, is a feature of our epistemic representation of the world, rather than of the world itself. This yields one, multifaceted notion of cause.

Jon Williamson: Dispositional versus epistemic causality, Minds and Machines 16, pp. 259-276, 2006.

I put forward several desiderata that a philosophical theory of causality should satisfy: it should account for the objectivity of causality, it should underpin formalisms for causal reasoning, it should admit a viable epistemology, it should be able to cope with the great variety of causal claims that are made, and it should be ontologically parsimonious. I argue that Nancy Cartwright's dispositional account of causality goes part way towards meeting these criteria but is lacking in important respects. I go on to argue that my epistemic account, which ties causal relationships to an agent's knowledge and ignorance, performs well in the light of the desiderata. Such an account, I claim, is all we require from a theory of causality.

Jon Williamson: Causality, in Dov Gabbay & F. Guenthner (eds.): Handbook of Philosophical Logic, volume 14, Springer, pp. 95-126, 2007.

This chapter addresses two questions: what are causal relationships? how can one discover causal relationships? I provide a survey of the principal answers given to these questions, followed by an introduction to my own view, epistemic causality, and then a comparison of epistemic causality with accounts provided by Judea Pearl and Huw Price.

Jon Williamson & Dov Gabbay: Recursive Causality in Bayesian Networks and Self-Fibring Networks, in Donald Gillies (ed.): `Laws and models in science', London: King's College Publications, 2005, pp. 173-221, with comments pp. 223-245.

Jon Williamson: Learning causal relationships, Discussion Paper 02/02, LSE Centre for Natural and Social Sciences.

How ought we learn causal relationships? While Popper advocated a hypothetico-deductive logic of causal discovery, inductive accounts are currently in vogue. Many inductive approaches depend on the causal Markov condition as a fundamental assumption. This condition, I maintain, is not universally valid, though it is justifiable as a default assumption, in which case the results of the inductive causal learning procedure must be tested before they can be accepted. This yields a synthesis of the hypothetico-deductive and inductive accounts, which forms the focus of this paper. I discuss the justification of this synthesis and draw an analogy between objective Bayesianism and the account of causal learning presented here.

Foundations of Probability

Jon Williamson: Why Frequentists and Bayesians Need Each Other, Erkenntnis 78:293-318, 2013. doi: 10.1007/s10670-011-9317-8.

The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed - two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.

Jon Williamson: Calibration and Convexity: Response to Gregory Wheeler, British Journal for the Philosophy of Science 63:851-857, 2012.

This note responds to some criticisms of my recent book In Defence of Objective Bayesianism that were provided by Gregory Wheeler in his ‘Objective Bayesian Calibration and the Problem of Non-convex Evidence’.

Jon Williamson: An objective Bayesian account of confirmation, in Dennis Dieks, Wenceslao J. Gonzalez, Stephan Hartmann, Thomas Uebel, Marcel Weber (eds), `Explanation, Prediction, and Confirmation. New Trends and Old Ones Reconsidered', The philosophy of science in a European perspective Volume 2, Springer, 2011, pp. 53-81.

This paper revisits Carnap's theory of degree of confirmation, identifies certain shortcomings, and argues that a new approach based on objective Bayesian epistemology can overcome these shortcomings.

Jon Williamson: Bruno de Finetti: Philosophical lectures on probability, Philosophia Mathematica 18(1): 130-135, 2010.

Jon Williamson: Epistemic complexity from an objective Bayesian perspective, in A. Carsetti (ed.) `Causality, meaningful complexity and embodied cognition', Springer, pp. 231-246, 2010.

Evidence can be complex in various ways: e.g., it may exhibit structural complexity, containing information about causal, hierarchical or logical structure as well as empirical data, or it may exhibit combinatorial complexity, containing a complex combination of kinds of information. This paper examines evidential complexity from the point of view of Bayesian epistemology, asking: how should complex evidence impact on an agent's degrees of belief? The paper presents a high-level overview of an objective Bayesian answer: it presents the objective Bayesian norms concerning the relation between evidence and degrees of belief, and goes on to show how evidence of causal, hierarchical and logical structure leads to natural constraints on degrees of belief. The objective Bayesian network formalism is presented, and it is shown how this formalism can be used to handle both kinds of evidential complexity - structural complexity and combinatorial complexity.

Jon Williamson: Objective Bayesianism, Bayesian conditionalisation and voluntarism, Synthese 178(1): 67-85, 2011.

Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief.

One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.

Jon Williamson: Objective Bayesianism with predicate languages, Synthese 163(3), pp. 341-356, 2008.

Objective Bayesian probability is normally defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the lambda=0 function of Carnap's continuum of inductive methods.
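
For reference, Carnap's lambda-continuum mentioned here is standardly written as follows (a textbook statement, not quoted from the paper), where n_i of the n individuals observed so far are of type i and there are t types in all:

    \[
      P(\text{the next individual is of type } i \mid e_n)
        \;=\; \frac{n_i + \lambda/t}{n + \lambda},
      \qquad 0 \le \lambda \le \infty,
    \]

Setting lambda = 0 gives the purely frequency-driven function n_i/n, which the frequency-induced probability function of the paper generalises.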

Jon Williamson: Inductive influence, British Journal for the Philosophy of Science 58, pp. 689-708, 2007.

Objective Bayesianism has been criticised for not allowing learning from experience: it is claimed that an agent must give degree of belief 1/2 to the next raven being black, however many other black ravens have been observed. I argue that this objection can be overcome by appealing to *objective Bayesian nets*, a formalism for representing objective Bayesian degrees of belief. Under this account, previous observations exert an *inductive influence* on the next observation. I show how this approach can be used to capture the Johnson-Carnap continuum of inductive methods, as well as the Nix-Paris continuum, and show how inductive influence can be measured.

Jon Williamson: Objective Bayesian nets, in S. Artemov, H. Barringer, A. S. d'Avila Garcez, L. C. Lamb, and J. Woods (eds.): We Will Show Them: Essays in Honour of Dov Gabbay, Vol 2., pp. 713-730, College Publications, 2005.

I present a formalism that combines two methodologies: *objective Bayesianism* and *Bayesian nets*. According to *objective Bayesianism*, an agent's degrees of belief (i) ought to satisfy the axioms of probability, (ii) ought to satisfy constraints imposed by background knowledge, and (iii) should otherwise be as non-committal as possible (i.e. have maximum entropy). *Bayesian nets* offer an efficient way of representing and updating probability functions. An *objective Bayesian net* is a Bayesian net representation of the maximum entropy probability function.
I show how objective Bayesian nets can be constructed, updated and combined, and how they can deal with cases in which the agent's background knowledge includes knowledge of qualitative *influence relationships*, e.g. causal influences. I then sketch a number of applications of the resulting formalism, showing how it can shed light on probability logic, causal modelling, logical reasoning, semantic reasoning, argumentation and recursive modelling.
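
The following Python sketch (my own toy illustration, not the paper's construction) exhibits the key fact such a formalism exploits: when background knowledge constrains only the (A,B) and (B,C) marginals, the maximum entropy function renders A and C independent conditional on B, and so can be represented compactly by the Bayesian net A -> B -> C.

    # Toy illustration (mine): constraints only on the (A,B) and (B,C)
    # marginals; the maximum entropy function then satisfies the conditional
    # independence of A and C given B, licensing the net A -> B -> C.
    import numpy as np
    from scipy.optimize import minimize

    states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return float(np.sum(p * np.log(p)))

    def prob(p, event):  # probability of an event, summed over atomic states
        return sum(pi for pi, s in zip(p, states) if event(s))

    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: prob(p, lambda s: s[0] == 1 and s[1] == 1) - 0.3},
        {"type": "eq", "fun": lambda p: prob(p, lambda s: s[1] == 1 and s[2] == 1) - 0.2},
    ]
    p = minimize(neg_entropy, np.full(8, 1 / 8),
                 bounds=[(0, 1)] * 8, constraints=cons).x

    for b in (0, 1):  # check P(A,C|B=b) = P(A|B=b) P(C|B=b), up to optimiser tolerance
        pb = prob(p, lambda s: s[1] == b)
        joint = prob(p, lambda s: s == (1, b, 1)) / pb
        product = (prob(p, lambda s: s[0] == 1 and s[1] == b) / pb) * \
                  (prob(p, lambda s: s[1] == b and s[2] == 1) / pb)
        print(round(joint, 4), round(product, 4))  # the two columns agree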

Jon Williamson: Motivating objective Bayesianism: from empirical constraints to objective probabilities, in William L. Harper and Gregory R. Wheeler (eds.): Probability and Inference: Essays in Honor of Henry E. Kyburg Jr. London: College Publications, 2007, pp. 155-183;

Kyburg goes half-way towards objective Bayesianism. He accepts that frequencies constrain rational belief to an interval but stops short of isolating an optimal degree of belief within this interval. I examine the case for going the whole hog.

Jon Williamson: Philosophies of probability, in Andrew Irvine (ed.): Handbook of the Philosophy of Mathematics, Volume 4 of the Handbook of the Philosophy of Science, North-Holland, 2009, pp. 493-533.

This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces. I discuss the ramifications of interpretations of probability and objective Bayesianism for the philosophy of mathematics in general.

Jon Williamson: Maximising entropy efficiently, Electronic Transactions in Artificial Intelligence 6, 2002.

Determining a prior probability function via the maximum entropy principle can be a computationally intractable task. However one can easily determine - in advance of entropy maximisation - a list of conditional independencies that the maximum entropy function will satisfy. These independencies can be used to reduce the complexity of the entropy maximisation task. In particular, one can use these independencies to construct the directed acyclic graph of a Bayesian network, and then maximise entropy with respect to the numerical parameters of this network. This can result in an efficient representation of a prior probability function, and one that may allow efficient updating and marginalisation. The computational complexity of maximising entropy can be further reduced when knowledge of causal relationships is available. Moreover, the proposed simplification of the entropy maximisation task may be exploited to construct a proof theory for probabilistic logic.

Jon Williamson: Bayesianism and language change, Journal of Logic, Language and Information, 12(1), 2003, pp. 53-97.

Bayesian probability is normally defined over a fixed language or event space. But in practice language is susceptible to change, and the question naturally arises as to how Bayesian degrees of belief should change as language changes. I argue here that this question poses a serious challenge to Bayesianism. The Bayesian may be able to meet this challenge however, and I outline a practical method for changing degrees of belief over changes in finite propositional languages.

Jon Williamson & David Corfield: Bayesianism into the 21st century, in David Corfield & Jon Williamson (eds): `Foundations of Bayesianism', Kluwer Applied Logic Series, 2001, pp.1-16.

Jon Williamson: Countable additivity and subjective probability, British Journal for the Philosophy of Science 50(3), 1999, pp. 401-416.

While there are several arguments on either side, it is far from clear whether or not countable additivity is an acceptable axiom of subjective probability. I focus here on de Finetti's central argument against countable additivity and provide a new Dutch book proof of the principle, to argue that if we accept the Dutch book foundations of subjective probability, countable additivity is an unavoidable constraint.
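
For reference, the principle at issue says that for any countably infinite sequence of pairwise disjoint events A_1, A_2, ...:

    \[
      P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) \;=\; \sum_{i=1}^{\infty} P(A_i),
    \]

Finite additivity, which de Finetti was prepared to accept, asserts this only for finitely many events.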

Jon Williamson: Foundations for Bayesian networks, in David Corfield & Jon Williamson (eds): Foundations of Bayesianism, Kluwer Applied Logic Series, 2001, pp. 75-115. Presented at Bayesianism 2000 (May 11-12, 2000).

Bayesian networks may either be treated purely formally or be given an interpretation. I argue that current foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches. 

Logics and Reasoning

Jon Williamson: From Bayesian epistemology to inductive logic, Journal of Applied Logic 11:468-486, 2013. doi: 10.1016/j.jal.2013.03.006

Inductive logic admits a variety of semantics (Haenni et al., 2011, Part 1). This paper develops semantics based on the norms of Bayesian epistemology (Williamson, 2010, Chapter 7). §1 introduces the semantics and then, in §2, the paper explores methods for drawing inferences in the resulting logic and compares the methods of this paper with the methods of Barnett and Paris (2008). §3 then evaluates this Bayesian inductive logic in the light of four traditional critiques of inductive logic, arguing (i) that it is language independent in a key sense, (ii) that it admits connections with the Principle of Indifference but these connections do not lead to paradox, (iii) that it can capture the phenomenon of learning from experience, and (iv) that while the logic advocates scepticism with regard to some universal hypotheses, such scepticism is not problematic from the point of view of scientific theorising.

Jon Williamson: Review of ‘Reliable Reasoning’ by Gilbert Harman and Sanjeev Kulkarni, Mind 121:1073-1076, 2013. doi: 10.1093/mind/fzt006.

Jon Williamson: Inductive logic, The Reasoner 6(11):176-7, 2012.

Gregory Wheeler & Jon Williamson: Evidential probability and objective Bayesian epistemology, in Prasanta S. Bandyopadhyay & Malcolm R. Forster (eds): Philosophy of statistics, Handbook of the Philosophy of Science volume 7, Elsevier, pp. 307-331, 2011.

In this chapter we draw connections between two seemingly opposing approaches to probability and statistics: evidential probability on the one hand and objective Bayesian epistemology on the other.

Jan-Willem Romeijn, Rolf Haenni, Gregory Wheeler and Jon Williamson: Logical Relations in a Statistical Problem, in B. Lowe, E. Pacuit & J.W. Romeijn (eds): Foundations of the Formal Sciences VI, Reasoning about Probabilities and Probabilistic Reasoning, London: College Publications, pp. 49-79, 2009.

This paper presents the progicnet programme. It proposes a general framework for probabilistic logic that can guide inference based on both logical and probabilistic input. After an introduction to the framework as such, it is illustrated by means of a toy example from psychometrics. It is shown that the framework can accommodate a number of approaches to probabilistic reasoning: Bayesian statistical inference, evidential probability, probabilistic argumentation, and objective Bayesianism. The framework thus provides insight into the relations between these approaches, it illustrates how the results of different approaches can be combined, and it provides a basis for doing efficient inference in each of the approaches.

Jon Williamson: Aggregating judgements by merging evidence, Journal of Logic and Computation 19(3), pp. 461-473, 2009.

The theory of belief revision and merging has recently been applied to judgement aggregation. In this paper I argue that judgements are best aggregated by merging the evidence on which they are based, rather than by directly merging the judgements themselves. This leads to a three-step strategy for judgement aggregation. First, merge the evidence bases of the various agents using some method of belief merging. Second, determine which degrees of belief one should adopt on the basis of this merged evidence base, by applying objective Bayesian theory. Third, determine which judgements are appropriate given these degrees of belief by applying a decision-theoretic account of rational judgement formation.
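
The three-step strategy can be pictured as a pipeline. The following Python sketch is entirely schematic (the merging operator, the stand-in for objective Bayesian belief formation and the judgement threshold are all placeholder assumptions of mine, not the paper's machinery):

    # Schematic sketch (my own toy rendering) of the three-step strategy.
    def merge(evidence_bases):
        """Step 1: merge the agents' evidence bases (naive union here;
        the paper discusses proper belief-merging operators)."""
        merged = {}
        for base in evidence_bases:
            merged.update(base)
        return merged

    def degrees_of_belief(merged):
        """Step 2: stand-in for objective Bayesian belief formation: adopt
        evidential probabilities where evidence speaks, equivocate
        (probability 1/2) where it is silent."""
        return lambda proposition: merged.get(proposition, 0.5)

    def judge(belief, proposition, threshold=0.9):
        """Step 3: decision-theoretic judgement via an illustrative threshold."""
        return belief(proposition) >= threshold

    belief = degrees_of_belief(merge([{"p": 0.95}, {"q": 0.6}]))
    print(judge(belief, "p"), judge(belief, "q"))  # True False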

Rolf Haenni, Jan-Willem Romeijn, Gregory Wheeler and Jon Williamson: Possible Semantics for a Common Framework of Probabilistic Logics, in V. N. Huynh (ed.): Interval / Probabilistic Uncertainty and Non-Classical Logics, Advances in Soft Computing Series, Springer 2008, pp. 268-279.

This paper proposes a common framework for various probabilistic logics. It consists of a set of uncertain premises with probabilities attached to them. This raises the question of the strength of a conclusion, but without imposing a particular semantics no general solution is possible. The paper discusses several possible semantics by looking at the framework from the perspective of probabilistic argumentation.

Jon Williamson: A note on probabilistic logics and probabilistic networks, The Reasoner 2(5), pp. 4-5, 2008.

Jon Williamson: Objective Bayesian probabilistic logic, Journal of Algorithms in Cognition, Informatics and Logic 63: 167-183, 2008.

This paper develops connections between objective Bayesian epistemology - which holds that the strengths of an agent's beliefs should be representable by probabilities, should be calibrated with evidence of empirical probability, and should otherwise be equivocal - and probabilistic logic. After introducing objective Bayesian epistemology over propositional languages, the formalism is extended to handle predicate languages. A rather general probabilistic logic is formulated and then given a natural semantics in terms of objective Bayesian epistemology. The machinery of objective Bayesian nets and objective credal nets is introduced and this machinery is applied to provide a calculus for probabilistic logic that meshes with the objective Bayesian semantics.

Jon Williamson: Combining probability and logic: introduction, Journal of Logic, Language and Information 15(1-2), special issue on Combining Probability and Logic, pp. 1-3, 2006.

Jon Williamson & Dov Gabbay: Combining probability and logic - editorial, Journal of Applied Logic 1(3-4), Special Issue on Combining probability and logic, 2003, pp. 135-138.

Jon Williamson: Abduction and its distinctions, Review of Lorenzo Magnani [2001]: Abduction, reason and science: processes of discovery and explanation, British Journal for the Philosophy of Science 54(2), 2003, pp. 353-358.

Jon Williamson: Bayesian networks for logical reasoning, in Carla Gomes & Toby Walsh (eds) [2001]: Proceedings of the AAAI Fall Symposium on using Uncertainty within Computation, AAAI Press Technical Report FS-01-04, pp. 136-143.

By identifying and pursuing analogies between causal and logical influence I show how the Bayesian network formalism can be applied to reasoning about logical deductions.

Jon Williamson: Probability logic, in Dov Gabbay, Ralph Johnson, Hans Jurgen Ohlbach & John Woods (eds) [2002]: Handbook of the Logic of Inference and Argument: The Turn Toward the Practical, Studies in Logic and Practical Reasoning Volume 1, Elsevier, pp. 397-424.

I examine the idea of incorporating probability into logic for a logic of practical reasoning. I introduce probability and its interpretations, give an account of the development of the logical approach to probability, its immediate problems, and improved formulations. Then I discuss inference in probabilistic logic, and propose the use of Bayesian networks for inference in both causal logics and proof planning. 

Applications to the sciences

Brendan Clarke, Donald Gillies, Phyllis Illari, Federica Russo & Jon Williamson: The evidence that evidence-based medicine omits, Preventive Medicine 57:745-747, 2013. doi: 10.1016/j.ypmed.2012.10.020

According to current hierarchies of evidence for EBM, evidence of correlation (e.g., from RCTs) is always more important than evidence of mechanisms when evaluating and establishing causal claims. We argue that evidence of mechanisms needs to be treated alongside evidence of correlation. This is for three reasons. First, correlation is always a fallible indicator of causation, subject in particular to the problem of confounding; evidence of mechanisms can in some cases be more important than evidence of correlation when assessing a causal claim. Second, evidence of mechanisms is often required in order to obtain evidence of correlation (for example, in order to set up and evaluate RCTs). Third, evidence of mechanisms is often required in order to generalise and apply causal claims.

While the EBM movement has been enormously successful in making explicit and critically examining one aspect of our evidential practice, i.e., evidence of correlation, we wish to extend this line of work to make explicit and critically examine a second aspect of our evidential practices: evidence of mechanisms.

Phyllis McKay Illari and Jon Williamson: In defence of activities, Journal for General Philosophy of Science 44(1):69-83, 2013. doi: 10.1007/s10838-013-9217-5.

In this paper, we examine what is to be said in defence of Machamer, Darden and Craver’s controversial dualism about activities and entities (MDC 2000). We explain why we believe the notion of an activity to be a novel, valuable one, and set about clearing away some initial objections that can lead to its being brushed aside unexamined.  We argue that substantive debate about ontology can only be effective when desiderata for an ontology are explicitly articulated.  We distinguish three such desiderata.  The first is a more permissive descriptive ontology of science, the second a more reductive ontology prioritising understanding, and the third a more reductive ontology prioritising minimalism.  We compare MDC’s entities-activities ontology to its closest rival, the entities-capacities ontology, and argue that the entities-activities ontology does better on all three desiderata.

Federica Russo & Jon Williamson: EnviroGenomarkers: the interplay between mechanisms and difference making in establishing causal claims, Medicine Studies: International Journal for the History, Philosophy and Ethics of Medicine & Allied Sciences 3:249-262, 2012.

According to Russo and Williamson (2007, 2011a,b), in order to establish a causal claim of the form `C is a cause of E', one needs evidence that there is an underlying mechanism between C and E as well as evidence that C makes a difference to E. This thesis has been used to argue that hierarchies of evidence, as championed by evidence-based movements, are flawed in giving primacy to evidence of difference making over evidence of mechanism: both sorts of evidence are required, and they should be treated on a par.

An alternative approach gives primacy to evidence of mechanism over evidence of difference making. In this paper we argue that this alternative approach is equally flawed, again because both sorts of evidence need to be treated on a par. As an illustration of this parity we explain how scientists working in the `EnviroGenomarkers' project constantly make use of the two evidential components in a dynamic and intertwined way. We argue that such an interplay is needed not only for causal assessment but also for policy purposes.

Phyllis McKay Illari and Jon Williamson: What is a mechanism? Thinking about mechanisms across the sciences, European Journal for Philosophy of Science 2:119-135, 2012.

After a decade of intense debate about mechanisms, there is still no consensus characterization.  In this paper we argue for a characterization that applies widely to mechanisms across the sciences.  We examine and defend our disagreements with the major current contenders for characterizations of mechanisms.  Ultimately, we indicate that the major contenders can all sign up to our characterization.

Federica Russo and Jon Williamson: Epistemic causality and evidence-based medicine, History and Philosophy of the Life Sciences 33(4):563-582, 2011.

Causal claims in biomedical contexts are ubiquitous, albeit not always made explicit. This paper addresses the question of what causal claims mean in the context of disease. It is argued that in medical contexts causality ought to be interpreted according to the epistemic theory. The epistemic theory offers an alternative to traditional accounts that cash out causation either in terms of ‘difference-making’ relations or in terms of mechanisms. According to the epistemic approach, causal claims tell us about which inferences (e.g., diagnoses and prognoses) are appropriate, rather than about the presence of some physical causal relation analogous to distance or gravitational attraction. It is shown that the epistemic theory has important consequences for medical practice, in particular with regard to evidence-based causal assessment.

Lorenzo Casini, Phyllis McKay Illari, Federica Russo and Jon Williamson: Models for prediction, explanation and control: recursive Bayesian networks, Theoria 26(1):5-33, 2011.

The Recursive Bayesian Net (RBN) formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, an RBN can be applied to all these tasks. We show in particular how a simple two-level RBN can be used to model a mechanism in cancer science. The higher level of our model contains variables at the clinical level, while the lower level maps the structure of the cell's mechanism for apoptosis.

Barbara Osimani, Federica Russo and Jon Williamson: Scientific evidence and the law: an objective Bayesian formalisation of the precautionary principle in pharmaceutical regulation, Journal of Philosophy, Science and Law 11, 2011.

The paper considers the legal tools that have been developed in German pharmaceutical regulation as a result of the precautionary attitude inaugurated by the Contergan decision (1970). These tools are (i) the notion of “well-founded suspicion”, which attenuates the requirements for safety intervention by relaxing the requirement of a proved causal connection between danger and source, and (ii) the reversal of the burden of proof in liability norms. The paper focuses on the first and proposes seeing the precautionary principle as an instance of the requirement that one should maximise expected utility. In order to maximise expected utility certain probabilities are required, and it is argued that objective Bayesianism offers the most plausible means to determine the optimal decision in cases where evidence supports diverging choices.

George Darby and Jon Williamson: Imaging Technology and the Philosophy of Causality, Philosophy and Technology 24(2): 115-136, 2011.

Russo and Williamson (2007) put forward the thesis that, at least in the health sciences, to establish the claim that C is a cause of E one normally needs evidence of an underlying mechanism linking C and E as well as evidence that C makes a difference to E. This epistemological thesis poses a problem for most current analyses of causality which, in virtue of analysing causality in terms of just one of mechanisms or difference making, cannot account for the need for the other kind of evidence. Weber (2009) has suggested to the contrary that Giere’s probabilistic analysis of causality survives this criticism. In this paper we respond to Weber’s suggestion, arguing that Giere’s account does not survive the criticism, and we look in detail at the case of medical imaging technology, which, we argue, supports the thesis of Russo and Williamson (2007).

Federica Russo and Jon Williamson: Generic versus single-case causality: the case of autopsy, European Journal for Philosophy of Science 1(1): 47-69, 2011.

This paper addresses questions about how the levels of causality (generic and single case causality) are related. One question is epistemological: can relationships at one level be evidence for relationships at the other level? We present three kinds of answer to this question, categorised according to whether inference is top-down, bottom-up, or the levels are independent. A second question is metaphysical: can relationships at one level be reduced to relationships at the other level? We present three kinds of answer to this second question, categorised according to whether single-case relations are reduced to generic, generic relations are reduced to single-case, or the levels are independent.

We then explore causal inference in autopsy. This is an interesting case study, we argue, because it refutes all three epistemologies and all three metaphysics. We close by sketching an account of causality that survives autopsy - the epistemic theory.

Phyllis McKay Illari and Jon Williamson: Function and organization: comparing the mechanisms of protein synthesis and natural selection, Studies in History and Philosophy of Biological and Biomedical Sciences 41, pp. 279-291, 2010. doi: 10.1016/j.shpsc.2010.07.001

In this paper, we compare the mechanisms of protein synthesis and natural selection. We identify three core elements of mechanistic explanation: functional individuation, hierarchical nestedness or decomposition, and organization. These are now well understood elements of mechanistic explanation in fields such as protein synthesis, and widely accepted in the mechanisms literature. But Skipper and Millstein have argued (2005) that natural selection is neither decomposable nor organized. This would mean that much of the current mechanisms literature does not apply to the mechanism of natural selection.

We take each element of mechanistic explanation in turn. Having appreciated the importance of functional individuation, we show how decomposition and organization should be better understood in these terms. We thereby show that mechanistic explanation by protein synthesis and natural selection are more closely analogous than they appear – both possess all three of these core elements of a mechanism widely recognized in the mechanisms literature.

Phyllis McKay Illari and Jon Williamson: Mechanisms are real and local, in Phyllis McKay Illari, Federica Russo and Jon Williamson (eds): Causality in the Sciences, Oxford University Press, pp. 818-844, 2011.

Mechanisms have become much-discussed, yet there is still no consensus on how to characterise them. In this paper, we start with something everyone is agreed on – that mechanisms explain – and investigate what constraints this imposes on our metaphysics of mechanisms. We examine two widely shared premises about how to understand mechanistic explanation: (1) that mechanistic explanation offers a welcome alternative to traditional laws-based explanation and (2) that there are two senses of mechanistic explanation that we call ‘epistemic explanation’ and ‘physical explanation’. We argue that mechanistic explanation requires that mechanisms are both real and local. We then go on to argue that real, local mechanisms require a broadly active metaphysics for mechanisms, such as a capacities metaphysics.

Lorenzo Casini, Phyllis McKay Illari, Federica Russo and Jon Williamson: Recursive Bayesian networks for prediction, explanation and control in cancer science: a position paper, Proceedings of the First International Conference on Bioinformatics, Valencia, 20-23 January 2010.

The Recursive Bayesian Net formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of physical mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, a recursive Bayesian net can be applied to all these tasks.

We show how a Recursive Bayesian Net can be used to model mechanisms in cancer science. The highest level of the proposed model will contain variables at the clinical level, while a middle level will map the structure of the DNA damage response mechanism and the lowest level will contain information about gene expression.

Jon Williamson: The philosophy of science and its relation to machine learning, in Mohamed Medhat Gaber (ed.): Scientific Data Mining and Knowledge Discovery: Principles and Foundations, Springer, pp. 77-89, 2009.

In this chapter I discuss connections between machine learning and the philosophy of science. First I consider the relationship between the two disciplines. There is a clear analogy between hypothesis choice in science and model selection in machine learning. While this analogy has been invoked to argue that the two disciplines are essentially doing the same thing and should merge, I maintain that the disciplines are distinct but related and that there is a *dynamic interaction* operating between the two: a series of mutually beneficial interactions that changes over time. I will introduce some particularly fruitful interactions, in particular the consequences of automated scientific discovery for the debate on inductivism versus falsificationism in the philosophy of science, and the importance of philosophical work on Bayesian epistemology and causality for contemporary machine learning. I will close by suggesting the locus of a possible future interaction: evidence integration.

Jan-Willem Romeijn and Jon Williamson: Intervention, underdetermination, and theory generation, under submission.

We consider the use of intervention data for eliminating the underdetermination in statistical modelling, and for guiding extensions of the statistical models. The leading example is factor analysis, a major statistical tool in the social sciences. We first relate indeterminacy in factor analysis to the problem of underdetermination. Then we draw a parallel between factor analysis models and Bayesian networks with hidden nodes, which allows us to clarify the use of intervention data for dealing with indeterminacy. It will be shown that in some cases, the indeterminacy can be resolved by an intervention. In the other cases, the intervention data suggest specific extensions of the model. The upshot is that intervention data can replace theoretical criteria that are typically employed in resolving underdetermination and theory change.

Federica Russo and Jon Williamson: Interpreting causality in the health sciences, International Studies in the Philosophy of Science 21(2): 157-170, 2007.

We argue that the health sciences make causal claims on the basis of evidence both of physical mechanisms and of probabilistic dependencies. Consequently, an analysis of causality solely in terms of physical mechanisms, or solely in terms of probabilistic relationships, does not do justice to the causal claims of these sciences. Yet there seems to be a single concept of cause in these sciences - pluralism about causality will not do either. Instead, we maintain, the health sciences require a theory of causality that unifies its mechanistic and probabilistic aspects. We argue that the *epistemic* theory of causality provides the required unification.

Federica Russo and Jon Williamson: Interpreting probability in causal models for cancer, in Federica Russo and Jon Williamson (eds): Causality and probability in the sciences, London: College Publications, 2007, pp. 217-241.

How should probabilities be interpreted in causal models in the social and health sciences? In this paper we take a step towards answering this question by investigating the case of cancer in epidemiology and arguing that the objective Bayesian interpretation is most appropriate in this domain.

Sylvia Nagl, Matt Williams and Jon Williamson: Objective Bayesian nets for systems modelling and prognosis in breast cancer, in Dawn Holmes and L.C. Jain (eds): `Innovations in Bayesian Networks: Theory and Applications', Springer, 2008, pp. 131-167.

Cancer treatment decisions should be based on all available evidence. But this evidence is complex and varied: it includes not only the patient's symptoms and expert knowledge of the relevant causal processes, but also clinical databases relating to past patients, databases of observations made at the molecular level, and evidence encapsulated in scientific papers and medical informatics systems. Objective Bayesian nets offer a principled path to knowledge integration, and we show in this chapter how they can be applied to integrate various kinds of evidence in the cancer domain. This is important from the systems biology perspective, which needs to integrate data that concern different levels of analysis, and is also important from the point of view of medical informatics.

Sylvia Nagl, Matt Williams, Nadjet El-Mehidi, Vivek Patkar and Jon Williamson: Objective Bayesian nets for integrating cancer knowledge: a systems biology approach, in Juho Rousu, Samuel Kaski and Esko Ukkonen (eds): Proceedings of the Workshop on Probabilistic Modeling and Machine Learning in Structural and Systems Biology (Tuusula, Finland, 17-18 June 2006), Helsinki University Printing House, 2006, pp. 44-49.

According to objective Bayesianism, an agent’s degrees of belief should be determined by a probability function, out of all those that satisfy constraints imposed by background knowledge, that maximises entropy. A Bayesian net offers a way of efficiently representing a probability function and efficiently drawing inferences from that function. An objective Bayesian net is a Bayesian net representation of the maximum entropy probability function. In this paper we apply the machinery of objective Bayesian nets to breast cancer prognosis. Background knowledge is diverse and comes from several different sources: a database of clinical data, a database of molecular data, and quantitative data from the literature. We show how an objective Bayesian net can be constructed from this background knowledge and how it can be applied to yield prognoses and aid translation of clinical knowledge to genomics research.

Matt Williams and Jon Williamson: Combining argumentation and Bayesian nets for breast cancer prognosis, Journal of Logic, Language and Information 15: 155-178, 2006.

We present a new framework for combining logic with probability, and demonstrate the application of this framework to breast cancer prognosis. Background knowledge concerning breast cancer prognosis is represented using logical arguments. This background knowledge and a database are used to build a Bayesian net that captures the probabilistic relationships amongst the variables. Causal hypotheses gleaned from the Bayesian net in turn generate new arguments. The Bayesian net can be queried to help decide when one argument attacks another. The Bayesian net is used to perform the prognosis, while the argumentation framework is used to provide a qualitative explanation of the prognosis.

Jon Williamson: From Bayesianism to the Epistemic View of Mathematics: Remarks motivated by Richard Jeffrey’s ‘Subjective probability: the real thing', Philosophia Mathematica 14(3), pp. 365-369, 2006.

Jon Williamson: A dynamic interaction between machine learning and the philosophy of science, Minds and Machines 14(4), 2004, pp. 539-549.

The relationship between machine learning and the philosophy of science can be classed as a dynamic interaction: a mutually beneficial connection between two autonomous fields that changes direction over time. I discuss the nature of this interaction and give a case study highlighting interactions between research on Bayesian networks in machine learning and research on causality and probability in the philosophy of science.

Jung-Wook Bang, Raphael Chaleil & Jon Williamson: Two-stage Bayesian networks for metabolic network prediction, in Peter Lucas (ed), Proceedings of the Workshop on Qualitative and Model-Based Reasoning in Biomedicine, 9th Conference on Artificial Intelligence in Medicine Europe, 18-22 October 2003, Cyprus, pp. 19-23.

Metabolism is the set of chemical reactions used by living organisms to process chemical compounds, for example in order to extract energy and to eliminate toxic compounds. Its processes are referred to as metabolic pathways. Understanding metabolism is imperative to biology, toxicology and medicine, but the number and complexity of metabolic pathways makes this a difficult task. In this paper, we investigate the use of causal Bayesian networks to model the metabolic pathways of the yeast Saccharomyces cerevisiae: such a network can be used to draw predictions about the levels of metabolites and enzymes in a particular specimen. We propose a two-stage methodology for causal networks, as follows. First, construct a causal network from the network of metabolic pathways. The viability of this causal network depends on the validity of the causal Markov condition. If this condition fails, however, the principle of the common cause motivates the addition of a new causal arrow or a new `hidden' common cause to the network (stage 2 of the model formation process). Algorithms for adding arrows or hidden nodes have been developed separately in a number of papers; here we combine them, showing how the resulting procedure can be applied to the metabolic pathway problem. Our general approach was tested on neural cell morphology data and demonstrated noticeable improvements in both prediction and network accuracy.

Jon Williamson: A probabilistic approach to diagnosis, Proceedings of the Eleventh International Workshop on Principles of Diagnosis (DX-00), Morelia, Michoacen, Mexico, June 8-11 2000.

This paper addresses the foundations of diagnostic reasoning, in particular the viability of a probabilistic approach. One might be reluctant to adopt such an approach for one of two reasons: one may suppose that the probabilistic approach is inappropriate or that it is impractical to implement. I shall attempt to overcome any such doubts and to argue that on the contrary the probabilistic method is extremely promising.

Jon Williamson: Approximating discrete probability distributions with Bayesian networks, in Proceedings of the International Conference on Artificial Intelligence in Science and Technology, Hobart, Tasmania, 16-20 December 2000.

I generalise the arguments of [Chow and Liu 1968] to show that the Bayesian network that best approximates a given probability distribution, among those satisfying some arbitrary constraint, is the one whose total mutual-information weight is maximised. I give a practical procedure for finding such an approximation network.
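
For orientation, here is a minimal Python sketch of the Chow-Liu construction that the paper generalises (my own rendering, on synthetic data): pairwise mutual-information weights are computed from samples, and a maximum-weight spanning tree picks out the best tree-structured approximation.

    # Minimal sketch of the Chow-Liu idea: among tree-structured Bayesian
    # networks, the best approximation to a distribution maximises the total
    # mutual-information weight of its edges. Data below are synthetic.
    import numpy as np
    from itertools import combinations

    def mutual_information(x, y):
        """Empirical mutual information between two discrete samples."""
        mi = 0.0
        for a in np.unique(x):
            for b in np.unique(y):
                pxy = np.mean((x == a) & (y == b))
                if pxy > 0:
                    mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
        return mi

    def chow_liu_edges(data):
        """Maximum-weight spanning tree (Kruskal) over pairwise MI weights."""
        d = data.shape[1]
        edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                        for i, j in combinations(range(d), 2)), reverse=True)
        parent = list(range(d))
        def root(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        tree = []
        for w, i, j in edges:
            if root(i) != root(j):
                parent[root(i)] = root(j)
                tree.append((i, j, round(w, 3)))
        return tree

    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, 1000)
    b = a ^ (rng.random(1000) < 0.1).astype(int)  # b is a noisy copy of a
    c = rng.integers(0, 2, 1000)                  # c is independent of both
    print(chow_liu_edges(np.column_stack([a, b, c])))  # the (0, 1) edge dominates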
