
Professor Jon Williamson

Professor of Reasoning, Inference and Scientific Method

About

Jon Williamson works in the philosophy of science and medicine. He is co-director of the Centre for Reasoning and a member of the Theoretical Reasoning research cluster.

Research interests

Professor Jon Williamson works on the philosophy of causality, the foundations of probability, formal epistemology, inductive logic, and the use of causality, probability and inference methods in science and medicine.

His books Bayesian Nets and Causality and In Defence of Objective Bayesianism develop the view that causality and probability are features of the way we reason about the world rather than features of the world itself. His books Probabilistic Logics and Probabilistic Networks and Lectures on Inductive Logic apply recent developments in Bayesianism to motivate a new approach to inductive logic.

Jon's latest book, Evaluating Evidence of Mechanisms in Medicine, seeks to broaden the range of evidence considered by evidence-based medicine. Jon can be found on Twitter @Jon_Williamson_.

Teaching

Jon teaches logic and reasoning.

Publications

Showing 50 of 88 total publications in the Kent Academic Repository.

Article

  • Samet, J., Chiu, W., Cogliano, V., Jinot, J., Kriebel, D., Lunn, R., Beland, F., Bero, L., Browne, P., Fritschi, L., Kanno, J., Lachenmeier, D., Lan, Q., Lasfargues, G., Curieux, F., Peters, S., Shubat, P., Sone, H., White, M., Williamson, J., Yakubovskaya, M., Siemiatycki, J., White, P., Guyton, K., Schubauer-Berigan, M., Hall, A., Grosse, Y., Bouvard, V., Benbrahim-Tallaa, L., Ghissassi, F., Lauby-Secretan, B., Armstrong, B., Saracci, R., Zavadil, J., Straif, K. and Wild, C. (2019). The IARC Monographs: Updated procedures for modern and transparent evidence synthesis in cancer hazard identification. JNCI: Journal of the National Cancer Institute [Online]. Available at: https://doi.org/10.1093/jnci%2Fdjz169.
    The Monographs produced by the International Agency for Research on Cancer (IARC) apply rigorous procedures for the scientific review and evaluation of carcinogenic hazards by independent experts. The Preamble to the IARC Monographs, which outlines these procedures, was updated in 2019, following recommendations of a 2018 expert Advisory Group. This article presents the key features of the updated Preamble, a major milestone that will enable IARC to take advantage of recent scientific and procedural advances made during the 12 years since the last Preamble amendments. The updated Preamble formalizes important developments already being pioneered in the Monographs Programme. These developments were taken forward in a clarified and strengthened process for identifying, reviewing, evaluating and integrating evidence to identify causes of human cancer. The advancements adopted include strengthening of systematic review methodologies; greater emphasis on mechanistic evidence, based on key characteristics of carcinogens; greater consideration of quality and informativeness in the critical evaluation of epidemiological studies, including their exposure assessment methods; improved harmonization of evaluation criteria for the different evidence streams; and a single-step process of integrating evidence on cancer in humans, cancer in experimental animals and mechanisms for reaching overall evaluations. In all, the updated Preamble underpins a stronger and more transparent method for the identification of carcinogenic hazards, the essential first step in cancer prevention.
  • Tonelli, M. and Williamson, J. (2019). Mechanisms in clinical practice: use and justification. Medicine, Health Care and Philosophy [Online]. Available at: https://doi.org/10.1007/s11019-019-09915-5.
    While the importance of mechanisms in determining causality in medicine is currently the subject of active debate, the role of mechanistic reasoning in clinical practice has received far less attention. In this paper we look at this question in the context of the treatment of a particular individual, and argue that evidence of mechanisms is indeed key to various aspects of clinical practice, including assessing population-level research reports, diagnostic as well as therapeutic decision making, and the assessment of treatment effects. We use the pulmonary condition bronchiectasis as a source of examples of the importance of mechanistic reasoning to clinical practice.
  • Williamson, J. (2019). Evidential Proximity, Independence, and the evaluation of carcinogenicity. Journal of Evaluation in Clinical Practice [Online]. Available at: https://doi.org/10.1111/jep.13226.
    This paper analyses the methods of the International Agency for Research on Cancer (IARC) for evaluating the carcinogenicity of various agents. I identify two fundamental evidential principles that underpin these methods, which I call Evidential Proximity and Independence. I then show, by considering the 2018 evaluation of the carcinogenicity of styrene and styrene‐7,8‐oxide, that these principles have been implemented in a way that can lead to inconsistency. I suggest a way to resolve this problem: admit a general exception to Independence and treat the implementation of Evidential Proximity more flexibly where this exception applies. I show that this suggestion is compatible with the general principles laid down in the 2019 version of IARC's methods guide, its Preamble to the Monographs.
  • Williamson, J. (2019). Establishing causal claims in medicine. International Studies in the Philosophy of Science [Online]. Available at: https://doi.org/10.1080/02698595.2019.1630927.
    Russo and Williamson (2007) put forward the following thesis: in order to establish a causal claim in medicine, one normally needs to establish both that the putative cause and putative effect are appropriately correlated and that there is some underlying mechanism that can account for this correlation. I argue that, although the Russo-Williamson thesis conflicts with the tenets of present-day evidence-based medicine (EBM), it offers a better causal epistemology than that provided by present-day EBM because it better explains two key aspects of causal discovery. First, the thesis better explains the role of clinical studies in establishing causal claims. Second, it yields a better account of extrapolation.
  • Williamson, J. (2019). Calibration for epistemic causality. Erkenntnis [Online]. Available at: https://doi.org/10.1007/s10670-019-00139-w.
    The epistemic theory of causality is analogous to epistemic theories of probability. Most proponents of epistemic probability would argue that one's degrees of belief should be calibrated to chances, insofar as one has evidence of chances. The question arises as to whether causal beliefs should satisfy an analogous calibration norm. In this paper, I formulate a particular version of a norm requiring calibration to chances and argue that this norm is the most fundamental evidential norm for epistemic probability. I then develop an analogous calibration norm for epistemic causality, argue that it is the *only* evidential norm required for epistemic causality, and show how an epistemic account of causality that incorporates this norm can be used to analyse objective causal relationships.
  • Williamson, J. (2018). Establishing the teratogenicity of Zika and evaluating causal criteria. Synthese [Online]. Available at: https://doi.org/10.1007/s11229-018-1866-9.
    The teratogenicity of the Zika virus was considered established in 2016, and is an interesting case because three different sets of causal criteria were used to assess teratogenicity. This paper appeals to the thesis of Russo and Williamson (2007) to devise an epistemological framework that can be used to compare and evaluate sets of causal criteria. The framework can also be used to decide when enough criteria are satisfied to establish causality. Arguably, the three sets of causal criteria considered here offer only a rudimentary assessment of mechanistic studies, and some suggestions are made as to alternative ways to establish causality.
  • Aronson, J., La Caze, A., Kelly, M., Parkkinen, V. and Williamson, J. (2018). The use of evidence of mechanisms in drug approval. Journal of Evaluation in Clinical Practice [Online]. Available at: https://doi.org/10.1111/jep.12960.
    The role of mechanistic evidence tends to be under-appreciated in current evidence-based medicine (EBM), which focusses on clinical studies, tending to restrict attention to randomized controlled studies (RCTs) when they are available. The EBM+ programme seeks to redress this imbalance, by suggesting methods for evaluating mechanistic studies alongside clinical studies. Drug approval is a problematic case for the view that mechanistic evidence should be taken into account, because RCTs are almost always available. Nevertheless, we argue that mechanistic evidence is central to all the key tasks in the drug approval process: in drug discovery and development; assessing pharmaceutical quality; devising dosage regimens; assessing efficacy, harms, external validity, and cost-effectiveness; evaluating adherence; and extending product licences. We recommend that, when preparing for meetings in which any aspect of drug approval is to be discussed, mechanistic evidence should be systematically analysed and presented to the committee members alongside analyses of clinical studies.
  • Romeijn, J. and Williamson, J. (2018). Intervention and Identifiability in Latent Variable Modelling. Minds and Machines [Online] 28:243-264. Available at: https://doi.org/10.1007/s11023-018-9460-y.
    We consider the use of interventions for resolving a problem of unidentified statistical models. The leading examples are from latent variable modelling, an influential statistical tool in the social sciences. We first explain the problem of statistical identifiability and contrast it with the identifiability of causal models. We then draw a parallel between the latent variable models and Bayesian networks with hidden nodes. This allows us to clarify the use of interventions for dealing with unidentified statistical models. We end by discussing the philosophical and methodological import of our result.
  • Williamson, J. (2018). Justifying the Principle of Indifference. European Journal for the Philosophy of Science [Online]. Available at: https://link.springer.com/content/pdf/10.1007%2Fs13194-018-0201-0.pdf.
    This paper presents a new argument for the Principle of Indifference. This argument can be thought of in two ways: as a pragmatic argument, justifying the principle as needing to hold if one is to minimise worst-case expected loss, or as an epistemic argument, justifying the principle as needing to hold in order to minimise worst-case expected inaccuracy. The question arises as to which interpretation is preferable. I show that the epistemic argument contradicts Evidentialism and suggest that the relative plausibility of Evidentialism provides grounds to prefer the pragmatic interpretation. If this is right, it extends to a general preference for pragmatic arguments for the Principle of Indifference, and also to a general preference for pragmatic arguments for other norms of Bayesian epistemology. (A toy numerical sketch of the worst-case loss idea appears after this list of articles.)
  • Williamson, J. (2017). Models in Systems Medicine. Disputatio [Online] 9:429-469. Available at: https://content.sciendo.com/view/journals/disp/9/47/article-p429.xml.
    Systems medicine is a promising new paradigm for discovering associations, causal relationships and mechanisms in medicine. But it faces some tough challenges that arise from the use of big data: in particular, the problem of how to integrate evidence and the problem of how to structure the development of models. I argue that objective Bayesian models offer one way of tackling the evidence integration problem. I also offer a general methodology for structuring the development of models, within which the objective Bayesian approach fits rather naturally.
  • Hawthorne, J., Landes, J., Wallmann, C. and Williamson, J. (2015). The Principal Principle Implies the Principle of Indifference. The British Journal for the Philosophy of Science [Online] 68:123-131. Available at: http://dx.doi.org/10.1093/bjps/axv030.
  • Landes, J. and Williamson, J. (2015). Justifying Objective Bayesianism on Predicate Languages. Entropy [Online] 17:2459-2543. Available at: http://doi.org/10.3390/e17042459.
    Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.
  • Williamson, J. (2015). Deliberation, Judgement and the Nature of Evidence. Economics and Philosophy [Online] 31:27-65. Available at: http://dx.doi.org/10.1017/S026626711400039X.
    A normative Bayesian theory of deliberation and judgement requires a procedure for merging the evidence of a collection of agents. In order to provide such a procedure, one needs to ask what the evidence is that grounds Bayesian probabilities. After finding fault with several views on the nature of evidence (the views that evidence is knowledge; that evidence is whatever is fully believed; that evidence is observationally set credence; that evidence is information), it is argued that evidence is whatever is rationally taken for granted. This view is shown to have consequences for an account of merging evidence, and it is argued that standard axioms for merging need to be altered somewhat.
  • Williamson, J. (2014). How Uncertain Do We Need to Be?. Erkenntnis [Online] 79:1249-1271. Available at: http://dx.doi.org/10.1007/s10670-013-9516-6.
    Expert probability forecasts can be useful for decision making (Sect. 1). But levels of uncertainty escalate: however the forecaster expresses the uncertainty that attaches to a forecast, there are good reasons for her to express a further level of uncertainty, in the shape of either imprecision or higher order uncertainty (Sect. 2). Bayesian epistemology provides the means to halt this escalator, by tying expressions of uncertainty to the propositions expressible in an agent’s language (Sect. 3). But Bayesian epistemology comes in three main varieties. Strictly subjective Bayesianism and empirically-based subjective Bayesianism have difficulty in justifying the use of a forecaster’s probabilities for decision making (Sect. 4). On the other hand, objective Bayesianism can justify the use of these probabilities, at least when the probabilities are consistent with the agent’s evidence (Sect. 5). Hence objective Bayesianism offers the most promise overall for explaining how testimony of uncertainty can be useful for decision making. Interestingly, the objective Bayesian analysis provided in Sect. 5 can also be used to justify a version of the Principle of Reflection (Sect. 6).
  • Clarke, B., Leuridan, B. and Williamson, J. (2014). Modelling Mechanisms with Causal Cycles. Synthese [Online] 191:1651-1681. Available at: http://dx.doi.org/10.1007/s11229-013-0360-7.
    Mechanistic philosophy of science views a large part of scientific activity as engaged in modelling mechanisms. While science textbooks tend to offer qualitative models of mechanisms, there is increasing demand for models from which one can draw quantitative predictions and explanations. Casini et al. (Theoria 26(1):5–33, 2011) put forward the Recursive Bayesian Networks (RBN) formalism as well suited to this end. The RBN formalism is an extension of the standard Bayesian net formalism, an extension that allows for modelling the hierarchical nature of mechanisms. Like the standard Bayesian net formalism, it models causal relationships using directed acyclic graphs. Given this appeal to acyclicity, causal cycles pose a prima facie problem for the RBN approach. This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that the problem can be solved by combining two sorts of solution strategy in a judicious way.
  • Clarke, B., Gillies, D., Illari, P., Russo, F. and Williamson, J. (2014). Mechanisms and the Evidence Hierarchy. Topoi [Online] 33:339-360. Available at: http://dx.doi.org/10.1007/s11245-013-9220-9.
    Evidence-based medicine (EBM) makes use of explicit procedures for grading evidence for causal claims. Normally, these procedures categorise evidence of correlation produced by statistical trials as better evidence for a causal claim than evidence of mechanisms produced by other methods. We argue, in contrast, that evidence of mechanisms needs to be viewed as complementary to, rather than inferior to, evidence of correlation. In this paper we first set out the case for treating evidence of mechanisms alongside evidence of correlation in explicit protocols for evaluating evidence. Next we provide case studies which exemplify the ways in which evidence of mechanisms complements evidence of correlation in practice. Finally, we put forward some general considerations as to how the two sorts of evidence can be more closely integrated by EBM.
  • Williamson, J. (2013). From Bayesian Epistemology to Inductive Logic. Journal of Applied Logic [Online] 11:468-486. Available at: http://dx.doi.org/10.1016/j.jal.2013.03.006.
    Inductive logic admits a variety of semantics (Haenni et al. (2011) [7, Part 1]). This paper develops semantics based on the norms of Bayesian epistemology (Williamson, 2010 [16, Chapter 7]). Section 1 introduces the semantics and then, in Section 2, the paper explores methods for drawing inferences in the resulting logic and compares the methods of this paper with the methods of Barnett and Paris (2008) [2]. Section 3 then evaluates this Bayesian inductive logic in the light of four traditional critiques of inductive logic, arguing (i) that it is language independent in a key sense, (ii) that it admits connections with the Principle of Indifference but these connections do not lead to paradox, (iii) that it can capture the phenomenon of learning from experience, and (iv) that while the logic advocates scepticism with regard to some universal hypotheses, such scepticism is not problematic from the point of view of scientific theorising.
  • Clarke, B., Gillies, D., Illari, P., Russo, F. and Williamson, J. (2013). The Evidence that Evidence-based Medicine Omits. Preventative Medicine [Online] 57:745-747. Available at: http://dx.doi.org/10.1016/j.ypmed.2012.10.020.
    According to current hierarchies of evidence for EBM, evidence of correlation (e.g., from RCTs) is always more important than evidence of mechanisms when evaluating and establishing causal claims. We argue that evidence of mechanisms needs to be treated alongside evidence of correlation. This is for three reasons. First, correlation is always a fallible indicator of causation, subject in particular to the problem of confounding; evidence of mechanisms can in some cases be more important than evidence of correlation when assessing a causal claim. Second, evidence of mechanisms is often required in order to obtain evidence of correlation (for example, in order to set up and evaluate RCTs). Third, evidence of mechanisms is often required in order to generalise and apply causal claims. While the EBM movement has been enormously successful in making explicit and critically examining one aspect of our evidential practice, i.e., evidence of correlation, we wish to extend this line of work to make explicit and critically examine a second aspect of our evidential practices: evidence of mechanisms.
  • Williamson, J. (2013). How can Causal explanations Explain?. Erkenntnis [Online] 78:257-275. Available at: http://dx.doi.org/10.1007/s10670-013-9512-x.
    The mechanistic and causal accounts of explanation are often conflated to yield a `causal-mechanical' account. This paper prizes them apart and asks: if the mechanistic account is correct, how can causal explanations be explanatory? The answer to this question varies according to how causality itself is understood. It is argued that difference-making, mechanistic, dualist and inferentialist accounts of causality all struggle to yield explanatory causal explanations, but that an epistemic account of causality is more promising in this regard.
  • Williamson, J. (2013). Why Frequentists and Bayesians Need Each Other. Erkenntnis [Online] 78:293-318. Available at: http://dx.doi.org/10.1007/s10670-011-9317-8.
    The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed—two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.
  • Russo, F. and Williamson, J. (2012). EnviroGenomarkers: The Interplay between Mechanisms and Difference Making in Establishing Causal Claims. Medicine Studies [Online] 3:249-262. Available at: http://dx.doi.org/10.1007/s12376-012-0079-7.
    According to Russo and Williamson (2007, 2011a,b), in order to establish a causal claim of the form `C is a cause of E', one needs evidence that there is an underlying mechanism between C and E as well as evidence that C makes a difference to E. This thesis has been used to argue that hierarchies of evidence, as championed by evidence-based movements, tend to give primacy to evidence of difference making over evidence of mechanism, and are flawed because the two sorts of evidence are required and they should be treated on a par.

    An alternative approach gives primacy to evidence of mechanism over evidence of difference making. In this paper we argue that this alternative approach is equally flawed, again because both sorts of evidence need to be treated on a par. As an illustration of this parity we explain how scientists working in the `EnviroGenomarkers' project constantly make use of the two evidential components in a dynamic and intertwined way. We argue that such an interplay is needed not only for causal assessment but also for policy purposes.
  • Illari, P. and Williamson, J. (2012). What is a Mechanism? Thinking about Mechanisms across the Sciences. European Journal for Philosophy of Science [Online] 2:119-135. Available at: http://dx.doi.org/10.1007/s13194-011-0038-2.
    After a decade of intense debate about mechanisms, there is still no consensus characterization. In this paper we argue for a characterization that applies widely to mechanisms across the sciences. We examine and defend our disagreements with the major current contenders for characterizations of mechanisms. Ultimately, we indicate that the major contenders can all sign up to our characterization.
  • Williamson, J. (2011). Mechanistic Theories of Causality. Philosophy Compass [Online] 6:421-447. Available at: http://dx.doi.org/10.1111/j.1747-9991.2011.00400.x.
    Part I of this paper introduces a range of mechanistic theories of causality, including process theories and the complex-systems theories, and some of the problems they face. Part II argues that while there is a decisive case against a purely mechanistic analysis, a viable theory of causality must incorporate mechanisms as an ingredient, and describes one way of providing an analysis of causality which reaps the rewards of the mechanistic approach without succumbing to its pitfalls.
  • Osimani, B., Russo, F. and Williamson, J. (2011). Scientific Evidence and the Law: An Objective Bayesian Formalisation of the Precautionary Principle in Pharmaceutical Regulation. Journal of Philosophy, Science and Law [Online] 11. Available at: http://www.miami.edu/ethics/jpsl/.
    The paper considers the legal tools that have been developed in German pharmaceutical regulation as a result of the precautionary attitude inaugurated by the Contergan decision (1970). These tools are (i) the notion of "well-founded suspicion", which attenuates the requirements for safety intervention by relaxing the requirement of a proved causal connection between danger and source, and the introduction of (ii) the reversal of proof burden in liability norms. The paper focuses on the first and proposes seeing the precautionary principle as an instance of the requirement that one should maximise expected utility. In order to maximise expected utility certain probabilities are required and it is argued that objective Bayesianism offers the most plausible means to determine the optimal decision in cases where evidence supports diverging choices.
  • Williamson, J. (2011). Objective Bayesianism, Bayesian Conditionalisation and Voluntarism. Synthese [Online] 178:67-85. Available at: http://dx.doi.org/10.1007/s11229-009-9515-y.
    Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.
  • Casini, L., Illari, P., Russo, F. and Williamson, J. (2011). Models for Prediction, Explanation and Control: Recursive Bayesian Networks. Theoria [Online] 26:5-33. Available at: http://www.ehu.es/ojs/index.php/THEORIA/article/view/1192/825.
    The Recursive Bayesian Net (RBN) formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, an RBN can be applied to all these tasks. We show in particular how a simple two-level RBN can be used to model a mechanism in cancer science. The higher level of our model contains variables at the clinical level, while the lower level maps the structure of the cell's mechanism for apoptosis.
  • Russo, F. and Williamson, J. (2011). Generic versus Single-Case Causality: The Case of Autopsy. European Journal for Philosophy of Science [Online] 1:47-69. Available at: http://dx.doi.org/10.1007/s13194-010-0012-4.
    This paper addresses questions about how the levels of causality (generic and single-case causality) are related. One question is epistemological: can relationships at one level be evidence for relationships at the other level? We present three kinds of answer to this question, categorised according to whether inference is top-down, bottom-up, or the levels are independent. A second question is metaphysical: can relationships at one level be reduced to relationships at the other level? We present three kinds of answer to this second question, categorised according to whether single-case relations are reduced to generic, generic relations are reduced to single-case, or the levels are independent. We then explore causal inference in autopsy. This is an interesting case study, we argue, because it refutes all three epistemologies and all three metaphysics. We close by sketching an account of causality that survives autopsy—the epistemic theory.
  • Darby, G. and Williamson, J. (2011). Imaging Technology and the Philosophy of Causality. Philosophy & Technology [Online] 24:115-136. Available at: http://dx.doi.org/10.1007/s13347-010-0010-7.
    Russo and Williamson (Int Stud Philos Sci 21(2):157–170, 2007) put forward the thesis that, at least in the health sciences, to establish the claim that C is a cause of E, one normally needs evidence of an underlying mechanism linking C and E as well as evidence that C makes a difference to E. This epistemological thesis poses a problem for most current analyses of causality which, in virtue of analysing causality in terms of just one of mechanisms or difference making, cannot account for the need for the other kind of evidence. Weber (Int Stud Philos Sci 23(2):277–295, 2009) has suggested to the contrary that Giere’s probabilistic analysis of causality survives this criticism. In this paper, we look in detail at the case of medical imaging technology, which, we argue, supports the thesis of Russo and Williamson, and we respond to Weber’s suggestion, arguing that Giere’s account does not survive the criticism.
  • Russo, F. and Williamson, J. (2011). Epistemic Causality and Evidence-Based Medicine. History and Philosophy of the Life Sciences [Online] 33:563-582. Available at: http://www.hpls-szn.com/articles.asp?id=146&book=31.
    Causal claims in biomedical contexts are ubiquitous albeit they are not always made explicit. This paper addresses the question of what causal claims mean in the context of disease. It is argued that in medical contexts causality ought to be interpreted according to the epistemic theory. The epistemic theory offers an alternative to traditional accounts that cash out causation either in terms of “difference-making” relations or in terms of mechanisms. According to the epistemic approach, causal claims tell us about which inferences (e.g., diagnoses and prognoses) are appropriate, rather than about the presence of some physical causal relation analogous to distance or gravitational attraction. It is shown that the epistemic theory has important consequences for medical practice, in particular with regard to evidence-based causal assessment.
  • McKay Illari, P. and Williamson, J. (2010). Function and Organization: Comparing the Mechanisms of Protein Synthesis and Natural Selection. Studies in History and Philosophy of Science Part C [Online] 41:279-291. Available at: http://dx.doi.org/10.1016/j.shpsc.2010.07.001.
    In this paper, we compare the mechanisms of protein synthesis and natural selection. We identify three core elements of mechanistic explanation: functional individuation, hierarchical nestedness or decomposition, and organization. These are now well understood elements of mechanistic explanation in fields such as protein synthesis, and widely accepted in the mechanisms literature. But Skipper and Millstein have argued (2005) that natural selection is neither decomposable nor organized. This would mean that much of the current mechanisms literature does not apply to the mechanism of natural selection.

    We take each element of mechanistic explanation in turn. Having appreciated the importance of functional individuation, we show how decomposition and organization should be better understood in these terms. We thereby show that mechanistic explanation by protein synthesis and natural selection are more closely analogous than they appear—both possess all three of these core elements of a mechanism widely recognized in the mechanisms literature.
  • Williamson, J. (2009). Aggregating Judgements by Merging Evidence. Journal of Logic and Computation [Online] 19:461-473. Available at: http://dx.doi.org/10.1093/logcom/exn011.
    The theory of belief revision and merging has recently been applied to judgement aggregation. In this paper I argue that judgements are best aggregated by merging the evidence on which they are based, rather than by directly merging the judgements themselves. This leads to a three-step strategy for judgement aggregation. First, merge the evidence bases of the various agents using some method of belief merging. Second, determine which degrees of belief one should adopt on the basis of this merged evidence base, by applying objective Bayesian theory. Third, determine which judgements are appropriate given these degrees of belief by applying a decision-theoretic account of rational judgement formation.
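
The pragmatic argument mentioned above in "Justifying the Principle of Indifference" (and the loss-based motivation of maximum entropy in "Justifying Objective Bayesianism on Predicate Languages") turns on worst-case expected loss: over a finite outcome space, and assuming logarithmic loss, the equivocal (uniform) belief function minimises the expected loss an adversarially chosen chance function can inflict. The toy check below is not code from any of the publications listed here; the outcome space, grid and function names are illustrative choices.

```python
# A minimal numerical sketch (illustrative, not from the papers above) of
# the worst-case expected loss argument for the Principle of Indifference:
# over a finite outcome space, the uniform belief function minimises
# worst-case expected logarithmic loss.

import itertools
import math

def worst_case_log_loss(belief):
    """Worst-case expected log loss over all chance functions.

    The worst case puts all chance on the outcome to which `belief`
    assigns least probability, so it equals -log(min_i belief_i).
    """
    return -math.log(min(belief))

n = 4  # number of outcomes (illustrative choice)
# Candidate belief functions: a coarse grid over the probability simplex.
step = 0.05
grid = [round(step * k, 10) for k in range(1, int(1 / step))]
candidates = [p + (round(1 - sum(p), 10),)
              for p in itertools.product(grid, repeat=n - 1)
              if 0 < round(1 - sum(p), 10) < 1]

best = min(candidates, key=worst_case_log_loss)
uniform = tuple([1 / n] * n)

print("best on grid:", best, "loss:", round(worst_case_log_loss(best), 4))
print("uniform:     ", uniform, "loss:", round(worst_case_log_loss(uniform), 4))
# Both losses coincide at log(4), roughly 1.3863, as the minimax argument predicts.
```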

Book

  • Parkkinen, V., Wallmann, C., Wilde, M., Clarke, B., Illari, P., Kelly, M., Norell, C., Russo, F., Shaw, B. and Williamson, J. (2018). Evaluating Evidence of Mechanisms in Medicine: Principles and Procedures. [Online]. Springer Netherlands. Available at: https://link.springer.com/book/10.1007/978-3-319-94610-8#about.
    This book is the first to develop explicit methods for evaluating evidence of mechanisms in the field of medicine. It explains why it can be important to make this evidence explicit, and describes how to take such evidence into account in the evidence appraisal process. In addition, it develops procedures for seeking evidence of mechanisms, for evaluating evidence of mechanisms, and for combining this evaluation with evidence of association in order to yield an overall assessment of effectiveness.

    Evidence-based medicine seeks to achieve improved health outcomes by making evidence explicit and by developing explicit methods for evaluating it. To date, evidence-based medicine has largely focused on evidence of association produced by clinical studies. As such, it has tended to overlook evidence of pathophysiological mechanisms and evidence of the mechanisms of action of interventions.

    The book offers a useful guide for all those whose work involves evaluating evidence in the health sciences, including those who need to determine the effectiveness of health interventions and those who need to ascertain the effects of environmental exposures.
  • Williamson, J. (2017). Lectures on Inductive Logic. [Online]. Oxford, UK: Oxford University Press. Available at: https://global.oup.com/academic/product/lectures-on-inductive-logic-9780199666478.
    Logic is a field studied mainly by researchers and students of philosophy, mathematics and computing. Inductive logic seeks to determine the extent to which the premisses of an argument entail its conclusion, aiming to provide a theory of how one should reason in the face of uncertainty. It has applications to decision making and artificial intelligence, as well as to how scientists should reason when not in possession of the full facts.

    In this book, Jon Williamson embarks on a quest to find a general, reasonable, applicable inductive logic (GRAIL), all the while examining why pioneers such as Ludwig Wittgenstein and Rudolf Carnap did not entirely succeed in this task.

    Along the way he presents a general framework for the field, and reaches a new inductive logic, which builds upon recent developments in Bayesian epistemology (a theory about how strongly one should believe the various propositions that one can express). The book explores this logic in detail, discusses some key criticisms, and considers how it might be justified. Is this truly the GRAIL?

    Although the book presents new research, this material is well suited to being delivered as a series of lectures to students of philosophy, mathematics, or computing and doubles as an introduction to the field of inductive logic.
  • Haenni, R., Romeijn, J., Wheeler, G. and Williamson, J. (2011). Probabilistic Logics and Probabilistic Networks. [Online]. Vol. 350. Berlin: Springer. Available at: http://www.springer.com/philosophy/epistemology+and+philosophy+of+science/book/978-94-007-0007-9.
    While probabilistic logics in principle might be applied to solve a range of problems, in practice they are rarely applied --- perhaps because they seem disparate, complicated, and computationally intractable. This programmatic book argues that several approaches to probabilistic logic fit into a simple unifying framework in which logically complex evidence is used to associate probability intervals or probabilities with sentences. Specifically, Part I shows that there is a natural way to present a question posed in probabilistic logic, and that various inferential procedures provide semantics for that question, while Part II shows that there is the potential to develop computationally feasible methods to mesh with this framework. The book is intended for researchers in philosophy, logic, computer science and statistics. A familiarity with mathematical concepts and notation is presumed, but no advanced knowledge of logic or probability theory is required.
  • Williamson, J. (2010). In Defence of Objective Bayesianism. Oxford: Oxford University Press.
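
Probabilistic Logics and Probabilistic Networks, listed above, frames probabilistic logic around a single question: given premiss sentences with attached probabilities or probability intervals, what probability interval attaches to a conclusion? The sketch below reads that question in the most straightforward way, as optimisation over all probability functions satisfying the premisses; the two-atom language, the particular premisses and the use of linear programming are illustrative assumptions, not examples from the book.

```python
# Toy probabilistic-logic entailment (illustrative assumptions, not the
# book's own example): premisses 0.6 <= P(A) <= 0.8 and P(A -> B) >= 0.9
# entail an interval for P(B), found by optimising over all probability
# functions on the four states of a two-atom propositional language.

import numpy as np
from scipy.optimize import linprog

# States, in order: A&B, A&~B, ~A&B, ~A&~B.
A_ub = np.array([
    [ 1.0,  1.0,  0.0,  0.0],   #  P(A) <= 0.8
    [-1.0, -1.0,  0.0,  0.0],   # -P(A) <= -0.6
    [-1.0,  0.0, -1.0, -1.0],   # -P(A -> B) <= -0.9 (material conditional)
])
b_ub = np.array([0.8, -0.6, -0.9])
A_eq = np.array([[1.0, 1.0, 1.0, 1.0]])   # probabilities sum to one
b_eq = np.array([1.0])
bounds = [(0, 1)] * 4

conclusion = np.array([1.0, 0.0, 1.0, 0.0])   # B is true in states A&B, ~A&B

lo = linprog(conclusion, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
             bounds=bounds).fun
hi = -linprog(-conclusion, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds).fun

print(f"Entailed interval for P(B): [{lo:.2f}, {hi:.2f}]")   # [0.50, 1.00]
```

An objective Bayesian semantics, of the kind developed in Lectures on Inductive Logic, would go further and single out a point value within such an interval, roughly the value given by the most equivocal probability function satisfying the premisses.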

Book section

  • Wallmann, C. and Williamson, J. (2017). Four approaches to the reference class problem. In: Hofer-Szabó, G. and Wroński, L. eds. Making It Formally Explicit: Probability, Causality and Indeterminism. Springer, pp. 61-81. Available at: http://dx.doi.org/10.1007/978-3-319-55486-0_4.
  • Wilde, M. and Williamson, J. (2016). Models in medicine. In: The Routledge Companion to Philosophy of Medicine. Abingdon, Oxfordshire: Routledge, pp. 271-284.
  • Wilde, M. and Williamson, J. (2016). Evidence and Epistemic Causality. In: Wiedermann, W. and von Eye, A. eds. Statistics and Causality: Methods for Applied Empirical Research. Wiley, pp. 31-41.
  • Wilde, M. and Williamson, J. (2016). Bayesianism and Information. In: The Routledge Handbook of Philosophy of Information. Abingdon: Routledge, pp. 180-187. Available at: https://www.routledge.com/The-Routledge-Handbook-of-Philosophy-of-Information/Floridi/p/book/9781138796935.
  • Williamson, J. (2011). Mechanisms are Real and Local. In: Illari, P., Russo, F. and Williamson, J. eds. Causality in the Sciences. Oxford: Oxford University Press, pp. 818-844.
    Mechanisms have become much-discussed, yet there is still no consensus on how to characterise them. In this paper, we start with something everyone is agreed on – that mechanisms explain – and investigate what constraints this imposes on our metaphysics of mechanisms. We examine two widely shared premises about how to understand mechanistic explanation: (1) that mechanistic explanation offers a welcome alternative to traditional laws-based explanation and (2) that there are two senses of mechanistic explanation that we call ‘epistemic explanation’ and ‘physical explanation’. We argue that mechanistic explanation requires that mechanisms are both real and local. We then go on to argue that real, local mechanisms require a broadly active metaphysics for mechanisms, such as a capacities metaphysics.
  • Williamson, J. (2011). An Objective Bayesian Account of Confirmation. In: Dieks, D., Gonzalez, W., Hartmann, S., Uebel, T. and Weber, M. eds. Explanation, Prediction, and Confirmation. New Trends and Old Ones Reconsidered. Dordrecht: Springer, pp. 53-81. Available at: http://dx.doi.org/10.1007/978-94-007-1180-8.
  • Wheeler, G. and Williamson, J. (2011). Evidential Probability and Objective Bayesian Epistemology. In: Bandyopadhyay, P. S. and Forster, M. R. eds. Philosophy of Statistics. Oxford: Elsevier Science & Technology/ North Holland, pp. 307-331. Available at: http://www.elsevier.com/wps/find/bookdescription.cws_home/BS_HPHS/description.
    In this chapter we draw connections between two seemingly opposing approaches to probability and statistics: evidential probability on the one hand and objective Bayesian epistemology on the other.
  • Williamson, J. (2010). Epistemic Complexity from an Objective Bayesian Perspective. In: Carsetti, A. ed. Causality, Meaningful Complexity and Embodied Cognition. Dordrecht: Springer, pp. 231-246. Available at: http://dx.doi.org/10.1007/978-90-481-3529-5_13.
  • Williamson, J. (2009). The Philosophy of Science and its relation to Machine Learning. In: Gaber, M. M. ed. Scientific Data Mining and Knowledge Discovery: Principles and Foundations. Berlin: Springer, pp. 77-89.
  • Williamson, J. (2009). Philosophies of probability. In: Gabbay, D., Thagard, P. and Woods, J. eds. Philosophy of Mathematics. Oxford: Elsevier Science & Technology/ North Holland, pp. 493-533.
    This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces. I discuss the ramifications of interpretations of probability and objective Bayesianism for the philosophy of mathematics in general.

Conference or workshop item

  • Landes, J. and Williamson, J. (2016). Objective Bayesian nets from consistent datasets. In: 35th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, p. 20007. Available at: http://doi.org/10.1063/1.4959048.
    This paper addresses the problem of finding a Bayesian net representation of the probability function that agrees with the distributions of multiple consistent datasets and otherwise has maximum entropy. We give a general algorithm which is significantly more efficient than the standard brute-force approach. Furthermore, we show that in a wide range of cases such a Bayesian net can be obtained without solving any optimisation problem.
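
The closed-form special case behind the claim above, that a suitable Bayesian net can often be obtained without solving any optimisation problem, is easy to illustrate: when two consistent datasets overlap on a single variable, the maximum-entropy joint agreeing with both renders the remaining variables independent given the shared one, so its Bayesian net can be written down directly. The numbers, variable names and factorisation in the sketch below are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (made-up numbers, not the paper's algorithm): two
# consistent datasets give distributions over (A, B) and (B, C); the
# maximum-entropy joint agreeing with both is P(a,b,c) = P(a,b) * P(c|b),
# i.e. the Bayesian net A <- B -> C, with no optimisation required.

import numpy as np

P_AB = np.array([[0.3, 0.2],    # rows: A = 0, 1; columns: B = 0, 1
                 [0.1, 0.4]])
P_BC = np.array([[0.1, 0.3],    # rows: B = 0, 1; columns: C = 0, 1
                 [0.5, 0.1]])

# Consistency: the two datasets must agree on the marginal of B.
P_B = P_AB.sum(axis=0)
assert np.allclose(P_B, P_BC.sum(axis=1))

# Closed-form maximum-entropy joint.
P = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        for c in range(2):
            P[a, b, c] = P_AB[a, b] * P_BC[b, c] / P_B[b]

# The joint reproduces both dataset distributions ...
assert np.allclose(P.sum(axis=2), P_AB)
assert np.allclose(P.sum(axis=0), P_BC)
# ... and factorises as the net A <- B -> C with these probability tables:
print("P(B)   =", P_B)
print("P(A|B) =\n", P_AB / P_B)            # columns indexed by B
print("P(C|B) =\n", P_BC / P_B[:, None])   # rows indexed by B
```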

Edited book

  • Illari, P., Russo, F. and Williamson, J. eds. (2011). Causality in the Sciences. Oxford: Oxford University Press.

Thesis

  • Dragulinescu, S. (2018). Grading the Quality of Evidence of Mechanisms.
  • Groves, T. (2015). Let’s Reappraise Carnapian Inductive Logic!.
  • Wilde, M. (2015). Causing Problems: The Nature of Evidence and the Epistemic Theory of Causality.
    The epistemic theory of causality maintains that causality is an epistemic relation, so that causality is taken to be a feature of the way an agent represents the world rather than an agent-independent or non-epistemological feature of the world. The objective of this essay is to cause problems for the epistemic theory of causality. This is not because I think that the epistemic theory is incorrect. In fact, I spend some time arguing in favour of the epistemic theory of causality. Instead, this essay should be regarded as something like an exercise in stress testing. The hope is that by causing problems for a particular version of the epistemic theory, the result will be a more robust version of that theory.

    My gripe is with a particular version of the epistemic theory of causality, a version that is articulated with the help of objective Bayesianism. At first sight, objective Bayesianism looks like a plausible theory of rational belief. However, I argue that it is committed to a certain theory of evidence, a theory of evidence that recent work in epistemology has shown to be incorrect. In particular, objective Bayesianism maintains that evidence is perfectly accessible in a certain sense. But evidence just is not so perfectly accessible, according to recent developments in epistemology. However, this is not the end of the line for the epistemic theory of causality. Instead, I propose an epistemic theory of causality that dispenses with the assumption that evidence is perfectly accessible in the relevant sense.