
Professor Jim Griffin

Professor of Statistics, Director of Research

SMSAS - Statistics Group

Room: E116
Modules taught:
MA632: Regression
MA882: Advanced Regression Modelling


Research Interests:

  • Bayesian nonparametric methods including slice sampling for posterior simulation and the construction of dependent nonparametric priors.
  • Inference with financial data, including the analysis of high-frequency data, volatility processes with jumps, and the application of Bayesian nonparametric methods to stochastic volatility modelling.
  • Bayesian methods for variable selection with many regressors, including efficient computation, the use of shrinkage priors, and applications in bioinformatics.
  • Efficiency measurement using stochastic frontier models.

Jim serves on the School's Graduate Studies Committee and he is the Director of Studies and Admissions Officer for MSc programmes in Statistics/Statistics with Finance. He is the School's Director of Research and chairs our Research and Enterprise Committee.



Also view these in the Kent Academic Repository

    Griffin, Jim E. and Kalli, Maria (2015) Flexible Modelling of Dependence in Volatility Processes. Journal of Business and Economic Statistics, 33 (1). pp. 102-113. ISSN 0735-0015.


    This article proposes a novel stochastic volatility (SV) model that draws from the existing literature on autoregressive SV models, aggregation of autoregressive processes, and Bayesian nonparametric modeling to create an SV model that can capture long-range dependence. The volatility process is assumed to be the aggregate of autoregressive processes, where the distribution of the autoregressive coefficients is modeled using a flexible Bayesian approach. The model provides insight into the dynamic properties of the volatility. An efficient algorithm is defined which uses recently proposed adaptive Monte Carlo methods. The proposed model is applied to the daily returns of stocks.
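The aggregation mechanism in this abstract can be sketched numerically: summing many AR(1) processes whose coefficients are drawn from a distribution with mass near one yields a series whose autocorrelation decays far more slowly than any single component's. This is only an illustration; the Beta draw for the coefficients is an assumption for the sketch, not the paper's flexible nonparametric prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_ar1(n_components=200, n_steps=2200, a=5.0, b=1.0, burn=200):
    """Sum of AR(1) processes with Beta(a, b)-distributed coefficients."""
    phis = rng.beta(a, b, size=n_components)   # mass near 1 when a >> b
    x = np.zeros(n_components)
    out = np.empty(n_steps)
    for t in range(n_steps):
        x = phis * x + rng.normal(size=n_components)
        out[t] = x.sum()
    return out[burn:]                          # drop start-up transient

def acf(series, lag):
    """Sample autocorrelation at a given lag."""
    s = series - series.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

series = aggregate_ar1()
# Persistence survives to long lags because a few components have phi near 1
print(acf(series, 1), acf(series, 50))
```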

    Kalli, Maria and Griffin, Jim E. (2014) Time-varying sparsity in dynamic regression models. Journal of Econometrics, 178 (2). pp. 779-793. ISSN 0304-4076.


    A novel Bayesian method for inference in dynamic regression models is proposed where both the values of the regression coefficients and the importance of the variables are allowed to change over time. We focus on forecasting and so the parsimony of the model is important for good performance. A prior is developed which allows the shrinkage of the regression coefficients to suitably change over time and an efficient Markov chain Monte Carlo method for posterior inference is described. The new method is applied to two forecasting problems in econometrics: equity premium prediction and inflation forecasting. The results show that this method outperforms current competing Bayesian methods.

    Kolossiatis, Michalis and Griffin, Jim E. and Steel, Mark F.J. (2013) On Bayesian nonparametric modelling of two correlated distributions. Statistics and Computing, 23 (1). pp. 1-15. ISSN 0960-3174.


    In this paper, we consider the problem of modelling a pair of related distributions using Bayesian nonparametric methods. A representation of the distributions as weighted sums of distributions is derived through normalisation. This allows us to define several classes of nonparametric priors. The properties of these distributions are explored and efficient Markov chain Monte Carlo methods are developed. The methodology is illustrated on simulated data and an example concerning hospital efficiency measurement.

    Griffin, Jim E. and Kolossiatis, Michalis and Steel, Mark F.J. (2013) Comparing distributions by using dependent normalized random-measure mixtures. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3). pp. 499-529. ISSN 1369-7412.


    A methodology for the simultaneous Bayesian nonparametric modelling of several distributions is developed. Our approach uses normalized random measures with independent increments and builds dependence through the superposition of shared processes. The properties of the prior are described and the modelling possibilities of this framework are explored in some detail. Efficient slice sampling methods are developed for inference. Various posterior summaries are introduced which allow better understanding of the differences between distributions. The methods are illustrated on simulated data and examples from survival analysis and stochastic frontier analysis.

    Griffin, Jim E. and Walker, Stephen G. (2013) On adaptive Metropolis-Hastings methods. Statistics and Computing, 23 (1). pp. 123-134. ISSN 0960-3174.


    This paper presents a method for adaptation in Metropolis–Hastings algorithms. A product of a proposal density and K copies of the target density is used to define a joint density which is sampled by a Gibbs sampler including a Metropolis step. This provides a framework for adaptation since the current value of all K copies of the target distribution can be used in the proposal distribution. The methodology is justified by standard Gibbs sampling theory and generalizes several previously proposed algorithms. It is particularly suited to Metropolis-within-Gibbs updating and we discuss the application of our methods in this context. The method is illustrated with both a Metropolis–Hastings independence sampler and a Metropolis-within-Gibbs independence sampler. Comparisons are made with standard adaptive Metropolis–Hastings methods.
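A toy version of the K-copies idea can be sketched as follows, assuming a one-dimensional target and a normal independence proposal whose mean and variance are recomputed from the other copies. This is a deliberate simplification of the paper's general framework (the target, the proposal family, and the variance inflation constant are all choices made here for illustration); the step is still valid MH-within-Gibbs because copy k's proposal never depends on copy k's current value.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    """Toy target: equal mixture of N(3, 1) and N(-3, 1), unnormalised."""
    return np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)

def k_copy_sampler(K=10, n_iter=3000):
    """Independence sampler adapted through K coupled copies of the target."""
    xs = rng.normal(size=K)
    draws = []
    for _ in range(n_iter):
        for k in range(K):
            others = np.delete(xs, k)
            m, s = others.mean(), others.std() + 2.0  # inflate to keep proposal wide
            prop = rng.normal(m, s)
            # Independence-sampler ratio: target ratio times reversed proposal ratio
            log_alpha = (log_target(prop) - log_target(xs[k])
                         - 0.5 * ((xs[k] - m) / s) ** 2
                         + 0.5 * ((prop - m) / s) ** 2)
            if np.log(rng.uniform()) < log_alpha:
                xs[k] = prop
        draws.append(xs.copy())
    return np.concatenate(draws[n_iter // 2:])   # discard burn-in

samples = k_copy_sampler()
print(samples.mean(), samples.std())
```

When the copies straddle both modes, the adapted proposal covers both, so the sampler moves between modes far more readily than a fixed narrow proposal would.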

    Griffin, Jim E. and Delatola, Eleni-Ioanna (2013) A Bayesian semiparametric model for volatility with a leverage effect. Computational Statistics and Data Analysis, 60 (1). pp. 97-110. ISSN 0167-9473.


    A Bayesian semiparametric stochastic volatility model for financial data is developed. This nonparametrically estimates the return distribution from the data allowing for stylized facts such as heavy tails of the distribution of returns whilst also allowing for correlation between the returns and changes in volatility, which is usually termed the leverage effect. An efficient MCMC algorithm is described for inference. The model is applied to simulated data and two real data sets. The results of fitting the model to these data show that choosing a parametric return distribution can have a substantial effect on inference about the leverage effect.

    Griffin, Jim E. and Brown, Philip J. (2013) Some Priors for Sparse Regression Modelling. Bayesian Analysis, 8 (3). pp. 691-702. ISSN 1936-0975.


    A wide range of methods, Bayesian and others, tackle regression when there are many variables. In the Bayesian context, the prior is constructed to reflect ideas of variable selection and to encourage appropriate shrinkage. The prior needs to be reasonably robust to different signal-to-noise structures. Two simple evergreen prior constructions stem from ridge regression on the one hand and g-priors on the other. We seek to embed recent ideas about sparsity of the regression coefficients and robustness into these priors. We also explore the gains that can be expected from these differing approaches.

    Lamnisos, Demetris and Griffin, Jim E. and Steel, Mark F.J. (2013) Adaptive Monte Carlo for Bayesian Variable Selection in Regression Models. Journal of Computational and Graphical Statistics, 22 (3). pp. 729-748. ISSN 1061-8600.


    This article describes methods for efficient posterior simulation for Bayesian variable selection in generalized linear models with many regressors but few observations. The algorithms use a proposal on model space that contains a tuneable parameter. An adaptive approach to choosing this tuning parameter is described that allows automatic, efficient computation in these models. The method is applied to examples from normal linear and probit regression. Relevant code and datasets are posted online as supplementary materials.

    Lamnisos, Demetris and Griffin, Jim E. and Steel, Mark F.J. (2012) Cross-validation prior choice in Bayesian probit regression with many covariates. Statistics and Computing, 22 (2). pp. 359-373. ISSN 0960-3174.


    This paper examines prior choice in probit regression through a predictive cross-validation criterion. In particular, we focus on situations where the number of potential covariates is far larger than the number of observations, such as in gene expression data. Cross-validation avoids the tendency of such models to fit perfectly. We choose the scale parameter c in the standard variable selection prior as the minimizer of the log predictive score. Naive evaluation of the log predictive score requires substantial computational effort, and we investigate computationally cheaper methods using importance sampling. We find that K-fold importance densities perform best, in combination with either mixing over different values of c or with integrating over c through an auxiliary distribution.

    Griffin, Jim E. and Brown, Philip J. (2012) Structuring Shrinkage: Some Correlated Priors for Regression. Biometrika, 99 (2). pp. 481-487. ISSN 1464-3510.


    This paper develops a rich class of sparsity priors for regression effects that encourage shrinkage of both regression effects and contrasts between effects to zero whilst leaving sizeable real effects largely unshrunk. The construction of these priors uses some properties of normal-gamma distributions to include design features in the prior specification, but has general relevance to any continuous sparsity prior. Specific prior distributions are developed for serial dependence between regression effects and correlation within groups of regression effects.

    Kirk, Paul and Griffin, Jim E. and Savage, Richard S. et al. (2012) Bayesian correlated clustering of integrated multiple datasets. Bioinformatics, 28 (24). pp. 3290-3297. ISSN 1367-4803.


    Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods.

    Kalli, Maria and Griffin, Jim E. and Walker, Stephen G. (2011) Slice Sampling Mixture Models. Statistics and Computing, 21 (1). pp. 93-105. ISSN 0960-3174.


    We propose a more efficient version of the slice sampler for Dirichlet process mixture models described by Walker (Commun. Stat., Simul. Comput. 36:45–54, 2007). This new sampler allows for the fitting of infinite mixture models with a wide range of prior specifications. To illustrate this flexibility we consider priors defined through infinite sequences of independent positive random variables. Two applications are considered: density estimation using mixture models and hazard function estimation. In each case we show how the slice-efficient sampler can be applied to make inference in the models. In the mixture case, two submodels are studied in detail. The first one assumes that the positive random variables are Gamma distributed and the second assumes that they are inverse-Gaussian distributed. Both priors have two hyperparameters and we consider their effect on the prior distribution of the number of occupied clusters in a sample. Extensive computational comparisons with alternative “conditional” simulation techniques for mixture models using the standard Dirichlet process prior and our new priors are made. The properties of the new priors are illustrated on a density estimation problem.
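The truncation idea at the heart of slice sampling for stick-breaking mixtures can be sketched briefly. A latent slice level u restricts attention to components with weight above u; since the remaining stick length bounds every weight not yet generated, only finitely many sticks need to be broken. This minimal sketch assumes Dirichlet process sticks, i.e. Beta(1, α) fractions; the full conditional updates for allocations and atoms are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def stick_breaking(alpha, u_min):
    """Break sticks w_j = v_j * prod(1 - v_l) until the remainder drops below u_min.

    Every weight not yet generated is smaller than the remaining stick length,
    so once that remainder falls below the slice level no further component
    can be selected and generation may stop.
    """
    weights, remaining = [], 1.0
    while remaining > u_min:
        v = rng.beta(1.0, alpha)          # DP stick fraction
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return np.array(weights)

# Slice level as in the sampler: u ~ Uniform(0, w_z) for an allocated observation
u = 0.01
w = stick_breaking(alpha=1.0, u_min=u)
active = np.where(w > u)[0]               # finite set of candidate components
print(len(w), len(active))
```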

    Griffin, Jim E. and Walker, Stephen G. (2011) Posterior Simulation of Normalized Random Measure Mixtures. Journal of Computational and Graphical Statistics, 20 (1). pp. 241-259. ISSN 1061-8600.


    This article describes posterior simulation methods for mixture models whose mixing distribution has a Normalized Random Measure prior. The methods use slice sampling ideas and introduce no truncation error. The approach can be easily applied to both homogeneous and nonhomogeneous Normalized Random Measures and allows the updating of the parameters of the random measure. The methods are illustrated on data examples using both Dirichlet and Normalized Generalized Gamma process priors. In particular, the methods are shown to be computationally competitive with previously developed samplers for Dirichlet process mixture models. Matlab code to implement these methods is available as supplemental material.

    Griffin, Jim E. and Oomen, Roel C. A. (2011) Covariance measurement in the presence of non-synchronous trading and market microstructure noise. Journal of Econometrics, 160 (1). pp. 58-68. ISSN 0304-4076.


    This paper studies the problem of covariance estimation when prices are observed non-synchronously and contaminated by i.i.d. microstructure noise. We derive closed form expressions for the bias and variance of three popular covariance estimators, namely realised covariance, realised covariance plus lead and lag adjustments, and the Hayashi and Yoshida estimator, and present a comprehensive investigation into their properties and relative efficiency. Our main finding is that the ordering of the covariance estimators in terms of efficiency crucially depends on the level of microstructure noise, as well as the level of correlation. In fact, for sufficiently high levels of noise, the standard realised covariance estimator (without any corrections for non-synchronous trading) can be most efficient. We also propose a sparse sampling implementation of the Hayashi and Yoshida estimator, study the robustness of our findings using simulations with stochastic volatility and correlation, and highlight some important practical considerations.
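The Hayashi and Yoshida estimator referenced above has a simple form: it sums products of returns from the two assets whenever their observation intervals overlap, so no synchronisation grid is needed. A minimal sketch (quadratic-time for clarity; the data here are made up):

```python
import numpy as np

def hayashi_yoshida(times_x, prices_x, times_y, prices_y):
    """Hayashi-Yoshida covariance: sum r_i^X * r_j^Y over overlapping intervals."""
    rx = np.diff(np.asarray(prices_x, dtype=float))
    ry = np.diff(np.asarray(prices_y, dtype=float))
    cov = 0.0
    for i in range(len(rx)):
        a0, a1 = times_x[i], times_x[i + 1]
        for j in range(len(ry)):
            b0, b1 = times_y[j], times_y[j + 1]
            if max(a0, b0) < min(a1, b1):   # return intervals overlap
                cov += rx[i] * ry[j]
    return cov

# Synchronous observations: reduces to the ordinary realised covariance
print(hayashi_yoshida([0, 1, 2, 3], [0.0, 1.0, 1.0, 2.0],
                      [0, 1, 2, 3], [0.0, 2.0, 1.0, 3.0]))
```

With synchronous sampling only the diagonal terms overlap, so the estimator collapses to the realised covariance; with asynchronous trades, every overlapping pair contributes.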

    Kolossiatis, Michalis and Griffin, Jim E. and Steel, Mark F.J. (2011) Modeling overdispersion with the Normalized Tempered Stable distribution. Computational Statistics and Data Analysis, 55 (7). pp. 2288-2301. ISSN 0167-9473.


    A multivariate distribution which generalizes the Dirichlet distribution is introduced and its use for modeling overdispersion in count data is discussed. The distribution is constructed by normalizing a vector of independent tempered stable random variables. General formulae for all moments and cross-moments of the distribution are derived and they are found to have similar forms to those for the Dirichlet distribution. The univariate version of the distribution can be used as a mixing distribution for the success probability of a binomial distribution to define an alternative to the well-studied beta-binomial distribution. Examples of fitting this model to simulated and real data are presented.

    Griffin, Jim E. and Steel, Mark F.J. (2011) Stick-Breaking Autoregressive Processes. Journal of Econometrics, 162 (2). pp. 383-396. ISSN 0304-4076.


    This paper considers the problem of defining a time-dependent nonparametric prior for use in Bayesian nonparametric modelling of time series. A recursive construction allows the definition of priors whose marginals have a general stick-breaking form. The processes with Poisson–Dirichlet and Dirichlet process marginals are investigated in some detail. We develop a general conditional Markov Chain Monte Carlo (MCMC) method for inference in the wide subclass of these models where the parameters of the marginal stick-breaking process are nondecreasing sequences. We derive a generalised Pólya urn scheme type representation of the Dirichlet process construction, which allows us to develop a marginal MCMC method for this case. We apply the proposed methods to financial data to develop a semi-parametric stochastic volatility model with a time-varying nonparametric returns distribution. Finally, we present two examples concerning the analysis of regional GDP and its growth.

    Griffin, Jim E. (2011) The Ornstein-Uhlenbeck Dirichlet Process and other time-varying processes for Bayesian nonparametric inference. Journal of Statistical Planning and Inference, 141 (11). pp. 3648-3664. ISSN 0378-3758.


    This paper introduces a new class of time-varying, measure-valued stochastic processes for Bayesian nonparametric inference. The class of priors is constructed by normalising a stochastic process derived from non-Gaussian Ornstein-Uhlenbeck processes and generalises the class of normalised random measures with independent increments from static problems. Some properties of the normalised measure are investigated. A particle filter and MCMC schemes are described for inference. The methods are applied to an example in the modelling of financial data.

    Griffin, Jim E. (2011) Inference in Infinite Superpositions of Non-Gaussian Ornstein–Uhlenbeck Processes Using Bayesian Nonparametric Methods. Journal of Financial Econometrics, 9 (3). pp. 519-549. ISSN 1479-8409.


    This paper describes a Bayesian nonparametric approach to volatility estimation. Volatility is assumed to follow a superposition of an infinite number of Ornstein–Uhlenbeck processes driven by a compound Poisson process with a parametric or nonparametric jump size distribution. This model allows a wide range of possible dependencies and marginal distributions for volatility. The properties of the model and prior specification are discussed, and a Markov chain Monte Carlo algorithm for inference is described. The model is fitted to daily returns of four indices: the Standard and Poor's 500, the NASDAQ 100, the FTSE 100, and the Nikkei 225.

    Griffin, Jim E. (2011) Bayesian clustering of distributions in stochastic frontier analysis. Journal of Productivity Analysis, 36 (3). pp. 275-283. ISSN 0895-562X.


    In stochastic frontier analysis, firm-specific efficiencies and their distribution are often main variables of interest. If firms fall into several groups, it is natural to allow each group to have its own distribution. This paper considers a method for nonparametrically modelling these distributions using Dirichlet processes. A common problem when applying nonparametric methods to grouped data is small sample sizes for some groups, which can lead to poor inference. Methods that allow dependence between each group’s distribution are one set of solutions. The proposed model clusters the groups and assumes that the unknown distributions for the groups within a cluster are the same. These clusters are inferred from the data. Markov chain Monte Carlo methods are necessary for model-fitting and efficient methods are described. The model is illustrated on a cost frontier application to US hospitals.

    Delatola, Eleni-Ioanna and Griffin, Jim E. (2011) Bayesian Nonparametric Modelling of the Return Distribution with Stochastic Volatility. Bayesian Analysis, 6 (4). pp. 901-926. ISSN 1936-0975.


    This paper presents a method for Bayesian nonparametric analysis of the return distribution in a stochastic volatility model. The distribution of the logarithm of the squared return is flexibly modelled using an infinite mixture of Normal distributions. This allows efficient Markov chain Monte Carlo methods to be developed. Links between the return distribution and the distribution of the logarithm of the squared returns are discussed. The method is applied to simulated data, one asset return series and one stock index return series. We find that estimates of volatility using the model can differ dramatically from those using a Normal return distribution if there is evidence of a heavy-tailed return distribution.

    Griffin, Jim E. and Brown, Philip J. (2011) Bayesian hyper-lassos with non-convex penalization. Australian and New Zealand Journal of Statistics, 53 (4). pp. 423-442. ISSN 1369-1473.


    The Lasso has sparked interest in the use of penalization of the log-likelihood for variable selection, as well as for shrinkage. We are particularly interested in the more-variables-than-observations case of characteristic importance for modern data. The Bayesian interpretation of the Lasso as the maximum a posteriori estimate of the regression coefficients, which have been given independent, double exponential prior distributions, is adopted. Generalizing this prior provides a family of hyper-Lasso penalty functions, which includes the quasi-Cauchy distribution of Johnstone and Silverman as a special case. The properties of this approach, including the oracle property, are explored, and an EM algorithm for inference in regression problems is described. The posterior is multi-modal, and we suggest a strategy of using a set of perfectly fitting random starting values to explore modes in different regions of the parameter space. Simulations show that our procedure provides significant improvements on a range of established procedures, and we provide an example from chemometrics.

    Griffin, Jim E. and Steel, Mark F.J. (2010) Bayesian Nonparametric Modelling with the Dirichlet Process Regression Smoother. Statistica Sinica, 20 (4). pp. 1507-1527. ISSN 1017-0405.

    Griffin, Jim E. (2010) Default priors for density estimation with mixture models. Bayesian Analysis, 5 (1). pp. 45-64. ISSN 1931-6690.


    The infinite mixture of normals model has become a popular method for density estimation problems. This paper proposes an alternative hierarchical model that leads to hyperparameters that can be interpreted as the location, scale and smoothness of the density. The priors on other parts of the model have little effect on the density estimates and can be given default choices. Automatic Bayesian density estimation can be implemented by using uninformative priors for location and scale and default priors for the smoothness. The performance of these methods for density estimation is compared to previously proposed default priors for four data sets.

    Griffin, Jim E. and Brown, Philip J. (2010) Inference with normal-gamma prior distributions in regression problems. Bayesian Analysis, 5 (1). pp. 171-188. ISSN 1936-0975.


    This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken version of the least squares estimate where the shrinkage matrix depends on the derivatives of the prior predictive density of the least squares estimate. The special case of the normal-gamma prior, which generalizes the Bayesian Lasso (Park and Casella 2008), is studied in depth. We discuss the prior interpretation and the posterior effects of hyperparameter choice and suggest a data-dependent default prior. Simulations and a chemometric example are used to compare the performance of the normal-gamma and the Bayesian Lasso in terms of out-of-sample predictive performance.
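The normal-gamma prior discussed above is a scale mixture of normals: each coefficient has a Gamma-distributed variance, and small values of the Gamma shape concentrate prior mass near zero while keeping heavy tails. A minimal sketch of drawing from the prior, using the usual shape/rate convention psi_j ~ Gamma(lambda, rate = 1/(2*gamma^2)) (this parameterisation should be checked against the paper before reuse):

```python
import numpy as np

rng = np.random.default_rng(3)

def normal_gamma_draws(lam, gamma2, size):
    """Normal-gamma prior via its scale-mixture representation:
    psi_j ~ Gamma(shape=lam, scale=2*gamma2); beta_j | psi_j ~ N(0, psi_j)."""
    psi = rng.gamma(shape=lam, scale=2.0 * gamma2, size=size)
    return rng.normal(0.0, np.sqrt(psi))

# Small shape lam concentrates mass near zero (sparsity) with heavy tails;
# large lam behaves much more like a plain normal prior.
sparse = normal_gamma_draws(lam=0.1, gamma2=1.0, size=100_000)
diffuse = normal_gamma_draws(lam=10.0, gamma2=1.0, size=100_000)
print(np.mean(np.abs(sparse) < 0.05), np.mean(np.abs(diffuse) < 0.05))
```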

    Savage, Richard S. and Ghahramani, Zoubin and Griffin, Jim E. et al. (2010) Discovering Transcriptional Modules from Bayesian Data Fusion. Bioinformatics, 26 (12). pp. 1158-1167. ISSN 1367-4803.


    Motivation: We present a method for directly inferring transcriptional modules (TMs) by integrating gene expression and transcription factor binding (ChIP-chip) data. Our model extends a hierarchical Dirichlet process mixture model to allow data fusion on a gene-by-gene basis. This encodes the intuition that co-expression and co-regulation are not necessarily equivalent and hence we do not expect all genes to group similarly in both datasets. In particular, it allows us to identify the subset of genes that share the same structure of transcriptional modules in both datasets. Results: We find that by working on a gene-by-gene basis, our model is able to extract clusters with greater functional coherence than existing methods. By combining gene expression and transcription factor binding (ChIP-chip) data in this way, we are better able to determine the groups of genes that are most likely to represent underlying TMs.

    Griffin, Jim E. and Steel, Mark F.J. (2010) Bayesian inference with stochastic volatility models using continuous superpositions of non-Gaussian Ornstein-Uhlenbeck processes. Computational Statistics and Data Analysis, 54. pp. 2594-2608. ISSN 0167-9473.


    Continuous superpositions of Ornstein–Uhlenbeck processes are proposed as a model for asset return volatility. An interesting class of continuous superpositions is defined by a Gamma mixing distribution which can define long memory processes. In contrast, previously studied discrete superpositions cannot generate this behaviour. Efficient Markov chain Monte Carlo methods for Bayesian inference are developed which allow the estimation of such models with leverage effects. The continuous superposition model is applied to both stock index and exchange rate data, and is compared with a two-component superposition on the daily Standard and Poor’s 500 index from 1980 to 2000.

    Lamnisos, Demetris and Griffin, Jim E. and Steel, Mark F.J. (2009) Transdimensional Sampling Algorithms for Bayesian Variable Selection in Classification Problems With Many More Variables Than Observations. Journal of Computational and Graphical Statistics, 18 (3). pp. 592-612. ISSN 1061-8600.


    Model search in probit regression is often conducted by simultaneously exploring the model and parameter space, using a reversible jump MCMC sampler. Standard samplers often have low model acceptance probabilities when there are many more regressors than observations. Implementing recent suggestions in the literature leads to much higher acceptance rates. However, high acceptance rates are often associated with poor mixing of chains. Thus, we design a more general model proposal that allows us to propose models “further” from our current model. This proposal can be tuned to achieve a suitable acceptance rate for good mixing. The effectiveness of this proposal is linked to the form of the marginalization scheme when updating the model and we propose a new efficient implementation of the automatic generic transdimensional algorithm of Green (2003). We also implement other previously proposed samplers and compare the efficiency of all methods on some gene expression datasets. Finally, the results of these applications lead us to propose guidelines for choosing between samplers. Relevant code and datasets are posted as an online supplement.

    Griffin, Jim E. and Steel, Mark F.J. (2008) Flexible mixture modelling of stochastic frontiers. Journal of Productivity Analysis, 29 (1). pp. 33-50. ISSN 0895-562X.


    This paper introduces new and flexible classes of inefficiency distributions for stochastic frontier models. We consider both generalized gamma distributions and mixtures of generalized gamma distributions. These classes cover many interesting cases and accommodate both positively and negatively skewed composed error distributions. Bayesian methods allow for useful inference with carefully chosen prior distributions. We recommend a two-component mixture model where a sensible amount of structure is imposed through the prior to distinguish the components, which are given an economic interpretation. This setting allows for efficiencies to depend on firm characteristics, through the probability of belonging to either component. Issues of label-switching and separate identification of both the measurement and inefficiency errors are also examined. Inference methods through MCMC with partial centring are outlined and used to analyse both simulated and real data. An illustration using hospital cost data is discussed in some detail.

    Griffin, Jim E. and Oomen, Roel C. A. (2008) Sampling Returns for Realized Variance Calculations: Tick Time or Transaction Time? Econometric Reviews, 27 (1-3). pp. 230-253. ISSN 0747-4938.


    This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time where prices are sampled with every transaction and tick time where prices are sampled with every price change. Both sampling schemes have been used in the literature on realized variance, but a formal investigation into their properties has been lacking. Our empirical and theoretical results indicate that the return dynamics in transaction time are very different from those in tick time and the choice of sampling scheme can therefore have an important impact on the properties of realized variance. For realized variance (RV) we find that tick time sampling is superior to transaction time sampling in terms of mean-squared-error, especially when the level of noise, number of ticks, or the arrival frequency of efficient price moves is low. Importantly, we show that while the microstructure noise may appear close to IID in transaction time, in tick time it is highly dependent. As a result, bias correction procedures that rely on the noise being independent can fail in tick time and are better implemented in transaction time.
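The two sampling clocks contrasted in this abstract are easy to illustrate: tick time discards observations where the price does not change, so sampling every k-th observation gives different returns, and hence a different realized variance, under the two clocks. This is a toy sketch on made-up prices; real applications would use transaction-level data and the noise corrections the paper studies.

```python
import numpy as np

def realized_variance(prices):
    """Sum of squared returns of a price series."""
    return float(np.sum(np.diff(np.asarray(prices, dtype=float)) ** 2))

def tick_time_prices(prices):
    """Tick-time sampling: keep only observations where the price changes."""
    prices = np.asarray(prices, dtype=float)
    keep = np.concatenate(([True], np.diff(prices) != 0))
    return prices[keep]

def sparse_rv(prices, k):
    """Realized variance from every k-th observation (sparse sampling)."""
    return realized_variance(np.asarray(prices, dtype=float)[::k])

# Made-up transaction prices with repeated (zero-return) observations
trades = [100.0, 100.0, 100.0, 100.1, 100.1, 100.2, 100.2, 100.2, 100.1]
print(sparse_rv(trades, 2))                    # transaction-time clock
print(sparse_rv(tick_time_prices(trades), 2))  # tick-time clock
```

At full frequency the two clocks agree (zero returns contribute nothing to RV); the difference appears once returns are aggregated by sampling every k-th observation in each clock.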

    Griffin, Jim E. and Steel, Mark F.J. (2007) Bayesian Stochastic Frontier Analysis Using WinBUGS. Journal of Productivity Analysis, 27 (3). pp. 163-176. ISSN 0895-562X.


    Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for cross-sectional and panel data are presented and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and model changes are easily implemented. Although WinBUGS may not be that efficient for more complicated models, it does make Bayesian inference with stochastic frontier models easily accessible for applied researchers and its generic structure allows for a lot of flexibility in model specification.

    Griffin, Jim E. and Steel, Mark F.J. (2006) Order-Based Dependent Dirichlet Processes. Journal of the American Statistical Association, 101 (473). pp. 179-194. ISSN 0162-1459.


    In this article we propose a new framework for Bayesian nonparametric modeling with continuous covariates. In particular, we allow the nonparametric distribution to depend on covariates through ordering the random variables building the weights in the stick-breaking representation. We focus mostly on the class of random distributions that induces a Dirichlet process at each covariate value. We derive the correlation between distributions at different covariate values and use a point process to implement a practically useful type of ordering. Two main constructions with analytically known correlation structures are proposed. Practical and efficient computational methods are introduced. We apply our framework, through mixtures of these processes, to regression modeling, the modeling of stochastic volatility in time series data, and spatial geostatistical modeling.

    Griffin, Jim E. and Steel, Mark F.J. (2006) Inference with non-Gaussian Ornstein–Uhlenbeck processes for stochastic volatility. Journal of Econometrics, 134 (2). pp. 605-644. ISSN 0304-4076.


    Continuous-time stochastic volatility models are becoming an increasingly popular way to describe moderate and high-frequency financial data. Barndorff-Nielsen and Shephard (2001a) proposed a class of models where the volatility behaves according to an Ornstein–Uhlenbeck (OU) process, driven by a positive Lévy process without Gaussian component. These models introduce discontinuities, or jumps, into the volatility process. They also considered superpositions of such processes; we extend this to include a jump component in the returns. In addition, we allow for leverage effects and we introduce separate risk pricing for the volatility components. We design and implement practically relevant inference methods for such models, within the Bayesian paradigm. The algorithm is based on Markov chain Monte Carlo (MCMC) methods and we use a series representation of Lévy processes. MCMC methods for such models are complicated by the fact that parameter changes will often induce a change in the distribution of the representation of the process, and by the associated problem of overconditioning. We avoid this problem by means of dependent thinning methods. An application to stock price data shows that the models perform very well, even in the face of data with rapid changes, especially if a superposition of processes with different risk premiums and a leverage effect is used.
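A non-Gaussian OU volatility process of the kind described above can be illustrated with a crude discretisation. This sketch approximates the driving Lévy process by a compound Poisson subordinator with exponential jumps (the Gamma-OU case); it is not the exact series-representation sampler used in the paper, and all names and parameter values are illustrative:

```python
import numpy as np

def gamma_ou_volatility(lam, a, b, dt, n_steps, rng):
    """Discretised OU volatility sigma2 with exponential decay at rate lam,
    fed by Exponential(b) jumps arriving at rate a*lam (a rough sketch)."""
    sigma2 = np.empty(n_steps)
    sigma2[0] = a / b                                # start at the stationary mean
    for t in range(1, n_steps):
        n_jumps = rng.poisson(a * lam * dt)          # jumps in this interval
        jumps = rng.exponential(1.0 / b, size=n_jumps).sum()
        sigma2[t] = np.exp(-lam * dt) * sigma2[t - 1] + jumps
    return sigma2

rng = np.random.default_rng(1)
path = gamma_ou_volatility(lam=0.5, a=2.0, b=4.0, dt=0.1, n_steps=500, rng=rng)
```

The path decays smoothly between jumps and moves only upwards at jump times, which is the discontinuous volatility behaviour the abstract refers to.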

    Griffin, Jim E. and Steel, Mark F.J. (2004) Semiparametric Bayesian inference for stochastic frontier models. Journal of Econometrics, 123 (1). pp. 121-152. ISSN 0304-4076.


    In this paper we propose a semiparametric Bayesian framework for the analysis of stochastic frontiers and efficiency measurement. The distribution of inefficiencies is modelled nonparametrically through a Dirichlet process prior. We suggest prior distributions and implement a Bayesian analysis through an efficient Markov chain Monte Carlo sampler, which allows us to deal with practically relevant sample sizes. We also consider the case where the efficiency distribution varies with firm characteristics. The methodology is applied to a cost frontier, estimated from a panel data set on 382 U.S. hospitals.
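The composed-error structure of a stochastic frontier can be sketched as follows. Here the inefficiency term is exponential purely for illustration; the paper's point is to model its distribution nonparametrically with a Dirichlet process prior, and the function name and defaults are my own:

```python
import numpy as np

def frontier_draw(beta, x, rng, sigma_v=0.1, mean_u=0.2):
    """One draw from a simple stochastic frontier y = x'beta + v - u,
    with Gaussian noise v and a non-negative inefficiency u
    (Exponential here only for illustration)."""
    v = rng.normal(0.0, sigma_v)
    u = rng.exponential(mean_u)
    return x @ beta + v - u, np.exp(-u)   # (log-output, technical efficiency)

rng = np.random.default_rng(2)
beta = np.array([1.0, 0.5])
x = np.array([1.0, 2.0])                  # intercept and one log-input
y, eff = frontier_draw(beta, x, rng)
```

Technical efficiency exp(-u) always lies in (0, 1]; firms sit on or below the frontier, never above it.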

Book Sections
Research Reports

    Kalli, Maria and Griffin, Jim E. and Walker, Stephen G. (2008) Slice Sampling Mixture Models. Centre for Health Services Studies. 23 pp.


    We propose a more efficient version of the slice sampler for Dirichlet process mixture models described by Walker (2007). This sampler allows the fitting of infinite mixture models with a wide range of prior specifications. To illustrate this flexibility we develop a new nonparametric prior for mixture models by normalizing an infinite sequence of independent positive random variables, and show how the slice sampler can be applied to make inference in this model. Two submodels are studied in detail. The first assumes that the positive random variables are Gamma distributed and the second assumes that they are inverse-Gaussian distributed. Both priors have two hyperparameters and we consider their effect on the prior distribution of the number of occupied clusters in a sample. Extensive computational comparisons are made with alternative "conditional" simulation techniques for mixture models using the standard Dirichlet process prior and our new prior. The properties of the new prior are illustrated on a density estimation problem.
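The core slice trick, introducing a uniform latent variable so that only finitely many mixture components need ever be considered, can be sketched as below. This is an illustrative fragment of one allocation step, not the authors' full algorithm, and the hyperparameters are arbitrary:

```python
import numpy as np

# One slice step in a conditional sampler for a stick-breaking mixture.
rng = np.random.default_rng(3)

v = rng.beta(1.0, 1.5, size=50)                        # stick proportions
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

z = 3                                                  # current cluster of one observation
u = rng.uniform(0.0, w[z])                             # slice (latent uniform) variable
active = np.flatnonzero(w > u)                         # finite set of candidate clusters
```

Only the clusters in `active` need their likelihoods evaluated when the allocation is resampled, which is what makes the infinite mixture computationally tractable.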


    Hodges, S.D. and Roberts, G. and Papaspiliopoulos, O. et al. (2001) Non-Gaussian Ornstein-Uhlenbeck-based Models and Some of their Uses in Financial Economics - Discussion. Discussion paper. Blackwell Publishing Ltd. DOI: 10.1111/1467-9868.00282.


    Non-Gaussian processes of Ornstein–Uhlenbeck (OU) type offer the possibility of capturing important distributional deviations from Gaussianity and for flexible modelling of dependence structures. This paper develops this potential, drawing on and extending powerful results from probability theory for applications in statistical analysis. Their power is illustrated by a sustained application of OU processes within the context of finance and econometrics. We construct continuous time stochastic volatility models for financial assets where the volatility processes are superpositions of positive OU processes, and we study these models in relation to financial data and theory.

Conference Items

    Hoggart, C.J. and Griffin, Jim E. (2001) A Bayesian Partition Model for Customer Attrition. In: UNSPECIFIED pp. 223-232.


    This paper presents a nonlinear Bayesian model for covariates in a survival model with a surviving fraction. The work is a direct extension of the cure rate model of Chen et al. (1999). In their model the cure rate depends on the covariates through a generalised linear model. We use a more flexible local model of the covariates, utilizing the Bayesian partition model of Holmes et al. (1999). We apply the model to a large retail banking data set and compare our results with the generalised linear model used by Chen et al. (1999).
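The "surviving fraction" in a cure rate model of the Chen et al. (1999) form can be illustrated directly. In that formulation the population survival function is S(t) = exp(-theta F(t)) for a proper cdf F, so a fraction exp(-theta) of customers never experiences the event. The baseline cdf below is exponential purely for illustration, and the function name is my own:

```python
import numpy as np

def promotion_time_survival(t, theta, rate=1.0):
    """Population survival S(t) = exp(-theta * F(t)) with an
    Exponential(rate) baseline cdf F; the cured fraction is exp(-theta)."""
    F = 1.0 - np.exp(-rate * np.asarray(t, dtype=float))
    return np.exp(-theta * F)

s = promotion_time_survival([0.0, 1.0, 100.0], theta=1.2)
```

The survival curve starts at one, decreases, and plateaus at exp(-theta) rather than at zero; the paper's contribution is letting theta vary flexibly with covariates via a partition model.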

Total publications in KAR: 39 [See all in KAR]
back to top

School of Mathematics, Statistics and Actuarial Science, Cornwallis Building, University of Kent, Canterbury, Kent CT2 7NF.


Last Updated: 20/11/2014