Bayesian 2-LC Fixed Effects Model
Introduction
This article is intended to give the reader basic instructions on how to run an rjags script to perform a Bayesian analysis of diagnostic test accuracy and disease prevalence in the absence of a perfect reference test, using a 2-latent class fixed effects model (Dendukuri and Joseph (2001)). The script is implemented in R using the rjags package, which interfaces with the JAGS (Just Another Gibbs Sampler) software for Bayesian analysis.
The term “2-latent class” refers to the presence of two hidden or latent classes in the data - often referred to in diagnostic test accuracy research as target condition positive and target condition negative.
Conditional dependence among observed diagnostic tests is modeled using the covariance between tests within the target condition positive and target condition negative populations.
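For example, under this parameterization the probability of a patient testing positive on both tests is (this expression corresponds to p12[1] in the model code further below):

\[
P(T_1=1, T_2=1) = \pi\,(se_1 \cdot se_2 + covs_{12}) + (1-\pi)\big((1-sp_1)(1-sp_2) + covc_{12}\big)
\]

where \(\pi\) is the prevalence, and \(covs_{12}\) and \(covc_{12}\) are the covariances between the two tests in the target condition positive and target condition negative populations, respectively.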
An example dataset is provided for the user to familiarize themselves with the script. It is from a study conducted to estimate the prevalence of Strongyloides infection among a group of Cambodian refugees to Canada (Joseph and Coupal (1995)).
Download rjags Script
The full script can be downloaded here.
Script Instructions
Suggested R Packages
Below is a list of packages we recommend installing. Aside from rjags, which is mandatory, the other packages are optional when performing the LC analysis. We do recommend them, as they are used in the script; be aware that some functionalities of the script may not work if you do not install every package listed below.
require(rjags) # Package to run the JAGS model. Mandatory
require(MCMCvis) # Contains the MCMCsummary function used in this script
require(mcmcplots) # Used for the creation of the convergence plots
require(DT) # Allows a nice data display with a search-bar option
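If any of these packages are missing, they can be installed from CRAN in the usual way. Note that the JAGS software itself is not an R package and must be installed separately.
# One-time installation of the R packages used in this script
# (JAGS itself must be installed separately, from https://mcmc-jags.sourceforge.io/)
install.packages(c("rjags", "MCMCvis", "mcmcplots", "DT"))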
Strongyloides Dataset
The Strongyloides dataset is taken from a study conducted to estimate the prevalence of Strongyloides infection among a group of Cambodian refugees to Canada (Joseph and Coupal (1995)). It includes participants with results on 2 diagnostic tests. In terms of notation, we suppose here that the stool examination is the reference test and the serology test is the index test.
- n11 cell = Number of patients positive on both tests
- n10 cell = Number of patients positive on first test (index test) and negative on second test (reference test)
- n01 cell = Number of patients negative on first test (index test) and positive on second test (reference test)
- n00 cell = Number of patients negative on both tests
We recommend saving the Strongyloides dataset as a .txt file named Strongyloides.txt in the same folder as the script. The data can be loaded with the read.table function. The data comprise a single row and 4 columns, whose entries are the numbers of patients falling into each of the 4 categories defined above (n11, n10, n01, n00).
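For reference, the file should contain a header row followed by a single row of counts, along these lines (the counts shown here are hypothetical placeholders, not the study data):
n11 n10 n01 n00
10 20 5 30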
DATA <- read.table("Strongyloides.txt", header=TRUE)
datatable(DATA, extensions = 'AutoFill')#, options = list(autoFill = TRUE))
The data need to be stored in a list object, which we will call dataList. N denotes the total sample size and y the cross-classification of the diagnostic test results given above.
# Cross-classification results of the serology test and stool examination
y <- c(DATA$n11, DATA$n10, DATA$n01, DATA$n00)
# Number of patients
N <- sum(y)
dataList <- list(y=y, N=N)
Bayesian Latent Class Fixed Effects Model
Implementing the Bayesian 2-latent class fixed effects model in rjags involves specifying the priors, the likelihood, and the structure of the latent classes. Markov chain Monte Carlo (MCMC) methods, such as Gibbs sampling, are then employed to estimate the posterior distribution of the model parameters.
The rjags model is saved in the current directory (where your script and data should ideally already be saved) as model.txt. Below is the model, following the JAGS syntax.
modelString <- "model {
#============
# LIKELIHOOD
#============
y[1:4]~dmulti(p12[1:4],N)
# probabilities of observing different cross-classifications of two dichotomous diagnostic tests
p12[1]<- prev*(se[1]*se[2]+covs12)+(1-prev)*((1-sp[1])*(1-sp[2])+covc12)
p12[2]<- prev*(se[1]*(1-se[2])-covs12)+(1-prev)*((1-sp[1])*sp[2]-covc12)
p12[3]<- prev*((1-se[1])*se[2]-covs12)+(1-prev)*(sp[1]*(1-sp[2])-covc12)
p12[4]<- prev*((1-se[1])*(1-se[2])+covs12)+(1-prev)*(sp[1]*sp[2]+covc12)
#=======================================
# upper limits of covariance parameters
#=======================================
us<- min(se[1],se[2])-(se[1]*se[2])
uc<- min(sp[1],sp[2])-(sp[1]*sp[2])
#==============================================
# adjustment of range of covariance parameters
#==============================================
covs12<- u.covs12*us
covc12<- u.covc12*uc
#==================================================
# Prior distributions of prevalence, sensitivities
# and specificities
#==================================================
prev~dbeta(1,1)
se[1]~dbeta(21.96,5.49)
sp[1]~dbeta(4.1,1.76)
se[2]~dbeta(4.44,13.31)
sp[2]~dbeta(71.25,3.75)
#==============================================================
# prior distribution of transformed covariances on (0,1) range
#==============================================================
u.covs12~ dbeta(1,1)
u.covc12~ dbeta(1,1)
}"
writeLines(modelString,con="model.txt")
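As a quick sanity check of the likelihood (an optional sketch that is not part of the original script; the parameter values below are hypothetical), note that the four cell probabilities always sum to 1, since the covariance terms cancel out across cells:
# Verify that the 4 cell probabilities sum to 1 for arbitrary (hypothetical) values
prev <- 0.6; se <- c(0.85, 0.30); sp <- c(0.70, 0.95)
us <- min(se[1], se[2]) - se[1]*se[2]
uc <- min(sp[1], sp[2]) - sp[1]*sp[2]
covs12 <- 0.5*us; covc12 <- 0.5*uc
p12 <- c(prev*(se[1]*se[2]+covs12) + (1-prev)*((1-sp[1])*(1-sp[2])+covc12),
         prev*(se[1]*(1-se[2])-covs12) + (1-prev)*((1-sp[1])*sp[2]-covc12),
         prev*((1-se[1])*se[2]-covs12) + (1-prev)*(sp[1]*(1-sp[2])-covc12),
         prev*((1-se[1])*(1-se[2])+covs12) + (1-prev)*(sp[1]*sp[2]+covc12))
sum(p12) # should print 1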
Prior Distributions
The prior distributions used in the script are inspired by those provided in Dendukuri and Joseph (2001).
A Beta(1,1) prior distribution, which is equivalent to a vague Uniform(0,1) prior, is used for the prevalence parameter.
Beta prior distributions for the sensitivity and specificity are as follows:
For test 1 (index test = serology test)
- \(se[1] \sim Beta(21.96,5.49)\)
- \(sp[1] \sim Beta(4.1,1.76)\)
For test 2 (reference test = stool examination)
- \(se[2] \sim Beta(4.44,13.31)\)
- \(sp[2] \sim Beta(71.25,3.75)\)
The covariance parameters \(covs12\) and \(covc12\) follow a Generalized beta distribution with lower and upper limits determined by the sensitivity and specificity as follows:
- \((se[1]-1)\cdot(1-se[2]) \le covs12 \le min(se[1],se[2]) - se[1] \cdot se[2]\)
- \((sp[1]-1)\cdot(1-sp[2]) \le covc12 \le min(sp[1],sp[2]) - sp[1] \cdot sp[2]\)
This is implemented in our program by creating variables (\(u.covs12\) and \(u.covc12\)) that follow a Beta(1,1) distribution and then transforming them to lie within these limits.
Both covariance lower bounds are truncated at 0 to reflect that the authors were only interested in the situation where the two tests are positively correlated.
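The effect of this transformation can be illustrated with a short standalone simulation (a sketch using hypothetical sensitivity values; not part of the original script): a Beta(1,1) draw rescaled by the upper limit us is uniformly distributed on (0, us).
se <- c(0.85, 0.30)                   # hypothetical sensitivities
us <- min(se[1], se[2]) - se[1]*se[2] # upper limit for covs12
u.covs12 <- rbeta(10000, 1, 1)        # Beta(1,1) draws on (0,1)
covs12 <- u.covs12 * us               # rescaled draws
range(covs12)                         # all values fall within (0, us)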
Initial Values
Initial values are needed as the starting point for estimating and updating the parameters of the model in rjags. We strongly encourage the user to provide their own method of generating initial values rather than relying on rjags to generate them. Initial values can be provided in different ways in rjags. We propose one method below, based on a home-made function that randomly generates initial values from the prior distributions. For more options on how to provide initial values, please see A guide on how to provide initial values in rjags
# Initial values
GenInits <- function(){
  se1 <- rbeta(1,21.96,5.49)
  sp1 <- rbeta(1,4.1,1.76)
  se2 <- rbeta(1,4.44,13.31)
  sp2 <- rbeta(1,71.25,3.75)
  u.covs12 <- rbeta(1,1,1)
  u.covc12 <- rbeta(1,1,1)
  prev <- rbeta(1,1,1)
  se <- c(se1, se2)
  sp <- c(sp1, sp2)
  list(
    se = se,
    sp = sp,
    prev = prev,
    u.covs12 = u.covs12,
    u.covc12 = u.covc12,
    .RNG.name = "base::Wichmann-Hill",
    .RNG.seed = 321
  )
}
Below we use our GenInits function to initialize 3 chains. **We provide a seed value for reproducibility:**
# Initial values
set.seed(123)
initsList <- vector('list', 3)
for(i in 1:3){
  initsList[[i]] <- GenInits()
}
Compiling the model with rjags
We compile the model with the jags.model function.
# Compile the model
jagsModel <- jags.model("model.txt", data=dataList, n.chains=3, n.adapt=0, inits=initsList)
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 1
Unobserved stochastic nodes: 7
Total graph size: 58
Initializing model
Posterior Sampling
The posterior samples for the parameters of the model are obtained by running more than one independent chain, each with its own starting values, so that convergence of the MCMC algorithm can be assessed. Here in the script, we elected to run 3 separate chains.
The posterior sampling step is in fact a 2-part step.
- First, we discard a certain number of iterations with the update function. This step is often referred to as the burn-in step and is needed to prevent the posterior sample from including draws obtained before the algorithm had converged. Here, we elected to discard 5,000 iterations.
- Then, we use the coda.samples function to sample another 5,000 iterations from the posterior distribution. The assembled posterior sample is stored in the output object.
Generally, the number of burn-in and sampling iterations needed will depend on the complexity of the model, the prior distributions, and the quality of the initial values.
#jagsModel$state(internal=FALSE)
# Burn-in iterations
update(jagsModel, n.iter=5000)

# Parameters to be monitored
parameters <- c("se", "sp", "covs12", "covc12", "prev")

# Posterior samples
posterior_results <- coda.samples(jagsModel, variable.names=parameters, n.iter=5000)
output <- posterior_results
Posterior Results
The MCMCsummary function will provide the following posterior statistics:
- the mean,
- the standard deviation (sd),
- the median (50%),
- the 95% credible interval (2.5% and 97.5%).
Convergence statistics are also provided.
Rhat is the Gelman-Rubin statistic (Gelman and Rubin (1992), Brooks and Gelman (1998)). It is enabled when 2 or more chains are generated. It evaluates MCMC convergence by comparing within- and between-chain variability for each model parameter. Rhat tends to 1 as convergence is approached.
n.eff is the effective sample size (Gelman et al. (2013)). Because the MCMC process causes the posterior draws to be correlated, the effective sample size estimates the size of a simple random sample that would achieve the same level of precision. When draws are correlated, the effective sample size will generally be lower than the actual number of draws, resulting in less precise posterior estimates.
res <- MCMCsummary(output, digits=4)
datatable(res, extensions = 'AutoFill')
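The same convergence diagnostics can also be obtained directly from the coda package (loaded along with rjags); a minimal cross-check sketch:
effectiveSize(output) # effective sample size (n.eff) for each parameter
gelman.diag(output)   # Gelman-Rubin statistic (Rhat) for each parameter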
Convergence Diagnostic Plots
Convergence of key parameters can be visually inspected using different tools. We opted to write our own code, as it gives us more flexibility and control over what we want to display. For a given parameter, panel (a) shows the posterior density plot; (b) the running posterior mean; and (c) the history (trace) plot. Each chain is identified by a different color. Similar behavior from all 3 chains would suggest the algorithm has converged. For example, here is the 3-panel plot for the prevalence parameter.
Index Test: SEROLOGY TEST
Sensitivity and Specificity
for(k in 1) {
  for(i in 1:2) {
    # tiff(paste(parameters[i],"[",k,"].tiff",sep=""),width = 23, height = 23, units = "cm", res=200)
    par(oma=c(0,0,3,0))
    layout(matrix(c(1,2,3,3), 2, 2, byrow = TRUE))
    denplot(output, parms=c(paste(parameters[i],"[",k,"]",sep="")), auto.layout=FALSE, main="(a)", xlab=paste(parameters[i],"",sep=""), ylab="Density")
    rmeanplot(output, parms=c(paste(parameters[i],"[",k,"]",sep="")), auto.layout=FALSE, main="(b)")
    title(xlab="Iteration", ylab="Running mean")
    traplot(output, parms=c(paste(parameters[i],"[",k,"]",sep="")), auto.layout=FALSE, main="(c)")
    title(xlab="Iteration", ylab=paste(parameters[i],"[",k,"]",sep=""))
    mtext(paste("Diagnostics for ", parameters[i],"[",k,"]",sep=""), side=3, line=1, outer=TRUE, cex=2)
    # dev.off()
  }
}
Reference Test: STOOL EXAMINATION
Sensitivity and Specificity
for(k in 2) {
  for(i in 1:2) {
    # tiff(paste(parameters[i],"[",k,"].tiff",sep=""),width = 23, height = 23, units = "cm", res=200)
    par(oma=c(0,0,3,0))
    layout(matrix(c(1,2,3,3), 2, 2, byrow = TRUE))
    denplot(output, parms=c(paste(parameters[i],"[",k,"]",sep="")), auto.layout=FALSE, main="(a)", xlab=paste(parameters[i],"",sep=""), ylab="Density")
    rmeanplot(output, parms=c(paste(parameters[i],"[",k,"]",sep="")), auto.layout=FALSE, main="(b)")
    title(xlab="Iteration", ylab="Running mean")
    traplot(output, parms=c(paste(parameters[i],"[",k,"]",sep="")), auto.layout=FALSE, main="(c)")
    title(xlab="Iteration", ylab=paste(parameters[i],"[",k,"]",sep=""))
    mtext(paste("Diagnostics for ", parameters[i],"[",k,"]",sep=""), side=3, line=1, outer=TRUE, cex=2)
    # dev.off()
  }
}
Covariance Parameters (target condition positive)
# Plots to check convergence of the covariance parameter (target condition positive):
for(i in 3) {
  # (device call commented out: result_folder is not defined in this script)
  # jpeg(paste(result_folder,"/",parameters[i],".jpeg",sep=""),width = 23, height = 23, units = "cm", res=200)
  par(oma=c(0,0,3,0))
  layout(matrix(c(1,2,3,3), 2, 2, byrow = TRUE))
  denplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(a)", xlab=paste(parameters[i],"",sep=""), ylab="Density")
  rmeanplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(b)")
  title(xlab="Iteration", ylab="Running mean")
  traplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(c)")
  title(xlab="Iteration", ylab=paste(parameters[i], sep=""))
  mtext(paste("Diagnostics for ", parameters[i], sep=""), side=3, line=1, outer=TRUE, cex=2)
  # dev.off()
}
Covariance Parameters (target condition negative)
# Plots to check convergence of the covariance parameter (target condition negative):
for(i in 4) {
  # (device call commented out: result_folder is not defined in this script)
  # jpeg(paste(result_folder,"/",parameters[i],".jpeg",sep=""),width = 23, height = 23, units = "cm", res=200)
  par(oma=c(0,0,3,0))
  layout(matrix(c(1,2,3,3), 2, 2, byrow = TRUE))
  denplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(a)", xlab=paste(parameters[i],"",sep=""), ylab="Density")
  rmeanplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(b)")
  title(xlab="Iteration", ylab="Running mean")
  traplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(c)")
  title(xlab="Iteration", ylab=paste(parameters[i], sep=""))
  mtext(paste("Diagnostics for ", parameters[i], sep=""), side=3, line=1, outer=TRUE, cex=2)
  # dev.off()
}
Prevalence
for(i in 5) {
  # tiff(paste(parameters[i],".tiff",sep=""),width = 23, height = 23, units = "cm", res=200)
  par(oma=c(0,0,3,0))
  layout(matrix(c(1,2,3,3), 2, 2, byrow = TRUE))
  denplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(a)", xlab=paste(parameters[i],"",sep=""), ylab="Density")
  rmeanplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(b)")
  title(xlab="Iteration", ylab="Running mean")
  traplot(output, parms=c(paste(parameters[i], sep="")), auto.layout=FALSE, main="(c)")
  title(xlab="Iteration", ylab=paste(parameters[i], sep=""))
  mtext(paste("Diagnostics for ", parameters[i], sep=""), side=3, line=1, outer=TRUE, cex=2)
  # dev.off()
}
References
Citation
@online{schiller2023,
author = {Ian Schiller and Nandini Dendukuri},
title = {Bayesian {2-LC} {Fixed} {Effects} {Model}},
date = {2023-11-14},
url = {https://www.nandinidendukuri.com/LCA/Bayesian_2-LC_Fixed_Effects_Models.html/},
langid = {en}
}