Introduction
When doing Bayesian modeling, initial values serve as the starting point for estimating and updating parameters in a statistical model. These values are critical because they can significantly impact the efficiency and accuracy of the Bayesian inference process.
Think of Bayesian modeling as a journey to find the most likely values of parameters given your data and prior beliefs. Initial values are like the starting point on this journey. Choosing appropriate initial values can help you reach your destination (the posterior distribution of parameters) more quickly and accurately.
Selecting poor initial values may lead to slow convergence or even failure to converge in complex models. On the other hand, well-chosen initial values can result in more efficient sampling and a better exploration of the posterior distribution. This can save computational time and resources, making Bayesian modeling more practical and effective.
Therefore, the importance of initial values in Bayesian modeling cannot be overstated. They play a vital role in determining the success and efficiency of the modeling process, ensuring that you arrive at meaningful and accurate posterior estimates.
In this blog article, we provide a guide on how to properly set initial values with rjags.
THE DATA
To illustrate the different ways we can supply initial values in rjags, we will use an example on latent tuberculosis (TB). Patients with latent TB carry live, dormant Mycobacterium tuberculosis organisms despite being asymptomatic. Traditionally, the Tuberculin Skin Test (TST) has been used to screen for latent TB. However, the TST is well known for having poor specificity due to cross-reactivity with BCG vaccination and infection with non-TB mycobacteria. A few years ago, T-cell based interferon-gamma release assays (IGRAs) attracted attention as a more specific alternative to the TST.
Our dataset consists of 719 health care workers in India presumed to have latent TB (Pai (2005)) who were tested with both the TST and the QFT-G assay. The cross-tabulation of the TST and QFT-G results is as follows:
| | TST + | TST - |
|---|---|---|
| QFT-G + | 226 | 62 |
| QFT-G - | 72 | 359 |
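Reading the table row by row gives the four cell counts that will later be passed to JAGS as the data vector t12 (the same dataLIST object is re-used throughout this post):
# Cell counts in the order (QFT-G+/TST+, QFT-G+/TST-, QFT-G-/TST+, QFT-G-/TST-)
t12 = c(226, 62, 72, 359)
N = sum(t12)                       # 719 health care workers in total
dataLIST = list(t12 = t12, N = N)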
THE MODEL
Assuming the TST is an imperfect test, we model the diagnostic accuracy of both the QFT-G and TST tests, as well as the prevalence of latent TB. Informative priors are provided for the TST sensitivity and specificity based on the results of a meta-analysis (Menzies and Comstock (2007)). The rjags model can be written as follows:
modelString =
"model {
#=== LIKELIHOOD ===#
t12[1:4] ~ dmulti(p12[1:4],N)
p12[1]<-prev*(s_T*s_Q)+(1-prev)*((1-c_T)*(1-c_Q));
p12[2]<-prev*((1-s_T)*s_Q)+(1-prev)*(c_T*(1-c_Q));
p12[3]<-prev*(s_T*(1-s_Q))+(1-prev)*((1-c_T)*c_Q);
p12[4]<-prev*((1-s_T)*(1-s_Q))+(1-prev)*(c_T*c_Q);
#=== PRIOR DISTRIBUTIONS ===#
prev~dbeta(1,1)
s_T~dbeta(77.85,15.75)
c_T~dbeta(46.33,10.85)
s_Q~dbeta(1,1)
c_Q~dbeta(1,1)
}"
writeLines(modelString,con="model.txt")
INITIAL VALUES
If we omit initial values
In rjags, the jags.model function is used to compile the model. Initial values can be passed to the function through the argument inits. In the help file for the jags.model function, providing initial values is described as optional: if the inits argument is omitted, initial values are generated automatically by the function.
To verify how rjags handles the generation of initial values, we can compile the model with jags.model while setting the n.adapt argument to zero, i.e. n.adapt=0.
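All code snippets below assume that the rjags package (which requires a working JAGS installation) has been loaded:
library(rjags)   # R interface to JAGS; provides jags.model() and coda.samples()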
dataLIST=list(t12=c(226,62,72,359), N=719)
jagsModel = jags.model("model.txt",data=dataLIST, n.chains=2, n.adapt=0)
The jags.model function returns an object of class jags which contains, among other things, a list for each chain holding the current parameter values of that chain. The call jagsModel$state() will therefore return the initial values generated by JAGS.
inits <- jagsModel$state()
inits.table <- rbind(as.vector(inits[[1]]), as.vector(inits[[2]]))
rownames(inits.table) <- c("Chain 1", "Chain 2")
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.5 | 0.813880585014812 | 0.5 | 0.5 | 0.834098644501192 |
| Chain 2 | 0.5 | 0.813880585014812 | 0.5 | 0.5 | 0.834098644501192 |
We can see above that the initial values generated by JAGS for the first chain are exactly the same as those for the second chain. The JAGS user manual V 4.3.0 (section 3.3.1) states that the value selected by the function is usually the mean, median or mode of the distribution of the stochastic node, and that the same initial values will be used across all chains when running multiple parallel chains. This is problematic, and the JAGS manual acknowledges the issue by promising a fix in a future release. The purpose of running an MCMC algorithm with multiple chains is to check that chains started in different regions of the parameter space (i.e. with different initial values) eventually converge to the same solution. Using multiple chains with the same starting point goes against that idea.
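A quick way to check this directly on the compiled object is to compare the states of the two chains; with the jagsModel object created above, the comparison below should return TRUE:
# TRUE here: both chains were given exactly the same starting values
identical(jagsModel$state()[[1]], jagsModel$state()[[2]])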
Another potential issue is that the way jags.model generates its own initial values is deterministic: if we repeat the call to jags.model below, we obtain exactly the same initial values as above.
dataLIST=list(t12=c(226,62,72,359), N=719)
jagsModel = jags.model("model.txt",data=dataLIST, n.chains=2, n.adapt=0)
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.5 | 0.81388 | 0.5 | 0.5 | 0.8341 |
| Chain 2 | 0.5 | 0.81388 | 0.5 | 0.5 | 0.8341 |
For those reasons, it is good practice to provide your own initial values. Even the JAGS manual advises the user to set the initial values manually.
Initial values supplied as a list of numeric values
It is possible to supply our own initial values as a list of numeric values. Initial values can only be supplied for stochastic nodes, i.e. any parameter defined with the ~ symbol; you will trigger an error message otherwise (a sketch of such a call is given at the end of this subsection). For our example, we have 5 parameters we can initialize: prev, s_T, c_T, s_Q and c_Q. We would trigger an error if we tried to provide initial values for p12, which is defined as a function of the 5 parameters listed above. With that in mind, we can initialize the 5 parameters as follows:
initLIST = list(
prev=0.3,
s_T=0.7,
c_T=0.7,
s_Q=0.8,
c_Q=0.9
)
However, as stated in the help file of the jags.model function, if we run multiple parallel chains, the same list will be re-used for each chain. This means that every chain will have the same starting values, as seen below.
dataLIST=list(t12=c(226,62,72,359), N=719)
jagsModel = jags.model("model.txt",data=dataLIST, n.chains=2, n.adapt=0, inits=initLIST)
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.9 | 0.7 | 0.3 | 0.8 | 0.7 |
| Chain 2 | 0.9 | 0.7 | 0.3 | 0.8 | 0.7 |
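As noted above, supplying an initial value for p12, a deterministic node computed from the other parameters, would fail. A sketch of such a call (not run):
# NOT RUN: p12 is a deterministic (logical) node, so this call would trigger an error
# jagsModel = jags.model("model.txt", data=dataLIST, n.chains=2, n.adapt=0,
#                        inits=list(p12=c(0.25, 0.25, 0.25, 0.25)))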
Initial values supplied as a list of lists
To allow each chain to have its own separate initial values, we need to provide as many lists of initial values as there are chains. For a model with 2 parallel chains, we would create two lists of initial values, one for each chain, and embed both inside a single list. Below, we define the lists init.chain1 and init.chain2, which contain the initial values of chain 1 and chain 2, respectively. Both lists are then grouped inside the initLIST object, which is passed to the inits argument of the jags.model function.
init.chain1 = list(
prev=0.3,
s_T=0.7,
c_T=0.7,
s_Q=0.8,
c_Q=0.9
)
init.chain2 = list(
prev=0.1,
s_T=0.6,
c_T=0.8,
s_Q=0.95,
c_Q=0.95
)
initLIST = list(init.chain1,
init.chain2
)
dataLIST=list(t12=c(226,62,72,359), N=719)
jagsModel = jags.model("model.txt",data=dataLIST, n.chains=2, n.adapt=0, inits=initLIST)
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90 | 0.7 | 0.3 | 0.80 | 0.7 |
| Chain 2 | 0.95 | 0.8 | 0.1 | 0.95 | 0.6 |
Initial values supplied through a function
If we don’t want to subjectively select initial values, we can write a function that randomly generates them. Below is an example of such a function, which we named GenInits, that generates random initial values for the 5 parameters of our model from their respective prior distributions.
GenInits = function() {
prev <- rbeta(1, 1, 1)
s_T <- rbeta(1, 77.85, 15.75)
c_T <- rbeta(1, 46.33, 10.85)
s_Q <- rbeta(1, 1, 1)
c_Q <- rbeta(1, 1, 1)
list(
prev=prev,
s_T=s_T,
c_T=c_T,
s_Q=s_Q,
c_Q=c_Q
)
}
We could then pass the result of a single call to our GenInits function to the inits argument of the jags.model function.
dataLIST=list(t12=c(226,62,72,359), N=719)
jagsModel = jags.model("model.txt",data=dataLIST, n.chains=2, n.adapt=0, inits=GenInits())
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.30371 | 0.83175 | 0.9485 | 0.23408 | 0.84283 |
| Chain 2 | 0.30371 | 0.83175 | 0.9485 | 0.23408 | 0.84283 |
However, the results above show the same issue we encountered when passing a single list to the inits argument: the same initial values are repeated across all chains. Instead, we can call the GenInits function repeatedly, store the resulting values in a list, and then assign that list to the inits argument of the jags.model function.
To do so, let’s start by defining an empty list with as many lists as there will be chains.
num.chains=2
initsList = vector('list',num.chains)
Then we call our GenInits function to fill in both lists.
for(i in 1:num.chains){
initsList[[i]] = GenInits()
}
Finally we pass the initsList object we created to the inits argument of the jags.model function to obtain 2 distinct sets of randomly generated initial values based on the prior distributions.
dataLIST=list(t12=c(226,62,72,359), N=719)
jagsModel = jags.model("model.txt",data=dataLIST, n.chains=2, n.adapt=0, inits=initsList)
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.47931 | 0.82022 | 0.91959 | 0.41632 | 0.87944 |
| Chain 2 | 0.70502 | 0.77072 | 0.53754 | 0.95325 | 0.84077 |
A mixture of the previous methods
We might want to provide our own initial values for certain chains and let our GenInits function generate the initial values for the other chains. For example, suppose we want to manually provide subjective initial values for chain 1 and let GenInits generate random initial values from the prior distributions for chain 2. This can be done by first defining an empty list with as many elements as there will be chains.
num.chains=2
initsList = vector('list',num.chains)
Then we manually assign initial values for the first chain.
initsList[[1]] = list(
prev=0.3,
s_T=0.7,
c_T=0.7,
s_Q=0.8,
c_Q=0.9
)
Finally, we call our GenInits function to fill in the second chain and obtain the desired initial value structure.
initsList[[2]] = GenInits()
dataLIST=list(t12=c(226,62,72,359), N=719)
jagsModel = jags.model("model.txt",data=dataLIST, n.chains=2, n.adapt=0, inits=initsList)
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90000 | 0.70000 | 0.30000 | 0.80000 | 0.70000 |
| Chain 2 | 0.68598 | 0.76034 | 0.05228 | 0.15233 | 0.79608 |
USING A SEED FOR REPRODUCIBILITY
The RNGs used by JAGS are pseudo-random number generators in the sense that they generate a sequence of numbers that looks random but is entirely determined by the initial state.
In certain circumstances it may be important to be able to reproduce the results of a model. The need for reproducibility may arise, among other things, for academic purposes or during model development.
If we want to make the model output reproducible, we can specify a random number generator (RNG) to use for each chain. This can be done by adding .RNG.name and .RNG.seed to the list of initial values.
- .RNG.name: identifies which RNG will be used for the chain. There are four implemented in JAGS; they are listed in the jags.model help file and in the sketch below.
- .RNG.seed: a numeric integer value representing the seed (initial state) needed for reproducibility.
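For reference, the four RNGs implemented in the base module of JAGS are listed below; two of them (Wichmann-Hill and Super-Duper) are used later in this post.
# The four pseudo-random number generators available in the JAGS base module
rng.names = c("base::Wichmann-Hill",
              "base::Marsaglia-Multicarry",
              "base::Super-Duper",
              "base::Mersenne-Twister")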
To achieve the desired result, we need to be careful about how we set up the initial values. For example, if we were running 3 chains but only supplied initial values with .RNG.name and .RNG.seed for a single chain then, as discussed earlier in this article, all 3 chains would have the same initial values. But that’s not all: because .RNG.name and .RNG.seed would also be fixed, all 3 chains would follow the same sequence of pseudo-random numbers and would be virtually identical. This would produce situations where convergence statistics, like the Gelman-Rubin statistic, could not be meaningfully computed despite the presence of multiple chains. It would therefore be important to avoid something like this:
dataLIST=list(t12=c(226,62,72,359), N=719)
jagsModel = jags.model("model.txt",data=dataLIST, n.chains=3, n.adapt=0,
inits=list(prev=0.3, s_T=0.7, c_T=0.7, s_Q=0.8, c_Q=0.9,
.RNG.name="base::Super-Duper", .RNG.seed=99))To avoid that, we must either have different initial values for each chain or different .RNG.name and .RNG.seed for each chain or both.
To illustrate reproducibility, we add .RNG.name and .RNG.seed to our GenInits function. Here the .RNG.name and .RNG.seed are fixed, but each call to our GenInits function will assign a set of initial values for our parameters that should differ from one chain to another, preventing the identical-sequence issue we just mentioned.
GenInits = function() {
prev <- rbeta(1, 1, 1)
s_T <- rbeta(1, 77.85, 15.75)
c_T <- rbeta(1, 46.33, 10.85)
s_Q <- rbeta(1, 1, 1)
c_Q <- rbeta(1, 1, 1)
list(
prev=prev,
s_T=s_T,
c_T=c_T,
s_Q=s_Q,
c_Q=c_Q,
.RNG.name="base::Wichmann-Hill",
.RNG.seed=66
)
}
We must also add .RNG.name and .RNG.seed even if we intend to provide our own initial values. To avoid the pseudo-random number issue mentioned just above, we use a different .RNG.name and .RNG.seed here.
initsList[[1]] = list(
prev=0.3,
s_T=0.7,
c_T=0.7,
s_Q=0.8,
c_Q=0.9,
.RNG.name="base::Super-Duper",
.RNG.seed=99
)
EXAMPLE 1
RUNNING THE MODEL 3 TIMES WITH .RNG.name AND .RNG.seed
To illustrate a reproducible example, we ran our model 3 times with the same initial values and with fixed .RNG.name and .RNG.seed settings (seed 99 for chain 1 and seed 66 inside GenInits). We used 3 parallel chains: the first chain was initialized manually while the two other chains were initialized randomly with our GenInits function, as seen below. The call to set.seed(123) right before calling GenInits() is needed for reproducibility. Note that set.seed would not be needed if we were providing our own initial values for all chains.
num.chains=3
initsList = vector('list',num.chains)
initsList[[1]] = list(
prev=0.3,
s_T=0.7,
c_T=0.7,
s_Q=0.8,
c_Q=0.9,
.RNG.name="base::Super-Duper",
.RNG.seed=99
)
set.seed(123)
for(i in 2:num.chains){
initsList[[i]] = GenInits()
}
As seen below, we obtained the exact same posterior estimates for all 3 runs, confirming that providing the .RNG.name and .RNG.seed arguments for each chain allows for reproducible results.
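The posterior summaries shown in the tables below were obtained by sampling from the compiled model. The exact sampling and summary code is not shown in this post, but a minimal sketch of one way to produce comparable output with the coda package could look like this (the burn-in and iteration counts are assumptions, not the values used here):
library(coda)
update(jagsModel, n.iter=5000)        # burn-in (in practice, also compile with a positive n.adapt)
samples = coda.samples(jagsModel,
                       variable.names=c("prev", "s_T", "c_T", "s_Q", "c_Q"),
                       n.iter=10000)
summary(samples)                      # posterior means, sds and quantiles
gelman.diag(samples)                  # Gelman-Rubin convergence diagnostic (Rhat)
effectiveSize(samples)                # effective sample sizes (n.eff)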
FIRST RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90000 | 0.70000 | 0.30000 | 0.80000 | 0.70000 |
| Chain 2 | 0.44856 | 0.68694 | 0.71242 | 0.47189 | 0.84164 |
| Chain 3 | 0.67208 | 0.87803 | 0.04317 | 0.75391 | 0.81024 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3938 | 0.0328 | 0.3322 | 0.3926 | 0.4612 | 1 | 5202 |
| s_T | 0.8382 | 0.0315 | 0.7773 | 0.8382 | 0.8992 | 1 | 7054 |
| c_T | 0.8560 | 0.0259 | 0.8099 | 0.8542 | 0.9111 | 1 | 5893 |
| s_Q | 0.9343 | 0.0456 | 0.8322 | 0.9413 | 0.9970 | 1 | 4989 |
| c_Q | 0.9454 | 0.0295 | 0.8853 | 0.9463 | 0.9956 | 1 | 6237 |
SECOND RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90000 | 0.70000 | 0.30000 | 0.80000 | 0.70000 |
| Chain 2 | 0.44856 | 0.68694 | 0.71242 | 0.47189 | 0.84164 |
| Chain 3 | 0.67208 | 0.87803 | 0.04317 | 0.75391 | 0.81024 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3938 | 0.0328 | 0.3322 | 0.3926 | 0.4612 | 1 | 5202 |
| s_T | 0.8382 | 0.0315 | 0.7773 | 0.8382 | 0.8992 | 1 | 7054 |
| c_T | 0.8560 | 0.0259 | 0.8099 | 0.8542 | 0.9111 | 1 | 5893 |
| s_Q | 0.9343 | 0.0456 | 0.8322 | 0.9413 | 0.9970 | 1 | 4989 |
| c_Q | 0.9454 | 0.0295 | 0.8853 | 0.9463 | 0.9956 | 1 | 6237 |
THIRD RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90000 | 0.70000 | 0.30000 | 0.80000 | 0.70000 |
| Chain 2 | 0.44856 | 0.68694 | 0.71242 | 0.47189 | 0.84164 |
| Chain 3 | 0.67208 | 0.87803 | 0.04317 | 0.75391 | 0.81024 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3938 | 0.0328 | 0.3322 | 0.3926 | 0.4612 | 1 | 5202 |
| s_T | 0.8382 | 0.0315 | 0.7773 | 0.8382 | 0.8992 | 1 | 7054 |
| c_T | 0.8560 | 0.0259 | 0.8099 | 0.8542 | 0.9111 | 1 | 5893 |
| s_Q | 0.9343 | 0.0456 | 0.8322 | 0.9413 | 0.9970 | 1 | 4989 |
| c_Q | 0.9454 | 0.0295 | 0.8853 | 0.9463 | 0.9956 | 1 | 6237 |
EXAMPLE 2
RUNNING THE MODEL 3 TIMES WITHOUT .RNG.name AND .RNG.seed
Just for the sake of completeness, we also ran our model 3 times with set.seed(123) and with the same initial values, but without .RNG.name and .RNG.seed. As before, we ran the model with 3 parallel chains: the first chain was initialized manually while the two other chains were initialized randomly with our GenInits function after calling set.seed(123). So basically everything was the same, except that we removed any reference to .RNG.name and .RNG.seed.
GenInits = function() {
prev <- rbeta(1, 1, 1)
s_T <- rbeta(1, 77.85, 15.75)
c_T <- rbeta(1, 46.33, 10.85)
s_Q <- rbeta(1, 1, 1)
c_Q <- rbeta(1, 1, 1)
list(
prev=prev,
s_T=s_T,
c_T=c_T,
s_Q=s_Q,
c_Q=c_Q
)
}
num.chains=3
initsList = vector('list',num.chains)
initsList[[1]] = list(
prev=0.3,
s_T=0.7,
c_T=0.7,
s_Q=0.8,
c_Q=0.9
)
set.seed(123)
for(i in 2:num.chains){
initsList[[i]] = GenInits()
}
This time, we did not obtain identical posterior estimates across the 3 runs. The results were similar but not equal, contrary to what you would expect for a reproducible example. This confirms that fixing the seed with set.seed alone is not enough: to get a perfectly reproducible example, we need to include .RNG.name and .RNG.seed when initializing the model.
FIRST RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90000 | 0.70000 | 0.30000 | 0.80000 | 0.70000 |
| Chain 2 | 0.44856 | 0.68694 | 0.71242 | 0.47189 | 0.84164 |
| Chain 3 | 0.67208 | 0.87803 | 0.04317 | 0.75391 | 0.81024 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3947 | 0.0331 | 0.3339 | 0.3934 | 0.4626 | 1 | 4934 |
| s_T | 0.8381 | 0.0315 | 0.7780 | 0.8378 | 0.8992 | 1 | 6996 |
| c_T | 0.8567 | 0.0261 | 0.8097 | 0.8551 | 0.9119 | 1 | 5958 |
| s_Q | 0.9334 | 0.0457 | 0.8319 | 0.9402 | 0.9971 | 1 | 5064 |
| c_Q | 0.9456 | 0.0297 | 0.8856 | 0.9467 | 0.9959 | 1 | 6185 |
SECOND RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90000 | 0.70000 | 0.30000 | 0.80000 | 0.70000 |
| Chain 2 | 0.44856 | 0.68694 | 0.71242 | 0.47189 | 0.84164 |
| Chain 3 | 0.67208 | 0.87803 | 0.04317 | 0.75391 | 0.81024 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3932 | 0.0327 | 0.3316 | 0.3923 | 0.4599 | 1 | 4848 |
| s_T | 0.8388 | 0.0315 | 0.7783 | 0.8384 | 0.9005 | 1 | 6808 |
| c_T | 0.8559 | 0.0258 | 0.8090 | 0.8544 | 0.9099 | 1 | 6066 |
| s_Q | 0.9347 | 0.0453 | 0.8350 | 0.9417 | 0.9972 | 1 | 5047 |
| c_Q | 0.9450 | 0.0298 | 0.8857 | 0.9460 | 0.9956 | 1 | 5937 |
THIRD RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90000 | 0.70000 | 0.30000 | 0.80000 | 0.70000 |
| Chain 2 | 0.44856 | 0.68694 | 0.71242 | 0.47189 | 0.84164 |
| Chain 3 | 0.67208 | 0.87803 | 0.04317 | 0.75391 | 0.81024 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3945 | 0.0331 | 0.3323 | 0.3934 | 0.4617 | 1 | 4972 |
| s_T | 0.8382 | 0.0313 | 0.7781 | 0.8378 | 0.8998 | 1 | 6886 |
| c_T | 0.8566 | 0.0261 | 0.8097 | 0.8549 | 0.9116 | 1 | 5709 |
| s_Q | 0.9339 | 0.0455 | 0.8333 | 0.9406 | 0.9972 | 1 | 4786 |
| c_Q | 0.9457 | 0.0296 | 0.8860 | 0.9468 | 0.9958 | 1 | 6122 |
EXAMPLE 3
RUNNING THE MODEL 3 TIMES WITH .RNG.name AND .RNG.seed AND SELECTED INITIAL VALUES FOR ALL 3 CHAINS
We mentioned above that the set.seed function is not needed for reproducibility when we provide our own initial values for all chains. Let’s do just that. Notice that, although we use the same .RNG.name for every chain, we set a different .RNG.seed for each. Since the initial values already differ across the 3 chains this is not strictly necessary, but it is good practice to avoid the identical pseudo-random number sequence issue mentioned at the start of this section.
num.chains=3
initsList = vector('list',num.chains)
initsList[[1]] = list(
prev=0.3,
s_T=0.7,
c_T=0.7,
s_Q=0.8,
c_Q=0.9,
.RNG.name="base::Wichmann-Hill",
.RNG.seed=66
)
initsList[[2]] = list(
prev=0.15,
s_T=0.8,
c_T=0.8,
s_Q=0.9,
c_Q=0.95,
.RNG.name="base::Wichmann-Hill",
.RNG.seed=99
)
initsList[[3]] = list(
prev=0.4,
s_T=0.6,
c_T=0.6,
s_Q=0.95,
c_Q=0.99,
.RNG.name="base::Wichmann-Hill",
.RNG.seed=33
)
As expected, we again obtained the exact same posterior estimates for all 3 runs, as shown below.
FIRST RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90 | 0.7 | 0.30 | 0.80 | 0.7 |
| Chain 2 | 0.95 | 0.8 | 0.15 | 0.90 | 0.8 |
| Chain 3 | 0.99 | 0.6 | 0.40 | 0.95 | 0.6 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3936 | 0.0330 | 0.3327 | 0.3923 | 0.4616 | 1 | 4959 |
| s_T | 0.8393 | 0.0316 | 0.7786 | 0.8393 | 0.9004 | 1 | 6745 |
| c_T | 0.8562 | 0.0260 | 0.8092 | 0.8545 | 0.9113 | 1 | 6155 |
| s_Q | 0.9338 | 0.0456 | 0.8317 | 0.9405 | 0.9971 | 1 | 5150 |
| c_Q | 0.9444 | 0.0298 | 0.8846 | 0.9454 | 0.9954 | 1 | 5766 |
SECOND RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90 | 0.7 | 0.30 | 0.80 | 0.7 |
| Chain 2 | 0.95 | 0.8 | 0.15 | 0.90 | 0.8 |
| Chain 3 | 0.99 | 0.6 | 0.40 | 0.95 | 0.6 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3936 | 0.0330 | 0.3327 | 0.3923 | 0.4616 | 1 | 4959 |
| s_T | 0.8393 | 0.0316 | 0.7786 | 0.8393 | 0.9004 | 1 | 6745 |
| c_T | 0.8562 | 0.0260 | 0.8092 | 0.8545 | 0.9113 | 1 | 6155 |
| s_Q | 0.9338 | 0.0456 | 0.8317 | 0.9405 | 0.9971 | 1 | 5150 |
| c_Q | 0.9444 | 0.0298 | 0.8846 | 0.9454 | 0.9954 | 1 | 5766 |
THIRD RUN
INITIAL VALUES
| | c_Q | c_T | prev | s_Q | s_T |
|---|---|---|---|---|---|
| Chain 1 | 0.90 | 0.7 | 0.30 | 0.80 | 0.7 |
| Chain 2 | 0.95 | 0.8 | 0.15 | 0.90 | 0.8 |
| Chain 3 | 0.99 | 0.6 | 0.40 | 0.95 | 0.6 |
POSTERIOR ESTIMATES
| | mean | sd | 2.5% | 50% | 97.5% | Rhat | n.eff |
|---|---|---|---|---|---|---|---|
| prev | 0.3936 | 0.0330 | 0.3327 | 0.3923 | 0.4616 | 1 | 4959 |
| s_T | 0.8393 | 0.0316 | 0.7786 | 0.8393 | 0.9004 | 1 | 6745 |
| c_T | 0.8562 | 0.0260 | 0.8092 | 0.8545 | 0.9113 | 1 | 6155 |
| s_Q | 0.9338 | 0.0456 | 0.8317 | 0.9405 | 0.9971 | 1 | 5150 |
| c_Q | 0.9444 | 0.0298 | 0.8846 | 0.9454 | 0.9954 | 1 | 5766 |
REFERENCES
Citation
@online{schiller2023,
author = {Ian Schiller and Nandini Dendukuri},
title = {A Guide on How to Provide Initial Values in Rjags},
date = {2023-10-12},
url = {https://www.nandinidendukuri.com/blogposts/2023-10-12-initial-values-in-rjags/},
langid = {en}
}