Built using Zelig version 5.1.0.90000
Bayesian Multinomial Logistic Regression
Use Bayesian multinomial logistic regression to model unordered categorical variables. The dependent variable may be formatted as either character strings or integer values. The model is estimated via a random walk Metropolis algorithm or a slice sampler. For maximum-likelihood estimation of this model, see the multinomial logit model.
zelig() accepts the following arguments for mlogit.bayes:
baseline
: either a character string or numeric value (equal to one of the observed values in the dependent variable) specifying a baseline category. The default value is NA, which sets the baseline to the first alphabetical or numerical unique value of the dependent variable.
The model accepts the following additional arguments to monitor the Markov chains:
burnin
: number of the initial MCMC iterations to be discarded (defaults to 1,000).
mcmc
: number of the MCMC iterations after burnin (defaults to 10,000).
thin
: thinning interval for the Markov chain. Only every thin-th draw from the Markov chain is kept. The value of mcmc must be divisible by this value. The default value is 1.
mcmc.method
: either “MH” or “slice”, specifying whether to use the Metropolis algorithm or the slice sampler. The default value is MH.
tune
: tuning parameter for the Metropolis-Hastings step, either a scalar or a numeric vector (for \(k\) coefficients, enter a \(k\) vector). The tuning parameter should be set such that the acceptance rate is satisfactory (between 0.2 and 0.5). The default value is 1.1.
verbose
: defaults to FALSE. If TRUE, the progress of the sampler (every \(10\%\)) is printed to the screen.
seed
: seed for the random number generator. The default is NA, which corresponds to a random seed of 12345.
beta.start
: starting values for the Markov chain, either a scalar or a vector (for \(k\) coefficients, enter a \(k\) vector). The default is NA, in which case the maximum likelihood estimates are used as the starting values.
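As an illustration of how burnin, mcmc, and thin interact (a base-R sketch of the arithmetic only, not Zelig's internals): with burnin = 1000, mcmc = 10000, and thin = 5, the sampler runs 11,000 iterations, discards the first 1,000, and then keeps every 5th draw, retaining mcmc / thin = 2,000 draws.

```r
# Sketch of burn-in and thinning arithmetic (illustrative, base R only).
burnin <- 1000
mcmc   <- 10000
thin   <- 5

total_iterations <- burnin + mcmc       # iterations the sampler runs
draws <- seq_len(total_iterations)      # stand-in for the raw chain

# Discard the burn-in, then keep only every `thin`-th draw.
kept <- draws[(burnin + 1):total_iterations]
kept <- kept[seq(1, length(kept), by = thin)]

length(kept)   # mcmc / thin = 2000 retained draws
head(kept)     # 1001, 1006, 1011, ...
```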
Use the following arguments to specify the priors for the model:
b0
: prior mean for the coefficients, either a scalar or vector. If a scalar, that value will be the prior mean for all the coefficients. The default is 0.
B0
: prior precision parameter for the coefficients, either a square matrix with dimensions equal to the number of coefficients or a scalar. If a scalar, that value times an identity matrix will be the prior precision parameter. The default is 0, which leads to an improper prior.
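The scalar-to-matrix convention for B0 described above can be sketched in base R (illustrative only, not Zelig's internal code): a scalar B0 expands to B0 times a \(k \times k\) identity matrix, so each coefficient gets an independent prior with precision B0, i.e. prior variance 1/B0.

```r
# Expand a scalar prior precision into the implied matrix (illustrative).
k  <- 4          # number of coefficients (hypothetical)
B0 <- 0.25       # scalar prior precision

B0_matrix <- B0 * diag(k)        # prior precision matrix: B0 times identity
prior_var <- solve(B0_matrix)    # implied prior variance-covariance matrix

diag(prior_var)  # each coefficient has prior variance 1 / 0.25 = 4
```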
Zelig users may wish to refer to help(MCMCmnl) for more information.
Attaching the sample dataset:
data(mexico)
Estimating multinomial logistic regression using mlogit.bayes:
z.out <- zelig(vote88 ~ pristr + othcok + othsocok,
model = "mlogit.bayes", data = mexico,
verbose = FALSE)
## Calculating MLEs and large sample var-cov matrix.
## This may take a moment...
## Inverting Hessian to get large sample var-cov matrix.
## Warning in if (mcmc.method == "RWM") {: the condition has length > 1 and
## only the first element will be used
## Warning in if (mcmc.method == "IndMH") {: the condition has length > 1 and
## only the first element will be used
You can check for convergence before summarizing the estimates with three diagnostic tests. See the section Diagnostics for Zelig Models for examples of the output with interpretation:
z.out$geweke.diag()
z.out$heidel.diag()
z.out$raftery.diag()
summary(z.out)
## Model:
##
## Iterations = 1001:11000
## Thinning interval = 1
## Number of chains = 1
## Sample size per chain = 10000
##
## 1. Empirical mean and standard deviation for each variable,
## plus standard error of the mean:
##
## Mean SD Naive SE Time-series SE
## (Intercept).2 -2.4837 0.40502 0.0040502 0.0041921
## (Intercept).3 -2.8815 0.40317 0.0040317 0.0043275
## pristr.2 -0.7259 0.09548 0.0009548 0.0010007
## pristr.3 -0.6012 0.09323 0.0009323 0.0009769
## othcok.2 1.1091 0.11462 0.0011462 0.0012004
## othcok.3 1.2495 0.11157 0.0011157 0.0012048
## othsocok.2 0.3521 0.15631 0.0015631 0.0016596
## othsocok.3 0.3021 0.15035 0.0015035 0.0015035
##
## 2. Quantiles for each variable:
##
## 2.5% 25% 50% 75% 97.5%
## (Intercept).2 -3.29435 -2.7551 -2.4789 -2.2124 -1.6941
## (Intercept).3 -3.69192 -3.1436 -2.8763 -2.6069 -2.1031
## pristr.2 -0.91989 -0.7887 -0.7243 -0.6623 -0.5439
## pristr.3 -0.78711 -0.6632 -0.6009 -0.5383 -0.4170
## othcok.2 0.89056 1.0319 1.1066 1.1870 1.3332
## othcok.3 1.02985 1.1770 1.2464 1.3219 1.4777
## othsocok.2 0.05033 0.2450 0.3515 0.4590 0.6604
## othsocok.3 0.01330 0.1992 0.3020 0.4037 0.5960
##
## Next step: Use 'setx' method
Setting values for the explanatory variables to their sample averages:
x.out <- setx(z.out)
Simulating quantities of interest from the posterior distribution given x.out.
s.out1 <- sim(z.out, x = x.out)
summary(s.out1)
plot(s.out1)
Estimating the first difference (and risk ratio) in the probabilities of voting for different candidates when pristr (the strength of the PRI) is set to weak (equal to 1) versus strong (equal to 3), with all the other variables held at their default values:
x.weak <- setx(z.out, pristr = 1)
x.strong <- setx(z.out, pristr = 3)
s.out2 <- sim(z.out, x = x.strong, x1 = x.weak)
summary(s.out2)
plot(s.out2)
Let \(Y_{i}\) be the (unordered) categorical dependent variable for observation \(i\), which takes integer values \(j=1, \ldots, J\).
\[ \begin{aligned} Y_{i} &\sim& \textrm{Multinomial}(Y_i \mid \pi_{ij}), \end{aligned} \]
where \(\pi_{ij}=\Pr(Y_i=j)\) for \(j=1, \ldots, J\).
\[ \begin{aligned} \pi_{ij}=\frac{\exp(x_i\beta_j)}{\sum_{k=1}^J \exp(x_i\beta_k)}, \textrm{ for } j=1,\ldots, J-1, \end{aligned} \]
where \(x_{i}\) is the vector of \(k\) explanatory variables for observation \(i\) and \(\beta_j\) is the vector of coefficients for category \(j\). Category \(J\) is assumed to be the baseline category.
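The systematic component can be sketched in a few lines of base R (an illustrative softmax with made-up coefficient values, assuming, as above, that the baseline category \(J\) has its coefficient vector fixed at zero):

```r
# Predicted probabilities for a J-category multinomial logit (illustrative).
# x: length-k covariate vector; betas: k x (J-1) coefficient matrix,
# with the baseline category J implicitly assigned a zero coefficient vector.
mlogit_probs <- function(x, betas) {
  eta <- c(as.vector(x %*% betas), 0)   # linear predictors; 0 for baseline J
  exp(eta) / sum(exp(eta))              # pi_ij for j = 1, ..., J
}

x     <- c(1, 0.5)                            # intercept plus one covariate
betas <- cbind(c(-2.5, -0.7), c(-2.9, -0.6))  # categories 1 and 2 vs baseline

p <- mlogit_probs(x, betas)
sum(p)   # the J probabilities sum to 1
```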
\[ \begin{aligned} \beta_j \sim \textrm{Normal}_k\left( b_{0},B_{0}^{-1}\right) \textrm{ for } j = 1, \ldots, J-1, \end{aligned} \]
where \(b_{0}\) is the vector of prior means for the \(k\) coefficients and \(B_{0}\) is the \(k \times k\) precision matrix (the inverse of the variance-covariance matrix).
The expected values (qi$ev) for the multinomial logistic regression model are the predicted probabilities of belonging to each category:\[ \begin{aligned} \Pr(Y_i=j)=\pi_{ij}=\frac{\exp(x_i \beta_j)}{\sum_{k=1}^J \exp(x_i \beta_k)}, \quad \textrm{ for } j=1,\ldots, J-1, \end{aligned} \]
and
\[ \begin{aligned} \Pr(Y_i=J)=1-\sum_{j=1}^{J-1}\Pr(Y_i=j) \end{aligned} \]
given the posterior draws of \(\beta_j\) for all categories from the MCMC iterations.
The predicted values (qi$pr) are draws of \(Y_i\) from a multinomial distribution whose parameters are the expected values (qi$ev) computed from the posterior draws of \(\beta\) from the MCMC iterations.
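A minimal base-R sketch of this step (not Zelig's implementation): given one posterior draw of the expected probabilities, each predicted category is a single draw from the corresponding multinomial distribution.

```r
# Draw predicted categories from expected probabilities (illustrative values).
set.seed(1)
ev <- c(0.2, 0.3, 0.5)   # one hypothetical posterior draw of pi_i1, ..., pi_iJ

# Repeated single draws from Multinomial(1, ev), i.e. categorical draws of Y_i.
pr <- sample.int(length(ev), size = 1000, replace = TRUE, prob = ev)

table(pr) / 1000   # empirical frequencies approximate ev
```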
The first difference (qi$fd) in category \(j\) for the multinomial logistic model is defined as
\[ \begin{aligned} \text{FD}_j=\Pr(Y_i=j\mid X_{1})-\Pr(Y_i=j\mid X). \end{aligned} \]
The risk ratio (qi$rr) in category \(j\) is defined as\[ \begin{aligned} \text{RR}_j=\Pr(Y_i=j\mid X_{1})\ /\ \Pr(Y_i=j\mid X). \end{aligned} \]
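Both quantities are element-wise operations on the two vectors of predicted probabilities; a base-R sketch with made-up probabilities (not the mexico estimates):

```r
# First differences and risk ratios by category (illustrative values).
p_x  <- c(0.50, 0.30, 0.20)   # Pr(Y = j | X),  j = 1, 2, 3
p_x1 <- c(0.35, 0.35, 0.30)   # Pr(Y = j | X1)

fd <- p_x1 - p_x              # qi$fd: first difference per category
rr <- p_x1 / p_x              # qi$rr: risk ratio per category

fd   # e.g. category 1: 0.35 - 0.50 = -0.15
rr   # e.g. category 3: 0.30 / 0.20 = 1.5
```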
The average expected treatment effect (qi$att.ev) for the treatment group in category \(j\) is\[ \begin{aligned} \frac{1}{n_j}\sum_{i:t_{i}=1}^{n_j}[Y_{i}(t_{i}=1)-E[Y_{i}(t_{i}=0)]], \end{aligned} \]
where \(t_{i}\) is a binary explanatory variable defining the treatment (\(t_{i}=1\)) and control (\(t_{i}=0\)) groups, and \(n_j\) is the number of treated observations in category \(j\).
The average predicted treatment effect (qi$att.pr) for the treatment group in category \(j\) is\[ \begin{aligned} \frac{1}{n_j}\sum_{i:t_{i}=1}^{n_j}[Y_{i}(t_{i}=1)-\widehat{Y_{i}(t_{i}=0)}], \end{aligned} \]
where \(t_{i}\) is a binary explanatory variable defining the treatment (\(t_{i}=1\)) and control (\(t_{i}=0\)) groups, and \(n_j\) is the number of treated observations in category \(j\).
The output of each Zelig command contains useful information which you may view. For example, if you run:
z.out <- zelig(y ~ x, model = "mlogit.bayes", data)
then you may examine the available information in z.out by using names(z.out), see the draws from the posterior distribution of the coefficients by using z.out$coefficients, and view a default summary of information through summary(z.out). Other elements available through the $ operator are listed below.
Bayesian multinomial logistic regression is part of the MCMCpack package by Andrew D. Martin and Kevin M. Quinn. The convergence diagnostics are part of the CODA package by Martyn Plummer, Nicky Best, Kate Cowles, Karen Vines, Deepayan Sarkar, and Russell Almond.
Martin AD, Quinn KM and Park JH (2011). “MCMCpack: Markov Chain Monte Carlo in R.” Journal of Statistical Software, 42 (9), pp. 22. <URL: http://www.jstatsoft.org/v42/i09/>.