Monday, December 25, 2017

Which movie is rated better? (Don't treat ordinal ratings as metric)

When deciding what movie to watch online, have you ever considered the star ratings provided by previous viewers? For example, Amazon.com has a 5-star rating system, in which reviewers can give a movie an ordinal rating from 1 star to 5 stars. Here are frequency histograms of 30 movies listed under "drama":

Frequency histograms of star ratings from 30 movies (shown as pink bars). Posterior predictions of an ordered-probit model are shown by blue dots with blue vertical segments indicating the 95% HDIs.

Usually people analyze rating data as if the data were metric, that is, people pretend that 1 star is 1.0 on a metric scale, and 2 stars is 2.0 on the metric scale, and 3 stars is 3.0, and so forth. But this is not appropriate because all we know about the star ratings is their order, not their interval separation. The ordinal data should instead be described with an ordinal model. For more background, see Chapter 23 of DBDA2E, and this manuscript.

Here I used an ordered-probit model to describe the data from the 30 movies. I assumed the same response thresholds across the movies, because the response scale is presented the same way to everyone for all movies; this is a typical assumption. Each movie was given its own latent mean (mu) and standard deviation (sigma). I put no hierarchical structure on the means, because I didn't want the means of small-N movies to be badly shrunken toward the means of enormous-N movies. But I did put hierarchical structure on the standard deviations, because I wanted some constraint on the sigma's of movies that show extreme ceiling effects in their data; it turns out the sigma's were estimated to vary quite a lot anyway.
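To give a flavor of the model, here is a minimal sketch of an ordered-probit specification in JAGS with thresholds shared across movies, a separate mu and sigma per movie, and a hierarchical prior on the sigma's. The variable names (movieIdx, nMovie, etc.) and the particular priors are illustrative assumptions, not the exact script that produced the figures:

# Minimal sketch of the ordered-probit model (illustrative, not the exact script):
modelString = "
model {
  for ( i in 1:Ntotal ) {
    y[i] ~ dcat( pr[i,1:nYlevels] )
    pr[i,1] <- pnorm( thresh[1] , mu[movieIdx[i]] , 1/sigma[movieIdx[i]]^2 )
    for ( k in 2:(nYlevels-1) ) {
      pr[i,k] <- max( 0 , pnorm( thresh[ k ] , mu[movieIdx[i]] , 1/sigma[movieIdx[i]]^2 )
                        - pnorm( thresh[k-1] , mu[movieIdx[i]] , 1/sigma[movieIdx[i]]^2 ) )
    }
    pr[i,nYlevels] <- 1 - pnorm( thresh[nYlevels-1] , mu[movieIdx[i]] , 1/sigma[movieIdx[i]]^2 )
  }
  for ( j in 1:nMovie ) {
    mu[j] ~ dnorm( (1+nYlevels)/2 , 1/(nYlevels)^2 )  # no shrinkage across movies
    sigma[j] ~ dgamma( sigmaSh , sigmaRa )            # hierarchical prior on the sigma's
  }
  # Gamma shape/rate re-expressed in terms of a higher-level mode and SD:
  sigmaSh <- 1 + sigmaMode * sigmaRa
  sigmaRa <- ( sigmaMode + sqrt( sigmaMode^2 + 4*sigmaSD^2 ) ) / ( 2*sigmaSD^2 )
  sigmaMode ~ dunif( nYlevels/1000 , nYlevels*10 )
  sigmaSD ~ dunif( nYlevels/1000 , nYlevels*10 )
  # Thresholds shared by all movies; the two end thresholds are pinned and the
  # interior thresholds are estimated:
  thresh[1] <- 1 + 0.5
  for ( k in 2:(nYlevels-2) ) { thresh[k] ~ dnorm( k+0.5 , 1/2^2 ) }
  thresh[nYlevels-1] <- nYlevels - 0.5
}
" # close quote for modelString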

Below is a graph of the resulting latent means (mu's) of the movies plotted against the means of their ordinal ratings treated as metric:
Each point is a movie. Vertical axis is posterior mean (mu) of ordered probit model, with 95% HDI displayed as blue segment. Horizontal axis is mean of the star ratings treated as metric values.
In the scatter plot above, notice the many non-monotonicities; that is, as the means of ordinal-as-metric values increase along the horizontal axis, the latent mu's do not consistently increase on the vertical axis. In other words, the latent mu's are telling a different story than the ordinal-as-metric means.

Two movies with nearly equal ordinal-as-metric means, but with very different latent means in the ordered-probit model:

Upper row shows ordered-probit fit; lower row shows t test with unequal variances. Notice the blue dots from the ordered-probit model fit the data much better than the blue normal distributions of the ordinal-as-metric model. (Case 19 is Ekaterina: The rise of Catherine the Great, and Case 26 is John Grisham's The Rainmaker.)
Do we conclude that the movies (above) are rated about the same, or that movie 19 is rated much better than movie 26? I think we have to conclude that movie 19 is rated much better than movie 26 because the ordered-probit model is a much better description of the data.

Two movies whose ordinal-as-metric means differ significantly in one direction, but whose latent means in the ordered-probit model differ markedly in the opposite direction:

Upper row shows ordered-probit fit; lower row shows t test with unequal variances. Notice the blue dots from the ordered-probit model fit the data much better than the blue normal distributions of the ordinal-as-metric model. (Case 10 is Miss Sloane, and Case 26 is John Grisham's The Rainmaker.)
Do we conclude that movie 26 is rated better than movie 10, or the other way around? I think we have to conclude that movie 10 is rated better than movie 26 because the ordered-probit model is a much better description of the data.

This isn't (only) about movies: The point is that ordinal data from any source should not be treated as metric. Pretending that a rating of "1" is numeric 1.0, and rating "2" is 2.0, and rating "3" is 3.0, and so forth, is usually nonsensical because it's assuming metric information in the data that simply is not there. Treating the data as normally-distributed metric values is often a terrible description of the data. Instead, use an ordinal model for ordinal data. The ordinal model will describe the data better, and sometimes yield rather different implications than the ordinal-as-metric description.

For more information, see Chapter 23 of DBDA2E, and this manuscript titled, "Analyzing ordinal data with metric models: What could possibly go wrong?"

Sunday, December 17, 2017

The skew-normal distribution in JAGS

For a project I'm working on, I decided to try the skew-normal distribution to describe residual noise. JAGS does not have the skew-normal built in, so I used the Bernoulli ones trick to express the skew-normal in a JAGS model specification. This blog post shows how, and also demonstrates that when skew is near zero the autocorrelation can be severe and the posterior distribution has an interesting boomerang shape.
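For reference, the skew-normal density with location xi, scale omega, and skew alpha is p(y) = (2/omega) * dnorm( (y-xi)/omega ) * pnorm( alpha*(y-xi)/omega ), where dnorm and pnorm here denote the standard-normal density and cumulative distribution; alpha=0 recovers the ordinary normal. This is the formula implemented in the JAGS model farther below.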

First, an example of simulated data from a skew-normal distribution, along with the recovered parameter values from Bayesian inference.
Gray histogram shows the data; blue curves are a smattering of skew-normals from the posterior.
The generating parameters had values location=1, scale=2, and skew=3. The data had fairly large N=2,000 so that the estimated parameter values might have reasonably high precision. Here are the MCMC diagnostics, along with a pairwise plot of the parameters in the posterior distribution:



These graphs (above) resulted from 12,000 MCMC steps thinned by 5. They have not yet achieved the desired heuristic ESS of 10,000, but at least it's clear that letting the chains run longer would do the job. It's also clear that the data-generating parameter values were recovered fairly well, though interestingly the location is pulled in the direction of the skew, which lets the scale and skew be a bit smaller than their generating values.

Now another example: The same as before but this time with skew=0, that is, normally distributed data being fit with a skew-normal distribution. In this case, the autocorrelation in the MCMC chains is nasty, and the pairwise posterior distribution shows a boomerang shape:




You can see in the diagnostic plots that the ESS is tiny, despite 12,000 steps thinned by 5. It looks like the chains have settled into the correct parameter values -- after all, the posterior-predictive curves superimposed on the data look good -- but it will take a ton more steps to get a smooth representation of the posterior distribution.

Pursuing this topic showed me yet again how Present Self reinvents the wheels of Past Self. I didn't recall ever trying to do this particular application before, so I searched the web to see if anyone had previously posted something about the skew normal distribution in JAGS. One of the first links in the search results was this one, a discussion thread. I skimmed through it and thought it was just the sort of thing I could adapt for my present purposes. I thought the name of the initial poster looked familiar, and then I realized the name of the responder was very familiar -- it was mine. Sigh.

Below is the full script, including the JAGS model specification that uses the Bernoulli ones trick. The Bernoulli ones trick is explained in DBDA2E, Section 8.6.1, pp. 214-215. The script will take several minutes to run with N=2,000; perhaps reduce N on a first try.
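In brief, the trick works like this: each "datum" ones[i]=1 is modeled as ones[i] ~ dbern( spy[i] ), and dbern( 1 | spy[i] ) = spy[i] = p(y[i])/C, so every case contributes a likelihood factor proportional to the skew-normal density at its y value; the constant C merely keeps spy[i] below 1 and cancels in the posterior.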

# housekeeping
graphics.off() # This closes all of R's graphics windows.
rm(list=ls())  # Careful! This clears all of R's memory!

if ( !("sn" %in% installed.packages()[,"Package"]) ) { install.packages("sn") }
library("sn") 
source("DBDA2E-utilities.R") # for diagMCMC(), openGraph(), etc.!
require(runjags) # should already be loaded from DBDA2E-utilities.R
require(rjags)   # should already be loaded from DBDA2E-utilities.R

# Generate data:

# Specify generating parameter values:
locat = 1 # location
scale = 2 # scale
skew = 3 # skew

# Construct file name root based on generating parameter values:
fileNameRoot = paste0("SkewNormalPlay","-",locat,"-",scale,"-",skew)
saveType = "png"

# Create random data from skew-normal distribution:
N = 2000
set.seed(47405)
y = rsn( N , xi=locat , omega=scale , alpha=skew )
# Scaling constant for use in JAGS Bernoulli ones trick:
dsnMax = 1.1*max(dsn( seq(-10,10,length=1001) , xi=locat,omega=scale,alpha=skew ))
# Assemble data for JAGS:
dataList = list(
  y = y ,
  N = N ,
  ones = rep(1,length(y)) ,
  C = dsnMax # constant for keeping scaled dsn < 1
)

# Define the JAGS model using Bernoulli ones trick
# as explained in DBDA2E Section 8.6.1 pp. 214-215.
modelString = "
model {
  for ( i in 1:N ) {
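    # Skew-normal density at y[i] (location=locat, scale=scale, skew=skew),
    # built from JAGS's standard-normal pdf (dnorm) and cdf (pnorm) functions: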
    dsn[i] <- ( (2/scale) 
                * dnorm( (y[i]-locat)/scale , 0 , 1 ) 
                * pnorm( skew*(y[i]-locat)/scale , 0 , 1 ) )
    spy[i] <- dsn[i] / C
    ones[i] ~ dbern( spy[i] )
  }
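  # Broad priors; the gamma below has mode 1 and SD of about 10: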
  scale ~ dgamma(1.105,0.105)
  locat ~ dnorm(0,1/10^2)
  skew ~ dnorm(0,1/10^2)
}
" # close quote for modelString
writeLines( modelString , con="TEMPmodel.txt" )

# Run the chains:
runJagsOut <- run.jags( method="parallel" ,
                        model="TEMPmodel.txt" , 
                        monitor=c("scale","locat","skew") , 
                        data=dataList ,  
                        #inits=initsList , 
                        n.chains=3 ,
                        adapt=500 ,
                        burnin=500 , 
                        sample=ceiling(12000/3) , 
                        thin=5 ,
                        summarise=FALSE ,
                        plots=FALSE )
codaSamples = as.mcmc.list( runJagsOut ) # from rjags package
save( codaSamples , file=paste0(fileNameRoot,"Mcmc.Rdata") )
mcmcMat = as.matrix( codaSamples )

# Examine the chains:
# Convergence diagnostics:
diagMCMC( codaObject=codaSamples , parName="scale" , 
          saveName=fileNameRoot , saveType=saveType )
diagMCMC( codaObject=codaSamples , parName="locat" , 
          saveName=fileNameRoot , saveType=saveType )
diagMCMC( codaObject=codaSamples , parName="skew" , 
          saveName=fileNameRoot , saveType=saveType )

# Examine correlation of parameters in posterior distribution:
openGraph()
pairs( mcmcMat , col="skyblue" )
saveGraph( file=paste0(fileNameRoot,"-Pairs") , type=saveType )

# plot data with posterior predictions
openGraph(height=4,width=7)
histInfo = hist( dataList$y , probability=TRUE , breaks=31 ,
                 xlab="Data Value" , main="Data with Posterior Pred." ,
                 col="gray" , border="white" )
nCurves = 30
curveIdxVec = round(seq(1,nrow(mcmcMat),length=nCurves))
xComb = seq( min(histInfo$breaks) , max(histInfo$breaks) , length=501 )
for ( curveIdx in curveIdxVec ) {
  lines( xComb , dsn( xComb , 
                      xi=mcmcMat[curveIdx,"locat"] , 
                      omega=mcmcMat[curveIdx,"scale"] , 
                      alpha=mcmcMat[curveIdx,"skew"] ) ,
         col="skyblue" )
}
saveGraph( file=paste0(fileNameRoot,"-Data-PostPred") , type=saveType )

Perhaps this is a case in which Stan would work more efficiently. Read about Stan in Chapter 14 of DBDA2E. Not only would Hamiltonian Monte Carlo drastically reduce autocorrelation, but Stan also has the skew-normal built in. Perhaps a future blog post... if Future Self remembers Present Self...

Friday, December 8, 2017

Graphs of imputed censored y values

When using JAGS with censored data, the censored values are imputed to be consistent with the parameters of the model and the censoring limits. It's straightforward to record and graph the imputed values. Here are a couple of slides from my workshops that show how. Start by looking at Section 25.4 of DBDA2E; the code below then follows from it:
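The gist, as a minimal sketch (hypothetical variable names; assuming right-censoring at a single known limit censLim, in the style of Section 25.4):

# Data: observed cases have isCensored[i]=0 and their y[i] value; censored cases
# have isCensored[i]=1 and y[i]=NA, and JAGS imputes those y's.
modelString = "
model {
  for ( i in 1:N ) {
    isCensored[i] ~ dinterval( y[i] , censLim )  # 1 if y[i] > censLim, else 0
    y[i] ~ dnorm( mu , 1/sigma^2 )
  }
  mu ~ dnorm( 0 , 1.0E-6 )
  sigma ~ dunif( 0 , 1.0E3 )
}
" # close quote for modelString
# Censored y's need initial values above the censoring limit, e.g.:
#   initsList = list( y = ifelse( isCensored==1 , censLim+1 , NA ) )
# Include "y" among the monitored variables so the imputed values are recorded:
#   monitor = c( "mu" , "sigma" , "y" )
# Then the imputed values for (say) censored case 7 form a column of the chain:
#   mcmcMat = as.matrix( as.mcmc.list( runJagsOut ) )
#   openGraph()
#   hist( mcmcMat[ , "y[7]" ] , main="Imputed y for censored case 7" )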





Tuesday, July 4, 2017

DBDA2E Scripts in Stan


Many more scripts from DBDA2E have been re-written for Stan, by Prof. Joseph Houpt at Wright State University. Joe has posted his scripts at GitHub: https://github.com/jhoupt/DBDA2Estan

To run Joe's Stan scripts you will need the usual other supporting scripts and data files from DBDA2E, available at the book's web site (see Step 5 of Software Installation). Just copy Joe's scripts into your folder of scripts from the book.

Joe's GitHub site includes only the new scripts he modified from the book's JAGS scripts. The book had several Stan scripts already, and Joe has not re-written those. The book describes Stan in Chapter 14, and has several Stan scripts accompanying applications in later chapters. (Regarding discrete parameters: The book points out that Stan does not directly implement discrete parameters like JAGS, but the book does not discuss how to marginalize over discrete parameters instead. Some of Joe's scripts implement marginalization over discrete parameters.)

Big thanks to Joe Houpt for writing these scripts and making them available!

Notes regarding the figure at the top of this post:
1. Yes, the "<-" operator is deprecated in Stan, and I avoid using it in R, but it still looks like an arrow, and programmers who are familiar with R and JAGS will recognize it.
2. The Stan icon comes from the Stan web page. If the Stan folks don't like my use of the icon here, please contact me and I'll remove it or modify it as required.



Thursday, June 29, 2017

Difference of means for paired data: Model the mean of the differences or the joint distribution

I'm regularly asked about how to analyze the difference of means for paired data, such as pre-treatment and post-treatment scores (for the same subjects), or, say, blood pressures of spouses, etc. The usual approach is merely to consider the difference scores (e.g., post minus pre for each subject) and examine the mean of the differences. But we could also consider the joint distribution of the paired scores and describe the joint distribution as, say, a bivariate normal. In this post I'll show you how to do both. I'll illustrate with both positively correlated pairs and negatively correlated pairs.

Suppose we have positively correlated pairs of scores (e.g., if the pre-treatment score is higher than average, then the post-treatment score tends to be higher than average; or, if one person's relationship satisfaction is higher than average, then his/her spouse's relationship satisfaction tends to be higher than average).

We can describe the joint distribution as a bivariate normal, and then consider the posterior distribution of the difference of means, as shown below:




The red crosses (above) mark the arithmetic means in the data.

The script for generating the plots above is appended at the end of this post. It is merely a variation of the script for multivariate normals posted on this blog a few days ago.

Alternatively, we could consider the single group of difference scores. Here I'll use the BEST package, but we could also use the script Jags-Ymet-Xnom1grp-Mnormal-Example.R or Jags-Ymet-Xnom1grp-Mrobust-Example.R from DBDA2E. The result:

Notice that the posterior of the mean of the difference scores (from BEST) is essentially the same as the posterior of the difference of means from the previous analysis. The small discrepancies between the two analyses stem from two sources: First, there is MCMC wobble. Second, the BEST package models the data with a t distribution, not a normal; in this case the simulated data were generated from a normal, so the normality parameter is estimated to be large anyway.


Suppose instead we have negatively correlated pairs of scores (e.g., if one person's job satisfaction is higher than average, then his/her spouse's job satisfaction tends to be lower than average).

We can describe the joint distribution as a bivariate normal, and then consider the posterior distribution of the difference of means, as shown below:




Notice that these negatively-correlated data have exactly the same arithmetic difference of means and exactly the same variances as the data in the previous example. The only difference is the correlation. Now, for negatively correlated pairs, the estimated difference of means is essentially the same as for the previous data (only MCMC wobble makes the mode discrepant), but the estimate is much less certain, with a much wider HDI.
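(A quick way to see why: the sampling variance of the difference of the two means is (sigma1^2 + sigma2^2 - 2*rho*sigma1*sigma2)/N, so a positive correlation reduces the uncertainty of the difference while a negative correlation inflates it, and the width of the posterior on mu2 - mu1 behaves accordingly.)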

Analyzing the mean of the difference scores yields the same result:

The R script for generating the plots is appended below.




Appendix: R script used for this post:


#----------------------------------------------------------------------------
# Jags-DifferenceOfPairedScores.R
# John Kruschke, June 2017.
# For further info, see:
# Kruschke, J. K. (2015). Doing Bayesian Data Analysis, Second Edition:
# A Tutorial with R, JAGS, and Stan. Academic Press / Elsevier.

# Optional generic preliminaries:
graphics.off() # This closes all of R's graphics windows.
rm(list=ls())  # Careful! This clears all of R's memory!

#--------------------------------------------------------------------------
# Load the data:
#--------------------------------------------------------------------------

# For real research, you would read in your data file here. This script expects
# the data to have two columns for the paired scores, one row per pair. No
# missing data.
#
# myData = read.csv("YourDataFileName.csv")
# y = myData[,c("ColumnNameOfPreScore","ColumnNameOfPostScore")]

# Generate simulated data:
library(MASS)
mu1 = 100.0
sigma1 = 15.0
mu2 = 108.0
sigma2 = 20.0
rho = 0.9
Sigma <- matrix(c( sigma1^2 , rho*sigma1*sigma2 ,
                   rho*sigma2*sigma1 , sigma2^2 ) , ncol=2 , byrow=TRUE )
set.seed(47405)
N = 100
yMat = mvrnorm(n=N, mu=c(mu1,mu2), Sigma=Sigma, empirical=TRUE)
colnames(yMat) = c("Y1","Y2")
write.csv( data.frame(yMat) , file="Jags-DifferenceOfPairedScores-Data.csv" )
 

# Now read in simulated data as if it were a data file:
myData = read.csv("Jags-DifferenceOfPairedScores-Data.csv")
y = myData[,c("Y1","Y2")]

#----------------------------------------------------------------------------
# The rest can remain unchanged, except for the specification of difference of
# means at the very end.
#
# The script below is a modification of the script Jags-MultivariateNormal.R,
# and there are some vestigial and unnecessary complexities. The previous script
# involved data with possibly more than two "y" columns, whereas the present
# script is only concerned with two "y" columns.
#----------------------------------------------------------------------------

# Load some functions used below:
source("DBDA2E-utilities.R") # Must be in R's current working directory.
# Install the ellipse package if not already:
want = c("ellipse")
have = want %in% rownames(installed.packages())
if ( any(!have) ) { install.packages( want[!have] ) }

# Standardize the data:
sdOrig = apply(y,2,sd)
meanOrig = apply(y,2,mean)
zy = apply(y,2,function(yVec){(yVec-mean(yVec))/sd(yVec)})
# Assemble data for sending to JAGS:
dataList = list(
  zy = zy ,
  Ntotal =  nrow(zy) ,
  Nvar = ncol(zy) ,
  # Include original data info for transforming to original scale:
  sdOrig = sdOrig ,
  meanOrig = meanOrig ,
  # For wishart (dwish) prior on inverse covariance matrix:
  zRscal = ncol(zy) ,  # for dwish prior
  zRmat = diag(x=1,nrow=ncol(zy))  # Rmat = diag(apply(y,2,var))
)

# Define the model:
modelString = "
model {
  for ( i in 1:Ntotal ) {
    zy[i,1:Nvar] ~ dmnorm( zMu[1:Nvar] , zInvCovMat[1:Nvar,1:Nvar] )
  }
  for ( varIdx in 1:Nvar ) { zMu[varIdx] ~ dnorm( 0 , 1/2^2 ) }
  zInvCovMat ~ dwish( zRmat[1:Nvar,1:Nvar] , zRscal )
  # Convert invCovMat to sd and correlation:
  zCovMat <- inverse( zInvCovMat )
  for ( varIdx in 1:Nvar ) { zSigma[varIdx] <- sqrt(zCovMat[varIdx,varIdx]) }
  for ( varIdx1 in 1:Nvar ) { for ( varIdx2 in 1:Nvar ) {
    zRho[varIdx1,varIdx2] <- ( zCovMat[varIdx1,varIdx2]
                               / (zSigma[varIdx1]*zSigma[varIdx2]) )
  } }
  # Convert to original scale:
  for ( varIdx in 1:Nvar ) {
    sigma[varIdx] <- zSigma[varIdx] * sdOrig[varIdx]
    mu[varIdx] <- zMu[varIdx] * sdOrig[varIdx] + meanOrig[varIdx]
  }
  for ( varIdx1 in 1:Nvar ) { for ( varIdx2 in 1:Nvar ) {
    rho[varIdx1,varIdx2] <- zRho[varIdx1,varIdx2]
  } }
}
" # close quote for modelString
writeLines( modelString , con="Jags-MultivariateNormal-model.txt" )

# Run the chains:
nChain = 3
nAdapt = 500
nBurnIn = 500
nThin = 10
nStepToSave = 20000
require(rjags)
jagsModel = jags.model( file="Jags-MultivariateNormal-model.txt" ,
                        data=dataList , n.chains=nChain , n.adapt=nAdapt )
update( jagsModel , n.iter=nBurnIn )
codaSamples = coda.samples( jagsModel ,
                            variable.names=c("mu","sigma","rho") ,
                            n.iter=nStepToSave/nChain*nThin , thin=nThin )

# Convergence diagnostics:
parameterNames = varnames(codaSamples) # get all parameter names
for ( parName in parameterNames ) {
  diagMCMC( codaObject=codaSamples , parName=parName )
}

# Examine the posterior distribution:
mcmcMat = as.matrix(codaSamples)
chainLength = nrow(mcmcMat)
Nvar = ncol(y)
# Create subsequence of steps through chain for plotting:
stepVec = floor(seq(1,chainLength,length=20))

# Make plots of posterior distribution:

# Preparation -- define useful functions:
library(ellipse)
expandRange = function( x , exMult=0.2 ) {
  lowVal = min(x)
  highVal = max(x)
  wid = max(x)-min(x)
  return( c( lowVal - exMult*wid , highVal + exMult*wid ) )
}

for ( varIdx in 1:Nvar ) {
  openGraph(width=7,height=3.5)
  par( mar=c(3.5,3,2,1) , mgp=c(2.0,0.7,0) )
  layout(matrix(1:2,nrow=1))
  # Marginal posterior on means:
  plotPost( mcmcMat[ , paste0("mu[",varIdx,"]") ] ,
            xlab=paste0("mu[",varIdx,"]") ,
            main=paste( "Mean of" , colnames(y)[varIdx] ) )
  # Marginal posterior on standard deviations:
  plotPost( mcmcMat[ , paste0("sigma[",varIdx,"]") ] ,
            xlab=paste0("sigma[",varIdx,"]") ,
            main=paste( "SD of" , colnames(y)[varIdx] ) )
}

for ( varIdx1 in 1:(Nvar-1) ) {
  for ( varIdx2 in (varIdx1+1):Nvar ) {
    openGraph(width=7,height=3.5)
    par( mar=c(3.5,3,2,1) , mgp=c(2.0,0.7,0) )
    layout(matrix(1:2,nrow=1))
    # Marginal posterior on correlation coefficient
    plotPost( mcmcMat[ , paste0("rho[",varIdx1,",",varIdx2,"]") ] ,
              xlab=paste0("rho[",varIdx1,",",varIdx2,"]") ,
              main=paste( "Corr. of" , colnames(y)[varIdx1] ,
                          "and" , colnames(y)[varIdx2] ) )
    # Data with posterior ellipse
    ellipseLevel = 0.90
    plot( y[,c(varIdx1,varIdx2)] , asp=1 , # pch=19 ,
          xlim=expandRange(y[,varIdx1],0.1) ,
          ylim=expandRange(y[,varIdx2],0.1) ,
          xlab=colnames(y)[varIdx1] , ylab=colnames(y)[varIdx2] ,
          main=bquote("Data with posterior "*.(ellipseLevel)*" level contour") )
    abline(0,1,lty="dashed")
    # Posterior ellipses:
    for ( stepIdx in stepVec ) {
      points( ellipse( mcmcMat[ stepIdx ,
                                paste0("rho[",varIdx1,",",varIdx2,"]") ] ,
                       scale=mcmcMat[ stepIdx ,
                                      c( paste0("sigma[",varIdx1,"]") ,
                                         paste0("sigma[",varIdx2,"]") ) ] ,
                       centre=mcmcMat[ stepIdx ,
                                       c( paste0("mu[",varIdx1,"]") ,
                                          paste0("mu[",varIdx2,"]") ) ] ,
                       level=ellipseLevel ) ,
              type="l" , col="skyblue" , lwd=1 )
    }
    # replot data:
    points( y[,c(varIdx1,varIdx2)] )
    points( mean(y[,varIdx1]) , mean(y[,varIdx2]) , pch="+" , col="red" , cex=2 )
  }
}

# Show data descriptives on console:
cor( y )
apply(y,2,mean)
apply(y,2,sd)

#-----------------------------------------------------------------------------
# Difference of means. N.B.: THIS ONLY MAKES SENSE IF THE MEANS ARE ON THE SAME
# SCALE, E.G., BEFORE-TREATMENT AND AFTER-TREATMENT SCORES ON THE SAME MEASURE.
# Change the specification of indices as appropriate to your variable names and
# interests.
#-----------------------------------------------------------------------------
# Specify indices of desired means:
MuIdx1 = which(colnames(y)=="Y2")
MuIdx2 = which(colnames(y)=="Y1")
# Make plot of posterior difference:
openGraph(height=3.5,width=4)
par( mar=c(3.5,1,2,1) , mgp=c(2.0,0.7,0) )
plotPost( mcmcMat[ , paste0("mu[",MuIdx1,"]") ]
          - mcmcMat[ , paste0("mu[",MuIdx2,"]") ],
          xlab=paste0( "mu[",MuIdx1,"] - mu[",MuIdx2,"]" ) ,
          main=bquote( mu[ .(colnames(y)[MuIdx1]) ]
                       - mu[ .(colnames(y)[MuIdx2]) ] ) ,
          cex.main=1.5 , ROPE=c(-1.5,1.5) )
points( mean(y[,MuIdx1])-mean(y[,MuIdx2]) , 0 , pch="+" , col="red" , cex=2 )

#---------------------------------------------------------------------------

# Re-do using BEST package:
library(BEST)
BESTout = BESTmcmc( y[,MuIdx1] - y[,MuIdx2] ,
                    numSavedSteps=nStepToSave , thinSteps=nThin )
openGraph(height=3.5,width=4)
par( mar=c(3.5,1,2,1) , mgp=c(2.0,0.7,0) )
plot( BESTout , compVal=0.0 , ROPE=c(-1.5,1.5) ,
      xlab=paste0( "mu[",MuIdx1,"] - mu[",MuIdx2,"]" ) ,
      main=bquote( mu[ .(colnames(y)[MuIdx1]) ]
                   - mu[ .(colnames(y)[MuIdx2]) ] ) )

#---------------------------------------------------------------------------

Monday, June 26, 2017

Bayesian estimation of correlations and differences of correlations with a multivariate normal

Within days of each other I received two emails asking about Bayesian estimation of correlations and differences of correlations. Hence this post, which presents an implementation in JAGS and R.

Traditionally, a joint distribution of several continuous/metric variables is modeled as a multivariate normal. We estimate the mean vector and covariance matrix of the multivariate normal. When standardized, the covariance matrix is the correlation matrix, so pairwise correlations can be inspected. Moreover, if it's meaningful to compare correlations, then we can also examine the posterior difference of pairwise correlations.

As an example, I'll use the data regarding Scholastic Aptitude Test (SAT) scores from Guber (1999), explained in Chapter 18 of DBDA2E. We'll consider three data variables: SAT total score (SATT), spending per student (Spend), and percentage of students taking the exam (PrcntTake). Here's a plot of the data from different perspectives:

Instead of doing linear regression of SAT on Spend and PrcntTake, we'll describe the joint distribution as a multivariate normal. This is straightforward in JAGS, because JAGS has a multivariate normal distribution built in. The model specification includes the likelihood function:

  zy[i,1:Nvar] ~ dmnorm( zMu[1:Nvar] , zInvCovMat[1:Nvar,1:Nvar] )

where the "z" prefixes of the variable names indicate that the data have been standardized inside JAGS. In the code above, zy[i,:] is the ith data point (a vector), zMu[:] is the vector of estimated means, and zInvCovMat[:,:] is the inverse covariance matrix. Just as JAGS parameterizes the normal distribution by the inverse variance (i.e., the precision), JAGS parameterizes the multivariate normal by the inverse covariance.

The prior on each mean is a broad normal. The prior on the inverse covariance matrix is a generic Wishart distribution, which is a multidimensional generalization of the gamma distribution. The Wishart is a distribution over matrices, and it is built into JAGS.

The R code appended below provides the complete script, including the model specification and the graphics commands. The script needs the utilities script from DBDA2E for opening graphics windows.

Here is some selected output from the program. In particular, the program produces plots of the posterior distributions of the pairwise correlations, presented with the data and level contours from the multivariate normal:








Finally, here is a plot of the posterior of the difference of two correlations:
You should consider differences of correlations only if it's really meaningful to do so. Just as with taking differences of standardized regression coefficients, taking differences of correlations assumes that the correlations are "on the same scale," which only makes sense to the extent that one standard deviation of SAT is "the same" as one standard deviation of Spending and is "the same" as one standard deviation of Percentage Taking the Exam.

The full R script follows. Add a comment to this post if this script is useful to you. Enjoy!

# Jags-MultivariateNormal.R
# John Kruschke, November 2015 - June 2017.
# For further info, see:
# Kruschke, J. K. (2015). Doing Bayesian Data Analysis, Second Edition:
# A Tutorial with R, JAGS, and Stan. Academic Press / Elsevier.

# Optional generic preliminaries:
graphics.off() # This closes all of R's graphics windows.
rm(list=ls())  # Careful! This clears all of R's memory!

#--------------------------------------------------------------------------
# Load the data:
#--------------------------------------------------------------------------
myData = read.csv("Guber1999data.csv") # must have file in curr. work. dir.
# y must have named columns, with no missing values!
y = myData[,c("SATT","Spend","PrcntTake")]

#----------------------------------------------------------------------------
# The rest can remain unchanged, except for the specification of difference of
# correlations at the very end.
#----------------------------------------------------------------------------

# Load some functions used below:
source("DBDA2E-utilities.R") # Must be in R's current working directory.
# Install the ellipse package if not already:
want = c("ellipse")
have = want %in% rownames(installed.packages())
if ( any(!have) ) { install.packages( want[!have] ) }

# Standardize the data:
sdOrig = apply(y,2,sd)
meanOrig = apply(y,2,mean)
zy = apply(y,2,function(yVec){(yVec-mean(yVec))/sd(yVec)})
# Assemble data for sending to JAGS:
dataList = list(
  zy = zy ,
  Ntotal =  nrow(zy) ,
  Nvar = ncol(zy) ,
  # Include original data info for transforming to original scale:
  sdOrig = sdOrig ,
  meanOrig = meanOrig ,
  # For wishart (dwish) prior on inverse covariance matrix:
  zRscal = ncol(zy) ,  # for dwish prior
  zRmat = diag(x=1,nrow=ncol(zy))  # Rmat = diag(apply(y,2,var))
)

# Define the model:
modelString = "
model {
  for ( i in 1:Ntotal ) {
    zy[i,1:Nvar] ~ dmnorm( zMu[1:Nvar] , zInvCovMat[1:Nvar,1:Nvar] )
  }
  for ( varIdx in 1:Nvar ) { zMu[varIdx] ~ dnorm( 0 , 1/2^2 ) }
  zInvCovMat ~ dwish( zRmat[1:Nvar,1:Nvar] , zRscal )
  # Convert invCovMat to sd and correlation:
  zCovMat <- inverse( zInvCovMat )
  for ( varIdx in 1:Nvar ) { zSigma[varIdx] <- sqrt(zCovMat[varIdx,varIdx]) }
  for ( varIdx1 in 1:Nvar ) { for ( varIdx2 in 1:Nvar ) {
    zRho[varIdx1,varIdx2] <- ( zCovMat[varIdx1,varIdx2]
                               / (zSigma[varIdx1]*zSigma[varIdx2]) )
  } }
  # Convert to original scale:
  for ( varIdx in 1:Nvar ) {
    sigma[varIdx] <- zSigma[varIdx] * sdOrig[varIdx]
    mu[varIdx] <- zMu[varIdx] * sdOrig[varIdx] + meanOrig[varIdx]
  }
  for ( varIdx1 in 1:Nvar ) { for ( varIdx2 in 1:Nvar ) {
    rho[varIdx1,varIdx2] <- zRho[varIdx1,varIdx2]
  } }
}
" # close quote for modelString
writeLines( modelString , con="Jags-MultivariateNormal-model.txt" )

# Run the chains:
nChain = 3
nAdapt = 500
nBurnIn = 500
nThin = 10
nStepToSave = 20000
require(rjags)
jagsModel = jags.model( file="Jags-MultivariateNormal-model.txt" ,
                        data=dataList , n.chains=nChain , n.adapt=nAdapt )
update( jagsModel , n.iter=nBurnIn )
codaSamples = coda.samples( jagsModel ,
                            variable.names=c("mu","sigma","rho") ,
                            n.iter=nStepToSave/nChain*nThin , thin=nThin )

# Convergence diagnostics:
parameterNames = varnames(codaSamples) # get all parameter names
for ( parName in parameterNames ) {
  diagMCMC( codaObject=codaSamples , parName=parName )
}

# Examine the posterior distribution:
mcmcMat = as.matrix(codaSamples)
chainLength = nrow(mcmcMat)
Nvar = ncol(y)
# Create subsequence of steps through chain for plotting:
stepVec = floor(seq(1,chainLength,length=20))

# Make plots of posterior distribution:

# Preparation -- define useful functions:
library(ellipse)
expandRange = function( x , exMult=0.2 ) {
  lowVal = min(x)
  highVal = max(x)
  wid = max(x)-min(x)
  return( c( lowVal - exMult*wid , highVal + exMult*wid ) )
}

for ( varIdx in 1:Nvar ) {
  openGraph(width=7,height=3.5)
  par( mar=c(3.5,3,2,1) , mgp=c(2.0,0.7,0) )
  layout(matrix(1:2,nrow=1))
  # Marginal posterior on means:
  plotPost( mcmcMat[ , paste0("mu[",varIdx,"]") ] ,
            xlab=paste0("mu[",varIdx,"]") ,
            main=paste( "Mean of" , colnames(y)[varIdx] ) )
  # Marginal posterior on standard deviations:
  plotPost( mcmcMat[ , paste0("sigma[",varIdx,"]") ] ,
            xlab=paste0("sigma[",varIdx,"]") ,
            main=paste( "SD of" , colnames(y)[varIdx] ) )
}

for ( varIdx1 in 1:(Nvar-1) ) {
  for ( varIdx2 in (varIdx1+1):Nvar ) {
    openGraph(width=7,height=3.5)
    par( mar=c(3.5,3,2,1) , mgp=c(2.0,0.7,0) )
    layout(matrix(1:2,nrow=1))
    # Marginal posterior on correlation coefficient
    plotPost( mcmcMat[ , paste0("rho[",varIdx1,",",varIdx2,"]") ] ,
              xlab=paste0("rho[",varIdx1,",",varIdx2,"]") ,
              main=paste( "Corr. of" , colnames(y)[varIdx1] ,
                          "and" , colnames(y)[varIdx2] ) )
    # Data with posterior ellipse
    ellipseLevel = 0.90
    plot( y[,c(varIdx1,varIdx2)] , # pch=19 ,
          xlim=expandRange(y[,varIdx1]) , ylim=expandRange(y[,varIdx2]) ,
          xlab=colnames(y)[varIdx1] , ylab=colnames(y)[varIdx2] ,
          main=bquote("Data with posterior "*.(ellipseLevel)*" level contour") )
    # Posterior ellipses:
    for ( stepIdx in stepVec ) {
      points( ellipse( mcmcMat[ stepIdx ,
                                paste0("rho[",varIdx1,",",varIdx2,"]") ] ,
                       scale=mcmcMat[ stepIdx ,
                                      c( paste0("sigma[",varIdx1,"]") ,
                                         paste0("sigma[",varIdx2,"]") ) ] ,
                       centre=mcmcMat[ stepIdx ,
                                       c( paste0("mu[",varIdx1,"]") ,
                                          paste0("mu[",varIdx2,"]") ) ] ,
                       level=ellipseLevel ) ,
              type="l" , col="skyblue" , lwd=1 )
    }
    # replot data:
    points( y[,c(varIdx1,varIdx2)] )
  }
}

# Show data descriptives on console:
cor( y )
apply(y,2,mean)
apply(y,2,sd)

#-----------------------------------------------------------------------------
# Difference of correlations. Change the specification of indices as
# appropriate to your variable names and interests.
#-----------------------------------------------------------------------------
# Specify indices of desired correlations:
rhoAidx1 = which(colnames(y)=="SATT")
rhoAidx2 = which(colnames(y)=="Spend")
rhoBidx1 = which(colnames(y)=="SATT")
rhoBidx2 = which(colnames(y)=="PrcntTake")
# Make plot of posterior difference:
openGraph(height=3.5,width=4)
par( mar=c(3.5,1,2,1) , mgp=c(2.0,0.7,0) )
plotPost( mcmcMat[ , paste0("rho[",rhoAidx1,",",rhoAidx2,"]") ]
          - mcmcMat[ , paste0("rho[",rhoBidx1,",",rhoBidx2,"]") ],
          xlab=paste0(
          "rho[",rhoAidx1,",",rhoAidx2,"] - rho[",rhoBidx1,",",rhoBidx2,"]" ) ,
          main=bquote( rho[ list( .(colnames(y)[rhoAidx1]) ,
                                  .(colnames(y)[rhoAidx2]) ) ] 
                       - rho[ list( .(colnames(y)[rhoBidx1]) ,
                                    .(colnames(y)[rhoBidx2]) ) ] ) ,
          cex.main=1.5 , ROPE=c(-0.1,0.1) )

#---------------------------------------------------------------------------



Sunday, June 11, 2017

Posterior distribution of predictions in multiple linear regression

In a previous post I showed how to compute posterior predictions from multiple linear regression inside JAGS. In this post I show how to do it after JAGS, in R.

There are trade-offs to doing it in JAGS or afterwards in R. If you do it in JAGS, then you know that the model you're using to generate predictions is exactly the model you're using to describe the data, because you only specify the model once. If you do it after JAGS in R, then you have to code the model all over again in R and face the peril of making mistakes, because JAGS and R parameterize distributions differently (e.g., dnorm takes a precision in JAGS but a standard deviation in R, and JAGS's dt has explicit location and precision parameters whereas R's dt is the standard unit-scale t). On the other hand, if you generate the predictions in JAGS, then you have to specify all the to-be-probed x values in advance, along with the data to be fit. If instead you generate predictions in R (after JAGS), then you can re-use the already-generated MCMC chain for to-be-probed x values as much as you like, without having to run MCMC all over again. In this post, we face the peril of specifying the model in R.

Please see the previous post for background information about this specific application to predicting SAT scores from spending per student and percentage of students taking the exam. To run the R code below, please open the high-level script Jags-Ymet-XmetMulti-Mrobust-Example.R. Run the script "out of the box" so it uses the SAT data (Guber1999data.csv) and the two predictors mentioned in the previous sentence. Then, at the end of the script, append the following R code:

# Remind self of the predictors:
show( xName )
# Specify probe values of the predictors. Columns must match predictors actually
# used, in order! Specify as many probe rows as you like:
xProbe = matrix( c( 4.0 , 10 ,   # Spend , PrcntTake
                    9.0 , 10 ,   # Spend , PrcntTake
                    4.0 , 90 ,   # Spend , PrcntTake
                    9.0 , 10 ) , # Spend , PrcntTake
                 ncol=length(xName) , byrow=TRUE )
# Convert mcmc coda object to matrix:
mcmcMat = as.matrix( mcmcCoda )
# Show column names of mcmcMat. Notice they include beta0, beta[1], beta[2],
# ..., which can be extracted using grep("^beta",colnames(mcmcMat)):
colnames( mcmcMat )
grep("^beta",colnames(mcmcMat),value=TRUE)
# Now, go through rows of xProbe matrix and display predicted mu and y:
for ( xProbeIdx in 1:nrow(xProbe) ) {
  # Predicted mu is b0+b1*x1+b2*x2+... at every step in chain:
  muPred = ( mcmcMat[,grep("^beta",colnames(mcmcMat))]
             %*% matrix(c(1,xProbe[xProbeIdx,]),ncol=1) )
  # Predicted y is is pred mu plus t-distrib noise at every step in chain:
  yPred = muPred + mcmcMat[,"sigma"] * rt(nrow(mcmcMat),df=mcmcMat[,"nu"])
  # Make plot of posterior distribution of prediction. Notice that plotPost()
  # also returns numerical values to console.
  openGraph(width=3.5,height=5)
  layout(matrix(1:3,nrow=3),heights=c(1.5,2,2))
  par( mar=c(4,1,1,1) , mgp=c(2.0,0.7,0) , xpd=NA )
  plot(0,0,xaxt='n',yaxt='n',bty='n',pch='',ylab='',xlab='')
  text(0,0, adj=c(0.5,0.7),
       paste0(xName,"=",xProbe[xProbeIdx,],collapse="\n"), cex=1.25 )
  xLim = quantile( yPred , probs=c(0.005,0.995) )
  muPredInfo = plotPost( muPred ,xlab="Predicted mu" , xlim=xLim ,
                 main="" )
  show( muPredInfo )
  yPredInfo = plotPost( yPred , xlab="Predicted y" , xlim=xLim ,
                main="" )
  show( yPredInfo )
} # end of loop over xProbe rows


The code produces output plots such as the following (along with numerical values displayed on the command line):


N.B.: The plots only display the first three significant digits; see the numerical output at the command line for more digits.

Enjoy!

