Wednesday, November 16, 2016

Is it legitimate to view the data and then decide on a distribution for the dependent variable?

An emailer asks,
In Bayesian parameter estimation, is it legitimate to view the data and then decide on a distribution for the dependent variable? I have heard that this is not “fully Bayesian”.
The shortest questions often probe some of the most difficult issues; this is one of those questions.

Let me try to fill in some details of what this questioner may have in mind. First, some examples:
  • Suppose we have some response-time data. Is it okay to look at the response-time data, notice they are very skewed, and therefore model them with, say, a Weibull distribution? Or must we stick with a normal distribution because that was the mindless default distribution we might have used before looking at the data? Or, having noticed the skew in the data and decided to use a skewed model, must we now obtain a completely new data set for the analysis?
  • As another example, is it okay to look at a scatter plot of population-by-time data, notice they have a non-linear trend, and therefore model them with, say, an exponential growth trend? Or must we stick with a linear trend because that was the mindless default trend we might have used before looking at the data? Or, having noticed the non-linear trend in the data and decided to use a non-linear model, must we now obtain a completely new data set for the analysis?
  • As a third example, suppose I’m looking at data about the positions of planets and asteroids against the background of stars. I’m trying to fit a Ptolemaic model with lots of epicycles. After considering the data for a long time, I realize that a completely different model, involving elliptical orbits with the sun at a focus, would describe the data nicely. Must I stick with the Ptolemaic model because it’s what I had in mind at first? Or, having noticed the Keplerian trend in the data, must I now obtain a completely new data set for the analysis?
What is the worry of the unnamed person who said it would not be “fully Bayesian” to look at the data and then decide on the model? I can think of a few possible worries:

One worry might be that selecting a model after considering the data is HARKing (hypothesizing after the results are known; Kerr 1998, http://psr.sagepub.com/content/2/3/196.short). Kerr even discusses Bayesian treatments of HARKing (pp. 206-207), but this is not a uniquely Bayesian problem. In that famous article, Kerr discusses why HARKing may be inadvisable. In particular, HARKing can transform Type I errors (false alarms) into seemingly confirmed hypotheses and, thence, into putative facts. With respect to the three examples above, the skew in the RT data might be a random fluke, the non-linear trend in the population data might be a random fluke, and the better fit by the solar-centric ellipse might be a random fluke. The only cure for Type I errors (false alarms) is replication (as Kerr mentions), and pre-registered replication in particular. There are lots of disincentives to replication attempts, but these disincentives are gradually being mitigated by recent innovations like registered replication reports (https://www.psychologicalscience.org/publications/replication). In the three examples above, there are many known replications of skewed RT distributions, exponential growth curves, and elliptical orbits.

Another worry might be that the analyst had a tacit but strong theoretical commitment to a particular model before collecting the data, and then reneged on that commitment by sneaking a peek at the data. With respect to the three examples above, it may have been the case that the researcher had a strong theoretical commitment to normally distributed data but, having noticed the skew, failed to mention that theoretical commitment and used a skewed distribution instead. And analogously for the other examples. But I think this mischaracterizes the usual situation of generic data analysis. The usual situation is that the analyst has no strong commitment to a particular model and is trying to get a reasonable summary description of the data. To be “fully Bayesian” in this situation, the analyst should set up a vast model space that includes all sorts of plausible descriptive models, including many different noise distributions and many different trends, because this would more accurately capture the prior uncertainty of the analyst than a prior with only a single default model. But doing that would be very difficult in practice, because the space of possible models is infinite. Instead, we start with some small model space and refine the model as needed.

Bayesian analysis is always conditional on the assumed model space. Often the assumed model space is merely a convenient default. The default is convenient because it is familiar to both the analyst and the audience of the analysis, but the default need not be a theoretical commitment. There are also different goals for data analysis: describing the one set of data in hand, and generalizing to the population from which the data were sampled. Various methods for penalizing overfitting of noise are aimed at finding a statistical compromise between describing the data in hand and generalizing to other data. Ultimately, I think the tension is resolved only by (pre-registered) replication.

This is a big issue over which much ink has been spilled, and the above remarks are only a few off-the-cuff thoughts. What do you think is a good answer to the emailer's question?

Thursday, November 3, 2016

Bayesian meta-analysis of two proportions in randomized controlled trials


For an article that's accepted pending final revision (available here at OSF), I developed a Bayesian meta-analysis of two proportions in randomized controlled trials. This blog post summarizes the model and links to the complete R scripts.

We consider scenarios in which the data consist of the number of occurrences and the number of opportunities in a control group and in a treatment group. The number of occurrences in the treatment group is denoted \(z_T\) and the number of opportunities in the treatment group is denoted \(n_T\), and analogously \(z_C\) and \(n_C\) in the control group. The proportion of occurrences in the treatment group is \(z_T/n_T\), and the proportion of occurrences in the control group is \(z_C/n_C\).

For example, perhaps we are interested in the occurrence of mortality after myocardial infarction (death after heart attack), in a control group and in a group treated with beta-blockers (drugs that reduce the heart's workload). In this case, if beta-blockers have a beneficial effect, \(z_T/n_T\) should be less than \(z_C/n_C\).

As another example, perhaps we are interested in re-use of towels by patrons of hotels (instead of having towels changed every day for the same patron, which wastes electricity and detergent on laundering). We consider towel re-use in a control group and in a group treated with a sign indicating that it's normal for people to re-use their towels. In this case, if the treatment has a beneficial effect, \(z_T/n_T\) should be greater than \(z_C/n_C\).


In meta-analysis, there are several studies that each test the treatment. The data from study \(s\) are denoted \(z_{T[s]}, n_{T[s]}, z_{C[s]}, n_{C[s]}\). Each study is conducted at its own site (e.g., hotel, hospital). Because each site has its own specific attributes, we do not expect the underlying proportions of occurrence to be identical across sites. But we do expect them to be similar and mutually informative, so we treat the data from different sites as representative of higher-level parameters in the model that describe what's typical across sites and how much variability there is across sites. This approach is the usual random-effects model for meta-analysis.

Here are the parameters I'll use to describe the data:
  • \(\theta_{C[s]}\) is the probability of occurrence in the control group for study \(s\).
  • \(\theta_{T[s]}\) is the probability of occurrence in the treatment group for study \(s\).
  • \(\rho_{[s]}\) is the difference of log-odds between groups:
       \(\rho_{[s]} = logit(\theta_{T[s]}) - logit(\theta_{C[s]})\)
Re-arranged, the equation for \(\rho_{[s]}\) expresses the relation of \(\theta_{T[s]}\) to \(\theta_{C[s]}\): \[ \theta_{T[s]} = logistic( \rho_{[s]} + logit( \theta_{C[s]} ) )\] This relation is a natural way to represent the dependency of the probabilities between groups because it is (i) symmetric with respect to which outcome is defined as success or failure, because \(logit(\theta) = -logit(1-\theta)\), and (ii) symmetric with respect to which group is defined as the treatment, by reversing the sign of \(\rho\). Note that \(\rho_{[s]}\) is the so-called log odds ratio across groups: \(\rho_{[s]} = log( [\theta_{T[s]}/(1-\theta_{T[s]})] / [\theta_{C[s]}/(1-\theta_{C[s]})] )\). I hope this little explanation and motivation of \(\rho_{[s]}\) is helpful.
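To make this concrete, here is a minimal R sketch of the two transformations (the values 0.10 and -0.25 are arbitrary illustrations, and the logit/logistic functions are defined inline):

```r
# Logit and logistic (inverse-logit) transforms:
logit    <- function(p) log(p / (1 - p))
logistic <- function(x) 1 / (1 + exp(-x))

# Illustrative values: control probability and log-odds-ratio.
thetaC <- 0.10
rho    <- -0.25
thetaT <- logistic(rho + logit(thetaC))   # implied treatment probability

# Symmetry with respect to which outcome is labeled "success":
all.equal(logit(thetaC), -logit(1 - thetaC))   # TRUE

# rho is recovered as the log odds ratio:
log((thetaT / (1 - thetaT)) / (thetaC / (1 - thetaC)))   # -0.25
```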

I'll describe the distribution of \(\theta_{C[s]}\) across studies as a beta distribution, parameterized by its mode and concentration:
  • \(\omega_{\theta C}\) is the modal value (of the beta description) of \(\theta_{C[s]}\)
  • \(\kappa_{\theta C}\) is the concentration (of the beta description) of \(\theta_{C[s]}\)
For beta distributions parameterized by the usual \(a,b\) shape parameters, we convert mode and concentration to \(a,b\), and the above specification becomes \(\theta_{C[s]} \sim beta( \omega_{\theta C}(\kappa_{\theta C}-2)+1 , (1-\omega_{\theta C})(\kappa_{\theta C}-2)+1 )\).
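In R, the conversion can be wrapped in a small helper (a sketch; the function name and the example values are my own illustration):

```r
# Convert a beta distribution's mode (omega) and concentration (kappa > 2)
# into the usual shape parameters a and b.
betaABfromModeKappa <- function(omega, kappa) {
  if (kappa <= 2) stop("kappa must be > 2")
  list(a = omega * (kappa - 2) + 1,
       b = (1 - omega) * (kappa - 2) + 1)
}

# Example: mode 0.08 and concentration 50.
ab <- betaABfromModeKappa(0.08, 50)
curve(dbeta(x, ab$a, ab$b), from = 0, to = 0.3,
      xlab = expression(theta[C]), ylab = "Density")
```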

I'll describe the distribution of \(\rho_{[s]}\) across studies as a normal distribution, parameterized by its mean and standard deviation:
  • \(\mu_{\rho}\) is the mean (of the normal description) of \(\rho_{[s]}\)
  • \(\sigma_{\rho}\) is the standard deviation (of the normal description) of \(\rho_{[s]}\) 
In other words, for a normal distribution parameterized by mean and precision as in JAGS, \( \rho_{[s]} \sim normal( \mu_{\rho} , 1/\sigma_{\rho}^2 ) \). Usually, the primary focus of research is the value of \(\mu_{\rho}\), that is, the estimate of the treatment effect across studies.
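As a quick check on the parameterization difference (the values here are arbitrary):

```r
# JAGS's dnorm takes a precision tau = 1/sigma^2; R's dnorm takes an SD.
sigmaRho <- 0.5
tauRho   <- 1 / sigmaRho^2    # pass this as the 2nd argument to dnorm in JAGS
dnorm(0.1, mean = 0, sd = sigmaRho)   # equivalent density evaluated in R
```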

This type of hierarchical model is a typical random-effects model for meta-analysis, because the model gives each study its own individual parameter values, which are assumed to be ("exchangeably") representative of a common underlying tendency.

I'll set vague priors on the top-level parameters. An implementation of the model in R, JAGS, and runjags is provided below.
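Here is a sketch of such an implementation (the particular vague priors and variable names are illustrative stand-ins, not necessarily those of the published script; the data vectors zC, nC, zT, nT are assumed to be defined already in R):

```r
library(runjags)

modelString <- "
model {
  for ( s in 1:nStudies ) {
    zC[s] ~ dbin( thetaC[s] , nC[s] )
    zT[s] ~ dbin( thetaT[s] , nT[s] )
    thetaT[s] <- ilogit( rho[s] + logit( thetaC[s] ) )
    # Study-level parameters come from across-study distributions:
    thetaC[s] ~ dbeta( omegaThetaC*(kappaThetaC-2)+1 ,
                       (1-omegaThetaC)*(kappaThetaC-2)+1 )
    rho[s] ~ dnorm( muRho , 1/sigmaRho^2 )   # JAGS dnorm uses precision
  }
  # Vague top-level priors (illustrative choices):
  omegaThetaC ~ dbeta( 1.01 , 1.01 )
  kappaThetaC <- kappaMinusTwoThetaC + 2     # keeps concentration > 2
  kappaMinusTwoThetaC ~ dgamma( 0.01 , 0.01 )
  muRho ~ dnorm( 0 , 1/10^2 )
  sigmaRho ~ dgamma( 1.64 , 0.64 )           # broad, mode near 1
}
"

dataList <- list( zC = zC , nC = nC , zT = zT , nT = nT ,
                  nStudies = length(zC) )

runJagsOut <- run.jags( model = modelString , data = dataList ,
                        monitor = c("muRho","sigmaRho","rho",
                                    "omegaThetaC","kappaThetaC","thetaC") ,
                        n.chains = 4 , burnin = 1000 , sample = 10000 )
```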

Notice that an alternative model that estimated \(\theta_{C[s]}\) independently from \(\theta_{T[s]}\), and then computed \(\rho_{[s]}\) afterwards, would not produce the same results. Nor would it be an appropriate model! It would not be appropriate because it would treat the control group and treatment group in the same study as being completely unrelated: if we independently permuted the study indices in the treatment and control groups (i.e., re-arranged which control groups go with which treatment groups), the results of this alternative model would be unchanged. Instead, the model I am using assumes that the treatment probability is linked to the control probability. For example, if one hospital has a low mortality rate in the control condition but another hospital has a high mortality rate in the control condition, the treatment should reduce mortality relative to the particular hospital's base rate, not relative to some absolute rate independent of hospital.


I'll apply the model to two sets of data. First, some data on death after heart attack, summarized on pp. 124-128 of Gelman et al., 2014, Bayesian Data Analysis, Third Edition. There were 22 studies, involving as few as 77 patients and as many as 3,887 patients. The treatment group received beta-blockers. If the treatment is effective, the log-odds-ratio will be less than 0. Below is a forest plot of the results:

In the plot above, each of the 22 lower rows shows an individual study's observed log-odds-ratio with a gray triangle. Notice that the gray triangle falls above 0 in 6 of the 22 studies. The size of the triangle indicates the sample size of the study. The blue distribution is the posterior distribution for \(\rho_{[s]}\). The distribution is marked with its modal value and its 95% highest density interval (HDI). The numerical values of the mode and HDI are indicated at the right margin. At the top of the plot is the posterior distribution of \(\mu_{\rho}\). In words, it indicates that across studies the typical effect of treatment has a log-odds-ratio of about \(-0.25\), with a range of uncertainty from \(-0.39\) to \(-0.12\), well below 0. (These values are very similar to those reported by Gelman et al., BDA3, Table 5.3, p. 127.)

Notice also in the forest plot above that there is strong shrinkage of the individual study estimates toward the modal value across studies. For example, Study 22 has a greatly reduced mortality rate in its treatment group (its gray triangle is at a low value), but the posterior estimate of its treatment effect is not so extreme, and its posterior distribution is skewed, reflecting the tension between the pull of the extreme data and the shrinkage toward the across-study mode. Complementary skew and shrinkage can be seen, for example, in Study 14. The posterior distributions of the individual studies also show different uncertainties depending on the sample size in the study. For example, Study 10, with a large sample size, has a much narrower HDI than Study 19 with its small sample size.

Here is another application. In this case the data come from studies of towel re-use (Scheibehenne, Jamil, & Wagenmakers, 2016, Bayesian evidence synthesis can reconcile seemingly inconsistent results: The case of hotel towel reuse. Psychological Science, 27, 1043-1046). At 7 different hotels, patrons in the treatment group were told that it is normal to re-use towels (see the article for details). If the treatment is effective, the log-odds-ratio should be greater than 0. Here is a forest plot of the results from the analysis:
(In Study 1 above, the gray triangle is so small that it falls outside the plot range; that study had N = 162.) In this case, because there were only 7 studies and there was wide variation in results across them, the overall estimate of the log-odds-ratio is fairly uncertain: its 95% HDI goes from \(-0.12\) to \(+0.47\). While the mode of the overall estimate is positive (at \(0.21\)), the uncertainty is great enough that we would want to do more studies to nail down the magnitude of the treatment effect. Notice also the posterior distributions of the individual studies: there is evident shrinkage, but also lots of uncertainty, again with smaller studies showing more uncertainty than larger studies.

(In Scheibehenne et al.'s published analysis, a fixed-effects model was used, which is tantamount to using a single \(\rho\) and a single \(\theta_C\) for all studies. This can be approximated in the model used here by specifying a prior that forces tiny variance across \(\rho_{[s]}\) and tiny variance across \(\theta_{C[s]}\).)
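As a sketch of that approximation (the specific constants are my own illustrative choices, not Scheibehenne et al.'s), the top-level spread parameters in the model sketched above can be fixed at extreme values:

```r
# Approximate a fixed-effects analysis by clamping the across-study
# spread: a huge concentration makes thetaC[s] ~= omegaThetaC for all s,
# and a tiny SD makes rho[s] ~= muRho for all s.
fixedishModelString <- "
model {
  for ( s in 1:nStudies ) {
    zC[s] ~ dbin( thetaC[s] , nC[s] )
    zT[s] ~ dbin( thetaT[s] , nT[s] )
    thetaT[s] <- ilogit( rho[s] + logit( thetaC[s] ) )
    thetaC[s] ~ dbeta( omegaThetaC*(kappaThetaC-2)+1 ,
                       (1-omegaThetaC)*(kappaThetaC-2)+1 )
    rho[s] ~ dnorm( muRho , 1/sigmaRho^2 )
  }
  omegaThetaC ~ dbeta( 1.01 , 1.01 )
  muRho ~ dnorm( 0 , 1/10^2 )
  kappaThetaC <- 1.0E6     # illustrative: essentially no variation in thetaC[s]
  sigmaRho    <- 1.0E-4    # illustrative: essentially no variation in rho[s]
}
"
```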

As mentioned at the beginning, this model was developed for an article that is accepted pending final revision, available here. I also talked about Bayesian meta-analysis, and these applications in particular, in a presentation about Bayesian methods for replication analysis, which you can watch here.

The complete R script is available here.