Thursday, December 8, 2011

Some more nice reviews on Amazon

It's greatly appreciated when people go to the effort of writing a nice review on Amazon. It's appreciated not only by the author :-) but crucially also by prospective readers who are trying to decide whether the book is worth getting. Here are some excerpts from some recent reviews on Amazon:
This is one of the best written and most accessible statistics books I've come across. Obviously, a lot of thinking went into coming up with examples and intuitive explanations of various ideas. I was consistently amazed at the author's ability to not just say how something is done but why it is done that way, using simple examples. I've read far more mathematically sophisticated explanations of statistical modeling but, in this book, I felt I was allowed to peek into the minds of previous authors as to what they were really thinking when writing down their math formulas. (Posted November 11, 2011 by Davar314, San Francisco, CA)
As far as I am concerned, if you write a book this good, you get to put whatever you like on the cover - puppies, Angelina Jolie, even members of the metal band "Das Kruschke". While reading "DBDA" - reading *and* stepping through the code examples - will not make you a "Bayesian black-belt", it's impressive how much information it *will* give you - the book is almost 700 pages, after all - and you don't need (but it helps) to have tried to get the hang of the "Bayesian stuff" with other books to appreciate how friendly and effective this one is. (The author's explanation of the Metropolis algorithm is a good example). At the risk of sounding grandiose, the book just might do for Bayesian methods what Apple's original Mac did for the personal computer; here's hoping. (Posted December 7, 2011 by Dimitri Shvorob)
Click here for the full text of all the reviews on Amazon. Thanks again, reviewers, for the nice comments and for helping prospective readers.


  1. Hi John,
I bought your book at Amazon, but I haven't received it yet. In the meantime, could you maybe point me to any passage in the book, or on your blog, or anywhere else, where you address the problem that when probabilities are unknown, Bayesian estimation often assumes equal probabilities, when in reality we don't know whether they are equal or not.
    Many thanks.

  2. Thanks for ordering the book. I hope you got the 2nd Edition, not the 1st.

    When you are estimating continuous parameters and have only vague, noncommittal prior knowledge, the exact form of the prior distribution has virtually no effect on the posterior distribution, as long as the prior is reasonably flat in the vicinity of the likelihood function. In other words, every reasonably noncommittal prior gives about the same answer.
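    A quick sketch of this point (the numbers here are made up for illustration, not taken from the book): with conjugate beta-binomial updating, a Beta(a, b) prior combined with z successes in N trials gives a Beta(a + z, b + N − z) posterior, so we can compare two different noncommittal priors exactly.

    ```python
    # Sketch (hypothetical numbers): with moderate data, two different
    # vague priors on a binomial proportion give nearly the same posterior.
    # Conjugate updating: Beta(a, b) prior + z successes in N trials
    # -> Beta(a + z, b + N - z) posterior.

    def posterior_mean(a, b, z, N):
        """Posterior mean of theta under a Beta(a, b) prior."""
        return (a + z) / (a + b + N)

    z, N = 30, 50  # hypothetical data: 30 successes in 50 trials

    m_uniform  = posterior_mean(1.0, 1.0, z, N)   # flat Beta(1,1) prior
    m_jeffreys = posterior_mean(0.5, 0.5, z, N)   # Jeffreys Beta(.5,.5) prior

    print(round(m_uniform, 4), round(m_jeffreys, 4))
    ```

    With these numbers the two posterior means agree to about two decimal places (roughly 0.596 vs. 0.598), which is the sense in which every reasonably noncommittal prior gives about the same answer.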

    If, however, you are doing Bayesian model comparison (such as computing a Bayes factor for null hypothesis testing), then the form of the prior is crucial. In this case, the prior must be carefully chosen to reflect actual prior knowledge.
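    To see the contrast (again a hypothetical sketch, not from the book): for the beta-binomial case the Bayes factor for the null H0: theta = 0.5 against a Beta(a, b) alternative has the closed form BF01 = 0.5^N · B(a, b) / B(a + z, b + N − z), where B is the beta function. Two priors that barely change the parameter estimate can change the Bayes factor dramatically.

    ```python
    import math

    # Hypothetical sketch: Bayes factor for H0: theta = 0.5 versus a
    # Beta(a, b) alternative, computed in closed form via log-beta
    # functions to avoid overflow.

    def log_beta(a, b):
        return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

    def bf01(a, b, z, N):
        """BF01 = 0.5**N * B(a, b) / B(a + z, b + N - z)."""
        log_bf = N * math.log(0.5) + log_beta(a, b) - log_beta(a + z, b + N - z)
        return math.exp(log_bf)

    z, N = 30, 50  # hypothetical data: 30 successes in 50 trials

    print(bf01(1.0, 1.0, z, N))     # flat Beta(1,1) alternative
    print(bf01(0.01, 0.01, z, N))   # "even vaguer" Beta(.01,.01) alternative
    ```

    With these made-up numbers the flat prior yields a BF01 of roughly 2, while the ultra-vague Beta(0.01, 0.01) prior yields a BF01 near 100 — a factor of about 50 difference between two "noncommittal" priors, even though their posterior estimates of theta are nearly identical. That is why the prior in model comparison must encode genuine prior knowledge.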