[Taxacom] Markov chain Monte Carlo

Richard Zander Richard.Zander at mobot.org
Wed Apr 18 17:24:57 CDT 2012


Help! Would a Taxacomer familiar with MCMC criticize the appended
paragraphs I wrote for an article? Am I right? Am I wrong? If the
coalescent MCMC method is sampling, then sampling is sampling. On the
other hand . . . 
* * * * * * * * * * * *
Richard H. Zander
Missouri Botanical Garden, PO Box 299, St. Louis, MO 63166-0299 USA  
Web sites: http://www.mobot.org/plantscience/resbot/ and
http://www.mobot.org/plantscience/bfna/bfnamenu.htm
Modern Evolutionary Systematics Web site:
http://www.mobot.org/plantscience/resbot/21EvSy.htm
UPS and FedExpr -  MBG, 4344 Shaw Blvd, St. Louis 63110 USA
One might point out here that the percent posterior probability of
Bayesian analysis only involves (adds to "probability one") those trees
actually sampled by the Monte Carlo process. Any sampling process tries
to generate a "representative" group with the same proportions as the
full set. A Markov chain Monte Carlo analysis randomly samples a set of
possible solutions and reports the tree (or a small set of trees) that
appears with greatest frequency in that sample as the tree of highest
posterior probability. It may well be, however, that the total
probabilistic support of the improbable, unsampled trees adds up to an
amount that reduces the support for the most probable tree (say, of
0.95 posterior probability) to a value below what is acceptable (Zander
2001: 433). Remember that the data set supports all possible trees. One
of the improbable trees among those making up the set with 0.05
probability will be correct 0.05 of the time, so the mass of improbable
trees matters in the calculation. Consider, for instance, a million
trees sampled (computed by sampling the data in a likelihood-based
coalescent process) in a molecular systematics Markov chain Monte Carlo
analysis, with the best tree correct at 0.95 probability. Say there are
20 million more trees unsampled. Ideally the sampled trees represent
the proportions in the full population, so there should be about 20
more, more or less highly likely, trees in the unsampled set, involving
multiple peaks in the target distribution. This would reduce the
posterior probability of the best tree by a factor of about twenty. But
suppose instead we assume all unsampled trees are of low, nearly
insignificant probability.
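A minimal sketch of the point above, with made-up tree names and probabilities: the posterior percentages reported by an MCMC run are frequencies among the trees actually visited, so they sum to one over the visited set alone, whatever mass the unvisited trees may carry.

```python
from collections import Counter
import random

random.seed(1)

# Hypothetical posterior over 25 trees: one dominant tree plus many
# low-probability alternatives (names and numbers are illustrative only).
trees = [f"tree_{i}" for i in range(25)]
true_post = [0.95] + [0.05 / 24] * 24

# Draw an "MCMC sample" from this posterior.
sample = random.choices(trees, weights=true_post, k=10_000)
counts = Counter(sample)

# Estimated posteriors are frequencies among SAMPLED trees only;
# trees never visited contribute nothing, and the estimates sum to
# one over the visited set alone.
est = {t: c / len(sample) for t, c in counts.items()}
print(round(sum(est.values()), 10))   # 1.0 by construction
```

Any tree the chain never visits simply has no entry in `est`; its share of the true probability mass is silently renormalized away.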
	If each of these unsampled trees had the same average
probability as a sampled tree, then the probability space is extended
twenty times (a total of 21 million trees, or 21.0 units of the
original probability space). The probability of the best tree, which
was 0.95 of 1.00, is now only about 0.045 (0.95/21.0) of the extended
probability space. It remains by far the best solution compared to any
other single solution, but 0.955 of the time one of the improbable
trees will be the correct tree. Statistical sampling procedures do
generally estimate percentages well by increasing the differences
(statistical power or effect size) between one included set and
another, but when sampling must correctly distinguish a single item out
of a large number of items, the relative size of the full set counts,
because the set of interest is a set of one. Again, for the same
problem, imagine a die with one side labeled "most probable" and the
other side consisting of a sphere with one million small flattenings.
The analysis does not create that die from a die of the same
configuration and size but with 21 million flattenings on the spherical
side. It creates it from a die of the same configuration with 21
million flattenings, each the same size as the original small flat
areas, making the spherical area of the die 20 times larger. Rolling
that larger die thus gives the spherical area a far greater chance of
presenting the "correct" face.
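The arithmetic of this example can be checked directly (numbers as in the text; the 21-fold extension is the stated assumption, not an estimate):

```python
# Worked arithmetic for the dilution example above.
sampled = 1_000_000        # trees visited by the chain
unsampled = 20_000_000     # trees never visited
best = 0.95                # posterior of the best tree among sampled trees

# Assumption from the text: each unsampled tree carries the same average
# mass as a sampled tree, so the probability space grows 21-fold.
extension = (sampled + unsampled) / sampled   # 21.0
diluted = best / extension

print(round(diluted, 3))       # 0.045
print(round(1 - diluted, 3))   # 0.955
```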
	A possible explanation of why poorly supported arrangements are
tolerated or accepted is "statistical relevance" (Salmon 1971: 11).
Statistical relevance is the philosophy-of-science version of the Bayes
Factor, recently much promoted by Bayesian statisticians (e.g.
Aris-Brosou & Yang, 2002; Suchard & al., 2002). The prior understanding,
in this case, is that there is no or equal support for a particular
hypothesis, and this is replaced after analysis by some statistical
support which demonstrates what appears to be a relatively great and
perhaps significant increase in support. This, however, is only
apparent; a particular absolute level of support is required for the
arrangement to be accepted as due to shared ancestry. As Huelsenbeck et
al. (2002) have pointed out, although the Bayes Factor has applications
of value, such as model selection, it is the posterior probability that
genuinely reflects the chance of an arrangement being correct. Also, a
similar attitude known as "clinical relevance" (Hopkins 2001, 2003) is
valuable in practice when an effect is demonstrated as not entirely
reliable (e.g. p-values of 0.80 or 0.90) but the chance that it is
helpful far outweighs the risk, e.g. using a harmless drug to treat a
dread illness. In betting on our science, however, the loss upon
failure far outweighs any benefit from success.
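The distinction drawn here between relative and absolute support can be put in numbers (all figures made up for illustration): a Bayes Factor of 20 counts as strong relative support, yet if the hypothesis starts out improbable, the posterior probability it yields remains modest.

```python
# Illustrative, made-up numbers: a large Bayes Factor (relative support)
# need not yield a high posterior probability (absolute support).
prior_odds = 1 / 99              # the hypothesis starts out improbable
bayes_factor = 20                # "strong" relative support from the data
posterior_odds = bayes_factor * prior_odds
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 3))       # 0.168 -- still far below 0.95
```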



