[Taxacom] Likelihoodism versus probabilism

Stephen Thorpe stephen_thorpe at yahoo.co.nz
Fri Oct 9 15:51:10 CDT 2020

Hi Richard,

Your writing is like art: beautiful to look at, but totally opaque in meaning! I was trying to make the point that there are some fundamental difficulties with the notion of probability. Probabilities only really exist in a robust sense if there is indeterminism; otherwise they are more a measure of human ignorance. That is, if we knew all the relevant facts about a forthcoming coin toss (e.g. the weight of the coin, the precise amount and direction of force applied in the toss, etc.), then we could predict the outcome with certainty. The only true indeterminism recognised by physics is at the quantum level, and even that is murky. Applying this to phylogenetics is murkier still. Isn't it really just finding the most parsimonious solution(s) and going with that? It is unclear to me what role probabilities even really play here. As something of an analogy, crimes are generally solved by finding the hypothesis or hypotheses that invoke the fewest conspiracies, given that conspiracies are considered "unlikely" in some sense. So a hypothesis invoking 100 conspiracies is "much less likely" than a hypothesis invoking only 2 conspiracies.

Cheers, Stephen
    On Saturday, 10 October 2020, 03:50:38 am NZDT, Richard Zander <richard.zander at mobot.org> wrote:  
Thanks, Stephen, for the comment on the dysclarity of probabilities. I’m sure there is a solution in combinatorics that will give an exact answer to the probability of the probability in the case you proffer. 
The case I’m referring to is when probability is relevant at all. Likelihoodists eschew the idea of a probability distribution and focus on evidence alone. There is, for the “evidentialists,” no alpha, no p-value, no multiple-test problem, and none of the other impedimenta of the “expectationists.” The only important hypothesis is the one that maximally explains the actual data. Once the top of the bell-shaped curve is identified, the curve no longer matters.
This is fine if there are only two hypotheses and maximum likelihood or maximum posterior probability is sufficient. But it isn’t sufficient, unless you are publishing in a major phylogenetics-oriented journal. If hypotheses are discarded in likelihood analysis when adding log-likelihoods, or in MCMC subsampling for a maximum, then the “alpha,” or chance of being wrong, compounds with each test, which is the familywise inflation the Bonferroni correction is meant to control (right?).
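The multiple-test worry above can be made concrete. The error rate does not literally double with each test; for independent tests at level alpha, the chance of at least one false positive grows as 1 - (1 - alpha)^m, which the Bonferroni correction bounds by testing each hypothesis at alpha/m. A minimal sketch (my own illustration, not from the post):

```python
# Sketch (my illustration): growth of the familywise error rate with the
# number of independent tests, each run at alpha = 0.05.
alpha = 0.05
for m in (1, 2, 10, 100):
    fwer = 1 - (1 - alpha) ** m  # P(at least one false positive in m tests)
    print(f"{m:3d} tests: {fwer:.3f}")
# The Bonferroni correction caps this by testing each hypothesis at alpha / m.
```

At 10 tests the familywise rate is already about 0.40, and at 100 tests it is essentially 1.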
Turing used likelihood, or empirical Bayes, to break German codes during WWII. But although hypotheses of lower probability were discarded, his hypothesis of maximal likelihood or maximal posterior probability was tested and verified by dropping a depth charge and seeing the debris of a submarine churn up in the sea. The proper verification of molecular results that optimally explain the data would be how well they match morphological results. But… this is either turned around, such that morphological results have to match molecular results, or you are a “Verificationist,” a deprecation designed to isolate likelihood from reality.
Richard H. Zander
Missouri Botanical Garden – 4344 Shaw Blvd. – St. Louis – Missouri – 63110 – USA
richard.zander at mobot.org Ofc: +1 314 577-0276
Web sites: http://www.mobot.org/plantscience/bfna/bfnamenu.htm and http://www.mobot.org/plantscience/resbot/
From: Stephen Thorpe [mailto:stephen_thorpe at yahoo.co.nz]
Sent: Thursday, October 08, 2020 3:48 PM
To: Taxacom (taxacom at mailman.nhm.ku.edu) <taxacom at mailman.nhm.ku.edu>; Richard Zander <Richard.Zander at mobot.org>
Subject: Re: [Taxacom] Likelihoodism versus probabilism
Here's a good example of how unclear probabilities are; I call it "Double Russian Roulette". A machine randomly loads bullets into a revolver (which has a maximum capacity of ten bullets). Not only is the position of bullet insertion random, but so is the total number of bullets (anywhere from 0 to 10). You point the gun at your head and pull the trigger. What is the probability of shooting yourself? One thing that is clear is that if there is, for example, just 1 bullet in the gun, then the probability is 1/10. But the probability (risk) would seem to be greater than that, even if it does turn out there was in fact just 1 bullet in the gun. Probabilities seem to depend on what you know or don't know.
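For what it's worth, the thought experiment can be simulated. Before you learn how many bullets were loaded, the relevant probability averages over all possible loads: with the bullet count uniform on 0 to 10, it is E[N]/10 = 0.5, not the 1/10 you would assign after learning there was exactly one bullet. A quick sketch, under my own assumed model of the machine:

```python
import random

# Simulation of the "Double Russian Roulette" thought experiment.
# Assumed model: bullet count uniform on 0..10, firing chamber uniform on 10.
random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    n_bullets = random.randint(0, 10)  # the machine's random load
    chamber = random.randrange(10)     # chamber that ends up under the hammer
    if chamber < n_bullets:            # a loaded chamber fires
        hits += 1
print(hits / trials)  # ≈ 0.5, versus the 0.1 you assign once you know N = 1
```

Which illustrates Stephen's point exactly: the two numbers differ only in what the agent knows.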
On Thursday, 8 October 2020, 11:38:51 pm NZDT, Richard Zander via Taxacom <taxacom at mailman.nhm.ku.edu> wrote:
Are you, as a systematist, a likelihoodist or a probabilist? Here is a test:
Q.: What does the support measure of 1.00 BPP mean in a molecular cladogram?
A.: A probabilist would say it means that exactly that molecular cladogram represents what happened in nature because there is no evidence of support for any other cladogram. In other words, it is statistically certain that the probability distribution is entirely taken up by the chance of that cladogram modeling what happened in past evolution.
A likelihoodist would say that exactly that molecular cladogram is most likely to have generated that molecular data. And that considering a probability distribution of the chance of other cladograms generating that data is irrelevant.
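The contrast between the two answers can be put in miniature with a toy example (the hypotheses and data here are invented for illustration, not from the thread): the likelihoodist reports only the hypothesis maximizing P(data | H), while the probabilist also asks what share of the posterior that winner holds once all candidate hypotheses are normalized against each other.

```python
from math import comb

# Toy contrast: 7 heads in 10 tosses, five candidate per-toss head
# probabilities. The likelihoodist picks the argmax of the likelihood;
# the probabilist normalizes over the candidates (flat prior assumed).
heads, n = 7, 10
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
lik = {p: comb(n, heads) * p**heads * (1 - p)**(n - heads) for p in hypotheses}
best = max(lik, key=lik.get)               # maximum-likelihood hypothesis
posterior = lik[best] / sum(lik.values())  # its share of the posterior
print(best, round(posterior, 3))           # 0.7 wins, but holds only ≈ 59%
```

The winning hypothesis is clear, yet it carries well under two-thirds of the posterior mass, which is the gap the probabilist cares about and the likelihoodist discards.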
The probabilist is wrong, having not been alert to a major shift in systematics.
A very simple example of using MCMC Bayesian methods (which involve likelihood ratios) is as follows.
Consider a six-sided die, weighted such that one side comes up more often than any of the other sides. Roll the die, which is like Monte Carlo sampling, and keep track of how often each side comes up. As you roll, discard any data on sides that come up less often than the side that comes up most often. The data converge on the side that comes up most often, the "truth" of the side of maximum likelihood. If this is done a large number of times, a Bayesian Posterior Probability of 1.00 is given to that side. This does not tell you how often that side came up out of all those rolls of the die: it could be anywhere from almost all the time to only a little more than 1/6 of the time. This is why molecular cladograms may have all their branches supported at 1.00 BPP.
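A rough simulation of the die example, under weights I have assumed myself (0.20 for one side versus 0.16 for the rest), shows the effect: a side favored only slightly still wins nearly every long run of rolls, so a winner-take-all summary approaches 1.00 even though the winning side's per-roll probability is barely above 1/6.

```python
import random

random.seed(1)
# Side 0 is only slightly favored (assumed weights, for illustration).
weights = [0.20, 0.16, 0.16, 0.16, 0.16, 0.16]

def winner(n_rolls):
    # Roll the weighted die n_rolls times; return the side seen most often.
    rolls = random.choices(range(6), weights, k=n_rolls)
    counts = [rolls.count(s) for s in range(6)]
    return counts.index(max(counts))

runs = 200
wins = sum(winner(5_000) == 0 for _ in range(runs))
print(wins / runs)  # fraction of long runs "won" by side 0: close to 1.0
```

The winner-take-all fraction is near 1.0 even though side 0 appears on only about a fifth of the individual rolls, which is the disconnect between support value and per-roll probability the paragraph describes.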
Molecular systematists are likelihoodists. The actual probability that a molecular cladogram with branches all at 1.00 BPP represents what actually happened in nature is almost certainly near zero. The actual probability that a 1.00 BPP three-taxon split (two branches more closely related than a third) is "correct," as opposed to a simple nearest-neighbor interchange (switching one branch of a sister group with the next lowest branch), may not be much more than 50:50.
By "correct" a probabilist means the hypothesis which probably happened in nature given the fact that all possible hypotheses do probabilistically explain the data, not what the likelihoodist means by "correct," which is that the one cladogram is definitely the best hypothesis that explains the data in spite of the chances that other hypotheses are also possible.
Likelihoodists say we systematists are all likelihoodists now.  Are we? Did you take the test? I think it should be intolerable that classification changes be based on likelihood and optimality alone. Likelihood and MCMC Bayesian analysis should never be used as a basis on which to make changes in classifications given the low probability that the results are retrodictions of the evolutionary past.
Am I right? I hope Taxacom probabilists and likelihoodists might weigh in on this problem, which I think is a fundamental difficulty with modern systematics.
[For more info, Wikipedia has good treatments of likelihood, and literature by A.F.W. Edwards, the phylogeneticist who popularized likelihood as a replacement for probability, is available on the Web, as are additional criticisms. Likelihood ratios and the similar Bayes factors have utility when there are only two hypotheses or when the number of possible hypotheses is unknown. Log likelihoods are added in likelihood analyses just as Shannon informational bits are added in macroevolutionary analysis, but the similarity in dealing with trait differentials ends there.]
Richard H. Zander
Missouri Botanical Garden - 4344 Shaw Blvd. - St. Louis - Missouri - 63110 - USA
richard.zander at mobot.org Ofc: +1 314 577-0276
Web sites: http://www.mobot.org/plantscience/bfna/bfnamenu.htm and http://www.mobot.org/plantscience/resbot/
Taxacom Mailing List
Send Taxacom mailing list submissions to: taxacom at mailman.nhm.ku.edu
For list information; to subscribe or unsubscribe, visit: http://mailman.nhm.ku.edu/cgi-bin/mailman/listinfo/taxacom
You can reach the person managing the list at: taxacom-owner at mailman.nhm.ku.edu
The Taxacom email archive back to 1992 can be searched at: http://taxacom.markmail.org
Nurturing nuance while assaulting ambiguity for about 33 years, 1987-2020.
