tdib at UMICH.EDU
Mon Mar 3 13:17:15 CST 1997
I've broken up my response into two or three parts to keep things
manageable. This is the first; I'll deal with the Popper stuff and
your general points later.
James Francis Lyons-Weiler wrote:
> If probability plays an implicit role in the assessment of
>phylogenetic hypotheses, its role should be made explicit.
Probability does not play an implicit role in the assessment of
phylogenetic hypotheses. It plays an explicit role amongst those who
find it to be a useful and justifiable approach, and it plays no role
for those who find such approaches problematical. Perhaps you should
contemplate what "implicit" really means.
> If probability does play an implicit role, it follows then
>that explicit statements of corroboration in terms of probabilities should
>be turned to the task of increasing phylogenetic accuracy.
And those who reject such approaches may find these statements to be
unreliable in terms of increasing such accuracy.
>remarkable to me that the various camps that persist to this day fail to
>see that because they are asking the same questions, there must exist a
>common set of assumptions.
And this strikes me as an attempt at a calm and dignified way of
saying that everyone should accept your assumptions. Although I
appreciate the tone, I will pass on the content. I do not assume that
the probabilities which you calculate are useful for increasing
phylogenetic accuracy.
> The failure to recognize these assumptions (which include probability)
>does a disservice to the field, and that's when people start talking past
>each other. I have outlined the shared assumptions I managed to
>recognize; perhaps there are more. To avoid this, I will address your
>concerns in terms of homology testing.
Sorry to be blunt James, but the disservice to the field arises from
those without sufficient imagination to recognize that others may
pass different judgements on methodological standards. You share
these assumptions with yourself and with those who think as you do;
you do not share them with me. I find it remarkably arrogant of you
to assume that the points upon which we differ are not even open to
debate.
>for me, the character state distributions that might appear to
>be homologous might also NOT be homologous, despite the a priori testing
>and the use of parsimony.
Well, good. Perhaps we find a starting point upon which we agree.
With matrix in hand, and before the algorithm is run, we all agree
that some of the distributions may not in fact indicate truly
homologous transformations. To jump ahead for a moment, we can
probably also agree that in the end, after all the analyses are run,
no matter which ones they may be, we may also very well be in error.
Such are the inherent limitations of any human endeavor, compounded
by our difficult task of reconstructing historical events. So this
leaves us with the algorithm as a procedure which should advance the
accuracy of our data, with the understanding that it will not be
perfect. I use a congruence test applied to the set of homologies, a
set of character distributions, each of which represents an assertion
that a particular evolutionary transformation occurred. The real
transformations are inherently equal. My "confidence" in my ability
to have correctly discovered them is also equal, for they all pass
the tests that I find relevant. You don't really apply a further test
to them, but merely quantify your confidence in your "homology
hypotheses" in light of generalizations which you find to be relevant
and to be able to inform such decisions. That is our disagreement.
>I gave a particular example of how, when characters are equally weighted
>apparent homology may be wrong, and the equal probabilities about
>transformation implied by equally weighting differences would be arbitrary
So what? We seemed to agree last time that all weighting schemes in
a probabilistic approach are arbitrary. As to being wrong, well, that
happens sometimes. It certainly can happen with weights imposed
through a probability calculation. If we were to know the true
answer, we would be able to go back and reweight our characters such
that we could run the matrix and reproduce the right answer. What
would those weights be? In all cases, either 0 or 1, for if we knew
what happened, we would know that a hypothesized transformation
either happened or didn't. The probabilities say nothing about the
reality of the transformation; they are statements about what we
predict was likely to happen, given reference to some knowledge we
think may be relevant. Why is it so hard for you to understand that
others might find the knowledge you use to be irrelevant? Or that the
knowledge has very weak predictive power in the situations in which
you attempt to apply it?
> The amount of evidence each character (or state) carries is where
>probability first sneaks into parsimony.
I think this is plainly wrong. The character distributions imply
transformations which either happened or did not. Those
transformations are either erroneous (hence carrying no information
relative to our goal), or they carry as much information as any other
truly homologous transformation. There is no probability adhering to
the "amount of evidence" that the character carries. Probability
enters only to the extent that the investigator assigns a priori
notions of what he thinks likely to have happened.
>The second time is the degree of
>support afforded the hypothesis by the characters. Consider that for
>decades people have listed synapomorphies supporting clades as evidence in
>favor of that hypothesis (and in disfavor of alternatives, which include
>the null that all taxa are equally related). Why are 2 synapomorphies
>more convincing than 1? Why are 100 more convincing than 2?
Because they constitute more evidence for a hypothesis, plain and
simple. As I have said, there is without question an inherent amount
of uncertainty adhering to any hypothesis formulated by a human
being. This does not lead to the inevitable conclusion that the
uncertainty can be alleviated by the approaches you take. I don't know
why you assume they must.
>In a sense, we are adopting a far stricter standard than you are, for a
>hypothesis (a column of states in a matrix) is only advanced when we are
>convinced that it is homologous, rather than assigning weights to evidence
>we are, in some sense, not sure of.
> The standard you are referring to is which characters are allowed
>to interact in a parsimony analysis. Those that you have rejected as
>homologous are weighted "zero". The debate is not whether or not the
>approach of testing hypotheses of homology is better than other
>approaches; let's say for arguments' sake that it does take a logical
>priority. The debate is whether or not, in the process of becoming
>convinced of a hypothesis of homology, probability plays an implicit role.
And I say it does not. Yes, we "weight" a transformation zero if we
cannot sustain a hypothesis of its homology through our testing
procedures. If such a hypothesis can be sustained, it is weighted
one. This is the application of standards which rest on our
understanding of what evolutionary events are. We eliminate
characters from our matrix if we cannot in good faith hypothesize
that they are evolutionary events. If they could be, then they are
advanced as assertions that this particular event occurred. The
events are inherently equal. If we propose that a character
distribution indicates an event, it indicates something inherently
equal to that which other characters indicate. We weight characters
equally because the reality we are reconstructing is made up of
inherently equal events. Unequal weighting merely represents
statements on your part that are essentially predictive, but in a
manner which is not then put to a test; rather, it is allowed to
structure an answer. It is only through the answer, the phylogeny,
that we can answer the question of which transformations occurred.
Probabilities calculated from other situations are not meaningful
ways of deciding whether a particular event occurred.
> If probability plays an implicit role in the assessment of phylogenetic
>hypotheses, its role should be made explicit.
> Since the practice of phylogenetic inference via parsimony has been
>generalized, it is clear that its role has already been made explicit.
It has been generalized along an axis which does not exist for most
of those who actually use it. To repeat, probability does not play an
implicit role in anyone's approach. It is *possible* to weight
characters from a probabilistic perspective. If one does not do that,
because of a rejection of the approach, that does not make it simply
implicit. The "making explicit" which you refer to is nothing more
than the development of the possibility that one could approach these
questions in a probabilistic manner, and can use a modified parsimony
approach to do so.
> The impact of equal weighting can be explored in any instance. Take for
>instance a sensitivity analysis; if, in an applied instance, different
>weighting schemes do not change the result (say, tree topology), then it
>can be inferred that weighting is not an issue (IN THAT CASE). But if
>modifications to weighting change the optimal topology, then it also
>changes the outcome of the parsimony congruence "test", and a weighting
>scheme must therefore be justified somehow.
There is no question that weighting can change the outcome. And there
is no question that if you believe it appropriate to calculate
a priori probabilities, and to enter those as a factor in the analysis,
then you must justify your actual values, even if they are all the
same. The equal weighting which has been traditionally employed
simply does not rest on notions of equi-probability. The issue is
irrelevant to our analyses.
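The sensitivity check James describes — rerun the analysis under different weighting schemes and see whether the optimal topology changes — can be sketched with a toy example. The four-taxon matrix, the weights, and the data below are entirely hypothetical; they exist only to show a topology flipping under reweighting, using Fitch parsimony as the scoring method:

```python
def fitch(tree, states):
    """Fitch small parsimony on a rooted binary tree.
    A tree is a leaf name (str) or a (left, right) pair.
    Returns (candidate state set, number of changes)."""
    if isinstance(tree, str):            # leaf: its observed state
        return {states[tree]}, 0
    ls, lc = fitch(tree[0], states)
    rs, rc = fitch(tree[1], states)
    common = ls & rs
    if common:
        return common, lc + rc
    return ls | rs, lc + rc + 1          # no shared state: count one change

def score(tree, matrix, weights):
    """Weighted parsimony length: per-character Fitch length times weight."""
    return sum(w * fitch(tree, char)[1] for char, w in zip(matrix, weights))

# The three unrooted four-taxon topologies, each rooted arbitrarily
# (the parsimony length does not depend on where the tree is rooted).
topologies = {
    "AB|CD": (("A", "B"), ("C", "D")),
    "AC|BD": (("A", "C"), ("B", "D")),
    "AD|BC": (("A", "D"), ("B", "C")),
}

# Hypothetical matrix: three characters supporting AB|CD (pattern 1100),
# two supporting AC|BD (pattern 1010).
matrix = [{"A": 1, "B": 1, "C": 0, "D": 0}] * 3 \
       + [{"A": 1, "B": 0, "C": 1, "D": 0}] * 2

def best(weights):
    return min(topologies, key=lambda t: score(topologies[t], matrix, weights))

print(best([1, 1, 1, 1, 1]))  # equal weights     -> AB|CD
print(best([1, 1, 1, 3, 3]))  # upweight last two -> AC|BD: the topology flips
```

Under equal weights the majority pattern wins; tripling the weight of the two minority characters reverses the result, which is exactly the situation where, on James's argument, the chosen weights would need independent justification.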
What you are doing is simply biasing the outcomes to favor your
predictions; namely, that some parameter estimated in an independent
situation, or at a higher level, can meaningfully inform the test. I
see no reason to assume that this will be so. I have no desire to
constrain my findings to such expectations.
>Based on first principles of evolutionary theory, we can predict
>that sometimes the history of life will have produced a confounding pattern
>in the distribution of character states among taxa that guarantees that
>the cladistic algorithm you describe will fail.
Or yours or anyone's.
>Its weakness and
>susceptibility to this can be moderated by incorporating
>probabilities of patterns of character state distributions.
Or such incorporation can lead one further from the truth. This is
clearly a possibility you do not even imagine. I find that
>I'm interested, Tom: would you agree or disagree that your favored set of
>methodologies can be improved upon?
I cannot imagine how any methodology could not be improved upon.
Sorry to break this to you, but that doesn't mean that I will buy
anything from anyone. I think your approach introduces factors based
on fundamentally irrelevant generalizations, and thus is as likely,
or more likely, to lead us in the wrong direction.