Weights

James Francis Lyons-Weiler weiler at ERS.UNR.EDU
Tue Mar 4 06:06:38 CST 1997


Tom,

        In your partial response, you are talking past me owing to
        a major assumption on your part about what method of inference
        you think I espouse.  Therefore, I am responding to the first
        (half?) of your response with the hope that we might actually
        have some worthwhile discussion.  I dislike overlapping
        responses, but the discussion can easily become muddled if
        we assume too much of each other...


On Mon, 3 Mar 1997, Tom DiBenedetto wrote:

> Probability does not play an implicit role in the assessment of
> phylogenetic hypotheses. It plays an explicit role amongst those who
> find it to be a useful and justifiable approach, and it plays no role
> for those who find such approaches problematical. Perhaps you should
> contemplate what implicit really means.


        I don't find any of your points particularly compelling.  This is
        merely a claim, and your viewpoint.

My Webster's Ninth Collegiate defines IMPLICIT as

        1.
        a. capable of being understood from something else though
           unexpressed.
        b. involved in the nature or essence of something though
           not revealed, expressed, or developed.
        c. a mathematical function

        2. being without doubt, or reserve: unquestioning, absolute.

In case I've not been clear enough, what I mean to say is that
probability is involved in the nature and essence of "unweighted"
parsimony, and although it has been revealed, it is not often expressed,
and not fully developed.

>
> >       If probability does play an implicit role, it follows
> >that explicit statements of corroboration in terms of probabilities should
> >be turned to the task of increasing phylogenetic accuracy.
>
> And those who reject such approaches may find these statements to be
> unreliable in terms of increasing such accuracy.


        How has this been demonstrated?  What do you mean?



> >It is
> >remarkable to me that the  various camps that persist to this day fail to
> >see that because they are asking the same questions, there must exist a
> >common set of assumptions.
>
> And this strikes me as an attempt at a calm and dignified way of
> saying that everyone should accept your assumptions. Although I
> appreciate the tone, I will pass on the content. I do not assume that
> the probabilities which you calculate are useful to increase
> phylogenetic accuracy.

        Given our shared, enlightened understanding of what I mean
        by "implicit", I don't expect ANYONE to "accept" the
        assumptions.  They are not mine, by the way.  They simply
        are.  If an assumption plays a role, you can't change
        that by claiming that it doesn't.  Each of the assumptions
        I listed poses a very real threat to accuracy if it is
        assumed away and nevertheless obtains (in specific
        instances).

        What probabilities are you referring to?  I'm certain that
        the probabilities that you calculate implicitly (i.e., 1 and 0)
        may be useful to increase phylogenetic accuracy, but the problem
        is that you can't tell when you're on the mark with equal
        probabilities.
>
> > The failure to recognize these assumptions (which include  probability)
> >does a disservice to the field, and that's when people start talking past
> >each other.  I have outlined the shared assumptions I managed to
> >recognize; perhaps there are more. To avoid this, I will address your
> >concerns in the terms of homology testing.
>
> Sorry to be blunt James, but the disservice to the field arises from
> those without sufficient imagination to recognize that others may
> pass different judgements on methodological standards. You share
> these assumptions with yourself and with those who think as you do.
> You do not share them with me. I find it remarkably arrogant of you
> to assume that the points upon which we differ are not even open to
> question.

        I also share these assumptions with anyone who did not know
        that their methodology required these assumptions.  I'm not
        being arrogant.  You seem to misunderstand or reject the
        dynamic nature of our understanding of methods of scientific
        inference, phylogenetic methods particularly.  For you,
        cladistic parsimony is all it takes.  For me, and for many
        others, methods of phylogenetic inference are still in their
        infancy.  If there is arrogance about, I'm afraid it takes the
        form of a failure to appreciate making assumptions
        explicit, laying the limitations of methodology bare, and
        discussing such things frankly.  There is nothing sacred
        about cladistic parsimony that can remove from it its
        assumptions, explicit AND implicit, known and unknown.

        Can you provide a list of assumptions that you think _do_
        obtain for cladistic parsimony?

>
> Well, good. Perhaps we find a starting point upon which we agree.
> With matrix in hand, and before the algorithm is run, we all agree
> that some of the distributions may not in fact indicate truly
> homologous transformations. To jump ahead for a moment, we can
> probably also agree that in the end, after all the analyses are run,
> no matter which ones they may be, we may also very well be in error.
> Such are the inherent limitations of any human endeavor, compounded
> by our difficult task of reconstructing historical events. So this
> leaves us with the algorithm as a procedure which should advance the
> accuracy of our data, with the understanding that it will not be
> perfect. I use a congruence test applied to the set of homologies, a
> set of character distributions, each of which represents an assertion
> that a particular evolutionary transformation occurred. The real
> transformations are inherently equal.


This is simply an assumption.  You claim that you CONSIDER them to be
equal, but you also don't KNOW that they are equal.  This tension
hobbles phylogenetic inference via parsimony alone.  Wouldn't it be nice
if one could assess, by examining character covariation, whether or not
there was a local erosion of the fidelity of the information content
contained in differences among character states (i.e., pattern in the
distribution of character states)?  Such a local erosion occurs when
there have been long branches, and superimposed changes have occurred that
make some of the comparisons utterly meaningless with respect to
phylogenetic history.  Comparisons among taxa on long branches and
comparisons among taxa that are NOT on long branches are not
equivalent, but "blind" equal weighting assumes that they are.  Note that
long branches may result simply from taxon sampling, and are not only a
product of increased rates of anagenetic evolution.

>. My "confidence" in my ability
> to have correctly discovered them is also equal, for they all pass
> the tests that I find relevant. You don't really apply a further test
> to them, but merely quantify your confidence in your "homology
> hypotheses" in light of generalizations which you find to be relevant
> and to be able to inform such decisions. That is our disagreement.

        I'm afraid that you don't understand what generalizations
        I find to be relevant;  if you think I mean models of evolution,
        or probabilities of character state transformations, I have not
        been explicit enough, and must apologize.  I focus all of my
        consideration on pattern in character state matrices, and the
        NULL probabilities of such occurrences.  My probabilistic
        inferences are about the matrix of character states itself,
        and not about patterns of characters on internal nodes in
        trees, or the probability of transformations among states.

        But then, all of that really is independent of the ongoing
        discussion.  I can plainly see how probability is implicitly
        incorporated into "unweighted" parsimony, but you claim that
        you cannot see it.  Let's not talk past each other; you're
        saying that I advocate the incorporation of probabilistic
        assertions into phylogenetic inferences; I'm saying that
        those assertions are already there.  You find that arrogant,
        which is unfortunate, because unnecessary acrimony clouds
        the issues.  I'm saying that since it's there, it should be
        turned to an advantage (without saying HOW); you're assuming
        how I might go about doing that.  Necessarily, if you
        reject its presence, you reject that an advantage can be had.


> So what? We seemed to agree last time that all weighting schemes, in
> a probabilistic approach are arbitrary. As to being wrong, well that
> happens sometimes. It certainly can happen with weights imposed
> through a probability calculation. If we were to know the true
> answer, we would be able to go back and reweight our characters such
> that we could run the matrix and reproduce the right answer. What
> would those weights be? In all cases, either 0 or 1, for if we knew
> what happened, we would know that a hypothesized transformation
> either happened or didn't. The probabilities say nothing about the
> reality of the transformation; they are statements about what we
> predict was likely to happen, given reference to some knowledge we
> think may be relevant. Why is it so hard for you to understand that
> others might find the knowledge you use to be irrelevant? Or that the
> knowledge has very weak predictive power in the situations in which
> you attempt to apply it?


        I know that others may assume that their methods aren't
        influenced by such things; whether that assumption is valid or not
        is the point of contention.

        Exactly what knowledge do you presume that I attempt to
        apply?
        I do admit that weighting is arbitrary; in fact, that's my
        point.  Obviously, you agree that if parsimony requires implicit
        probabilistic inference, then equal weighting WOULD be
        arbitrary (it's a simple conditional).  So I pose the following
        challenge: provide a PROOF (logical) that parsimony does not
        incorporate implicit probabilistic inference of character state
        transformations, including the assumption that homoplasy is rare
        relative to synapomorphy.

        And I beg of you not to simply respond by saying "provide a
        proof that it does!".  That would be counterproductive.

        Further, please understand that when I use the term
        "probabilistic inference" I am not in particular thinking of maximum
        likelihood as it has been applied to phylogenetic systematics.
        In particular, I am referring to probabilistic tests of
        null patterns, which (AND THIS IS AN IMPORTANT POINT) may
        never even exist!  MANY people make the silly mistake of saying
        things like "that's a bad null; we don't expect that anyway".
        They just don't "get" nulls and the value and rigor they bring
        to statistical (and phylogenetic) inference.  Obviously,
        homology is non-random, and even better, it's non-independent.
        The non-independence among characters you wrote of earlier
        can easily be turned to an advantage by employing a probabilistic
        strategy.  First, one must ask "what are the observable
        consequences of such non-independence?"  The answer(s) to this
        question then suggest possible avenues for detecting the
        presence or absence of such non-independence.  One then only
        needs to define an appropriate comparison, a null hypothesis
        (the absence of observable consequences), and an error term to
        be in business.
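        To make that recipe concrete, here is a rough sketch (in
        Python, purely for illustration; it is NOT RASA, nor any
        published test, and the function names are my own invention)
        of one way to test for character covariation against a null
        hypothesis of its absence, using a permutation-based error
        term:

import itertools
import random

def covariation_statistic(matrix):
    # matrix: rows are taxa, columns are binary (0/1) characters.
    # Sum, over all pairs of characters, of the departure of the observed
    # count of taxa sharing state 1 from the expectation under independence.
    n_taxa = len(matrix)
    n_chars = len(matrix[0])
    total = 0.0
    for i, j in itertools.combinations(range(n_chars), 2):
        ones_i = sum(row[i] for row in matrix)
        ones_j = sum(row[j] for row in matrix)
        joint = sum(1 for row in matrix if row[i] == 1 and row[j] == 1)
        total += abs(joint - ones_i * ones_j / n_taxa)
    return total

def permutation_test(matrix, n_permutations=999, seed=1):
    # Null hypothesis: no covariation among characters beyond what arises
    # when each character's states are shuffled independently across taxa.
    # Returns the observed statistic and a one-tailed p-value.
    rng = random.Random(seed)
    observed = covariation_statistic(matrix)
    n_taxa, n_chars = len(matrix), len(matrix[0])
    as_extreme = 1  # the observed arrangement counts as one outcome
    for _ in range(n_permutations):
        cols = [[row[c] for row in matrix] for c in range(n_chars)]
        for col in cols:
            rng.shuffle(col)
        permuted = [[cols[c][t] for c in range(n_chars)] for t in range(n_taxa)]
        if covariation_statistic(permuted) >= observed:
            as_extreme += 1
    return observed, as_extreme / (n_permutations + 1)

        A small p-value indicates more covariation among the observed
        character states than the null (its absence) can account for.
        The particular statistic is unimportant here; the point is only
        that the comparison, the null hypothesis, and the error term
        are all explicit.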

        Some think the task impossible.  It is a certainty that
        they are wrong.


> >       The amount of evidence each character (or state) carries is where
> >probability first sneaks into parsimony.
>
> I think this plain wrong. The character distributions imply
> transformations which either happened or did not. Those
> transformations are either erroneous (hence carrying no information
> relative to our goal), or they carry as much information as any other
> truly homologous transformation. There is no probability adhering to
> the "amount of evidence" that the character carries. Probability
> enters only to the extent that the investigator assigns a priori
> notions of what he thinks likely to have happened.


        You apparently ascribe probabilities
        of 1.0 and 0 because you think (absolutely) that something
        happened (1.0) or it didn't (0.0).  Sometimes a character
        carries a mix of truth and misinformation (consider that
        a character has its states distributed among taxa).

        Sometimes some of the pairwise comparisons (e.g., A in
        taxon i vs. A in taxon j) represent true homologies, while
        some of the pairwise comparisons (for the SAME character)
        do not (e.g., A in taxon i vs. A in taxon Z), due to
        homoplasy.  Not every character is equally informative
        owing to this feature, and weighting characters equally
        ignores this entirely.

        Because a proportion of the among-taxon comparisons (REMINDER:
        I am discussing comparisons, NOT transformations, which are
        inferences, not observations) are homologous, and some are not,
        fairly simple math provides the actual proportion (and therefore
        realized probability) of a state comparison being informative.
        What we don't know by looking at the differences is WHICH are
        informative, and we can't _know_ them by cladistic parsimony
        alone, but I propose that we can make at least a relative
        determination of which comparisons are misleading, accurately
        identify which comparisons are not, and thereby IMPROVE upon the
        performance of cladistic parsimony and other tree-selection
        criteria.
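        (To make that simple math concrete with a made-up illustration:
        suppose state A occurs in four taxa, i, j, k, and z, giving
        4*(4-1)/2 = 6 pairwise comparisons.  If A arose once in the
        common ancestor of i, j, and k but independently in z, then
        three of the six comparisons (i-j, i-k, j-k) reflect homology
        and three (i-z, j-z, k-z) do not, so the realized probability
        that a comparison drawn from that character is informative is
        3/6 = 0.5.  The numbers are invented; the arithmetic is the
        point.)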

>
> >The second time is the degree of
> >support afforded the hypothesis by the characters.  Consider that for
> >decades people have listed synapomorphies supporting clades as evidence in
> >favor of that hypothesis (and in disfavor of alternatives, which include
> >the null that all taxa are equally related).  Why are 2 synapomorphies
> >more convincing than 1? Why are 100 more convincing than 2?
>
> Because they constitute more evidence for a hypothesis, plain and
> simple. As I have said, there is without question an inherent amount
> of uncertainty adhering to any hypothesis formulated by a human
> being. This does not lead to the inevitable conclusion that the
> uncertainty can be alleviated by the approaches you take. I don't know
> why you assume they must.

        Popper argued against such positivism.  Gathering support
        FOR hypotheses isn't the same as trying to refute them.

        The point, and the reason why you consider two synapomorphies
        more impressive than one, is that you don't expect them
        to occur by chance: two are more improbable than one,
        and 100 are more improbable than two, by chance alone.  It is
        either pure positivism, or probabilism unexposed.
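        (A made-up illustration of the arithmetic: with eight taxa and
        a binary character whose derived state occurs in exactly four
        of them, there are C(8,4) = 70 possible distributions of that
        state, so the chance that a random such character picks out one
        particular four-taxon group is 1/70.  Two independent characters
        both doing so have probability (1/70)^2, roughly 0.0002, and one
        hundred doing so is vanishingly improbable.  The numbers are
        invented, but they show that "more convincing" is only meaningful
        by reference to what chance alone would produce.)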
>
> >In a sense, we are adopting a far stricter standard than you are, for a
> >hypothesis (a column of states in a matrix) is only advanced when we are
> >convinced that it is homologous, rather than assigning weights to evidence
> >we are, in some sense, not sure of.
> >
> >       The standard you are referring to is which characters are allowed
> >to interact in a parsimony analysis.  Those that you have rejected as
> >homologous are weighted "zero".   The debate is not whether or not the
> >approach of testing hypotheses of homology is better than other
> >approaches; let's say for argument's sake that it does take a logical
> >priority. The debate is whether or not, in the process of becoming
> >convinced of a hypothesis of homology, probability plays an implicit role.
>
> And I say it does not. Yes we "weigh" a transformation zero if we
> cannot sustain a hypothesis of its homology through our testing
> procedures. If such a hypothesis can be sustained, it is weighted
> one. This is the application of standards which rest on our
> understanding of what evolutionary events are. We eliminate
> characters from our matrix if we cannot in good faith hypothesize
> that they are evolutionary events. If they could be, then they are
> advanced as assertions that this particular event occurred. The
> events are inherently equal. If we propose that the character
> distribution indicated the event, it is indicating something
> inherently equal to that which other characters indicate.  We weigh
> equally because the reality we are reconstructing is made up of
> inherently equal events. Unequal weighting merely represents
> statements on your part that are essentially predictive, but in a
> manner which is not then put to a test, rather it is allowed to
> structure an answer. It is only through the answer, the phylogeny,
> that we can answer the question of which transformations occurred.
> Probabilities calculated from other situations are not meaningful
> ways of deciding whether a particular event occurred.

        First, by phylogeny I presume you mean either a cladogram
        or some other evolutionary tree.  For me (and many others)
        the phylogeny is the goal; it is the truth; lines on
        paper are "trees".

        Second, hopefully it's clear that I don't go for generalization
        of probabilities across time, characters, or taxa.  One
        such generalization happens to be equal weights.

        Third, I've already argued that some transformations, equally
        homologous, are nevertheless NOT equivalent; that some
        transformations ACTUALLY carry more information than others
        (independent of whether one can determine that).  This
        becomes particularly true the more complex the evolutionary
        transformations are.

        On whether or not you can in good faith propose some as
        evolutionary hypotheses, if you had an adequate test of
        non-homology, you'd be able to distinguish (with explicit
        confidence) between non-homologies and homologies.  Recall
        that Popper never cared a whit about HOW hypotheses were
        constructed, only how they were tested.  Ergo, cladistic
        parsimony is not a sufficient test (if it is a test at all).



> > If probability plays an implicit role in the assessment of phylogenetic
> >hypotheses, its role should be made explicit.
> > Since the practice of phylogenetic inference via parsimony has been
> >generalized, it is clear that its role has already been made explicit.
>
> It has been generalized along an axis which does not exist for most
> of those who actually use it. To repeat, probability does not play an
> implicit role in anyone's approach. It is *possible* to weigh
> characters from a probabilistic perspective. If one does not do that,
> because of a rejection of the approach, that does not make it simply
> implicit. The "making explicit" which you refer to is nothing more
> than the development of the possibility that one could approach these
> questions in a probabilistic manner, and can use a modified parsimony
> approach to do so.

        One version of generalized parsimony is "unweighted" parsimony.
        Therefore, generalized parsimony encompasses this single,
        arbitrary point along its continuum.  Not only does that axis
        exist, but you've been on the axis all along!
>
> > The impact of equal weighting can be explored in any instance.  Take for
> >instance a sensitivity analysis; if, in an applied instance, different
> >weighting schemes do not change the result (say, tree topology), then it
> >can be inferred that weighting is not an issue (IN THAT CASE).  But if
> >modifications to weighting changes the optimal topology, then it also
> >changes the outcome of the parsimony congruence "test", and  a weighting
> >scheme must therefore be justified somehow.
>
> There is no question that weighting can change the outcome. And there
> is no question that if you believe it appropriate to calculate
> a priori probabilities, and to enter those as a factor in the analysis,
> then you must justify your actual values, even if they are all the
> same. The equal weighting which has been traditionally employed
> simply does not rest on notions of equi-probability. The issue is
> irrelevant to our analyses.

        So you admit that equal weighting is arbitrary?

> What you are doing is simply biasing the outcomes to favor your
> predictions; namely that some parameter estimated in an independent
> situation, or at a higher level, can meaningfully inform the test. I
> see no reason to assume that this will be so. I have no desire to
> constrain my findings to such expectations.

        A sensitivity analysis explores the boundaries of an already
        constrained set of circumstances (in this instance, equal
        weights).  I don't see how trying to define those
        boundaries would bias the outcome to favor predictions;
        maybe I just don't know what you mean here by "my
        predictions".  What are my predictions, if I alter my
        weighting on successive proportions of characters, selected
        at random, say, to see how many standard deviations
        in either direction I have to go before the MPT changes?
        The goal here would be not to set boundaries on the MPT,
        but to provide some measure of how my (equal) weighting
        is influencing the outcome, which would tell me how
        constraining the original assumption is or is not.
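        To show only the loop structure I have in mind (a sketch, not a
        working analysis: the tree search itself would be delegated to
        whatever parsimony program one prefers, supplied here as the
        user-provided function infer_mpt, a name I have just made up,
        and a simple step size stands in for the standard-deviation
        units mentioned above):

import random

def weight_sensitivity(matrix, infer_mpt, step=0.25, max_perturbation=3.0,
                       fraction=0.5, seed=1):
    # Starting from equal weights (all 1.0), perturb the weights of a
    # randomly chosen subset of characters by progressively larger amounts
    # and report the smallest perturbation at which the optimal topology
    # changes.  infer_mpt(matrix, weights) is supplied by the caller and
    # should return a comparable representation of the most parsimonious
    # topology (e.g., obtained from an external parsimony program).
    rng = random.Random(seed)
    n_chars = len(matrix[0])
    baseline = infer_mpt(matrix, [1.0] * n_chars)
    magnitude = step
    while magnitude <= max_perturbation:
        weights = [1.0] * n_chars
        subset = rng.sample(range(n_chars), max(1, int(fraction * n_chars)))
        for c in subset:
            weights[c] = max(0.0, 1.0 + rng.choice([-1.0, 1.0]) * magnitude)
        if infer_mpt(matrix, weights) != baseline:
            return magnitude   # the result is sensitive at this perturbation
        magnitude += step
    return None                # topology stable over the range examined

        A return of None would say that, for this matrix, the
        equal-weighting assumption is not doing much work; an early
        change in topology would say the opposite.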


>
> >       Based on first principles of evolutionary theory, we can expect
> >that sometimes the history of life will have produced confounding pattern
> >in the distribution of character states among taxa that guarantees that
> >the cladistic algorithm you describe will fail.
>
> Or yours or anyones.

        Here, I'll grant you some license, because you seem to have
        some idea about what my algorithm is before I've offered it.
        I won't go into detail about RASA here, because it's not the
        focus of the thread, and I'm sure it's not what you mean
        when you say "my algorithm".  I will say that I can use it
        to determine in a given instance whether or not various
        methods will fail.

>
> >Its weakness and
> >susceptibility to this can be moderated by incorporating consideration
> >of the probabilities of patterns of character state distributions.
>
> Or such incorporation can lead one further from the truth. This is
> clearly a possibility you do not even imagine. I find that
> incredible.

        You should find it incredible, for I have.  Note that again I
        am referring to distributions, and to patterns, which can be
        explored with great return statistically; I am not referring to
        transformations, but to observed differences.  Transformations
        imply processes, which we both agree are difficult to
        ascribe probabilities to (you maintain it's a
        non sequitur, I know).  But patterns lend themselves well
        to probabilistic measures.
>
> >I'm interested, Tom: would you agree or disagree that your favored set of
> >methodologies can be improved upon?
>
> I cannot imagine how any methodology could not be improved upon.
> Sorry to break this to you, but that doesn't mean that I will buy
> anything from anyone. I think your approach introduces factors based
> on fundamentally irrelevant generalizations, and thus is as likely,
> or more likely to lead us in the wrong direction.

Again, I'm convinced you're talking past me for having assumed
that I espouse an alternative, probabilistically based weighting
scheme.  I proposed nothing of the sort.  What I proposed (as have
many others) is that "unweighted" parsimony can no longer be thought
of as anything other than an arbitrary weighting scheme, no better
justified than any other.  I went a step further than most by postulating
that probabilistic inferences are already incorporated at the equal
weighting point, and every other point, along the continuum of
generalized parsimony.

Now that at least my position is clear, are there any real points
of contention you have to offer on these two issues that don't stray
owing to your assumptions about my position?



