Weights

Stuart G. Poss sgposs at WHALE.ST.USM.EDU
Fri Mar 7 20:05:17 CST 1997


Tom DiBenedetto wrote:

>What makes you think that a statistical paradigm for character
>evaluation has any power to judge failure?

I suggest that you take a look at the work of Dr. Christopher Meacham:

Meacham, C. 1994.  Phylogenetic relationships at the basal radiation of
Angiosperms: Further study by probability of character compatibility.
Systematic Botany 19:506-522.

The point I was trying to make (and I believe was made earlier in this
thread) is that if given purely random data, we may want a method that
would fail to find a tree, or at least provide some means of determining
that the returned result amounts, in some sense, to "no significant
signal".  If it has no chance of failing to find a tree under such
circumstances (or at least providing some measure cautioning the user
about the resultant trees), then I do not see how we can regard the
results as informative.  In the presence of random data or perhaps only
partially random data, I remain unclear under what circumstances you
would determine your approach (observations+method) had failed.

From my perspective, should I simply treat my character hypotheses as
true by definition (having a Bayesian probability of one; hence, included
as opposed to excluded), it would be extremely difficult for me to
determine which of my characters, to use your terminology, "sank or
swam".  If Felsenstein's characterization is correct, because of a lack
of consistency, we cannot be sure that simply adding more and more data
to parsimony methods will give us the true tree.

Thus, it may be a more modest, but nonetheless scientifically
defensible, exercise to focus, rather than on seeking "truth", on
issues about which we can have certain knowledge.  The
impossibility of two incompatible cladistic characters being true
simultaneously provides one measure of logical certainty.
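To make that logical point concrete, here is a minimal sketch (my own
illustration, not taken from Meacham's paper) of the standard pairwise
test for binary characters: two such characters are incompatible, and
hence cannot both be true, exactly when all four state combinations
occur among the taxa.

def compatible(char_a, char_b):
    """Return True if two binary characters (lists of 0/1 states, one
    entry per taxon) could both evolve on some tree without homoplasy.
    For binary characters this is the classic four-state test: the
    pair is incompatible exactly when all four combinations (0,0),
    (0,1), (1,0), (1,1) occur among the taxa."""
    return len(set(zip(char_a, char_b))) < 4

# These two characters conflict: all four state combinations occur,
# so no tree can make both of them "true" at once.
print(compatible([0, 0, 1, 1, 1],
                 [0, 1, 0, 1, 1]))   # -> False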

Under Meacham's probability approach, a search for compatibility will
utterly fail to find a tree should it be presented with entirely random
data.  Since I am interested in scientific questions rather than wholly
theoretical ones, this is a property I would very much like _MY_ theory
of methodology to have.

Although the null model Meacham constructs is not the only one that
could be conceived, nor may it necessarily be the most appropriate one
under some assumptions regarding the relative rates of evolution of
specific characters, in my opinion it is not an unreasonable model for
certain classes of morphological characters with which I am familiar.
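To illustrate the spirit of such a null model (this is a toy permutation
sketch of my own; Meacham's actual test is analytical and should be
consulted in the paper cited above), one can ask how often random
reshufflings of each character's states across the taxa would produce at
least as much pairwise compatibility as the observed matrix:

import random
from itertools import combinations

def compatible(a, b):
    """Binary characters are compatible unless all four state
    combinations occur among the taxa (see the sketch above)."""
    return len(set(zip(a, b))) < 4

def count_compatible_pairs(matrix):
    """Count compatible pairs in a list of binary characters, each
    given as a list of 0/1 states, one per taxon."""
    return sum(compatible(a, b) for a, b in combinations(matrix, 2))

def compatibility_p_value(matrix, n_perm=1000, seed=0):
    """Fraction of random permutations (states of each character
    shuffled independently across taxa) that show at least as many
    compatible pairs as the observed data.  Values near 1 mean the
    observed compatibility looks like chance ("no significant
    signal"); values near 0 mean it demands an explanation."""
    rng = random.Random(seed)
    observed = count_compatible_pairs(matrix)
    hits = sum(
        count_compatible_pairs([rng.sample(c, len(c)) for c in matrix])
        >= observed
        for _ in range(n_perm)
    )
    return hits / n_perm

With entirely random characters this fraction stays well away from zero,
which is the sense in which the method "fails to find a tree"; a matrix
with genuine hierarchical signal drives it toward zero.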

I would note that the probabilities of most interest to me are "a
posteriori" ones, that result from the observed character distributions.
In the unlikely event that certain characters actually produce a
tree under such circumstances, we would have some basis to conclude that
these characters provide us with an outcome that requires an explanation
(as opposed to one that does not).  Should a sufficiently large number
of observations suggest that such characters continue to "behave in a
similar fashion", at some point, it might not be unreasonable to regard
such "a posteriori" weights as "a priori" ones. Obviously to do so will
require much work to be done, but such a decision would be made on
empirical evidence rather than for theoretical reasons.  However, at
this point, I am quite willing to admit that such a supposition is
largely conjecture, because not enough observations have been made to
determine if such an approach would work in general.

>Huh? Sounds like all the wonderful Popperian arguments focussed on
>the wrong level.
...
>Except that "theories of methodology" dont predict anything, it is
>the hypotheses generated in those methodologies which make
>predictions that may or may not prove informative.

We simply have different conceptions of what the issues are.  From my
perspective, all assumptions, including those that are inherent in
our methodologies must be subject to test when a test is constructed.
A close reading of Popper will, I believe, lead to an understanding
as to why Popper chose to address this very issue in the context of
Heisenberg's uncertainty principle.  Our observations (data points)
themselves can only be interpreted in the context of how our hypotheses
are constructed and within the limits our methods (and the ideas
inherent in them) permit.  It is for this reason that science spends so
much effort on the notion of measurement (see Klein, 1988.  The
Science of Measurement, Dover;  Wise, 1995.  The Values of Precision.
Princeton U. Press), as well as with certain key notions such as
"homology" and "character".

In fairness to other users of Taxacom, I think I may have already
clogged the arteries of the internet with enough of my potentially
arteriosclerotic views on this subject and will not use this forum to
discuss this topic further, unless a separate "sublist" can be created
to deal with such specialty topics; perhaps such differences of opinion
are best carried out in print (with the help of reviewers and editors).

--
_____________________________________________________________________
Stuart G. Poss                       E-mail: sgposs at whale.st.usm.edu
Senior Ichthyologist & Curator       Tel: (601)872-4238
Gulf Coast Research Laboratory       FAX: (601)872-4204
P.O. Box 7000
Ocean Springs, MS  39566-7000
_____________________________________________________________________



