Significance Testing
Colin Kaneen
ckaneen at HOME.COM
Thu Jun 14 15:51:37 CDT 2001
Gabriel:
There are some reasons for using the methods you are talking about (keep in
mind while reading this my background only goes as far as second year
undergrad statistics):
>Why do we use significance (P values like: P>.01) ?
>
>What does it really tell us that using confidence intervals don't?
The p-value is used in hypothesis testing. It tells us "the smallest
significance level at which the null hypothesis can be rejected" (Weiss,
1995, _Introductory Statistics_, 4th ed.). A confidence interval gives us
the end-point values for a given significance test. In science we tend to
use alpha = .05, rejecting the null hypothesis when p < .05.
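To make the connection concrete, here is a minimal sketch (assuming a recent
SciPy is available; the data values are invented for illustration) showing
how a one-sample t-test yields both a p-value and the matching confidence
interval:

```python
# Sketch: p-value and confidence interval for a one-sample t-test.
# The data below are made-up illustration values, not from any study.
from scipy import stats

data = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7, 5.4, 5.0]

# Test H0: population mean = 5.0
result = stats.ttest_1samp(data, popmean=5.0)
print(f"p-value: {result.pvalue:.3f}")

# 95% confidence interval for the mean -- the "end-point values"
# of the corresponding significance test at alpha = .05
ci = result.confidence_interval(confidence_level=0.95)
print(f"95% CI: ({ci.low:.3f}, {ci.high:.3f})")
```

Because the p-value here exceeds .05, the 95% interval contains the null
value 5.0 -- the two ways of reporting the test agree.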
>
>Why, in biological papers mainly having to do with experiments, don't we
>incorporate a consideration for "power", i.e. calculate for sample size and
>parameter range?
Sometimes power is considered, but it may depend on the type of test
performed. For a z- or t-test, power equals 1 - beta, where beta is the
probability of a Type II error (failing to reject a false null hypothesis);
beta depends on the true parameter value, the sample size, and alpha. In
other types of tests, however, such as chi-squares, power is not calculated
this way. As I understand, the calculation of power can be quite complex
and is beyond the scope of my knowledge.
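For the simple z-test case the calculation is tractable. A minimal sketch
(assuming SciPy; all of the numbers below are invented for illustration) of
power for a one-sided, one-sample z-test with known sigma:

```python
# Sketch: power = 1 - beta for a one-sided one-sample z-test.
# beta is the probability of a Type II error; it depends on the
# effect size, sigma, n, and alpha -- not on alpha alone.
from scipy.stats import norm

alpha = 0.05            # significance level
mu0, mu1 = 10.0, 11.0   # null and (hypothetical) true means
sigma, n = 2.0, 25      # known population sd and sample size

z_crit = norm.ppf(1 - alpha)                # rejection cutoff in z units
effect = (mu1 - mu0) / (sigma / n ** 0.5)   # standardized true shift
beta = norm.cdf(z_crit - effect)            # P(fail to reject | H1 true)
power = 1 - beta
print(f"beta = {beta:.3f}, power = {power:.3f}")
```

Running the same calculation backwards (fixing the desired power and solving
for n) is exactly the sample-size planning the original question asks about.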
>
>At what point is a sample size "good"?
>
I don't know what you mean by "good." If you mean how many points are
needed for a sample to be useful, that depends on what is being sampled.
A sample of about 30 is normally large enough, even from a non-normal
population, to let us use normal-theory statistical tests rather than
resorting to non-parametric tests such as the sign or Wilcoxon
signed-rank tests.
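The n-around-30 rule of thumb rests on the central limit theorem, and a
quick simulation makes it visible. A sketch (assuming NumPy; the choice of
an exponential population is arbitrary, just something strongly skewed):

```python
# Sketch: with n ~ 30, sample means from a skewed population are
# already close to normally distributed, which is what justifies
# using normal-theory tests at that sample size.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 10_000

# Strongly right-skewed population: exponential with mean 1
samples = rng.exponential(scale=1.0, size=(reps, n))
means = samples.mean(axis=1)

# By the central limit theorem the sampling distribution of the
# mean is approximately normal with mu = 1 and sd = 1 / sqrt(30)
print(f"mean of sample means: {means.mean():.3f}  (theory: 1.000)")
print(f"sd of sample means:   {means.std():.3f}  (theory: {1 / np.sqrt(n):.3f})")
```

If the population is very heavy-tailed or the sample much smaller, the
approximation breaks down and the non-parametric tests mentioned above are
the safer choice.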
Does this help?
Colin Kaneen
More information about the Taxacom mailing list