# Statistical hypothesis testing

Statistical hypothesis testing is a branch of statistics in which we judge the validity of a statistical hypothesis by carrying out a statistical study of the population of interest: a statistical test is performed on a random sample drawn from that population, and a statistical decision is issued that accepts or rejects the hypothesis.
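The procedure just described — draw a random sample, compute a test statistic, and issue a decision — can be sketched as follows for a one-sample z-test. The data and parameter values below are illustrative only, not taken from the article.

```python
import math

# Null hypothesis H0: the population mean equals mu0.
mu0 = 100.0
sigma = 15.0   # assume the population standard deviation is known
alpha = 0.05   # significance level (probability of a Type I error)

# A random sample drawn from the population (fixed here for reproducibility).
sample = [102.1, 98.4, 105.3, 110.2, 97.8, 104.6, 101.9, 108.3, 95.7, 103.5]
n = len(sample)

# One-sample z statistic: z = (x-bar - mu0) / (sigma / sqrt(n)).
xbar = sum(sample) / n
z = (xbar - mu0) / (sigma / math.sqrt(n))

# Two-sided decision rule: reject H0 when |z| exceeds the critical value
# z_{alpha/2}, which is about 1.96 for alpha = 0.05.
reject = abs(z) > 1.96
print(f"z = {z:.3f}, reject H0: {reject}")
```

Here the sample mean (102.78) lies well within 1.96 standard errors of the hypothesized mean, so the null hypothesis is not rejected at the 5% level.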


## Common test statistics

In the table below, the symbols used are defined at the bottom of the table. Many other tests can be found in other articles.

| Name | Formula | Assumptions or notes |
| --- | --- | --- |
| One-sample z-test | $z=\frac{\overline{x}-\mu_0}{\sigma/\sqrt{n}}$ | (Normal population or n > 30) and σ known. z is the distance from the mean in relation to the standard deviation of the mean. For non-normal distributions it is possible to calculate a minimum proportion of a population that falls within k standard deviations for any k (see: Chebyshev's inequality). |
| Two-sample z-test | $z=\frac{(\overline{x}_1-\overline{x}_2)-d_0}{\sqrt{\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}}}$ | Normal populations, independent observations, and σ₁ and σ₂ known |
| Two-sample pooled t-test, equal variances* | $t=\frac{(\overline{x}_1-\overline{x}_2)-d_0}{s_p\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}$, where $s_p^2=\frac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}$ and $df=n_1+n_2-2$ [1] | (Normal populations or n₁ + n₂ > 40), independent observations, σ₁ = σ₂, and σ₁ and σ₂ unknown |
| Two-sample unpooled t-test, unequal variances* | $t=\frac{(\overline{x}_1-\overline{x}_2)-d_0}{\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}}$, with $df=\frac{\left(\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}\right)^2}{\frac{(s_1^2/n_1)^2}{n_1-1}+\frac{(s_2^2/n_2)^2}{n_2-1}}$ [1] | (Normal populations or n₁ + n₂ > 40), independent observations, σ₁ ≠ σ₂, and σ₁ and σ₂ unknown |
| One-proportion z-test | $z=\frac{\hat{p}-p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}$ | n·p₀ > 10 and n(1 − p₀) > 10, and the data are a simple random sample (SRS); see notes. |
| Two-proportion z-test, pooled for $d_0=0$ | $z=\frac{(\hat{p}_1-\hat{p}_2)-d_0}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}$, where $\hat{p}=\frac{x_1+x_2}{n_1+n_2}$ | n₁p₁ > 5, n₁(1 − p₁) > 5, n₂p₂ > 5, n₂(1 − p₂) > 5, and independent observations; see notes. |
| Two-proportion z-test, unpooled for $\lvert d_0\rvert>0$ | $z=\frac{(\hat{p}_1-\hat{p}_2)-d_0}{\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}+\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}}$ | n₁p₁ > 5, n₁(1 − p₁) > 5, n₂p₂ > 5, n₂(1 − p₂) > 5, and independent observations; see notes. |
| One-sample chi-square test for variance | $\chi^2=\frac{(n-1)s^2}{\sigma_0^2}$ | Normal population |
| Chi-squared goodness-of-fit test | $\chi^2=\sum_i\frac{(O_i-E_i)^2}{E_i}$, $df=k-1$ | One of the following: all expected counts are at least 5; or all expected counts are > 1 and no more than 20% of expected counts are less than 5 |
| *Two-sample F test for equality of variances | $F=\frac{s_1^2}{s_2^2}$ | Arrange so that $s_1^2\ge s_2^2$ and reject H₀ for $F>F(\alpha/2,\,n_1-1,\,n_2-1)$ [2] |
In general, the subscript 0 indicates a value taken from the null hypothesis, H₀, which should be used as much as possible in constructing its test statistic. Definitions of other symbols:

- $\alpha$ = the probability of Type I error (rejecting a null hypothesis when it is in fact true)
- $n$ = sample size
- $n_1$ = sample 1 size
- $n_2$ = sample 2 size
- $\overline{x}$ = sample mean
- $\mu_0$ = hypothesized population mean
- $\mu_1$ = population 1 mean
- $\mu_2$ = population 2 mean
- $\sigma$ = population standard deviation
- $\sigma^2$ = population variance
- $s$ = sample standard deviation
- $s^2$ = sample variance
- $s_1$ = sample 1 standard deviation
- $s_2$ = sample 2 standard deviation
- $t$ = t statistic
- $df$ = degrees of freedom
- $\overline{d}$ = sample mean of differences
- $d_0$ = hypothesized population mean difference
- $s_d$ = standard deviation of differences
- $\hat{p}$ = x/n = sample proportion, unless specified otherwise
- $p_0$ = hypothesized population proportion
- $p_1$ = proportion 1
- $p_2$ = proportion 2
- $d_p$ = hypothesized difference in proportion
- $\min\{n_1,n_2\}$ = minimum of n₁ and n₂
- $x_1 = n_1 p_1$, $x_2 = n_2 p_2$
- $\chi^2$ = chi-squared statistic
- $F$ = F statistic
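As a worked illustration of the two-sample unpooled (Welch) t-test and its approximate degrees of freedom from the table above, the following sketch evaluates both formulas directly on made-up data:

```python
import math

# Two illustrative samples (made up for this sketch).
x1 = [20.4, 24.2, 15.4, 21.4, 20.2, 18.5, 21.5]
x2 = [20.2, 16.9, 18.5, 17.3, 20.5]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    # Unbiased sample variance s^2 with divisor (n - 1).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(x1), len(x2)
s1sq, s2sq = sample_var(x1), sample_var(x2)
d0 = 0.0  # hypothesized difference in means under H0

# t statistic with the unpooled standard error.
se = math.sqrt(s1sq / n1 + s2sq / n2)
t = ((mean(x1) - mean(x2)) - d0) / se

# Welch-Satterthwaite approximation for the degrees of freedom.
df = (s1sq / n1 + s2sq / n2) ** 2 / (
    (s1sq / n1) ** 2 / (n1 - 1) + (s2sq / n2) ** 2 / (n2 - 1)
)

print(f"t = {t:.3f}, df = {df:.2f}")
```

Note that the approximate df always lies between min(n₁, n₂) − 1 and n₁ + n₂ − 2, so it never exceeds the pooled test's degrees of freedom.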
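Similarly, the pooled two-proportion z-test (the case $d_0 = 0$) can be sketched on made-up counts, including a check of the sample-size conditions listed in the table:

```python
import math

# Illustrative success counts and sample sizes (not from the article).
x1, n1 = 60, 200   # successes / sample size in group 1
x2, n2 = 45, 180   # successes / sample size in group 2

p1_hat = x1 / n1
p2_hat = x2 / n2
p_hat = (x1 + x2) / (n1 + n2)   # pooled proportion under H0: p1 = p2

z = (p1_hat - p2_hat) / math.sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))

# Sample-size conditions from the table: n*p > 5 and n*(1 - p) > 5 for each group.
ok = all(v > 5 for v in (n1 * p1_hat, n1 * (1 - p1_hat),
                         n2 * p2_hat, n2 * (1 - p2_hat)))
print(f"z = {z:.3f}, conditions met: {ok}")
```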

## See also

For a reconstruction and defense of Neyman–Pearson testing, see Mayo and Spanos, (2006), "Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction," GJPS, 57: 323–57.

## Footnotes

1. ^ a b NIST handbook: Two-Sample t-Test for Equal Means
2. ^ NIST handbook: F-Test for Equality of Two Standard Deviations (Testing standard deviations the same as testing variances)

## External links

• Wilson González, Georgina (September 10, 1997). "Hypothesis Testing". Environmental Sampling & Monitoring Primer. Virginia Tech.
• Bayesian critique of classical hypothesis testing
• Critique of classical hypothesis testing highlighting long-standing qualms of statisticians
• Dallal GE (2007) The Little Handbook of Statistical Practice (A good tutorial)
• References for arguments for and against hypothesis testing
• Statistical Tests Overview: How to choose the correct statistical test
• An Interactive Online Tool to Encourage Understanding Hypothesis Testing