Schaum's Outline of Elements of Statistics II: Inferential Statistics - Ruth Bernstein

Schaum's Outline of Elements of Statistics II

Inferential Statistics

Paperback Published: 2nd September 1999
ISBN: 9780071346375
Number Of Pages: 350


This book is the second half of a comprehensive outline of statistics, presented in a user-friendly question-and-answer format. "Elements of Statistics II" will benefit students in beginning statistics courses and is geared to students who need to know statistics in their specific field of study, from business to the social sciences. It continues the introduction to general statistics begun in "Elements of Statistics I: Descriptive Statistics and Probability", now with the focus on inferential statistics, and provides an integrated, step-by-step presentation with problems cross-referenced throughout.

Discrete Probability Distributions p. 1
Discrete Probability Distributions and Probability Mass Functions p. 1
Bernoulli Experiments and Trials p. 1
Binomial Random Variables, Experiments, and Probability Functions p. 2
The Binomial Coefficient p. 3
The Binomial Probability Function p. 4
Mean, Variance, and Standard Deviation of the Binomial Probability Distribution p. 5
The Binomial Expansion and the Binomial Theorem p. 6
Pascal's Triangle and the Binomial Coefficient p. 8
The Family of Binomial Distributions p. 8
The Cumulative Binomial Probability Table p. 10
Lot-Acceptance Sampling p. 12
Consumer's Risk and Producer's Risk p. 13
Multivariate Probability Distributions and Joint Probability Distributions p. 14
The Multinomial Experiment p. 16
The Multinomial Coefficient p. 16
The Multinomial Probability Function p. 17
The Family of Multinomial Probability Distributions p. 18
The Means of the Multinomial Probability Distribution p. 19
The Multinomial Expansion and the Multinomial Theorem p. 19
The Hypergeometric Experiment p. 20
The Hypergeometric Probability Function p. 20
The Family of Hypergeometric Probability Distributions p. 22
The Mean, Variance, and Standard Deviation of the Hypergeometric Probability Distribution p. 23
The Generalization of the Hypergeometric Probability Distribution p. 24
The Binomial and Multinomial Approximations to the Hypergeometric Distribution p. 24
Poisson Processes, Random Variables, and Experiments p. 25
The Poisson Probability Function p. 26
The Family of Poisson Probability Distributions p. 27
The Mean, Variance, and Standard Deviation of the Poisson Probability Distribution p. 28
The Cumulative Poisson Probability Table p. 29
The Poisson Distribution as an Approximation to the Binomial Distribution p. 30
The Normal Distribution and Other Continuous Probability Distributions p. 46
Continuous Probability Distributions p. 46
The Normal Probability Distributions and the Normal Probability Density Function p. 48
The Family of Normal Probability Distributions p. 49
The Normal Distribution: Relationship between the Mean (μ), the Median, and the Mode p. 50
Kurtosis p. 50
The Standard Normal Distribution p. 51
Relationship Between the Standard Normal Distribution and the Standard Normal Variable p. 52
Table of Areas in the Standard Normal Distribution p. 53
Finding Probabilities Within any Normal Distribution by Applying the Z Transformation p. 55
One-Tailed Probabilities p. 56
Two-Tailed Probabilities p. 58
The Normal Approximation to the Binomial Distribution p. 59
The Normal Approximation to the Poisson Distribution p. 61
The Discrete Uniform Probability Distribution p. 62
The Continuous Uniform Probability Distribution p. 64
The Exponential Probability Distribution p. 65
Relationship between the Exponential Distribution and the Poisson Distribution p. 67
Sampling Distributions p. 89
Simple Random Sampling Revisited p. 89
Independent Random Variables p. 89
Mathematical and Nonmathematical Definitions of Simple Random Sampling p. 90
Assumptions of the Sampling Technique p. 92
The Random Variable X̄ p. 92
Theoretical and Empirical Sampling Distributions of the Mean p. 93
The Mean of the Sampling Distribution of the Mean p. 98
The Accuracy of an Estimator p. 99
The Variance of the Sampling Distribution of the Mean: Infinite Population or Sampling with Replacement p. 99
The Variance of the Sampling Distribution of the Mean: Finite Population Sampled without Replacement p. 100
The Standard Error of the Mean p. 101
The Precision of an Estimator p. 102
Determining Probabilities with a Discrete Sampling Distribution of the Mean p. 103
Determining Probabilities with a Normally Distributed Sampling Distribution of the Mean p. 103
The Central Limit Theorem: Sampling from a Finite Population with Replacement p. 104
The Central Limit Theorem: Sampling from an Infinite Population p. 108
The Central Limit Theorem: Sampling from a Finite Population without Replacement p. 108
How Large is "Sufficiently Large?" p. 108
The Sampling Distribution of the Sample Sum p. 109
Applying the Central Limit Theorem to the Sampling Distribution of the Sample Sum p. 110
Sampling from a Binomial Population p. 111
Sampling Distribution of the Number of Successes p. 113
Sampling Distribution of the Proportion p. 113
Applying the Central Limit Theorem to the Sampling Distribution of the Number of Successes p. 114
Applying the Central Limit Theorem to the Sampling Distribution of the Proportion p. 115
Determining Probabilities with a Normal Approximation to the Sampling Distribution of the Proportion p. 116
One-Sample Estimation of the Population Mean p. 134
Estimation p. 134
Criteria for Selecting the Optimal Estimator p. 135
The Estimated Standard Error of the Mean S_x̄ p. 136
Point Estimates p. 136
Reporting and Evaluating the Point Estimate p. 137
Relationship between Point Estimates and Interval Estimates p. 138
Deriving P(x_(1-α/2) ≤ X̄ ≤ x_(α/2)) = P(-z_(α/2) ≤ Z ≤ z_(α/2)) = 1 - α p. 138
Deriving P(X̄ - z_(α/2) σ_x̄ ≤ μ ≤ X̄ + z_(α/2) σ_x̄) = 1 - α p. 139
Confidence Interval for the Population Mean μ: Known Standard Deviation σ, Normally Distributed Population p. 141
Presenting Confidence Limits p. 142
Precision of the Confidence Interval p. 142
Determining Sample Size when the Standard Deviation is Known p. 144
Confidence Interval for the Population Mean μ: Known Standard Deviation σ, Large Sample (n ≥ 30) from any Population Distribution p. 145
Determining Confidence Intervals for the Population Mean μ when the Population Standard Deviation σ is Unknown p. 146
The t Distribution p. 146
Relationship between the t Distribution and the Standard Normal Distribution p. 148
Degrees of Freedom p. 148
The Term "Student's t Distribution" p. 149
Critical Values of the t Distribution p. 149
Table A.6: Critical Values of the t Distribution p. 151
Confidence Interval for the Population Mean μ: Standard Deviation σ Not Known, Small Sample (n < 30) from a Normally Distributed Population p. 153
Determining Sample Size: Unknown Standard Deviation, Small Sample from a Normally Distributed Population p. 155
Confidence Interval for the Population Mean μ: Standard Deviation σ Not Known, Large Sample (n ≥ 30) from a Normally Distributed Population p. 156
Confidence Interval for the Population Mean μ: Standard Deviation σ Not Known, Large Sample (n ≥ 30) from a Population that is not Normally Distributed p. 158
Confidence Interval for the Population Mean μ: Small Sample (n < 30) from a Population that is not Normally Distributed p. 158
One-Sample Estimation of the Population Variance, Standard Deviation, and Proportion p. 173
Optimal Estimators of Variance, Standard Deviation, and Proportion p. 173
The Chi-Square Statistic and the Chi-Square Distribution p. 174
Critical Values of the Chi-Square Distribution p. 175
Table A.7: Critical Values of the Chi-Square Distribution p. 177
Deriving the Confidence Interval for the Variance σ² of a Normally Distributed Population p. 178
Presenting Confidence Limits p. 179
Precision of the Confidence Interval for the Variance p. 180
Determining Sample Size Necessary to Achieve a Desired Quality-of-Estimate for the Variance p. 181
Using Normal-Approximation Techniques to Determine Confidence Intervals for the Variance p. 181
Using the Sampling Distribution of the Sample Variance to Approximate a Confidence Interval for the Population Variance p. 182
Confidence Interval for the Standard Deviation σ of a Normally Distributed Population p. 183
Using the Sampling Distribution of the Sample Standard Deviation to Approximate a Confidence Interval for the Population Standard Deviation p. 184
The Optimal Estimator for the Proportion p of a Binomial Population p. 185
Deriving the Approximate Confidence Interval for the Proportion p of a Binomial Population p. 186
Estimating the Parameter p p. 187
Deciding when n is "Sufficiently Large", p Not Known p. 188
Approximate Confidence Intervals for the Binomial Parameter p When Sampling from a Finite Population without Replacement p. 188
The Exact Confidence Interval for the Binomial Parameter p p. 189
Precision of the Approximate Confidence-Interval Estimate of the Binomial Parameter p p. 189
Determining Sample Size for the Confidence Interval of the Binomial Parameter p p. 189
Approximate Confidence Interval for the Percentage of a Binomial Population p. 191
Approximate Confidence Interval for the Total Number in a Category of a Binomial Population p. 192
The Capture-Recapture Method for Estimating Population Size N p. 192
One-Sample Hypothesis Testing p. 205
Statistical Hypothesis Testing p. 205
The Null Hypothesis and the Alternative Hypothesis p. 205
Testing the Null Hypothesis p. 206
Two-Sided Versus One-Sided Hypothesis Tests p. 207
Testing Hypotheses about the Population Mean μ: Known Standard Deviation σ, Normally Distributed Population p. 207
The P Value p. 208
Type I Error versus Type II Error p. 209
Critical Values and Critical Regions p. 210
The Level of Significance p. 212
Decision Rules for Statistical Hypothesis Tests p. 213
Selecting Statistical Hypotheses p. 214
The Probability of a Type II Error p. 214
Consumer's Risk and Producer's Risk p. 215
Why It is Not Possible to Prove the Null Hypothesis p. 216
Classical Inference Versus Bayesian Inference p. 216
Procedure for Testing the Null Hypothesis p. 217
Hypothesis Testing Using X̄ as the Test Statistic p. 218
The Power of a Test, Operating Characteristic Curves, and Power Curves p. 219
Testing Hypotheses about the Population Mean μ: Standard Deviation σ Not Known, Small Sample (n < 30) from a Normally Distributed Population p. 221
The P Value for the t Statistic p. 221
Decision Rules for Hypothesis Tests with the t Statistic p. 222
β, 1 - β, Power Curves, and OC Curves p. 223
Testing Hypotheses about the Population Mean μ: Large Sample (n ≥ 30) from any Population Distribution p. 223
Assumptions of One-Sample Parametric Hypothesis Testing p. 224
When the Assumptions are Violated p. 225
Testing Hypotheses about the Variance σ² of a Normally Distributed Population p. 226
Testing Hypotheses about the Standard Deviation σ of a Normally Distributed Population p. 227
Testing Hypotheses about the Proportion p of a Binomial Population: Large Samples p. 228
Testing Hypotheses about the Proportion p of a Binomial Population: Small Samples p. 229
Two-Sample Estimation and Hypothesis Testing p. 247
Independent Samples Versus Paired Samples p. 247
The Optimal Estimator of the Difference Between Two Population Means (μ₁ - μ₂) p. 248
The Theoretical Sampling Distribution of the Difference Between Two Means p. 248
Confidence Interval for the Difference Between Means (μ₁ - μ₂): Standard Deviations (σ₁ and σ₂) Known, Independent Samples from Normally Distributed Populations p. 249
Testing Hypotheses about the Difference Between Means (μ₁ - μ₂): Standard Deviations (σ₁ and σ₂) Known, Independent Samples from Normally Distributed Populations p. 250
The Estimated Standard Error of the Difference Between Two Means p. 252
Confidence Interval for the Difference Between Means (μ₁ - μ₂): Standard Deviations Not Known but Assumed Equal (σ₁ = σ₂), Small (n₁ < 30 and n₂ < 30) Independent Samples from Normally Distributed Populations p. 253
Testing Hypotheses about the Difference Between Means (μ₁ - μ₂): Standard Deviations Not Known but Assumed Equal (σ₁ = σ₂), Small (n₁ < 30 and n₂ < 30) Independent Samples from Normally Distributed Populations p. 254
Confidence Interval for the Difference Between Means (μ₁ - μ₂): Standard Deviations (σ₁ and σ₂) Not Known, Large (n₁ ≥ 30 and n₂ ≥ 30) Independent Samples from any Population Distributions p. 255
Testing Hypotheses about the Difference Between Means (μ₁ - μ₂): Standard Deviations (σ₁ and σ₂) Not Known, Large (n₁ ≥ 30 and n₂ ≥ 30) Independent Samples from any Population Distributions p. 256
Confidence Interval for the Difference Between Means (μ₁ - μ₂): Paired Samples p. 257
Testing Hypotheses about the Difference Between Means (μ₁ - μ₂): Paired Samples p. 260
Assumptions of Two-Sample Parametric Estimation and Hypothesis Testing about Means p. 261
When the Assumptions are Violated p. 262
Comparing Independent-Sampling and Paired-Sampling Techniques on Precision and Power p. 263
The F Statistic p. 263
The F Distribution p. 264
Critical Values of the F Distribution p. 266
Table A.8: Critical Values of the F Distribution p. 268
Confidence Interval for the Ratio of Variances (σ₁²/σ₂²): Parameters (σ₁², σ₁, μ₁ and σ₂², σ₂, μ₂) Not Known, Independent Samples from Normally Distributed Populations p. 269
Testing Hypotheses about the Ratio of Variances (σ₁²/σ₂²): Parameters (σ₁², σ₁, μ₁ and σ₂², σ₂, μ₂) Not Known, Independent Samples from Normally Distributed Populations p. 270
When to Test for Homogeneity of Variance p. 272
The Optimal Estimator of the Difference Between Proportions (p₁ - p₂): Large Independent Samples p. 273
The Theoretical Sampling Distribution of the Difference Between Two Proportions p. 273
Approximate Confidence Interval for the Difference Between Proportions from Two Binomial Populations (p₁ - p₂): Large Independent Samples p. 274
Testing Hypotheses about the Difference Between Proportions from Two Binomial Populations (p₁ - p₂): Large Independent Samples p. 276
Multisample Estimation and Hypothesis Testing p. 296
Multisample Inferences p. 296
The Analysis of Variance p. 296
ANOVA: One-Way, Two-Way, or Multiway p. 297
One-Way ANOVA: Fixed-Effects or Random-Effects p. 297
One-Way, Fixed-Effects ANOVA: The Assumptions p. 298
Equal-Samples, One-Way, Fixed-Effects ANOVA: H₀ and H₁ p. 298
Equal-Samples, One-Way, Fixed-Effects ANOVA: Organizing the Data p. 298
Equal-Samples, One-Way, Fixed-Effects ANOVA: The Basic Rationale p. 300
SST = SSA + SSW p. 301
Computational Formulas for SST and SSA p. 302
Degrees of Freedom and Mean Squares p. 302
The F Test p. 304
The ANOVA Table p. 306
Multiple Comparison Tests p. 306
Duncan's Multiple-Range Test p. 307
Confidence-Interval Calculations Following Multiple Comparisons p. 308
Testing for Homogeneity of Variance p. 309
One-Way, Fixed-Effects ANOVA: Equal or Unequal Sample Sizes p. 311
General-Procedure, One-Way, Fixed-Effects ANOVA: Organizing the Data p. 312
General-Procedure, One-Way, Fixed-Effects ANOVA: Sum of Squares p. 312
General-Procedure, One-Way, Fixed-Effects ANOVA: Degrees of Freedom and Mean Squares p. 313
General-Procedure, One-Way, Fixed-Effects ANOVA: The F Test p. 314
General-Procedure, One-Way, Fixed-Effects ANOVA: Multiple Comparisons p. 314
General-Procedure, One-Way, Fixed-Effects ANOVA: Calculating Confidence Intervals and Testing for Homogeneity of Variance p. 316
Violations of ANOVA Assumptions p. 317
Regression and Correlation p. 333
Analyzing the Relationship between Two Variables p. 333
The Simple Linear Regression Model p. 334
The Least-Squares Regression Line p. 335
The Estimator of the Variance σ²_(Y·X) p. 338
Mean and Variance of the y Intercept a and the Slope b p. 338
Confidence Intervals for the y Intercept a and the Slope b p. 339
Confidence Interval for the Variance σ²_(Y·X) p. 341
Prediction Intervals for Expected Values of Y p. 341
Testing Hypotheses about the Slope b p. 342
Comparing Simple Linear Regression Equations from Two or More Samples p. 343
Multiple Linear Regression p. 343
Simple Linear Correlation p. 344
Derivation of the Correlation Coefficient r p. 344
Confidence Intervals for the Population Correlation Coefficient ρ p. 349
Using the r Distribution to Test Hypotheses about the Population Correlation Coefficient ρ p. 350
Using the t Distribution to Test Hypotheses about ρ p. 351
Using the Z Distribution to Test the Hypothesis ρ = c p. 352
Interpreting the Sample Correlation Coefficient r p. 353
Multiple Correlation and Partial Correlation p. 354
Nonparametric Techniques p. 379
Nonparametric vs. Parametric Techniques p. 379
Chi-Square Tests p. 379
Chi-Square Test for Goodness-of-Fit p. 380
Chi-Square Test for Independence: Contingency Table Analysis p. 381
Chi-Square Test for Homogeneity Among k Binomial Proportions p. 383
Rank Order Tests p. 385
One-Sample Tests: The Wilcoxon Signed-Rank Test p. 385
Two-Sample Tests: The Wilcoxon Signed-Rank Test for Dependent Samples p. 387
Two-Sample Tests: The Mann-Whitney U Test for Independent Samples p. 389
Multisample Tests: The Kruskal-Wallis H Test for k Independent Samples p. 392
The Spearman Test of Rank Correlation p. 394
Appendix p. 424
Cumulative Binomial Probabilities p. 424
Cumulative Poisson Probabilities p. 426
Areas of the Standard Normal Distribution p. 427
Critical Values of the t Distribution p. 428
Critical Values of the Chi-Square Distribution p. 429
Critical Values of the F Distribution p. 430
Least Significant Studentized Ranges r_p p. 436
Transformation of r to z_r p. 437
Critical Values of the Pearson Product-Moment Correlation Coefficient r p. 439
Critical Values of the Wilcoxon W p. 440
Critical Values of the Mann-Whitney U p. 441
Critical Values of the Kruskal-Wallis H p. 442
Critical Values of the Spearman r_S p. 443
Index p. 444

ISBN: 9780071346375
ISBN-10: 0071346376
Series: Schaum's Outlines
Audience: Tertiary; University or College
Format: Paperback
Language: English
Number Of Pages: 350
Published: 2nd September 1999
Publisher: McGraw-Hill Education - Europe
Country of Publication: US
Dimensions (cm): 27.5 x 20.8 x 2.4
Weight (kg): 0.84
Edition Number: 1
