| Preface | p. xi |
| Introduction | p. 1 |
| Focus and Overview of Topics | |
| Some Basic Descriptive Statistics | |
| Summation Notation | |
| t Test for Independent Samples | |
| t Test for Dependent Samples | |
| Outliers | |
| SPSS and SAS Statistical Packages | |
| SPSS for Windows, Release 12.0 | |
| Data Files | |
| Data Entry | |
| Editing a Dataset | |
| Splitting and Merging Files | |
| Two Ways of Running Analyses on SPSS | |
| SPSS Output Navigator | |
| SAS and SPSS Output for Correlations, Descriptives, and t Tests | |
| Data Sets on Compact Disk | |
| Obtaining the Mean and Variance on the TI-30Xa Calculator | |
| One Way Analysis of Variance | p. 45 |
| Introduction | |
| Rationale for ANOVA | |
| Numerical Example | |
| Expected Mean Squares | |
| MS_w and MS_b as Variances | |
| A Linear Model for the Data | |
| Assumptions in ANOVA | |
| The Independence Assumption | |
| ANOVA on SPSS and SAS | |
| Post Hoc Procedures | |
| Tukey Procedure | |
| The Scheffé Procedure | |
| Heterogeneous Variances and Unequal Group Sizes | |
| Measures of Association (Variance Accounted For) | |
| Planned Comparisons | |
| Test Statistic for Planned Comparisons | |
| Planned Comparisons on SPSS and SAS | |
| The Effect of an Outlier on an ANOVA | |
| Multivariate Analysis of Variance | |
| Summary | |
| Appendix | |
| Power Analysis | p. 105 |
| Introduction | |
| t Test for Independent Samples | |
| A Priori and Post Hoc Estimation of Power | |
| Estimation of Power for One Way Analysis of Variance | |
| A Priori Estimation of Subjects Needed for a Given Power | |
| Ways of Improving Power | |
| Power Estimation on SPSS MANOVA | |
| Summary | |
| Factorial Analysis of Variance | p. 123 |
| Introduction | |
| Numerical Calculations for Two Way ANOVA | |
| Balanced and Unbalanced Designs | |
| Higher Order Designs | |
| A Comprehensive Computer Example Using Real Data | |
| Power Analysis | |
| Fixed and Random Factors | |
| Summary | |
| Doing a Balanced Two Way ANOVA With a Calculator | |
| Repeated Measures Analysis | p. 181 |
| Introduction | |
| Advantages and Disadvantages of Repeated Measures Designs | |
| Single Group Repeated Measures | |
| Completely Randomized Design | |
| Univariate Repeated Measures Analysis | |
| Assumptions in Repeated Measures Analysis | |
| Should We Use the Univariate or Multivariate Approach? | |
| Computer Analysis on SAS and SPSS for Example | |
| Post Hoc Procedures in Repeated Measures Analysis | |
| One Between and One Within Factor: A Trend Analysis | |
| Post Hoc Procedures for the One Between and One Within Design | |
| One Between and Two Within Factors | |
| Totally Within Designs | |
| Planned Comparisons in Repeated Measures Designs | |
| Summary | |
| Simple and Multiple Regression | p. 219 |
| Simple Regression | |
| Assumptions for the Errors | |
| Influential Data Points | |
| Multiple Regression | |
| Breakdown of Sum of Squares in Regression and F Test for Multiple Correlation | |
| Relationship of Simple Correlations to Multiple Correlation | |
| Multicollinearity | |
| Model Selection | |
| Two Computer Examples | |
| Checking Assumptions for the Regression Model | |
| Model Validation | |
| Importance of the Order of Predictors in Regression Analysis | |
| Other Important Issues | |
| Outliers and Influential Data Points | |
| Further Discussion of the Two Computer Examples | |
| Sample Size Determination for a Reliable Prediction Equation | |
| ANOVA as a Special Case of Regression Analysis | |
| Summary of Important Points | |
| The PRESS Statistic | |
| Analysis of Covariance | p. 285 |
| Introduction | |
| Purposes of Covariance | |
| Adjustment of Posttest Means | |
| Reduction of Error Variance | |
| Choice of Covariates | |
| Numerical Example | |
| Assumptions in Analysis of Covariance | |
| Use of ANCOVA with Intact Groups | |
| Computer Example for ANCOVA | |
| Alternative Analyses | |
| An Alternative to the Johnson-Neyman Technique | |
| Use of Several Covariates | |
| Computer Example with Two Covariates | |
| Summary | |
| Hierarchical Linear Modeling | p. 321 |
| Introduction | |
| Problems Using Single-Level Analyses of Multilevel Data | |
| Formulation of the Multilevel Model | |
| Two-Level Model: General Formulation | |
| HLM6 Software | |
| Two-Level Example: Student and Classroom Data | |
| HLM Software Output | |
| Adding Level One Predictors to the HLM | |
| Addition of a Level Two Predictor to a Two Level HLM | |
| Evaluating the Efficacy of a Treatment | |
| Final Comments on HLM | |
| Data Sets | p. 365 |
| Clinical Data | |
| Alcoholics Data | |
| Sesame Street Data | |
| Headache Data | |
| Cartoon Data | |
| Attitude Data | |
| National Academy of Sciences Data | |
| Agresti Home Sales Data | |
| Statistical Tables | p. 399 |
| Critical Values for F | |
| Percentile Points of Studentized Range Statistic | |
| Critical Values for Dunnett's Test | |
| Critical Values for F_max Statistic | |
| Critical Values for Bryant-Paulson Procedure | |
| Power Tables | p. 413 |
| Power of F Test at α = .05, u = 1 | |
| Power of F Test at α = .05, u = 2 | |
| Power of F Test at α = .05, u = 3 | |
| Power of F Test at α = .05, u = 4 | |
| Power of F Test at α = .10, u = 1 | |
| Power of F Test at α = .10, u = 2 | |
| Power of F Test at α = .10, u = 3 | |
| Power of F Test at α = .10, u = 4 | |
| References | p. 423 |
| Answers to Selected Exercises | p. 431 |
| Author Index | p. 453 |
| Subject Index | p. 457 |