Topics:
- Introduction
  - Statistical models
  - Likelihood function
  - Sufficient statistic
  - Exponential family of distributions
- Parameter Estimation
  - Maximum likelihood estimation
  - The method of moments
  - Criteria for estimators, mean squared error
  - Unbiased estimators
  - Fisher information
  - Cramér-Rao inequality
  - Rao-Blackwell theorem
- Confidence Intervals
- Large-Sample Theory
  - Convergence in mean and in probability
  - Consistency of estimators
  - Asymptotic normality
  - Asymptotic distribution of maximum likelihood estimators
- Hypothesis Testing
  - Introduction, basic concepts
  - Simple hypotheses, Neyman-Pearson lemma
  - Composite hypotheses, uniformly most powerful tests
  - Statistical inference for normal samples
  - One- and two-sample t-tests
  - Chi-squared test for variance
  - Comparison of variances (F-test)
  - Hypothesis testing and confidence intervals
  - Tests for goodness of fit and independence
  - Sequential probability ratio test (Wald)
- Bayesian Inference
  - Parameters as random variables
  - Bayes' theorem, prior and posterior distributions
  - Bayes estimation
  - Bayes choice between hypotheses
Literature
- Bickel, P.J. and Doksum, K.A. Mathematical Statistics
- Hogg, R.V. and Craig, A.T. Introduction to Mathematical Statistics
- Larsen, R.J. and Marx, M.L. An Introduction to Mathematical Statistics and its Applications
- Lindgren, B.W. Statistical Theory
- Samuel-Cohen, E. Statistical Theory (in Hebrew)
(Derived from http://www.math.tau.ac.il/~isaco/TheoStat.html)