Statistical Inference

Statistical inference is the process of drawing conclusions about populations or scientific truths from data. There are many modes of performing inference, including statistical modeling, data-oriented strategies, and explicit use of designs and randomization in analyses. Furthermore, there are broad theories (frequentist, Bayesian, likelihood, design-based) and numerous complexities (missing data, observed and unobserved confounding, biases) involved in performing inference.

Statistical inference makes propositions about a population, using data drawn from that population with some form of sampling. Given a hypothesis about a population about which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model.
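
As a rough illustration of this two-step recipe, the Python sketch below posits a Normal model for a simple random sample and deduces a proposition (the maximum-likelihood estimate of the population mean) from it. The sample is simulated here purely for illustration and stands in for data actually drawn from the population of interest.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for a simple random sample drawn from the population of interest.
    sample = rng.normal(loc=10.0, scale=2.0, size=50)

    # Step 1: select a statistical model of the process that generates the data --
    # here, independent and identically distributed Normal observations.
    # Step 2: deduce a proposition from the model and the data -- under this model
    # the sample mean is the maximum-likelihood estimate of the population mean.
    mu_hat = sample.mean()
    sigma_hat = sample.std(ddof=1)
    print(f"Point estimate of the population mean: {mu_hat:.3f}")
    print(f"Point estimate of the population SD:   {sigma_hat:.3f}")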

The conclusion of a statistical inference is a statistical proposition. Some common forms of statistical proposition are the following (several are illustrated in the sketch after this list):

  • a point estimate, i.e. a particular value that best approximates some parameter of interest;
  • an interval estimate, e.g. a confidence interval (or set estimate), i.e. an interval constructed using a dataset drawn from a population so that, under repeated sampling of such datasets, such intervals would contain the true parameter value with probability at the stated confidence level;
  • a credible interval, i.e. a set of values containing, for example, 95% of posterior belief;
  • rejection of a hypothesis;
  • clustering or classification of data points into groups.
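
Continuing the simulated sample from the sketch above, the following Python sketch illustrates several of these proposition forms: a point estimate, a t-based confidence interval, a credible interval, and rejection of a hypothesis via a one-sample t-test. The conjugate Normal(0, 10²) prior and the treatment of the standard deviation as known are illustrative assumptions, not part of the text.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=10.0, scale=2.0, size=50)
    n, xbar, s = len(sample), sample.mean(), sample.std(ddof=1)

    # Point estimate: the sample mean approximates the population mean.
    print(f"Point estimate: {xbar:.3f}")

    # Interval estimate: a 95% t-based confidence interval for the mean.
    ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=xbar, scale=s / np.sqrt(n))
    print(f"95% confidence interval: ({ci_low:.3f}, {ci_high:.3f})")

    # Credible interval: 95% posterior interval for the mean under a conjugate
    # Normal(0, 10^2) prior, treating the SD as known and equal to the sample SD
    # (both are simplifying assumptions for illustration).
    prior_mean, prior_sd = 0.0, 10.0
    post_var = 1.0 / (1.0 / prior_sd**2 + n / s**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + n * xbar / s**2)
    cred = stats.norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))
    print(f"95% credible interval:   ({cred[0]:.3f}, {cred[1]:.3f})")

    # Rejection of a hypothesis: one-sample t-test of H0: population mean = 9.
    t_stat, p_value = stats.ttest_1samp(sample, popmean=9.0)
    decision = "reject" if p_value < 0.05 else "fail to reject"
    print(f"t = {t_stat:.3f}, p = {p_value:.4f} -> {decision} H0 at the 5% level")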
