The purpose of collecting data is to learn from it; and we learn by asking questions. In the data world, questions come in the form of estimators that compute some property of the data. So what are the right questions to ask?

Say we have an estimator based on some (finite) data set. The most important criterion in judging whether this estimator is ‘good’ is its behaviour as we increase the amount of data. Let us explain why. For anyone to take an estimator seriously we have to trust it. If adding one data point radically changes our opinion then we stop listening; after all, add another data point and we may change our opinion back. With this level of uncertainty we can’t be expected to make decisions. Convergence in the large-data limit is a measure of stability. A lack of convergence indicates ill-posedness. In other words, if we don’t converge then we are asking the wrong question: it probes something unknowable, which no amount of data will tell us, because the information is not in the data.
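As a minimal sketch of this contrast (the distributions and sample sizes here are illustrative choices, not from the text): the sample mean of Gaussian data settles down as data accumulates, whereas the sample mean of Cauchy data keeps swinging no matter how much data we have, because a Cauchy distribution has no mean; the information is not in the data.

```python
import math
import random

random.seed(0)

def running_means(samples):
    """Cumulative means after each new data point."""
    out, total = [], 0.0
    for i, x in enumerate(samples, start=1):
        total += x
        out.append(total / i)
    return out

# A well-posed question: the mean of a Gaussian. More data stabilizes the answer.
gauss = [random.gauss(0.0, 1.0) for _ in range(100_000)]
gauss_means = running_means(gauss)

# An ill-posed question: the 'mean' of a Cauchy distribution, which does not exist.
# One more data point can still radically change the estimate, at any sample size,
# so the running means below never settle.
cauchy = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(100_000)]
cauchy_means = running_means(cauchy)
```

Plotting the two sequences of running means makes the difference stark: one flattens out, the other jumps forever.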

Studying the asymptotics is all about making sure we are asking the right question. We know why; now we need to know how. First we should consider what we want: (1) our estimator to converge, and (2) for it to converge to something meaningful. Many estimators can be written as solutions to minimization problems, for example maximum likelihood, maximum a posteriori, and least squares. We concentrate on estimators of this form.
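To make "estimators as minimization problems" concrete, here is a small sketch on a toy data set (the data and the grid search are illustrative): the least-squares objective is minimized by the sample mean, while the absolute-deviation objective is minimized by a sample median, so different losses encode different questions.

```python
data = [1.0, 2.0, 4.0, 7.0]

def least_squares_loss(theta, data):
    # sum_i (x_i - theta)^2: minimized by the sample mean
    return sum((x - theta) ** 2 for x in data)

def absolute_loss(theta, data):
    # sum_i |x_i - theta|: minimized by a sample median
    return sum(abs(x - theta) for x in data)

# A crude grid search over candidate values of theta
grid = [i / 1000 for i in range(0, 10_001)]
best_l2 = min(grid, key=lambda t: least_squares_loss(t, data))
best_l1 = min(grid, key=lambda t: absolute_loss(t, data))

mean = sum(data) / len(data)  # 3.5, the least-squares minimizer
# best_l1 lies in [2, 4], where the absolute loss for this data set is flat
```

For Gaussian noise, maximizing the likelihood is equivalent to minimizing the least-squares objective, which is why maximum likelihood appears on the same list.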

Imagine that we have a sequence of minimization problems, numbered according to the number of data points. The law of large numbers motivates a ‘limiting problem’, which we can understand as having an infinite number of data points. So what is the natural notion of convergence for this sequence of minimization problems? Well, let’s recall what is motivating us: the convergence of minimizers!
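The sequence of problems and its limit can be sketched for the simplest case, estimating a mean with squared loss (the Gaussian source and its parameters here are illustrative assumptions): the n-th problem minimizes the empirical objective F_n(theta) = (1/n) sum_i (x_i - theta)^2, while the limiting problem minimizes F(theta) = E[(X - theta)^2] = sigma^2 + (mu - theta)^2, whose minimizer is mu. We want the minimizers of F_n to converge to the minimizer of F.

```python
import random

random.seed(0)
mu, sigma = 2.0, 1.0  # hypothetical data source: Gaussian with mean 2, sd 1

def empirical_objective(theta, data):
    # F_n(theta) = (1/n) sum_i (x_i - theta)^2
    return sum((x - theta) ** 2 for x in data) / len(data)

def limiting_objective(theta):
    # F(theta) = E[(X - theta)^2] = sigma^2 + (mu - theta)^2
    return sigma ** 2 + (mu - theta) ** 2

data = [random.gauss(mu, sigma) for _ in range(100_000)]

# The minimizer of F_n is the sample mean; the minimizer of F is mu.
theta_n = sum(data) / len(data)
theta_star = mu
```

With 100,000 data points theta_n sits close to theta_star, and the minimum value of F_n (the sample variance) is close to the minimum value of F (sigma^2); the question the rest of the discussion builds toward is which mode of convergence of F_n to F guarantees this behaviour of the minimizers in general.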