2:00 PM - Monroe 130
Daniel MacRae Keenan, Professor Emeritus, UVa Statistics
Convolution in Modern Statistics and Probability
Convolutions are possibly the most ubiquitous construction in statistics and probability. Their fundamental property is that they “raise the level of structural regularity” of the entities being convolved, i.e., the output is more regular than the input. Convolutions occur between functions, between measures, between a measure and a function, and, more generally, between generalized functions. In statistics, most intuition-based estimators (e.g., the empirical measure; the periodogram) do not lie in the desired parameter space; regularization, via convolution, then moves them into the correct parameter space. In probability, virtually all limit theorems are the result of infinite convolutions (or approximations thereof).
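As a concrete illustration of this regularization (a standard example, not from the talk itself): the empirical measure of a sample is a sum of point masses and is not a density at all, but convolving it with a Gaussian kernel yields a smooth density estimate. The sketch below, with an assumed bandwidth of 0.4, shows the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)

def kde(x, data, bandwidth=0.4):
    """Kernel density estimate: (empirical measure) convolved with a Gaussian kernel."""
    # Convolving with a point mass at d just evaluates the kernel at x - d;
    # averaging over the sample convolves with the whole empirical measure.
    z = (x[:, None] - data[None, :]) / bandwidth
    kernel = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return kernel.mean(axis=1) / bandwidth

grid = np.linspace(-4, 4, 201)
density = kde(grid, sample)

# The output is a smooth, nonnegative function integrating to roughly 1,
# unlike the raw empirical measure it was built from.
total = density.sum() * (grid[1] - grid[0])
print(total)  # close to 1
```

The input (a discrete measure) and the output (a smooth function) live in different spaces; the convolution is what moves the estimator into the desired one.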
There have been two approaches to the use of convolution in these theories. One is the Fourier-analytic approach, i.e., that of characteristic functions, promoted by P. Lévy and H. Cramér. The other is a function-analytic approach, promoted by W. Feller. Each has its strengths, but only the second has broadly “tied together all the loose ends.” Unfortunately, as a matter of expediency in today’s training, it is usually only the former (Fourier) approach that gets taught. The difficulty with this strategy is that it is the second approach that is key to understanding many advanced ideas, e.g., stochastic processes.
In this talk, I will lay out a perspective from which this second approach can be (easily) viewed. Convolution semigroups and their generators, which may seem esoteric, are easy to understand and lie at the heart of weak convergence and the central limit theorems. It is a very beautiful theory that will, I hope, enhance your broader intuition.
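The simplest instance of these ideas (a textbook example, not taken from the talk) is the Gaussian, or heat, semigroup: convolving two Gaussians adds their variances, and the generator of the semigroup is a differential operator.

```latex
% The Gaussian (heat) convolution semigroup and its generator.
\[
  \mu_t = N(0, t), \qquad \mu_s * \mu_t = \mu_{s+t}, \quad s, t > 0,
\]
\[
  (T_t f)(x) = \int_{\mathbb{R}} f(x - y)\, \mu_t(dy), \qquad
  \left.\frac{d}{dt}\right|_{t=0} T_t f = \tfrac{1}{2} f''.
\]
```

Weak convergence to the normal law in the central limit theorem can then be read as convergence of iterated convolutions toward this semigroup.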