Date:
Sun, 22/11/2015, 15:30-16:30
Location:
Elath Hall, 2nd floor, Feldman Building, Edmond Safra Campus
Topic: Calibrated Forecasts, Leaks, and Game Equilibria (joint work with Dean P. Foster)
Place: Elath Hall, 2nd floor, Feldman Building, Edmond Safra Campus
Time: Sunday, November 22, 2015 at 4:00 p.m.
Refreshments available at 3:30 p.m.
YOU ARE CORDIALLY INVITED
Abstract: How good is a forecaster? Assume for concreteness that every day the forecaster issues a forecast of the type "the chance of rain tomorrow is 30%." A simple test one may conduct is to calculate the proportion of rainy days out of those days on which the forecast was 30%, and compare it to 30%; and do the same for every other forecast value. A forecaster is said to be _calibrated_ if, in the long run, the differences between the actual proportions of rainy days and the forecasts are small, no matter what the weather really was. We start from the classical result that calibration can always be guaranteed by _randomized_ forecasting procedures. We then _smooth out_ the calibration score by combining nearby forecasts. While regular calibration can be guaranteed only by randomized forecasting procedures, smooth calibration can be guaranteed by _deterministic_ procedures. As a consequence, it does not matter if the forecasts are _leaked_, i.e., made known in advance: smooth calibration can nevertheless be guaranteed (while regular calibration cannot). Moreover, our procedure has finite recall, is stationary, and all forecasts lie on a finite grid. We also consider related problems: online linear regression, weak calibration, and _uncoupled Nash dynamics_ in n-person games (vs. correlated equilibria dynamics for calibrated learning).
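To make the calibration test concrete, here is a minimal illustrative sketch (not part of the talk, and not the speakers' procedure): it assumes binary rain/no-rain outcomes and forecasts that take finitely many values, and computes the frequency-weighted average gap between each forecast value and the empirical proportion of rainy days on which that value was issued.

```python
# Minimal sketch of the calibration test described in the abstract.
# Assumptions (not from the source): outcomes are 0/1 (rain = 1), and
# forecasts lie on a finite grid of values, e.g. 0.0, 0.1, ..., 1.0.
from collections import defaultdict

def calibration_score(forecasts, outcomes):
    """Frequency-weighted average of |empirical rain proportion - forecast|
    over the distinct forecast values; small score = well calibrated."""
    counts = defaultdict(int)  # how many days each forecast value was issued
    rainy = defaultdict(int)   # rainy days among those days
    for p, y in zip(forecasts, outcomes):
        counts[p] += 1
        rainy[p] += y
    n = len(forecasts)
    return sum(c * abs(rainy[p] / c - p) for p, c in counts.items()) / n

# Example: three days with a 30% forecast, of which one was rainy.
print(calibration_score([0.3, 0.3, 0.3], [0, 1, 0]))  # ~0.033
```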