It may seem odd, but one of the basic activities of theoretical cosmology is not to analyse real observational data, but rather to predict what future data will look like. Why should we spend time predicting future data? One reason is that we want to understand whether upcoming experiments will be able to test our theories; another is that we train ourselves, and our codes, for the real analysis, so that we are ready when the data finally flow in; yet another is that we might learn how to design future observations to optimize the scientific return. Finally, and not least, one needs to predict the performance of future endeavours to convince funding agencies that the experiment is worth supporting.
Forecasts are relatively easy to do. You just pick a theory and simulate what the data should look like if that theory is the right one. Then you analyse the fake data as if they were real, and see how well they can “confirm” your theory choice – or reject alternatives. In other words, you estimate the errors on your theory parameters.
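In practice, this recipe – simulate data under a fiducial model, then estimate parameter errors – is most often carried out with a Fisher-matrix approximation. Here is a minimal sketch, assuming a Gaussian likelihood and a toy linear model; all names and numbers are purely illustrative and not taken from any real survey or from our paper:

```python
import numpy as np

# Toy Fisher-matrix forecast (illustrative only).
# For a Gaussian likelihood with model prediction mu(p) and errors sigma,
# the Fisher matrix is F_ij = sum_k (dmu_k/dp_i)(dmu_k/dp_j) / sigma_k^2,
# and the forecast parameter covariance is F^{-1}.

def fisher_matrix(derivs, sigma):
    """derivs: (n_data, n_params) array of model derivatives dmu/dp;
    sigma: (n_data,) array of measurement errors."""
    w = 1.0 / sigma**2
    return (derivs * w[:, None]).T @ derivs

# Hypothetical toy model: mu(x) = p0 + p1 * x, "observed" at 50 points.
x = np.linspace(0.0, 1.0, 50)
derivs = np.stack([np.ones_like(x), x], axis=1)  # d(mu)/dp0, d(mu)/dp1
sigma = np.full_like(x, 0.1)                     # assumed constant error bar

F = fisher_matrix(derivs, sigma)
cov = np.linalg.inv(F)            # forecast parameter covariance
errors = np.sqrt(np.diag(cov))    # marginalized 1-sigma errors
print(errors)
```

The point of a realistic forecast is then to make the error model behind `sigma` (and the correlations between data points, which here are ignored) as close as possible to the real experiment.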
However, a good forecast should a) be as close as possible to the real analysis (including all sources of noise and error) and b) include as much information as possible. In short, forecasts should be realistic and complete. In a recent paper we tried to improve upon the standard forecasts for galaxy clustering data, a very important source of information on the evolution of the Universe. We included the cross-correlation among bins, the effect of a finite survey, and the uncertainty in the epoch at which we evaluate the cosmological functions – effects that were previously neglected in this kind of forecast.
The result is that these corrections turn out to be quite important for an experiment like Euclid, even though its survey will be huge compared to existing ones. We find that the estimated errors on the most important cosmological parameters can increase by as much as 30%.
On the other hand, this increase is not dramatic, and the scientific value of the experiment is not compromised. Moreover, we have yet to take into account the so-called mode-mode correlation, which will add further information.
So, the price of being realistic is not too high, and there are hidden troves of information still waiting to be exploited.