We needed a second set of data to test things out on, and I remembered the CD of "challenge" data we got at a conference a few months ago. The data are frames of OPD (optical path difference) measurements of turbulence over a flat plate, so it's not exactly the same as general AO turbulence, but close enough to validate the velocity estimation algorithm. The data are pretty dense: each frame has phases on a 41x41 grid, and there are around 15000 frames in each file. Unbelievably, though, it's all stored uncompressed in a fucking 1 GB text file.
It took me two days to figure out how to load just enough data from this shit pipe to be useful. Right now I'm taking roughly every third frame and only every other grid point. Getting MATLAB to do this without reading every line of the text file took more than a few hits of caffeine.
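For the record, here's a minimal sketch of how a loader like that can work. The big assumption (not from any file spec, so check a hex dump first) is that each frame is stored as 41 fixed-width text lines of 41 values, which is what lets fseek jump over the skipped frames instead of parsing them; the filename and bytesPerLine are placeholders.

    N            = 41;      % grid points per side
    bytesPerLine = 533;     % hypothetical fixed line length in bytes (check the file)
    nSkip        = 3;       % keep every 3rd frame
    maxFrames    = 15000;   % rough frame count per file

    fid = fopen('challenge_opd.txt', 'r');        % placeholder filename
    frames = {};
    for f = 1:nSkip:maxFrames
        % jump straight to the start of frame f instead of parsing skipped frames
        fseek(fid, (f-1) * N * bytesPerLine, 'bof');
        vals = textscan(fid, '%f', N*N);
        if numel(vals{1}) < N*N, break; end       % ran off the end of the file
        frame = reshape(vals{1}, N, N).';         % one 41x41 phase screen
        frames{end+1} = frame(1:2:end, 1:2:end);  %#ok<SAGROW> every other grid point
    end
    fclose(fid);
    opd = cat(3, frames{:});                      % 21 x 21 x nKept stack

If the lines turn out not to be fixed width, the fseek trick is off and you're back to skipping line by line with fgetl or textscan.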
Just looking at the autocorrelations of the frames (instead of computing an SS model first), I still see the problem of varying velocity estimates as a function of the delay. Calculating the velocity from the data itself is something other people have done successfully, so either they're all full of shit or I'm missing some detail. I suspect there's some pre-processing of the data I have to do to get more consistent estimates.
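To be concrete about what "velocity estimate as a function of the delay" means here, a sketch of one way to get it: average the spatial cross-correlation between frames separated by a given lag, find the peak, and convert the peak shift into a speed. This isn't necessarily the actual pipeline; the grid spacing dx and frame period dt below are made-up numbers, and it uses a plain FFT-based circular correlation.

    dx = 1e-3;                        % [m] spacing of the loaded (decimated) grid, assumed
    dt = 1e-4;                        % [s] frame period, assumed
    [Ny, Nx, Nt] = size(opd);         % opd stack from the loader sketch above
    lags   = 1:20;                    % delays (in frames) to test
    vel    = zeros(numel(lags), 1);
    shifts = zeros(numel(lags), 2);   % peak offset [rows, cols] per lag

    for k = 1:numel(lags)
        lag = lags(k);
        C = zeros(Ny, Nx);
        for t = 1:(Nt - lag)
            a = opd(:,:,t);       a = a - mean(a(:));     % remove piston
            b = opd(:,:,t+lag);   b = b - mean(b(:));
            % circular cross-correlation via FFT, averaged over frame pairs
            C = C + fftshift(real(ifft2(fft2(b) .* conj(fft2(a)))));
        end
        [~, idx]    = max(C(:));
        [pr, pc]    = ind2sub(size(C), idx);
        ctr         = floor([Ny, Nx]/2) + 1;               % zero-shift bin after fftshift
        shifts(k,:) = [pr, pc] - ctr;                      % peak offset in grid points
        vel(k)      = norm(shifts(k,:)) * dx / (lag * dt); % speed implied by this delay
    end

    plot(lags, vel, 'o-'); xlabel('delay [frames]'); ylabel('estimated speed [m/s]');

The circular correlation wraps around at the edges, so for larger shifts you'd want to zero-pad or window the frames first; that's also a cheap place to try pre-processing and see what stabilizes the estimates.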

Here's a cool video of the spatial autocorrelation over 7500 frames for a varying number of delays. The peak should move with the same velocity as the layer, at least until enough delays are used that the frames are essentially uncorrelated. From the video it looks like it's moving with a constant velocity, but it's hard to know exactly.
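To put a number on "looks constant": assuming the shifts, lags, dx, and dt from the sketch above are still around, a constant-velocity layer means the peak displacement should grow linearly with the delay, so fit a line and look at how far the points scatter off it.

    dispPts = sqrt(sum(shifts.^2, 2));          % peak displacement per lag, in grid points
    p       = polyfit(lags(:), dispPts, 1);     % slope = grid points per frame
    resid   = dispPts - polyval(p, lags(:));
    fprintf('speed ~ %.3g m/s, rms deviation from a straight line: %.2f grid points\n', ...
            p(1) * dx / dt, sqrt(mean(resid.^2)));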
I need a drink.
