At the moment it's manageable unless the number of modes gets into the neighborhood of 20. Another trick that helped is eliminating the influence of the edge subapertures while identifying the poke matrix. Because these lenslets sit on the edge, their centroids can experience sudden jumps if an adjacent spot edges into the wrong subaperture. This is a serious source of nonlinear noise that can create large spikes at the edges of the identified influence functions/modes. As a result, if a large disturbance shows up at an edge and is subsequently projected onto one of these erroneous spikes, the actuators pertaining to that mode can saturate almost immediately. I surmise this is why the edge actuators are the first to go.
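The masking itself is the easy part. Roughly what I mean, with made-up dimensions and variable names (N lenslets per side, S the recorded slope history, V the corresponding actuator voltage history):

    % Sketch: drop edge-subaperture slopes before the poke-matrix fit.
    % N, S, V are placeholders: N lenslets per side, S the slope history
    % (2*N^2 x T), V the voltage history (n_act x T).
    N = 12;
    mask = true(N);
    mask([1 N], :) = false;            % top and bottom lenslet rows
    mask(:, [1 N]) = false;            % left and right lenslet columns
    keep = [mask(:); mask(:)];         % x-slopes stacked on y-slopes
    P = S(keep, :) / V;                % least squares: P*V ~= S(keep,:)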
I've been looking at ways to just reduce the influence of the edge subapertures instead of eliminating them completely. If I identify the reconstructor directly instead of the poke matrix, this could be done by adding a regularization term to the least-squares problem. That route ran into trouble with the rank of the matrix containing the slope vectors, though, so I haven't gone back to it.
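If I ever do go back, the version I have in mind is a plain ridge penalty that leans hardest on the edge subapertures. A sketch, with lambda and the weights pulled out of thin air (reusing the keep mask from above); a side benefit is that the ridge term also takes care of the conditioning of the slope-covariance matrix, which is presumably where the rank trouble lives:

    % Sketch: regularized identification of the reconstructor R (v = R*s).
    % S: slope history (n_slopes x T), V: command history (n_act x T).
    % lambda and the edge weights are guesses to be tuned.
    d = ones(size(S, 1), 1);
    d(~keep) = 100;                    % penalize edge-subaperture slopes
    lambda = 1e-2;
    R = (V * S') / (S * S' + lambda * diag(d));

Minimizing ||R*S - V||^2 + lambda*||R*diag(d)^(1/2)||^2 gives exactly that closed form, and the heavy weights shrink the columns of R that multiply the edge slopes.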
Another option is to identify the poke matrix, but regularize the actuator influence functions themselves. This is basically the method in [Hin07a], but obviously choosing the regularization matrix is more art than science... i.e. a pain in the ass.
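For what it's worth, one special case is cheap: if the penalty is just a row weighting on the poke matrix (shrink the rows belonging to the suspect edge subapertures), the problem decouples and each row gets its own little ridge solution. I'm not claiming this is the regularizer [Hin07a] uses, it's just the version I can write in a handful of lines:

    % Sketch: per-row ridge on the poke matrix P (s = P*v), heavy on the
    % edge rows. Reuses d and lambda from above; G, C are the usual
    % normal-equation blocks.
    G = V * V';
    C = S * V';
    n_act = size(V, 1);
    P = zeros(size(S, 1), n_act);
    for i = 1:size(S, 1)
        P(i, :) = C(i, :) / (G + lambda * d(i) * eye(n_act));
    end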
Let's see, what else. We also finally got some real-life, flesh-and-blood wavefront data. The problem (isn't there always one?) is that there's a large occlusion in the middle of the wavefront sensor image. This is a common geometry in a lot of AO telescopes. I've spent quite a bit of effort on interesting ways to fill in the hole so I can apply the wavefronts to the DM, but eventually settled on plain old least-squares. Because there's significant flow in the data, I can basically identify a "predictor" (smoother?) that fills in the hole based on the future and past sequence of wavefronts in the region around it.
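Stripped of the indexing gymnastics, the smoother is one big regression. The hand-wavy part below is where the training targets for the hole pixels come from; you need some sequence where those pixels are actually known. All the masks, window lengths, and names are illustrative:

    % Sketch: least-squares gap filler. Ftr is a training sequence where
    % the hole pixels are known (the hand-wavy part); F is the occluded
    % data. Frames are vectorized into columns (n_pix x T); hole/ring are
    % logical pixel masks; k is the temporal half-window.
    k = 2;
    X = stack(Ftr, ring, k);                      % border pixels, past+future
    Y = Ftr(hole, k+1:end-k);                     % matching truth for the hole
    W = Y / X;                                    % least-squares smoother
    F(hole, k+1:end-k) = W * stack(F, ring, k);   % fill the real data

    function X = stack(F, ring, k)
        % Each column: the ring pixels over frames t-k..t+k, vectorized.
        T = size(F, 2);
        X = zeros((2*k + 1) * nnz(ring), T - 2*k);
        for t = k+1 : T-k
            w = F(ring, t-k:t+k);
            X(:, t-k) = w(:);
        end
    end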
Although it's a Matlab indexing nightmare, the results look relatively convincing. The edge pixels aren't quite continuous all the time, but I think that can be fixed by (again) using regularization to enforce boundary conditions. If I can prove that the filled images have the same second-order statistics (esp. the structure function) as the original data, I might be able to squeeze a short paper out of it. Bonus.
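The check I have in mind: estimate the isotropic structure function D(r) = <(phi(x+r) - phi(x))^2> on both the original and the filled frames and see whether the curves sit on top of each other. A quick-and-dirty estimator over random pixel pairs (names illustrative; real data would need the occluded/NaN pixels skipped):

    % Sketch: structure function of one phase map via random pixel pairs.
    npairs = 2e5;
    [ny, nx] = size(frame);
    p = randi(ny * nx, npairs, 2);            % random pixel index pairs
    [y1, x1] = ind2sub([ny nx], p(:, 1));
    [y2, x2] = ind2sub([ny nx], p(:, 2));
    r  = round(hypot(x1 - x2, y1 - y2));      % separation in pixels
    d2 = (frame(p(:, 1)) - frame(p(:, 2))).^2;
    D  = accumarray(r + 1, d2, [], @mean);    % D(r), with r = 0 at index 1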
In other news, I finally got my shit together wrt the target camera. Basically what I learned from that has led to even more modifications to the hardware. That deserves another post.
Somewhere in all this the driver boxes for DM31 crapped out... both of them. In the meantime it's just me and my old friend Simulink, working simulations.
