Ahh crap, it's August.
Finished some code to calculate the optimal FIR filter (in the SISO case) for the disturbance rejection problem by solving the Wiener-Hopf equations. I have no clue if it works, but it seemed to give non-bullshit answers on the few simple problems I threw at it. To actually use it in the experiment I have to harness the disturbance state-space model, but because of all the reshaping and massaging required to get the disturbance commands I'm not sure the original model would still be valid. Right now I'm working on identifying a new model based on the measured wavefronts, so the basic procedure awkwardly involves running a subspace ID twice. At that point I'm probably just getting complete nonsense, but it's worth a try.
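Roughly, the calculation is the following minimal sketch, assuming the disturbance autocorrelation rdd (lags 0 through N) and a disturbance record d are already estimated; the variable names are placeholders, not from the actual code:

    N = 32;                    % filter length (arbitrary choice here)
    R = toeplitz(rdd(1:N));    % autocorrelation matrix, R(i,j) = rdd(|i-j|+1)
    p = rdd(2:N+1);            % cross-correlation with the one-step-ahead sample
    w = R \ p(:);              % Wiener-Hopf solution of R*w = p
    dhat = filter(w, 1, d);    % one-step prediction of the disturbance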
Friday, July 30, 2010
Monday, July 26, 2010
7.26.10
I was able to get the experiment running around 20 Hz with Simulink by removing the pause between DM commands. This is great, except it now looks like DM dynamics are definitely screwing things up.
Here's a plot comparing the PSDs from 10000 frames using the same disturbance input and several different plant models.

Clearly, using the original plant model in the AO loop, which contains a single delay and a PI controller, produces crap; it's worse than using just the classical controller. From my ARX experiments with the DM, I found that running at full speed seems to add an additional delay, and sure enough, multiplying this ideal plant model by an extra delay works much better. Unsurprisingly, the best results come from identifying a plant model first.
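In transfer-function terms the tweak is something like this sketch (the gains and sample time are placeholders, not my actual values):

    Ts = 1/20;               % approximate loop rate
    z  = tf('z', Ts);
    C  = 0.1 + 0.5/(z - 1);  % PI controller with placeholder gains
    P0 = C/z;                % original model: PI controller plus a single delay
    P1 = P0/z;               % same model with the extra delay seen at full speed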
None of these results are as good as what I get with the artificial pause between DM commands, but the time difference is significant (20 Hz vs. 4 Hz). I get the feeling this is a fact I'll just have to live with for the time being.
Friday, July 16, 2010
Can I graduate now?
Finally, finally, I have the adaptive loop working in the experiment, at least with a single mode. It turns out that the effect on the overall wavefront norm is difficult to see for this disturbance model unless you know it's there. It's clearer if you just look at the coefficient of the mode being controlled.
But the best way to observe the effectiveness is by comparing the PSDs of the modal coefficient. In the uncontrolled case (classical PI controller only), you can clearly see some color resulting from the disturbance input. In simulation the adaptive loop flattens this out somewhat:

To my surprise, the adaptive loop does an even better job whitening the PSD in the experiment:

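For reference, the PSDs here are just Welch estimates of the logged modal coefficient, roughly like this sketch (coef and the loop rate are stand-ins for the logged data):

    fs = 20;                                        % assumed loop rate, Hz
    [pxx, f] = pwelch(coef - mean(coef), [], [], [], fs);
    semilogy(f, pxx); xlabel('Hz'); ylabel('PSD');  % flat = whitened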
I also compared the performance using the ideal and identified plants. The adaptive loop uses a model of the closed (classical) loop to estimate the disturbance input. In general we assume that the ideal plant is just an integral controller and a unit delay, since the phase reconstructor is chosen to be the pseudo-inverse of the modal poke matrix. To verify this I also identified a plant using n4sid and a few thousand samples of input/output data, and found the resulting transfer functions nearly identical. This is nice to know since the actual plant contains some dirty nonlinearities like saturation and rounding, so it looks like those aren't significant for now.
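The identification step is basically the following sketch, assuming u and y are logged columns of commands and modal coefficients (the model order and gain here are placeholders):

    Ts    = 1/20;                      % approximate loop rate
    data  = iddata(y(:), u(:), Ts);    % logged input/output data
    sysid = n4sid(data, 2);            % low-order identified plant
    z     = tf('z', Ts);
    ideal = (0.5/(z - 1))/z;           % integrator plus unit delay, placeholder gain
    bode(sysid, ideal)                 % these came out nearly identical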
This is all for a single mode. The requirements on the plant in the multiple-mode case are more stringent (i.e., a diagonal transfer matrix). Something is also causing this to run quite a bit slower than from an m-file, so that will take some coffee consumption to figure out as well.
Tuesday, July 13, 2010
7.12.10 [2]
Wow, 2 posts within 24 hours. This is what happens when you can't nail down consistent sleep patterns.
Advisor was intrigued by the comparison of the velocity estimates (see plot in previous post). In particular, both lines have almost the same shape and appear to differ only by the average slope, i.e., the velocity. I proposed that this is because the poke matrix doesn't account for the change in beam size, which is roughly proportional to the norm of the command vector. Relatively small perturbations are used to estimate the poke matrix, so the beam diameter is actually smaller for general random commands. A given phase profile enters and exits the DM surface in the same amount of time in either case, so if the actual beam diameter is smaller, the phase traverses fewer pixels in a given time period, resulting in a lower velocity estimate.
I have no idea if that makes any sense since it's around 2 AM. In any case, it's not clear there's anything I can really do about it other than estimate some correcting scale factor and apply it to every command sequence. It's also not clear whether any of this velocity estimation stuff will make it into a paper or my dissertation, so I'm not sure it's worth devoting my entire life to something that's basically a sideshow to the main event.
Anyway, tomorrow (today), I'd like to ignore this discrepancy for now and look at applying phases at twice the rate to see if I can measure twice the velocity. I plan on doing this by using the same sequence of phases from the SS model, but just applying every other command.
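That part is basically a one-liner; a sketch, with cmds holding the command sequence one row per frame:

    fast_cmds = cmds(1:2:end, :);   % every other command at the same loop rate = 2x flow speed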
I'm also making some (theoretical) progress on implementing an optimal FIR filter in the actual AO experiment. I still have to think about what it means to do the calculation in the multi-channel case.
Monday, July 12, 2010
7.12.10
More stuff on the velocity estimation. I managed to run some disturbances on the experiment that originated from a state-space model. Surprisingly, you can actually distinguish something that looks like "flow" in the resulting reconstructed phase measurements. As a bonus, the velocity seems to be relatively constant.
The velocity estimate from these measurements is less than what's predicted by putting the estimated phases (using the commands and the phase poke matrix) through the estimator, but just the fact that there's anything recognizable is a plus. I did, however, have to mask the WFS image down to the center region corresponding to the active area of the DM; everything outside of this is just distortion. Maybe something like this should be done in the AO loop as well.
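The masking itself is simple; a minimal sketch, with the grid size and the center/radius of the DM's active region as assumed placeholders:

    [X, Y] = meshgrid(1:128, 1:128);        % WFS pixel grid (size assumed)
    mask   = hypot(X - 64, Y - 64) <= 40;   % assumed center/radius of active region
    phi_masked = phi .* mask;               % keep only the region covered by the DM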

Speaking of which, I really want to focus on getting the AO loop working in the experiment this week. So far everything runs, but I haven't seen any improvement in the Strehl in either the Simulink experiment or simulation. A few things to try:
1. Compare predicted and actual disturbance measurements (w/ and w/o bias). This should pin down whether the internal plant model is accurate; it should be, after running n4sid on sample data. Theoretically, I think the MSE between these should converge at something like an exponential rate after the adaptive loop is closed (see the sketch after this list).
2. Try different disturbance sources. Maybe the current SS system is just too close to white to be useful. Maybe try a simple FIR filter or different amplitudes.
3. Compute the optimal IIR and FIR filter using the known disturbance model and see if that makes any difference.
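For #1, the convergence check amounts to something like this sketch, assuming d and dhat are the logged actual and predicted disturbance measurements:

    err = d(:) - dhat(:);
    mse = cumsum(err.^2) ./ (1:numel(err))';                 % running MSE over frames
    semilogy(mse); xlabel('frame'); ylabel('running MSE');   % look for exponential decay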
Getting this working, especially #3, is important. The stuff with the velocity and new SLM is just icing at the moment.
Thursday, July 08, 2010
7.8.10
Things to do today and tomorrow:
- Rewrite correlation code to handle phase data on rectangular grids
- Map commands from an SS model to DM commands, and compare velocity estimates for the SS model, the output from the SS model, and the projected DM surface profile w/ and w/o bias. Is there any correspondence at all between the DM surface velocity and the SS estimate?
- Moar subspace ID stuff: calculating oblique projections using LQ factorization (sketch after this list).
- Screw around with SLM more.
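For the oblique projection, the standard LQ trick is a sketch like this, with Uf, Wp, and Yf the usual block-Hankel matrices of future inputs, past data, and future outputs:

    [Qt, Rt] = qr([Uf; Wp; Yf].', 0);   % LQ of M is the transpose of economy QR of M.'
    L   = Rt.';                         % lower-triangular factor
    ru  = size(Uf, 1); rw = size(Wp, 1);
    L22 = L(ru+1:ru+rw, ru+1:ru+rw);
    L32 = L(ru+rw+1:end, ru+1:ru+rw);
    Ob  = L32 * pinv(L22) * Wp;         % oblique projection of Yf along Uf onto Wp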
Fun.
Wednesday, July 07, 2010
7.7.10
Still progressing on this correlation/velocity estimation stuff. Clearly from the previous post, estimating the velocity at each delay produces inconsistent results. I think the problem is that when the speed is relatively slow (<1 px/frame), consecutive frames look very similar, with only a few edge pixels changing. There just isn't enough movement over the first few delays to show clear motion of the peak of the correlation image, which maybe explains why it takes a few delays for the estimated velocity to settle down to a reasonable number.
But based on how consistently the peak moves in that video, I decided to just track its position as a function of the delay, instead of producing an estimate at each delay. Luckily, the peak position is very linear in the delay. Because the peak should move with the same velocity as the phase profile, I can do a linear regression and take the resulting slope as the velocity. The resulting estimate is close to the average of the per-delay estimates, but it's easier to justify from a plot of peak position vs. delay.
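The regression itself is a couple of lines; a sketch, assuming peak_px holds the correlation-peak positions (in pixels) at delays 1..K:

    K = numel(peak_px);
    c = polyfit(1:K, peak_px(:).', 1);   % fit position = v*delay + b
    v = c(1);                            % slope = velocity estimate, px/frame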
This works equally well using the covariance matrices of a state-space model to calculate the correlation image for each time lag. To make it even better, I managed to vectorize the calculation so that Matlab doesn't yack all over nested loops; it runs around 10x faster than before.
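The model-based version falls out of the standard state-space covariance relations; a sketch, assuming the model x(t+1) = A*x(t) + w with process-noise covariance Qw and output map C from state to phase pixels:

    P  = dlyap(A, Qw);       % steady-state state covariance: A*P*A' - P + Qw = 0
    k  = 3;                  % example time lag
    Rk = C * (A^k) * P * C'; % phase covariance at lag k -> the correlation image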
I'm not sure where all this is going. It seems to be working pretty well, but I'm not sure we'll be able to squeeze a paper out of it. Maybe if I manage to get flow results using the WFS and the actual experiment, that would be more interesting.