A few days ago I got some new modes that are zero mean. Since the first mode is a column of 1's, this means that the other modes are orthogonal to mode 1 in the normal Euclidean sense, and also on the DM surface (theoretically). There's one fewer degree of freedom, so there are a total of 30 of these.
It's not clear yet if this is going to make a difference since they look pretty similar to the modes I was using before, but you never know. The most significant difference so far seems to be the norm of the phase shapes they reproduce. Maybe there's something that could be done with this parameter to scale them relative to the power in each mode of the disturbance.
Saturation is still a problem I'll have to live with, even though I tried scaling down the amplitude of the disturbance WF. In the meantime I'd like to compare the performance of the two sets of modes, obviously using a set of disturbances that don't cause saturation.
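Just to be concrete, here's a minimal check of that orthogonality claim (a sketch only; M is a hypothetical matrix with the 31 modes as columns, mode 1 = ones(n,1)):

% Subtracting the column means from modes 2..31 makes them orthogonal to the piston mode.
n = size(M,1);
Mz = M(:,2:end) - repmat(mean(M(:,2:end),1), n, 1);
disp(max(abs(ones(1,n)*Mz)))   % should be ~0, i.e. orthogonal to mode 1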
Sunday, December 26, 2010
Tuesday, December 21, 2010
12.21.10
I've been plagued this week trying to get the LTI controller to work without saturation. I'm not sure why I'm suddenly having this problem when it was running smoothly before. Right now I'm frequently getting saturation in the DM voltage after a few hundred samples with the LTI loop closed.

The annoying thing is that it occurs very suddenly, and as a result the modal sequences shoot up, almost like a step response. Actually, I'm not sure which one is driving the other, but since no one's touching the experiment, I assume it's the control saturation.
Some things to investigate:
1. A bad plant model. It could be that the poke matrix just isn't accurate enough for some modes, and as a result the modal commands from the LTI controller are too coupled or otherwise garbage.
2. Bad sensor data. I doubt this is the case since I've gotten the WFS image very clean, with small realignments and re-referencing every day to keep things tidy.
3. Shitty predictor. Relative prediction errors are sometimes north of 50%.
4. Bad internal model. None of the nonlinear effects (rounding or saturation) are included in the controller's internal model. So when saturation does happen, the predictor doesn't get an accurate estimate of the current disturbance. Obviously this would only make things worse when saturation was already happening.
5. Bad PI gain/pole. Possible, but since the integrator is included in the ID, I'm not sure the LTI controller would just send out higher values if I reduced the PI gain.
6. Unexplainable. The most likely.
I'm going to see if I can reproduce the problem in simulation if playing around with the integrator gain doesn't solve things. Otherwise, I might also look at writing the code to actually calculate the optimal LTI filter instead of pulling it from the Kalman filter. Basically this involves an LQR problem, with the advantage that you can directly penalize the "size" of the predictor to try to prevent large control values.
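For reference, the LQR version would look something like this (a sketch only, with hypothetical variable names; Aaug/Baug would be the identified closed-loop plant stacked with the disturbance model, Cd picks off the disturbance estimate, and r is a scalar weight):

% Penalize the predicted residual disturbance (Q) and the control effort (R).
Q = Cd'*Cd;
R = r*eye(size(Baug,2));        % crank r up to discourage large commands
[K,S,e] = dlqr(Aaug, Baug, Q, R);
% The LTI filter is then the state predictor in series with u = -K*xhat.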
Also, it's pretty funny looking at the keywords that send people to this blog, I guess people looking for implementations of different AO algorithms. So if you're out there: no I don't have a simulink SPGD implementation, sorry. Maybe one day.

Friday, December 17, 2010
12.16.10
More playing around with the target camera today. For some reason I haven't been able to reproduce the nice spot from the previous post, instead getting a large blob ~100 px in diameter. Probably a result of the mysterious bias problem I've been having.
Luckily most of the performance measures so far show pretty good correlation with the norm of the WF error. Here's an example of the norm of the recon phase compared to the negative log of "sr2," which is the sharpness measure equal to the max intensity divided by the total image sum. Input was a spatially uniform sinusoidal disturbance command around the bias with an amplitude of 20V.

[Chr06a] suggests that the image sharpness is proportional to the Strehl, so its negative natural log should be approximately proportional to the variance (squared spatial RMS) of the wavefront error (by the Maréchal approximation). Seems to be about right if you squint.
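For the record, here's roughly how sr2 and that relation are computed (a sketch; img is one target camera frame and c0 is a hypothetical diffraction-limited offset):

% Sharpness: peak intensity normalized by the total energy in the frame.
sr2 = max(img(:)) / sum(img(:));
% Marechal: Strehl ~ exp(-sigma^2), with sigma the WF error in radians RMS,
% so -log(sr2) should track sigma^2 up to a constant.
sigma2_est = -log(sr2) - c0;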
The imaging sensor of the target camera isn't placed with the same accuracy as the other components, so it's possible that the minimized wavefront doesn't minimize the measured Strehl. I don't expect this to be a problem though, since ultimately I'm going to be looking at the noise power, and I think unless the placement is way off the LTI controller should show an obvious improvement.
Here's a video of a random disturbance with the corresponding target image and reconstructed WF.
Well, obviously, we have a spot which isn't stationary, making power in the bucket measures somewhat useless. My advisor seems to favor these since they're nice to use in simulation when everything is perfectly on axis, so there's going to have to be some convincing before I can really use a different method.
Tomorrow I'd like to get some sharpness measurements going w/ and w/o the LTI controller, assuming I can ever get it working without it shitting the bed.
Wednesday, December 15, 2010
12.15.10
The first 3 of yesterday's stuff went fine. Amazingly I was able to get a license for the driver quickly, so accessing both cameras simultaneously through the same interface is no problem. Current target position seems fine for now, here's an example image with both DMs at the bias voltage.

I'll start looking at performance measures today. Incidentally, here's an illustration of the strange bias WF problem I've been having.

This is a composite of the reference SHWFS image from last night and from this morning. Clearly there's been a change in the focus, which is a mystery since no one's been in the lab afaik. This is fixed by adjusting L2 (after the spatial filter) and doing some basic realignment, but it's a bitch to do correctly.

Tuesday, December 14, 2010
12.14.10
FIR code still isn't working right. Diagonal terms always seem to be wrong in both the LS and normal-equation versions. I think I'm going to table this for now since I really only need it to compare the performance of the adaptive controller.
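For context, the two versions amount to something like this (a single-channel sketch with my own variable names; u is the filter input sequence, y the output, p the FIR order):

p = 4; N = length(u);
Phi = zeros(N-p, p);
for k = 1:p
    Phi(:,k) = u(p+1-k : N-k);     % lagged inputs
end
yv = y(p+1:N);
h_ls   = Phi \ yv;                 % "LS" version (QR under the hood)
h_norm = (Phi'*Phi) \ (Phi'*yv);   % normal-equation version; should agree with h_ls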
I really want to focus on getting the target cam working this week so I can start generating some complete data for possible paper submissions. I'm in the process of getting the driver license right now, but it's taking a while due to some admin delays. Luckily I can still access the image (although sometimes with a watermark). Stuff that needs to be done:
- Code to generate camera object in Matlab. Can the target and WFS be active simultaneously?
- Determine some nominal positioning where the spot size is reasonably small.
- Find decent camera settings for said position.
- Reconfig HDR code to use if necessary.
- Look at possible performance metrics. See how they respond to basic WF disturbances (step input to DM61, sinusoidal disturbances in a single mode, etc).
- Look at results using different non-PI controllers.
Ideally I'll have all this done by Friday.
Monday, December 06, 2010
12.6.10
Finally managed to get all 3 control situations running on the same disturbance data. For some reason, the bias WF changes significantly from time to time, forcing me to do a realignment each time. The usual symptom of this is a crappy poke matrix, or highly saturated commands when identifying the closed-loop disturbance model.
Anyway, something's screwed up with the adaptive loop (AO) as shown below. In the past (and I think in theory), it has outperformed the LTI loop since it can better compensate for plant modeling error. One possibility is that the adaptive filter order is insufficient (right now, the adaptive filter is FIR with order 4). I think I'm going to update the code I have to compute the optimal FIR filter so I can look at how the theoretical FIR impulse looks.


For tomorrow:
- Update FIR code
- Figure out what the deal is with the target camera so I can get some Strehl ratio measurements.
Thursday, December 02, 2010
12.2.10: Plotfest 2010
I put in another beam splitter to send the beam to the target camera, so I wanted to run the LTI loop again to make sure everything was still ok. Luckily things still seem to be working, and I ended up spending most of the day thinking about what plots might be useful in analyzing what's spit out.
The controller only projects the wavefront onto the modes that are controlled, but since the modes aren't totally uncoupled irl, looking at those sequences doesn't tell the whole story. Instead, I projected the wavefront sequence onto ALL the modes and only looked at the ones that are controlled. The difference is that there is some cross talk from the higher order modes that you don't see if you leave them out of the projection. For the plots here, the experiment used 25 control modes. The LTI predictor was estimated using 10000 frames with a prediction error around 0.38.
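Roughly what that projection looks like (a sketch; Phi holds all the mode shapes as columns, wf is one reconstructed wavefront sample, and the names are mine):

a_all  = Phi \ wf;      % least-squares coefficients for ALL modes (cross-talk included)
a_ctrl = a_all(1:25);   % the 25 controlled modes that actually get plotted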
Anyway, one of the more useful plots is obviously the PSDs for each modal sequence. Clearly the classical controller is killing the low frequency/static disturbances but does shit for anything else, as expected. The LTI controller does a decent job flattening the spectrum, with surprisingly little high frequency amplification like I was seeing before. My experience has been that this amplification shows up when the predictor model is crap, so I guess it was good enough this time. Plant modeling error should also contribute to that, so if the AO loop does even better I'll have an idea that the closed-loop plant isn't matching the theoretical version in the controller.
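The PSDs themselves are nothing fancy, roughly this (a sketch; seq would be one modal time series and fs the loop rate, both placeholders):

nfft = 512;
[Pxx,f] = pwelch(seq - mean(seq), hanning(nfft), nfft/2, nfft, fs);
semilogy(f, Pxx)   % overlay the open-loop, integrator-only, and LTI runs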

Next up is the modal sequence itself for a few of the modes. You can clearly see how the completely open-loop disturbances have a nonzero mean (e.g. in Mode 1), a static error that's knocked out by the integrator. The LTI controller makes quick work of the remaining high-amplitude spikes. One thing I really have to do is estimate the noise floor that shows up in steady state (with no disturbances) due to the WFS.

The temporal RMS for each mode is also instructive. This is key since sometimes the RMS improvement may not be so impressive even if the PSDs are white. Luckily the LTI controller improves the RMS for every mode, so I guess they're all fairly well formed on the DM. Note that the integrator sometimes does worse in certain modes; I guess those are being sacrificed for the sake of modes that contribute more to the noise power, i.e., the modes with the largest RMS improvement should be the ones that have the highest open-loop value (e.g. 6 and 9 compared to 8). Notice that after mode 25 there's almost no improvement, as expected. But there is some difference, indicating the influence of the controlled modes creeping in.

Finally, the time series of the spatial RMS. The top plot shows the spatial RMS from the modal sequence, i.e. the part of the wavefront in the range space of the DM. This is fine, but ultimately the Strehl ratio of the beam depends on the entire wavefront RMS, shown in the bottom plot. This was generated by reconstructing the phase directly from the slope vector instead of just projecting it onto the range of the poke matrix. Before, when few modes were used, I think there wasn't much improvement between the controllers in this plot, and subsequently that's why there was little improvement in the Strehl as measured by the target. Based on this guy, if the Maréchal approximation holds I should see some significant improvement in the measured Strehl when I finally get the target camera set up again (hopefully tomorrow).

That's all for now. gjdm.
Tuesday, November 30, 2010
11.30.10
With everything (basically) working, and a reasonable way to generate disturbances, I was able to run the full-blown IIR controller yesterday without any problems. I'd like to be a little more rigorous though and do some ID on the closed-loop plant, and play around more with the number of modes to get a sense of how the handling is going to be. Luckily, running 5000 samples takes a few minutes now instead of an hour.
- Verify closed-loop (classical) plant matches model
- Close classical loop with different number of modes. Any instability or saturation?
- Set up target camera with CMU driver
The objective is to have the target camera working, and a good idea of the max number of modes I can use by the end of the week.
Tuesday, November 23, 2010
11.23.10
Many re-installations, a new software version, much cursing, and a birthday later, I finally have everything (relatively) working with the WFS. Frame rates at the ~260x260px resolution I'm using now are around 250-300 fps, but for some reason in Matlab I'm limited to a mere 30-40. The external trigger guarantees that each frame is fresh though, so this is still an order of magnitude faster than before with that ridiculous artificial pause. I suspect the frame rate problem has to do with the bytes per packet, which I can't seem to change from Matlab directly.
Anyway, at the moment that's a minor problem. This new speed means that generating a poke matrix only takes around 10 seconds. Here's an example after a few realignments:

Recall that I'm now doing differential measurements. I've tried to correct for the bias WF as much as possible, but it's not a pure focus, so moving lenses around won't completely obliterate it. Surprisingly, most of the modes are recognizable on the 9x9 measurement grid. For comparison, here are the ideal mode shapes resized to that grid from an original 128x128 image:

Most of them look pretty good, and the higher frequency ones might still be usable. The next step is mapping disturbances on the DM. Right now I'm trying to work out a more systematic way of doing that instead of just blindly using the reconstructor matrix. I'm thinking that projecting the desired WF onto a set of modes, other than the DM modes for the 61 actuator mirror, might be useful for this purpose. Maybe PCA modes from the original data or SVD modes of the DM.
Friday, November 12, 2010
11.12.10
Unbelievably I spent all of last week dealing with .NET issues. Mainly, I couldn't get Matlab to load the assemblies I needed to control the DM or connect to the WFS camera.
Mysteriously the problem with the DM assemblies fixed itself after reinstalling things in apparently the right order. The camera issue took a few conference calls and some digging through some arcane .NET barf logs. We finally figured out that the version of some dll the camera driver needed was slightly out of date, like v1.0.0.1 instead of v1.1.0.0, and this was enough to basically send my week down the toilet.
All's good now. I'll have to re-write some code to use the new driver, but that should be minor. The triggering, the whole reason for doing all this shit, still isn't working because of, surprise surprise, a problem loading a .NET assembly. I'll muck around with it more on Monday to see if I can sort the problem out, but my patience for fixing this shit on my own is basically nonexistent. I'm sure another conference call is on the horizon.
Hey, at least I was able to modify the driver box to accept the triggering cable without soldering anything. Who knew IDC connectors were so useful?
Stuff to finish off in the next few days (by Monday):
- Email about triggering problem
- Rewrite code to use new cam driver
- Verify it works with embedded matlab
- This .NET camera interface has no built-in preview method, so maybe I could cook one up with some incredible GUI interface
Tuesday and the rest of the week should be left to alignment, and verifying the triggering is working (using the ARX model stuff).
Thursday, November 04, 2010
W7 Upgrade During Action Report
Installing the hw trigger requires a different camera driver, so rather than try to remove all the remnants of the old one I decided to wipe out my entire system and upgrade to x64. Surprise surprise, upgrading to W7 has turned out to be a huge time vampire, with the usual bullshit-every-step-of-the-way syndrome.
Invariably I'll have to go through this crap again in the near future, so to help prevent another lost week, here's a brief rundown of some of the problems and solutions.
Problem 1: PC wouldn't start up from W7 x64 install CD. Creating multiple CD's from different ISO's did nothing.
Solution: Created an ISO from an install CD (on my Mac no less), then used the "Windows 7 USB/DVD Download Tool" to create a bootable USB drive. But this required...
Problem 2: Since I was upgrading from XP x86 to W7 x64, the USB/DVD Download stuff required an additional program to create the MBR on the USB drive. Normally you have to download this from MS based on your purchase history, but since my ISO came from a MSDN site I had no download history.
Solution: Thankfully this is a common problem and someone on the interwebs posted a direct link to what I needed.
Problem 3: W7 activation couldn't find the DNS server to register the product key.
Solution: After a few emails to the campus network overlords, they sent me the direct address.
Problem 4: Installing video card driver resulted in everyone's favorite BSOD.
Solution: Quickly unplug all USB connections right before installer begins. Seriously.
There were others, but you get the gist.
Wednesday, October 27, 2010
10.28.10
Next couple days:
Thursday:
- more poke matrix adjustments
- identify components to upgrade (HD, ram)
- look into some kind of pin connector in lieu of soldering
- back up data
Friday-Sunday:
- gather installation disks
- install new hardware
- install matlab and w7
- install new optics software
- review matlab code examples
Monday:
- profit
Tuesday, October 26, 2010
10.26.10
Yesterday I finished the code to compute the differential slope measurements. I was able to get a decent looking poke matrix out of it after finding the right size perturbations. Of course, since the new WFS grid is only 9x9, the modal images are roughly a quarter the resolution, so some of them look more like crumpled paper without squinting. I'm probably bumping up against the spatial Nyquist limit with the higher modes, but I'm far too lazy to quantify that. I suspect that some of the higher DM61 modes are unobservable, although it's uncertain what use those are anyway.
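For reference, the differential poke measurement boils down to this (a sketch; getSlopes is a hypothetical wrapper around the WFS/slope code, Vbias the bias command vector, and nSlopes a placeholder for the slope vector length):

nAct = 61; dV = 10;                  % perturbation size found by trial and error
P = zeros(nSlopes, nAct);
for k = 1:nAct
    v = Vbias; v(k) = v(k) + dV; sp = getSlopes(v);
    v = Vbias; v(k) = v(k) - dV; sm = getSlopes(v);
    P(:,k) = (sp - sm) / (2*dV);     % differential slope response of actuator k
end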
With that done I managed to get a stable integrator running, but of course since all the measurements are differential right now, there was basically nothing to compensate. Today I finished rewriting the code to generate disturbance commands matching the output from the state-space models, so the next step is to try out the LTI and adaptive loops (first in silico, obviously).
I also tried using the pixel sub-sampling mode on the WFS, which halves the image resolution but can quadruple the frame rate. Sure enough, I was able to get nearly 450fps out of it using the current beam size; huge considering just a couple weeks ago I was struggling to get 20 fps. Obviously reducing the image resolution decreases the sensitivity of the WFS, but this might not be a problem for my purposes. The advantage of doing differential measurements is that I can basically use this mode with few code modifications. I would just have to define a new reference grid with this smaller resolution.
The next (hopefully final...please let it be final) hardware step is to get the camera triggering working so I can actually use these superfast frame rates. I've set the ball in motion on this, but it looks like it's going to be pretty involved, requiring a new cam driver and modifications to the DM driver so I can use the existing NI board to generate the trigger. Having this running by the end of next week will be a tall order, but not outrageous if I can get the software in time.
Make it so.
Friday, October 22, 2010
10.23.10
So far so good with the beam alignment, but it turns out that fixing the beam resizing reveals another problem. After rewriting some of the slope calculation code to use the new beam size, I found that identifying a poke matrix like I had been doing was producing crap. The problem now is that while the beam diameter is relatively stable, the number of lenslet spots in the WFS image can change under certain DM commands. Basically, the wavefront can change enough so that the spots move in and out of the frame entirely. This could be another consequence of using a smaller beam - increased sensitivity of the WFS centroid locations.
The number of spots in the WFS image seems to be roughly proportional to the focus mode, or the average voltage command. Applying random commands (even with a pretty large range) around a particular voltage, for example, doesn't really present any problems; the lenslet spots just jiggle around as you'd expect (REF video in 10.21.10). But when the center voltage is changed, the total number of spots can change drastically, even if the overall beam diameter doesn't change.
The main implication of this is how to set the reference wavefront. I'm pretty much resigned to doing differential wavefront measurements after learning from AOS that I was never really using some absolute reference file. But now it's no longer sufficient to just subtract the reference wavefront and apply whatever commands I want. Instead, the average command voltage has to be close to the average used to determine the reference, i.e. if commands are zeroed at a voltage of 180, then in general commands should be close to zero mean.
I don't think this should be a problem for my experiments since everything from now on will be differential measurements from that reference, so there shouldn't be any large focus biases that the PI controller has to kill. Similarly, since the disturbances are nominally zero mean, I can set the average disturbance voltage to be the same value that I use to create the reference.
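In code that just means something like this (a sketch, my names):

Vbias = 180;             % voltage used when the reference grid was taken
d = d - mean(d(:));      % force the disturbance sequence to zero mean
V = Vbias + d;           % commands actually sent to the DM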
However, as I'm about 2 glasses of Sailor Jerry deep at the moment this could all be nonsense. I'll sort it out tomorrow.
Thursday, October 21, 2010
10.21.10
Finally, I think I'm just about done with the realignment. More screw-ups along the way than I would have liked considering I've done this countless times by now, but overall I think this was worth it. One problem worth mentioning was that I had to add an additional relay tele with unit magnification at the end. It turned out that despite all my careful measuring and CADing, I didn't account for the actual length of the WFS, and I wasn't able to place the lenslet array on the focal plane of the resizing telescope at the end. I thought about ordering a couple of new lenses with a larger lens at one end, but I ended up just using a pair of 50mm lenses (compared to the 25mm lens at the end of the resizing tele) to reimage the beam at a focal plane with more room to work with.
And holy shit, the thing actually works pretty well. The beam actually stays pretty constant when I apply random commands to both DMs. Although things go to shit quickly if I drive them to their maximums, that never really happens in actual experiments. Plus it's nice to know your actuators have more range than your sensors can deal with.
The beam diameter right now is around 1.5mm, meaning that I can meet my objective of using a WFS resolution of 220x220. With this size I can get WFS frame rates > 100 fps. Hopefully I can stay around 80 Hz closed-loop when I finally get the camera trigger installed.
Here's a video running the disturbance and control commands for a closed-loop experiment with the old setup. Obviously its nonsense, but its clear how consistent the beam size is compared to before.
It's pretty cool (to me at least) to see the spots dancing around like that without the beam drastically changing around the margins. Some of the spots are smeared out, but I think that's a result of the beam size. With a smaller diameter, each lenslet captures more relative area. Thus if there are high spatial frequency aberrations, the portion of the beam entering each lenslet will have more to it than just tilt, and won't form a tidy spot.
An open question is how much I can mitigate the bias. The only way I can see to easily do this without messing up all the focal planes is to move the 500mm lens. Also, I have to start changing the slope calculation code to use this reduced beam. Another open question is whether I should create a new reference wavefront or use the existing AOS file.
Thursday, October 14, 2010
Better Luck Next Time
Submission deadline has come and gone....and no submission.
The problem turned out to be with the target camera. Despite getting pretty bang-up results with the PSDs and modal sequences, there was basically no difference between the average intensity profile with the classical or LTI controller. I spent several days trying to find some intensity-based performance metric (power in the box, image sharpness, intensity variance) that would show some difference, to no avail. Eventually I just gave up and averaged a few thousand HDR frames and found both controllers yielded essentially identical results; no function of the profiles was going to reveal any advantage.
Why this happened is still a mystery considering the improvement in the PSDs with the LTI controller. I suspect it's because there's enough power in the 26 uncontrolled modes to swamp any gains. Looking at the modal distribution of the open-loop disturbance wavefronts confirms that there's significant power at the higher spatial frequencies, and indeed both the classical and LTI controllers knock down roughly similar amounts. Luckily both almost completely eliminate the steady-state (average) disturbance in the controlled modes, so at least something is working.


Note that since the modes are not unitary, you can't compare the norm of the time series of the individual modal sequences directly. Instead, you have to multiply each one by the spatial norm of the mode's phase shape (sort of like the spatial RMS), to really get a number that's proportional to the "power" produced by that mode.
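Something like this (a sketch; Phi columns are the mode phase shapes, A is the matrix of modal time series with one column per mode, names are mine):

w = sqrt(sum(Phi.^2, 1));                % spatial norm of each mode's phase shape
modalPower = sqrt(mean(A.^2, 1)) .* w;   % temporal RMS per mode, scaled into common units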
Tuesday, October 05, 2010
10.5.10[2]
I managed to resurrect the code I had for evaluating intensity-based performance measures in Simulink. The problem now is that there doesn't seem to be much difference in the performance of the controllers from this perspective. There's definitely some variation in the actual numbers, but in terms of the average value the differences seem to be minimal. I think there are 2 possible explanations for this.
1. Right now I'm still using 5 modes, and when looking at the modal coefficients or only the norm of the modal sequence I see definite improvement. But it's possible that content from the higher spatial frequencies is swamping the overall results. A good way to test this, and to see just how many modes I should use, would be to apply a disturbance and look at the closed-loop wavefronts projected onto all the modes. The size of the individual modal sequences and their correlation with the overall RMS wavefront should give me an idea of how many modes I need to see some improvement. Of course, I could also "cheat" by only applying the disturbance to the modes I can control. This might be a good test as well.
2. There might also be some nonlinear intensity bullshit going on that's affecting the measurement, but isn't showing up in the modal sequence. This should be evident if restricting the disturbance to the controlled modes doesn't help. If this is the case, I'm not sure there's much that can be done without the experiment rebuild that I'm planning.
10.5.10
With a conference deadline on the horizon, the focus for the next couple of days is to see just how much improvement I can squeeze out of the experiment in its current form. If it looks like I can see a significant improvement over the classical loop, even if not that many modes are in play, I think I'll have enough time to cobble together a decent paper to submit. Between my prospectus and previous stuff from our lab, there should be enough material to minimize the writing that I'd have to do.
I'm optimistic based on what I've seen so far that I'll be able to pull this off, although I haven't looked to see how much the Strehls are improving. If all goes well I can work on getting specific results later in the week.
Thursday, September 30, 2010
Write Stuff Yourself
I've basically struggled over the past week+ to figure out what the deal is with the optimal filter. The puzzling thing was that simulations from the command line would look good, whitening an otherwise off-white output PSD, but when it came time to run it in Simulink the outputs would look almost identical.
Finally on Monday, after much cursing, I figured out the problem after comparing basically every internal signal in the Simulink model to what it should be using lsim. It turns out that the multi-channel transfer function block I was using in the LTI controller was spitting out garbage. Fixing this involved copying the little fucker from a working diagram I received from my advisor. What was maddening was that both blocks implemented the identical transfer function...at least identical algebraically.
Now, I had to use this custom block because Mathworks, in their infinite, overpriced wisdom, doesn't have a transfer function Simulink block that works with multiple channels to my knowledge. Our lab created this one to use, but unbeknownst to me it assumes the denominator is monic, which wasn't the case in the block I was using. This is what I get for using software without really knowing the details under the hood.
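The workaround, once you know about it, is just to normalize before handing the coefficients to the block (a sketch):

% The custom multichannel TF block assumes a monic denominator; make den(1) == 1.
num = num / den(1);
den = den / den(1);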
Anyway, shit's sort of working now with multiple channels in the experiment, just meeting my little deadline. Saturation's the enemy now, but I think I can fix that with the forthcoming hardware mods. There's still a paper deadline in a couple weeks though that I'm going to try for.
Tuesday, September 21, 2010
9.21.10 [2]
I've ordered parts for the new and improved experiment. Mainly just a ton of lenses, which should arrive relatively quickly. I'm hoping realignment will be straightforward, but I'm worried since the tolerances on the reimaging telescopes are pretty tight. I might have to resort to CADing everything out, but even that would involve a lot of guesswork.
There's apparently a conference submission deadline at the end of the month, so it turns out that my goal to do multichannel adaptive control by the end of September is a good plan. I'm going to try to get some decent results that can go into a preliminary paper before I tear my experiment asunder and start over. For the time being I'll have to run <10Hz so that the single delay DM model holds.
While the adaptive loop seems to work well without much modification, so far I haven't gotten much out of the optimal IIR filter/Kalman predictor. It works well enough in flattening the PSD when I use the identified disturbance model directly to generate the noise, but in the experiment its effect is negligible. I suspect this is a result of a shitty ID. It might be interesting to look at the prediction error with the actual experiment/simulation, instead of just driving the identified model.
Overall the plan is to spend the remainder of September getting preliminary results for this conference, then worry about experimental modifications in October. Make it so.
9.21.10
Vacation injury report:
- 1.5" cut, left shin
- 2" cut, right shin
- 5" bruise, hip
- multiple bruises, left thigh
- 1" bruise, chest (x2)
- skinned left elbow (despite armor)
- scraped left knee (various)
- slightly chapped lips
hey, what's the point of paying for health insurance if you're not going to use it?
- 1.5" cut, left shin
- 2" cut, right shin
- 5" bruise, hip
- multiple bruises, left thigh
- 1" bruise, chest (x2)
- skinned left elbow (despite armor)
- scraped left knee (various)
- slightly chapped lips
hey, what's the point of paying for health insurance if you're not going to use it?
Sunday, September 12, 2010
9.12.10
So I'm taking a week off to visit places like this

But on the other hand I have plenty of time to digest what I learned on my little business trip last weekend. The good news is that the DM delay I've experienced was well known, and it turns out not to be due to the DM at all. Instead, the problem is that the WFS returns a frame from a buffer somewhere when requested, and at higher frame rates there's no guarantee how fresh or stale that image might be.
The solution is to use a hardware trigger for the camera so it only returns a recent frame when requested. The good news is that the current driver box I'm using for the 61 actuator DM can be modded to do this. The bad is that I have to solder a connection inside this $5000 box. We'll see when that happens.
Other than that I learned just how short my optics knowledge is when I talked about my experiment. Contrary to what I'm seeing, the beam size and intensity shouldn't change with DM commands as long as the WFS and disturbance DM are on imaging planes. Right now light is just beamed in without regard for human life. Some relatively simple hardware changes should solve this, but that means yet another alignment. Really I shouldn't be tinkering with the experiment at this point, but you can't fix stupid.
All said, I should be able to use a 10x10 lenslet array and fix the DM delay, which would allow me to boost the speed up to 100Hz and beyond. I'm aiming for the end of Oct for this, but who knows.
Monday, September 06, 2010
9.6.10
So I'm headed out to ABQ tomorrow to check out some other AO experiments irl. Although it probably would have been more useful to go there months ago when I was still tinkering with hardware, hopefully I'll get some nice ideas about how to improve the frame rate of my experiment, something my advisor seems to focus on.
I'm more concerned about the control aspects, and I doubt I'll hear anything interesting about that although you never know. I've made some progress computing the Kalman predictor like I described. Identifying the right disturbance model turns out to be a shittier experience than I thought, and I'm still not sure if I have the best procedure nailed down yet. I wasn't even able to get er done in the single channel case. The one IIR filter (for 5 channels) I managed to implement didn't do much to improve the wavefront error in either simulation or experiment. So either something is wrong with the calculation, or the whole idea of using a simple n-step predictor is mistaken somewhere.
That'll have to wait until I get back, or longer since I'm taking next week off to get out of town before the unwashed masses return to campus at the end of the month. Hopefully I can use that time to wrap my head around a simple RLS example and (ideally) a primitive lattice filter example in between mountain bike runs.
If I work hard like a good grad student, I think I'm still on track to get a multichannel version of the adaptive filter running by the end of the month, although surprisingly the multichannel optimal IIR filter may come first.
Friday, August 27, 2010
IIR Filtering Ideas
This post is going to be in pseudo-formal speak since I'm fleshing out ideas for my dissertation. Although that's tough without adding any equations.
The problem of constructing the optimal IIR controller for a given closed-loop plant and disturbance model can be solved as an LQR problem with a particular state-space system. However, the condition that the plant transfer matrix commutes with the filter turns out to be overly restrictive. Suppose the transfer matrix can be factored into the product of minimum and non-minimum phase matrices. We can define a new filter which is the product of the minimum phase component and the optimal filter F, thus leaving the non-minimum phase component to be compensated. The requirement is now that this non-minimum phase component commutes with the filter, which happens if it's equal to a scalar transfer function times the identity matrix. If this is satisfied, we can solve the LQR problem to identify the controller, then multiply it by the inverse of the minimum phase component to recover the actual optimal filter that is implemented in the software.
For the adaptive optics experiment, the lack of significant DM dynamics simplifies the problem. If the non-minimum phase transfer matrix consists solely of n-step delays on the diagonal, then the optimal IIR filter is simply the n-step Kalman predictor for the disturbance model. The disturbance model is itself in innovations form, thus the Kalman predictor can be constructed directly from the state-space model generated by the subspace identification algorithm.
...some math showing how this is done....
The result is a filter which predicts the disturbance wavefronts n-steps ahead on the basis of the current wavefront measurement.
Beautiful.
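As a hedged sketch of that construction (A, C, K are assumed names for the innovations-form disturbance model from the subspace ID, and n is the prediction horizon; this is an illustration, not the actual experiment code):

function yhat = nstep_predict(A, C, K, y, n)
% Predict the disturbance n steps ahead from an innovations-form model:
%   x(k+1) = A x(k) + K e(k),  y(k) = C x(k) + e(k)
nx   = size(A, 1);
x    = zeros(nx, 1);            % xhat(k|k-1)
yhat = zeros(size(y));
for k = 1:size(y, 1)
    e = y(k,:)' - C*x;          % innovation at sample k
    x = A*x + K*e;              % x is now xhat(k+1|k)
    yhat(k,:) = (C*A^(n-1)*x)'; % yhat(k+n|k), the n-step-ahead prediction
end
end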
Monday, August 23, 2010
Overly Ambitious
Now that the FIR filter is basically working for the single channel experiment, I think it's a good time to set some targets for the next four weeks. The clear next step is to start working on a multichannel version using more modes. Theoretically there isn't that much difference here, but I suspect there will be practical problems with actuator saturation and other shittiness that will slow things down.
I think a reasonable goal is a 10 channel adaptive/optimal filter in 4 weeks. Here are some things that will need to happen:
1. Characterize closed-loop transfer matrix. How similar are the diagonal terms? How significant are the off diagonals? What's the best scalar transfer function approximation?
2. Get the adaptive controller working given an identified or ideal transfer matrix.
3. Write a script to compute the optimal multichannel FIR and IIR filters.
4. Get target camera working to compute Strehl ratios.
5. Find/write/steal a simulink block that can implement multichannel transfer functions. The pole at 1 in the pure integrator I'm using now might (will) lead to saturation when more modes are controlled.
6. Questions: How do the PSDs of the output channels compare? How does changing the number of modes alter steady-state performance?
A lot of this will require running many experiments or simulations, so there should be plenty of down time to pursue some theoretical stuff for the SISO case. Basically, I'd like to write m-files that do the following:
1. Compute the optimal IIR filter. How does performance compare to the FIR case?
2. Compute the optimal FIR gains using an RLS array algorithm (see the sketch at the end of this post).
3. Compute the optimal FIR gains using an RLS lattice filter.
4. Do something about implementing a Laguerre filter.
Since these will all be m-files, I don't expect to implement these in the actual experiment right away. Mainly, I want to get some idea of the theoretical performance benefits by using increasingly complicated methods. Obviously I want to do all of this for the multichannel case one day, but the details of that are so complicated my head might explode first. We'll see.
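For item 2 in the second list, here's a hedged sketch of a plain RLS tap update (the array and lattice forms would replace the explicit inverse-correlation update with something better conditioned). x and d are assumed names for the filter input sequence and the target signal, and none of this is the actual experiment code:

ntaps = 4;  lambda = 0.99;             % number of taps and forgetting factor
w = zeros(ntaps, 1);                   % FIR tap weights
P = 1e3*eye(ntaps);                    % inverse input-correlation matrix
u = zeros(ntaps, 1);                   % buffer of past filter inputs
for k = 1:numel(x)
    u = [x(k); u(1:end-1)];            % shift in the newest input sample
    g = P*u / (lambda + u'*P*u);       % RLS gain
    e = d(k) - w'*u;                   % a priori error
    w = w + g*e;                       % tap update
    P = (P - g*(u'*P)) / lambda;       % inverse-correlation update
end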
Wednesday, August 18, 2010
Phucking Transpose
Yes, a freaking missing apostrophe was responsible for nearly a week's delay. To try to narrow down the problem with the impulse response filter calculation, I was trying to match the results with the data-driven m-file using made-up disturbance models comprised of random matrices. It turned out that both filters were the same as long as the A matrix was diagonal. The only places in the code where this mattered were the locations where I needed A transpose. The function call to Matlab's dlyap function was the freaking culprit. That piece of shit is one of those functions that's screwed me in the past, and of course the one time I wasn't careful it bit me in the ass.
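For the record, a minimal sketch of the convention that bit me (made-up A and Q, nothing from the experiment): dlyap(A,Q) solves A*X*A' - X + Q = 0, so the transposed equation needs dlyap(A',Q).

A = [0.9 0.2; 0 0.5];  Q = eye(2);   % arbitrary test matrices
X1 = dlyap(A,  Q);                   % solves A*X1*A' - X1 + Q = 0
X2 = dlyap(A', Q);                   % solves A'*X2*A - X2 + Q = 0
norm(A*X1*A' - X1 + Q)               % ~0
norm(A'*X2*A - X2 + Q)               % ~0
% for a diagonal A the two solutions coincide, which is why the bug hid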
Anyway, with the fix the impulse response method now returns basically the same FIR filter coefficients as using data. I also realized that the results from the adaptive loop I posted yesterday were crap. Somehow I was using the wrong model for the closed-loop plant; chalk that up to shitty variable names. Here are the proper PSDs and modal outputs:

My advisor thinks these results are stellar. Amazingly the PSD with the optimal FIR filter is pretty similar to the AO PSD, hopefully showing that my idea of dividing out the part of the disturbance cancelled by the classical loop is correct. In this case both the adaptive and fixed gain FIR filter are using 4 taps, so you might ask why the AO loop does better than the "optimal" filter at certain frequencies. The reason is that the adaptive loop can compensate somewhat for modeling error.
Right now I'm trying to run things without the shitty DM pause, which speeds things up to around 20Hz. I suspect the results won't be so peachy, but the time savings would be huge (8 minutes vs 40 for 10000 frames), and I wouldn't have to spend so much quality time with youtube waiting for my experiments to run.
Also, I'd like to look at how the number of filter coefficients changes the steady-state performance, although I don't think adding many more taps will make much difference. It'd also be cute to have the Strehl ratio performance to look at too.
Peace out.
Monday, August 16, 2010
Case of the Mondays
Here are the first results from using the "optimal" FIR filter, computed using the data-driven approach. Surprisingly it performs pretty well compared to the AO loop; the fact that it's not spitting out absolute crap is a small miracle.

Here's a sampling of the modal output

I'm not sure what's going on with the AO loop. Looking at the commands with the adaptive loop closed shows lots of lower-end saturation going on about 1/3 of the time, so something is probably screwed up somewhere in the experiment. Hard to say at the moment since I'm running this remotely from home. The frustrating part is that it takes so friggin long to run an experiment (around 40 minutes for 10000 frames) that it's easy to distract my already OCD mindset. I'm going to have to start doing this in simulations first.
This is good for a first step, but there are still some outstanding questions I'd like to look at this week. The first few deal with this optimal FIR filter calculation:
1. Determine what's really causing the difference between the data-driven and impulse-response methods of finding the optimal gains.
2. What's the real disturbance model that should be used in the calculation? What's the difference between computing it and identifying it from i/o data? Should it be SISO or MISO?
3. How does the filter order affect performance?
4. Write a script to determine the optimal IIR filter by solving an LQR problem.
Also, all of this stuff so far has been for the first focus mode. Sooner or later I'm going to have to do everything over again, all MIMO, so it'd be nice to have some heads up if there are potential problems down the road. The first step is to look at the transfer matrix for multiple modes with the classical loop closed. Everything depends on this being diagonal with the same SISO tf on the diagonals. If this doesn't hold to a reasonable extent then there could be serious limitations. With that in mind:
1. How similar are the diagonal transfer functions for each mode? Models identified with significant saturation are garbage.
2. If they're all of the same form, but with a different gain, can the transfer matrix be factored into a single transfer function times a static gain matrix? If so, can this matrix just be incorporated into the poke matrix? (See the sketch at the end of this post.)
3. What's the difference between doing a MIMO subspace ID and multiple SISO id's?
4. Does the simulation even have enough accuracy to identify the model for multiple channels?
All this will be much faster if I just suck it up and do it in silico first.
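Here's the sketch mentioned in question 2: a hedged way to check whether the per-mode closed-loop transfer functions differ only by a static gain. Gmode is an assumed cell array of identified SISO models, one per controlled mode, so this is an illustration rather than the actual experiment code.

Gnorm = cell(size(Gmode));
k = zeros(1, numel(Gmode));
for m = 1:numel(Gmode)
    k(m) = dcgain(Gmode{m}) / dcgain(Gmode{1});   % static gain relative to mode 1
    Gnorm{m} = Gmode{m} / k(m);                   % divide that gain back out
end
bode(Gnorm{:});   % if the scalar-tf-times-gain-matrix factorization holds, these overlap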

8.16.10
Still having trouble getting decent results with the FIR filter. I still can't get the impulse response method to agree with what the least-squares solution spits out, even when I include the noise covariance in the state-space model. I might try a third method, where the filter coefficients are spit out from a finite-time LQR problem.
One thing I noticed is that the disturbance model I use isn't exactly the state-space system identified directly from the disturbances. Instead, it's the part of the disturbance left over after going through the classical control loop. After untangling the block diagram, you end up dividing the original disturbance system by some transfer function involving the plant model. The problem I have is that this transfer function might not be exactly minimum phase, so you can end up with an unstable disturbance model to put into your FIR calculation.
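A hedged sketch of that divide-out step (Gd is the identified disturbance model and Gcl stands in for whatever transfer function involving the plant model falls out of the block diagram; both are assumed names):

Gres = minreal(Gd / Gcl);     % residual disturbance model for the FIR calculation
any(abs(zero(Gcl)) >= 1)      % non-minimum phase Gcl means Gres comes out unstable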
I think it's time to visit my advisor.
Tuesday, August 10, 2010
8.10.10 [2]
I feel like I'm losing the script here, so a quick sitrep on what's going on: I currently have 2 different methods written to calculate the optimal FIR disturbance rejection filter. One uses state-space models of the disturbance and plant to generate impulse response sequences. These can be used to form a Wiener-Hopf problem and solved for the optimal coefficients a la earlier work we did on jitter control. The second method uses the models to simulate data directly. Given enough samples, the solution to another (similar) linear equation yields the coefficients.
Theoretically, both of these should give similar results (I think). But so far no luck. Here's a comparison of the output PSD with filters calculated using each method compared to using no filter (F=1).

I used the actual plant and disturbance models I identified from the experiment. Using the data approach works pretty well, although it's pretty cumbersome, and would be stupid with multiple modes. The method using the impulse responses, however, is just crap, clearly making things worse.
Confusingly, both methods crap out the identical filter with less complicated disturbance models. I think the problem is that the input noise covariance matrix isn't really accounted for in the state-space model of the disturbance. It comes into play in the data-driven case since I have to use it to generate the input, but it doesn't show up directly in the impulse response at the moment. It should be easy, however, to incorporate it into the state-space model by multiplying the input matrix.
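That fix is basically a one-liner; a hedged sketch, with A, B, C, D the identified disturbance state-space matrices, Q the input noise covariance, and Ts the sample time (all assumed names):

Lq = chol(Q, 'lower');                  % Cholesky factor of the noise covariance
sysd_unit = ss(A, B*Lq, C, D*Lq, Ts);   % same model, now driven by unit-variance white noise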
That's the plan for the afternoon, as soon as I finish blogging here in a coffee shop.
8.10.10
After doing a system ID on the new disturbance model, I was finally able to get my "optimal" FIR filter working. Or rather, merely functional, since in the experiment I get saturation and in the simulation I get crap. Right now I'm trying to write a similar script that finds the optimal coefficients by using simulated data directly in the least-squares problem, instead of just impulse response terms. We'll see if I get the same results as with the other method.
I can already see the next stop on this pain train. The adaptive filter basically solves this least-squares problem recursively using either an LMS or RLS filter, so the obvious next step is to write my own adaptive code. The only piece missing is that I don't know how to write S-functions for Simulink, but there are hints that I might have to learn that eventually anyway.
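For reference, the recursive version really is only a few lines; a hedged LMS sketch, with x the filter input sequence, d the signal the FIR output should match, and mu the step size (all assumed names, not the experiment code):

ntaps = 4;  mu = 0.05;
w = zeros(ntaps, 1);            % FIR tap weights
u = zeros(ntaps, 1);            % buffer of past inputs
for k = 1:numel(x)
    u = [x(k); u(1:end-1)];     % newest input sample in front
    e = d(k) - w'*u;            % instantaneous error
    w = w + mu*u*e;             % LMS gradient step on the taps
end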
Tuesday, August 03, 2010
One Thing Leads To Another
I think my plan to do a subspace ID on the actual disturbances as measured by the WFS is a good one, but it turned out that the code we have to do it requires the measurements to be on a square grid. Stupidly, I've been using a rectangular 19x22 measurement area on the WFS, with the DM area (found by adding all the influence functions) shifted to the right by around 5 columns. Putting disturbances from a state space model on this odd configuration required scaling the image, shifting it in an attempt to align it with the DM area, and only then performing a least-squares fit with the poke matrix. Naturally the results were shit, as shown in the video in 6.18.10.
My slope calculation code is ludicrously cumbersome, so it took a day to properly rewrite it to use a 19x19 grid (actually a down-sampled 38x38 grid) and make sure it was bug-free. Realigning the beam and removing tilt so that the area for both DMs was centered took another. My advisor commented that he doesn't know how I keep everything straight in my head with so much shit going on. I assured him I have no idea what I'm doing.
Anyway, it's good to periodically tinker with the experiment and realign everything anyway to keep my monkey skills sharp. Today I put some state-space generated disturbances on the DM, which now only requires rescaling the state-space output to a slightly larger grid. The resulting phases look much better. Here's a comparison between (L to R) the state-space output (1 phase screen model on a 17x17 grid), the DM phase predicted using the poke matrix (on a 19x19 grid), and the actual phase measured by the WFS (with the bias removed of course).
If you squint you can actually see the similarity in the phases as they flow across the aperture. Either way it's much cleaner than the random flatulence I was getting before.
Tomorrow I'll look at how this affects the performance of the adaptive loop. Now that the grid is square I can also go ahead with my original plan and compute an optimal FIR filter.
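The rescaling step mentioned above is basically just a 2-D interpolation; a minimal sketch, assuming phi17 is one 17x17 phase frame from the state-space model (names are illustrative, not the experiment code):

[x17, y17] = meshgrid(linspace(0, 1, 17));
[x19, y19] = meshgrid(linspace(0, 1, 19));
phi19 = interp2(x17, y17, phi17, x19, y19, 'spline');   % phase on the 19x19 DM/WFS grid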
Friday, July 30, 2010
7.31.10
Ahh crap, it's August.
Finished some code to calculate the optimal FIR filter (in the SISO case) in the disturbance rejection problem by solving the Wiener-Hopf equations. I have no clue if it works, but it seemed to give non-bullshit answers on the few simple problems I threw at it. To actually use it in the experiment I have to harness the disturbance state-space model, but because of all the reshaping and massaging required to get the disturbance commands I'm not sure the original model would be valid. Right now I'm working on identifying a new model based on the measured wavefronts, so the basic procedure absurdly involves pulling a subspace ID twice. At that point I'm probably just getting complete nonsense, but it's worth a try.
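The core of that code is just the normal equations; a hedged SISO sketch, where ruu and rud are assumed autocorrelation and cross-correlation sequences starting at lag 0 (the exact lag bookkeeping depends on the loop delay):

ntaps = 4;
R = toeplitz(ruu(1:ntaps));   % autocorrelation matrix of the filter input
r = rud(1:ntaps);  r = r(:);  % cross-correlation with the disturbance
w = R \ r;                    % optimal FIR coefficients (Wiener-Hopf solution)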
Monday, July 26, 2010
7.26.10
I was able to get the experiment running around 20Hz with Simulink by removing the pause between DM commands. This is great, except it now looks like DM dynamics are definitely screwing things up.
Here's a plot comparing the PSD's from 10000 frames using the same disturbance input and several different plant models.

Clearly using the original plant model in the AO loop, which contains a single delay and PI controller, produces crap; it's worse than using just the classical controller. From my ARX experiments with the DM, I found that running at full speed seems to add an additional delay, and sure enough multiplying this ideal plant model by a delay works much better. Unsurprisingly the best results come with identifying a plant model first.
None of these results are as good as what I get with the artificial pause between DM commands, but the time difference is significant (20Hz vs. 4 Hz). I get the feeling this is a fact I'll just have to live with for the time being.
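For reference, a hedged sketch of the extra-delay fix, with P_ideal the existing delay-plus-PI plant model and Ts the sample time (assumed names only):

z = tf('z', Ts);
P_fast = P_ideal / z;   % same model with one extra sample of delay for full-speed runs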
Friday, July 16, 2010
Can I graduate now?
Finally, finally have the adaptive loop working in the experiment, at least with a single mode. Turns out that the effect on the overall wavefront norm is difficult to see for this disturbance model unless you know it's there. It's clearer if you just look at the coefficient of the mode being controlled.
But the best way to observe the effectiveness is by comparing the PSDs of the modal coefficient. In the uncontrolled case (classical PI controller only), you can clearly see some color resulting from the disturbance input. In simulation the adaptive loop flattens this out somewhat:

To my surprise, the adaptive loop does an even better job whitening the PSD in the experiment:

Another thing to notice is that I compared the performance using the ideal and identified plants. The adaptive loop uses a model of the closed (classical) loop to estimate the disturbance input. In general we assume that the ideal plant is just an integral controller and a unit delay, since the phase reconstructor is chosen to be the pseudo-inverse of the modal poke matrix. To verify this I also identified a plant using n4sid and a few thousand samples of input/output data, and found the resulting transfer functions nearly identical. This is nice to know since the plant actually contains some dirty nonlinearities like saturation and rounding, so it looks like those aren't significant for now.
This is all for a single mode. The requirements on the plant in the multi-mode case are more stringent (i.e. a diagonal transfer matrix). Something is also causing this to run quite a bit slower than from an m-file, so that will take some coffee consumption to figure out as well.
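A hedged sketch of that ideal-vs-identified comparison (u and y are assumed logs of the modal command and measurement, Ts the sample time, ki an assumed integrator gain; not the actual experiment code):

data    = iddata(y, u, Ts);
sys_id  = n4sid(data, 2);            % low-order identified closed-loop plant
z       = tf('z', Ts);
sys_ideal = (ki/(z - 1)) * (1/z);    % integral controller plus a unit delay
bode(sys_ideal, ss(sys_id));         % these should nearly overlap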
Tuesday, July 13, 2010
7.12.10 [2]
Wow, 2 posts within 24 hours. This is what happens when you can't nail down consistent sleep patterns.
Advisor was intrigued by the comparison of the velocity estimates (see plot in previous post). Particularly, both lines have almost the same shape, and appear to only differ by the average slope, i.e. the velocity. I proposed that this is because the poke matrix doesn't account for the change in the beam size that's roughly proportional to the norm of the command vector. Relatively small perturbations are used to estimate the poke matrix, so the beam diameter is actually smaller for general random commands. A particular phase profile enters and exits the DM surface in the same time in either case, so if the actual beam diameter is smaller it means that the phase traverses fewer pixels in a given time period, ergo resulting in a lower velocity estimate.
I have no idea if that makes any sense since it's around 2AM. In any case, it's not clear there's anything I can really do about it other than estimate some correcting scale factor and apply that to every command sequence. It's also not clear if any of this velocity estimation stuff will make it into a paper or my dissertation, so I'm not sure it's worth devoting my entire life to something that's basically a sideshow to the main event.
Anyway, tomorrow (today), I'd like to ignore this discrepancy for now and look at applying phases at twice the rate to see if I can measure twice the velocity. I plan on doing this by using the same sequence of phases from the SS model, but just applying every other command.
I'm also making some (theoretical) progress on implementing an optimal FIR filter in the actual AO experiment. I still have to think about what it means to do the calculation in the multi-channel case.
Monday, July 12, 2010
7.12.10
More stuff on the velocity estimation. I managed to run some disturbances on the experiment that originated from a state space model. Surprisingly, you can actually distinguish something that looks like "flow" in the resulting reconstructed phase measurements. As a bonus, the velocity seems to be relatively constant.
The velocity estimate from these measurements is less than what's predicted by putting the estimated phases (using the commands and the phase poke matrix) through the estimator, but just the fact that there's anything recognizable is a plus. I did, however, have to mask out only the center of the WFS image corresponding to the active region of the DM; everything outside of this is just distortion. Maybe something like this should be done in the AO loop as well.

Speaking of which, I really want to focus on getting the AO loop working in the experiment this week. So far everything runs, but I haven't seen any improvement in the Strehl in either the Simulink experiment or simulation. A few things to try:
1. Compare predicted and actual disturbance measurements (w/ and w/o bias). This should pin down whether the internal plant model is accurate. It should be, after running n4sid on sample data. Theoretically, I think the MSE between these should converge at something like an exponential rate after the adaptive loop is closed.
2. Try different disturbance sources. Maybe the current SS system is just too close to white to be useful. Maybe try a simple FIR filter or different amplitudes.
3. Compute the optimal IIR and FIR filter using the known disturbance model and see if that makes any difference.
Getting this working, especially #3, is important. The stuff with the velocity and new SLM is just icing at the moment.
Thursday, July 08, 2010
7.8.10
Things to do today and tomorrow:
- Rewrite correlation code to handle phase data on rectangular grids
- Map commands from a SS model to DM commands, and compare velocity estimates for SS model, output from SS model, projected DM surface profile w/ and w/o bias. Is there any correspondence at all between the DM surface velocity and the SS estimate?
- Moar subspace ID stuff. Calculating oblique projections using LQ factorization.
- Screw around with SLM more.
Fun.
Wednesday, July 07, 2010
7.7.10
Still progressing on this correlation/velocity estimate stuff. Clearly from the previous post, estimating the velocity at each step produces inconsistent results. I think the problem is that when the speed is relatively slow (<1 px/frame), consecutive frames look very similar, with only a few edge pixels changing. Thus there just isn't enough movement in the first few delays to show clear movement of the peak of the correlation image, maybe explaining why it takes a few delays for the estimated velocity to settle down to a reasonable number.
But based on how consistently the peak moves in that video I decided to just track its position as a function of the delay, instead of producing an estimate each time. Luckily, the peak position is very nearly linear in the delay. Because the peak should move with the same velocity as the phase profile, I can do a linear regression and take the resulting slope as the velocity. The resulting estimate is close to the average of estimating a velocity at each delay, but it's better justified by a plot of the peak position vs delay.
This works equally well using the covariance matrix of a state space model to calculate the correlation image for each time lag. To make it even better I managed to vectorize the calculation so that Matlab doesn't yack all over the nested loops. It runs around 10x faster than before.
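The regression itself is trivial; a hedged sketch, assuming peak_xy is an (ndelays x 2) array of correlation-peak positions, one row per delay (an illustrative name, not the actual code):

delays = (1:size(peak_xy, 1))';
px = polyfit(delays, peak_xy(:,1), 1);   % slope = x velocity in px/frame
py = polyfit(delays, peak_xy(:,2), 1);   % slope = y velocity in px/frame
vel = [px(1), py(1)];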
I'm not sure where all this is going. It seems to be working pretty well, but I'm not sure we'll be able to squeeze a paper out of it. Maybe if I manage to get results of flow using the WFS and the actual experiment that would be more interesting.
Friday, June 25, 2010
6.25.10
As I mentioned, one problem with the velocity estimation method I've been playing around with is that the estimates aren't constant for autocorrelations over a different number of delays, even when only one phase screen is used to generate the data. The question is, is this a result of the method, or is something phucked up in the simulation data itself?
We needed a second set of data to test things out on, and I remembered that we got this CD with "challenge" data from a conference we attended a few months ago. Basically, the data are frames of OPD data from turbulence over some flat plate, so it's not exactly the same as general AO turbulence, but close enough to validate the velocity estimation algorithm. The data's pretty dense: each frame has phases on a 41x41 grid, and there are around 15000 frames in each file. Unbelievably though, it's all stored uncompressed in a fucking 1GB text file.
It took me 2 days to figure out how to load just enough data in from this shit pipe to be useful. Right now I'm taking every third frame or so, and only using every other grid point. Making Matlab do this without reading every line in the text file took more than a few hits of caffeine.
Just looking at the autocorrelations of the frames (instead of calculating a SS model first), I still see the problem of varying velocity estimates as a function of the delay. Calculating the velocity from the data itself is something other people have done successfully, so either they're all full of shit or I'm just missing some detail. I suspect there's some pre-processing of the data I have to do to get more consistent estimates.

Here's a cool video of the spatial autocorrelation over 7500 frames for a varying number of delays. The peak should move with the same velocity as the layer, at least until enough delays are used that the frames are essentially uncorrelated.
From the vid it looks like it's moving with a constant velocity, but it's hard to know exactly.
I need a drink.
Friday, June 18, 2010
6.18.10
This is "inter-session" week, the time of bliss on campus between finals and the start of the summer quarter when undergrads have gone on internships and professors have gone on vacation, and only pale grad students and vagrants are out and about. It's basically the only time it's possible to find an empty seat in the campus coffee shop, which is staffed by dour students clearly disappointed they couldn't find something better to do for the summer.
You'd think with finals over that this would be a great time for me to actually get some work done, but no, I had to move to another apartment just as it was starting. After a few days of unpacking and waiting for utility people I'm finally starting to get back into the swing of things.
The velocity/correlation analysis I've been working on still shows some promise. I've tried it on "real" simulated data and it is still able to produce discernible peaks from the state-space model of the disturbance. The problem is that the resulting velocity estimate varies depending on the number of delays in the correlation, essentially implying that the velocity isn't constant across the frame. These simulations are supposedly using a single phase screen, but since they're essentially black boxes for us, who really knows what's happening. It's hard to know, then, whether my method is returning accurate velocity estimates.
Right now the idea is simmering on the back burner until I can get more state-space models. Another option is to use this "challenge data" from a conference we went to that is apparently the turbulence from flow over some kind of plate, but with a known, fixed velocity. The data is all stored in a gigantic text file though, so just extracting it is another project.
In the mean time I'm working on getting the adaptive controller working in simulink with the actual experiment. The last week or so I've spent looking at different ways to put disturbances generated from a SS model on the DM. The most obvious way is to project the desired phase onto some kind of phase poke matrix (maps the actuator inputs to phase, not slopes). In one approach I used the theoretical poke matrix from the manufacturer that is basically a model of the DM surface on a high-resolution grid of points. I don't actually have this for DM61, so in that case I used my estimated poke matrix multiplied by the phase reconstructor. Either one produces phases that seem to flow somewhat like the original model, although I haven't done any kind of analysis to verify that.
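A minimal sketch of that projection, assuming Pphi is a phase poke matrix (WFS pixels x actuators) and phi_des is the desired phase frame from the SS model (illustrative names only, not the experiment code):

cmd    = Pphi \ phi_des(:);                 % least-squares fit of DM commands to the desired phase
phi_dm = reshape(Pphi*cmd, size(phi_des));  % the part of the phase the DM can actually reproduce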
As an example of my precise control of Matlab's video functionality, here's the desired phase profile generated from the state space model, positioned on the same size grid as the WFS measurements
And here's the resulting measurements after doing a least-squares fit and applying the corresponding DM commands
They're indistinguishable I know. The fact that the actual measurements show anything that could be believably described as a "flow" is a major success in my book.
Monday, June 14, 2010
Sunday, May 30, 2010
5.30.10
I've been sidetracked the last couple weeks working on this idea of quantifying frozen flow layers. The problem is to identify how many layers are moving in a phase profile, and estimate their velocities. A common idea among some "predictive" AO controllers is to then use this information to generate control commands some number of steps in the future.
Of course, this only works when the velocities are constant and pretty well known, but it seems to be a common approach in certain fields. Based on some comments I've heard, they like it because it incorporates some knowledge about the physics behind the problem. I think deep down some of them just don't trust the completely black-box methods that are common in hard core controls applications.
We typically use one of these feared methods to identify a state space model for the turbulence. One question that's bothered us though, is how can we extract the velocity and layer information? Since the controller developed from the state space is optimal, the velocity info has to be embedded in there, but since the states are a product of the ID it isn't clear how.
One approach people in the AO community have tried is to generate a bunch of image correlations from the data, and look for peaks. If the phase is composed of a finite number of layers moving with distinct velocities, the correlations between images separated by enough delay should develop peaks corresponding to each layer. In our case, we have a state space model. And while we could just generate a sequence of data and use these methods, it'd be cooler if we could identify the velocity straight from the system matrices directly.
Coincidentally, I've been reading this book on subspace identification, and it has a good review about calculating the state and output covariance matrices for a state space system. It's very easy to compute the covariance matrices for any number of time steps, so I started to wonder if you could compute the covariance function directly from these matrices. After much, much head banging, it turns out you can.
I don't want to reveal the exact details, but I'll just say even though it's not theoretically complicated, it's pretty cumbersome in the 2D case, and required many many cups of coffee and a nontrivial amount of cursing to figure out. I still haven't tried it out on real data, but in all the simple 2-layer, integer-velocity cases I've set up it works swimmingly, and seems relatively robust to random similarity transformations of the state space.
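The textbook building block (not the 2D bookkeeping I'm keeping to myself) is just this, as a hedged sketch: assume a disturbance model x(k+1) = A x(k) + B w(k), y(k) = C x(k) driven by unit white noise w, with A, B, C as assumed names.

P = dlyap(A, B*B');                        % stationary state covariance
lags = 0:20;
Lam = zeros(size(C,1), size(C,1), numel(lags));
for i = 1:numel(lags)
    Lam(:,:,i) = C * A^lags(i) * P * C';   % output covariance at lag(i)
end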
Monday, May 17, 2010
5.17.10
Now that I can identify simple disturbance models from open-loop data, I'd like to try designing an optimal controller using the identified system. I think it should work out in simulation, but ideally I'll be able to implement it in the experiment and see some results. This is actually much simpler than using actual wavefronts and doing a multichannel problem; since the plant is particularly simple (with a slow enough sampling time), the optimal controller should do a pretty good job.
1. Make sure models can be id'ed using the disturbance model, probably using the 61 actuator modes constructed using the poke matrix.
2. Ignore the PI controller for now? Try to apply disturbances while the integrator is running and see what the results look like.
3. Come up with a script that calculates the optimal controller structure using the controller's SS model: first by solving a Wiener-Hopf problem, then by solving a finite-time LQR problem. The results should be the same either way (a minimal sketch of the LQR route follows this list).
4. Apply the controller in simulation and in the experiment if all goes well.
5. ....?
6. Profit.
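To make item 3 above concrete, here's the kind of calculation I have in mind, boiled down to a single made-up modal channel: the disturbance is an AR(1) driven by white noise, the DM is a one-sample delay, and the cost penalizes the residual plus a little control effort. This is the infinite-horizon dlqr answer (Control System Toolbox); the finite-time recursion should converge to the same gain. None of these numbers come from the experiment:

  % Hypothetical single-mode sketch: AR(1) disturbance, one-sample DM delay,
  % LQR on the augmented state [disturbance state; previous command].
  a  = 0.95;                     % made-up AR(1) pole for the disturbance
  Az = [a 0; 0 0];               % z(k+1) = Az*z(k) + Bz*u(k)
  Bz = [0; 1];
  Cz = [1 1];                    % residual y(k) = d(k) + u(k-1)
  Qz = Cz'*Cz;                   % penalize the residual...
  Rz = 1e-3;                     % ...plus a small control penalty
  K  = dlqr(Az, Bz, Qz, Rz)      % controller: u(k) = -K*z(k)

The augmented state carries the previous command, which is how the one-sample delay ends up inside the design instead of being ignored.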
The one wrinkle in all this is that the controller and the disturbance DM are using different sets of modes, so I have to think about the best way to unify them. Ultimately, I suspect I'll end up using the DM61 modes for generating disturbances, but since that process is basically opaque from the controller's perspective, all the ID and control will be done on the basis of the DM31 modes.
Wednesday, May 12, 2010
5.12.10
I feel like my brain's overheating from all the wasted processing cycles I'm asking it to do this week, mainly due to my shallow grasp of concepts I should know by now. The problem with doing several complicated dance moves simultaneously is that they all overlap into something that resembles a seizure more than a coherent motion.
1. Making headway in my subspace ID book. I found the presentation of prediction theory from a Hilbert Space perspective to be particularly illuminating once I finally got wtf was going on. Now embarking on several chapters on stochastic realization.
2. Now that I can roughly identify a noise model from input/output data, the question is how to construct the optimal controller. This is covered in a past grad's thesis that I'm dissecting; like all dissertations it's terse, and lets the references do most of the talking.
3. We've struggled for some time to find a way to evaluate layer velocities for a turbulence model given a state-space realization, assuming "frozen flow" holds. In the past people have estimated the velocity by looking at the spatial autocorrelation of the WFS measurements and tracking the peak. I brought up the point that perhaps the steady-state autocorrelations could be derived analytically from the state-space realization, specifically the state covariance matrix. It's not clear if this is feasible or if I was just high on lens cleaning fumes.
4. I'm also working on a midterm for the class I'm taking, doing some analysis of neuronal pattern generators using quasi-linear multivariable harmonic balance. Yeah.
Shits gettin real.
Wednesday, May 05, 2010
Backpacking: After Action
Some lessons learned:
1. Respect the toilet paper. Waste not the toilet paper. Cherish its many uses.
2. Snowshoes? That's for teh gayz!
3. On second thought, frozen boots are bad, dripping socks are worse.
4. Filtering is a PITA. Socks are not adequate water filtration devices despite their affinity for water.
5. No amount of caffeine withdrawal makes Starbucks a decent choice.
6. Sleeping on a slope sucks.
Monday, May 03, 2010
Inb4SID
I'm trying to gradually wean myself (and my advisor) off tinkering with my experiment and start working on actual controls. Before I can get to controller design I have to do some work on system ID; stuff more complicated than simple ARX models of the DM. It's been so long since I've actually taken any classes in this stuff that this means a lot of reading in the weeks ahead.
The reason for focusing on system ID is that one way to reject disturbances is to characterize the disturbance input as the output of an LTI filter with white noise input. Once you have that you can internalize the model in the controller and use it to reject disturbances "optimally." This is explicitly what an adaptive controller does, but doing the ID in a separate step has its advantages.
The immediate question was whether the experiment had enough accuracy in reading and applying wavefronts to do this. To find out, I cooked up a little Simulink file that runs white noise through a FIR filter and applies the resulting modal commands. The game is to read the wavefronts from this and reconstruct these filtered commands. With those and knowledge of the inputs, it should be possible to identify the FIR coefficients if the reconstruction is accurate enough.
To my amazement this actually worked out well; the reconstruction of the filtered commands was accurate enough to do a good job of estimating the filter with batch least-squares. The modes also seemed orthogonal enough that I could identify separate filters when a different one was applied to different modal channels.
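A stripped-down, single-channel version of that game looks like the following; the taps and noise level are invented, standing in for one modal channel and its reconstructed commands:

  % Single-channel stand-in: white noise through made-up FIR taps, then recover the
  % taps by batch least-squares from a noisy "reconstruction" of the filtered commands.
  T = 2000;  nb = 4;
  b_true = [0.5 -0.3 0.2 0.1]';                 % made-up FIR taps
  u = randn(T, 1);                              % white-noise modal command
  y = filter(b_true, 1, u) + 0.01*randn(T, 1);  % reconstructed output plus sensor noise
  Phi = zeros(T-nb+1, nb);                      % regressor rows: [u(k) u(k-1) ... u(k-nb+1)]
  for k = nb:T
      Phi(k-nb+1, :) = u(k:-1:k-nb+1)';
  end
  b_hat = Phi \ y(nb:T)                         % batch least-squares estimate of the taps

The multichannel case is the same least-squares problem with the channels stacked into block regressors.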
For shits and giggles I stuck the RLS block in the Simulink model and was able to get it to converge to the steady-state error pretty quickly. You can see this in action by changing the filter coefficients halfway through the experiment and watching it re-converge.
This is good news. Eventually I'm going to unleash a subspace ID algorithm on this biotch to identify even more complicated state-space MIMO models, but since I know nothing about that stuff it's going to be a while before I catch up on all the reading. In the meantime there's plenty to play around with here. I'd like to see how well I can identify a multichannel FIR filter, particularly if only a limited number of modes are used in the ID. Also, I think it'd be good for the soul to try to code an RLS filter myself.
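For the record, the hand-rolled RLS I have in mind would look something like this; it reuses u, y, nb, and T from the sketch above, and the forgetting factor lam < 1 is what lets it track a coefficient change mid-experiment:

  % Hand-rolled RLS with a forgetting factor; reuses u, y, nb, T from the previous sketch.
  lam   = 0.995;                                % forgetting factor
  theta = zeros(nb, 1);                         % running estimate of the taps
  P     = 1e3*eye(nb);                          % inverse correlation matrix, start it big
  for k = nb:T
      phi   = u(k:-1:k-nb+1);                   % current regressor
      e     = y(k) - phi'*theta;                % a priori prediction error
      g     = P*phi / (lam + phi'*P*phi);       % gain vector
      theta = theta + g*e;                      % update the estimate
      P     = (P - g*phi'*P) / lam;             % update the inverse correlation matrix
  end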
Friday, April 23, 2010
ARMAing
The general idea behind an ARMA model is to express the current output as a linear combination of past outputs (the autoregressive part), and current and past inputs (the moving average part). This essentially identifies a transfer function of some order for the system, and since the coefficients appear linearly, the "optimal" values can be found by constructing an appropriate least-squares problem.
I wasn't really interested in coming up with a complete model for the DM dynamics since its time constant is small compared to the sampling time I can get with the WFS. But is it small enough to be negligible when the experiment is running at full speed? The majority of AO papers consider a DM with insignificant dynamics, modeling it instead with a static poke matrix. This was muy bueno with me, except that when trying to identify a poke matrix I basically get garbage unless I insert a pause between applying a DM command and reading the wavefront.
Anyway, after much reading, checking, rechecking and cursing I finally wrote a basic ARMA estimation script that organizes all the data into the right places and solves the least-squares problem. Basically I gathered some number of DM commands and the resulting WFS measurements, and compared the coefficients with and without a pause between sending the commands and reading the wavefront. To make things simpler I projected the slope vectors onto the space of actuators. In the ideal case, when the WFS measurements depend only on the current command, the first coefficient should be the identity matrix.
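In miniature, with synthetic data standing in for the recorded commands and the projected slopes, the setup looks like this; the fake plant here is a pure one-sample delay, so it's the second coefficient block that should come back as (roughly) the identity:

  % Toy sizes, synthetic data; the real script uses the recorded DM commands and the
  % slopes projected onto actuator space.
  n = 5;  nb = 3;  T = 500;
  U = randn(T, n);                                          % command sequence, one row per sample
  Y = [zeros(1, n); U(1:end-1, :)] + 0.01*randn(T, n);      % fake measurements: y(k) = u(k-1) + noise
  Phi = zeros(T-nb+1, n*nb);
  for k = nb:T
      Phi(k-nb+1, :) = reshape(U(k:-1:k-nb+1, :)', 1, []);  % [u(k)' u(k-1)' u(k-2)']
  end
  Theta = Phi \ Y(nb:T, :);                                 % stacked coefficient blocks
  for j = 1:nb
      Bj = Theta((j-1)*n+1 : j*n, :)';                      % j-th MA coefficient matrix
      fprintf('||B%d|| = %.3f\n', j-1, norm(Bj));           % the block nearest the identity wins
  end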
Amazingly, the data was clean enough that this is actually what happens when I put in a pause before reading the WFS. What's interesting is that without a pause, when the WFS is read immediately after applying the commands, it's the second coefficient that's the identity, indicating that it takes essentially one sample time for the DM to achieve the desired shape. With no pause the sampling frequency was around 20 Hz.
Here's a comparison of the first 5 MA coefficients with a pause (top) and without (bottom). The AR coefficients turned out to be pretty negligible (as you'd expect). Also shown are the norms of the coefficients in each case.

This result seems to verify that the DM does indeed have some time constant, contradicting what everyone says about infinitely fast dynamics. In particular, the roughly 0.05 s time constant matches what I found earlier when actuating the DM and capturing WFS frames at a high frame rate.

Incidentally the prediction error on new data is around 15% in either case, and not projecting the slopes into actuator coordinates results in the poke matrix instead of the identity.
The question now is whether it's worth it to incorporate these dynamics into the controller, or just stick a pause in there and ignore them. Even with the dynamics the model is very simple, just a one-sample delay with no other significant coefficients, so it shouldn't be hard to do. Of course, I say that now...
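If it comes to that, the one-sample delay is a tiny state-space block; the 31 modal channels and the 0.05 s sample time below are just the numbers assumed above, and ss is from the Control System Toolbox:

  % Assumed: n modal channels, 0.05 s sample time.
  n   = 31;  Ts = 0.05;
  Gdm = ss(zeros(n), eye(n), eye(n), zeros(n), Ts);   % x(k+1) = u(k), y(k) = x(k)  =>  y(k) = u(k-1)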
Sunday, April 11, 2010
4.13.10
On Friday I got back from my first academic "workshop" up in Monterey. To my surprise it was actually informative, both in terms of AO and in witnessing the unique social interactions that take place at these things. My advisor's assertion that some of the people there were "world class pricks" turned out to be a good assessment of character in some cases.
But that's for another day. For all the interesting stuff I heard there, there were only a few thoughts that materially affect my work at the moment. Even though it's April, there's more snow in the forecast for the mountains, so this week is bound to be a short one. Here's what I've been focusing on:
Fourier transforms. One presenter at the workshop extolled the benefits of using Fourier modes to do predictive wavefront control. A common theme among the talks was this idea of frozen flow: basically modeling turbulence as a superposition of a finite number of static layers, each moving with an independent velocity. Under this model, decomposing the wavefront using the DFT is beneficial, since shifting a Fourier mode at a certain velocity simply rotates the (complex-valued) modal coefficient by some number of degrees each frame. For a given coefficient, each moving layer would impart a certain periodicity related to the layer's velocity. Thus one could identify the layers by looking at the PSD of each modal coefficient for spikes corresponding to this periodic behavior... I think.
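A quick 1D sanity check of that picture, with toy numbers and a single layer moving at an integer pixel velocity: the k-th DFT coefficient just spins at k*v/N cycles per frame, so its temporal PSD should show one spike there.

  % One layer on a 1D grid; everything here is a toy number.
  N = 64;  v = 3;  nframes = 256;  k = 5;       % grid, velocity (px/frame), frames, mode index
  phi = randn(1, N);                            % the frozen layer
  ck  = zeros(nframes, 1);
  for t = 1:nframes
      frame = circshift(phi, [0, v*(t-1)]);     % layer after t-1 frames
      F     = fft(frame);
      ck(t) = F(k+1);                           % k-th spatial Fourier coefficient
  end
  Pk = abs(fft(ck)).^2;                         % temporal PSD of that coefficient
  [~, imax] = max(Pk);
  f = (imax-1)/nframes;  f = mod(f + 0.5, 1) - 0.5;   % map the peak to [-0.5, 0.5) cycles/frame
  abs(f)                                        % should land on k*v/N = 15/64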
DM ARX model. I've been meaning to do this for a while, but it's only now that I think I know how to compute MIMO ARX models using least-squares. Of course it's close to the scalar version, but I wanted to work everything out to make sure. Because I'm anal about verifying every segment of code, it's taken some time to get the actual m-file written up. I finished the MA part on Sunday, and just finished the full ARX code a few hours ago. I'll test it all tomorrow.
Return of the SLM. One of the other grad students at this workshop had a working experiment very similar to mine, but using SLMs as disturbance generators. My advisor was intrigued, especially since there seemed to be an easy way to use it in Matlab, and it now looks like we'll be ordering one of these puppies soon. What exactly I'll use it for I don't know, but maybe I can put it to work mitigating the static bias.
Sunday, March 28, 2010
Moar Modes
Sure enough, the problem with the poke matrix in this new configuration was caused by the beam size. With the bias voltage the beam was much more compressed than with a general random command. So when the bias wavefront was subtracted from the residual wavefront to get a measurement of the DM phase, it was only modifying the center of the residual.
I fixed this by adjusting the optics to make the beam larger when the bias is applied, something I probably should have done right away. The results look almost perfect, though slightly larger than before.


It's possible I could make the modes even smaller by moving stuff around, but at this point I don't want to mess with it any more. I'll settle for Amish perfect.