Sunday, March 28, 2010

Moar Modes

Sure enough, the problem with the poke matrix in this new configuration was caused by the beam size. With the bias voltage applied, the beam was much more compressed than with a general random command, so when the bias wavefront was subtracted from the residual wavefront to get a measurement of the DM phase, the subtraction only modified the center of the residual.
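To pin the failure mode down in numbers, here's a minimal numpy sketch; the grid size and beam footprints are made up:

    import numpy as np

    # Hypothetical phase maps on the same 64x64 reconstruction grid (sizes made up).
    rng = np.random.default_rng(0)
    residual = rng.standard_normal((64, 64))      # phase with a random command applied (large beam)

    # The compressed bias beam only illuminates the center of the frame,
    # so its reconstructed phase is zero everywhere else.
    bias = np.zeros((64, 64))
    bias[24:40, 24:40] = rng.standard_normal((16, 16))

    dm_phase = residual - bias                     # only the central patch actually changes
    print(np.count_nonzero(dm_phase != residual))  # 256 of 4096 pixels modified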

I fixed this by adjusting the optics to make the beam larger when the bias is applied, something I probably should have done right away. The results look almost perfect, though slightly larger than before.

It's possible I could make the modes even smaller by moving stuff around, but at this point I don't want to mess with it any more. I'll settle for Amish perfect.

Wednesday, March 24, 2010

The Happening

Back from the North, I've spent the last couple of days using some new lenses to reduce the beam size. So far so good: I've gotten the unmolested beam down to around 5 mm in diameter, half what it was before. With this change almost the entire beam fits into the WFS measurement area I'm using (around 1000x800 px, usually sub-sampled).

It's taken some tinkering, but I've been able to get a recognizable poke matrix out of this configuration.

Notice how nearly the full extent of the modes is in the frame, much closer to the theoretical version. It's not all rainbows and puppies, though, as you can see in this comparison between actuator influence functions (for the same actuator) using the small and large beams.

Notice the depression surrounding the peak when the smaller beam is used. I'd expect the peak to be narrower since the beam is condensed, but the depression is unexplained. Maybe related to that is the ring that often appears around the modes in the modal poke matrix.

I think this crap shows up because the beam diameter really changes when commands are applied, and the effect is more noticeable when the whole beam is in the WFS frame. Essentially, the problem is the assumption that each WFS sub-aperture measures the same area of the beam, independent of the command. In reality, the beam changes shape with a changed wavefront, so the sub-apertures are actually measuring different parts of the beam each time a new command is generated. For small commands, or when the beam is much larger than the measurement area, this isn't really a problem, but I think it's what's causing the distortions here.
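To make the registration problem concrete, here's a toy calculation (the sub-aperture spacing and beam sizes are invented): with a fixed grid of sub-aperture centers, the normalized beam coordinate each one samples shifts whenever the beam diameter changes with the command.

    import numpy as np

    # Fixed sub-aperture centers across the WFS frame, in mm from the beam axis
    # (spacing and beam sizes are invented for illustration).
    centers = np.linspace(-4.0, 4.0, 9)

    for radius in (5.0, 4.0):   # beam radius before/after a command compresses the beam
        # Normalized beam coordinate (1.0 = beam edge) that each sub-aperture samples.
        print(radius, np.round(centers / radius, 2))
    # The same physical sub-aperture lands on a different part of the beam each time,
    # which is exactly the assumption the poke matrix bakes in.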

Surely someone must have noticed this before, but as far as I can tell from a 30-second Google Scholar search, ignorance is bliss and no one has the stones to address it. It's a tough problem, since I suspect the distortion isn't easy to predict analytically. I'm looking up a few papers on beam shaping with deformable mirrors, so maybe there's a way to do some kind of pre-warping of the beam as a function of the command, sort of like pre-warping in the bilinear transform. Maybe then the sub-aperture measurements could be linked to a specific part of the beam and used to interpolate a wavefront profile. Maybe it's possible to linearize this and make it just a series of simple transformations. Maybe I've had a few too many rusty nails while writing this.
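For the record, the analogy I have in mind: the bilinear transform warps the frequency axis, so you pre-warp the critical frequency before designing and the warping cancels out. Something similar might work for the beam coordinate as a function of the command. The standard pre-warp, as a one-liner:

    import numpy as np

    def prewarp(omega_d, T):
        """Analog frequency to design at so the bilinear transform (sample time T)
        lands it exactly on the desired digital frequency omega_d."""
        return (2.0 / T) * np.tan(omega_d * T / 2.0)

    print(prewarp(np.pi / 4, T=1.0))  # design here instead of at pi/4 rad/s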

Either way, el hefe thinks this might be a reason we couldn't generate a decent poke matrix from the HEL simulation we have and instead had to rely on a theoretical version. I shall vocalize my pre-warping idea to him tomorrow and report back.

Over and out.

Thursday, March 11, 2010

3.11.10

Here's an interesting look at the different sub-aperture sampling resolutions compared to the ideal case using theoretical actuator influence functions.

The clear takeaway is that the 9x11 array is essentially crap: it doesn't buy much speed over the 18x22 case at the current camera frame rates. Also, it'd be nice to get more of the beam measured, since there's clearly some peripheral detail missing. I've ordered some new lenses that should help do that...which means more alignment.

In the meantime, I'm still trying to do some kind of system ID using the experiment as a channel, but I'm running into some mysterious Simulink problems that haven't been sorted out yet. I think the idea is solid, though. The plan is to generate a random command sequence by passing white noise through an FIR filter; the game is then to identify the filter coefficients using only the WFS data and the original white-noise sequence. This is essentially just a basic channel identification problem, except the experiment is shoehorned in between. In a perfect world the WFS would perfectly reconstruct the modal commands exiting the FIR filter, so identifying the filter coefficients would be just a least-squares problem. Here, though, the original modal commands are estimated from the WFS data (the estimate is itself a least-squares approximation based on the identified poke matrix). The question is how close the FIR estimate will be given the finite resolution of the WFS. How would changing the WFS resolution affect convergence if an LMS or RLS algorithm is used instead of a batch process? Does this even make sense?
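Stripped of the hardware, the batch version is just least squares on a convolution. Here's the idealized loop as a numpy sketch; the filter taps, noise level, and the stand-in for the WFS reconstruction error are all invented:

    import numpy as np

    rng = np.random.default_rng(1)
    h_true = np.array([1.0, -0.5, 0.25])      # unknown FIR coefficients to identify
    n, p = 2000, len(h_true)

    w = rng.standard_normal(n)                # known white-noise driving sequence
    cmd = np.convolve(w, h_true)[:n]          # modal commands actually sent to the DM

    # Stand-in for the experiment: the WFS-based estimate of the modal commands
    # is the truth plus reconstruction error from the finite sub-aperture grid.
    cmd_hat = cmd + 0.1 * rng.standard_normal(n)

    # Regressor of delayed copies of the known input; solve for the taps.
    X = np.column_stack([np.concatenate([np.zeros(k), w[:n - k]]) for k in range(p)])
    h_est, *_ = np.linalg.lstsq(X, cmd_hat, rcond=None)
    print(np.round(h_est, 3))                 # lands close to [1, -0.5, 0.25]

The open question is exactly how the recovery degrades as the reconstruction error grows with a coarser sub-aperture array.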

I was attempting to explain this to my advisor this afternoon, but he instead thought I was trying to identify an FIR model for the DM itself, since it looks like there are some dynamics to it after all. That actually might not be a bad experiment to try either. The simplest case would involve identifying a moving-average model for the DM, i.e., the current wavefront would be some linear combination of current and past inputs (with a poke matrix stuck in between). If it's true that the DM dynamics are negligible, then the MA model should only have order 1. On the other hand, it probably doesn't make sense to try to identify a full ARMA model.
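The MA version is the same regression with the command history as the regressor; if the mirror really is close to static, everything past the first couple of taps should come back near zero. A single-channel sketch with a fake one-sample lag standing in for the DM (all numbers invented):

    import numpy as np

    rng = np.random.default_rng(2)
    n, q = 2000, 4                        # samples and MA order to try
    u = rng.standard_normal(n)            # random command sequence (one channel)

    # Fake DM: mostly instantaneous response plus a small one-sample lag.
    y = 1.0 * u + 0.15 * np.concatenate([[0.0], u[:-1]])

    # Regress the measured wavefront on current and past commands.
    U = np.column_stack([np.concatenate([np.zeros(k), u[:n - k]]) for k in range(q)])
    b, *_ = np.linalg.lstsq(U, y, rcond=None)
    print(np.round(b, 3))                 # ~[1.0, 0.15, 0, 0]: higher taps vanish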

Experimentally, the problem with this is the frame rate of the camera. In the past, catching the mirror dynamics required frame rates higher than what I'm using right now to read from the WFS, so I might have to use a pretty reduced measurement area to see anything interesting. That might not be a problem, though, if I'm applying random commands to the entire mirror surface. I wouldn't necessarily even have to use the modes, since I could do a multichannel identification based just on the number of lenslets used.

Anyway, tomorrow I leave for Wyoming on a family vacay, so I'll have 32 hours of driving over the next week to iron these issues out en cerebrum before getting to work in front of the keyboard.

Tuesday, March 09, 2010

3.9.10

Downsampling the sub-aperture array worked pretty well at reducing the computation time for constructing the slope vector. The image itself is still pretty large, around 900x900 px, but this saves me from having to screw with the beam diameter further. An image of that size allows for roughly a 36x44 lenslet array. Sampling every other sub-aperture essentially doubles the speed, enough that capturing an image and computing a downsampled, 18x22 slope vector can happen at around 30 Hz, near the 40 fps frame rate of the camera at that resolution/shutter speed. I can get even closer by going to a 9x11 array, but the savings aren't great, and the resulting phase profiles start to look pretty crappy at higher spatial frequencies.
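The centroiding is independent per sub-aperture, which is why skipping lenslets pays off almost linearly in the slope-vector computation. Roughly what the strided version does, as a numpy sketch (the pixels-per-lenslet count and stride are placeholders, not the real optics):

    import numpy as np

    def slope_vector(img, px=20, stride=2):
        """Centroid slopes for every `stride`-th sub-aperture of a WFS image."""
        ny, nx = img.shape[0] // px, img.shape[1] // px
        yy, xx = np.mgrid[0:px, 0:px]
        center = (px - 1) / 2
        slopes = []
        for i in range(0, ny, stride):
            for j in range(0, nx, stride):
                sub = img[i * px:(i + 1) * px, j * px:(j + 1) * px]
                s = sub.sum()
                if s > 0:  # skip dark sub-apertures
                    # Centroid offset from the sub-aperture center, in pixels.
                    slopes.append(((xx * sub).sum() / s - center,
                                   (yy * sub).sum() / s - center))
        return np.asarray(slopes).ravel()

    frame = np.random.rand(880, 900)      # stand-in for a ~900x900 px WFS image
    print(slope_vector(frame).shape)      # stride 2 in both axes: 1/4 the sub-apertures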

I think this is about as fast as I can expect to go without any (drastic?) changes to the beam size or any additional info on the DM dynamics. The DM presents an interesting problem: I can really only see its dynamics with a very high WFS frame rate (100+ fps), which means a small image with just a few sub-apertures. The trouble is that getting meaningful modal information is hard with so few sub-apertures, so computing an actual frequency response using the modes as control channels would be tough.

I want to move on to some controls applications. The first thing I want to try is generating random disturbance commands, passing them through a low-order FIR filter, and applying them to DM61, first with a single modal channel and then more. Eventually, I'd like to read the resulting open-loop WFS data and try to identify the FIR coefficients in some way, either adaptively or using a subspace ID algorithm. This is basically the first step in designing a time-invariant LQG controller, the topic of the first paper I can probably get out. One interesting question in the adaptive case would be to investigate how the convergence rate is affected by the density of the sub-aperture array. How badly does downsampling screw up the ID? I suspect the effect is significant at higher resolutions.
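For the adaptive flavor, the batch least squares gets replaced by a running LMS update on the taps, and convergence then depends directly on how noisy the WFS-based output estimate is, which is where sub-aperture density should enter. A bare-bones single-channel sketch (step size, taps, and noise level all invented):

    import numpy as np

    rng = np.random.default_rng(3)
    h_true = np.array([1.0, -0.5, 0.25])  # unknown disturbance filter
    p, mu = len(h_true), 0.01             # filter length and LMS step size

    h_est = np.zeros(p)
    buf = np.zeros(p)                     # window of the most recent noise samples
    for t in range(20000):
        buf = np.roll(buf, 1)
        buf[0] = rng.standard_normal()    # new white-noise input
        d = h_true @ buf + 0.1 * rng.standard_normal()  # noisy WFS-side output
        e = d - h_est @ buf               # prediction error
        h_est += mu * e * buf             # LMS tap update
    print(np.round(h_est, 2))             # converges toward [1, -0.5, 0.25]

A coarser sub-aperture array should show up as a bigger noise term, hence slower, noisier convergence.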

Friday, March 05, 2010

3.4.10

I talked to my advisor about my concern that I don't have any papers on deck now that my experiment is essentially in place. He didn't seem that worried, so I guess that's all that matters, and he assured me that we'd get things going soon enough. He mentioned that the easiest publishable experiment to work toward would be designing an LQR controller using an identified noise model. However, he's still stressing the idea of running the experiment in real time, so I'm going to have to either make some real advances there or exhaust all the practical possibilities.

I've managed to get the Simulink version of the PI controller working about as fast as the M-file version. Nominally, sending commands to both actuators each iteration, I can run the thing at around 4 Hz. Pretty pathetic, but there are a few things I can work on to speed things up.

WFS: the current sub-aperture array size, around 30x45, encapsulates most of the actuator influence functions for the beam size I'm using right now. In partial frame mode I can capture images of that size at around 45 fps. Since I'm nowhere near that, and reducing the beam size further would be a pain in the ass, this serves as an upper bound on performance for now. Still, with that many lenslets, computing the slope vector can only be done at around 10 Hz. This number of sub-apertures is much, much higher than what others use, so one idea is to downsample the image and compute centroids for only a subset of lenslets, like every third or so. I think this would give a roughly linear improvement in the slope-vector calculation.

DMs: the mirrors clearly have some dynamics, so there has to be some pause between sending a command and reading the WFS. When computing a poke matrix, a pause of less than 0.08 s yields crappy results, but the PI controller works pretty well with no pause at all.

Tomorrow I'd like to look at the downsampling idea, since that shows the most promise. There's also the idea of doing the slope calculation in an M-file S-function instead of an EML block. I doubt that'd really improve things much, so it's on the back burner for now.