Tuesday, March 31, 2009

3.30.09

Today's the first day of the spring quarter, which means I had to spend most of the day fussing with bullshit errands like getting my parking permit and paying rent. Other than that, I spent some time taking a break from SPGD and working on my interior point idea.

Basically, the general plan is to find the command vector c that minimizes the 2-norm of the total wavefront. This requires the linear model of the DM using the poke matrix, G, and can be generalized using modes if desired. Without saturation, the optimal DM surface would be the projection of the static wavefront onto the range space of G. But limits on the actuator commands add inequality constraints. This means that the true optimal command vector must lie in the intersection of the feasible set and the orthogonal complement of the null space of G. Note that if modes are used, the inequality constraints involve the modal poke matrix, since you can't place explicit constraints on each modal command.
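Written out, it's just a box-constrained least-norm fit (a minimal statement, assuming the sign convention where the residual wavefront is w + Gc and the [0, 255] DAC limits mentioned elsewhere in these posts):

```latex
\min_{c}\; \lVert w + G c \rVert_2
\quad \text{s.t.} \quad 0 \le c_i \le 255,\; i = 1,\dots,n
```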

This kind of optimization problem can be solved by replacing the inequality constraints with logarithmic barrier functions that are tacked onto the objective. These beasts are designed to go to infinity as the limits are approached. By eliminating the inequality constraints, we're left with (in this case, with no equality constraints) an unconstrained problem. The new objective function to be minimized is a weighted sum of the wavefront norm and the barrier terms. This unconstrained problem is then solved with Newton's method for a sequence of increasing barrier weights, each solve giving a more accurate approximation of the constrained optimum.

Wow, that's possibly the worst explanation of the barrier method ever committed to words. See the notes on Nonlinear Optimization for the real deal.
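For what it's worth, the whole thing fits in a few lines of MATLAB. A minimal sketch, assuming the box-constrained problem above; G and w are random placeholders, and the stopping tests are crude:

```matlab
% Barrier-method sketch for: minimize ||w + G*c||^2 s.t. 0 <= c <= 255.
% G and w are placeholders standing in for the poke matrix and the
% measured static wavefront slopes.
G    = randn(64, 31);              % placeholder poke matrix
w    = randn(64, 1);               % placeholder static slopes
cmax = 255;
n    = size(G, 2);
c    = cmax/2 * ones(n, 1);        % strictly feasible starting point
t    = 1;                          % barrier weight
mu   = 10;                         % factor to tighten the barrier each pass

for outer = 1:8
    for newton = 1:20
        r  = w + G*c;
        % gradient/Hessian of t*||r||^2 - sum(log(c)) - sum(log(cmax-c))
        g  = 2*t*(G'*r) - 1./c + 1./(cmax - c);
        H  = 2*t*(G'*G) + diag(1./c.^2 + 1./(cmax - c).^2);
        dc = -H \ g;               % Newton step
        s  = 1;                    % backtrack until strictly feasible
        while any(c + s*dc <= 0) || any(c + s*dc >= cmax)
            s = s/2;
        end
        c = c + s*dc;
        if abs(g'*dc) < 1e-8       % crude Newton stopping test
            break
        end
    end
    t = mu*t;                      % tighten the barrier
end
```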

Thursday, March 26, 2009

3.25.09

Still playing around with this moment of inertia idea. There've been some hiccups with the details along the way, but I think I can get it to work somewhat with more tinkering. Whether or not it's better than just the max intensity or something like that remains to be seen.

I also started thinking seriously about coming up with a barrier method optimization based on the poke matrix. This should be fairly straightforward to implement once I get all the math bs worked out. The algorithm could be run without any output to the DM, or each command encountered on the central path could be tested experimentally and the resulting wavefront used as a stopping criterion.

Yesterday, because my parking pass for the quarter expired, I had to park literally in another zip code and walk. And this is during spring break too. Why does renewing a permit require 40,000 signatures and forms?

Monday, March 23, 2009

3.23.09


Haven't had much luck getting the SPGD controller to work with just the maximum image intensity. The problem is that it simply searches around without actually maximizing anything. In desperation I went back and tried running it with a constant command across all actuators, essentially optimizing over a scalar value.

Since the optimization variable is a scalar in this case, we can look at the objective function graphically. Here's a plot of the maximum intensity vs. the constant command. Clearly it's not concave, although it would be quasiconcave if it weren't for that little bit beyond the max. There's an obvious maximum, but the problem lies in the flat area from 0 to ~180. When the gradient algorithm is in this region, small perturbations reveal no obvious ascent direction, so the algorithm gets stuck at the initial value.
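For reference, the scan behind that plot could be produced with something like this (a sketch; set_dm() and grab_frame() are hypothetical stand-ins for the hardware calls, and the 31-actuator count is an assumption):

```matlab
% Sweep a constant command across all actuators and log the peak pixel.
n_act = 31;
cvals = 0:255;
peak  = zeros(size(cvals));
for k = 1:numel(cvals)
    set_dm(cvals(k) * ones(n_act, 1));   % same DAC value everywhere
    img     = grab_frame();
    peak(k) = max(img(:));
end
plot(cvals, peak)
xlabel('constant DAC command'); ylabel('max intensity');
```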

It's clear from the images themselves that the beam tightens as the maximum command is approached, but the peak intensity doesn't necessarily increase measurably. I'm trying now to come up with another objective function that measures the spread of the beam based on a "moment of inertia" calculated from the image.

The results today weren't great, but I was using the center of the frame as the origin for the inertia calculation rather than the "center of mass." I'll try changing this tomorrow, maybe by using the parallel axis theorem to save computational time.
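Here's a sketch of what I have in mind, with the centroid shift done via the parallel axis theorem (grab_frame() is a hypothetical camera read):

```matlab
% Beam-spread "moment of inertia" about the intensity centroid, via the
% parallel axis theorem: I_centroid = I_origin - m_total * d^2.
img = double(grab_frame());
[rows, cols] = ndgrid(1:size(img, 1), 1:size(img, 2));
m  = sum(img(:));                               % total "mass"
rc = sum(rows(:) .* img(:)) / m;                % centroid row
cc = sum(cols(:) .* img(:)) / m;                % centroid column
I0 = sum((rows(:).^2 + cols(:).^2) .* img(:));  % moment about the coordinate origin
Ic = I0 - m * (rc^2 + cc^2);                    % a tighter beam gives a smaller Ic
```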

I got around 50% on my final. About average. Along with Adaptive Filtering and Linear Programming, that class should establish my minor field in EE, so I'm technically finished with classes forever.

Friday, March 20, 2009

Capitulation II

I've probably taken around 50 final exams over the years, so you'd think by now a career student like myself would have a solid method to study for one. But you'd be wrong. Every time is a masochistic adventure, a battle of attrition against your own ability to concentrate and ingest caffeine. For most people there's no finite amount of studying that will guarantee a decent grade; going over every page, every example, and every hour of lecture notes is not practical. The result is a balance between competitiveness and sanity. Unfortunately engineering students are famously competitive.

It's even worse in grad school. The exam I'm facing tomorrow is worth more than 80% of my final grade. Does that mean I should spend 4x the time studying for this beast as I spend on everything else? The only comfort I have is knowing that if I've gotten this far, tomorrow should be no problem.

Update: 26/50...good enough for an A- hah.

Wednesday, March 18, 2009

Close

After a lot of trial and error, I discovered another problem with the way I was applying random perturbations. To avoid saturating the actuators, I was first generating random DAC commands in the range [0,255], and then calculating the corresponding perturbation from that. The problem with this is that the resulting perturbations aren't zero mean, so if the current command is, say, 200, then it's unlikely a positive perturbation will be generated. Thus, the gradient was effectively being estimated with information in only one direction.

I fixed this by randomly generating the perturbations themselves, rather than generating absolute commands and back-calculating the perturbation values. I'll have to play around with the parameters to make sure they're actually zero mean. Saturation might just be something I'll have to live with, especially when generating random modal commands. The ideal method for generating these things might actually require some thinking. If the current command is close to 255, for example, then having too many perturbations that are saturated is a waste.
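The fix amounts to something like this (a sketch; sigma, the uniform distribution, and the paired +/- probe are placeholder choices):

```matlab
% Generate zero-mean perturbations directly instead of back-calculating
% them from random absolute commands.
n_act = 31;                                % actuator count (placeholder)
c     = 128 * ones(n_act, 1);              % current command (placeholder)
sigma = 10;                                % perturbation size (placeholder)
dc    = sigma * (2*rand(n_act, 1) - 1);    % uniform on [-sigma, sigma]: zero mean
c_pos = min(max(c + dc, 0), 255);          % clipping can still saturate near limits
c_neg = min(max(c - dc, 0), 255);          % paired +/- probe for the gradient estimate
```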

With this change I tried running the SPGD controller with a single mode (the focus mode), commanding all actuators with the same value. Luckily it converges in steady state to approximately the same control command as the PI controller, around 199. However, the command jumps to a much lower value every other iteration, so I think I'll need to play around with the gain to tamp that down.

I witnessed similar behavior when feeding back the intensity instead of the WFS data. I'm hopeful that I can get it working by just picking the right gain.

I'm close...I can smell it.

Tuesday, March 17, 2009

3.16.09

So far, feeding back the max intensity in the SPGD algorithm has resulted in crap. Basically the intensity just fluctuates randomly around the initial value no matter how many iterations or perturbations. Either the gradient isn't being identified correctly or the measure of the max intensity doesn't provide sufficient information for controlling it.

Irritatingly, even though the intensity isn't maximized, the wavefront variance (which is just measured, not used for control) is reduced as if it were the objective. This suggests that the algorithm might be, in both cases, just pushing all the actuators to their maximum value, which happens to result in a relatively low WFS variance.

Tomorrow I'll test both cases using the first mode only. Doing the same with the PI controller should indicate what the optimal value is, and it definitely shouldn't be at the maximum value. If the SPGD algorithm surpasses that value in steady state I'll know that something's wrong.

I spotted some lady with a book on "Medieval Drama" today in Taco Bell. It was thicker than my adaptive filtering book.

Thursday, March 12, 2009

3.12.09

The maximum intensity is the simplest objective function I can think of from images of the beam profile. Comparing it to the wavefront norm using the PI controller (still the best algorithm right now) shows pretty decent correlation between the two.
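The comparison itself is basically one line (peakI and wfnorm here are hypothetical per-iteration logs from a PI run):

```matlab
% Correlation between logged peak intensity and wavefront norm.
R = corrcoef(peakI, wfnorm);
fprintf('correlation coefficient: %.3f\n', R(1, 2));
```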



As expected, the maximum intensity doesn't correspond to the minimum wavefront error. This is because the code I wrote to read the WFS images doesn't use the exact lenslet locations, so the measured wavefront isn't exactly accurate. Also, the camera isn't placed exactly in the focal plane. These are problems that'll have to be fixed once I get the new sensor.

Wednesday, March 11, 2009

3.10.09


It turns out that using the full wavefront norm as the cost function works pretty well with a limited number of control modes. Previously, if only the first m modes were used, I was projecting the slope vector S onto the modes and feeding back the norm of the modal coefficients U(:,1:m)\S as the cost, where U is the modal poke matrix. Apparently this representation fails to capture enough detail in the cost function perturbations to really achieve good convergence.
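In code, the two candidate costs differ only in whether the slopes get truncated to m modes first (placeholder sizes below: 64 slopes, 31 modes):

```matlab
U     = randn(64, 31);     % placeholder modal poke matrix
S     = randn(64, 1);      % placeholder slope vector
m     = 25;                % number of retained modes
a     = U(:, 1:m) \ S;     % least-squares modal coefficients
J_old = norm(a);           % old: norm of the truncated modal vector
J_new = norm(S);           % new: full wavefront norm, no truncation
```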

With the full norm, convergence is pretty good even when using a limited number of modes for control, especially when enough perturbations are generated per iteration to allow a least-squares approximation of the gradient.



Once I get a faster WFS this algorithm should be golden. First though I have a host of tests to run to characterize the DM in more detail. Is the response really linear wrt the square of the voltage commands? Does superposition really hold? Should I use the given influence functions in estimating the poke matrix? We shall see.
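A crude version of the superposition test might look like this (set_dm() and read_slopes() are hypothetical stand-ins for the hardware interface; the actuator indices and poke amplitude are arbitrary):

```matlab
% Poke two actuators separately and together, then compare slope responses.
n_act = 31;
ci = zeros(n_act, 1); ci(3)  = 100;   % first actuator poked alone
cj = zeros(n_act, 1); cj(12) = 100;   % second actuator poked alone
set_dm(ci);      si  = read_slopes();
set_dm(cj);      sj  = read_slopes();
set_dm(ci + cj); sij = read_slopes();
fprintf('relative superposition error: %.3g\n', norm(sij - (si + sj)) / norm(sij));
```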

Next I'm going to try constructing a cost function using image data instead. Again, this is something that's been done in papers, so it should be possible here. The first step is to find a function (e.g. peak intensity, intensity variance, etc.) that has a positive correlation with the WF norm that I'm using now.

Only one more 236 lecture left in the semester. How will I get by without my biweekly dose of olfactory stimulus? I might have to start huffing some mouldering cheese as a replacement.

Tuesday, March 10, 2009

3.9.09

Was able to get the modal SPGD working using a least-squares estimate of the gradient, instead of the stochastic nonsense in the original algorithm. It's not clear yet if this is a superior method or not, but as far as I know it doesn't depend as explicitly on the statistics of the perturbations (the original algorithm required delta-correlated perturbations).
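For reference, the least-squares estimate boils down to a stacked linear solve (a sketch; measure_cost() is a hypothetical apply-command-and-read-cost routine, and all the parameters are placeholders):

```matlab
% Stack K zero-mean perturbations as the rows of P and the measured cost
% changes in dJ, then solve P*g ~= dJ for the gradient g.
n_act = 31; sigma = 10; gain = 0.5; K = 40;   % placeholder parameters
c  = 128 * ones(n_act, 1);                    % current command (placeholder)
P  = sigma * (2*rand(K, n_act) - 1);          % one perturbation per row
dJ = zeros(K, 1);
J0 = measure_cost(c);
for k = 1:K
    dJ(k) = measure_cost(c + P(k, :)') - J0;  % first order: dJ(k) ~= P(k,:)*g
end
g = P \ dJ;                                   % least-squares gradient estimate
c = c - gain * g;                             % descent step on the commands
```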

Convergence seems to be very dependent on the number of modes used. Even lowering it from 31 to 25 results in crap steady-state performance. I might try feeding back the non-modal wavefront norm as the cost function, rather than the wavefront projection onto a number of modes, since it's possible the problem is there rather than in the gradient estimation. This might make more sense since there's no advantage to approximating the cost with a limited number of modes when you can get the complete value for free.

In general though, steady-state performance isn't as good as with the integral controller, and seems to depend heavily on the gain. This seems to indicate that either (a) the cost function isn't truly convex or (b) the estimate of the gradient is too shitty to converge to the theoretical minimum. The real advantage, if there is one, will be in optimizing a cost function that can be measured from a regular camera, and not a WFS. There are some papers where this is done, so I should be able to do it eventually.

Ideally, when I have the better sensor, an accurate poke matrix will let me calculate the optimal actuator command without any iterating at all.

Friday, March 06, 2009

Reason #2

Reason #2 I'm in grad school: never underestimate the allure of free pizza in today's harsh economic environment.

Thursday, March 05, 2009

3.5.09

More 236 lecturing today...more holding my breath and breathing through my mouth. We're finally going over inequality constrained optimization, which is exactly the problem I'm facing now for correcting static disturbances. I still think it would be bad ass if I could one day implement some kind of interior point algorithm in real time, even if the performance would be crap with dynamic noise. Some other stuff:

- Finished writing the modal SPGD script. Performs similarly to the non-modal case; not surprising, since using all the modes reconstructs the commands exactly. The main difference is that the cost function being fed back is now the norm of the modal vector, not simply the wavefront error itself.

- Normalizing the estimated gradient helps performance greatly, although so far it's still not as good as when using only positive perturbations. Naturally the gain has to be changed appropriately, but with these descent algorithms the direction is really what matters (see the sketch after this list). Here's a comparison between the unnormalized (v=0.011) and normalized (v=100) cases:



- I'm now looking into estimating the gradient using least-squares, but I'm not sure if this is equivalent, worse, or better than the current method. Comparing methods by estimating a known gradient (of a random quadratic function) wasn't conclusive. Theoretically, 31 perturbations would be needed to really identify the 31 entries of the gradient, which means lots of images and lots of time per iteration. However, if m modes are used instead, the gradient would only have m terms that need to be identified, so there might be some benefit to using the modes there. All this will hopefully be faster when I have the new SH sensor.

- If I can find a good way to estimate the Hessian at each iteration, I could use the full Newton's method with all the corresponding bells and whistles. I should probably look up some more recent papers.
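Here's the normalized update mentioned in the list above, as a sketch (g, v, and c are placeholders; only the direction of the estimated gradient survives the normalization):

```matlab
% SPGD update with a normalized gradient: the step length is set
% entirely by the gain v.
g = randn(31, 1);              % placeholder estimated gradient
v = 100;                       % gain for the normalized case (as above)
c = 128 * ones(31, 1);         % placeholder current command
c = c - v * (g / norm(g));     % fixed-length step along -g
c = min(max(c, 0), 255);       % clip to the DAC range
```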

Monday, March 02, 2009

Reason #1

Reason #1 I'm in grad school: so I can take off on arbitrary weekdays and head to places like this:




I'm often asked by amused relatives why I put up with long hours, low respect, and pay around the level of an assistant McDonald's manager when I could have snagged a cushy engineering job straight outta undergrad. This is going to be my attempt to prepare for those questions.