My advisor's made it clear that speeding up the SLM is a low priority at the moment, and I'm to focus on getting the DM and WFS working faster.
I suppose this makes sense, but accomplishing those things means a lot of waiting around to hear back from the manufacturer. In the meantime I'm going to try my hand at writing some basic adaptive controllers using the same architecture as in the beam pointing experiments. I already have a Simulink model that actually runs the experiment, so instead of screwing around with the SLM, for the time being I can just add desired disturbance wavefronts to the measurement. From a control perspective this shouldn't make any difference, and it'll let me get my feet wet with some adaptive methods. I have an identified disturbance model that can spit out disturbances with realistic statistics, so doing this should be "trivial."
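The idea above can be sketched in a few lines. This is a minimal stand-in, not the actual Simulink setup: the AR coefficients below are made up for illustration (the real identified disturbance model would supply them), and the plant is abstracted to unity gain so the disturbance is injected directly at the sensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2nd-order AR disturbance model standing in for the
# identified model; these coefficients are invented for illustration.
a1, a2, sigma = 1.6, -0.64, 0.05

def disturbance(n_steps):
    """Generate a correlated disturbance sequence from the AR(2) model."""
    d = np.zeros(n_steps)
    for k in range(2, n_steps):
        d[k] = a1 * d[k - 1] + a2 * d[k - 2] + sigma * rng.standard_normal()
    return d

n = 2000
d = disturbance(n)
y = np.zeros(n)   # measured residual
u = 0.0           # control command
gain = 0.5        # integrator gain

for k in range(n):
    y[k] = d[k] + u     # disturbance added directly to the measurement
    u -= gain * y[k]    # simple integrator control law

# The closed loop should shrink the residual relative to the raw disturbance.
print(np.var(d), np.var(y))
```

Since the disturbance enters additively at the measurement, swapping this integrator for an adaptive controller later shouldn't change anything structurally.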
Tuesday, August 18, 2009
Wednesday, August 12, 2009
Didn't Ben Franklin have syphilis?
As if I didn't have enough going on right now, I'd like to eventually write a complete set of HDR functions that I can actually use with a DSLR. I already have part of this done for use with AO, but there are still a few features left:
1. Image alignment: since I don't really carry a tripod around everywhere, it'd be nice to have an image alignment function that can automatically align several images taken by hand. If you consider the problem as sliding one image around on top of another, this is really a (convex?) optimization problem: find the optimal (x,y) displacement that minimizes some alignment metric, like the mean pixel difference or something. I think there are a couple algorithms out there that do this by gradient descent, so I'll have to look those up, but the general idea is simple. The problem with HDR is that each image has a different exposure, so somehow that has to be corrected. Maybe weighting each pixel by its intensity or something to ignore saturated areas...
2. Creating an HDR image map: basically already done following the method by Debevec.
3. Tone mapping: there are a boatload of algorithms for this, and I'm still not sure which one's the simplest to implement and yields decent results. Right now the leading candidate is Gradient Domain HDR Compression (Fattal 2002) since the results look pretty good and I understand most of the paper.
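The alignment step (item 1) can be sketched pretty directly. Instead of gradient descent, this is a brute-force search over integer shifts minimizing the mean absolute pixel difference, with a saturation mask so blown-out areas don't bias the metric; normalizing each frame by its max is a crude stand-in for real exposure compensation. The function name and parameters here are my own invention.

```python
import numpy as np

def align_offset(ref, img, max_shift=8, sat=0.95):
    """Find the integer (dx, dy) shift of img that best matches ref.

    Brute-force search over shifts, minimizing the mean absolute
    pixel difference. Saturated pixels (common across HDR exposures)
    are excluded via a validity mask. Images are normalized by their
    max first so differently exposed frames are roughly comparable.
    """
    ref = ref / ref.max()
    img = img / img.max()
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            mask = (ref < sat) & (shifted < sat)
            err = np.abs(ref - shifted)[mask].mean()
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

# Synthetic check: shift a random image and recover the offset.
rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(np.roll(a, 3, axis=0), -2, axis=1)
print(align_offset(a, b))  # recovers (2, -3), undoing the applied shift
```

A gradient-descent version would be faster for big shifts, but for handheld bracketed shots the offsets are small enough that exhaustive search over a few pixels is probably fine.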
Who knows if I'll ever get around to this. I recently read a nice how-to on doing time-lapse videos using a Canon point-and-shoot, so if I could somehow combine that to make HDR time-lapses it could be epic.
