ALE
Image Processing Software: Deblurring, Anti-aliasing, and Superresolution
Iteration proceeds by repeatedly applying an update operator U, starting with Ifinal:

J0 = Ifinal
J1 = UJ0
J2 = UJ1
...
Jk = UJk-1
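This iteration scheme can be sketched as a simple loop; the function and parameter names below are illustrative, not taken from the ALE source:

```python
def iterate(j0, update, k):
    """Apply the update operator U to the initial estimate J0, k times.

    j0     -- initial estimate (J0 = Ifinal)
    update -- the update operator U (one correction pass)
    k      -- number of iterations
    """
    j = j0
    for _ in range(k):
        j = update(j)   # J_{n+1} = U J_n
    return j
```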
Given transformations Px and Bx, linear exposure adjustment Ex, linear and non-linear convolutions L and N, gamma correction G, discretization operator D, and extension operator C, the imaging process can be modeled, starting with s, a discrete approximation of the scene, as:
dx = DNCGDLExBxPxCs
Note the extension operator to the left of s, which was not present in the similar equation appearing on the main page. Since we are using only a discrete approximation of the scene, it must first be made continuous, in order to fit within the framework developed on the main page. As noted before, the second extension operator, between G and N, is included for convenience of implementation, and does not restrict the generality of the model.
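To make the chain of operators concrete, the toy 1-D sketch below composes stand-ins for Ex, L, G, and N. The kernels, gamma exponent, exposure value, and function names are illustrative assumptions; Px, Bx, and the discretization/extension pairs D and C are treated as identity:

```python
import numpy as np

def convolve_with(kernel):
    """Return a 1-D convolution operator (a stand-in for L or N)."""
    return lambda img: np.convolve(img, kernel, mode='same')

def gamma(img, g=2.2):
    """Gamma correction G (non-linear)."""
    return np.clip(img, 0.0, None) ** (1.0 / g)

def expose(img, e=1.5):
    """Linear exposure adjustment E_x."""
    return e * img

def forward_model(s, L_kernel, N_kernel):
    """Toy forward model: E_x, then L, then G, then N.

    P_x, B_x, and the discretization/extension pairs D and C are
    treated as identity for brevity.
    """
    x = expose(s)                       # E_x
    x = convolve_with(L_kernel)(x)      # L (linear-colorspace blur)
    x = gamma(x)                        # G
    x = convolve_with(N_kernel)(x)      # N (non-linear-colorspace blur)
    return x
```

Note that G sits between the two convolutions, matching the placement of gamma correction between the linear and non-linear colorspace stages in the model above.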
Since pixel values in DLExBxPxCs are linear in pixel values of Cs, and hence linear in pixel values of s, and since, similarly, pixel values of dx are linear in pixel values of GDLExBxPxCs, we can define arrays of values a and b such that:
DLExBxPxCs(i, j) = ∑(i', j') ax,i,j,i',j' * s(i', j')

Hence, using G as a gamma correction function:

dx(i, j) = ∑(i', j') bx,i,j,i',j' * GDLExBxPxCs(i', j')

dx(i, j) = ∑(i', j') bx,i,j,i',j' * G(∑(i'', j'') ax,i',j',i'',j'' * s(i'', j''))

Note that, since G is non-linear, the inner sum cannot be moved outside of the enclosing parentheses.
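The non-interchangeability of G and the inner sum can be checked numerically: collapsing the two linear stages into a single linear operation and applying G afterward gives a different pixel value. A small sketch, with illustrative coefficient arrays and a square root standing in for G:

```python
import numpy as np

def G(v):
    """A non-linear stand-in for the gamma correction function."""
    return np.sqrt(np.clip(v, 0.0, None))

s = np.array([0.2, 0.8, 0.5])        # toy scene (illustrative values)
a = np.array([[0.5, 0.5, 0.0],       # linear-stage coefficients a
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
b = np.array([0.2, 0.6, 0.2])        # non-linear-stage coefficients b,
                                     # for a single output pixel

# Correct model: G applied between the two linear stages.
d_correct = b @ G(a @ s)

# What moving the inner sum outside would amount to: collapsing both
# stages into one linear operation, then applying G.
d_wrong = G(b @ (a @ s))
```

Since G is non-linear, `d_correct` and `d_wrong` disagree.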
Given the model described above, we now have two ways to calculate a pixel value dx(i, j): directly, from the input frame, or indirectly, from the scene approximation s. Hence, we can calculate an error between the two methods:
ex(i, j) = dx(i, j) - ∑(i', j') bx,i,j,i',j' * G(∑(i'', j'') ax,i',j',i'',j'' * s(i'', j''))
This model includes, as a special case, the scenario originally outlined by Irani and Peleg, which considered a single convolution step:

dx(i, j) = ∑(i', j') bx,i,j,i',j' * s'(i', j')

In this case, we can use a backprojection array c to produce an updated image Us':
Us'(i, j) = s'(i, j) + (1 / xmax) * ∑x ∑(i', j') cx,i,j,i',j' * ex(i', j')
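The single-convolution update can be exercised on a toy 1-D deblurring problem. The kernel, the backprojection choice (the flipped PSF, unscaled), and the helper names below are illustrative, not ALE's actual arrays:

```python
import numpy as np

def blur(img, psf):
    """Forward convolution (the single PSF of the Irani-Peleg model)."""
    return np.convolve(img, psf, mode='same')

def irani_peleg(frames, s0, bp, iters=50):
    """Single-stage Irani-Peleg iteration (illustrative sketch).

    frames -- list of (observed frame d_x, psf) pairs
    s0     -- initial scene estimate
    bp     -- backprojection kernel c (here: the flipped PSF, unscaled)
    """
    s = s0.astype(float).copy()
    for _ in range(iters):
        corr = np.zeros_like(s)
        for d, psf in frames:
            e = d - blur(s, psf)                   # e_x = d_x - simulated frame
            corr += np.convolve(e, bp, mode='same')
        s += corr / len(frames)                    # Us = s + (1/xmax) sum_x c*e_x
    return s

# Toy example: recover a spike blurred by a 3-tap kernel.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(9)
truth[4] = 1.0
d = blur(truth, psf)
est = irani_peleg([(d, psf)], np.zeros(9), psf[::-1], iters=200)
```

Each pass simulates the frame from the current estimate, forms the error, and back-projects it into the estimate; the simulated frame approaches the observed one as iterations proceed.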
For certain limited cases, Irani and Peleg have proved this update rule to converge exponentially to the original scene. However, since the model used by ALE includes two convolution steps that cannot, in general, be combined, ALE uses a different update rule.
The two-stage backprojection rule below could easily be extended to any arbitrary number of stages, but ALE currently supports only two stages, in accordance with the two-colorspace imaging model outlined above. The rule, using G-1 as an inverse gamma correction function, is:
Us(i, j) = s(i, j) + (1 / xmax) ∑x ∑(i', j') cx,i,j,i',j' * (G-1(GDLExBxPxCs(i', j') + ∑(i'', j'') fx,i',j',i'',j'' * ex(i'',j'')) - DLExBxPxCs(i', j'))
Where f is a second backprojection array. Schematically, the update process looks something like the diagram below, based on an excerpt from the ALE source code. The exposure re-estimation step shown in the diagram has not been described here, but is available for inspection in the source. Also, since ALE stores images internally using a linear representation, they must first be converted back to a non-linear representation, as shown at the very bottom of the diagram.
 * The algorithm in the paper by Irani and Peleg looks something like this
 * (PSF' is the backprojection kernel, and corresponds to what the authors of
 * the paper call AUX):
 *
 * ===============================================================
 *     Forward         Backward          Binary Operators
 * ---------------------------------------------------------------
 *
 *    scene(n) ------> scene(n+1)        <--- summation
 *
 *       |                 ^
 *       |                 |
 *      PSF               PSF'
 *       |                 |
 *       |        ---------+             <--- difference
 *       V       /         |
 *
 *  simulated(n)        real
 *
 * ===============================================================
 *
 * The above approach assumes a single colorspace representation.  However,
 * consumer cameras sometimes perform sharpening in non-linear colorspace,
 * whereas lens and sensor blurring occurs in linear colorspace.  Hence, there
 * can be two colorspaces involved; ALE accounts for this with linear and
 * non-linear colorspace PSFs.  Hence, the algorithm we use looks something
 * like:
 *
 * ===============================================================
 *     Forward         Backward          Binary Operators
 * ---------------------------------------------------------------
 *
 *    scene(n) -----> scene(n+1)         <--- summation
 *
 *       |                 ^
 *       |                 |
 *     LPSF              LPSF'
 *       |                 |
 *       |        ----------+            <--- difference,
 *       V       /          |                 exposure
 *                                            re-estimation
 * lsimulated(n)       lreal(n)
 *
 *       |                 ^
 *       |                 |
 *  unlinearize        linearize
 *       |                 |
 *       V                 |
 *
 *  lsim_nl(n) -----> lreal_nl(n)        <--- summation
 *
 *       |                 ^
 *       |                 |
 *     NLPSF             NLPSF'
 *       |                 |
 *       |        ----------+            <--- difference
 *       V       /          |
 *
 * nlsimulated(n)      real_nl
 *
 *                         ^
 *                         |
 *                    unlinearize
 *                         |
 *                         |
 *
 *                      real
 *
 * ===============================================================
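The structure of the two-stage update can be illustrated with toy 1-D operators. The gamma exponent, kernels, and function names below are illustrative assumptions, and the transformations Px, Bx, Ex, and the D/C pairs are omitted:

```python
import numpy as np

GAMMA = 2.2

def G(v):
    """Gamma correction (linear -> non-linear colorspace)."""
    return np.clip(v, 1e-6, None) ** (1.0 / GAMMA)

def G_inv(v):
    """Inverse gamma correction (non-linear -> linear colorspace)."""
    return np.clip(v, 1e-6, None) ** GAMMA

def conv(img, k):
    return np.convolve(img, k, mode='same')

def two_stage_update(s, d, L, N, c, f):
    """One two-stage backprojection update for a single frame.

    L, N -- linear- and non-linear-colorspace blur kernels
    c, f -- backprojection kernels for the linear and non-linear stages
    """
    lin = conv(s, L)                        # DLE_xB_xP_xCs
    nl = G(lin)                             # GDLE_xB_xP_xCs
    sim = conv(nl, N)                       # simulated frame
    e = d - sim                             # e_x
    nl_corrected = nl + conv(e, f)          # back-project through the NL stage
    lin_corrected = G_inv(nl_corrected)     # return to linear colorspace
    return s + conv(lin_corrected - lin, c) # back-project through the linear stage
```

With identity kernels, a single update recovers the scene exactly: the non-linear correction reconstructs the observed frame, and G-1 carries it back to linear colorspace.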
The rules used for calculating backprojection arrays c and f are as follows:
cx,i,j,i',j' = Y * bx,i,j,i',j'
fx,i,j,i',j' = Z * ax,i,j,i',j'
Such that:
Y = - 0.9 / (maxω |FKb(ω)|)
Z = - 0.9 / (maxω |FKa(ω)|)
where F is the Fourier transform, Ka is the convolution kernel for a in its native coordinate system, and Kb is the convolution kernel for b in its native coordinate system.
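These constants can be computed from the kernels' frequency responses. The sketch below uses a zero-padded discrete Fourier transform as a stand-in for F, with an illustrative 1-D kernel:

```python
import numpy as np

def backprojection_scale(kernel, size=64):
    """Compute -0.9 / max_w |F K(w)| for a 1-D kernel K.

    A zero-padded discrete Fourier transform stands in for the
    continuous Fourier transform F.
    """
    spectrum = np.fft.fft(kernel, n=size)
    return -0.9 / np.max(np.abs(spectrum))

# For a normalized non-negative blur kernel, |F K| peaks at 1 (at w = 0),
# so the resulting scale is -0.9.
Kb = np.array([0.25, 0.5, 0.25])
Y = backprojection_scale(Kb)
```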
In cases where at least one of L, N, or G is the identity operator, or if G is linear, the technique outlined above can be expressed in terms of the original method by Irani and Peleg, and will converge exponentially if it meets the convergence criteria outlined in their paper. Note that ALE makes certain approximations to obtain projection and backprojection arrays that may hinder convergence in some cases. In particular, the calculation of overlap areas involving non-identity transformations is not exact.
ALE's implementation of certainty for Irani-Peleg rendering modifies the rule described above. Write the original rule as:
Us(i, j) = s(i, j) + (1 / xmax) ∑x correctionx(i, j)
where correctionx represents the back-projected correction for frame x. Let the re-estimated linear value comp_real (lreal(n) in the diagram above) be defined as:
comp_realx(i, j) = G-1(GDLExBxPxCs(i, j) + ∑(i'', j'') fx,i,j,i'',j'' * ex(i'',j''))

Then the certainty-based rule for versions 0.7.x is:
Us(i, j) = s(i, j) + ∑x Κ(correctionx, Excomp_realx, i, j) * correctionx(i, j) / ∑x Κ(correctionx, Excomp_realx, i, j)
Where Κ is the symmetrically one-sided certainty function.
For version 0.8.0, the rule is:
Us(i, j) = s(i, j) + ∑x Κ'(correctionx, s, i, j) * correctionx(i, j) / ∑x Κ'(correctionx, s, i, j)
Where Κ' is the asymmetrically one-sided certainty function. Note that this expression substitutes s for Excomp_realx, resulting in estimate-based certainty.
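Both rules have the form of a certainty-weighted average of per-frame corrections. The sketch below shows that combination with placeholder per-pixel weights; the actual one-sided certainty functions Κ and Κ' are not reproduced here:

```python
import numpy as np

def certainty_weighted_update(s, corrections, certainties, eps=1e-12):
    """Combine per-frame corrections as a certainty-weighted average.

    corrections -- list of correction_x arrays
    certainties -- list of per-pixel weight arrays (stand-ins for K or K')
    """
    num = sum(k * c for k, c in zip(certainties, corrections))
    den = sum(certainties) + eps          # guard against all-zero certainty
    return s + num / den
```

Frames with higher certainty at a pixel dominate the update there, while low-certainty frames contribute little.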
Verbatim copying and distribution of this entire article is permitted in any medium, provided this notice is preserved.