Wednesday, August 8, 2012

Turn Off VR (or IS) When Shooting Sports

Tips on how to shoot sports photos
Computational Photography



If you want sharper sports photos, and you have lenses that use VR (Vibration Reduction
on Nikons) or IS (Image Stabilization on Canons), you should turn it off. There are two
important reasons why: (1) the VR (or IS) system slows down the autofocus speed so it can
stabilize the image, and (2) since you’ll be shooting at fast shutter speeds (hopefully
1/1000 of a second or faster), you don’t get any benefit from VR (or IS), which is designed
to help you in low-light, slow-shutter-speed situations. In fact, it works against you, because
the VR (or IS) system is searching for vibration, and that search can introduce slight movement.
Normally, that wouldn’t be a problem, because you want VR (or IS) to do its thing in low
light, but in brighter light (and at faster shutter speeds), this movement can make things
less sharp than they could be, so make sure you turn VR (or IS) off.

Monday, August 6, 2012

Why zooming on your DSLR is different and how to use autofocus for shooting video

Computational Photography

When you zoom in/out on a traditional video camera, the zoom is very smooth because
it’s controlled by an internal motor—you just push a button and it smoothly zooms in
or out, giving a nice professional look. The problem is that there’s no internal motor on
your DSLR, so you have to zoom by hand, and if you’re not really smooth and careful
while you zoom, you’re going to wind up with some really choppy-looking zooms.
In fact, since it’s so tough to get that power-zoom quality like we’re used to with regular
video cameras, there are a bunch of companies that make accessories so you can make it
look like you used a power zoom, like the Nano focus+zoom lever from RedRock Micro.
It’s two pieces (sold separately, of course, because this is video gear and therefore a
license to print money), but luckily, neither is too expensive. First, you need the focus
gear, which is sized to fit your particular zoom lens, and that runs around $45, but that’s
just for focusing (very nice to have, by the way), but then you add this zoom handle to it,
for another $35 or so (also worth it), so you’re into both for only around $80, but it sure
makes zooming smoothly a whole lot easier.

As you’ve probably learned, many DSLR cameras don’t autofocus while you’re shooting
video, so focusing becomes a manual thing, but that doesn’t mean you can’t use autofocus
to help you out. The trick is to use autofocus before you start shooting. Basically, you
turn on Live View mode, but don’t start recording yet. Instead, aim your camera at your
subject, press the shutter button down halfway to lock the focus, then move the autofocus
switch on the barrel of your lens over to “M” (Manual) mode, and you’re all set. Now,
the only downside is if your subject moves to a new location (even if it’s two feet away), you
have to do this process again (luckily, it only takes a few seconds each time, and you
get pretty quick at it).

Sunday, August 5, 2012

Compression Using Context Matching-Based Prediction


The algorithm presented in this section uses both predictive and entropy coding to compress CFA data. First, the CFA image is separated into the luminance subimage (Figure 3.1b) containing all green samples and the chrominance subimage (Figure 3.1c) containing all red and blue samples. These two subimages are encoded sequentially. Samples in the same subimage are raster-scanned, and each of them undergoes a prediction process based on context matching and an entropy coding process, as shown in Figure 3.3a. Due to the higher number of green samples in the CFA image compared to red or blue samples, the luminance subimage is encoded before the chrominance subimage. When handling the chrominance subimage, the luminance subimage is used as a reference to remove the interchannel correlation.
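
To make the separation concrete, the following sketch splits a Bayer CFA into the two subimages (a minimal sketch in Python, assuming an RGGB tile layout; the chapter's CFA indexing may differ):

```python
import numpy as np

def split_cfa(cfa):
    """Split an RGGB Bayer CFA image into the luminance (green) subimage
    and the chrominance (red/blue) subimage, as in Figures 3.1b and 3.1c.

    Assumes the 2x2 tile layout   R G
                                  G B   (an illustrative assumption)."""
    h, w = cfa.shape
    green = np.empty((h, w // 2), dtype=cfa.dtype)
    chroma = np.empty((h, w // 2), dtype=cfa.dtype)
    green[0::2, :] = cfa[0::2, 1::2]   # G samples on even rows
    green[1::2, :] = cfa[1::2, 0::2]   # G samples on odd rows
    chroma[0::2, :] = cfa[0::2, 0::2]  # R samples
    chroma[1::2, :] = cfa[1::2, 1::2]  # B samples
    return green, chroma
```

Each subimage is then raster-scanned and fed to the prediction and entropy coding stages described below.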

[Figure 3.3: (a) the encoding process; (b) the decoding process]

 Decoding is just the reverse process of encoding as shown in Figure 3.3b. The lumi-
nance subimage is decoded first to be used as a reference when decoding the chrominance
subimage. The original CFA image is reconstructed by combining the two subimages.


Context Matching-Based Prediction 

In the prediction process exploited here, the value of a pixel is predicted from its four
closest previously processed neighbors in the same subimage. The four closest neighbors
from the same color channel as the pixel of interest should have the highest correlation
with the pixel to be predicted in different directions, and hence the best prediction result
can be expected. These four neighbors are ranked according to how close their contexts
are to the context of the pixel to be predicted, and their values are weighted according to
their ranking order. Pixels whose contexts are closer to that of the pixel of interest
contribute more to its predicted value. The details of its realization in handling the two
subimages are given below.
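
A rough sketch of one prediction in Python may help. The context definition (west and north neighbors) and the rank-order weights are illustrative assumptions, not the chapter's exact parameters:

```python
import numpy as np

def predict_pixel(sub, r, c):
    """Predict sub[r, c] from its four nearest causal neighbors,
    weighted by how closely their contexts match the current context.
    Border handling is omitted (assumes r >= 2 and 2 <= c <= w - 2)."""
    neighbors = [(r, c - 1), (r - 1, c), (r - 1, c - 1), (r - 1, c + 1)]

    def context(y, x):
        # Simple causal context: the west and north neighbors of (y, x).
        return np.array([sub[y, x - 1], sub[y - 1, x]], dtype=float)

    cur = context(r, c)
    # Rank the four neighbors: closest context first.
    ranked = sorted(neighbors,
                    key=lambda p: float(np.abs(context(*p) - cur).sum()))
    weights = (0.4, 0.3, 0.2, 0.1)   # assumed weights, ordered by rank
    return sum(w * float(sub[p]) for w, p in zip(weights, ranked))
```

The prediction error, sub[r, c] minus the predicted value, is what gets passed to the entropy coder.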







Saturday, August 4, 2012

What to do if your image isn’t quite good enough to print?

Image printing
Computational photography



If you’ve taken a shot that you really, really love, and it’s maybe not as sharp as you’d
like it to be, or maybe you’ve cropped it and you don’t have enough resolution to print
it at the size you’d like, I’ve got a solution for you: print it to canvas. You can absolutely
get away with murder when you have your prints done on canvas. With its thick texture
and intentionally soft look, it covers a multitude of sins, and images that would look
pretty bad as a print on paper, look absolutely wonderful on canvas. It’s an incredibly
forgiving medium, and most places will print custom sizes of whatever you want, so if
you’ve had to crop the photo to a weird size, that usually doesn’t freak them out. Give
it a try the next time you have one of those photos that you’re worried about, from a
sharpness, size, or resolution viewpoint, and I bet you’ll be amazed!

Common Compression Techniques

Inside Sony



In lossless compression, bits are reduced by removing the redundant information carried
by an image. Various techniques can be used to extract and remove the redundant information
by exploiting (i) the spatial correlation among image pixels, (ii) the correlation among
different color channels of the image, and (iii) the statistical characteristics of selected data
entities extracted from the image. The performance of an algorithm depends on how much
redundant information can be removed effectively. A compression algorithm usually exploits
more than one technique to achieve this goal. Entropy coding and predictive coding
are two commonly used techniques to encode a CFA image nowadays.
 
Entropy coding removes redundancy by making use of the statistical distribution of the
input data. The input data is treated as a sequence of symbols, and a shorter codeword is
assigned to a symbol that is more likely to occur, so that the average number of bits
required to encode a symbol is reduced. This technique is applicable to input of almost
any nature and hence is exploited in many lossless compression algorithms as their final
processing step. Entropy coding can be realized using different schemes, such as Huffman
coding and arithmetic coding. When the input data follows a geometric distribution, a
simpler scheme such as Rice coding can be used to reduce the complexity of the codeword
assignment process.
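
As a concrete illustration, a minimal Rice coder for nonnegative integers is shown below (a sketch; practical codecs also map signed prediction errors to nonnegative symbols and adapt the parameter k):

```python
def rice_encode(n, k):
    """Rice code: unary quotient, then a k-bit binary remainder.
    Compact when n follows a geometric distribution."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    """Decode a single Rice codeword produced by rice_encode."""
    q = bits.index("0")                  # length of the unary run
    r = int(bits[q + 1:q + 1 + k], 2)    # k-bit remainder
    return (q << k) | r

for n in (0, 3, 9):
    codeword = rice_encode(n, k=2)
    assert rice_decode(codeword, k=2) == n
    print(n, "->", codeword)   # 0 -> 000, 3 -> 011, 9 -> 11001
```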

Predictive coding removes redundancy by making use of the correlation among the input
data. For each data entry, a prediction is performed to estimate its value based on that
correlation, and the prediction error is encoded. The spatial correlation among pixels is
commonly used in predictive coding. Since the color channels in a CFA image are interlaced,
the spatial correlation in CFA images is generally lower than in full-color images. Therefore,
many spatial predictive coding algorithms designed for color images cannot provide good
compression performance when they are used to encode a CFA image directly.
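
The mechanic is simple; in the most basic case each pixel is predicted from its left neighbor and only the residual is stored (a minimal sketch; real CFA coders use the more elaborate predictors described in this chapter):

```python
import numpy as np

def encode_row(row):
    """Previous-pixel prediction: keep the first sample plus residuals.
    On smooth data the residuals cluster near zero and entropy-code well."""
    row = row.astype(np.int32)
    return row[0], np.diff(row)          # e[i] = x[i] - x[i-1]

def decode_row(first, residuals):
    """Invert the prediction by cumulative summation."""
    return np.concatenate(([first], first + np.cumsum(residuals)))

row = np.array([100, 101, 103, 103, 102], dtype=np.uint8)
first, res = encode_row(row)
assert np.array_equal(decode_row(first, res), row.astype(np.int32))
```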
However, there are solutions that preprocess the CFA image to provide an output with
improved correlation characteristics, one more suitable for predictive coding than the
original input. This can be achieved by deinterleaving the CFA image into several separate
images, each of which contains the pixels from the same color channel [21], [22]. It can
also be achieved by converting the data from RGB space to YCrCb space. Although a number
of preprocessing procedures can be designed, not all of them are reversible, and only
reversible ones can be used in lossless compression of CFA images.
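
Reversibility is the key constraint here. As an example of a reversible conversion, the integer transform used in JPEG 2000 maps RGB to a luma/chroma representation and inverts exactly (shown for illustration; it is not necessarily the exact transform used by the algorithms discussed in this chapter):

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible color transform (integer-only, lossless)."""
    y = (r + 2 * g + b) >> 2   # floor((R + 2G + B) / 4)
    return y, b - g, r - g     # y, u, v

def rct_inverse(y, u, v):
    g = y - ((u + v) >> 2)     # undoes the floor in rct_forward exactly
    return v + g, g, u + g     # r, g, b

assert rct_inverse(*rct_forward(200, 100, 50)) == (200, 100, 50)
```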
In transform coding, the discrete cosine transform and the wavelet transform are usually
used to decorrelate the image data. Since typical images generally contain redundant edges
and details, insignificant high-frequency content can be discarded to save coding bits.
When distortion is allowed, transform coding helps to achieve good rate-distortion
performance, and hence it is widely used in lossy image compression. In particular, the
integer Mallat wavelet packet transform is highly suitable for decorrelating mosaic CFA
data, which encourages the use of transform coding in lossless compression of CFA images.
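
For intuition, the smallest integer wavelet, the Haar (S) transform implemented with lifting, already shows the property that matters for lossless coding: integer output and exact invertibility. The integer Mallat packet transform applies such steps in a packet decomposition; the Haar choice below is an illustrative stand-in:

```python
def haar_forward(x):
    """One level of the integer Haar (S) transform:
    s = floor((a + b) / 2), d = a - b for each sample pair."""
    s = [(a + b) >> 1 for a, b in zip(x[0::2], x[1::2])]
    d = [a - b for a, b in zip(x[0::2], x[1::2])]
    return s, d

def haar_inverse(s, d):
    """Exact inverse: b = s - floor(d / 2), a = b + d."""
    x = []
    for si, di in zip(s, d):
        b = si - (di >> 1)
        x += [b + di, b]
    return x

data = [12, 10, 9, 14]
assert haar_inverse(*haar_forward(data)) == data
```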
  
Other lossless coding techniques, such as run-length coding, the Burrows-Wheeler
transform, and adaptive dictionary coding (e.g., LZW), are either designed for a specific
type of input other than CFA images (for example, run-length coding is suitable for coding
binary images) or designed for universal input. Since they do not take the properties of a
CFA image into account, it is expected that the redundancy in a CFA image cannot be
effectively removed if one just treats the CFA image as a typical gray-level image, or even
a raster-scanned sequence of symbols, when using these coding techniques. A preprocessing
step would be necessary to turn a CFA image into a better form to improve the compression
performance when these techniques are exploited.
   
At the moment, most, if not all, lossless compression algorithms designed for coding
CFA images mainly rely on predictive, entropy, and transform coding. In the following,
two dedicated lossless compression algorithms for CFA image coding are presented. These
algorithms serve as examples of combining the three aforementioned coding techniques to
achieve remarkable compression performance.

Friday, August 3, 2012

Color Fidelity versus Spatial Resolution

computational photography


The family of CFA patterns selected in this chapter is motivated by contrast sensitivity
research showing that human sensitivity to luminance contrast is very different from
human sensitivity to chrominance contrast. Reference [70] examines the dependence of
chrominance contrast sensitivity on spatial frequency and on illuminance level; it was found
that contrast sensitivity degrades at lower luminance levels, for both chrominance and
luminance. Despite limited comparison with luminance contrast sensitivity, the results
suggest that chrominance sensitivity degrades past one cycle/degree, while luminance
sensitivity peaks near two cycles/degree. Reference [71] provides a more in-depth
comparison of chrominance and luminance contrast sensitivity.


This work finds that red-green and blue-yellow contrast sensitivity functions have similar
spatial bandwidth, which is roughly 1/3 of the bandwidth of luminance contrast sensitivity.
It was also found that luminance contrast sensitivity degrades below about 1 to 2
cycles/degree, while chrominance sensitivity is constant below about 1/3 to 2/3
cycles/degree. The similarity of the red-green and blue-yellow contrast sensitivities, and
their substantial difference from the luminance contrast sensitivity, suggests decoupling
spatial detail and luminance sensitivity from color sensitivity. The clearest way to
accomplish this is to provide a highly sensitive luminance channel in addition to three
channels for chrominance data. The spectral response of the panchromatic channel is
colorimetrically inaccurate for luminance, but it provides the best possible signal-to-noise
ratio for a given sensor.


Introducing panchromatic pixels into a three-channel CFA pattern and allowing the color
sampling to drop off provides color resolution that is roughly 1/3 to 1/4 of the panchromatic
sampling. Assuming that color artifacts and chromatic aliasing are successfully limited in
demosaicking, this approach, fusing a panchromatic image with a lower-resolution color
image, provides a capture system that most closely mimics the capabilities of the human
visual system under most imaging conditions.





Why do JPEG images look better than RAW images?

JPEG or RAW
Computational photography

I know what you’re thinking, “I’ve always heard it’s better to shoot in RAW!” It may be
(more on that in a moment), but I thought you should know why, right out of the camera,
JPEG images look better than RAW images.

 It’s because when you shoot in JPEG mode,
your camera applies sharpening, contrast, color saturation, and all sorts of little tweaks
to create a fully processed, good-looking final image. However, when you switch your
camera to shoot in RAW mode, you’re telling the camera, “Turn off the sharpening, turn
off the contrast, turn off the color saturation, and turn off all those tweaks you do to
make the image look really good, and instead just give me the raw, untouched photo
and I’ll add all those things myself in Photoshop or Lightroom” (or whatever software you
choose). So, while RAW files have more data, which is better, the look of the RAW file
is not better (it’s not as sharp, or vibrant, or contrasty), so it’s up to you to add all those
things in post-processing.

Now, if you’re pretty good in Photoshop, Lightroom, etc., the
good news is you can probably do a better job tweaking your photo than your camera
does when it creates a JPEG, so the final result is photos processed just the way you like
them (with the amount of sharpening you want added, the amount of color vibrance you
want, etc.). If you just read this and thought, “Man, I don’t even use Photoshop…” or
“I don’t really understand Photoshop,” then you’ll probably get better-looking images by
shooting in JPEG and letting the camera do the work. I know this goes against every-
thing you’ve read in online forums full of strangers who sound very convincing, but I’ll
also bet nobody told you that shooting in RAW strips away all the sharpening, vibrance,
and contrast either. Hey, at least now you know.

Wednesday, August 1, 2012

Computational photography table 2

Computational Photography

Table 1.2: Summary of Relative Noise in White Balanced and Color Corrected Signals.

| QE Set | S_B (diagonal) | Tr S_B | L_B | S_C | Tr S_C | L_C |
|--------|----------------|--------|-----|-----|--------|-----|
| RGB | diag(3.97, 2.62, 3.29) | 9.88 | 1.24 | [10.38, -2.50, 0.82; -2.50, 6.25, -3.90; 0.82, -3.90, 7.78] | 24.41 | 1.60 |
| RPB | diag(3.97, 1.00, 3.29) | 8.26 | 0.84 | [18.22, -14.70, 7.24; -14.70, 21.68, -13.94; 7.24, -13.94, 13.32] | 53.22 | 2.51 |
| CMY | diag(2.02, 1.89, 1.75) | 5.67 | 1.03 | [25.05, -6.43, -4.71; -6.43, 10.45, -8.28; -4.71, -8.28, 15.84] | 51.34 | 1.90 |

In the case where σ_i² = P_C / (G_E G_i), where i = 1, 2, 3, this simplifies to

\[
K_B = \begin{pmatrix} G_1 & 0 & 0 \\ 0 & G_2 & 0 \\ 0 & 0 & G_3 \end{pmatrix} G_E P_C \tag{1.8}
\]

To focus on the relative sensitivity, the matrix S_B is defined by leaving out the factor of P_C:

\[
S_B = \begin{pmatrix} G_1 & 0 & 0 \\ 0 & G_2 & 0 \\ 0 & 0 & G_3 \end{pmatrix} G_E \tag{1.9}
\]

The values on the diagonal of S_B show the relative noise levels in white balanced images
before color correction, accounting for the differences in photometric sensitivity. To finish
the comparison, the matrix S_C is defined as M S_B M^T. The values on the diagonal of S_C
indicate the relative noise levels in color corrected images. The values L_B and L_C indicate
the estimated relative standard deviation for a luminance channel, based on Equation 1.4.

As shown in Table 1.2, Tr S_B and L_B are smaller for CMY and for RPB than for
RGB, reflecting the sensitivity advantage of the broader spectral sensitivities. However,
Tr S_C and L_C are greater for RPB and CMY than for RGB, reflecting the noise
amplification from the color correction matrix. In summary, while optimal selection of
spectral sensitivity is important for limiting noise, a well-selected, relatively narrow set of
RGB spectral sensitivities is close to optimum, as found in References [65] and [66]. Given
these results, it is tempting to consider narrower spectral bands for each color channel,
reducing the need for color correction. This would help to a limited extent, but eventually
the signal loss from the narrower bands would take over. Further, narrower spectral
sensitivities would produce substantially larger color errors, leading to lower overall image
quality. The fundamental problem is that providing acceptable color reproduction constrains
the three-channel system, precluding substantial improvement in sensitivity.
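
The entries in Table 1.2 follow directly from Table 1.1. A short check in Python for the RGB row (the values come from the tables; the relation S_C = M S_B M^T is from the text above):

```python
import numpy as np

G = np.array([1.518, 1.000, 1.257])       # balance gains G1, G2, G3 (Table 1.1)
GE = 2.616                                # sensitivity gain (Table 1.1)
M = np.array([[ 1.558, -0.531, -0.027],   # color correction matrix (Table 1.1)
              [-0.078,  1.477, -0.399],
              [ 0.039, -0.508,  1.469]])

S_B = GE * np.diag(G)    # relative noise before color correction (Eq. 1.9)
S_C = M @ S_B @ M.T      # relative noise after color correction

print(np.trace(S_B))     # ~9.88, the Tr S_B entry for RGB in Table 1.2
print(np.trace(S_C))     # ~24.4, the Tr S_C entry for RGB in Table 1.2
```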

Reference [65] considers the possibility of reducing the color saturation of the image,
lowering the noise level at the expense of larger color errors. However, the concept of
lowering the color saturation can be applied with RGB quantum efficiencies as well.
Reference [66] shows that by allowing larger color errors at higher exposure index values,
the optimum set of quantum efficiencies changes with exposure index. In particular, at a
high exposure index, the optimum red quantum efficiency peaks at a longer wavelength and
has less overlap with the green channel. This is another way to accept larger color errors
to reduce the noise in the color corrected image.


Computational photography table




Table 1.1: Summary of channel sensitivity and color correction matrices. The balance gains and the sensitivity gain are denoted by G_1, G_2, G_3 and G_E, respectively.

| QE Set | Channel Response | G_1, G_2, G_3 | G_E | M |
|--------|------------------|---------------|-----|---|
| RGB | 2616, 3972, 3159 | 1.518, 1.000, 1.257 | 2.616 | [1.558, -0.531, -0.027; -0.078, 1.477, -0.399; 0.039, -0.508, 1.469] |
| RPB | 2616, 10390, 3159 | 3.972, 1.000, 3.289 | 1.000 | [2.000, -1.373, 0.373; -1.062, 3.384, -1.322; 0.412, -1.248, 1.836] |
| CMY | 5134, 5486, 5929 | 1.155, 1.081, 1.000 | 1.752 | [-2.554, 2.021, 1.533; 0.941, -1.512, 1.571; 1.201, 1.783, -1.984] |

reflectance, Q_i is the quantum efficiency, and I_EI is the exposure index. The additional
values are Planck’s constant h, the speed of light c, the spectral luminous efficiency function
V(λ), and normalization constants arising from the definition of exposure index. Using a
relative spectral power distribution of D65 for the illuminant, a pixel size of 2.2 μm,
and a spectrally flat 100% diffuse reflector, the mean number of photo-electrons captured
in each pixel at an exposure index of ISO 1000 is shown under “Channel Response” in
Table 1.1.


The balance gains listed are factors to equalize the color channel responses. The
sensitivity gain shown is calculated to equalize the white balanced pixel values for all sets
of quantum efficiencies. The color correction matrix shown for each set of quantum
efficiencies was computed by calculating Equation 1.5 for 64 different color patch spectra,
then finding a color correction matrix that minimized errors between color corrected camera
data and scene colorimetry, as described in Reference [68].
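
A sketch of that fitting step (with hypothetical stand-in data; the actual procedure in Reference [68] uses the 64 computed patch responses and may add constraints such as preserving white):

```python
import numpy as np

rng = np.random.default_rng(0)
camera = rng.random((3, 64))   # white-balanced camera responses, 3 x 64 patches
target = rng.random((3, 64))   # corresponding scene colorimetry for each patch

# Least-squares M minimizing ||M @ camera - target|| over all patches:
# lstsq solves camera.T @ X = target.T, so the correction matrix is X.T.
X, *_ = np.linalg.lstsq(camera.T, target.T, rcond=None)
M = X.T

print(np.abs(M @ camera - target).mean())   # mean fitting error
```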
 
The illustration compares the noise level in images captured at the same exposure index
and corrected to pixel value P_C. For a neutral, the mean of the balanced pixel values is
the same as the color corrected pixel values. Since the raw signals are related to the
balanced signal by the gains shown in Table 1.1, the original signal levels can be expressed
as follows:

\[
P_O = \begin{pmatrix} \frac{1}{G_E G_1} & 0 & 0 \\ 0 & \frac{1}{G_E G_2} & 0 \\ 0 & 0 & \frac{1}{G_E G_3} \end{pmatrix} \begin{pmatrix} P_C \\ P_C \\ P_C \end{pmatrix} \tag{1.6}
\]






 
