Processing of Astronomical CCD Images
The key to processing images is to have a clear idea of your objective. Do
you want to produce "a pretty picture" or bring out some specific detail or
feature of the target object? Is colour important or will black and white be
sufficient? With a clear objective in mind, consider the various types of
processing described below.
Calibration
Calibration is the process of removing unwanted signals from the raw images.
It is described separately, and it is assumed here
that all the images are calibrated prior to any further processing.
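As a sketch of what the calibration step amounts to (assuming NumPy arrays and pre-built master dark and flat frames; the pixel values here are made up for illustration):

```python
import numpy as np

def calibrate(raw, master_dark, master_flat):
    """Basic CCD calibration: subtract the dark frame, then divide by
    the flat field (normalised to a mean of 1) to correct vignetting
    and pixel-to-pixel sensitivity differences."""
    flat_norm = master_flat / master_flat.mean()
    return (raw - master_dark) / flat_norm

# Tiny synthetic example: a uniform 2x2 sky patch seen through a
# slightly vignetted flat; calibration recovers a uniform result.
raw = np.array([[110.0, 105.0], [105.0, 110.0]])
dark = np.full((2, 2), 10.0)
flat = np.array([[1.0, 0.95], [0.95, 1.0]])
calibrated = calibrate(raw, dark, flat)
```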
You will want to select only the best of your captured images for processing.
The best way is to look at each image and discard any that are not clear,
sharp and well shaped. Even with video it can be worth the time to go through
and select just a few really good frames rather than rely on the 'quality'
ranking provided by your processing software.
You may wish to enlarge the images by a factor (say x1.5 or x2.0). This will
not improve the level of detail in each image but it will spread the detail over
more pixels. This may be of benefit by allowing the alignment and stacking
process (see below) to achieve "sub pixel" precision so that the resulting
stacked image has finer detail than any of the original images. An enlarged
image may also work better with sharpening filters such as Registax's wavelets
or the Unsharp Mask. Resample (resize) back to the original (or any other
desired) size later in processing.
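The enlargement idea can be sketched with nearest-neighbour resampling in NumPy (real software would normally use bicubic or similar interpolation, but the principle of spreading the same detail over more pixels is identical):

```python
import numpy as np

def enlarge(img, factor=2):
    """Nearest-neighbour enlargement: each pixel becomes a
    factor x factor block of identical pixels."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.array([[1, 2],
                [3, 4]])
big = enlarge(img, factor=2)   # 2x2 image becomes 4x4
```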
You will have captured multiple images (or a video) of your target object.
All frames will need to be aligned and stacked. Alignment is the process of shifting each image up/down and left/right so
that the target object is in exactly the same place on every image. There are
several ways of doing this provided in astro-imaging software:
- Manual: You have to click in exactly
the same place (e.g. a star) in each image to set the alignment point.
- Centroid: You click on the same star
in each image and the software will try to align the centre of the star.
- Automatic: (e.g. using an FFT algorithm)
you select a suitable star or image feature and the software will do its best
to align that feature on each image.
Some software will allow you to select two points in each image and will
rotate the images to compensate for images taken using an alt/azimuth mounted
telescope. Registax allows you to select multiple points in the image and can
align different areas of the image differently. This can be quite useful when
poor "seeing" has caused areas of the image to shimmer.
For video, manual alignment is usually impractical because of the number of
frames. I normally use Registax for videos. For still frames I like to use
Centroid (Maxim) or Automatic (Registax).
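Centroid alignment can be sketched in a few lines of NumPy (whole-pixel shifts only; real software also handles sub-pixel shifts and rotation):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centre (centre of mass) of an image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def align_to(img, ref_cy, ref_cx):
    """Shift an image so its centroid lands on the reference centroid.
    np.roll wraps at the edges, acceptable for small shifts."""
    cy, cx = centroid(img)
    dy, dx = int(round(ref_cy - cy)), int(round(ref_cx - cx))
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# Two 5x5 frames of the same "star", displaced between exposures.
a = np.zeros((5, 5)); a[2, 2] = 100.0
b = np.zeros((5, 5)); b[3, 1] = 100.0
ref_cy, ref_cx = centroid(a)
b_aligned = align_to(b, ref_cy, ref_cx)   # star now at (2, 2) in both
```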
Once the images have been aligned then they can be stacked by:
- Averaging (Median): This is the normal way
to stack images. Each pixel is set to the median value of the
corresponding pixels from all the images. Noise is effectively reduced by
the averaging and there is no danger of any pixels becoming saturated.
- Adding (Sum): The images are added together so that each pixel value is the
sum of the corresponding pixel on all the images. Generally this is not a good
idea, as bright spots (e.g. stars) will add up to more than the maximum value of
a pixel, causing distortion or "white out" of areas of the image. But if you
have a limited number of images and their maximum pixel values are low then
adding can be useful because it will give a greater range of pixel values
for subsequent contrast enhancement.
- Drizzle: This is a technique designed by NASA to sharpen images acquired by the Hubble
Space Telescope. The objective is to combine the information in multiple images
to get a better resolution than the number of pixels in the camera allows. It is only
relevant when the resolution of the telescope is better than the resolution of
the CCD.
The new Meade DSI image processing software (Autostar Envisage) has an implementation of
Drizzle, and I also use the one in Registax. The output image is larger than
the input images and so may have better resolution than any individual input
image. (Personally I get more success using simple x1.5 resampling
rather than Drizzle.)
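The median stack described above can be sketched in a few lines of NumPy (the frames here are tiny and already aligned):

```python
import numpy as np

# Three aligned 2x2 frames; one has a cosmic-ray hit in its corner.
frames = np.array([
    [[10.0, 10.0], [10.0, 10.0]],
    [[12.0, 10.0], [10.0, 255.0]],   # outlier at pixel (1, 1)
    [[11.0, 10.0], [10.0, 10.0]],
])

# Each output pixel is the median of that pixel across all frames,
# so the one-frame outlier is rejected rather than averaged in.
stacked = np.median(frames, axis=0)
```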
Colour plane alignment
If you have a colour camera or have produced a colour image from (L)RGB
images then examine the stacked image carefully to see
if the colours are correctly aligned. When taking images of objects near the
horizon the red, green and blue images may be slightly displaced. This will
cause a red tinge on one side of a star/planet and a blue tinge on the other
side. Your software
should provide the ability to shift the RGB components and get them properly
aligned.
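Shifting a colour plane back into register can be sketched like this (NumPy, whole-pixel shifts; the displacement values are illustrative):

```python
import numpy as np

def shift_plane(plane, dy, dx):
    """Shift one colour plane by whole pixels. np.roll wraps at the
    edges, which is harmless for small shifts on a padded image."""
    return np.roll(np.roll(plane, dy, axis=0), dx, axis=1)

# A star whose red plane is displaced one pixel to the right of the
# green plane (e.g. by atmospheric dispersion near the horizon).
green = np.zeros((4, 4)); green[1, 1] = 1.0
red = np.zeros((4, 4));   red[1, 2] = 1.0
red_fixed = shift_plane(red, 0, -1)   # move red one pixel left
```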
Stretch, Gamma, Balance, Hue and Saturation
Once you have a stacked image, adjust contrast and colour according to the
results you require. The most frequently used techniques are:
- Stretch: This resets the top and
bottom of the range of pixel values and 'stretches' the range of pixel values
in your image. Gradually increase the 'minimum' pixel value until the dark sky
background is reasonably black. Gradually reduce the 'maximum' pixel value
until bright stars or planetary features are bright but not saturated.
- Contrast & Brightness: This has the
same effect as 'stretch' but the controls are different. Gradually increase
contrast and adjust brightness to obtain a dark sky and bright stars.
- Gamma: This allows you to increase or
decrease the brightness of mid-range pixels without changing the minimum and
maximum values. It is useful to accentuate shadows in a planetary image,
bring out faint nebulosity or increase the brightness of small stars in a
star field.
- Histogram: This combines stretch and
gamma adjustments in a flexible way. You are shown a graph of
input-pixel-value against output-pixel-value. To start, the graph is a straight
line but you can set it to any shape. The classical "S" shaped curve is often
the most effective at bringing out the best in an image. It pays to
experiment.
- Colour Balance: If the image has too
much red, green or blue in it then a colour balance correction can be applied.
- Saturation: If the image has too
little or too much colour then an increase or decrease in saturation can be
applied.
- Hue: This will shift the colour of all
pixels towards one or other end of the spectrum; generally not a good idea
unless you want to create false colour effects.
It is usually best not to make too dramatic a change in an image in one go.
Gradually improve the image using a series of contrast and colour adjustments
interspersed with sharpening and noise reduction filters (see below).
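The stretch and gamma adjustments above can be combined into one small function (a sketch, assuming pixel values held as NumPy floats; `lo` and `hi` are the chosen minimum and maximum):

```python
import numpy as np

def stretch(img, lo, hi, gamma=1.0):
    """Linear stretch: map [lo, hi] onto [0, 1], clipping outside.
    Then apply gamma: gamma < 1 brightens mid-range pixels and
    gamma > 1 darkens them; 0 and 1 are unchanged either way."""
    x = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return x ** gamma

pixels = np.array([0.0, 50.0, 100.0, 200.0])
out = stretch(pixels, lo=50.0, hi=200.0, gamma=0.5)
# background (<= 50) goes to 0, the brightest pixel to 1,
# and mid-range pixels are lifted by the gamma of 0.5
```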
Most software will provide a huge range of filters and using the right
filters is a key skill in image processing. I will mention just a few here.
Sharpening Filters
The objective is to make stars small and bright and to bring out detail in
planetary images. Fuzzy edges need to be made less fuzzy.
- High Pass: (Sharpen) Accentuates the high frequency signals in the image.
Makes changes in brightness across the image more stark. Can also accentuate
noise.
- Unsharp: The unsharp mask is one of the most powerful tools for
sharpening an image. The image is examined to find edges and, for each pixel
near an edge, a decision is made as to whether it is on the dark side or the
light side and it is darkened or lightened accordingly. Thus fuzzy edges are
made crisp. There are usually three controls:
- Radius: set this to your best guess as to how many pixels wide the
fuzziness is at the edges of stars or features. Adjust up/down for best
effect.
- Strength: set this to determine how extreme the lightening/darkening of
pixels will be near edges. Too much will introduce noise or dark rings round
stars.
- Clipping: set this to a value above zero to reduce noise. This control
says in effect "if you find a change in pixel value less than clipping value
then this is not actually an edge so don't try to sharpen it"
- Wavelets: The wavelet filter, like that found in Registax, is a complex but
effective tool. The signal in the image is examined at different frequencies
and you can choose how much of each frequency you want. It is best to read the
Registax documentation and experiment with it.
- Deconvolution: There are a number of
'deconvolution' filters such as Maximum Entropy Deconvolution. The theory is
complicated, but the idea is that if you know (or can guess at) the way in which
fuzziness was introduced then you can remove it by an iterative
process. I have had some limited success sharpening star clusters with
this type of filter but more often than not it seems to just make a mess of
the image.
- Erosion: This filter cuts away at bright edges. The effect is to reduce the size and
'sharpen' stars. This can be useful if large bright stars are spoiling the view
of a galaxy or nebula. In fact multiple use of erosion can remove the stars
completely! However, remember that the resulting image is not 'correct' and
should not be presented as such.
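The unsharp mask's radius/strength/clipping controls can be sketched as follows (NumPy, with a simple box blur standing in for the Gaussian blur most software uses):

```python
import numpy as np

def box_blur(img, radius=1):
    """Mean of the (2*radius+1)^2 neighbourhood around each pixel,
    with edge pixels repeated at the borders."""
    pad = np.pad(img, radius, mode='edge')
    h, w = img.shape
    k = 2 * radius + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def unsharp_mask(img, radius=1, strength=1.0, clipping=0.0):
    """Boost each pixel's deviation from its local mean: the dark side
    of an edge gets darker, the light side lighter. Deviations smaller
    than 'clipping' are treated as noise and left alone."""
    diff = img - box_blur(img, radius)
    diff[np.abs(diff) < clipping] = 0.0
    return img + strength * diff

# A vertical step edge: after sharpening, the dark side is darker and
# the light side lighter, making the edge more stark.
step = np.tile(np.array([0.0, 0.0, 10.0, 10.0]), (4, 1))
sharp = unsharp_mask(step, radius=1, strength=1.0)
```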
Noise Reduction Filters
If the image is 'grainy' this is due to random noise. The best way to avoid
noise is to average lots of images, but assuming that has been done already then
try some filters:
- Low pass: This reduces high frequency signals in the image and can reduce
the effect of pixel-to-pixel noise. However it may make the image too fuzzy.
- Median/average: This will set pixels to the median or average
value of the pixels in
a given radius around them and can reduce high frequency noise. Unfortunately
it also reduces detail.
- Edge preserving smooth: In the same way that the unsharp mask looks to
accentuate edges this filter looks for edges and smoothes out those areas that
are not near an edge. This often works quite well to reduce noise
without reducing detail.
- Despeckle/Hot Pixel: These filters can be used to remove isolated hot pixels.
Specialist software (e.g. Neat Image) has really powerful algorithms
for removing noise.
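A 3x3 median filter of the kind described above can be sketched like this (NumPy; despeckle filters work the same way but typically only replace pixels that differ greatly from their neighbours):

```python
import numpy as np

def median_filter3(img):
    """Replace each pixel with the median of its 3x3 neighbourhood
    (edges repeated). Isolated hot pixels disappear, at the cost of
    some fine detail."""
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    stack = np.stack([pad[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0              # one hot pixel
clean = median_filter3(noisy)    # hot pixel replaced by its neighbours
```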
Most photo-editing software provides a huge range of filters and effects designed for
'artistic' or photographic effect. Most of them are unsuitable for astronomical
images - but by all means experiment with them.