
Tutorial (PixInsight):

Preparing Monochrome Images for Colour-Combination and Further Post-processing


Starting out with monochrome imaging can be challenging. Building a colour image involves capturing images through a number of filters, such as LRGB, or through narrowband filters (a minimum of two is needed to build a colour image). Either way, once you pre-process each image, you end up with a set of calibrated and stacked monochrome images that are meant to come together to form a colour image.

There are a number of things that must be done to these individual stacked monochrome images before they can be combined into a colour image. Chiefly, the images will probably not be properly aligned with each other to begin with. This can be due to a very slight misalignment introduced between imaging through different filters over the course of a night or several nights, or through the use of dithering. This tutorial describes what must be done before the images can be combined into a colour image. It will take you from calibrated and stacked monochrome images to a set of images ready for post-processing.

Assumed for this tutorial:
  • Knowledge of operating PixInsight, related to dealing with images and processes (read this, sections 3 and 4).
  • Your monochrome images have already been pre-processed fully (read this).

Please feel free to ask questions or leave comments via the comments section at the bottom of this page.

Contents
1. Registering the Images with StarAlignment
2. Cropping the Black Edges with DynamicCrop
3. Subtracting Background Gradients with DynamicBackgroundExtraction
4. Matching the Images with LinearFit
Comments

 

1. Registering the Images with StarAlignment

The very first step in preparing your calibrated and stacked monochrome images for colour-combination is aligning them to each other. Whether these images are LRGB or narrowband, we need them all to align to each other. If they do not, colour-combining them can yield very undesirable results. 
The Red, Green and Blue images will typically fall in slightly different places. This happens due to a number of factors, including relative plate solving error from night to night, drifting over the course of a night and, of course, deliberately through the use of dithering. It is impossible to work with these monochrome images as they are - they would have to stay monochrome, and then what is the point? To address this misalignment, we will use the StarAlignment process in PixInsight.

We need not have any of the images open in PixInsight to make use of StarAlignment. For now, the process alone is opened.
First and foremost, we must decide which of our calibrated and stacked monochrome images will act as a reference. Every other image will then be aligned with respect to this one. Generally speaking, the reference image is best a bright one with a nice star field (the rounder the stars, the better, of course). I generally use Luminance as the reference image. If dealing with narrowband images, pick any that has a nice star field, as it does not really matter. Keep in mind that you can align your images to any of them, as long as enough stars are picked up for alignment. Also keep in mind that the registered images will end up with the same resolution as the reference image. This means that if you bin your Red, Green and Blue images at 2x2 but leave your Luminance unbinned at 1x1, using Luminance as the reference in StarAlignment will up-scale your Red, Green and Blue images, effectively to the same image size as if they were captured at 1x1.
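As an aside, the up-scaling implication is easy to picture in code. Below is a minimal Python sketch (synthetic numpy arrays with hypothetical sizes; not what PixInsight does internally) showing a 2x2-binned frame being resampled onto a 1x1-binned reference's pixel grid:

```python
# Hedged sketch of the resolution implication, not PixInsight's own code.
# A 2x2-binned frame has half the pixels per side; registering it against
# a 1x1-binned reference implicitly up-scales it to the reference's grid.
import numpy as np
from scipy.ndimage import zoom

luminance = np.random.rand(1024, 1024)  # stand-in for a 1x1-binned reference
red = np.random.rand(512, 512)          # stand-in for a 2x2-binned target

# Up-scale by 2x with cubic interpolation: same image size as if captured
# at 1x1, but no extra real detail is created by this step.
red_upscaled = zoom(red, 2, order=3)
assert red_upscaled.shape == luminance.shape
```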

Once you have decided on which image will act as reference, we go to the StarAlignment process and select File from the small drop-down list next to the text box for Reference image. Then, we click the small arrow icon button next to it, find our reference image of choice and select it.
Register/Match Images should already be selected in Working mode, as this is the default mode and is indeed the correct mode to align images together. We now need to add all the images that need aligning with our reference image; we may as well add the reference image itself. To do this, click the Add Files button on the middle-right of StarAlignment, select all your images and add them.
We now set an output folder for the aligned images. This is done by clicking the small button next to the text box for Output directory and then selecting a folder. You can select a folder different from where your images are currently stored. The Postfix, which defaults to _r, is appended to the filename of each registered image. If you do not want the registered images to have their filenames changed, simply clear the contents of Postfix.
Before we proceed, it is recommended we change a couple of settings in StarAlignment from their defaults. For single panel images, I noticed this was not needed, but for mosaics, particularly large ones (4 panels or more), I noticed that the stars in the corners of the mosaics would not align as well as the stars in the centre. This made the corners look like the misaligned colour-combination described at the start of this tutorial. To prevent this problem, select 2-D Surface Splines from Registration model at the top, enable Distortion correction, set Distortion iterations to 100 (for good measure, though generally the default of 20 works well) and select Bicubic B-Spline from Pixel interpolation under the Interpolation section at the bottom of StarAlignment.
As mentioned above, you should do this when working with images that are essentially mosaics, though I generally use these settings all the time regardless.

All that is left to do is click Apply Global, and the selected output folder will be populated with all your images, registered with respect to the selected reference image.
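If you are curious what registration amounts to under the hood, the following is a hedged Python sketch with made-up star coordinates, not PixInsight's implementation: given matched star positions in the reference and target frames, an affine transform (shift, rotation, scale) is solved by least squares and the target is resampled onto the reference's pixel grid:

```python
import numpy as np
from scipy.ndimage import affine_transform

# Matched star positions as (row, col) pairs; all values are made up.
ref_stars = np.array([[100.0, 200.0], [850.5, 120.3], [400.2, 900.8], [700.0, 650.4]])
tgt_stars = ref_stars + np.array([3.2, -1.7])   # pretend the target frame drifted

# Least-squares fit of an affine transform mapping reference coordinates
# to target coordinates: tgt = M @ ref + b.
A = np.hstack([ref_stars, np.ones((len(ref_stars), 1))])
coeffs, *_ = np.linalg.lstsq(A, tgt_stars, rcond=None)
M, b = coeffs[:2].T, coeffs[2]

# Resample the target frame onto the reference's pixel grid.
target = np.random.rand(1024, 1024)             # stand-in for the target frame
registered = affine_transform(target, M, offset=b, order=3)
```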
How smoothly the registration process goes really depends on how many stars are detected and matched between the reference image and your other images. The number of stars detected depends on your chosen exposure time, your optical system and your night sky, as well as on how round the stars are (both to be deemed stars and to be matched between your images). You can relax the parameters in PixInsight to help pick up more stars when registration fails. The first thing to try is raising RANSAC tolerance under Star Matching. This is 2.00 by default, but you may wish to try values between 6.00 and 8.00. This value can be entered manually or adjusted with the slider. Moreover, you may wish to raise RANSAC iterations a little, to say 3000, to force PixInsight to search for matching stars more thoroughly.
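The parameter names hint at what is going on. As a toy illustration only (paired star lists and a translation-only model; real RANSAC handles unknown correspondences and full transforms), the sketch below shows why a larger tolerance accepts looser matches and more iterations test more candidate transforms:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_shift(ref, tgt, tolerance=2.0, iterations=2000):
    """Toy RANSAC for a translation-only match between paired star lists."""
    best_shift, best_inliers = None, -1
    for _ in range(iterations):
        i = rng.integers(len(ref))               # random candidate pair
        shift = tgt[i] - ref[i]                  # hypothesised translation
        residuals = np.linalg.norm(ref + shift - tgt, axis=1)
        inliers = int((residuals < tolerance).sum())
        if inliers > best_inliers:               # keep the best-supported guess
            best_shift, best_inliers = shift, inliers
    return best_shift, best_inliers

ref = rng.uniform(0, 1000, size=(50, 2))                         # reference stars
tgt = ref + np.array([3.0, -2.0]) + rng.normal(0, 0.3, (50, 2))  # shifted + noisy
shift, inliers = ransac_shift(ref, tgt)
```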
Moreover, to allow PixInsight to pick up fainter stars for use in the registration process, you can decrease the Log(sensitivity) parameter under Star Detection. This can be done by entering a value manually or adjusting the slider. Since this parameter is logarithmic in scale, a lower value results in increased sensitivity. A value of -2.00 to -3.00 (the minimum) may work well. You may also increase Detection scales very slightly, to say 6, to pick up more stars.
If, after tweaking parameters, PixInsight is still unable to pick up enough stars for registration, it probably means the quality of your images needs to be considered. Longer exposures may be needed, or imaging higher above the horizon, in darker skies, or with better tracking/guiding (to achieve rounder stars). Alternatively, excluding some exposures during pre-processing (calibration and stacking) may yield better end results, especially when some exposures are significantly worse than the average for your dataset.

Hopefully by this point your images have been registered with each other and you are ready to proceed. Keep in mind that because the images have been translated, and perhaps even rotated, to match up with each other while keeping the resolution the same for all (the reference image's resolution is applied to every registered image), there will inevitably be some black edges in your images. These are areas that contain no data because that part of the night sky, relative to the reference image, was not captured. This is perfectly normal and we will deal with the black edges in the next section.
 

2. Cropping the Black Edges with DynamicCrop

At this point your images can actually be colour-combined already, since they are aligned to each other, but you will want to remove the black edges from your images before you do. Cropping is the way forward, but keep in mind that if you crop uncontrollably, you will lose the alignment between your images, which brings us back to square one! The key is to apply an identical crop to all your images, even if one has no black edges (common with the image used as reference for registration, since it was not altered). For this section, my LRGB images are kept open and auto-stretched.
The DynamicCrop process is precisely what we need to use for this, so we open this.
The images have been kept open, spread out and auto-stretched, as we need to see what we are working with while cropping. As per its name, DynamicCrop is a dynamic process, meaning it activates a session on an image and we cannot work on another image unless we cancel what we are doing. The best thing to do is click to select the image that has the worst black edges - the Red image, in my case. Once it is selected, click DynamicCrop's Reset button to start the session on it.
If another image appears to have the DynamicCrop session active on it when you try this, simply close DynamicCrop, re-open it and try again. All you need to do is drag the four sides of the white highlight box inwards to reduce its size. The area bound by this box is what will be kept when the crop is performed. Be careful to place your mouse cursor precisely on the edges, as otherwise dragging will move the box rather than resize it (we want to resize it). To do this precisely, it helps to enlarge the image and zoom in. If you want to restart the process, just click DynamicCrop's Reset button.
You can apply a rotation to your cropped area as well, by entering an angle manually or by dragging from outside the white highlight box. DynamicCrop will automatically interpolate pixels if you rotate, which smooths things out (pixels are square, after all!).
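As a hedged aside (synthetic data, not DynamicCrop's internals), the interpolation step looks conceptually like this: rotating a raster image means the new pixel centres no longer coincide with the old square grid, so pixel values must be interpolated:

```python
# Rotating a raster image requires resampling; scipy interpolates for us,
# much as DynamicCrop does when a rotation is applied. Angle is hypothetical.
import numpy as np
from scipy.ndimage import rotate

image = np.random.rand(512, 512)                            # stand-in frame
rotated = rotate(image, angle=2.5, reshape=False, order=3)  # cubic interpolation
```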

You may want to be a tiny bit aggressive with how much you crop. Just because my Red image looks worst overall does not mean another image cannot have a worse black edge on one particular side - perhaps the bottom edge of my Green image is worse than my Red image's! You can use stars as reference points to judge how much to crop. As you create this model with DynamicCrop, look back at your other images and judge whether or not you should crop more. Do keep in mind that you can repeat this process afterwards if you notice any black edges left in an image.

Once you are sufficiently happy with your DynamicCrop model, do not apply it to your image; simply drag the New Instance button of DynamicCrop to the workspace to create a process icon for it. You can then close DynamicCrop. You may rename the process icon if you wish.
Now apply this process icon to each of your images by dragging and dropping it over them. The process icon has all the parameters saved, so the crop applied to each image is identical, as the sketch below illustrates. With an auto-stretch applied to all your images, you will quickly see how the crop has performed. Once done, you may resize your images to fit the windows and delete the DynamicCrop process icon.
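This is the key point of the section, shown here as a minimal Python sketch (synthetic arrays and a hypothetical crop window standing in for the real frames): one crop window defined once and applied verbatim to every frame keeps them all on the same pixel grid:

```python
import numpy as np

# Stand-ins for the four registered frames (names and sizes are hypothetical).
frames = {name: np.random.rand(1024, 1024) for name in ("L", "R", "G", "B")}

# One crop window, chosen once (e.g. from the frame with the worst edges),
# then reused verbatim: pixel (0, 0) of every cropped frame still refers to
# the same point on the sky, so alignment is preserved.
crop = (slice(30, 990), slice(25, 1000))
cropped = {name: frame[crop] for name, frame in frames.items()}
```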
With the exact same crop parameters applied to all our images, they retain their alignment with respect to each other and now feature no black edges, which makes further post-processing much easier. We now proceed to remove any background gradients in our images.
 

3. Subtracting Background Gradients with DynamicBackgroundExtraction

Background gradients occur naturally in images. They arise from varying night-sky brightness across your field of view, caused by Moonlight, artificial light pollution, etc. Since the images have already been pre-processed, the background gradients left over come from these phenomena, as opposed to vignetting in your optical system (which is corrected by flats during pre-processing). Subtracting these background gradients yields much better-matched images with higher SNR across the important regions and a more even, darker background.

The DynamicBackgroundExtraction process works by allowing the user to place sample points in an image, with the intention that they be placed over background. These sample points are used by the process to calculate a model of the background gradients, which can then be subtracted from the image to clean it up. We will use DynamicBackgroundExtraction on one image to create the model, with sample points across areas of background, and then apply it to each of the other images individually. It helps to base the model on the image with the brightest star field and the most nebulosity, as this makes it easiest to avoid placing sample points over areas that should not be taken to be background. It also saves having to exhaustively re-check every sample point when you apply the same model to an image taken through a different filter. Luminance and Hydrogen-Alpha are therefore good images to base a model upon.
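The idea behind the process can be sketched outside PixInsight. The following hedged Python illustration uses synthetic data and a low-order polynomial surface; DBE's real models (surface splines) are more sophisticated, but the principle of fitting a smooth surface to background samples and subtracting it is the same:

```python
import numpy as np

# Synthetic frame: a flat background plus a smooth gradient plus noise.
h, w = 512, 512
yy, xx = np.mgrid[0:h, 0:w]
image = 0.1 + 2e-4 * xx + 1e-4 * yy + np.random.normal(0, 0.01, (h, w))

# Hypothetical sample points placed over background (row, col, value).
ys = np.random.randint(0, h, 200)
xs = np.random.randint(0, w, 200)
vals = image[ys, xs]

# Least-squares fit of a low-order surface z = a + b*x + c*y + d*x*y + e*x^2 + f*y^2.
def basis(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1)

coeffs, *_ = np.linalg.lstsq(basis(xs, ys), vals, rcond=None)
model = (basis(xx.ravel(), yy.ravel()) @ coeffs).reshape(h, w)
corrected = image - model   # the "Subtraction" correction
```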

We open the DynamicBackgroundExtraction process and only keep our Luminance image maximised to work with. Since this is a dynamic process, we click the Reset button on it to initialise the session with the Luminance image selected.
Crosshairs appear inside the image that has the active session. These can be dragged around the image; their purpose is to define the centre of symmetry of your image's background gradients. They are generally left in the centre unless you have a clear view of the background gradients and wish to move them. Clicking the image creates a sample point, which can also be dragged elsewhere. You may zoom in and pan around the image with your mouse to get a clear view. Since the default colour of a sample point is grey, it does not show up well on the monochrome images we are working with. Simply click the colour rectangle for Sample color and alter it to something more visible. I simply make them bright turquoise.
We can have DynamicBackgroundExtraction place sample points automatically for us, and generally this is a good starting point unless you want to be extra careful about where they go. You should raise Default sample radius to a value between 10 and 20. I generally use 15, but for these images, with so many closely-packed stars, sample points of size 15 are too large; 10 works better here. For Samples per row, we can use a value between 15 and 20, which gives a good number of sample points. Once changed, click the Generate button.
Sample points now appear all over the image, though your mileage will vary according to your image. If you are getting very few, you need to raise the value of Tolerance. This value is ideally left at 0.500, but some images have too bright a background, so sample points you place manually are rejected (shown as red squares) and automatic placement leaves out a lot of areas. Raise Tolerance only if you really need to. When do you really need to? When you zoom in closely, manually place a sample point by clicking an area that you know for certain is background, and it shows up as a red square because it is rejected. I use values of Tolerance only as high as about 2.000, but never above that; I tend to hover between 0.500 and 1.500. If you change the Tolerance value, be sure to click the Resize All button to apply the setting to the currently-placed sample points. Do not click Generate, as this will automatically place sample points and undo all your hard work of moving some, manually placing others, etc. Of course, you may want to allow the process to automatically place sample points according to a new Tolerance value; if so, then do click the Generate button.

Since I have very, very few sample points, I raise my Tolerance to 1.000 and click Generate to create a new automatic placement of sample points.
With a more adequate number of sample points placed, I now proceed to inspect them manually. I zoom in and pan around the image, checking each one closely. Sample points that lie over stars or nebulosity are moved nearby so they overlap only background. If one is too problematic, or moving it is pointless because other sample points nearby already cover that area, simply click it to select it and delete it with your keyboard's Delete key.
Now that all the automatically placed sample points are in good places, we need to look through the image and perhaps place some extra ones manually, by clicking areas that are clearly background. Do this in areas that were missed by the process' automatic placement. Sometimes it helps to zoom out a lot to get a quick overall glance at where sample points are missing. Remember, however: quality over quantity. You do not need to plaster the background with sample points, so long as those placed are in good locations and adequately cover your entire image, especially the four sides and four corners.
Zooming out aggressively reveals the gaps left by the automatic sample point placement; zooming back in and placing sample points manually fills these gaps adequately. If you are placing a sample point manually and find that it is rejected (appears as a red square), try moving it around a little. If you are certain it is over background and need it to not be rejected (because you want that bit of background gradient subtracted), raise the Tolerance value a little (about 0.200 extra might do it) and click Resize All (important: do not click Generate or you will lose all your manual work!).

Once you are happy with your model, select Subtraction from the drop-down list for Correction and we are ready to save the parameters. To do this, drag and drop the New Instance button of DynamicBackgroundExtraction to the PixInsight workspace to create a process icon. Rename its identifier if you wish. Once done, feel free to close the process without applying it.
All we need to do now is actually apply our model to our images. With Luminance still open, I double-click the process icon to bring up the DynamicBackgroundExtraction process as was set up earlier. Since I know everything works perfectly, I simply click Execute.
Two new images will pop up. The first is our desired image: the original with the background gradients subtracted. The second is the background gradient model itself: the image that was subtracted from your data. Auto-stretching both shows there was clearly something to remove, and the image is better for it - a much cleaner background!

The background gradient model image is only there for your inspection and can be closed. If you do not want it to come up at all, simply enable Discard background model in DynamicBackgroundExtraction before clicking Execute. The background-subtracted image should be saved, either replacing your previous one or as a new file - it is up to you. You can then close the process and this image. We then maximise the next image to apply the background model to, and double-click the DynamicBackgroundExtraction process icon to initialise it.
When you do this, check that all your sample points are OK and none are rejected. If some are, you may need to raise Tolerance that little bit extra and click Resize All. If all your sample points are accepted, it is a good idea to lower Tolerance bit by bit, clicking Resize All each time, to reach the minimum value at which all your sample points are still accepted, without going too low - 0.500 as a bare minimum is good. Also check that no sample points are over important areas such as nebulosity. This can happen when you make a background model with an Oxygen-III image and then apply it to a Hydrogen-Alpha image, since Hydrogen-Alpha generally picks up a whole lot more nebulosity. This means there could now be something interesting just below one of your sample points - check them briefly!

With all my sample points in their correct places (not over anything more interesting than background), I was able to lower my Tolerance value to 0.500 with all my sample points still accepted.
We repeat this process for all our images until all of them have been background-subtracted.
These are now ready for the final step - matching them up in average background and signal brightness with LinearFit.
 

4. Matching the Images with LinearFit

When we have calibrated and stacked images that are meant to be colour-combined, we have to consider how well their histograms really match up to one another. Due to the varying conditions of the night sky throughout a night of imaging, or over several nights, as well as the filter being used, the average brightness of the background and signal may not match up well between the images we need to colour-combine. In light-polluted areas, it is common for the background to be brighter through a Red filter than through Green or Blue. Luminance tends to be brighter than all three, for obvious reasons. DynamicBackgroundExtraction has definitely helped in this regard through subtraction of background gradients in each monochrome image. While colour calibration can effectively correct for remaining differences (the subject of another tutorial), it is generally good practice to match up the average background and signal brightness between the images you are going to colour-combine.

The PixInsight process responsible for this feat is LinearFit, which assumes that a mathematically linear function can model the difference in average background and signal brightness between a reference image you choose and the target image you apply the process to. As a result, it works best at the very beginning of your post-processing, when your images are linear, though strictly speaking the images need not be linear for it to do its job. Please note that LinearFit requires the images it is applied to be registered with each other; otherwise there is no correlation between the image we set as reference and the image we apply the process to. Make sure section 1 above is followed closely.
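For those who like to see the mathematics, here is a hedged numpy sketch of the core idea on synthetic data: solve for a and b such that reference ≈ a × target + b over all pixels, then rescale the target with them. (LinearFit additionally offers rejection limits, which this sketch ignores.)

```python
import numpy as np

# Synthetic stand-ins: the target is a dimmer, offset version of the reference.
reference = np.random.rand(512, 512)
target = 0.5 * reference + 0.02 + np.random.normal(0, 0.005, (512, 512))

# Least-squares solve for a, b in: reference = a * target + b.
A = np.stack([target.ravel(), np.ones(target.size)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, reference.ravel(), rcond=None)

matched = a * target + b   # target rescaled to the reference's brightness
```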

We start by opening our registered, cropped and background-subtracted images and the LinearFit process.
The process itself is very simple and the default settings work wonderfully well. All we need to do is select a reference image open in PixInsight and then Apply the process to each of the other images. When we apply the process to an image, its average background and signal brightness are brought in line with the reference image's. Accordingly, it is a good idea to use our brightest image as the reference; this also helps preserve contrast in the histograms of the other images. To find which image is brightest, we simply inspect the histograms using the HistogramTransformation process. Open it, zoom in horizontally on the histogram preview by entering a value such as 50 into the top-left text box, and then select an image from the drop-down menu.
This shows that image's histogram in the bottom preview box, zoomed in horizontally. To compare histograms, simply select each image one by one and you will notice the differences immediately. Doing this for my LRGB images makes the differences clear.
For my images, Red is clearly brighter than Luminance, Green and Blue, as its histogram peak sits further towards the right (towards white). This means using Red as the reference image for LinearFit is a good idea. If you are working with narrowband images, usually Hydrogen-Alpha ends up brightest (and broadest), but your results may vary. Please note this is not a strict requirement.
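If you prefer a numeric check over eyeballing histograms, the median pixel value of each image gives a quick ranking of overall brightness. A small sketch with synthetic stand-ins for the open images:

```python
import numpy as np

# Synthetic stand-ins for the open frames; brightness levels are made up.
frames = {"L": np.random.rand(512, 512) * 0.3,
          "R": np.random.rand(512, 512) * 0.5,
          "G": np.random.rand(512, 512) * 0.2,
          "B": np.random.rand(512, 512) * 0.2}

# Rank frames by median pixel value; the brightest is the LinearFit reference.
brightest = max(frames, key=lambda name: np.median(frames[name]))
print(brightest)   # -> "R" for this synthetic data
```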

To apply LinearFit, simply click the small button to the right of the text box for Reference image, select your chosen reference image from the list, click OK and then Apply the LinearFit process to each of your open images except the reference image itself (PixInsight will complain if you try!).
When you apply the LinearFit process to your images, you may notice they look somewhat brighter (or darker), depending on how the histograms looked before and look after applying the LinearFit process. Do not worry - an auto-stretch will reveal the same details in the images as before. However, now all your images match up in terms of average background and signal brightness levels.

This final step has fully prepared your images for colour-combination; they are now ready for further post-processing. A quick colour-combination shows a much better end result than the one we started with.

 