
Tutorial (PixInsight): Pre-processing (Calibrating and Stacking) Images in PixInsight


For a long time, the likes of DeepSkyStacker have been used for the pre-processing of images. This process involves the following:
  1. Stacking many bias frames to produce a master bias.
  2. Stacking many dark frames to produce a master dark.
  3. Stacking many flat frames to produce a master flat.
  4. Calibrating the master flat with the master bias and master dark.
  5. Calibrating light frames with the masters, removing hot and cold pixels, registering and stacking the resulting calibrated light frames.
  6. Applying a resampling algorithm such as Drizzle to improve the end result.
This process may seem exhaustive, but it is by no means overly long-winded. DeepSkyStacker makes things very simple as everything is executed automatically. PixInsight can also be used for pre-processing images and, though the process here can seem daunting to beginners, working through it leaves you with a good understanding of the science behind pre-processing. 

Please keep in mind that PixInsight has a script called BatchPreprocessing that automates much of this process. However, this script is ideally used only up to the point where you then stack the calibrated and registered light frames yourself. This is because there is some trial and error involved in getting the best resulting image out of stacking - something the script cannot do automatically. Also, manual stacking is required if you are to make use of the Drizzle algorithm to get a sharper end result. Moreover, the script does not produce a master superbias during the first step (only a regular master bias). You may however produce the master superbias yourself first and then instruct the script not to stack any bias frames as it is already a master. Using the script could save you some time, but not a tremendous amount, so its use really is optional. This tutorial does discuss the script, but near the end, as there are a few things from the manual steps you will need to know regardless. 

Assumed for this tutorial:
  • Basic knowledge of the purpose of bias, darks and flats in the calibration of lights (object exposures).
  • Knowledge of operating PixInsight, related to dealing with images and processes (read this, sections 3 and 4).

​Please feel free to ask questions or leave comments via the comments section on the bottom of this page. 

Contents
1. The PixInsight Pre-processing Workflow
2. Generating a Master Superbias and a Master Dark
3. Generating a Master Flat
4. Calibrating Lights and Correcting Hot and Cold Pixels
5. Selecting the Best Lights and Calculating Optimum Weightings
6. Registering and Integrating Lights
7. Applying Drizzle to the Master Light
8. Using the BatchPreprocessing Script
​9. Dealing with Data from Various Nights
Comments

 

1. The PixInsight Pre-processing Workflow

This section serves as a simple introduction to what we are about to cover and for future reference as a reminder for what to do, in what order. We first need to prepare our masters, starting with the bias. This will be made into a noise-free superbias as well. The individual dark frames are calibrated with the master superbias and the master dark is then formed from these calibrated dark frames. The individual flat frames are calibrated with the master superbias and master dark before forming the master flat from these calibrated flat frames. This gives us masters that are pure in their own right and can be used to calibrate the images we want to stack. 

The images we want to stack are first and foremost calibrated with the masters. They are then cosmetically corrected to remove as many hot and cold pixels as we can. At this point, the images are clean and calibrated and they are then aligned with each other (to correct for slight misalignments, including purposeful misalignments introduced through dithering), followed by stacking them together. To improve the end result, we then apply the Drizzle algorithm, which produces an image at twice the resolution and reduces pixelation by interpolating. At this point, we are done. To summarise succinctly:
  1. Stack bias frames to produce a master bias. Transform this into a noise-free master superbias. 
  2. Calibrate dark frames with master superbias. Stack these dark frames to produce a master dark. 
  3. Calibrate flat frames with master superbias and master dark. Stack these flat frames to produce a master flat. 
  4. Calibrate the light frames to stack with the master superbias, master dark and master flat. 
  5. Apply cosmetic correction to the calibrated light frames to stack. 
  6. Debayer the light frames if they are colour images such as those captured using a one shot colour CCD or DSLR camera. 
  7. Select the best light frames and optimise their weightings. 
  8. Register the light frames to stack with each other. 
  9. Stack the light frames. 
  10. Apply the Drizzle algorithm to produce an end result. 
We now proceed with each of these steps, followed by an introduction to the BatchPreprocessing script that can be used to ease the workload a little (if desired - I personally do not use it). The data used for this tutorial consists of 8 Hydrogen-Alpha light frames of the Crescent Nebula, along with corresponding bias, dark and flat frames. 
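To make the arithmetic behind these steps concrete, below is a minimal sketch in Python with NumPy of how a single light frame is calibrated with the three masters. This is purely illustrative and not what PixInsight runs internally - in particular, it assumes simple exposure-time dark scaling, whereas PixInsight's Optimize option computes the scaling factor numerically.

import numpy as np

def calibrate_light(light, master_bias, master_dark, master_flat,
                    light_exposure, dark_exposure):
    # Scale the bias-subtracted master dark to the light frame's exposure time
    # (an assumption standing in for PixInsight's dark optimisation).
    dark_scaled = master_dark * (light_exposure / dark_exposure)
    # Normalise the flat so dividing by it preserves the overall signal level.
    flat_norm = master_flat / np.mean(master_flat)
    # Subtract bias and dark current, then correct vignetting and dust motes.
    return (light - master_bias - dark_scaled) / flat_norm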
Picture
Above, four empty folders are shown. The Masters folder will simply store our master superbias, master dark and master flat after they have been created. Lights_Cal will store the images after they have been put through calibration with ImageCalibration. Lights_Cal_CC will store these calibrated images after they have been cosmetically corrected with CosmeticCorrection. Lights_Cal_CC_Reg will store the resulting images after they have also been registered with each other using StarAlignment. At that point, they can be stacked together with ImageIntegration and DrizzleIntegration after that. 

The folders called Lights_Cal, Lights_Cal_CC and Lights_Cal_CC_Reg can be deleted once the image has been stacked and had the Drizzle algorithm applied to it. These folders are only temporary storage while we pre-process the images. You may wish to keep the Masters folder, particularly for the master superbias and master dark as these apply to other images you may have.

If you are dealing with DSLR RAW colour images, you should change some image file format settings on PixInsight before you start. To access the relevant settings, click Format Explorer to the left of PixInsight and double-click the DSLR_RAW entry near the top.
Picture
By default, PixInsight reads DSLR RAW colour images as Debayered images, meaning it is artificially altering them to display them in colour within PixInsight. This may seem convenient as it is done automatically but technically, DSLR RAW colour images need to be calibrated while purely in their RAW state, prior to any alterations such as Debayering. Simply click the Pure Raw button on the bottom-left of the settings window and you will be set - click OK afterwards.
Picture
PixInsight will remember these settings for the future so you will not need to change this again. Please note that now opening DSLR RAW colour images in PixInsight will display them in monochrome, such as is the RAW image data. Once the images are calibrated, we will be converting them to colour images by Debayering them so do not worry about this. 

Before starting, it is worth mentioning that PixInsight's native image format is the XISF format. Juan Conejero of the PixInsight team presented this format to the community in this forum thread. Though the PixInsight team has stated that the popular FITS format is deprecated, it is still fully supported and will always be fully supported to guarantee maximum compatibility with other software. Given that CCD cameras currently output images in FITS format, it is the format I keep using throughout - not only in the tutorials you will read here, but also for my own personal image processing in PixInsight. If you choose to use PixInsight's XISF format, feel free, as it will work all the same, though at this point in time I have not noticed any end-user advantage to using XISF over FITS. It is worth noting however that as of PixInsight version 1.8.5, pre-processing processes and scripts no longer offer an option to set the image output format and therefore all output in XISF format by default. You can however save your final pre-processed images in FITS format if you so wish, or keep the XISF format. Either will do perfectly fine. 
 

2. Generating a Master Superbias and a Master Dark

It is no mathematical secret that "the more, the better" applies to astrophotography - for everything - bias, darks, flats and the actual object exposures, called lights. There is a limit to how much we are able and willing to capture, however, and a point at which there are significant diminishing returns on acquiring more of the same. A bias frame is used to remove the sensor bias signal introduced when the image is read out after the exposure has ended. As a result, the master bias is subtracted from the lights. A bias should technically contain no noise, since what we are trying to subtract is the sensor bias signal, not random noise. However, the production of noise is mathematically unavoidable. For this reason, the more bias frames we stack, the better, of course, but there are practical limits. Since bias frames are quick to capture (typical exposure time is 0.001 seconds, though image download time is much higher), there is little problem in capturing, say, 200 of them to stack. We will use 20 in this tutorial for illustration purposes. 

A single bias frame I am using looks as follows once auto-stretched in PixInsight (for illustration purposes only):
Picture
The random noise is pretty clear. Stacking the 20 bias frames should yield a better result, more representative of the actual sensor bias signal than random noise. We will proceed to do this. For this, we open the ImageIntegration process.
Picture
Integration is a term used for stacking, and this is what we need to do - integrate the 20 bias frames into a master bias. Granted 20 bias frames is little, but it is better than 1. First and foremost, we click Add Files on the top-right of ImageIntegration, look for the bias frames and add them.
Picture
Picture
Picture
With all 20 bias frames added, we now configure ImageIntegration to do the job adequately for a master bias. We first make sure we select Average in Combination, No normalization in Normalization and Don't care (all weights = 1) in Weights. We also disable Evaluate noise. This gives us a basic stacking of bias frames into a master bias.
Picture
We now proceed to the pixel rejection algorithms. What to choose here depends on the number of bias frames being stacked into a master bias. Under Rejection algorithm, choose according to these general guidelines:
  • Averaged Sigma Clipping for < 10 frames
  • Winsorized Sigma Clipping for 10 to 20 frames
  • Linear Fit Clipping for > 20 frames
Since I am stacking exactly 20 bias frames, I will go for Winsorized Sigma Clipping. For bias frames, you will typically be using Linear Fit Clipping as it is easy to capture upwards of 100 bias frames for a good master bias. Again, also select No normalization in Normalization. Keep Clip low pixels and Clip high pixels enabled but you can disable both Clip low range and Clip high range. 
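To give an idea of what these rejection algorithms actually do, here is a minimal sketch of a basic sigma-clipped average in Python with NumPy. It is only an illustration of the general principle - Winsorized Sigma Clipping and Linear Fit Clipping are more refined variants than what is shown here, and the sigma values are just example thresholds.

import numpy as np

def sigma_clipped_average(frames, sigma_low=4.0, sigma_high=2.0, iterations=3):
    # frames: array of shape (number_of_frames, height, width)
    stack = np.asarray(frames, dtype=np.float64)
    keep = np.ones(stack.shape, dtype=bool)            # True = pixel kept
    for _ in range(iterations):
        data = np.where(keep, stack, np.nan)
        centre = np.nanmedian(data, axis=0)
        spread = np.nanstd(data, axis=0)
        # Reject pixels too far below or above the per-pixel median.
        keep &= stack >= centre - sigma_low * spread
        keep &= stack <= centre + sigma_high * spread
    return np.nanmean(np.where(keep, stack, np.nan), axis=0)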
Picture
We can now proceed with the integration process, so we click Apply Global and the bias frames are stacked to produce a master bias. This master bias comes up automatically as a new image within PixInsight, with the integration identifier. 
Picture
The above is my result, which has been auto-stretched for illustration purposes. This is a master bias and it is clear that after having stacked 20 bias frames, the result is a better representation of the sensor bias signal with less dominant noise. However, noise remains. Obviously 20 bias frames is too few, but even with 200 bias frames and a low-noise CCD sensor such as the Sony ICX694, noise will remain. This is where the Superbias process comes in handy, as it transforms a regular master bias like the one seen above into one that appears as if it were the result of a stack of thousands of bias frames. We close the ImageIntegration process and open the Superbias process. 
Picture
Default settings in Superbias work very well in most cases but if your master bias came from about 50 or more individual bias frames, reduce Multiscale layers to 6, otherwise keep it at its default setting of 7. Once set, simply apply the Superbias process to your master bias. 
Picture
The end result should be tremendously smoother and practically noise-free. The above has been auto-stretched for illustration purposes but it has also been auto-stretched in 24-bit mode by clicking the circled button along the PixInsight toolbar. As with normal auto-stretches, this does not alter the image, but gives us a good look at the data while in its linear state. Our master superbias is now ready and can be saved. Save it in FITS format. We can close the regular master bias without saving as well. 
Picture
The general recommendation is that you stack 100 bias frames or more initially, then apply the Superbias process. The end result will always be better the more bias frames you use initially, but there are diminishing returns and the existence of the Superbias process gets us to those diminishing returns much, much faster. 
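As a rough intuition for what Superbias achieves (this is not PixInsight's actual algorithm, which works on multiscale wavelet layers), the bias signal is dominated by fixed column structure, so even something as crude as a per-column median already suppresses most of the remaining random noise:

import numpy as np

def crude_superbias(master_bias):
    # Model the bias as pure column structure: every pixel in a column takes
    # that column's median value. Random noise averages out, while the
    # fixed-pattern column offsets - the real bias signal - remain.
    column_profile = np.median(master_bias, axis=0)    # one value per column
    return np.tile(column_profile, (master_bias.shape[0], 1))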

Now that the master superbias has been generated, we need to generate the master dark. This is put into the same section of the tutorial because the integration settings in ImageIntegration are identical to those used for the master bias. First of all however, we will calibrate our individual dark frames with the master superbias in order to remove the sensor bias signal from each dark frame and thus make our master dark representative purely of the dark current signal produced by the camera during the exposures. This is particularly important if the master dark will be scaled (this is done when your dark frames have a different, typically longer, exposure time than your lights). To calibrate the dark frames, we open the ImageCalibration process. 
Picture
We now click the Add Files button and add the dark frames here. 
Picture
Picture
Picture
We need to assign an output folder into which each calibrated dark frame will be placed. For this, we click the button next to the text box for Output directory and select an output folder. I created one called Calibrated within the folder for my dark frames. 
Picture
Picture
Since we only calibrate with the master superbias, we disable Master Dark and Master Flat. 
Picture
We now click the button beside the text box for Master Bias and look for our master superbias file and select it. 
Picture
Picture
With the Calibrate option disabled (since we do not need to calibrate the master superbias), we simply click Apply Global on the ImageCalibration process and a whole set of calibrated dark frames appears in our Calibrated folder. 
Picture
We now have dark frames that have had the sensor bias signal from the camera subtracted and are ready to be stacked to form an adequate master dark. To do this, we re-open the ImageIntegration process and click the Clear button (not the Reset button) in order to keep every setting still set, but remove the bias frames from the list. 
Picture
We now click Add Files again and add the calibrated dark frames this time.
Picture
Picture
Picture
We keep the ImageIntegration settings identical to before, but we check that our choice of Rejection algorithm is correct for our number of frames. Since there are 20 dark frames in mine, I keep it at Winsorized Sigma Clipping. Once all is verified, we click Apply Global again and the master dark appears automatically as a new image within PixInsight, with integration as identifier. 
Picture
This is our master dark and can now be saved as such. Make sure it is in FITS format, though it should be by default according to the output settings of the ImageCalibration process. 
Picture
At this point, we are done as we have successfully generated a master superbias and a master dark. 
 

3. Generating a Master Flat

As with the master dark, before we proceed with stacking the flat frames, we need to calibrate them individually with the master superbias and the master dark. To do this, we return to the ImageCalibration process. To avoid confusion, we can click the Reset button to reset the process back to defaults. 
Picture
We now proceed with adding the flat frames to ImageCalibration by clicking Add Files and selecting them. 
Picture
Picture
Picture
Since we are calibrating the flat frames with a master superbias and a master dark, we simply disable Master Flat this time. 
Picture
Again, an output folder must be set. For this, we click the button next to the text box for Output directory and set one. I simply created a folder called Calibrated in the folder storing my flat frames and set that one. 
Picture
Picture
We now select our master superbias and master dark in their appropriate sections (Master Bias and Master Dark), via the same method. 
Picture
Optimize has been left enabled under Master Dark in order to allow PixInsight to scale the master dark to suit the exposure time of the flat frames. This is necessary because the flat frames are usually around 1 second in exposure time, whereas the master dark in my case comes from dark frames with 600 seconds (10 minutes) of exposure time. Since dark current signal accumulates linearly during an exposure, it can be scaled easily by PixInsight so long as the Optimize option is left enabled. Please note that PixInsight may state that there is no correlation between the master dark and your flat frames. This happens commonly if your flat frames are indeed of very low exposure time (such as 1 second or less). Do not worry about this as it is normal - it just means that dark subtraction is not necessary for those flat frames. Now clicking Apply Global populates our Calibrated folder with calibrated flat frames. 
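As a rough numerical illustration of why that warning is harmless (assuming dark current simply scales linearly with exposure time, and ignoring PixInsight's actual noise-minimisation optimisation - the exposure times are just the example values quoted above):

dark_exposure = 600.0   # seconds, exposure time of the dark frames
flat_exposure = 1.0     # seconds, exposure time of the flat frames

scale = flat_exposure / dark_exposure
print(scale)            # ~0.0017, i.e. the scaled dark is ~0.2% of the master dark

With a contribution that small, PixInsight may well find no measurable correlation, which is exactly the "dark subtraction not necessary" situation described above.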

With the flat frames calibrated, they can now be stacked to form a master flat. This is again done with ImageIntegration. Click Reset on it when it is opened as we need to use different settings here. 
Picture
First and foremost, we click Add Files to add our calibrated flat frames to the list. 
Picture
Picture
Picture
We now proceed with the settings required to produce an adequate master flat. We set Average in Combination, Multiplicative in Normalization and again Don't care (all weights = 1) in Weights. This time however, we leave Evaluate noise enabled. 
Picture
For Rejection algorithm, we simply select Percentile Clipping here and then Equalize fluxes in Normalization. Clip low pixels and Clip high pixels are left enabled, with Clip low range and Clip high range disabled. 
Picture
We may now have to tweak the Percentile low and Percentile high settings below. These settings define thresholds from which to reject pixels in the master flat's generation. For example, if you captured these flat frames by pointing the telescope at the dawn sky, you may inadvertently have captured some stars, which need removing. If you have used a lightbox, a flat field generator or an electroluminescent panel, you can use the default settings and be done with it - these generally work perfectly well. If however you have captured your flat frames using the sky, such as at dawn, you should lower both Percentile low and Percentile high to a tiny value such as 0.010 in order to maximise rejection and remove any stars and other outliers you may have captured. Since mine are captured using a flat field generator, I leave my settings at default. 
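For illustration, the idea behind Percentile Clipping can be sketched as follows (Python with NumPy): a pixel is rejected if it deviates from the per-pixel median by more than the chosen low or high fraction of that median. The fractions below are placeholders, not necessarily PixInsight's exact defaults.

import numpy as np

def percentile_clipped_average(frames, low=0.2, high=0.1):
    stack = np.asarray(frames, dtype=np.float64)
    median = np.median(stack, axis=0)
    deviation = (stack - median) / median               # relative deviation per pixel
    keep = (deviation >= -low) & (deviation <= high)    # reject strong outliers (e.g. stars)
    return np.nanmean(np.where(keep, stack, np.nan), axis=0)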
Picture
Once set, click Apply Global and the master flat will appear automatically as a new image within PixInsight, with identifier integration. 
Picture
As seen above, with the setting of Generate rejection maps enabled, you will get two other images that you can inspect with an auto-stretch. These give you an idea of what pixels were rejected. As always though, auto-stretch the master flat generated and inspect it well to make sure it looks fine (i.e. that there are no outliers such as stars left if you captured the flat frames with the sky). Dust motes should obviously remain in your master flat as this is one of the primary reasons for using flat frames to begin with. 

Once you are content with the master flat, save it as such. Make sure it is in FITS format, though it should be by default according to the output settings of the ImageCalibration process. The rejection map images can both be closed without saving as they are not really needed other than for your inspection. 
Picture
The above, for me, was for the Hydrogen-Alpha filter. If you used a monochrome CCD camera and have images with other filters, you will naturally have flat frames for each of those filters. As a result, you will need to repeat this section of the tutorial for each of your filters' flat frames. If you used a one shot colour camera such as a DSLR (or equivalently, a one shot colour CCD camera), you should be done by now as you would only have one set of flat frames. 
 

4. Calibrating Lights and Correcting Hot and Cold Pixels

This section of the tutorial will prepare our light frames for alignment and stacking by calibrating them with our masters and then correcting any hot and cold pixels left over. We begin with calibration. Since we have our master superbias, master dark and master flat prepared, we can proceed. ImageCalibration is the first process to use. We click Reset once opened to make sure it is at default settings.
Picture
We begin by adding the light frames to the list by clicking Add Files and selecting them from our Lights folder.
Picture
Picture
Picture
First, set the output folder by clicking the button next to the text box for Output directory and select another folder you can store the calibrated light frames in. I set to use my Lights_Cal folder as mentioned earlier in this tutorial.
Picture
Picture
Please note that as of PixInsight version 1.8.5, the Output extension option is no longer present here or in any of the other pre-processing processes we will use, and the default format of all processed images will be PixInsight's native XISF format. This is not an issue so do not worry. All that remains is for us to select our master superbias, master dark and master flat from their respective sections, keeping all three enabled. Again, if, like me at this time, you do not use darks in calibration, you can keep Master Dark disabled at this stage.
Picture
All Calibrate options should be kept disabled if you followed this tutorial closely, as both the master dark and master flat have been calibrated already. The master superbias does not need calibration. I would keep Optimize enabled under Master Dark so as to make sure PixInsight matches the master dark to your light frames.

Once ready, click the Apply Global button and your output folder will be populated with the calibrated light frames.
Picture
Picture
​Above we see a single light frame auto-stretched in PixInsight after it has been calibrated as above. Since our light frames are now calibrated, we can proceed to remove as many hot and cold pixels as we can. We will first start by opening the CosmeticCorrection process. ​
Picture
We begin by adding our calibrated light frames here, again by clicking Add Files and selecting them. 
Picture
Picture
Picture
If your light frames are colour images from a one shot colour CCD camera or DSLR camera, enable the CFA images (Colour Filter Array) setting here. These images will be Debayered later in the pre-processing workflow. After colour images are Debayered, you will not need to use this option anywhere else. If your light frames are monochrome images however, keep the CFA images option disabled. 

Let us set an output folder by clicking the button next to the text box for Output directory. I selected my Lights_Cal_CC folder, mentioned earlier in this tutorial. 
Picture
Picture
With the output folder set, we now proceed with the actual cosmetic correction. If you have a master dark, you can enable Use Master Dark and as soon as you do, PixInsight will ask you to point it to your master dark. 
Picture
Since we have a master dark to work with, we may not need to enable the Use Auto detect option below Use Master Dark. That option is particularly reserved for people who do not bother with dark frames at all, such as myself at present, due to my low-noise Sony ICX694 sensor. We will cover Use Auto detect in a bit, as we will find it is helpful for extra clean-up. 

In order to see what we are working with, simply double-click one of the light frames listed to open it on the side. An auto-stretch is required at this point. 
Picture
To optimise our working speed, we should define a preview box around an area of the image that shows quite a few hot pixels (clearly visible as isolated, bright white single pixels that are obviously not stars). Also set the newly-created preview box to active so we are only seeing the preview box area. 
Picture
Picture
With this set up, we click Real-Time Preview on CosmeticCorrection to get a window that shows us what is happening to the hot and cold pixels as we tweak settings, but only within our preview box area (which speeds up the updating as we tweak settings!). 
Picture
First we click Enable on Hot Pixel Threshold and then decrease the Sigma value using the slider. As we do this, we observe the real-time preview window to see how our hot pixels disappear. Avoid Sigma values that are far too low, as this will remove an unnecessary number of pixels. You can also increase the Qty value using the slider to remove more hot pixels.
Picture
You may check if there are any cold pixels worth removing by clicking Enable on Cold Pixel Threshold and tweaking this afterwards. Keep in mind however that this may not be required, as is the case in my image above.

You may notice, as I do above, that having used the master dark to remove hot pixels was not enough as there are a good number of hot pixels left over. When this happens, you may wish to also use the Auto Detect function. To do this, simply click to enable Use Auto detect. You can then enable Hot Sigma and tweak the value with the slider or by entering a new value manually. A lower value is more aggressive at removing hot pixels so generally avoid values below 1.0. The higher the value, the better, so long as it does the job for your image. As for cold pixels, you can enable Cold Sigma to help further.
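As a rough sketch of what sigma-based auto-detection does (not CosmeticCorrection's exact implementation), a pixel can be flagged as hot if it sits several standard deviations above its immediate neighbourhood, and then be replaced by the neighbourhood median:

import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(image, hot_sigma=3.0):
    local_median = median_filter(image, size=3)     # 3x3 neighbourhood median
    residual = image - local_median
    hot = residual > hot_sigma * np.std(residual)   # pixels far above their neighbours
    cleaned = image.copy()
    cleaned[hot] = local_median[hot]                # replace with the local median
    return cleaned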

The Use Defect list option can be used to point out where bad columns or rows exist within your image. I find this is not necessary if you have a good master superbias and master dark that in essence present these bad columns and rows as well, as they would be removed by virtue of calibration.

Your CosmeticCorrection settings should adequately clean up your images, like so:
Picture
Once you are happy, feel free to close both the real-time preview window and your image itself. With CosmeticCorrection still open, click Apply Global to apply these cosmetic correction settings to your light frames and output them to the Lights_Cal_CC folder.
Picture
We are now done with the calibration and cosmetic correction of the light frames and can now proceed to their registration (alignment) and integration (stacking). However, if your light frames are colour images from a one shot colour camera, you will first need to Debayer them. This process converts a raw image with colour matrix information into an actual colour image and must be done if your light frames are colour images. The Debayering procedure is made very simple with the use of the Debayer process. 
Picture
This process is very simple to use. Simply click Add Files to add your calibrated and cosmetically-corrected light frames. Once done, select the correct Bayer/mosaic pattern for your camera sensor from the list (check online if unsure, though RGGB seems to be the case with Canon EOS DSLR cameras), set an output directory as usual and click the circular Apply Global button to proceed. In seconds, all your light frames should be adequately Debayered. If your light frames are all monochrome, you can skip the above Debayering procedure entirely and proceed from here on.
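For intuition about the Bayer layout itself, here is a deliberately simplified 'superpixel' Debayer sketch for an RGGB pattern, where each 2x2 cell collapses into one RGB pixel at half resolution. PixInsight's Debayer process uses proper interpolation (e.g. VNG) at full resolution, so treat this purely as an illustration. It assumes the raw image has even dimensions.

import numpy as np

def superpixel_debayer_rggb(raw):
    # RGGB layout of each 2x2 cell:   R G
    #                                 G B
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])       # half-resolution RGB image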

If however your light frames are monochrome images, no doubt you will have others you will need to calibrate and cosmetically correct. This section would have to be repeated for each of those sets of light frames. Before we register and integrate the light frames however, we will first use the SubframeSelector script to help us determine which exposures are best kept and which exposures are best removed. Also, we can use the ​SubframeSelector script to calculate optimum weightings for our light frames, depending on what we wish to prioritise. 
 

5. Selecting the Best Lights and Calculating Optimum Weightings

​​Imaging conditions vary throughout the night, in particular due to the changing position of the target (relative to the Meridian and Horizon), as well as possible thin, high clouds, gusts of wind, etc. For a lot of us, much of the imaging is automated to the point that many long exposures are captured during the course of several hours. Checking each individual exposure separately is possible, but it is certainly not a precise science. PixInsight's SubframeSelector process measures the quality of all your exposures based on numerous important properties such as average star size (FWHM - Full-Width Half-Maximum), star eccentricity (a measure of the distortion from perfectly circular), noise, Signal-to-Noise Ratio (SNR), etc. Measuring all these properties numerically allows you to see how exposures compare to each other, and allows you to exclude some and approve others (for stacking later). Moreover, the SubframeSelector process allows you to define how exposures weigh out compared to each other (giving some priority over others, due to favourable properties). As a result, these weights can be provided to the ImageIntegration process, which will perform the stacking. Due to all this, use of the SubframeSelector process is considered important for the overall quality of the end result. 

​The exposures put through the SubframeSelector process should ideally be calibrated and cosmetically-corrected (to remove hot pixels). For this reason, we move on to using the SubframeSelector​ process now and not earlier. As of PixInsight version 1.8.6, SubframeSelector is a process rather than a script. Therefore, we open it as we do other processes. 
Picture
​​First of all, we must add the images we would like to measure and optimise the weights for. The images we need to add are the ones that have been put through calibration and cosmetic correction (the results from the previous section). To add the images, we click the Add Files button, select all of them and add them. 
Picture
Picture
Picture
​We now look at the System Parameters tab as we need to define some of the values for the process to do its job. Primarily, we must enter the Subframe scale in arcseconds/pixel, which is a property of your telescope and camera combination. If you are unaware of it, you could calculate it using the following equation:
[Arcseconds Per Pixel] = (206.2648 x [Camera Pixel Size in μm]) / [Telescope Focal Length in mm]
​Alternatively, uploading a single raw exposure (uncalibrated) to Astrometry.net will provide you the Pixel Scale once it is plate solved. This is normally more accurate as the plate solving will determine precisely what it is your exposure captured and its pixel scale in arcseconds/pixel​. Once you have the correct value of Subframe scale in arcseconds/pixel, enter it. You should also enter your Camera gain in electrons/ADU​. This is a specification you can either measure yourself (see this article), or simply get from the camera's manufacturer. Once you have the correct value of Camera gain, enter it. 
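As a quick worked example of the equation above (the numbers are placeholders - substitute your own camera pixel size and telescope focal length):

def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    return 206.2648 * pixel_size_um / focal_length_mm

print(pixel_scale_arcsec(5.4, 1000.0))   # ~1.11 arcseconds/pixel for this example setup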
Picture
​​For me, Subframe scale was 1.630 arcseconds/pixel and Camera gain was 0.450 electrons/ADU. You should also select the correct Camera resolution (bit-depth). If imaging with a CCD camera or RAW with a DSLR camera, 16-bit [0, 65535] is the correct option to select. If the image is in an 8-bit format such as JPEG, select that instead. For Scale unit, select Arcseconds (arcsec) instead so that the FWHM measurements are in arcseconds as opposed to pixels. 
Picture
​​We now expand the Star Detector Parameters tab to make some adjustments. Default settings usually work perfectly well, but we can usually afford to reduce Structure layers to 4 and increase Noise layers to 2. This ensures the process does not pick up the larger stars for measurements, and ignores the first two layers (the first of which is largely dominated by small-scale noise). If you need to increase the sensitivity of picking up stars, increase the Sensitivity parameter slightly. Generally, other settings can be left at default values. 
Picture
Once set, we ensure Measure Subframes is selected under Routine at the top and we then click the circular Apply Global button on the bottom of the process. The Process Console will pop up and show you the progress as it measures detected stars in all your images. Note that it should pick up a couple of hundred to a couple of thousand stars. If it does not, you may want to return the Structure layers and Noise layers settings to default, and/or increase the Sensitivity setting. 
Picture
Once the process is done measuring, we pay attention to the results in the middle window. Under the Measurements Table tab, three of the most important properties are shown without having to scroll to the right - FWHM, Eccentricity and SNRWeight. The third, SNRWeight, is a measure of the Signal-to-Noise Ratio of the images. At present, Weight is zero for every exposure listed. This parameter can define how much weighting each exposure gets when stacked, which can be used to give priority to particularly good exposures. Weight can be customised by entering an expression for Weighting in the right window, which we will do momentarily. First, take a look at the graphs for FWHM, Eccentricity and SNRWeight, as they compare the exposures to each other. It is difficult to give general values to look out for, as everyone's images are different and indeed targets and equipment combinations vary wildly. In general however, Eccentricity values of 0.42 or lower are considered to give perfectly pinpoint stars to the human eye even on close inspection, while values of 0.60 or lower still generally look OK. Do not worry too much if yours, like mine above, are nowhere near 0.42, or even if some exceed 0.60. 

In general, the lower the FWHM and Eccentricity values, the better the exposure. Conversely, the higher the SNRWeight value, the better the exposure. A word of warning, however: thin, high clouds do register as signal, so exposures affected by this or something similar can have high SNRWeight values when in reality you may wish to discard them. At this stage, while inspecting the graphs of these three properties, you can exclude exposures by simply clicking their corresponding data points on the graphs. This will turn the data point into an X and the exposure will appear excluded from the table. You may also exclude exposures manually from the table, by double-clicking the green tick to the left of the exposure on the list, or by selecting the exposure from the list and clicking the Toggle Approve button at the top. If you exclude an exposure and then decide to re-include it, ensure you select the exposure from the list and click the Toggle Lock button so it is unlocked. 

You may find it useful to sort the list in ascending or descending order of FWHM, Eccentricity, SNRWeight or any other property. This can be done easily by selecting the property to sort by from the list at the very top, above the table. Select any property you wish to sort by and then select Asc. for Ascending or Desc. for Descending from the second list. 
Picture
Picture
Notice above that the excluded exposure appears excluded in the list and has been marked by an X in all the graphs. Notice also that clearly this exposure is worse than the others in terms of FWHM but not Eccentricity. Generally, excluding exposures is a good idea if you have a large dataset to stack together, or have very obvious bad exposures when compared to the rest. Some exposures will be worse than others, no doubt, but they may not be so bad that they need excluding (or like me in this case, you may not have the luxury of a large dataset to start excluding exposures!). Since all the exposures in my dataset are more or less equal to each other in visible quality, I will not exclude any but will make sure the reduced quality is taken into account in the weighting for each exposure. 

Feel free to exclude exposures at your leisure. You can in fact be quite automated about it by entering an expression in Approval in the right window. For example, if you want to only include exposures whose FWHM is less than 6​, you can enter the following and then click the triangle button on the right of the Approval​ box:
FWHM < 6
Picture
​Notice above that the exposures that do not meet the FWHM criteria set, are excluded automatically. We can extend this expression to also include other properties. For example, if I also want Eccentricity values above 0.7​ to be excluded (this time we include 0.7​ as allowed), I can modify the Approval box contents to (and then click the triangle button on its right once more):
FWHM < 6 && Eccentricity <= 0.7
Picture
The exposures excluded were excluded because they do not meet one or more criteria entered in the expression in the Approval box. However, since I do not want to exclude any of my exposures, I delete the expression and leave the Approval box blank. I also ensure I click the triangle button on its right to apply the lack of expression (so all my exposures are once again included). 

Once you have excluded some exposures, or opted not to exclude any, we can move on to giving the exposures appropriate weights. These weights should ideally take into account all three important properties - FWHM, Eccentricity and SNRWeight​. ​David Ault recommended a particular expression to enter in Weighting in his tutorial. The one presented here is similar but has been modified to prioritise SNRWeight instead of Noise (as SNRWeight takes Noise​ into account as well). Enter the following in the Weighting box of the right window:
(15*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin)) + 15*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin)) + 20*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin))+50
Note above that FWHM and Eccentricity are negative properties - the larger they are, the worse the exposure. This is the reason for the 1 - (normalised FWHM) and 1 - (normalised Eccentricity) terms in the above expression. SNRWeight however is a positive property - the larger it is, the better the exposure - and it is therefore left intact. In previous versions of PixInsight, where SubframeSelector was a script, we had to manually pick out the minimum and maximum values of FWHM, Eccentricity and SNRWeight to enter in the expression. These had to be changed for every dataset as every dataset has different exposures. Thankfully now, with PixInsight 1.8.6 and the new SubframeSelector process, the above expression is universal and does not need editing for particular datasets, as it automatically picks out the minimum and maximum values. 

Beside each property in the expression are multiplier values. For FWHM and Eccentricity, these are 15 and for SNRWeight, this is 20​. Their total is purposefully made to be 50. Basically if you want to prioritise SNRWeight over FWHM and Eccentricity, choose its multiplier to be higher than the multiplier for the other two. You can mix and match the multipliers (totalling 50) to your heart's content, prioritising a property over another as you see fit. I chose to prioritise SNRWeight a bit more because all my exposures are more or less the same, visually speaking, so I prefer the one with the highest SNRWeight to have a larger weighting overall. 

Finally, because the values are normalised and because the multipliers add up to 50, the overall Weight scale ranges between 0 and 50 - 0 being a horrible exposure and 50 being an incredible one. The danger here is that an exposure with a Weight of 0 would contribute essentially nothing to the stacked image, when in reality it may not be that bad an exposure. That is why we add 50 at the very end of the expression - to take the scale to 50 to 100, rather than 0 to 50. This means that even if an exposure is considered pretty bad (Weight close to 50), it will still add something to the stacked image. Keep in mind that if an exposure really is bad, it can simply be excluded by you (by inspecting the graphs or list manually, or by entering an appropriate expression in the Approval box). 
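To make the arithmetic explicit, the Weighting expression above is equivalent to the following sketch (Python, with hypothetical measured values - the Min/Max terms are simply the minimum and maximum of each property across all measured exposures):

def weight(fwhm, ecc, snrw, fwhm_rng, ecc_rng, snrw_rng):
    norm = lambda x, lo, hi: (x - lo) / (hi - lo)      # scale a property to 0..1
    return (15 * (1 - norm(fwhm, *fwhm_rng))           # lower FWHM is better
            + 15 * (1 - norm(ecc, *ecc_rng))           # lower Eccentricity is better
            + 20 * norm(snrw, *snrw_rng)               # higher SNRWeight is better
            + 50)                                      # shift the 0..50 scale to 50..100

# Hypothetical exposure values and dataset (min, max) ranges:
print(weight(fwhm=3.2, ecc=0.55, snrw=12.0,
             fwhm_rng=(2.8, 4.1), ecc_rng=(0.48, 0.71), snrw_rng=(9.5, 14.0)))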

Once you have entered the Weighting expression above, click the triangle button next to the Weighting box to calculate and apply the weightings to your exposures. 
Picture
​​Notice now that my Weight property has been set to range between 50 and 100, as planned. The graph for Weight shows how the exposures compare to each other under this comparison. These are our optimum weightings ​for use in stacking. I have also set the table to display my exposures in ascending order of the newly-calculated Weight property. At the bottom, therefore, is my best exposure​. 

It is now worth opening your best exposure in PixInsight and auto-stretching it to check it out. Check it for artifacts such as satellite tracks or star blooms from thin, high clouds. If the exposure is deemed clean, then we can take note of the name of this particular exposure as it will be used as reference to align all the other exposures in the stack, as well as a reference for LocalNormalization​ (both explained later). If you notice your best exposure has satellite tracks or some star blooming from thin, high clouds, you should check your second best exposure (according to Weight value). Do this until you find a clean exposure of high Weight value to use as reference. I cannot stress enough how important it is for your reference exposure to have the highest Weight possible while also being clean (free from satellite tracks and star blooming). 

We now move on to the left window, on to the Output Files tab at the bottom, in order to export our exposures with the optimum weightings written into them. Before we proceed, select Output Subframes from the Routine list at the top of the left window. 
Picture
Under the Output Files tab, simply set a folder to place the approved exposures into. You can create a sub-folder called Approved or something like it, if you prefer. The Postfix is what will be added to the approved exposure filename. You can delete this if you are placing them in a different folder and do not mind having the same filenames as the original exposures. You can also apply a filename Prefix​ in the same way. It is entirely up to you. 

Once you are done, we need to set a Weight keyword. This will be the keyword embedded into your approved exposures that has the optimum weightings we calculated. It is important because we can later instruct ImageIntegration to use this for the stacking process. I entered SSWEIGHT in mine. 
Picture
Once done, simply click the circular Apply Global button on the bottom and your folder will be populated with the approved exposures, with their optimum weightings embedded under the set keyword. You can then close the SubframeSelector process. If you close the left window (simply titled SubframeSelector), all three windows will close as this is the main process window. 
Picture
Above shows my best exposure open in PixInsight and auto-stretched, with the FITS Headers listed next to it. At the bottom we find the new SSWEIGHT keyword, with the calculated optimum weighting. We can now move on to registering and integrating these images. 
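If you ever want to verify the embedded weighting outside of PixInsight, the keyword can be read back from the header with any FITS library - for example with astropy, assuming the exposure has been saved in FITS format (the filename below is hypothetical):

from astropy.io import fits

header = fits.getheader("light_0001_c_cc_a.fit")   # substitute one of your approved exposures
print(header["SSWEIGHT"])                          # the optimum weighting written by SubframeSelector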
 

6. Registering and Integrating Lights

Since the light frames have been calibrated and cosmetically corrected, they are now considered clean light frames. Moreover, SubframeSelector was used to optimise their weightings. Before we can stack them together however, we need to make sure they are perfectly aligned to each other. To register the light frames with each other, we use the StarAlignment process.
Picture
First, let us add our light frames, specifically the ones that have been calibrated, cosmetically corrected and with their weightings optimised (and Debayered if you had to do this). To do this, click Add Files and select them all.
Picture
Picture
Picture
We need to set one of these light frames as the reference image so that all the others are aligned in respect to it. From the small list box that says View at the top of StarAlignment, select File instead, then click the downward arrow button to the right of it. Select one of your calibrated, cosmetically corrected and weighting-optimised (and Debayered, if applicable) light frames. It is generally recommended you choose the image that has the highest optimised weighting (SSWEIGHT​) as this will act as the best reference, especially if FWHM and Eccentricity were included in the optimised weighting calculation. As aforementioned in the previous section, do make sure this reference is free from satellite tracks and star blooming. 
Picture
Picture
Let us now select an output folder for our registered light frames. As always, we do this by clicking the button next to the text box for Output directory and selecting an appropriate folder. I selected my Lights_Cal_CC_Reg folder, as mentioned earlier in the tutorial.
Picture
Picture
Finally, since we are going to apply the Drizzle algorithm later, we need to enable Generate drizzle data at the top of StarAlignment.
Picture
All that is left is to click Apply Global (the circular button). Your output folder will be populated with the registered light frames. If PixInsight has trouble picking up stars to align with, you can relax its parameters so that it picks up more stars more easily. Since the light frames are now cosmetically corrected and calibrated, they are essentially more reliable for picking up dimmer stars. The first thing to try to help with picking up more stars for alignment is to increase the RANSAC tolerance parameter from its default of 2.00 to something higher such as 6.00 or the maximum of 8.00.
Picture
RANSAC iterations directly below can also be increased slightly, to say 3000. Additionally, you can try decreasing the Log(sensitivity) setting under Star Detection (as it is a logarithmic parameter, if you decrease it, it increases sensitivity).
Picture
Detection layers at the top of Star Detection can also be increased slightly to 6 or 7 to gather more stars for registration. Please keep in mind however that default settings in StarAlignment really do work wonders and you should not really have to tweak them. If you have to tweak them, it is possible that your light frames simply do not have enough stars in them due to too low an exposure time for your optical system, or that the stars are not round or relatively round. This usually indicates things you should improve on when you are imaging.

Once you have successfully registered all your light frames, they will populate your output folder. You will also find some extra files, which are essentially Drizzle data files, one per light frame.
Picture
Your light frames are now ready to be stacked together to produce one very nice result. For this, we will return to ImageIntegration but first, let us use PixInsight's LocalNormalization feature (new as of version 1.8.5​). This feature is a more robust pixel rejection algorithm that ends up producing cleaner-looking stacked images, with less noise in the background, stronger SNR on the objects of interest and smoother transitions. To make use of it, we first need to open the LocalNormalization process. 
Picture
We begin by selecting our reference image, which will again be the best one as determined earlier with the SSWEIGHT optimised weighting parameter. This is particularly good if SNRWeight​ was included in the optimum weighting calculation. As aforementioned in the previous section, do make sure this reference is free from satellite tracks and star blooming. ​In my data's case, the best frame was the last frame. Note that according to Juan Conejero of the PixInsight team, the algorithm works independently of gradients and variations in signal. Therefore, you do not need to eliminate gradients from your reference image prior to using it (e.g. by using DynamicBackgroundExtraction). Your best single frame after calibration and cosmetic correction will do perfectly. 

To select the reference image, we verify the list next to Reference image has File selected, we click the downward arrow next to it and select the best frame for reference image. 
Picture
Picture
We now need to add all our light frames here to be processed. To do this, simply click Add Files and select all your light frames. 
Picture
Picture
Picture
​We will set the Output directory to be the same one where our calibrated, cosmetically-corrected, weighting-optimised and registered light frames are, so the LocalNormalization data files​ will also be stored there. 
Picture
Picture
Default settings generally work extremely well so you need not really touch anything. The parameter which dictates final quality somewhat is the Scale parameter at the top. This defines how large of a normalisation scale the algorithm will use. The values recommended are between 64 and 256. Though the default of 128 works very well in the vast majority of cases, I find 256 works ever so slightly better for my data. ​
Picture
With LocalNormalization now set up fully, we simply click the circular Apply Global button and PixInsight processes the data and produces LocalNormalization data files in our selected folder. This folder now contains the calibrated, cosmetically-corrected, weighting-optimised and registered light frames, their Drizzle data files and their LocalNormalization data files. 
Picture
Your light frames are now ready to be stacked together to produce one very nice result. For this, we return to ImageIntegration and click Reset there to get back to default settings.​
Picture
We begin by adding the calibrated, cosmetically-corrected, weighting-optimised and registered (and Debayered, if applicable) light frames to ImageIntegration by clicking Add Files and selecting them.​
Picture
Picture
Picture
Again, since we are going to apply the Drizzle algorithm later, we need to add our Drizzle data files in here as well. To do this, click the Add Drizzle Files button, select them from the same folder your registered light frames were in and add them.​
Picture
Picture
Picture
A <d> symbol will appear next to each listed light frame, signifying the Drizzle data files have been added successfully. We now need to add the LocalNormalization data files in the same way. To do this, click the Add L. Norm. Files button, select them from the same folder your light​ frames are in and add them. ​
Picture
Picture
Picture
An <n> symbol will appear next to each listed light frame, signifying the LocalNormalization data files have been added successfully. Now, we select Average in Combination, Local normalization in Normalization (note that without LocalNormalization data files in use, we would select Additive​ here) and FITS keyword in Weights. FITS keyword is chosen because we previously embedded optimised weightings with the SubframeSelector script under a particular keyword, that I chose to be SSWEIGHT. For this reason, I enter SSWEIGHT in Weight keyword​. If you choose not to use any such optimised weightings, simply select Noise evaluation in Weights. We also keep Evaluate noise enabled. Also make sure Generate drizzle data is enabled - this will update the Drizzle data files that we will use in the next section.
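Conceptually, using the SSWEIGHT keyword turns the final combination (rejection aside) into a weighted average of the registered light frames. A minimal sketch of that idea in Python with NumPy:

import numpy as np

def weighted_average(frames, weights):
    # frames: array of shape (number_of_frames, height, width), already registered
    # weights: one SSWEIGHT value per frame, e.g. read from the FITS headers
    stack = np.asarray(frames, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    return (stack * w).sum(axis=0) / w.sum()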
Picture
For Rejection algorithm, we follow the exact same guidelines as we did earlier with the master bias and master dark, which are:
  • Averaged Sigma Clipping for < 10 frames
  • Winsorized Sigma Clipping for 10 to 20 frames
  • Linear Fit Clipping for > 20 frames
Since I have 8 light frames, I select Averaged Sigma Clipping. In Normalization, select Local normalization. Leave Clip low pixels and Clip high pixels enabled and you may disable Clip low range and Clip high range.
Picture
We may very well need to tweak the Sigma low and Sigma high parameters, especially if we notice that after we stack the light frames, we still see some cosmic ray hits or satellite trails on our stacked image. Default settings are a good starting point so click Apply Global and your stacked image will appear automatically as a new image within PixInsight, with the integration identifier. 
Picture
An auto-stretch helps reveal the hidden details in your stacked image without altering it. Inspect it closely - zoom in, pan around it carefully, etc. Check if there are any cosmic ray hits or satellite trails in the image. If there are any left, which is particularly common with a low number of light frames used in the stacking process, you will need to be more aggressive with your Sigma low and Sigma high parameters than the defaults. To tweak these, simply close your stacked image and your rejection maps without saving and go back to ImageIntegration. Here, lower Sigma low and Sigma high very slightly. If the defaults of 4.00 and 2.00 do not work, try 3.00 and 2.00. If they also do not remove all cosmic ray hits and satellite trails successfully, try 2.75 and 1.75. The process is an iterative one where you check your stacked image after a small tweak and go back if necessary. Remember that the more aggressive you are (the lower the values used), the more worthwhile signal you are removing. Try to keep these numbers as close to default as possible while removing whatever it is you need to remove. As always, the more light frames you have to work with, the better, but there are practical limits attached to this. 

If you are having problems with the stacking process removing thick satellite or aircraft trails (whether they remain in full or some trace of them remains), PixInsight features (new as of version 1.8.5) a Large-Scale Pixel Rejection algorithm embedded in the ImageIntegration process for just this purpose. There are very few settings to tweak for this so feel free to enable both and play with its two settings to get optimum results. However, it is expected that only Reject high large-scale structures would be enough to target thick satellite and aircraft trails as these tend to be very bright. Further comments on this will be made in a future update as the PixInsight team document this feature properly. 

Once you are happy with your stacked image as a result of using ImageIntegration, do not bother saving the stacked image itself or the rejection maps - just close all three and then close ImageIntegration. This may seem counter-productive but what we are about to do is apply the Drizzle algorithm to the Drizzle data files. These data files are updated each time you run ImageIntegration so once your end result in ImageIntegration is satisfactory, we proceed to deal with the Drizzle data files themselves. 

As before, if you are working with a colour image because you used a one shot colour camera, you are done at this point and can proceed. If however you are working with monochrome images, you will most probably have other filters' images to pre-process. This only involves repeating this section for your other filters' images. 
 

7. Applying Drizzle to the Master Light

The benefits of applying the Drizzle algorithm have been well documented. To apply the Drizzle algorithm, we need to use the DrizzleIntegration process.
Picture
Default settings here work tremendously well in every case I have applied them to. What Drizzle essentially does is up-scale a stacked image to allow the algorithm to interpolate between pixels to produce smoother transitions. This is particularly important when light frames stacked together have a rotational offset between them, since pixels are strictly squares. The end result produces smoother transitions and thus the stacked image suffers less from pixelation. For anyone reading this who is a PC gamer or is into computer graphics, this is akin to supersampling anti-aliasing.

To use DrizzleIntegration, all we need to do is add our Drizzle data files to it. For this, we click Add Files, select all our Drizzle data files and add them.
Picture
Picture
Picture
As mentioned, default settings work extremely well. One important parameter to pay attention to is Scale, which dictates how much larger the end result will be. The default of 2 means that an image of 3000 x 2000 pixels will be up-scaled to 6000 x 4000 pixels and, thanks to the interpolation process, will appear smoother (not blurry!). Generally a Scale of 2 suffices and works well with today's common CCD sensor resolutions. Also keep in mind that large mosaics may become slow to work with if you have applied the Drizzle algorithm to all panels (as I always do). So long as your computer has enough RAM it will cope, even if processes take a while to run. 
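To put the Scale parameter and the RAM remark into perspective, here is a quick back-of-the-envelope sketch; the sensor size matches the example above and the one-32-bit-float-per-pixel-per-channel assumption is purely illustrative:

```python
# Rough arithmetic for how the Drizzle Scale parameter affects image size and memory.
# Assumes one 32-bit floating point value per pixel per channel (illustrative only).
width, height = 3000, 2000   # example sensor resolution from the text
scale = 2                    # DrizzleIntegration Scale parameter
channels = 1                 # 1 for a monochrome image, 3 for an RGB image
bytes_per_value = 4          # 32-bit float

out_width, out_height = width * scale, height * scale
megabytes = out_width * out_height * channels * bytes_per_value / 1024**2

print(f"{width} x {height}  ->  {out_width} x {out_height}")
print(f"Approximate in-memory size: {megabytes:.0f} MB per image")
```

A Scale of 2 quadruples the pixel count, so a mosaic of several Drizzled panels quickly adds up to a lot of memory - hence the note about RAM above.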

Before we continue, we will make use of the LocalNormalization data files we created earlier and have been working with so far. There is no need to click the Enable local normalization setting, as it will be enabled automatically when we add the LocalNormalization data files. To do this, just click the Add L. Norm. Files button, select them all and add them. 
Picture
Picture
Picture
As before, an <n> symbol will appear next to each listed Drizzle data file, signifying that the LocalNormalization data files have been added successfully. Once set, simply click the Apply Global button (the circle icon) and the Drizzled stacked image will appear as a new image within PixInsight, with the drizzle_integration identifier.
Picture
Finally, our end result! This image is the product of light frames that were calibrated with a master superbias, a master dark and a master flat, cosmetically corrected, stacked with LocalNormalization applied and then Drizzled. Your own may also have been Debayered in-between! You may save this image, as it is your officially pre-processed image.

Below we see a comparison of a small area of the Crescent Nebula itself (bottom right corner), zoomed in. The comparison is between the end result of ImageIntegration (no Drizzle and no LocalNormalization), the end result of DrizzleIntegration but without using LocalNormalization and finally, the end result we have worked through here, using both DrizzleIntegration and LocalNormalization. We can clearly see a difference when Drizzle is implemented, as the image is much smoother with the pixelation removed almost entirely (particularly visible on small stars). With LocalNormalization, we can see some boost to SNR and therefore higher contrast between the signal and the background. The apparent difference in image scale from not using Drizzle to using Drizzle is purely due to the nature of the Drizzle algorithm, which up-scales the image resolution for interpolation. 
Picture
As before, this section needs to be repeated for your other filters' images, unless you were working with a colour image, in which case you are done at this point. ​
 

8. Using the BatchPreprocessing Script

Shortcuts, shortcuts - who wants them? This section is not an exhaustive run-down of everything you must do with the BatchPreprocessing script, as this section has been strategically placed here so that by now, you are an expert at pre-processing manually. As a result, I can skip some pleasantries and deal with the issues head-on, referring back to previous sections where applicable.

The BatchPreprocessing script is available through the Script -> Batch Processing -> BatchPreprocessing menu.
Picture
Since this is a script, closing it will discard all your settings, which is why the script asks for confirmation if you try to exit. Before we invest time and effort into setting everything up, we need to prepare something outside the script, and it cannot be created while the script is open. In short, the script can apply cosmetic correction to your images, but it needs a CosmeticCorrection process icon sitting in your workspace, with its parameters already set out, available for selection within the script.

We therefore need to open the CosmeticCorrection process. We click Reset on it to make sure it is at default settings.
Picture
As was done in section 4 above, we set up the CosmeticCorrection process as we see fit for our images. There is a hitch you might encounter, though. If you use darks, in theory the BatchPreprocessing script will create the master dark for you, but you need it now, before the script is ever run - for CosmeticCorrection! So you are left with a choice: forget the master dark and use a single dark frame for CosmeticCorrection (not ideal), make the master dark manually (which requires making the master superbias manually as well), or do not use darks in CosmeticCorrection at all and simply go for Use Auto detect. The first and third options may not seem ideal and the second partly defeats the purpose of using the script in the first place.

The choice is yours. If you choose to make the master dark and thus the master superbias manually, then you can tell the script later on that they are already masters and do not need altering of any kind. Set up CosmeticCorrection as per section 4 above however you feel best about given this choice you have to make. You do not need to set an output directory for CosmeticCorrection since the script will do the outputs automatically. Once you are done playing with CosmeticCorrection, create a New Instance process icon of it on your workspace.
Picture
You can then close CosmeticCorrection, as our process icon with all its saved parameters has been prepared for the script to use. You can drag this process icon off to one side of the workspace to keep it out of the way.

We can now open the BatchPreprocessing script. There are many buttons at the bottom, and in theory you can click Add Files (the bottom-leftmost button) and add everything without worrying about which frame is which - the script is meant to work out what is a bias frame, what is a dark frame, and so on. However, I do not trust it and would rather add each set manually.

Simply click Add Bias and select your individual bias frames.
Picture
Picture
Picture
In the middle of the script, we can now edit the bias frame integration parameters as we did manually earlier. Since I am using 20 bias frames for stacking, I choose Winsorized Sigma Clipping.
Picture
What about the beloved master superbias? What is so super about the master bias the script will generate? Nothing, actually! The script does not use the Superbias process at all, so we lose out on its benefits. You may have already created the master superbias yourself, for calibrating the master dark that you also made yourself in order to use CosmeticCorrection properly. If you did, that is great, since we can use your already-made masters. If you have not, consider doing it, because I would rather use a master superbias I made myself than the master bias the script will make for me. This matters less for the master dark and master flat, because they will follow suit from your master superbias if you provide the script with an already-made one, although as mentioned earlier, you may also already have a master dark for use with CosmeticCorrection.

If you indeed have a master superbias to provide the script with, instead of selecting your individual bias frames when you click Add Bias, simply select your master superbias. Then, enable Use master bias on the right side of the script. This will tell the script that indeed your bias is a master already and need not be altered.
Picture
The bias frame added gains a little star icon next to it to signify it is being treated as a master. This also disables the Image Integration settings as they are not needed at all. Whatever you choose - individual bias frames or a master superbias - is up to you. I personally prefer using a pre-made master superbias.

We now move on to the dark frames. Click Add Darks and select your individual dark frames to add.
Picture
Picture
Picture
As before with the bias frames, I select the appropriate Rejection algorithm, which in my case with 20 dark frames is Winsorized Sigma Clipping.
Picture
Once again, if you already have a pre-made master dark that you would like to use instead of the individual dark frames, simply add this master dark instead of your individual dark frames. Once done, enable Use master dark on the right side of the script. Please note that the script will subtract the master bias from the master dark, and this behaviour cannot be disabled by the end user. This means that when you make your master dark manually, you must not subtract the master bias from your individual dark frames prior to integrating them into a master dark. Simply integrate all your raw dark frames into a master dark and that is it.
Picture
Once again, the dark frame added gains a little star and the Image Integration settings are disabled since they are not needed. If you do not work with dark frames at all (like myself at this time), skip this part entirely - the script will warn you that no dark frames have been selected, but it can continue working regardless.
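To see why the master dark you hand to the script must be integrated from raw dark frames, with the bias still in it, here is a trivial numeric sketch of the subtraction the script performs internally (the ADU values are made up):

```python
# Why a pre-made master dark given to BatchPreprocessing must NOT already have the
# bias subtracted: the script subtracts the master bias from it internally.
# All numbers below are made-up ADU levels, for illustration only.
bias_level = 300.0      # signal present in the master bias
dark_signal = 25.0      # genuine dark current accumulated during the exposure

master_dark_from_raw_darks = bias_level + dark_signal   # what the script expects
master_dark_bias_presubtracted = dark_signal            # what it must NOT be

print(master_dark_from_raw_darks - bias_level)          # 25.0  -> correct dark signal
print(master_dark_bias_presubtracted - bias_level)      # -275.0 -> bias removed twice
```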

We now move on to the flat frames. We click Add Flats and select our flat frames.
Picture
Picture
Picture
If, when you add the flat frames, you notice there is no filter label under the Binning label on the list, you may have to add the flat frames manually. If you do not, and the filter labels do not match once you add the light frames, the script will not apply the flat frames to the light frames; you will be warned of this when you click Run or Diagnostics. If you need to add the flat frames manually due to the missing filter label, click Clear at the top (next to the list) and then click Add Custom at the bottom.
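As an aside, the filter label the script displays is normally picked up from your frames' FITS headers (typically a FILTER keyword), so a missing label usually just means that keyword was never written by your capture software. If you want to check this outside PixInsight, here is a quick sketch using Python's astropy package (the file name is a placeholder):

```python
# Quick check (outside PixInsight) of whether a frame carries a FILTER keyword in its
# FITS header - a missing keyword is the usual reason no filter label appears.
# Requires the astropy package; the file name below is just a placeholder.
from astropy.io import fits

header = fits.getheader("Flat_HA_001.fit")
print(header.get("FILTER", "No FILTER keyword present"))
print(header.get("XBINNING", "No XBINNING keyword present"))
```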
Picture
Picture
Here, simply click Add Files, select your individual flat frames, select Flat field under Image type, enter the filter name in Filter name, such as HA, set Binning to whatever you used (1 for my images) and click OK (there is no need to enter an exposure time).
Picture
Picture
As per what we did in section 3, we set Percentile Clipping as the Rejection algorithm and tweak the Percentile low and Percentile high parameters if necessary (leave defaults if your flat frames are made with a lightbox, flat field generator or electroluminescent panel or change both to 0.010 if your flat frames are made using the sky at dawn).
Picture
No doubt you now know what is coming - what if we want to use a pre-made master flat? Simply add this master flat instead of the individual flat frames and then enable Use master flat on the right of the script.
Picture
If you again find that the filter label has not appeared below the Binning label, click Clear, click Add Custom, click Add Files and select your master flat, select Flat field from Image type, enter the filter name in Filter name, such as HA, enter your binning mode in Binning and click OK (again, no need to enter an exposure time). Once the master flat is on the list, make sure you enable Use master flat on the right of the script.
Picture
Picture
Finally, we now proceed with the light frames. We click Add Lights and add our individual light frames.
Picture
Picture
Picture
Again, pay attention to the filter name label under the Binning label. If it does not match the filter name label on your flat frames, the flats will not be applied. If they do not match, click Clear and then Add Custom. Click Add Files and select your light frames. Select Light frame from Image type, enter the correct filter name in Filter name, such as HA (making sure it matches your flat frames' filter name), set the Binning mode you used and enter the correct Exposure time in seconds. Click OK once done. 
Picture
Picture
Now that our light frames are listed properly, we enable Apply for CosmeticCorrection and then we select our process icon from the list. 
Picture
This will allow the script to cosmetically correct our light frames using our preset parameters (which we know work well!). At this point it is important to enable CFA images on the right of the script if you are working with colour images captured with a one shot colour camera. Enabling this will enable the DeBayer section under Lights. Simply select the correct Bayer / mosaic pattern from the list (again, check online for your camera's sensor Bayer matrix pattern, and keep in mind RGGB has worked for the Canon EOS DSLRs I have used). 
Picture
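A Bayer / mosaic pattern such as RGGB simply describes how the red, green and blue filters are laid out over each 2 x 2 block of sensor pixels; Debayering then interpolates the two missing colours at every pixel. A trivial, purely illustrative sketch of the RGGB layout:

```python
# The RGGB Bayer / mosaic pattern: every 2x2 block of sensor pixels is covered by
# Red, Green on the top row and Green, Blue on the bottom row.
rggb_block = [
    ["R", "G"],
    ["G", "B"],
]
for row in rggb_block:
    print(" ".join(row))
```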
If you are working with monochrome images, do not enable CFA images - keep it disabled! Since my light frames are monochrome, I keep CFA images disabled. For Image Registration, leave Generate drizzle data enabled so we can apply the Drizzle algorithm later. If you click the Registration parameters button, you will see its settings. 
Picture
Here, the default settings work very well. If you are having trouble with registration, it may be that your light frames do not contain an adequate star field for registration, which could be due to exposure times that are too short, badly misshapen stars, etc. If this is the case, you may wish to forego applying Image Registration through the script at all. Click the red cross in Image Registration to return to the main menu for Lights and, if you do indeed wish to forego registration through the script, enable Calibration Only at the very top. 
Picture
I recommend instead that you disable Apply under Image Integration and forego integrating the light frames through the script altogether. The script itself recommends this, since it cannot detect whether or not it has got rid of cosmic ray hits and satellite trails in your stacked image. Clicking the Integration parameters button allows you to customise the required parameters such as Sigma low and Sigma high (see section 6), but tweaking them would require running the entire script again and again. It is faster to simply forego integration through the script and do it manually after the script has done its job. 
Picture
Now that we have configured the script as required, we have to select one of our light frames to act as reference for the rest to be registered with respect to it. To do this, pick one particularly good light frame and just double-click it on the list to the left of the script. 
Picture
All that is left for us to do is set an output folder. Simply click the button next to the text box for Output Directory and select an empty folder. I created one called Script Output. 
Picture
Picture
Please note that as of PixInsight version 1.8.5, the option Output file suffix is no longer present and the script will therefore output images in PixInsight's default XISF format. This is not an issue so do not worry. We can click Diagnostics at the bottom to verify if everything is OK before we proceed. ​
Picture
The script reports that everything is OK: the bias, dark and flat frames are ready, the light frames are entered and everything is configured properly. If you are not using dark frames at all (as is my case), you will get a warning that no dark frames have been entered - this particular warning is safe to ignore. You may also be warned that no flat frames have been selected to apply to the light frames. This happens when the filter name labels do not match, so make sure you have addressed that if you get this warning! Once happy, click Run to execute the script and let it do its job. 

In the output folder you set, you will get a number of folders once the script is done processing. 
Picture
If you did not use pre-made masters for all three calibration frame sets (bias, dark and flat), and/or you left Image Integration enabled for your light frames, you will have a third folder called masters where everything the script stacked is stored. Since I gave the script master calibration frames and chose not to integrate my light frames, that folder has not appeared. This is fine. The registered folder is the one to pay attention to, since it contains your light frames after cosmetic correction, calibration, Debayering (if applicable) and registration. 
Picture
The images themselves are contained within a subfolder with the same name as your filter name label. You will also find the Drizzle data files here since we left Generate drizzle data enabled. To proceed with our pre-processing, we now need to switch to manual mode and use the ImageIntegration process to stack the light frames output by the script. The exact same procedure applies as was detailed in section 6 for integrating the light frames. 
Picture
And once you have worked with the ImageIntegration process enough to make sure no cosmic ray hits or satellite trails are left in your stacked image, we proceed to the DrizzleIntegration process exactly as detailed in section 7. 
Picture
This gives us the same end result as we achieved in fully manual mode in previous sections. 
Picture
The question now is - do you want to use the BatchPreprocessing script or do you want to do everything manually? It is up to you. It may be faster to use the script when you have a lot of data to pre-process, but I prefer the control of doing the pre-processing manually. Besides, as you have noticed, there are several things you should do manually anyway. For ideal results, the BatchPreprocessing script may not get you to the end result much faster than doing things manually, especially once you have memorised all the settings or have process icons set up so that everything is done by simply selecting the frames. 

Worthy of mention at this point, before closing this section on the script, is that you can take advantage of the script's filter name labels to allow it to do multiple images at once (great for those using monochrome CCD cameras). Simply add all your flat frames (or the pre-made master flats), making sure each set has the correct filter name label assigned (as discussed above). Then, do the same with your light frames, making sure you add them with the corresponding filter name labels (so that each set of light frames you add has the same filter name label as the flat frames or master flat that is meant to go with them). When you run the script, the script will then apply the master bias (or master superbias) and master dark to all your light frames, but will only apply the correct master flat to the light frames it belongs to. The question is whether or not your cosmetic correction process icon settings apply equally well to all the light frames you place in the script, but that can be tested (or accepted as a loss of control that you do not mind, in favour of saving some time). 
 

9. Dealing with Data from Various Nights

Inevitably, someone will eventually ask the question - what do I do if I have a set of light frames and their corresponding flat frames from one night and need to combine them with another night's sets? The answer is simpler than it may seem. Let us explore the potential problems. 

Flat frames often change over time, as dust motes move, disappear or new ones appear. You certainly do not want to use a set of flat frames that does not correspond to its light frames, because you could end up leaving donut-shaped light or dark patches in your light frames after calibration. Bias and dark frames are not as much of an issue, primarily because they are very faint and because a camera's bias and dark current signal may not change appreciably over time. However, let us say you captured some light frames of an object in 2014 with corresponding bias, dark and flat frames, and then repeated this in 2015. You now have two datasets for the same object. The question is how to combine everything so that your 2015 data boosts your 2014 data and produces an even better end result.

The best way forward is to take your first dataset, say the one from 2014, and apply the following initial steps from the PixInsight pre-processing workflow detailed in section 1:
  1. Stack bias frames to produce a master bias. Transform this into a practically noise-free master superbias. 
  2. Calibrate dark frames with master superbias. Stack these dark frames to produce a master dark. 
  3. Calibrate flat frames with master superbias and master dark. Stack these flat frames to produce a master flat. 
  4. Calibrate the light frames to stack with the master superbias, master dark and master flat. 
  5. Apply cosmetic correction to the calibrated light frames to stack. 
  6. Debayer the light frames if they are colour images such as those captured using a one shot colour CCD or DSLR camera. 
  7. Select the best light frames and optimise their weightings. 
This will leave you with unregistered but very clean light frames for your first dataset. Very clean in the sense that they have been cosmetically corrected and calibrated fully with their corresponding calibration frames. Once you have done this for the first dataset, do the same for the second dataset, say from 2015. This will again give you a set of very clean light frames for your second dataset. 
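For reference, the per-frame arithmetic behind steps 1 to 4 above is the standard calibration maths. Below is a minimal NumPy sketch of it - purely illustrative, with made-up values, and with exposure-time scaling of the dark and any output pedestal ignored:

```python
# Minimal sketch of the per-frame calibration arithmetic behind steps 1-4 above.
# Illustrative only: made-up values, no dark scaling, no output pedestal.
import numpy as np

def integrate(frames):
    """Stand-in for image integration: a simple per-pixel average of a stack."""
    return np.mean(frames, axis=0)

def calibrate_light(light, superbias, master_dark, master_flat):
    """master_dark is bias-free here, as its darks were calibrated with the superbias."""
    flat_norm = master_flat / np.mean(master_flat)   # normalise the master flat to mean 1
    return (light - superbias - master_dark) / flat_norm

# Tiny made-up 2x2 frames, just to show the order of operations.
superbias   = np.full((2, 2), 300.0)
master_dark = integrate([np.full((2, 2), 325.0) - superbias for _ in range(3)])
master_flat = integrate([np.array([[0.98, 1.02], [1.01, 0.99]]) for _ in range(3)])
light       = np.array([[1200.0, 1250.0], [1180.0, 1300.0]])

print(calibrate_light(light, superbias, master_dark, master_flat))
```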

Now that you have two sets of light frames that in their own right have been cosmetically corrected and calibrated, all it takes is performing the rest of the PixInsight pre-processing workflow steps from section 1:
  1. Register the light frames to stack with each other. 
  2. Stack the light frames. 
  3. Apply the Drizzle algorithm to produce an end result.
This will give you a nice combined stacked image that includes everything from your first and second datasets. If you have more than two datasets, the same workflow applies. 

What is not recommended is that you produce a stacked image for the first dataset and then a stacked image for the second dataset, and then add them (or rather, stack these two stacked images together). This throws away much of the benefit of having so many exposures to begin with: pixel rejection and frame weighting work best across the full set of frames, and averaging two finished stacks weights the individual frames unevenly. 
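As a quick back-of-the-envelope illustration of the cost, assume for simplicity that every light frame carries the same noise, and ignore pixel rejection entirely (which also suffers when the frames are split up):

```python
# Why averaging two finished stacks is worse than integrating all frames together.
# Assumes every frame has the same noise sigma; pixel rejection is ignored entirely.
import math

sigma = 1.0                   # per-frame noise (arbitrary units)
n_first, n_second = 10, 30    # e.g. 10 frames from 2014 and 30 frames from 2015

# Integrating all 40 frames in one go: noise drops as 1/sqrt(N).
noise_all = sigma / math.sqrt(n_first + n_second)

# Averaging the two finished stacks instead: each frame in the small stack
# effectively counts three times as much as each frame in the large one.
noise_stack_of_stacks = 0.5 * math.sqrt(sigma**2 / n_first + sigma**2 / n_second)

print(f"All frames at once:      {noise_all:.3f}")              # ~0.158
print(f"Stack of the two stacks: {noise_stack_of_stacks:.3f}")  # ~0.183
```

Integrating all frames in one run gives the lower noise figure, and it also lets the rejection algorithms see the full stack of values at each pixel.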

Given this workflow, you could simply keep a backup of only your cosmetically corrected and calibrated light frames from each night. Collect as many of these very clean light frames over the course of multiple nights and then whenever you feel like post-processing the end result, proceed to stack everything together. 
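If you do keep per-night folders of these clean light frames, gathering them later for a combined stack is trivial. A small sketch, with hypothetical folder names and assuming the frames were saved as XISF files:

```python
# Collect the clean (calibrated + cosmetically corrected) light frames from several
# per-night backup folders, ready to be registered and stacked together.
# Folder names and the file extension are hypothetical examples.
from pathlib import Path

session_folders = [
    Path("Crescent_Nebula/2014-08-20/clean_lights"),
    Path("Crescent_Nebula/2015-07-11/clean_lights"),
]

all_lights = sorted(p for folder in session_folders for p in folder.glob("*.xisf"))
print(f"{len(all_lights)} light frames collected for registration and stacking")
```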

 