Tutorial (PixInsight):
Producing an HDR Image
Not many targets in the night sky merit an HDR image. Most deep-sky objects are very faint, and long exposures on their own generally suffice to bring out all the fine detail in both the relatively bright and the relatively faint regions. There are, however, some exceptions, such as the M42 Orion Nebula and the M31 Andromeda Galaxy. These objects have extremely bright cores relative to their very faint surroundings, and it is objects such as these that benefit from HDR imaging. The process generally involves capturing a stack of short exposures and another stack of long exposures. Some may opt to capture several more stacks of medium-length exposures in between (e.g. 30 seconds, 1 minute, 2 minutes, 5 minutes and 10 minutes). It is all a matter of how much relative detail you wish to retain in the HDR image and how long you are willing to work on the target.
PixInsight has a particular process dedicated to producing an HDR image - HDRComposition. This tutorial goes over how to use this simple process and how to then end up with an image that indeed looks like an HDR image (as stretching to non-linear makes it seem the detail has again been lost). As with other tutorials, there are a number of preparatory steps that need to be taken for the images to be ready to produce an HDR image. These are detailed at the beginning of the tutorial.
Assumed for this tutorial:
- Knowledge of operating PixInsight, related to dealing with images and processes (read this, sections 3 and 4).
- Your images have already been pre-processed fully (read this).
- Your images have all been registered with each other (read this, section 1).
Please feel free to ask questions or leave comments via the comments section on the bottom of this page.
1. Initial Preparations for the Different Exposure Images
The images that will be used in your HDR image should always be fully pre-processed. Moreover, these images must be registered with each other for the procedure to work properly, meaning they must be properly aligned to one another. Registration is not the subject of this particular tutorial (it is covered in another tutorial) but can be carried out quickly with StarAlignment, perhaps using the longest exposure image as the reference image (as it will naturally have more stars to act as references). Also note that your images must be in their linear state and therefore not stretched.
The images below are Hydrogen-Alpha images of the M42 Orion Nebula, autostretched for demonstration purposes. The left image is a stack of 1 minute exposures and the right image is a stack of 7 minute exposures. These will be used to produce an HDR image of the target.
The above shows that the 7 minute exposures image (on the right) is much more finely detailed in the fainter surrounding areas. Though both images above appear as if the core is already saturated, removing the autostretch demonstrates that the 1 minute exposures image (on the left) shows the four Trapezium stars within the nebula core:
Conversely, the nebula core is completely saturated in the 7 minute exposures image. This is the key to producing an HDR image - we wish to display detail in both the nebula core and in the fainter surrounding areas.
Though both of these images are technically ready to be used to produce an HDR image (since they are fully pre-processed and registered with each other), it is also a good idea to remove any background gradients with DynamicBackgroundExtraction. This is done on both individually, so each are as clean as they can be. This is again beyond the scope of this tutorial and is covered in another tutorial.
Above shows the same two images after both have been treated with DynamicBackgroundExtraction. These are shown autostretched (top) and in their original linear state (bottom). The images are now fully prepared to produce an HDR image, having been fully pre-processed, registered with each other and having had their background gradients removed. They are all also in their linear state. Ensure your prepared images are saved as image files as we will need actual image files rather than images open in PixInsight.
2. Creating the HDR Composite with HDRComposition
In order to produce the HDR image, we will need to open the PixInsight process dedicated to this - HDRComposition. We do not need to have any images actually open in PixInsight as this process works with actual image files.
We first need to add the prepared images to the list of Input Images. For this, we click the Add Files button, look for the prepared images and add them.
Please keep in mind default options in HDRComposition tend to work exceptionally well, but we will explore what they do. The option Automatic exposure evaluation should be kept enabled. This will not use FITS headers or anything like that to determine which image is composed of longer exposures - it will determine it quickly through image statistics. Just in case it somehow fails however, you should always have your longest exposure image at the top of the list under Input Images, with increasingly shorter exposure images going down the list. In my case, the images are listed in correct order - the 7 minute exposures image is at the top with the 1 minute exposures image below it. It is a good idea to keep Reject black pixels enabled so that we do not accidentally have black pixels from a shorter exposure image replace saturated pixels in a longer exposure image (thus appearing as pixel-sized holes). Moreover, to produce a satisfactory HDR image retaining all details, we must keep Generate a 64-bit HDR image enabled as 64-bit bit depth is greatly beneficial for HDR images (for non-HDR images, 32-bit generally suffices). Output composition masks is recommended to be kept enabled as it will show you what data in the longer exposure images was replaced by shorter exposure image data. These masks come up when the HDR composition is created, and can be closed without saving.
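To illustrate how an exposure order might be inferred from image statistics alone, here is a minimal Python sketch. This is purely illustrative with a function name of my own invention; it is not PixInsight's actual exposure evaluation algorithm, which is considerably more sophisticated.

```python
import numpy as np

def order_by_exposure(images):
    """Sort linear images from longest to shortest effective exposure,
    using the median pixel value as a rough brightness statistic.
    (Illustrative only -- not PixInsight's actual algorithm.)"""
    return sorted(images, key=lambda img: np.median(img), reverse=True)

# Example: a brighter (longer-exposure) frame sorts first.
short = np.full((4, 4), 0.01)   # simulated stack of 1 minute exposures
long_ = np.full((4, 4), 0.07)   # simulated stack of 7 minute exposures
ordered = order_by_exposure([short, long_])
```

The median is a reasonable proxy here because a longer exposure of the same field raises the whole pixel distribution, while being robust against a few saturated stars.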
Binarizing threshold is perhaps the most important parameter that can be tweaked. The default value of 0.8000 works very well in most cases. This parameter defines which pixels of a longer exposure image (in terms of their relative brightness) are replaced by pixels from a shorter exposure image. If the value is too high, a longer exposure image may be insufficiently combined with a shorter exposure image (losing bright area detail in the HDR image). If the value is too low, too much of a longer exposure image is replaced, so the faint areas it picked up may not contribute enough to the composition (losing faint area detail in the HDR image). Experimentation is key to getting the best result for your data, though it is a good idea to test the default value of 0.8000 before tweaking it.
Mask smoothness defines how smooth to make the mask that corresponds to the bright areas in a longer exposure image that will be replaced by a shorter exposure image. The default value of 7 can work well but there is little reason not to increase this to the maximum value of 25. This produces a smooth transition between the bright areas that have been replaced and the fainter surrounding areas in the HDR image. Mask growth can generally be kept at its default value of 1 unless you notice that the bright areas that have been replaced appear surrounded by dark halos or other artifacts. In those scenarios, increasing this value slightly may help.
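The interplay of Binarizing threshold, Mask growth and Mask smoothness can be sketched in Python as follows. This is an illustrative approximation with a hypothetical function name, not PixInsight's implementation, and it assumes SciPy is available:

```python
import numpy as np
from scipy import ndimage

def composition_mask(long_img, threshold=0.8, growth=1, smoothness=7):
    """Build a replacement mask for one long-exposure image.
    Illustrative sketch only -- the parameter names mirror
    HDRComposition's, but this is not PixInsight's code."""
    # Binarizing threshold: mark pixels at or above the threshold
    # (relative to the image maximum) for replacement
    binary = long_img >= threshold * long_img.max()
    # Mask growth: dilate the binary mask outwards by 'growth' pixels
    grown = ndimage.binary_dilation(binary, iterations=growth)
    # Mask smoothness: blur the mask so the replaced region blends
    # smoothly into the surrounding long-exposure data
    return ndimage.gaussian_filter(grown.astype(float), sigma=smoothness / 2.0)
```

Raising the smoothness widens the soft transition zone of the mask, which is why a high value avoids visible seams around the replaced core.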
Below is my resulting HDR image after clicking Apply Global in HDRComposition. This was with default settings except for Mask smoothness, which was increased to its maximum value of 25.
Two images have been output by HDRComposition. The left image is my HDR image. The right image is the mask, showing what areas of the longer exposure image were replaced by the shorter exposure image. Due to my Mask smoothness setting of 25, the mask is very smooth. I do note, however, that though the area replaced was for the most part the saturated region, a slightly larger area would have been better. To enlarge the areas being replaced, we lower Binarizing threshold. A value of 0.600 yielded a better end result for my data.
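Conceptually, the composition step blends the rescaled shorter exposure image into the masked areas of the longer exposure image. Here is a rough Python sketch of the idea (illustrative only, with a hypothetical function name; this is not PixInsight's actual code):

```python
import numpy as np

def hdr_combine(long_img, short_img, mask):
    """Blend a short-exposure image into the saturated areas of a
    long-exposure image. The short image is first rescaled to the
    long image's flux scale with a least-squares fit over the
    unsaturated pixels. Conceptual sketch, not PixInsight's code."""
    unsat = mask < 0.5   # pixels the long exposure resolves properly
    scale = np.sum(long_img[unsat] * short_img[unsat]) / np.sum(short_img[unsat] ** 2)
    # Outside the mask keep the long exposure; inside it, substitute
    # the rescaled short exposure (the smooth mask blends the two)
    return (1.0 - mask) * long_img + mask * (scale * short_img)
```

Because the short exposure is rescaled rather than used at its native brightness, the composite can hold values above the saturation level of the long exposure, which is exactly why the 64-bit output option matters.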
As the area replaced is now shown by the mask image on the right to be more extended, but not ridiculously so, I am happy with my end result and close both the mask image and the HDRComposition process. Below shows my HDR image autostretched:
In its linear state, the HDR image clearly showed the four Trapezium stars in the nebula core, and autostretched it shows a lot of detail in the faint surrounding nebulosity. This has achieved what we wanted - an HDR image containing both very bright and very faint areas. The issue you may have picked up on is that the above image, autostretched, no longer shows detail in the bright nebula core. We can compress the dynamic range in order to fix this, but first we must actually stretch the image to non-linear.
3. Stretching the HDR Image and Compressing its Dynamic Range with HDRMultiscaleTransform
As this tutorial is based on creating an HDR image, we will not go over the process of stretching the image to non-linear in exhaustive detail. This is the subject of another tutorial. For this example, I simply transferred the ScreenTransferFunction autostretch parameters to HistogramTransformation and applied them as a permanent stretch to the image. Therefore, though it looks the same, the image below is now stretched and therefore non-linear.
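For reference, the parameters transferred from ScreenTransferFunction are applied through PixInsight's midtones transfer function (MTF). Here is a minimal Python sketch of the permanent stretch; the function names are my own, though the MTF formula is the documented one:

```python
import numpy as np

def mtf(m, x):
    """PixInsight's midtones transfer function: maps the midtones
    balance m so that mtf(m, m) == 0.5, with 0 -> 0 and 1 -> 1."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch(image, midtones, shadows=0.0, highlights=1.0):
    """Permanent histogram stretch from autostretch parameters:
    rescale by the shadows/highlights clipping points, then apply
    the midtones transfer function. Sketch only."""
    t = np.clip((image - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(midtones, t)
```

Transferring the STF parameters to HistogramTransformation simply bakes this same mapping into the pixel data, which is why the stretched image looks identical to the autostretched preview.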
The process used to compress the dynamic range and therefore regain the HDR details in the HDR image is HDRMultiscaleTransform.
For HDRMultiscaleTransform, we note that the Number of layers parameter defines at what wavelet scale the dynamic range compression should take place. In terms of looks, the lower this value, the flatter the image appears to be. The higher this value, the less aggressive the changes appear to be (particularly to bright areas). We can tweak this value to achieve a good overall look for the HDR image. Below shows the results for Number of layers values of 5 to 8. All other settings are kept at their defaults.
Once you have decided which value of Number of layers gives the best overall look to your HDR image, set that value. In my case, I thought Number of layers set to 7 provided a good look, with the four Trapezium stars in the nebula core being visible and the rest of the nebula not looking flat. After you decide on a good value for Number of layers, you can try setting Number of iterations to 2 to see if you get a nicer looking end result. For me, Number of iterations set to its default of 1 more than sufficed. The Overdrive parameter can also be increased slightly to provide a slightly more aggressive compression of the dynamic range. Note that other processes such as LocalHistogramEqualization can be used to further increase contrast in bright and dark regions, though this is applied later in post-processing.
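As a conceptual illustration of what compressing the dynamic range at a chosen scale does, here is a toy Python sketch. This is a crude approximation and not how HDRMultiscaleTransform's wavelet-based algorithm actually works; it assumes SciPy is available:

```python
import numpy as np
from scipy import ndimage

def hdr_multiscale(image, n_layers=6, compression=0.5):
    """Toy dynamic-range compression in the spirit of
    HDRMultiscaleTransform: split the image into small-scale detail
    and a large-scale residual at scale ~2^n_layers pixels, compress
    the residual, and recombine. Illustrative only -- the real
    process uses wavelet layers and masking, not this simple split."""
    sigma = 2 ** n_layers
    residual = ndimage.gaussian_filter(image, sigma=sigma)  # large scales
    detail = image - residual                               # small scales
    compressed = residual ** compression                    # flatten bright areas
    out = compressed + detail
    out -= out.min()
    return out / out.max()                                  # renormalize to [0, 1]
```

The sketch shows why a lower Number of layers flattens the image more: the compression then acts on progressively smaller structures, leaving less large-scale contrast untouched.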
At this point your HDR composition is complete. Though the above example is monochrome Hydrogen-Alpha, this procedure works identically for monochrome LRGB images as well as One Shot Colour images. In fact, if you are imaging a target in monochrome with the intention of producing a colour HDR image in the end, you will need to apply this procedure to the Luminance image and to the colour RGB image separately, before post-processing further.
The following is a general workflow for producing a colour HDR image of a target imaged in monochrome LRGB:
- Colour-combine R, G and B to produce a colour RGB image for each exposure time stack.
- Produce an HDR composition of all the colour RGB images using HDRComposition.
- Produce an HDR composition of all the Luminance images using HDRComposition.
- Post-process both HDR images separately to later combine, as normal. This post-processing must include use of the HDRMultiscaleTransform process in order to compress the dynamic range.