Light Vortex Astronomy
Tutorial (PixInsight):

Sharpening Fine Details


As one images through the atmosphere, seeing and transparency distort an image that would otherwise (in theory) be dead-sharp. Deconvolution is a mathematical operation that can in effect undo some of this distortion, by first modelling it using stars in the image. This is done very early on in a post-processing workflow, in an image's linear state. Later on, when the image is non-linear, further sharpening of fine details can be applied at specific detail levels (pixel scales), and this is the job of MultiscaleLinearTransform. Overall, the end result is a much, much sharper image with potential for very high-contrast fine details.

This tutorial aims to discuss the two main processes used for sharpening fine details in an image using PixInsight: Deconvolution and MultiscaleLinearTransform. Classically, ATrousWaveletTransform can be used as well, but it is being phased out as obsolete by the PixInsight developers. However, MultiscaleLinearTransform using the Starlet transform algorithm works identically, and so this tutorial covers that instead. It is important to note that these techniques work identically on monochrome and colour images. Also, as with any kind of targeted post-processing of images, masks are the key to protecting areas that should not be sharpened. Producing masks is not the subject of this tutorial (another tutorial covers this - see the list below), but they are briefly discussed for completeness.

Assumed for this tutorial:
  • Knowledge of operating PixInsight, related to dealing with images, processes and working with masks (read this, sections 3, 4 and 6).
  • Your images have already been pre-processed fully (read this).
  • Knowledge of producing different types of masks (read this). 

​Please feel free to ask questions or leave comments via the comments section on the bottom of this page. 

Contents
1. Deconvolution with DynamicPSF
2. MultiscaleLinearTransform
Comments

 

1. Deconvolution with DynamicPSF

+ By virtue of the mathematics behind it, can effectively cancel out some of the distortion introduced by the atmosphere. 
+ Good results are very reproducible with somewhat similar settings between different images so application is relatively easy. 

- Has a tendency to produce dark ringing on stars if the mask protection is not good. Deringing can help prevent this. 
- Takes a while to set up as one needs to make the PSF image model first. 
- Is time-consuming to run and so using small preview boxes to test out settings is almost essential. 

​
Deconvolution, though a process name in its own right, is a mathematical operation that can be performed on functions (or images, in this case). The basic theory is that the light from deep space is pure and could produce a perfectly sharp image in its own right. Unfortunately, as this light has traversed our atmosphere, seeing and transparency have distorted its capture, and your equipment may also have distorted it somewhat. We can accept that these distortions combine into something that could, in theory, be modelled mathematically. Deconvolution attempts to model this total distortion function from a number of sample stars and then tries to undo the total distortion. The effective end result is a sharper image. How much benefit you gain from applying Deconvolution depends on your imaging resolution. For long focal length, small pixel setups, Deconvolution can be very revealing, whereas for short focal length, large pixel setups, it may not add any significant detail. Obviously the benefit you gain also depends on the night sky conditions you image through.
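The Richardson-Lucy scheme that Deconvolution's default mode is based on can be sketched in a few lines. The following is an illustrative NumPy implementation, not PixInsight's actual code: the observed image is modelled as the true image convolved with the PSF, and each iteration multiplies the current estimate by a correction ratio.

```python
# Illustrative Richardson-Lucy deconvolution (a sketch, not PixInsight's
# implementation). Model: observed = true_image convolved with psf.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=50):
    psf = psf / psf.sum()                     # the PSF must integrate to 1
    psf_mirror = psf[::-1, ::-1]              # flipped PSF for the correction step
    estimate = np.full_like(observed, observed.mean())  # flat starting estimate
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each iteration pushes flux back towards where the PSF spread it from, which is why successive iterations sharpen less and less as the estimate converges.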

It is very important to note from the outset that Deconvolution only truly works on linear images. This is because we will create a distortion model with the DynamicPSF process, and this requires linear data to model the distortion function properly. Generally speaking, Deconvolution is applied very early in one's post-processing workflow. This could be at the very beginning, straight after all pre-processing is done and prior to noise reduction in the linear state (certainly before stretching the image to non-linear!).

Due to the above, we will work with a linear image only. Before we open the Deconvolution process however, we need to create a model of a star based on the image itself. This is done with the DynamicPSF process, shown open below alongside a monochrome Hydrogen-Alpha image in its linear state and auto-stretched.
Picture
Since this process is dynamic in nature, it opens a session on the image you are on. What we need to do now is click on a number of stars scattered around the image. The stars you select should not be too large or too small, or have excessive halos around them. I prefer selecting stars along the centre of the image, then from each of the four corners and some from the centres of the four sides. I normally end up selecting about 80 to 100 stars, though we will exclude quite a few of these later. It helps to zoom in and pan around the image. Click the star centres to select them properly. As you do this, DynamicPSF will model each star in its selection and list it with calculated parameters.
Picture
Picture
Above, I have selected 124 stars, which is excessive but I have properly sampled all areas of the image. Remember, life is too short to spend half an hour doing this! We will now exclude quite a few of the selected ones. To do this, we enlarge the DynamicPSF window. 
Picture
Circled above are A (Amplitude), r (Aspect Ratio) and MAD (Mean Absolute Difference). These are the parameters we will use to select our stars for the model. First, we will use MAD, so let us sort all the stars in order of MAD value. To do this, click the Sort button, select Mean absolute difference and click OK.
Picture
Picture
Picture
Now as I look down my list of MAD values, they range between 1.185e-003 and 1.446e-002 (just over 10 times larger). We need to avoid values that are too different, so in this step we will exclude a large number of stars. What I tend to do is look through the list and roughly estimate which range of MAD values contains the most stars. I also restrict my range a fair bit, ideally towards the lower end of MAD values. I can see that between my lowest MAD value star at 1.185e-003 and about 2.3e-003, I have quite a good number of stars. This corresponds to my first 26 stars in order of increasing MAD value. Therefore, I select all the stars I want to exclude. This is best done by clicking the first star you want to exclude, then scrolling down and clicking the last star you want to exclude while keeping your SHIFT key pressed. Once all the stars you want to exclude are selected, click the Delete button.
Picture
Picture
This has left my best-matching 26 stars listed in DynamicPSF. Now we will filter based on A (Amplitude). Again, we sort them out in order of A value by clicking the Sort button, selecting Amplitude and clicking OK. 
Picture
Picture
Picture
My 26 stars are now sorted out in order of increasing A value. A good guideline here is to choose stars between about 0.25 and 0.75 A value. All others can be excluded. So again, look for the stars that do not match this range, select them individually or in groups (above and below your good stars) and click the Delete button to exclude them. 
Picture
Picture
We are now down to 16 stars and we will filter further, this time by r (Aspect Ratio) value. Again, click the Sort button, select Aspect ratio and click OK.
Picture
Picture
Picture
We now exclude stars that are too wildly different in r value. A good range of values of r to look for are generally between 0.6 and 0.8. You can select the rest and click the Delete button to exclude them. 
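To summarise the three filtering passes, here is a small, hypothetical Python sketch of the same logic applied to a DynamicPSF-style star list. The field names, sample stars and the MAD cut-off are invented for illustration; your MAD range in particular should come from your own sorted list.

```python
# Hypothetical star list: each entry mimics one DynamicPSF row.
stars = [
    {"A": 0.41, "r": 0.72, "MAD": 1.3e-3},
    {"A": 0.90, "r": 0.75, "MAD": 1.5e-3},   # too bright (near saturation)
    {"A": 0.33, "r": 0.45, "MAD": 1.8e-3},   # too elongated
    {"A": 0.55, "r": 0.68, "MAD": 9.0e-3},   # poor fit (high MAD)
    {"A": 0.60, "r": 0.66, "MAD": 2.1e-3},
]

kept = [s for s in stars
        if s["MAD"] <= 2.3e-3           # keep the tight cluster of low MAD values
        and 0.25 <= s["A"] <= 0.75      # amplitude neither faint nor saturated
        and 0.6 <= s["r"] <= 0.8]       # reasonably round stars only

print(len(kept))                         # only the well-behaved stars survive
```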
Picture
Picture
Finally we have arrived at a good sample of stars to use in the modelling. At this point, all there is left to do is select all the stars listed and click the Export button to produce a tiny image that corresponds to our model star (based on the ones listed). 
Picture
Picture
This image is called PSF by default and can be used in the Deconvolution process as a reference. You may now close the DynamicPSF process but keep this PSF image open as we will need it. 
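For intuition, the exported PSF image is essentially a small, normalised model of the average selected star. A rough stand-in can be built as a 2-D Gaussian from an average fitted star width; this is only an analogue of what DynamicPSF exports (the sigma value here is made up), not its actual fitting.

```python
import numpy as np

def gaussian_psf(sigma, size=15):
    # Build a normalised 2-D Gaussian as a stand-in PSF model.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()              # normalise so total flux is preserved

psf = gaussian_psf(sigma=1.8)           # 1.8 px is a hypothetical average width
```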

We will use a mask for protection of background and stars so that sharpening is applied only to the strong signal areas (the bright nebulosity). The mask used here was produced by first making a clone of the linear image. The auto-stretch parameters were then applied to it as a permanent stretch with HistogramTransformation to make the clone non-linear. The RangeSelection process was used to select out only the brightest areas of nebulosity, excluding the background and dark areas as I saw fit. Once the range mask image was produced, I used StarMask to produce a mask with just stars in it. This was stretched and black-clipped a bit with HistogramTransformation to bring out the faintest stars, including those over bright nebulosity. Once done, the star mask image was subtracted from the range mask image to produce a new mask image showing only the bright nebulosity, with background and stars excluded (including stars over the bright nebulosity). Though this may seem overly complex, it does not take very long to produce and is absolutely necessary to get the best out of sharpening fine details. Details of these mask generation procedures are the subject of another tutorial.
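The subtraction step above is simple per-pixel arithmetic (in PixInsight this would be a PixelMath expression along the lines of `range_mask - star_mask`). The tiny arrays below stand in for the mask images, just to show the idea:

```python
import numpy as np

# Toy stand-ins for the two mask images; values are in [0, 1].
range_mask = np.array([[0.0, 0.9, 1.0],
                       [0.0, 0.8, 1.0],
                       [0.0, 0.0, 0.7]])
star_mask  = np.array([[0.0, 0.0, 0.9],   # a star sitting over bright nebulosity
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0]])

# Subtract and clip: bright nebulosity stays white, stars and background go dark.
final_mask = np.clip(range_mask - star_mask, 0.0, 1.0)
```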
Picture
The above shows the mask applied to the linear image for adequate protection of background and stars (including stars above bright nebulosity). The star mask image produced to make the above combined mask image was kept in order to use it in the Deconvolution process, for protection against dark ringing of small stars. We now open the Deconvolution process and set it to the External PSF mode as we will provide it with our model (the PSF image). 
Picture
The first thing we need to do is select our model under View Identifier for External PSF. Simply click the button next to the text box, select the PSF image and click OK. 
Picture
Picture
Picture
With the model now applied, we can customise the settings required. The Regularized Richardson-Lucy algorithm selected by default is excellent and should be used. Under Iterations, we will eventually set a much higher value. However, each successive iteration sharpens less and less, meaning the result eventually converges and further iterations add nothing. Generally an Iterations value of about 50 suffices in most cases, but we should start with 10 while experimenting with the settings. A preview box can be created and made active in our image in order to reduce the time taken to apply settings, thus allowing us to arrive at a good set quickly, before applying Deconvolution to the entire image. You may leave Target set to Luminance (CIE Y) - the default setting - even when working with colour images, as the sharpening is then applied to the brightness of the image. Feel free to experiment with the RGB/K components setting as well if you wish.

We now enable Deringing and then enable Local deringing. From here, we must select our star mask image to act as support for localised deringing. You may choose to keep Local deringing disabled if you want it to be applied image-wide, but generally speaking, a star mask used in Local deringing goes a long way to avoiding dark ringing around stars. To select the star mask image in Local deringing, simply click the button next to the text box, select the star mask image and click OK. 
Picture
Picture
Picture
Since we wish to prevent dark ringing around stars, we need to tweak the Global dark setting under Deringing. The default setting of 0.1000 is generally too aggressive and values between 0.0100 and 0.0500 are generally best. I set mine to 0.0100. 
Picture
You may keep Local amount set to the default 0.70 as this works very well overall. Global bright can also be left at the default 0.0000 as generally the ringing is dark rather than bright. 

Under Wavelet Regularization, we have some noise reduction applied to the first two Layers, by default. Default settings work very well here and since noise generally exists in the first Layer and some on the second Layer, you can keep only these two first Layers enabled. For safety, you can raise the Noise threshold on Layer 2 up to 3.00 and set Noise reduction to 1.00 there as well. 
Picture
We are now set up to run Deconvolution on our image. As aforementioned, you may wish to run this only on a small, active preview box to inspect the settings, so it takes less time to run. Below, I have zoomed in on a segment of the image and hidden the mask (but left it active), for clarity. The second screenshot shows the altered image after application of Deconvolution.
Picture
Picture
If, while Deconvolution runs, the Process Console in PixInsight reports Local divergence, the process has reached the point of convergence where more iterations are not going to yield any real benefit. This means you can essentially lower your value of Iterations to the point at which these warnings start appearing (the Process Console tells you which iteration is running at the time).

The settings used have produced a good end result that shows sharper fine details, without noticeable dark ringing around stars. It is worth noting that the image is still in its linear state, which is strictly the image state in which Deconvolution should be used. At this point, knowing the settings work well, the general practice is to raise Iterations to 30 to 50 and run it through again, just to be sure the image is as sharp as it can get. Preview boxes are good for testing things out as you change something like Iterations or Global dark under Deringing, prior to applying the Deconvolution process to the entire image, as this process is time-consuming to run.

Remember that as Deconvolution is applied to a linear image, it should be done very early on in the post-processing workflow. Straight after all pre-processing is done is as good a time as any. That way you are working with an image that is naturally sharper due to the cancellation of some of the distortion introduced by the atmosphere. As we will see now, MultiscaleLinearTransform is another process that can be used to sharpen fine details and we will see that this one indeed works best on non-linear images. This means that as far as a post-processing workflow goes, you may use Deconvolution early on and later sharpen fine details further using MultiscaleLinearTransform, once the image has been stretched to non-linear. Using both processes within a workflow will give you sharper images as a whole. 
 

2. MultiscaleLinearTransform

+ Works extremely well with non-linear images and with adequate mask protection and conservative use, can work well with linear images. 
+ Good results are very reproducible with very similar settings between different images so application is easy. 

- Tends to produce dark ringing on stars if the mask protection is not good, particularly on linear images. Deringing can help prevent this. 
​
MultiscaleLinearTransform is one of those processes that generally works very well with images that are either linear or non-linear, given how it functions. However, as we will see, when sharpening fine details in an image using its Bias setting, linear images can be more difficult to work with. This is not really a problem because this process is usually used to sharpen fine details late in a post-processing workflow, at which point your image is non-linear.

The following is the same monochrome Hydrogen-Alpha image in its linear state, auto-stretched, open alongside the MultiscaleLinearTransform process we will use.
Picture
MultiscaleLinearTransform is an excellent process for sharpening fine details because of how it works. This process targets particular pixel scales via Layers. For example, Layer 1 targets structures 1 pixel in size, while Layer 3 targets structures 4 pixels in size (the scale doubles with each layer). This means that we can effectively sharpen fine details only, while leaving out the larger structures within an image (and indeed most of the noise as well!). Layer R is called the Residual Layer and contains everything else not listed individually above it (by default, pixel scales of 16 pixels and up). If we select a larger value for Layers from the drop-down list, more are displayed on the list and Layer R is then everything above that.
Picture
Since fine details exist at small pixel scales, we need not increase Layers from the default setting of 4. Moreover, we will use the default algorithm for MultiscaleLinearTransform, called Starlet transform, which is precisely the algorithm the ATrousWaveletTransform process implements.
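For the curious, the starlet (à trous) decomposition can be sketched directly. This is an illustrative NumPy version, not PixInsight's code: each detail layer is the difference between successive B3-spline smoothings, with the kernel dilated by a factor of two each time, and the final entry is the residual (Layer R).

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet(image, n_layers=4):
    b3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline kernel
    layers, smooth = [], image.astype(float)
    for j in range(n_layers):
        kernel = np.zeros(4 * 2**j + 1)
        kernel[::2**j] = b3              # dilate: insert 2^j - 1 "holes" (à trous)
        next_smooth = convolve1d(convolve1d(smooth, kernel, axis=0, mode="reflect"),
                                 kernel, axis=1, mode="reflect")
        layers.append(smooth - next_smooth)   # detail at ~2^j-pixel scale
        smooth = next_smooth
    layers.append(smooth)                     # residual ("Layer R")
    return layers
```

The layers sum back to the original image exactly, which is why editing one layer and recombining is well defined.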
Picture
Sharpening fine details in an image is best done with mask protection. The mask should ideally only show bright areas in an image that are of relevance, with cut-offs in dark areas (these fine edges would be sharpened too so the dark-to-bright transitions will be more pronounced). Moreover, to further assist in not sharpening noise, background should be excluded by this mask. Stars should also ideally be excluded as otherwise the process of sharpening fine details will produce dark ringing around small stars (this happens more commonly when working with linear images).

The mask I have produced for this particular image is identical to the one used for Deconvolution above: a RangeSelection mask of the bright nebulosity with a star mask subtracted from it, so that only the bright nebulosity is visible and the background and stars (including stars over bright nebulosity) are excluded. Details of these mask generation procedures are the subject of another tutorial.
Picture
Above we see the mask image produced for adequate protection of background and stars. This is now applied as a mask over the original linear image. 
Picture
With the background and stars now adequately protected and the bright areas of nebulosity exposed, we can freely sharpen fine details. To do this, we look at the Bias setting in MultiscaleLinearTransform. This setting applies to individual Layers, so you can apply a different value to each layer individually. We note that Layer 1 is dominated by noise and thus we always avoid sharpening at this pixel scale. Layers 2 and 3, however, are the ones to target for fine details. Layer 4 and above are not usually necessary for sharpening, as these pixel scales go beyond what we normally wish to sharpen, but feel free to experiment. Do not sharpen Layer R as this will just sharpen everything above your largest individually listed layer. Simply select a larger value for Layers if you wish to target layers above Layer 4 individually.

For Layers 2 and 3, I simply select each layer and on doing so, I set a value of 0.100 on Bias for both. 
Picture
Keep in mind that values smaller than 0 will actually blur these pixel scales, while values larger than 0 will sharpen them. Also keep in mind that Bias can become too much very, very quickly. I never use values above 0.100 and tend to stick to around 0.075. Experimentation is key, however, as different images benefit differently. Feel free to use different values for Layers 2 and 3. Generally the smaller layer should receive the smaller Bias value, as the smaller the layer, the closer you are to noise. Therefore Bias values of 0.050 for Layer 2 and 0.075 for Layer 3 may be ideal in a lot of cases. Once set, we click Apply and check the changes by zooming in on fine details (the second screenshot below shows the altered image on a zoomed-in segment, with the mask hidden, but active, for clarity).
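What Bias does can be sketched as multiplying the chosen detail layers before recombining them. The snippet below is self-contained and uses plain Gaussian smoothing as a stand-in for the multiscale decomposition, so it illustrates the idea only, not PixInsight's exact kernels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_sharpen(image, biases, n_layers=4):
    # biases maps layer number -> Bias value, e.g. {2: 0.050, 3: 0.075}
    smooth_prev = image.astype(float)
    out = np.zeros_like(smooth_prev)
    for j in range(1, n_layers + 1):
        smooth = gaussian_filter(smooth_prev, sigma=2.0 ** (j - 1))
        detail = smooth_prev - smooth                 # structures at this scale
        out += detail * (1.0 + biases.get(j, 0.0))    # Bias 0.100 boosts a layer by 10%
        smooth_prev = smooth
    return out + smooth_prev                          # residual layer left untouched
```

With all Bias values at zero the layers recombine to the original image unchanged; negative values would attenuate a layer, which is the blurring behaviour mentioned above.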
Picture
Picture
This is a very noticeable improvement to overall sharpness in fine details, but we note how some of the smaller stars above nebulosity have suffered from dark ringing around them, despite the mask's protection. This may be remedied slightly by enabling the Deringing setting in MultiscaleLinearTransform. 
Picture
Since dark ringing is what we are encountering, all you need to do is set an appropriate value (larger than 0) for the Dark setting under Deringing. Keep in mind Deringing is applied process-wide and is not per-layer. Also, things can get ugly, fast. Below shows the results of applying MultiscaleLinearTransform with the same Bias settings but with Deringing enabled and at default settings. 
Picture
Indeed the default setting of Dark under Deringing is too aggressive for linear images. Reducing this to 0.0100 produces better results. 
Picture
Certainly a much better end result; some dark ringing is still visible if you zoom in on the smaller stars, but it has definitely improved.

We now switch over to a non-linear image. Indeed all we have really done is apply the auto-stretch parameters permanently to the image using HistogramTransformation, but the image is now non-linear nonetheless. This is the image state you will most commonly be in to sharpen fine details using MultiscaleLinearTransform. The following shows the non-linear image with the same mask as before applied to it. 
Picture
We will now apply the same Bias settings as before - 0.100 to both Layers 2 and 3 - but with Deringing left disabled. I have zoomed in on a particular segment of the image and have kept the mask hidden, but active, for clarity (the second screenshot below shows the altered image). 
Picture
Picture
Though dark ringing is slightly, slightly visible on the smallest of stars, this is much improved sharpening performance compared to working on a linear image. If desired, Deringing can be enabled with an appropriate value under Dark (usually a very low value). Generally however, sharpening fine details while avoiding dark ringing around small stars is a balancing act between choosing Bias values that sharpen well without affecting small stars aggressively, and choosing Deringing Dark values that compensate a little bit without adversely affecting the image. Mask protection certainly helps ease the burden of finding this fine balance.

I do not normally use Deringing at all as I find the Bias values I use are very appropriate and always use MultiscaleLinearTransform on non-linear images. I always sharpen fine details on Layers 2 and 3 and only lightly with values of Bias ranging between 0.050 and 0.100. I tend to find a balance between sharpening enough and not adversely affecting small stars with dark ringing, though I do accept very little if I must.

Overall, using Deconvolution in an image's linear state and later using MultiscaleLinearTransform in its non-linear state can provide a stunning level of fine detail sharpness that you could not appreciate when you pre-processed your image. As a result, use of these processes, or at least MultiscaleLinearTransform (if your imaging resolution and night sky conditions do not provide visible benefits with Deconvolution), is considered core within a post-processing workflow.

 