
Tutorial (PixInsight):

Noise Reduction


Noise reduction is one of the core procedures in post-processing of images, and it is also one of PixInsight's strong suits. A number of processes in PixInsight are capable of noise reduction; the main ones are MultiscaleLinearTransform, MultiscaleMedianTransform, ACDNR and TGVDenoise. ATrousWaveletTransform is classically excellent at noise reduction but is being phased out as obsolete in favour of the similar but more capable MultiscaleLinearTransform. Each is capable in its own right, and users may choose to use one process or another, or a combination of a few. This analysis is useful for seeing how well each performs. A key point to make clear at the very beginning is that the aim is noise reduction, not noise elimination. The best way to achieve low noise in images is simply to capture more exposures to stack. Attempting to eliminate noise has the side-effects of giving images a waxy look and producing larger scale blobs around the background that are essentially very smoothed-out noise. It is therefore best to leave some small scale noise in place to retain sharpness.

This tutorial covers how to use each of the four aforementioned noise reduction processes, with reference to both monochrome and colour images. However, for simplicity, the image used for most of this tutorial is monochrome and has had background gradients removed with DynamicBackgroundExtraction. 

Assumed for this tutorial:
  • Knowledge of operating PixInsight, related to dealing with images, processes and working with masks (read this, sections 3, 4 and 6).
  • Your images have already been pre-processed fully (read this).
  • Knowledge of producing different types of masks (read this). 

​Please feel free to ask questions or leave comments via the comments section on the bottom of this page. 

Contents
1. MultiscaleLinearTransform
2. MultiscaleMedianTransform
3. ACDNR
4. TGVDenoise
5. Analysing Wavelet Layers for Noise
Comments

 

1. MultiscaleLinearTransform

+ Works equally well with linear and non-linear images. 
+ Good results are very reproducible with very similar settings between different images so application is easy. 
+ Noise reduction can be done smoothly without being overly aggressive, producing excellent results. 
+ Noise reduction can be applied to image lightness and chrominance (colour) independently. 

- Has a tendency to blur small details along edges if not applied conservatively and with a decent mask image as protection. 
​

MultiscaleLinearTransform is a process that works directly with wavelet layers, with each layer attending to pixel structures of a specific size. This makes the process very powerful in that noise reduction can be applied at a specific pixel scale, or at a number of pixel scales with varying degrees of aggressiveness. MultiscaleLinearTransform is the replacement for ATrousWaveletTransform and, with its default Starlet transform algorithm, works the same way. 
Picture
Listed in MultiscaleLinearTransform are four layers numbered 1 to 4 and a final layer named R. Next to the layer names are scale numbers. Layer 1 effectively corresponds to pixel structures whose size is 1 pixel. Layer 2 corresponds to pixel structures whose size is 2 pixels. Layer 3 corresponds to pixel structures whose size is 4 pixels. Layer 4 corresponds to pixel structures whose size is 8 pixels. Finally, layer R represents the rest (named Residual for this reason) - all larger pixel structure scales not covered by layers 1 to 4. We can customise how many layers are addressed by MultiscaleLinearTransform by simply selecting a number of layers from the Layers drop-down menu.
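To make the layer concept more concrete, below is a minimal sketch in Python (using numpy and scipy, not PixInsight's own code) of the à trous / starlet decomposition that underlies MultiscaleLinearTransform. The function name and the B3-spline kernel are illustrative assumptions rather than the exact PixInsight implementation; the point is simply that each layer isolates structures of roughly 1, 2, 4, 8... pixels and the final smoothed image plays the role of the Residual layer.

import numpy as np
from scipy.ndimage import convolve

def starlet_layers(image, n_layers=4):
    # Separable B3-spline kernel classically used by the a trous algorithm
    k = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    kernel = np.outer(k, k)
    layers, smooth = [], image.astype(float)
    for j in range(n_layers):
        # "A trous": dilate the kernel by inserting zeros so it acts at scale 2**j
        step = 2 ** j
        dilated = np.zeros((4 * step + 1, 4 * step + 1))
        dilated[::step, ::step] = kernel
        next_smooth = convolve(smooth, dilated, mode='reflect')
        layers.append(smooth - next_smooth)   # layer j+1: structures of ~2**j pixels
        smooth = next_smooth
    return layers, smooth                     # "smooth" corresponds to layer R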
Picture
With Layers set to 6, we can individually address pixel structures of sizes up to 32 pixels, with layer R (Residual) now representing everything else above that. We can effectively delete pixel structures of certain sizes by selecting the relevant layer and disabling it. This is done simply by disabling Detail Layer for the particular layer selected, or double clicking the layer on the list.
Picture
Noise in images tends to dominate the first layer, with a pixel scale of 1 pixel. Larger layers are also affected somewhat, but to a decreasing degree as we go up in layer scale. Noise reduction therefore tends to work extremely well when applied with decreasing strength to layers 1 to 4, inclusive. MultiscaleLinearTransform is very versatile and works extremely well with both linear and non-linear images. Resetting the process back to default, we now apply a protective mask over the image shown, which is linear but auto-stretched. The mask is generated by duplicating the linear image and applying the auto-stretch parameters permanently to make the mask image non-linear. If you are working with a colour image, the same procedure applies, but it is best to extract a Lightness image from the colour image and then perform a permanent stretch on that using the auto-stretch parameters. Details of these mask generation procedures are the subject of another tutorial.
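As a rough illustration of what "applying the auto-stretch parameters permanently" means, the sketch below applies a shadows clip followed by PixInsight's midtones transfer function (MTF) to a duplicate of the linear image. The shadows and midtones values shown are placeholders; in practice they come from the ScreenTransferFunction auto-stretch.

import numpy as np

def mtf(x, midtones):
    # Midtones transfer function: maps the midtones value to 0.5, keeps 0 and 1 fixed
    return ((midtones - 1.0) * x) / ((2.0 * midtones - 1.0) * x - midtones)

def make_stretched_mask(linear_image, shadows=0.001, midtones=0.003):
    # Clip the shadows, rescale to [0, 1], then apply the non-linear stretch
    clipped = np.clip((linear_image - shadows) / (1.0 - shadows), 0.0, 1.0)
    return mtf(clipped, midtones)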
Picture
Above, the mask has been applied and inverted in order to attack the darker areas more than the brighter areas (and thus attack the background the most). This type of mask is excellent for general noise reduction because it allows noise reduction with varying degrees of protection across the entire image (the degree of protection depends on the signal strength in each particular area of the image).

To apply noise reduction with MultiscaleLinearTransform, we simply select the layer we wish to apply noise reduction to and enable Noise Reduction for that layer. Since noise reduction can be applied to about the first four layers, we do this for layers 1 to 4, inclusive.
Picture
More layers can be noise reduced, of course, but noise dominates the smallest pixel scales anyway so the first four suffice. Doing this at the very beginning of post-processing is an excellent way to enhance the SNR of your image while it is still linear. Since the first layer is more dominated by noise than the second, and the second more than the third (and so forth), we will attack the first layer the most. We will decrease our noise reduction aggressiveness as we progress to larger layers.

For this image, I select layer 1 and set Threshold to 3.000, Amount to 0.50 and Iterations to 3. Threshold defines the strength of the noise reduction. Amount defines a blend between the noise-reduced image and the original image. The default setting of 1 means that once you noise reduce the image, your end result will be entirely the noise-reduced image. A setting of 0.50 is instead a blend of 50% of the original image and 50% of the noise-reduced image. Doing this effectively produces smoother results with nicer overall noise reduction. Iterations defines how many times the algorithm is executed over this layer. Since the first layer is the one most affected by noise, our Threshold and Iterations values are the highest here.
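As a small worked illustration of the Amount parameter (an assumption based on the blending behaviour described above, not PixInsight source code):

def blend(original, denoised, amount):
    # Amount = 1.00 returns purely the noise-reduced data;
    # Amount = 0.50 returns a 50/50 blend with the original image.
    return amount * denoised + (1.0 - amount) * original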

For layer 2, I set Threshold to 2.000, Amount again to 0.50 and Iterations to 2. For layer 3, I set Threshold to 1.000, Amount again to 0.50 and Iterations again to 2. Finally, for layer 4, I set Threshold to 0.500, Amount again to 0.50 and Iterations to only 1. Applying the process produces a noise reduced image with the mask supporting where it applies the most noise reduction.
Picture
Picture
The above screenshots show a zoomed in section of the image towards the centre, with a dark area that has been nicely noise reduced by MultiscaleLinearTransform. The second screenshot shows the noise reduced result.

Results can be customised if you are not happy with the amount of noise reduction, be it too much or too little. For example, I can produce a more noise-reduced result if I change a few parameters. I raised Threshold to 4.000 and Iterations to 4 for layer 1. For layers 2, 3 and 4, I kept Threshold at the same values but raised Iterations to 3, 3 and 2, respectively. Applying the process again gives stronger noise reduction.
Picture
Picture
Indeed, playing around with the values of Threshold and Iterations will yield optimum results for your image. Amount may also be tweaked, but the general recommendation is to keep it below 1.00 to yield a nice blend between the noise-reduced image and the original image. This gives rise to good noise reduction rather than aggressive noise elimination (which gives waxy-looking results). Please keep in mind that with fairly aggressive noise reduction, your image may appear to have single-pixel-sized holes (black pixels left in smoothed-out small scale noise). ACDNR is particularly useful at getting rid of these in the non-linear state, so this would be an example of using more than one noise reduction process in tandem.

MultiscaleLinearTransform can be applied equally well to linear and non-linear images, which means that using it during the initial linear stages of noise reduction is an excellent idea. This would be prior to stretching the image to a non-linear state. Once stretched, you could apply the process again but much less aggressively (since the image has already been noise reduced). A new mask would be beneficial once the image is altered, but this is good advice for any process that is applied.

A final note for those working with colour images is that MultiscaleLinearTransform can be applied to the entire image as is, the Lightness channel (governs monochrome brightness across the image) alone or the Chrominance channel (governs colour distribution across the image) alone. To alter how MultiscaleLinearTransform is applied, simply select your desired target from the Target drop-down menu.
Picture
This can be used to reduce noise in the colour distribution of your colour image but not the lightness of your image, for example. Left at its default mode, the noise reduction is applied to all RGB colour channels equally.

The algorithm used by MultiscaleLinearTransform can be changed from the default Starlet transform to Multiscale linear transform via the top drop-down menu. PixInsight however reports that though this algorithm is good, Starlet transform tends to be more flexible when applied across different pixel scales in an image. 
 

2. MultiscaleMedianTransform

+ Works equally well with linear and non-linear images. 
+ Good results are very reproducible with very similar settings between different images so application is easy. 
+ Noise reduction can be done smoothly without being overly aggressive, producing excellent results. 
+ Noise reduction can be applied to image lightness and chrominance (colour) independently. 

- Has a tendency to leave behind black noise pixels over background that need smoothing out later, particularly if using the Multiscale median transform algorithm. 
​

MultiscaleMedianTransform can arguably be better than MultiscaleLinearTransform at noise reduction in certain situations. Its noise reduction is overall less aggressive: though it can seem to produce slightly less noise reduction, it maintains edges better and can thus appear to blur small details less than MultiscaleLinearTransform. To the end user, MultiscaleMedianTransform actually works in much the same way - by addressing specific wavelet layers to attack different pixel size structures. The following is the MultiscaleMedianTransform process open alongside the same image in its linear state, auto-stretched.
Picture
We quickly observe that MultiscaleMedianTransform looks very similar to MultiscaleLinearTransform. By default, layers 1 to 4 are listed as well as the Residual R layer. The exact same principles as for MultiscaleLinearTransform detailed in section 1 apply here. Again, since we know noise dominates the smallest pixel scale, we need to be most aggressive about noise reduction in layer 1. We will apply noise reduction to layers 1 to 4 but in a decreasingly aggressive order, just as we did with MultiscaleLinearTransform. To provide protection to high signal areas however, we first apply a mask. The mask applied is generated by duplicating the linear image and applying the auto-stretch parameters permanently to make the mask image non-linear. If you are working with a colour image, the same procedure applies but it is best to extract a Lightness image from the colour image and then perform a permanent stretch on that using the auto-stretch parameters. Details of these mask generation procedures are the subject of another tutorial.
Picture
To apply noise reduction, simply select the relevant layer and enable Noise Reduction.
Picture
You will notice there is no Iterations setting in MultiscaleMedianTransform. We therefore control the aggressiveness of noise reduction purely through the Threshold and Amount parameters, in the same way as with MultiscaleLinearTransform. First, I enable noise reduction for layers 1 to 4, inclusive.
Picture
Since MultiscaleMedianTransform is apparently less aggressive at noise reduction, the Threshold values used here would need to generally be higher than for MultiscaleLinearTransform. For example, I set layer 1 to have Threshold at 5.0000, layer 2 to have Threshold at 3.0000, layer 3 to have Threshold at 2.0000 and layer 4 to have Threshold at 0.5000. Amount is set to 0.75 for layers 1 to 4 to be more aggressive and the process is applied.
Picture
Picture
The noise reduction is pronounced, but clearly not as much as with MultiscaleLinearTransform. Edge detail is preserved a little more with MultiscaleMedianTransform, but not dramatically so. Also, though tweaking settings can ease this, MultiscaleMedianTransform does appear to be more prone to leaving black noise pixels within the background. These can of course be removed with ACDNR, particularly when the image is non-linear, but it is worth noting when we compare MultiscaleLinearTransform and MultiscaleMedianTransform.

Once again, Threshold and Amount can be tweaked on a per-layer basis. These settings have the same meanings as in MultiscaleLinearTransform - Threshold defines how aggressively to reduce noise and Amount defines how much of a blend between the original image and the noise-reduced image is produced as the end result. Setting Amount to 1.00 gives purely the noise-reduced image, while a setting of 0.50 gives a blend of 50% of the noise-reduced image with 50% of the original image. This is useful for achieving smoother end results after noise reduction, so a value of 1.00 is normally avoided if possible. The Adaptive setting can be useful to increase above the default 0.0000 for the smallest layers if you find that noise structures you want to remove still remain. Generally this is only done once you have determined good values for Threshold and are still unable to get rid of certain noise.
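For intuition about why a median-based transform preserves edges better, here is a rough, hypothetical sketch of a multiscale median decomposition in Python. The window sizes are illustrative and this is not the algorithm PixInsight actually implements; it only shows that a median filter flattens isolated noise pixels without smearing sharp edges, which is why the layers it produces separate noise from structure differently than a linear (convolution-based) transform.

import numpy as np
from scipy.ndimage import median_filter

def median_multiscale_layers(image, n_layers=4):
    layers, smooth = [], image.astype(float)
    for j in range(n_layers):
        size = 2 * (2 ** j) + 1              # 3, 5, 9, 17 pixel windows (illustrative)
        next_smooth = median_filter(smooth, size=size)
        layers.append(smooth - next_smooth)  # isolated noise pixels end up in the small layers
        smooth = next_smooth
    return layers, smooth                    # final smooth image acts as the residual layer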

MultiscaleMedianTransform is equally capable of noise reduction in both linear and non-linear image states. This means it can be applied equally well throughout post-processing for a multi-staged approach to noise reduction, much like MultiscaleLinearTransform. Generally speaking, a mask that adequately protects high-signal areas is key to the best application of noise reduction.

For those working with colour images, the same final note as with MultiscaleLinearTransform applies to MultiscaleMedianTransform. The application of MultiscaleMedianTransform can be altered to a different target than the whole image by selecting the desired target from the Target drop-down menu.
Picture
This allows you to reduce noise in the colour distribution of your image (by selecting Chrominance) rather than the lightness in your image, for example. Leaving it at the default mode will reduce noise across all RGB channels equally.

So far, MultiscaleMedianTransform has been used with its default algorithm of Multiscale median transform. Another algorithm is available from the top drop-down menu, Median-wavelet transform. PixInsight reports that this is essentially a hybrid algorithm that blends the noise reduction on different pixel scales (wavelet layers) with the median filtering of significant structures within an image. The end result is meant to be more noise reduction where it matters, while protecting bright areas of an image that do not need much noise reduction. Indeed applying the exact same noise reduction parameters to Layers 1 to 4 as above, but using the Median-wavelet transform algorithm presents better end results.
Picture
Picture
The background appears to be less plagued with black noise pixels and the noise reduction is smoother, though less pronounced than using the Multiscale median transform algorithm. 
 

3. ACDNR

+ Good results are achievable with some tweaking of settings, but being conservative is advised. 
+ Can be effectively used in conjunction with other noise reduction to smooth things out late in post-processing. 
+ Noise reduction can be applied to image lightness and chrominance (colour) independently. 

- Does not work well at all with linear images. 
​

ACDNR is radically different to MultiscaleLinearTransform and MultiscaleMedianTransform, at least in appearance to the end user. The following is the ACDNR process open alongside the same image in its linear state, auto-stretched.
Picture
ACDNR is not seen by many as being as good as the likes of MultiscaleLinearTransform or MultiscaleMedianTransform but it is nevertheless effective. It is less effective with linear images than non-linear images, but can be used in either. However, the recommended use for ACDNR is when the image is non-linear, for extra noise reduction duty (after application of one of the other processes, for example). Here we first demonstrate application of ACDNR to a linear image and move on to its application to a non-linear image.

We begin by exploring the process itself. It has two tabs - labelled Lightness and Chrominance. ACDNR works by converting an image to the L*a*b* colour space, applying noise reduction there and then converting back to the RGB colour space. If your image is monochrome, noise reduction is applied to the L channel in the L*a*b* colour space. It is important to note that both tabs are applicable when you click Apply on ACDNR, not just the tab you are on. For this reason, ACDNR has an Apply option on both tabs; disabling it on a tab means noise reduction will not be applied to that component (Lightness or Chrominance, depending on your selection).
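The colour-space round trip described above can be illustrated with the following hedged sketch, which uses scikit-image for the conversions and a simple Gaussian blur as a stand-in for ACDNR's actual filter (the function name and sigma value are illustrative assumptions):

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.color import rgb2lab, lab2rgb

def denoise_lightness_only(rgb_image, sigma=1.5):
    lab = rgb2lab(rgb_image)                           # L* holds brightness, a*/b* hold the colour
    lab[..., 0] = gaussian_filter(lab[..., 0], sigma)  # smooth only the lightness channel
    return np.clip(lab2rgb(lab), 0.0, 1.0)             # convert back to RGB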
Picture
Picture
Since we are working with a monochrome image, we disable the Chrominance filter for ACDNR, though if you are working with a colour image, you may want to keep this enabled and tweak its settings individually from the settings applied to Lightness. To provide protection to high signal areas, we first apply a mask. The mask applied is generated by duplicating the linear image and applying the auto-stretch parameters permanently to make the mask image non-linear. If you are working with a colour image, the same procedure applies but it is best to extract a Lightness image from the colour image and then perform a permanent stretch on that using the auto-stretch parameters. Details of these mask generation procedures are the subject of another tutorial.
Picture
The setting StdDev (or Standard Deviation) roughly defines the noise structure size you want to eliminate. For the usual small scale noise, you may wish to use values ranging between the default 1.5 and about 2.5. Amount has the same definition here as in the other two processes covered thus far. An Amount setting of 1.00 keeps purely the noise-reduced image as the end result. The default setting of 0.90 keeps 90% of the noise-reduced image with 10% of the original image, giving a smoother end result. I will change my StdDev value to 2.0 and set Amount to 0.50 to produce a smoother end result and be less aggressive.
Picture
Generally speaking, the default settings for Robustness and Structure size work extremely well (for both modes, Lightness and Chrominance). Iterations can be tweaked in the same way as for the other two processes covered thus far. This defines how many times the algorithm is applied so the higher the value, the stronger the noise reduction. Once you reach a good value for StdDev that attacks the noise you are looking to remove, tweaking Amount and Iterations generally yields the optimum results for your image. With Iterations left at 3 (which tends to work well - I normally only set between 2 and 4), I apply the process to my linear image.
Picture
Picture
Upon closer inspection of the noise reduced results (see second screenshot above), we determine that the noise reduction has not been anywhere near as successful as it was with MultiscaleLinearTransform and MultiscaleMedianTransform. Small scale noise appears to largely remain and the image has just taken an overall blurrier look to it, with stars being enlarged with larger halos (as if they have been smeared outward from their centres). This is despite the protective mask! This is where we start to see that ACDNR can be less effective than the other two aforementioned noise reduction processes when it comes to linear images.

In theory, the settings under Bright Sides Edge Protection are used to tweak how badly the ACDNR process affects stars, sometimes to the point of not affecting them negatively at all. However, in a linear image, even tweaking these settings can yield less than successful results. We therefore move on to using ACDNR on a non-linear image. The same monochrome image has now been stretched with HistogramTransformation.
Picture
A new protective mask is now applied to this non-linear image. The mask in this case consists of a duplicate that has been treated with HistogramTransformation with use of the Auto-clip shadows and Auto-clip highlights, to accentuate the bright and dark areas further. If you are working with a colour image, this can be done as well, but ideally to a Lightness image extracted from your colour image. Details of producing these mask images and modifying them with HistogramTransformation are the subject of another tutorial.
Picture
For demonstration purposes, we now apply ACDNR again with the same settings as before - StdDev set to 2.0, Amount set to 0.50 and Iterations set to 3.
Picture
Picture
On close inspection of the noise reduction (see the second screenshot above), we first note that it was indeed not at all aggressive. It has however clearly worked out much better than with the previous linear image - the stars have been left alone and the background has a bit less noise. It appears we can be somewhat more aggressive with our noise reduction, so we opt to increase Amount to the default of 0.90 and set Iterations to 5. StdDev is left at 2.0 because it appears to address the noise pixel scale we are interested in (small scale noise).
Picture
Picture
This may indeed be overly aggressive, but it demonstrates how best to operate ACDNR - tweak StdDev to address the noise you wish to remove, tweak Amount and Iterations to alter the aggressiveness, and generally use ACDNR on non-linear images only.

Given the manner in which ACDNR operates its noise reduction, which appears to blend the noise specks with the black noise pixels left in the background, it is generally a good idea not to be too aggressive with ACDNR as it can turn your small scale noise into large scale noise (sometimes described as smooth blobs). This smoothing is best seen with the mask applied but hidden, shown before noise reduction (first screenshot) and after noise reduction (second screenshot).
Picture
Picture
This exemplifies why it is better to see this whole procedure as noise reduction and not noise elimination - leaving a few small scale noise specks in the image looks better than smoothing them out to the point of turning them into blobs. Still, ACDNR's ability to smooth out noise can be used to our advantage. For example, MultiscaleMedianTransform tends to leave some black noise pixels behind in the background when applying noise reduction. The same can be said about MultiscaleLinearTransform at times. Either of these two processes could be used to reduce noise in the image's linear state, and ACDNR could then be applied lightly once the image is non-linear. For example, below we see the image in its linear state after receiving noise reduction with MultiscaleMedianTransform (the mask is active but hidden).
Picture
Once stretched to its non-linear state, the image retains these black noise pixels.
Picture
ACDNR can be very useful at this stage, especially now that the image is non-linear. The protective mask used is again a duplicate that has been modified with HistogramTransformation with Auto-clip shadows and Auto-clip highlights. ACDNR was applied only to Lightness and with a StdDev setting of 2.0, as before. Amount was set to 0.75 and Iterations to 2. We are trying to not be anywhere near as aggressive as in the earlier example, but enough to smooth out these black noise pixels left over by MultiscaleMedianTransform.
Picture
Personally I feel this noise reduction procedure was applied too aggressively from the beginning, but it exemplifies the idea that if you have black noise pixels left over from the initial noise reduction, ACDNR can effectively smooth them out once the image is non-linear.

A final note is that since ACDNR smooths out noise, it is commonplace for users to have to re-adjust their image's histogram with HistogramTransformation after application of ACDNR. A simple black point adjustment in HistogramTransformation suffices.
Picture
Notice above that no pixels are being clipped by this black point adjustment, so it is deemed an appropriate black point adjustment. Since your image will undoubtedly be non-linear when ACDNR is applied, consider black point adjustment part of the process of applying ACDNR to your image.
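A minimal sketch of the black point adjustment described above, assuming a non-linear image already normalised to [0, 1] (the black point would be chosen just below where the histogram begins, so that no pixels are actually clipped):

import numpy as np

def adjust_black_point(image, black_point):
    # Pixels at or below the black point go to 0; the rest are rescaled to [0, 1]
    return np.clip((image - black_point) / (1.0 - black_point), 0.0, 1.0)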
 

4. TGVDenoise

+ Works equally well with linear and non-linear images. 
+ Good results are achievable with some tweaking and use of the Statistics process as an aid. 
+ Noise reduction can be applied to image lightness and chrominance (colour) independently. 

- Can require some time to tweak settings to produce acceptable results. 
- Can easily become overly aggressive at noise reduction and produce undesirable results. 
- Is particularly slow to run on entire images (use of preview boxes for testing settings is advised). 
​

An official assessment of noise reduction algorithms was made a few years ago to determine which one seemed most effective and adaptable. TGVDenoise appeared to be extremely effective in doing its job, at times significantly more so than the other algorithms available (detailed in the above sections). The trouble users tend to have with TGVDenoise is quite simply its apparently wild behaviour with subtle changes to its settings. Detailed here is how to best make use of TGVDenoise, whether or not you find it preferable to other noise reduction algorithms. The following is the TGVDenoise process open alongside the same image in its linear state, auto-stretched.
Picture
TGVDenoise is a very adaptable process and can indeed be used with both linear and non-linear images, though the settings for either of these image states vary greatly. At the top of TGVDenoise, we find two settings for mode. 
Picture
RGB/K mode is the default setting and keeps the image in the standard RGB colour space. In this mode, the entire image is treated with noise reduction as-is. In CIE L*a*b* mode, however, we are able to target an image's Lightness and Chrominance (colour) separately with different noise reduction parameters. Provided you are dealing with a colour image, this mode may be useful if you wish to reduce noise more effectively. 
Picture
Since in this example we are dealing with a monochrome image, we keep the default RGB/K mode setting. The top section is where we find our control parameters of Strength (how much of a noise reduction effect to produce), Edge protection (how much leeway the process has in attacking noise when it detects edges) and Smoothness (how smooth the noise reduction should be). Iterations has its standard meaning of how many times to apply the algorithm over the image. The more times it is applied, the more noise reduction you can naturally expect. However, there is always a convergence point after which applying more iterations does extremely little extra noise reduction. 

Next to Strength, Edge protection and Smoothness you have sliders. These define the values you set, though they can be entered manually on the smaller of the two text boxes to the left of the sliders. The numbers to the right of the sliders are exponents, which are essentially powers of ten and define the overall magnitude of your values. For example, having a slider value of 3 and an exponent of -2 means your actual value is 0.03. Similarly, a slider value of 3 and an exponent of 2 means your actual value is 300. The actual value can also be entered precisely into the larger of the two text boxes to the left of the sliders.
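In other words, the value actually used is the slider value multiplied by ten raised to the exponent; a tiny worked example:

slider, exponent = 3, -2
print(slider * 10 ** exponent)   # 0.03
slider, exponent = 3, 2
print(slider * 10 ** exponent)   # 300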

Local Support is actually very much necessary, and it is here that you provide the process with an image to use as a reference for where noise reduction is more important (i.e. over areas of weak signal - the background). Before we begin, we should in fact create an image for Local Support to use. We will simply use the standard duplicate of the original image, with the auto-stretch parameters permanently applied to it. If you are working with a colour image, the same procedure applies but it is best to extract a Lightness image from the colour image and then perform a permanent stretch on that using the auto-stretch parameters. Details of these mask generation procedures are the subject of another tutorial.
Picture
For previous noise reduction algorithms, the mask image was indeed applied as a mask. In the case of TGVDenoise, we instead define the mask image under Local Support. To do this, we expand and enable Local Support, click the folder icon button next to the text box for Support image, select our mask image and confirm. ​
Picture
Picture
We can leave everything else under Local Support at default. Since we do not need to stretch the image further (or clip it), we do not need to touch the Midtones, Shadows or Highlights sliders. If we increase the value of Noise reduction, we are effectively removing layers from our Local Support image but after some testing, I found this to be counter-productive in noise reduction at this stage so I advise keeping it at 0.

To start off, let us define a value for Edge protection. Thankfully, this is something we can derive from our image. We will first need a preview box over background space (no nebulosity or stars). It does not need to be huge - a small segment of your image is fine.
Picture
At this point, we use the Statistics process to make an analysis. Once open, select your preview box from the list. 
Picture
By default, the Statistics process does not give you the measurement of standard deviation, and we need it. This is enabled by going to the Statistics Options (the tool icon button on the top right corner) and enabling Standard deviation, then confirming.
Picture
Picture
Picture
To get values that we can use in TGVDenoise, we need to switch the Statistics process to show us pixel values in Normalized Real [0, 1], meaning pixel values range between 0 and 1 rather than their usual bit depth range (e.g. 16-bit ranges between 0 and 65,535). Simply select Normalized Real [0, 1] from the drop-down menu on the top left.
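For reference, the figure that Statistics reports here is simply the standard deviation of the background-only preview in normalised units. A hedged sketch of the same calculation, assuming the preview is available as a 2-D numpy array of 16-bit pixel values:

import numpy as np

def background_std(preview_16bit):
    normalised = preview_16bit.astype(float) / 65535.0   # map 0..65535 to 0..1
    return normalised.std()

A result of 1.865619e-04, for instance, would be entered into Edge protection as a slider value of 1.865619 with an exponent of -4.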
Picture
Above we see that stdDev is calculated to be 1.865619 with an exponent of -4. This is indeed the value that needs to be entered into Edge protection in TGVDenoise. Once done, close the Statistics process and delete the preview box as we no longer need it.
Picture
Above we see that PixInsight has rounded off what I entered but the accuracy more than suffices. As aforementioned, running a noise reduction algorithm over many iterations is useful until the point of convergence, when more iterations produce extremely little benefit. A safe convergence value for Iterations is 500 but we should start lower to test things out so we set 250. Before we see how things turn out, we note that the default settings of TGVDenoise are to deal with non-linear images. Our image is currently linear and these settings are therefore far too aggressive. Below shows the effect on the image of applying the default settings for Strength and Smoothness with Iterations at 250 to the entire image.
Picture
If you are seeing something of this kind, your value of Strength is set too high and should be lowered, probably by an entire exponent. It is typical to use exponent values that are negative for both Strength and Edge protection (say, -1 to about -4). By lowering our Strength exponent by 1 to -1, we get the following, much better result (second screenshot shows an area of the image after application):
Picture
Picture
This is in fact pretty excellent noise reduction already, if still a little too strong. Lowering the value of Strength but keeping the exponent at -1 is a good place to be at the moment. A Strength value of 1.5 with an exponent of -1 is much better.
Picture
Picture
This looks a lot better, with less loss of detail. However, we are still losing a bit of detail on the nebulosity, and if we keep lowering Strength, we are going to start getting less noise reduction (when it is currently working very nicely). What we should therefore lower to maintain the detail is Smoothness. As a hunch, I will lower the exponent of Smoothness to -1 and then increase the value to 8 (this is still a lower value overall because the exponent dropped by one order of magnitude).
Picture
Picture
The level of detail preserved alongside the excellent noise reduction is very evident here, showing an excellent application of TGVDenoise. When your settings are looking good, it is advisable to raise Iterations to 500 so the noise reduction application is thorough. During testing, you may wish to apply TGVDenoise only to a small preview box showing an area with both background and nebulosity (with some stars mixed in). This makes the testing process faster as TGVDenoise only runs on a small area of the entire image. Once you are happy with your final application of TGVDenoise, you can close the process and also close the mask image used in Local Support without saving it.
Picture
If we stretch the image to non-linear, we sometimes end up seeing some noise blobs left over from the noise reduction. This happens when the noise reduction is a bit too aggressive.
Picture
Since at this point, the image is non-linear, ACDNR is an excellent process to deal with these noise blobs and smooth things out a bit. TGVDenoise can also be used with non-linear images and indeed its default settings are more applicable to non-linear images than linear images. Even so, tweaking the Strength and Smoothness parameters after setting the appropriate value for Edge protection (using a preview box over background and finding stdDev with the Statistics process) is key to successful noise reduction, whether your image is linear or non-linear.
 

5. Analysing Wavelet Layers for Noise

At this point, you will have been introduced to various noise reduction processes within PixInsight and should now be able to apply them to your images. MultiscaleLinearTransform and MultiscaleMedianTransform in particular work with wavelet layers corresponding to different pixel structure sizes, so we have been dealing with images in layers of different pixel scales. This raises the question - how do we know which layers really contain our noise, and with what severity? Thankfully PixInsight can answer that question diligently as well.

We will make use of the ExtractWaveletLayers script in order to analyse our image in different pixel scales. This script is found under Image Analysis.
Picture
Picture
With our open image still in its linear state (but auto-stretched), we can use this script to produce new images that correspond to the different pixel scales present. Since noise reduction is usually applied to Layers 1 to 4 (not beyond), we can lower the Number of layers setting to 4. If we keep Extract the residual layer enabled, another image will be produced corresponding to Layer R - the rest of the image's pixel scales all in one. Since we do not need this to be able to analyse the noise in the first four layers, we can disable this. The default Scaling function works very well. Before clicking OK, make sure the correct image is selected under the Target image drop-down menu.
Picture
Once you click OK, four new images will pop up a few seconds later, each corresponding to a pixel scale. They will be named layer00, layer01, etc. The image layer00 corresponds to Layer 1, the image layer01 corresponds to Layer 2, and so on. The following shows the four images with the boosted auto-stretch applied, laid out next to each other.
Picture
The top-left is Layer 1 (pixel scale 1), the top-right is Layer 2 (pixel scale 2), the bottom-left is Layer 3 (pixel scale 4) and finally, the bottom-right is Layer 4 (pixel scale 8). We can see that as we increase in pixel scale, more valid structures of the image become visible. This is normal as going up in pixel scale will reveal the larger structures present. This is precisely why noise reduction is applied more aggressively to Layer 1 than to Layer 4, for example - Layer 1 is dominated by noise while Layer 4 has little noise but has lots of valid image structures.

Extracting the wavelet layers from an image can be useful in order to determine just how aggressive you need to be to each layer when applying the likes of MultiscaleLinearTransform or MultiscaleMedianTransform. This script can also be applied before and after noise reduction, for comparative purposes, if desired. Please note that the ExtractWaveletLayers script works equally well with linear and non-linear images. You may still wish to use the auto-stretch or boosted auto-stretch functions of PixInsight to analyse your extracted wavelet layer images, even if they came from a non-linear image (just so you can better see the noise and structures). 
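If you prefer a numerical comparison to eyeballing the extracted layers, a rough approach (sketched below with hypothetical names, assuming the layer images are available as numpy arrays) is to compare the standard deviation of each layer; if noise dominates the smallest scales, those layers should show the largest values relative to their visible structure.

import numpy as np

def layer_noise_report(layers):
    # "layers" is assumed to be [layer00, layer01, layer02, layer03] as 2-D numpy arrays
    for j, layer in enumerate(layers, start=1):
        print(f"Layer {j} (pixel scale {2 ** (j - 1)}): std = {layer.std():.6f}")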

 