Tutorial (PixInsight):
Reducing Light Pollution Effects, Removing Gradients and Artificial Flattening
At times, unavoidable circumstances lead to an overall light pollution glow in one's images. In today's society, it can be hard to get away from light pollution altogether. Light pollution suppression filters exist, which can help with certain types of light pollution, but a glow can nevertheless remain in one's images. This glow washes out the images, usually with an orange tint. DynamicBackgroundExtraction is one of PixInsight's most impressive processes and can be used to remove this glow, effectively reducing the effects of light pollution on one's images.
Not only does light pollution wash images with certain intensity gradients, but other external light can produce gradients that appear to affect different parts of one's images with different levels of severity. DynamicBackgroundExtraction is very much able to tackle this as the gradients are treated in much the same way as light pollution glow itself. Moreover, DynamicBackgroundExtraction is able to artificially flatten images if they have not been calibrated with proper flats. The caveat is the lack of removal of dust motes, but nevertheless the vignetting is effectively removed.
This tutorial covers the DynamicBackgroundExtraction process in detail, starting with the fundamentals of how it functions, followed by its three main applications outlined above. It applies equally to both colour and monochrome images.
Assumed for this tutorial:
- Knowledge of operating PixInsight, related to dealing with images and processes (read this, sections 3 and 4).
- Your images have already been pre-processed fully (read this).
Please feel free to ask questions or leave comments via the comments section on the bottom of this page.
1. The DynamicBackgroundExtraction Process
As per its name, the DynamicBackgroundExtraction process is a dynamic process. This means that once it is opened and initialised on one image opened in PixInsight, it cannot be applied to another image that is open without closing the process and re-opening it for the other image (or applying a saved process icon to the other image). Below is a linear image captured on a DSLR (auto-stretched in PixInsight) with the DynamicBackgroundExtraction process opened next to it.
The important tabs Sample Generation and Target Image Correction have been expanded to show their settings, as we will work with these. The process is not yet initialised. This is shown by the fact that all the settings options are greyed out. To initialise it, we can select the process window and then click anywhere inside the target image, or select the target image and then click the Reset button for the process.
You will immediately notice that a crosshair appears in the target image. This defines the centre of the light pollution gradient (or other gradient) present in the image that we aim to remove. By hovering the mouse over the centre of the crosshair, the cursor changes to one indicating you can drag it around. Once the cursor changes, simply keep your left mouse button pressed and drag the crosshair anywhere on the image. You may also drag each line (the vertical and the horizontal) individually by hovering over the lines themselves (anywhere along each one), rather than the crosshair's centre. This is useful if you notice that your gradient is not centred on the image.
Clicking the process' Reset button will reset the crosshair back to its default position at the centre of the image. Since this example image has its gradient fairly well centred, we keep the crosshair at the image centre.
Before we go into the settings of DynamicBackgroundExtraction, let us first introduce the fundamentals of how it works. DynamicBackgroundExtraction works by generating an image corresponding to sample points placed on the target image. It should come as no surprise that these sample points need to be placed strictly over what should be background - not over stars, nebulosity, galaxies, etc. For the generated image to be an accurate representation of the gradients in the target image, the target image needs to be sampled across its entire area, at fairly regular intervals where possible. This gives the process a good idea of what the gradients look like and gives a pleasing end result when the generated image is subtracted from the target image.
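The core idea can be sketched in a few lines of Python. This is a hypothetical approximation for illustration only: DBE actually fits a smooth two-dimensional surface to the samples, whereas here a low-order polynomial stands in for the model.

```python
import numpy as np

def fit_background(xs, ys, values, order=2):
    """Least-squares fit of a low-order 2D polynomial to sample values."""
    def terms(x, y):
        return [x**i * y**j for i in range(order + 1)
                for j in range(order + 1 - i)]
    A = np.column_stack(terms(xs, ys))
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return lambda x, y: sum(c * t for c, t in zip(coeffs, terms(x, y)))

# Synthetic 100x100 "image": a linear light-pollution gradient only
h, w = 100, 100
yy, xx = np.mgrid[0:h, 0:w].astype(float)
image = 0.2 + 0.001 * xx + 0.0005 * yy

# Sample it on a regular grid over background, as DBE's samples would
sy, sx = yy[::20, ::20].ravel(), xx[::20, ::20].ravel()
values = image[sy.astype(int), sx.astype(int)]

model = fit_background(sx, sy, values)   # the "generated image"
corrected = image - model(xx, yy)        # subtract the model from the target
```

Because the synthetic image here is pure gradient, the corrected result is essentially zero everywhere; on a real image, what remains after subtraction is the sky signal above background.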
Sample points can be placed manually anywhere along the image simply by clicking on the target image once the process has been initialised as shown above.
The above screenshot shows a zoomed-in segment of the image. The sample points placed have been circled for clarity. These are shown as squares in the image. They can be dragged around once placed, and deleted by clicking them to select them and pressing the DEL key on your keyboard. Depending on the relative size of the image, you may benefit from larger or smaller sample points. By default, the sample point size is set to a radius of 5 pixels. I find that this is a bit too small for most image sizes these days. Personally, I tend to use settings of 10 to 15 for Default sample radius.
If you wish to change the size of your sample points, simply enter a new value under Default sample radius and click the Resize All button.
The right size of the sample points really depends on the size of the image itself, and how dense the star field is. For extremely dense star fields, large sample points can be problematic. The following shows the image in full size (not zoomed in) with a roughly correct sample point size, which turned out to be 10.
Now that we have covered the fundamentals of what the process needs - sample points - we realise that placing them can be very tedious. Thankfully this can be eased by automatic sample point placement. Once you determine the roughly correct sample point size, you can clear the image of sample points by clicking the Reset button on the process. Doing this will reset your Default sample radius setting, so enter it again and click the Resize All button. In my case here, it is 10. Once done, you may click the Generate button, which will automatically place samples all over the image.
You can immediately tell from the above image that not a lot of sample points were automatically placed. This is not very useful. Moreover, there is a large gap between rows. This is unsuitable for this image because there are intense gradients with fast changes. To reduce the gap between rows, we simply enter a larger value for Samples per row. 15 tends to be a good value. Entering a new value here requires we click Generate again to apply the changes.
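The effect of Samples per row on the grid can be pictured with a small sketch. This is a hypothetical stand-in for what the Generate button does, not PixInsight's actual placement code: sample centres are spaced evenly across each row (inset by the sample radius), and the gap between rows is kept close to the in-row spacing, so raising Samples per row also shrinks the row gap.

```python
import numpy as np

def generate_sample_grid(width, height, samples_per_row, radius):
    """Evenly spaced sample centres, inset by the sample radius,
    with the row gap kept close to the in-row spacing."""
    xs = np.linspace(radius, width - radius, samples_per_row)
    spacing = xs[1] - xs[0]                              # in-row spacing
    n_rows = int(round((height - 2 * radius) / spacing)) + 1
    ys = np.linspace(radius, height - radius, n_rows)
    return [(float(x), float(y)) for y in ys for x in xs]

# 15 samples per row of radius 10 on a 1000x800 image -> 12 rows, 180 samples
grid = generate_sample_grid(width=1000, height=800,
                            samples_per_row=15, radius=10)
```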
The number of sample points placed automatically is now much better, with a smaller (and acceptable) gap between rows. The issue is that a lot of the areas of the image are not being sampled - namely the darker corners. This is because those areas are being rejected by DynamicBackgroundExtraction in terms of the sample weighting and the tolerance defined.
To help the process automatically place more sample points, we can lower the Minimum sample weight. The default setting of 0.750 can work well but I find a setting of 0.100 is better to automatically place sample points. Changing this again requires we click the Generate button to place sample points again.
We see that we are still missing sample points in key areas of the gradients present - towards the darker corners. Also worth noting is that despite lots of sample points being placed above, some are being rejected. This means these sample points will not be included in generating the model image, which is undesirable as these sample points are in important areas. The rejected sample points take a different colour to the clear boxes. By default, these should be red in colour, but the colour of the gradients in the image I am using as an example makes it difficult to discern their red colour. Nevertheless, the rejected sample points are clear and are circled below.
The key parameter to tweak now is Tolerance. The default of 0.500 is generally excellent but for more intense gradients such as those presented here, one needs to increase Tolerance to a larger value. For intense gradients such as these, values of 1.000 all the way up to around 1.500 generally work well. The aim is to set this to the lowest value possible (not generally below 0.500 though!) while making sure all the sample points are accepted and that they are all over the image's background areas.
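The role Tolerance plays can be illustrated with a simple rejection rule. This is only an illustration of the idea, not PixInsight's exact algorithm, and the numeric scale here does not match DBE's Tolerance values: a sample is rejected when its value deviates from the background level by more than some multiple of a noise estimate, so raising the multiplier lets samples in stronger gradient regions be accepted.

```python
import numpy as np

def accepted_samples(values, tolerance):
    """True where a sample is accepted: its deviation from the background
    median is within `tolerance` times a robust (MAD-based) noise sigma."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    sigma = 1.4826 * np.median(np.abs(values - med))  # robust sigma estimate
    return np.abs(values - med) <= tolerance * sigma

# Five samples over flat background plus one over a bright star halo
samples = [0.10, 0.11, 0.10, 0.09, 0.10, 0.50]
```

With a low multiplier only the flat-background samples are accepted; a much larger multiplier accepts everything, including the sample over the halo, which is exactly why Tolerance should be kept as low as the gradients allow.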
For this image, I increase Tolerance to 1.000 and click the Generate button again to automatically place sample points.
Notice that a lot more sample points are now present and that, for the most part, the image is covered. Notice also that the central sample points that were previously rejected are now accepted. You may however notice that there are sample points along the bright star halos and that these are being rejected. I will not adjust Tolerance for this because these sample points should not be there to begin with - remember the sample points should strictly only cover background, not stars, nebulosity or galaxies (or star halos in this case!).
A slight problem is that despite the Tolerance value of 1.000 being very helpful, there are still a few key sample points missing - the very corners. The bottom-right corner has a sample point there but it is being rejected. Ideally the corners should always be sampled for this process to work well (unless the corners contain objects you do not want sampled!). I therefore increase Tolerance to 1.300 and click the Generate button again to check.
The corners are now covered by valid sample points. Increasing Tolerance may have added sample points over undesirable areas (such as bright stars, star halos or nebulosity). This is unavoidable, but we can delete these problematic sample points afterwards. Before settling on 1.300, I tested values of 1.100 and 1.200, but unfortunately one or two corners were still left out or being rejected. If a sample point gets placed at a corner but is still being rejected, you may opt to move it manually ever so slightly out of the corner. This is generally preferable to increasing Tolerance further, so long as the sample point remains around the corner.
One final note before moving on: DynamicBackgroundExtraction should only be used on images which have no black edges around them (these tend to appear due to the stacking and registration processes). Please ensure these are cropped out before using DynamicBackgroundExtraction. Doing so will ensure the generated image is a true representation of what is being displayed by your target image.
We are now at a stage where the image is being sampled well and the gradients can be modeled by the generated image. The Smoothing factor parameter takes care of smoothing out the generated image so that outliers are removed and a smooth background gradient model is subtracted from your target image. The default of 0.250 works exceptionally well in almost all cases. You may test decreasing it slightly to get a more targeted subtraction, but generally values below 0.100 do not work very well as the sample points are taken too literally.
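Conceptually, the smoothing prevents a single bad sample from punching a dark or bright hole into the corrected image. The rough sketch below pictures this as blurring the raw model; note this is only an analogy, since DBE's Smoothing factor actually controls the stiffness of the fitted surface rather than applying a blur.

```python
import numpy as np

def box_blur(img, k):
    """Separable k-point mean filter along both axes ('same' convolution)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'same'), 0, out)

# Raw model: a gentle horizontal gradient with one outlier sample value
model = 0.2 + 0.001 * np.arange(100)[None, :] * np.ones((100, 1))
model[50, 50] += 0.5                     # one bad sample spikes the model

smoothed = box_blur(model, k=11)         # the spike is spread out and tamed
```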
The setting for Correction (either Subtraction or Division) is important depending on what we are looking to do. This setting is covered in later sections depending on application.
The automatic placement of sample points is somewhat blind, depending on your target image and the parameters entered into DynamicBackgroundExtraction. Some of these sample points will unavoidably be over areas they should not be - stars, star halos, nebulosity, galaxies, etc. These sample points can be moved slightly out of the way and on to background, or deleted entirely. Here comes a fairly tedious procedure - looking at each sample point placed and moving or deleting problematic ones. It helps to zoom in a fair bit and pan row by row.
Above shows a few sample points that are currently problematic - shown overlapping stars. Clicking a sample point will select it, and display what it is overlapping on the DynamicBackgroundExtraction process window.
The black area in the process window preview is the problematic area - an overly bright area the sample point is overlapping. Dragging the sample point slightly to the left and on to pure background clears the problem.
The sample points need not be perfectly lined up to each other in nice rows. They simply need to sample the image well and only sample background at that. Therefore, feel free to move each sample point slightly out of where they were automatically placed.
When it comes to sample points that are over objects such as bright stars, bright star halos, nebulosity or galaxies, it helps to delete these. There is no point clumping too many sample points together so it is simply better to make a hole in the rows by deleting a few sample points. If you are not 100% sure whether or not a sample point is actually overlapping faint nebulosity, it is best to move it further away from the object in question, or delete it altogether, than risk keeping it there. Keep in mind however that the smoothing that is done to the generated image prior to subtraction does deal with some of these outliers.
To delete sample points, simply click to select them and press the DEL key on your keyboard. The next sample point in line is then selected.
The above screenshot shows all the sample points well placed over background, having moved some. The area circled used to have a sample point, which was deleted. There was little point moving this sample point because it would require too large a move to put it outside of the area it was in, and the surrounding areas were already covered by other sample points. It is best to delete the problematic sample points in this scenario.
Again, sample points here were moved out of the way of stars and the circled areas show where sample points used to be but were deleted due to bright star halos and nebulosity (even if faint).
While you are ensuring all the sample points are well placed and not overlapping objects, you may wish to add more sample points manually. Feel free to do this by clicking on the target image. Manually added sample points can also be moved or deleted in the same way.
If while you are dealing with the sample points, you realise you wish to increase or decrease their size, simply enter a new value under Default sample radius. This time however, to avoid erasing all the tedious work you have done, do not click the Generate button as this will do the automatic placement again. Instead, click the Resize All button. This will maintain your sample point placements but simply resize them. The Resize All button can also be used to apply the other parameters such as new Tolerance values. This is helpful if you suddenly realise you want to adjust Tolerance but do not want to start again. Simply change Tolerance and click Resize All.
Finally, before we move on to specific applications, you may wish to save your work. Since DynamicBackgroundExtraction is a dynamic process, closing the window will not maintain its settings - all your work will be lost. Instead, you will need to create a process icon. This is done as with all other processes in PixInsight - drag and drop the New Instance button from DynamicBackgroundExtraction on to the PixInsight workspace. You may then close the process window as double-clicking this process icon will bring your work back into view on the currently selected image. This is particularly useful if you are working with monochrome images and are applying this process to each of them individually. After all, the stars should be in the same places in all the monochrome images, so there is no sense in re-doing all the above work for each one!
Please note that double-clicking the process icon will open up DynamicBackgroundExtraction and place the sample points (as saved) on to the currently selected image. This is good if you wish to check how well the parameters match the image before applying the process. You may want to click the Resize All button after double-clicking the process icon, to ensure the parameters are applied and your sample points show acceptance/rejection as appropriate. If however you wish to simply apply what you have already set up, without dealing with the window itself, simply drag and drop the process icon itself on to a target image of choice. This will simply run the process on the target image straight away.
2. Reducing Light Pollution Effects
At this point we are well versed in how DynamicBackgroundExtraction works, but we have yet to apply it to anything specific. In this section, we follow on from the above, applying the process to the very same example image. Below is the model we created above, with all sample points individually checked and moved or deleted as appropriate.
There are two settings under Correction. These are Subtraction and Division.
To reduce the light pollution effects, we want to subtract the generated model image from the target image. This requires selecting Subtraction under Correction.
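The difference between the two Correction modes can be shown with hypothetical pixel values: light pollution is an additive pedestal, so it is removed by subtraction, whereas vignetting is a multiplicative falloff, which is removed by division.

```python
import numpy as np

signal = np.array([0.02, 0.05, 0.30])   # true sky signal at three pixels

glow = 0.10                              # additive light-pollution pedestal
vignette = np.array([0.7, 0.9, 1.0])     # multiplicative vignetting falloff

polluted = signal + glow                 # recorded under light pollution
vignetted = signal * vignette            # recorded without flat calibration

subtracted = polluted - glow             # Correction: Subtraction
divided = vignetted / vignette           # Correction: Division
```

Both corrections recover the original signal here, but only because each matches how its artefact entered the image; using the wrong mode would leave residual structure behind.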
Since all the sample points are well placed and the Correction method is selected, all that is left is for us to apply the process to the target image. Two images pop up after application. Both have been auto-stretched below for clarity.
The image on the left is the generated model image, based on the sample points in the image together with the parameters set. We can see how closely this matches the light pollution background of the target image. The image on the right is the end result from the subtraction - an image that displays a lot more contrast in the objects of interest above background, thanks to DynamicBackgroundExtraction.
The above example is considered extreme in the sense that the image is washed in light pollution glow, attributed to the fact that it was captured on a DSLR through a standard lens, without a light pollution suppression filter and in an area of night sky that had a fair bit of visible light pollution (to the naked eye). Even so, the end result is very pleasing compared to the initial image. The process could in theory be applied more than once, with slightly tweaked parameters. For example, we could close the initial image and keep only the end result image open. Double-clicking the process icon I saved will bring up the same model on the new image.
In light of the further details revealed by the initial subtraction, you can now move some sample points, delete others, place new ones over areas you wish to tame further, and so on. In my example image, there is a sample point at the very top of the image (near the centre) that is clearly over a bright star halo. Initially it was not clear it was over a star halo, due to the light pollution glow everywhere. Now that it is, the sample point can simply be moved or deleted for the second application. Moreover, seeing as most of the light pollution glow is now gone, we can reduce the Tolerance value significantly; it should be set back to its default of 0.500. You will need to click the Resize All button to apply the new setting.
Since no sample points appear as red squares, nothing is being rejected at the reduced Tolerance value. This is good. If however one or more sample points were being rejected when they should not be (i.e. if they are indeed over background), you could increase Tolerance ever so slightly above 0.500, or move the sample point slightly within its current area.
To apply DynamicBackgroundExtraction, we make sure Subtraction is again selected from Correction. Please note that if you simply want the end result to pop up, rather than the second image showing what was subtracted, you can enable the option Discard background model. This will force only the end result to pop up.
Applying the process now yields the end result alone, which shows a little bit of a further improvement.
There are of course diminishing returns in further applications. Generally a single application works exceptionally well. You may wish to apply the process a second time (with Tolerance lowered back to 0.500 or as close to this as you can make it) to check for extra benefit. Please note that the above end result still has vertical banding and a bright band along the bottom. These are not side-effects from DynamicBackgroundExtraction - they are from the DSLR used to capture this image.
Before moving on to other applications of DynamicBackgroundExtraction, here is another worthy example image. This is a linear image (but auto-stretched) of the Pelican Nebula in Hydrogen-Alpha, captured using a 3nm bandwidth filter.
The image is already very sharp and fairly clean of obvious light pollution gradients. However, it is generally good practice to apply DynamicBackgroundExtraction once even to such clean images. Even if the improvement is minor, it does clean up the image and bring out further contrast. Due to the sheer quantity of nebulosity present in this image, rather than placing sample points automatically as outlined above, I opted to place each one manually. I started by placing a couple and resizing them to ensure they were roughly the correct size. Once they were sized appropriately, I continued covering all areas of the image that should be background, regardless of whether my sample points were being rejected as I placed them; Tolerance and Minimum sample weight could later be tweaked (followed by clicking Resize All to apply the new settings) until all my sample points were accepted.
This fully-manual technique is good when you have tons of nebulosity and can only place a small number of sample points to begin with (it would take longer to go through lots of automatically-placed sample points and deleting a large portion of them!). It is generally good practice to cover each corner of the image and along the four edges, plus several towards the centre. However, as in this case, I cannot place any sample points on the top-left corner due to excessive nebulosity there. This is no problem, as clearly there is no clear background there anyway.
Again, Subtraction is selected under Correction because we want to subtract the light pollution background. Applying the process makes the two images pop up.
The image on the left is the original image. The image in the middle is the new image, after subtracting the image on the right (the generated model image). The improvement in deep detail and contrast is very clear, even on this ultra-narrowband image captured with many hours of exposures and in a dark area of the night sky.
3. Removing Gradients
Arguably, we have already covered the removal of gradients with all the above - the light pollution wash can generate gradients in images, which are removed with the above procedure. Moreover, vignetting from the optical system (in the absence of calibration with flats) introduces such gradients, also clearly removed above. This section however targets what to do when a part of your image has some glow that is clearly localised and that you wish to reduce.
The example used for this section is the same image of the Pelican Nebula as above, but this time in Oxygen-III, also captured using a 3nm bandwidth filter.
It is very clear from the above screenshot that the image contains nebulosity, but it also appears to be washed in some glow from the right side and top-right corner. This glow is actually due to the CCD sensor itself (the famous Sony glow found in some cameras). Since it is not nebulosity, we can treat it as background that we wish to reduce.
The procedure is one and the same, and since there is a fair bit of nebulosity, I opt to place the sample points manually by clicking.
As you will notice, some of my manually-placed sample points have been rejected by the current parameters. This is due to a low value of Tolerance. This did not happen with the Hydrogen-Alpha image because its background was fairly neutral in intensity, whereas the background of this Oxygen-III image has a clear and intense gradient. Increasing Tolerance and clicking the Resize All button to check is the key to having all sample points accepted.
Some sample points (the stubborn ones) were also moved very slightly, to keep a sample point in that general area without having to increase Tolerance further.
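The rejection behaviour can be pictured as a simple threshold test: a sample is accepted only when its value sits close enough to the expected background, with "close enough" measured in units of the noise and scaled by Tolerance. A simplified numpy illustration, assuming Tolerance acts as a sigma-based deviation threshold (the values, `sigma`, and the `accepted` helper are all hypothetical, not PixInsight internals):

```python
import numpy as np

# Hypothetical background samples, the expected background level,
# and a noise estimate sigma for the image.
sigma = 0.01
expected = 0.20
samples = np.array([0.201, 0.204, 0.215, 0.198])

def accepted(samples, expected, sigma, tolerance):
    # A sample is kept when it deviates from the expected background
    # by no more than tolerance * sigma.
    return np.abs(samples - expected) <= tolerance * sigma

# With a low Tolerance, the brightest patch (0.215) is rejected...
low = accepted(samples, expected, sigma, tolerance=0.5)
# ...while raising Tolerance accepts every sample.
high = accepted(samples, expected, sigma, tolerance=2.0)
```

This is why a strong gradient forces a higher Tolerance: samples in the bright corner genuinely deviate further from the mean background, even though they are valid background.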
Before proceeding, we note that the background gradient is definitely not centred in this image. It is fairly offset towards the top-right corner. We therefore move the crosshair somewhat towards the top-right corner. Additionally, more sample points are placed in the areas with the most intense gradients and with the brightest background intensities. This is done so that these areas of large, fast changes are sampled more appropriately.
Once done, a final check is done to ensure all sample points are well placed and are accepted by the parameters. Subtraction is selected under Correction and the process is then applied.
The end result (the middle image, compared to the original on the left) is much better: the background is significantly more neutral and reduced in intensity, and the nebulosity presents more deep detail and contrast.
Some gradients are quite stubborn and require a second application of the same DynamicBackgroundExtraction model (with Tolerance reduced, back to 0.500 if possible) to remove fully. Sometimes, however, adding more sample points in areas of large, fast changes suffices. Smoothing factor can be reduced slightly below the default 0.250 if a more targeted subtraction is desired, but be careful, as this can quickly do more harm than good. I have found that values below 0.100 for Smoothing factor quickly break down the benefits.
Stubborn gradients may require going back and forth between changing sample points and parameters and applying the process to check the end result. It helps to allow the generated model image to pop up after subtraction takes place, as this generated model image shows you precisely what DynamicBackgroundExtraction generated (based on your sample points and parameters) and then subtracted.
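To build intuition for what the model image is: DynamicBackgroundExtraction fits a smooth surface through your sample points, with the smoothing factor controlling how closely the surface chases individual samples versus staying gentle. A rough scipy sketch of the same idea, fitting a smoothing spline to scattered "sample point" values drawn from a planar gradient (this is an analogy, not PixInsight's actual algorithm):

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)

# Hypothetical sample points: background values measured at scattered
# (x, y) positions, following a planar gradient plus a little noise.
xs = rng.uniform(0.0, 1.0, 40)
ys = rng.uniform(0.0, 1.0, 40)
zs = 0.1 + 0.05 * xs + 0.02 * ys + rng.normal(0.0, 1e-3, 40)

# Larger s -> smoother surface, loosely analogous to a larger
# Smoothing factor; too small and the surface chases the noise.
model = SmoothBivariateSpline(xs, ys, zs, kx=2, ky=2, s=len(xs) * 1e-6)

# The fitted surface can be evaluated anywhere, e.g. the image centre,
# which is exactly what a generated background model image represents.
centre = model(0.5, 0.5).item()
```

The generated model image you see pop up is simply this fitted surface evaluated at every pixel, which is why inspecting it tells you exactly what was subtracted.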
4. Artificial Flattening
Image calibration is a key technique for producing good images. Calibration with flats is generally an extremely good idea, as it removes the inherent vignetting in the optical system and, with good flats, removes dust motes present in the optical train. However, a number of reasons may leave you without appropriate flats for an image, and thus with an image that has very clear vignetting, such as this image of the Veil Nebula in Luminance.
The above image is linear (but auto-stretched) and has only been calibrated with bias (no flats, on purpose, to provide this example). The following technique demonstrates how to use DynamicBackgroundExtraction to flatten your images artificially. The caveat is that this technique does not remove dust motes, so it only works well for clean images. Following this technique, one could remove dust motes with the CloneStamp tool, for example, unless they are very unsightly or very numerous.
What we want to do is make DynamicBackgroundExtraction generate a model image of the background and, instead of subtracting it, divide the target image by it. Division is the exact process of calibrating with flats, so you may already see how this is going to work. We start with the exact same procedure as above - placing sample points all over the image, making sure all edges and corners are well sampled (particularly now that we wish to generate a model image that will flatten this image, meaning the vignetting pattern itself needs to be sampled fully).
As the star field in this example image is very dense, we use a small sample point size. Also, as most of the image is background, we opt for automatically placing the sample points. Due to the very clear vignetting, Tolerance needs to be a fairly high value to place sample points everywhere needed.
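Why division rather than subtraction here? Vignetting is multiplicative: it scales each pixel by the optical system's transmission at that point, so dividing by the shading profile (normalised, as a master flat would be) restores a uniform background. A toy numpy sketch, not PixInsight code:

```python
import numpy as np

# Toy 1-D frame: uniform sky of 0.30 dimmed by vignetting that falls
# off towards the edges, plus a star of extra flux 0.10 at the centre.
x = np.linspace(-1.0, 1.0, 5)
vignette = 1.0 - 0.3 * x**2        # multiplicative shading, peak 1.0
frame = 0.30 * vignette
frame[2] += 0.10                   # star sits where the shading is 1.0

# Divide by the background model normalised to its mean, exactly as a
# master flat is applied. The multiplicative shading cancels out,
# leaving a uniform sky with the star still on top.
flat = vignette / vignette.mean()
flattened = frame / flat
```

Subtraction would instead leave the star's flux scaled differently from its surroundings near the edges, which is why Division is selected under Correction for flattening.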
Above, the colour of the sample points was changed from the default grey to bright purple by clicking the colour swatch for Sample color, making the placed sample points easier to see. Sample points were placed automatically based on a large Tolerance value of 1.500. Each one was then inspected and moved out of the way of stars and nebulosity. Once all were clear, Tolerance could be lowered to 1.000 without any sample points being rejected.
With the sample points ready, we select Division under Correction (as we wish to flatten this image).
At this point, we apply the process and inspect the end result.
The end result is very clearly flattened, with the boost to overall contrast that brings. The generated model image shows the vignetting pattern very clearly as well. Following artificial flattening, you may wish to remove any obvious dust motes using the CloneStamp tool (not covered in this tutorial). You may find this process difficult if the dust motes are very large or if you have a lot of them spread around the image, particularly if they overlap objects of interest. Due to this, artificial flattening is not a replacement for proper calibration with real flats!
Following artificial flattening, one can apply the exact same model in DynamicBackgroundExtraction, with Tolerance lowered to 0.500 (or as close to this as you can make it) and with Subtraction selected under Correction. This removes any remaining gradients in the background after having artificially flattened.
This example is fairly ideal simply because I tend to keep my optical train clean and therefore free of dust motes; as a result, the artificial flattening has been very successful. Your mileage may vary.