By far the biggest image project I have undertaken thus far, my IC1805 Heart Nebula image, is now complete. The data took about a month and a half to acquire; I had originally planned for 100 hours of total exposure time across a 6-panel mosaic. A change of plans reduced the total exposure time used in the final image to 82.5 hours. Nevertheless, that is still just under 10 hours more, and 2 panels more, than my second-biggest image project (the M31 Andromeda Galaxy). Below is the final image:
Interestingly, and this is what led to the reduction in total exposure time, the above image contains zero Luminance data. I did actually capture 30 Luminance exposures, each 10 minutes long, per panel. The result, however, looked as follows when auto-stretched in PixInsight:
Needless to say, very lackluster indeed, despite the fact that the above represents 30 hours of Luminance data spread equally over the 6 mosaic panels. In contrast, the Red mosaic image was vastly more impressive. It became clear that Luminance data is a good idea for broadband objects such as galaxies, whereas emission nebulae are generally not well represented in Luminance: their signal is concentrated in narrow emission lines (chiefly Hydrogen-Alpha at 656 nm), so a broadband Luminance filter collects mostly sky background rather than nebulosity.
My final image is therefore an (R+HA)GB image. Essentially, I colour-combined my R, G and B images, colour-calibrated the result, extracted the Red channel from the colour image, enhanced it with my HA image, and then put it back in (a sketch of this blend follows below). What followed was a normal post-processing routine, with no Luminance data combination whatsoever. Ron Brecher suggested to me on Facebook that I could do what he does and make a synthetic Luminance channel of sorts: he uses the ImageIntegration process to integrate his R, G, B and HA images together, with noise-evaluated weightings and no pixel rejection. This makes sense, but I tried it on my data and the result was not as nice as using no Luminance data at all (real or synthetic). I also tested simply adding my R, G, B and HA images together in PixelMath with scaling coefficients, but again the end result was not as nice. Therefore, in the end, no Luminance data was used. I will make it standard practice not to bother capturing Luminance data when my target is an emission nebula, unless it is an extremely bright one like the M42 Great Orion Nebula.
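For those curious about the HA enhancement step, here is a minimal sketch. The weighting below is purely illustrative rather than the exact formula I settled on, and it assumes the registered, linear HA mosaic is open with the view identifier Ha. With the extracted Red channel as the PixelMath target image ($T), an expression along these lines blends the two:

max($T, 0.6*Ha + 0.4*$T)

The max() guard ensures the Red channel is never darkened where the broadband data is stronger (in star profiles, for example); ChannelCombination then puts the enhanced result back in as the Red channel of the RGB image.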
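For completeness, the scaled PixelMath addition I tested for a synthetic Luminance looked something like the following (coefficients again illustrative, with R, G, B and Ha assumed to be the view identifiers of the four linear mosaics), run with Create new image enabled:

0.3*R + 0.3*G + 0.2*B + 0.2*Ha

Keeping the coefficients summing to 1 preserves the overall intensity scale of the result, so the synthetic Luminance stays comparable to the individual channels.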