After Background Extraction the autostretch results in a grainy image

Hi, I'm religiously trying to follow the lessons, but in the meantime I am trying to use the techniques on my own images. 
First of all, I have to say I'm in a challenging position: imaging from a Bortle 9 zone with an OSC camera... So I use light pollution filters and at least 10 hours of integration time. 

But here is what happens: before DBE the autostretch gives a relatively smooth image. After applying DBE the autostretch results in a very funky and grainy image. Somehow the autostretch is not able to correctly measure the image and messes it up. Why is that? 

Attached are three screenshots from an attempt to make a 'Hubble Deep Field' kind of image: one just after image integration, one after the DBE, and one zoomed in to show the graininess and funky colors.

Comments

  • Hi Helge,

    Unfortunately you are misinterpreting the results you are getting. In the first image the background is bright, which means there are higher pixel values there. The autostretch routines work only on the histogram of an image. They do not know anything about stars, galaxies, or gradients (in a bright background). 

    So since the background is bright in your first image, the autostretch does not apply as much contrast and you do not see the noise. But here is the thing: the noise is there. Really... your image is very noisy from the beginning... the bright background is "hiding" it. (You can manually increase the contrast and prove to yourself that all of the noise is there.)

    When you remove the values of the bright background with DBE, all that is left are the smaller values... the noisy stuff. And the automatic histogram math of AutoSTF recalculates (it is a NEW image) and shows you a higher contrast result. It is not doing anything incorrectly. 
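    To see this numerically, here is a toy model (not the actual AutoSTF code; the real AutoSTF also chooses a shadow clipping point from the median and MAD, which this sketch skips). It solves the midtone transfer function so the image median lands at a fixed display target, then measures the local gain of that stretch at the background level:

```python
def mtf(x, m):
    # Midtone transfer function used by PixInsight-style screen stretches.
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def midtone_for(x, y):
    # Midtone parameter m that maps input level x to output level y.
    return x * (y - 1.0) / (2.0 * x * y - y - x)

def local_gain(x, m):
    # Slope of mtf at x: how much small fluctuations (noise) get amplified.
    den = (2.0 * m - 1.0) * x - m
    return m * (1.0 - m) / den ** 2

TARGET = 0.25  # a typical autostretch aims the median near here

for median in (0.30, 0.01):  # bright background vs. after DBE (assumed levels)
    m = midtone_for(median, TARGET)
    print(f"median {median:.2f} -> midtone {m:.4f}, "
          f"gain at background {local_gain(median, m):.1f}")
```

    With these assumed medians the local gain at the background goes from under 1 (bright image) to roughly 19 (after DBE): the identical noise is simply displayed about twenty times harder.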

    If you want to apply the stretch you originally had to the new DBE version, you can. Just drag the STF triangle from the original image with the bright background to the new DBE image. Then you will think everything is FINE.

    But... I honestly think this is a misconception (you will see the above piece of advice parroted over and over). Your DBE image is the real deal. It is actually showing you a corrected image. This is a better rendering of your data than the first image. It shows you what your image quality actually is. It is showing you the approximate "truth"; the first image is not the "truth" because of the bright background. 

    So to improve your image quality more you might need a darker sky, longer exposures, more sensitive equipment... etc etc. Of course for a given image quality, you can use the tools available in PixInsight to work with what you have. 

    This is my non-coddle mode answer. :) OSC just adds yet another hurdle to improving the image quality. (It is a bit easier with monochrome to tame the "funk".)

    -the Blockhead
  • Thanks for giving me a hard landing :( but it is what it is...

    I'm using a ZWO ASI071MC Pro cooled camera (4.78 µm pixel size, 2.3-3.3 e- read noise) that I hope is sensitive enough. The pixel size should work quite well with my 80 mm / 600 mm FL refractor. 

    When you say I have to use longer exposures, do you refer to the total exposure time, hence more subs? I now try to have at least 10-15 h total integration time. Or should I increase my individual exposure time from 300 sec to 420 sec? I use unity gain (90); should I increase that?

    Apologies for all the questions. But I'm kind of stuck with my house and family, so moving to a dark location is not an option :-) And I specifically chose OSC to profit from the rare clear nights we have here (and to avoid the associated costs of extra filters, filter wheels, etc.)
  • A couple of things. Yes, I do think longer exposures might help if the sky is dark enough. At a certain point there is something of a faint limit that is determined by your sensor/telescope and sky brightness, so certain objects are just not going to be easy/possible. The brighter the object, the better things become. I do think the 10-15 hours should be good enough. 

    Regarding the gain... that is an area where I do not know what is optimal. I would think you want high gain... and take enough exposures to minimize the noise.
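    The "enough exposures" part is just square-root averaging. A one-line sketch (arbitrary noise units, not measured values) shows the scaling:

```python
import math

def stacked_noise(sub_noise, n_subs):
    # Averaging n equally noisy subs divides the random noise by sqrt(n).
    return sub_noise / math.sqrt(n_subs)

for n in (1, 4, 16, 64):
    print(n, stacked_noise(10.0, n))
```

    Going from 4 to 16 subs halves the grain; going from 16 to 64 only halves it again, so the returns diminish but never stop.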

    -the Blockhead
  • Thanks again for the response. I will have to experiment with the settings and look at the end result.

    You make excellent instructional videos. But there are a lot of them… Is there any video where you explain how I can determine the quality of my image with the readout options? For example: if I take an image of only stars without nebulosity, what would be a good readout difference between a star and the background? Or how do I determine if the stars are saturated? I'd rather trust a calculated result than a subjective 'this looks OK'.
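    The kind of objective check I mean could look something like this (a hypothetical sketch for a linear 16-bit image; the patch selection, toy data, and 98% saturation margin are my own assumptions, not anyone's recommended values):

```python
import numpy as np

SATURATION_ADU = 65535          # full scale for a 16-bit linear image
SAT_MARGIN = 0.98               # treat >= 98% of full scale as clipped (assumed)

def star_metrics(patch, background):
    """Objective readouts for one star: peak level, saturation flag, and
    peak-above-background measured in units of background noise (sigma)."""
    bg_med = np.median(background)
    bg_sigma = np.std(background)
    peak = patch.max()
    return {
        "peak_adu": int(peak),
        "saturated": bool(peak >= SAT_MARGIN * SATURATION_ADU),
        "contrast_sigma": float((peak - bg_med) / bg_sigma),
    }

# Toy 16-bit data: flat noisy background, plus one small star "patch".
rng = np.random.default_rng(1)
background = rng.normal(1200, 35, size=(50, 50)).astype(np.uint16)
star = background[:9, :9].copy()
star[4, 4] = 20000              # unsaturated star core

print(star_metrics(star, background))
```

    A star whose peak sits near full scale would flag as saturated, and the contrast-in-sigma number gives a calculated star-versus-background difference instead of a subjective impression.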
  • Helge,
    Regarding optimum sub duration, you might want to search for and watch this instructional video on YouTube:

    Deep Sky Astrophotography With CMOS Cameras by Dr Robin Glover.

    You can look at his charts, estimate your background light pollution rate, and get the approximate sub duration he recommends. It will NOT be 300 s or longer for your CMOS camera and your f-ratio. Don't get hung up on his mathematics; just watch the video and you will learn a lot of useful information about imaging with your system.
    There is also a follow-on video covering the missing gain setting section, but it is not so useful by itself, IMO.

    I am currently investigating and implementing all of the above with my cooled CMOS camera, imaging under my Bortle 8-9 skies. With 10 hrs of data (my target) and 0.5 min subs (as recommended), I need 1200 subs. Ouch. But I think these can be pre-stacked in groups of maybe 20 subs with SharpCap and then saved, so I only need to save 60 stacked subs. 
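    For reference, a common restatement of the rule behind Glover's charts can be sketched like this: pick the sub length at which sky shot noise swamps read noise, so stacking many short subs costs almost nothing versus one long sub. The sky electron rate below is a made-up bright-sky value; measure your own from a test sub.

```python
def min_sub_seconds(read_noise_e, sky_rate_e_per_s, extra_noise_pct=5.0):
    """Shortest sub for which the total stack noise is at most
    `extra_noise_pct` percent worse than a read-noise-free camera."""
    c = 1.0 / ((1.0 + extra_noise_pct / 100.0) ** 2 - 1.0)
    return c * read_noise_e ** 2 / sky_rate_e_per_s

# ASI071 read noise ~2.3 e- at higher gain (from this thread); the sky
# electron rate is an ASSUMED Bortle 8-9 value for illustration only.
sky_rate = 5.0                  # e-/pixel/second, assumed
t = min_sub_seconds(2.3, sky_rate)
print(f"minimum useful sub: ~{t:.0f} s")
print(f"subs for 10 h: ~{10 * 3600 / t:.0f}")
```

    Note how strongly the answer depends on read noise and sky brightness: darker skies or a noisier sensor push the minimum sub length up fast, which is why the charts in the video matter more than any single number.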

    Roger