In your opinion, should we determine a FWHM threshold (for example, < 2.5") when selecting the subs we are going to use? Or just process everything in WBPP?
I am of the mind to process everything, as far as FWHM is concerned. Usually it is the signal-to-noise ratio, not the resolution, that matters. It doesn't matter how good the resolution is if you cannot see (or process) the object.
In addition, there is a further bit of logic. If all of the data is fuzzy, then of course you cannot determine a good threshold, since none of the data would meet it. As long as the fuzzy frames are in the minority, the combination of pixel rejection and weighting will let them add in for what they are worth in terms of S/N.
You certainly can pick a threshold to catch frames that are just outlandishly fuzzy... but what I am talking about is what people typically do: throw out perfectly good frames that could contribute to the S/N.
You do not have to believe me. To convince yourself, integrate your data with the fuzzy frames removed and compare it with the full integration of everything (within reason). These days a slightly fuzzy master light + BXT is nearly equivalent to a culled integration of only the best frames. However, no process can give you better S/N than a given stack of images originally contains.
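If you want to see the arithmetic behind this, here is a minimal sketch (plain Python, not PixInsight) of the experiment. The frame counts, noise levels, and the simple inverse-variance weights are all made-up stand-ins for whatever your integration tool actually does:

```python
# Toy Monte Carlo: faint, extended signal in 15 "good" subs plus 5 "fuzzy" subs.
# The fuzzy subs have somewhat higher per-pixel noise but still carry the signal.
import numpy as np

rng = np.random.default_rng(0)
signal = 10.0                        # true faint-signal level (ADU), arbitrary
sigma_good, sigma_fuzzy = 50.0, 65.0 # per-pixel noise, made-up values
n_good, n_fuzzy, n_trials = 15, 5, 20000

def snr_of_stack(sigmas):
    """Monte Carlo S/N of an inverse-variance-weighted mean of one pixel."""
    sigmas = np.asarray(sigmas)
    w = 1.0 / sigmas**2
    frames = signal + rng.normal(0.0, sigmas, size=(n_trials, sigmas.size))
    stacked = (frames * w).sum(axis=1) / w.sum()
    return stacked.mean() / stacked.std()

print("S/N, 15 good subs only:     ", round(snr_of_stack([sigma_good] * n_good), 2))
print("S/N, 15 good + 5 fuzzy subs:", round(snr_of_stack([sigma_good] * n_good
                                                         + [sigma_fuzzy] * n_fuzzy), 2))
```

In this toy setup the full, weighted stack comes out with higher S/N than the culled stack, which is the whole point: the fuzzy frames still buy you signal on the faint stuff.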
Do the experiment... or believe me. :)
You can also show me a contrary case and then we can look at why it might be true, but as a generalization this is my response.
Integrate them all. Again, the average is what matters.
If you have a minority of frames that are elongated, then the combination of rejection, weighting, and the average will all give you a fine result. Again, you can test these scenarios by integrating with the elongated frames and without...and see what happens. If all frames are elongated...well, you are stuck.
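For the rejection part, here is a rough sketch of the idea, again just illustrative Python with made-up numbers rather than what ImageIntegration actually does: a per-pixel sigma clip throws out the discrepant star-core pixels from the bloated frames, while the rest of those frames still contribute to the average.

```python
import numpy as np

def sigma_clip_mean(stack, kappa=3.0):
    """stack: (n_frames, H, W). Reject pixels more than kappa*sigma from the
    per-pixel median across frames, then average whatever survives."""
    med = np.median(stack, axis=0)
    sig = np.std(stack, axis=0)
    keep = np.abs(stack - med) <= kappa * sig
    return np.where(keep, stack, 0.0).sum(axis=0) / keep.sum(axis=0)

# e.g. 18 tight frames plus 2 bloated ones at a single star-core pixel:
rng = np.random.default_rng(1)
stack = np.concatenate([1000.0 + rng.normal(0, 20, (18, 1, 1)),
                        np.full((2, 1, 1), 600.0)])
print(sigma_clip_mean(stack))   # ~1000: the two discrepant frames get clipped
```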
Hi Adam! What would you personally consider as reasonable in terms of FWHM in a frame? For instance, would you still integrate frames of a galaxy with an FWHM higher than 3"?
I favor S/N over resolution. BXT can make up for that loss. If you cannot see the object - what is the point of resolution? See? Faint signal requires more frames. Often the faint signal isn't highly resolved.
So it depends. It depends on the object. It depends on the number of frames you have. It depends on the ratio of good to bad frames you have. If you have 5 qualifying frames and 15 subpar ones... what do you do? If you have 15 reasonable frames and 5 subpar (based on FWHM), why aren't you letting rejection take care of the bloated bits and still benefiting from these weighted frames, which will help with the S/N?
So there isn't a single FWHM number. It is about the number of frames and the ratio of quality between frames. Do rejection and weighting do what you want?
I know... people don't believe me. But... you can prove me wrong. Show me how terribly mistaken I am! Just *include* the crappy frames and create a master light. Then DO NOT include the frames that fail your metric of "reasonable" and generate another master light. Show me how wrong I am! Compare the two masters. Remember, it isn't only about the stars. It is also about the S/N of the faint signal (if there is some).
Well, I believe you, and to be honest I would hate to discard frames just because they are fuzzier than others. And yet it's true that many experienced astrophotographers with big-aperture telescopes at dark sites do discard high-FWHM frames. It puzzles me why.
But thanks to you I now understand a bit about integration and weighting (the Fluctuations series is a masterpiece, by the way), and it seems nonsense to throw out slightly fuzzy but otherwise fine data.
An integration using all the frames, with PSF Signal Weight and, say, a 0.5 minimum weight (my frames are typically of good quality and quite uniform), seems the better choice to me. It would take star quality into account too, and would do the job with more precision. Right?
PSF Signal Weight is a tricky creature. Part of its quality metric is the brightness of the sky as well as the fuzziness of the stars. It is hard to predict what a weight cutoff should be in general... but I would say that 0.3 (assuming all frames have the same exposure time) is pretty safe. PSF Signal Weight quality measures below this are pretty crappy. 0.5 is going to give you solid frames, and if you have a reasonably consistent dataset... it will work just fine. If many frames are being rejected... then you know what to do. There are, on occasion, instances where PI thinks there is a super duper good frame according to PSF Signal Weight and (nearly) all the others are discarded. Often this "super" frame is actually poor and something is tripping up the works. Just something to look out for.
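If it helps to see the cutoff logic spelled out, here is a toy illustration. The weight values are invented stand-ins for PSF Signal Weight, and applying the cutoff relative to the best frame is my assumption purely for the sake of the sketch (PixInsight computes and applies these weights internally):

```python
# Hypothetical per-sub weights; sub_05 is the suspiciously "super" frame.
weights = {"sub_01.xisf": 0.82, "sub_02.xisf": 0.77, "sub_03.xisf": 0.25,
           "sub_04.xisf": 0.71, "sub_05.xisf": 3.90}

cutoff = 0.3
best = max(weights.values())
kept = [name for name, w in weights.items() if w / best >= cutoff]

if len(kept) <= 1:
    print("Nearly everything fell below the cutoff relative to one 'super' frame -- inspect it!")
else:
    print("Keeping:", kept)
```

The made-up sub_05 shows the failure mode described above: one inflated weight drags everything else below the cutoff, which is the symptom to look out for.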
Hi Adam! I did a test with two integrations of NGC 3521 (Luminance): one with 277 frames, and the other with the 142 frames having an FWHM of less than 2.4, with PSF Signal Weight on default. Captured with a CDK 17 on an L550 mount and a full-frame Moravian C3, at Obstech, under surprisingly bad seeing.
The first master file (277 subframes) looked brighter with the automatic STF, but with less resolution. After using BlurXterminator, both improved in resolution, but the difference in resolution between them persisted. After matching the brightness levels using STF, I could see the first image was a bit less noisy, but still with less resolution.
Is this the expected result?
Attached, SubFrame selection and FWHM of both images, after using BlurXterminator.
Quoting from myself above... "why aren't you letting rejection take care of the bloated bits and still benefit from using these weighted frames which will help with the S/N?"
The full integration was with default parameters. I did another integration with a 0.3 minimum weight and the PSF Signal Weight weighting scheme. In terms of FWHM after BlurXterminator, the results were much the same as with the default parameters.
Well, looking at both pictures, and measuring them, the version with fewer integrated frames and lower FWHM is still sharper, before and after BlurXterminator. Not a huge difference, but visible. Any clue as to what I could be doing wrong? Or is that the expected result?
Comments
Would you suggest other rejection settings?