I've been playing around with some captured data and have been impressed to find that stacking can clean up some truly horrible data. I'm glad I didn't throw out some of my "mistake" runs because they look like they will be useful. But the experimentation led to a question about Binning.
I THINK I understand how Binning works, but I'm going to explain how I think it works because that may be part of my problem. No binning means that each pixel on an image is counting its own photons. Binning 2x2 means that you are taking a 2x2 square of pixels, counting all of their photons, and putting them out as a single value. You essentially quadruple the light sensitivity at the cost of cutting the linear resolution in half. If you were to have 3x3 binning, you would do the same with a 3x3 square of pixels being reported as a single value, so 9 times the light sensitivity but 1/3 the resolution.
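For anyone who wants to see the arithmetic, here's a quick NumPy sketch of what binning does to the pixel values (on a CCD, hardware binning sums the charge on-chip before readout, but the math is the same). The function name and test arrays are just mine for illustration, not anything from PixInsight:

```python
import numpy as np

def bin_image(img, n):
    """Sum n x n blocks of pixels into single output values.

    This mirrors the description above: each output pixel is the
    sum of an n x n block, so per-pixel signal rises roughly n^2
    while linear resolution drops by a factor of n.
    """
    h, w = img.shape
    h, w = h - h % n, w - w % n           # trim so n divides evenly
    blocks = img[:h, :w].reshape(h // n, n, w // n, n)
    return blocks.sum(axis=(1, 3))        # collapse each n x n block

# 2x2 binning: a 6x6 frame becomes 3x3, each value the sum of 4 pixels
frame = np.random.poisson(100, size=(6, 6)).astype(float)
binned = bin_image(frame, 2)
print(binned.shape)   # (3, 3)
```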
So what I've learned about image capture is that Binning the color exposures 2x2 is helpful because the increased light sensitivity compensates for the light lost to the color filter before it reaches the sensor. Makes sense to me.
So assuming I have all of the above correct, when I start working with data captured at two different Binning settings, the software stacks the Binned and Unbinned data separately (I learned this because I goofed and accidentally captured one run in Luminance only, so I had both Binned and Unbinned Luminance). When you eventually go to combine the images, you will have two different resolutions of data.
So how does PixInsight deal with the two different resolutions? I'm guessing that it is going to overlay the lower-resolution color data onto the higher-resolution luminance data with some kind of smoothing algorithm thrown in, essentially spreading each binned pixel over several unbinned pixels and then applying a little sharpening. I'm sure Adam is going to cover this in another 40 videos or so.
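I can't speak to what PixInsight actually does under the hood, but the general approach in any package is to resample (interpolate) the lower-resolution channels up to the luminance's pixel grid before combining. A minimal sketch with SciPy, assuming made-up array sizes, just to show the idea:

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical stand-ins: a full-resolution luminance stack and a
# 2x2-binned color channel at half the linear resolution.
lum = np.random.rand(1024, 1024)
color = np.random.rand(512, 512)

# Cubic spline interpolation (order=3) spreads each binned pixel
# smoothly across a 2x2 neighborhood of the output grid, which is
# the "smoothing" step guessed at above.
color_up = zoom(color, 2, order=3)
assert color_up.shape == lum.shape   # resolutions now match for combining
```

Real tools typically let you pick the interpolation kernel (bicubic spline, Lanczos, and so on), which trades smoothness against ringing around sharp edges.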
Thanks.
Comments
Thank you for the help. Your presentation Saturday was fantastic.