How does PixInsight deal with Binning?

I've been playing around with some captured data and have been impressed to find that stacking can clean up some truly horrible data.  I'm glad I didn't throw out some of my "mistake" runs, because it looks like they will be useful.  But the experimentation led to a question about Binning.

I THINK I understand how Binning works, but I'm going to explain how I think it works because that may be part of my problem.  No binning means that each pixel on the sensor counts its own photons.  Binning 2x2 means that you take a 2x2 square of pixels, count all of their photons, and report them as a single value.  You essentially quadruple the light sensitivity at the cost of reducing resolution by half.  If you were to bin 3x3, you would do the same with a 3x3 square of pixels reported as a single value, so 9 times the light sensitivity but 1/3 the resolution.
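To make sure I'm describing the same operation, here is a tiny NumPy sketch of what I mean by binning done in software (the function name and test numbers are made up for illustration; on-chip binning in the camera itself is a separate matter):

    import numpy as np

    def bin_image(img, factor=2):
        """Software-bin a 2-D image by summing factor x factor blocks.

        Illustrative helper only: each output pixel collects the counts of a
        factor x factor block, so 2x2 binning gathers ~4x the signal per
        output pixel while halving the resolution in each dimension.
        """
        h, w = img.shape
        h -= h % factor  # trim rows/columns that don't divide evenly
        w -= w % factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.sum(axis=(1, 3))  # use .mean() here to average instead

    # Made-up example: a 3008x3008 frame binned 2x2 becomes 1504x1504
    frame = np.random.poisson(100.0, size=(3008, 3008)).astype(np.float64)
    print(frame.shape, "->", bin_image(frame, factor=2).shape)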

So what I've learned about image capture is that Binning the color exposures 2x2 is helpful because the increased light sensitivity compensates for the fact that the color filter cuts out some of the light reaching the sensor.  Makes sense to me.

So assuming I have all of the above correct: when I stack data captured with two different Binning settings, PixInsight stacks the Binned and Unbinned data separately (I learned this because I goofed and accidentally captured one run in Luminance only, so I had both Binned and Unbinned Luminance).  When you eventually go to combine the images, you will have data at two different resolutions.

So how does PixInsight deal with the two different resolutions?  I'm guessing that it overlays the lower-resolution color data on the higher-resolution luminance data with some kind of smoothing algorithm thrown in, essentially spreading each binned pixel over several unbinned pixels and then applying a little sharpening.  I'm sure Adam is going to cover this in another 40 videos or so.
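If it really does work by rescaling, a toy version of that resampling step might look like the snippet below.  This is just my guess, not PixInsight's actual algorithm, and SciPy's zoom stands in for whatever interpolation is really used:

    import numpy as np
    from scipy.ndimage import zoom

    # Guesswork illustration only: real registration also solves for shift,
    # rotation and distortion, not just the 2x scale difference.
    binned_color = np.random.poisson(400.0, size=(1504, 1504)).astype(np.float64)

    # Upsample the 2x2-binned frame by 2x with spline interpolation so it
    # lands on the same pixel grid as an unbinned 3008x3008 luminance frame.
    upsampled = zoom(binned_color, 2.0, order=3)
    print(upsampled.shape)  # (3008, 3008)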

Thanks.

Comments

  • Yes, your initial description of binning is correct. However, it isn't quite "quadruple the light sensitivity," because you still have read noise. In addition, CMOS detectors do not get much sensitivity benefit from binning. It again comes down to the noise: unlike with CCD sensors, you are really just affecting the sampling (which might be a benefit if you bin 2x2 because you want smaller images, since the seeing doesn't allow for greater resolution and oversampled images do not help). See the read-noise arithmetic sketched at the end of this thread.

    In addition, the idea of Luminance imagery has also evolved a bit because of the above. Its necessity or helpfulness is no longer as strong as it was for CCD sensors.

    When images are registered (StarAlignment), PixInsight will match the target frames to the reference image no matter what their initial size. So if you are using WBPP, this is all automatic.

    -the Blockhead
  • Thank you.  This was very helpful.  So what I'm hearing is don't Bin at all (i.e., go for maximum resolution).  The non-Block videos that I watched did use the Luminance data, and it doesn't sound like you are saying not to capture it, just that you don't use it the way it used to be used, so the color data is more important.  Roughly correct?

    Thank you for the help.  Your presentation Saturday was fantastic.
  • Yes, roughly correct. 
    You can take Luminance if you have enough color data...and just pound away - but it isn't the same time savings that binning used to provide with CCDs. Today a compromise is to combine the RGB masters to create a synthetic Luminance (or to combine the RGB masters with a Luminance master as well).
    -the Blockhead
  • I can see that I need to do more reading.  Thanks.
  • I apologize for digressing a bit (?) on this thread. I'm new to this hobby. I have 30 years in IT with many efforts related to computer graphics, but I've been retired for 20 years, so I'm way behind the tech curve. I'm now using an ASI533MC Pro (OSC), 3kx3k pixels. This and most other specs for the ASI533MM Pro (mono) are identical, so I'm a bit baffled. So - what exactly is a pixel in this technology? As I understand it, with no binning the Bayer filter (e.g., BGGR) is, in my antiquated thinking, actually 4 pixels? Bin 2 is a 2x2 of a "Bayer" pixel (if you will) - so 16 of "my" pixels?

    Regarding OSC vs. mono: the specs for both list the same pixel size and the same 3kx3k array of (Bayered) pixels? Why doesn't the mono camera have 4x the number of pixels, since its pixels aren't Bayered, I presume?

    And both cameras' specs list a pixel depth (oops, sorry, "bit depth") of 14. I'm curious what is going on with the pixels (in any sense of the word) during the stretching. Since the image is eventually stretched permanently, the pixel values must also change?

    I was surprised to learn early last year about the Bayer filters. Possibly driven by cameras in smartphones?

    I'm literally in the weeds here, but any information (or links to info) is appreciated. OK, back to the very helpful FastTrack Training videos. Thanks!
  • First for OSC... the sensor has a filter in front of each pixel. (A pixel is the smallest sensing area/region on the surface). So your 3kx3k sensor has 9 million pixels total.

    But you only have 2.25 million pixels with Red filters, 2.25 million with Blue filters, and 4.5 million with Green filters.

    So if you made an image out of just the Red pixels, you would have a SMALLER image (1/4 the size). Agreed? Same for Blue. For Green, if you extracted, you would get a half-size image...but one scheme is to average the two Green pixels in each 2x2 block. In that way you have Red, Green, and Blue images of the same size. This is the "superpixel" debayer method. It extracts the pixels rather than interpolating (see the superpixel sketch at the end of this thread).

    So you might wonder why your color image is 3kx3k when you debayer it in PixInsight... it is because there is extra math that INTERPOLATES between your pixels. You know what... OSC sucks. Sorry. It is a mess and it causes all kinds of problems and misunderstandings. Anyway, the default interpolation method in PI is VNG. BUT...get this...this interpolation method causes non-optimal color balance when you stack OSC data. Yeah...you heard that right... OSC sucks. ;) So...in PixInsight you use CFA Drizzle, which does an end-run around the interpolation issue.

    Your understanding of binning is correct. However, with today's CMOS sensors each pixel has its own little readout electronics, and the values for on-chip binning are not calculated the same way they are with a true CCD sensor. On-chip binning with CMOS really doesn't work because of pixel-to-pixel variations. You can post-bin...but there isn't much benefit there. On-chip binning of CCD sensors could provide an S/N benefit...not so for CMOS.

    Yes, when you stretch, the pixel values change. You will be assigning values that fit into a small bit space...that is the 8-bit display you are looking at on the screen. So you are assigning the black point, the white point...and the gray values in between (see the stretch sketch at the end of this thread).

    -the Blockhead
  • Thank you! That nicely clarifies things for me.

    Since childhood I've been doing visual astronomy with a modest reflector, going from a tripod to an alt-az mount, and a nice collection of eyepieces, then with a small refractor during the pandemic. My entry into astrophotography started last spring with a Canon T3i, a tripod, and a Rokinon 135mm lens, with (for me) surprising results (sub-second exposures!). I upgraded to an equatorial mount, astro-modified the Canon, and got a narrowband filter. Finally, late last year, I took the plunge with a lot of new gear and, shortly after, PI. I'm still pleased with my improving results.

    And yet I realize I'm still low on the learning curve! But it is very rewarding. CN and other blogs have been extremely helpful and supportive. Looking forward to more of your instructional videos, Adam. Thanks again.
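To put rough numbers on the read-noise point made above, here is a small illustrative calculation. The electron counts below are assumed purely for the example, not taken from any particular camera:

    import math

    def snr(signal_e, read_noise_e, n_pixels=1):
        """Shot-noise + read-noise SNR for a sum over n_pixels pixels.

        Toy model only: sky background and dark current are ignored.
        """
        total_signal = signal_e * n_pixels
        noise = math.sqrt(total_signal + n_pixels * read_noise_e ** 2)
        return total_signal / noise

    sig, rn = 25.0, 5.0  # assumed electrons of signal per pixel and read noise

    # CCD-style on-chip 2x2 bin: charge is summed first, read noise applies once.
    ccd_binned = (4 * sig) / math.sqrt(4 * sig + rn ** 2)

    # CMOS / software 2x2 bin: each of the 4 pixels is read out separately,
    # so read noise is added four times before the sum.
    print(f"unbinned pixel SNR       : {snr(sig, rn):.2f}")
    print(f"CCD on-chip 2x2 bin SNR  : {ccd_binned:.2f}")
    print(f"CMOS/software 2x2 bin SNR: {snr(sig, rn, n_pixels=4):.2f}")

With these assumed numbers the on-chip CCD bin gains noticeably more signal-to-noise than summing four separately read CMOS pixels, which is the gist of the comments above.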
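Likewise, a minimal sketch of the "superpixel" debayer idea mentioned above, assuming an RGGB layout (swap the row/column offsets for BGGR or another pattern):

    import numpy as np

    def superpixel_debayer(cfa):
        """Superpixel debayer of an RGGB mosaic (layout assumed for illustration).

        Each 2x2 CFA block becomes one output pixel: R and B are taken
        directly and the two G samples are averaged, so the color image is
        half the raw frame's width and height -- no interpolation at all.
        """
        r = cfa[0::2, 0::2]
        g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0
        b = cfa[1::2, 1::2]
        return np.dstack([r, g, b])  # shape: (H/2, W/2, 3)

    raw = np.random.poisson(200.0, size=(3008, 3008)).astype(np.float64)
    print(raw.shape, "->", superpixel_debayer(raw).shape)  # -> (1504, 1504, 3)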
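Finally, a generic sketch of the black point / white point / gray-value remapping that happens during a stretch. This is not PixInsight's implementation; the thresholds and midtone value are made up, and the curve is just one common midtones-style function:

    import numpy as np

    def simple_stretch(img, black, white, midtone=0.5):
        """Clip to [black, white], rescale to [0, 1], apply a midtones-style
        curve, then quantize to 8 bits for display. Generic illustration only."""
        x = np.clip((img - black) / (white - black), 0.0, 1.0)
        m = midtone
        y = ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)  # maps x = m to 0.5
        return (y * 255.0).astype(np.uint8)

    linear = np.random.poisson(30.0, size=(512, 512)) / 16383.0  # 14-bit-ish data
    display = simple_stretch(linear, black=0.0005, white=0.02, midtone=0.15)
    print(display.dtype, display.min(), display.max())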