What happens to data in areas that aren't covered by all of the frames?

You probably answer this somewhere, and I think I discovered the answer on my own, but I thought I'd ask to see whether what I think I'm seeing is what is really happening.

I have 3 data sets taken with a Seestar S50 on different nights, where the framing of the object being imaged is shifted quite a bit between all 3 sets.  Just for giggles and grins, I went ahead and ran WBPP on all 3 data sets just to see what I'd get.  Among the output files there is an autocropped one but also a master file.  Here's what I think I'm seeing.  Please tell me where I'm wrong.

The autocropped image is pretty self-explanatory and appears to be just the area where the 3 data sets overlap (i.e., where I have maximum data).  There is also a master file that, for lack of a better term, is a composite showing the combined field of view of all 3 data sets.  What I think happened is that PixInsight aligned the images and then integrated each pixel with however much data there was for that pixel.  Some pixels may use all of the subs from all of the data sets, but some pixels may only use a couple of subs (maybe even 1?).  The result appears to be pretty good: the center region covered by all 3 data sets has less noise and more data than the areas covered by only a single data set, but all of the data from all 3 sets appears to be included.

So did I get it basically right?  That the master uncropped file includes all of the data but some pixels have more data integrated than others?
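
To make my mental model concrete, here's a toy NumPy sketch of what I imagine is happening (the frame placement and numbers are made up, and this obviously isn't PixInsight's actual code): each output pixel is averaged over only the frames that actually cover it, and a coverage map tells you how many frames contributed to each pixel.

```python
import numpy as np

# Toy sketch only -- hypothetical frame placement, not PixInsight's implementation.
H, W = 100, 100
rng = np.random.default_rng(0)

# Three "registered" frames on a common canvas; NaN marks pixels a frame never covered.
frames = np.full((3, H, W), np.nan)
frames[0, 0:70, 0:70]    = 1.0 + rng.normal(0, 0.1, (70, 70))   # night 1
frames[1, 20:90, 20:90]  = 1.0 + rng.normal(0, 0.1, (70, 70))   # night 2
frames[2, 30:100, 10:80] = 1.0 + rng.normal(0, 0.1, (70, 70))   # night 3

# Per-pixel integration: sum only the frames that actually cover a pixel,
# then divide by how many frames contributed there (0, 1, 2, or 3).
coverage = (~np.isnan(frames)).sum(axis=0)
sums = np.nan_to_num(frames, nan=0.0).sum(axis=0)
master = np.divide(sums, coverage, out=np.zeros((H, W)), where=coverage > 0)

print("pixels covered by all 3 data sets:", int((coverage == 3).sum()))
print("pixels covered by only 1 data set:", int((coverage == 1).sum()))
```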

I'm sure you explain this in detail later (I've gone through Fasttrack and the first 3 Fundamentals videos, so I'm still very new to this), but I learn by doing, so I'm already doing things that you haven't covered yet, which obviously generates questions that are probably answered another 50 videos or so from now.

Thanks.

Comments

  • Yes, you have it correct (and yes, I do explain it later in the sections about image rejection and normalization).

    When you stack images you will be rejecting pixels, so a stack of pixel values that gets averaged will use fewer values than the total number of frames you acquired. In the areas of non-overlap, PixInsight puts zero values... which are rejected. If the overlap boundaries still show and you can easily tell where they are, normalization can make this much better. However, there will always be a difference in signal-to-noise that can be seen by eye, unless there are many frames covering all of the overlaps (there's a small numeric sketch of this after the comments).

    -the Blockhead
  • Thank you.  Knowing how the program deals with the entire data space helps.  Now I have questions about the normalization methods, but those can wait.  I'm an engineer by trade, and my father-in-law developed and owned Research Systems (which created IDL), so I understand a lot about image processing, but figuring out the implementation in PixInsight is a bit of a different animal for me.  I'm enjoying exploring the software and "seeing what happens."  Your videos definitely help provide a "guided exploration" experience, and I'm already able to produce some halfway decent images.  I can't wait to see where I'll get with some more experience (and better data).

    Again, thank you for your time.
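
A minimal numeric sketch of the signal-to-noise point above (Python/NumPy, purely illustrative and not tied to PixInsight): averaging N subs reduces the random noise by roughly the square root of N, so areas covered by only one data set stay visibly noisier than the triple-overlap region unless every region has many frames.

```python
import numpy as np

# Purely illustrative: the noise of an average of N subs falls off as roughly 1/sqrt(N).
rng = np.random.default_rng(1)
sigma_single = 0.10   # assumed noise level of a single sub, arbitrary units

for n_frames in (1, 3, 10, 30):
    stack = rng.normal(0.0, sigma_single, size=(n_frames, 100_000))
    noise_of_mean = stack.mean(axis=0).std()
    expected = sigma_single / np.sqrt(n_frames)
    print(f"{n_frames:3d} frames -> measured noise {noise_of_mean:.4f}, expected ~{expected:.4f}")
```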