You've probably answered this somewhere, and I think I figured out the answer on my own, but I thought I'd ask to check whether what I think I'm seeing is what's really happening.
I have 3 data sets taken with a Seestar S50 on different nights, and the framing of the object is noticeably shifted between all 3 sets. Just for giggles and grins, I ran WBPP on all 3 data sets together just to see what I'd get. Among the output files there is an autocropped image but also a master file. Here's what I think I'm seeing. Please tell me where I'm wrong.
The autocropped image is pretty self-explanatory and appears to be just the area where the 3 data sets overlap (i.e., where I have maximum data). The master file, for lack of a better term, is a composite showing the combined field of view of all 3 data sets. What I think happened is that PixInsight aligned the images and then integrated each pixel using however many subs actually covered that pixel. Some pixels may use all of the subs from all of the data sets, while others may use only a couple of subs (maybe even 1?). The result looks pretty good: the center region covered by all 3 data sets has less noise than the areas covered by only a single data set, but all of the data from all 3 sets appears to be included.
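If it helps to make that concrete, here's a toy NumPy sketch of what I *think* the integration is doing — a per-pixel average over whatever subs actually cover each pixel. The canvas size, offsets, and noise level are all made up for illustration; this isn't WBPP's actual algorithm (which also weights and rejects pixels), just the coverage idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "sub-stacks" registered onto a common 100x100 canvas, each covering
# a shifted 60x60 footprint. Pixels outside a stack's footprint are NaN
# (no data there). All numbers are invented for illustration.
canvas = (100, 100)
offsets = [(0, 0), (20, 15), (35, 30)]
signal = 0.5   # pretend "true" sky value
stacks = []
for dy, dx in offsets:
    img = np.full(canvas, np.nan)
    img[dy:dy + 60, dx:dx + 60] = signal + rng.normal(0, 0.1, (60, 60))
    stacks.append(img)

cube = np.stack(stacks)
coverage = np.sum(~np.isnan(cube), axis=0)   # how many stacks cover each pixel
sums = np.nansum(cube, axis=0)

# Integrate: average whatever data exists at each pixel (0 where nothing does)
master = np.divide(sums, coverage, out=np.zeros(canvas), where=coverage > 0)

# Noise should drop roughly as 1/sqrt(N) in the region all three overlap
for n in (1, 3):
    print(f"{n}-stack region noise: {master[coverage == n].std():.3f}")
```

Running this, the 3-stack overlap region comes out noticeably smoother than the 1-stack fringes, which matches what I'm seeing in the uncropped master: everything is there, but the edges are noisier.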
So did I get it basically right? That the uncropped master file includes all of the data, but some pixels have more subs integrated into them than others?
I'm sure you explain this in detail later (I've gone through FastTrack and the first 3 Fundamentals videos, so I'm still very new to this), but I learn by doing, so I'm already trying things you haven't covered yet, which obviously generates questions that are probably answered another 50 videos from now.
Thanks.