Processing Strategy Question

First, I'm not sure I put this thread in the right place. If not, apologies.

I've been through just enough of the Fundamentals Path to be truly dangerous without really having accomplished much. I'm on step 21 (the start of the WBPP section), and by the time I get to step 34, I think I'll be ready to really start processing some of my datasets. I have several from different sources: my Astronomy Club's remote imaging setup, iTelescope, and some Seestar images I've collected while out doing visual observing (I just pick a likely target, set it to collect images, save all the subs, and go about my business). This brings up a couple of related questions.

I've already collected some data, but I'd really like to collect more. Some of the objects are either gone for the year or getting close to it. I'll probably process what I have already, but when I pick up imaging again in several months and collect a bunch more data, is it better to go back, start from scratch, and process both sets of light frames together, or is it better to create masters from the first set of data and masters from the second set and then combine them?

The Seestar is fun to play with, but it creates a ton of short subs. For one target, I have 950+ images covering a bit over 2.5 hours of capture. PixInsight isn't going to care whether the subs are 10-second or 10-minute exposures; I'm pretty sure it will see 950 data points and happily crank away at them, oblivious to exposure length. So is it better to process all 950+ images in a single run, or to break them into, say, 300-ish image groups and then combine those?
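A toy numerical sketch of the trade-off (my own illustration, not PixInsight's actual integration algorithm): for a plain average, stacking in batches and then combining the batch means with weights proportional to batch size is mathematically identical to one big stack, but outlier rejection estimates its statistics from fewer samples per batch, which is why a larger sample helps with things like satellite trails.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 950 ten-second subs of one pixel: true signal 100,
# Gaussian noise, plus a few bright satellite-trail outliers.
n_subs = 950
subs = rng.normal(100.0, 10.0, size=n_subs)
subs[rng.choice(n_subs, size=5, replace=False)] += 500.0  # trail hits

# One big stack: plain average of all subs (trails pull it upward).
full_stack = subs.mean()

# Batch stacking: average ~300-sub groups, then combine the group
# means weighted by group size. For a plain mean this is exactly
# equivalent to the single big stack.
groups = np.array_split(subs, 3)
group_means = np.array([g.mean() for g in groups])
group_sizes = np.array([len(g) for g in groups])
batched_stack = np.average(group_means, weights=group_sizes)
assert np.isclose(full_stack, batched_stack)

def sigma_clip_mean(x, k=3.0):
    """Mean after one pass of k-sigma rejection (a toy stand-in for
    integration rejection)."""
    m, s = np.median(x), x.std()
    return x[np.abs(x - m) < k * s].mean()

print(sigma_clip_mean(subs))  # close to 100: trail hits rejected
```

The equivalence only holds for the plain mean; once rejection enters the picture, each small group has to estimate its noise statistics from fewer frames, so a single large run gives rejection more to work with.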

Now I think I know the answer here, and you can tell me if I'm right. Assuming you're using the same camera and the darks/flats/bias frames haven't changed significantly, I think you're better off processing everything together rather than breaking it into groups and then combining the groups. Because of the statistical work the program does, I think PixInsight will be more successful at eliminating things like Starlink trails, meteors, and cosmic rays if you process everything as one huge dataset. I also think there could be an issue combining masters if the data that went into each master isn't approximately equal (say the first run was 2.5 hours, the second 90 minutes, and the third 5 hours), because some of the data will end up being more heavily weighted than the rest: if you just combine the masters with equal weight, every minute of the 90-minute run counts a little more than three times as much as a minute of the 5-hour run.
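The weighting arithmetic above can be checked directly. This is just the arithmetic of a naive equal-weight combine, not what WBPP actually does internally; the session names and lengths are the hypothetical ones from the paragraph.

```python
# Three hypothetical sessions, in minutes: 2.5 h, 1.5 h, 5 h.
sessions_min = {"run1": 150, "run2": 90, "run3": 300}

# Naive combine: each session master gets weight 1/3 regardless of
# how much exposure went into it, so a minute from a short session
# carries more weight than a minute from a long one.
per_minute_naive = {k: (1 / 3) / v for k, v in sessions_min.items()}
ratio = per_minute_naive["run2"] / per_minute_naive["run3"]
print(f"a run2 minute counts {ratio:.2f}x a run3 minute")  # 3.33x

# Exposure-weighted combine restores equal per-minute weight: weight
# each master by its share of the total integration time.
total = sum(sessions_min.values())
weights = {k: v / total for k, v in sessions_min.items()}
per_minute_weighted = {k: weights[k] / v for k, v in sessions_min.items()}
# Now every minute carries weight 1/total, whichever run it came from.
```

So the ~3x figure falls out of 300 / 90 ≈ 3.33, and weighting the masters by integration time (rather than equally) would remove the imbalance.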

Is my thinking more or less correct here?

I'm learning a ton and am excited to be getting to the point where I can start really processing some of what I've collected.

Comments

  • Your thinking is correct, particularly concerning stacking and the statistical weighting and rejection benefits you get from applying them to the entire set of images.

    Concerning the stacking itself... you will want to take advantage of FastIntegration, which is now part of WBPP when you meet the threshold number of frames.

    It effectively IS doing batch stacking, so you will not run into computer meltdown.

    -the Blockhead
  • Thank you. I saw one video on FastIntegration, but I assume there's a more in-depth one I haven't run across yet that really goes into the differences between the two methods.