Which Use of Local Normalization is Faster?

Hi Adam,

Would using the LocalNormalization process on its own be faster, in terms of file-generation speed, than using LocalNormalization within WBPP?

I've been using LocalNormalization through WBPP for a while now, but I have found that it can sometimes be slow to complete: my latest attempt took almost an hour, and one run took almost 3 hours!

Zak :)

Comments

  • I don't think so... but WBPP might be running an additional piece of logic. 
    One thing that will make it take a long time is the generation of reference frames. If you let WBPP do this automatically and you do not adjust the settings... this can definitely eat up time.
    You should adjust the number of frames that LN will use to create a reference to the minimum necessary. If you input 100 frames, using 5-7 frames is still enough, I think. I think the default is up to 20!
    (And this is done per filter if you are shooting mono...)

    -the Blockhead
  • Awesome, thanks for clarifying!

    I'll use fewer reference frames for LN when I have lots of images.

    Hopefully that will decrease the processing times. I used to let WBPP do LN automatically, but now I'm doing it manually after seeing your demonstrations in Fundamentals.

    :)
  • Hi Adam,

    Thought I'd give you an update.

    After thinking about it, I wondered whether it could be my hardware, so I looked at my CPU usage while LN was running in WBPP: it was consistently near 100%, and sometimes pegged at 100%. That might be why I am seeing slow performance with WBPP and LN.

    It might be worth looking at a faster processor at some stage, as I know PixInsight isn't using the GPU for its tasks at the moment, except for the modifications we can make for things like StarXTerminator, etc.

    I do hope that the PixInsight team eventually make the move over to the GPU so it can slash waiting times significantly.

    :)
  • Regarding this quote: "If you input 100 frames, using 5-7 frames is still enough, I think."
    Would it be reasonable to extrapolate that ratio so that, whether we have 50 or 500 subs, we go with 5-7% as a general rule of thumb for the Maximum Integrated Frames?

    As my set has 300 subs, I would go with 7% in WBPP as per the attached screenshot.

    Thanks in advance

    [Attachment: WBPPLocalNormalization.PNG]
  • I don't think that is a general approach. Basically, you want enough good frames (with a simple, similar gradient) to characterize the background without too much noise. There isn't a general algorithm for this, but you might think of it this way: if your exposures are longer than 2 minutes or so, it is probably good enough to combine 7-11 frames no matter how many additional frames you have. You are just trying to create a reference that all other frames will be matched to, and you don't need to integrate that many frames to capture the bulk gradient. If you have super short exposures, then it might make sense to raise the number of frames to be integrated a bit so that something of the sky is actually characterized. (A rough numeric sketch of this rule of thumb follows below.)

    This is my thinking at the moment...

    -the Blockhead
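
To make the rule of thumb above concrete, here is a minimal, hypothetical sketch in plain Python. It is not part of PixInsight or WBPP, and the function name and thresholds are illustrative assumptions only: the point is to pick a small, fixed reference-frame count for long exposures (and slightly more for very short ones) rather than scaling with the total number of subs.

```python
# Illustrative sketch of the rule of thumb discussed above.
# NOT a PixInsight/WBPP API; names and thresholds are assumptions.

def ln_reference_frame_count(total_frames: int, exposure_seconds: float) -> int:
    """Suggest how many frames to integrate for a LocalNormalization reference."""
    if exposure_seconds >= 120:
        # Long exposures: roughly 7-11 frames already capture the bulk gradient.
        suggested = 9
    else:
        # Very short exposures: integrate a few more so the sky background
        # is characterized above the noise.
        suggested = 15
    # Never ask for more frames than exist in the set.
    return min(suggested, total_frames)


if __name__ == "__main__":
    # 300 subs at 3-minute exposures: a fixed ~9 frames is plenty,
    # rather than 7% of the set (21 frames).
    print(ln_reference_frame_count(total_frames=300, exposure_seconds=180))  # -> 9
```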