General WBPP questions

Hi all,

Can we mix new raw frames with previously processed frames in WBPP?

Like many people, I often shoot for multiple days on a given target, and about halfway in I like to process the first few days of frames and see how it looks.  After a few more days, I wonder if I can mix the previously registered frames (or frames from an earlier step) with new FITS images.  Will it save time in the processing or cause other problems?  How should I deal with the reference frames?

Thanks

Les

Comments

  • You can "mix" in... but my suggestion is based on a general principle.
    Integrating everything AGAIN after adding more frames is best. You are changing the statistics of the set and the resulting weighting and rejection. Integrating across all data is the way to go in my opinion.

    If you keep your cached WBPP session, then when you add new data (yes, you need to find and use the same reference frame) it will only process the newly added files for calibration and registration. Normalization could be different unless you fix the LN reference as well.

    Regardless, you asked my opinion... and there you go. 

    -the Blockhead
  • Thanks.  Your opinion is greatly respected and appreciated.  L
  • I am having an odd problem with WBPP.  For reference, I am using PIS build 1585.  I am currently shooting with a C-11 at reduced FL = 2312mm, which makes pretty large stars and may be part of the problem.

    Last week I started shooting the Caldwell 23 galaxy.  On the first night I only captured 36 frames, but I was curious how it would look, so I processed them in PIS and had no problems generating the master and stretching to a simple final nonlinear image (attached JPEG).

    Over the next few days I added more frames, getting to 191 and finally about 400 frames.  But each time WBPP failed at the measurements step, measuring few if any frames (pipeline screenshot attached).  I moved the raw FITS files to a lower location on my C: drive (to shorten the path length), but it failed again.  Finally I went back and reran the day 1, 36 frames, and WBPP failed again at the measurements step (screenshot attached).  Also attached is a log file of the 36-frame failure.  I tried to attach a day-1 FITS file but your site blocked it.

    I have checked for PIS updates but find none available.  

    Appreciate any suggestions.

    Les
    Attachments: C23 FIRST NIGHT EDGE ON GALAXY JPEG.jpg, wbpp failed Screenshot 2023-10-24 152647.jpg, 20231024192438.log
  • The problem is the caching. I would not use this as a method of WBPP pipeline execution in general. It is really there to make a single adjustment somewhere along the way... not as a method of rerunning the script incrementally. The errors you are having are all about files that cannot be found.

    You need to delete all output files... clear the cache and run everything from a clean start.
    There are ways of doing incremental processing, but it requires additional attention to issues... so it is a more advanced thing.

    -the Blockhead
  • Ok, thanks.  Will try that.  I did not intentionally cache, but perhaps failed to delete it.  Do you find that in the log file? Or how?  Les
  • So I think the problem is solved, but the answer was not about the cache.  I almost always erase the cache, and I double-checked that with a couple of test runs.  I was getting a couple of messages in the processing window:

    Warning: Failed to measure frame - image will be ignored.

    Error: Frame measurement failed for all light frames.

    So I googled them and landed in a conversation on Cloudy Nights.  The suggestion was to check the calibration frames, about the possibility of wrong offset settings and gain differences.  I think it really applied to darks, not flats.  So I checked and found that my 60 sec dark frame was white, not dark.  Don't know how that happened.

    The idea was that by subtracting a dark with too-high values, there are practically no stars left to measure.  I did a PixelMath subtraction and sure enough, it produced a black result.
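
    For anyone who wants to reproduce that check outside of PixelMath, a minimal Python/numpy sketch might look like the following (the file names are placeholders, not my actual files):

        import numpy as np
        from astropy.io import fits

        # Load a raw light and the suspect master dark (placeholder names).
        light = fits.getdata("Light_60s_0001.fits").astype(np.float64)
        dark = fits.getdata("MasterDark_60s.fits").astype(np.float64)

        print(f"light median: {np.median(light):.1f} ADU")
        print(f"dark median:  {np.median(dark):.1f} ADU")

        # If the dark is brighter than the light, light - dark clips to
        # (near) zero almost everywhere, and WBPP has nothing left to measure.
        residual = light - dark
        print(f"pixels driven to <= 0: {np.mean(residual <= 0):.1%}")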

    So I looked for another 60 sec dark but couldn't find one, so I just deleted the 60 sec dark and asked WBPP to optimize the 30 sec dark for use with the 60 sec lights.

    This change has allowed 2 successful WBPP sessions.  This was my first successful WBPP stacking run in 11 tries!  Screenshot attached.

    Hope this continues to work.  Running a larger set of files now.

    Attachment: wbpp success Screenshot 2023-10-25 163007.jpg
  • Yes, of course that would do it.
    In FastTrack Training, the first thing I tell people to do when they have an error is look at all of the initial data and output... BLINK. Look at the values.
    It seems this advice would have solved the issue very quickly.
    -the Blockhead
  • Hi Adam,

    Appreciate your diagnostic help if possible.

    I have been plagued in recent months with failed WBPP runs, where stacking completely fails or only a few frames are stacked.  Usually the process fails in the measurements step where, in this example, 36 of 39 frames failed.  Later, WBPP is unable to find a reference frame.

    I thought I had resolved it using your earlier suggestion about resetting WBPP every time, but I have done that carefully, and it failed again.

    Here is a summary of the situation:

    I have made 3 WBPP runs with slight variations using 39 lights.  All runs failed 36 of 39 frames in the measurements step, then failed to generate a reference frame (see screenshot of the execution monitor).

    My rig is an Orion EON 90x540 with a 0.8 reducer.  Camera: ASI294MC Pro, -10C, 180 sec exposures.  Guiding error mostly <1 arcsec.  My average HFR was <2.0 on most frames.  Checked all frames by eye, eliminated none.

    I captured 49 frames and separated out 39 for stacking that used the same filter, an Optolong L-eXtreme (LEX).

    I visually checked my LEX master flat file and 180 sec master dark file.  No obvious defects.  Also blinked the lights and original flat frames; no defects observed.

    Set WBPP to all defaults, reset and cleared the cache.

    No bias files.  Used flat darks in the first run to make the flat masters, but in the 2nd and 3rd runs only used the LEX master flat file.

    On the Lights tab: set exposure tolerance: 2, rejection limit: 3, checked all items in the left box, unchecked all interactive options.  Unchecked autocrop.  On my 3rd run, I moved the folder of lights to a lower level on my disk to ensure there was no problem with long file names (same result).  Checked some file names in the log file; the string was 214 characters (limit is 256?).
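
    (For what it's worth, a quick way to scan for over-long paths is a small Python loop like the one below; the folder name is a placeholder, and the classic Windows limit is 260 characters unless long paths are enabled.)

        from pathlib import Path

        # Flag any file whose full path approaches the classic Windows
        # MAX_PATH limit of 260 characters.
        for f in Path(r"C:\astro\lights").rglob("*.fits"):
            if len(str(f)) > 240:
                print(len(str(f)), f)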

    On the Calibration tab: checked that the light files find the correct dark and flat files.

    On the Post-calibration tab: exposure tolerance: 2, checked all items under Global Options except Compact GUI.  Reg. ref: auto, output directory set (different for each run).

    Attaching a log file, screenshots, and an example light frame as a JPEG (your system does not allow FITS files?).

    Appreciate your suggestions.

    Les
    Attachments: FAILED Screenshot 2023-11-30 175329.jpg, 20231130224547.log, fits Screenshot 2023-12-01 093331.jpg
  • You didn't mention zeros in the data... you didn't indicate a pedestal.
    Can you please confirm this as well?
    Your JPEG image looks like there could be zeros.

    From your log it really thinks there is nothing to measure in your pictures. This can be because of oversubtraction:
    [2023-11-30 22:47:20] C:/_PixInSight_on_SSD/171 SH2-171 NEB PROC/B_PIS PROCESS V1 FIRST NIGHT AND NEW EON FLATS/debayered/Light_BIN-1_4144x2822_EXPOSURE-180.00s_FILTER-LEX_CFA/2023-11-29_20-46-57_Sh2 171_LEX_-10.00_180.00s_0120_c_d.xisf
    [2023-11-30 22:47:20] Descriptor failed: 
    [2023-11-30 22:47:20] FWHM            : 2.1385720525001393
    [2023-11-30 22:47:20] eccentricity    : 0.679944594288702
    [2023-11-30 22:47:20] numberOfStars   : 108
    [2023-11-30 22:47:20] PSFSignalWeight : Infinity
    [2023-11-30 22:47:20] PSFSNR          : Infinity
    [2023-11-30 22:47:20] SNR             : 0
    [2023-11-30 22:47:20] median          : 0.003714619140872075
    [2023-11-30 22:47:20] mad             : 0.055904433256300005
    [2023-11-30 22:47:20] Mstar           : 0
    [2023-11-30 22:47:20] ** Warning: Failed to measure frame - image will be ignored.

    If you want to proceed... please indicate the zeros status. You can make the data available for download if you can't see the problem. 

    -the Blockhead
  • Also uncheck "Linear Defect Correction."
    Do not use this... period.  You can consider using it after everything is working, and only on monochrome images.  Even then, this is a special-purpose option that no one should use unless they know exactly how it works.
    -the Blockhead
  • Adam,

    Thanks for quick reply.

    Sorry, I do not understand your zeros question:

    AB: You didn't mention zeros in the data... you didn't indicate a pedestal.  

    If you mean the pedestal setting in WBPP, I have left it at the default of zero.

    AB: Can you please indicate this confirmation as well?

    Does the above comment answer you?

    AB:  Your Jpeg image looks like there could be zeros. 

    By zeros I think you mean dark areas in a FITS image, where the lightness (K) value is zero?

    AB: If you want to proceed... please indicate the zeros status. You can make the data available for download if you can't see the problem. 
    I do not understand how I should determine the zeros status.
    If I open the FITS file in PIS and run the cursor back and forth across the image, as you did in one of the training videos, the K values range from 0.03 to 0.04, BUT that is a very random test.

    I remember in one video you used a PixelMath expression to identify zero or negative values and replace the pixel with a yellow point, but I would need a link to find that again.  When I do the imaging I usually set the offset value to 30 or 100, which I think helps to prevent zeros in the image.
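
    Something equivalent outside of PixInsight, as a rough Python sketch (it assumes the calibrated frame was re-saved as FITS, since astropy does not read XISF, that the frame is monochrome/CFA, and that the file name is a placeholder):

        import numpy as np
        import matplotlib.pyplot as plt
        from astropy.io import fits

        img = fits.getdata("calibrated_frame.fits").astype(np.float64)
        zeros = img == 0
        print(f"{zeros.sum()} zero pixels ({zeros.mean():.2%} of the frame)")

        # Grayscale preview with the zero pixels painted yellow, roughly
        # what the PixelMath trick in the video does.
        norm = np.clip(img / np.percentile(img, 99.5), 0, 1)
        rgb = np.dstack([norm, norm, norm])
        rgb[zeros] = [1.0, 1.0, 0.0]  # yellow markers
        plt.imshow(rgb, origin="lower", interpolation="nearest")
        plt.title("Zero pixels flagged in yellow")
        plt.show()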

    Curious that in the log sample above, the SNR is zero, but it identifies 108 stars.

    Les
  • Hi Les,

    Setting the offset on your hardware (literally, your camera) does not solve this particular problem.  The pedestal is the inclusion of a mathematical offset which eliminates an oversubtraction that can occur when the sky is very dark, e.g. narrowband images.

    Again, without looking at your data I can't tell, but it certainly seems reasonable to set it to automatic; having it set at zero is of course doing nothing.
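
    (Toy numbers, not from Les's data, just to show what the pedestal does during dark subtraction:)

        import numpy as np

        light = np.array([2400, 2410, 2395], dtype=np.float64)  # dim narrowband sky
        dark = np.array([2405, 2420, 2390], dtype=np.float64)   # master dark

        # Without a pedestal, oversubtracted pixels clip to zero.
        print(np.clip(light - dark, 0, None))        # [0. 0. 5.]
        # With a +100 ADU pedestal, the noise survives around the pedestal level.
        print(np.clip(light - dark + 100, 0, None))  # [95. 90. 105.]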

    If you make your #DATA available, I can look at it, but I won’t be able to do so until at least Monday.

    - the blockhead
  • Not clear what you want (#DATA), but...

    Here is a link to one of the FITS files I provided earlier as a JPEG.


    Yes, please take a look.

    Thanks in advance.

    Les
  • Hi

    This looks like one of your raw frames and not a calibrated frame. A calibrated frame would still be in the XISF format. Can you please provide a calibrated frame?


  • Adam, 

    I think I understand the problem, but not the solution.

    For reference, this camera is a 3-year-old ASI294MC.  Full well: 63,700e-.  Dark frames created using the APT app.

    I inspected the master light (180 sec), a single light frame (180 sec), the master dark (180 sec), single 180 sec darks, and a single 30 sec dark.

    Moving the cursor over a single light frame, the average K = 0.03, but in the master dark K = 0.08, so subtraction would yield negative values.  So now searching for something that would make a dark frame too bright.

    I examined the FITS headers on single frames to see why a dark can be brighter than a light.  In particular, I looked for exposure settings that were inconsistent or higher in the darks.  The CCD temp in both frames is -10C.  EGain on the dark was 0.946, same on the light.  Gain setting was 125 on both.  I had notes in my darks folder that I was very careful to double-check all settings.

    The only difference I found: in the single 180 sec dark, offset = 80; single 30 sec dark, offset = 80; single light, offset = 30.

    When I analyze frames with the Statistics process in PIS, the mean and median values are almost always the same.  Median values: dark 180 sec master: 5119; light: 2412; dark single 30 sec: 5116.  So even a short dark is too bright.

    Minimum values: dark 180 sec master: 5067; light: 2008; dark single 30 sec: 4896.  Same conclusion.

    Looking back, I found a 2022 series of darks with the offset set at 30.  These resulted in a median value of 1928 and a minimum of 1732, so I think this offset setting difference is the problem.  I will run WBPP with this old set of darks and report.
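
    (A quick way to catch this kind of mismatch up front is to scan the FITS headers, sketched below in Python; the GAIN/OFFSET keyword names follow common ZWO/capture-software conventions and may differ per setup, and the folder name is a placeholder.)

        from pathlib import Path
        from astropy.io import fits

        # Print the settings that must match between lights and darks.
        for path in sorted(Path("frames").glob("*.fits")):
            hdr = fits.getheader(path)
            print(path.name, hdr.get("IMAGETYP"), "gain:", hdr.get("GAIN"),
                  "offset:", hdr.get("OFFSET"))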

    Les
  • UPDATE: Confirming that using darks with the same offset setting (30) as the lights gave a successful run: 100% of images stacked, with no errors.  The master light is reasonable considering only 40 light frames.  No rejected frames.  Good astrometric solution.

    I think I understand this issue now; I did not appreciate that the camera offset value was so important.

    Thanks for your time and comments.

    Les
  • Yes, gain and offset must match.
    There would be zeros in the calibrated data, as I suggested.
    Thus, there is oversubtraction.
    You found the reason for the oversubtraction.
    It is a good lesson.
    -the Blockhead
  • Hi Adam,

    I may have posted this question somewhere else (sorry) but can't find it.

    I have had very good luck with FastIntegration; almost 99% of images successfully integrated, and remarkably fast.

    What do we give up in quality (or whatever) by using FastIntegration?

    Thanks,

    Les
  • It depends on the data.
    Here is the "Adam Block" answer... do both. See for yourself!
    For one "typical" dataset, run the images through WBPP completely with all of the best settings.
    Then create a stacked image with FastIntegration and compare.

    The differences when using FastIntegration, if they exist:

    1. Registration is shifts only... no distortion correction. For some data this isn't important.
    2. Rejection occurs in batches instead of across all data. The same is true for the weighting. With enough images this approaches the full-data result asymptotically, in terms of quality (and the mathematical result).
    3. No local normalization. Normalization improves rejection and also simplifies gradients. This may not be an issue.

    So I can't give you an answer.  You need to answer each of the points above based on the data in hand.  It depends on the data (its quality, its artifacts, its number of frames, etc.).
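
    If you want to put a crude number on the "do both and compare" step, something like this estimates the background noise of each master (a Python sketch; the file names are placeholders, and it assumes both masters were saved as FITS):

        import numpy as np
        from astropy.io import fits
        from astropy.stats import sigma_clipped_stats

        for name in ("master_wbpp.fits", "master_fastint.fits"):
            data = fits.getdata(name).astype(np.float64)
            mean, median, std = sigma_clipped_stats(data, sigma=3.0)
            print(f"{name}: median={median:.5f}  background sigma={std:.5f}")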

    -the Blockhead