Weighting and Exposure Length with WBPP

Adam,
I saw your video in which this was addressed, but was uncertain of its implications for approaching the issue using WBPP. If, for example, I had a set of data comprising a substantial number of short-exposure subs to address the "core" of M42 and also a large number of significantly longer subs to address the "faint" regions, what would be the best way to configure the settings in WBPP? I only do this on the rarest of targets, of which M42 is a primary example.

I have also heard you suggest, or question the rationale behind, shooting in this manner for M42. But that is an entirely different question. Suggesting it is not necessary puzzled me, since it would seem to me that one could never recover data from saturated areas. Yet I hate shooting more ridiculously short exposures than I have to, both for practical reasons and to maximize the sensor's S/N.

Sorry for asking two entirely separate (yet related) questions.
Thank you in advance.
-H

Comments

  • Hi Howard,

    I am ready to answer your question... but I really do need a little more. I make an awful lot of videos! Can you please indicate which video you are talking about before I answer?

    Concerning M42... did you watch my video on GHS in which I demonstrate this? I show you how. You say data is saturated... what data? It is actually difficult to saturate in reasonable exposure times with RGB filters. It was not saturated in the OSC example I showed. M42 is perhaps the ONE... I mean ONE... exception. But even then, if you have a 16-bit camera, compositing data is not likely to be necessary.

    -the Blockhead
  • Oh... re-reading the first part of your question (based on the title)... yes, the weighting that WBPP does using your choice (PSF Signal Weight, the default) is the correct thing to do. Any implications would come from *not* doing the proper weighting. Just think about that for a moment. If you have 20-second exposures and 120-second exposures... you would not want to add them up and average them without compensation, right? This would give the 20-second exposures 6x the significance (weight) compared to the longer subexposures.
    Remember, the reason you are weighting the exposures is to allow you to do proper rejection. If you do not weight them, you will literally reject entire frames. In addition, within a group of subexposure times there may be varying degrees of data quality, and the weighting will take care of this as well.
    -the Blockhead
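The 6x figure above can be sanity-checked numerically. This is a minimal sketch in plain NumPy with simulated Poisson counts (it is not WBPP's actual PSF Signal Weight algorithm); it shows how a naive average of mixed 20 s and 120 s subs is dragged toward the short-exposure level, while scaling each sub to a common rate and weighting by exposure time recovers the true signal:

```python
import numpy as np

rng = np.random.default_rng(0)

true_flux = 100.0                 # photons per second in one pixel (made up)
t_short, t_long = 20.0, 120.0     # the two exposure lengths discussed above

# Simulate 30 short and 30 long subs with Poisson shot noise only.
short_subs = rng.poisson(true_flux * t_short, size=30).astype(float)
long_subs = rng.poisson(true_flux * t_long, size=30).astype(float)

subs = np.concatenate([short_subs, long_subs])
times = np.concatenate([np.full(30, t_short), np.full(30, t_long)])

# Naive mean of raw counts: the short subs drag the result far below
# the long-exposure level.
naive = subs.mean()

# Scale each sub to a common rate (counts per second), then take a
# weighted mean. Exposure time is used here as a crude S/N proxy.
rates = subs / times
weighted = np.average(rates, weights=times)

print(f"naive mean of counts:   {naive:.0f}")
print(f"weighted rate estimate: {weighted:.1f} counts/s (true: {true_flux:.1f})")
```

The weighted estimate lands near the true 100 counts/s, while the naive mean sits roughly halfway between the two exposure levels and represents neither.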
  • My above answer assumes you are looking to create a single integrated image. But in that case it wouldn't have been necessary to take the shorter exposures.

    So the real question is... were you acquiring the data for a single integrated image or for compositing?
    This is a decision you make at the time you acquire the data, before you process. I do not know which you are trying to do. You only do one of these when you take the picture.

    -the Blockhead
  • First, to answer your question: I am replying before I have had time to read and study your very prompt responses.

    It was not from ABS. It was a general YouTube video, called:
    "Image Weighting in Pixinsight: Part 2" at roughly 3:27


    Incidentally, I am referring to my imaging with my IMX455 monochrome camera on my DR350 located in the Atacama (Obstech).


  • I read your responses, and some of it exposes some ignorance on my part. I have previously watched (haphazardly) the video on M42. However, I think the wise thing for me to do is to find it again and watch it in detail.

    So, when you asked whether I was acquiring data to create a single image or for compositing, the lightbulb went off. I realized that I was naively doing the former, and I think it is compositing that would most likely achieve my goal. To be clear, I was hoping to accomplish this "as if" a single exposure (a master) would achieve my goal.

    One thing is very clear to me: you are correctly tracking my question and my goal. But I think I have to refresh and rethink my understanding.

    Thank you. M42 is indeed unique. I am confident the video you referenced will set me straight.


  • Hi Howard,

    The weighting videos are on the site as well. 


    **************
    In addition to the GHS example of processing M42, please also see this example:
    **************

    It depends on the data... and how you acquire it. But if you do it in a similar way to how I do (or as was done with this member's data), you will find it is not necessary to composite.

    -the Blockhead
  • Thank you very much,
    I will review all of your recommended videos and start from there.

    The timing of my question is ideal. Why? Because I only shoot around the meridian, and at this time of year M42 is very much in the eastern sky, currently transiting in the very late morning. (In fact, I have already accumulated considerable data experimenting with various exposure lengths. I will, of course, retain that data.) But the timing is perfect since there will be months of data accumulation. I won't actually be processing the data for quite some time, until M42 starts transiting in the early evening and finally goes bye-bye for the year.
    But I want to know that I am shooting the data correctly/optimally.

    If my goal is to follow your approach, is it true that my exposure duration should be based on avoiding clipping of the nebulosity immediately around the Trapezium? I am not trying to avoid clipping the trap stars per se, just the nebulosity in their immediate vicinity. Or is that objective too restrictive? Believe me, I generally like shooting the longest exposures my system can handle. Even more, I hate adding to the complexity of processing by having to deal with data comprising multiple exposure lengths if all of that can be avoided.
    Again, thank you so very much.
    -H
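One quick way to test that premise against real subs is to measure the fraction of pixels at or near full scale in a crop around the core. This is a NumPy sketch under stated assumptions: the 65535 ceiling assumes a 16-bit camera, the 98% margin is an arbitrary "effectively clipped" threshold, and the synthetic `core` array is a hypothetical stand-in for an actual crop around the Trapezium:

```python
import numpy as np

def clipped_fraction(sub: np.ndarray, saturation_adu: int = 65535,
                     margin: float = 0.98) -> float:
    """Fraction of pixels at or near the saturation ceiling.

    `saturation_adu` assumes a 16-bit camera; `margin` treats anything
    above 98% of full scale as effectively clipped, since values near
    the ceiling are already nonlinear on many sensors.
    """
    return float(np.mean(sub >= margin * saturation_adu))

# Hypothetical example: a synthetic "core crop" standing in for the
# region around the Trapezium in a single sub.
rng = np.random.default_rng(1)
core = rng.normal(40000, 8000, size=(200, 200)).clip(0, 65535)

frac = clipped_fraction(core)
print(f"clipped fraction: {frac:.4f}")
```

If this fraction is nonzero for the nebulosity itself (not just the stars), the exposure is too long for the stated goal; clipped stars alone may be acceptable, per the reply below.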

  • Yes, I agree. You will get some saturation of the stars... but I think that is OK. I am a "nebulist". Some people care more about the stars than the deep-sky object itself, and this has always puzzled me since the stars aren't the main subject.

    -the Blockhead
  • If I, too, become a blockhead-nebulist might I find myself automatically improving in table tennis?
  • I have to leave this comment.
    Your video:
    What a gem!
    First, there are several brilliant concepts illustrated. The most stunning is the hue and saturation extraction and subsequent recombination with the modified I (intensity). So cool! What I loved about the video is how casual it was as you, in real time, spitballed and processed while making your objective so clear to the listener. I learned so much. Thank you.
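The general technique being praised here (extract the chromatic channels, modify only the intensity, then recombine so color is preserved) can be sketched in a few lines of Python. This uses HSV via the standard `colorsys` module and a simple gamma stretch as stand-ins; it is not the actual PixelMath or HDR operation from the video:

```python
import colorsys
import numpy as np

def compress_intensity(rgb: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Compress the dynamic range of the value channel while leaving
    hue and saturation untouched (color-preserving intensity work).

    `rgb` is an (..., 3) float array in [0, 1]; `gamma` < 1 lifts faint
    signal and compresses highlights, a crude stand-in for an HDR stretch.
    """
    flat = rgb.reshape(-1, 3)
    out = np.empty_like(flat)
    for i, (r, g, b) in enumerate(flat):
        h, s, v = colorsys.rgb_to_hsv(r, g, b)   # extract hue, sat, value
        v = v ** gamma                           # modify intensity only
        out[i] = colorsys.hsv_to_rgb(h, s, v)    # recombine
    return out.reshape(rgb.shape)

# A bright orange "core" pixel and a faint red "wing" pixel (made up).
pixels = np.array([[0.9, 0.5, 0.1],
                   [0.04, 0.01, 0.01]])
result = compress_intensity(pixels)
print(result.round(3))
```

Because only V changes, each output pixel is its input scaled by the same factor across R, G, and B, so the color ratios (hue and saturation) survive the stretch.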



  • That technique, if it is the HDRMT bit, is now available as a single click in HDRMT. The intensity checkbox does this work for us now. But having seen what is happening... you now know how it works.

    -the Blockhead
  • As PixInsight evolves (and as a select few write some very creative scripts), it allows people to process much better. However, this often comes at the expense of learning the mechanisms and concepts behind the curtain. So the progress is both a blessing and a curse. Thanks to you, the readership has the opportunity to achieve both.