Working with OSC images, should I convert to LRGB?

Hi Adam,

Like many beginners, I'm starting off with an OSC camera (ASI294MC Pro), and I've seen discussions in other threads about the fact that many of your tutorials treat LRGB as the 'standard' way to process data. Like many, I aspire to owning a set-up with a mono camera and filter wheel, so it feels like getting to grips with working in this way probably makes sense. I've worked my way through quite a few tutorials and seen that you use LRGB most of the time, but it's now that I've tried deconvolution that I've really started to wonder whether taking the 'easy' route of just working on combined RGB data is the right thing to do.

That said, there are different ways to go about this, some of them smaller steps in that direction. Which of the following would you suggest? (I'm currently shooting mainly galaxies in broadband, since I already have a Celestron C8, if that makes any difference.)
  1. Carry on processing in RGB (but work out what problems that may cause, e.g. with deconvolution, and build an appropriate workflow)
  2. Pre-process in RGB but create a separate L image to post-process with
  3. Pre-process in RGB but split into R, G, B & L images to post-process with
  4. Split subs into R, G, B & L images and process from scratch as separate images
Thanks for any guidance; I want to make sure I start out on the right path!

Grant

Comments

  • Grant,

    The funny thing... just yesterday I received a note asking... do you have any mono (LRGB) imagery videos...all I see you do is OSC? So I have been pushing the CMOS OSC so hard that now people do not think I do LRGB. Can't win.

    Anyway, first a comment about deconvolution. This is a very subtle and incremental enhancement of images. I fear many people try to make this do too much of the work for the sharpness and contrast. However, what makes it a unique process is that you really do want to work on the "luminance" of the image and not introduce color artifacts by interacting with the color channels. So whether you gather luminance data at the telescope or you extract (or create) a synthetic luminance from the RGB data, operating on that one image for deconvolution and then blending it back in makes sense.
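    As a rough illustration of that synthetic-luminance idea, here is a small Python/NumPy sketch (my own toy example, not PixInsight's actual luminance extraction, which is based on the RGB working space / CIE L*; the equal channel weights below are just a placeholder):

        import numpy as np

        def synthetic_luminance(rgb, weights=(1/3, 1/3, 1/3)):
            """Collapse an RGB image (H x W x 3, values in [0, 1]) into a single
            luminance-like image using a weighted sum of the channels."""
            wr, wg, wb = weights
            return wr * rgb[..., 0] + wg * rgb[..., 1] + wb * rgb[..., 2]

        # Workflow sketch: deconvolve only this luminance image, then blend the
        # sharpened result back with the RGB data so no color artifacts appear.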

    The distinctions you tried to make in your list are somewhat degenerate. Deconvolution has one very, very important criterion... you need enough signal. It is very rare (I really do not see any good examples) for people to get enough signal (a bright enough image) with OSC for deconvolution to be appropriate. This is *why* a separately acquired luminance image is helpful... all of those hours of unfiltered data really make deconvolution possible. So the paragraph above deals with the processing side (your #2), but this last point is really the more important one. Think of it this way: deconvolution works on brightness values above 0.05. Look at your data. Do any features of the object you want to sharpen have values above 0.05? (That is more than 3,000 counts in 16-bit terms, since 0.05 of 65,535 is about 3,277.) Look at my attached image. I have "earned" the opportunity to do decon on this image; the counts are high enough.
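    If you want a quick way to check this for your own stack outside of PixInsight, here is a short sketch in Python (assuming NumPy and Astropy are installed; "master_light.fits" is a placeholder for your own stacked, linear image):

        import numpy as np
        from astropy.io import fits

        # "master_light.fits" is a hypothetical file name; point this at your stack.
        data = fits.getdata("master_light.fits").astype(np.float64)

        # Rescale integer data to the [0, 1] range PixInsight reports.
        # In 16-bit terms the 0.05 threshold is about 3,277 counts (0.05 * 65535).
        if data.max() > 1.0:
            data = data / 65535.0

        threshold = 0.05
        print(f"Peak value: {data.max():.4f}")
        print(f"Fraction of pixels above {threshold}: {np.mean(data > threshold):.2%}")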

    So my suggestion is... you can take RGB data with your OSC. If the features of a bright object reach values above 0.1, then I certainly think you can take advantage of deconvolution. You could then extract a "luminance" and do it!

    So... 

    1. I am not certain I understand this comment.
    2. This is the typical way it is done...assuming enough signal. 
    3. This is the same as the above... but more work. A PixInsight RGB image is the same thing as the separated channels; PixInsight operates on each channel even though you have a single RGB image (see the small sketch after this list).
    4. Whoa... I have no idea how this would help!
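
    To make the point about #3 concrete, here is a tiny NumPy sketch (again a toy example, not PixInsight code) showing that splitting an RGB image into channels and stacking them back gives exactly the same data, so the split buys you nothing but extra bookkeeping:

        import numpy as np

        # A toy RGB image (H x W x 3) standing in for an integrated OSC stack.
        rgb = np.random.rand(4, 4, 3)

        # "Splitting" is just slicing out the channels...
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

        # ...and stacking them back reproduces the original exactly.
        assert np.array_equal(np.stack([r, g, b], axis=-1), rgb)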

    -the Blockhead

    [Attachment: Capture.JPG, 1885 x 1083]
  • Hi Adam,

    That's very interesting about deconvolution and the requirement for enough signal/brightness to even make it worth doing; I hadn't appreciated that. Working out which processes to use, and more importantly when not to use them, is one of the skills I need to get my head around, along with the general sequence to apply them in.

    My thinking on my four options was broader than just deconvolution, as I want to learn to process in the most effective manner. Having read what you laid out, and having done a bit more thinking and reading on the way PixInsight operates on each channel (as well as on the processes used to combine LRGB when captured separately), I can now see why #2 is the way to go if you have OSC data and want to work in an LRGB fashion. Now let me go and dig around in your Foundations videos to see if you have a suggestion on the best way to do that...

    Thanks

    Grant


  • Thanks, Adam, for the comment about needing a high enough brightness level (>0.05) for deconvolution to work.