Best Way of Combining Datasets Taken with the Same Camera but Different Filters

Hi Adam,

I am wondering what the best way would be to combine data I have of a target taken with the same camera but with different filters. For example, I have data of IC 2944 taken in August 2022 with my Optolong L-eNhance and another batch taken in July 2023 with my Optolong L-Ultimate. Both datasets were taken with my astro-modified Canon 6D and Radian 61.

I have previously tried stacking everything together in WBPP, but I am not sure this is the correct way to do things. I am now wondering whether I should stack the datasets separately, then align them and use the ImageBlend script as you have demonstrated previously.

Zak :)

Comments

  • Ok... so let's start with the wavelength. What is the difference between the two filters? (Pick only ONE wavelength.) If these are basically the same wavelengths... then you can combine and let the weighting do its job. If the wavelengths are significantly different... then combining doesn't make sense (unless it is for a luminance). It would be like stacking Red and Green data... something you wouldn't do for a color image... but Ok for a luminance.

    I think you need to be more specific about what you have in hand, precisely, and what you are trying to accomplish.

    -the Blockhead
  • Ok. The main wavelength I am interested in is the 3nm bandpass of the Optolong L-Ultimate. The Optolong L-eNhance has a much wider bandpass as it is a tri-band filter. I hope this answers your question.

    The main reason I wanted to incorporate these two different filters and datasets is to add more exposure time to the project, to try to bring out the fainter nebulosity, and to reduce noise.

    Ok, good to know, thanks Adam. Would it still be the same thing if I stacked the datasets separately and later combined them with additive PixelMath? (Roughly what I sketch at the end of this comment.)

    I am intrigued by how I might use dual-band data as a luminance layer. Maybe this could be something for me to investigate?

    Zak :)
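
    PS - Here is roughly what I mean by the additive combine (a minimal numpy sketch with made-up weights; the two arrays just stand in for the registered master stacks):

        import numpy as np

        # Stand-ins for the two registered master stacks; in reality these
        # would be the stacked images loaded as float arrays.
        master_ultimate = np.random.rand(1024, 1024)
        master_enhance = np.random.rand(1024, 1024)

        # A plain sum changes the overall brightness scale, so a weighted
        # mean is the usual form. These weights are made up; sensible
        # values would come from noise estimates of each master.
        w1, w2 = 0.7, 0.3
        combined = (w1 * master_ultimate + w2 * master_enhance) / (w1 + w2)

    In PixelMath I guess this would be a single expression like 0.7*ultimate + 0.3*enhance, with the two masters referenced by their image identifiers.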
  • No... it doesn't quite answer my question.
    With your Optolong filters... you are getting three NB images extracted from the OSC data, right?

    So let's talk about the Ha emission. At some point you extract/create an Ha linear image, right?
    If you do this for each of your images with the two different Optolong filters... the bandpass AND transmission are different for the Ha, right? It is the same camera. So I am asking... if you stack these images (thinking ONLY of Ha for a moment)... do you think it is fair to weight the 3nm data LESS than a frame from the L-eNhance? If yes, then stacking is fine. If no... then it doesn't make sense to stack. (I put some rough numbers on this below.)

    Am I zeroing in on the issue?

    -the Blockhead
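
    PS - To put rough numbers on the weighting question (every value here is made up, purely to illustrate that the line signal is similar through both filters while the sky background scales with the bandpass width; the 10nm figure for the L-eNhance Ha band is an assumption):

        import math

        # Made-up numbers to illustrate the trade-off: the emission-line
        # signal is (nearly) the same through both filters, but the sky
        # background grows with the width of the bandpass.
        line_signal = 1000.0   # hypothetical Ha line flux per sub, in e-
        sky_per_nm = 40.0      # hypothetical sky background per nm of bandpass, in e-

        for name, bandpass_nm in [("3nm (L-Ultimate)", 3.0), ("assumed 10nm (L-eNhance Ha band)", 10.0)]:
            sky = sky_per_nm * bandpass_nm
            # shot-noise-limited SNR: line signal over sqrt(signal + sky)
            snr = line_signal / math.sqrt(line_signal + sky)
            print(f"{name}: sky = {sky:.0f} e-, SNR = {snr:.1f}")

    With numbers like these the narrower filter actually gives the *higher* per-frame SNR on the emission line... which is why weighting it lower doesn't make sense.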
  • Ok. Sorry for the confusion Adam... I'm trying to understand this myself lol. Upon examining the extracted RGB channels for both images, the Ha signal is stronger in my Optolong L-Ultimate data than it is in my Optolong L-eNhance data.

    Now that you have explained it, it wouldn't really make sense to weight the L-Ultimate data lower than the L-eNhance data, as from my understanding, the narrower the bandpass of a filter, the more of the signal comes through. I am not sure if this is true for OSC data though.

    I don't normally extract an Ha linear image, and to be honest I had never really thought about doing it until you mentioned it. Have I been processing dual-band data wrong? The main way I process my dual-band data is to leave the channels combined and process the image normally until after I stretch the starless image, at which point I use NarrowbandNormalization to create an HOO image (effectively the mapping I sketch at the end of this comment).

    Hoping this explains things for you, and sorry again for the confusion.
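
    For reference, the HOO image I end up with is effectively the generic palette below (just the mapping itself, not what NarrowbandNormalization actually computes internally):

        import numpy as np

        # Stand-ins for the stretched, starless Ha and OIII images.
        Ha = np.random.rand(1024, 1024)
        OIII = np.random.rand(1024, 1024)

        # Generic HOO palette: Ha drives red, OIII drives green and blue.
        hoo = np.dstack([Ha, OIII, OIII])   # (H, W, 3) RGB image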
  • I think there are many approaches for working with dual-band images... and I am not an expert in any.
    (I am not a fan of this method of imaging... it is double filtering and seems problematic to me.)

    I think there are transformations that attempt to extract the "Ha" from a combination of the Red and Green pixels. Likewise, a different proportion of Green and Blue is used for the OIII. (There is a rough sketch of this kind of extraction below.)

    Concerning the narrower filter... more light *does not* come through. There really are fewer photons- but more of the photons that do make it through are from the emission you are trying to detect. Basically, the narrower filter will have a darker sky at the selected wavelength of interest. So better contrast... but also less signal. This is why combining the data doesn't seem right in terms of my gut reaction. I think to do it... you need to understand what exactly you are getting as output. Is it really going to help? And what does the blend mean? In mono imaging... nothing prevents me from combining green and red data. But what does this mean in terms of output? What am I doing with it?

    -the Blockhead
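
    PS - The extraction I have in mind is something like this (a sketch of one common convention, not any particular script's method; the 50/50 split is an assumption):

        import numpy as np

        # Stand-in for a debayered OSC frame as an (H, W, 3) float array.
        rgb = np.random.rand(1024, 1024, 3)
        R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

        # With a dual-band filter the Ha line lands almost entirely in the
        # red channel, while the OIII line is shared between green and blue.
        Ha = R                  # red carries the Ha signal
        OIII = 0.5 * (G + B)    # average of green and blue for OIII

        # Other approaches use proportions tuned to the sensor's spectral
        # response; the even split here is only illustrative.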
  • All good Adam, thanks for your insights anyway.

    It makes sense now. I'll do more research on OSC dual-band processing. Maybe one day, when I can afford it, I will switch over to mono imaging.

