Best stretching method prior to LRGB combination

Being new to mono imaging, I was having trouble getting the blend of L and RGB images right until I watched Adam's presentation on LRGB blending - now I'm getting much better results. Adam used ArcsinhStretch to preserve the colour in the RGB image prior to blending with the luminance image. I've been using GHS to stretch my images (again, learning a lot from Adam's tutorials) as I feel it gives finer control over the stretching process. I wonder how GHS may best be used in relation to LRGB combinations - can it be used alone, or can ArcsinhStretch be used in conjunction with it?

Frank

Comments

  • I would be careful with ArcsinhStretch for color. You can use a tiny bit of it... but it does do some funky stuff. It is better, in my opinion, to just manage the blending of the L with the RGB image at an appropriate brightness and color saturation level. You can certainly use GHS to stretch both the luminance and RGB - but the principle I teach still needs to be adhered to, which is that you do not stretch to make these images look good independently... you stretch them to BLEND together well. Typically this means you need to manage the brightness levels so that they do not exceed 0.8. In addition, the color image needs to show all color well... but does not necessarily need to be a high-contrast result. The contrast comes from the luminance.

    -the Blockhead
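    The blending principle above can be sketched numerically. This is a toy numpy illustration of the idea that L supplies the brightness and contrast while the RGB image only needs to supply colour ratios - it is not PixInsight's actual LRGB implementation, and lrgb_blend is a made-up helper:

```python
import numpy as np

def lrgb_blend(L, rgb):
    # Toy combine: take colour ratios from rgb, overall intensity from L.
    intensity = rgb.mean(axis=-1, keepdims=True)   # crude luminance proxy
    ratios = rgb / np.maximum(intensity, 1e-6)     # colour information only
    return np.clip(L[..., None] * ratios, 0.0, 1.0)

# a modest-brightness, clearly coloured RGB pixel (kept well below 0.8)...
rgb = np.array([[[0.6, 0.3, 0.2]]])
# ...recoloured onto a separately stretched luminance value
L = np.array([[0.5]])
out = lrgb_blend(L, rgb)   # colour ratios survive, L sets the brightness
```

    As long as nothing hits the clip, the R:G:B ratios of the result are exactly those of the colour image - which is why the RGB only needs to show all colour, not look good on its own.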
  • Just to follow up... please do watch my series on NGC 3486. 
    In the more up-to-date version I do not use ArcsinhStretch and demonstrate a better result. 
    (If you really have time..look at both versions of my processing... lots to learn there!)


    -the Blockhead

  • Thanks Adam, I'll check those out.
  • AB: "In addition, the color image needs to show all color well..but does not necessarily need to be a high contrast result. The contrast comes from the luminance. "

    I thought in one of the Fundamentals videos you recommended giving the RGB image a lot of contrast to preserve color when combined with the Lum... did I get that wrong?

    On another note, I've been using your suggestion to clip any excess background pedestal as a first stretch step, which has been working well, but in an Astrobin thread, David Payne offered a method to do the same thing but by compressing the data instead of clipping it (process below). Is there any benefit to that, or is it just saving data for data's sake but with no useful utility?
    Thanks,
    Scott

    DP's process:
    -set SP at the point you want to clip/compress
    -set b to maximum (15)
    -drag HP slider all the way down until it bumps into SP
    -apply stretch (D) to move the whole histogram to the point you want
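    To make the clip-versus-compress distinction concrete, here is a toy numeric sketch. The functions below are invented for illustration and are not the actual GHS transfer function; the point is only that clipping discards everything below the pedestal while compression squeezes it into a thin range above zero:

```python
import numpy as np

PEDESTAL = 0.1   # pretend everything below this is sky pedestal

def clip_pedestal(x, p=PEDESTAL):
    # hard black point: everything below p becomes exactly zero
    return np.clip((x - p) / (1.0 - p), 0.0, 1.0)

def compress_pedestal(x, p=PEDESTAL, squeeze=0.05):
    # squeeze [0, p] into [0, squeeze*p] instead of discarding it,
    # and map [p, 1] linearly onto the remaining range
    low = x * squeeze
    high = squeeze * p + (x - p) * (1.0 - squeeze * p) / (1.0 - p)
    return np.where(x <= p, low, high)

x = np.array([0.02, 0.08, 0.10, 0.50, 1.00])
clipped = clip_pedestal(x)         # first three samples are driven to 0
compressed = compress_pedestal(x)  # nothing is driven to 0
```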
  • Increasing the contrast in the color channels does result in an increase of saturation in the LRGB for a given stretch level. BUT it is far more important to show all color in the RGB at a reasonable level. If the color is not present in the faint and bright areas, it will not color the luminance well. This is where people have issues. The point is that the RGB image will not look like a well-processed image - it just needs to couple well with the luminance to color everything the luminance has that is useful signal.

    -the Blockhead
  • As far as compressing (scaling) the data in GHS to move the histogram to the left goes - I believe this is wrongly conceived. You want to do this with the linear mode, since this is an additive pedestal and you are effectively subtracting it. Shifting in the manner mentioned can bias the colors a bit and change the balance. This is not correct.

    I saw Paulyman mention this as well... I believe it is not correct.

    -the Blockhead
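    The additive-pedestal argument can be checked numerically. In this sketch (hypothetical values, not real image data), a constant bias added to each channel is removed exactly by subtraction, whereas a purely multiplicative correction cannot recover the true ratios:

```python
import numpy as np

true_signal = np.array([0.30, 0.20, 0.10])  # hypothetical true R, G, B
bias = 0.05                                 # additive pedestal on every channel
observed = true_signal + bias               # what arrives in the stack

recovered = observed - bias                 # linear-mode subtraction
# a purely multiplicative "fix" that matches total flux but not the ratios
rescaled = observed * (true_signal.sum() / observed.sum())
```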
  • Thanks for the feedback on both points!! In addition to affecting color balance, could compressed data throw off background measurements used by other processes?

    Cheers,
    Scott
  • I am not sure I agree with the implication that subtracting an additive pedestal does not change the colour balance. The "colour" is determined by the ratio R:G:B. If you subtract an additive pedestal p, the ratio (R-p):(G-p):(B-p) is not the same - i.e. a colour shift will result. What you really want is a multiplicative adjustment of the same amount to each channel, e.g. R(1-m):G(1-m):B(1-m). This will leave the ratios exactly the same and will therefore leave the colour unchanged. This can be achieved by using the method suggested by Dave Payne and by Paulyman in GHS and making sure that the Colour Mode is set to "Colour".
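    The ratio argument above is easy to verify numerically. This small sketch (arbitrary pixel values) shows that subtracting the same pedestal p from every channel changes the R:G:B ratios, while scaling every channel by the same factor leaves them untouched:

```python
import numpy as np

pixel = np.array([0.40, 0.20, 0.10])   # a colour-calibrated R, G, B pixel
p, m = 0.05, 0.5                       # pedestal and multiplicative factor

subtracted = pixel - p                 # (R-p):(G-p):(B-p)
scaled = pixel * (1.0 - m)             # R(1-m):G(1-m):B(1-m)

def ratios(v):
    # channel ratios relative to R - the "colour" in this argument
    return v / v[0]
```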
  • You are referring to the image normalization?

    v - backgroundReference, then image normalization (from SPCC's background neutralization):

    template <class P>
    static void ApplyBackgroundNeutralization( GenericImage<P>& image, const DVector& backgroundReference )
    {
       StandardStatus status;
       StatusMonitor monitor;
       monitor.SetCallback( &status );
       monitor.Initialize( "Applying background neutralization", image.NumberOfPixels() );

       for ( typename GenericImage<P>::pixel_iterator i( image ); i; ++i, ++monitor )
          for ( int c = 0; c < 3; ++c )
          {
             double v; P::FromSample( v, i[c] );
             i[c] = P::ToSample( v - backgroundReference[c] );
          }

       image.Normalize();
    }
  • edited June 2023
    What I am referring to is here:
    https://youtu.be/mPi8SwfZ27Q?t=200

    I will make the sky neutral by removing the bias in each channel and *then* stretch the channels together with a common stretch. If I have color calibrated images with correct scaling between the channels..but somehow a bias was added to the channel- I would just subtract the bias from each channel until they align for a neutral sky and then stretch.

    -adam

  • edited June 2023
    Maybe we are talking at cross purposes. In that video Paulyman is working in narrowband Hubble palette. Colour precision is not an issue in that case. 

    I was taking my cue from the title of this thread and assuming an LRGB image. My point is that, if you have a colour calibrated RGB image, then subtracting a pedestal would affect the colour calibration. 

    The code snippet from SPCC shows the subtraction of the background reference to achieve the colour calibration. What I am saying is, having done that and got a properly calibrated image, any further subtractive adjustment would alter the colour balance. 

    I don't think it is a big deal because the stretch to non linear and subsequent non-linear processing will likely mess with the colours a bit anyway. What I am saying is that the colour bias argument against the blackpoint adjustment method suggested by Dave and Paul is not, I believe, valid. Furthermore, if introducing colour bias is a concern, then using the method they suggest with the Colour mode set to Colour actually doesn't alter colour balance at all!
  • Mike and Adam,

    I kind of agree/disagree with both of you. This only seems appropriate since, with three primary colours, there should be three opinions.

    I like Adam's step-wise approach. Removal of the pedestal from each channel first and then stretching is the way to go. Each step that way leaves the colour channels individually intact. There is no chance of clipping any data (that you don't want clipped). If you base the blackpoint adjustment on luminance (using the colour mode), you can arrive at a "black point" adjustment that is greater than an individual channel pixel value you may want to keep. So I keep it in RGB mode, do my black-point adjustment, and then stretch in colour mode or RGB/K mode after the black-point adjustment.

    I never do subtraction in DBE (always division), simply because some points will appear absolute black due to clipping if a background pixel is less than the background model indicates. This can leave areas of absolute black in the image (which arguably can't be seen in most images anyway because the area around them is so dark), so it is only important to others as anal as I am about the data. For the same reason, I tend to set my apparent "black point" by compressing the data in GHS, using RGB/K mode rather than applying a true black point, or using colour mode, which can clip the data.

    This compression method is achieved by setting SP to the left of the histogram, setting HP = SP and applying some stretch factor (generally with a very high b). (Note: an interesting variant can be done by setting HP = SP within the histogram, but still on the LHS of the peak - but that's another topic.)

    Both methods will change the contrast and brightness to the right of the "new" blackpoint, but only in a linear way. The luminance will decrease and the saturation will increase proportionately. There will be no change in hue (at least to the right of the chosen HP or black point). Then, once that's over and done with, I go on to my "real" stretching to non-linear.

    In the vast majority of cases I think there is very little difference, but I thought I would weigh in with my "chartreuse" opinion. 
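    The hue/saturation claim above can be checked with Python's standard colorsys module: an equal linear blackpoint adjustment on all three channels lowers the HSV value and raises the saturation, but leaves the hue unchanged (the pixel and blackpoint below are arbitrary test values):

```python
import colorsys

r, g, b = 0.6, 0.4, 0.2     # arbitrary linear pixel
bp = 0.1                    # blackpoint / pedestal to remove

def blackpoint(v, bp=bp):
    # the same linear blackpoint adjustment applied to every channel
    return (v - bp) / (1.0 - bp)

h0, s0, v0 = colorsys.rgb_to_hsv(r, g, b)
h1, s1, v1 = colorsys.rgb_to_hsv(blackpoint(r), blackpoint(g), blackpoint(b))
```

    Hue is built from channel differences, which an equal subtraction only rescales - so it survives, even though the R:G:B ratios themselves shift.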
  • Oops,  I forgot to mention one other thing.

    I agree with being very careful using arcsinh. The difficulty, as I see it, is that while luminance is "best" stretched using HT or GHS, saturation is best stretched using an "arcsinh"-type function.

    The "colour" mode achieves a compromise by allowing you to blend the "arcsinh-style stretch" back with the normal RGB-style stretch, which I often employ - but I am very careful to weight the stretch towards "not enough colour", or undersaturated. Once my luminance is correct (either before or after the LRGB combination, or both), I visit GHS specifically to address colour saturation.

    By selecting b < 0 (this is one of the reasons it was included), you can do an arcsinh-type stretch, and by selecting "saturation" as the mode, you can perform an "arcsinh"-style stretch on the saturation only. You still have to be careful not to oversaturate the image - this is best done by keeping the saturation histogram peak to the left side of the histogram and the saturation value low at the far RHS. If you can imagine a triangle formed with points at 0 on the left and right sides, and a peak in the middle, you want the saturation to be within that triangle - more or less.

    By selecting b=-1.4, the function is indistinguishable from true arcsinh, but there is nothing magical about this particular value.

    I only mention this as it is another way to get colour saturation into the image and is part of my workflow. Generally I at least have a look at the saturation before and after combination of the colour image with L.
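    As a rough sketch of what an arcsinh-style stretch on saturation alone does (this is not the actual GHS code, and the k parameter is an invented stand-in for GHS's b):

```python
import colorsys, math

def asinh_stretch(s, k=10.0):
    # Normalised arcsinh curve: maps 0 -> 0 and 1 -> 1, steepest near zero,
    # so faint saturation is boosted more than strong saturation.
    return math.asinh(k * s) / math.asinh(k)

r, g, b = 0.5, 0.45, 0.4                    # a weakly coloured pixel
h, s, v = colorsys.rgb_to_hsv(r, g, b)      # s is about 0.2 here
r2, g2, b2 = colorsys.hsv_to_rgb(h, asinh_stretch(s), v)  # hue and value kept
```

    Saturation more than doubles for this pixel while hue and brightness are untouched - which is exactly why the caution above about staying undersaturated matters.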
     