Tutorial suggestion

Hi Adam

I'd appreciate a tutorial on how to maintain good star colours, particularly when the stars are embedded in nebulosity. For me, the most challenging scenario is where you come across bright blue stars embedded in red Ha gas - the Bubble Nebula is probably a good example.

Alan

Comments

  • Hmmm... I think an example image (from you) would be helpful here. Typically certain kinds of processing will create artifacts for stars embedded in nebulosity - and color is one of the things that can be affected, especially at the star "edges" where it meets the nebula. But without seeing an image, I don't think I can generalize a tutorial. 

    One thing that is tricky is that at a boundary (say a star edge) where there is a difference in *brightness*, the color appears messed up - but fundamentally it is a luminance issue and not a color one. I am totally guessing here...

    -the Blockhead
  • Hi Adam

    Thanks for the reply. 

    In terms of an example, here's an image of the Bubble Nebula that I'm currently working on. 


    Alan
  • Alan,

    Arrows would be helpful!


    Are you talking about the darkness (slight color difference) surrounding the embedded stars?
    This is due to the mismatch between luminance and color (either due to an LRGB blend or simply a contrast enhancement).

    -the Blockhead 
  • Hi Adam

    Yes - that is one of the points. The other - which is somewhat generic - is how you would suggest maintaining good star colours for bright stars. For example, if you take the bright star within the bubble, which is supposed to be blue, it appears white because the lum value is so high. Furthermore, if you examine the R, G, B values of the star core, the Blue value is not the highest, indicating something has gone wrong during acquisition/processing.  

    At the moment, I'm experimenting with the Repaired HSV Separation script, masked stretch, and Photometric Colour Calibration using the different white reference points to see how best to address these issues - however, it would be good to get your thoughts via a tutorial. 

    Alan
  • Ok. 
    Well, the first comment I would make.. and this is a stylistic point - if you look at my "portfolio" of images, all of my stars have whitish cores. The colors tend to come from the halos and diffraction spikes. Unless the object itself is composed of stars (like a globular star cluster or other star cluster), I apply my contrast enhancements with an eye to the nebula and not so much the stars. Some processors, such as Hallas and perhaps nowadays Hanson, purposefully *lessen* the brightness of their star cores so that they can shove color in there. 

    To me.. this does not look "natural." When you look at a VERY bright source of light you do not see color well. Usually it is in the sparkly edges of some super bright thing that you see color. This is what I want to communicate.. the centers of very bright things are so bright that well... there isn't anything else to perceive. 
    You will note that the centers of my galaxies are not entirely "flat" for the same reason. 

    So look at my very old Bubble:

    (I think I could do better now... hehe)
    See? If you take away the halos and the spikes of the stars..it would be very difficult to see the color variation of the stars. 

    So... the fundamental answer to your question is methods that do not brighten stars.. but still give you a bright object. You can handle stars and nebula independently.. but the trick is in blending them back together. The Screen Blending techniques I demonstrate are good ones for this purpose. You will note I use them to "repair" stars and things. 

    -the Blockhead
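
    The screen blend Adam mentions has a simple mathematical core. Here is a minimal NumPy sketch of just that arithmetic (the function name and toy pixel values are my own assumptions, not Adam's actual PixInsight workflow):

    ```python
    import numpy as np

    def screen_blend(a, b):
        """Screen blend: result = 1 - (1 - a) * (1 - b).
        It never darkens either layer, so separately processed stars
        re-combine over a nebula layer without hard edges."""
        a = np.clip(np.asarray(a, dtype=float), 0.0, 1.0)
        b = np.clip(np.asarray(b, dtype=float), 0.0, 1.0)
        return 1.0 - (1.0 - a) * (1.0 - b)

    # toy example: a bright star pixel blended over a mid-brightness nebula pixel
    stars  = np.array([0.9, 0.1, 0.0])
    nebula = np.array([0.4, 0.4, 0.4])
    result = screen_blend(stars, nebula)
    # first channel: 1 - (1 - 0.9) * (1 - 0.4) = 0.94
    ```

    Because the result is always at least as bright as the brighter input, the star layer dominates where the nebula is dim and vice versa, which is what makes the blend forgiving at star/nebula boundaries.
    
    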
  • Adam

    This is a very interesting artistic point. However, what I am saying is that I would like to have the choice of whether or not to go for more white in the star cores. At the moment, I seem to get the white cores by default.

    What I have found very effective is a PS action from Noel Carboni, "Increase Star Colour", that seems to take pixels from the outer parts of a star and replace the inner parts with them. I cannot seem to replicate the same effect in PI, although obviously it is possible.

    Alan
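
    For what it's worth, the effect Alan describes - sampling color from a star's outer halo and pushing it into the core - could be sketched like this in NumPy. This is a guess at the mechanism, not Noel Carboni's actual action; the masks, the mean-chrominance sampling, and the darkening factor are all assumptions:

    ```python
    import numpy as np

    def recolor_core(rgb, core_mask, halo_mask, core_scale=0.85):
        """Sketch: darken the star core slightly, then repaint it with
        the mean color sampled from the halo annulus, scaled by the
        (reduced) core luminance so the hue comes from the halo."""
        rgb = np.asarray(rgb, dtype=float)
        halo = rgb[halo_mask]                                   # halo pixels, shape (n, 3)
        halo_color = halo.mean(axis=0)
        halo_color = halo_color / max(halo_color.max(), 1e-12)  # normalized hue vector
        out = rgb.copy()
        core_lum = rgb[core_mask].mean(axis=-1, keepdims=True) * core_scale
        out[core_mask] = np.clip(core_lum * halo_color, 0.0, 1.0)
        return out

    # toy 3x3 image: blue halo ring around a blown-out white core pixel
    img = np.full((3, 3, 3), [0.2, 0.3, 0.6])
    img[1, 1] = 1.0
    core = np.zeros((3, 3), bool); core[1, 1] = True
    out = recolor_core(img, core, ~core)
    # the core pixel is now slightly darker and blue-dominant
    ```

    As Adam notes below in the thread, the core *has* to come down in brightness first - a pure-white pixel has no headroom to show any hue.
    
    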
  • Yes, white star cores are always the default. However, you might want to distinguish between stars that are truly saturated and those stars in your data that are not - but become white-clipped as part of the brightening process. For example, do you simply create a non-linear image by accepting the "auto-screen transfer function" and then apply HT to make it permanent? If so, you will always have this issue. Instead, you would want to choose a brightness level that is initially much lower and apply a different brightening strategy that protects stars - or makes some become brighter less quickly. I do not recall the actions of the Carboni script... but likely it has to *darken* star cores before taking that color inward - this is simply a mathematical/display thing that there is no way around. (As you know, I demonstrated this in the LRGB blending section of Fundamentals: https://www.adamblockstudios.com/articles/Fundamentals_LRGB )

    So... I will think about a PI way to replicate this effect. I really need to see what Carboni did. 
    -the Blockhead
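
    The distinction Adam draws here - per-channel clipping versus a color-preserving brightening - can be illustrated outside PixInsight. The sketch below compares a naive per-channel power stretch against an arcsinh-style stretch that scales all three channels by one factor; the function names, gamma, and strength values are assumptions, not PI's actual transfer functions:

    ```python
    import numpy as np

    def hard_stretch(rgb, gamma=0.2):
        """Naive per-channel power stretch: each channel is brightened
        independently, so a blue-dominant star drifts toward white."""
        return np.clip(np.asarray(rgb, dtype=float) ** gamma, 0.0, 1.0)

    def asinh_stretch(rgb, strength=50.0):
        """Color-preserving stretch (the arcsinh idea): compute one
        scale factor per pixel from its mean intensity and apply it to
        all three channels, so the R:G:B ratio survives (until clipped)."""
        rgb = np.asarray(rgb, dtype=float)
        i = rgb.mean(axis=-1, keepdims=True)          # per-pixel intensity
        scale = np.arcsinh(strength * i) / (np.arcsinh(strength) * np.maximum(i, 1e-12))
        return np.clip(rgb * scale, 0.0, 1.0)

    blue_star = np.array([0.05, 0.075, 0.15])   # linear pixel, B three times R
    washed = hard_stretch(blue_star)            # B/R ratio compressed toward 1 (whitish)
    kept   = asinh_stretch(blue_star)           # B/R ratio of 3.0 preserved
    ```

    This is the mathematical reason "choose a lower initial brightness" matters: once any channel hits 1.0, no stretch can recover the ratio that carried the color.
    
    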
  • You might be interested in seeing Tony's method:


    I think only subscribers can see this?
    If you are not one... I will find another way.
    This will give you an idea as to the kind of things you would need to do in PI to achieve a similar result.
  • Hi Adam

    Thanks for the reply.

    At the moment, I'm basically adopting a three stage (low, medium, high) stretch policy - I set the backgrounds to be approximately equal and then blend the three results together. I select the particular stretch according to the object. So low stretch = faint objects, high stretch = very bright objects etc.

    Unfortunately, I cannot see the article from Tony since I'm not a subscriber. 

    It will be good to see the results of your thoughts on this subject. 

    Alan
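
    Alan's low/medium/high policy could be sketched as three gamma stretches blended by a brightness mask. This is a hypothetical per-pixel weighting - Alan matches backgrounds and blends per object, which this does not reproduce exactly - and the gamma values are made up:

    ```python
    import numpy as np

    def stretch(img, gamma):
        """Simple power-law stretch, standing in for an HT curve."""
        return np.clip(np.asarray(img, dtype=float) ** gamma, 0.0, 1.0)

    def blend_three(img, gammas=(0.7, 0.4, 0.2)):
        """Blend low/medium/high stretches with Bernstein weights driven
        by pixel brightness: faint pixels lean on the aggressive (high)
        stretch, bright pixels on the gentle (low) one, so star cores
        are brightened less quickly than the dim nebulosity."""
        img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
        low, med, high = (stretch(img, g) for g in gammas)
        w = img                                  # brightness mask in [0, 1]
        return w**2 * low + 2 * w * (1 - w) * med + (1 - w)**2 * high
    ```

    The three weights sum to one at every pixel, so the blend stays in range; a faint pixel ends up close to its high-stretch value while a near-saturated pixel stays close to its gentle low-stretch value.
    
    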
