Color-blindness: any hints?

Hi
I suffer from a strong red deficiency.
I kind of blindly trust the color calibration (PCC or CC), which do a good job in general (according to my wife).
However, further processing does introduce color casts or shifts that I cannot control very well, without the help of others.
Are there any techniques that could be used to reveal changes without relying on my eyes only?
Many thanks in advance for any advice,
Rodolphe

Comments

  • In a way, you can look at the data numerically. I am not saying this would be easy... but within your images there will always be things you expect to be neutral in color. The blank sky (no nebula) is neutral, which means you expect to see equal values of R, G, and B in each channel. If one is, on average, higher or lower than another for a neutral source, there is likely a color bias. White stars and the sky are good references. You can also look at the results of PCC and make certain the resulting ratio of R:G:B is correct and the same each time you process. If not, you know you need to make an adjustment.

    The point is, what we see with our eyes is encoded numerically in the data. If you can see a bias with your eyes, you can also see it in the numbers. So with your vision, you would have to rely on the numbers more than your eyes.

    Again..I am not saying this would be easy...but you can definitely find ways to detect there are issues.
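    A rough sketch of this numerical check, using numpy. It assumes you can load the linear image as an H x W x 3 array; the function name and the synthetic "sky" data are illustrative, not a PixInsight feature:

```python
import numpy as np

def channel_bias(rgb, rows, cols):
    """Median R, G, B levels of a patch expected to be neutral (blank sky),
    expressed as ratios to green. For a truly neutral patch, all ~1.0."""
    patch = rgb[rows, cols, :].reshape(-1, 3)
    medians = np.median(patch, axis=0)
    return medians / medians[1]

# Synthetic example: blank 'sky' with a slight red cast
rng = np.random.default_rng(0)
sky = rng.normal(loc=[0.052, 0.050, 0.050], scale=0.002, size=(100, 100, 3))
ratios = channel_bias(sky, slice(None), slice(None))
print(ratios)  # R/G noticeably above 1.0 betrays the red bias
```

    The same comparison can be done by hand with PixInsight's Statistics tool on a background preview: read the per-channel medians and check that they agree.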

    -the Blockhead
  • Hi Adam
    Many thanks for your response.
    I started using StarTools, another processor... and found a brilliant tool in its color module called MaxRGB. It turns each individual pixel into its dominant channel: this way I can easily spot if there is a color bias and adjust the channels to obtain a balanced image.
    It helps me get quite good results, according to feedback from others.
    Wish PI had a similar tool ;)
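    StarTools' actual implementation isn't shown here, but the MaxRGB idea is simple enough to sketch in numpy (all names below are illustrative):

```python
import numpy as np

def max_rgb(rgb):
    """Keep only the dominant channel of each pixel, zeroing the others.

    On a balanced image, neutral areas come out as fine-grained noise
    (no channel wins consistently); a color cast shows up as a solid
    wash of one channel.
    """
    dominant = np.argmax(rgb, axis=-1)   # index of the strongest channel per pixel
    mask = np.eye(3)[dominant]           # one-hot selection of that channel
    return rgb * mask

# A neutral gray patch with a small red cast: every pixel comes out red
patch = np.full((4, 4, 3), 0.5)
patch[..., 0] += 0.02
out = max_rgb(patch)
print((out[..., 1:] == 0).all())  # green and blue are suppressed everywhere
```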

    Love your videos, I'm a big fan.
    Keep them rolling ;)
    One suggestion, maybe - how about an end-to-end process with less than ideal subs (noise, low signal...)?

    KR

    Rodolphe
  • Each time I do videos with less-than-ideal subs (please look at the M51 example from Josep Drudis in Fundamentals) I get complaints that it is too hard to follow along, because dealing with all of the problems gets in the way. I am told to stick to the main road... etc., etc. The NGC 300 data was also poor quality. I am not certain what level of poor quality I need to get at here!! Take the NGC 300 data in Fundamentals... what you did not see were the original astrophotographer's results; you saw MY results. And maybe people say my results look OK or good... so the data wasn't that poor. But what if you saw his result... and THEN my result?? That ain't a good way to treat a customer!

    A rock and a hard place. I have put (and will continue to do so) challenging projects in Horizons and not Fundamentals.

    One more thing: noisy, low-signal data means less processing. That is the irony of the hobby... the poorer the data, the less you can do with it. This is information processing. I think the key lesson, one that many do not want to hear, is that for any data (any image) there is a *useful* amount you can display. Sure, you can stretch any image to any degree you want (just like attaching an eyepiece to a telescope and getting any magnification you want), but there is a *useful* degree of information you can display, determined by the quality of the data. Many first-time imagers (less than 2-3 years) often have high expectations.

    I guess I got fired up here. I get public requests for me to work on poorer images (I am called an elitist) - but then privately I am told not to show that level of complexity or number of steps. 

    -the Blockhead
