I had an issue with the new PixInsight release, and it triggered a discussion in which Juan (gently) chastised me for not saving my originals.
When I am collecting data over weeks or months (maybe one day years), what I do is calibrate with WBPP and stop. I include cosmetic correction, but with a very light touch. I review the _C_CC files and cull any that are obviously bad. Calibration is against a dark library of masters, which I redo every 6 months or so to pick up new hot pixels (the dark library includes dark flats, no bias used; ASI6200MM Pro), and of course against flats taken every night, since it is a portable setup that may change a bit.
So....
In this discussion Juan said that one should save the originals. Of everything: darks, flats, dark-flats, lights. Not integrated masters, everything, and with a new version of PI be prepared to run through all the calibration steps from the beginning.
So with 50 darks per master, 25 flats a night, let's say 10 nights of imaging, MAYBE a new dark library at some point in those nights... that's a fair amount of stuff to save. But perhaps more importantly, it means maintaining the association between the darks used for each flat set and the flats and darks used for each light. Rather than a collection of calibrated lights per filter, there is a derivation tree of input images to keep track of. Darks in particular either get duplicated in each day's files, or you need some kind of pointer to which darks were used (and you still have to save them).
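For what it's worth, the bookkeeping itself could be automated. Here is a minimal sketch (in Python, with entirely hypothetical paths and folder layout) of a per-night manifest recording which master dark and which flats calibrated which lights, with checksums so the archived files can be verified later:

```python
# Sketch: record which darks/flats calibrated which lights, so the
# derivation tree survives without copying the dark library into
# every night's folder. Paths and layout are hypothetical.
import hashlib
import json
from pathlib import Path

def sha1(path: Path) -> str:
    """Checksum so a manifest entry can be verified against the archive."""
    return hashlib.sha1(path.read_bytes()).hexdigest()

def build_manifest(lights_dir: Path, flats_dir: Path, dark_master: Path) -> dict:
    return {
        "dark_master": {"path": str(dark_master), "sha1": sha1(dark_master)},
        "flats": [{"path": str(f), "sha1": sha1(f)}
                  for f in sorted(flats_dir.glob("*.fit*"))],
        "lights": [str(f) for f in sorted(lights_dir.glob("*.fit*"))],
    }

if __name__ == "__main__":
    session = Path("2023-01-14")  # one night's session folder (hypothetical)
    manifest = build_manifest(session / "lights", session / "flats",
                              Path("dark_library/master_dark_300s_gain100.xisf"))
    (session / "calibration_manifest.json").write_text(json.dumps(manifest, indent=2))
```

With something like this, the darks live in one place and each night just carries a pointer to them, which is the alternative to duplicating them in every session folder.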
So... my question for those doing multi-night imaging, especially over a longer period: what do you do? And if you are not saving it all, will you change?
On a related note, I had some data from the prior PI version (this started when 1.8.8-11 came out) and went back and re-calibrated the same images with the same flats and same master dark (I do not have the original darks). So: exact same inputs into the calibration process, same parameters; the only difference was the new PI version. The results were NOT the same. Very close, visually identical, but a PixelMath subtract showed 0.3% of the pixels were different, and the difference pixels had fairly large values (they seemed to be in and around stars).
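For anyone who wants to reproduce that comparison outside of PixelMath, a rough equivalent in Python with numpy and astropy might look like the following (assuming both calibrated results were saved as FITS; the file names and the tolerance are made up):

```python
# Sketch: count how many pixels differ between two calibrations of the
# same frame, roughly what the PixelMath subtract showed. Assumes both
# results were exported as FITS; file names are hypothetical.
import numpy as np
from astropy.io import fits

old = fits.getdata("light_0001_C_CC_old.fits").astype(np.float64)
new = fits.getdata("light_0001_C_CC_new.fits").astype(np.float64)

diff = np.abs(new - old)
changed = diff > 1e-6            # tolerance: exact zero is too strict for float data
print(f"{changed.mean() * 100:.2f}% of pixels differ")
print(f"largest difference: {diff.max():.6f}")

# Locate the worst offenders (e.g., to check whether they cluster around stars)
ys, xs = np.unravel_index(np.argsort(diff, axis=None)[-5:], diff.shape)
for y, x in zip(ys, xs):
    print(f"  ({x}, {y}): old={old[y, x]:.6f} new={new[y, x]:.6f}")
```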
A difference that mattered? I have no idea. But it was a difference in an operation (simple image calibration) that I would have thought was mathematically simple and unchanging.
So Juan is right that I get different results. Of course he is; he wrote the code.
But... it's going to be really painful to keep track of those input files over months. And costly in terms of size (though to be fair, raw images are about half the size of processed ones due to 16- vs. 32-bit storage).
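To put rough numbers on the size question: the ASI6200 produces roughly 62 MP frames, so a back-of-the-envelope estimate (assuming 16-bit integer raws, 32-bit float calibrated files, and a made-up 30 lights per night) looks like this:

```python
# Back-of-the-envelope storage estimate for the scenario above.
# Frame counts follow the post except lights/night, which is assumed.
MPIX = 9576 * 6388             # ASI6200 sensor, ~61.2 megapixels
RAW_MB = MPIX * 2 / 1e6        # 16-bit integer raw: ~122 MB/frame
CAL_MB = MPIX * 4 / 1e6        # 32-bit float calibrated: ~245 MB/frame

nights, lights_per_night, flats_per_night, darks = 10, 30, 25, 50

raw_total = (nights * (lights_per_night + flats_per_night) + darks) * RAW_MB
cal_total = nights * lights_per_night * CAL_MB
print(f"raw archive:       {raw_total / 1000:.1f} GB")
print(f"calibrated lights: {cal_total / 1000:.1f} GB")
```

The 2:1 per-frame ratio is where the "about half the size" above comes from; the pain is less the gigabytes than having to keep all of it, organized, indefinitely.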
I'm torn. What I do now is simple. Is it really worth making it that complicated?
Linwood