Hey Adam-
I re-watched some of your videos and did some experiments myself, but I am still a little unclear on a specific question.
The general question is: "Is there an advantage to using Flat Darks instead of Bias frames to calibrate Flats on a CMOS sensor that has no other issues?" (Specifically the ASI2600 series, which is known for very low noise and negligible amp glow.)
The Master Bias has very consistent pixel values, and no apparent hot pixels.
The Master Flat Dark, at 8.25 seconds, also has very consistent values, slightly higher than the Master Bias, and a few hot pixels. I sampled 3 of the same pixels on each (Master Bias/Master Dark), and the values were 0.0085238/0.0085292, 0.0081241/0.0081284, and 0.0082725/0.0082767 respectively. In short, values on the 8.25 sec MD are roughly 0.0000040-0.0000060 higher than on the MB. (I am sure you showed a way to find an average for this somewhere, but I forget how...)
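One way to quantify that offset over the whole frame, rather than three sampled pixels, is to subtract the two masters and take the mean of the difference. A minimal sketch in Python/NumPy, with synthetic arrays standing in for the real master frames (all values here are made up for illustration; in practice you would load the FITS data of the Master Bias and the 8.25 s Master Flat Dark):

```python
import numpy as np

# Hypothetical stand-ins for the real master frames, normalized to [0, 1].
rng = np.random.default_rng(0)
master_bias = 0.0083 + rng.normal(0.0, 0.0002, size=(100, 100))
master_flat_dark = master_bias + 0.000005  # assumed ~5e-6 dark-current offset

# Per-pixel difference between the two masters
diff = master_flat_dark - master_bias
print(f"mean offset:   {diff.mean():.7f}")
print(f"median offset: {np.median(diff):.7f}")
```

In PixInsight itself, running the Statistics process on a difference image (PixelMath: MD - MB) gives the same mean/median directly.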
To reiterate, the MD also had some hot pixels.
I just watched your tutorial on scaling Darks using the "Optimize Darks" function in PI, but I am pretty sure it wasn't working for me when, as an experiment, I tried to derive a "Flat Dark" from a Bias and a 300-second Dark. You recommend against this anyway, but I wanted to be thorough in my experiment.
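For what it's worth, the usual model behind dark scaling is that a dark frame is bias plus exposure-dependent dark current, so a long dark can in principle stand in for a short one by scaling its bias-subtracted signal. A rough sketch of that idea follows; this is a toy least-squares fit, not PixInsight's actual Optimize Darks algorithm, and every number in it is illustrative:

```python
import numpy as np

# Toy model: dark = bias + exposure * dark_current_rate + read noise.
rng = np.random.default_rng(1)
shape = (200, 200)
bias = 0.008 + rng.normal(0, 2e-4, shape)       # fixed-pattern bias
rate = np.abs(rng.normal(6e-7, 2e-7, shape))    # dark current per second

def dark_frame(exposure):
    read_noise = rng.normal(0, 5e-5, shape)
    return bias + exposure * rate + read_noise

dark_300 = dark_frame(300.0)
dark_825 = dark_frame(8.25)   # the "flat dark" we would like to synthesize

# Least-squares scale factor k so that (dark_825 - bias) ~ k * (dark_300 - bias)
a = dark_300 - bias
b = dark_825 - bias
k = np.sum(a * b) / np.sum(a * a)
print(f"fitted k = {k:.4f} (exposure ratio = {8.25/300:.4f})")
```

With a sensor this clean, the scaled dark-current term is tiny compared to the read noise, which is part of why scaling a long dark down to a flat-dark exposure tends to be unrewarding.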
Here are my specific questions:
Does the minor difference in pixel values (the dark current) make any real difference in the final output, first in the calibration of the Flats and then in the subsequent calibration of the Lights with those Flats?
Does it matter whether hot pixels are better controlled when calibrating the Flats, or not really?
My motive is to better understand why conventional wisdom calls for using a Flat Dark rather than a Bias to calibrate Flats for most CMOS cameras (not counting the problematic ones, which have their own reasons...).
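To put rough numbers on question 1: a flat is applied as a ratio, so what matters is the residual offset relative to the flat's signal level. A back-of-envelope sketch, assuming a flat exposed to roughly half of full scale (both numbers are illustrative):

```python
# How much does a ~5e-6 residual offset (normalized units) perturb a flat
# whose signal sits around 0.5 of full scale? All values are assumptions.
offset = 5e-6          # dark-current difference between MB and 8.25 s MD
flat_signal = 0.5      # illustrative mid-scale flat level

fractional_error = offset / flat_signal
print(f"fractional flat error = {fractional_error:.1e}")
```

On that assumption the error is on the order of 0.001%, which suggests the dark-current offset itself is not the main argument for Flat Darks; the hot-pixel behavior may matter more.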
Thanks-
Anthony