Workflow after FastTrack Training

I recently completed the FastTrack Training, am now working my way through Fundamentals, and have processed some of my own data using the basic workflow introduced in FastTrack, followed by Histogram and Curves transformations, and in one case HDR composition and an HDR multiscale transformation.

Beyond the basic calibration and color correction introduced in FastTrack, is every image a case-by-case use of selected processes, or are there more advanced workflows that are used routinely? For example, I have no understanding of when to pursue noise reduction.

It would be really helpful to get tips on how to proceed to the next stage beyond FastTrack, which was fantastic and got me going with PI.

Thanks
Mark

Comments

  • Hi Mark,

    Yeah, I basically made Fundamentals a way to show the important elements of processing in some depth across a number of topics. This "gets in the way" of simply showing work "flows" as such. (But I would argue that showing the "flow" without the understanding is fraught...)

    This is why I made the "Workflow" examples in Fundamentals. This handful of sessions, if you watch them, will show you the common themes- and when things go off the typical path.

    And then... there are more examples in Horizons of course... this was my grand scheme: have people do the Fundamentals... and see all of the wonderful variations in Horizons, which will give you many ideas.

    Regarding the noise, instead of asking "when" to pursue noise reduction, I would ask whether it is necessary and to what degree. For example, you might want to be more aggressive with noise reduction on color data when combining with a Luminance image (for those processing mono data out there). However, for OSC- if you have gradients in your image, they will mask the "noise" you want to deal with. You can't actually see what you are doing until you run DBE on the data. This is a perennial stumbling block- many first-time image processors think their images got *worse* after DBE. That isn't the case... they are just able to see the noisy stuff more clearly with the current STF.

    I know you might want an answer like "do it after step X"... but there are multiple ways to reduce noise and multiple times to do it. I personally think the best way to demonstrate these decisions is by example. So I am on the case-by-case part of the spectrum.

    One more thing- I hope people experiment and practice- this is a better way to learn than anything I can say of course. Try it out! It does take more time of course... all of that fiddling... 

     -the Blockhead
  • Just chiming in here because, as a lifelong Photoshop user who did the FastTrack and is now watching every Fundamentals video I can, I was going to ask the same question as Mark. There obviously won't be a magic recipe of steps that's perfect for every image/person/setup/etc... If that were the case there would be a 'make good' button in PI and we'd be done with it. But there does seem to be a general order of operations, like what to do in a linear state and what to do after stretching. And when you come from PS there's a tendency to want to use every new toy in PI. At the moment I'm a bit overwhelmed by the wonderful tutorials and just looking for a general overview.

    Between here, the PI forums, and the Deep Sky Imaging Primer book, the general workflow seems to be:

    1) WBPP for calibration (Adam's tutorials cover this incredibly well)
    2) Register/star align those
    3) Normalize using the amazing NSG, then run DBE on the integrated result
    4) Deconvolve (best early on in the linear state and before noise reduction)
    5) Denoise low-SNR areas using an inverted luminance mask (Adam said TGVDenoise is preferred in the linear phase)
    6) ---Here we would apply our stretch and leave linear land---
    7) Noise reduction, phase 2: MURE or MMT
    8) HDRMultiscaleTransform (particularly in my case since I have M42 in there)
    9) LRGB combination (does this truly matter if we are faking the L from our OSC camera? see the sketch after this list)
    10) Local Histogram Equalization
    11) Curves transforms to lightness and saturation
    12) Morphological Transform
    13) Multiscale Linear Transform
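
    (Re: point 9, mostly thinking out loud: a synthetic L from OSC data is basically just a weighted blend of the debayered R, G, and B channels. Below is a minimal NumPy sketch of that idea, nothing to do with PixInsight's internals; the function name and the Rec.709 weights are only illustrative, and PI's own CIE L* extraction via ChannelExtraction would be the color-managed way to do it.)

        import numpy as np

        def synthetic_luminance(rgb):
            """rgb: H x W x 3 linear array in [0, 1]; returns an H x W luminance plane."""
            weights = np.array([0.2126, 0.7152, 0.0722])  # illustrative Rec.709 luma weights
            return rgb @ weights                          # per-pixel weighted sum of R, G, B

        # quick check with a stand-in "image"
        rgb = np.random.rand(64, 64, 3)
        L = synthetic_luminance(rgb)   # shape (64, 64)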

    As I mentioned, there's no magic roadmap of course; I'm just trying to get a general compass heading on the PI workflow.

    My confusion/question at the moment is why star de-emphasis typically comes so late in the game. Attached is a snapshot of my current workspace; there are SO MANY stars (bottom-left panel of the attachment) showing up in this Orion widefield (35mm lens on a QHY268C) that it seems the stars will just be 'in the way' of most early processes... There is a temptation to go immediately to StarNet (the new v2 is working great!) and go to town on the starless image. But I'm sure I'm missing something?

    I'll re-watch the Fundamentals Workflow Examples on Tau Canis Cassiopeia again now, as those are likely closer to my situation....

    I have about 12.5 hours of integration time, so I feel good about my SNR and think I might have something really great, fingers crossed!

    In other news, drizzling had no meaningful effect, which surprised me since I am severely undersampled (35mm focal length @ 3.76 µm pixels), but I was very diligent about dithering, so perhaps that solved it the 'authentic' way....
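
    (For anyone curious how undersampled: here's the standard pixel-scale formula with my numbers, just back-of-the-envelope Python, nothing PixInsight-specific.)

        # Standard sampling arithmetic: scale (arcsec/px) = 206.265 * pixel_size_um / focal_length_mm
        pixel_um = 3.76   # QHY268C pixel pitch in microns
        focal_mm = 35.0   # lens focal length in mm
        scale = 206.265 * pixel_um / focal_mm
        print(round(scale, 1), "arcsec/pixel")   # ~22.2 "/px, so a star FWHM of a few arcsec fits inside one pixel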

    OK enough ranting !!!
    Casey

    Attachment: OrionWIP.PNG (2560 x 1560)