Wish list for tutorials

Living as I do under the Seattle light dome, I have been experimenting with narrowband imaging and getting good results in terms of data. Processing is another story. I would love to see in-depth coverage of both bicolor Ha/O3 processing and Hubble palette SHO processing to get the great colors you see online. Thanks!

Comments

  • This is a request I have received many times. I do plan on doing this. 
    However, some caveats:

    1. It isn't something I am an expert in. After I complete more of the Fundamentals material, I will delve into OSC, narrowband, etc.
    2. At the moment, I wouldn't consider it "Fundamental" and likely the content would fall under Horizons. However... I think a simple workflow in Fundamentals is possible. 
    3. It is likely I would need to ask the community for some data. 

    All that being said, quite a bit of the instruction I am trying to create is relatively independent of the data type. For example, masks, whether applied to NB data or anything else, have the same properties. Many topics are like this. 

    Anyway... just some quick thoughts on the subject. Thank you so much for the blip!

    -the Blockhead
  • Thanks! For those of us in light-polluted areas, NB has become a real boon. But the processing is a bit of a different kettle of fish!
  • I'd be happy to contribute narrowband data; however, I've really just started with it, so it will take me time to acquire more.

    Here is a quick Eagle Nebula I did with narrowband: Eagle Neb

    It's not very good but it shows some promise if I can get more data.


    My request: This might be more of a Horizons thing... I have a refractor, but I love diffraction spikes. StarTools has a tool called "Synth" that will let you add or remove spikes; I've only recently started playing with it. I have not yet stumbled upon anything similar in PI, but it seems to me something like it must exist in PI to accomplish this.

    Alternatively, I found out from a YouTuber that you can tape threads over the objective to form an X and thus get spikes in the data... I might try that.
  • Regarding processed diffraction spikes...I am not a big fan. 
    (I will spare you the soap box screed...)

    Yes, the method of putting thread in front of the aperture... now that is perfectly fine (though refractor purists might cringe). Those spikes will be real and have the correct diffraction pattern and wavelength dependency.

    I would use this method for very short focal length systems... it helps to bring out star color. If the stars are the size of a pixel, a diffraction spike spreads this light out, and for bright stars this helps.

    Here are some VERY old examples when using a 76mm TeleVue telescope:


    -the Blockhead
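
    As a rough illustration of why thread spikes are "real" (a toy model, not anything from the videos): the star image is essentially the squared magnitude of the Fourier transform of the telescope aperture, so crossed threads over the objective produce genuine diffraction spikes. A minimal numpy sketch, with all sizes chosen arbitrarily:

    ```python
    import numpy as np

    # Toy far-field diffraction model: the PSF (star image) is the squared
    # magnitude of the Fourier transform of the aperture. Crossed thin wires
    # over a circular aperture yield the familiar four-pointed spikes.
    N = 512
    y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    aperture = (x**2 + y**2 < (N // 4)**2).astype(float)  # open circular aperture
    aperture[np.abs(y) < 2] = 0.0  # horizontal thread blocks light
    aperture[np.abs(x) < 2] = 0.0  # vertical thread blocks light

    psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2
    psf /= psf.max()

    # Each wire diffracts light perpendicular to itself, so the PSF axes stay
    # far brighter than nearby off-axis points well away from the core.
    core = N // 2
    print(psf[core, core + 100] > 10 * psf[core + 70, core + 100])
    ```

    Making the wires thinner lengthens the spikes, which matches the intuition that a fine thread spreads starlight far along the perpendicular axis.
    
    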
  • Hi Adam
    You mentioned in your video "Deconvolution" that this is OK for critically sampled and oversampled images.
    I think a short tutorial on the theme "Under- and oversampling" would be worthwhile. Answers to the question "What are the winning steps for undersampled images (let's say 3.5 arcsec/pixel)?" would be of interest.

    Thanks Ed
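
    For context (an aside, not from the thread): the image scale follows from pixel size and focal length, and a common rule of thumb compares it to the seeing FWHM. A minimal Python sketch, with the telescope numbers assumed purely for illustration:

    ```python
    # Image scale ("/pixel) = 206.265 * pixel size (um) / focal length (mm);
    # 206265 arcseconds per radian, with the um->mm conversion folded in.

    def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
        """Return the image scale in arcseconds per pixel."""
        return 206.265 * pixel_size_um / focal_length_mm

    def is_undersampled(scale_arcsec: float, seeing_fwhm_arcsec: float) -> bool:
        """Nyquist-style rule of thumb: coarser than FWHM/2 is undersampled."""
        return scale_arcsec > seeing_fwhm_arcsec / 2.0

    # Assumed example: 9 um pixels on a 530 mm focal length scope, 3" seeing.
    scale = image_scale(9.0, 530.0)
    print(round(scale, 2), is_undersampled(scale, 3.0))  # prints: 3.5 True
    ```

    So a 3.5 arcsec/pixel setup is undersampled under typical seeing, which is where drizzle integration (mentioned below) becomes relevant.
    
    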
  • edited August 2018
    Hi Edwin,

    Thank you for the suggestion. There are some basic artistic considerations that apply generally- but are very useful for creating the "appearance" of resolution. Deconvolution on its own is usually something that is subtle and only appreciated if a viewer "zooms" in and looks at very small structures.

    But the "appearance" of resolution- even though in a quantifiable sense there isn't an improvement- is a much more powerful processing aesthetic. Consider this image:


    This image does not have the jaw dropping resolution that many of the images I create do have (taking advantage of the location and equipment)- but because of my "clever" use of HDRMT and a few non-linear choices- I think this looks highly resolved. 

    Another example is this one:


    Again, the FWHM of the stars in the original data isn't great- this object is quite low in the sky in the Northern Hemisphere. What gives it the "appearance" of resolution is careful contrast adjustment. Sometimes image processors render images that have a uniform contrast across the frame. The thing is, just as lightness needs darkness to stand out... sharp edges need FUZZINESS to give the appearance of structure and detail. 

    So great idea and I will try to incorporate the above into future recordings. At the moment, I am working on the somewhat less sexy- but useful- Linear Fit lesson.

    -The Blockhead
  • Thanks for your information about "sharpness". This stage is coming soon in my learning curve.
    To my idea for a short tutorial:
    Undersampled images call for cosmetic correction (using a master dark, as in your video), but also for drizzling the images. For me this also means more (>25) and shorter exposures.
    I think there are more hints, and I am happy to have your videos to learn the why and what of the way of Adam Block.
    Thanks Ed

  • Really looking forward to the noise reduction material. My weakness (one of them!) is blotchy backgrounds after NR and balancing NR with sharpening. Right now I do MLT in the linear domain with a luminance mask, then TGVDenoise after stretching. I really envy those images that are smooth but retain detail.
  • Adam, thanks for the videos. I haven't had much time to watch them, but the DBE alone was an eye-opener. I second the request for narrowband processing, especially the color management part of it and using the SHO-AIP script or similar. That is, unless I have missed something and you cover this already somewhere else.


    - Mikko
  • I have not covered it. I will eventually get around to this... but I am a bit behind (in my book) on a few sections. I really want to get ImageIntegration, SubframeSelector, and a few others out first.

    -the Blockhead
  • I would be very interested in the SubframeSelector instruction. Everything I have read seems to be partial information, with the all-important "why" missing from the steps. Without the why, one does not know how to make proper adjustments. I see also you mentioned a book; that will be great (even better if you can get it in Kindle format as well as hard copy :) ). 

    Thanks for all the work you put into these tutorials.

    Greg
  • Greg,

    There is a delay completing the ImageIntegration and SubframeSelector sections because I am *still* waiting on some answers to questions I have. However... I just can't seem to raise Juan for answers. There are many "why"-type answers I don't know (or cannot know) without some help. 

    -the Blockhead
  • I do like your explanations because you give so much more information, which gives me a better understanding. The materials I see elsewhere are the ones I meant; they leave out the why and the explanations.
  • I'd like to have sample image files to download and work through with you on the tutorial videos for Pixinsight. Is this possible?

    Thanks,

    Stan
  • Hi Stan,

    The workflow section of Fundamentals does have an image set.
    I will be making many such things available in Pixinsight Horizons as well (two sets are already there). 
    Let me know if this is what you mean. More to come...

    -the Blockhead
  • Adam,

    I would like to request more information on combining Narrowband data with LRGB data. In particular, adding Ha data rather than full SHO narrowband images. I have found that how you add Ha can really mess up the color correction process. 

    Thanks,

    Chris Foster
  • I will likely do more of this later.
    I have an example right now in PI Horizons. I work with Ed Stuckey's data of NGC 6888 as taken from Detroit.
    -the Blockhead
  • Do you cover mosaics anywhere in the published content? I keep finding that I need to stitch many images together to cover objects. Blending them seamlessly is necessary but often elusive.

    Thanks. I wish I had been able to dedicate more time to astrophotography over the last year, but I always enjoy it when I come back, and I really appreciate that I can review the content I've seen before but forgotten while I was "on a break".

    Cheers!
  • I don't actually have any good data- and not much experience. Frame adaptation should help. My methodology in the past was to use Registar (such old software!). I have not done any of this in PI yet. Much to learn...

    -the Blockhead
  • My request is: What is the most effective way of exporting images from PixInsight to Photoshop? The brightness never seems to match. I am working in the sRGB colour space in both programs, and I have calibrated my monitors with a photometer. Images look OK in PixInsight but, when saved as a 16-bit TIFF, they are brighter in Photoshop. Moving between the programs can be challenging.

    Terry 