Combining/merging images with different framing

Hi, I’m hoping this is something relatively straightforward?

I’ve basically got about 60 hrs of data on the California Nebula. About half (RGBHSO) was taken 5 years ago in the framing I like. Recently, I’ve been taking SHO to improve things, and I’ve also moved from the ASIAir to NINA. I used the framing wizard and an old reference image to frame up what I thought was a matching frame for the new sets of imaging. However, I obviously messed this up! (I need to understand how to confirm rotation properly in NINA!) As a result, my new framing is about 90 degrees out from the previous set, causing large diagonal black areas when I combine them in WBPP using a recent image as the reference. The important nebula areas combine great, but the star areas in the wider framing lose a lot!

So my question is: what is the best way to combine the two framings so I can use the nebula SHO from all the images together with the RGB stars, while retaining the framing of the first set of images?

I’m pretty new at this, as I have been spending the majority of my time collecting data and am only now really venturing into processing!

Thanks for any assistance anyone can provide

Comments

  • Hi Bryn,

I have used the MosaicByCoordinates script in PI, which has been fantastic at matching the images, and then blended them with ImageBlend.

Not sure if this helps you?

    regards,

    Haider
  • Hi Bryn,

    Can you upload an example JPEG that shows your work? You will need to put it on a server you have access to and share the link.

Based on your question, it is not clear to me that you are optimizing the normalization of the images.
The first point is that the result will only be useful where one or both images fall within a rectangle. You can also keep a larger, irregular image if you want... but I do not think this is what you are asking for.

Within this rectangle you may have different contributions from one framing or the other. They may have different amounts of S/N (noise), which you cannot change, but the overall brightness of each contribution should match with proper normalization.

    -the Blockhead
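To illustrate the normalization idea above (not a PixInsight recipe, just a minimal NumPy sketch with made-up synthetic data): two sessions of the same field will typically differ by a gain factor and a sky-background pedestal, and a linear fit over the overlapping pixels recovers both so the contributions can be matched in brightness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scene": the same field as captured by two sessions.
scene = rng.gamma(2.0, 50.0, size=(100, 100))

# Session B has a different gain (1.8x) and sky pedestal (+120 ADU),
# plus independent read noise in both sessions.
img_a = scene + rng.normal(0.0, 2.0, scene.shape)
img_b = 1.8 * scene + 120.0 + rng.normal(0.0, 2.0, scene.shape)

# Fit img_b = k * img_a + b over the overlap region, then invert
# to bring B onto A's brightness scale.
k, b = np.polyfit(img_a.ravel(), img_b.ravel(), 1)
img_b_norm = (img_b - b) / k

print(round(k, 2), round(b, 1))
```

Tools like WBPP's local normalization do something considerably more sophisticated (spatially varying corrections rather than a single global fit), but the goal is the same: after normalization, the two framings agree in brightness wherever they overlap.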
Thanks. I’ll post images tomorrow. I don’t have the original integrated image anymore, but I will share the individual stacks.

Hopefully this works? There are 3 images:

    1. Original framing of RGBH (around 20 hrs)
    2. New Framing of HSO (currently around 35 hrs but increasing)
3. HSO against the RGB reference image in WBPP, showing hard cropping in the corners as the two images are about 55 degrees apart (my error!)

I'm trying to utilise all the light frames to form a single image.

This is fine. So you should integrate all of these images together and use normalization.
You did not show this image.

The information from the heavily rotated image can still be incorporated into the other two (which are similar). The parts of the rotated image where there isn't information that overlaps the other images will simply be rejected/ignored.

    -the Blockhead
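To make the point above concrete (again just an illustrative NumPy sketch, not what WBPP does internally): if the registered frames live on a common grid and pixels with no coverage are marked as missing, combining them with a coverage-aware mean means the rotated frame contributes where it has data and is simply ignored where it doesn't.

```python
import numpy as np

# Three registered frames on a common 4x4 grid. The rotated frame (f3)
# only covers part of the field; NaN marks "no data", mimicking the
# diagonal black areas from the mismatched framing.
f1 = np.full((4, 4), 10.0)
f2 = np.full((4, 4), 12.0)
f3 = np.full((4, 4), 11.0)
f3[:, 2:] = np.nan  # right half of the rotated frame falls off-field

stack = np.stack([f1, f2, f3])
combined = np.nanmean(stack, axis=0)  # per-pixel mean over available frames

print(combined[0, 0])  # all three frames contribute here
print(combined[0, 3])  # only f1 and f2 contribute here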
  • Great thanks. I’ll give it a try

Re-ran WBPP using the RGB-framing image as the reference, with local normalisation set to "integration of best frames" and the evaluation criterion set to PSF signal weight. This time I ensured that the post-calibration exposure tolerance was set to 500 so that the 120s, 300s and 600s images I have were included, and they all stacked together in the desired framing. SUCCESS!!

There are some straight-line artefacts at the overlaps, which I will have to investigate further and try to reduce.

Thanks very much! I can now recover a further 20 hours of data into my final image!
