
Member Information

  • Main editing computer
    PC desktop
  • Cameras, lenses and other photographic equipment
    Nikon D7000, D750, D780, Sigma 150-600, Nikkor 35 1.4, Nikkor 300 f2.8 prime manual, Nikkor 24-120f4, Nikkor 12-24f2.8, Nikkor 80-400 f4.5-5.6, various speedlights, MIOPS trigger, Benro gear head, ES102mm APO triplet refractor with ZWO ASI1600Pro camera behind .8 focal reducer.


  1. Thanks for taking a look and sharing the solution. I'm still working, but will review calibration, get my monitors squared away, take another pass at white balance, and then see what I can do with your solution. Thanks again.
  2. Of course, that's why I posted.
  3. That's fine. It's not relevant to the course. Flats neutralize sensor and optical train anomalies such as dust specks and vignette. This is an older master luminance flat I used to calibrate the L channel of the image you fixed, but the new flat I shot last night looks drastically different. The black extends in a gradient from one corner to mid frame. Flats change over time and need to be redone, but it's great that the image can be salvaged to some degree in PS. I assume the 'how' of that process will become apparent to me as I continue with your classes.
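The flat calibration mentioned above (neutralizing dust specks and vignette) can be sketched in a few lines. This is a minimal illustration with synthetic numpy arrays and made-up names, not the poster's actual PixInsight workflow:

```python
import numpy as np

def calibrate(light, master_flat, master_dark=None):
    """Apply a master flat to a light frame.

    Dividing by the unity-mean flat removes dust shadows and
    vignetting; an optional master dark is subtracted first.
    All frames are float arrays of the same shape (illustrative
    names, not tied to any particular package).
    """
    light = light.astype(float)
    if master_dark is not None:
        light = light - master_dark
    norm_flat = master_flat / np.mean(master_flat)  # unity-mean flat
    return light / norm_flat

# Synthetic example: a flat with ~20% corner vignetting.
flat = np.fromfunction(
    lambda y, x: 1.0 - 0.2 * ((x - 50) ** 2 + (y - 50) ** 2) / 5000,
    (100, 100))
light = 1000.0 * flat          # evenly lit sky, dimmed by the vignette
out = calibrate(light, flat)   # the vignette divides out to a flat field
```

After calibration, `out` is uniform: the same corner-to-center falloff present in both the light and the flat cancels, which is why a stale master flat (one that no longer matches the optical train) leaves the kind of residual gradient described above.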
  4. That looks pretty damn good. I shot new flats tonight and think the root cause might be there. There is a hard gradient running diagonally from one corner to about center frame.
  5. Ideally, I should probably just collect new data when the smoke clears, but this represents nearly 50 hours of exposure time and it would be great to salvage it.
  6. The latter. I'd just like a more even color distribution, more consistent with the top right. The yellow/brown dust lanes are about right until they drop into the bottom right of the frame and get overwhelmed by the yellow-green.
  7. Sorry, there's no reason to keep it in this case and a couple really good reasons not to. In simplifying the description of processing, I failed to mention a dynamic crop is applied to each channel before integration. The actual color image has not been cropped. You can even see the dead pixels on the bottom of the color image. This reduces processing time and the lost data is just not necessary for astro. I read the link you posted about cropping several times and understand the content, but it just doesn't apply to astro. In this instance, M31 fills the entire uncropped frame, so there is no usable data missing from the image.
  8. I'll try to explain in a little better detail... Multiple images are shot through a Red, Green, Blue or Luminance filter for RGB images. Narrowband images are shot through filters that only expose the sensor to narrow frequencies of light corresponding to the emission band they capture: Hydrogen-alpha, Sulphur II and Oxygen III. This is what Hubble does quite frequently, creating an SHO image mapping SII to Red, Ha to Green and OIII to Blue. A single exposure is generally 5-12 minutes long, and I generally shoot 30 to as many as 100+ exposures through each filter. Each exposure by itself is virtually worthless, and its signal-to-noise ratio is rather poor. Stacking these images increases the ratio and creates a single monochromatic image for that specific channel. The framing is shifted by a few pixels from one frame to the next to eliminate "walking noise", a process called dithering; as a result, the first dozen or so pixels around the border are devoid of usable data and have to be cropped before processing. I posted the mono image to show this effect after you asked about cropping. Once each separate channel is processed in this way, they are mapped to their respective color channels to create an RGB image. It is this image that has color calibration and gradient issues, and that is what I need help with. I use specialized software called PixInsight to do all processing, but it is extremely complex and I'd like to use Photoshop for some of the final processing if possible. Sorry for the confusion. Here is an example of a well processed image of M31: https://thumbor.forbes.com/thumbor/960x0/https%3A%2F%2Fspecials-images.forbesimg.com%2Fimageserve%2F612f383e9ab18a007b719776%2FAndromeda-Galaxy%2F960x0.jpg%3Ffit%3Dscale
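The stacking and border-crop steps described in this post can be sketched with synthetic data. The frame count, noise level, and the 12-pixel crop are illustrative (echoing the "first dozen or so pixels" above), not measured values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated subframes for one filter: a faint fixed signal plus noise.
signal = np.zeros((64, 64))
signal[30:34, 30:34] = 5.0          # a faint "star"
subs = [signal + rng.normal(0, 10, signal.shape) for _ in range(50)]

# Averaging N registered frames cuts random noise by roughly sqrt(N),
# which is why a single sub is "virtually worthless" but 50 are not.
stack = np.mean(subs, axis=0)

# Dithering shifts the framing a few pixels per sub, so the outer
# border lacks full coverage; crop it before further processing.
cropped = stack[12:-12, 12:-12]

noise_single = np.std(subs[0][:20, :20])   # background patch, one sub
noise_stacked = np.std(stack[:20, :20])    # same patch after stacking
```

With 50 frames, the stacked background noise lands near one seventh of a single sub's, so the faint 5-count signal goes from buried to clearly detectable.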
  9. The first image is the only color image. You can't make a color image from a single channel. Turn off the Red and Green channels on an image in PS and you have a single Blue channel that is monochrome. I take many separate images in each channel, stack each channel, then map the stacks to their respective color channels to create the first image I posted.
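The mapping described here amounts to placing each mono stack into one plane of an RGB array. A minimal numpy sketch, with random placeholder arrays standing in for the real per-filter stacks:

```python
import numpy as np

# Placeholder monochrome channel stacks (in practice, the per-filter
# stacked images described above).
r = np.random.default_rng(1).random((32, 32))
g = np.random.default_rng(2).random((32, 32))
b = np.random.default_rng(3).random((32, 32))

rgb = np.dstack([r, g, b])   # map each mono stack to its color channel

# Viewing only one channel of the result is monochrome again,
# matching the "turn off Red and Green" point above.
blue_only = rgb[:, :, 2]
```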
  10. The color throughout should be more consistent with the upper right. I believe the bottom left has discoloration from smoke lower on the horizon over some of the nights I was collecting the data.
  11. It's a mono camera. The single images are stacked for each channel (LRGB) and cropped to exclude sections with missing data, then combined into the color version. There's no gradient in the mono images. I posted the mono to show the cropped area.
  12. Here's the same frame after stretching.
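"Stretching" here refers to the nonlinear remapping that lifts faint detail in linear astro data. One common choice is an asinh stretch; this is a generic sketch, not necessarily the stretch the poster applied in PixInsight:

```python
import numpy as np

def asinh_stretch(img, soften=0.01):
    """Simple asinh stretch for data normalized to [0, 1].

    Small `soften` values lift faint pixels aggressively while the
    normalization keeps white at white. (Illustrative parameters.)
    """
    return np.arcsinh(img / soften) / np.arcsinh(1.0 / soften)

linear = np.linspace(0.0, 1.0, 5)
stretched = asinh_stretch(linear)   # faint values lifted, order preserved
```

Unlike a plain gamma curve, asinh compresses bright stars less harshly relative to faint nebulosity, which is why it (and the similar midtone-transfer stretch) is popular for astro images.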
  13. ZWO ASI1600mm Pro, but I have to backpedal on that previous statement. The XISF was ever so slightly cropped to remove a band along the very bottom that had no data. Here is the original Ha channel output as a JPG.
  14. Not really sure how to go about normalizing the color on this from corner to corner, but I thought I'd throw it out there. I actually don't even know where to start with Photoshop, or whether it's even the right tool...
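One starting point for the corner-to-corner normalization being asked about is background-gradient modeling: fit a smooth surface to each channel's background and subtract it. Tools like PixInsight's DynamicBackgroundExtraction do this with user-placed background samples and higher-order surfaces; the sketch below fits only a plane to a fully synthetic frame, just to show the idea:

```python
import numpy as np

def remove_plane(channel):
    """Fit a plane to a channel and subtract it, keeping the mean level.

    Real gradient-removal tools fit only to sampled background points
    (so stars and nebulosity don't bias the fit) and use higher-order
    surfaces; this least-squares plane is the simplest version.
    """
    h, w = channel.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, channel.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return channel - plane + plane.mean()   # flatten, keep overall level

# Synthetic frame: uniform sky plus a corner-to-corner gradient,
# like the smoke discoloration described above.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
frame = 100.0 + 0.5 * x + 0.3 * y
flat_frame = remove_plane(frame)
```

Running this per channel on the color image (or on a star-masked copy) evens out a linear cast; a real smoke gradient would likely need the higher-order, sample-based approach.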