I don't have much experience with DSS, but as far as I know the results should be very similar. "Lights" is the term for a single exposure. The technique is basically the same no matter which software you use.
But if you have any Siril-specific questions, feel free to drop them :)
It's hard to tell from your image, but it appears you could get even more detail if you register your lights onto the comet itself and then stack all the images. I used Siril for the two-step registration process.
But nice image nonetheless!
Cool, thank you!
Let's see what the next few days bring. As the comet rises higher, maybe it will become even more visible.
Your image doesn't look too bad either. On my phone the comet looked the same as yours.
The comet is visible even in a single exposure because it is very bright; it was also visible to the naked eye. But stacking reveals even more detail because it improves the signal-to-noise ratio. Stacking also helps remove unwanted artifacts like satellite trails, planes, or moving clouds.
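To give a rough feel for the signal-to-noise argument: averaging N frames with independent noise improves the SNR by roughly sqrt(N). Here is a tiny illustrative Python sketch (the numbers are made up, not taken from this image):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0        # true pixel value (arbitrary units)
noise_sigma = 20.0    # per-frame noise standard deviation
n_frames = 25         # number of light frames in the stack

# Simulate n_frames noisy exposures of the same pixels and average them.
frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 100_000))
single = frames[0]
stacked = frames.mean(axis=0)

print("SNR of a single frame:", signal / single.std())   # ~ 5
print("SNR of the stack:     ", signal / stacked.std())  # ~ 25, i.e. sqrt(25) = 5x better
print("Expected improvement: ", np.sqrt(n_frames))
```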
Thanks for sharing! The software worked better than expected on your image!
During my research before purchasing the program I also stumbled over your linked forum post. However, I found it very misleading: the software does not generate details learned from other images, it only works with the data already in your image. As it is a deconvolution tool, the results can deviate slightly from the true nature of the object, but that has little to do with AI being used here. I needed a whole semester at university to truly understand the maths behind it. My biggest problem is that the software isn't open source, so one can't look into all the details. But there are already people working on open alternatives.
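For anyone curious what "deconvolution" means here: conceptually it tries to undo the blurring of the optics, guiding and atmosphere described by a point spread function, using only the data in the image. This is not BlurXterminator's proprietary algorithm, just a minimal open-source sketch of classical Richardson-Lucy deconvolution with scikit-image, using an assumed Gaussian PSF and a stock test image:

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import color, data, restoration

# Test image stands in for an astro frame; the PSF is an assumed Gaussian blur.
image = color.rgb2gray(data.astronaut())

x = np.arange(-7, 8)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

blurred = convolve2d(image, psf, mode="same", boundary="symm")

# Richardson-Lucy iteratively estimates the image that, convolved with the PSF,
# best explains the blurred data; it redistributes flux that is already there,
# it does not invent detail learned from other images.
deconvolved = restoration.richardson_lucy(blurred, psf, 30)
```

In practice the PSF would be estimated from the stars in your own frame rather than assumed.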
But this is already a very specific problem; don't forget that the quality of the data itself makes the biggest difference. I wish you the best of luck on your journey!
Oh, and I forgot to mention: one other piece of advice I would give is to find the darkest shooting location you can access. Lightpollutionmap really helps with finding such places.
Edit: unstretched file with only background extraction and deconvolution: https://drive.proton.me/urls/QD4870ZMF4#uyVBYxKgWxgb
For an untracked shot this doesn't look bad at all. To improve, I would do the following:
Are you interested in sharing the raw stacked file? I use a (paid) deconvolution tool called BlurXterminator and I wonder if it can handle such extreme star shapes. If it works I will of course send you the file.
In amateur astrophotography narrowband filters are used to reveal interstellar ionized gas clouds. In this case you can see hydrogen nebulae as red blobs in the spiral arms around M31.
Well, regarding the high cost: apart from the fact that producing them is technologically challenging, it is just simple economics. The market for these filters is still small compared to other products, so the price must be higher to cover production and development.
Thanks!
No, not for this target. As I use a dual narrowband filter I'll always get OIII as well, but as you said, in this case it just shows the normal continuum. I shot this data to combine it with RGB in the future, but I liked how M31 looks in narrowband alone, so I wanted to share it anyway.
The equipment put together:
I couldn’t agree more, thank you for sharing so much information!
We were in exactly the same situation and bought a Fuji camera. We are very happy with our decision, as we can shoot both 'normal' photography with the feel of a nice camera body and astrophotography at a beginner level.
The results we got so far exceeded our expectations by far; we posted some of our images here in this sub, and here in full resolution.
One thing to keep in mind is that normal cameras block most of the deep red and infrared light around the hydrogen-alpha line (656 nm), which makes them less suitable for shooting hydrogen nebulae. That's a minor reason why we eventually chose a Fuji camera, as they filter a bit less of that light than other brands.
In the end, the lens/telescope makes the biggest impact. After a lot of research we chose the Samyang/Rokinon 135mm f/2.0 lens. We also found it very rewarding to shoot at such a 'short' focal length, because it forgives minor inaccuracies while still giving very good results.
For us the biggest reasons for this hobby are experiencing the night sky with our own equipment and learning a lot (about physics, data processing, cameras, …). Both can be achieved with modest equipment, and I would keep that in mind when comparing your own images with others'. I also personally love the challenge of getting the best possible results with the gear you already have.
Hope that helped a bit.
Full resolution image and more details here
Also, this is what our setup for shooting such an image looks like:
Also, here is a 3D animation of the setup we used to shoot this image:
Very interesting, thank you for sharing. Your linked gif makes it very apparent that it was indeed a satellite.
I also searched in Stellarium and found a decommissioned military satellite called STSS Demo 2 that fits the path and time stamp perfectly.
-9 magnitude is insane, must’ve been a very cool sight.
Ah ok, so I assumed you registered all your light frames on the stars, since your stars look very sharp. That's the normal way you would do it for any astro image. A comet, however, moves so fast that its position changes even in the short time frame in which you took the images.
So after registering all the images on the star pattern, you do a second registration where you mark the position of the comet on the first frame and on the last frame. With that, all images are aligned on the comet, and the stars appear to move in the background. As your stars look so sharp, I assumed you didn't do the second registration. In DSS there is a comet mode for that, but I haven't worked with it, so I can't tell you about the workflow in that program.
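Conceptually, the second registration just shifts each already star-aligned frame so the comet ends up in the same place before stacking. This is not the actual Siril or DSS implementation, only a minimal Python sketch under the assumption that the comet moves linearly between the first and last frame; `stack_on_comet` and its arguments are hypothetical names for illustration:

```python
import numpy as np
from scipy.ndimage import shift

def stack_on_comet(star_aligned_frames, comet_xy_first, comet_xy_last):
    """Mean-stack a star-registered sequence after re-aligning it on the comet."""
    n = len(star_aligned_frames)
    x0, y0 = comet_xy_first
    x1, y1 = comet_xy_last

    shifted = []
    for i, frame in enumerate(star_aligned_frames):
        t = i / (n - 1)                      # fractional position within the sequence
        comet_x = x0 + t * (x1 - x0)         # comet position interpolated linearly
        comet_y = y0 + t * (y1 - y0)
        dx, dy = x0 - comet_x, y0 - comet_y  # shift that puts the comet back at frame 0
        shifted.append(shift(frame, (dy, dx), order=1))  # scipy expects (row, col)

    return np.mean(shifted, axis=0)          # comet stays sharp, stars now trail
```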
Hope that helped in any way!