I am ready to dig into stacking methods. I am studying the Helicon explanation page today.
https://www.heliconsoft.com/helicon-foc ... arameters/
What does surface level mean? Does a big change in surface level mean a nearby object right next to (in the image) a far-away object?
Thanks,
Phil
choosing methods for stacking
Re: choosing methods for stacking
I think the docs are referring to relative depth (height, in this example). Consider the image fragments below, which show leads and pins some distance above a surface. The first two were produced with methods B and C. In either method you may see halos due to a lack of in-focus information around (in this case) foreground edges: if the nearest objects are in focus there is a blurred table below them, and if the table is in focus the pins above it are blurred and bloated.
In the retouching tab there is an option to show a depth map, and this can be useful to show issues in the processing (notice the sudden jumps in the depth map tone around the pins and in a few other places where there isn't a sharp depth change in reality).
It would be nice if there were a "halo mask" feature that output a mask of the likely problem areas to help with subsequent retouching, but there isn't, so halo reduction may require manual effort in an editor. It is doable, though, e.g.
Re: choosing methods for stacking
Thanks, Nick.
The halo-reducing retouching is sometimes doable, but often there is no good nearby in-focus material for cloning. Even in your example of good retouching there are places where it's fuzzy around the pins. See, for example, the area around the third pin from the right.
I've done some mathematics with the lens equation to confirm what I think the problem is. I'll post the equations another time. Short conclusion: when focused on the far object, the magnification of the image is larger. That makes the near object larger, even though it's unfocused, when you focus on the far object. So in the halo you have a choice between out-of-focus background and out-of-focus foreground.
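The underlying geometry can be sketched with the thin-lens equation. The numbers below are illustrative assumptions (a 100 mm lens at f/8, objects at 0.5 m and 1 m, unit focusing); the sketch just computes where each object comes to focus and how large the near object's blur disk becomes when the sensor sits at the far object's focal plane:

```python
# Thin-lens sketch of why a defocused foreground grows a halo.
# All quantities in millimetres; the numbers are illustrative assumptions.
f = 100.0                       # focal length
N = 8.0                         # f-number
A = f / N                       # aperture diameter
u_far, u_near = 1000.0, 500.0   # object distances (far subject, near pin)

def image_distance(u, f):
    """Thin-lens equation 1/u + 1/v = 1/f, solved for the image distance v."""
    return u * f / (u - f)

v_far = image_distance(u_far, f)    # sensor position when focused on the far object
v_near = image_distance(u_near, f)  # where the near object would focus

# With the sensor at v_far, the near object's rays converge behind it,
# so it is rendered as a blur disk whose diameter grows with the separation:
blur = A * (v_near - v_far) / v_near

print(f"v_far  = {v_far:.2f} mm")
print(f"v_near = {v_near:.2f} mm")
print(f"foreground blur disk ≈ {blur:.2f} mm on the sensor")
```

With these assumed numbers the foreground blur disk comes out to about 1.4 mm on the sensor; the blur expands the pin's silhouette by roughly half that diameter on every edge, and no frame in the stack contains the background hidden behind the expanded silhouette, which is the halo.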
The only fix I have imagined is software and hardware that move both the lens and the sensor so that the magnification stays the same when you refocus. That seems way more complicated than just settling for small prints when the halos are bad.
Phil
Re: choosing methods for stacking
Your thoughts capture exactly the issue. If avoiding it by choosing a different composition and/or shooting at a higher f-number isn't an option, and fixing it from other content in the stacked image isn't viable, another approach that works in some situations is to shoot stacks with and without the subject and merge them. The stack without the subject can be used purely to add texture via frequency separation, or simply masked into place where required. It's also a task where an AI model might be successful.
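As a rough sketch of the masked-merge idea (assuming the two stacked outputs are already aligned; the toy grey-value "images" and names below are purely hypothetical), the with-subject and without-subject results can be combined per pixel with a feathered mask:

```python
# Sketch: merge a with-subject stack and a without-subject stack using a
# mask (1.0 = keep the subject stack, 0.0 = use the clean background).
# Images are nested lists of grey values here for illustration; in practice
# you'd do the same per channel on real image arrays in an editor or script.

def masked_merge(with_subject, without_subject, mask):
    """Per-pixel linear blend: the mask selects the subject stack."""
    return [
        [m * a + (1.0 - m) * b
         for a, b, m in zip(row_a, row_b, row_m)]
        for row_a, row_b, row_m in zip(with_subject, without_subject, mask)
    ]

# Toy 1x4 images: the halo pixel (mask 0.0) is replaced by clean background.
subject_stack    = [[10.0, 200.0, 90.0, 10.0]]   # 90 = halo next to the pin
background_stack = [[10.0,  10.0, 10.0, 10.0]]   # stack shot without subject
mask             = [[ 1.0,   1.0,  0.0,  1.0]]   # feathering would use 0..1

merged = masked_merge(subject_stack, background_stack, mask)
print(merged)  # → [[10.0, 200.0, 10.0, 10.0]]
```

A real mask would be feathered (values between 0 and 1 near the subject edge) so the transition into the clean background stack is invisible.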
Possibly related to stacking settings, I had a curious result where stacking from DNGs came out somewhat blurred due to a stacking issue somewhere, whereas the same images as JPGs (I tend to shoot JPG+RAW) stacked with no problem. I've not got to the bottom of that yet. It definitely seems worthwhile to experiment.
Re: choosing methods for stacking
Removing the foreground subject is a great idea. Thanks. That will be useful for me some time.
I have no idea why there would be a difference between your jpgs and dngs.
Re: choosing methods for stacking
This is correct for hypercentric lenses. It doesn't hold for the majority case of an entocentric lens, or (to a good approximation) for special cases such as telecentric lenses operating within their telecentric depth.
It's possible, in principle, to avoid line-of-sight obstruction without removing the subject if the degree of hypercentricity exceeds the expansion of the foreground blur disk. In practice, I don't know of a lens (or lens combination) that achieves this condition. Some forms of coupled lenses provide some mitigation, as do rail-based stacking systems that vary focus by moving the rear standard behind a fixed front standard.
It's easier to use a background frame in cases where moving the subject out is feasible.