Update: more complete and up-to-date information about the use of a reversed floating point depth buffer can be found in the post Maximizing Depth Buffer Range and Precision.
I had always thought that using a floating point depth buffer on modern hardware would solve all the depth buffer problems, in a similar way to the logarithmic depth buffer but without requiring any changes in the shader code, and without the artifacts and potential performance losses of its workarounds. So I was quite surprised when swiftcoder mentioned in this thread that he had found that the floating point depth buffer had insufficient resolution for a planetary renderer.
The value that gets written into the Z-buffer is the value of z/w after projection, and it has an unfortunate shape that gives enormous precision to a very narrow region close to the near plane. In fact, almost half of the possible values lie within twice the distance of the near plane. In the picture below it's the red "curve". The logarithmic distribution (blue curve), on the other hand, is optimal with regard to the object sizes that can be visible at a given distance.
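A quick way to see this is to evaluate the conventional [0,1] depth mapping directly. The sketch below uses the D3D-style formula d(z) = f/(f-n) · (1 - n/z), assumed here purely for illustration; the plane distances are made up:

```cpp
#include <cstdio>

// Conventional [0,1] depth after the perspective divide:
// d(z) = f/(f-n) * (1 - n/z), where z is the eye-space distance.
double depth(double z, double n, double f) {
    return f / (f - n) * (1.0 - n / z);
}

int main() {
    const double n = 1.0, f = 100000.0;              // example near/far distances
    printf("d(2n)  = %f\n", depth(2.0 * n, n, f));   // ~0.5  -- half the range used up by 2*near
    printf("d(f/2) = %f\n", depth(f / 2.0, n, f));   // ~0.99999 -- the rest crammed next to 1.0
    return 0;
}
```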
A floating point depth buffer should be able to handle the original z/w curve, because the exponent part of a float corresponds to the logarithm of the number.
However, here's the catch. Depth buffer values converge towards 1.0; in fact, most of the depth range maps to values very close to it. The resolution of a floating point number around 1.0 is determined entirely by the mantissa, and for a 32-bit float it's approximately 1e-7. That is not enough for planetary rendering, given the ugly shape of z/w.
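The difference in resolution can be demonstrated by comparing the spacing of adjacent 32-bit floats near 1.0 and near a small value; a minimal check, assuming standard IEEE-754 single precision:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    // Distance to the next representable 32-bit float, near 1.0 and near 1e-5.
    float step_near_one   = std::nextafter(1.0f, 2.0f) - 1.0f;    // ~1.19e-7 (one mantissa step)
    float step_near_small = std::nextafter(1e-5f, 1.0f) - 1e-5f;  // ~9e-13, vastly finer
    printf("step near 1.0:  %g\n", step_near_one);
    printf("step near 1e-5: %g\n", step_near_small);
    return 0;
}
```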
Fortunately, the solution is quite easy. Swapping the values of the far and near planes and changing the depth function to "greater" inverts the z/w shape so that it converges towards zero with rising distance, where floating point has plenty of resolution.
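A minimal sketch of what that setup can look like in OpenGL is below. It assumes a modern context with glClipControl (GL 4.5) so that NDC depth maps to [0,1] rather than the default [-1,1]; the projection matrix itself is an ordinary perspective matrix built with the near and far distances exchanged. This is one possible realization under those assumptions, not a definitive recipe:

```cpp
// Reversed-Z setup sketch (OpenGL 4.5+, loader such as glad assumed).
void setup_reversed_z()
{
    // Keep NDC z in [0,1] so the float depth buffer stores values that
    // actually converge towards 0.0 at the far plane.
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

    // With near/far swapped in the projection, distant objects write
    // smaller depth values, so "greater" is the passing comparison.
    glDepthFunc(GL_GREATER);

    // Clear to 0.0, the new "farthest" value, instead of the usual 1.0.
    glClearDepth(0.0);

    // Projection: e.g. an ordinary perspective matrix with the plane
    // distances exchanged (illustrative call, parameters hypothetical):
    //   proj = glm::perspective(fovy, aspect, zfar, znear);
}
```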
I've also found an earlier post by Humus where he says the same thing and gives more insight into the old W-buffers and various Z-buffer properties and optimizations.