Comments on Outerra: Maximizing Depth Buffer Range and Precision (blog by Brano Kemen)
Brano Kemen (2013-04-05 23:27):
You are right, I swapped them mistakenly. Otherwise it should work; you will probably have to debug it with some simple cases to see whether it produces the right signs and values.

Marius Dransfeld (2013-04-04 23:48):
I believe it should be proj[2][2] = -1 and proj[3][2] = 0, because the way you wrote it, z_p will always be calculated as -1. Either way it does not work...

Brano Kemen (2013-04-04 21:13):
With the depth value from the depth buffer in the range 0..1, the camera depth is:

    (exp(depth/FC) - 1.0)/C

Since you are using logarithmic depth, you can use a projection matrix that produces the camera depth in the z component (since it's not used in the shaders anymore), in OpenGL by setting proj[2][2] = 0 and proj[3][2] = -1.
Make the inverse viewproj matrix from the viewproj matrix built from the view matrix and this updated projection matrix. Then you can simply use your existing routine to compute the worldspace position from uv and the camera depth computed above.

Marius Dransfeld (2013-04-04 20:34):
How can I restore the worldspace position of a pixel from the screenspace position and depth?

This is what I use for a standard depth buffer:

    vec3 getPosition(in float depth, in vec2 uv){
        vec4 pos = vec4(uv, depth, 1.0) * 2.0 - 1.0;
        pos = m_ViewProjectionMatrixInverse * pos;
        return pos.xyz / pos.w;
    }

My logarithmic depth is calculated like this:

    logz = log(gl_Position.w * C + 1.0) * FC;
    gl_Position.z = (2.0 * logz - 1.0) * gl_Position.w;

someone (2013-01-23 14:04):
Thanks! I'll try it out.

Brano Kemen (2013-01-10 11:19):
If you are NV-only and use no stencil, the best thing would be the reverse floating-point depth buffer. You are safe from depth artifacts that way, and performance-wise it's almost the same as with the logarithmic 24-bit depth.

To use the reverse FP depth buffer in OpenGL, you need to:
- use the FP32 depth buffer format
- call glDepthRangedNV(-1, +1)
- use the DX-like matrix from the 'DirectX vs. OpenGL' part of the article, together with the GL_LESS depth func (the matrix can also be modified to invert the mapping, to use GL_GREATER)

No modification of gl_Position is done in this case; you just use the matrix in the normal way.
someone (2013-01-10 10:48):
Thanks for the detailed explanation!

So for now, the best solution for NV-only users would be:

0. use an FP32 or D24/D32 depth buffer?

1. call glDepthRangedNV with -1, +1

2a. override z in the VS as

    gl_Position.z = 2.0*log(gl_Position.w*C + 1)/log(far*C + 1) - 1;

or

2b. draw with the projection matrix you've specified in the 'DirectX vs. OpenGL' part of your article; invert the depth test to GL_GREATER/GL_GEQUAL; override z in the VS as

    gl_Position.z = 2.0*log(gl_Position.z*C + 1)/log(far*C + 1) - 1;

3. gl_Position.z *= gl_Position.w;

I just don't understand which combination would be the best possible on NV hardware (stencil is not used at all)...
Would you give some advice for this case, please? Thanks!

Brano Kemen (2013-01-09 15:24):
When you use a normal projection matrix in OpenGL, gl_Position.w comes out as the positive depth from the camera, whereas gl_Position.z contains something expressible as a*z + b, which after the perspective division by w falls into the -1..1 range.

Since the logarithmic equation needs the depth from the camera, I'm using w there; otherwise I'd have to use a modified projection matrix where z comes out as the depth.

Being able to use an unchanged projection matrix makes it simpler to switch to the reverse floating-point depth mode.

someone (2013-01-09 13:41):
Why are you using gl_Position.w when overriding gl_Position.z?

    gl_Position.z = 2.0*log(gl_Position.w*C + 1)/log(far*C + 1) - 1;

I thought it should be

    gl_Position.z = 2.0*log(gl_Position.z*C + 1)/log(far*C + 1) - 1;

Correct me if I'm wrong, please.
Brano Kemen (2012-12-06 20:05):
The inverse is OK, either:

    z = (exp(d*log(C*far+1)) - 1)/C

or

    z = (pow(C*far+1, d) - 1)/C

z is the viewspace depth already; screenspace x and y need to be multiplied by z first and then by the projection inverse (but just x and y, since z is right already).

arparso (2012-12-06 17:53):
Any thoughts on reconstructing the view-space position from a D24S8 depth buffer storing logarithmic depth? I'm trying to use it in a deferred renderer for DX9 to get great depth precision... however, I haven't yet managed to get the reconstruction working.

For encoding I use:

    z = log(C * z + 1) / log(C * farClip + 1) * w;

When reading back from the depth buffer I've tried:

    z = (pow(C * farClip + 1, depth) - 1) / C;

... and then pass (x, y, z, 1) through the inverse projection matrix to get the view-space position.

Lex4art (2012-11-30 12:00):
Maybe this article can be useful: http://www.humus.name/index.php?page=Articles&ID=4 - about the Just Cause 2 renderer (not such big open spaces as in Outerra, but still about a 30 km view range).

Brano Kemen (2012-11-29 07:47):
Thanks, I forgot to specify the units, added now - those are frames per second. In the last table performance goes down for the floating-point buffer (the first column).
Malcolm Tredinnick (2012-11-29 03:36):
Enjoyable read; very nicely explained.

The tables near the end could do with some units. I'm guessing the numbers are ms to render the image(?), since when you say performance goes down in the last one, the numbers get larger. But the only reference to what you're measuring is an "FPS" note in the intro paragraph of that section.

Brano Kemen (2012-11-28 22:25):
Yes, some of those probably will come, once we get the final permission to use them in Outerra.

giulio (2012-11-28 21:51):
Wow, what a post! :)
Not so easy to understand, but really, really interesting.

P.S.: are those models coming with some new release? ;)