Outerra blog comments (updated 2013-04-05)

Brano Kemen (2013-04-05 23:27):
You are right, I swapped them mistakenly.
But otherwise it should work; you'll probably have to debug it with some simple cases to see that it produces the right signs and values.

Marius Dransfeld (2013-04-04 23:48):
I believe it should be proj[2][2]=-1 and proj[3][2]=0, because the way you wrote it, z_p will always be calculated as -1. Either way it does not work...

Brano Kemen (2013-04-04 21:13):
With a depth value from the depth buffer in range 0..1, the camera depth is:

    (exp(depth/FC) - 1.0)/C

Since you are using logarithmic depth, you can use a projection matrix that produces camera depth in the z component (since it's not used in the shaders anymore); in OpenGL, set proj[2][2]=0 and proj[3][2]=-1.
Make the inverse viewproj matrix from the viewproj matrix built from the view matrix and this updated projection matrix.
Then you can simply use your existing routine to compute the worldspace position from the uv and the camera depth computed above.

Marius Dransfeld (2013-04-04 20:34):
How can I restore the worldspace position of a pixel from its screenspace position and depth?

This is what I use for a standard depth buffer:

    vec3 getPosition(in float depth, in vec2 uv) {
        vec4 pos = vec4(uv, depth, 1.0) * 2.0 - 1.0;
        pos = m_ViewProjectionMatrixInverse * pos;
        return pos.xyz / pos.w;
    }

My logarithmic depth is calculated like this:

    logz = log(gl_Position.w * C + 1.0) * FC;
    gl_Position.z = (2.0 * logz - 1.0) * gl_Position.w;

Jari (2013-04-02):
Have you heard about the DYNAMO simulation programming language? I don't know if Outerra counts as simulation software, but it is close to it.
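The decode formula discussed above can be checked numerically. This is an editor's sketch (not code from the thread), assuming the usual definition FC = 1/log(far*C + 1); the C and far values are arbitrary examples.

```python
import math

# Assumed constants; C and far are arbitrary example values.
C = 1.0
far = 100000.0
FC = 1.0 / math.log(far * C + 1.0)   # assumed: FC normalizes encode(far) to 1

def encode(z):
    # 0..1 logarithmic depth as written by the vertex shader above
    return math.log(z * C + 1.0) * FC

def decode(d):
    # camera depth recovered from the stored depth: (exp(d/FC) - 1)/C
    return (math.exp(d / FC) - 1.0) / C

# decode inverts encode across the whole depth range
for z in (0.5, 10.0, 5000.0, far):
    assert abs(decode(encode(z)) - z) <= 1e-9 * z
```

With the decoded camera depth in hand, the existing worldspace reconstruction routine can be reused as described in the reply.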
So if life is needed in Outerra, you should learn about creating simulations.

someone (2013-01-23 16:49):
Works correctly now, thank you.

Brano Kemen (2013-01-23 15:52):
R is made of the u, v and n world-space vectors in rows; in GLSL that would be

    R = transpose(mat3(u, v, n))

someone (2013-01-23 15:40):
Partially got it working, but still having false rejects. The frustum setup is the same as yours (the left, right, top, bottom and near planes go through the origin; the far plane is not used), and the plane normals are normalized.
I'm wondering if my R matrix is correct. The code is as follows (OpenGL):

    mat3 R; R[0] = u; R[1] = v; R[2] = cross(u, v);
    R = inverse(R); // R is now the transform from camera-relative world space to uvn space

Is it correct?

someone (2013-01-23 14:07):
Wow )
Thanks for the quick reply!
Yeah, it's reall...Wow )<br />Thanks for quick reply!<br />Yeah, it's really usefull, i've got a lot of forest tiles to cull, and your solution is better than simple sphere test for that case :}<br /><br />I`ll go try now...someonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-26556374816997826052013-01-23T14:04:20.359+01:002013-01-23T14:04:20.359+01:00Thanks! I`ll try it outThanks! I`ll try it outsomeonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-12846612116933746652013-01-23T14:03:49.353+01:002013-01-23T14:03:49.353+01:00Oh, someone found it useful :)
plane is a vec4 with the plane normal (normalized) and distance. I forgot to add that our frustum is centered at the camera point, so all frustum planes except the far one go through it and hence the fourth component d=0. We aren't culling with the far plane. That means dot(box_center, plane_normal) gets us the distance from the center to the plane directly.

Now R*plane.xyz is the projection of the plane normal into uvn space, and the dot product of its abs() with the box half-extents (which are in uvn space) gets us the distance from the center to the farthest/nearest corner of the box from the given plane.

Hope that helps; I'll add the missing info to the post.

someone (2013-01-23 13:40):
Nice! But what are the plane, center and R types? vec3, vec3, mat3?

So if we have unnormalized frustum planes in world space (or viewer-relative world space), the tile center and the u, v vectors in world space (or viewer-relative world space), we can do the following:

    vec4 planes[6]; // frustum ws
    vec3 center, u, v, extents; // tile ws
    mat3 R;
    R[0] = u;
    R[1] = v;
    R[2] = normalize(cross(u, v));

    // test:
    for (int i = 0; i < 6; ++i) {
        float d = dot(center, planes[i].xyz) + planes[i].w;
        vec3 npv = abs(R * planes[i]); // how to transform here? wtb w component?
        float r = dot(extents, npv);
        if (d + r > 0) return ... // partially inside
        if (d - r >= 0) return ... // fully inside
    }

Quite confused :/ Could you clarify a little bit, please?
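The box-vs-frustum test explained above can be sketched in plain Python (editor's sketch, not the engine's GLSL). Assumed conventions: plane normals are normalized and point into the frustum, and R holds the world-space u, v, n vectors in its rows, so applying R projects a vector into the box's uvn space.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify_box(center, extents, u, v, n, planes):
    """center: box center relative to the camera; extents: box half-extents
    in uvn space; planes: list of (normal, d) pairs with inward normals.
    Returns 'outside', 'intersects' or 'inside'."""
    rows = (u, v, n)
    fully_inside = True
    for normal, pd in planes:
        d = dot(center, normal) + pd                 # signed center distance
        npv = [abs(dot(r, normal)) for r in rows]    # abs(R * normal)
        r = dot(extents, npv)                        # center-to-corner distance
        if d + r < 0:
            return 'outside'          # even the farthest corner is behind
        if d - r < 0:
            fully_inside = False      # box straddles this plane
    return 'inside' if fully_inside else 'intersects'
```

For an axis-aligned box, u, v, n are simply the world axes; the same call then reduces to the familiar AABB-vs-plane test.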
Thanks!

Brano Kemen (2013-01-10 11:19):
If you are NV-only and not using stencil, the best thing would be to use the reversed floating-point depth buffer. You are safe from depth artifacts that way, and performance-wise it's almost the same as with the logarithmic 24-bit depth.

To use the reversed FP depth buffer in OpenGL, you need to:
- use an FP32 depth buffer format
- call glDepthRangedNV(-1, +1)
- use the DX-like matrix from the 'DirectX vs. OpenGL' part of the article, together with the GL_LESS depth func (the matrix can also be modified to invert the mapping and use GL_GREATER)

No modification to gl_Position is done in this case; you just use the matrix in the normal way.

someone (2013-01-10 10:48):
Thanks for the detailed explanation!
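The inverted mapping mentioned above can be illustrated numerically. This editor's sketch shows only the post-divide depth of a DX-style projection with the near/far mapping reversed (not Outerra's actual matrix); the near/far values are arbitrary examples.

```python
# Assumed example planes for a deep scene
near, far = 0.1, 1.0e6

def reversed_depth(z):
    # clip-space z of a reversed DX-style matrix, divided by w (= camera z)
    a = near / (near - far)
    b = near * far / (far - near)
    return (a * z + b) / z

# near maps to 1 and far to 0, so the depth test flips to GREATER;
# with an FP32 buffer this concentrates float precision at the far range
assert abs(reversed_depth(near) - 1.0) < 1e-9
assert abs(reversed_depth(far)) < 1e-9
```

The mapping decreases monotonically with distance, which is why the depth func must be inverted relative to the conventional setup.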
So for now, the best solution for NV-only users would be:

0. use an FP32 or D24/D32 depth buffer?

1. call glDepthRangedNV with -1, +1

2a. override z in the vertex shader as

    gl_Position.z = 2.0*log(gl_Position.w*C + 1)/log(far*C + 1) - 1;

or

2b. draw with the projection matrix you specified in the 'DirectX vs. OpenGL' part of your article, invert the depth test to GL_GREATER/GL_GEQUAL, and override z in the vertex shader as

    gl_Position.z = 2.0*log(gl_Position.z*C + 1)/log(far*C + 1) - 1;

3. gl_Position.z *= gl_Position.w;

I just don't understand which combination would be the best possible on NV hardware (stencil is not used at all)...
Would you give some advice for this case, please?
Thanks!

Brano Kemen (2013-01-09 15:24):
When you use a normal projection matrix in OpenGL, gl_Position.w comes out as the positive depth from the camera, whereas gl_Position.z contains something expressible as a*z+b, which after the perspective division by w falls into the -1..1 range.

Since the logarithmic equation needs the depth from the camera, I'm using w there; otherwise I'd have to use a modified projection matrix where z comes out as the depth.

Being able to use an unchanged projection matrix makes it simpler to switch to the reversed floating-point depth mode.

someone (2013-01-09 13:41):
Why are you using gl_Position.w
when overriding gl_Position.z?

    gl_Position.z = 2.0*log(gl_Position.w*C + 1)/log(far*C + 1) - 1;

I thought it should be

    gl_Position.z = 2.0*log(gl_Position.z*C + 1)/log(far*C + 1) - 1;

Correct me if I'm wrong, please.

Brano Kemen (2012-12-06 20:05):
The inverse is OK, either:
    z = (exp(d*log(C*far+1)) - 1)/C

or

    z = (pow(C*far+1, d) - 1)/C

z is the viewspace depth already; the screenspace x and y need to be multiplied by z first and then by the projection inverse (but just x and y, since z is already right).

arparso (2012-12-06 17:53):
Any thoughts on reconstructing the view-space position from a D24S8 depth buffer storing logarithmic depth? I'm trying to use it in a deferred renderer for DX9 to get great depth precision... however, I haven't yet managed to get the reconstruction working.

For encoding I use:

    z = log(C * z + 1) / log(C * farClip + 1) * w;

When reading back from the depth buffer I've tried:

    z = (pow(C * farClip + 1, depth) - 1) / C;

...and then pass (x, y, z, 1) through the inverse projection matrix to get the view-space position.

Lex4art (2012-11-30):
Maybe this article can be useful: http://www.humus.name/index.php?page=Articles&ID=4 - about the Just Cause 2 renderer (the open spaces were not as big as in Outerra, but still about a 30 km range).

Brano Kemen (2012-11-29 07:47):
Thanks, I forgot to specify the units; added now - those are frames per second.
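The reconstruction described in the reply above (decode z first, then scale x and y by z before applying the projection inverse) can be sketched like this. Editor's sketch in Python: C, farClip, proj00 and proj11 are illustrative assumptions, with proj00/proj11 standing for the focal terms of a symmetric perspective projection.

```python
import math

# Assumed example constants for the log-depth encoding
C, farClip = 1.0, 1.0e5

def decode(d):
    # z = (pow(C*farClip + 1, d) - 1) / C, as in the reply above
    return (pow(C * farClip + 1.0, d) - 1.0) / C

def view_pos(ndc_x, ndc_y, d, proj00, proj11):
    z = decode(d)                       # view-space depth, correct as-is
    # x and y are scaled by z first, then by the projection inverse;
    # for a symmetric projection that inverse is just a division by the
    # focal terms proj00 and proj11
    return (ndc_x * z / proj00, ndc_y * z / proj11, z)
```

A round trip confirms it: projecting a known view-space point to NDC and feeding the result back through view_pos recovers the original coordinates.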
In the last table, performance goes down for the floating-point buffer (the first column).

Malcolm Tredinnick (2012-11-29 03:36):
Enjoyable read; very nicely explained.
The tables near the end could do with some units. I'm guessing the numbers are ms to render the image(?), since when you say performance goes down in the last one, the numbers get larger. But the only reference to what you're measuring is an "FPS" note in the intro paragraph of that section.

Brano Kemen (2012-11-28 22:25):
Yes, some of those will probably come once we get the final permission to use them in Outerra.

giulio (2012-11-28 21:51):
Wow, what a post! :)
Not so easy to understand, but really, really interesting.

P.S.: are those models coming with some new release? ;)

Unknown (2012-11-16 14:09):
Try posting to the nVidia developer forum and see if anyone has more info.

https://devtalk.nvidia.com

Laco Hrabcak (2012-11-16 10:59):
@David Roger The meshes are still sorted by textures; the binding groups come on top of that. Besides, glUniform is not the problem: the performance hit shows up in the draw call following the texture bind. Binding the textures in groups and then passing the id in uniforms minimizes that hit.