tag:blogger.com,1999:blog-4217826068942535587.comments2023-10-07T11:24:50.610+02:00OuterraOuterrahttp://www.blogger.com/profile/01028397138049592535noreply@blogger.comBlogger198125tag:blogger.com,1999:blog-4217826068942535587.post-58451893454020264492022-08-18T19:17:13.590+02:002022-08-18T19:17:13.590+02:00if you did this 10 years ago, I am eager to see wh...if you did this 10 years ago, I am eager to see what you're doing 
nowtsiroshttps://www.blogger.com/profile/12095317681886654671noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-54283122325981208162022-07-07T07:35:25.886+02:002022-07-07T07:35:25.886+02:00I'm seeing sea waves on small ponds in some pl...I'm seeing sea waves on small ponds in some places. Is there a long-term plan to differentiate between sea water, fresh river and lake water, waterfalls and rapids? They will need the same foam. The openstreetmap site marks waterfalls. It also should be noted that coral reefs have clearer water; light penetrates deeper. <br />Real coral is a long way off. We need a noisy terrain mesh with a very bright multicolored surface. Then over this we could add static coral and moving fronds of soft coral, seaweed and anemones as a complex 'grass'. wesley brucehttps://www.blogger.com/profile/17311908911551426601noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-2961088393346006722022-07-06T10:36:22.557+02:002022-07-06T10:36:22.557+02:00When I go back to the road the changes are permane...When I go back to the road, the changes are permanent. Is this local only on my computer, or am I making changes others can see? Also, it's not clear how to rotate a spline node. When I place them, it's always turned 30 degrees left or right.<br /> wesley brucehttps://www.blogger.com/profile/17311908911551426601noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-11941523243762235012013-04-05T23:27:19.947+02:002013-04-05T23:27:19.947+02:00You are right, I swapped them mistakenly.
But othe...You are right, I swapped them mistakenly.<br />But otherwise it should work; you probably have to debug it with some simple cases to see if it produces the right signs and values.Outerrahttps://www.blogger.com/profile/01028397138049592535noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-36164460220035461842013-04-04T23:48:28.076+02:002013-04-04T23:48:28.076+02:00I believe it should be proj[2][2]=-1 and proj[3][...I believe it should be proj[2][2]=-1 and proj[3][2]=0, because the way you wrote it, z_p will always be calculated as -1. Either way it does not work...Anonymoushttps://www.blogger.com/profile/16076444707555327926noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-19054144880976114802013-04-04T21:13:29.080+02:002013-04-04T21:13:29.080+02:00With depth value from the depth buffer in range 0....With the depth value from the depth buffer in range 0..1, the camera depth is:<br /><br />(exp(depth/FC)-1.0)/C<br /><br />Since you are using logarithmic depth, you can use a projection matrix that produces the camera depth in the z component (since it's not used in shaders anymore), in OpenGL by setting proj[2][2]=0 and proj[3][2]=-1.<br />Make the inverse viewproj matrix from the viewproj matrix made from the view matrix and this updated projection matrix. 
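The matrix fix agreed on in this exchange can be checked numerically. This is a minimal Python sketch, not Outerra's code; it only evaluates row 2 of a GLSL column-major projection matrix (`proj[col][row]`), assuming the OpenGL convention where visible points have negative eye-space z:

```python
# Row 2 of a GLSL column-major projection matrix: clip z = m22*z + m32*w,
# where m22 = proj[2][2] and m32 = proj[3][2].
def project_z(z_eye, m22, m32, w_eye=1.0):
    return m22 * z_eye + m32 * w_eye

z_eye = -50.0  # a point 50 units in front of the camera (negative z in OpenGL)

# The swapped values (proj[2][2]=0, proj[3][2]=-1): clip z is always -1,
# exactly as reported in the comment above.
assert project_z(z_eye, m22=0.0, m32=-1.0) == -1.0

# The corrected values (proj[2][2]=-1, proj[3][2]=0): clip z equals the
# positive camera depth, which is what the reconstruction needs.
assert project_z(z_eye, m22=-1.0, m32=0.0) == 50.0
```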
Then you can simply use your existing routine to compute the worldspace position from uv and the camera depth computed above.Outerrahttps://www.blogger.com/profile/01028397138049592535noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-45166706051071522652013-04-04T20:34:38.002+02:002013-04-04T20:34:38.002+02:00How can I restore the worldspace position of a pix...How can I restore the worldspace position of a pixel from the screenspace position and depth?<br /><br />This is what I use for a standard depth buffer:<br />vec3 getPosition(in float depth, in vec2 uv){<br /> vec4 pos = vec4(uv, depth, 1.0) * 2.0 - 1.0;<br /> pos = m_ViewProjectionMatrixInverse * pos;<br /> return pos.xyz / pos.w;<br />}<br /><br />My logarithmic depth is calculated like this:<br />logz = log(gl_Position.w * C + 1.0) * FC;<br />gl_Position.z = (2.0 * logz - 1.0) * gl_Position.w;Anonymoushttps://www.blogger.com/profile/16076444707555327926noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-33916504145945644632013-04-02T09:34:39.574+02:002013-04-02T09:34:39.574+02:00Have you heard about the DYNAMO simulation program...Have you heard about the DYNAMO simulation programming language? I don't know if Outerra belongs to simulation software, but it is close to it. 
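The encode/decode pair in this exchange can be sanity-checked outside the shader. A hedged Python sketch of the same math, not the actual GLSL; C and far are made-up values, and FC = 1/log(far*C + 1) as implied by the shader code above:

```python
import math

C, far = 1.0, 1.0e6                  # made-up curve constant and far plane
FC = 1.0 / math.log(far * C + 1.0)   # matches FC in the shader above

def encode(z):
    """Stored 0..1 depth for a camera-space depth z (the shader's logz)."""
    return math.log(C * z + 1.0) * FC

def decode(d):
    """Camera depth back from the stored value: (exp(d/FC) - 1)/C."""
    return (math.exp(d / FC) - 1.0) / C

for z in (0.01, 1.0, 500.0, 250000.0, far):
    d = encode(z)
    assert 0.0 <= d <= 1.0
    assert abs(decode(d) - z) / z < 1e-9  # the round trip recovers the depth
```

The decoded value is the camera-space depth directly, which is why the answer above only needs the inverse viewproj (or modified projection) for x and y.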
So, if life is needed in Outerra, you should learn about creating simulations.Jarihttps://www.blogger.com/profile/03720596545086195706noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-67571242947008043622013-01-23T16:49:45.838+01:002013-01-23T16:49:45.838+01:00Works correctly now, thank youWorks correctly now, thank yousomeonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-18790914028965288992013-01-23T15:52:00.027+01:002013-01-23T15:52:00.027+01:00R is made of u,v and n world-space vectors in rows...R is made of the u, v and n world-space vectors in rows; in GLSL that would be<br /><br />R = transpose(mat3(u,v,n))Outerrahttps://www.blogger.com/profile/01028397138049592535noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-64414890622078546632013-01-23T15:40:08.166+01:002013-01-23T15:40:08.166+01:00Partially got it working, but still having false r...Partially got it working, but I'm still getting false rejects. The frustum setup is the same as yours (left, right, top, bottom and near planes go from the origin; the far plane is not used), and the plane normals are normalized. <br />I'm wondering if my R matrix is correct. The code is as follows (OpenGL):<br />mat3 R; R[0] = u; R[1] = v; R[2] = cross(u,v);<br />R = inverse(R); // R is now the transform from camera-relative world space to uvn space<br />Is it correct?someonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-15769400871496871422013-01-23T14:07:56.247+01:002013-01-23T14:07:56.247+01:00Wow )
Thanks for quick reply!
Yeah, it's reall...Wow )<br />Thanks for the quick reply!<br />Yeah, it's really useful; I've got a lot of forest tiles to cull, and your solution is better than a simple sphere test for that case :}<br /><br />I'll go try now...someonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-26556374816997826052013-01-23T14:04:20.359+01:002013-01-23T14:04:20.359+01:00Thanks! I'll try it outThanks! I'll try it outsomeonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-12846612116933746652013-01-23T14:03:49.353+01:002013-01-23T14:03:49.353+01:00Oh, someone found it useful :)
plane is vec4 with...Oh, someone found it useful :)<br /><br />plane is vec4 with plane normal (normalized) and distance. I forgot to add that our frustum is centered in the camera point, and thus all frustum planes except the far one go through it and hence the fourth component d=0. We aren't culling with the far plane. That means dot(box_center,plane_normal) gets us the distance from the center to the plane directly.<br /><br />Now R*plane.xyz is a projection of plane normal into the uvn space, and dot with abs() of it and box half-extents (those are in uvn space) gets us the distance from the center to the farthest/nearest corner of the box from given plane.<br /><br />Hope that helps, I'll add the missing info to the post.Outerrahttps://www.blogger.com/profile/01028397138049592535noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-2947887082491933312013-01-23T13:40:20.380+01:002013-01-23T13:40:20.380+01:00Nice! But what are plane,center and R types? vec3,...Nice! But what are plane,center and R types? vec3,vec3,mat3?<br /><br />so if we have unnormalized frustum planes in world space(or viewer relative world space), tile center and u,v vectors in world space(or viewer relative world space), we can do the following:<br /><br />vec4 planes[6]; // frustum ws<br />vec3 center,u,v,extents; // tile ws<br />mat3 R;<br />R[0] = u;<br />R[1] = v;<br />R[2] = normalize(cross(u,v));<br /><br />// test:<br />for (int i = 0; i < 6; ++i) {<br /> float d = dot(center, planes[i].xyz) + planes[i].w; <br /> float3 npv = abs(R * planes[i]); // how to transform here ? wtb w component? <br /> float r = dot(extents, npv); <br /> if(d+r > 0) return ... // partially inside <br /> if(d-r >= 0) return ... // fully inside<br />}<br /><br />Quite confused :/ Could you clarify a lil bit, please? 
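Putting the pieces of that explanation together, here is a hedged Python sketch of the box-vs-frustum-plane test, not Outerra's actual code; the box and plane values are made up, the plane d term is 0 because every tested plane passes through the camera, and R holds the box's u, v, n axes as rows (in GLSL, R = transpose(mat3(u, v, n))):

```python
# d = signed distance of the box center from a frustum plane through the
# camera; r = the box's projected extent along the plane normal, computed
# from the box half-extents in its own uvn space.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify(center, extents, R, planes):
    fully_inside = True
    for n in planes:                           # unit normals, pointing inward
        d = dot(center, n)                     # plane d term is 0 (see above)
        npv = [abs(dot(row, n)) for row in R]  # abs(R * plane.xyz)
        r = dot(extents, npv)                  # distance to farthest corner
        if d + r < 0.0:
            return "outside"                   # box fully behind this plane
        if d - r < 0.0:
            fully_inside = False               # box straddles this plane
    return "inside" if fully_inside else "partial"

# Made-up check: axis-aligned box axes, a single plane through the camera
# whose inside half-space is z < 0.
R = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
planes = [(0.0, 0.0, -1.0)]
ext = (1.0, 1.0, 1.0)
assert classify((0.0, 0.0, -10.0), ext, R, planes) == "inside"
assert classify((0.0, 0.0, 10.0), ext, R, planes) == "outside"
assert classify((0.0, 0.0, 0.5), ext, R, planes) == "partial"
```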
Thanks!someonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-1897132716424421662013-01-10T11:19:48.497+01:002013-01-10T11:19:48.497+01:00If you are NV-only and no stencil, the best thing ...If you are NV-only and no stencil, the best thing would be to use the reverse floating point depth buffer. You are safe from depth artifacts that way, and performance-wise it's almost the same as with the logarithmic 24b depth.<br /><br />To use the reverse FP depth buffer in OpenGL, you need to:<br />- use FP32 depth buffer format<br />- call glDepthRangedNV(-1,+1)<br />- use the DX-like matrix from 'DirectX vs. OpenGL' part of the article, together with GL_LESS depth func (the matrix can be also modified to invert the mapping to use GL_GREATER)<br /><br />No modification to gl_Position is done here in this case, you just use the matrix in normal way.Outerrahttps://www.blogger.com/profile/01028397138049592535noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-70295975407839725802013-01-10T10:48:04.965+01:002013-01-10T10:48:04.965+01:00Thanks for detailed explanation!
So for now, the ...Thanks for detailed explanation!<br /><br />So for now, the best solution for NV-only users would be:<br /><br />0. use an FP32 or D24\D32 depth buffer?<br /><br />1. call glDepthRangedNV with -1,+1<br /><br />2a. override z in the VS as<br />gl_Position.z = 2.0*log(gl_Position.w*C + 1)/log(far*C + 1) - 1;<br /><br />or<br /><br />2b. draw with the projection matrix you've specified in the 'DirectX vs. OpenGL' part of your article; invert the depth test to GL_GREATER\GL_GEQUAL;<br />override z in the VS as<br />gl_Position.z = 2.0*log(gl_Position.z*C + 1)/log(far*C + 1) - 1;<br /><br />3. gl_Position.z *= gl_Position.w;<br /><br />I just don't understand which combination would be best on NV hardware (stencil is not used at all)...<br />Would you give some advice for this case, please?<br />Thanks!someonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-5132095123413205702013-01-09T15:24:37.080+01:002013-01-09T15:24:37.080+01:00When you use a normal projection matrix in OpenGL,...When you use a normal projection matrix in OpenGL, gl_Position.w comes out as a positive depth from the camera, whereas gl_Position.z contains something expressible as a*z+b, which after perspective division by w falls into the -1..1 range.<br /><br />Since the logarithmic equation needs the depth from the camera, I'm using w there; otherwise I'd have to use a modified projection matrix where z comes out as the depth.<br /><br />Being able to use an unchanged projection matrix makes it simpler to switch to the reverse floating point depth mode.Outerrahttps://www.blogger.com/profile/01028397138049592535noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-6291776193948523782013-01-09T13:41:58.409+01:002013-01-09T13:41:58.409+01:00Why are you using gl_Position.w
when overriding gl...Why are you using gl_Position.w<br />when overriding gl_Position.z?<br /><br />gl_Position.z = 2.0*log(gl_Position.w*C + 1)/log(far*C + 1) - 1;<br /><br />I thought it should be<br />gl_Position.z = 2.0*log(gl_Position.z*C + 1)/log(far*C + 1) - 1;<br /><br />Correct me if I'm wrong, pleasesomeonehttps://www.blogger.com/profile/17470767993040095329noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-66539251794158220542012-12-06T20:05:12.661+01:002012-12-06T20:05:12.661+01:00The inverse is ok, either:
z = (exp(d*log(C*far+1...The inverse is ok, either:<br /> z = (exp(d*log(C*far+1)) - 1)/C<br />or<br /> z = (pow(C*far+1,d)-1)/C<br /><br />z is the viewspace depth already, screenspace x and y need to be multiplied by z first and then by the projection inverse (but just x and y, since z is right already)Outerrahttps://www.blogger.com/profile/01028397138049592535noreply@blogger.comtag:blogger.com,1999:blog-4217826068942535587.post-84075620092474662962012-12-06T17:53:47.346+01:002012-12-06T17:53:47.346+01:00Any thoughts on reconstructing view space position...Any thoughts on reconstructing view space position from a D24S8 depth buffer storing logarithmic depth? I'm trying to use it in a deferred renderer for DX9 and get great depth precision... however, I didn't yet manage to get reconstruction working.<br /><br />For encoding I use:<br />z = log(C * z + 1) / log(C * farClip + 1) * w;<br /><br />When reading back from the depth buffer I've tried:<br />z = (pow(C * farClip + 1, depth) - 1) / C;<br /><br />.. and then pass (x,y,z,1) through the inverse projection matrix to get view space position.arparsohttps://www.blogger.com/profile/05560266159857456488noreply@blogger.com
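The deferred reconstruction discussed in this last exchange can be verified end to end. A Python sketch under assumed values, not arparso's DX9 code: a symmetric projection with a hypothetical fov_scale factor and made-up C and farClip; it encodes the view-space depth as in the shader above, decodes it with pow(C*farClip + 1, d), and, per the reply, multiplies only x and y by the decoded z:

```python
import math

C, farClip = 1.0, 10000.0  # made-up constants
fov_scale = 1.0            # cot(fov/2) of an assumed symmetric projection
aspect = 1.0

def encode(z_view):
    """Stored depth, as in the encoding shader above (after the w divide)."""
    return math.log(C * z_view + 1.0) / math.log(C * farClip + 1.0)

def reconstruct(sx, sy, d):
    """View-space position from NDC x,y and the stored logarithmic depth."""
    z = (math.pow(C * farClip + 1.0, d) - 1.0) / C  # decoded view depth
    # Only x and y go through the inverse projection scaling, multiplied
    # by z first; z itself is already the view-space depth.
    return (sx * z / fov_scale, sy * z / (fov_scale * aspect), z)

# Round trip: project a view-space point, then reconstruct it from NDC + depth.
p = (3.0, -2.0, 57.0)
sx = p[0] * fov_scale / p[2]
sy = p[1] * fov_scale * aspect / p[2]
q = reconstruct(sx, sy, encode(p[2]))
assert all(abs(a - b) < 1e-6 for a, b in zip(p, q))
```

The key point from the reply: passing the full (x, y, z, 1) through the inverse projection would apply the depth remapping to z a second time; the decoded z must bypass it.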