Volume Rendering

Hello everyone,

I have dabbled in volume rendering a little, and for the most part it is ‘relatively’ simple to build a volume renderer on top of the existing standard 3D graphics pipeline.

For now my test renderer has an underlying dataset that just visualizes density points in space. But the depth testing gives me real headaches.

I tried the standard OpenGL-style depth testing, but something is off, and I guess my clip-space to world-coordinate calculations are not the way they should be.

Here is the code from my pixel shader (I render the volume on the surfaces of a cube).

META_PS(true, FEATURE_LEVEL_ES2)
float4 PS_Custom(PixelInput input) : SV_Target
{

    // Get depthbuffer value at point of model hull
    float4 depthBufferValue = DepthBuffer.Sample(MeshTextureSampler, input.Position.xy);
    // Adjust to clipspace range [-1,1]
    float4 clipSpaceValue = float4(input.Position.xy * 2.0f - 1.0f, depthBufferValue.x * 2.0f - 1.0f, 1.0f);
    // From clip space to homogeneous world space
    float4 homogeneousPosition = mul(clipSpaceValue, ViewProjectionMatrixInverse);
    // Perspective divide to get the real world position
    float4 worldPosition = float4(homogeneousPosition.xyz / homogeneousPosition.w, 1);
    // Translate to local position
    float3 localPosition = mul(worldPosition, LocalMatrix).xyz;

    float depthLength = length(localPosition - input.CameraLocal);
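
As a sanity check, the clip-space → world-space unprojection above can be mirrored on the CPU. Here is a minimal C sketch of that step (not Flax code; the row-major matrix layout and the helper names are assumptions for illustration):

```c
#include <assert.h>
#include <math.h>

/* Hypothetical helper: multiply a 4-component column vector by a row-major 4x4 matrix. */
static void mul_mat4_vec4(const float m[16], const float v[4], float out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = m[r*4+0]*v[0] + m[r*4+1]*v[1] + m[r*4+2]*v[2] + m[r*4+3]*v[3];
}

/* Unproject a clip-space point (x, y in [-1,1], depth d) back to world space
   via the inverse view-projection matrix, including the perspective divide. */
static void unproject(const float invViewProj[16], float x, float y, float d,
                      float world[3]) {
    float clip[4] = { x, y, d, 1.0f };
    float h[4];
    mul_mat4_vec4(invViewProj, clip, h);
    world[0] = h[0] / h[3];
    world[1] = h[1] / h[3];
    world[2] = h[2] / h[3];
}
```

With an identity matrix the unprojected point must come back unchanged, which makes the perspective divide easy to verify in isolation before plugging in a real inverse view-projection matrix.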

And this is the setup for the shader and the matrices:

var data = new Data();
Matrix.Multiply(ref renderContext.View.View, ref renderContext.View.Projection, out var viewProjection);
Actor.GetLocalToWorldMatrix(out var world);
Matrix.Invert(ref viewProjection, out var viewProjectionInverse);
Matrix.Transpose(ref world, out data.WorldMatrix);
Matrix.Transpose(ref viewProjection, out data.ViewProjectionMatrix);
Matrix.Transpose(ref viewProjectionInverse, out data.ViewProjectionMatrixInverse);
Actor.GetWorldToLocalMatrix(out var local);
Matrix.Transpose(ref local, out data.LocalMatrix);
data.CameraPosition = Cam.Position;

context.BindSR(0, _gpuTextureBuffer);
context.BindSR(1, renderContext.Buffers.DepthBuffer);

Can someone help me with this?

Maybe you could output SV_Depth from your pixel shader and let the GPU perform the depth testing (the depth buffer can be bound together with the render targets)? (The hardware depth buffer is always a nightmare for graphics programmers :slight_smile: )

Overriding the pre-calculated depth buffer would be like pouring away the champagne and serving water instead. The Flax Engine does an excellent job and should do as much work for me as possible in general. ; )

But I am one step further: I inspected the contents of the z-buffer, and linearizing it is absolutely possible.

This clean visualization uses the far and near plane values reported by the Camera and Render class info; both give the same standard values.

[ 00:31:48.915 ]: [Info] FarPlane: 40000
[ 00:31:48.935 ]: [Info] NearPlane: 1

And adjusted a little for better visualization:

float n = CameraNear;
float f = CameraFar / 100.0f;

float depthLinear = n / (f - d * (f - n));
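
For reference, this linearization is easy to verify numerically on the CPU. A minimal C sketch of the same formula, evaluated with the logged plane values n = 1 and f = 40000 (the /100 display scaling from above is left out here):

```c
#include <assert.h>
#include <math.h>

/* Linearize a [0,1] hardware depth value d into a normalized linear depth,
   using the same formula as the shader: n / (f - d * (f - n)).
   At d = 0 this yields n / f (far end), at d = 1 it yields n / n = 1. */
static float linearize_depth(float d, float n, float f) {
    return n / (f - d * (f - n));
}
```

Plotting this function over d shows the strong non-linearity of the raw depth buffer, which is why the unadjusted visualization looks almost uniformly dark without the extra scaling.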

By the way… I have tripped over the constant-buffer initialization the Flax shader transpiler demands one time too many. What are the exact requirements for this? The transpiler only throws standard HLSL errors, and the documentation only states “use [StructLayout(LayoutKind.Sequential)] for more complex buffers”. What are the other layouts for, and are there limits on the CB structures? The shader itself doesn’t complain and just throws random buffer values around if the binding is not as expected.

For constant buffer data we use a low-level approach where the HLSL buffer matches the structure in C++/C#. That’s why [StructLayout(LayoutKind.Sequential)] is needed in C# (to ensure the structure has the proper member layout), and the PACK_STRUCT(..) macro for C++ structures.
If you have a problem binding data to a CB, you can post the structure code and the shader buffer so I can help :slight_smile:

Thank you! I’ll keep it in mind for the next time my buffer values go haywire.

I have now moved my cube renderer to a deferred single-pass renderer with combined depth testing.

And it works!

Though the setup was an absolute pain. Is it worth it? I hope so. Double-pass rendering is easy to begin with, but very ugly and slow in the end.

And I come back to the help you offered me, mafiesto4:
I guess part of why my first try at depth reconstruction failed was the CB binding.

If I set up more than one Vector3 for binding to the shader, the values do not arrive in the shader correctly.
But as soon as I just pass plain float values, everything works as expected. Can you look further into this? This error or “behaviour” drove me absolutely nuts. ^^

The example, with the problematic vector commented out:

META_CB_BEGIN(0, Data)
float4x4 InverseProjectionMatrix;
float4x4 ViewMatrix;
float4x4 InvertedViewMatrix;
float3 CameraPosition;
// float3 ActorPosition;	// -> values are not bound correctly
float ActorX; // Values are as expected
float ActorY;
float ActorZ;
META_CB_END

In Script:

[StructLayout(LayoutKind.Sequential)]
private struct Data
{
    public Matrix InverseProjectionMatrix;
    public Matrix ViewMatrix;
    public Matrix InvertedViewMatrix;
    public Vector3 CameraPosition;
    // public Vector3 ActorPosition;
    public float ActorX;
    public float ActorY;
    public float ActorZ;
}


var data = new Data();

/////

Matrix.Transpose(ref renderContext.View.View, out data.ViewMatrix);
Matrix.Transpose(ref renderContext.View.IV, out data.InvertedViewMatrix);
Matrix.Transpose(ref renderContext.View.IP, out data.InverseProjectionMatrix);
// data.ActorPosition = Actor.Position;
data.CameraPosition = Cam.Position;
data.ActorX = Actor.Position.X;
data.ActorY = Actor.Position.Y;
data.ActorZ = Actor.Position.Z;

/////

Ah, it might be because HLSL has different structure alignment rules: everything is aligned to vector4/float4. So your structure should be:

META_CB_BEGIN(0, Data)
float4x4 InverseProjectionMatrix;
float4x4 ViewMatrix;
float4x4 InvertedViewMatrix;
float3 CameraPosition;
float Padding0; // align up to full float4 (float3+float)
float3 ActorPosition;	
float Padding1; // align up to full float4 (float3+float)
float ActorX; 
float ActorY;
float ActorZ;
float Padding2; // align up to full float4 (3*float+float)
META_CB_END

Read more in the official HLSL packing-rules docs.

Holy shit. Yeah, that’s it! Thank you very much. That means I need to pad my data struct on the CPU side in the classic 16-byte fashion, or provide filler members on the GPU side to reach the data crammed in between.
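
For the record, the 16-byte packing from the answer above can be double-checked on the CPU side. A minimal C sketch (the mirrored struct is just an illustration of the layout, not Flax code):

```c
#include <assert.h>
#include <stddef.h>

/* CPU mirror of the padded constant buffer: every float3 gets an explicit
   padding float so the next member starts on a 16-byte (float4) boundary,
   matching the HLSL constant buffer packing rules. */
typedef struct {
    float InverseProjectionMatrix[16]; /* float4x4: 64 bytes */
    float ViewMatrix[16];
    float InvertedViewMatrix[16];
    float CameraPosition[3];
    float Padding0;                    /* pad float3 up to a full float4 */
    float ActorPosition[3];
    float Padding1;                    /* pad float3 up to a full float4 */
    float ActorX, ActorY, ActorZ;
    float Padding2;                    /* pad trailing floats to a float4 */
} Data;
```

Since every member is a 4-byte float, the C compiler inserts no hidden padding, so the member offsets (CameraPosition at 192, ActorPosition at 208, ActorX at 224, total size 240) line up exactly with the HLSL side, which is what the explicit padding members guarantee.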