My progress (pt.1)

I haven’t written a new entry in a long time now, since I have been very busy and had to look for new full-time employment. However, I still tried to put time into my “engine” and the game I want to build with it once all the necessary pieces are in place.

With this post I’d like to give a brief overview of what I have worked on over the past months. There is actually quite a lot to talk about, so I will most likely spread the changes across multiple blog posts.

Renderer

I started working on a basic rendering architecture. As a reference I used the resources presented on the Molecular Musings blog, where Stefan Reinalter uses a bucket approach: buckets are filled with draw calls, which are then sorted and submitted to the GPU. I haven’t integrated sorting yet since there isn’t a lot to draw just now. However, I plan to make use of my task system, which should help with multithreading the sort algorithm: each bucket will then be sorted in parallel.

// Build a 64-bit sort key for this draw.
auto Key = GenDrawKey64(true, Material.Index, RenderLayer::StaticGeom, 0.0f);

// Add an indexed draw command to the geometry bucket under that key.
auto DrawCmd = GeometryBucket.AddCommand<Draw::DrawIndexedData>(Key);
DrawCmd->IndexCount = Mesh->NumIndices;
DrawCmd->VertexArrayId = Mesh->VertexArrayId;
DrawCmd->Model = InModelMatrix;

The sample above shows how an indexed draw call can be submitted to a specific bucket, in this case a geometry bucket that renders into the G-buffer.
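GenDrawKey64 itself isn’t shown here, so as a rough sketch of what such key packing and the later bucket sort could look like — the field layout and bit widths below are my own assumption, not the engine’s actual format:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical 64-bit draw key layout (widths are an assumption):
// [63]      transparency flag
// [62..55]  render layer
// [54..31]  material index
// [30..0]   depth, quantized from [0, 1]
uint64_t GenDrawKey64(bool Transparent, uint32_t MaterialIdx,
                      uint32_t Layer, float Depth)
{
    uint64_t Key = 0;
    Key |= (uint64_t)(Transparent ? 1 : 0) << 63;
    Key |= ((uint64_t)Layer & 0xFF) << 55;
    Key |= ((uint64_t)MaterialIdx & 0xFFFFFF) << 31;
    Key |= (uint64_t)(Depth * 2147483647.0f) & 0x7FFFFFFF;
    return Key;
}

// Sorting a bucket then reduces to sorting (key, command) pairs;
// std::sort here, but each bucket could be handed to a task instead.
struct DrawEntry { uint64_t Key; void* Command; };

void SortBucket(std::vector<DrawEntry>& Entries)
{
    std::sort(Entries.begin(), Entries.end(),
              [](const DrawEntry& A, const DrawEntry& B) { return A.Key < B.Key; });
}
```

Because the layer sits in higher bits than the material index, sorting by key automatically groups draws by layer first, then by material, which is the whole point of the bucket approach.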

On top of this I implemented a deferred rendering pipeline, SSAO, FXAA and PBR (without reflections and GI at the moment), and I am working on shadow mapping.

DX11

Besides the implementation of the renderer I integrated DX11, too. This forced me to rethink the structure of the GDI layer, which communicates with DirectX or OpenGL. I took some clues from different engines and decided on an approach that should fit well with modern APIs such as DX12 and Vulkan, too.

The take-away here is that I use a large PipelineStateObject that contains all the resources needed for a pipeline state switch. Whenever objects need to be rendered differently from the others, a full pipeline switch is made.

// Create the debug PSO.
GfxPipelineStateObjectDesc desc = {};
desc.RasterizerState.CullMode = GfxCullMode::CullBack;
desc.RasterizerState.FillMode = GfxFillMode::FillSolid;
desc.RasterizerState.FrontCounterClockwise = 0;
desc.DepthStencilState.DepthEnable = true;
desc.DepthStencilState.DepthFunc = GfxComparisonFunc::LessEqual;
desc.DepthStencilState.DepthWriteMask = GfxDepthWriteMask::DepthWriteMaskAll;
desc.BlendState.RenderTarget[0].BlendEnable = false;
desc.BlendState.RenderTarget[0].RenderTargetWriteMask = 1 | 2 | 4 | 8; // write R, G, B and A
desc.InputLayout = gPositionNormUV2Layout;
desc.TopologyType = GfxTopologyType::TopologyPolygon;
desc.PixelShaderId = rs->FindShader(CommonShaderHandles::DebugPS)->GfxShaderId;
desc.VertexShaderId = rs->FindShader(CommonShaderHandles::DebugVS)->GfxShaderId;
CachePSO("Debug", gdi->CreatePSO(desc, nullptr));

The code above shows an example of the creation of a PSO. Calling gdi->CreatePSO creates all the state objects internally. PSOs can be saved in a map for easy lookup later on.

Switching between PSOs means calling the SetPipelineState method of the GDI class with the id of the PSO. Once the PSO is bound I can bind the shaders and commit the resources attached to the pipeline state, such as samplers and ShaderResourceViews (textures, render targets etc.).
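The CachePSO call from the snippet above could be backed by something as simple as a map from name to id. A minimal sketch — PsoId, the global map and FindPSO are my own placeholder names, not necessarily the engine’s actual API:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical PSO cache: maps a readable name to the id that
// gdi->CreatePSO returned when the PSO was built.
using PsoId = uint32_t;

static std::unordered_map<std::string, PsoId> gPsoCache;

void CachePSO(const std::string& Name, PsoId Id)
{
    gPsoCache[Name] = Id; // overwrites on re-creation, e.g. shader hot-reload
}

PsoId FindPSO(const std::string& Name)
{
    auto It = gPsoCache.find(Name);
    assert(It != gPsoCache.end() && "PSO was never cached");
    return It->second;
}

// A draw would then switch the full pipeline state before binding resources:
//   gdi->SetPipelineState(FindPSO("Debug"));
//   ...bind shaders, samplers, ShaderResourceViews...
```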

The current state of the engine.

As you can see in the screenshot, there is still no lighting at the moment. However, this will change in the upcoming days: I already had lighting up and running when I was still using OpenGL, so now it’s only a matter of porting it over to DX11 and HLSL.

Once this is implemented I’ll see how it performs with more draw calls and complex meshes. I still have to implement texture support, too.

Day 2 – Fixing the camera

This blog post will be a short one, but the problem I was facing took me some time to figure out, since I have forgotten so much of the basic math involved and I never had much experience with quaternions anyway.

Ever since I implemented a basic free-fly camera there has been an issue with its movement: when moving the camera and rotating it with the mouse, the new direction vectors of the camera weren’t calculated correctly.

Example: I tilted the camera downwards, but the camera moved upwards, depending on how the camera was oriented.

My transforms consist of three values: a position, a scale and a quaternion representing the orientation of the transform. Having the orientation allows me to calculate the direction vectors like this:

Forward = Transform->Rotation * glm::vec3(0, 0, 1);
Right = Transform->Rotation * glm::vec3(1, 0, 0);
Up = Transform->Rotation * glm::vec3(0, 1, 0);

In our game world, however, the camera isn’t moved around; the world is moved around the camera. Therefore we need to calculate the inverse of the orientation of the camera.

Quaternions that describe an orientation are defined to always be of unit length. This gives us an advantage, since we can simply conjugate the orientation instead of computing the full inverse.

We have a quaternion q which is defined as [w, v], where w is the scalar part of the quaternion and v is a vector. To check whether it’s a unit quaternion we can just check its magnitude.

Magnitude:
M(q) = sqrt(w² + x² + y² + z²)

Conjugate:
q* = [w, -v]

Inverse:
q⁻¹ = q* / M(q)²

A unit quaternion is given when M(q) = 1. This means that when the magnitude is 1 we can assume that q⁻¹ = q*. As a further optimization we can check M(q)² = 1 instead and skip the sqrt entirely, since sqrt(1) = 1.
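To convince myself of the q⁻¹ = q* identity, here is a tiny self-contained check with a hand-rolled quaternion type (the engine itself uses glm::quat; this is just the math spelled out):

```cpp
#include <cmath>

// Minimal quaternion q = [w, (x, y, z)], scalar part first.
struct Quat { float w, x, y, z; };

// Hamilton product: [w1, v1][w2, v2] = [w1*w2 - v1.v2, w1*v2 + w2*v1 + v1 x v2].
Quat Mul(const Quat& a, const Quat& b)
{
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

// Conjugate: negate the vector part only.
Quat Conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

float Magnitude(const Quat& q)
{
    return std::sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
}
```

For a unit quaternion (e.g. a 90° rotation about Y, q = [cos 45°, (0, sin 45°, 0)]), multiplying q by its conjugate yields the identity quaternion [1, (0, 0, 0)], which is exactly what multiplying by the inverse must do.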

So what I did in the end was to conjugate the rotation when calculating the view matrix of the camera:

InCamera->View = glm::mat4_cast(glm::conjugate(InTransform->Rotation));
InCamera->View = glm::translate(InCamera->View, -InTransform->Position);

Links about quaternions I found useful: