My progress (pt.1)

I haven't written a new entry for a long time now, since I have been very busy and had to look for new full-time employment. However, I still tried to put time into my "engine" and the game I want to build with it once I have all the necessary things in place.

With this post I'd like to give a brief overview of what I have worked on over the past months. There is actually quite a lot to talk about, so I will most likely cover the changes across multiple blog posts.


I started working on a basic rendering architecture. As a reference I used the resources presented on the Molecular Musings blog, where Stefan Reinalter uses a bucket approach: buckets are filled with draw calls, which are then sorted and submitted to the GPU. I haven't integrated sorting yet since there isn't a lot to draw just now. However, I plan to make use of my task system, which should help with multithreading the sort algorithm. Each bucket will then be sorted in parallel.
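Since neither the sorting nor the task system integration exists yet, here is a minimal sketch of the idea, using raw std::thread and a stand-in bucket type (both are assumptions; the engine would dispatch this through the task system instead):

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Stand-in bucket: draw commands reduced to their 64-bit sort keys.
using Bucket = std::vector<uint64_t>;

// Sort every bucket in parallel, one worker per bucket. In the real engine
// each sort would be a task submitted to the task system.
void SortBucketsParallel(std::vector<Bucket>& Buckets)
{
    std::vector<std::thread> Workers;
    Workers.reserve(Buckets.size());
    for (Bucket& B : Buckets)
        Workers.emplace_back([&B] { std::sort(B.begin(), B.end()); });
    for (std::thread& W : Workers)
        W.join();
}
```

Each bucket ends up fully ordered before submission, independently of the others.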

auto Key = GenDrawKey64(true, Material.Index, RenderLayer::StaticGeom, 0.0f);

auto DrawCmd = GeometryBucket.AddCommand<Draw::DrawIndexedData>(Key);
DrawCmd->IndexCount = Mesh->NumIndices;
DrawCmd->VertexArrayId = Mesh->VertexArrayId;
DrawCmd->Model = InModelMatrix;

The above sample shows how an indexed draw call can be submitted to a specific bucket, in this case a geometry bucket that renders to the G-buffer.
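GenDrawKey64 packs those arguments into a single sortable 64-bit key. My actual implementation isn't shown here, but the general idea can be sketched like this (the field widths and layout are illustrative assumptions, not the engine's real bit layout):

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical 64-bit draw key: [1 bit opaque | 8 bit layer | 23 bit material | 32 bit depth].
// Sorting keys in ascending order then groups by layer, then material, then depth.
uint64_t GenDrawKey64(bool bOpaque, uint32_t MaterialId, uint8_t Layer, float Depth)
{
    // Reinterpret the float depth as raw bits; for non-negative floats the
    // bit pattern sorts in the same order as the value itself.
    uint32_t DepthBits = 0;
    std::memcpy(&DepthBits, &Depth, sizeof(DepthBits));

    uint64_t Key = 0;
    Key |= (uint64_t)(bOpaque ? 1u : 0u) << 63;
    Key |= (uint64_t)Layer << 55;
    Key |= (uint64_t)(MaterialId & 0x7FFFFF) << 32;
    Key |= DepthBits;
    return Key;
}
```

A single integer comparison then sorts draws by layer first, material second, depth last.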

On top of this I implemented a deferred rendering pipeline, SSAO, FXAA and PBR (currently without reflections and GI), and I am working on shadow mapping.


Besides the implementation of the renderer I integrated DX11, too. This forced me to rethink the structure of the GDI layer, which communicates with DirectX or OpenGL. I took some clues from different engines and decided on an approach that should also fit well enough with modern APIs such as DX12 and Vulkan.

The takeaway here is that I use a large PipelineStateObject (PSO) that contains all the resources needed for a pipeline state switch. Whenever objects need to be rendered differently from the others, the renderer performs a full pipeline switch.

// Create DEBUG PSO!
GfxPipelineStateObjectDesc desc = {};
desc.RasterizerState.CullMode = GfxCullMode::CullBack;
desc.RasterizerState.FillMode = GfxFillMode::FillSolid;
desc.RasterizerState.FrontCounterClockwise = 0;    
desc.DepthStencilState.DepthEnable = true;
desc.DepthStencilState.DepthFunc = GfxComparisonFunc::LessEqual;
desc.DepthStencilState.DepthWriteMask = GfxDepthWriteMask::DepthWriteMaskAll;
desc.BlendState.RenderTarget[0].BlendEnable = FALSE;
desc.BlendState.RenderTarget[0].RenderTargetWriteMask = (((1 | 2) | 4) | 8);
desc.InputLayout = gPositionNormUV2Layout;
desc.TopologyType = GfxTopologyType::TopologyPolygon;
desc.PixelShaderId = rs->FindShader(CommonShaderHandles::DebugPS)->GfxShaderId;
desc.VertexShaderId = rs->FindShader(CommonShaderHandles::DebugVS)->GfxShaderId;
CachePSO("Debug", gdi->CreatePSO(desc, nullptr));

The code above shows an example of creating a PSO. Calling gdi->CreatePSO creates all the state objects internally. PSOs can be saved in a map for easy lookup later on.
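That lookup map can be very simple; here is a minimal sketch of what CachePSO and a matching FindPSO might look like (the map type and the invalid-id convention are my choices for illustration, not the engine's actual code):

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

using PSOId = uint32_t;
constexpr PSOId kInvalidPSO = UINT32_MAX;

// Name -> PSO id cache so pipeline states are created once up front and
// looked up cheaply at draw time.
std::unordered_map<std::string, PSOId> gPSOCache;

void CachePSO(const std::string& Name, PSOId Id)
{
    gPSOCache[Name] = Id;
}

PSOId FindPSO(const std::string& Name)
{
    auto It = gPSOCache.find(Name);
    return (It != gPSOCache.end()) ? It->second : kInvalidPSO;
}
```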

Switching between PSOs means calling the SetPipelineState method of the GDI class with the id of the PSO. Once the PSO is bound I can decide to bind the shaders and commit the resources bound to the pipeline state, such as Samplers and ShaderResourceViews (Textures, RenderTargets, etc.).
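Since a full pipeline switch is expensive, an obvious guard is to skip the call when the requested PSO is already bound. A small sketch of that idea (this state-tracking class is hypothetical, not the actual GDI code):

```cpp
#include <cstdint>

// Tracks the currently bound PSO id and skips redundant rebinds.
class GfxDeviceState
{
public:
    // Returns true if a real pipeline switch happened.
    bool SetPipelineState(uint32_t PsoId)
    {
        if (PsoId == BoundPSO)
            return false; // already bound, skip the expensive switch

        BoundPSO = PsoId;
        // ... backend would bind shaders, blend/depth/raster state here ...
        return true;
    }

private:
    uint32_t BoundPSO = UINT32_MAX; // nothing bound yet
};
```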

The current state of the engine.

As you can see in the screenshot, there is still no lighting at the moment. However, this will change in the upcoming days: I already had lighting up and running when I still used OpenGL, so now it's only a matter of porting it over to DX11 and HLSL.

Once this is implemented I’ll look and see how it performs with more draw calls and complex meshes. I still have to implement texture support, too.

Day 3, 4 – Fixing Mesh Imports

I have combined the last couple of days into one blog post because I have done multiple things since fixing the camera issues.

I found another issue with how I imported meshes, and fixing it took a lot longer than it should have (I guess this was mostly due to me not getting enough sleep, since I fixed the issue the next day within minutes).

First of all, what was the problem? When I implemented mesh loading using the assimp library I mostly just copy-pasted the code (I know, never do this) in order to get something up and running fast to test out other stuff. I never tested the mesh loading with different models, so I was pretty surprised when I got this:

When I actually should have gotten this, without the textures / colors of course.

The first thing I did was refactor and restructure my code. This changed nothing functionally, but it gave me a clearer overview of the code in place, which is good anyway.

After a lot of digging around in RenderDoc and looking at the index and vertex buffers, I knew the problem definitely was in how these were passed to the GPU. It turned out that I had simply mixed up the binding order of the index buffer and the vertex array, which caused the mesh to be incorrect.

As you can see from the screenshot now everything looks as it should look, except for the texture/material of course which I will implement soon.

Day 2 – Fixing the camera

This blog post will be a short one, but the problem I was facing took me some time to figure out, since I have forgotten so much of the basic math involved and never had much experience with quaternions anyway.

Ever since I implemented a basic free-fly camera there has been an issue with its movement: when moving and rotating with the mouse, the camera's new direction vectors weren't calculated correctly.

Example: I tilted the camera downwards, but the camera moved upwards, depending on how it was oriented.

My transforms consist of three values: a position, a scale, and a quaternion representing the orientation of the transform. Having the orientation allows me to calculate the direction vectors like this:

Forward = Transform->Rotation * glm::vec3(0, 0, 1);
Right = Transform->Rotation * glm::vec3(1, 0, 0);
Up = Transform->Rotation * glm::vec3(0, 1, 0);

In our game world, however, the camera isn't moved around; instead, the world is moved around the camera. Therefore we need to calculate the inverse of the camera's orientation.

Quaternions that describe an orientation are defined to always be of unit length, which gives us an advantage: we only need to conjugate the orientation.

We have a quaternion q defined by [w, v], where w is the scalar part of the quaternion and v is a vector. To check whether it's a unit quaternion, we can simply look at its magnitude.

M(q) = sqrt(w² + x² + y² + z²)

q* = [w, -v]

q⁻¹ = q* / M(q)²

A unit quaternion is given when M(q) = 1. This means that when the magnitude is 1 we can assume q⁻¹ = q*. There is another optimization we can do here: when checking, we can drop the sqrt, since w² + x² + y² + z² = 1 already implies M(q) = 1.
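This identity is easy to verify with a tiny hand-rolled quaternion type (not glm, just enough for the check):

```cpp
#include <cmath>

// Minimal quaternion [w, (x, y, z)], just enough to show that for a
// unit quaternion the inverse equals the conjugate.
struct Quat
{
    float w, x, y, z;

    float Magnitude() const { return std::sqrt(w * w + x * x + y * y + z * z); }
    Quat  Conjugate() const { return { w, -x, -y, -z }; }

    // General inverse: the conjugate divided by the squared magnitude.
    Quat Inverse() const
    {
        float M2 = w * w + x * x + y * y + z * z;
        return { w / M2, -x / M2, -y / M2, -z / M2 };
    }
};
```

For a unit quaternion M² is 1, so the division changes nothing and Inverse() and Conjugate() agree.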

So what I have done in the end is to conjugate the rotation when I calculate the view matrix of the camera:

InCamera->View = glm::mat4_cast(glm::conjugate(InTransform->Rotation));
InCamera->View = glm::translate(InCamera->View, -InTransform->Position);


Day 1 – Boilerplate

The last couple of blog posts have been about my ventures into game dev with Unity and Unreal rather than what I originally set out to do with this blog: document my journey of programming a small engine and games with it.

I wanted to do this to brush up on my C++ and get back into low-level programming. A couple of days ago I picked up where I left off (mostly due to me not having a lot of time, as stated in the last blog post).

As I said this is more of a documentation about me learning stuff rather than some sort of tutorial.

I started out with DX12 but quickly realised that a lot has changed since DX11. So much, in fact, that it's harder to even get something simple up and running.

I decided to ditch DX12 in favor of OpenGL in order to be able to focus on implementing things fast instead of having to deal with an API for the next couple of months.

Someday I definitely want to pick up DX12 again and implement it as a renderer API, but for now I will stay with OpenGL and might integrate DX11 later on.

So what I'm doing at the moment is moving all OpenGL-related code into a layer I call GDI (Graphics Device Interface). This layer contains all low-level graphics API calls. I mostly followed the tutorial series by The Cherno. The reason I added an abstraction between the renderer and the GDI is mostly me wanting to support multiple APIs.

auto Shader = ResourceTable::GetShader(CommonShaderHandles::Debug);
assert(Shader != nullptr);

auto ShaderResource = Shader->GetResource();
if (ShaderResource)
{
	ShaderResource->SetInt("ourTexture", 0);
	glBindTexture(GL_TEXTURE_2D, diffuseTexture->GDI_TextureId);

	// set the matrices
	ShaderResource->SetMat4("model", Model);
	ShaderResource->SetMat4("view", g_camera->GetView());
	ShaderResource->SetMat4("projection", g_camera->GetProjection());

	// ... draw call for the cube mesh happens here ...

	glBindTexture(GL_TEXTURE_2D, 0);
}


The above sample shows the current rendering code I refactored into the GDI layer, which renders a simple cube mesh. Once I have the renderer up and running this will of course be moved into individual render commands.

This is an example of how a vertex array with associated index buffer and vertex buffer can be created and be uploaded to the GPU.

GfxVertexArray* GridArray;
GridVertexArrayId = GfxGDI::Get()->CreateVertexArray(&GridArray);

GfxBufferLayout Layout =
{
	{ GfxShaderDataType::Float3, "aPos" },
	{ GfxShaderDataType::Float2, "aUV0" }
};

GfxVertexBuffer* VertexBuffer;
GfxGDI::Get()->CreateVertexBuffer(vertices, sizeof(vertices), &VertexBuffer);

GfxIndexBuffer* IndexBuffer;
GfxGDI::Get()->CreateIndexBuffer(indices, sizeof(indices) / sizeof(u16), &IndexBuffer);


This is mostly the same as the YouTube series suggests, but with some adjustments I decided to make.

Every graphics resource has a handle assigned to identify it. I decided to use handles a lot in order to decouple things. Handles, in my opinion, have the benefit that they can be passed around very easily without having to worry about null pointers, smart pointers, and the like.

struct DAWN_API GenericHandle
{
	u32 Index = 0;
	u32 Generation = 0;
	bool IsValid = false;
};

This is a simple handle that can be passed around and be used to access any Graphics Resource from the GDI by the set index.
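The Generation field is what makes stale handles detectable: when a slot is freed and reused, its generation is bumped, so old handles no longer match. A minimal sketch of that mechanism, using a hypothetical resource pool (not my actual GDI code):

```cpp
#include <cstdint>
#include <vector>

using u32 = uint32_t;

struct GenericHandle
{
    u32  Index = 0;
    u32  Generation = 0;
    bool IsValid = false;
};

// Minimal pool: each slot remembers its current generation. A handle is
// only valid if its generation still matches the slot's.
struct ResourcePool
{
    std::vector<u32> Generations;

    GenericHandle Allocate()
    {
        Generations.push_back(0);
        return { (u32)(Generations.size() - 1), 0, true };
    }

    // Bumping the generation invalidates every outstanding handle to the slot.
    void Free(const GenericHandle& H) { ++Generations[H.Index]; }

    bool IsLive(const GenericHandle& H) const
    {
        return H.IsValid && H.Index < Generations.size()
            && Generations[H.Index] == H.Generation;
    }
};
```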

Even though it's a lot of work to support multiple graphics APIs, I will still do it, since this is first and foremost a learning experience for me.

Unfortunately, I'm not quite happy yet with how the abstraction is handled, since I think adding a lot of abstraction layers a) adds more complexity to the code and b) has a certain performance impact on the overall application.

However, I have seen some engines do it in a similar fashion. For example, as far as I know and have seen from the source code, Unreal Engine 4 takes a comparable approach, though on a totally different scale of course.

IKL – Round Up

I have been very busy lately, which caused delays in the blog posts. There is still a lot of work ahead of me. However, I would like to give a roundup of what I did in the last couple of weeks.


We attended gamescom with Idle Knight Legends and had some great meetings with publishers and other game devs. This was mostly because we are looking for a potential partner that can help us with marketing, monetization strategies and other areas.

Unfortunately, this year we didn’t have any time to see the games. I would have loved to see the Cyberpunk 2077 presentation, but the lines are just growing and growing each year.


Most of the things I worked on have been user-interface related. A larger feature I integrated, and am still integrating, is how the game reacts when the player reaches the inventory limits.

The player's inventory is limited to 10 items (this might change later), which can be extended using in-game currency. So once the player runs out of space, we somehow need to communicate this and give them an option for what to do with the received item.

The initial flow looked as follows: show a window with the option to either reject the item or swap it with an existing one; if the player wants to swap, open the inventory; the player selects an item and disenchants it; close the inventory and return to the loot chest. This seemed rather tedious, which caused me to come up with a, in my opinion, better solution.

As you can see from the screenshot, there is a storage chest placed at the castle. This storage saves all collected items and miners when there is no space left in their respective inventories, so the player can decide to collect them later once they have more space.

Besides the storage chest, I started integrating an improved UI that gives players the option to compare the currently equipped item with the new one and make their decision based on that information.

Once the inventory is full, the player will have the option to either store the item or disenchant it. In the sample screenshot there is of course no real difference between the two items stat-wise, but it should get the point across anyway. I will add an option to swap items directly, too.

Improving Feedback

We decided to add the name of the miner and their quality to the portrait in order to give the player better feedback about which miner is assigned to which lane. In my opinion this greatly improved the feel of the game. We would also like to give miners different color schemes, which, if it works out, might allow us to integrate a dye system for armors, too.

UE4 Dev 04 – Implementing Death

Yesterday I thought it would be great to implement a way for the Combat System to be notified about the death of participants while a fight is going on. While doing this I implemented the death animation for my current prototype enemy, too.

The one thing still missing is the notification about the death, which is ironic since that is the essential part, but it is just a matter of calling the method I added to the Combat System. I haven't done it yet since I'm still unsure which layer I want to give control over this.

A video of the game showing the death implementation in action (heavy WIP).

The death of a combat participant is determined by the character’s health attribute that is attached to it. As mentioned before in another blog post I used GAS (GameplayAbilitySystem) to add attributes to characters in the world.

To easily figure out if the character is still alive I added an IsAlive() method to the base character class in my RPG plugin layer. It is straightforward and easy to use.

float ARPGCharacterBase::GetHealth() const
{
	return this->AttributeSet->GetHealth();
}

bool ARPGCharacterBase::IsAlive() const
{
	return this->GetHealth() > 0.0f;
}

This method is nothing special and it only checks if the health is greater than zero as seen in the code sample.

This method can be called within the animation blueprint of the enemy, which in turn will play the death animation and stay in that state until it changes.

Animation Blueprint of the Enemy Character. This checks if the character is still alive.

As you can see, this is pretty easy: it only sets the IsDead variable based on the result of the IsAlive method.

This is the very simple animation graph.

So once the IsDead variable is set to true, the graph transitions into the Death state. This is very simple at the moment, since the enemy cannot do a lot yet.

This of course isn't related to the combat system at all. In order to have a hook I can use to notify the Combat System, I added a BlueprintNativeEvent OnDeath to the base character class that will be called once the health reaches zero.

void OnDeath();

This can of course be used in Blueprints or overridden in C++ with custom logic. As you might have imagined, this will be used to notify the CombatManager about the death of a potential CombatParticipant.

The following method can be called to notify the system about the character’s death.

void UCombatManager::NotifyDeath(ARPGCharacterBase* InCharacter)
{
	if (!bIsInCombat || !InCharacter)
		return;

	auto Participant = this->Participants.FindByPredicate([InCharacter](FCombatParticipant& Participant) { return Participant.Character == InCharacter; });
	if (!Participant)
		return;

	if (Participant->bIsDead)
		return;

	Participant->bIsDead = true;
	bool bEndSequence = false;
	bool bPlayerWon = false;

	if (Participant->bIsEnemy)
	{
		RoundInfo.DeadEnemies++;
		if (RoundInfo.DeadEnemies >= RoundInfo.TotalEnemies)
		{
			bEndSequence = true;
			bPlayerWon = true;
		}
	}
	else
	{
		RoundInfo.DeadPlayers++;
		if (RoundInfo.DeadPlayers >= RoundInfo.TotalPlayerParty)
		{
			bEndSequence = true;
		}
	}

	// ... end the combat sequence based on bEndSequence / bPlayerWon ...
}

This is the method used to determine whether or not the "sequence" is over after the death of the passed character.

UE4 Dev 03 – Integrating GAS (Part 2)

The last part of this blog post series contained general information about the GameplayAbility System in UE4. This part presents how I use the system in my project.

Please be aware that I started working with GAS a week ago and I'm still learning a lot each day. I might change my view about some things in a couple of weeks.

The Project

While digging through the unreal engine marketplace I stumbled upon the isometric cube world asset package.

This sparked my imagination and I settled on implementing a turn-based RPG with small worlds similar to the ones you can see in the screenshot. Worlds are seamlessly connected and the player will have to fight, solve puzzles and complete quests to progress further in the game.

Let me explain how I structured the project and what I'm using GAS for. I split the project into multiple plugins, one of them containing RPG-specific classes. This one also contains the GAS base classes I copied over from the ActionRPG sample.

The second plugin implements the turn-based combat. It contains several classes that can be extended in Blueprints. I structured the plugin in a way that it can be reused for other projects, too, if I so desire.

As you might have already figured out, I use GAS for different kinds of combat commands such as attacks, using items and so on. Each time I need to change an attribute such as health, I apply a Gameplay Effect.

Using Combat Commands and GAS

Combat Commands inherit from UObject and can be used to execute code when activated, similar to the UGameplayAbility class. These commands can be implemented via Blueprints. Once attached to a character actor, they can be executed if there are enough AP (Action Points) left for that player.

This sounds a lot like Gameplay Abilities: they too can be implemented via Blueprints and executed by a UAbilitySystemComponent attached to an actor in the world. So why not use Gameplay Abilities as Combat Commands directly? This would probably work perfectly fine, but I chose not to because I want to keep the freedom of choice and provide a rather generic approach to turn-based combat.

I don't want to be forced to use GAS in every project in which I might end up using the Turn Based Combat system I wrote, so I'm happy with how it works at this point of development. Of course this is more of a fun project and a learning experience; I'd have decided otherwise in a more professional project.

Blueprint Graph in UE4
Blueprint Graph of a specialized combat command activating an ability from GAS.

As seen in the blueprint graph, once the command is applied it tries to execute an ability by its class. The class of the ability comes from the associated command data. Command Data is a data asset that holds data such as the action point cost and which combat command class to use, among other things.

The most interesting bit is the function being called: TryActivateAbilityByClass. There is an alternative called TryActivateAbilitiesByTag, which takes a gameplay tag associated with abilities and tries to execute them.

/**
 * Attempts to activate every gameplay ability that matches the given tag and DoesAbilitySatisfyTagRequirements().
 * Returns true if anything attempts to activate. Can activate more than one ability and the ability may fail later.
 * If bAllowRemoteActivation is true, it will remotely activate local/server abilities, if false it will only try to locally activate abilities.
 */
UFUNCTION(BlueprintCallable, Category = "Abilities")
bool TryActivateAbilitiesByTag(const FGameplayTagContainer& GameplayTagContainer, bool bAllowRemoteActivation = true);

/**
 * Attempts to activate the ability that is passed in. This will check costs and requirements before doing so.
 * Returns true if it thinks it activated, but it may return false positives due to failure later in activation.
 * If bAllowRemoteActivation is true, it will remotely activate local/server abilities, if false it will only try to locally activate the ability.
 */
UFUNCTION(BlueprintCallable, Category = "Abilities")
bool TryActivateAbilityByClass(TSubclassOf<UGameplayAbility> InAbilityToActivate, bool bAllowRemoteActivation = true);

/**
 * Attempts to activate the given ability, will check costs and requirements before doing so.
 * Returns true if it thinks it activated, but it may return false positives due to failure later in activation.
 * If bAllowRemoteActivation is true, it will remotely activate local/server abilities, if false it will only try to locally activate the ability.
 */
bool TryActivateAbility(FGameplayAbilitySpecHandle AbilityToActivate, bool bAllowRemoteActivation = true);

These methods are available in the mentioned UAbilitySystemComponent that is attached to the executing actor (in my case a character base class in the RPG Plugin).

The first two variants, ByClass and ByTag, can be called from Blueprints to activate an ability, whereas the last one, TryActivateAbility, is only available in C++ and takes an FGameplayAbilitySpecHandle to the ability.

This is the time to talk about another very important aspect. From my understanding, in order to be able to activate an ability you first have to "give" it to the actor that tries to execute it.

In other words you need to call the GiveAbility method in the UAbilitySystemComponent.

FGameplayAbilitySpecHandle GiveAbility(const FGameplayAbilitySpec& AbilitySpec);

As far as I know there is no possibility to call this function from Blueprints so you have to use C++. As you can see the method returns a handle that can later be passed to activate the ability.

The way I register abilities at the moment is to iterate over all attached combat commands, check if they are of the type UAbilityCombatCommandData (a custom class derived from the CombatCommandData class from my Turn Based Combat plugin), and register the ability class with the corresponding AbilitySystemComponent.

if (CombatCommands.Num() > 0)
{
	auto AbilitySystem = GetAbilitySystemComponent();
	if (AbilitySystem)
	{
		for (auto Command : CombatCommands)
		{
			auto AbilityData = Cast<UAbilityCombatCommandData>(Command);
			if (AbilityData)
			{
				AbilitySystem->GiveAbility(FGameplayAbilitySpec(AbilityData->Ability, this->GetCharacterLevel(), INDEX_NONE, this));
			}
		}
	}
}


The above code currently lives in the BeginPlay method of the player character. I will move it to another method once I've implemented this in a more generic way for multiple character types.

This is it for now… there is still a lot to write about in another blog post highlighting GameplayEffects and other stuff. I might also dive a bit more into how I built the Turn Based Combat Plugin in another blog post!

UE4 Dev 02 – Integrating GAS (part 1)

This blog post will be a bit longer, since there is a lot of ground to cover due to the complexity of the GameplayAbility System built into Unreal Engine 4. I will split it into multiple parts, released over the next couple of days.

In short: GAS is a sophisticated ability system that can be used for RPGs or, for example, MOBAs (or any other game that uses abilities). It is fully featured and can get quite complex, covering status effects, passive effects, basic attacks or simple things like using an item and activating some effect. There are several tutorials and talks about it on YouTube; I will link to them in order to keep this post shorter.

As far as I know, the one caveat is that for some parts you'll have to use C++ to integrate GAS. As a starting point I would recommend taking a deep look at the ActionRPG sample project, which makes great use of it. I copied a lot of its base classes over to my system as a starting point.

The most important classes to know


UAbilitySystemComponent

This is the starting point that is used to execute GameplayAbilities and apply GameplayEffects. I attached this class to the player and enemies.


UAttributeSet

Use this to give characters in the game world different kinds of attributes such as Health, Stamina, Strength and so on. Whatever you can think of you can add here. However, only float types are supported at this time!


UGameplayAbility

This class functions as a base class and can be overridden in Blueprints. This is where you integrate the actual ability, for example one that plays an animation montage and reacts to a gameplay event, which in turn affects the attributes of the set target.


UGameplayEffect

Gameplay Effects can be applied to alter attributes contained in the attached UAttributeSet of the UAbilitySystemComponent. This is very flexible and is controlled by creating Data Assets that inherit from UGameplayEffect. These offer different kinds of settings, which we will cover in the next blog post.

These classes are the most important ones for now to grasp the concept of the system. The attribute set defines all the base attributes that will control the characters state (health, stamina, damage output and so on).

The UAttributeSet is modified by UGameplayEffects, which in turn are applied by activating a UGameplayAbility.

All of these are managed and controlled by the UAbilitySystemComponent that is attached to each character that should have abilities or is affected by these.

I’m of course still learning about how to effectively use the system as I implement it into my project.

The next part will highlight how this system is being used in my project and how these parts are connected by different data assets and function calls.

IKL – More UI work.

As with a lot of mobile games – Idle Knight Legends gives the player the ability to boost certain aspects of the game. This feature can be accessed by a big button in the middle of our bottom navigation.

The middle button can be used by the player to access the boost functionality.

Clicking the button will open up a window allowing the player to boost the game by watching a video ad. We will never force the player to watch an ad and if they decide to watch one we will always reward the player for it.

The boost window.

As you can see from the screenshot above the player can activate up to three gameplay effects that will boost certain aspects of the game. From top to bottom: online income, offline income and attack bonus. We might adjust these in the future.

The player can fill the timer up to six hours, each ad view granting an additional hour of boost. There is still some polishing and translation work to do on the window.
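The accumulation rule is simple: each ad view adds one hour, clamped to the six-hour cap. As a sketch (the function and constant names are illustrative, not our actual game code):

```cpp
#include <algorithm>

// One ad view grants one hour of boost time, capped at six hours total.
constexpr int kBoostPerAdSeconds = 60 * 60;
constexpr int kBoostCapSeconds = 6 * 60 * 60;

int AddBoostTime(int CurrentSeconds)
{
    return std::min(CurrentSeconds + kBoostPerAdSeconds, kBoostCapSeconds);
}
```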

Improving the boost button

The active boost button as of now. We might add additional effects later.

To prevent the player having to open the window each time he/she wants to know how much time is still left we decided to add a timer to the boost button in the bottom navigation.

This is what I've been working on today, among other things. So let's dive into how we approached it.

To show the remaining time we simply added a panel above the boost button. To support this we changed the appearance of the button, adding a glow that functions as a progress bar and decreases with the remaining boost time.

To support the active state we added a fade to the thin line in the middle of the button and might add a shine later on.

I make a lot of use of the DOTween library and recently started using an open-source UI library called UIEffects, which extends the built-in UI of Unity with some nice effects such as a shine, improved shadows or dissolving.

Both DOTween and UIEffects are freely available online.

IKL – Reimplementing EXP

Last week we had a meeting about what we still need to do before we can release the beta version of the game. There is of course still a lot of polish left, as stated in the last blog post. However, today I want to write about why we decided to reintroduce EXP in the game.

We implemented an experience system a while ago in the game but ultimately decided to disable it due to balancing issues with how you progress in the game.

Since then a lot has changed in the game and we rebuilt it nearly entirely which led to the decision to reintegrate the experience / level system in order to have a clear progression for the player.

There is another reason why we think a level system is a great addition to the game. In the last blog post I talked about runes. We decided that runes will be awarded when the player reaches specific levels, which gives us full control over when the player receives them.

Level System UI integration in Idle Knight Legends.

Besides using the level as a measure of when to reward the player with runes, we will use the level in the highscore list, too. Before adding EXP we used the accumulated prestige points to rank players. We think that using the level is easier to understand.

Experience points can be earned from different kinds of gameplay actions such as opening a reward chest, using prestige, or from other rewards.

From an implementation standpoint it was pretty easy to re-implement the level system, since I just had to remove the obsolete attribute and re-add the function calls in the subsystems. There is still some work to do with the new systems we added, but it's nothing more than adding something along these lines:

Observer.Trigger(CommandType.Player_GiveExp, 12.0f);
An icon that is used to show the player that they received EXP.

There is still some UI work to do, such as adding the icon shown in the screenshot above and adding animations.