Using ‘Hierarchical Levels Of Detail’ to Optimize Game Environments

Hierarchical Levels Of Detail, or HLOD, is a method of combining multiple static meshes into a single, simpler proxy mesh that is displayed when far from the camera. It then transitions back to the original, separate higher-detail meshes at close distances.

These merged proxy meshes are much less resource-intensive to render and can drastically reduce draw calls and triangle counts for scenes where there are a lot of different objects on the screen at once.

You can see in the above example how enabling the HLOD system reduces triangles and draw calls without having to cull any objects from view. The HLOD version actually shows more detail in this example, as meshes are distance-culled when HLOD is disabled.
The HLOD system works by assigning static meshes to groups called clusters and then building a proxy mesh for each of these clusters. These proxy meshes will then be rendered instead of the original group of static meshes at distances far from the camera.
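Conceptually, the runtime swap works like the sketch below (illustrative Python only; the field names and the single distance test are assumptions, not the engine's actual implementation):

```python
# Illustrative sketch of HLOD selection (not Unreal API): for each cluster,
# render one merged proxy when far away, the original meshes when close.
def meshes_to_render(clusters, camera_pos, transition_distance):
    draw_list = []
    for cluster in clusters:
        dx = cluster["center"][0] - camera_pos[0]
        dy = cluster["center"][1] - camera_pos[1]
        dz = cluster["center"][2] - camera_pos[2]
        distance = (dx * dx + dy * dy + dz * dz) ** 0.5
        if distance >= transition_distance:
            draw_list.append(cluster["proxy"])      # one draw call per cluster
        else:
            draw_list.extend(cluster["originals"])  # one draw call per mesh
    return draw_list
```

The win is the first branch: past the transition distance, a whole cluster costs one draw call instead of one per mesh.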

Setting up HLOD for your level

HLOD must be enabled in the world settings of your level.

To set it up after enabling this, you’ll need to access the HLOD Outliner window. You can access this from ‘Window>Hierarchical LOD Outliner’.

The HLOD Outliner

The HLOD Outliner shows the HLOD settings and controls for your level.
It shows the LOD levels, which let you have multiple sets of clusters with different settings. For example, LOD 0 might have small clusters with lightly simplified proxy meshes, while LOD 2 has large clusters with heavily reduced triangle counts. Each level’s settings can be tweaked individually and will affect all clusters within that LOD level. You can manually decide how many LOD levels to include in your HLOD system. The outliner also shows the HLOD clusters and all of the original meshes within each cluster.

Generating Clusters

There are three main ways to generate HLOD clusters:


You can drag objects directly from the world outliner into the desired LOD level to create a cluster and then drag other objects into that cluster to add them to it. Be warned – clicking ‘generate clusters’ will override any manual clustering you’ve already done.


To automatically generate clusters, just click ‘Generate Clusters’ in the outliner. This will generate clusters according to the cluster generation settings for each LOD, which can be accessed under ‘LODSystem>Hierarchical LODSetup>HLOD Level x>Cluster Generation Settings’.

Here you can dictate the average size of clusters at this LOD level, how much of each cluster to try and fill with meshes, and the minimum number of actors needed to build a cluster.
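The roles of those settings can be sketched roughly like this (plain Python with 1D positions for brevity; the real generation algorithm is more sophisticated, and the fill setting is omitted – this only illustrates how the desired radius and minimum actor count shape the result):

```python
# Hypothetical sketch of cluster generation: group nearby actors within the
# desired bound radius, and only keep groups with enough actors in them.
def generate_clusters(actors, desired_radius, min_actors):
    clusters, current = [], []
    for actor in sorted(actors, key=lambda a: a["pos"]):
        if not current:
            current = [actor]
            continue
        anchor = current[0]["pos"]
        if abs(actor["pos"] - anchor) <= desired_radius * 2:
            current.append(actor)              # fits inside the cluster bounds
        else:
            if len(current) >= min_actors:     # too few actors = no cluster
                clusters.append(current)
            current = [actor]
    if len(current) >= min_actors:
        clusters.append(current)
    return clusters
```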


It is possible to refine the generation process by using HierarchicalLODVolumes. These are volumes that tell the generation process to bundle all the meshes enclosed by the volume into one cluster. They can either be manually placed and resized in your scene, or you can generate one around an existing HLOD cluster.

This can be a method to retain manual cluster control over certain areas while using automatic cluster generation for the rest of your scene. You can exclude certain HLOD levels from the volume’s detail panels to make sure it is used for the specific level you need.

Generating Proxy Meshes

Once the clusters are set up for your scene, you can start generating the proxy meshes by clicking ‘Generate Proxy Meshes’.

You can tweak the mesh generation settings for each HLOD level in ‘LODSystem>Hierarchical LODSetup>HLOD Level x >Mesh Generation Settings’. Here you can set the draw distance and decide whether or not to simplify the mesh.

After generating the proxy meshes, each one can be manually tweaked in the same way as a normal static mesh, including further reducing triangles and setting up LODs for each mesh.

Close up HLODs For Reducing Draw Calls On Blueprint Objects

The performance of VR games is often bottlenecked by the number of draw calls, and HLOD proxy meshes are an effective way of reducing this number. Player-interactable blueprint objects have more draw calls than a standard static mesh and can get very expensive when there are many of them on the screen at once. This method minimizes that cost by replacing these blueprint objects with high-detail HLOD proxy meshes until the player is close enough to interact with them.

Set up an LOD0 level in the HLOD Outliner. Set ‘Desired Bound Radius’ to the minimum value, ‘Min Number Of Actors to Build’ to 1, and ‘Override Draw Distance’ to an appropriate distance for swapping between the proxy mesh and the blueprint actor. These settings will stop the automatic cluster generation process from adding any meshes to this LOD level except for those within HierarchicalLODVolumes. The next step is to build HierarchicalLODVolumes around each of the blueprint actors that you’d like to be affected by this system.

After these clusters are set up, you can create extra HLOD levels on top of this one with normal settings, to use the standard HLOD system as well. Once everything is set up, hit ‘Generate Clusters’. The LOD0 proxy meshes look almost identical to the blueprint actors, but with far fewer draw calls.

Once the player comes within the draw distance defined earlier, the proxy mesh will be replaced with the blueprint actor. You may need to check the draw distances on the blueprint actor to make sure the two ranges aren’t overlapping by too much, or you may end up rendering both and lose the benefit of using this method.
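The overlap to watch for is a simple range check (illustrative Python; the parameter names are descriptive, not engine properties):

```python
# Illustrative check: the proxy renders beyond its override draw distance,
# while the blueprint actor renders up to its own max draw distance. If those
# thresholds overlap, both versions render over that band of camera distances.
def overlap_band(proxy_draw_distance, actor_max_draw_distance):
    """Size of the camera-distance range where both proxy and actor render."""
    return max(0.0, actor_max_draw_distance - proxy_draw_distance)
```

A small overlap gives a clean transition; a large one means paying for both versions at once.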

Clearing Space On Perforce

I reached that dreaded moment where our perforce server hit its maximum limit.

Even when trying to obliterate some directories from our depot, I was getting hit with the reply:

The filesystem ‘P4ROOT’ has only 2G free, but the server configuration requires at least 2G available.

Using p4 configure to change this free-space requirement did not work for P4ROOT.

What I ended up having to do was log in as the root user of my Digital Ocean hosted Linux server, navigate up to the home directory with cd .., then navigate into the perforce folder and manually delete one of the smaller projects from the archives folder.

After doing this, I was able to obliterate a larger project and verify the new space on the p4 server with the p4 diskspace command.
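For reference, the commands involved look roughly like this (the depot path is a placeholder, and `filesys.P4ROOT.min` is the free-space configurable behind the error above; adjusting it may or may not help depending on your setup). `p4 obliterate` permanently destroys file history, so preview without `-y` first:

```shell
# Preview what would be removed (without -y nothing is actually deleted)
p4 obliterate //depot/OldProject/...

# Actually remove the files and their archived revisions
p4 obliterate -y //depot/OldProject/...

# The free-space requirement is a configurable; lowering it (e.g. to 1G)
# may help in some setups, though it did not work for me here
p4 configure set filesys.P4ROOT.min=1G

# Verify the reclaimed space on the server
p4 diskspace
```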

At some point later I will need to look into setting up perforce to delete older archived versions of files.

TargetSpawner – scripting spawning patterns of actors

TargetSpawner is a set-up for spawning scripted patterns of actors, for use in challenges or other gameplay. TargetSpawner is an abstraction – it doesn’t specify the type of actor that is spawned and could be used to make various types of gameplay with a common scripting format.

BP_TargetSpawner is the blueprint class used for this, but you’ll also need the struct STargetTrackRow, as the basis for a data table. In the project depot you’ll also find an example Excel file that can be used as a template.

Below are two sections – “How does it work” and a tutorial for setting up an activity.

How does it work

Inside the struct and data table

The struct STargetTrackRow simply contains eight strings representing four pairs of 2D co-ordinates.

A data table based on STargetTrackRow is one or more sets of these eight strings. Each row represents a time step. Each pair of columns (e.g. Obj0_X and Obj0_Y) represents the co-ordinates of an object to be spawned at that time.

The row name is not important. In the example above, the rows are named as 0.5s intervals, but they could have any unique name and don’t affect the tempo at all. Also in the example above, there are empty rows from 8.5s (row 18) onwards – they make no difference and could be deleted.
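Reading one row therefore boils down to something like this (illustrative Python, not engine code; the column names follow STargetTrackRow):

```python
# Illustrative sketch: each row of the table is one time step; each pair of
# columns (Obj0_X/Obj0_Y ... Obj3_X/Obj3_Y) is one potential spawn.
def spawns_for_row(row):
    """Return (object_index, x, y) for every pair of columns that is filled in."""
    spawns = []
    for i in range(4):
        x, y = row.get(f"Obj{i}_X", ""), row.get(f"Obj{i}_Y", "")
        if x != "" and y != "":
            spawns.append((i, float(x), float(y)))
    return spawns
```

Empty cells simply produce no spawn at that time step, which is how silence/rests in the pattern work.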

Inside BP_TargetSpawner


BP_TargetSpawner has seven variables.

  • SpawnAreaSize is an editable Vector2D that defines the height and width of the spawn area.
  • SpawnAreaHeight is an editable float that defines how high the spawn area sits above the origin of the BP_TargetSpawner actor. This means that BP_TargetSpawner can easily be placed at ground level with the spawn area sitting above it, e.g. from knee-height to head-height.
  • TargetObjects is an editable array of Actor classes. It is used to define the one to four types of actor objects that this TargetSpawner can spawn. The four types correspond to the four pairs of columns in the data table, in order. Anything other than the first four will be ignored.
  • TargetTracks is an editable array of data tables. This defines which data table to use for this TargetSpawner. If there is more than one listed, they can be selected by index or picked at random when the scripting starts.
  • TrackTimeStep is an editable float that is used to specify the tempo of the script. It is a duration in seconds, so 0.5 will give you two beats per second, or two data table rows per second.
  • CurrentTable and CurrentRow are non-editable ints that are used for iteration.


On construct, BP_TargetSpawner uses the parameters to size and position the spawn area. Scripting begins when StartTrack is executed.

StartTrack takes an int as a parameter, which specifies which index of TargetTracks will be played. If it is less than zero (default = -1) then it will pick one at random.

SpawnObject spawns an actor class indexed in TargetObjects at a relative position within the spawn area. It is called by PlayNextRow. PlayNextRow also makes use of GetCurrentTrack, GetCurrentRow and GetCurrentTrackLength, which are used to read the specified data table and provide the co-ordinates. When finished, PlayNextRow calls the event WaitForNextRow, which re-calls PlayNextRow after a delay, unless the end of the table has been reached.
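The playback loop can be sketched like this (plain Python; the names mirror the Blueprint functions, but the timing is precomputed here rather than driven by WaitForNextRow delays):

```python
# Illustrative sketch of the row-playback loop: one table row per time step,
# one potential spawn per pair of columns.
def play_track(rows, time_step):
    """Return (spawn_time, object_index, x, y) for each target in the table."""
    schedule = []
    for current_row, row in enumerate(rows):
        for i in range(4):
            x, y = row.get(f"Obj{i}_X"), row.get(f"Obj{i}_Y")
            if x and y:
                # WaitForNextRow delays by time_step between rows
                schedule.append((current_row * time_step, i, float(x), float(y)))
    return schedule
```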


Tutorial – setting up an activity

Here we will walk through the steps of adding a TargetSpawner for an activity you are creating.

Target objects

First, you will need some actor classes to use as your target actors. All the movement, collision and behaviour of the targets is handled by the target actors, in order for TargetSpawner to be more abstract/versatile. These could be any object at all as long as it is derived from Actor.

A typical use case would be to create an actor class that has a visible mesh or sprite and moves, either towards the player, or to be dodged. Another use case might be an actor that is a target to be punched or hit in some other way, that appears at a location and then disappears. You can specify up to four of these classes for use with a single BP_TargetSpawner.


Add a BP_TargetSpawner to your level.

(If you are making an actor that contains a whole challenge or other activity, you could add the BP_TargetSpawner as a child actor instead.)

Place the actor (origin) at ground height. The magenta arrow represents the forward vector of the spawn area. The target actors will be spawned with the same rotation as this arrow (in world-space). In the details panel, adjust the SpawnAreaSize and SpawnAreaHeight to suit your gameplay.

SpawnAreaSize is a radius (half-extent). The default values shown above would create a spawn area that is 2m wide and 1.5m tall. It would sit 0.5m off the ground, extending from 0.5m to 2m.

Next add your target object classes to TargetObjects. You’ll need at least one and you have up to four. The order is significant, as it will match the columns in your table.

Unless you already have a data table to use, you can leave TargetTracks empty for now, but TrackTimeStep can be set to the tempo of your scripted spawning. This is a duration in seconds, so 0.5 will give you two beats per second – in other words, two data table rows per second. Smaller numbers give you finer control over timing, but a longer table. Bigger numbers give you a smaller table with less fine timing control, but will also help you make more rhythmic patterns.

Making a data table

You can use either the Unreal data table editor or a spreadsheet application (or even a text editor) to add the actual scripted timing, but first you’ll want to create the data table itself.

In the Content Browser, right click and make a new Data Table asset (in the Miscellaneous sub-menu). You’ll be prompted to select a struct to use as the basis for your table rows. You must select STargetTrackRow.

Editing in Unreal

Hit the Add button on the toolbar to add new rows. You can name your rows by double-clicking the RowName column. You might like to name your rows to follow your chosen timing, or you can skip naming them entirely.

Once you’ve created rows, you can edit them in the row editor panel.

Editing in Excel

Open TargetTrackTest.xlsx or TargetTrackTest.csv, as a template, and save as a new file.

Leave row 1 as it is. You can change the row “names” in column A however – they can be anything as long as they are unique. In the template they are half-second timings, but you might use a different tempo.

In either method, edit your rows so that each row is a time-step, and each pair of columns is a location within the spawn area. Type co-ordinates to create your spawn pattern.

The co-ordinates are relative to the spawn area. 0,0 is the centre. 1,1 is the bottom-right and -1,-1 is the top-left. You can add as many rows as you like, but you cannot add more columns.
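Putting the co-ordinate system and the half-extent sizing together, the mapping from a table entry to a local offset might look like this (illustrative Python; the default values of 100, 75 and 50 are inferred from the 2m-wide, 1.5m-tall, 0.5m-off-the-ground example above, in centimetres):

```python
# Illustrative mapping from the table's normalized co-ordinates to a local
# offset. SpawnAreaSize is a half-extent, and the bottom edge of the area
# sits SpawnAreaHeight above the actor's origin.
def to_local_offset(x, y, half_width, half_height, area_height):
    """(x, y) in [-1, 1]: (0,0) = centre, (1,1) = bottom-right, (-1,-1) = top-left."""
    lateral = x * half_width                          # +x is to the right
    vertical = area_height + (1.0 - y) * half_height  # +y is downwards
    return lateral, vertical
```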

In Unreal…

Once your pattern is complete, you can just hit save and it’s done. If you need to make changes, you can just edit and re-save.

In Excel…

Once your pattern is complete, save your spreadsheet as a CSV file.

Return to the Unreal editor and open your new Data Table asset. Hit the Re-Import button on the toolbar and browse to your CSV file. Your spreadsheet will now be imported to the data table.

If you make any changes to your spreadsheet, be sure to save your CSV and hit the Re-Import button again.

Now that you have a data table in Unreal containing your scripted spawning, the last step is to select your BP_TargetSpawner in the world and add your table to its TargetTracks parameter.

To start the BP_TargetSpawner, just call the StartTrack function, either in the level blueprint or by some other scripted means.

Handprint highlight material and component

BP_HandprintComponent, MPC_Handprint and MF_Handprint are the main elements of the hand location highlighting system.


BP_HandprintComponent is a scene component that should be added as a child to each hand of the player pawn. It has an editable bool parameter called IsLeft, which should be true for the left hand and false for the right.

All this component does is take its world location and store it in MPC_Handprint.


MPC_Handprint is a material parameter collection containing seven scalar and four vector parameters. These values can be set by BP_HandprintComponent or any other script, and are used by MF_Handprint.

  • DistanceMin and DistanceMax specify the range of distance from the hand to the surface over which the highlight is visible
  • Size is a multiplier for the size of the highlight. The visible size is also affected by the distance to the surface.
  • LeftAlpha and RightAlpha are used to lerp between opacity and transparency for each hand, or to hide the highlights altogether.
  • LeftFill and RightFill are used to lerp or switch between the two masks: T_HandprintMask_0 at a value of zero and T_HandprintMask_1 at a value of one. This would be used, for example, by other scripts to show when the hand is in range of a target.
  • LeftColour and RightColour specify the colour of each hand’s highlight. The alpha channel is not used.
  • LeftPos and RightPos are updated by BP_HandprintComponent with the world location of each hand.


MF_Handprint is a material function that takes a base colour as input, and outputs a base colour with hand highlights, as well as an emissive channel.

It works by taking the world co-ordinates of the hand, via MPC_Handprint, projecting them into screen-space, and using the distance between the shaded pixel and the hand to calculate the size of the highlight.
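The distance-based part can be sketched as follows (plain Python standing in for the material graph; the exact falloff curve in MF_Handprint may differ):

```python
# Illustrative sketch of the highlight falloff: visibility fades out over
# DistanceMin..DistanceMax, and the visible size grows with distance.
def highlight(distance, dist_min, dist_max, size):
    """Return (alpha, radius) of the highlight at `distance` from the hand."""
    if distance >= dist_max:
        return 0.0, 0.0            # out of range: no highlight at all
    t = max(0.0, (distance - dist_min) / (dist_max - dist_min))
    alpha = 1.0 - t                # fully visible inside DistanceMin
    radius = size * distance       # visible size also scales with distance
    return alpha, radius
```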

a section of MF_Handprint

The colours, overall alpha and mask lerping alpha are all also taken from MPC_Handprint.

MF_Handprint should be included in any other base material that should display hand highlights, like in the following example:

RouteMgr procedural level streaming

This document explains our procedural level system called RouteMgr. It is divided into three sections: an overview, how to use it, and troubleshooting.



Overview

The RouteMgr “system” consists of a few different actors and a specific way of setting up levels and sublevels.

assets associated with RouteMgr
  • BP_RouteMgr is an actor that must be placed in a persistent level to control the streaming of level chunks.
  • BP_RouteNode, BP_RouteNode_StitchPoint and BP_RouteNode_LowPoint are actors placed inside chunk levels to help with positioning and stitching them.
  • S_ChunkInfo and E_ChunkCategories are used to store information about the chunks. S_ChunkInfo is the basis for the data tables which constitute a level. This is so that certain data about the level is stored outside of the level asset – we can read it before loading the level itself.
  • BP_RouteTrigger is an overlap volume that calls an event on a BP_RouteMgr when touched by a player pawn. It’s used to tell RouteMgr to load in a new chunk and possibly unload an old one.
  • BP_FollowingOcean creates an ocean and/or horizon that maintains a position relative to the player, so that however far you travel you are always at the centre of the environment.
  • WBP_ChunkTool is a utility widget you can use in the editor to automate some of the data input. It is still a work in progress, and some data must currently be input manually.


Jargon

This section clarifies some of the jargon used in this document. These are terms made up for this solution and not official terminology.

  • Chunk. A chunk is a sub-level that can be streamed procedurally via RouteMgr.
  • Parent/child chunk. Some chunks are sub-levels intended to be attached to another sub-level, for example to create variations with different obstacles or decoration. These are child chunks, and the main chunk they are attached to is the parent chunk.
  • Stitching. When a sub-level is streamed via RouteMgr, its transform is adjusted so as to create a continuous path. This is what we’re calling stitching. The level’s transform is adjusted so that a stitch node in one chunk matches a stitch node in another chunk.
  • Categories. Chunks fit into one of several categories – RunFlat, RunSteep, Climb, Station and HeroSection. These are used for selecting/ordering chunks.
  • Head/body/tail. The head chunks are the chunks that have been loaded ahead of the player. The tail chunks are the chunks that exist behind the player before being unloaded. The current chunk that the player is traversing is the body chunk.
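Stitching, as defined above, can be sketched as a 2D transform calculation (illustrative Python, yaw-only; the actual system works on full level transforms):

```python
import math

# Illustrative 2D sketch of stitching: rotate and translate a new chunk so
# its entry stitch node lands on the previous chunk's exit node, facing back
# along it (both stitch arrows point outwards from their chunks).
def stitch_transform(exit_pos, exit_yaw, entry_pos, entry_yaw):
    """Return (yaw, offset) to apply to the new chunk's level transform."""
    # The entry arrow must end up facing opposite the exit arrow.
    yaw = (exit_yaw + 180.0 - entry_yaw) % 360.0
    rad = math.radians(yaw)
    # Rotate the entry node by the chunk yaw, then translate onto the exit node.
    rx = entry_pos[0] * math.cos(rad) - entry_pos[1] * math.sin(rad)
    ry = entry_pos[0] * math.sin(rad) + entry_pos[1] * math.cos(rad)
    offset = (exit_pos[0] - rx, exit_pos[1] - ry)
    return yaw, offset
```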

Naming Conventions

  • Levels
    [LEVELNAME] is the name of the level/environment
  • Chunks
    [LEVELNAME] is the name of the level/environment
    [CATEGORY] is the type of chunk, e.g. RunFlat, Station
    [INDEX] is an identifier, could just as easily be a name as a number
  • Child chunks
    [LEVELNAME] is the name of the level/environment
    [CATEGORY] is the type of chunk, e.g. RunFlat, Station
    [CHILDTYPE] describes what the child chunk contains, e.g. “Obstacles”
    [INDEX] is a numeric identifier
  • Data tables
    [LEVELNAME] is the name of the level/environment

How to Use RouteMgr

Making a procedural level

Step 1 – Set up your persistent world

Starting from a new level, set up your environment, such as skydome, fog and lighting conditions. It’s best for the skylight to be dynamic, as the geometry it will light will be in sub-level chunks. Add a BP_FollowingOcean at 0,0,0. Any horizon geometry such as mountains should be attached to the BP_FollowingOcean, either as new child components or as child actors. This is your persistent level.

Step 2 – Create chunk sub-levels

Create a new level asset. In the Levels panel, add your new level as a sub-level.

Design your chunk level as an individual island. Make sure you are adding actors to the sub-level and not to the persistent level. (You might find it helpful to change the 2nd column of the World Outliner panel to show the parent levels of the listed actors.)

You can always open the sub-level individually, to see it in isolation. (But note that the skylight will not be there, as it is part of the persistent world.)

You have an important choice here: you can make all of your chunks relative to the same sea-level, in which case, all of your stitch points will be around the same height above sea level. Alternatively, you can design chunks with varying heights above sea level, so that the path through the chunk starts low and ends high, or vice versa. The edges of landscapes and other geometry should be well below sea level, especially if the chunk might appear at different heights.

Repeat this step until you have a number of chunk levels prepared as sub-levels.

Step 3 – adding RouteMgr actors

In your persistent world, add a BP_RouteMgr. If you have designed your chunks to be at different heights, make sure StitchZ is checked and enter the Z height of your ocean in SeaLevelZ. If your chunks are designed to all be at the same height, StitchZ should be unchecked.

In each chunk sub-level, add two BP_RouteNode_StitchPoint actors. The stitch points should be placed at the entry and exit point of the chunk, where you would wish the paths to connect, with the arrow pointing outwards from the chunk.

Add a BP_RouteTrigger to each chunk. It should stretch across the whole chunk so that it can’t be bypassed. Place it near the middle of the chunk; where possible, put it somewhere the player’s view is obscured, to help reduce potential pop-in.

If you have designed your chunks to appear at different heights, then you may also need to place a BP_RouteNode_LowPoint in some chunks. When chunks can load at different heights, a path that dips downward between the stitch points could end up under water. In that case, add a BP_RouteNode_LowPoint within about 50cm of the lowest point you expect the player to be able to go. When choosing a new chunk to load, if any of its nodes would be under water, that chunk will be excluded from selection. If your chunks are all loaded at the same height, or if one of the stitch point nodes is already at the lowest point, you can skip this part.
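The underwater rejection described above amounts to a filter like this (illustrative Python; the field names are assumptions, not the actual data table columns):

```python
# Illustrative sketch: when picking the next chunk, reject any candidate
# whose lowest node would end up below sea level once attached at attach_z.
def candidates_above_water(chunks, attach_z, sea_level_z):
    """`lowest_z` is each chunk's lowest node height relative to its origin."""
    usable = []
    for chunk in chunks:
        if attach_z + chunk["lowest_z"] > sea_level_z:
            usable.append(chunk["name"])
    return usable
```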

Step 4 – set up the ChunkInfo data

Create a new data table with the S_ChunkInfo struct as a basis. (Or if you prefer, duplicate an existing one.) Each row in the data table will represent a chunk sub-level.

Make sure the ChunkData parameter of your BP_RouteMgr is pointing to the data table you just created. WBP_ChunkTool is a WIP tool to automate the next step. For now this must be done manually.

Using ChunkTool

Open all of your sub-level chunks and make them all visible in the Levels panel.

Right-click WBP_ChunkTool and choose “Run Editor Utility Widget”.

In the window that opens, choose “Add Chunks”.

Any chunks that are loaded and visible will be added to the data table specified in your BP_RouteMgr. Any chunks that already exist will be updated.

You may need to edit the data table manually to add any child chunks, or specify categories or one-way chunks.

Without ChunkTool

Open the data table.

Use the “+” button to add a new row. The name of the row must match the name of the sub-level asset for the chunk.

The third column, StitchNodes, should contain an array of two transforms. The transforms should match the transforms of the stitch nodes in the chunk. You can copy/paste the location and rotation.

If your chunks have multiple heights, the fourth column should list the lowest Z location of a node in each chunk. If all your levels are at the same height, you can ignore this column.

Additionally, you can add any child chunks, or specify categories or one-way chunks.

Make sure all your files are saved. (In PIE, streamed chunks will be loaded from disk, so unsaved changes won’t be visible.) Your procedural route should now be ready to test.

Making a sequential level

RouteMgr can also be used to stream an authored level in sequential chunks.

Select your BP_RouteMgr. In the details panel, uncheck UseRandomChunks, and add sub-level names, in order, to SpecificChunks. You can also choose whether or not to make them loop.

Scripting route spawning

By default, route spawning happens on BeginPlay. You can deactivate this if you want to script it in another way. Just uncheck InitRouteOnBeginPlay on the BP_RouteMgr.

Adding child chunks

Child chunks can be overlaid on parent chunks to add additional detail or to provide variation in obstacles or other geometry. Multiple child chunks can be listed and one will be selected at random.

To create a child-chunk, move your optional geometry into a different sub-level.

Manually edit the chunks data table and in the ChildChunks column, add an array of names of potential child chunks for this parent chunk.

The child chunk sub-levels will be given exactly the same transform as their parent.

TO DO: There will be two ChildChunks lists: ChildChunks_OneWay and ChildChunks_TwoWay. One way child chunks will force their parent chunks to be treated as one-way if selected.


Troubleshooting

Some chunks appear under sea level

Make sure StitchZ and SeaLevelZ are set properly on the BP_RouteMgr. Make sure that the node actors and/or the LowestZ listing in the data table represent realistic values for the lowest height in each chunk, and that these numbers are above SeaLevelZ.

Chunks load in overlapped

Make sure the stitch nodes and/or the node transforms in the data table have been set, including the rotations. If you have set StitchingYawVariance to anything other than zero, try reducing it.

Chunks or child chunks are not loading

Check the output log. If you see “BP_RouteMgr failed to load chunk level: xxxxx” or “BP_RouteMgr could not find level info in data table: xxxxx” check that xxxxx matches the actual name of the sub-level asset. If not, you might need to correct it in the data table.

A specific chunk appears infrequently / seems not to appear at all

Make sure it’s been included in the data table, and check the category of the chunk as specified there. HeroSection chunks will not appear as part of the procedural rotation, only when listed in SpecificChunks. If you are using StitchZ, it’s possible that the chunk doesn’t often fit above sea level, perhaps because the far node would put it below water. In that case you could try making the chunk flatter, or including more chunks with an upward path, to create more headroom.

VR Runner Character

I’m going to cover the functions of the player character, including my custom arm swing implementation, the climb up, the BPI Climbable interface and how it fits into the player pawn and its parent, plus some other smaller related things.

Arm swinging

The arm swing starts on BeginPlay: I set the movement mode to arm swing, as the parent pawn has many different movement modes and I don’t want them doing anything.

Then I set up two timers to calculate the amount of arm movement. The first runs every 0.05s and calculates the change in position of both hands over that period. The second runs every 0.3s and averages those changes in position, the goal being to smooth out the player’s movement so it is less jittery. Without this, the player would slow down as their hands reach the top of their swing and speed up mid-swing. To the right, I set up some initial values for these functions.

In the first function, ‘Calculate Arm Swing Amount’, I simply get the change in Z position, then store the current position ready for the function to run again. I do this for both hands.

In the second function, ‘Calculate Arm Swing Average Over Time’, I take the stored values, add them to a total, calculate the average and then reset the values ready for the next run. I do this for both hands.
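The two functions above amount to a windowed moving average, which can be sketched like this (plain Python in place of the Blueprint timers):

```python
# Illustrative sketch of the two-timer smoothing: a fast timer samples the
# change in hand height, a slower timer averages those samples so speed
# doesn't dip at the top of each swing.
class ArmSwingTracker:
    def __init__(self):
        self.last_z = 0.0
        self.deltas = []

    def sample(self, hand_z):        # would run every 0.05s
        self.deltas.append(abs(hand_z - self.last_z))
        self.last_z = hand_z

    def average_swing(self):         # would run every 0.3s
        if not self.deltas:
            return 0.0
        avg = sum(self.deltas) / len(self.deltas)
        self.deltas = []             # reset ready for the next window
        return avg
```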

At the start of the event tick I check if the player is climbing, then if they are holding anything, and finally whether both hands are moving enough (this lets you reach out with one hand without moving by accident).

After that I do a check to see if the player is close to an edge.

If they are close to an edge, I adjust the speed calculation so they move a bit slower. This helps stop the player accidentally running off an edge and lets them swap between running and climbing sections with less frustration.

Finally I add movement input to the player with the speed as the scale value and the direction as an average of where the player is looking and the direction of the hands from the head.
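That final input can be sketched as follows (illustrative Python, with vectors as 3-element lists; the 50/50 blend is an assumption about how the averaging is weighted):

```python
# Illustrative sketch: blend where the player is looking with the direction
# from the head to the hands, then normalize to get a movement direction.
def movement_direction(look_dir, head_pos, left_hand, right_hand):
    hands_mid = [(l + r) / 2 for l, r in zip(left_hand, right_hand)]
    hand_dir = [h - p for h, p in zip(hands_mid, head_pos)]
    blended = [(a + b) / 2 for a, b in zip(look_dir, hand_dir)]
    length = sum(c * c for c in blended) ** 0.5 or 1.0  # avoid divide-by-zero
    return [c / length for c in blended]
```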

Climb up

The climb up takes the player from being on a wall to on top of the wall. The player initiates a climb up by getting to the top of the wall and then bringing their hands to their hips as they would in real life. If the player is climbing I run a series of checks on the event tick to see if they are ready for a climb up.

The first check is to see if the player’s hands are both lower than the headset.

Then I check with a sphere trace to see if an object is in front of the player to make sure they are at the top of the wall.

Then I do a trace down in front of the player to get the location I’m going to move them to.

Finally I move the player to their new location using two timelines, the first moves them up to the correct height and the second moves them forward. I do this to give the feeling of climbing up and over and to prevent the player clipping through any obstacles.
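The two-timeline move can be sketched as an up-then-forward path (illustrative Python; the real timelines ease the movement over time rather than emitting discrete points):

```python
# Illustrative sketch of the two-phase climb up: move up to the ledge height
# first, then forward onto it, instead of one straight (clipping) move.
def climb_up_path(start, target, steps_per_phase=4):
    """Return points moving vertically first, then horizontally, start -> target."""
    points = []
    for i in range(1, steps_per_phase + 1):
        t = i / steps_per_phase
        points.append((start[0], start[1], start[2] + (target[2] - start[2]) * t))
    top = points[-1]
    for i in range(1, steps_per_phase + 1):
        t = i / steps_per_phase
        points.append((top[0] + (target[0] - top[0]) * t,
                       top[1] + (target[1] - top[1]) * t,
                       top[2]))
    return points
```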

Thumb stick adjustments

Here I override the parent’s thumb stick handling and let the player adjust their position using either thumb stick (when not arm swinging or climbing), so they can have some fine control if they reach for something that is just out of range.

BPI Climbable

This interface is used to call events on objects when the player interacts with them either by overlapping with grip being possible or gripping. There are events for the left and right hand. In the event graph for the player pawn you can find where they are called.

Gripping events are called from two functions, ‘ReleaseGrip’ and ‘InitClimbing’, both inherited from the parent.

I have created a release grip function and added it to the parent here, the function does nothing on the parent but is implemented in the child.

Here you can see that when the function is called it takes the hand and calls the appropriate event if the gripped object implements the interface.

Init climbing is a function that already exists in the parent. In the child I added some of my own code and then do the parent function stuff. Here I make a temp array of the overlapped objects.

And then get the closest one to the centre of the overlap sphere.

I then take the closest object and call the grip event.

Finally I run the parent function.

There are a few additional changes that I have made to the parent class. The first you can see here I make sure that the player can only climb on things that implement the interface.

Here I added some code to allow physics objects to be climbed on if they contain the component tag ‘ForceClimb’.

Arm length calibration

The calibration uses a separate child pawn that can’t move, and a calibration map that has instructions for the player. In the ‘Steam_VR_Player_Controller’ I have updated the spawn logic to check whether the player has been calibrated. First I check if the player is already in the calibration map; if they are, I spawn the calibration pawn. If they are not on that map, I check whether they have already calibrated. If calibrated, I spawn them as the regular child pawn. If not, I save the map they were on, load them into the calibration map and then spawn the calibration pawn.

In the calibration pawn, the arm length is calculated when the player presses a grip button. They can then proceed by pressing a face button. The arm length and calibration status are saved in a save game object. The map they were previously on is then loaded.

VR Runner Procedural Level Components

This document describes the use and construction of these components. At the top I will give examples of how to place them and adjust their settings, and below that I will discuss their construction.

Placing the components

Vertical Wall Climb

You can adjust the height of the vertical wall climb by moving the top spline point up and down. You can also set the difficulty of the climb, which will be adjusted to the player's arm length.

Horizontal Wall Climb

You can pull the spline points left and right to adjust the length of the wall. There are a few options for the bars: a difficulty setting and a randomness factor that adjusts the horizontal position by a random amount. You can also use only the bars if you want to build different shaped walls.

Monkey Bars

The monkey bars follow the spline so you can add points and adjust their positions to your liking. The side beams go straight from point to point so you may need to add extra points on the curves. You can also turn off everything but the bars with the checkbox if you want them to fit into a different setup.


Scaffolding

The scaffolding can be placed down and then adjusted in height by grabbing the top spline point and dragging it up or down. You can also choose whether you want the boards on top by clicking the toggle. Additionally, you can select an actor to attach to the scaffold for easier placement and to keep them together for future adjustments.

Rope Climb

The rope climb is very simple: just select a climbable rope to attach to the top, then use the spline to adjust the rope length. There are various options for the rope as well.

Rope Swing

When setting up the rope swing you can select two scaffolds and adjust them from the rope swing, which makes everything easier to position. You can add multiple rope swings in a row by dragging out the spline and adjusting the distance between them in the settings.


Zipline

The zipline is separated into two parts, a top and a bottom. You can attach the top to a scaffold or place it wherever you like, then select the bottom to attach the zipline. Finally, you can adjust the starting point of the zipline hold.

Component Construction

All the components use similar logic, so I’m only going to cover one; from there it should be fairly straightforward to get an idea of how the others work.

The Vertical Wall Climb

Opening up the construction script you will see a set of functions that are used to set up the wall.

The first function loads in the player’s arm length and sets up the distance between holds, which will be used by the add climbable bars function.

In the add climbable bars function there are two important sections: setting up the loop and getting the positions along the spline. To set up the for loop we need to work out how many bars to add, so we take the spline length and divide it by the distance between the holds (subtracting 1 so that a hold doesn’t appear at the bottom, in the floor).

Next we take the index and multiply it by the distance between holds to get the distance along the spline that we want to add the mesh at. We then reverse the values so that the beams are added from the top of the spline. Finally we get the location at the distance along the spline and add a small offset before spawning the beam.
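The bar-placement math above can be sketched in plain Python. This is an illustration of the arithmetic only, assuming even spacing and the top-down reversal described; the actual Blueprint also adds a small offset and spawns the mesh.

```python
# Sketch of the bar-count and spacing math described above (not Blueprint code).
def bar_distances(spline_length, hold_spacing):
    # how many bars: spline length / spacing, minus 1 to skip the hold in the floor
    count = int(spline_length / hold_spacing) - 1
    # index * spacing gives distance along the spline; reversing against the
    # spline length makes the bars start from the top
    return [spline_length - i * hold_spacing for i in range(count)]
```

For a 500-unit spline with 100 units between holds, this yields bars at distances 500, 400, 300 and 200, leaving the bottom of the spline clear.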

The next two functions, add side beams and add backboards, both follow the same structure. We work out how many beams to add based on their height (using a fixed number chosen by eyeballing what seemed about right).

Then inside the loop we add a spline mesh and attach it to the spline. To set the spline mesh up correctly we need to provide its start and end points as well as tangents. Setting up the spline mesh can be a bit of a pain as there are a few settings that can affect it, including the spline up direction and forward axis (both of which I’ve set in the add spline mesh node). The tangents can also affect the way the mesh stretches and looks, so in this case I’ve set them manually so the beams are consistent, but you could instead get the tangents at the spline distance.

Here is the add backboards function and as you can see it is broadly the same as the above function.

Tutorial – Integrating CC3 character to Advanced Locomotion System v4.

This will be a quick and dirty way to do this integration, so to keep everything separate from the main project:

           Go to Epic Marketplace and add “Advanced Locomotion System v4” to your library. Then create a project with it.

After the initial setup, migrate or import your CC3 character into the project. The editor will likely generate a skeleton for the CC3 character.

1 –    Retargeting

Prepare both the ALS and CC3 skeletons for retargeting: go to the Retarget Manager and set both to the Humanoid Rig.

On the CC3 character: the root bone retarget option is “Animation”, the pelvis is “Animation Scaled”, and all IK bones are also set to “Animation”.

Figure 1 – Retargeting options

           Go to the ALS mannequin skeleton and right click, then select “Retarget to another skeleton”.

Figure 2 – Retargeting the skeleton

           This will retarget ALL the animation assets that are referencing that skeleton. Things to note:

  • It will keep the file structure as it was, since it is NOT duplicating the assets and then retargeting.
  • A major CON is that it won’t rename the assets, so that needs to be done afterwards.
  • A major PLUS is that it retains any references to the current animation assets.

2 – Adding Virtual Bones to the skeleton.

           Back to the CC3 skeleton we need to add the Virtual Bones that ALS uses for IK.
To add them you need to go to the skeleton asset, right click on a bone (this will be the SOURCE) and choose another bone (this is the TARGET). Afterwards you can rename the bone. NOTE: the VB prefix can’t be removed.

Figure 3 – Add virtual bones.

           The following table gives all the needed virtual bones together with the correct sources and targets.


           Also copy the head sockets that are placed on the ALS skeleton and paste them on the CC3 skeleton. They are attached to the “head” bone.

3 – Setting up the Ragdoll/Physical Asset

The auto-generated physical asset from the editor might not suit the character well, so a quick way to get a better one is to take advantage of the ALS asset. We can always adjust or change it again at a later date. Rename the physical asset.

           Copy the ALS physical asset and move the copy to the CC3 folder. Open the CC3 character skeletal mesh and on the Physics and Lighting entries place the copied Physical asset.

           If you don’t want to copy the physical asset but use another you already have you need to add one extra physical body (if not existing) for it to work with the ALS ragdoll system.

           On the physical asset go to the skeleton tree, click “Options”, and select “Show All Bones”. Right click on the root bone and select “Add shape”.

Figure 4 – Adding root bone body

After that, set the created body to be kinematic and disable its collision: “Physics Type” set to “Kinematic” and “Collision Response” set to “Disabled”.

Figure 5 – Body setup

4 – Animation Blueprint adjustments

If you attempt to play you may notice that the face appears to be melted when the head does sweep motions. The feet might also be floating a bit. To address this we need to apply some adjustments in the animation blueprint. First, the feet.

Go to the “Foot IK” animation layer. On the bottom there’s a box named “Apply IK to Feet”. You need to adjust the “Effector Location”. Both adjustments should be on the X value. The left side is likely a negative value offset while the right side should be a positive value. For this character in particular +3 and -3 worked best.

Figure 6 – Feet IK adjustment

Then it’s just a matter of “fixing” the jawbone. My take on the issue is that CC3 places weights on the facial bones (to enable animation on them), but since our animations do not use them, some animation curves might be applying influence to them. The jaw problem is due to the “cc_base_facialbone” lagging behind the rest of the head when rotating. One fix could be achieved using a Control Rig, but there is another, less complex solution. The drawback? It disables one bit of the animation blueprint.

Search for “AimOffsetBehaviors”, open the nested state machine, and disable the “Look Towards Input” state. You can duplicate the “Look Towards Camera” state and connect the entry node to this new state.

Figure 7 – Facial bone “fix”.

Now it should just be a matter of renaming the animation assets to keep everything tidy. Renaming the assets in bulk while connected to P4V is likely a lengthy process, but it ensures that no reference gets broken.

If you rename the assets in bulk without being connected to P4V, I recommend doing it in small batches: close and reopen the project, check that nothing failed to load, and continue like that until finished.

If anything fails to load: rename the culprit asset (or assets) back to the previous name and save. Close and reopen; the loading error should be cleared. Rename the assets again and test again just to be sure, then continue.

5 – ALS and DCS merge

The initial merge following the tutorials leaves the blueprint with a lot of bugs that have been squashed in the meantime.

In my opinion, the best course of action to apply a new character would be: add the new skeletal mesh. The engine will likely assign a new skeleton to it, or even pick a mannequin skeleton and add the new CC3 bones to it.

I would advise restoring the Mannequin skeleton if the editor alters it, just as a precaution.
Then pick the skeleton that the existing characters use and assign it to the new skeletal mesh.

Adjust the retargeting options (rule of thumb: root as Animation, pelvis as Animation Scaled, the rest as Skeleton, and IK bones, weapon bones and such as Animation as well).

The engine will then retarget the current pose at runtime to any skeletal mesh that shares the skeleton.
Issues: if the skeletal meshes have different proportions, corrections will be required; IK will likely be necessary.

NOTE: if there are any folders that share the same naming scheme in both projects, you may end up overwriting any asset that has the same name.

Figure 8 – Same file name on different projects.

As per the image above, if I migrate the assets from the left project to the right project I will inevitably overwrite the existing files.

Tutorial – Creating Control Rigs for spine and hands orientation

This tutorial will walk you through creating control rig assets that let you adjust your character's spine using mouse input, working as an aim offset, and adjust your character's hand positioning to create sweep and stabbing motions with spears or staffs. It also provides the tools to repurpose these rigs for other types of weaponry and motions.

First it will show how to create a Control Rig asset, then give a step-by-step guide to building the spine-influencing rig, and finally show how to build the hand-influencing Control Rig from it. This tutorial follows the usual UE4 skeleton naming scheme and hierarchy.


Requirements:

  • Control Rig plugin.
  • Full Body IK plugin.
  • Engine version 4.26 or above.

After this tutorial you will have two control rigs that enable you to apply runtime adjustments to your character's spine or hand positioning, as shown in the next two videos.

Video 1 – Sweep and stab motion.

Video 2 – Spine Aim Offset.

1 – Creating the Control Rig asset

           To create your control rig asset you just need to right click on the content browser then go to “Animation” and then select “Control Rig”, see Figure 1. The editor will then prompt you to select the parent rig. There should only be a single option named “ControlRig”. Select it.

Figure 1 – Creating the Control Rig.

Afterwards, opening your newly created asset you will see a window like in Figure 2. To start building your Control Rig you first need to import a skeleton hierarchy. To do this, click on the green button named “Import Hierarchy” in the middle of the left side of the window, then select the desired skeletal mesh.

Note: The Control Rig is neither importing the skeleton nor the skeletal mesh itself. It creates a copy of the skeleton hierarchy, taking the existing bones’ names and corresponding transforms into account. The control rig asset won’t reference the skeleton at all, as can be verified using the reference viewer.

Figure 2 – Blank Control Rig asset.

           After that you will have your skeleton hierarchy imported and the selected skeletal mesh will be on the viewport as in Figure 3.

Figure 3 – Imported skeleton hierarchy

2 – Adding controls and spaces to Control Rig

In order to better understand the work that will be done, it is best to provide a brief explanation of what Controls and Spaces are:

  • Controls: they are points in the rig's global space that you can use to aid or directly adjust bones’ transforms.
  • Spaces: they act as a secondary frame of reference for any child control (or space) that they parent. Meaning: in a control rig the global space origin (0,0,0) is located at the root bone. When you create a space, you are creating a new reference frame for any child of that space, so the child's global transform is measured from the rig's global space while its local transform is measured from the parent space.
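The parent/child relationship above can be sketched with translations only. This is a simplification for intuition; real rigs compose full transforms (rotation and scale included), not just position vectors.

```python
# Translation-only sketch of the space relationship described above:
# a child's global position is its parent space's global position plus
# the child's local offset within that space.
def to_global(parent_global, local_offset):
    return tuple(p + l for p, l in zip(parent_global, local_offset))

# A space placed at a bone 120 units up, with its child control at the
# space's local origin: the control's global position equals the space's.
spine_space_global = (0.0, 0.0, 120.0)
ctrl_global = to_global(spine_space_global, (0.0, 0.0, 0.0))
```

This is why, later on, placing only the space at a bone is enough: a child control with a local transform of (0,0,0) lands exactly on top of it.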

To create either a space or a control, you can right click on a bone and select “New”, then “New Control” or “New Space”. It can also be done in a blank area at the bottom of the “Rig Hierarchy”, after the root bone chain. Let’s create a control that way.

A red sphere should appear at the root bone position. Rename this new control to “root_ctrl”. The naming scheme follows the UE4 mannequin scheme; if your skeleton has a different naming convention, just adjust the names, keeping the suffixes: “_ctrl” for controls and “_space” for spaces.

The red sphere is a gizmo, just a visual representation of the control that was created. While it is selected, on the right side in the Gizmo section, you can adjust its transform, colour, and type. Change it to a hexagon (see Figure 4 for reference).

Note: spaces do not have gizmos.        

Figure 4 – Gizmo setup.

Following that, click on the “root_ctrl” and create two more controls named “foot_r_ctrl” and “foot_l_ctrl”. Change the gizmos to boxes and colour the left one blue (left-side controls are coloured blue while right-side controls are coloured red). Similarly create a space named “spine_02_space” and, parented to it, create a control named “spine_02_ctrl”. Change the “spine_02_ctrl” gizmo to a yellow circle.

All gizmos should be appearing on top of each other. To reposition them, right click in the “Rig Graph”, type “Setup Event” and select it. This is the event that runs before the other events; think of it as a construction script.

Then, on the graph, place a “Children” node: right click and search for the name. This node helps you recursively obtain the entire chain of the type of items you want (including the parent). If set to search for controls and there is a space in the chain, it will skip it. So in our case it will retrieve the root and feet controls as a single collection. A collection is just a container for bones, controls, and spaces.

We want to set the controls' initial positions to the respective bones' initial positions. For that we can iterate through the created collection using a “For Each Item” loop node. Expanding the “Item” pin you can see that there is a “Name” pin. From it create a “Chop” node, which removes a substring of the specified length from the end. Given our naming convention, we can obtain the bones' names by chopping the item name by a length of 5 (removing the “_ctrl” suffix). From the result create a “Get Transform” node, set it to the bone type, and retrieve the initial transform in global space. From this transform set up a “Set Transform” node with “Initial” and “Propagate to Children” set to true. Also link the “Item” pin from the loop node to this node. This finishes positioning the controls in the collection. It should look as in Figure 5.

Figure 5 – Collection loop setup.
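The “Chop” step above is just suffix removal. A plain-Python stand-in for that node (not the Control Rig API) makes the naming trick explicit:

```python
# Sketch of the "Chop" node: derive the bone name from a control name by
# removing the 5-character "_ctrl" suffix mandated by our naming convention.
def bone_from_control(control_name, chop_length=5):
    return control_name[:-chop_length]

bones = [bone_from_control(c) for c in ("root_ctrl", "foot_r_ctrl", "foot_l_ctrl")]
```

This is also why the naming convention matters: a control whose name does not end in a 5-character suffix would map to the wrong bone.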

Then we just need to set up the “spine_02_space” and its child control. Since we want both the space's and the control's initial location to coincide with the “spine_02” bone, we only need to set up the space: the child control will be at (0,0,0) in local space, which is where the “spine_02_space” will be located. For that we just need to retrieve the “spine_02” bone's initial location. Place a “Get Transform” node, set it to retrieve the bone's initial transform, then expand the pin and take just the “Translation” pin. From it place a “Transform from SRT” node, which creates a transform from the fed-in values: the initial location goes in, and the rotation is left at (0, 0, 0). Since we will be adjusting the rotation at runtime, it is best to have it start at 0. From this transform place a “Set Transform” node and set it to set the initial transform of “spine_02_space”, as per Figure 6.

Figure 6 – Space setup.

Having finished the setup event, you can check that all the controls are placed on the correct bones, as in Figure 7.

Figure 7 – Finished control setup look.

3 – Setting up the Spine influence.

Now that the rig is properly set up, let us move on to influencing the spine. For this to work you’ll need to place a “Forwards Solve” node. This node is meant to be read as: you set up the controls’ positions and the bones then move along with them, adjusting if necessary.

One important note: the sequence in which you apply or adjust the controls’ positioning matters. For example, in a parent/child relationship between a space and a control: if you first set the control transform and then change the parent space transform, you will inevitably alter the control’s positioning. So be aware of that.

Since there will be quite a few steps, first place a “Sequence” node. The first two execution pins will just mimic the “Setup Event”, except we won’t be setting the initial transforms. So on the first execution pin paste the setup for the space transform, the same as before but with “Initial” set to false. Then on the second pin paste the loop we made before. Now we also want “spine_02_ctrl” to follow the “spine_02” bone. For that, in “Rig Hierarchy” click on the control and drag it into the “Rig Graph”; when prompted, select the last option, “Create Collection”. You will now have two collections, one with the “root_ctrl” chain and one with just “spine_02_ctrl”. From one of them, drag the “Collection” pin and select “Union”, which merges the two collections into one. Then drag the resulting pin to the “For Each Item” loop node. Remember to toggle all “Initial” options to false.

These two parts are just there to ensure that your rig controls follow the bones when they are animated.

Now we can inject the logic behind the spine influencing. The idea behind it is simple: we will adjust the “spine_02_space” transform, and “spine_02_ctrl” will follow it by being its child.

From the “Sequence” node's third execution pin, place a “Set Transform” node, set it to be applied to “spine_02_space”, set “Space” to “Local Space” and “Initial” to false. Now you will want to pass along a “Rotation” value; this value will come from mouse input. To the right of “Rig Hierarchy” you will notice a “My Blueprint” tab, where you can create variables as in any blueprint. Create two float variables to store the yaw and pitch input values. When creating them, click the eye icon next to them; this lets you use the values as inputs when setting up the Control Rig in an animation blueprint.
Now drag those variables into the “Rig Graph”.

Dragging the pins from those variables, create “Remap” nodes. These let you map the input values to the values you want to rotate the space by; you will have to adjust them by trial and error. From the “Remap” nodes create a “From Rotator” node, place the values on the Z and X pins, and feed the resulting rotator into the “Set Transform” node. The location is taken from a “Get Transform” on the “spine_02” bone. You will end up with something similar to what is shown in Figure 8 (the “Multiply” node shown there is due to how the Yaw value is calculated in another blueprint).

Figure 8 – “spine_02_space” adjustment logic.
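The “Remap” node above is a linear range mapping. A plain-Python sketch of that behaviour (the specific ranges below are illustrative, not values from the tutorial; clamping is assumed, though the node exposes it as an option):

```python
# Sketch of the "Remap" node: linearly map a value from an input range to an
# output range, clamping to the input range first.
def remap(value, in_min, in_max, out_min, out_max):
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))          # clamp to the input range
    return out_min + t * (out_max - out_min)

# e.g. mapping a normalised mouse axis [0, 1] to a rotation range [-45, 45]
angle = remap(0.5, 0.0, 1.0, -45.0, 45.0)
```

Trial-and-error tuning, as the tutorial suggests, amounts to adjusting `out_min` and `out_max` until the spine rotation feels right.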

Lastly, we just need a “Full Body IK” node. This node takes the controls’ transforms as constraints and solves the skeleton positioning, adjusting the pose as needed. For a more thorough explanation it might be best to search for “Inverse Kinematics”.

So, placing the “Full Body IK” node, there will be a “Root” pin; set it to the root bone of the required chain, in this case the “pelvis” bone. There will also be an “Effectors” pin with an add symbol beside it; click it to add effector entries. Create three of them and set them to the “spine_02”, “foot_r” and “foot_l” bones. Now we need all the required controls’ transforms. In the “Rig Hierarchy”, while holding the control key, click on “spine_02_ctrl” and both feet controls, drag them into the “Rig Graph” and select “Get Transforms”. Expand the transform pins and drag the “Translation” and “Rotation” pins to the correct entries on the IK node; when you drag a pin, the valid entries light up. Match the control transforms to the correct bones. The result should look like Figure 9.

            If the “spine_02_ctrl” gizmo is rotated, probably upwards, adjust its rotation on the corresponding gizmo menu, something like Y = 90 should suffice.

Figure 9 – Full Body IK node.

4 – Setting up the Hands influence

The setup for this Control Rig is quite similar to the previous one. The only differences are that we will create a space to control the hands that must always sit between them, create controls for both hands, and there is no need for spine controls.

So, create the same “root_ctrl”, “foot_r_ctrl” and “foot_l_ctrl” as before, with the same gizmo setup. Then create a space called “hand_center_space” and create two controls named “hand_r_ctrl” and “hand_l_ctrl”. For these controls’ gizmos use “Circle_Thick”, and remember to colour the left one blue.

The “Setup Event” will also be similar: the same loop logic for the root control chain, just merged with a collection of both hands’ controls.
The “hand_center_space” has the requirement of being at the midpoint between both hands. For that, get the transform nodes of both hand bones and, from their “Translation” pins, add an “Interpolate” node with a “T” value of 0.5.
This ensures that the resulting vector is at the midpoint between the hands. Feed this vector into a “Transform from SRT” node as in the previous rig. The result should look similar to Figure 10.

Figure 10 – Hand control rig setup event.

Now, for the “Forwards Solve”, it is the same as before. On the first two execution pins of a “Sequence” node, place both parts of the code from the “Setup Event”, now with the “Initial” boolean set to false.
On the third execution pin we will set up our hand influence. Just as before, we will adjust the space transform, which in turn adjusts the child controls. In this setup we will create a sideways sweep and a stabbing motion. The sweep motion is just a matter of adjusting the space's rotation by the Yaw value we pass to the Control Rig, while the stabbing motion takes the Pitch value as input.

As before, create two float variables for these values. Then, from the Yaw value, place a multiply node before the “Remap” node; this multiply node helps adjust the speed of the motion. Pass the remapped value to a “From Rotator” node and the result to the “Rotation” pin of the “Set Transform” node.

For the “Translation” value we still take the midpoint between both hands, but then add a value to the Y position, which in our setup offsets the hands forwards and backwards. So, from the same interpolate setup as before, split the “Translation” pin from the “Transform from SRT” node and pass the X and Z values directly to the “Set Transform” node. The Y value goes to an “Add” node.

The second “Add” pin is fed with our Pitch value. Pass the Pitch variable through a “Multiply” node (again acting as a speed multiplier, so adjust the value to your taste), then through a “Remap” node, and from there to the “Add” node. This result goes to the Y input of the “Translation” on the “Set Transform” node. The result should be similar to Figure 11.

Figure 11 – Hand Space adjustment setup.
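The stab-offset computation above can be sketched in plain Python: keep X and Z from the hands' midpoint, and push Y by the scaled, remapped pitch input. The speed multiplier and remap ranges here are illustrative placeholders, not values from the tutorial.

```python
# Sketch of the hand-space translation described above (not Control Rig API).
# mid is the midpoint between the hands; pitch is the input axis in [-1, 1].
def hand_space_location(mid, pitch, speed=2.0):
    # multiply (speed), then remap [-1, 1] -> [0, 1] -> [-30, 30] units
    t = max(0.0, min(1.0, (pitch * speed + 1.0) / 2.0))
    y_offset = -30.0 + t * 60.0
    # X and Z pass through unchanged; only Y gets the stab offset
    return (mid[0], mid[1] + y_offset, mid[2])
```

With zero pitch the space stays at the midpoint; pushing pitch forward slides the hands along Y, producing the stabbing motion.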

Now it’s just a matter of adding the “Full Body IK” node, still with the pelvis as the root bone, but now with both the feet and hand bones as effectors. Get the corresponding control transforms and set them up accordingly. It should look as in Figure 12.

Figure 12 – Hand Control Rig Full Body IK setup.

5 – Placing your Control Rig on an animation blueprint.

To use your Control Rig in an animation blueprint, just place a “Control Rig” node in the blueprint, then under “Control Rig Class” choose your newly created rig, and to enable your variables toggle the “Use pin” checkboxes. See Figure 13.

            For mouse input you can pass the usual variables that are also used to drive standard aim offsets or blendspaces.      

Figure 13 – Placing a Control Rig on an animation blueprint.

6 – Final considerations

This concludes the tutorial. It should give you some basic notions of how to create your own control rigs for whatever end results or mechanics you desire.

One thing that was not previously explained is why you also need to set up feet controls. The reason is that otherwise, when applying the IK node, you would be shifting the feet positioning. By using controls that follow the feet as constraints, you ensure that the final pose still follows the feet animation, avoiding extreme rotations in the rest of the body that could be hard to blend or layer out in the animation blueprint.

The other point is that, in order to enable your character to drive the Control Rig as in the sample videos shown at the beginning, you have to do two things: on the spring arm component set “Use Pawn Control Rotation” to false, and on your movement component set the “Rotation Rate” to (0, 0, 0). It likely only has a value on the Z axis, so just set that one to 0.

            For further references see the following links to Epic Games’ documentation and video on the subject:

Epic’s Control Rig documentation.