Building a responsive cinematic animation system in Unreal Engine 4

Singularity University is building a diversity training exercise which requires animating full scenes with multiple human characters, including the potential for these human characters to interact with the player or the environment.

To support this development, we are using the following pieces of Unreal Engine:

  • Animation Blueprint
  • Level Sequences and Template Sequences
  • Blueprint Character
  • Animation Slots within Animation Blueprint
  • FABRIK IK
  • Power IK Plugin

Videos Of Implementation

Level Sequence, Template Sequence, and Blueprint Triggering

A template sequence can be bound to a particular kind of actor, in this case BP_NPCCharacter.

This particular template can be re-used in different level sequences, so an animator can plug in a sequence of static animations.

Here is an example of the process of inserting a template sequence into the track of a particular chosen actor in the level sequence (in this case BP_NPCCharacter3).

These sequences can be triggered in blueprint, such as this trigger for the specified sequence to play in BP_SequenceManagerActor.

If we relied strictly on animation sequences without an animation blueprint, though, we might find ourselves lacking some of the programmatic interaction we would like in order to achieve a believable scene.

Let’s say, for example, we wanted to create a boardroom scene with 4 seated characters looking at a speaking character.

To achieve this purely using sequencer, an animator would need to create custom sequences for each individual character, each with their own different rotations on the head for looking at the speaking character.

Enter the Animation Blueprint & IK Animation

The goal in setting up the animation blueprint for the NPC character is flexibility: it should play triggered animations from the sequence manager, but also respond intuitively to the position and actions of the player and the environment.

To achieve this goal, the setup needs to:

1) Track the player head position in VR and animate the eyes and head of the NPC character to follow the player at times when it is appropriate to do so.

2) Allow the NPC to track other objects or NPCs when appropriate to do so (i.e., a cue to look at a presentation screen in the environment or at another speaking character).

3) Allow the NPC to interact with specified objects when they come within a certain proximity of the NPC. These objects will have an interface defined which returns an Interaction Grip Type, specifying what kind of interaction the NPC should have with the object (should they reach out to grab it with an open palm, grasp it, or grab it like a door handle?).

4) Allow the NPC to transition between moving, opening a door, and sitting at specified locations.

5) Allow a slot in the animation blueprint to play specified animations from level/template sequences.
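As a sketch of the interface from point 3, here is a plain C++ stand-in for what would be a Blueprint interface in the project (EInteractionGripType, IInteractable, and SelectHandPose are all illustrative names, not the actual implementation):

```cpp
#include <cassert>
#include <cstring>

// Hypothetical grip types an interactable object can report.
enum class EInteractionGripType { OpenPalm, Grasp, DoorHandle };

// Sketch of the interface an interactable object would implement.
struct IInteractable {
    virtual ~IInteractable() = default;
    virtual EInteractionGripType GetInteractionGripType() const = 0;
};

// Example object: a door reports the door-handle grip.
struct Door : IInteractable {
    EInteractionGripType GetInteractionGripType() const override {
        return EInteractionGripType::DoorHandle;
    }
};

// On overlap, the NPC queries the interface and picks a hand pose.
const char* SelectHandPose(const IInteractable& Obj) {
    switch (Obj.GetInteractionGripType()) {
        case EInteractionGripType::OpenPalm:   return "Pose_OpenPalm";
        case EInteractionGripType::Grasp:      return "Pose_Grasp";
        case EInteractionGripType::DoorHandle: return "Pose_DoorHandle";
    }
    return "Pose_Idle";
}
```

The point of the interface is that the NPC's animation logic never needs to know the object's concrete class, only which grip it asks for.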

Anim BP Part 1: Locomotion State Machine

The first part of this setup in the animation blueprint is a core locomotion state machine.

This allows for flexible behavior on the character’s base animations, changing between base poses for moving, sitting, idle, and interacting with objects.

The result of the locomotion state machine is saved into a cached pose to be used later in the graph.

Sitting in particular has a unique animation setup. Allowing the character to sit but still play other animations with its upper body uses an animation node called Layered Blend Per Bone. Setting the base pose to an animation where the character is sitting, then layering on top of it from spine_01 up, allows adding additional animations which affect only the upper body.

Anim BP Part 2: Enabling Template Sequences to Override The Base Animation

The tutorial linked above shows the process for connecting the character blueprint, animation blueprint, and template sequences, allowing us to combine triggering animations in sequencer with the additional functionality of animation blueprints.

In our case, this node is available here and saves its cached pose for use in the later stages of the animation blueprint. This node will need to be updated to apply the same principles for separating sitting and upper-body animations, and to pass in a variable when the animator wishes to set the character as sitting during the animation they configure in the sequence.

Anim BP Part 3: Adding Head & Neck Rotation

This node uses FABRIK IK and passes in a rotation calculated in the character blueprint. The FABRIK IK node in this case is set to Copy Target Rotation; the tip bone is set to the head and the root bone to the neck.

FABRIK IK node
This will need to be updated to set limits on maximum head rotation.

The next step here is a node called Blend Poses by Bool. This allows turning an animation feature like this one on and off: if the programmer or animator turns off the "Follow Head Position" variable in the character blueprint, the cached pose from the previous step is used instead, without any modification to the head/neck turn.
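The head-rotation limits and the bool-driven fallback could look roughly like this in code form. This is only a sketch: the function name, angle limits, and the fact that the math lives in the character blueprint are all assumptions, not the project's actual implementation.

```cpp
#include <cassert>

// Yaw/pitch offset the character blueprint would feed into the FABRIK node.
struct FHeadRotation { float Yaw; float Pitch; };

// Small clamp helper (avoids relying on std::clamp).
static float ClampF(float V, float Lo, float Hi) {
    return V < Lo ? Lo : (V > Hi ? Hi : V);
}

// Clamp the desired look-at rotation to a maximum head turn. When
// bFollowHeadPosition is false, return a zero offset so the cached pose
// passes through unmodified -- the role Blend Poses by Bool plays.
FHeadRotation ComputeHeadRotation(float DesiredYaw, float DesiredPitch,
                                  bool bFollowHeadPosition,
                                  float MaxYaw = 80.f, float MaxPitch = 45.f) {
    if (!bFollowHeadPosition)
        return {0.f, 0.f}; // use unmodified cached pose
    return { ClampF(DesiredYaw,  -MaxYaw,  MaxYaw),
             ClampF(DesiredPitch, -MaxPitch, MaxPitch) };
}
```

Clamping here addresses the TODO above about limiting maximum head rotation, so the NPC never turns its head past a believable range.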

Anim BP Part 4: Adding Object Interaction

This part is still in development. Its setup is similar to the neck-turning setup, but relies instead on the position of an object, matching the hand, arm, and shoulder position to that object. It can be toggled with a boolean, and the character can be set up to trigger this boolean only when an object with a matching interface begins an overlap with the character.

The next step for this is investigating the Power IK plugin to see if we can get better results.

Power IK

Investigation into Power IK was underwhelming: it feels like a bad trade-off of high complexity for low payoff. Its full-body IK tends to twist more parts of the human mannequin than we really want, and in my opinion it does not offer the easy-to-use flexibility of the FABRIK IK nodes for specifying how far up the bone hierarchy you want the IK to influence.

Exploring this, however, introduced me to Control Rig, which I think has high potential for helping our animation systems.

Control Rig

Building Additive IK with Control Rig Nodes

One of the really attractive things about working with Control Rig is the ability to pass in parameters and consolidate multiple animation adjustments into one cleaner node.

modifying an animation by applying IK with control rig

For example, with this older approach of using multiple IK nodes and then adjusting wrist rotation, all of these nodes would need to be copied into other parts of the animation blueprint to re-use the same effect (say we wanted to make an anim BP sequence for shaking hands and for grabbing an object). With Control Rig, we could build one rig for a general purpose (moving the arm towards an object, or a hand to shake) and then apply more specific nodes afterwards to the hands and fingers for the difference between hand shaking and object grasping, trimming down the complexity of the overall animation blueprint since these nodes are shared.

previous example, multiple IK’s chained together for opening a door

Control Rig Controls, Forward Solve, Backward Solve

After learning about Power IK and its control rig, I later found that Epic Games has a much more fleshed-out humanoid control rig for both male and female mannequins, available as a downloadable project.

The control rig example provided by Epic Games

The control rig graph is separated into two sections, forward and backward solve.

green forward solve, blue backwards solve

The forward solve section means that if you place instructions there to modify things (such as moving the hand control to a specified location, then specifying with an IK node that the hand bone in the rig hierarchy moves to that control's location), the control rig, when applied in the animation blueprint, adds those instructions on top of whatever pose is fed into it.

How the control rig interacts with poses being put into it (whether from the animation blueprint or directly specifying an animation) depends upon the backwards solver.

The backwards solver in action, taking third person run and applying it to the control rig.

Control Rig And Interaction With Sequencer

Control Rig Forwards Solve vs Backwards Solve

Control Rig Component (for use with Sequencer)

Control Rig vs Re-targeting

Logic for Forts

Forts are set up as small dense challenge areas that force the player to fight a set of enemies in order to either unlock progression on the map (opening the ice door on this map) or to unlock a fixed reward (coins or a permanent extra heart when encountered on the open world).

They are set up as the object GeneralFortBoundingBox and include the following parameters:

  • Number of Enemies – spawns a fixed number of enemies at random locations inside the bounding box, right at the start of the fort encounter.
  • Spawn Portal Points – an array of point objects that specify where spawn portals should spawn. If empty, no spawn portals spawn as part of the fort’s challenge.
  • Fort Number – used to track which fort is activating, when relevant for keeping track of progression on the map.
  • Crossbow Rat Spawn Points – the array of points where crossbow rats should spawn.
  • Siege Bows Array – the array of the actual bows, already placed on the map. The player can operate these bows after they kill the rat, so they serve as a kind of turret.
  • Is IceFort – boolean value deciding whether progression logic should apply.
  • Fort Finish Point – a single point actor indicating where to spawn the sphere that shows the player has conquered the fort.

When the player enters the area of a fort, fort-specific combat music starts, the fixed number of enemies spawn, the spawn portals are created, and the siege bow rats are spawned.

Any open world enemies still alive are deleted and the door of the fort closes behind the player.

If the player teleports out of the fort, all enemies are destroyed and the fort-specific combat music stops. Some logic should take place at this point to spawn enemies back out in the open world again.

If the player dies in the process of battling inside a fort, they should respawn at a point directly outside of the fort.

To win the fort, the player must close all the spawn portals and kill all the crossbow rats – as soon as this criterion is met, all enemies are destroyed and victory music should play.
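As a rough sketch, the fort's clear condition amounts to a check like this (FFortState and IsFortCleared are illustrative names, not the actual blueprint logic):

```cpp
#include <cassert>

// Minimal stand-in for the state a fort tracks during its encounter.
struct FFortState {
    int OpenSpawnPortals;  // portals the player still has to close
    int LiveCrossbowRats;  // rats still manning siege bows
    bool bIsIceFort;       // whether map-progression logic should apply
};

// Returns true when the win condition is met; the caller would then
// destroy any remaining enemies and start the victory music.
bool IsFortCleared(const FFortState& Fort) {
    return Fort.OpenSpawnPortals == 0 && Fort.LiveCrossbowRats == 0;
}
```

Keeping the condition in one place means the regular enemies spawned at the start never block victory: only portals and rats count.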

In the future, some forts will feature specific mini-bosses and additional parameters will be added to specify different sets of enemies.

Ice Dragon Combat Behavior

Non-Combat Sequences

Idle Breath With Head Turn Behavior

(optional) Talk every 5-10 seconds.

Wounded Breath With Head Turn Behavior

Dragon is wounded, player can walk up and inflict damage in this state. Player needs to look for weak spots.

Stationary Flight With Head Turn Behavior

Dragon appears flying stationary at start of map. It tells player they should turn back, head rotates with player. It flies off towards the back of the ice cavern and is not seen again.

Combat Sequences

When player enters cave, door closes back up behind them.

Dragon lures player into cave, says they can approach world portal behind them.

Stationary on foot, head moves around to follow player

When player passes trigger box, dragon turns to player, says "you fool", blasts them with ice (turning off player movement until they break chains – maybe entrap player in a giant ice cube instead, feels more on theme), and flies out of the cave.

After player passes cave trigger box (to follow dragon):

Dragon summons rats to mount crossbows (the crossbows don't exist until this moment), then flies stationary for 20 seconds taunting the player. Occasionally it flies over the player (a low pass) then returns to its stationary taunting position; these passes do no damage. The dragon also occasionally launches slow-moving projectiles, which the player can avoid or hit with ballista arrows or the hammer; these do damage.

Dragon flies in an overhead circle then flies down to a lower point and launches projectiles at player

Particle Effects On Projectile & Dragon Breath

Projectiles: looking at using bp_ky_shot_ice as an effect model for the projectile itself.

Consider using ky_cutter for another take on the projectile.

Consider the ice-spiderling projectile.

Magic spray ice 02 for dragon breath.

Moving spin ice 02 is cool for a potential big projectile.

Projectile bomblet ice 02 is interesting.

Spear blast maybe good for the end of ice breath.

Dragon falls down after player hits it with an arrow, looking more wounded. Player has to hit dragon 3 times. Program a radius around it for when player enters.

Player enters radius, dragon perks up and says "this is not OVER", puts player in an ice cube, and starts to run in one direction. (This cues a new save/restart position.)

Player has to hit dragon during this phase by reflecting its projectiles or somehow hitting it with the hammer or a grippable weapon (will be difficult since it's far away). Dragon takes 5 hits (plays a voice line when hit, flashes when hit).

Dragon flies overhead, swoops over player, and drops icicles from above.

Dragon flies to a position above player, stays stationary, and spits slow-moving projectiles the player needs to destroy before being hit.

When the player hits the dragon during this phase with hammers, it flies or runs directly back to the recharge area.

When player destroys portal object, enter last phase. Large dragon ghost appears in middle of map on pedestal.

Dragon runs to one of 8 octagonal points. It turns and fires projectiles at the player (player must duck/weave past projectiles as they fly in, or hit them with hammers). Dragon then moves to the next octagonal point.

If player hits dragon with weapon, it sits stunned for 15 seconds. Player can hit during this time and damage inflicted shows on body. Large dragon ghost in center of map decreases in size and flashes when dragon is hit (this is life bar).

If dragon is not killed while stunned, it goes ghost, a ghoulish laugh plays from center, then it respawns at one of 8 octagonal points.
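Assuming the eight octagonal points sit evenly on a circle around the arena center, they can be computed rather than hand-placed. A minimal sketch (the helper name, radius, and 2D layout are all assumptions for illustration):

```cpp
#include <cassert>
#include <cmath>

// Simple 2D point; the actual game would use FVector with a fixed Z.
struct FPoint2D { float X; float Y; };

// Return point Index (0-7) of a regular octagon of the given radius,
// centered on Center. Index 0 lies along +X; points go counter-clockwise.
FPoint2D OctagonPoint(int Index, float Radius,
                      FPoint2D Center = {0.f, 0.f}) {
    const float Angle = (Index % 8) * (2.f * 3.14159265f / 8.f);
    return { Center.X + Radius * std::cos(Angle),
             Center.Y + Radius * std::sin(Angle) };
}
```

Generating the points this way also makes it easy to pick a random respawn index when the dragon goes ghost, or to step to the "next" point by incrementing the index modulo 8.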

When player finally kills dragon, the center pedestal where ghost was shows a new sword. Player can use this sword to break portal and teleport back to main world.

Made a behavior tree called baseDragonPlayer Swoop

Tested keeping dragon within a specified radius and adding to its turn radius to keep it within the edges of the circle

Still ran into issues with collision with large objects on map when dragon was at low elevation after swooping towards player.

Also ran into some issues with this approach of adding to the turn radius, where it either looked choppy or momentum would still carry the dragon to an unpredictable position.

Also found that the experience of the dragon swooping on you in first person isn't as fun – the first-person view blocks all sight of the dragon's destruction because of the particle effect coming down on you.

Decided to re-orient the design more towards a siege fight with allies, so you can appreciate the dragon's destruction from a third-person view.

Player can open 4 portals for turtles who will assist firing projectiles at the dragon.

Dragon will attack those turtle positions and take a center stationary flight position and shoot homing projectiles at player. Player will be forced to go between turtle positions to reactivate them or find area shielded from dragon.

This approach also gives a better lead up feeling to the fight with the dragon–you are rallying increasing numbers of turtle allies as you fight your way through the bottom canyon.


Draw distances inside the cave need looking at; new max values are not updating.

Add some more roll to the dragon turning on its end; it will make it feel a bit more real.

Add a bit more pitch variation, potentially tied to wing flaps.

Development Blog #1: Designing Trigger Box Logic

This style of dev blog is meant to be half tutorial, half the reader following along with my thoughts as I develop this VR game.

Right now I am about a month and a half into designing this new ice map.

The ice map is meant to be the first in a series of dungeons, which offer something a bit different than the open world ‘overworld’ part of the map. These dungeons are meant to be linear experiences that offer the player a high challenge in terms of density of monsters, offer some puzzles, and ideally introduce a new item/mechanic as a reward for the player progressing.

The challenge right now for this section of the blog is designing trigger box logic for various encounters on the ice map.

Let’s dive into the sequence of events now:

  1. Enter the map.
  2. See the dragon near the cave entrance, have it shout some menacing things at you.
  3. Fight your way down the ramp until you’ve activated the first respawn point.
  4. Fight your way through three sets of monsters to activate three puzzle platforms which allow you to control giant eye towers and point them towards the ice door.
  5. Unlock the ice door finally and fight boss dragon in a boss battle.

First we want the player to descend the mountain and encounter enemies along the way.

This part is easy – we put down some trigger boxes (green squares), and each time the player overlaps one, we spawn enemies. We implement a Do Once node; if the player dies, we should reset this node as part of the death sequence so enemies can spawn again (we'll get to this later).
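The do-once-and-reset latch can be sketched in plain C++ as a stand-in for the Blueprint nodes (FSpawnTrigger and its member names are made up for illustration):

```cpp
#include <cassert>

// Sketch of the latch each spawn trigger box uses. bSpawned mirrors the
// Do Once node; ResetOnDeath reopens it as part of the death sequence.
struct FSpawnTrigger {
    bool bSpawned = false;

    // Returns true only the first time the player overlaps
    // (this is where the enemies would be spawned).
    bool OnPlayerOverlap() {
        if (bSpawned) return false;
        bSpawned = true;
        return true;
    }

    // Called from the death sequence so enemies can spawn again.
    void ResetOnDeath() { bSpawned = false; }
};
```

The important detail is that the reset lives in the death sequence, not in the trigger itself, so a living player can never re-trigger the same spawn.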

Next step:

When the player reaches the final end of the ramp, we want them to unlock a new respawn point.

Open questions: should they unlock this respawn point only if they’ve defeated all enemies? What if they don’t go towards it and teleport off the edge somehow?

To solve this, I decided to implement either a magic cylinder that draws to this point in the sky, signaling clearly to the player that they need to go there to progress, or a line drawn from the player's hand towards this goal. I'll add this implementation to my todos at the bottom of this post.

Next step.

We want the player to now see an event take place where these eye towers come out of the ground. We also spawn monster portals and/or boss monsters along the way near these towers.

These towers are controlled by this platform with a VRLever the player can spin.

This platform should only be unlocked once the player has beat all the monsters. If they die in the process of fighting the monsters, they should respawn at the last respawn point.

OK–so how is this all achieved in code?

Spawning Monsters–handled by the respawn pad blueprint:

Relevant function: SpawnNextMonsterEvent – this gets called in one of two scenarios: either the player has survived and finished the previous puzzle, or the player has died. Then, when the player ends overlap with this wider collision area, the whole sequence starts again – so the sequence only restarts once the player has had a chance to recover mentally and moves again.

Keeping Track of When to Unlock Lever

This loop is activated after spawning monsters. All spawned monsters, and other classes associated with them (such as the big spawn portals we create), are added to the spawnedActors array. Every 0.5 seconds we check whether this array's count is 0; if not, we assume the player has not finished the sequence. If it does tick down to 0, we destroy the ice cube component of our lever class. TODO: need to draw more attention to this – make sure it's clear to the player that they have to go to the lever after clearing the enemies.
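A minimal C++ sketch of that clear check, using ints as stand-ins for actor references (FClearCheck and its members are illustrative, not the project's actual classes):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the 0.5-second clear check. SpawnedActors holds monsters and
// their spawn portals; when it empties, the lever's ice cube is destroyed.
struct FClearCheck {
    std::vector<int> SpawnedActors; // stand-ins for actor references
    bool bLeverUnlocked = false;

    // Called when a tracked actor dies or a spawn portal is closed.
    void OnActorDestroyed(int ActorId) {
        SpawnedActors.erase(
            std::remove(SpawnedActors.begin(), SpawnedActors.end(), ActorId),
            SpawnedActors.end());
    }

    // The timer body: if nothing remains, unlock the lever
    // (in the game, destroy the ice cube component here).
    void Tick() {
        if (SpawnedActors.empty())
            bLeverUnlocked = true;
    }
};
```

This also shows why the death logic must cancel the timer: if the monsters are destroyed on death and Tick fires once more, the array is empty and the lever would unlock for free.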

Handling Player Death

If the player dies, we respawn them and destroy all the monsters and return their life back to maximum.

We also have to stop the loop checking for spawned actors before it has a chance to trigger, so the player doesn't get credit for finishing a sequence just because the monsters despawned. This is done by some code in our GameMode class that checks which map we are on, then has a dummy blueprint actor we placed in the level handle all our death logic.

Right now this actor has two responsibilities:

  1. Reactivate the correct respawn pad so that sequence starts again once the player starts moving.
  2. Kill any loops the respawn pads are running to check whether their clear conditions are met.

Saving Progress

So if a player has cleared 2 respawn pads, how do we keep track of that?

Also, if a player quits the game, how do we keep track of which map should open when they restart the game?

For now we are implementing this fairly simply, assuming it's always the same player and they aren't selecting their own save game file – later on we'll have to go back and allow multiple save games so different players can have unique progress.

Right now it is implemented with two variables on the save game: iceLevelHighestTP and currentWorldToLoad.

When the player dies, execution reaches this section of logic in the function "RespawnPlayerOpenWorld". It loops through all the possible respawn objects and gets the one equal to the highestTPNum from the save game. When the player completes one of the lever puzzles, it saves a higher number.
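The lookup could be sketched like this in plain C++ (FRespawnPad and FindRespawnPad are hypothetical names standing in for the Blueprint logic):

```cpp
#include <cassert>
#include <vector>

// Stand-in for a respawn pad actor: its teleport number and location.
struct FRespawnPad { int TPNum; float X; float Y; float Z; };

// Find the pad matching the highest teleport number saved so far.
// Returns nullptr if none matches (caller falls back to the map start).
const FRespawnPad* FindRespawnPad(const std::vector<FRespawnPad>& Pads,
                                  int HighestTPNum) {
    for (const FRespawnPad& Pad : Pads)
        if (Pad.TPNum == HighestTPNum)
            return &Pad;
    return nullptr;
}
```

Because progress is stored as a single highest number rather than a list, completing a lever puzzle only ever has to write one larger value to the save game.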

Next Steps:

Beyond our todo’s, we need to consider the layout for all of these objects in the map. Here you can see the position of the 3 eye towers (all will be under the ground so they have a chance to rise up).

We also have to think about the locations of the towers with the rats operating ballistas, and how we want them to respawn. I am leaning towards having them respawn every time regardless of the player's location; I want them to feel annoying – not necessarily dangerous, but something to keep your mind on. I might add one more rat tower for good measure.


Convert spinner to VRLever Object and test

Implement sound once VRLever object is unlocked

Draw attention to the VRLever once it's unlocked.

Implement sound and animation once final ice door is unlocked

Reset trigger boxes' ability to spawn monsters after player death

Implement a magic cylinder of great height at the first respawn point when that is the player's goal, or draw an arrow to this point from the player's hand.

Sound for eye towers coming out of the ground.

Test travelling to this new level from previous level

Reward waiting: move on to doing ice dragon boss fight behavior logic.