The base drone model is a skeletal mesh with two material slots, 10 bones and 4 sockets. It is located at “/Content/AssetCollection/AF_Ltd_objects/SK_Drone”. All drone animations can be found in “Content/AssetCollection/AF_Ltd_objects/Animations/Drone” and all material variations in “Content/AssetCollection/AF_Ltd_objects/Materials/DroneMaterial/NewDroneMaterialVariations/Materials”.
There are two drone material variations for the drone body. The main one is “MI_DirtyDroneGreen” which is used by default. There is also an alternate variation in red “MI_DirtyDroneRed”.
The eye of the drone uses a different material slot and can be changed independently from the rest. By default it uses a simple red emissive material.
There are a series of drone animations already in the project. Some of these are ‘full animations’ (i.e. they affect every bone and are designed to be used standalone), others are only partial in that they are made to be blended with others at the same time.
These can be controlled and combined with animation blueprints. Animation blueprints are a set of instructions designed to move the skeletal mesh between different animations depending on certain triggers. Below is a simple example of the animation blueprint for the timer drones.
Initially the drone is in an offline state. This sets the drone to use the ‘Anim_Drone_DropIn’ animation but with a play-rate of 0 so it doesn’t move.
For the drone to transition to the next state, a certain trigger must be met. Here, the boolean value ‘Begin’ must be true. This value can be changed by an external blueprint. In the case of the drone timer, when the player runs through the trigger volume, this value is set to true and the drone can transition to the ‘Enter’ animation state, dropping out of the sky.
Once the drone has progressed through a certain portion of the animation, it will transition to the idle state. The Crashed state here is a separate option for adding timer drones that have already crashed on the ground (i.e. don’t show the dropping-in animation) but still show the time.
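As a rough illustration, the state flow described above (Offline → Enter → Idle, with Crashed as an alternate start) could be modeled like this. This is a Python sketch, not project code; the 0.9 progress threshold standing in for “a certain portion of the animation” is an assumption.

```python
# Sketch of the timer-drone animation state machine described in the text.
# State names and the 'Begin' flag mirror the document; the progress
# threshold (0.9) is an assumed value, not taken from the project.

class DroneAnimStateMachine:
    ENTER_ANIM = "Anim_Drone_DropIn"

    def __init__(self, start_crashed=False):
        # Crashed is an alternate start state for pre-crashed timer drones.
        self.state = "Crashed" if start_crashed else "Offline"
        self.play_rate = 0.0   # Offline holds the drop-in pose frozen
        self.begin = False     # set externally, e.g. by a trigger volume

    def set_begin(self, value):
        self.begin = value

    def update(self, anim_progress):
        """anim_progress: 0..1 fraction of the Enter animation played."""
        if self.state == "Offline" and self.begin:
            self.state = "Enter"
            self.play_rate = 1.0   # start the drop-in animation
        elif self.state == "Enter" and anim_progress >= 0.9:
            self.state = "Idle"
        return self.state
```

In use, an external blueprint would flip `begin` when the player crosses the trigger volume, after which `update` advances the state as the drop-in animation plays.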
The drone contains multiple sockets for blueprint actors to attach particle effects to. This might be too expensive for scenes with many drones planned on screen at once but is fine for less complex scenes.
The engine trail FX – to be attached in two instances to the sockets “EngineL” and “EngineR” – is a Niagara system found in “/Content/AssetCollection/AF_Ltd_objects/Particles/FX_DroneEngineTrail”.
The Drone Projector FX – to be attached to the “EyeInnerSocket” socket – is a Niagara system found in “/Content/AssetCollection/AF_Ltd_objects/Particles/FX_DroneProjector”.
The archery related blueprints can be divided into three camps:
Core blueprints, without which archery would not work.
Secondary blueprints, related to challenges, targets, obstacles, etc.
Aid blueprints, which either provide getter functions or add polish.
For the first camp we have the following blueprints and needed interfaces:
The second camp is comprised of the following blueprints and interfaces:
The third has the following blueprints:
The interfaces are used to route communication between all of the above blueprints without requiring casting and hard references to their respective classes (thus we only use references of class AActor).
2 Core Blueprints
Just a brief high-level explanation of how the interactions appear before diving into greater detail of each blueprint.
The player first grabs the quiver, attaching it onto himself, and then grabs the bow. Reaching for the quiver, the player can grab an arrow, nock it into the bow, pull the string and release, setting the arrow loose. All the attaching events, gripping and so on are mediated by the aforementioned interfaces.
So, without further ado a deeper explanation of each blueprint.
This blueprint handles the mechanism of feeding arrows to the player.
It has an active grip-enabled volume that listens to “grip events”, handling them at its discretion.
There are only two settings that one needs to be aware of to further tweak this blueprint’s behaviour:
PlayerHapticWarnScale – Controls the scale of the haptic feedback that warns the player that a new arrow can be grabbed. This trigger is sent through BPI_PlayerController.
PlayerHapticWarnDelay – A small delay before the grab-a-new-arrow haptic feedback effect.
These settings can be found under the category Settings|Haptics.
For a first grab the blueprint will call “AttachQuiverToPlayer” that does the following:
Sets flag “bHasAttachedToPlayer” as True.
Forces the player to drop the quiver.
Tells the player’s pawn blueprint to attach this blueprint to itself through “BPI_ArcheryPlayer”.
Further grip events will cause the blueprint to check whether the player has a bow (BP_Bow) equipped and whether that bow does not already have an arrow (BP_Arrow) attached to it. This uses both the aforementioned interface and BPI_ArcheryWeapons.
If both responses are positive, it will trigger “SpawnAndAttachArrow”, which is sent through the BPI_ArcheryPlayer interface.
This blueprint’s core functionality is to attach arrows (BP_Arrow) to its string and set them loose, keeping track of whether an arrow is attached.
It keeps track of how far the string has been pulled (animating the string in the process and playing haptic feedback if necessary) and uses that displacement to drive the arrow’s velocity.
This blueprint has a few settings that directly impact its gameplay feel.
bUseHardestDraw – Toggle between using HardestDrawDistance and EasierDrawDistance
bUseEasiestDraw – Toggle to either use EasiestDrawDistance or one of the other variables, depending on the bUseHardestDraw setting.
EasiestDrawDistance – The value currently in use.
bUseLongBow – Toggle between using a non-fantasy-looking bow that has a notch to slot arrows into, or Paragon Sparrow’s bow.
The draw distance variables control how much one needs to pull the string to fully draw the bow – that is, how far the hand gripping the string needs to travel. Currently EasiestDrawDistance is in use. These draw values are read by the “GetDrawDistance” function.
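A minimal sketch of how the draw-distance selection and string displacement could feed the arrow’s launch velocity. The distance and velocity values here are made-up examples, and the linear displacement-to-velocity mapping is an assumption; only the variable names come from the text.

```python
# Sketch: selecting the active draw distance and converting string pull
# into launch velocity. Default numbers are illustrative placeholders.

def get_draw_distance(use_hardest, use_easiest,
                      hardest=60.0, easier=45.0, easiest=30.0):
    # bUseEasiestDraw wins; otherwise bUseHardestDraw picks between
    # HardestDrawDistance and EasierDrawDistance.
    if use_easiest:
        return easiest
    return hardest if use_hardest else easier

def arrow_velocity(string_displacement, draw_distance, max_velocity=3500.0):
    # Clamp the pull to a 0..1 draw fraction, then scale MaximumVelocity.
    draw_fraction = min(max(string_displacement / draw_distance, 0.0), 1.0)
    return draw_fraction * max_velocity
```

For example, pulling the string half the active draw distance would launch the arrow at half of MaximumVelocity, and over-pulling clamps at the maximum.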
bUseLongBow switches the skeletal mesh that the blueprint uses, along with the animation blueprint, because the two skeletal meshes do not share a skeleton.
Currently the longbow might need a bit of polishing on the right-hand socket and a more in-depth tweak to its animation blueprint (the string displacement needs further refining).
After the bow is grabbed it will remain attached to a player’s hand, triggering forced grabs when needed (these go through BPI_ArcheryPlayer).
This blueprint is just a projectile with little functionality added to it.
It has a volume that triggers attaching events (that is, attaching to the bow if the bow does not already have an arrow attached) and another volume that handles collision events (these are handled by BP_HitReactionComponent and BPI_HitReaction).
A few settings exist that impact either the gameplay or just the interactivity of the arrow:
bShouldBeGrippable – Whether this arrow should be grippable; this setting is geared towards arrows placed in the level.
bShouldBeAffectedByGravity – Toggle off if gravity should not affect the arrow after it is shot.
bShouldUseAudioComponent – A toggle to either use the AudioComponent for collision sounds or just spawn a sound at collision location.
bShouldAutoShoot – Toggle to make this arrow shoot right after being spawned; this setting is geared more towards an obstacle or turret.
MaximumVelocity – The arrow’s maximum velocity when shot.
ArrowLifespan – Lifespan of the arrow after being shot.
When BP_Arrow attaches to the bow it will detach itself from the player’s hand. It uses GripOrDropItem from BPI_PlayerCharacter to do so.
This blueprint handles collision events. The only setting to be aware of is DelayBetweenCollisions (a delay for the case where the projectile pierces multiple objects and we might not want to trigger too many events in a row).
It works as follows:
Its owner blueprint reports a collision.
Checks if it collided against an actor.
Checks that it hasn’t recently collided with said actor.
Checks that it hasn’t collided with anything recently.
Checks if the collided actor implements the BPI_HitReaction interface.
Sends all collision information to the collided actor through said interface.
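The checks above can be sketched as a simple debounce. This is illustrative Python, not the blueprint itself; the 0.2-second default delay and the return-value convention (True meaning “event would be sent through BPI_HitReaction”) are assumptions.

```python
# Sketch of BP_HitReactionComponent's collision filtering, as listed in
# the steps above. Timestamps are passed in explicitly for clarity.

class HitReactionComponent:
    def __init__(self, delay_between_collisions=0.2):
        self.delay = delay_between_collisions
        self.last_hit_time = None   # last accepted hit against anything
        self.recent_actors = {}     # actor -> time of last accepted hit

    def report_collision(self, actor, now, implements_hit_reaction=True):
        if actor is None:
            return False  # didn't collide against an actor
        last = self.recent_actors.get(actor)
        if last is not None and now - last < self.delay:
            return False  # recently collided with this actor
        if self.last_hit_time is not None and now - self.last_hit_time < self.delay:
            return False  # recently collided with something
        if not implements_hit_reaction:
            return False  # target doesn't implement BPI_HitReaction
        self.recent_actors[actor] = now
        self.last_hit_time = now
        return True       # collision info would be sent via the interface
```

Note that rejected collisions do not update the timestamps, so a piercing projectile only restarts the delay window on accepted hits.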
This character handles all archery related player interactions.
They are all attaching events, barring one (the InputAction SwitchPreferredHands).
Only two settings affect gameplay:
PreferredHand – An EHand enum that keeps track of which hand the player prefers to have the bow attached to. This setting will later be handled by the same system that will handle obstacle calibration.
bUsePreferredHand – A toggle to make the bow grab either fix the bow to the hand that PreferredHand points to, or just use the hand that grabbed the bow.
The InputAction SwitchPreferredHands event triggers a drop of the bow from the current hand, reattaching it right after to the newly set hand (just a flip/flop).
Parent class of VivePawnCharacter_ARnBox and child of VivePawn_Character.
Has no added functionality that directly affects archery.
This blueprint comes from the VRExpansion plugin by MordenTral.
It implements all player required interfaces:
No settings are used by this blueprint.
The GripOrDropItem event is implemented here.
Has the BPI_ArcheryGame mode interface.
Only one setting to be aware of:
ArrowClass – Soft class reference of the arrow projectile class that will be used when spawning a new arrow.
Implements SpawnNewArrow that spawns a provided ArrowClass and outputs the resulting spawned actor.
Communication to BP_Bow and BP_Arrow is done through here.
Two categories of functions and events exist here:
BowHasArrowAttached – Checks if the bow currently has an arrow attached to it.
GetBowAttachmentTransform – Get Bow Socket Transform, dependent on which hand to attach to.
GetStringGripController – Returns the GripController that is grabbing the bow string, IF exists.
AttachArrowToBow – Will force the provided arrow to attach to the target bow.
ForceGrabBowString – Will force the provided grip controller to grab the target bow string.
SetArrowTotalDisplacement – Sets how “pulled” the target bow string was before setting it loose.
SetArrowLoose – Will set loose the target bow attached arrow.
SimulateCollision – Will trigger the collision event on the target arrow, even if no collision has occurred.
Communication to the player character regarding archery is done through this interface.
A category exists for each element that interacts in game with this character.
AttachQuiver – Attaches target quiver to character.
DropQuiver – Detach target quiver from the character.
IsGrabbingBow – Checks if the player is currently grabbing a bow.
AttachBow – Will attach bow to provided grip controller.
DropBow – Will detach bow from the provided grip controller.
SpawnAndAttachArrow – Will trigger an arrow to spawn and to attach itself to the player’s hand.
Communication to the GM_SteamVR_ARnBox is done through here.
Currently there is only one function, just to spawn an arrow and provide a reference to it.
SpawnNewArrow – Will spawn an actor of the class set on the game mode as ArrowClass. Outputs a reference to the newly spawned actor.
Interface used to mediate collision events between actors (in order to have them linked).
HitReaction – Will trigger a HitReaction event on the target actor.
HitRecoil – Will trigger a HitRecoil event on the target actor.
Enum with two entries:
This enum is provided by the engine.
3 Secondary blueprints (Challenge Blueprints)
This section covers all archery-challenge-related blueprints, along with the target blueprints and all related interfaces and structs.
A higher-level explanation of how the challenge plays out:
The player enters the challenge platform. This prompts the challenge controller to start reading the provided timings data table. When each timing is due, it will trigger a spawn event on the challenge spawner. The challenge spawner will in turn read the provided data table that holds information on which object to spawn and which pathing spline that object should follow. If the spawner has been set to spawn a fake target, it will instead spawn said fake target, passing along the data table spawn information. When a fake target is spawned it will attempt to reach its goal destination, which is the first point of the pathing spline. When the goal is reached, the fake target will prompt the challenge spawner to trigger a spawn using the fake target’s saved spawn data. At this point, with the correct target spawned, it will fetch the provided pathing spline and use it to create its flight path, which it will then follow. If the target has any built-in behaviour, it will activate it at this point. The target is despawned when either the player shoots it with an arrow or the target reaches the end of its flight path.
This blueprint holds a spline component that is later fed to actors of the BP_ArcheryTarget class, which will use it to build their flight path.
A few settings might be of interest depending on art direction and challenge design. Static mesh components can be added to this actor to act as visual cues (i.e. lit/unlit torches, lights flickering) that could help the player know where the targets are coming from and where they are headed.
This would be more helpful if the spline has more complex designs for the flight patterns.
The settings to be aware of when applying said visual cues are the following:
bToggleVisualCues – Toggle to turn on visual cues for when a target is on this spline pathing.
VisualCueStaticMesh – What static mesh should the Visual Cue use.
VisualCueMaterial – What material should the Visual Cue use.
VisualCueDynamicParameter – Name of the Visual Cue material scalar parameter to be updated at runtime.
The challenge platform consists of two meshes (Platform and Wall). Platform is only used to play visual effects that get triggered when a new target is spawned.
The Wall is used to trigger the challenge start when the player overlaps it, while also confining the player to its bounds for the challenge duration.
Currently there are two shapes that can be toggled: a square shape and a circular one.
Just two settings to be aware of:
ChallengeController – Sets the ChallengeController to be triggered. Needs to be set after placing in the level.
bUseSquaredShape – Toggle between using a circular or squared shape for the challenge platform and wall.
When this blueprint is activated, the challenge starts. It will read the provided timings data table, evaluate each timing, checking whether it should be altered by any given amount, and when each revised timing is due it will trigger a spawn event.
It also keeps track of currently alive targets (i.e. targets that were neither hit nor have finished traversing their pathing).
Just a few settings need to be tracked:
ChallengeSpawner – The current challenge spawner blueprint.
ChallengeSong – Challenge song to play.
MusicTimingsMap – Maps the timings data tables to the challenge songs. The mapping is tied up to the song set in ChallengeSong.
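To make the controller’s timing pass concrete, here is a minimal sketch of how reading and revising timings could work. The data shapes (a plain list of times in seconds) and the flat adjustment amount are assumptions; the real blueprint reads a timings DataTable mapped per song via MusicTimingsMap.

```python
# Sketch of the challenge controller's timing evaluation: each raw timing
# may be altered by a given amount, and spawns fire when revised timings
# are due. Data shapes are simplified assumptions.

def revise_timings(timings, adjust=0.0):
    """Apply a flat adjustment to each raw timing, dropping negatives."""
    return [t + adjust for t in timings if t + adjust >= 0.0]

def due_spawns(revised, elapsed, already_spawned):
    """Return indices of timings that are due and not yet triggered."""
    return [i for i, t in enumerate(revised)
            if t <= elapsed and i not in already_spawned]
```

A game-loop tick would call `due_spawns` with the elapsed song time and fire one spawn event on the challenge spawner per returned index, recording each index so it only fires once.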
This blueprint has all the required logic for spawning the challenge targets with the correct data assigned to them.
It holds the data table with information on which objects to spawn and which spline pathing to assign, while also holding data on the chosen challenge objects and the pathing splines that exist in the level.
There are a few settings to be aware of when setting up the challenge:
ObjectsAndPathingSplineTables – An array that holds the challenge’s DataTables. Use a DataTable whose entries are of the F_TargetSpawnDataArray type.
PathingSplines – Holds the challenge’s PathingSplines. Make sure that the actors set are of class BP_PathingSpline.
BunkerSplinePairing – Maps the PathingSplines to corresponding spawn Bunkers (if existing). Key: PathingSpline. Value: SpawnBunker. Set this if using actors of class BP_Bunker in the challenge.
bShouldSpawnFakeDrone – Toggle between using FakeDrones or not. FakeDrones are non-hittable meshes meant to be spawned further away from the pathing splines.
FakeDroneSpawnPoint – The FakeDrone spawn point. The provided actor needs to have the FakeDroneSpawnPoint tag!
bSpawnObjectOnRandomPathingSpline – Set to True if the spawning object should ignore the data table’s pathing spline entry and instead have a pathing spline assigned randomly.
This blueprint will attempt to move towards a goal location (which is the starting point of a BP_PathingSpline actor) and, on reaching said goal, will trigger a spawn event, spawning an actor of the BP_ArcheryTarget class. In order to mask this spawning event, a mesh component is placed at the spawn location.
Everything is already set up such that when BP_TargetSpawnerARnBox spawns this actor it will have the correct settings. But it is always good to be aware of the following variables:
PathingSpline – Holds PathingSpline reference. This needs to be set up when spawning BP_FakeTarget!
TargetSpawnData – The SpawnData to be used to spawn the real target. This needs to be set up when spawning BP_FakeTarget!
TargetSpawner – The current challenge spawner. This needs to be set up when spawning BP_FakeTarget!
SpawnEffectMesh – The mesh to show when triggering the real target spawn.
SpawnEffectMaterial – Material to be applied on SpawnEffectMesh
ObjectsAndPathingSplineTables – An array that holds the challenge’s DataTables. Use a DataTable whose entries are of the F_TargetSpawnDataArray type.
I would add that only the two variables in the Visual Effect category need further refinement for the challenge, as these effects happen a lot during the challenge and their visual impact is significant.
This actor has all the logic needed for it to function. The children blueprints only have their behaviours implemented and override any needed functionality.
A few of its settings are set when the actor is spawned, while others are set in the editor.
PathingSpline – The pathing spline this target will follow. This needs to be set at spawn time!
TrackSteps – Amount of points to segment the pathing spline into.
FlightDuration – Duration of a full flight from start to finish of the pathing spline.
bShouldAutoActivateMovement – Toggle between having the target follow the pathing spline right after spawning or waiting for a trigger. Can be set at spawn time.
bShouldAutoActivateBehaviour – Toggle between having the target’s inherent behaviour active after spawning or waiting for a trigger. Can be set at spawn time.
RagdollDuration – Amount of time to allow physics simulation before despawning the actor. This is only relevant after the actor has been hit.
StartBehaviourDelay – Delay between spawning and activating behaviour. Requires bShouldAutoActivateBehaviour to be TRUE.
bShouldFacePlayer – Toggle to allow the target to face towards the player. Default is TRUE.
OrientationFix – A fix to properly adjust the mesh orientation on child blueprints that might need it. This is due to the drone skeletal mesh’s main axis being the Y axis. In Unreal the forward axis is the X axis, so there’s a shift that needs to be accounted for.
CollisionCapsuleSizeMultiplier – A multiplier to the capsule collision size. It will multiply against the bounds of the visual mesh in use. So at a value of 1 it closely envelops the skeletal mesh.
CollisionImpulseMultiplier – A multiplier applied to the velocity of the projectile that hit this target. It directly impacts the force that will push this actor after a hit.
TargetActor – Actor for this actor to face towards. Needs to be set at spawn time! Only used if bShouldFacePlayer is set to TRUE.
ChallengeController – This challenge’s controller actor reference. Needs to be set at spawn time!
bShouldPresentDebugText – Toggle TRUE to see debug text in game. Default is FALSE.
This actor has a weighted random bool node that selects whether it should accelerate or decelerate when its behaviour is triggered.
It basically applies a custom time dilation to “simulate” this, as changing the InterpToMovement node proved unsuccessful.
So the only setting to keep track of is the weight to be applied.
BehaviourWeight – Weight to toggle this actor to accelerate or decelerate.
This actor uses a RotatingMovement Component to Rotate the mesh around the movement path.
No settings to be tweaked.
No settings to be tracked. This blueprint uses a projectile movement component to home in on a target position when its behaviour is set.
This blueprint uses a timeline to simulate a waving motion. It basically displaces the mesh either sideways (on the Y axis), up/down (on the Z axis), or both at the same time (albeit for both at once, using separate timelines might be advisable).
A few settings to be aware of:
bWaveSideways – Toggle TRUE to allow waving on the Y axis. Default is OFF.
bWaveUpAndDown – Toggle TRUE to allow waving up and down. Default is TRUE.
WaveMaxAmount – Maximum relative displacement for the waving.
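The waving displacement described above can be sketched as a function of the timeline’s playback position. The sine shaping is an assumption (the blueprint drives this from a timeline curve), and the 50-unit default for WaveMaxAmount is a placeholder.

```python
import math

# Sketch of the waving target's relative offset. alpha is the 0..1
# timeline position; the toggles mirror bWaveSideways/bWaveUpAndDown.

def wave_offset(alpha, wave_sideways=False, wave_up_and_down=True,
                wave_max_amount=50.0):
    """Returns (dy, dz): relative displacement on the Y and Z axes."""
    s = math.sin(alpha * 2.0 * math.pi) * wave_max_amount
    dy = s if wave_sideways else 0.0
    dz = s if wave_up_and_down else 0.0
    return (dy, dz)
```

With the defaults matching the text (sideways off, up/down on), the target bobs vertically up to WaveMaxAmount units in each direction over one timeline loop.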
Communication to the challenge controller is done through this interface.
AddNewLiveTarget – Adds new live target to challenge controller target pool.
RemoveLiveTarget – Removes provided target from challenge controller target pool.
StartChallenge – Will start the challenge.
Communication between all archery challenge elements (barring the controller) is done through here.
In sum: the spawner, platform, pathing spline, targets and bunker.
GetAllPathingSplines – Returns all pathing spline actors that the spawner holds.
SpawnTarget – Will spawn an object using provided SpawnData. This will not increase the Current Row index.
TriggerNextSpawn – Tells the spawner to read the data table and spawn objects accordingly. This executes needed safety checks and will increase the Current Row index.
TriggerChainSpawning – Tells the spawner to read the data table and spawn objects accordingly. It will do so as many times as set by Number Of Spawns. This executes the needed safety checks and will increase the Current Row index.
WillSpawnFakeDrone – Checks with the challenge spawner to see if it will spawn a fake drone.
TriggerSpawnFrontDoor – Trigger bunker Front door spawn event.
TriggerSpawnRoofDoor – Trigger bunker Roof door spawn event.
TriggerSpawnAnimation – Triggers challenge platform spawn animation. To be used when a target has been spawned.
ChallengeHasEnded – Triggers ChallengePlatform Challenge End event.
InitializeBehaviour – Tells the provided target to start its inherent behaviour.
GetTrackSpline – Gets spline component from target pathing spline.
AddTargetToTrack – Tells target track that a new target has entered it.
RemoveTargetFromTrack – Tells target track that the provided target has left.
Struct that holds Object Spawn Data. Comprised of the spline track and the index of object to spawn.
Struct that holds an Array of object Target Spawn Data entries.
4 Aid Blueprints
This actor has active volumes that check whether a target is residing inside them, thereby controlling a door state: opened or closed. An open door can close when no targets are present.
This state is fed onto an animation blueprint to animate doors opening/closing.
This actor has the interface BPI_PlayerController implemented on it. It is used to pass along Play Haptic Feedback commands.
It calls Play Haptic Effect (which is native to the engine).
This blueprint function library has a few functions that help retrieve the VivePawn’s GrabSpheres, GraspingHands and GrippingControllers. It can also check whether references we may have are from a left or right hand.
It also has a function to retrieve the VivePawn.
ReturnGrabSpheres – Returns both Grab Spheres. You can either provide a GripController or GrabSphere. Toggle between using bUseGripController. Providing a valid Player blueprint is preferable but the function has a fallback for it.
ReturnGraspingHands – Returns both Grasping Hands. You can either provide a GripController or GrabSphere. Toggle between using bUseGripController. Providing a valid Player blueprint is preferable but the function has a fallback for it.
ReturnGrippingControllers – Returns both Grip Controllers. You can either provide a GripController or GrabSphere. Toggle between using bUseGripController. Providing a valid Player blueprint is preferable but the function has a fallback for it.
IsLeftHandControlerOrGrabSphere – Returns TRUE if the provided input corresponds to a LeftHand object. You can either provide a GripController or GrabSphere. Toggle between using bUseGripController. Providing a valid Player blueprint is preferable but the function has a fallback for it.
ReturnGrippingControllersSpheresAndGraspingHands – Returns all objects related to VivePawn hands. Grab Spheres, Grip Controllers and Grasping Hands. You can either provide a GripController or GrabSphere. Toggle between using bUseGripController. Providing a valid Player blueprint is preferable but the function has a fallback for it.
GetVivePawn – Returns the Player’s VivePawn if it exists. Checks that it implements the BPI_PlayerCharacter interface.
This blueprint macro library currently has two macros in it. One controls execution, providing a ForLoop with a delay setting. The other checks the validity of an actor by testing whether it has both the correct interface and tag assigned to it.
DoesProvidedActorHasTagAndInterface – Checks whether the provided actor has both the provided tag and interface.
ForLoopWithDelay – A for loop with a delay setting.
Currently it has just one event in it, meant to be implemented in Player Controller blueprints.
PlayHapticFeedback – Will trigger a haptic feedback effect through the player controller.
Hierarchical Levels Of Detail or HLOD is a method of combining multiple static meshes into a larger, simpler proxy mesh that will display at distances far away from the camera. It will then transition back to the original separate higher detail meshes at close distances.
These merged Proxy meshes are much less resource intensive to render and can drastically reduce draw calls and triangle counts for scenes when there are a lot of different objects on the screen at once.
You can see in the above example how enabling the HLOD system reduces triangles and draw calls without having to cull any objects from view. The HLOD version actually has more detail in this example as meshes are distance culled without the HLOD enabled. The HLOD system works by assigning static meshes to groups called clusters and then building a proxy mesh for each of these clusters. These proxy meshes will then be rendered instead of the original group of static meshes at distances far from the camera.
Setting up HLOD for your level
HLOD must be enabled in the world settings of your level.
To set it up after enabling this, you’ll need to access the HLOD Outliner Window. You can access this from ‘Window>Hierarchical LOD Outliner’.
The HLOD Outliner
The HLOD Outliner shows the HLOD settings and controls for your level. It shows the LOD levels, which let you have multiple sets of clusters with different settings. For example, LOD 0 has small clusters with less simplified proxy meshes, while LOD 2 has large clusters with greatly reduced triangle counts. Each level’s settings can be tweaked individually and will affect all clusters within that LOD level. You can manually decide how many LOD levels to include in your HLOD system. It also shows the HLOD clusters and all of the original meshes within each cluster.
There are three main ways to generate HLOD clusters:
You can drag objects directly from the world outliner into the desired LOD level to create a cluster and then drag other objects into that cluster to add them to it. Be warned – clicking ‘generate clusters’ will override any manual clustering you’ve already done.
To automatically generate clusters just click ‘generate clusters’ in the outliner. This will generate clusters according to the cluster generation settings for each LOD, which can be accessed under ‘LODSystem>Hierarchical LODSetup>HLOD Level x>Cluster Generation Settings’.
Here you can dictate the average size of clusters at this LOD level, how much of the cluster to try and fill with meshes and the minimum number of actors to build a cluster from.
It is possible to refine the generation process by using HierarchicalLODVolumes. These are volumes that tell the generation process to bundle all the meshes enclosed by the volume into one cluster. They can either be manually placed and resized in your scene, or you can place one around an existing HLOD cluster.
This can be a method to retain manual cluster control over certain areas while using automatic cluster generation for the rest of your scene. You can exclude certain HLOD levels in the volume’s details panel to make sure it is used only for the specific level you need.
Generating Proxy Meshes
Once the clusters are set up for your scene, you can start generating the proxy meshes by clicking ‘Generate Proxy Meshes’.
You can tweak the mesh generation settings for each HLOD level in ‘LODSystem>Hierarchical LODSetup>HLOD Level x>Mesh Generation Settings’. Here you can set the draw distance and decide whether or not to simplify the mesh.
After generating the proxy meshes, each one can be manually tweaked in the same way as a normal static mesh, including further reducing triangles and setting up LODs for each mesh.
Close up HLODs For Reducing Draw Calls On Blueprint Objects
The performance of VR games is often bottlenecked by the number of draw calls and HLOD proxy meshes are an effective way of reducing this number. Player interactable blueprint objects that have more draw calls than a standard static mesh can get very expensive when there are many of them on the screen at once. This is a method to minimize the effect of this by replacing these blueprint objects with HLOD proxy meshes until the player is close enough to be able to interact with them, at which point the proxy mesh will be swapped out for the blueprint object.
Set up an LOD0 level in the HLOD Outliner. Set ‘Desired Bound Radius’ to the minimum value, ‘Min Number Of Actors to Build’ to 1 and the ‘Override Draw Distance’ to an appropriate distance to swap between Proxy Mesh and Blueprint Actor. These settings will stop the automatic cluster generation process from adding any meshes to this LOD level except for those within HierarchicalLODVolumes. The next step is to build HierarchicalLODVolumes around each of the blueprint actors that you’d like to be affected by this system.
After these clusters are set up, you can create any extra HLOD levels on top of this level with normal settings to use the standard HLOD system as well. Once everything is set up, hit ‘Generate Clusters’. The LOD0 proxy meshes look almost identical to the blueprint actor but with far fewer draw calls.
Now you will need to tweak the max draw distances of the blueprint’s components. This value defines the maximum distance at which the component will be rendered.
You will need to tweak the numbers to make sure that the max draw distance of the blueprint and the minimum draw distance of the proxy mesh overlap by a small margin – otherwise there will be a small distance range from the obstacle where nothing will be rendered.
This generally requires some tweaking on a per-object basis, as the generated HLOD may not share the same origin as the original obstacle. For example, if you’ve set the ‘Override Draw Distance’ in your HLOD LOD0 mesh generation settings to 1000, a max draw distance of 1100 on the blueprint component may not provide sufficient overlap for every obstacle in your level. Tweak this value on each obstacle to see what works correctly.
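The overlap requirement above amounts to simple arithmetic that can be worth sanity-checking per obstacle. This sketch uses the example values from the text (1000 override draw distance, 1100 blueprint max draw distance); the 50-unit margin is an assumed starting point, not a project value.

```python
# Sketch: verify that a blueprint component's max draw distance outlasts
# the HLOD proxy's override draw distance by at least `margin`, so no
# distance band is left where neither the proxy nor the blueprint renders.

def has_draw_overlap(blueprint_max_draw, proxy_override_draw, margin=50.0):
    # Proxy starts rendering at proxy_override_draw; blueprint stops at
    # blueprint_max_draw. Require the two ranges to overlap by `margin`.
    return blueprint_max_draw >= proxy_override_draw + margin
```

Because the generated proxy may not share the origin of the original obstacle, a margin that works for one obstacle may be too tight for another, so this check is best applied per object.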
I reached that dreaded moment where our perforce server hit its maximum limit.
Even when trying to obliterate some directories from our depot, I was getting hit with the reply:
The filesystem ‘P4ROOT’ has only 2G free, but the server configuration requires at least 2G available.
Using ‘p4 configure’ to change this free-space requirement did not work for P4ROOT.
What I ended up having to do was log in as the root user of my DigitalOcean-hosted Linux server, navigate up to the home directory with cd .., then navigate into the perforce folder and manually delete one of the smaller projects from the archives folder.
After doing this, I was able to obliterate a larger project and verify the new space on the p4 server with the ‘p4 diskspace’ command.
At some point later I will need to look into setting up perforce to delete older archived versions of files.
TargetSpawner is a set-up for spawning scripted patterns of actors, for use in challenges or other gameplay. TargetSpawner is an abstraction – it doesn’t specify the type of actor that is spawned and could be used to make various types of gameplay with a common scripting format.
BP_TargetSpawner is the blueprint class used for this, but you’ll also need the struct STargetTrackRow, as the basis for a data table. In the project depot you’ll also find an example Excel file that can be used as a template.
Below are two sections – “How does it work” and a tutorial for setting up an activity.
How does it work
Inside the struct and data table
The struct STargetTrackRow simply contains eight strings representing four pairs of 2D co-ordinates.
A data table based on STargetTrackRow is one or more sets of these eight strings. Each row represents a time step. Each pair of columns (e.g. Obj0_X and Obj0_Y) represent the co-ordinates of an object to be spawned at that time.
The row name is not important. In the example above, the rows are named as 0.5s intervals, but they could have any unique name and don’t affect the tempo at all. Also in the example above, there are empty rows from 8.5s (row 18) onwards – they make no difference and could be deleted.
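The row layout above can be sketched in a few lines. This is a hypothetical plain-Python stand-in for reading one STargetTrackRow (the function name and empty-cell convention are assumptions): eight strings form four (X, Y) pairs, and an empty pair means no spawn for that object type on that time step.

```python
# Illustrative sketch of reading one data table row: four pairs of
# strings, where an empty pair means "don't spawn that object type".

def parse_row(row: list[str]) -> list[tuple[int, float, float]]:
    """Return (object_index, x, y) for each non-empty co-ordinate pair."""
    spawns = []
    for obj_index in range(4):
        x_str, y_str = row[obj_index * 2], row[obj_index * 2 + 1]
        if x_str.strip() and y_str.strip():
            spawns.append((obj_index, float(x_str), float(y_str)))
    return spawns

# Obj0 spawns at the centre, Obj2 at bottom-right; Obj1 and Obj3 are skipped.
row = ["0", "0", "", "", "1", "1", "", ""]
print(parse_row(row))  # [(0, 0.0, 0.0), (2, 1.0, 1.0)]
```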
BP_TargetSpawner has seven variables.
SpawnAreaSize is an editable Vector2D that defines the height and width of the spawn area.
SpawnAreaHeight is an editable float that defines how high the spawn area sits above the origin of the BP_TargetSpawner actor. This means that BP_TargetSpawner can easily be placed at ground level with the spawn area sitting above it, e.g. from knee-height to head-height.
TargetObjects is an editable array of Actor classes. It is used to define the one to four types of actor objects that this TargetSpawner can spawn. The four types correspond to the four pairs of columns in the data table, in order. Anything other than the first four will be ignored.
TargetTracks is an editable array of data tables. This defines which data table to use for this TargetSpawner. If there is more than one listed, they can be selected by index or picked at random when the scripting starts.
TrackTimeStep is an editable float that is used to specify the tempo of the script. It is a duration in seconds, so 0.5 will give you two beats per second, or two data table rows per second.
CurrentTable and CurrentRow are non-editable ints that are used for iteration.
On construct, BP_TargetSpawner uses the parameters to size and position the spawn area. Scripting begins when StartTrack is executed.
StartTrack takes an int as a parameter, which specifies which index of TargetTracks will be played. If it is less than zero (default = -1) then it will pick one at random.
SpawnObject spawns an actor class indexed in TargetObjects at a relative position within the spawn area. It is called by PlayNextRow. PlayNextRow also makes use of GetCurrentTrack, GetCurrentRow and GetCurrentTrackLength which are used to read the specified data table and provide the co-ordinates. When finished PlayNextRow calls the event WaitForNextRow, which re-calls PlayNextRow after a delay, unless the end of the table has been reached.
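The PlayNextRow / WaitForNextRow cycle described above can be modelled in plain Python. This is a sketch of the idea only – the function names mirror the blueprint nodes in comments, but the code itself is an illustrative assumption, not the actual blueprint logic.

```python
# Minimal model of the row-stepping loop: one row per TrackTimeStep,
# spawning an object for each non-empty co-ordinate pair in the row.

def play_track(track: list[list[str]], time_step: float, spawn, wait):
    """track: rows of 8 strings. spawn(obj_index, x, y) and wait(seconds)
    are callbacks standing in for SpawnObject and the WaitForNextRow delay."""
    current_row = 0
    while current_row < len(track):           # GetCurrentTrackLength
        row = track[current_row]              # GetCurrentRow
        for obj_index in range(4):
            x, y = row[obj_index * 2], row[obj_index * 2 + 1]
            if x.strip() and y.strip():
                spawn(obj_index, float(x), float(y))  # SpawnObject
        current_row += 1
        if current_row < len(track):
            wait(time_step)                   # WaitForNextRow delay

spawned = []
play_track([["0", "0", "", "", "", "", "", ""],
            ["1", "-1", "", "", "", "", "", ""]],
           0.5, lambda i, x, y: spawned.append((i, x, y)), lambda t: None)
print(spawned)  # [(0, 0.0, 0.0), (0, 1.0, -1.0)]
```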
Here we will walk through the steps of adding a TargetSpawner for an activity you are creating.
First, you will need some actor classes to use as your target actors. All the movement, collision and behaviour of the targets is handled by the target actors, in order for TargetSpawner to be more abstract/versatile. These could be any objects at all, as long as they are derived from Actor.
A typical use case would be to create an actor class that has a visible mesh or sprite and moves, either towards the player, or to be dodged. Another use case might be an actor that is a target to be punched or hit in some other way, that appears at a location and then disappears. You can specify up to four of these classes for use with a single BP_TargetSpawner.
Add a BP_TargetSpawner to your level.
(If you are making an actor that contains a whole challenge or other activity, you could add the BP_TargetSpawner as a child actor instead.)
Place the actor (origin) at ground height. The magenta arrow represents the forward vector of the spawn area. The target actors will be spawned with the same rotation as this arrow (in world-space). In the details panel, adjust the SpawnAreaSize and SpawnAreaHeight to suit your gameplay.
SpawnAreaSize is a half-extent, like a radius. The default values shown above would create a spawn area that is 2m wide and 1.5m tall, sitting 0.5m off the ground and extending from 0.5m to 2m.
Next add your target object classes to TargetObjects. You’ll need at least one and you have up to four. The order is significant, as it will match the columns in your table.
Unless you already have a data table to use, you can leave TargetTracks empty for now, but TrackTimeStep can be set to the tempo of your scripted spawning. This is a duration in seconds, so 0.5 will give you two beats per second – in other words, two data table rows per second. Smaller numbers give you finer control over the timing, but a longer table. Bigger numbers give you a smaller table with coarser timing, but will also help you to make more rhythmic patterns.
Making a data table
You can use either the Unreal data table editor or a spreadsheet application (or even a text editor) to add the actual scripted timing, but first you’ll want to create the data table itself.
In Content Browser, right click and make a new Data Table asset (in the Miscellaneous sub-menu). You’ll be prompted to select a struct to use for your table rows. You must select STargetTrackRow.
Editing in Unreal
Hit the Add button on the toolbar to add new rows. You can name your rows by double-clicking the RowName column. You might like to name your rows to follow your chosen timing, or you can skip naming them entirely.
Once you’ve created rows, you can edit them in the row editor panel.
Editing in Excel
Open TargetTrackTest.xlsx or TargetTrackTest.csv as a template, and save it as a new file.
Leave row 1 as it is. You can, however, change the row “names” in column A – they can be anything as long as they are unique. In the template they are half-second timings, but you might use a different tempo.
In either method, edit your rows so that each row is a time-step, and each pair of columns is a location within the spawn area. Type co-ordinates to create your spawn pattern.
The co-ordinates are relative to the spawn area. 0,0 is the centre. 1,1 is the bottom-right and -1,-1 is the top-left. You can add as many rows as you like, but you cannot add more columns.
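The mapping from table co-ordinates to a position in the spawn area can be sketched like this. It assumes SpawnAreaSize is a half-extent and SpawnAreaHeight is the height of the area’s bottom edge above the actor origin, matching the 2m × 1.5m example earlier – both conventions are my assumptions, and the function is illustrative, not the actual blueprint.

```python
# Hedged sketch of the co-ordinate convention: (0,0) is the centre,
# (1,1) bottom-right, (-1,-1) top-left. Units here are centimetres.

def to_local_offset(x: float, y: float,
                    spawn_area_size=(100.0, 75.0),  # half-extents (w, h)
                    spawn_area_height=50.0):        # bottom edge above origin
    """Map a table co-ordinate to a (right, up) offset from the actor origin."""
    half_w, half_h = spawn_area_size
    right = x * half_w                              # +x is right
    up = spawn_area_height + half_h - y * half_h    # +y is down in the table
    return right, up

print(to_local_offset(0, 0))    # centre: (0.0, 125.0)
print(to_local_offset(1, 1))    # bottom-right: (100.0, 50.0)
print(to_local_offset(-1, -1))  # top-left: (-100.0, 200.0)
```

With the defaults this reproduces the earlier example: an area 2m wide and 1.5m tall, extending from 0.5m to 2m off the ground.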
If you edited in Unreal, you can just hit save once your pattern is complete and it’s done; if you need to make changes, just edit and re-save.
Once your pattern is complete, save your spreadsheet as a CSV file.
Return to the Unreal editor and open your new Data Table asset. Hit the Re-Import button on the toolbar and browse to your CSV file. Your spreadsheet will now be imported to the data table.
If you make any changes to your spreadsheet, be sure to save your CSV and hit the Re-Import button again.
Now that you have a data table in Unreal containing your scripted spawning, the last step is to select your BP_TargetSpawner in the world and add your table to its TargetTracks parameter.
To start the BP_TargetSpawner, just call the StartTrack function, either in the level BP or in some other scripted way.
BP_HandprintComponent, MPC_Handprint and MF_Handprint are the main elements of the hand location highlighting system.
BP_HandprintComponent is a scene component that should be added as a child to each hand of the player pawn. It has an editable bool parameter called IsLeft, which should be true for the left hand and false for the right.
All this component does is take its world location and store it in MPC_Handprint.
MPC_Handprint is a material parameter collection containing seven scalar and four vector parameters. These values can be set by BP_HandprintComponent or any other script, and are used by MF_Handprint.
DistanceMin and DistanceMax specify the range of distance from the hand to the surface over which the highlight is visible.
Size is a multiplier for the size of the highlight. The visible size is also affected by the distance to the surface.
LeftAlpha and RightAlpha are used to lerp between opacity and transparency for each hand, or to hide the highlights altogether.
LeftFill and RightFill are used to lerp or switch between the two masks: T_HandprintMask_0 at a value of zero and T_HandprintMask_1 at a value of one. This would be used, for example, by other scripts to show when the hand is in range of a target.
LeftColour and RightColour specify the colour of each hand’s highlight. The alpha channel is not used.
LeftPos and RightPos are updated by BP_HandprintComponent with the world location of each hand.
MF_Handprint is a material function that takes a base colour as input, and outputs a base colour with hand highlights, as well as an emissive channel.
It works by taking the world co-ordinates from the hand, via MPC_Handprint, and projects them into screen-space, using the distance between the shaded pixel and the hand to calculate the size of the highlight.
The colours, overall alpha and mask lerping alpha are all also taken from MPC_Handprint.
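The distance-based falloff described above can be sketched as follows. The formula is an illustrative assumption – it mirrors the DistanceMin/DistanceMax/Size parameters from MPC_Handprint, but is not the actual material graph.

```python
# Hypothetical sketch of the highlight falloff: fully visible at
# DistanceMin, faded out by DistanceMax, with apparent size shrinking
# as the hand moves away from the surface.

def highlight(distance: float, distance_min=10.0, distance_max=100.0,
              size=1.0):
    """Return (alpha, scaled_size) for a pixel at `distance` from the hand."""
    if distance <= distance_min:
        alpha = 1.0
    elif distance >= distance_max:
        alpha = 0.0
    else:
        alpha = 1.0 - (distance - distance_min) / (distance_max - distance_min)
    # Apparent size falls off with distance, like a projected sprite.
    scaled_size = size / max(distance, 1.0)
    return alpha, scaled_size

print(highlight(10.0))   # fully visible at DistanceMin
print(highlight(100.0))  # faded out at DistanceMax
```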
MF_Handprint should be included in any other base material that should display hand highlights, like in the following example:
This document explains our procedural level system called RouteMgr. It is divided into three sections: an overview, how to use it, and troubleshooting.
The RouteMgr “system” consists of a few different actors and a specific way of setting up levels and sublevels.
BP_RouteMgr is an actor that must be placed in a persistent level to control the streaming of level chunks.
BP_RouteNode, BP_RouteNode_StitchPoint and BP_RouteNode_LowPoint are actors placed inside chunk levels to help with positioning and stitching them.
S_ChunkInfo and E_ChunkCategories are used to store information about the chunks. S_ChunkInfo is the basis for the data tables which constitute a level. This is so that certain data about the level is stored outside of the level asset – we can read it before loading the level itself.
BP_RouteTrigger is an overlap volume that calls an event on a BP_RouteMgr when touched by a player pawn. It’s used to tell RouteMgr to load in a new chunk and possibly unload an old one.
BP_FollowingOcean creates an ocean and/or horizon that maintains a position relative to the player, so that however far you travel you are always at the centre of the environment.
WBP_ChunkTool is a utility widget you can use in-editor to automate some data input. It is still a work-in-progress, and some data must currently be input manually.
This section clarifies some of the jargon used in this document. These are terms made up for this solution and not official terminology.
Chunk. A chunk is a sub-level that can be streamed procedurally via RouteMgr.
Parent/child chunk. Some chunks are sub-levels intended to be attached to another sub-level, for example to create variations with different obstacles or decoration. These are child chunks, and the main chunk they are attached to is the parent chunk.
Stitching. When a sub-level is streamed via RouteMgr, its transform is adjusted so as to create a continuous path. This is what we’re calling stitching. The level’s transform is adjusted so that a stitch node in one chunk matches a stitch node in another chunk.
Categories. Chunks fit into one of five categories: RunFlat, RunSteep, Climb, Station and HeroSection. These are used for selecting/ordering chunks.
Head/body/tail. The head chunks are the chunks that have been loaded ahead of the player. The tail chunks are the chunks that exist behind the player before being unloaded. The current chunk that the player is traversing is the body chunk.
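The stitching adjustment defined above can be sketched in 2D. This is a minimal model of the idea, under the assumption that the chunk’s level transform is simply yawed and shifted so its entry stitch node lands on the previous chunk’s exit stitch node facing the opposite way – it is not the actual BP_RouteMgr implementation.

```python
# 2D stitching sketch: compute the yaw and translation to apply to a new
# chunk so its entry node matches the previous chunk's exit node.

import math

def stitch_offset(exit_node, entry_node):
    """exit_node is ((x, y), yaw_deg) in world space; entry_node is
    ((x, y), yaw_deg) in the new chunk's local space. Returns
    (chunk_yaw, (chunk_x, chunk_y)) for the new chunk's level transform."""
    (ex, ey), exit_yaw = exit_node
    (nx, ny), entry_yaw = entry_node
    # Stitch arrows point outward, so the entry must face opposite the exit.
    chunk_yaw = (exit_yaw + 180.0 - entry_yaw) % 360.0
    rad = math.radians(chunk_yaw)
    # Rotate the entry node by the chunk yaw, then translate it onto the exit.
    rx = nx * math.cos(rad) - ny * math.sin(rad)
    ry = nx * math.sin(rad) + ny * math.cos(rad)
    return chunk_yaw, (ex - rx, ey - ry)

# Exit at (1000, 0) facing +X; the new chunk's entry at its local origin
# already faces back (-X), so no rotation is needed, only a shift.
yaw, (cx, cy) = stitch_offset(((1000.0, 0.0), 0.0), ((0.0, 0.0), 180.0))
print(yaw, cx, cy)  # 0.0 1000.0 0.0
```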
Levels: LVL_RouteMgr_[LEVELNAME], where [LEVELNAME] is the name of the level/environment.
Chunks: SLVL_RouteMgr_[LEVELNAME]_[CATEGORY][INDEX], where [CATEGORY] is the type of chunk (e.g. RunFlat, Station) and [INDEX] is an identifier – it could just as easily be a name as a number.
Child chunks: SLVL_RouteMgr_[LEVELNAME]_[CATEGORY][INDEX]_[CHILDTYPE][INDEX], where [CHILDTYPE] describes what the child chunk contains (e.g. “Obstacles”) and the final [INDEX] is a numeric identifier.
Data tables: DT_[LEVELNAME]_Chunks, where [LEVELNAME] is the name of the level/environment.
How to Use RouteMgr
Making a procedural level
Step 1 – Set up your persistent world
Starting from a new level, set up your environment, such as skydome, fog and lighting conditions. It’s best for the skylight to be dynamic, as the geometry it will light will be in sub-level chunks. Add a BP_FollowingOcean at 0,0,0. Any horizon geometry such as mountains should be attached to the BP_FollowingOcean, either as new child components or as child actors. This is your persistent level.
Step 2 – Create chunk sub-levels
Create a new level asset. In the Levels panel, add your new level as a sub-level.
Design your chunk level as an individual island. Make sure you are adding actors to the sub-level and not to the persistent level. (You might find it helpful to change the 2nd column of the World Outliner panel to show the parent levels of the listed actors.)
You can always open the sub-level individually, to see it in isolation. (But note that the skylight will not be there, as it is part of the persistent world.)
You have an important choice here: you can make all of your chunks relative to the same sea-level, in which case, all of your stitch points will be around the same height above sea level. Alternatively, you can design chunks with varying heights above sea level, so that the path through the chunk starts low and ends high, or vice versa. The edges of landscapes and other geometry should be well below sea level, especially if the chunk might appear at different heights.
Repeat this step until you have a number of chunk levels prepared as sub-levels.
Step 3 – adding RouteMgr actors
In your persistent world, add a BP_RouteMgr. If you have designed your chunks to be at different heights, make sure StitchZ is checked and enter the Z height of your ocean in SeaLevelZ. If your chunks are designed to all be at the same height, StitchZ should be unchecked.
In each chunk sub-level, add two BP_RouteNode_StitchPoint actors. The stitch points should be placed at the entry and exit point of the chunk, where you would wish the paths to connect, with the arrow pointing outwards from the chunk.
Add a BP_RouteTrigger to each chunk. It should stretch across the whole chunk so that it can’t be bypassed, and should be placed near the middle. Where possible, put it at a place where the player’s view is obscured, as this helps to reduce potential pop-in.
If you have designed your chunks to appear at different heights, then you may also need to place a BP_RouteNode_LowPoint in some chunks. When the chunks can be at different heights, if the path dips downward between the stitch points, that point could end up under water. In that case, add a BP_RouteNode_LowPoint within about 50cm of the lowest point you expect the player to be able to go. When choosing a new chunk to load, if any of its nodes would be under water, that chunk will be ignored for selection. If your chunks are all loaded at the same height, or if one of the stitch point nodes is already at the lowest point, you can skip this part.
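The under-water rejection described above amounts to a simple test at selection time. This is a hedged sketch with illustrative names – the real check lives in BP_RouteMgr and may differ in detail.

```python
# Sketch: before loading a chunk at a candidate stitch height, reject it
# if any of its nodes (stitch points or LowPoint) would sit below sea level.

def chunk_fits(node_heights_in_chunk, stitch_z_offset, sea_level_z,
               margin=0.0):
    """node_heights_in_chunk: Z of each stitch/low-point node in chunk space.
    stitch_z_offset: the Z offset the chunk would be loaded at.
    Returns False if any node would end up below sea level (plus margin)."""
    return all(z + stitch_z_offset >= sea_level_z + margin
               for z in node_heights_in_chunk)

# A chunk whose path dips to 100cm, loaded 200cm below the previous exit,
# against a sea level of 0: the low point lands at -100cm and is rejected.
print(chunk_fits([300.0, 100.0, 500.0], -200.0, 0.0))  # False
print(chunk_fits([300.0, 100.0, 500.0], 0.0, 0.0))     # True
```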
Step 4 – set up the ChunkInfo data
Create a new data table with the S_ChunkInfo struct as a basis. (Or if you prefer, duplicate an existing one.) Each row in the data table will represent a chunk sub-level.
Make sure the ChunkData parameter of your BP_RouteMgr is pointing to the data table you just created. WBP_ChunkTool is a WIP tool to automate the next step. For now this must be done manually.
Open all of your sub-level chunks and make them all visible in the Levels panel.
Right-click WBP_ChunkTool and choose “Run Editor Utility Widget”.
In the window that opens, choose “Add Chunks”.
Any chunks that are loaded and visible will be added to the data table specified in your BP_RouteMgr. Any chunks that already exist will be updated.
You may need to edit the data table manually to add any child chunks, or specify categories or one-way chunks.
Open the data table.
Use the “+” button to add a new row. The name of the row must match the name of the sub-level asset for the chunk.
The third column, StitchNodes, should contain an array of two transforms. The transforms should match the transforms of the stitch nodes in the chunk. You can copy/paste the location and rotation.
If your chunks have multiple heights, the fourth column should list the lowest Z location of a node in each chunk. If all your levels are at the same height, you can ignore this column.
Additionally, you can add any child chunks, or specify categories or one-way chunks.
Make sure all your files are saved. (In PIE, streamed chunks will be loaded from disk, so unsaved changes won’t be visible.) Your procedural route should now be ready to test.
Making a sequential level
RouteMgr can also be used to stream an authored level in sequential chunks.
Select your BP_RouteMgr. In the details panel, uncheck UseRandomChunks, and add sub-level names, in order, to SpecificChunks. You can also choose whether or not to make them loop.
Scripting route spawning
By default, route spawning happens on BeginPlay. You can deactivate this if you want to script it in another way – just uncheck InitRouteOnBeginPlay on the BP_RouteMgr.
Adding child chunks
Child chunks can be overlaid on parent chunks to add additional detail or to provide variation in obstacles or other geometry. Multiple child chunks can be listed and one will be selected at random.
To create a child-chunk, move your optional geometry into a different sub-level.
Manually edit the chunks data table and in the ChildChunks column, add an array of names of potential child chunks for this parent chunk.
The child chunk sub-levels will be given exactly the same transform as their parent.
TO DO: There will be two ChildChunks lists: ChildChunks_OneWay and ChildChunks_TwoWay. One way child chunks will force their parent chunks to be treated as one-way if selected.
Some chunks appear under sea level
Make sure StitchZ and SeaLevelZ are set properly on the BP_RouteMgr. Make sure that the node actors and/or the LowestZ listing in the data table represent realistic values for the lowest height in each chunk, and that these numbers are above SeaLevelZ.
Chunks load in overlapped
Make sure the stitch nodes and/or the node transforms in the data table have been set, including the rotations. If you have set StitchingYawVariance to anything other than zero, try reducing it.
Chunks or child chunks are not loading
Check the output log. If you see “BP_RouteMgr failed to load chunk level: xxxxx” or “BP_RouteMgr could not find level info in data table: xxxxx” check that xxxxx matches the actual name of the sub-level asset. If not, you might need to correct it in the data table.
A specific chunk appears infrequently / seems not to appear at all
Make sure it’s been included in the data table. Check the category of the chunk as specified in the data table: HeroSection chunks will not appear as part of the procedural rotation, only when listed in SpecificChunks. If you are using StitchZ, it’s possible that the chunk doesn’t often fit above sea level, maybe because a far-away node would put it below water. In that case you could try making the level flatter, or including more levels with an upward path, to create more space.
I’m going to cover the functions of the player character, including my custom arm swing implementation, the climb up, the BPI climbable interface and how it fits into the player pawn and its parent, plus some other smaller related things.
The arm swing starts in BeginPlay. I set the movement mode to arm swing, as the parent pawn has many different movement modes and I don’t want them doing anything.
Then I set up two timers to calculate the amount of arm movement. The first runs every 0.05s and calculates the change in position of both hands over that period. The second runs every 0.3s and takes an average of the changes in position over this period, the goal being to smooth out the movement of the player so it is less jittery. Without this the player would slow down as their hands get to the top of their swing and speed up mid-swing. Then to the right I set up some initial values for these functions.
In the first function ‘Calculate Arm Swing Amount’ I simply get the change in z position, then store the current position ready for the function to run again. I do this for both hands.
In the second function ‘Calculate Arm Swing Average Over Time’ I take the stored values, add them to a total, calculate the average and then reset the values ready for the next time it’s run. I do this for both hands.
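The two-timer smoothing above can be sketched in plain Python: a fast timer records per-interval hand deltas, and a slow timer averages them to produce a stable swing speed. The class and method names are illustrative stand-ins for the blueprint functions, not actual code from the project.

```python
# Model of the arm swing smoothing: fast_tick stands in for the 0.05s
# timer and slow_tick for the 0.3s averaging timer, for one hand.

class ArmSwingSmoother:
    def __init__(self):
        self.last_z = 0.0
        self.deltas = []

    def fast_tick(self, hand_z: float):
        """Every 0.05s: store the change in Z since the last tick."""
        self.deltas.append(abs(hand_z - self.last_z))
        self.last_z = hand_z

    def slow_tick(self) -> float:
        """Every 0.3s: average the stored deltas, then reset them."""
        if not self.deltas:
            return 0.0
        average = sum(self.deltas) / len(self.deltas)
        self.deltas = []
        return average

smoother = ArmSwingSmoother()
# A swing that speeds up mid-stroke and slows near the top still averages
# out to a steady value instead of a jittery one.
for z in [0.0, 2.0, 6.0, 10.0, 12.0, 12.5]:
    smoother.fast_tick(z)
print(smoother.slow_tick())  # about 2.08 (12.5 total movement / 6 samples)
```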
At the start of the event tick I check if the player is climbing, then whether they are holding anything, and finally whether both of the hands are moving enough (this lets you reach out with one hand without moving by accident).
After that I do a check to see if the player is close to an edge.
If they are close to an edge, I adjust the speed calculation so they move a bit slower; this helps stop the player accidentally running off an edge and lets them swap between running and climbing sections with less frustration.
Finally I add movement input to the player with the speed as the scale value and the direction as an average of where the player is looking and the direction of the hands from the head.
The climb up takes the player from being on a wall to on top of the wall. The player initiates a climb up by getting to the top of the wall and then bringing their hands to their hips as they would in real life. If the player is climbing I run a series of checks on the event tick to see if they are ready for a climb up.
The first check is to see if the player’s hands are both lower than the headset.
Then I check with a sphere trace to see if an object is in front of the player to make sure they are at the top of the wall.
Then I do a trace down in front of the player to get the location I’m going to move them to.
Finally I move the player to their new location using two timelines, the first moves them up to the correct height and the second moves them forward. I do this to give the feeling of climbing up and over and to prevent the player clipping through any obstacles.
Thumb stick adjustments
Here I override the parent’s thumb-stick behaviour and let the player adjust their position using either of the thumb sticks (when not arm swinging or climbing), so they can have some fine control if they reach for something but are just out of range.
This interface is used to call events on objects when the player interacts with them either by overlapping with grip being possible or gripping. There are events for the left and right hand. In the event graph for the player pawn you can find where they are called.
Gripping events are called from two functions, ‘ReleaseGrip’ and ‘InitClimbing’, both inherited from the parent.
I have created a release grip function and added it to the parent here; the function does nothing on the parent but is implemented in the child.
Here you can see that when the function is called it takes the hand and calls the appropriate event if the gripped object implements the interface.
Init climbing is a function that already exists in the parent. In the child I added some of my own code before calling the parent’s implementation. Here I make a temp array of the overlapped objects.
And then get the closest one to the centre of the overlap sphere.
I then take the closest object and call the grip event.
Finally I run the parent function.
There are a few additional changes that I have made to the parent class. The first, which you can see here, makes sure that the player can only climb on things that implement the interface.
Here I added some code to allow physics objects to be climbed on if they contain the component tag ‘ForceClimb’.
Arm length calibration
The calibration uses a separate child pawn that can’t move and a calibration map that has instructions for the player. In the ‘Steam_VR_Player_Controller’ I have updated the spawn logic to check whether the player has been calibrated. First I check if the player is already in the calibration map; if so, I spawn the calibration pawn. If they are not on that map, I check if they have already calibrated. If calibrated, I spawn them as the regular child pawn. If not, I save the map they were on, load them into the calibration map and then spawn the calibration pawn.
In the calibration pawn the arm length is calculated when the player presses a grip button. They can then proceed by pressing a face button. The arm length and calibration status is saved in a save game object. The map they were previously on is then loaded.
This document describes the use and construction of these components. At the top I will give examples of how to place them and adjust settings, and below that I will discuss their construction.
Placing the components
Vertical Wall Climb
You can adjust the height of the vertical wall climb by moving the top spline point up and down. You can also set the difficulty of the climb, which will be adjusted to the player’s arm length.
Horizontal Wall Climb
You can pull the spline points left and right to adjust the length of the wall. There are a few options for the bars: a difficulty, and a randomness factor which adjusts the horizontal position by a random amount. You can also use only the bars if you want to build different-shaped walls.
The monkey bars follow the spline so you can add points and adjust their positions to your liking. The side beams go straight from point to point so you may need to add extra points on the curves. You can also turn off everything but the bars with the checkbox if you want them to fit into a different setup.
The scaffolding can be placed down and then adjusted in height by grabbing the top spline point and dragging it up or down. You can also choose whether you want the boards on the top by clicking the toggle. Additionally you can select an actor to attach to the scaffold for easier placement and to keep them together for future adjustments.
The rope climb is very simple, just select a climbable rope to attach it to the top and then use the spline to adjust the rope length. There are various options for the rope as well.
When setting up the rope swing you can select two scaffolds and adjust them from the rope swing, so it’s easier to position everything. You can add multiple rope swings in a row by dragging out the spline, and adjust the distance between them in the settings.
The zipline is separated into two parts: a top and a bottom. You can attach the top to a scaffold or place it wherever you like, then select the bottom to attach the zipline. Finally you can adjust the starting point of the zipline hold.
All the components use similar logic, so I’m only going to cover one; from there it should be fairly straightforward to get an idea of how the others work.
The Vertical Wall Climb
Opening up the construction script you will see a set of functions that are used to set up the wall.
The first function is used to load in the player’s arm length and set up the distance between holds, which will be used by Add Climbable Bars.
In the Add Climbable Bars function there are two important sections: setting up the loop, and getting the positions along the spline. To set up the for loop we want to work out how many bars to add, so we take the spline length and divide it by the distance between the holds. (We subtract 1 here because we don’t want a hold to appear at the bottom, in the floor.)
Next we take the index and multiply it by the distance between holds to get the distance along the spline at which we want to add the mesh. We then reverse the values so that the beams are added from the top of the spline. Finally we get the location at that distance along the spline and add a small offset before spawning the beam.
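The bar-placement maths above can be sketched in a few lines. This is a plain-Python stand-in for the blueprint with illustrative names and example numbers; the exact reversal convention is an assumption.

```python
# Sketch: number of holds = spline length / hold spacing, minus one so no
# hold sits in the floor; each hold's spline distance is its index times
# the spacing, reversed so bars are added from the top down.

def bar_distances(spline_length: float, hold_spacing: float):
    """Return the distance along the spline for each hold, top-first."""
    count = int(spline_length / hold_spacing) - 1   # skip the floor hold
    distances = []
    for index in range(1, count + 1):
        distance = index * hold_spacing
        distances.append(spline_length - distance)  # reverse: from the top
    return distances

# A 400cm wall with 100cm between holds gets three holds, top-first.
print(bar_distances(400.0, 100.0))  # [300.0, 200.0, 100.0]
```

Note how the hold spacing (driven by the calibrated arm length) directly controls the count, so longer-armed players get fewer, wider-spaced holds on the same wall.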
The next two functions, Add Side Beams and Add Backboards, both follow the same structure. We work out how many beams to add based on their height (using a fixed number here, chosen by just eyeballing what seemed about right).
Then inside the loop we add a spline mesh and attach it to the spline. To get the spline mesh set up right we need to put in its start and end points as well as tangents. Setting up the spline mesh can be a bit of a pain, as there are a few settings that can affect it, including the spline up direction and forward axis (both of which I’ve set in the add spline mesh node). The tangents can also affect the way the mesh stretches and looks, so in this case I’ve manually set them so the beams are consistent, but you could get the tangents at the spline distance.
Here is the Add Backboards function; as you can see, it is broadly the same as the above function.