JACOB BUCK: Hi, I'm Jacob
Buck, senior technical artist at Epic Games. Let's talk about vehicle
deformations and the vehicle system developed for The Matrix Awakens. We wanted to create
a dynamic and unique experience
for each player. To do that,
the vehicle crashes needed to be physically based and
unique on each playthrough. Utilizing Control Rig and
Chaos's new vehicle system, we were able to create an
experience unlike any other. The goal for the initial
highway shooting sequence was to have explosive
crashes with vehicles flipping
through the air. To add to the mayhem,
when an agent's car clips or runs into a non-agent vehicle,
that car will also begin to crash, creating a cascade effect. These crashes were achieved by
utilizing the vehicle's driving dynamics and modifying the
vehicle's center of mass. The crashes end up taking
on a life of their own. Prototyping a crash in a test
map allows for quick iterations before plugging them
into the gameplay map. Testing how the crash
behaves at 70 miles per hour gives a good
indication of how it will play during
the chase sequence. A timeline node
is used to control the steering and
any modifications to the center of mass. To create a new crash where the vehicle swerves
then counter steers to the other side, it is best
to see how the vehicle behaves with just steering. You will see that while the
vehicle is starting to lean, it needs a little nudge
for it to fully roll over. To get this to happen,
the vehicle's center of mass will need to be raised. To visualize where it is, a debug
view represents its location with a purple sphere. This is a helpful
tool for visualizing the timing between the center
of mass and the steering input. You can easily create
different feeling crashes by adjusting the timing and
amplitude of the steering and the center of mass.
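In code, the idea looks something like the sketch below: sample authored curves each tick and feed them to the vehicle. SetSteeringInput and SetCenterOfMass are real engine calls (the latter on UPrimitiveComponent); the function and curve names are illustrative, and the demo drove this from a Blueprint timeline rather than C++.

```cpp
#include "ChaosVehicleMovementComponent.h" // ChaosVehicles plugin
#include "Components/SkeletalMeshComponent.h"
#include "Curves/CurveFloat.h"

// Hypothetical helper mirroring the timeline-driven crash setup.
void TickScriptedCrash(UChaosVehicleMovementComponent* Movement,
                       USkeletalMeshComponent* Mesh,
                       const UCurveFloat* SteeringCurve,
                       const UCurveFloat* CenterOfMassCurve,
                       float CrashTime)
{
    // Steering in [-1, 1]: swerve one way, then counter-steer.
    Movement->SetSteeringInput(SteeringCurve->GetFloatValue(CrashTime));

    // Raising the center of mass gives the leaning vehicle the nudge it
    // needs to fully roll over; the timing is what shapes the crash.
    const float Raise = CenterOfMassCurve->GetFloatValue(CrashTime);
    Mesh->SetCenterOfMass(FVector(0.f, 0.f, Raise));
}
```

While physics is used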
to create the crashes, let's take a deeper
look into how physics is being used to deform the cars. Around the vehicles,
there are 14 physics bodies that act as sensors to the world. They're held to the root of the vehicle by a single
prismatic joint. In Unreal 5, plasticity has
been introduced to constraints, enabling plastic deformation. Each constraint now has a linear plasticity and
angular plasticity attribute. These attributes will work with
either the linear or the angular motors and reset the
spring's rest length. When the constraint deforms
past the specified percent, that deformation becomes permanent. Linear plasticity also gives you
the option of choosing which direction it is allowed to travel. By changing the Linear
Plasticity Type to Shrink, the constraint is
only allowed to shrink along the unlocked axis. Once the body has
traveled the length of that joint's
initial rest length, the spring can no longer be reset. This made it easy
to limit the depth that a vehicle could be crushed. By simply changing the
constraint's rest position, the vehicle is able to
sustain more or less damage.
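As a rough sketch of how that configuration looks in code (the demo authored it in the Physics Asset editor, and exact signatures can vary by engine version):

```cpp
#include "PhysicsEngine/ConstraintInstance.h"

// Hedged sketch: configuring a constraint for plastic deformation.
void ConfigureCrumpleConstraint(FConstraintInstance& Constraint)
{
    // Once the spring deforms past 10% of its rest length, that
    // deformation becomes permanent -- the dent stays.
    Constraint.SetLinearPlasticity(
        /*bLinearPlasticity=*/true,
        /*LinearPlasticityThreshold=*/0.1f,
        /*PlasticityType=*/EConstraintPlasticityType::CCPT_Shrink);

    // Shrink-only plasticity means the panel can only be crushed inward
    // along the unlocked axis, never stretched back outward.
}
```

The bumpers are pinned to the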
front and rear physics bodies with their linear
breakable attributes set. This allows them
to break free from the car when a
threshold of force has been applied to them. In Unreal 5, you now have the ability to query
and set constraint attributes in Blueprints. This allows the user
to define conditions that manipulate a constraint. The vehicle Blueprint
utilizes this by weakening the
linear breakable attributes as the vehicle deforms. Without this, the bumper
constraints would sometimes not break, causing unpleasant shapes. The vehicle Blueprint also
manipulates constraints when receiving weapon damage. The handgun causes scripted,
progressive damage, while the machine gun weakens
the constraint's linear motors on every hit.
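The same manipulation is available from C++; something along these lines, where UPhysicsConstraintComponent::SetLinearBreakable is a real engine call and the damage bookkeeping is illustrative:

```cpp
#include "PhysicsEngine/PhysicsConstraintComponent.h"

// Sketch: weaken a bumper constraint as deformation accumulates so it
// reliably tears free instead of producing unpleasant shapes.
void WeakenBumperConstraint(UPhysicsConstraintComponent* Bumper,
                            float AccumulatedDeformation /* 0..1 */)
{
    const float BaseThreshold = 50000.f; // break force, illustrative
    const float Weakened = BaseThreshold *
        FMath::Clamp(1.f - AccumulatedDeformation, 0.1f, 1.f);
    Bumper->SetLinearBreakable(/*bLinearBreakable=*/true, Weakened);
}
```

Also new in Unreal 5 is a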
node to disable a hierarchy of dynamic physics bodies. Set All Bodies Below
Physics Disabled is used to enable the doors,
side view mirrors, trunk, and hood during deformation events. This node allows you to disable
a body and all of its children from simulating by continuing
to update its position based off of its parent body. This saves extra bodies in
constraints from being in the system until you need to activate them. The root component
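From C++, the equivalent call might look like this; SetAllBodiesBelowPhysicsDisabled is the native counterpart of the Blueprint node, and the bone name is just an example:

```cpp
#include "Components/SkeletalMeshComponent.h"

// Minimal sketch: wake a door's physics bodies only when a deformation
// event actually needs them.
void SetDoorPhysicsEnabled(USkeletalMeshComponent* PhysicsMesh, bool bEnable)
{
    // While disabled, the door and its children stop simulating but keep
    // following the parent body's transform, so they cost almost nothing.
    PhysicsMesh->SetAllBodiesBelowPhysicsDisabled(
        TEXT("door_front_l"), /*bDisabled=*/!bEnable, /*bIncludeSelf=*/true);
}
```

The root component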
of the vehicle actor is a skeletal mesh driven
by a physics asset. It is invisible to the player
and takes the physics input from the world,
resulting in skeletal animation that responds to the environment. To move the skeletal mesh
that the player sees, a Control Rig component takes
in a map of sources and targets. The bones of the root
skeletal mesh are the sources, and the controls of the
Control Rig are the targets. The mapping acts like a copy
pose in animation Blueprints, but you get to define the
dictionary of sources and targets. The Control Rig
component allows access to data at specific points in
time during its initial setup and evaluation. The On Pre Initialize
event reads a data asset to build a map
between the physics and deformation skeletal meshes. This enables a
data-driven approach so each vehicle
can map the inputs to the rig differently if needed. Since the rig was designed to accommodate different
vehicle types, the same Control Rig could be used on every
deformable vehicle in the city sample. To accommodate big structural
differences in vehicle types, booleans with branch nodes are
used sparingly across the rig. Each actor Blueprint
sets its vehicle type to make small changes like trunk
versus hatch door parenting or disabling
unnecessary evaluation for the rear door of the coupe. When first looking
at the Control Rig, you will notice large
yellow spherical controls. These controls are driven by
the physics skeleton's bones through the Control Rig component. When a vehicle is
struck from the front, the physics on the invisible
skeleton react to the collision, bringing the mapped controls
along for the ride. These controls are pushed in and the rig provides
the deformations. Using a live rig to
drive deformations allows for physics input to
drive an art-directed look when they are damaged. Early on in the
development process, it was determined that the
driver of the vehicle be kept safe from door intrusions
and the roof collapsing. Limits are placed on the physics assets' linear
plasticity attributes as well as on the
Control Rig itself. You can easily bypass these
limits, but the rig will need modifications
to be able to support more serious
deformations. Using the Control Rig to power
the deformations of the vehicle also gives a unique
benefit of being able to easily record and modify vehicle deformations for cinematics. Using Take Recorder,
the physics skeleton is recorded and provides driving
and collision information. Since the physics
skeleton is mapped via the Control Rig component, any animation on those
bones will move the controls on the Control Rig. This allows a user to record
themselves playing and then layer additional
animation on top of the recording for
a cinematic shot. Using Control Rig
with a skeletal mesh whose hierarchy
is this dense did come with some consequences. Copying input data
and output transforms from the Control
Rig's virtual machine can be slow with larger hierarchies. By default,
the cost occurs every tick, even if the Control Rig is empty. The Control Rig component comes with an option for
lazy evaluation. If any input into
the Control Rig has a delta smaller
than specified by the lazy evaluation
transform attributes, then the Control Rig does
not incur the computation for the virtual machine I/O.
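Conceptually the test is just a per-input transform delta check; the engine does this internally, but an illustrative version would be:

```cpp
#include "CoreMinimal.h"

// Returns true if an input transform moved enough this tick to justify
// paying for the Control Rig VM's input/output copy. The tolerances map
// to the lazy evaluation transform attributes.
bool HasInputMoved(const FTransform& Previous, const FTransform& Current,
                   float PositionTolerance, float RotationTolerance)
{
    return !Current.GetLocation().Equals(Previous.GetLocation(), PositionTolerance)
        || !Current.GetRotation().Equals(Previous.GetRotation(), RotationTolerance);
}
```

To further speed up the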
rig, Control Rig introduced a
heat map profiler in Unreal 5 to help
identify the computation cost for different nodes in the rig. You can quickly see the
main expense of the rig is happening in
RigFrameFront and RigFrame. The rig was designed to
isolate sections of the vehicle so that they would be
evaluated only when needed. Putting the rigging
into separate functions made it easy to
use pose caching to save computation
time when only one part of the rig is deforming. A pose cache is a
snapshot of the transforms of rig elements at a specific time. In the case of the vehicles,
the controls' positions are compared to the
previous execution. If the transforms are within
the threshold specified on the Get Pose Cache
Delta, then the cache is recalled instead of computing
that section of the rig. If the transforms are
above that threshold, then the rig is
evaluated and the cache updated to
the latest pose. This setup is very similar to
lazy evaluation explained before, except the operation is
happening in the virtual machine and it is up to the user to
specify what items to cache. Using the pose cache along
with lazy evaluation, the vehicle rig shaved off
precious computation time. Since the Control Rig is
only evaluated when needed, any bones that need to
be updated every tick are placed in a post
process animation Blueprint. In the post process
animation Blueprint, position and rotation
data is copied from the physics
skeleton directly to the deformation skeleton
after the Control Rig has run. This enables the
physics bones to directly drive the
deformation bones without passing through
the Control Rig. By using a post-process
animation Blueprint to move the doors, trunk,
hood, and side view mirrors, the car's movable pieces can
still react after deformation whether the rig is cached
or being lazy evaluated. To support all of the vehicle
types in the city sample, a data-driven approach was used. Each vehicle type has a data asset
where a variety of attributes are set. This data is queried
after each physics solve to see if a deformation
event has reached a threshold to cause a secondary action. Each vehicle has a
different-sized crumple zone. Utilizing the plastic
deformation events, each vehicle will
check the distance that a physics body has traveled to determine if it
should disable the vehicle movement
component, indicated by flashing hazards. It is quick to tune how
much damage a vehicle can sustain before it's disabled. By modifying these data
assets, you can quickly change how the vehicle reacts
when it runs into things. Each triggered event has an
optional chance, distance, and rotation attribute
to fire the event.
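A data asset of this kind could be declared roughly as follows; this is a hypothetical schema for illustration, not the city sample's actual asset:

```cpp
#include "Engine/DataAsset.h"
#include "VehicleDeformationConfig.generated.h"

UCLASS(BlueprintType)
class UVehicleDeformationConfig : public UPrimaryDataAsset
{
    GENERATED_BODY()

public:
    // How far a physics body may travel before the vehicle movement
    // component is disabled (flashing hazards).
    UPROPERTY(EditDefaultsOnly, Category = "Damage")
    float CrumpleDistanceToDisable = 35.f;

    // Chance in [0, 1] that a qualifying impact fires its secondary event.
    UPROPERTY(EditDefaultsOnly, Category = "Events")
    float EventChance = 1.f;

    // Physics bodies to enable when a region is hit, e.g. popping the
    // trunk or hood open during a side impact.
    UPROPERTY(EditDefaultsOnly, Category = "Events")
    TArray<FName> BodiesToEnableOnSideImpact;
};
```

To keep the vehicle's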
bounding box small, the headlight and
tail light static meshes are spawned
during a collision. It is easy to swap
the static mesh for any object in the project. To open the doors, trunk, and hood, and to break the side view mirrors, those settings are also specified in the data asset. To have vehicle 5's trunk or hood
pop open during a side impact, it's as easy as adding
a list of physics bodies to enable when the
front door is hit. Modifying the data assets,
you can change how the car interacts with its surroundings. One feature that was cut
for performance reasons was the spawning of geometry
collections on impact. The grill geometry
collection assets still exist in the city sample and are easy to turn back on. Motion blur on
fast-rotating objects is always a problem in Unreal because it uses linear motion blur. When we look at this
effect on vehicle wheels, we get a rather unpleasant
pinwheel look when the angular velocity of the wheel is high. If we turn off motion blur
and look at the wheel again, we see what is called
the stagecoach effect. If the angular velocity
is such that a spoke of a rim rotates
to the position of another spoke over one
frame, then the spoke appears not to move. Otherwise, the spoke appears
to move forward or backwards. We faked the stagecoach
effect by allowing the wheel to rotate up until a
threshold angular velocity, then we attenuate its
rotation with a function that periodically
alternates between forward and reverse rotation.
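A minimal sketch of that attenuation, with illustrative constants:

```cpp
#include "CoreMinimal.h"

// Below the threshold the wheel bone spins normally; above it, the
// visible spin is replaced by a function that periodically alternates
// between forward and reverse rotation, like wagon wheels on film.
float ComputeDisplayedWheelSpin(float AngularVelocity /* rad/s */)
{
    const float Threshold = 30.f; // past this, real spin just strobes

    if (FMath::Abs(AngularVelocity) <= Threshold)
    {
        return AngularVelocity; // normal rotation
    }

    // As speed rises, the sine sweeps back and forth, so the spokes
    // appear to drift forward, pause, then drift backward.
    return Threshold * FMath::Sin(AngularVelocity * 0.25f);
}
```

Now the only thing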
we are missing is the rotation
blur of the wheel. We developed a material to achieve
this back on the GDC project The Human Race. It works by placing a wheel
sock mesh over the wheel with a material that renders
a controllable radial blur of the actual wheel it encompasses. Finally, we ramp the radial
blur amount on the wheel sock based on the angular velocity. Here, we see both the rotation
attenuation and the radial blur together as the final effect. To parent the sock onto
the vehicle's wheels, a simple Control Rig is
applied to the physics skeletal mesh in an anim Blueprint. This Control Rig allows the wheel
sock to move with the wheels, but it removes the wheels' rotation. Putting everything
together, the final effect gives the wheel
radial motion blur that conveys the
vehicle's speed. To take advantage of Nanite,
the cars have been carefully divided up into many
components that can be swapped for
their skeletal mesh counterparts during an impact. Geometry that
stays as Nanite has a significantly
smaller footprint than its skeletal mesh counterpart. Because of that,
vehicles that do not sustain a
plastic deformation are allowed to swap back to Nanite. Swapping the exterior
of the vehicle to a deformable skeletal mesh was heavy,
but other structural optimizations helped reduce the weight. From the deformable shell of
the car, the exterior panels were extracted from the interior. This was done to
reduce the memory footprint of the skeletal mesh from the skin cache. Being sure that all vehicles and crowd actors fit within the skin cache was critical for utilizing ray tracing and
recompute tangents. Separating these two skeletal
meshes had an added benefit that they could be LODed separately. This allowed the exterior of
the car to maintain a higher fidelity at a greater distance while bringing the
resolution down on the interior. This,
combined with streaming LODs, helped reduce the
memory footprint. Each transform component of the vehicle also had
an added weight. Spawning additional
components when needed helped regain some
extra performance. This was most notably done when
any pane of glass shatters. To complete the vehicle
system, Jon is going to talk about the
amazing VFX work that added another layer of
realism to the simulation. JONATHAN LINDQUIST: Hey,
this is Jonathan Lindquist. I'm excited to share some
of the features that we developed for The Matrix Awakens. We're going to be
covering vehicle destruction,
character dissolves, and so much more. Let's take a look at some of
the behind-the-scenes footage. First and foremost, we decided
to embrace the film's aesthetic. Targeting large bullet
holes, dynamic scratches, and realistic
vehicle-to-vehicle collisions on deformable surfaces meant
that we needed to innovate. At a conceptual
level, we wanted the vehicle to appear
physically-based. This meant that every
inch of its surface needed to appropriately respond
to forces from the environment. We clearly had to
evaluate a simulation across the entirety of the mesh's
surface to achieve our goals. Luckily UE5's effects
editor contained all of the necessary capabilities. Niagara can sample
points on the mesh over time, perform custom logic, and render out textures. This made it our go-to solution
for all of our simulation needs. The next challenge was
to map a simulation across the vehicle's surface. As you might
imagine, our vehicle Blueprints are
incredibly elaborate. They mostly start off as
Nanite meshes and then elements are swapped out with
new components as needed. Those moment-to-moment changes
made it difficult for Niagara to track surfaces over time. So to avoid this,
we made a single proxy mesh that contained all of the
vehicle's important features. The proxy and display meshes
contained an identical second UV layout. As a result, textures rendered
using the proxy's second UV space were directly applicable to
the final visible surface. Get Triangle Coord at UV
allowed us to evenly distribute a 2D array of particles
across the model surface. Given both the
particle's initial grid position and mesh location, we are able to
associate each point on the model with
a render target pixel. We were then able to
focus on rendering simulation data to textures.
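The bookkeeping behind that association is simple; a sketch (our names) of mapping a particle's grid index to its pixel and UV:

```cpp
#include "CoreMinimal.h"

// Which render target pixel stores this sensor particle's data,
// given a GridWidth x GridHeight array of particles.
FIntPoint GridIndexToPixel(int32 ParticleIndex, int32 GridWidth)
{
    return FIntPoint(ParticleIndex % GridWidth, ParticleIndex / GridWidth);
}

// The UV center of that pixel, usable to look the simulation data back
// up from the texture inside the vehicle material.
FVector2D PixelToUV(const FIntPoint& Pixel, int32 GridWidth, int32 GridHeight)
{
    return FVector2D((Pixel.X + 0.5f) / GridWidth,
                     (Pixel.Y + 0.5f) / GridHeight);
}
```

Here we can see the colorful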
blend of values that drive all of the vehicle's wear and tear. First we'll cover the
vehicle-to-vehicle collision data stored in the blue channel. The Niagara team
wrote a bounding volume hierarchy data interface, or BVH for short. This can be used
to quickly find the nearest particle
in a neighboring system. In our case, it highlights
the most probable point for a vehicle-to-vehicle collision. We then ran an
intersection simulation to see if the vehicle
sensors did in fact collide. The resulting information was
baked into a texture for use in the material graph. This development was
important and has allowed us to accurately
detect collisions between two complex animated surfaces. We implemented tension mapping
to add dents and scratches. Tension mapping is
typically done by displaying the difference between
an edge's at-rest length and its current deformed length. When shorter,
it's compressed, and if it's longer,
it's being stretched. This information
is often used to dynamically layer
in visual details where it matters most.
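The measurement itself is straightforward; a minimal sketch:

```cpp
#include "CoreMinimal.h"

// Classic tension mapping: compare an edge's rest length against its
// current deformed length. Negative = compressed (dent), positive =
// stretched.
float ComputeEdgeTension(const FVector& RestA, const FVector& RestB,
                         const FVector& CurrentA, const FVector& CurrentB)
{
    const float RestLength    = FVector::Dist(RestA, RestB);
    const float CurrentLength = FVector::Dist(CurrentA, CurrentB);
    return (CurrentLength - RestLength) / FMath::Max(RestLength, KINDA_SMALL_NUMBER);
}
```

Traditionally, one would rely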
on a regularly tessellated mesh to provide smooth results. That wasn't possible in this
case, nor was rendering out a full-resolution tension map. So we went down another route. We sought a
topology-agnostic methodology for sampling surface deformations. Maintaining a constant
kernel size, or sample area, around each particle sensor was essential for our use case. For this reason,
we developed a new approach that allowed us to precisely place
sample points near our sensor particles. First, let's explore how
these points were placed. We started with one
of our main sensors. Then a tangent space vector was used to indicate
our sample point's ideal location. As we can see,
it strays from the model's surface. Next, we needed to constrain
it to the mesh's surface to locate an appropriate
sample point. We started off by finding
intersections between our target line and the triangle's edges. If the target line hit an edge,
we performed a number of actions to find the neighboring triangle. Given the new triangle, we could
then reproject the vector down to the new surface. Then we continued the
process for as long as was needed to find a final rest
spot on the mesh's surface. Doing so three times allowed
us to track deformations around the sensor. Interestingly,
it can also be used to place water droplets
on an animated surface. This constraint works in world
space, so we can rotate the model and change wind
parameters at runtime. I'm looking forward to seeing
what the community uses this exciting new tool
for in the future. As a side note,
we explored the use of GPU raycasts to detect vehicle-to-world
collisions. While it remains a viable
strategy and an interesting area for further research, we didn't
use it in The Matrix Awakens. One of the great
features in Unreal Engine is that it allows
you to tweak material parameter collections
at runtime. This allowed us to refine the
material damage map integration across all 15 vehicles at once. In-world adjustments were vital
to finding the right look.
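Runtime tweaks like that boil down to a single call; a sketch with an illustrative collection and parameter name:

```cpp
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

// Nudge a shared damage parameter so every vehicle material that reads
// the collection updates at once, while the game is running.
void TuneDamageIntensity(UObject* WorldContext,
                         UMaterialParameterCollection* DamageParams,
                         float NewIntensity)
{
    UKismetMaterialLibrary::SetScalarParameterValue(
        WorldContext, DamageParams, TEXT("DamageMapIntensity"), NewIntensity);
}
```

Next, we pursued methods for tracing against the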
deformed surface geometry. The existing toolset wasn't exactly
suited for our unique needs, so we used this
as an opportunity to develop an
entirely new approach. We leveraged the Niagara
sensor system once again to find those points
of intersection. We actually start by
casting a ray in the weapon Blueprint against a large
bounding box around the vehicle. Once the collision is
detected, the Blueprint feeds the incoming
bullet trajectory into Niagara for refinement. Once within Niagara,
an emitter finds intersections between the incoming
bullet trajectories and each of the particle sensors. The first sensor in the
bullet's path is used. This provides us with a
very coarse intersection location in that
only the particle sensor's pivot
point is returned. We can then refine
our results using the traverse
skeletal mesh module. By walking towards the
incoming bullet trajectory, we're able to find a far more
accurate point of intersection. This worked perfectly,
but generated performance hiccups in the most intense scenarios. Toward the end of the project,
a new for loop with continue node was added. This greatly improved our
performance because it allowed us to opt into
complexity where it was needed. First we traced against spheres
for each of our sensors. Whenever a collision was
detected, we moved on to a slightly more involved
ray-disk intersection test for that specific loop.
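A sketch of that two-phase test (our helper; standard ray geometry):

```cpp
#include "CoreMinimal.h"

// Phase 1 is a cheap ray-vs-sphere rejection per sensor; only on a hit
// do we pay for the ray-vs-disk refinement. RayDir must be normalized.
bool RayHitsSensorDisk(const FVector& RayOrigin, const FVector& RayDir,
                       const FVector& SensorCenter, const FVector& SensorNormal,
                       float SensorRadius, FVector& OutHit)
{
    // Phase 1: bounding-sphere test (a miss would 'continue' the loop).
    const FVector ToCenter = SensorCenter - RayOrigin;
    const float ClosestT = FVector::DotProduct(ToCenter, RayDir);
    const float MissSq = (ToCenter - ClosestT * RayDir).SizeSquared();
    if (ClosestT < 0.f || MissSq > SensorRadius * SensorRadius)
    {
        return false;
    }

    // Phase 2: intersect the ray with the sensor's plane...
    const float Denom = FVector::DotProduct(SensorNormal, RayDir);
    if (FMath::IsNearlyZero(Denom))
    {
        return false; // ray is parallel to the disk
    }
    const float T = FVector::DotProduct(ToCenter, SensorNormal) / Denom;
    if (T < 0.f)
    {
        return false;
    }

    // ...and accept only hits that land within the disk's radius.
    OutHit = RayOrigin + T * RayDir;
    return FVector::DistSquared(OutHit, SensorCenter) <= SensorRadius * SensorRadius;
}
```

Bullet intersection locations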
are then used to spawn decals in a separate emitter. On glass,
a translucent sprite renderer is used to draw
localized cracks. When metal is hit,
a mesh renderer projects a specialized
material down onto opaque surfaces. The projected material
is then gracefully integrated into
the vehicle shader using the material
editor's DBuffer nodes. When the final impact location is found, we use Niagara's Blueprint callback
function to spawn the desired squibs at the correct location. The effects team greatly appreciated
this level of flexibility. Some of the steps that
we previously covered like finding tension map sample
points only have to happen once. So we're able to optimize
our system by performing those steps on level load. On BeginPlay, we spawn the
vehicle damage preprocessor. Once per frame, the preprocessor
calculates and records all of the data needed for
one of our 15 vehicles. It renders out information
to an atlas texture until all of the
vehicles are processed. The final output is a
set of textures that contains sensor location data
along with tension mapping rest state information. Additionally, the simulation only runs when needed. It's spawned the first
time the vehicle is damaged and is paused as soon as the
collision or a deformation ends. Damaged vehicles impose
a much greater cost than those in mint condition. So we culled damaged
vehicles in the distance. Once again,
we needed a solution that worked for both Nanite and skeletal
meshes at the same time. We found that the easiest
way was to hide the actor. Spawning a localized post-process
volume around the vehicle, along with occluding
particles, allowed us to gracefully hide the transition. We also wanted high
fidelity glass effects that exactly match the original model's material
qualities and form. This meant using actual models
rather than generic sprites. In addition to the visuals, we needed to be aware
of the effect's performance characteristics. At any time, thousands of glass fragments can be
seen in the demo. So we decided to
leverage Niagara's refined GPU particle
systems pipeline to minimize the system's cost. Those decisions drove us towards a marriage of
Blueprint, Niagara, and Pivot Painter 2. For those that don't
know, Pivot Painter is a tool that allows one to
transform groups of polygons within a static mesh as if
they were their own actors. Doing so through the vertex
shader allows one to efficiently manipulate a large number
of objects at once. We bridge the gap between
Pivot Painter 2 and Niagara for this project. Niagara can now
transform mesh elements. We next had to focus
on breaking the glass. I'd like to thank SideFX for
the outstanding support they provided on this project. They automated the task of
fracturing glass from Maya, generating useful
UVs and textures, and then they ran the
new glass elements through Pivot Painter 2. This approach hugely
increased our capabilities. The one downside was that it generated 50-plus
files per vehicle. Luckily, we were able to
leverage an editor utility widget to organize the files,
apply appropriate settings, and generate new material
instances as needed. In the end, one simply needed to assign a data asset
to each vehicle. This is where we incorporated
Pivot Painter into the mix. Each fractured glass pane is
actually a single Niagara mesh. As I enable the
purple bounding boxes, we can see that
the particles and the glass shards move in unison. This correlation is
formed by first sending the particle transforms down to
the glass material via a render target. Then the associated transforms
are applied to the mesh elements using a vertex shader. Developing an efficient
method for testing was crucial due to
the large number of assets that we
had to support. This test map let us rapidly cycle through each vehicle and its glass elements. We added a new method
for particle collisions using latent GPU ray traces. GPU ray traces can collide
against skeletal meshes, aren't view
dependent, and have a number of other unique benefits. While this wasn't used
in the final product, it is now exposed
in an experimental option in the collision module. Here's one last effect
that I'd like to share. To dissolve agents,
we first constructed a screen space bounding box around the character. Then we spawned a grid of
particles on that surface. The character and a
duplicate mesh were then made to render to the
custom depth buffer. Each had its own stencil ID.
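Setting that up is a couple of engine calls per mesh; the stencil values here are illustrative:

```cpp
#include "Components/SkeletalMeshComponent.h"

// Render the character and its duplicate into the custom depth buffer
// with distinct stencil IDs so the material can tell them apart.
void PrepareDissolveMeshes(USkeletalMeshComponent* Character,
                           USkeletalMeshComponent* Duplicate)
{
    Character->SetRenderCustomDepth(true);
    Character->SetCustomDepthStencilValue(1); // particle projection target

    Duplicate->SetRenderCustomDepth(true);
    Duplicate->SetCustomDepthStencilValue(2); // dissolve-edge mask
}
```

The duplicate was used to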
provide a screen space dissolve mask near the edges of decay. We then sampled the custom depth
buffer and stencil ID masks to project particles onto
the character's surface where they were needed. This approach
allowed us to spawn particles on arbitrary surfaces regardless of their complexity. Finally, we layered in
randomization forces and lightning to complete the effect. And that's it. Now I'd like to hand it off to Jack. JACK OAKMAN: Hi,
my name is Jack Oakman. I'm a senior technical
artist at Epic Games. Today,
I'm going to be talking about the bridge destruction
sequence that occurs towards the end of the demo. I'm going to give an
overview of the approach we took to achieve this event,
as well as offering a bit of insight into some of the tools and
techniques that Unreal 5 offers, which you can leverage
in your own simulations. As a key portion of our
chase finale sequence, we wanted to create an
event on an epic scale. The helicopter crashes
beneath an overpass, for which we used similar techniques
to the vehicle destruction pipeline outlined in Jacob's video. Once the helicopter
is at rest though, the player is able to trigger
that final cinematic explosion. The collapse that follows
was achieved entirely using Unreal 5's
Chaos destruction system and the new
suite of modeling tools. There were, of course,
a number of objectives we would first have to
identify before settling on a suitable approach
to this event. Although the finale is
effectively a cinematic sequence, we wanted to run and render the
whole simulation in real time. Of course this
would be happening within a highly-detailed
environment with plenty of vehicles,
environmental assets, and Niagara effects systems
running concurrently. The challenge then was
that of performance. And since the intent was to deploy
the demo on different gaming consoles as a digital
download, it was critical to keep
the disk footprint of this event as
low as possible. As part of a cinematic,
it was important that our result was
deterministic too. Cameras would be framed to
this event and random debris could not be allowed to
pollute the framing of shots, nor interfere with the
other actors in the scene. At the same time,
this event is highly dynamic, and so the cinematics team would need something that could easily be re-timed should they
wish to double-cut the event or apply timing
offsets, a common approach when filming explosions and destructive events. All of this needed to be
achieved in a context where Nanite is king and geometry is
therefore at a premium. The solution that would allow
us to achieve all of these aims was therefore to capture
a cached simulation. What follows is an overview of
how we achieved that result all without leaving the editor. Having been populated using Unreal
5's rule processor pipeline, the bridge stands at
roughly 100 meters long. Fracturing and simulating
its demise in situ would certainly be
impractical, therefore. And so the first task was
to quarantine the structure into a suitable clean location. We achieved this by
packing the bridge and surrounding assets
into a level instance. Once done, this could then be
read into a cleaner environment and source elements extracted,
the key benefit being that should any changes
occur within this source, we would easily be able to keep
in step with those updates. Static meshes would then be
extracted from the source and worked upon without fear of
polluting the main environment. With our sources
extracted, we could see the possibilities of bringing down such a structure. The bridge is an
intersection of many roads. And since these clipped
neatly together, we could approach them
as separate entities and benefit from the timings
of their collapse being offset, allowing for visual variation
and to allow also the simulation to breathe over a number of seconds. The bridge was therefore
separated into five key regions: the lower deck, whose
primary job was to create a gulf
between the player character and the pursuing agents. Destruction here would
be immediate in order to form a kind of sinkhole
into which the upper decks could collapse. The middle supporting
structures would brace portions of the upper
deck and fail in others. And lastly,
the upper deck would be divided into three key
regions whose collapse would occur somewhat separately, allowing the event to
span a period of seconds. Due to the scale of the
bridge, fracturing was always going to
be a high-risk area when adding to our disk footprint. Settling upon a suitable
fidelity was therefore key. The emergence of Unreal
5's new modeling suite allowed us to sculpt mesh
cutters, which could be authored in order
to concentrate fractures into key areas, effectively
creating a maximum bite radius, within which subsequent
fractures could be contained. The unaffected areas would then
become the kinematic anchors that hold the rest of the bridge up. Radial fractures
with high variability ensured that cracks were
concentrated toward the epicenter of the explosion. And clustering was used to
wrangle these breaks more linearly so the collapse
itself would better follow the original
topology of the structure. Since overpasses
of this nature are built from concrete
bound together by a framework of
interior meshing and large stretches of tension cabling, we wanted to combine
the initial concussive force of the event with a
sense of such a structure failing throughout and then
buckling under its own weight. To achieve this, and thanks to
the cached nature of our sim, we were able to throw the
kitchen sink at our setup. Kinematic bodies were
laid up beneath the bridge and animated to time
its destruction. Fields were used to break the
bridge whereupon collisions would take over and carry the bridge
down, giving a sense of an internal structure offering resistance
all the time. In this example,
we can see the combination at work. The field in red is the
site of the explosion. And the purple field will ensure
that the impact carries over into the neighboring module. The larger fields will
quickly propagate the force along the length of the road module. And after that,
it's up to the underlying collision geometry to take over.
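For readers who prefer code, firing fields at runtime can look like the hedged sketch below. The demo authored its fields as placed actors timed by kinematic bodies rather than C++, and the magnitudes here are illustrative:

```cpp
#include "Field/FieldSystemComponent.h"

// Break clustered geometry near the epicenter, then shove the freed
// pieces so gravity and collision geometry can take over.
void DetonateBridgeSection(UFieldSystemComponent* Field, const FVector& Epicenter)
{
    Field->ApplyStrainField(/*Enabled=*/true, Epicenter,
                            /*Radius=*/1500.f, /*Magnitude=*/50000.f,
                            /*Iterations=*/1);

    Field->ApplyRadialForce(/*Enabled=*/true, Epicenter,
                            /*Magnitude=*/200000.f);
}
```

Let's run the sim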
and see that result. There goes the field bringing
an upward force from below. The other fields have fired
already and the collision geo has taken over,
and there it begins to buckle,
carrying the bridge down. Solver iterations were tuned
to give us the look we want. Collision iterations
were low enough that we get a bit of variation of debris falling through
the collision geo, and push-out pairs were
disabled altogether, since this would prevent collision
intersections from easing the bridge apart as it buckles. We are now ready to capture a cache. Again, let's use this
example to show that process. The target geo is selected and
a cache collection is created. A cache manager is automatically
placed in the scene. By default, this is ready
to record the simulation. We play the sim. Once done, we can set the cache manager to static
pose and review what we've caught. The Static Pose
setting passes the transforms of our
simulated bodies and can be played
back as an animation with a minimal presence
on the physics thread. It's a particularly
useful solution when the cinematics and effects teams need to time their shots
and explosions in Sequencer. We can capture multiple
caches concurrently or isolate sections
to capture a new live simulation
against other caches already playing in the scene. Here is the result of the
combined caches, 5 in all. Now that our geometry
collections are moving nicely, it's time to add a bit of visual fidelity to really
sell the event. We'll be adding
Niagara explosions, dust, smoke, and debris later. But for now,
we want the geometry itself to feel more active and for the
internal structure of meshes and tension cabling to show
through as the bridge comes apart. Here is a prototype
portion of the bridge to demonstrate the
look we were going for. The primary operation was
to target the materials on the internal surfaces to
give a sense of layering. In this case, we laid up asphalt
onto a base course of rubble, and then a thick layer of concrete
to match the bridge exterior. Fracture Mode's recent addition of an Auto UV tool allowed each fractured
interior surface to be atlased, whereupon we could bake a depth mask of each
geometry collection. And this could then be
sampled to accurately follow the bridge's original shape. This was especially
important since the road itself is derived from spline
meshes and has a subtle curvature, making box gradients
too inaccurate to follow the surfaces appropriately. We also wanted to disrupt
the surface of the road during the explosion
to give a sense of asphalt and concrete cracking throughout. To achieve this,
we again used the auto UV tool, this time operating on the
exterior and interior surfaces to create a curvature map. Within the resulting
mask, we were then able to apply layered
hairline cracks, which could be used to concentrate
higher frequency cracking around the fractures themselves,
and lower frequency cracks would then span the broader
surfaces of the resulting rigid bodies of our fracture. With hairlines now
applied, the result was radially masked
in time with the cached simulation
using Sequencer. The final authoring
step was to mimic the internal network
of reinforcing meshes and tension cabling. To achieve this,
our approach was to leverage a set of instance geometry
exemplars, which would then be gathered in line with
the fractured geometry and go along for the ride. Given the scale of the bridge, it was easy to
propagate and layer tens of thousands of these
instances, all of which could be attached
to the geometry collection via its
auto embed tool. We determined,
however, that though instanced through
Nanite, passing so many transforms during
even a cached playback was not a performant solution. We therefore used the cached
simulation as our reference and embedded
geometry only on the surfaces that would be visible. Nevertheless, we still
have around 4,000 instances in the final version. Although comprised of only a
small number of unique exemplars, the visual complexity
that they offer is enough to fool the
eye into believing that an internal structure is present throughout the
entirety of the bridge. With our 5 cache
collections ready to deploy, their material effect
sequenced alongside them, and Nanite enabled throughout,
the bridge was complete. It was now up to the
effects team to augment the simulation with
explosions, dust and debris using Niagara. So with that in mind,
let's hand it over to Matt Radford. MATT RADFORD: Hey, I'm Matt
Radford, senior visual effects technical artist at Epic Games. Now we're going to talk about effects animation,
motion graphics, and dodging bullets, so enjoy. OK, let's talk about
pyrotechnics, explosions. A lot of things blew up
in The Matrix Awakens and we needed to figure
out a solution that would look really cool in
Next Gen but also would perform in this giant city. So I'm going to show a little
behind the scenes of how these effects came together. And then we'll deep dive on
what we did to make them. [VIDEO PLAYBACK] [EXPLOSION] Watch out! Hold steady. Hell yeah. [EXPLOSION] What are you doing? Time to blow this house down. [TIRES SCREECHING] [EXPLOSION] [END PLAYBACK] MATT RADFORD: So when
we started the project, we weren't really sure what
the PlayStation or Xbox was going to be able to do. But we knew we wanted to push
things to the next generation. So we just started making
stuff, blowing things up, seeing how it would interface
with the Chaos destruction system, and we did a little soul searching. We explored real
time fluid simulation in the engine,
which you can see here. One of our artists, Dan
Pearson, did this cool test where a car spins and flips over, and then there's like a
bullet time fireball. And it all ran real
time in the engine. It was really satisfying
to play with and look at. And this feature is actually
going to be in beta in 5.0 so you can play with it soon too. In the end, we decided to find
a more performant approach so that we could have tons
of destruction everywhere instead of maybe something super cool that was
isolated in one spot. For this, we kind of went to a classic video game
method, sprites. It's really just
camera-facing quads, billboarding pieces of geometry. We would render
isolated images of smoke, little
puffs, swirls, fire. And we would run
this through a cool shader that would
basically spit out really satisfying film-like images. The simulation was done with
Houdini's GPU Pyro solver. This thing was really new at
the time and it was incredible. It was very real time. It ran pretty quickly
in the viewports. It had great sourcing. And so actually we rendered
a lot of the images you're seeing from the
viewport, and that was what drove
the flip books. And that allowed us to iterate
really, really quickly. So we could do RGB lights,
multiple scattering, baked lighting, it was great. We compress these
down into flip books. This is a little
maybe confusing if you're coming from
the film industry where you're used to
having 3D volumes. But for performance in
games, we'll just kind of compress
it at an angle into a 2D volume. And with the
camera-facing billboard, it's a pretty
effective illusion. So we'd pack our data in there. And Next Gen let us take that
data to 8K, which was pretty cool and got us I think a little
bit of that Next Gen bump. So to make these images, we used
a compositing network inside of Houdini. We bring in stuff from the
viewport or offline renders and we transform, color correct,
make sure they were always in the center of the frame,
and then use the mosaic node to turn them into a flip book. It was just great being able to
do all of this in one location. So for the shading,
we wrote a pyro shader in Unreal that we wanted to make sure
everyone using Unreal could access, and didn't do anything too
crazy, no custom lighting models. We simply took soft,
kind of ambient occlusion lighting, like a dome light, and then we
would plug that into the diffuse. And then we would take the
fire, the temperature, and multiply it by an HDR color
and put this into Emissive. It just works. Lumen handles
transparency really well and we were able to just drop
these things around the levels and they look nice and integrated. So here's the bridge shot
at the end and a look at some of the elements that went
into it using the pyro shader. So Florent Andorra did this. And I think it was just a
really cool moment where people coming from
games and film are all able to use the same tools
and techniques to put pretty images on the screen. I think Unreal Engine
5 really enables this. OK, let's talk about weapons. We really wanted you to feel
powerful like you could rip apart the cars and the road. And so we tried
to create effects that were realistic
and cinematic and I'm going to show a reel
of how that came together. And then we'll talk about how
we did the art and the system. [VIDEO PLAYBACK] [GUNSHOTS] [CRASHING] [END PLAYBACK] MATT RADFORD: OK, so we wanted
to go for that realistic look, and the best way to do that
sometimes is mixing media. So if you can show
someone something fake mixed with something real, they'll probably
just suspend their disbelief and think
it looks cool. To do this, we use some footage
from Action VFX of little spark bursts and spark
hits and a few dust puffs to really give us the realistic
element mixed with CG stuff from Houdini and Niagara. So we take these
grayscale flip books and we'd run them through
a lookup table to color them realistically
and put it on a shader. And you can see me
just like tearing up the road, shooting at stuff, way more than would
actually happen in the game, but it was pretty satisfying. And these were the shipping
effects that we went with. OK, so let's talk about
the system itself, because it's also pretty cool. It's just made with
Blueprint and data assets. There was no C++ involved. So we didn't know how
many weapons were going to be in the game at first. So we kind of wanted
to keep it open where you could
change the model, swap weapons, add new ones. We did this a lot for NPCs
when they would come online and we'd really want them to
suddenly have a submachine gun. And it was really easy to add. Now these data assets could
contain tons of stuff, including other data assets. And so we could tweak the
system, change the fire rate,
where the shells came from,
how the muzzle flash looked, what squibs would play when a
certain gun shot something else. And we could do this all
while the game was running. That was, I think, the coolest part.
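For a sense of their shape, a hypothetical C++ declaration of such an asset might look like this (the demo's were pure Blueprint, and these fields are illustrative):

```cpp
#include "Engine/DataAsset.h"
#include "WeaponEffectsData.generated.h"

UCLASS(BlueprintType)
class UWeaponEffectsData : public UPrimaryDataAsset
{
    GENERATED_BODY()

public:
    UPROPERTY(EditDefaultsOnly, Category = "Firing")
    float FireRate = 10.f; // rounds per second

    UPROPERTY(EditDefaultsOnly, Category = "Firing")
    FName ShellEjectSocket = TEXT("shell_eject");

    UPROPERTY(EditDefaultsOnly, Category = "Effects")
    TObjectPtr<class UNiagaraSystem> MuzzleFlash;

    // Nested data assets: which squib to play per surface type hit.
    UPROPERTY(EditDefaultsOnly, Category = "Effects")
    TMap<FName, TObjectPtr<UPrimaryDataAsset>> SquibsBySurface;
};
```

So if you were changing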
this stuff and tweaking while you're driving on the highway
getting shot at, you'd stop, and it would still be there. That made it all worth it. So for the impacts themselves,
this was kind of the cue that you should shoot at these cars. So it was really important
that anytime someone fired at you,
you saw the result on the screen where
you were looking. We put a lot of strategy into
making sure this felt nice and held up under all
the circumstances. What we do is any time
someone shot at you, we would play a muzzle flash
and the shells and a tracer, and it was all coming out at
the gun aiming kind of at you. But we would actually just look through the camera,
step out a ways, and then shoot a ray
back at us in a cone.
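A sketch of that camera-relative trick (our names; FMath::VRandCone is a real engine helper):

```cpp
#include "Math/UnrealMathUtility.h"

// Pick a point out along the camera view, then aim back toward the
// player inside a cone, so incoming impacts always land on screen.
// The returned origin/direction pair would feed a normal line trace.
void PickIncomingShot(const FVector& CameraLocation, const FVector& CameraForward,
                      float StepOutDistance, float ConeHalfAngleRad,
                      FVector& OutTraceStart, FVector& OutTraceDir)
{
    OutTraceStart = CameraLocation + CameraForward * StepOutDistance;
    OutTraceDir = FMath::VRandCone(-CameraForward, ConeHalfAngleRad);
}
```

And this let us, with the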
combination of the data assets, really just tweak how getting
shot at felt in the game. It was really satisfying
and super cinematic. OK, let's talk about tech art. So sometimes it's hard
to know what tech art is. But for us, really, it was
just bringing the frame to life with tons of cool
little details and adding all these
visual references to the movies that we could. [VIDEO PLAYBACK] [GUNSHOTS] [END PLAYBACK] MATT RADFORD: A great
example of these bullet wakes done by Asher Zhu and Ryan Brooks. This refraction shader just
really held up under any angle. It kind of just
sampled the scene color, blurred it,
tinted it green, and aberrated the colors a
little bit as it marched through. The text rain was a tough
effect, because we had to use it to
transition between the construct and
the game world, and also to transform
between characters. So it had to hold up under a
bunch of different circumstances. You can see it
worked on buildings or Keanu Reeves or
Carrie-Anne Moss or a bunch of
MetaHumans in a crowd-- pretty cool shader. And the text rain also had kind
of this close up treatment. And one thing I want to point
out that Asher Zhu did on this, which I've just never seen before,
is making a custom convolution kernel to get this vertical streak. It worked,
and I had never seen that before; it was really cool. So in the back of the car,
the windshield gets shot out, and there's all this
glass that kind of just sways back and forth as the
car turns left and right. And it was really satisfying. It added this great scale cue to feel like there
was tons of detail. And it just never
got old watching it throughout all the sequences. This was done by Jimmy Gass,
and I think was really successful. Ultimately, we took this
cool, fractured mesh that we had, brought into
Houdini, made points out of it, brought those points
back into Niagara, simulated them with a
position-based dynamic solver, and then used the
results to just transform the original mesh with
world position offset. This let us really art direct
it, made sure the fractures look
just how we wanted, and then have it roll
around in the back seat. The Agent Dodge was
a super cool effect that we were really worried at first,
how are we going to do this? We almost didn't do it. But Asher solved
this great problem of being able to
reproduce meshes really quickly by drawing
one particle per triangle and just sampling the G buffer
at their original location. It created this
incredibly successful, blurring,
streaked version of a character. And I think sometimes it looks
just as good, if not better, than the movies. Let's talk about motion graphics. This was a really important
part of the gameplay. And something that we
needed to get right. And it's subtle, but you needed
to know where you could shoot, what was available as a
target, and so it was important to really do this justice. Here, let's take a look at
some of the motion graphics and how they came together. And you'll see how
much we changed them over the course
of development. [VIDEO PLAYBACK] You
drive, I'll shoot. [GUNSHOTS] It's supposed to be safer. These guys just don't give up. [GUNSHOTS] You again? [END PLAYBACK] MATT RADFORD: So we
knew we were going to have a ton of these
targets, and we really wanted placing them to
feel flexible and easy. So we wanted to make a
great 3D in-world UI system. We did that with
Niagara and a little bit of a custom engine change. Now you can see
here, I'm able to just move these
transforms around, attach them to
objects, do whatever, and they always
stay the same size and play cool animations.
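The constant screen size boils down to scaling with distance; a small illustrative helper (simplified, ignoring aspect ratio):

```cpp
#include "CoreMinimal.h"

// World-space size that keeps a marker covering the same fraction of
// the screen's height at any distance from the camera.
float ComputeScreenConstantSize(const FVector& CameraLocation,
                                const FVector& MarkerLocation,
                                float DesiredScreenFraction,
                                float VerticalFOVDegrees)
{
    const float Distance = FVector::Dist(CameraLocation, MarkerLocation);
    const float HalfFOV = FMath::DegreesToRadians(VerticalFOVDegrees * 0.5f);
    return 2.f * Distance * FMath::Tan(HalfFOV) * DesiredScreenFraction;
}
```

These animations were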
authored in Niagara, and for overlapping
elements, it's really just spawning a bunch
of different sprites and having them animate
or loop forever or use a color curve to make it glow. And it really was
easy to use and I think Niagara was a great fit. It also let us solve
this all on the GPU, so we could render an infinite
amount of these small cursors and not affect performance at all. The big problem, though,
was they were transparent objects. So they would get motion blurred along with the
rest of the scene. This just didn't feel like
a user interface element. So we made some
changes to the way that transparency is drawn in the engine. Now if you just go down to the
Translucency Pass and swap it, you'll have a little dropdown. And you can just choose
After Motion Blur, and that'll render your thing
perfectly crisp, alias free. It's really cool and I
can't wait to see what people do with this Engine feature. The last thing is anything worth doing is worth
doing with others. We had an awesome team on this. And it took the brainpower of
all of these brilliant people to pull this off. So we're really excited and we hope you enjoy
The Matrix Awakens.