Unreal Engine 5.4: After Effects killer?
As someone who has made a living developing real-time rendering solutions for the motion design industry for more than a decade, I was excited to hear about Epic’s Avalanche project. I had always wondered why tools like Adobe After Effects offered only limited support for 3D graphics, while game engines such as Unreal lacked functionality for creating content for the film and advertising sectors. Epic has long invested heavily in the TV and film industry, introducing a robust toolset for virtual production. Extending Unreal to support motion graphics authoring was a natural next step for a company seeking to unlock new industries.
When Epic released Level Sequencer in Unreal 4.26 (I hope I am not mistaken about the version), I started contemplating building software on top of Unreal Engine that would let anyone, regardless of technical aptitude, create immersive 3D motion content for ads and entertainment. That’s how the HYPE platform idea was born. I have been working on HYPE for three years, the last two as a side project. Maybe my work will eventually be made obsolete by the Avalanche project; who knows. In this article, however, my goal is to share feedback on the brand-new motion design tools shipped with Unreal 5.4, drawing on my extensive experience with industry-standard software such as After Effects, both as a software developer and as a content creator. With a solid understanding of designer workflows and what designers expect from the tools they use daily, I believe this overview will be a useful read for anyone considering Unreal Engine for motion content.
The Good.
Well, Epic has finally decided to capture the attention of the motion design community. Unreal is, de facto, the most advanced real-time 3D rendering technology on the market today. The engine has evolved well beyond video game content and is now also used by the broadcast and film industries to speed up their workflows. Until recently, combining stunning 3D visuals with 2D graphics and post-production effects required creating 3D content in software A (Maya, Cinema 4D, etc.) and then exporting it as an image sequence or video footage into software B, such as After Effects or Final Cut Pro.
It is worth noting that in recent years, Blender has introduced motion design tools, and new players such as Notch have entered the market. However, Epic was the first game engine to introduce robust cinematic tools such as Level Sequencer. Now, the Avalanche project brings Unreal Engine one step further toward the idea of authoring breathtaking 3D renders together with compelling motion graphics in the same sandbox and in real-time.
Support for 2D shapes.
Motion design mode features “2D shapes” such as rectangles, ellipses, rings, stars, and lines, plus custom shape drawing, which unfortunately only supports straight line segments, not Bezier curves. These shapes can be dragged and dropped into the level, and each can be edited via a set of unique parameters in the editor UI.
If you need more sophisticated 2D graphics, you can import SVG files. It is hard to say how solid the support for complex SVG shapes is, but it held up well in the several tests I ran.
Photoshop-like Material Layer Editor.
Epic has tried to abstract away the complexity of the existing material graph editor by introducing the Material Designer panel. It lets you define a shape’s style by combining layers, much as in Adobe Photoshop.
Each layer has a fixed set of inputs: the left quad (green here) is the base color, which can be solid, textured, or gradient; the next slot (chess pattern here) is the mask; and at the far-right side is an FX stack, which provides an impressive number of effects that can be applied to this layer. Layers can be blended in the same way as in After Effects. Unreal provides all the standard blend types found in After Effects.
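As a rough illustration of what those blend types do mathematically, here are the standard multiply, screen, and overlay formulas used by Photoshop/After Effects-style compositors. This is generic compositing math, not Unreal’s Material Designer API:

```python
# Standard layer blend formulas. Channel values are normalized
# floats in [0, 1]; apply per channel (R, G, B).

def multiply(base: float, blend: float) -> float:
    # Darkens: result is never brighter than either input.
    return base * blend

def screen(base: float, blend: float) -> float:
    # Lightens: the inverse of multiplying the inverted channels.
    return 1.0 - (1.0 - base) * (1.0 - blend)

def overlay(base: float, blend: float) -> float:
    # Multiply in the shadows, screen in the highlights.
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)
```

The same pattern extends to the rest of the familiar blend modes (darken, lighten, soft light, and so on), each just a different per-channel function.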
Cloners and effectors.
This feature is very cool. It allows cloning a single mesh into a 3D grid of its copies and applying special effects (like waves or whirlpools) without needing to mess with the Niagara particle system directly.
You can clone a single actor in all three dimensions and even combine different types of meshes in the same cloner. Effectors are “forces” you can apply to the cloner grid to put the whole thing in motion. There are predefined behaviors such as vortex, noise, gravity, and attraction force, which can be applied to the area defined by the effector with one click. It is also possible to apply more than one effector to the same cloner to create more complex physical behaviors. The feature is easy to use but also limited. Those seeking sophisticated particle effects will still have to use Niagara.
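Conceptually, a cloner plus an effector is just a grid of transforms with a procedural displacement layered on top. A minimal Python sketch of the idea — the function and parameter names here are made up for illustration, not Unreal’s actual implementation:

```python
import math

def clone_grid(nx: int, ny: int, nz: int, spacing: float):
    """Positions for an nx * ny * nz grid of clones centered at the origin."""
    return [
        ((x - (nx - 1) / 2) * spacing,
         (y - (ny - 1) / 2) * spacing,
         (z - (nz - 1) / 2) * spacing)
        for x in range(nx) for y in range(ny) for z in range(nz)
    ]

def wave_effector(positions, amplitude: float, wavelength: float, time: float):
    """Displace each clone along Z with a sine wave travelling along X,
    the kind of motion a 'wave' effector applies to a cloner grid."""
    return [
        (x, y, z + amplitude * math.sin(2 * math.pi * x / wavelength + time))
        for (x, y, z) in positions
    ]
```

Stacking effectors then amounts to composing more displacement functions over the same base grid, which is why combining several of them yields more complex motion.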
Media plane
This is a video surface actor that can be inserted into a level. Video playback is not a new feature in Unreal Engine, but compared to the already existing video systems such as Media Player or Bink Media Player, this one is straightforward to use.
There is no need to create video textures or playlists, or to deal with exotic video formats. All you have to do is drop the media plane into the level, provide a path to an MP4 file, and click play. You can also customize the shape of the video surface.
Rundown playlists
This feature is huge. It allows you to create a template from an initial motion level setup and later use it to easily render variations of the original composition. You define which properties of the objects in the level to expose for change; it follows the same concept as material instances and parameters. Then you create a playlist from the template, tweaking the exposed properties to produce custom variations: changing colors and text, replacing actors, and so on. Some video tutorials I have watched demonstrate this tool for broadcasting purposes, such as customizing the visual style of a news subtitle bar, but it is much more than that. In essence, this is a robust video personalization system that can generate countless personalized videos from a single motion template.
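To make the template/playlist concept concrete, here is a toy Python model of it: a base setup with a few exposed properties, and playlist entries that may override only those. This is purely illustrative; the property names are invented and none of this is Unreal’s API:

```python
# A "template" exposes a fixed set of properties; a playlist entry
# supplies overrides for some of them, like a material instance
# overriding material parameters.

BASE_TEMPLATE = {
    "headline": "Breaking News",
    "bar_color": "#C8102E",
    "logo": "default_logo.png",
}

def render_variation(template: dict, overrides: dict) -> dict:
    """Produce one playlist entry: the template with exposed
    properties overridden. Unknown properties are rejected."""
    unknown = set(overrides) - set(template)
    if unknown:
        raise KeyError(f"properties not exposed by template: {unknown}")
    return {**template, **overrides}
```

Generating a batch of personalized videos is then just mapping a list of override dicts over the same template.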
Actor Masks
In addition to the existing alpha masking functionality, which is part of the material compositing system, Unreal introduces actor-based masks. These work by making one visual actor a mask that is applied to another visual actor. This enables the designer to set a mask in the shape’s texture space (via Material Designer or Material Graph) as well as perform masking in world space, where one mesh serves as a mask for another.
The Bad.
No support for vector path rendering.
In my opinion, this is the biggest shortcoming of the whole project. The motion design world loves scalable vector graphics for a reason — animation. Cameras or objects, even in 2D mode, are often zoomed in and out during animation. The difference between raster (images, 3D meshes) and vector graphics is that vectors are resolution-independent, while rasters are not. You can read my old article on the subject to get a better understanding of the concept.
Unreal tessellates (triangulates) all the 2D/3D shapes as well as the text glyphs. When the camera is far away, the problem is not apparent, but once you start zooming in, you can see the ugly side of raster graphics. Check out the next two screenshots:
At a close shot, you can see that the ring’s curvature is not smooth. This is because it is just a sequence of straight lines defined by densely generated triangles. For a shape to look smooth at any distance from the screen, it must be built from Bezier curves and continuously rasterized at runtime. To compensate, Unreal lets you increase the number of triangles, which adds more line segments to the shape’s edge and makes it look smoother. But this option is set at authoring time: you define the number of segments/sides in the editor, and the shape doesn’t adapt its topology at runtime to its proximity to the camera. Setting a high segment count means wasting GPU power on a huge number of polygons for a shape that always stays far from the camera, just because the designer decided to keep it “high res” in case a close-up shot occurs.
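To quantify the trade-off: the worst-case deviation of a chord from a circular arc (the sagitta) is r * (1 - cos(pi/n)) for n segments, so the segment count needed to stay under a fixed on-screen error grows with the zoomed radius. A small Python sketch, with hypothetical pixel budgets:

```python
import math

def segments_needed(screen_radius_px: float, max_error_px: float = 0.5) -> int:
    """Chords needed to approximate a circle so the worst-case
    deviation (the sagitta) stays under max_error_px on screen.

    Derivation: sagitta of one chord = r * (1 - cos(pi / n)),
    solved for n given an error budget."""
    return math.ceil(math.pi / math.acos(1.0 - max_error_px / screen_radius_px))

# A ring covering 100 px on screen needs ~32 segments for sub-half-pixel
# accuracy; zoom it to 4000 px and it needs ~200. A fixed authoring-time
# segment count cannot satisfy both without waste or visible faceting.
```

This is exactly the problem runtime tessellation or true vector rasterization solves: the segment count follows the camera instead of being baked in.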
It is not clear to me why Epic hasn’t tried to solve this with adaptive real-time triangulation (we have had GPU tessellation stages in every rendering API for more than a decade). Granted, GPU tessellation may not be a good fit, as it has its own limitations when dealing with concave shapes. But Epic has some of the world’s brightest graphics programmers working on the engine. Why not implement a vector path rendering pipeline similar to Nvidia’s NV_path_rendering OpenGL extension, or license existing technology such as the Slug library, which renders true vector text and shapes on the GPU?
I was able to get a trial license from the Slug author and integrated its vector text into the engine’s core rendering pipeline while developing HYPE. I see no reason why Epic couldn’t acquire Slug; it has bought startups whose products could enhance Unreal before.
Here is a comparison between vector text (Slug) and UE’s triangulated Text3D. Notice how the relatively smooth curves of the Text3D letter ‘8’ break into line segments up close to the camera, while the vector shape on the right keeps its edges smooth and sharp at any distance.
SVG support is great, but the current implementation suffers from the same resolution dependence problem I described above. Epic offers two modes here: 2D and 3D. In 2D mode, the SVG content is rendered to a texture, which is then applied to a quad. This turns the SVG into an image sprite, which has all the deficiencies of raster graphics.
If we zoom in closer to the shape, we lose image quality:
In 3D mode, the shape looks much better if you remember to apply AA (anti-aliasing) on the footage export. However, this comes at the expense of performance, as even for simple vector art, the number of generated polygons is enormous.
The actual mesh looks like this:
Unreal Statistics Window doesn’t show the poly count for Avalanche shapes for some reason, but you can see in the above screenshots that the mesh is extremely dense. Modern motion design heavily relies on vector shapes. If the software cannot provide proper support for vectors, it won’t be adopted by professionals.
A few more issues worth mentioning:
Anti-Aliasing (AA)
- AA is applied only when set explicitly in the render queue. The default AA technique set at the project level, in my case MSAA, had no effect in the editor viewport. This makes the editor preview look jagged, preventing the designer from getting a feel for the final result. In After Effects and similar software, the viewport is effectively “what you see is what you get.”
No Export to Standard Video Formats Such as MP4
- It is not clear why Epic doesn’t support export to standard video formats such as MP4. Unreal cannot bundle FFmpeg because of its LGPL/GPL licensing, but codecs such as H.264/H.265 can be licensed, and if Epic isn’t interested in paying license fees, there are royalty-free alternatives such as AV1 and WebM (VP8/VP9). There is also Cisco’s open-source OpenH264 codec, which I have used in commercial projects in the past. In fact, Unreal Engine already supports H.264 and H.265 via Nvidia and AMD hardware video encoding APIs, which have been part of the engine for a long time and power modules such as Pixel Streaming. It shouldn’t be hard to write an MP4 multiplexer and add MP4 export to the render queue.
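Until native export arrives, the usual workaround is to render an image sequence from the Movie Render Queue and mux it into an MP4 with FFmpeg afterwards. A minimal Python helper that builds such a command (the paths and frame rate are placeholders):

```python
# Builds an FFmpeg command that encodes a numbered PNG sequence
# into an H.264 MP4. Run the result with subprocess.run(cmd)
# once FFmpeg is on the PATH.

def ffmpeg_mp4_command(frame_pattern: str, fps: int, out_path: str) -> list:
    return [
        "ffmpeg",
        "-framerate", str(fps),      # input frame rate of the sequence
        "-i", frame_pattern,         # e.g. "render/frame_%04d.png"
        "-c:v", "libx264",           # software H.264 encoder
        "-pix_fmt", "yuv420p",       # broadest player compatibility
        out_path,
    ]
```

It works, but it is exactly the extra tool-hopping step the motion design tools are supposed to eliminate.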
Cloner Performance
- The cloner significantly impacts performance once the number of clones surpasses several thousand. On my machine, a gaming notebook with an Nvidia Quadro RTX 3000 GPU, a 100x100x3 grid of spheres drops below 20 fps.
Rundown Playlists and Transitions Setup Feels Complicated
- This is quite a subjective judgment. I personally had to watch the same tutorial three times in order to grasp the procedure. I would be glad to read other opinions on this matter.
Summary
So, has UE evolved into an After Effects killer? I don’t think so. The shortcomings I outlined above are just a fraction of what is required for UE to become a motion designer’s tool of choice. On the other hand, it is important to remember that UE at its core is still a game engine, and tools like this are meant to enhance the existing creation process, not necessarily introduce a new one.
Consider an indie game development studio planning to create a demo reel for their upcoming title. Now they can use UE’s built-in tools to create one instead of hiring a motion design agency or learning complex and costly third-party software.
Unreal is already filled with dozens of overly complex user interface panels and editors that scare away experienced motion designers. I know this because I work with these people: many of them want to try Unreal but complain that it’s hard to master. Simplifying editor workflows should be the central goal in bringing this new audience to the engine. Overall, the newly added motion design tools look like a good starting point for creating film and video ad content. My only major complaint is the lack of proper support for vector graphics.
If any part of my analysis contains mistakes, please feel free to correct me in the comments.