Lighting comparisons

I submitted a pull request for lighting in EMI. I’m currently well ahead of my GSoC schedule, so I now have some time to iron out the last remaining animation and lighting issues before moving on to post-mid-term tasks. See my roadmap on the ResidualVM wiki.

Regarding lighting, I spent some time trying to figure out the FASTDYN lighting mode I discussed briefly in the previous post, but I wasn’t able to replicate the look of the original. The FASTDYN lighting seems less precise and was probably used only as a performance optimization, so I decided it wasn’t worth spending more time on. Modern machines can handle the more accurate per-vertex lighting calculations for all actors just fine, and at least to my eye the results look better.

Here are a few more comparison shots, this time without any cheating. Original on the left, ResidualVM on the right.

Mr. Cheese uses the FASTDYN lighting mode in the original, so minor differences can be seen above.

The lawyers use FASTDYN mode. There also seems to be a ResidualVM-specific bug visible here: the lawyers are missing their eyes!

The voodoo lady uses FASTDYN mode, which causes minor differences. The lack of head turning is also quite distracting here; I’ll work on that after the GSoC mid-term.

Lighting: first results

My work on animations has been merged back to ResidualVM’s master branch, so I’ve now moved on to the next big thing on my task list, which is lighting. So far I have a basic implementation working that supports ambient, point and directional lights. Below is a comparison showcasing the lighting in the original game and my implementation in ResidualVM. Can you spot the difference?

(To be completely honest, I did cheat a bit in the above image. EMI has several different lighting modes that can be set per-actor. So far I’ve only focused on the LIGHT_NORMDYN mode, but in addition there is also LIGHT_FASTDYN, which is faster but less accurate. Otis and Carla use the LIGHT_FASTDYN mode in the original, so their lighting looks a bit different. For the screenshot of the original game I forced LIGHT_NORMDYN for Otis and Carla by executing custom Lua code in the original engine.)
My first attempt was to try to tweak the parameters of the standard OpenGL fixed function lighting model in order to match the lighting in the original EMI. I soon realized it was impossible to get an exact match with this approach, though. For example, EMI uses a simple attenuation model where each light has a minimum and a maximum falloff range. Vertices within the minimum range will be fully lit. Between the minimum and maximum range the light falls off linearly, and at the maximum range vertices will be unlit. OpenGL uses a different attenuation model that can only approximate the model used in EMI.
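To make the difference concrete, here is a minimal sketch of the attenuation model EMI appears to use (the function name and parameters are my own, not the engine’s):

```cpp
// Hypothetical sketch of EMI-style attenuation: vertices within minRange
// are fully lit, the light falls off linearly between minRange and
// maxRange, and vertices beyond maxRange are unlit.
float emiAttenuation(float distance, float minRange, float maxRange) {
	if (distance <= minRange)
		return 1.0f;
	if (distance >= maxRange)
		return 0.0f;
	return (maxRange - distance) / (maxRange - minRange);
}
```

OpenGL’s fixed function model instead computes attenuation as 1 / (kc + kl·d + kq·d²), which can only approximate this piecewise linear curve.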
How did they do the lighting in the original game, then? Back then, GPUs with a programmable pipeline were not yet common, so shaders were out of the question. Running apitrace on the original game reveals that no calls are made to the glLight functions, so it looks like the original game does not make use of the fixed function lighting either. Instead, it seems the original game calculates shading in software, and the shading is then baked into the vertex data that is passed to OpenGL for rendering.
Isn’t this inefficient? A little, but it doesn’t matter much in practice. The models in EMI are fairly simple, and there are only a few of them in view at once. Many actors use the LIGHT_STATIC lighting mode, which means that lighting only needs to be calculated once for them. Only a fraction of actors use the dynamic lighting modes, in which case the lighting must be recalculated each frame. On modern hardware the main bottleneck is perhaps the data transfer from CPU to GPU, as the vertex data needs to be re-uploaded every time the lighting changes.
I decided to implement a software shading solution similar to the original game, and the result can be seen in the image above. This is ideal for the software and fixed function OpenGL renderers in ResidualVM, but certainly isn’t the best solution on modern hardware. For the modern shader-based OpenGL renderer it’s a better idea to use a shader instead, at least for dynamically lit models.
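For illustration, a simplified software shading pass in this spirit might look like the following. This is only a sketch under my own assumptions (point lights only, the emiAttenuation function from the sketch above, and minimal stand-in math types), not ResidualVM’s actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal stand-in math type; ResidualVM has its own vector classes.
struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct PointLight {
	Vec3 pos, color;
	float minRange, maxRange; // falloff ranges, as in the EMI model above
};

// Bake diffuse lighting into per-vertex colors, which are then uploaded
// to OpenGL along with the rest of the vertex data.
void shadeVertices(const std::vector<Vec3> &positions,
                   const std::vector<Vec3> &normals,
                   const Vec3 &ambient,
                   const std::vector<PointLight> &lights,
                   std::vector<Vec3> &outColors) {
	outColors.resize(positions.size());
	for (size_t i = 0; i < positions.size(); ++i) {
		Vec3 color = ambient;
		for (const PointLight &l : lights) {
			Vec3 toLight = sub(l.pos, positions[i]);
			float dist = std::sqrt(dot(toLight, toLight));
			if (dist <= 0.0f)
				continue;
			// Lambertian diffuse term with the normalized light direction.
			float ndotl = std::max(0.0f, dot(normals[i], toLight) / dist);
			float att = emiAttenuation(dist, l.minRange, l.maxRange);
			color.x += l.color.x * ndotl * att;
			color.y += l.color.y * ndotl * att;
			color.z += l.color.z * ndotl * att;
		}
		// Clamp before baking into the vertex data.
		outColors[i] = { std::min(color.x, 1.0f),
		                 std::min(color.y, 1.0f),
		                 std::min(color.z, 1.0f) };
	}
}
```

For actors using LIGHT_STATIC this pass would run only once; for the dynamic modes it has to be re-run whenever the lighting changes.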
My current lighting hacks can be found here.

Animation blending

In EMI, just like in many other 3D games, the motion of a 3D model displayed on screen may be a combination of several different keyframe animations playing simultaneously. For example, the animators may have supplied a ‘walk’ animation and a ‘hold object’ animation for a character in the game. The game engine may produce a ‘walk while holding an object’ animation by combining the lower body part of the ‘walk’ animation with the upper body part of the ‘hold object’ animation. This can be achieved with the animation priorities I described in the previous post. In addition, the engine can animate the transition between different animation states such as ‘walk’ and ‘stand’ by interpolating between them. This interpolation is what I refer to as animation blending.

With animation blending, the final animation that is displayed on the screen is the weighted sum of all simultaneously active animations. The weight of an animation (the “blend weight”) determines how much the animation contributes to the final result. The total sum of weights should equal 1. For example, we could animate the transition from ‘stand’ to ‘walk’ by linearly interpolating a value α from 0 to 1 over time and setting the blend weight of ‘walk’ to α and the blend weight of ‘stand’ to (1 − α) at each step in time. The interpolated transition may not look completely realistic, but in most cases it looks much better than instantly snapping to another animation.
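As a concrete sketch, the ‘stand’ to ‘walk’ transition could be driven like this (the type and field names are illustrative, not ResidualVM’s):

```cpp
#include <algorithm>

// Illustrative animation state; the real engine classes look different.
struct AnimState {
	float blendWeight;
};

// Cross-fade from 'stand' to 'walk' over fadeTime seconds (fadeTime > 0).
// At every step the two weights sum to 1, so the blended result is a
// proper weighted average of the two poses.
void updateTransition(AnimState &stand, AnimState &walk,
                      float elapsed, float fadeTime) {
	float alpha = std::min(elapsed / fadeTime, 1.0f);
	walk.blendWeight = alpha;         // fades in
	stand.blendWeight = 1.0f - alpha; // fades out
}
```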

In EMI, the game may request certain animations to be faded in or faded out. To support this in ResidualVM, I store a ‘fade’ value for all active animations. When requested, the value is linearly interpolated between 0 and 1. If animations had equal priority, we could assign weight=fade for all animations and simply divide the intermediate weights by the total sum of weights to get the final normalized blend weights. However, with prioritized animations this changes a bit.

For higher priority animations we want to assign more weight than for lower priority animations. An animation with weight 100% will always completely override anything with a lower priority. If the animation has weight 60%, lower priority animations will only get to distribute the remaining 40% of weight among themselves.

How is this implemented? The way I’m doing it now is to first collect the per-priority contributions into animation “layers”. Each layer accumulates the contribution of all animations with a certain priority: layer 0 contains the contribution of animations with priority 0, layer 1 the contribution of animations with priority 1, and so on. Within a layer we can assign weights in the simple fashion described before, with the exception that we only divide the weights by the total sum if the sum exceeds 1. I also assign a fade value to each layer, which is simply the sum of its weights (a minimal sketch of this follows below).
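Here is that within-layer weighting as a minimal sketch, under my own naming rather than ResidualVM’s actual code:

```cpp
#include <vector>

// Within one priority layer: each animation's weight starts out as its
// fade value, and the weights are normalized only if their sum exceeds 1.
// Returns the layer's own fade value, i.e. the (clamped) sum of weights.
float weighLayer(std::vector<float> &weights /* in: fades, out: weights */) {
	float sum = 0.0f;
	for (float w : weights)
		sum += w;
	if (sum > 1.0f) {
		for (float &w : weights)
			w /= sum; // normalize so the layer's weights sum to 1
		return 1.0f;
	}
	return sum;
}
```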

Once the animation contribution is split into layers, we’ll get the final result by blending the layers together. The blend weights are assigned so that the highest priority animation will contribute the most, and the remaining weight is distributed to lower priority animations. To be exact, the layer weights are calculated as follows for n layers:

weight_n     = fade_n
weight_{n-1} = fade_{n-1} * (1 - fade_n)
weight_{n-2} = fade_{n-2} * (1 - fade_{n-1}) * (1 - fade_n)
...
weight_1     = fade_1 * (1 - fade_2) * ... * (1 - fade_{n-1}) * (1 - fade_n)
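
In code, the same calculation can be done in a single pass from the highest layer downwards. A rough sketch, with indexing and names of my own choosing:

```cpp
#include <vector>

// Compute blend weights for the animation layers described above.
// 'fades' is indexed by priority, lowest priority first. The highest
// layer takes its full fade value, and each lower layer only receives
// a share of whatever weight the layers above it left over.
std::vector<float> layerWeights(const std::vector<float> &fades) {
	std::vector<float> weights(fades.size());
	float remaining = 1.0f; // weight not yet claimed by higher layers
	for (int i = (int)fades.size() - 1; i >= 0; --i) {
		weights[i] = fades[i] * remaining;
		remaining *= 1.0f - fades[i];
	}
	return weights;
}
```

If the highest layer’s fade is 1, it claims all the weight and the lower layers are fully overridden; with a fade between 0 and 1, the lower layers show through proportionally.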

The end result is a system where each animation can be independently faded in and out, while still respecting animation priorities.

Animation progress

The Summer of Code is finally here, and the first week is already almost over! I’m sorry that I haven’t updated the blog more, but I’ve been busy working on EMI’s animations this week and I’ve made some excellent progress.

Some background: right now there is a basic implementation of EMI animations in ResidualVM, but the implementation is still far from perfect. I’m currently focusing on two major issues that I’ve identified. Firstly, the current implementation prioritizes animations in the order in which they were started, so the animation that is started last always overrides any previously applied animation (although there are specialized workarounds for some cases). Secondly, unlike in the original game, all animation transitions are instant due to the lack of animation blending. In this post I’m focusing mainly on the prioritization of animations.

If play order shouldn’t matter, how should the engine decide which animation takes precedence over another? We can get some ideas by looking at Grim Fandango, since it is based on an older version of the same engine. In Grim, each keyframe animation contains a priority field (actually two of them, but let’s keep it simple) that controls the order in which animations are applied. A higher priority animation always takes precedence over a lower priority one.

One might ask what happens if two animations with the same priority play at the same time. The answer is that they are blended together, and the result is an average of the two animations. Again, the result is the same regardless of the order in which the animations were applied. I’ll describe blending in more detail in a later post.

Knowing how the prioritization of animations was done in Grim, the first thing I did, of course, was to look for a similar priority field in EMI’s .animb animation data format. Perhaps unsurprisingly, it turns out there is one! The catch, however, is that the priority field in EMI is bone-specific. In other words, an animation may specify a different priority value for each bone that it affects. (Note: EMI uses skeletal animation.)

For example, we could have an animation of Guybrush waving both of his arms. For the left hand the priority value could be 1, and for the right hand it could be 3. In addition, we could have a standing idle animation with a constant priority of 2 for all bones. Now, if both of these animations were applied at the same time, the result would be that Guybrush would wave his right arm, but the rest of his body would follow the standing idle animation.

Typical real priority values I’ve seen so far are 0 for base poses like standing idle, 1 for walking and running, 2 for holding an object in the hands, and so on.

Without animation blending, adding support for the priority value is fairly straightforward. When applying an animation to the bones of the skeleton, we can keep track of the priority currently applied to each bone. If the animation’s priority is higher than the bone’s current priority, the animation replaces the bone’s current transform completely; otherwise the animation is skipped. Using this method we can apply the animations in arbitrary order, and the highest priority animation is always the one displayed. Of course, this simplistic approach still depends on the order of application when animation priorities are equal.
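Here is a sketch of that bookkeeping, with illustrative stand-in types rather than ResidualVM’s actual classes:

```cpp
#include <vector>

// Illustrative placeholder types; the real engine classes look different.
struct Transform { /* e.g. per-bone rotation and translation */ };

struct BoneKeyframe {
	int boneIndex;
	int priority;        // per-bone priority from the .animb data
	Transform transform;
};

struct BoneState {
	Transform transform;
	int appliedPriority = -1; // reset to -1 at the start of each frame
};

// Apply one animation's keyframes: a bone is only overwritten if this
// animation's priority for that bone beats what has already been applied.
// With this check, animations can be applied in arbitrary order.
void applyAnimation(std::vector<BoneState> &bones,
                    const std::vector<BoneKeyframe> &keyframes) {
	for (const BoneKeyframe &kf : keyframes) {
		BoneState &bone = bones[kf.boneIndex];
		if (kf.priority > bone.appliedPriority) {
			bone.transform = kf.transform;
			bone.appliedPriority = kf.priority;
		}
	}
}
```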

Things start to get more complicated once animation blending is added into the mix, though. With blending, a higher priority animation may not completely replace a lower priority one if its blend weight is lower than 100%. In that case some of the lower priority animation will “show through”. To implement this properly, animations can no longer be applied in arbitrary order. Instead, we need to calculate the transformation for each bone by applying the animations in descending priority order, taking the blend weights into account at each step. This is further complicated by the fact that, since priorities are bone-specific, the order in which animations should be applied may differ for each bone.

I’ll go into details on how I solved this in the next post. In the meantime you can check out my progress at https://github.com/Akz-/residual/commits/animations.

Accepted to GSoC 2014!

I will be participating in Google Summer of Code this year! My project will be about improving ResidualVM’s support for Escape from Monkey Island (EMI).

My main goals will be to improve the animations, lighting and sound in ResidualVM’s implementation of the EMI engine. If all goes well, I also hope to be able to spend some time on improving compatibility with the PlayStation 2 version of EMI.

I’ve contributed a number of patches to ResidualVM previously, but the EMI part of the engine is mostly new for me. In order to familiarize myself with the engine, I chose to work on a couple of animation issues that I noticed while playing the game.

My first pull request for EMI implements some missing functionality related to starting and stopping animations, and fixes interpolation between animation frames. Due to the latter change, animations now appear much smoother than before.

The next step will be to implement animation blending. This will improve transitions between different animations.

Being able to participate in GSoC is a unique opportunity, and I’m very grateful for being accepted. This will be a really exciting summer for me. Now, let’s get started!