My work on animations has been merged back to ResidualVM’s master branch, so I’ve now moved on to the next big thing on my task list, which is lighting. So far I have a basic implementation working that supports ambient, point and directional lights. Below is a comparison showcasing the lighting in the original game and my implementation in ResidualVM. Can you spot the difference?
(To be completely honest, I did cheat a bit in the above image. EMI has several different lighting modes that can be set per-actor. So far I’ve only focused on the LIGHT_NORMDYN mode, but in addition there is also LIGHT_FASTDYN, which is faster but less accurate. Otis and Carla use the LIGHT_FASTDYN mode in the original, so their lighting looks a bit different. For the screenshot of the original game I forced LIGHT_NORMDYN for Otis and Carla by executing custom Lua code in the original engine.)
My first attempt was to try to tweak the parameters of the standard OpenGL fixed function lighting model in order to match the lighting in the original EMI. I soon realized it was impossible to get an exact match with this approach, though. For example, EMI uses a simple attenuation model where each light has a minimum and a maximum falloff range. Vertices within the minimum range will be fully lit. Between the minimum and maximum range the light falls off linearly, and at the maximum range vertices will be unlit. OpenGL uses a different attenuation model that can only approximate the model used in EMI.
How did they do the lighting in the original game, then? Back then, GPUs with a programmable pipeline were not yet common, so shaders were out of the question. Running apitrace on the original game reveals that no calls are made to the glLight functions, so it looks like the original game does not make use of the fixed function lighting either. Instead, it seems the original game calculates shading in software, and the shading is then baked into the vertex data that is passed on to OpenGL for rendering.
Isn’t this inefficient? Somewhat, but not enough to matter. The models in EMI are fairly simple, and there are only a few of them in view at once. Many actors use the LIGHT_STATIC lighting mode, which means that lighting only needs to be calculated once for them. Only a fraction of actors use the dynamic lighting modes, in which case the lighting must be recalculated each frame. On modern hardware the main bottleneck is perhaps the data transfer from CPU to GPU, as the vertex data needs to be updated every time the lighting changes.
I decided to implement a software shading solution similar to the original game, and the result can be seen in the image above. This is ideal for the software and fixed function OpenGL renderers in ResidualVM, but certainly isn’t the best solution on modern hardware. For the modern shader-based OpenGL renderer it’s a better idea to use a shader instead, at least for dynamically lit models.
My current lighting hacks can be found here.