Status

I’ve added some support for multiple shaders using XmlShaderFormat. It looks for all *.shader files and tries to parse and compile them. However, there isn’t a good way to change shaders yet. So far, all changes have been in the base OpenGL backend, but in order to expose the shaders through the graphics mode settings, changes may also be needed in the SDL backend. Right now, it queries a static class function to figure out the supported graphics modes. However, OpenGL calls to compile the shaders cannot succeed before the OpenGL context is created. So either the class needs to be initialized before the SDL backend queries the graphics modes, or the static class function can guess about which shaders will eventually compile (for instance, if “CoolFilter.shader” exists in the current directory but is not a valid XmlShader or does not contain valid GLSL programs, it would still be reported as a valid graphics mode).
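The “guess” strategy could look something like the following sketch. The function name and shape are hypothetical, not actual ScummVM code: any file ending in .shader is reported as a graphics mode, whether or not it would later parse and compile.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the "guess" strategy: report a graphics mode for
// every file named *.shader, without trying to parse or compile it. A file
// like "CoolFilter.shader" that later fails to compile would still have
// been listed here as a mode.
static bool hasShaderExtension(const std::string &name) {
    const std::string ext = ".shader";
    return name.size() > ext.size() &&
           name.compare(name.size() - ext.size(), ext.size(), ext) == 0;
}

std::vector<std::string> guessShaderModes(const std::vector<std::string> &files) {
    std::vector<std::string> modes;
    for (const std::string &f : files)
        if (hasShaderExtension(f))
            modes.push_back(f.substr(0, f.size() - 7)); // strip ".shader"
    return modes;
}
```

The upside is that this can run before the OpenGL context exists; the downside is exactly the false positive described above.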

My work on scaler plugins has slowed down. I want to get it incorporated into the main project, so the things I need to consider are:

  • How will the plugins work with existing backends that used the old scalers?
  • How will they work with future backends?
    • Do more formats need to be supported? Can they be detected at runtime or compile time (or probably both)?
  • How can they be integrated with the build system?
    • Options to disable formats (e.g. 32-bit formats that are not used)
    • Options to disable plugins. Currently the Edge2x/3x plugin shares the HQ scaler compile option (USE_HQ_SCALERS).

Shaders

I have added some very basic support for shaders to the OpenGL backend. Right now, it looks for a “vertex.glsl” and a “fragment.glsl” to load as the vertex and fragment shaders respectively. Eventually I want to have a manifest file that stores the properties of shader programs, but I have not done that yet.

Right now, two uniforms are passed to the shaders. The first, called “texture”, is the texture to use. The second, called “textureDimensions”, is a vec2 containing the width and height of the texture in pixels. This is useful because texture coordinates are in the range [0..1], which makes it difficult to tell where one pixel ends and another starts. I have attached a shader program that implements scale2x (Advmame2x) by finding the position within a pixel using these uniforms. It duplicates the functionality of the scaler in the SDL backend, but it looks kinda funky with scale factors != 2. It could probably be modified to use a combination of Advmame3x and Advmame2x depending on subpixel position.
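For illustration, here is the arithmetic a shader can do with the “textureDimensions” uniform, written as plain C++ (in GLSL the same thing falls out of floor() and fract()): multiplying a texture coordinate by the texture size in pixels gives a pixel-space coordinate whose integer part selects the source pixel and whose fractional part is the subpixel position.

```cpp
#include <cmath>

// Pixel lookup from a [0..1] texture coordinate, given the texture size
// in pixels (what the "textureDimensions" uniform provides).
struct PixelPos {
    int x, y;      // which source pixel the coordinate falls in
    double fx, fy; // subpixel position within that pixel, in [0..1)
};

PixelPos locate(double u, double v, double texWidth, double texHeight) {
    double px = u * texWidth;  // pixel-space coordinate
    double py = v * texHeight;
    PixelPos p;
    p.x = static_cast<int>(std::floor(px));
    p.y = static_cast<int>(std::floor(py));
    p.fx = px - std::floor(px); // fractional part = subpixel position
    p.fy = py - std::floor(py);
    return p;
}
```

A scale2x-style shader uses (fx, fy) to decide which quadrant of the source pixel the output fragment lies in.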

What is great about this is that no development tools are required to change the shader programs. So users can create and load shaders without having to configure build environments and compile ScummVM. Hopefully this can lower the barrier to making some creative works.

Advmame2x shader: https://gist.github.com/3161079
ScummVM branch with enabled shaders: https://github.com/singron/scummvm/tree/opengl

I’ve Been Away…

But now I’m back. I had limited time for work last week, so I have no progress to show since then. I was working on a spline-interpolating filter that could be extended into an arbitrary-size scaler, but it was needlessly complicated for poor results, and I have scrapped it to work on new, more useful things.

Right now I am revising the last API addition I made (comparing the current frame to the previous frame so that only the necessary pixels are updated). It forced extra code into the backend, and with so many backends, adoption would be easier if more of the bookkeeping were migrated into the scaler code. To simplify the addition of new plugins that want this feature, the relevant code has been moved into a subclass that new plugins can inherit to get all the bookkeeping for free.

Then I’ll be taking a look at the OpenGL backend to implement the scalers as shaders. Hopefully with some improvements, the OpenGL backend can improve performance and quality, giving people a reason to actually use it (it is not even included in many release packages).

Edge2x/3x Finished Up

The performance of the scaler is fine now, even in debug builds without optimization. I have reimplemented the changed-pixel detection through a new part of the API. The backend queries the plugin to see if it supports using an old image to detect changed pixels. One problem is that panning the screen causes the whole image to be updated again, which makes mouse movement choppy. However, it happens rarely enough that it really is not an issue, and in optimized builds it does not matter.

I also templated the function for 32bpp support. This scaler is unique in that it uses the products of interpolation to compare against other pixels (in other scalers, the products of interpolation are only written to the final image). The existing interpolation functions mangled the alpha channel (in the case of rgba and argb) and the padding bits (in rgb888). This caused quirky image defects that were tricky to track down, since differing alpha channels made the comparisons come out differently without changing the visible color of the pixels.
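As a sketch of the kind of fix involved (illustrative code, not the actual ScummVM templates), an interpolation function can average the color channels byte-lane by byte-lane and then force the alpha/padding lane to a known value, so that later comparisons inside the scaler are never thrown off by garbage bits:

```cpp
#include <cstdint>

// Illustrative sketch: average two 32bpp pixels per byte lane, then
// overwrite the alpha/padding lane (selected by the template mask) with a
// fixed value so comparisons on the result are deterministic.
template<uint32_t kAlphaMask>
uint32_t interpolate2(uint32_t a, uint32_t b) {
    // Clear the low bit of every byte lane so the halves can be added
    // without carrying between channels, then restore the rounding bit.
    uint32_t avg = ((a & 0xFEFEFEFEu) >> 1) + ((b & 0xFEFEFEFEu) >> 1)
                 + (a & b & 0x01010101u);
    // Force the alpha/padding lane to all ones (e.g. opaque for argb).
    return (avg & ~kAlphaMask) | kAlphaMask;
}
```

With argb, for example, the mask would be 0xFF000000; for rgb888 the same trick pins the padding byte.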

In the past, I had debugged these problems by simply returning the color red from an interpolation function. Then I would see if the broken pixels turned red. Since this scaler compared the results of the interpolation, whenever I followed a similar technique, the scaler would choose a different path, and the image would change in more chaotic ways (e.g. lines ceasing to anti-alias, black pixels appearing (but not red)). Everything at least appears to be fixed now.

Here are some sample images scaled with the 32bpp scaler.

Edge2x
Edge3x

Clarification on Edge Scaler Optimization

I made some ambiguous statements about which optimizations I disabled in the Edge scaler. Currently, scalers update partial rectangles of the screen based on which pixels have actually changed. However, this still repeats the same calculations on many pixels that did not actually change. For fast scalers this is good enough, but the Edge2x/3x scalers needed more speedups.

So the original scaler author included code that took these partial rectangles and tried to reconstruct the source image inside the scaler. It then diffed its own copy of the source image against future calls to find out exactly which pixels needed to be updated. However, this involves a lot of guesswork about where the rectangles lie in the original image. I disabled this particular optimization since the backend can more simply give this information to the scaler through a new part of the API (currently in design). Dirty-rectangle updates still work just like they do with the other scalers.

The new part of the API will probably be optional for both backends and scalers. A scaler will request that the backend keep an old source image and pass it in, so that the scaler can run a diff and update the pixels however it wants. This does not complicate other scalers, does not change backends that would not use the Edge scaler, and provides some needed functionality.
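A minimal sketch of the scaler-side diff, with hypothetical names: given the old source image kept by the backend and the new one, produce the exact set of pixels that need rescaling, with no guesswork about rectangle placement.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the scaler-side half of the proposed API: the
// backend hands over the previous source image, and the scaler diffs it
// against the new frame to find exactly which pixels must be rescaled.
struct DirtyPixel { int x, y; };

std::vector<DirtyPixel> diffFrames(const uint32_t *oldSrc, const uint32_t *newSrc,
                                   int width, int height) {
    std::vector<DirtyPixel> dirty;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (oldSrc[y * width + x] != newSrc[y * width + x])
                dirty.push_back({x, y});
    return dirty;
}
```

In practice the Edge scaler would also look at the neighbors of each dirty pixel, since its output depends on a pixel’s surroundings, but the comparison itself stays this simple once the backend supplies the old image.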