Finalized API function reference

So here is the skinny on the pixel format request API (a short engine-side usage sketch follows the reference list):

  • static inline Graphics::PixelFormat Graphics::PixelFormat::createFormat*(void)
    • creates a PixelFormat and initializes it to the specified format. A list of formats provided follows:
      • createFormatCLUT8() // 256 color paletted
      • createFormatRGBA4444()
      • createFormatARGB4444()
      • createFormatABGR4444()
      • createFormatBGRA4444()
      • createFormatRGB555() //16bit 555 RGB
      • createFormatBGR555()
      • createFormatXRGB1555() //only use these if you need the backend to handle alpha
      • createFormatXBGR1555() //otherwise use the 555 versions, instead of 1555
      • createFormatRGB565()
      • createFormatBGR565()
      • createFormatRGB888() // when 24 and 32-bit modes are supported
      • createFormatBGR888() // when 24 and 32-bit modes are supported
      • createFormatRGBA8888() // when 24 and 32-bit modes are supported
      • createFormatARGB8888() // when 24 and 32-bit modes are supported
      • createFormatABGR8888() // when 24 and 32-bit modes are supported
      • createFormatBGRA8888() // when 24 and 32-bit modes are supported
    • Because these methods are static, they can be called before any object exists, e.g.:
      • Graphics::PixelFormat _screenFormat = Graphics::PixelFormat::createFormatRGB555();
  • void initGraphics(int width, int height, bool defaultTo1xScaler, Graphics::PixelFormat *format = NULL)
    • format is a pointer to a Graphics::PixelFormat describing the pixel format requested of the backend.
    • if format is left as NULL, CLUT8 will be used.
  • Common::List<Graphics::PixelFormat> OSystem::getSupportedFormats(void)
    • returns a list of all pixel formats supported by the backend
    • Backends supporting non-paletted color data must support data in native hardware color order, and should support data in RGBA color order.
    • All backends are required to support data in CLUT8 (256 color palette) format.
    • The first item in the list (List.begin()) must always be the pixel format providing the largest RGB colorspace that is directly supported by hardware.
      • on the Dreamcast, this would be Graphics::PixelFormat::createFormatRGB565()
      • on the PSP, this would be Graphics::PixelFormat::createFormatABGR8888() once 32-bit modes are supported by the scalers, and Graphics::PixelFormat::createFormatBGR565() until then.
    • The rest of the list shall be ordered by preference of backend
      • If the backend can convert color orders quickly, larger colorspace formats should be first.
      • An ABGR-preferred SDL system with fast conversion would order like this:
        1. createFormatBGR565()
        2. createFormatRGB565()
        3. createFormatXBGR1555()
        4. createFormatXRGB1555()
        5. createFormatBGR555()
        6. createFormatRGB555()
        7. createFormatBGRA4444()
        8. createFormatABGR4444()
        9. createFormatARGB4444()
        10. createFormatRGBA4444()
        11. createFormatCLUT8()
        • That is, larger colorspace first, equivalent colorspaces ordered by hardware support.
      • Whereas a similarly capable system with slower conversion would order like this:
        1. createFormatBGR565()
        2. createFormatXBGR1555()
        3. createFormatBGR555()
        4. createFormatBGRA4444()
        5. createFormatCLUT8()
        6. createFormatRGB565()
        7. createFormatXRGB1555()
        8. createFormatRGB555()
        9. createFormatRGBA4444()
        10. createFormatABGR4444()
        11. createFormatARGB4444()
        • That is, hardware supported RGB formats first, equivalently supported formats ordered by size of colorspace
    • Note: aside from the guarantee that the first item is directly supported in hardware, there is no way for an engine to determine whether or not any given format on the list is hardware supported. This is the reason that list ordering is important: the engine will use the first item in the list that it is compatible with.
  • Graphics::PixelFormat OSystem::getScreenFormat(void)
    • Returns the pixel format the game screen is currently initialized for.
  • virtual void OSystem::initSize(uint width, uint height, Graphics::PixelFormat *format = NULL)
    • initializes the size and pixel format of the game screen.
    • if format is left as NULL, a new Graphics::PixelFormat will be created, and initialized to CLUT8.
    • this replaces the separate initFormat method, which was specified back when I did not realize GFX transactions were an optional feature.
  • OSystem::TransactionError OSystem::endGFXTransaction(void)
    • backends supporting GFX transactions will return kTransactionFormatNotSupported in the list of transaction errors, when they are unable to initialize the screen with the requested format.
  • Graphics::PixelFormat Graphics::findCompatibleFormat(Common::List<Graphics::PixelFormat> backend, Common::List<Graphics::PixelFormat> frontend)
    • Returns the first entry on the backend list that also occurs in the frontend list, or CLUT8 if there is no matching format.
  • void Graphics::CursorManager::pushCursor(const byte *buf, uint w, uint h, int hotspotX, int hotspotY, uint32 keycolor, int targetScale, Graphics::PixelFormat *format)
    • format is a pointer to a Graphics::PixelFormat describing the pixel format of the cursor graphic.
    • if format is left as NULL, CLUT8 will be used.
  • void Graphics::CursorManager::replaceCursor(const byte *buf, uint w, uint h, int hotspotX, int hotspotY, uint32 keycolor, int targetScale, Graphics::PixelFormat *format)
    • format is a pointer to a Graphics::PixelFormat describing the pixel format of the cursor graphic.
    • if format is left as NULL, a new Graphics::PixelFormat will be created, and initialized to CLUT8.
  • Graphics::CursorManager::Cursor(const byte *data, uint w, uint h, int hotspotX, int hotspotY, uint32 keycolor = 0xFFFFFFFF, int targetScale = 1, Graphics::PixelFormat *format = NULL)
    • format is a pointer to a Graphics::PixelFormat describing the pixel format of the cursor graphic.
    • if format is left as NULL, a new Graphics::PixelFormat will be created, and initialized to CLUT8.
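
To make this concrete, here is a minimal engine-side sketch of the “request one specific format” path, using only the calls listed above. The resolution, the scaler flag, and the bytesPerPixel check are illustrative assumptions, and g_system is the usual global OSystem pointer:

// Hypothetical engine initialization: ask for RGB555, fall back if refused.
Graphics::PixelFormat screenFormat = Graphics::PixelFormat::createFormatRGB555();

// Request a 640x480 game screen in RGB555; passing NULL here would mean CLUT8.
initGraphics(640, 480, true, &screenFormat);

// The backend is only allowed to fall back to CLUT8, so a depth check suffices
// (assuming the struct exposes a bytesPerPixel field).
if (g_system->getScreenFormat().bytesPerPixel == 1) {
    // Cases 2B/2C in the outline below: switch to an 8-bit fallback mode if
    // the game has one, otherwise display an error and return immediately.
}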

This is how initialization works:

  • Engine side:
    • Case 1: Game runs in 8-bit paletted mode
      1. No changes are necessary.
    • Case 2: Game runs in a specific high/true color format
      1. engine calls initGraphics with that format
      2. engine calls OSystem::getScreenFormat() to check resulting format against requested format
        • Case A: getScreenFormat() returns requested format
          1. engine runs happily
        • Case B: getScreenFormat() returns CLUT8 and engine has an 8-bit fallback mode
          1. engine switches to fallback mode
          2. engine runs in 8-bits
        • Case C: getScreenFormat() returns CLUT8 and engine doesn’t have fallback mode
          1. engine displays error and returns immediately
        • Case D: getScreenFormat() returns a format that is neither CLUT8 nor the requested format
          1. Tester submits bug report to backend maintainer
          2. Backend maintainer ensures that CLUT8 is the only unrequested mode that the backend will fall back to.
    • Case 3: Game can support any RGB mode easily
      1. engine calls OSystem::getSupportedFormats()
      2. engine calls initGraphics with the top list item.
      3. engine calls OSystem::getScreenFormat() to ensure that format is usable properly at requested resolution
        • see cases 2A – 2D
    • Case 4: Game can run in a small number of RGB modes (see the code sketch after this outline)
      1. engine calls OSystem::getSupportedFormats() to get list of formats supported by backend
      2. engine produces list of formats game can run in
      3. engine calls Graphics::findCompatibleFormat(backend_list, engine_list)
      4. engine calls initGraphics with return value from findCompatibleFormat
      5. engine calls OSystem::getScreenFormat() to ensure that format is usable properly at requested resolution
        • see cases 2A – 2D
  • Backend side:
    • backend receives the screen’s requested resolution and format from initGraphics
      • Case 1: NULL pointer
        1. backend initializes screen to 8-bit graphics at requested resolution.
      • Case 2: Hardware directly supports format at requested resolution
        1. backend initializes screen to requested format at requested resolution.
      • Case 3: Hardware supports format at requested resolution, in a different color order
        • Case A: Requested format is RGBA or another conversion-supported color order
          1. backend initializes screen to corrected-order format equivalent at requested resolution.
          2. Backend implements pixel conversion on copyRectToScreen, preferably using ASM hand-crafted for speed.
          3. getScreenFormat will “lie” and return the requested format, rather than the hardware-supported equivalent that is actually being used.
        • Case B: Requested format is GABR or similarly nonsensical/unsupported color order
          1. backend initializes screen to 8-bit graphics at requested resolution.
      • Case 4: Hardware does not support format
        • Case A: requested format has alpha component and hardware supports equivalently-aligned format without alpha
          1. Backend initializes screen to alpha-less equivalent format at requested resolution.
          2. Backend implements alpha blending in software.
          3. getScreenFormat will “lie” and return the requested format, rather than the hardware-supported equivalent that is actually being used.
        • Case B: hardware supports a higher format and the backend can easily up-convert (optional case, to be handled at the backend maintainer’s discretion)
          1. backend initializes screen to higher format.
          2. backend implements up-conversion on copyRectToScreen.
          3. getScreenFormat will “lie” and return the requested format, rather than the hardware-supported equivalent that is actually being used.
        • Case C: all other cases
          1. backend initializes screen to 8-bit graphics at requested resolution.
      • Case 5: Hardware supports format but not at requested resolution
        1. backend initializes screen to 8-bit graphics at requested resolution.
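
As a companion to the outline above, here is roughly what the engine-side Case 4 negotiation could look like in code. This is only a sketch: the resolution and the formats this hypothetical game supports are made up.

// Hypothetical Case 4: the game can render in a few RGB formats, listed in
// order of preference.
Common::List<Graphics::PixelFormat> engineFormats;
engineFormats.push_back(Graphics::PixelFormat::createFormatRGB565());
engineFormats.push_back(Graphics::PixelFormat::createFormatRGB555());

// Ask the backend what it supports, then pick the first mutually usable format
// (findCompatibleFormat returns CLUT8 if nothing matches).
Common::List<Graphics::PixelFormat> backendFormats = g_system->getSupportedFormats();
Graphics::PixelFormat format = Graphics::findCompatibleFormat(backendFormats, engineFormats);

initGraphics(640, 480, true, &format);

// As in Case 2, check g_system->getScreenFormat() afterwards and fall back to
// 8-bit rendering (or error out) if the request was not honored.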

Status report

What follows is a copy of the status report email I sent the ScummVM GSoC mentors list, as it has been requested that I post it on my blog as well:

Hello Eugene, everyone:
Here is the status report and timeline for my project, as requested:

My project was to develop and implement 16-bit (and optionally 24/32 bit) graphical support for games that made use of it. I was to focus primarily on the later HE Scumm games.

The current status is that:

  • the 16-bit HE Scumm games are all displaying in 16-bit
  • A preliminary API has been designed for games to request specific graphical formats of the backend
  • this API has been implemented in the SDL backend

My currently active tasks are:

  • refining/simplifying the API
  • writing documentation for the API

My future tasks are:

  • Exhaustive testing of the 16-bit scumm engine games, to ensure that there are no 16-bit related display glitches
  • Work necessary in other engines with 16-bit RGB graphics to make use of the finalized API (as time permits)
  • Upgrading the scalers, gui, and SDL backend to support 24/32 bit pixel formats, if time permits

Here is the timeline so far:

  • May 18th – May 21st: Began work on SDL backend, implementing 16-bit screen surface to test with
  • May 22nd – May 29th: Began work on Scumm engine, implementing 16-bit displays to test 16-bit SDL screen.
  • May 30th – Jun 1st: Travis assumed responsibility for Scumm HE rendering code for 16-bit support. I tested games for display bugs, and fixed a backend bug.
  • Jun 3rd: Max created an SVN branch for me; Kirben and I committed our patches.
  • Jun 4th – Jun 6th: developed hack to allow 8-bit and 16-bit cursors to display properly.
  • Jun 7th – Jun 13th: Discussed API concerns with key ScummVM developers, developed ad-hoc API while awaiting conclusions.
  • Jun 14th – Jun 17th: Documented discussion results and refactored ad-hoc API code to bring it in line with the decisions that were made.
  • Jun 18th – Jun 22nd: Began streamlining and refining the API.

Here is my expected timeline for the future (optimistic):

  • Jun 23rd – Jun 26th: Finalize and document the API.
  • Jun 27th – Jul 13th: Test 16-bit SCUMM HE games for display glitching.
  • Jul 14th – Jul 28th: The vacation I mentioned in my application, possible continued testing of SCUMM HE games.
  • Jul 29th – Jul 31st: Research other engines requiring RGB color support.
  • Aug 1st – Aug 7th: Enhance these engines to make use of the API
  • Aug 8th – Aug 17th: Improve scalers, gui, and SDL backend to support 24 and 32 bit pixel formats

Here is my expected timeline for the future (pessimistic):

  • Jun 23rd – Jul 13th: Finalize and document the API.
  • Jul 14th – Jul 28th: The vacation I mentioned in my application, begin testing 16-bit HE Scumm games for graphical glitching
  • Jul 29th – Aug 17th: Continued testing and bugfixing of 16-bit HE Scumm games.

I hope this is sufficient,
Jody Northup

Organizing my thoughts

So, in an earlier post, I made an outline of the steps in front of me.

Because of how quickly this project has been moving, I’ve managed to lose all track of where I am and where I’m going.

So, in order to collect my thoughts, I’m referring back to that outline, and taking stock of the steps, their completion status, and their relationship to reality:

  1. Modify the Scumm HE engine to display a 16-bit background resource when the freddicove demo is loaded (to test my understanding of the resource format and standard rendering process).
  2. Integrate this functionality into the standard running of the Scumm HE engines. — Kirben did these three.
  3. Add 16-bit support in place for other resource types. — Kirben did these three.
  4. Modify rendering, for 16-bit HE games, such that the 8-bit resources are rendered using the palette->rgb mapping system that the game engine provides. (possibly involves implementing this functionality) — Kirben did these three.
  5. Perform unit tests to ensure that all 16-bit Scumm HE games are rendering properly — Partially done, remainder delayed until completion of API work
  6. Hack the mouse cursor for 16-bit support because the erroneous display is incredibly annoying.
  7. Reimplement 16-bit cursor support in a less hackish manner because that was ugly.
  8. Discuss with mentor at length to determine ideal API behavior for bit-depth/pixel format negotiation between game engines and backends. — Not just my mentor, but most of the dev community. (Here are the final results.)
  9. Implement hackish proof-of-concept API while awaiting discussion from dev community.
  10. Implement support for this API in SDL backend and Scumm HE engine. (The real one) — This is basically done, although I want to change the behavior of a few things slightly. — (This is the point at which the mouse cursor will be upgraded, because between the in-game mouse cursor and the in-game menu, at least one must be assured to display properly if any meaningful testing is to be done.) — This turned out to be a lie. See steps 6 and 7
  11. Document this API exhaustively. — This may be difficult, and I will have to discuss with Sev about where and how I should do this.
  12. Finish testing 16-bit Scumm HE games. — I think I have all of them but pjgames, now.
  13. See what can be done about engines other than Scumm.
  14. NEW! See about fixing up the gui, SDL backend, and scalers for 24/32 bit color support.

Because of Kirben’s intervention, this whole task has been going much more quickly than even my most optimistic expectations. I imagine that, on my own, I might have finished step 3 by now, but I really am not sure. Certainly I was expecting it to take at least another month to get to this point.

How this is going to work

After a great deal of discussion on the matter, Eugene has given me a final decision:

The engine is going to specify a Graphics::PixelFormat to the backend.

So here’s how this is going to work:

  • New Methods
    • Graphics::PixelFormat constructor, for convenience of engine developers (details to be determined)
    • OSystem::getBestFormat(void)
      • Returns a Graphics::PixelFormat describing the highest bitdepth supported by the backend (important for YUV games which will want to output in the highest quality)
    • OSystem::initFormat(Graphics::PixelFormat)
      • Sets up the color format of the virtual screen within the GFX transaction, to be applied on endGFXTransaction
    • Graphics::PixelFormat OSystem::getScreenFormat(void)
      • Returns a pixelformat describing the current color mode
    • CursorManager::pushCursorFormat(Graphics::PixelFormat)
      • Pushes a new cursor pixel format onto the stack, and sets it in the backend.
    • CursorManager::popCursorFormat(void)
      • Pop a cursor pixel format from the stack, and restore the previous one to the backend.
      • If there is no previous format, the current screen format is used instead.
      • Unlike CursorManager::popCursorPalette, this must be called prior to CursorManager::popCursor
    • CursorManager::replaceCursorFormat(Graphics::PixelFormat)
      • Replace the current cursor pixel format on the stack.
      • If the stack is empty, the format is pushed instead.
      • These methods should be called whenever the equivalent *CursorPalette methods are called, to keep cursor rendering straight across color mode changes (see the sketch after this list).
  • Changed functions/methods
    • initGraphics
      • Takes an optional pointer to a Graphics::PixelFormat struct
      • if pointer is null or not supplied, initializes using CLUT8 (standard 8-bit that everything already uses)
    • OSystem::endGFXTransaction
      • Must apply the color mode set up by initFormat
      • kTransactionPixelFormatNotSupported is included in the return value if color mode setup fails
  • TODO: new and changed class members.
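
As a rough illustration of how the cursor format stack above is meant to pair with the existing cursor calls, here is a sketch. The wrapper functions and cursor arguments are hypothetical, CursorMan is the usual CursorManager singleton, and the format helper is borrowed from the finalized reference at the top of this page:

// Hypothetical helpers showing the intended push/pop pairing; not final code.
void pushTemporary16BitCursor(const byte *buf, uint w, uint h,
                              int hotspotX, int hotspotY, uint32 keycolor) {
    Graphics::PixelFormat cursorFormat = Graphics::PixelFormat::createFormatRGB555();
    CursorMan.pushCursorFormat(cursorFormat);   // tell the backend how to read buf
    CursorMan.pushCursor(buf, w, h, hotspotX, hotspotY, keycolor);
}

void popTemporary16BitCursor() {
    CursorMan.popCursorFormat();   // unlike popCursorPalette, call this before popCursor
    CursorMan.popCursor();
}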

The bitdepth initialization process will be as follows:

  • 8Bit paletted:
    1. Engine calls initGraphics (no changes are required on the engine side)
    2. initGraphics initializes a CLUT8 Graphics::PixelFormat and skips compatibility checking, because all backends must support CLUT8 exactly as they do now
    3. initGraphics passes the CLUT8 PixelFormat to OSystem::initFormat
    4. Backend sees that the format matches the one that is currently set up, and ignores that part of the transaction
  • High or true color RGB:
    1. Engine initializes a Graphics::PixelFormat with the color format that the game uses (games which convert from YUV will query OSystem::getBestFormat to decide this)
    2. Engine passes this format to initGraphics
    3. initGraphics passes this format to OSystem::initFormat
    4. Backend sets up virtual screen format to be applied with transaction
    5. Backend attempts to initialize virtual screen with specified format and resolution, falls back to 8-bit and returns kTransactionPixelFormatNotSupported on failure.
    6. initGraphics checks transaction return for kTransactionPixelFormatNotSupported, and warns if encountered.
    7. Engine queries backend (using OSystem::getScreenFormat) to check the current color format
    8. Engine branches based on result:
      • Requested format
        1. Engine runs happily
      • CLUT8
        • If engine supports a 256 color fallback mode
          1. Engine falls back to 256 colors, and runs with only minor complaint
        • If engine does not support a 256 color fallback
          1. Engine displays error message that the system does not support its required format
          2. Engine returns with error (error code TBD).
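
In code, steps 2 through 6 of the high/true color path might look roughly like this inside initGraphics. This is a simplified sketch of the draft only: the scaler and aspect-ratio setup are elided, the CLUT8 helper is borrowed from the finalized reference above, and the transaction error value is assumed to behave as a bitmask:

// Simplified sketch of what initGraphics could do under this draft API.
void initGraphics(int width, int height, bool defaultTo1xScaler, Graphics::PixelFormat *format) {
    // A null pointer means plain CLUT8, as described above.
    Graphics::PixelFormat screenFormat =
        format ? *format : Graphics::PixelFormat::createFormatCLUT8();

    g_system->beginGFXTransaction();
        g_system->initSize(width, height);
        g_system->initFormat(screenFormat);   // draft method: applied on endGFXTransaction
        // ... scaler / fullscreen / aspect-ratio setup elided ...
    OSystem::TransactionError gfxError = g_system->endGFXTransaction();

    if (gfxError & OSystem::kTransactionPixelFormatNotSupported)
        warning("Pixel format not supported by this backend, falling back to CLUT8");
}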

Bitdepth/pixelformat pros and cons

I have received and reviewed the discussion in the bitdepth/pixelformat mailing list thread that I started, and I’m a bit worried by the results:

  • One vote (Sven) for the old bitformat (8, 555, 565, 888) style of format specification
  • One vote (Eugene) for the engine requesting only a bitdepth, not a full format
  • Two votes (Johannes and Max) for the engine requesting a (list of) fully formed Graphics::PixelFormat object(s).
  • One vote (Myself) for using an enumerated list
  • Three people (Marcus, Joost, Oystein) contributing useful information about backend concerns, but no recommendations of their own

Nothing resembling a consensus, and very little discussion on the pros and cons of each method.

So, I’m going to list the pros and cons (as I see them) for each format:

  • Old bitformat:
    • Pros:
      • Already defined and implemented
      • Values presented in a very readable and understandable format
      • Requires minimum of sanity checking by backends
      • Requires minimum of sanity checking by engines
    • Cons:
      • Requires extra work for backends wanting to add support for a format not currently defined
      • Requires extra conversions to/from a format that can be understood by video interface
      • Viewed as outdated and “evil” by some members of the community
    • Neutral/dependent:
      • Backend responsible for specifying color order
  • Bitdepth only
    • Pros:
      • Already defined
      • Values presented in a very readable and understandable format
      • Minimum of sanity checking necessary for backend
    • Cons
      • Disallows the possibility of multiple formats in the same bitdepth (ARGB1555/RGB555/RGB565/RGBA4444)
      • Requires all engines with higher than 256 color modes to fully understand Graphics::PixelFormat values
      • Requires all engines with higher than 256 color modes to support color format conversions to match backend
      • Requires extra conversions to/from a format that can be understood by video interface
    • Neutral/dependent:
      • Backend entirely responsible for specifying color format
  • Graphics::PixelFormat
    • Pros:
      • Already defined and implemented
      • Allows great flexibility in negotiation of color mode between engine and backend
      • No extra conversions necessary once format is agreed upon
    • Cons:
      • Complicated for engine to produce a list of supported formats
      • Requires backend developers to create and maintain a comprehensive list of supported formats or perform extensive checks to determine compatibility.
      • Requires engine to fully understand and work with Graphics::PixelFormat object
    • Neutral/dependent:
      • Engine is primarily responsible for determining color format.
  • Enum type (colormode and colororder fields)
    • Pros:
      • Allows great flexibility in negotiation of color mode between engine and backend
      • Values presented in a very readable and understandable format
      • Requires minimum of sanity checking by backends
      • Requires minimum of sanity checking by engines
    • Cons:
      • Slightly complicated for engine to produce list of supported formats.
      • Requires extra conversions to/from a format that can be understood by video interface.
    • Neutral/dependent:
      • Engine is primarily responsible for determining color format.
  • Enum type (colormode only)
    • Pros:
      • Values presented in a very readable and understandable format
      • Requires minimum of sanity checking by backends
      • Requires minimum of sanity checking by engines
    • Cons:
      • Requires extra conversions to/from a format that can be understood by video interface.
    • Neutral/dependent:
      • Backend responsible for specifying color order
  • New type (not yet defined)
    • Pros:
      • Can be designed specifically for this task
    • Cons:
      • May require significant effort to implement

Now, if I take Eugene’s recommendation (from IRC) that the color order passed from engine to backend is always RGBA, then removing the ability of engines to specify their own order is actually a plus, as it shields against potential future developer error. But I don’t know whether that idea is universally agreed upon.

The vestiges of flesh

Well, using the ad-hoc solution I described yesterday, I now have the SDL backend supporting (a very limited set of) pixel format requests from engine code.

And I’ve got the Scumm engine module making use of it.

Here’s a “before” and “after” set of the Humongous Interactive Catalog:

Catalog, rendered in 555RGB
Catalog, rendered in properly paletted CLUT8

Incidentally, because I had never actually tried the interactive catalog before converting the backend to render in 555RGB, I never realized until taking these screenshots that those perfect circles on the left are actually horribly distorted ellipses. So even though I knew everything was rendering at half width, I somehow expected a result that looked a bit more like this:

And just to prove I haven’t broken 16-bit, here’s a screenshot of freddicove running in 16-bit (RGB555) color, on the same build of ScummVM.

Remember that screenshot back in my first progress update? …this is what it would look like now.

Of course, this is an ad-hoc, temporary solution. It’s not nearly as dirty a hack as my first modification to make the backend render as 16-bit, but still, I strongly doubt that it will be an acceptable final implementation.

Additionally, in its current state, there is absolutely no error checking or sanitizing, and the mouse cursor code is still using the temporary hack I mentioned three posts ago — not respecting or even checking the backend state at all.

But it’s still a start, and I’m still proud of it.

The barest of skeletons

While no consensus has yet been reached on the concerns regarding the API, I am going ahead and starting work on one of the possibilities that has begun to feel fully formed in my head:

This model uses an enum type (temporarily named Graphics::ColorFormat), which is divided into two parts, FormatType and ColorOrder, that are ORed together.

To start with, I am providing ten values for FormatType:

  • kFormat8Bit = 0
  • kFormatRGB555 = 1
  • kFormatARGB1555 = 2
  • kFormatRGB556 = 3
  • kFormatRGB565 = 4
  • kFormatRGB655 = 5
  • kFormatARGB4444 = 6
  • kFormatRGB888 = 7
  • kFormatARGB6666 = 8
  • kFormatARGB8888 = 9

and 31 values for ColorOrder:

  • kFormatPalette = 0 << 8
  • kFormatRGB = 1 << 8
  • kFormatRBG = 2 << 8
  • kFormatGRB = 3 << 8
  • kFormatGBR = 4 << 8
  • kFormatBRG = 5 << 8
  • kFormatBGR = 6 << 8
  • kFormatARGB = 7 << 8
  • kFormatBGRA = 30 << 8
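
Since the simplified example below masks out the two halves of such a value, the declarations might look something like this. The mask values are my own assumption rather than part of the proposal, and only a few of the values listed above are repeated here:

// Hypothetical declaration: FormatType lives in the low byte, ColorOrder above it.
enum ColorFormat {
    // FormatType values (low byte) -- see the full list above
    kFormat8Bit      = 0,
    kFormatRGB555    = 1,
    kFormatRGB565    = 4,

    // ColorOrder values (shifted past the low byte) -- see the full list above
    kFormatPalette   = 0 << 8,
    kFormatRGB       = 1 << 8,
    kFormatARGB      = 7 << 8,

    // Masks used to pull the two halves back apart (values assumed, not final)
    kFormatTypeMask  = 0x00FF,
    kFormatOrderMask = 0xFF00
};

// A complete format is one value from each half ORed together, e.g.:
// Graphics::ColorFormat format = Graphics::ColorFormat(kFormatRGB565 | kFormatRGB);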

For this tentative model, the optional parameter to Engine::InitGraphics will be a Common::List of formats the game supports. The backend will iterate through this list and test each entry for compatibility, using a pair of switches, as per the following simplified example:

OSystem::TransactionError checkFormatCompatibility(Graphics::ColorFormat format) {
    switch (format & kFormatTypeMask) {
    case kFormat8Bit:
    case kFormatRGB555:
        break;
    default:
        return kTransactionPixelFormatNotSupported;
    }

    switch (format & kFormatOrderMask) {
    case kFormatRGB:
    case kFormatARGB:
        return kTransactionSuccess;
    default:
        return kTransactionPixelFormatNotSupported;
    }
}
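
And the backend’s walk over the engine’s list might look roughly like this, built on the checkFormatCompatibility example above (again, only a sketch of this tentative model):

// Pick the first format in the engine's preference list that passes the
// compatibility check; fall back to 8-bit paletted mode if none does.
Graphics::ColorFormat pickScreenFormat(const Common::List<Graphics::ColorFormat> &requested) {
    Common::List<Graphics::ColorFormat>::const_iterator i;
    for (i = requested.begin(); i != requested.end(); ++i) {
        if (checkFormatCompatibility(*i) == kTransactionSuccess)
            return *i;
    }
    return Graphics::ColorFormat(kFormat8Bit | kFormatPalette);
}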

NOTE FOR DEVELOPERS: I do not really expect this to be an acceptable final implementation, as it is a combination of overkill (who’s ever heard of BAGR6666 color?) and underkill (no YUV support), so, pending further notice, do not use information from this post to make any plans or modifications to your engine or your backend. I am only using this as a direction for me to pursue while I await a final decision from full developers.

First outlines of an API

After discussing the API a bit with my mentor, Sev, it has been determined that the game engine should initially request a bitdepth/pixelformat from the backend by means of an optional parameter passed to Engine::InitGraphics.

There are still a few details that haven’t been determined, like:

  • What happens when the engine requests an unsupported format;
  • Should the parameter be a bitdepth (8, 16, 24, 32), a generic specifier (8, 555, 565, 1555, 888, 8888), a fully formed Graphics::PixelFormat object, or some other format not yet defined;
  • Should any pixelformat conversions be performed if the engine and backend cannot agree on a directly supported format;
  • If so, should they be performed by the engine or by the backend, or should it vary by circumstance;
  • Probably many others that I haven’t considered yet

But, I am beginning to form a mental picture of how this thing will work, and I will, of course, discuss these remaining questions with Sev, and others, at the next opportunity.

Good news, everyone.

My success in getting the initial 16-bit support into the backend, and getting it to work with resources from freddicove, triggered Kirben into a frenzy of updates to the Scumm engine code. 16-bit HE games seem to be working almost perfectly, now.

During this period, I have been doing some bug hunting in regard to this.

For example, when Kirben first mentioned strange graphical glitches occurring in the intro to freddicove, I started the MSVC debugger and traced the issue to an error in the SDL copyRectToScreen method that I had failed to correct when changing to 16-bit. When testing for the special case of a rectangle spanning the full width of the screen, the code compared the rectangle’s pitch (the number of bytes from the start of one line to the start of the next) against the screen width, instead of its width (the number of pixels from the start of one line to the start of the next). As a result, the special case was taken when the rectangle being copied was half the screen width, but not when it was the full width.
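
Schematically, the mistake and the fix look like this (an illustration only, not the actual backend code; the names are made up):

// In 16-bit mode the pitch is measured in bytes, so it is twice the width in
// pixels; comparing it against the screen width therefore matches a HALF-width rect.
bool isFullWidthRect_buggy(int pitch, int screenWidth) {
    return pitch == screenWidth;   // wrong: bytes compared against pixels
}

bool isFullWidthRect_fixed(int w, int screenWidth) {
    return w == screenWidth;       // right: pixels compared against pixels
}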

I then corrected this error, and began looking at similar glitches that had been reported in baseball2001 and spyfox3.

During this time, the full versions of Spy Fox 3, Backyard Soccer MLS, Freddi Fish 5: The Case of Coral Cove, and Moonbase Commander arrived from eBay, and I began using those for testing. So far, I have concentrated primarily on Spy Fox 3, as it seemed to have the highest incidence of graphical errors.

Long story short: Kirben seems to have fixed all of the known display errors that were internal to the Scumm engine, and I now have cursors displaying in 16-bit, because the work he did fixing hePalettes to work properly in 16-bit was incompatible with the cursors rendering properly in 8-bit mode. (I have screenshots of this, but this post is cluttered as it is, and little, if any, difference is visible from those cursors rendering properly in 8-bit.)

Additionally, because of all these rapid developments in the Scumm engine, I will begin discussion and work on the API as soon as Sev gets back from his vacation.

Minor progress update.

If the prior post is to be taken as a checklist, the first item can now be crossed off.

ScummVM displaying a background from freddicove in glorious 16-bit color.

Many thanks to Kirben for the help he provided in locating this resource.