Weeks 1~2

I’ll describe here how the work has progressed since the start of GSoC, up to around two weeks in. The coding period officially started last week, but we’ve been hard at work since the very beginning.

Week 1 ― Setup and File Cleanup

To start things off, my mentor shared the sources with me through a private GitHub repository, where all of the work happens. We cloned the repo into engines/saga2 and put all of the original sources into an original/ folder.

Our first job was to make the sources compilable within ScummVM. This required first setting up a skeleton engine, which I left entirely to my mentor. After that, we had to hook it into ScummVM’s build system. At first I had trouble getting the build system to recognize the new engine, but after asking around for help, I discovered that because the SAGA engine had saga2 as a sub-engine, the build was defaulting to that. Changing the appropriate configuration files finally let me see the build system trying to compile the sources, and obviously failing.

With almost everything set up, our next step was to clean up all the files and clear the compilation errors along the way. By clean up, I mean we extended the copyright header in every file, set up proper include guards, changed the includes to conform to ScummVM’s guidelines and enclosed everything within a Saga2 namespace.
As for clearing the compilation errors, we used two methods: 1. including a "saga2/std.h" file and defining unknown types there; 2. stubbing out external library functions and functions implemented in assembly with #if 0.
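As a hypothetical sketch of what a file looks like after this pass (the file name, guard and contents below are illustrative, not taken from the actual sources), the pieces fit together like this:

```cpp
// Illustrative shape of a cleaned-up source file; SAGA2_EXAMPLE_H and
// everything inside are hypothetical stand-ins, not real SAGA2 code.
#ifndef SAGA2_EXAMPLE_H   // include guard, per ScummVM guidelines
#define SAGA2_EXAMPLE_H

namespace Saga2 {         // everything is enclosed in the Saga2 namespace

#if 0
// External-library or assembly-implemented routines get stubbed out
// like this until they can be reimplemented portably.
extern void fastBlit(void *dst, const void *src, int len);
#endif

// Ordinary code keeps compiling as usual inside the namespace.
inline int exampleAdd(int a, int b) {
	return a + b;
}

} // end of namespace Saga2

#endif
```

The #if 0 blocks keep the unportable code visible in the tree, which makes it easy to find and replace each stub later.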

All in all, this whole process took about a week, with my mentor and me working in parallel. Once it was all done, though, we finally had something that compiled, and we began making individual systems portable, starting with file reading.

Week 2 ― File System

This phase consisted mostly of replacing FILE * variables with the Common::File class provided by ScummVM’s HAL.
A common technique throughout the resource loading methods was to read file contents directly into structs. Although this may have worked with the original 32-bit, Little Endian compiler, the approach lacks portability. Therefore, whenever we saw code like this:

hResEntry origin;
if ((handle = HR_OPEN(resname, "rb")) == NULL) return;

if (HR_READ(&origin, sizeof origin, 1, handle) != 1) return;

We would change it to allow for portability like this:

hResEntry origin;
if (!_file.open(resname))
	warning("Unable to open file %s", resname);

readResource(origin);


Where readResource is implemented as this:

void hResource::readResource(hResEntry &element) {
	element.id = _file.readUint32BE();
	element.offset = _file.readUint32LE();
	element.size = _file.readUint32LE();
	uint32 id = element.id;

	debugC(3, kDebugResources, "%s, offset: %x, size: %d", tag2str(id), element.offset, element.size);
}

Common::File::readUint32LE and its relatives are methods whose purpose is to read file contents correctly regardless of the host’s endianness. Since the files are meant to be read as Little Endian, we use Common::File::readUint32LE, with the exception of element.id, which, for reasons I will explain shortly, we chose to read as Big Endian.
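Under the hood, endian-safe reads simply assemble the bytes explicitly instead of relying on the host’s memory layout. A minimal sketch (these helpers are my own illustration, not ScummVM’s actual implementation):

```cpp
#include <cstdint>

// Assemble four bytes into a 32-bit value, treating them as Little Endian:
// the first byte is the least significant.
uint32_t bytesToUint32LE(const uint8_t *p) {
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

// Same four bytes, treated as Big Endian: the first byte is the most
// significant.
uint32_t bytesToUint32BE(const uint8_t *p) {
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}
```

Because the bytes are combined arithmetically, the result is the same on any host, which is the whole point of the Common::File helpers.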

Another important thing to note at this step is the original memory manager. Whenever the code allocates new pointers, it uses methods such as RNewPtr, and resources are often handled through the RHANDLE type, which is managed by methods such as RNewHandle. These are all implemented in rmem.cpp, and the Resource Server, implemented in rserver.cpp, is used to fetch resources.

We want to get rid of all of this, so we replace all of the handles with plain pointers, and the custom methods with new/delete (or malloc/free). An example is the method hResContext::load, which we do not replace immediately; instead, we create another method, hResContext::loadResource, and gradually replace calls to hResContext::load with calls to hResContext::loadResource:

dataSegment = scriptRes->loadResource(dataSegID, "saga data segment");
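The gradual-migration pattern behind this can be sketched as follows (the class shape and bodies here are my own illustration, not the real SAGA2 code):

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

typedef uint8_t *RHANDLE;  // stand-in for the original handle type

// Hypothetical sketch: the legacy handle-based load() is kept around so
// untouched callers still build, while callers migrate one by one to
// loadResource(), which hands back a plain pointer the caller frees.
struct hResContext {
	// Legacy path, declared but left alone until every caller is migrated.
	RHANDLE load(uint32_t id);

	// New path: a plain malloc'd buffer, no custom memory manager.
	uint8_t *loadResource(uint32_t id, const char *desc) {
		const size_t size = 16;                // illustrative size
		uint8_t *buf = (uint8_t *)malloc(size);
		memset(buf, 0, size);                  // stand-in for the file read
		(void)id; (void)desc;                  // unused in this sketch
		return buf;
	}
};
```

Keeping both methods alive during the transition means the engine keeps compiling at every step, instead of requiring one giant all-at-once rewrite.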

You may notice the dataSegID in the code above. It is defined in the following way in the original:

const uint32        sagaID      = RES_ID('S', 'A', 'G', 'A'),
                    dataSegID   = RES_ID('_', '_', 'D', 'A'),
                    exportSegID = RES_ID('_', 'E', 'X', 'P');

These are the so-called hResEntry::id values we chose to read as Big Endian earlier. We did so because MKTAG, the method we replaced RES_ID with, turns 'S', 'A', 'G', 'A' into the value 0x53414741 (the bytes 53 41 47 41). If the four bytes in the file were read as Little Endian, we would get the tag AGAS instead. Therefore, in order to match SAGA, we have to read these 4 bytes as Big Endian to find the resources correctly.
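To make the mismatch concrete, here is a macro equivalent to ScummVM’s MKTAG, compared against a Big Endian read of the four bytes as they sit in the file (the read helper is my own illustration):

```cpp
#include <cstdint>

// Equivalent of ScummVM's MKTAG: packs four characters into a uint32
// with the first character in the most significant byte.
#define MKTAG(a, b, c, d) \
	((uint32_t)((uint32_t)(a) << 24 | (uint32_t)(b) << 16 | \
	            (uint32_t)(c) << 8 | (uint32_t)(d)))

// The id bytes appear in the file in reading order: 'S' 'A' 'G' 'A'.
// Assembling them Big Endian reproduces the MKTAG value; a Little
// Endian read would instead yield 0x41474153, i.e. the tag "AGAS".
uint32_t readTagBE(const uint8_t *p) {
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}
```

Since the resource lookup compares the id read from disk against MKTAG constants, the Big Endian read is what makes the comparison succeed.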

This covers the great majority of what happened in the File System phase. There may be some details I forgot to mention, and I’ll update this post accordingly if there’s anything important to add. After the week was over, we could correctly read the file contents and extract resources from them. The next step, therefore, was to try to display those resources graphically.

In the next post I will explain how that has been going, and where we stand right now.
