Augmentinel is a re-skinned version of the Geoff Crammond classic: The Sentinel. It preserves the original gameplay experience, while bringing in the best features from the different game ports.
The original game code runs as it would in a Spectrum emulator. The lower 16K of the memory map contains the Spectrum ROM image, and the upper 48K contains a snapshot image taken from the game at the main menu. The Z80 code is interpreted by a CPU core in 69888-cycle blocks every 1/50th of a second, to maintain the original game speed. However, unlike a traditional Spectrum emulator, the display map isn’t converted to a screen image and all I/O requests are ignored (reads return 0xff).
To maintain control over the running program we use the same technique as TileMap. After the snapshot is loaded a few code patches are made to advance from the menu into the game, and skip any key press prompts. The bulk of the real work is performed inside code hooks, which are breakpoints set at carefully selected locations in the original Sentinel game code. I spent a couple of months reverse-engineering the original game code to learn how it worked, and to determine which locations should be hooked. 12 hooks are used to track and modify the entire game state – more on those below.
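The hook technique can be sketched in a few lines. This is an illustrative model, not Augmentinel’s actual code: the emulator runs the Z80 in 69888-cycle blocks (one 50Hz frame), and whenever the program counter reaches a hooked address a handler runs first, free to inspect or modify the emulated game state.

```python
# Illustrative sketch of breakpoint-style code hooks in an emulation loop.
# Class and attribute names are assumptions for the example.

FRAME_CYCLES = 69888  # Z80 T-states per 50Hz Spectrum frame

class HookedEmulator:
    def __init__(self, cpu):
        self.cpu = cpu
        self.hooks = {}  # address -> handler callable

    def add_hook(self, address, handler):
        self.hooks[address] = handler

    def run_frame(self):
        """Run one frame's worth of cycles, dispatching hooks as PC hits them."""
        cycles = 0
        while cycles < FRAME_CYCLES:
            handler = self.hooks.get(self.cpu.pc)
            if handler:
                handler(self.cpu)        # inspect/modify game state here
            cycles += self.cpu.step()    # execute one instruction
```

The same loop also supports the “run at maximum emulation speed” trick mentioned later: simply call `run_frame` repeatedly without waiting for the 1/50th-second tick.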
I used the Spectrum version of the game as it’s the one I’m most familiar with, but it could just as easily have been the original BBC Micro version instead. The core of all game versions was converted from the original BBC Micro 6502 code to ensure each plays the same. Crammond wrote a 6502 to Z80 converter for the CPC version, which was also used as a starting point for the Spectrum version. There are still small sections of unused CPC code and data to be found in the Spectrum version! The x86 and 68000 code appears to have been given similar treatment, making it easy to find equivalent routines in any version.
To give you some idea of how similar the converted code is, here is the map look-up routine in 6502 (BBC), Z80 (Spectrum), and x86 (PC):
; 6502 (BBC Micro)
LDA obj_x
ASL A
ASL A
ASL A
AND #$E0
ORA obj_z
TAY
LDA obj_x
AND #3
CLC
ADC #4 ; map_base MSB
STA map_ptr_msb
LDA (map_ptr_lsb),Y
CMP #$C0
RTS
; Z80 (Spectrum)
ld a, (obj_x)
sla a
sla a
sla a
and 0E0h
ld hl, obj_z
or (hl)
ld e, a
ld a, (obj_x)
and 3
add a, 61h ; map_base MSB
ld (map_ptr_msb), a
ld hl, (map_ptr_lsb)
add hl, de
ld a, (hl)
cp 0C0h
ccf
ret
; x86 (PC)
mov al, obj_x
shl ax, 1
shl ax, 1
shl ax, 1
and ax, 0E0h
or al, obj_z
mov di, ax
mov ah, obj_x
and ax, 300h
add ax, offset map_base
mov bp, ax
mov al, [bp+di]
cmp al, 0C0h
cmc
retn
The code conversion is mostly 1:1 from 6502 to Z80; however, the lack of indexed addressing flexibility on the Z80 requires some extra 16-bit arithmetic. The 6502 X and Y registers are held in the Z80 C and E registers, with the B and D registers holding zero for easy indexing use.
There are many other identifiable patterns in the converted code. In the Z80 and x86 examples here there is a CCF/cmc instruction to invert the carry flag. This is needed because the 6502 carry flag has the opposite sense after compare instructions.
It’d be a fun challenge to rewrite the conversion tool in Python, but I’ve resisted so far.
Augmentinel extracts almost all resources it needs directly from the original game data. This includes the vertex/index/face information needed for the game models, which can be loaded almost directly into buffers used by modern 3D hardware. Perhaps the only oddity is that the vertices are stored as polar coordinates rather than cartesian coordinates, but that was probably to simplify model rotation.
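The polar-to-cartesian step can be sketched as follows. The exact angle encoding used in the game data may differ; pitch/yaw/radius here is an assumption for illustration:

```python
import math

# Hedged sketch: converting a polar-style vertex (as The Sentinel stores them)
# into the cartesian form modern vertex buffers expect.

def polar_to_cartesian(pitch, yaw, radius):
    """pitch/yaw in radians; returns an (x, y, z) tuple."""
    x = radius * math.cos(pitch) * math.sin(yaw)
    y = radius * math.sin(pitch)
    z = radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

def rotate_y(pitch, yaw, radius, angle):
    """Rotating a model about the vertical axis is just an offset to yaw,
    which is presumably why the polar format simplified rotation."""
    return polar_to_cartesian(pitch, yaw + angle, radius)
```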
All game versions include the BBC Micro bitmap font (upper-case and digits), used for all in-game text. It’s usually drawn offset in different colours to give a 3D appearance. To match that I create extruded character models from the bitmap data, to give real 3D text. The same is done for the symbols used for the energy display, though they are displayed flat using an orthogonal view projection.
Perhaps the only resource embedded directly in Augmentinel is the 16-colour palette, taken from the EGA PC version. Three of the colour entries are dynamic, and change depending on the number of sentries present on the current landscape. The model face colours index into the palette, in some cases referencing the dynamic colours. Augmentinel maintains an equivalent colour look-up table in a shader constant buffer, to retain the same model colour flexibility.
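The dynamic-palette idea can be sketched like this. The slot numbers and colour values below are illustrative assumptions, not the game’s actual entries; the point is that face colours are indices, so recolouring a landscape is just a table update:

```python
# Hedged sketch of a 16-colour palette with three dynamic entries that
# vary with the sentry count on the current landscape.

DYNAMIC_SLOTS = (12, 13, 14)  # assumed slot numbers, for illustration only

def build_palette(base_palette, sentry_count, variants):
    """Return a palette with the dynamic slots filled for this landscape.

    variants maps sentry count -> three replacement colours."""
    palette = list(base_palette)
    for slot, colour in zip(DYNAMIC_SLOTS, variants[sentry_count]):
        palette[slot] = colour
    return palette
```

In Augmentinel the equivalent table lives in a shader constant buffer, so the GPU performs the final index-to-colour lookup.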
Another area of extraction is the landscape itself, which is stored in a 32x32 block of memory. The landscape is also stored as vertices, to give a 31x31 landscape area. The original game draws the landscape tile by tile in strips, in back-to-front order so closer sections correctly obscure those further back. Augmentinel creates a landscape model from the data, which can be drawn with a single D3D DrawIndexed call, with all the z-order issues handled by the GPU. Landscape 0000 uses just 4250 vertices and 5766 indices (31 * 31 * 6), which is trivial by modern standards! I’ll leave details of landscape generation and storage for a future article [UPDATE: see sentland.py project for landscape generation details].
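The index count is easy to verify: each of the 31x31 tiles becomes two triangles of three indices each. Here is a minimal shared-vertex sketch; Augmentinel’s real mesh uses more vertices (4250 for landscape 0000), presumably because per-tile colours and normals force vertex duplication:

```python
# Build the index buffer for a 32x32 vertex grid as 31x31 two-triangle tiles.

GRID = 32            # vertices per side
TILES = GRID - 1     # tiles per side

def build_landscape_indices():
    indices = []
    for z in range(TILES):
        for x in range(TILES):
            i = z * GRID + x  # top-left vertex of this tile
            indices += [i, i + 1, i + GRID,             # first triangle
                        i + 1, i + GRID + 1, i + GRID]  # second triangle
    return indices
```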
The large text used for the title screen (“The Sentinel”) and the 8-digit secret code is actually stored as models in the landscape map. Code converts the bitmap text data into special block models used to create the text, then positions the camera far enough away to view it (with slight pitch and rotation). Augmentinel extracts these placed models for display in the same way, though the modern planar 3D projection means it is missing the slight curve found on the original display.
To give an impression of lighting in the original game, sloped edges of landscapes are coloured black or white, depending on whether they’re facing front/back or left/right (respectively). That simple trick is surprisingly effective at giving a convincing 3D landscape, especially once combined with the two-colour checkered pattern of flat tiles.
The limited colour palette (just 4 active colours on the BBC Micro) meant it wasn’t possible to light game models properly. Neighbouring faces with the same colour are drawn with an outline to prevent the faces blending together. You can see this effect on the chest of the robot and face of the sentinel and sentries. The outlining is encoded in the face colours, so it’s not something the drawing code needs to determine itself.
Augmentinel has the luxury of applying simple lighting to the whole scene. It uses a background ambient lighting level, with two diffuse lights positioned at the front-left and back-right of the landscape. The positions and intensities were chosen to give a similar landscape lighting appearance to the original game.
My sloped landscape faces are coloured by mixing grey with the prominent landscape colour, as determined by sentry count, and then applying normal lighting. This gives an overall colour that matches the original landscapes. The back faces of the landscape should appear darker, so they’re given only a fraction of the ambient light level. This makes it clear that you are viewing under the edge of the landscape. The model faces no longer need outlines in Augmentinel, as they are lit depending on their angle to the lights. This gives a cleaner and more natural appearance than the original game.
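The lighting described above amounts to a standard ambient-plus-Lambert model. This is a hedged sketch with illustrative constants, not Augmentinel’s actual light positions or intensities:

```python
# Ambient light plus two diffuse (Lambert) directional lights, roughly
# front-left and back-right of the landscape. All values are illustrative.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

AMBIENT = 0.3
LIGHTS = [  # (direction towards the light, intensity)
    (normalize((-1.0, 1.0, -1.0)), 0.5),  # front-left
    (normalize((1.0, 1.0, 1.0)), 0.5),    # back-right
]

def lit_colour(base_rgb, normal):
    """Scale a face colour by ambient plus clamped Lambert diffuse terms."""
    diffuse = sum(max(0.0, dot(normal, d)) * i for d, i in LIGHTS)
    level = min(1.0, AMBIENT + diffuse)
    return tuple(c * level for c in base_rgb)
```

Because a face’s brightness now varies with its angle to the lights, neighbouring same-coloured faces separate naturally, which is why the original game’s outlines are no longer needed.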
The VGA PC version also includes some simulated fog to help convey object distance. This is achieved using 16 gradations of the 16-colour EGA palette, with each step fading slightly more towards the sky colour. It’s used to darken the back of the preview landscape (black sky), and fade distant objects during the main game (blue sky).
Augmentinel applies exponential fog in the vertex shader, using the distance from the camera to each scene vertex. This results in smoother gradients than the VGA version, but can lead to subtle banding artefacts when moving the camera around, due to the change in distance to the view projection plane. The fog is manually reduced on the landscape preview and in sky view mode, as the greater viewing distance would otherwise make the whole scene too foggy.
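Exponential fog itself is a one-liner; the vertex colour is then blended towards the sky colour by the fog factor. The density constant below is illustrative and would be tuned per scene:

```python
import math

# Per-vertex exponential fog: the factor grows with distance from the camera,
# and the colour is blended towards the sky colour. Density is an assumption.

FOG_DENSITY = 0.05

def fog_factor(distance, density=FOG_DENSITY):
    """0.0 = no fog, approaching 1.0 = fully fogged."""
    return 1.0 - math.exp(-density * distance)

def apply_fog(colour, sky, distance, density=FOG_DENSITY):
    f = fog_factor(distance, density)
    return tuple(c + (s - c) * f for c, s in zip(colour, sky))
```

Reducing fog for the landscape preview and sky view is then just a matter of lowering the density for those states.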
Augmentinel follows the underlying game state to determine what it should do next. There are currently 9 internal states: Reset, TitleScreen, LandscapePreview, WrongCode, Game, SkyView, PlayerDead, ShowKiller, Complete.
Transitions between these states occur in the following hook handlers:
struct ISentinelEvents
{
    virtual void OnTitleScreen() = 0;
    virtual void OnLandscapeInput(int &landscape_bcd, uint32_t &secret_code_bcd) = 0;
    virtual void OnLandscapeGenerated() = 0;
    virtual void OnNewPlayerView() = 0;
    virtual void OnPlayerDead() = 0;
    virtual void OnInputAction(uint8_t &action) = 0;
    virtual void OnGameModelChanged(int id, bool player_initiated) = 0;
    virtual bool OnTargetActionTile(InputAction action, int &tile_x, int &tile_z) = 0;
    virtual void OnHideEnergyPanel() = 0;
    virtual void OnAddEnergySymbol(int symbol_idx, int x_offset) = 0;
    virtual void OnPlayTune(int n) = 0;
    virtual void OnSoundEffect(int n, int idx) = 0;
};
Here’s how they’re used in a typical game scenario.
The game starts in the Reset state, which loads and patches the snapshot, and extracts the 3D models. The game then runs at maximum emulation speed to advance to the TitleScreen state.
OnTitleScreen is called as the game is about to prompt for a key press on the title screen. The text for “The Sentinel” is extracted from the in-memory map into a model. This is displayed together with a sentinel model on top of the pedestal to give the familiar title screen. If the user continues we run the emulation at maximum speed to advance to the landscape number prompt.
OnLandscapeInput is called just before the user would normally be prompted to enter the landscape number. Instead we poke the current landscape number and secret code directly into memory, and skip past the input code. Again, we run the emulation at maximum speed, through the slow landscape generation process.
OnLandscapeGenerated indicates the landscape is complete, with all objects placed. We advance to the LandscapePreview state and show the preview of the landscape, with the trees removed. When the player continues we restore all placed models and advance to the Game state.
OnNewPlayerView is called when the player view becomes visible. This happens at the start of the game and after any player movement, such as a transfer or hyperspace. At this point we extract all the placed models from the map, to give a new snapshot of the world. We also find the current player model and set the camera position and orientation to give the correct player view. The emulated game is now running at normal speed, and is completely independent of the 3D view we’re showing. The state of the world remains valid until we’re notified of a change via a hook call.
OnGameModelChanged notifies us that a model in the world has changed, and we need to update our state. This happens when an object rotates or changes type, a new object is created, or an existing object is absorbed. We extract the new state of the changed object and queue up animations to represent the change. This could be a rotation or a fade in/out. Once complete we’re back in sync with the emulated game.
OnInputAction is the injection point for user input. Here we can request that the emulated game perform an action, but it’s still ultimately responsible for executing it. Any side-effects from the action (such as a change of view) will be reported back via hooks. Before returning from the hook we must synchronise the player view in the emulated game, in case an action uses it. This is important for robot placement, since the placed robot faces the player position (well, technically it’s 180 degrees from the current facing direction).
OnTargetActionTile is called when the player is interacting with the map via the pointer cross-hair. This is one of the few areas where Augmentinel replaces original game code functionality, due to calculation precision issues and differences in display projection. It also allows enhancements such as interacting with the objects directly, and gives pixel-perfect selection. We cast a ray from the current facing direction (or controller direction in VR), to determine which object we’re pointing at, and its landscape position. We then decide if the action is allowed, and notify the game of the result. If allowed the game will perform the remainder of the action, or if rejected it will trigger the error beep.
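A much simplified version of the terrain part of that ray cast can be sketched as a ray march over the heightmap. Augmentinel’s real version also tests object bounding volumes for direct selection; this illustration only finds the landscape tile under the pointer, and all names and step sizes are assumptions:

```python
# March a ray from the eye along the facing direction and report the first
# landscape tile whose surface height the ray drops below.

def target_tile(origin, direction, height_at, step=0.05, max_dist=64.0):
    """height_at(x, z) returns terrain height; returns (tile_x, tile_z) or None."""
    x, y, z = origin
    dx, dy, dz = direction
    dist = 0.0
    while dist < max_dist:
        x += dx * step
        y += dy * step
        z += dz * step
        dist += step
        if not (0 <= x < 31 and 0 <= z < 31):
            return None          # left the 31x31 landscape area
        if y <= height_at(x, z):
            return (int(x), int(z))
    return None
```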
OnPlayTune requests a given game tune is played, causing us to queue up the required sound sample. More importantly, the tune number also indicates a specific event has just occurred. If the hyperspace tune is played, we know the player has hyperspaced – either manually requested, or forced by a meanie. In that case we blank the player view and wait for the next hook event. OnNewPlayerView may be called to indicate a new location, OnPlayerDead could report that the player died trying to hyperspace, or OnPlayTune could be called again with the level complete tune. In the latter case we extract the new level number and secret code, and reset back to preview it.
This event-driven logic gives us control where we need it, but lets the game run as normal for most of the time.
The game timers control the overall game difficulty level, and they vary between the different versions of the game. The sentinel rotates every 10 seconds on the PC, 12 seconds on the BBC/C64, and 15 seconds on the CPC/Spectrum. All player actions have a time cost, putting you closer to the next sentinel rotation and the risk of being seen. Object creation and absorption each cost 2 seconds, just like the original 8-bit versions.
The game timers are driven from an interrupt handler, so they run at a constant rate. They also run whenever interrupts are enabled, including when the screen is blue while generating a new view after the player has moved! However, any due timers are only acted upon from non-interrupt processing, so you’ll often hear the sentinel turn just as a new view becomes visible and the normal game loop resumes.
Augmentinel attempts to maintain the same pacing by keeping the time costs the same. It also exposes manual control of the timer frequency by way of a difficulty setting. The default rotation rate is 12 seconds, like the BBC version, but if you find it too easy you can reduce that to 10 (Hard) or 8 (Very Hard). New players can increase it to 20 (Very Easy) while they get to grips with how to play.
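The difficulty setting boils down to a small lookup. The period values come from the text above; the “Normal” label for the default is my assumption:

```python
# Sentinel rotation period in seconds per difficulty setting.
# Periods are from the article; the "Normal" label is assumed.

ROTATION_SECONDS = {
    "Very Easy": 20,
    "Normal": 12,     # the BBC-style default
    "Hard": 10,
    "Very Hard": 8,
}

def rotations_per_minute(difficulty):
    return 60 / ROTATION_SECONDS[difficulty]
```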
The Sentinel always seemed like a VR game ahead of its time, and it’s great to finally feel like you’re in the world. Looking around is so much more satisfying than panning the view with the mouse.
The implementation was relatively straightforward using the OpenVR API, which requires rendering the scene separately for each eye to give the final stereo view for the headset. It did introduce some complications that I hadn’t considered in the flat version:
The camera position can’t be trusted. The player height is calibrated on each new player view, to align the player’s real world eye height with the robot view in the world. The player can abuse this to see further than they should, so the original robot eye position must always be used when determining target visibility.
The camera position can’t be forced. If we want something to be visible we may need to rotate the scene so it’s in front of the player’s current facing direction. This was needed after player death, where the killer is shown briefly.
The player pointer is independent of the player view, so it’s possible to target something that isn’t visible to the player, even if the player view is correctly aligned with the robot view. This required some additional tests during OnTargetActionTile processing.
Everything needs to be 3D to be convincing. In a flat view it’s only the final 2D projection that needs to look correct, but that’s not the case in VR. This causes minor issues on the title screen, where the sentinel on the pedestal is floating at an angle in front of the large background text. It was also a driving force in creating the extruded text models.
View effects need to make 3D sense when both eyes are combined for the stereo view. The death dissolve effect was originally applied evenly to each eye, which some play testers found unsettling. As a simple work-around I just changed the dissolve to a fade. The same issue applies to the object dissolve effect, but it’s a much smaller part of the player view and doesn’t seem to be a problem.
Aiming at distant objects is trickier than in flat mode. The changes made to allow targeting objects rather than tiles help with absorption, but creating a boulder on a distant tile is sometimes a challenge.
Questions? Please get in touch using the link below.