The aim of this document is to describe what SkyGFX does.
Since its inception SkyGFX has evolved quite a bit. The original mission was to port the PS2 graphics of GTA San Andreas to the PC version. GTA III-SA are built upon the RenderWare graphics engine, which calls its PS2 platform port "Sky" (named after Sony's SKY PS2 simulator), hence SkyGFX. Even though SA differs more drastically between platforms, GTA III and Vice City have their differences too, and thus SkyGFX for III and VC was created with much the same goal.
But PS2 graphics weren't enough. The Xbox releases of III and VC by R* Vienna (aka Neo) had much improved graphics over the PS2/PC versions and so the scope of the mod broadened. The initial hurdle to overcome was getting Direct3D 9 working in III and VC so that shaders could be used in a reasonable way. Out of that rwd3d9 was created, a library that mods can link against that extends RW with d3d9 features, mainly shaders. Sharptrails is another mod of mine that uses it. The Xbox version of SA is far less impressive compared to the PS2 and PC versions (it is mostly a more polished version of the PC port), but it also had some graphical differences, mainly vehicles, and they were integrated into SkyGFX as well.
But Xbox graphics weren't enough. In their mobile releases War Drum did some - depending on point of view - nice, interesting or questionable things. Most notable are the colour filter and vehicle rendering in SA. And so the scope broadened again.
But GTA III-SA weren't enough. Liberty City and Vice City were once again the setting of the games Liberty City Stories and Vice City Stories for PSP and PS2 by R* Leeds. Having moved away from RW for these ports, R* Leeds had to reimplement some things and they used that opportunity for some interesting graphics again. (The RSL engine they wrote is at the high level a simplified copy&paste version of RW. At the lower levels, i.e. the actual rendering, it is completely different.)
There are five post processing effects applied by the game to the screen after the main scene has been rendered that SkyGFX touches: the colour filter, radiosity, night vision, infrared vision and grain.
Here you have a matrix of combinations of Colour filters with Radiosity render passes (all using the PS2 map):
| |No Radiosity passes|One Radiosity pass|Two Radiosity passes|
|No Colour filter| | | |
|PS2 Colour filter| | | |
|PC/Xbox Colour filter| | | |
|Mobile Colour filter| | | |
Some more comparisons with the default PostFX settings for each platform:
The colours that control the effect come from the timecycle (Alpha/RGB 1/2). The mobile filter also uses an additional file, colorcycle.dat. NB: timecyc.dat is rather broken in the PC version; read about it in the timecycle section below.
The PS2 effect works like this: out = in*rgb1*2 + in*rgb2*2*alpha2*2, i.e. it uses PS2 colour modulation and blending.
The PC/Xbox effect works like this: out = in + in*rgb1*alpha1 + in*rgb2*alpha2
The Mobile effect is a bit more complicated, check the source code if you want to know the details.
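The PS2 and PC/Xbox formulas above can be sketched numerically. This is a minimal Python sketch, assuming colours normalized to [0.0, 1.0] with clamping after the sum; the function names are mine, not from the game:

```python
def clamp(x):
    """Clamp a colour channel to the displayable range."""
    return max(0.0, min(1.0, x))

def ps2_filter(c, rgb1, rgb2, alpha2):
    # out = in*rgb1*2 + in*rgb2*2*alpha2*2 (PS2 modulation doubles the result)
    return clamp(c * rgb1 * 2 + c * rgb2 * 2 * alpha2 * 2)

def pc_filter(c, rgb1, alpha1, rgb2, alpha2):
    # out = in + in*rgb1*alpha1 + in*rgb2*alpha2
    return clamp(c + c * rgb1 * alpha1 + c * rgb2 * alpha2)

# With rgb1 = 0.5 (128 on the PS2) and no second colour, the PS2
# filter leaves the image unchanged:
assert ps2_filter(0.25, 0.5, 0.0, 0.0) == 0.25
# The PC filter only ever adds on top of the input, so the same
# settings brighten the image instead:
assert pc_filter(0.25, 0.5, 1.0, 0.0, 0.0) == 0.375
```

This also shows why the two filters cannot look identical with the same timecycle values: the PS2 formula can darken (it replaces the input), while the PC/Xbox formula always starts from the unmodified input.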
Radiosity is only implemented in the PS2 version (there is leftover code in the other versions, but it is completely broken or just unfinished). The effect boosts the highlights and applies a strong blur to them, a sort of bloom. It is controlled by a number of variables, some of which can be set from stream.ini, the others from SkyGFX.
In the ini you can set:
One more setting is the highlight limit, the number above which colours are considered highlights. It is set from the timecycle. There's an override in the game, but it is not exposed by SkyGFX.
For this effect the current frame buffer is downsampled once (both dimensions are halved) for every filter pass. Using a blend mode only available on the PS2 the highlights are separated and darkened like this: tmp = in*2 - highlightLimit. Then this is added to the render buffer for every render pass like this: out = out + tmp*intensity/2.
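Per pixel, the two steps above can be sketched like this. This is a simplified Python sketch in the [0, 255] range; the downsampling and the resulting blur are omitted, and the clamping behaviour is my assumption:

```python
def clamp255(x):
    return max(0, min(255, x))

def extract_highlights(c, highlight_limit):
    # PS2-only blend mode: tmp = in*2 - highlightLimit.
    # Everything below the limit is darkened towards zero.
    return clamp255(c * 2 - highlight_limit)

def apply_radiosity(out, tmp, intensity):
    # Added once per render pass: out = out + tmp*intensity/2
    return clamp255(out + tmp * intensity // 2)

# A bright pixel (240) with limit 255 survives as a highlight...
assert extract_highlights(240, 255) == 225
# ...while a mid-grey pixel (100) is cut off entirely.
assert extract_highlights(100, 255) == 0
```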
Unlike in previous games R* implemented a custom pipeline to render vehicles. This pipeline is implemented differently on PS2, PC, Xbox and Mobile. SkyGFX implements everything. In addition some vehicle pipelines from other games are implemented, namely Neo reflections from Xbox III/VC and Leeds reflections from LCS and VCS. And since both the PC and Xbox code have their problems I implemented my own (fixed) take on them, the Specular pipeline.
The default SA vehicle effects are two different kinds of environment maps (EnvMap 1 and 2) and some sort of specular lighting. EnvMap1 normally uses vehicleenvmap128, but any other texture whose name doesn't begin with x can be used. It is normally used on bikes, planes and boats. EnvMap2 normally uses xvehicleenv128, but any other texture whose name begins with x can be used. It is normally used by cars and also needs a second set of texture coordinates. The way the effects are realized depends on the platform (e.g. EnvMap2 doesn't even use the second set of tex coords on PC).
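The texture-naming convention above (an x prefix selects EnvMap2) can be illustrated with a small sketch; the function name is mine, not from the game:

```python
def envmap_kind(texture_name):
    """Pick the vehicle environment map effect from the texture name.

    Textures whose name begins with 'x' drive EnvMap2 (used by cars,
    needs a second set of texture coordinates); all others drive
    EnvMap1 (bikes, planes, boats).
    """
    return "EnvMap2" if texture_name.startswith("x") else "EnvMap1"

assert envmap_kind("xvehicleenv128") == "EnvMap2"
assert envmap_kind("vehicleenvmap128") == "EnvMap1"
```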
R* also implemented a custom building pipeline for SA. Its main job is to interpolate between day and night prelighting, but (depending on platform) it also does a simple wet road effect (a second set of vertex alpha that allows the sky to partially shine through a road), an environment map effect (like EnvMap1 in the vehicle pipeline) and can animate texture coordinates.
The SkyGFX pipelines are rather similar, the differences are minute. The default PC pipeline is a bit different, the code hardly looks finished.
The grass got a custom render pipeline as well, but not in the PC port. The differences include adding the ambient light colour for lighting, PS2 colour modulation and no backface culling. Apart from the rendering there are some other differences, the most important one being a broken random number generator that broke the placement of the grass patches.
Ini settings are:
The vastly superior PS2 Alpha test has always caused R* some grief in the other ports. To work around it they introduced an IDE flag (64/0x40) in the PC port of GTA III that disables z-write per-object (instead of it being automatic and per-pixel as on PS2) for the pre-rendered shadow meshes. SkyGFX does its best to emulate the alpha test by rendering transparent meshes twice: first the pixels above the alpha reference value with z-write and then below the value without z-write. Hence I call it dual pass rendering.
The effects of this are rather noticeable on vegetation, but also vehicle windscreens can be affected. The latter was problematic on a few vehicles (like the Hydra) and the material's alpha was increased for the PC version.
|PC alpha test|
|PS2 alpha test|
The easy way to configure it is to use the global setting dualPass. If you want finer granularity, the following options override the global one:
Since texture modulation works differently on the PS2 you might expect some differences caused by that. And indeed there are, though not too many because this is actually rather well abstracted in RenderWare. Only where there is custom R* code that directly drives the PS2 are there differences. Most are in PostFX code and not discussed here, but it's noticeable in the building and grass pipelines as well.
Normally as the last step to calculating a vertex colour the lighting colour is multiplied by the material colour, and the material colour is set up by RW code such that the modulation difference is invisible to the programmer. When this multiplication is not the last step however, the resulting colour can actually brighten textures on the PS2 and this is precisely the case in the building and grass pipelines.
As with the dual pass there is a global setting: ps2Modulate. If you want finer granularity, you can override it:
pointlight fog, water, moon mask, corona ztest, procobj placement, shadows, pc car light (timecycle), clouds, gamma, neo drops, sun glare, lockon siphon
SA ships with three different timecycle files. PS2 and Xbox ship timecyc.dat for NTSC and timecycp.dat for PAL versions (they actually only differ in a single colour). PC and Mobile ship timecyc.dat, but the file is not the same as the console timecyc.dat. They also come with the console timecycp.dat file but it is not used. Unfortunately this PC timecyc.dat is broken in three ways, which is why you should NOT use it. Just replace it with timecycp.dat.
First of all one line is corrupt. RAINY_COUNTRYSIDE 8PM starts with '255' instead of '22 22 22' which causes the whole line to be parsed incorrectly.
Secondly it has set all PostFX alphas to 255. The console timecycle files have an alpha of 127 for most colours, which is almost opaque on PS2. To convert the alphas into the standard range (opaque = 255), all non-PS2 ports multiply them by 2 when they read the file. This of course does not work very well when the values are 255 to begin with. So all ports expect a file with PS2 alpha values (opaque = 128).
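The doubling on load can be sketched like this (the function name is mine; the clamp to 255 is an assumption about how out-of-range results behave):

```python
def load_postfx_alpha(file_alpha):
    # Non-PS2 ports expect PS2-range alphas (opaque = 128) in the
    # file and double them into the standard range (opaque = 255).
    return min(file_alpha * 2, 255)

# A correct console file (alpha 127) maps to nearly opaque:
assert load_postfx_alpha(127) == 254
# The broken PC file already stores 255, so the doubling saturates
# and every filter colour ends up fully opaque:
assert load_postfx_alpha(255) == 255
```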
Lastly it's missing the last column of numbers in all lines, which defines the intensity of the directional light. It happens to be read as 0 in this case, which causes the directional light to be completely disabled. This missing number is most likely a leftover from an earlier phase of development. The third colour in each line defines the colour of the directional light but it is actually not used. Presumably to save space R* uses a single value for the directional light in SA, and the PC file reflects a phase before that change. Unfortunately this disables the directional light for peds and vehicles. They must have noticed and introduced an additional hardcoded light just for vehicles (also used as specular light; unfortunately it overwrites D3D light 1, but SilentPatch fixes this). Consequently, with a correct timecyc.dat vehicles appear much too bright with the default PC pipeline. Both SkyGFX and SilentPatch fix this by disabling the light again.
In the Mobile port this is actually fixed by initializing the value with 1.28. The vehicle pipeline has no hardcoded light there.
A few of the platform differences can be explained by differences in the hardware. The most notable ones are the following:
In a pipeline data flows from one stage to the next. In our case we start with a 3D model, some matrices, some lights and some textures and end up with pixels on the screen. The general pipeline stages haven't changed much over the years but how they are implemented has. RenderWare and GTA were written at a time when the implementation could be rather different on different platforms. As a result it might be worthwhile to understand different implementations to see how they influenced game design decisions.
|Vertex stage||→||Pixel stage|
|early 90s PC||CPU||CPU||CPU||CPU||CPU||CPU|
|mid/late 90s PC||CPU||CPU||CPU||GPU fixed||GPU fixed||GPU fixed|
|early 00s PC||GPU fixed||GPU fixed||GPU fixed||GPU fixed||GPU fixed||GPU fixed|
|early/mid 00s PC||GPU shader||GPU fixed||GPU fixed||GPU fixed||GPU shader||GPU fixed|
|Xbox||GPU shader||GPU fixed||GPU fixed||GPU fixed||GPU shader||GPU fixed|
|PS2||VU||VU||VU||GS fixed||GS fixed||GS fixed|
This is an approximate description of what happens at each stage:
At first everything was done on the CPU. This gives you a lot of freedom but is also very slow.
Since you have a lot more pixels than vertices you want to optimize those first, so in the mid 90s we saw the introduction of graphics accelerators (3dfx Voodoo, ATI Rage, Nvidia RIVA 128) that did the rasterization and everything after it in hardware. This was faster but you could only do what the hardware allowed you to: the pipeline stages could only perform a fixed function. The vertex stage still had to happen on the CPU at the time.
So in the late 90s/early 2000s new graphics chips appeared (ATI Radeon R100, Nvidia GeForce 256) that also offloaded the vertex stages onto the GPU. As a result those pipeline stages also became fixed function.
Shortly after, the T&L stage and the pixel stage became programmable with the introduction of vertex and pixel shaders (ATI R200, Nvidia GeForce 3). This gives the programmer a lot more freedom. With subsequent generations of GPUs this pipeline model has only gotten more powerful.
Something to note is that some stages still perform a fixed function today. While the alpha test is usually done in the pixel shader in modern pipelines, blending is still fixed.
The architecture of the Xbox is essentially that of an early 2000s PC. But because the hardware is always the same, game developers could exploit its whole power. For PC games they had to support a wider variety of graphics chips, so those tend to be more conservative in what hardware features they use.
The PS2 on the other hand stands in the tradition of other gaming consoles. They have a lot more special purpose hardware and it shows. The general pipeline architecture looks kind of like that of an early/mid 90s PC, but instead of handling the vertex stage on the CPU, this is done on one of the two vector units (VU1) that are part of the EE (Emotion Engine). Rasterization and the pixel stage are handled by the GS (Graphics Synthesizer). So the general architecture is somewhere between a mid 90s and mid 2000s PC.
As mentioned above, even on modern GPUs some stages are fixed and that means that they cannot necessarily easily simulate the behaviour of that stage of another platform. This is exactly the case with the pixel stages on the PC and PS2, they are too different to simulate each other. PS2 texturing can be simulated by a pixel shader or even the fixed function pipeline. The alpha test and blending cannot.
At the pixel stage textures can be combined with the colours that were calculated at the vertex stage. One such combination is called modulation: channels of both colours are multiplied. How that exactly is done varies a bit though, I'll compare D3D's fixed function pixel stage with the PS2's GS here.
Let's assume we have a range of [0, 255] for each of RGB and A (internally this could well be normalized to [0.0, 1.0]). With the D3DTOP_MODULATE mode in D3D each channel is multiplied like this: (A * B)/255, i.e. any value modulated with 255 is unchanged. With the D3DTOP_MODULATE2X mode the final value is multiplied by 2, i.e. any value modulated with 127.5 is unchanged. This means textures can be brightened with this mode! (Results are clamped at 255.)
The GS manual says the calculation is (A * B)/128, i.e. any value modulated with 128 is unchanged. So the GS and D3DTOP_MODULATE2X do pretty much the same thing.
So this is a case where D3D can simulate the GS easily (with a pixel shader this is of course even more trivial). D3DTOP_MODULATE is the standard mode everywhere in RW and GTA so where there is a difference we can just use D3DTOP_MODULATE2X. Since the PS2 can only do the equivalent of D3DTOP_MODULATE2X, the colour range has to be scaled from [0-255] to [0-128] to achieve standard modulation. This is normally abstracted away by RW but in code written by R* this is not always the case. Since they only started writing custom code for SA this is of no concern for III and VC.
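The three modulation modes compared above can be written as integer arithmetic in the [0, 255] range (rounding is approximated with integer division here; real hardware may round differently):

```python
def d3d_modulate(a, b):
    # D3DTOP_MODULATE: (A*B)/255, neutral element is 255
    return (a * b) // 255

def d3d_modulate2x(a, b):
    # D3DTOP_MODULATE2X: result doubled, neutral element is ~128
    return min((a * b * 2) // 255, 255)

def gs_modulate(a, b):
    # PS2 GS: (A*B)/128, neutral element is 128
    return min((a * b) // 128, 255)

# 128 is (close to) the neutral element for both 2x modes:
assert gs_modulate(128, 200) == 200
assert d3d_modulate2x(128, 200) == 200
# Both 2x modes can brighten: modulating 200 with 200 exceeds 200
# and clamps at 255. Plain D3DTOP_MODULATE can only darken.
assert gs_modulate(200, 200) == 255
assert d3d_modulate(200, 200) < 200
```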
After the final pixel has been calculated it is written to the framebuffer. It can either replace the old value or be combined with the colour that is already there. The latter is called blending.
The PS2 GS blend equation is very different from the PC (D3D/OpenGL) blend equation. Some modes can be achieved with both, some only with one of them. The following are the general PC and PS2 blend equations. src is the new pixel, dst is the pixel in the frame buffer.
PC: dst = op(dst*D, src*S), where all values are considered to be in the range [0.0, 1.0], op is one of add, sub, invsub, max, min, and D, S are each one of 0, 1, src, 1-src, srcAlpha, 1-srcAlpha, dst, 1-dst, dstAlpha, 1-dstAlpha, blendfactor and a few others.
PS2: dst = (A - B)*C + D, where all values are considered to be in the range [0, 255] and the multiplication by C is scaled: x*C is defined as (x×C)/128. A, B, D are each one of src, dst or 0, and C is one of srcAlpha, dstAlpha or a constant value.
"Standard" alpha blending is a linear interpolation between src and dst by srcAlpha.
This can easily be achieved in both equations:
PS2: A := src, B := dst, C := srcAlpha, D := dst
→ dst = (src - dst)*srcAlpha + dst
PC: op := add, S := srcAlpha, D := 1-srcAlpha
→ dst = dst*(1-srcAlpha) + src*srcAlpha
However a blend mode used for SA Radiosity cannot:
PS2: A := dst, B := src, C := 128, D := dst
→ dst = (dst - src)*128 + dst = dst*2 - src
PC: impossible. No factor can ever be 2.
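The two worked examples above can be checked numerically. A sketch of the PS2 side in the [0, 255] range, with the C factor scaled by 1/128 as in the general equation:

```python
def ps2_blend(a, b, c, d):
    # dst = (A - B)*C + D, with the multiplication by C scaled by 1/128
    return ((a - b) * c) // 128 + d

# Standard alpha blending: A=src, B=dst, C=srcAlpha, D=dst.
# With srcAlpha = 128 (fully opaque on the PS2) the source replaces
# the destination, as expected:
src, dst = 200, 40
assert ps2_blend(src, dst, 128, dst) == 200

# The SA radiosity mode: A=dst, B=src, C=128, D=dst, giving
# dst*2 - src. No PC blend factor can ever be 2, so D3D cannot
# express this.
src, dst = 30, 100
assert ps2_blend(dst, src, 128, dst) == 2 * dst - src  # 170
```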
At the alpha test a pixel's alpha value is compared against a reference value. Depending on whether the pixel's alpha is less than, equal to or greater than the reference value, the pixel fails the alpha test. This is used to weed out pixels that are "too transparent". When a pixel fails the test, hardware differences come into play.
On the PC a pixel that fails is completely discarded. This can either be the result of the fixed function alpha test or of a pixel shader discarding a pixel.
On the PS2 on the other hand the pixel need not be completely discarded. The programmer can specify what happens: discard completely, write colour and alpha, write colour only, write depth only. In GTA colour is always written and depth is discarded when the pixel fails. With PC hardware the decision whether to write to the Z-buffer can only be made per drawcall, not per pixel (not even in a shader).
This limitation of PC hardware, which still exists, caused R* to resort to some hacks to render some objects as intended, not without issues either. SkyGFX can simulate the PS2 alpha test by drawing transparent geometry in two passes: first all pixels above the reference value are drawn with Z-write enabled, then all pixels below the reference value are drawn with Z-write disabled.
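The two passes can be sketched per pixel. This is a conceptual Python sketch; real code sets the alpha test and Z-write render states once per draw call rather than looping over pixels, and whether the comparison is > or >= depends on the configured alpha function (>= is assumed here):

```python
def dual_pass(pixel_alphas, alpha_ref):
    """Emulate the PS2 alpha test (colour always written, depth only
    for sufficiently opaque pixels) with two passes."""
    depth_written = {}
    # Pass 1: alpha test passes pixels with alpha >= ref, Z-write on
    for i, alpha in enumerate(pixel_alphas):
        if alpha >= alpha_ref:
            depth_written[i] = True
    # Pass 2: alpha test passes pixels with alpha < ref, Z-write off
    for i, alpha in enumerate(pixel_alphas):
        if alpha < alpha_ref:
            depth_written[i] = False
    return [depth_written[i] for i in range(len(pixel_alphas))]

# An opaque leaf texel writes depth, a translucent edge texel does
# not, yet both end up with their colour on screen:
assert dual_pass([255, 40, 128], 128) == [True, False, True]
```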