
Klonoa and its distinct 2.5D camera system


Klonoa: Door to Phantomile (and its Wii remake) is a side-scrolling platform game viewed from a “2.5D” perspective. The player moves the protagonist, Klonoa, along a path in a two-dimensional fashion, but the game is rendered in three dimensions. This allows the path followed to curve and for the player to interact with objects outside of the path.


The term “2.5D” is also applied (though it’s mathematically incorrect) to games that use polygonal 3D graphics to render the world and characters while restricting the gameplay to a 2D plane. The term is rather loose because it generally refers to any game with 3D graphics that features any sort of 2D playing style.

For example, the Crash Bandicoot games on the PlayStation were considered 2.5D because, despite the 3D graphics, most levels are not as free-roaming as their competitor at the time, Super Mario 64. There were even some levels where you can only traverse left and right (except perhaps a part at the beginning and end where you move to and from your goal).


The main problem is that calling Crash Bandicoot 2.5D rests on shallow aspects such as level layout and the camera perspective of those levels. I’m not saying those aspects aren’t important, but in this case they alone don’t make it a 2.5D game.


The New Super Mario Bros. games are also considered examples of the sub-genre: they use 3D models and animations, but other than that they’re strictly 2D, and the 3D parts are mere aesthetics. Layout, design, play style and controls are all 2D. Street Fighter IV is another game considered 2.5D for similar reasons, with 2D gameplay coupled with 3D rendering.

I consider Klonoa to be the purest example of the sub-genre because the combined design of the level layout, the gameplay and especially the camera angle is 2D with 3D elements thrown in, which is essentially the textbook definition of the term.


Like many platformers, the camera interpolates along with the character’s ever-changing position, much as in other popular platformers like Mario. However, there are points where you end up turning because the level is not a straight line. When that happens, the camera stays parallel to the character while maintaining a fixed distance throughout, even in moments when your character jumps towards the screen; this is an example of a dynamic camera angle.
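To make that concrete, here’s a rough sketch of how such a rail camera can be computed (my own reconstruction for illustration, not Namco’s actual code): the camera sits at a fixed offset along the path’s sideways normal, so it stays parallel to Klonoa however the path bends. The path function and distances below are made up.

    import numpy as np

    def rail_camera(path, t, distance=8.0, height=2.0):
        """Place the camera at a fixed offset from a curved 2.5D path.

        path: callable mapping path parameter t to a 3D point (numpy array).
        The camera sits 'distance' units along the path's sideways normal,
        so the view stays parallel to the character as the path curves.
        """
        eps = 1e-3
        pos = path(t)
        tangent = path(t + eps) - path(t - eps)      # finite-difference tangent
        tangent /= np.linalg.norm(tangent)
        up = np.array([0.0, 1.0, 0.0])
        normal = np.cross(tangent, up)               # sideways direction
        normal /= np.linalg.norm(normal)
        eye = pos + normal * distance + up * height  # camera position
        return eye, pos                              # look from eye toward the player

    # Example: a gently curving path through a level
    curve = lambda t: np.array([t, 0.0, np.sin(t * 0.3) * 5.0])
    eye, target = rail_camera(curve, t=10.0)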

I chose this game in particular because it’s an example of how camera dynamics can, in a sense, create a new genre. As in movies, camera work doesn’t just give the audience a view of the scene; it provides a whole new perspective.


Uses of bloom and blur in games

Bloom, which is also called light bloom or glow, is a computer graphics effect used in video games, demos and high dynamic range rendering (HDR) to reproduce an imaging artifact of real-world cameras. The effect produces fringes (or feathers) of light extending from the borders of bright areas in an image, contributing to the illusion of an extremely bright light overwhelming the camera or eye capturing the scene.


The physical basis of bloom is that lenses can never focus perfectly: even a perfect lens will convolve the incoming image with an Airy disc, the diffraction pattern produced by passing a point light source through a circular aperture. Under normal circumstances, these imperfections are not noticeable, but an intensely bright light source will cause them to become visible. As a result, the image of the bright light appears to bleed beyond its natural borders.


The Airy disc function falls off very quickly but has very wide tails. As long as the brightness of adjacent parts of the image is roughly in the same range, the blurring caused by the Airy disc is not particularly noticeable; but where very bright parts sit next to relatively darker parts, the tails of the Airy disc become visible and can extend far beyond the extent of the bright part of the image.

In HDR images, the effect can be reproduced by convolving the image with a windowed kernel of an Airy disc, or by applying a Gaussian blur to simulate the effect of a less perfect lens, before converting the image to fixed-range pixels.
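A minimal sketch of that idea, assuming an HDR buffer stored as floats; the threshold, blur width and strength values are arbitrary choices of mine, and a Gaussian stands in for the true Airy kernel:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bloom(hdr, threshold=1.0, sigma=8.0, strength=0.6):
        """Approximate bloom on an HDR image (HxWx3 floats, values may exceed 1).

        Over-bright regions are isolated, blurred, and added back before the
        conversion to fixed-range pixels.
        """
        bright = np.maximum(hdr - threshold, 0.0)    # keep only over-bright light
        halo = np.stack([gaussian_filter(bright[..., c], sigma) for c in range(3)],
                        axis=-1)                     # blur each colour channel
        out = hdr + strength * halo                  # light bleeds past its borders
        return np.clip(out / (1.0 + out), 0.0, 1.0)  # crude tonemap down to LDR

    hdr_frame = np.random.rand(480, 640, 3).astype(np.float32) * 4.0  # stand-in HDR input
    ldr_frame = bloom(hdr_frame)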


The effect cannot be fully reproduced in non-HDR imaging systems, because the amount of bleed depends on how bright the bright part of the image is.

As an example, when a picture is taken indoors, the brightness of outdoor objects seen through a window may be 70 or 80 times brighter than objects inside the room. If exposure levels are set for objects inside the room, the bright image of the windows will bleed past the window frames when convolved with the Airy disc of the camera being used to produce the image.


Current-generation gaming systems can render 3D graphics using floating-point frame buffers in order to produce HDR images. To produce the bloom effect, the HDR image in the frame buffer is convolved with a convolution kernel in a post-processing step before conversion to RGB space. The convolution step usually calls for a large Gaussian kernel that is not practical for real-time graphics, so programmers resort to approximation methods.
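One common approximation exploits the fact that a Gaussian is separable: a horizontal 1D pass followed by a vertical 1D pass gives the same result as a full 2D kernel, at 2(2k+1) taps per pixel instead of (2k+1)². A sketch of that trick (the parameter values are my own, not from any particular engine):

    import numpy as np

    def gaussian_weights(radius, sigma):
        x = np.arange(-radius, radius + 1, dtype=np.float32)
        w = np.exp(-(x * x) / (2.0 * sigma * sigma))
        return w / w.sum()                       # weights must sum to 1

    def separable_blur(img, radius=8, sigma=4.0):
        """Two 1D passes (2k+1 taps each) replace one (2k+1)^2-tap 2D kernel."""
        w = gaussian_weights(radius, sigma)
        # horizontal pass, then vertical pass, applied per channel
        tmp = np.apply_along_axis(lambda r: np.convolve(r, w, mode='same'), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, w, mode='same'), 0, tmp)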

Ico was one of the first games to use the bloom effect. Bloom was popularized within the game industry in 2004, when an article on the technique was published by the authors of Tron 2.0. Bloom lighting has been used in many games, modifications, and game engines such as Quake Live, Cube 2: Sauerbraten and the Spring game engine. The effect is popular in current-generation games, and is used heavily in PC, Xbox 360 and PlayStation 3 games as well as Nintendo GameCube and Wii releases such as The Legend of Zelda: Twilight Princess, Metroid Prime, and Metroid Prime 2: Echoes.

A Gaussian blur is one of the most useful post-processing techniques in graphics, yet I somehow find myself hard pressed to find a good example of a Gaussian blur shader floating around on the interwebs. The theory behind its value generation can be found in GPU Gems 3, Chapter 40 (“Incremental Computation of the Gaussian” by Ken Turkowski).

God of War III’s graphics engine and its various implementations

I’ve been playing a lot of God of War III lately, and thanks to my Intermediate Computer Graphics course I couldn’t help but consider all the shaders being used. In God of War III, the detail in texturing and geometry is not just another step in graphics rendering, but one giant leap for the industry in terms of technology.


Programmable pixel shaders add textures and effects that give a whole new dimension to the quality of the final work. It’s a true generational leap, and performance of the new game, in terms of frame-rate, is in the same ballpark as the previous two God of War titles.

For the characters themselves, concept art and a low-poly mesh from Maya are handed off to the 3D modellers, who build up the models using a sculpting tool known as ZBrush. These models are then given detail, painted in via Photoshop, before being passed along to the next stages in the art pipeline: the character riggers and animators.


Kratos himself is a very detailed model. It’s interesting to note that his raw polygon count is considerably lower than the 35,000 or so that comprise the in-game model of Drake in Uncharted 2, but significantly higher than the PS2-era Kratos, who had only 5,000 polygons. He had about three textures on the PlayStation 2, and I think he has at least 20 textures on him now; the animation data on him is probably about six times as big.


Kratos is a big guy, but did you know that, given the intricacy of his model, it would take the memory of two PlayStation 2s just to hold him? That’s not even counting all the weapons he has to alternate between. Eat your heart out, Link.


Each character gets a normal map, diffuse map, specular map, gloss (power) map, ambient occlusion map, and a skin shader. Layered textures were used to create more tiling variety, with environment maps used where needed.

A new technique known as blended normal mapping adds to the realism of the basic model and hugely enhances the range of animation available. Muscles move convincingly, and facial animations convey the hatred and rage of Kratos in a way we’ve never seen before.

The system operates at such a level of realism that wrinkles in the character’s skin are added and taken away as joints within the face of the model are manipulated. The musculature simulation is so accurate that veins literally pop into view on Kratos’s arms as he moves them around.
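Sony Santa Monica hasn’t published the actual shader, but the core idea of blended normal mapping can be sketched like this: a “wrinkled” normal map is blended over the neutral one, weighted by how far the facial joints have moved from the neutral pose. Everything below is my own illustrative approximation:

    import numpy as np

    def blend_wrinkles(base_normal, wrinkle_normal, weight):
        """Blend a neutral-pose normal map with a 'compressed skin' wrinkle map.

        base_normal, wrinkle_normal: HxWx3 tangent-space normals in [-1, 1].
        weight: HxWx1 mask in [0, 1], driven by how far the facial joints are
        bent away from the neutral pose (a brow-furrow weight, for instance).
        """
        n = (1.0 - weight) * base_normal + weight * wrinkle_normal  # lerp per pixel
        return n / np.linalg.norm(n, axis=-1, keepdims=True)        # renormalize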

In terms of character movement, over and above the pre-defined animations created by the team, the God of War technical artists also created secondary animation code. Why hand-animate hair, or a serpent’s tail, when the PS3 itself can mathematically calculate the way it should look? The system is called Dynamic Simulation, and its effects are subtle but remarkable, accurately generating motion that previously took the animators long man-hours to replicate.

From God of War II to God of War III they’ve used Dynamic Simulation more and more for secondary animation on the characters. In previous games the hair or the cloth would be stiff, modelled into the creatures, but now they’re actually adding motion to those pieces, so you will see hair and cloth moving.

“Towards the end of the previous game, in collaboration with Jason Minters, I created this dynamic system that uses the Maya hair system to drive a series of joints,” adds technical artist Gary Cavanaugh. “Each of the snakes on the gorgon’s head is independently moving. The animator did not have to individually pose all of these animations but they do have control over the physics… it improves a lot of the workflow for animators.”

The tech art team bridges the gap between artists and coders: a ‘zipper tech’ tool lets them create animated wounds with belching guts, while a bespoke animation rig drives the gorgon’s tail.

One of the most crucial elements of the cinematic look of God of War III is derived from the accomplished camerawork. Similar to previous God of War epics – and in contrast to Uncharted 2 – the player actually has very little control over the in-game viewpoint. Instead, Sony Santa Monica has a small team whose job it is to direct the action, similar to a movie’s Director of Photography.

Think about it: so long as the gameplay works, and works well, having scripted camera events ensures that the player gets the most out of the hugely intricate and beautifully designed art that the God of War team has put together. When running from point A to point B, why focus the camera on a piece of ground and wall when instead it can pan back to reveal a beautiful, epic background vista?

Perhaps most astonishingly of all, the final God of War III executable file that sits on the Blu-ray is just 5.3MB in size – uncompressed, and including SPU binaries – in a project that swallows up a mammoth 35GB of Blu-ray space (40.2GB for the European version with its support for multiple languages).

Another core part of God of War III‘s cinematic look and feel comes from the basic setup of the framebuffer, and the implementation of HDR lighting. Two framebuffer possibilities for HDR on the PlayStation 3 include LogLUV (aka NAO32, used in Uncharted and Heavenly Sword), and RGBM: an alternative setup that has found a home in Uncharted 2 and indeed in God of War III.

The basic technical setups for both formats are covered elsewhere but in terms of the final effect and what it means for the look of the game, the result is a massively expanded colour palette which gifts the artists with a higher-precision range of colours in which to create a unique, stylised and film-like look.

Opting for the RGBM setup over LogLUV means a significant saving in processing, although some precision is lost. The degree of that loss isn’t exactly apparent to the human eye, and we can assume it becomes even less of an issue bearing in mind that the final image is transmitted to your display downscaled over the 24-bit RGB link in the HDMI port.
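For reference, a textbook RGBM encode/decode looks roughly like the following. The range constant is a typical choice of mine, not God of War III’s actual value:

    import numpy as np

    RGBM_RANGE = 6.0  # maximum HDR value representable; 6-8 is a common choice

    def rgbm_encode(hdr):
        """Pack an HDR colour (floats, possibly > 1) into 8-bit-friendly RGBM."""
        rgb = hdr / RGBM_RANGE
        m = np.clip(rgb.max(axis=-1, keepdims=True), 1e-6, 1.0)
        m = np.ceil(m * 255.0) / 255.0           # quantize the multiplier upward
        return np.concatenate([np.clip(rgb / m, 0.0, 1.0), m], axis=-1)

    def rgbm_decode(rgbm):
        # colour = stored RGB, rescaled by the multiplier in the fourth channel
        return rgbm[..., :3] * rgbm[..., 3:4] * RGBM_RANGE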


The filmic look of God of War III is boosted via effective motion blur, with both camera and per-object implementations in the game.

In terms of post-processing effects, the game is given an additional boost in realism thanks to an impressive implementation of motion blur. Superficially, it’s a similar system to that seen in previous technological showcases like Uncharted 2: Among Thieves and Killzone 2, and helps to smooth some of the judder caused by a frame-rate that can vary between anything from 30 frames per second to 60.

Most games that implement motion blur do so just on a “camera” basis – that is, the whole scene is processed – an effect of variable effectiveness in terms of achieving a realistic look.

According to Sony Santa Monica’s Ken Feldman, motion blur is calculated not just on the camera, but on an individual object and inner object basis too.

God of War’s camera and object motion blur is a subtle but effective contribution to the cinematic look of the game; slowed down, the effect is much easier to analyse.

The basics of the motion blur system effectively mimic what we see on the cinema screen. Movies run at only 24 frames per second, yet look smoother than that. While filming, the shutter of the camera stays open for around 0.04 seconds, and during that window of time movement in the captured image is blurred. It’s that phenomenon the tech seeks to mimic in God of War III: more cinematic, more realistic.
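A simple gather-style sketch of the idea, blurring each pixel along a per-pixel velocity vector; this is a generic approximation, not Sony Santa Monica’s implementation:

    import numpy as np

    def motion_blur(scene, velocity, samples=8, shutter=1.0):
        """Blur each pixel along its screen-space velocity vector.

        scene:    HxWx3 colour buffer (floats).
        velocity: HxWx2 per-pixel motion in pixels/frame (camera + object motion).
        shutter:  fraction of the frame the virtual shutter stays open.
        """
        h, w, _ = scene.shape
        ys, xs = np.mgrid[0:h, 0:w]
        out = np.zeros_like(scene)
        for i in range(samples):
            t = (i / (samples - 1) - 0.5) * shutter        # spread over the shutter window
            sx = np.clip((xs + velocity[..., 0] * t).astype(int), 0, w - 1)
            sy = np.clip((ys + velocity[..., 1] * t).astype(int), 0, h - 1)
            out += scene[sy, sx]                           # accumulate along the motion path
        return out / samples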

Initially the game used the RSX chip to carry out a traditional 2x multisampling anti-aliasing effect. This, combined with the game’s lack of high-contrast edges, produced an extremely clean look in last year’s E3 demo. For the final game, the Sony Santa Monica team implemented a solution that goes way beyond that.

MLAA-style edge-smoothing looks absolutely sensational in still shots, but the effect can deteriorate in motion with pixel-popping artifacts. In God of War III this only becomes especially obvious in a handful of scenes.

According to director of technology Tim Moss, the God of War III team worked with the Sony technology group in the UK to produce an edge-smoothing technique for the game that the developers call MLAA, or morphological anti-aliasing. Indeed, Moss’s colleague Christer Ericson took us to task on the specifics of MLAA a few months back in this DF blog post, revealing that the team put extensive research into this in search of their own solution.

“The core implementation of the anti-aliasing was written by some great SCEE guys in the UK, but came very late in our development cycle making the integration a daunting task,” adds senior staff programmer Ben Diamand.

The specifics of the implementation are still unknown at this time (though Ken Feldman suggests it “goes beyond” the papers Ericson spoke about in the DF piece), but the bottom line is that the final result in God of War III is simply phenomenal: edge-aliasing is all but eliminated, and the sub-pixel jitter typically associated with this technique has been massively reduced compared to other implementations we’ve seen.

The custom anti-aliasing solution is also another example of how PlayStation 3 developers are using the Cell CPU as a parallel graphics chip working in tandem with the RSX. The basic theory is all about moving tasks typically performed by the graphics chip over to the Cell. Post-processing effects in particular port across well.

The more flexible nature of the CPU means that while such tasks can be more computationally expensive, you are left with a higher-quality result. The increased latency incurred can be reduced by parallelising across multiple SPUs.

In the case of God of War III, frames take between 16ms and 30ms to render. The original 2x multisampling AA solution took a big chunk of rendering time, at 5ms. Now, the MLAA algorithm takes 20ms of CPU time. But it’s running across five SPUs, meaning that overall latency is a mere 4ms. So the final result is actually faster, and that previous 5ms of GPU time can be repurposed for other tasks.

While the detail that the Sony Santa Monica team has put into the characters and environments is clearly immense, it’s the combination with the pure rendering tech that gives the game its state-of-the-art look and feel. The new God of War engine thrives in its handling of dynamic light sources, for example.

God of War III triumphs in handling dynamic lighting, with up to 50 lights per game object; Helios’ head is the most obvious example of the player directly interfacing with dynamic lighting. It’s one of the big features of the game’s engine, and notably the team is not using a deferred lighting scheme. What they did was place lights in Maya and have them update in real-time in the game on the PS3; it’s like being able to paint with lights.
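In a forward scheme like this, each object’s shader simply loops over the lights affecting it and accumulates their contributions. A minimal sketch, with a made-up light format and a basic Lambert term:

    import numpy as np

    def shade(position, normal, albedo, lights):
        """Forward-shade one surface point against every light affecting it.

        lights: list of (light_position, colour, radius) tuples, up to ~50 per
        object as in God of War III. No deferred pass: each light is just
        another term accumulated in the object's shader.
        """
        colour = np.zeros(3)
        for light_pos, light_colour, radius in lights:
            to_light = light_pos - position
            dist = np.linalg.norm(to_light)
            ndotl = max(np.dot(normal, to_light / dist), 0.0)  # Lambert term
            falloff = max(1.0 - dist / radius, 0.0)            # simple attenuation
            colour += albedo * light_colour * ndotl * falloff
        return colour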

Where there is light, there is a shadow. Or at least there should be. In the majority of videogames, shadowing tech is fairly basic. Producing realistic shadows is computationally expensive, hence we get a range of ugly artifacts as a result: serrated edges that look ugly up close, or cascaded shadow maps that transition in quality in stages right before your eyes.


God of War III stands out in this regard simply because you don’t tend to notice the shadows. They’re realistic. The human eye is drawn to elements that stick out like a sore thumb, and that includes shadows. State-of-the-art techniques result in a very natural look. The result is subtle and it works beautifully, creating a visual feast for players to enjoy as they play a game with graphics that even surpass blockbuster movies.

Shaders, the 3D Photoshop: Part 2

In my previous blog post, I went over the many algorithms used in both 2D and 3D computer graphics and talked about how they are essentially the same. We’ll use before-and-after screenshots from my game Under the Radar that I edited in Photoshop.



Drop shadows in Photoshop work on the same principle as shadow mapping, which checks whether a point is visible from the light or not. If a point is visible from the light, it’s obviously not in shadow; otherwise it is. The basic shadow mapping algorithm can be described as briefly as this (a short code sketch follows the steps below):

– Render the scene from the light’s view and store the depths as a shadow map

– Render the scene from the camera and compare depths; if the current fragment’s depth is greater than the stored shadow depth, the fragment is in shadow
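Here’s that two-pass test expressed as a minimal sketch. The small bias term is a standard trick (not mentioned above) to avoid “shadow acne” from depth precision:

    import numpy as np

    def in_shadow(frag_light_space, shadow_map, bias=0.002):
        """Depth-compare one fragment against the light's depth map.

        frag_light_space: (x, y, depth) of the fragment projected into the
        light's view, with x/y already mapped to shadow-map texel coordinates.
        """
        x, y, depth = frag_light_space
        occluder_depth = shadow_map[int(y), int(x)]  # pass 1 stored this depth
        return depth - bias > occluder_depth         # pass 2: behind an occluder?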

In some instances, drop shadows are used to make objects stand out from the background with an outline; in shaders this is done with Sobel edge filters.

The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image.

In theory at least, the operator consists of a pair of 3×3 convolution kernels. One kernel is simply the other rotated by 90°. This is very similar to the Roberts Cross operator.

These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined together to find the absolute magnitude of the gradient at each point and the orientation of that gradient.
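A sketch of the operator as described, with Gy obtained by rotating Gx by 90°:

    import numpy as np
    from scipy.ndimage import convolve

    def sobel_magnitude(gray):
        """Approximate gradient magnitude of a grayscale image (HxW floats)."""
        gx_kernel = np.array([[-1, 0, 1],
                              [-2, 0, 2],
                              [-1, 0, 1]], dtype=np.float32)
        gy_kernel = gx_kernel.T            # the same kernel rotated 90 degrees
        gx = convolve(gray, gx_kernel)     # horizontal gradient component
        gy = convolve(gray, gy_kernel)     # vertical gradient component
        return np.hypot(gx, gy)            # absolute gradient magnitude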

In Photoshop, filters are added to images to add noise and alter the look. The closest equivalent in shaders is normal mapping. Normal maps are images that store the direction of surface normals directly in the RGB values of the image. They are much more accurate than bump maps: rather than only simulating a pixel being offset from the face along a line, they can simulate that pixel being moved in any direction, in an arbitrary way. The drawback is that unlike bump maps, which can easily be painted by hand, normal maps usually have to be generated in some way, often from higher-resolution geometry than the geometry you’re applying the map to.

Normal maps in Blender store a normal as follows:

  • Red maps from (0-255) to X (-1.0 – 1.0)
  • Green maps from (0-255) to Y (-1.0 – 1.0)
  • Blue maps from (0-255) to Z (0.0 – 1.0)

Since normals all point towards the viewer, negative Z values are not stored. Some engines instead map blue values (128–255) to (0.0 – 1.0); that latter convention is used in Doom 3, for example.
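Decoding such a map back into a usable normal is straightforward; this sketch follows the Blender-style convention from the list above:

    import numpy as np

    def decode_normal(rgb):
        """Unpack a tangent-space normal from 0-255 RGB, Blender-style.

        Red and green map to X and Y in [-1, 1]; blue maps to Z in [0, 1],
        since normals facing away from the viewer are never stored.
        """
        x = rgb[..., 0] / 255.0 * 2.0 - 1.0
        y = rgb[..., 1] / 255.0 * 2.0 - 1.0
        z = rgb[..., 2] / 255.0                   # only the positive half
        n = np.stack([x, y, z], axis=-1)
        return n / np.linalg.norm(n, axis=-1, keepdims=True)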

Those are the majority of shader effects that mirror Photoshop effects. There’s also colour adjustment, which can be done with RGB-to-HSL conversion shaders, along with other kinds of effects.

Shaders, the 3D Photoshop

The simplest way to describe shaders is that they are the Photoshop of 3D graphics: both are used to create effects that enhance lighting and mapping, to make images more vivid and lively, and to give bad photographers, artists and modelers a chance to redeem their miserable work.

Perhaps the greatest thing they have in common is the algorithms used to execute their operations; they’re not just similar, they’re the exact same math operations.


Their primary difference is that Photoshop is used to manipulate 2D images while shaders alter 3D scenes; however, both end up as images made of pixels.


First, the image must be processed, but how? We must define a generic method to filter the image: convolve it with a small kernel of weights.


All the elements in the kernel must sum to 1, so we normalize by dividing each element by that sum, in the same way we normalize a vector.

The central element of the kernel (6 in the example below) is placed over the source pixel, which is then replaced with a weighted sum of itself and the pixels nearby.
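Here’s that filtering process as a sketch. The kernel from the original figure isn’t recoverable, so I’ve assumed a simple 3×3 smoothing kernel whose central element is 6, matching the description above:

    import numpy as np

    def filter_image(img, kernel):
        """Replace each pixel of a grayscale image with a weighted sum of
        itself and its neighbours."""
        kernel = kernel / kernel.sum()           # normalize: weights sum to 1
        kh, kw = kernel.shape
        pad_h, pad_w = kh // 2, kw // 2
        padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)), mode='edge')
        out = np.zeros_like(img, dtype=np.float32)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                region = padded[y:y + kh, x:x + kw]  # centre element over (y, x)
                out[y, x] = (region * kernel).sum()
        return out

    # An assumed 3x3 smoothing kernel with a heavier centre weight of 6
    kernel = np.array([[1, 2, 1],
                       [2, 6, 2],
                       [1, 2, 1]], dtype=np.float32)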

That’s how it works for images normally, but what about in shaders?

Normally we do forward rendering, a method that has been in use since the early days of polygon-based 3D rendering. The scene is drawn in several passes: the scene is first culled against the view frustum, then each culled renderable is drawn with the base lighting component (ambient, light probes, etc.).


The problem with shaders is that a fragment/pixel shader outputs a single colour at a time. It does not have access to its neighbours, so convolution is not directly possible.

The trick is that the first pass is stored in a Frame Buffer Object (i.e. all the colour ends up in a texture), and we can sample any pixel value in a texture!

Digital images are created to be displayed on our computer monitors. Due to the limits of human vision, these monitors support up to 16.7 million colours, which translates to 24 bits per pixel. Thus, it’s logical to store numeric images to match the colour range of the display. For example, well-known file formats like BMP or JPEG traditionally use 16, 24 or 32 bits for each pixel.


Each pixel is composed of three primary colours: red, green and blue. So if a pixel is stored as 24 bits, each component value ranges from 0 to 255. This is sufficient in most cases, but such an image can only represent a 256:1 contrast ratio, whereas a natural scene exposed in sunlight can exhibit a contrast of 50,000:1. Most computer monitors have a specified contrast ratio between 500:1 and 1000:1.

High Dynamic Range (HDR) involves the use of a wider dynamic range than usual. That means that every pixel represents a larger contrast and a larger dynamic range. Usual range is called Low Dynamic Range (LDR).


HDR is typically employed in two applications: imaging and rendering. High dynamic range imaging is used by photographers and movie makers; it’s focused on static images, where you have full control and unlimited processing time. High dynamic range rendering focuses on real-time applications like video games and simulations.


Since this is getting quite long, I’ll have to explain the rest in another blog post. So stay tuned for part two, where we will go over some of the Photoshop and shader effects, how they’re the same, and the algorithms behind them.