
Navigation mesh as used in Insomniac Games

A navigation mesh, or navmesh, is an abstract data structure utilized in AI applications to aid agents in path-finding through vast spaces. Meshes that don’t map to static obstacles in the environment they model offer the additional advantage that agents with access to the mesh will not consider these obstacles in path-finding, reducing computational effort and making collision detection between agents and static obstacles moot. Meshes are typically implemented as graphs, opening their use to a large number of algorithms defined on these structures.

One of the most common uses of a navigation mesh is in video games, to describe the paths that a computer-controlled character can follow. It is usually represented as a volume or brush that is processed and computed during level compilation.

Games strive for a more immersive and fun experience, and non-playable characters (NPCs) play a huge role in that: giving NPCs the ability to perceive the world around them and react to the situation is a fundamental element of a game.

How an AI navigates is often how design gives personality to an NPC. Designers should not be burdened with movement implementation; they should be free to worry about the higher-level aspects of the game, as it should be. The industry is moving away from heavily scripted AI toward more dynamic and innate movement solutions, where AI usage typically comes into the system as just a few commands.

NPCs also use the system to scan for targets and feed that back to the AI logic, to monitor threat perception by the AI and the player, and to determine 'spawn' locations in the level. Resistance single-player levels were reused by multiplayer designers, and various world representations have been used as a result.

Navigation meshes have seen considerable recent adoption, and the A* algorithm and its variants are typically used for path-finding over the navigation world. The following covers Insomniac's experience, implementation, and progress; their first path-finding implementation (A*) had volumes as nodes.


The designer laid out the entire level navigation mesh in Maya; polygons were represented as nodes and edges as connections for A*. Navigation was one of the big bottlenecks on the PPU, with a soft limit of a maximum of 8 navigating NPCs at a given time. An LOD system was put in place so NPCs at a distance did not use nav, and the team went to great lengths to stagger nav queries across frames even within the above limits.
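To make the "polygons as nodes, edges as connections" setup concrete, here is a minimal sketch of A* over such a navmesh graph. The types (`NavPoly`, `FindPath`) and the center-to-center heuristic are illustrative assumptions, not Insomniac's actual implementation.

```cpp
// Minimal sketch of A* over a navmesh graph in which each polygon is a node
// and shared edges are the connections. Types and heuristic are illustrative.
#include <cmath>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

struct NavPoly {
    Vec3 center;                 // used for the heuristic and cost estimate
    std::vector<int> neighbors;  // indices of polygons sharing an edge
};

static float Dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Returns the polygon indices from start to goal, or empty if unreachable.
std::vector<int> FindPath(const std::vector<NavPoly>& mesh, int start, int goal) {
    using Entry = std::pair<float, int>;                      // (f-score, poly)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    std::unordered_map<int, float> gScore;                    // best known cost
    std::unordered_map<int, int> cameFrom;

    gScore[start] = 0.0f;
    open.push({Dist(mesh[start].center, mesh[goal].center), start});

    while (!open.empty()) {
        int current = open.top().second;
        open.pop();
        if (current == goal) {                                // reconstruct path
            std::vector<int> path{goal};
            while (path.back() != start) path.push_back(cameFrom[path.back()]);
            return {path.rbegin(), path.rend()};
        }
        for (int next : mesh[current].neighbors) {
            float tentative = gScore[current] +
                              Dist(mesh[current].center, mesh[next].center);
            if (!gScore.count(next) || tentative < gScore[next]) {
                gScore[next] = tentative;
                cameFrom[next] = current;
                open.push({tentative + Dist(mesh[next].center, mesh[goal].center), next});
            }
        }
    }
    return {};  // no path found
}
```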

Resistance: Fall of Man (RFOM) came out of a different production cycle, built for the PS3 launch. Three RFOM levels shared a theme, and the team wanted to turn them into a single Resistance 2 level. Level streaming took care of rendering, but nav was monolithic for the whole level, and the design planned to increase nav-poly density by 3x. They were also making an 8-player co-op mode, which implied enough NPCs to keep 8 players engaged in some encounters.


In the new representation, tri-edges are the A* graph nodes and polys are the connections/links in the search graph, which makes customizations such as jump-up and jump-down distances possible as A* parameters. Hierarchical path-finding was introduced: A* among poly-clusters calculates a coarse path, which is then refined into the NPC's immediate path within a poly-cluster as needed.

They ran into an inherent disadvantage of the scheme: it doesn't give the shortest path that designers expect. As a result, path caching was added, so an NPC's new path request was checked against its last successful path (a high hit ratio). They also made the cost estimates in A* do more work: instead of edge-midpoint to edge-midpoint estimates, they computed the point on each edge which (greedily) minimizes cost, so the output path hardly needed smoothing.
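Below is a hedged sketch of that improved cost estimate. The talk does not describe the exact minimization, so this version simply samples the edge and keeps the point with the lowest through-cost; `BestPointOnEdge` and the sample count are assumptions.

```cpp
// Sketch: pick the point on a portal edge that (greedily) minimizes the cost
// of passing through it, instead of always using the edge midpoint.
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

static Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t};
}

// Point on the edge [e0, e1] that approximately minimizes the path cost
// prev -> p -> goal. Coarse sampling keeps the sketch simple.
Vec3 BestPointOnEdge(const Vec3& prev, const Vec3& goal,
                     const Vec3& e0, const Vec3& e1) {
    const int kSamples = 16;
    Vec3 best = e0;
    float bestCost = Dist(prev, e0) + Dist(e0, goal);
    for (int i = 1; i <= kSamples; ++i) {
        Vec3 p = Lerp(e0, e1, float(i) / kSamples);
        float cost = Dist(prev, p) + Dist(p, goal);
        if (cost < bestCost) { bestCost = cost; best = p; }
    }
    return best;
}
```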

The cost of A* was by then so low that they took out hierarchical path-finding: hardly 10% of the navigation shader budget was spent on A*, even in co-op mode with ~100 NPCs using navigation on screen. Path-finding was changed to consider more parameters, such as the ability to jump up or down various distances, since one navigation mesh is used for NPCs of varied sizes.

A* considered an NPC's threshold clearance when searching, and design used this feature to selectively route paths among the AI. They preferred a corner radius (maintaining a radius away from the boundary) and full-frame deferred navigation queries. The gameplay code was changed to use navigation data through queries: the AI synced the previous frame's results and set up queries for the next frame's needs. Code was streamlined to batch all navigation queries, and navigation query data access was isolated for concurrency. This readied the system for offloading navigation processing to a shader job: the find-point-on-mesh query, given a volume and parameters, outputs the closest point on the mesh while maintaining the NPC's tolerance radius away from the boundary. The navigation-mesh clusters' AABBs are kept in local store (LS) for the broad phase.

Each frame, gameplay reads the navigation query results from the previous frame, steers, and sets up navigation queries for the next frame as part of the animation and physics setup update. Animation shader jobs, physics shader jobs, and navigation shader jobs are then added; the navigation jobs could run anywhere from after pre-update up to the next frame's pre-update.

All navigation queries were full-frame deferred: gameplay reads the results of the previous frame's query, steers, and sets up the next frame's query if it has to.
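A minimal sketch of that full-frame deferral, assuming hypothetical types (`NavQuery`, `NavResult`, `DeferredNavSystem`): gameplay reads last frame's answer, then files a request that the batched navigation job resolves for next frame.

```cpp
// Sketch of full-frame deferred, batched navigation queries.
#include <optional>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

struct NavQuery  { int npcId; Vec3 position; float clearanceRadius; };
struct NavResult { int npcId; Vec3 closestPointOnMesh; };

class DeferredNavSystem {
public:
    // Gameplay side: read what was computed last frame (may be empty on frame 0).
    std::optional<NavResult> ReadResult(int npcId) const {
        auto it = lastFrameResults_.find(npcId);
        if (it == lastFrameResults_.end()) return std::nullopt;
        return it->second;
    }

    // Gameplay side: queue a query to be answered next frame.
    void QueueQuery(const NavQuery& q) { pendingQueries_.push_back(q); }

    // Navigation job: process the whole batch at once. Isolated data access
    // makes it straightforward to run this on another core or an SPU-style unit.
    void RunBatch() {
        lastFrameResults_.clear();
        for (const NavQuery& q : pendingQueries_) {
            lastFrameResults_[q.npcId] = {q.npcId, SolveClosestPoint(q)};
        }
        pendingQueries_.clear();
    }

private:
    // Placeholder for the real "find point on mesh" query.
    static Vec3 SolveClosestPoint(const NavQuery& q) { return q.position; }

    std::vector<NavQuery> pendingQueries_;
    std::unordered_map<int, NavResult> lastFrameResults_;
};
```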


One mesh is used for all sizes of AI, so string-pulling doesn't maintain an NPC's tolerance away from corners. This was an issue especially for big NPCs like the Titan in Resistance, which would try to cut across the edge.

The NPC not only has to get to the first bend point, it also needs to orient toward the next bend point in the path. A navigation shader job computed a Bezier approach curve for the bend point.


The start tangent of the curve is computed by doing a swept-sphere check for a specified distance in the NPC's current facing direction. The end tangent is computed by doing a back-projected swept-sphere check from the first bend point in the opposite direction of the second bend point. The Bezier curve is computed using these tangents, and the NPC always aims for the mid-point of the curve for a smoother approach.
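A sketch of that approach curve: a cubic Bezier from the NPC toward the first bend point, with the two tangents standing in for the swept-sphere results described above. The function names and the `tangentLength` parameter are illustrative assumptions.

```cpp
// Sketch of the Bezier approach curve and its mid-point target.
struct Vec3 { float x, y, z; };

static Vec3 Add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Mul(const Vec3& a, float s)       { return {a.x * s, a.y * s, a.z * s}; }

// Evaluate a cubic Bezier at parameter t in [0, 1].
Vec3 Bezier(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, float t) {
    float u = 1.0f - t;
    Vec3 result = Mul(p0, u * u * u);
    result = Add(result, Mul(p1, 3.0f * u * u * t));
    result = Add(result, Mul(p2, 3.0f * u * t * t));
    result = Add(result, Mul(p3, t * t * t));
    return result;
}

// npcPos/facing: current position and facing direction (start of the curve).
// bendPoint/approachDir: first bend point and the direction toward the second
// bend point, used to back-project the end tangent.
Vec3 ApproachMidPoint(const Vec3& npcPos, const Vec3& facing,
                      const Vec3& bendPoint, const Vec3& approachDir,
                      float tangentLength) {
    Vec3 p0 = npcPos;
    Vec3 p1 = Add(npcPos, Mul(facing, tangentLength));          // start tangent
    Vec3 p2 = Sub(bendPoint, Mul(approachDir, tangentLength));  // end tangent
    Vec3 p3 = bendPoint;
    return Bezier(p0, p1, p2, p3, 0.5f);  // NPC aims for the curve mid-point
}
```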

To support a higher volume of NPCs, especially for 8-player co-op, steering used a simple dynamic avoidance scheme in which the shader computes escape tangents for each obstacle the NPC comes across.


Steering spans all escape tangents in a circle around the NPC, and the resulting direction is picked as the valid vacant gap closest to the desired direction toward the path's bend point. This simplicity was a trade-off made to support a huge number of NPCs running, and it held up just fine, although NPCs could still get stuck, since picking the closest escape tangent could make them run into a wall.
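A rough sketch of that avoidance pick, under stated assumptions: candidate directions are spanned in a circle, blocked ones are discarded using a coarse circle-vs-ray test (a stand-in for the real escape-tangent computation), and the free direction closest to the desired one wins.

```cpp
// Sketch of the simple dynamic avoidance: pick the free direction closest to
// the desired path direction. The Blocked() test is a placeholder assumption.
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
struct Obstacle { Vec2 position; float radius; };

// True if moving from 'from' along unit direction 'dir' would hit the obstacle
// within 'lookAhead' distance.
static bool Blocked(const Vec2& from, const Vec2& dir,
                    const Obstacle& ob, float lookAhead) {
    Vec2 toOb{ob.position.x - from.x, ob.position.y - from.y};
    float along = toOb.x * dir.x + toOb.y * dir.y;       // projection onto dir
    if (along < 0.0f || along > lookAhead) return false;
    float perpX = toOb.x - dir.x * along;
    float perpY = toOb.y - dir.y * along;
    return perpX * perpX + perpY * perpY < ob.radius * ob.radius;
}

// desiredDir is assumed normalized (direction toward the path bend point).
Vec2 PickSteeringDirection(const Vec2& npcPos, const Vec2& desiredDir,
                           const std::vector<Obstacle>& obstacles,
                           float lookAhead) {
    const int kCandidates = 16;
    Vec2 best = desiredDir;
    float bestDot = -2.0f;  // lower than any real dot product
    for (int i = 0; i < kCandidates; ++i) {
        float angle = 2.0f * 3.14159265f * float(i) / kCandidates;
        Vec2 dir{std::cos(angle), std::sin(angle)};
        bool free = true;
        for (const Obstacle& ob : obstacles) {
            if (Blocked(npcPos, dir, ob, lookAhead)) { free = false; break; }
        }
        if (!free) continue;
        float dot = dir.x * desiredDir.x + dir.y * desiredDir.y;
        if (dot > bestDot) { bestDot = dot; best = dir; }
    }
    return best;  // falls back to the desired direction if everything is blocked
}
```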

Quadtrees vs. Grid detection

My groupmates and I are making a fighting game by the name of Shattered Tides, and like any game it needs collision detection code. There are many ways to implement it; the real challenge is finding the most effective one in the best possible time, given our anemic time frame. After looking into it, quadtrees seem to be the most common way to go, but some resources mention a grid-based solution instead.

So which is better and more effective? The right answer depends on the actual game you're making, and choosing one over the other really requires implementing both and profiling to find out which is more time- or space-efficient in your specific game.

Grid detection mostly applies to detecting collisions between moving objects and a static background. Its greatest advantage is that the static background is represented as a contiguous memory array, and each collision lookup is O(1), with good locality if you have to do multiple reads because an entity covers more than one cell of the grid. The disadvantage, however, is that if the static background is large, the grid can be rather wasteful of space.
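A minimal sketch of the grid idea, assuming a hypothetical `StaticGrid` over a 2D world of solid/empty cells: the backing store is one contiguous array, and an entity's box maps straight to the handful of cells it overlaps.

```cpp
// Sketch of grid-based broad phase against a static background.
#include <vector>

struct AABB { float minX, minY, maxX, maxY; };

class StaticGrid {
public:
    StaticGrid(int width, int height, float cellSize)
        : width_(width), height_(height), cellSize_(cellSize),
          solid_(width * height, false) {}

    void SetSolid(int cx, int cy, bool value) { solid_[cy * width_ + cx] = value; }

    // True if the entity's box overlaps any solid cell. An entity may cover
    // more than one cell, so we read the small rectangle of cells it spans.
    bool Collides(const AABB& box) const {
        int x0 = Clamp(int(box.minX / cellSize_), 0, width_ - 1);
        int x1 = Clamp(int(box.maxX / cellSize_), 0, width_ - 1);
        int y0 = Clamp(int(box.minY / cellSize_), 0, height_ - 1);
        int y1 = Clamp(int(box.maxY / cellSize_), 0, height_ - 1);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                if (solid_[y * width_ + x]) return true;
        return false;
    }

private:
    static int Clamp(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

    int width_, height_;
    float cellSize_;
    std::vector<bool> solid_;  // contiguous, but wasteful if the world is huge
};
```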

If instead you represent the static background as a quadtree, then the cost of individual lookups goes up, but because large blocks of the background take up small amounts of space, the memory requirements decrease, and so more of the background can sit in the cache. Even if it takes 10 times as many reads to do a lookup in such a structure, if it’s all in the cache, it’s still faster than a single lookup with a cache miss.
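For comparison, here is a hedged sketch of the quadtree alternative (types are illustrative): large empty or fully solid regions collapse into single leaves, so memory shrinks at the cost of a short tree walk per lookup.

```cpp
// Sketch of a quadtree over the static background.
#include <memory>

struct Region { float minX, minY, maxX, maxY; };

struct QuadNode {
    Region bounds;
    bool isLeaf = true;
    bool solid = false;                  // meaningful only for leaves
    std::unique_ptr<QuadNode> child[4];  // NW, NE, SW, SE when subdivided
};

static bool Overlaps(const Region& a, const Region& b) {
    return a.minX < b.maxX && a.maxX > b.minX &&
           a.minY < b.maxY && a.maxY > b.minY;
}

// True if the query box touches any solid leaf. Each level usually rejects
// most children, so a lookup costs O(depth) node visits instead of O(1).
bool Collides(const QuadNode& node, const Region& box) {
    if (!Overlaps(node.bounds, box)) return false;
    if (node.isLeaf) return node.solid;
    for (const auto& child : node.child)
        if (child && Collides(*child, box)) return true;
    return false;
}
```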

Personally I'd go with the grid implementation, because it's easier to do and a more productive use of my time. If I notice that our game is running slowly, we'll do some profiling and see what could use some help. If it looks like the game is spending a lot of time doing collision detection, we'd try a quadtree and see if that's any better.

A brief overview of the Havok Physics Engine

“At 2K Czech, our games demand a physics solution that can scale efficiently and handle highly detailed interactive environments. Having recently moved to the next generation of Havok Physics, we’ve been blown away by how Havok’s new physics technology is able to make highly efficient utilization of all available hardware cores with a very lean runtime memory footprint. This combination allows us to deliver the high quality simulation at the scale we need and we are really looking forward to making some incredible games with the new technology.” – Laurent Gorga, Technical Director at 2K Czech

Developed by the Irish company Havok, the eponymous Havok Physics is a physics engine designed for video games to allow real-time interaction between objects in 3D. The engine uses dynamic simulation to allow for ragdoll physics. The company was founded by Hugh Reynolds and Dr. Steven Collins in 1998 at Trinity College Dublin, where much of its development is still carried out. Dr. Steven Collins currently acts as course director for Interactive Entertainment Technology at Trinity College Dublin, as well as lecturing in real-time rendering. Havok can also be found in Autodesk 3ds Max as a bundled plug-in called Reactor. A plugin for the Autodesk Maya animation software and an Xtra for Adobe Director's Shockwave are also available.

As a result, Havok offers some of the fastest and most robust collision detection and physical simulation technology available, which is why it has become a go-to physics engine within the games industry and has been used by leading game developers in over 400 launched titles, with many more in development.

Havok Physics is fully multi-threaded and cross-platform optimized for leading game platforms, including Xbox 360™, PlayStation®4, the PlayStation®3 computer entertainment system, PC Games for Windows, PlayStation®Vita, Wii™, Wii U™, Android™, iOS, Apple Mac OS and Linux.

In 2008 Havok released details of its new Cloth and Destruction middleware. Cloth deals with simulation of character garments and soft bodies while Destruction provides tools for creation of destructible and deformable environments. Havok Cloth was the company’s most widely adopted and bestselling piece of software to date.

At GDC 2009, Havok showcased the first details of its forthcoming artificial intelligence tools, which allow for better-performing AI in games without the need for developers to create their own AI models.

Whenever people say “Havok Physics”, all I can think of is The Elder Scrolls IV: Oblivion. One of its major selling points was Havok Physics. Anyone who has played that game knows how messed up Havok Physics can get, and what silly situations you can create with it. Grand Theft Auto 4 also uses Havok Physics.

It was also used throughout the Halo games from Halo 2 onwards. The Halo 3 physics engine runs calculations on every single frame of animation, similarly to the collision detection engine. The engine is capable of calculating, among other things, elasticity on portions of character models and bullet ricochet.

Character models are quite elastic at points, a characteristic that is clearly demonstrated by the Character Stretch Glitch’s presence in the game. The elasticity helps to improve realism at slower speeds. Only some parts of a character’s model are elastic; if you look closely at screenshots of the aforementioned glitch, you will find that the rigid parts of Spartans’ and Marines’ armor don’t stretch.

The physics engine utilizes an optimization found in many video game physics engines: objects that remain at rest for several seconds are temporarily exempted from physics calculations (but not collision detection) until they are disturbed again; this is why floating Crates and Fusion coils can remain floating in the air until the round is restarted or the items are disturbed. An object is considered “disturbed” if it is moved, picked up (in Forge), or if something collides with it.[5] The optimization is likely based on the premise that an object that isn’t moving now isn’t likely to move in the near future unless something moves it or it moves on its own.
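A hedged sketch of that rest optimization, which is not Havok's or Halo's actual code: a body that stays below a speed threshold for a few seconds is put to sleep and skipped by the solver until something disturbs it. The threshold and delay values here are made-up illustrative numbers.

```cpp
// Sketch of putting resting bodies to sleep until they are disturbed.
struct Vec3 { float x, y, z; };

struct RigidBody {
    Vec3 velocity{0, 0, 0};
    float restTimer = 0.0f;
    bool asleep = false;
};

constexpr float kSleepSpeedSq = 0.01f;  // squared speed considered "at rest"
constexpr float kSleepDelay   = 2.0f;   // seconds at rest before sleeping

void UpdateSleepState(RigidBody& body, float dt) {
    float speedSq = body.velocity.x * body.velocity.x +
                    body.velocity.y * body.velocity.y +
                    body.velocity.z * body.velocity.z;
    if (speedSq < kSleepSpeedSq) {
        body.restTimer += dt;
        if (body.restTimer > kSleepDelay) body.asleep = true;  // skip simulation
    } else {
        body.restTimer = 0.0f;
    }
}

// Called when something collides with the body, or it is picked up or moved.
void Disturb(RigidBody& body) {
    body.asleep = false;
    body.restTimer = 0.0f;
}
```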

Havok has announced the launch of the third major iteration of its Havok Physics technology, featuring “significant technical innovations” in performance, memory utilization, and usability, and described as a “major leap forward” for in-game physics simulation. The release specifically targets next-generation consoles, mobile devices, and PCs, with full compatibility and support for current devices.

According to Andrew Bond, Vice President of Technology at Havok, this version has resulted in a “new engine core built around fully continuous simulation that enables maximum physical fidelity with unprecedented performance speeds. Beta versions of the technology have been in the hands of a number of leading developers for some time and we have seen dramatic performance gains with simulations running twice as fast or more, and using up to 10 times less memory. Additionally the new core’s performance is extremely predictable, eliminating performance spikes.”

Vertex Buffer Objects, Frame Buffer Objects and Geometry shaders

The modern use of “shader” was introduced to the public by Pixar with its “RenderMan Interface Specification, Version 3.0”, originally published in May 1988.


As graphics processing units evolved, major graphics software libraries such as OpenGL and Direct3D began to support shaders. The first shader-capable GPUs only supported pixel shading; vertex shaders were introduced once developers realized the power of shaders and sought to take advantage of their potential. Geometry shaders were introduced more recently with Direct3D 10 and OpenGL 3.2, and were at first supported only by high-end video cards.

Geometry in a complete three dimensional scene is lit according to the defined locations of light sources, reflection, and other surface properties. Some hardware implementations of the graphics pipeline compute lighting only at the vertices of the polygons being rendered.


The lighting values between vertices are then interpolated during rasterization. Per-fragment or per-pixel lighting, as well as other effects, can be done on modern graphics hardware as a post-rasterization process by means of a shader program. Modern graphics hardware also supports per-vertex shading through the use of vertex shaders.

Shaders are simple programs that describe the traits of either a vertex or a pixel. Vertex shaders describe traits such as the position, texture coordinates, and color of a vertex, while pixel shaders describe the color, z-depth, and alpha value of a fragment. A vertex shader is called for each vertex in a primitive, often after tessellation; thus one vertex goes in and one updated vertex comes out. Each vertex is then rendered as a series of pixels onto a surface that will eventually be sent to the screen.

Shaders replace a section of video hardware often referred to as the Fixed Function Pipeline (FFP) – so-called because it performs lighting and texture mapping in a hard-coded manner. Shaders provide a programmable alternative to this hard-coded approach for the convenience of the programmers seeking to manage their code better.


The CPU sends instructions (compiled shading language programs) and geometry data to the graphics processing unit, located on the graphics card. In the vertex shader, the geometry is transformed. If a geometry shader is present and active, some changes to the geometry in the scene are performed. If a tessellation shader is present and active, the geometry in the scene can be subdivided.

The calculated geometry is triangulated and the triangles are broken down into fragment quads (one fragment quad is a 2 × 2 fragment primitive). Fragment quads are modified according to the pixel shader, then the depth test is executed; fragments that pass are written to the screen and might be blended into the frame buffer. The graphics pipeline uses these steps to transform three-dimensional (and/or two-dimensional) data into useful two-dimensional data for display, in general a large pixel matrix or “frame buffer”.


Vertex shaders are run once for each vertex given to the graphics processor. The purpose is to transform each vertex's 3D position in virtual space to the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer).


Vertex shaders are capable of altering properties such as position, color, and texture coordinates, but cannot create new vertices like geometry shaders can. The output of the vertex shader goes to the next stage in the pipeline, which is either a geometry shader if present, or the pixel shader and rasterizer otherwise. Vertex shaders can enable powerful control over the details of position, movement, lighting, and color in any scene involving 3D models.
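As a CPU-side sketch of what this stage computes (a real vertex shader runs on the GPU in GLSL or HLSL, and the divide and viewport mapping below are normally done by fixed-function hardware afterward), here is the position transform from model space to screen coordinates. The matrix layout and function names are assumptions for the sketch.

```cpp
// CPU-side sketch of the per-vertex position transform.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

static Vec4 Transform(const Mat4& mat, const Vec4& v) {
    Vec4 r;
    r.x = mat.m[0][0]*v.x + mat.m[0][1]*v.y + mat.m[0][2]*v.z + mat.m[0][3]*v.w;
    r.y = mat.m[1][0]*v.x + mat.m[1][1]*v.y + mat.m[1][2]*v.z + mat.m[1][3]*v.w;
    r.z = mat.m[2][0]*v.x + mat.m[2][1]*v.y + mat.m[2][2]*v.z + mat.m[2][3]*v.w;
    r.w = mat.m[3][0]*v.x + mat.m[3][1]*v.y + mat.m[3][2]*v.z + mat.m[3][3]*v.w;
    return r;
}

// One vertex in, one transformed vertex out: clip space, perspective divide,
// then viewport mapping to pixel coordinates (z is kept for the Z-buffer).
Vec4 VertexStage(const Mat4& modelViewProjection, const Vec4& position,
                 float screenWidth, float screenHeight) {
    Vec4 clip = Transform(modelViewProjection, position);
    float invW = 1.0f / clip.w;                            // perspective divide
    float ndcX = clip.x * invW, ndcY = clip.y * invW, ndcZ = clip.z * invW;
    return {(ndcX * 0.5f + 0.5f) * screenWidth,            // screen x
            (1.0f - (ndcY * 0.5f + 0.5f)) * screenHeight,  // screen y (flipped)
            ndcZ,                                          // depth value
            invW};
}
```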

Geometry shaders are a relatively new type of shader, introduced in Direct3D 10 and OpenGL 3.2; formerly available in OpenGL 2.0+ with the use of extensions. This type of shader can generate new graphics primitives, such as points, lines, and triangles, from those primitives that were sent to the beginning of the graphics pipeline.


Geometry shader programs are executed after vertex shaders. They take as input a whole primitive, possibly with adjacency information. For example, when operating on triangles, the three vertices are the geometry shader’s input. The shader can then emit zero or more primitives, which are rasterized and their fragments ultimately passed to a pixel shader.

Typical uses of a geometry shader include point sprite generation; geometry tessellation, in which a surface is covered with a pattern of flat shapes so that there are no overlaps or gaps; shadow volume extrusion, where the edges forming the silhouette are extruded away from the light to construct the faces of the shadow volume; and single-pass rendering to a cube map. A typical real-world example of the benefit of geometry shaders is automatic mesh complexity modification: a series of line strips representing control points for a curve is passed to the geometry shader, and depending on the complexity required, the shader can automatically generate extra lines, each of which provides a better approximation of the curve.
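To illustrate the curve example, here is a CPU-side analogy of what such a geometry stage emits (an actual geometry shader would do this on the GPU in GLSL or HLSL): it receives a strip of control points and outputs a denser strip that better approximates the curve. Catmull-Rom is chosen here purely for illustration.

```cpp
// CPU-side analogy of a geometry stage that densifies a control-point strip.
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 CatmullRom(const Vec3& p0, const Vec3& p1,
                       const Vec3& p2, const Vec3& p3, float t) {
    float t2 = t * t, t3 = t2 * t;
    auto blend = [&](float a, float b, float c, float d) {
        return 0.5f * ((2.0f * b) + (-a + c) * t +
                       (2.0f * a - 5.0f * b + 4.0f * c - d) * t2 +
                       (-a + 3.0f * b - 3.0f * c + d) * t3);
    };
    return {blend(p0.x, p1.x, p2.x, p3.x),
            blend(p0.y, p1.y, p2.y, p3.y),
            blend(p0.z, p1.z, p2.z, p3.z)};
}

// Input: a strip of control points. Output: a denser strip; segmentsPerSpan
// plays the role of the complexity the shader would choose automatically.
std::vector<Vec3> EmitCurveStrip(const std::vector<Vec3>& control, int segmentsPerSpan) {
    std::vector<Vec3> out;
    if (control.size() < 4) return control;  // not enough points for a span
    for (size_t i = 0; i + 3 < control.size(); ++i) {
        for (int s = 0; s <= segmentsPerSpan; ++s) {
            float t = float(s) / segmentsPerSpan;
            out.push_back(CatmullRom(control[i], control[i + 1],
                                     control[i + 2], control[i + 3], t));
        }
    }
    return out;
}
```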


Pixel shaders, which are also known as fragment shaders, compute the color and other attributes of each fragment. Pixel shaders range from always outputting the same color, to applying a lighting value, to performing bump mapping, specular highlights, shadow mapping, translucency, and other rendering effects.


They can alter the depth of the fragment for Z-buffering, or output more than one color if multiple render targets are active. In 3D graphics, a pixel shader alone cannot produce very complex effects, because it operates only on a single fragment, without knowledge of a scene’s geometry. However, pixel shaders do detect and acknowledge the screen coordinate being drawn, and can sample the screen and nearby pixels if the contents of the entire screen are passed as a texture to the shader. This technique can enable a wide variety of 2D postprocessing effects, such as blur, or edge detection/enhancement for cartoon/cel shading. Pixel shaders may also be applied in intermediate stages to any two-dimensional images in the pipeline, whereas vertex shaders always require a 3D model. For example, a fragment shader is the only type of shader that can act as a postprocessor or filter for a video stream after it has been rasterized.
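Here is a sketch of the "sample the screen and nearby pixels" idea written as a CPU loop: a simple box blur over a single-channel image, standing in for a fragment shader reading the previous frame as a texture. The `Image` struct and clamping behavior are assumptions for the sketch.

```cpp
// Sketch of a post-process pass that samples each pixel's 3x3 neighborhood.
#include <vector>

struct Image {
    int width, height;
    std::vector<float> pixels;  // single channel for brevity

    float At(int x, int y) const {
        // Clamp to the border so edge pixels still have nine samples.
        x = x < 0 ? 0 : (x >= width ? width - 1 : x);
        y = y < 0 ? 0 : (y >= height ? height - 1 : y);
        return pixels[y * width + x];
    }
};

Image BoxBlur(const Image& src) {
    Image dst{src.width, src.height,
              std::vector<float>(src.pixels.size(), 0.0f)};
    for (int y = 0; y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += src.At(x + dx, y + dy);
            dst.pixels[y * src.width + x] = sum / 9.0f;  // average of 9 samples
        }
    }
    return dst;
}
```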

Full screen effects and their applications in games

A full screen effect is a way for computer graphics applications to add special effects to a scene. Rather than actually rendering the scene with these effects applied to the objects and geometry within it, they are essentially applied after the render: the graphics program creates the image that the user sees, and then applies an effect over it in a way that is seamless. A full screen effect can be used to accomplish numerous tasks, including the addition of motion blur, bloom lighting, and color filtering.

To understand how computer graphics can use a full screen effect, it is easiest to first understand how a scene is rendered. Programs that use computer-generated imagery, video games in particular, often render scenes to a display in real time. This means that as a player navigates through a virtual environment, the various objects in a scene that have been created by the developers of that game appear in relation to the player's position. When the player walks into a room with a box, the game renders the walls, floor, ceiling, and the box in the room as a series of frames or images, around 30 times every second.


A full screen effect can then be added to these individually rendered images to create various results. Motion blur, for example, is a phenomenon that can be seen in the real world or on film; objects often appear distorted and blurry as someone moves quickly past them. While this effect can be applied to the objects in a virtual scene, it is often easier and less resource-intensive to do it as a full screen effect: multiple partial renders of the objects in a game are created and overlapped so that the image appears to be moving very fast.


Bloom lighting makes lights in a game appear stronger and stand out, to make the game appear more realistic, to give bright objects a shiny glow, or to provide the graphics with a stylized aesthetic. When the different light sources are rendered, the game engine creates additional renders of increased intensity for the lights and then overlaps them. A player in the game then sees these lights as brighter, with a stronger glow. A good example of this effect in use is Runescape.
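A hedged, single-channel sketch of that overlap idea: extract the bright parts of the frame, then add them back over the original so light sources glow. A real implementation would also blur the bright pass; the threshold and gain values here are illustrative assumptions.

```cpp
// Sketch of bloom as bright-pass extraction plus additive overlap.
#include <algorithm>
#include <vector>

// One luminance value per pixel keeps the sketch short.
std::vector<float> ApplyBloom(const std::vector<float>& frame,
                              float threshold = 0.8f, float gain = 0.5f) {
    std::vector<float> out(frame.size());
    for (size_t i = 0; i < frame.size(); ++i) {
        float bright = std::max(frame[i] - threshold, 0.0f);  // bright pass
        out[i] = std::min(frame[i] + bright * gain, 1.0f);    // additive overlap
    }
    return out;
}
```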


There is also such a thing as simulated bloom effects: instances where particles are used to simulate bloom lighting around certain points of light, which developers do by controlling the particle count output.

Runescape again offers a good comparison between how a fire normally looks and what it would look like under a simulated bloom lighting effect.


Color filtering is done similarly. If a game developer wants the player to see a room in black and white part of the time, without having to create multiple textures for the objects within it, this can be achieved through a full screen effect. While the actual textures in the scene are rendered properly, a filtered layer is placed over each frame to change the colors of objects for the player.
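A minimal sketch of such a filter pass over the finished frame, assuming a simple RGB buffer: the scene renders normally, and every pixel is converted to gray just before display. The luminance weights are the standard Rec. 709 values.

```cpp
// Sketch of a full screen black-and-white color filter.
#include <vector>

struct RGB { float r, g, b; };

void GrayscaleFilter(std::vector<RGB>& frame) {
    for (RGB& p : frame) {
        // The scene's textures are untouched; only the final image changes.
        float luma = 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
        p = {luma, luma, luma};
    }
}
```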

Depth blur recreates the effect caused by the optics of a lens. Images formed through a lens are in correct focus only when the subject is at a certain distance (the focal plane); objects nearer or farther blur, just as a distant landscape appears blurred to our eyes in real life.


Depth blur is often recreated in games by blurring the frame buffer to a temporary texture and drawing over the frame buffer with that blurred version, alpha blending based on the depth of the scene.
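A sketch of that blend, assuming a per-pixel depth buffer and a pre-blurred copy of the frame are already available; `focalDepth` and `focalRange` are illustrative parameters, not a specific engine's API.

```cpp
// Sketch of depth-of-field compositing: mix the sharp frame with a blurred
// copy, weighting the blur by distance from the focal plane.
#include <algorithm>
#include <cmath>
#include <vector>

struct RGB { float r, g, b; };

void DepthBlurComposite(std::vector<RGB>& frame,          // sharp image, written in place
                        const std::vector<RGB>& blurred,  // pre-blurred copy
                        const std::vector<float>& depth,  // per-pixel depth
                        float focalDepth, float focalRange) {
    for (size_t i = 0; i < frame.size(); ++i) {
        // 0 = in focus, 1 = fully blurred.
        float blend = std::min(std::fabs(depth[i] - focalDepth) / focalRange, 1.0f);
        frame[i].r = frame[i].r * (1.0f - blend) + blurred[i].r * blend;
        frame[i].g = frame[i].g * (1.0f - blend) + blurred[i].g * blend;
        frame[i].b = frame[i].b * (1.0f - blend) + blurred[i].b * blend;
    }
}
```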

At the end of the day, full screen effects are a fantastic way to make your games more stylized or more realistic.