
Quadtrees vs. Grid detection

My groupmates and I are making a fighting game by the name of Shattered Tides, and like any game it needs collision detection code. There are many ways to implement it; the real challenge is finding the most effective one in the best possible time, given our tight time frame. After looking into it, quadtrees seem to be the most common way to go, but some resources also mention a grid-based solution.

So which is better and more effective? The right answer depends a little on the actual game you’re making, and choosing one over the other really means implementing both and profiling to find out which one is more time or space efficient in your specific game.

Grid detection seems to apply mainly to detecting collisions between moving objects and a static background. Its greatest advantage is that the static background is represented as a contiguous memory array, so each collision lookup is O(1), with good locality if you have to do multiple reads because an entity covers more than one cell in the grid. The disadvantage, however, is that if the static background is large, the grid can be rather wasteful of space.
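To make that concrete, here is a minimal sketch (not our actual Shattered Tides code) of what such a grid might look like in C++; the CollisionGrid name, the cell size and the solid/free encoding are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch of a uniform collision grid over a static background.
struct CollisionGrid {
    int width;                  // number of cells horizontally
    int height;                 // number of cells vertically
    float cellSize;             // world units per cell (assumed value)
    std::vector<uint8_t> solid; // contiguous array: 1 = blocked, 0 = free

    CollisionGrid(int w, int h, float cs)
        : width(w), height(h), cellSize(cs), solid(w * h, 0) {}

    bool cellSolid(int cx, int cy) const {
        if (cx < 0 || cy < 0 || cx >= width || cy >= height) return true; // out of bounds counts as solid
        return solid[cy * width + cx] != 0;
    }

    // O(1) lookup: map a world position to its cell and read one byte.
    bool isSolidAt(float x, float y) const {
        return cellSolid(static_cast<int>(x / cellSize), static_cast<int>(y / cellSize));
    }

    // An entity whose bounding box covers several cells just does a few
    // adjacent reads, which stay close together in memory (good locality).
    bool overlapsSolid(float minX, float minY, float maxX, float maxY) const {
        int x0 = static_cast<int>(minX / cellSize), x1 = static_cast<int>(maxX / cellSize);
        int y0 = static_cast<int>(minY / cellSize), y1 = static_cast<int>(maxY / cellSize);
        for (int cy = y0; cy <= y1; ++cy)
            for (int cx = x0; cx <= x1; ++cx)
                if (cellSolid(cx, cy)) return true;
        return false;
    }
};
```

The whole background is one flat array, so a lookup is a couple of integer operations and a single read; the cost of that simplicity is one cell entry per grid square whether the area is interesting or not.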

If instead you represent the static background as a quadtree, then the cost of individual lookups goes up, but because large blocks of the background take up small amounts of space, the memory requirements decrease, and so more of the background can sit in the cache. Even if it takes 10 times as many reads to do a lookup in such a structure, if it’s all in the cache, it’s still faster than a single lookup with a cache miss.
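For comparison, here is an equally rough sketch of the quadtree version, assuming a square background whose side is a power of two; large uniform regions collapse into single leaf nodes, which is where the memory savings come from.

```cpp
#include <memory>

// Minimal sketch of a region quadtree over the static background.
struct QuadNode {
    bool isLeaf = true;
    bool solid  = false;                 // meaningful only for leaf nodes
    std::unique_ptr<QuadNode> child[4];  // children, indexed by (iy * 2 + ix)

    // Query a point inside a node that covers the square at (x, y) with the given size.
    bool isSolidAt(float px, float py, float x, float y, float size) const {
        if (isLeaf) return solid;        // a large uniform block answers immediately
        float half = size * 0.5f;
        int ix = (px >= x + half) ? 1 : 0;   // which column of children
        int iy = (py >= y + half) ? 1 : 0;   // which row of children
        const QuadNode* c = child[iy * 2 + ix].get();
        return c ? c->isSolidAt(px, py, x + ix * half, y + iy * half, half)
                 : false;                // a missing child is treated as empty space
    }
};
```

Each lookup now walks a handful of nodes instead of doing one array read, but the tree for a mostly empty or mostly solid background is tiny, so far more of it fits in the cache.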

Personally I’d go with the grid implementation, because it’s easier to do and would be more productive for me to study. If we notice the game running slowly, we’ll do some profiling and see what could use some help. If it looks like the game is spending a lot of time doing collision detection, we’ll try a quadtree and see if that’s any better.

A brief overview of the Havok Physics Engine

Laurent Gorga, Technical Director at 2K Czech, put it this way: “At 2K Czech, our games demand a physics solution that can scale efficiently and handle highly detailed interactive environments. Having recently moved to the next generation of Havok Physics, we’ve been blown away by how Havok’s new physics technology is able to make highly efficient utilization of all available hardware cores with a very lean runtime memory footprint. This combination allows us to deliver the high quality simulation at the scale we need and we are really looking forward to making some incredible games with the new technology.”

Developed by the Irish company Havok, the eponymous Havok Physics is a physics engine designed for video games to allow for real-time interaction between objects in 3D. The engine uses dynamic simulation to allow for ragdoll physics. The company was founded by Hugh Reynolds and Dr. Steven Collins in 1998 at Trinity College Dublin, where much of its development is still carried out. Dr. Steven Collins currently acts as course director of Interactive Entertainment Technology at Trinity College Dublin, as well as lecturing in real-time rendering. Havok can also be found in Autodesk 3ds Max as a bundled plug-in called Reactor. A plugin for the Autodesk Maya animation software and an extra for Adobe Director’s Shockwave are also available.

Havok markets this as the fastest and most robust collision detection and physical simulation technology available, which is why it has become a go-to physics engine within the games industry: it has been used by leading game developers in over 400 launched titles, with many more in development.

Havok Physics is fully multi-threaded and cross-platform optimized for leading game platforms including Xbox 360™, PlayStation®4, the PlayStation®3 computer entertainment system, PC Games for Windows, PlayStation Vita®, Wii™, Wii U™, Android™, iOS, Apple Mac OS and Linux.

In 2008 Havok released details of its new Cloth and Destruction middleware. Cloth deals with the simulation of character garments and soft bodies, while Destruction provides tools for the creation of destructible and deformable environments. Havok Cloth has been the company’s most widely adopted and best-selling piece of software to date.

At GDC 2009 Havok showcased the first details of its forthcoming artificial intelligence tools, which will allow for better-performing AI in games without the need for developers to create their own AI models.

Whenever people say “Havok Physics”, all I can think of is The Elder Scrolls IV: Oblivion. One of its major selling points was Havok Physics. Anyone who has played that game knows how messed up Havok Physics can get, and what silly situations you can create with it. Grand Theft Auto 4 also uses Havok Physics.

It was also used throughout the Halo games from Halo 2 onwards. The Halo 3 physics engine runs calculations on every single frame of animation, similarly to the collision detection engine. The engine is capable of calculating, among other things, elasticity on portions of character models and bullet ricochet.

Character models are quite elastic at points, a characteristic that is clearly demonstrated by the Character Stretch Glitch’s presence in the game. The elasticity helps to improve realism at slower speeds. Only some parts of a character’s model are elastic; if you look closely at screenshots of the aforementioned glitch, you will find that the rigid parts of Spartans’ and Marines’ armor don’t stretch.

The physics engine utilizes an optimization found in many video game physics engines: objects that remain at rest for several seconds are temporarily exempted from physics calculations (but not collision detection) until they are disturbed again; this is why floating Crates and Fusion coils can remain floating in the air until the round is restarted or the items are disturbed. An object is considered “disturbed” if it is moved, picked up (in Forge), or if something collides with it.[5] The optimization is likely based on the premise that an object that isn’t moving now isn’t likely to move in the near future unless something moves it or it moves on its own.
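Havok’s internal implementation isn’t public, but the idea behind this kind of “sleeping” optimization can be sketched in a few lines of C++; the thresholds and names below are assumptions for illustration, not Havok’s API.

```cpp
#include <vector>

// Illustrative sketch of putting resting bodies to sleep: sleeping bodies are
// skipped by the integrator (but would still be kept in collision detection)
// and are woken up again when something disturbs them.
struct RigidBody {
    float vx = 0, vy = 0, vz = 0; // linear velocity
    float restTime = 0;           // seconds spent (nearly) motionless
    bool  asleep = false;

    void wake() { asleep = false; restTime = 0; } // called when moved, picked up, or hit
};

void stepPhysics(std::vector<RigidBody>& bodies, float dt) {
    const float restSpeedSq = 0.01f; // "at rest" velocity threshold (assumed)
    const float sleepDelay  = 2.0f;  // seconds of rest before sleeping (assumed)

    for (RigidBody& b : bodies) {
        if (b.asleep) continue;      // exempt from physics calculations until disturbed

        float speedSq = b.vx * b.vx + b.vy * b.vy + b.vz * b.vz;
        if (speedSq < restSpeedSq) {
            b.restTime += dt;
            if (b.restTime > sleepDelay) b.asleep = true;
        } else {
            b.restTime = 0;
        }
        // ... integrate forces and velocities for the awake bodies here ...
    }
}
```

In terms of this sketch, a floating crate simply never has its wake() called, so it stays exactly where it was left until something touches it.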

Havok has announced the launch of the third major iteration of its Havok Physics technology, which features “significant technical innovations” in performance, memory utilization and usability, and is a “major leap forward” for in-game physics simulation. The release is specifically targeted towards next-generation consoles, mobile devices, and PCs, with full compatibility and support for current devices.

According to Andrew Bond, Vice President of Technology for Havok, this version has resulted in a “new engine core built around fully continuous simulation that enables maximum physical fidelity with unprecedented performance speeds. Beta versions of the technology have been in the hands of a number of leading developers for some time and we have seen dramatic performance gains with simulations running twice as fast or more, and using up to 10 times less memory. Additionally the new core’s performance is extremely predictable, eliminating performance spikes.”

Vertex Buffer Objects, Frame Buffer Objects and Geometry shaders

The modern use of “shader” was introduced to the public by Pixar with their “RenderMan Interface Specification, Version 3.0”, originally published in May 1988.


As graphics processing units evolved, major graphics software libraries such as OpenGL and Direct3D began to support shaders. The first shader-capable GPUs only supported pixel shading; vertex shaders were introduced once developers realized the power of shaders and sought to take advantage of their potential. Geometry shaders were only fairly recently introduced, with Direct3D 10 and OpenGL 3.2, and are currently supported only by high-end video cards.

Geometry in a complete three dimensional scene is lit according to the defined locations of light sources, reflection, and other surface properties. Some hardware implementations of the graphics pipeline compute lighting only at the vertices of the polygons being rendered.


The lighting values between vertices are then interpolated during rasterization. Per-fragment or per-pixel lighting, as well as other effects, can be done on modern graphics hardware as a post-rasterization process by means of a shader program. Modern graphics hardware also supports per-vertex shading through the use of vertex shaders.

Shaders are simple programs that describe the traits of either a vertex or a pixel. Vertex shaders describe traits such as the position, texture coordinates and color of a vertex, while pixel shaders describe the color, z-depth and alpha value of a fragment. A vertex shader is called for each vertex in a primitive (often after tessellation); thus one vertex goes in, and one updated vertex comes out. Each vertex is then rendered as a series of pixels onto a surface that will eventually be sent to the screen.
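As a concrete illustration, here is roughly what a bare-bones vertex/fragment shader pair looks like in GLSL, stored here as C++ string constants so they can be handed to the driver; the attribute and uniform names (aPosition, uMVP, and so on) are placeholder assumptions.

```cpp
// Minimal GLSL 3.30 vertex/fragment pair as C++ string constants.
static const char* kVertexSrc = R"(
#version 330 core
layout(location = 0) in vec3 aPosition;   // per-vertex position
layout(location = 1) in vec2 aTexCoord;   // per-vertex texture coordinate
uniform mat4 uMVP;                        // model-view-projection matrix
out vec2 vTexCoord;
void main() {
    vTexCoord = aTexCoord;                        // pass the trait along
    gl_Position = uMVP * vec4(aPosition, 1.0);    // one vertex in, one updated vertex out
}
)";

static const char* kFragmentSrc = R"(
#version 330 core
in vec2 vTexCoord;
uniform sampler2D uDiffuse;               // the surface's texture
out vec4 fragColor;
void main() {
    fragColor = texture(uDiffuse, vTexCoord);     // per-fragment color
}
)";
```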

Shaders replace a section of video hardware often referred to as the Fixed Function Pipeline (FFP) – so-called because it performs lighting and texture mapping in a hard-coded manner. Shaders provide a programmable alternative to this hard-coded approach, giving programmers control over this part of the pipeline and a better way to organize their rendering code.


The CPU sends instructions (compiled shading language programs) and geometry data to the graphics processing unit, located on the graphics card. In the vertex shader, the geometry is transformed. If a geometry shader is present and active, it can modify the geometry in the scene. If a tessellation shader is present and active, the geometry in the scene can be subdivided.

The calculated geometry is triangulated and the triangles are broken down into fragment quads (one fragment quad is a 2 × 2 fragment primitive). The fragment quads are modified according to the pixel shader, then the depth test is executed; fragments that pass are written to the screen and might get blended into the frame buffer. The graphics pipeline uses these steps to transform three-dimensional (and/or two-dimensional) data into useful two-dimensional data for display: in general, a large pixel matrix or “frame buffer”.
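A minimal sketch of that hand-off on the CPU side might look like the following, assuming an OpenGL 3.2+ context is already current and a loader such as GLEW has been initialized (both assumptions); error checking is omitted for brevity.

```cpp
#include <GL/glew.h> // assumes GLEW (or any GL loader) and a current GL context

// Compile the vertex and fragment stages and link them into one program
// object that the driver can upload to the GPU.
GLuint buildProgram(const char* vsSrc, const char* fsSrc) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSrc, nullptr);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSrc, nullptr);
    glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);

    glDeleteShader(vs);   // the linked program keeps what it needs
    glDeleteShader(fs);
    return program;       // bind with glUseProgram(program) before drawing
}
```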


The vertex shader is run once for each vertex given to the graphics processor. Its purpose is to transform each vertex’s 3D position in virtual space to the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer).


Vertex shaders are capable of altering properties such as position, color, and texture coordinates, but cannot create new vertices like geometry shaders can. The output of the vertex shader goes to the next stage in the pipeline, which is either a geometry shader if present, or the pixel shader and rasterizer otherwise. Vertex shaders can enable powerful control over the details of position, movement, lighting, and color in any scene involving 3D models.
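This is also where the Vertex Buffer Objects from this post’s title come in: the vertex data that the vertex shader consumes is uploaded once into GPU memory and then streamed through the shader every frame. A rough sketch, reusing the attribute locations assumed earlier:

```cpp
#include <GL/glew.h>
#include <cstddef>   // offsetof

// Interleaved vertex layout matching aPosition (location 0) and aTexCoord (location 1).
struct Vertex { float px, py, pz; float u, v; };

GLuint uploadMesh(const Vertex* verts, int count) {
    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), verts, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);   // position: 3 floats
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
    glEnableVertexAttribArray(1);   // texture coordinate: 2 floats
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, u));

    return vao;   // bind this VAO and call glDrawArrays to run the vertex shader per vertex
}
```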

Geometry shaders are a relatively new type of shader, introduced in Direct3D 10 and OpenGL 3.2 and previously available in OpenGL 2.0+ through extensions. This type of shader can generate new graphics primitives, such as points, lines, and triangles, from the primitives that were sent to the beginning of the graphics pipeline.


Geometry shader programs are executed after vertex shaders. They take as input a whole primitive, possibly with adjacency information. For example, when operating on triangles, the three vertices are the geometry shader’s input. The shader can then emit zero or more primitives, which are rasterized and their fragments ultimately passed to a pixel shader.

Typical uses of a geometry shader include point sprite generation, geometry tessellation (covering a surface with a pattern of flat shapes so that there are no overlaps or gaps), shadow volume extrusion (where the edges forming the silhouette are extruded away from the light to construct the faces of the shadow volume), and single-pass rendering to a cube map. A typical real-world example of the benefit of geometry shaders is automatic mesh complexity modification: a series of line strips representing control points for a curve is passed to the geometry shader, and depending on the complexity required the shader can automatically generate extra lines, each of which provides a better approximation of the curve.
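For instance, the point-sprite case mentioned above can be handled by a geometry shader along these lines (a GLSL 3.30 sketch, again kept as a C++ string; the uHalfSize uniform is an assumption):

```cpp
// Geometry shader sketch: each incoming point is expanded into a screen-aligned quad.
static const char* kGeometrySrc = R"(
#version 330 core
layout(points) in;                             // one input point per invocation
layout(triangle_strip, max_vertices = 4) out;  // emit a small quad for it
uniform float uHalfSize;                       // half the sprite size in clip space
out vec2 gTexCoord;

void main() {
    vec4 c = gl_in[0].gl_Position;             // the single incoming vertex
    gl_Position = c + vec4(-uHalfSize, -uHalfSize, 0.0, 0.0); gTexCoord = vec2(0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( uHalfSize, -uHalfSize, 0.0, 0.0); gTexCoord = vec2(1.0, 0.0); EmitVertex();
    gl_Position = c + vec4(-uHalfSize,  uHalfSize, 0.0, 0.0); gTexCoord = vec2(0.0, 1.0); EmitVertex();
    gl_Position = c + vec4( uHalfSize,  uHalfSize, 0.0, 0.0); gTexCoord = vec2(1.0, 1.0); EmitVertex();
    EndPrimitive();
}
)";
```

It would be compiled as a GL_GEOMETRY_SHADER and attached to the program alongside the vertex and fragment stages.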


Pixel shaders, also known as fragment shaders, compute the color and other attributes of each fragment. Pixel shaders range from always outputting the same color, to applying a lighting value, to performing bump mapping, specular highlights, shadow mapping, translucency and other impressive rendering effects.


They can alter the depth of the fragment for Z-buffering, or output more than one color if multiple render targets are active. In 3D graphics, a pixel shader alone cannot produce very complex effects, because it operates only on a single fragment, without knowledge of the scene’s geometry. However, pixel shaders do know the screen coordinate being drawn, and they can sample the screen and nearby pixels if the contents of the entire screen are passed as a texture to the shader. This technique enables a wide variety of 2D post-processing effects, such as blur, or edge detection/enhancement for cartoon/cel shading. Pixel shaders may also be applied in intermediate stages to any two-dimensional image in the pipeline, whereas vertex shaders always require a 3D model. For example, a fragment shader is the only type of shader that can act as a postprocessor or filter for a video stream after it has been rasterized.
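As a small example of that post-processing use, a fragment shader that blurs the screen might look like the following sketch, assuming the rendered frame has been bound as a texture and that uTexelSize holds the size of one pixel in texture coordinates (both assumptions):

```cpp
// Post-processing fragment shader sketch: 3x3 box blur over the screen texture.
static const char* kBlurFragmentSrc = R"(
#version 330 core
in vec2 vTexCoord;
uniform sampler2D uScreen;     // the previously rendered frame, passed in as a texture
uniform vec2 uTexelSize;       // 1.0 / screen resolution
out vec4 fragColor;

void main() {
    vec4 sum = vec4(0.0);
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            sum += texture(uScreen, vTexCoord + vec2(dx, dy) * uTexelSize);
    fragColor = sum / 9.0;     // average of the pixel and its eight neighbours
}
)";
```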