
Dark side of programming, hard coding, it’s more code now than game

This post covers the definition of a DSL, the applicability of DSLs to AI in particular, the practical side of constructing and deploying DSLs for AI, and the benefits (and beneficiaries) of DSLs for AI. Lastly, we will conclude with some advice from hard experience on the subject.

A Domain-Specific Language is merely a language designed to be highly adept at solving a specific class of problems. Some languages are more specialized than others, some are designed to execute in certain environments, and some are intended to be as widely applicable as possible.

The first major principle that motivates our application of DSLs is code cleanliness and the effect of mapping code directly onto the problem space (versus code which is too generalized, too abstract, or too specific, i.e. "hard coded"). The second major principle that motivates DSL deployment is the elimination of irrelevant concerns. Our third motivating principle is the removal of excess overhead.
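To make the idea concrete, here is a minimal sketch of what such a DSL might look like, interpreted in Python. The rule syntax ("when <stat> <op> <value> then <action>") and the stats and actions are invented for illustration; the point is only that the script maps directly onto the problem space instead of being hard coded.

```python
# A minimal sketch of a hypothetical AI behavior DSL, interpreted in Python.
# The rule syntax ("when <stat> <op> <value> then <action>") is invented for
# illustration and is not taken from any specific engine.

import operator

OPS = {"<": operator.lt, ">": operator.gt, "==": operator.eq}

def parse_rule(line):
    # "when health < 20 then flee" -> (("health", <, 20.0), "flee")
    _, stat, op, value, _, action = line.split()
    return (stat, OPS[op], float(value)), action

def evaluate(rules, npc_state):
    """Return the first action whose condition matches the NPC's state."""
    for (stat, op, value), action in rules:
        if op(npc_state.get(stat, 0), value):
            return action
    return "idle"

script = [
    "when health < 20 then flee",
    "when enemy_distance < 5 then attack",
]
rules = [parse_rule(line) for line in script]
print(evaluate(rules, {"health": 80, "enemy_distance": 3}))  # -> attack
```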

Scripts were developed in the early AI work by Roger Schank, Robert P. Abelson and their research group as a method of representing procedural knowledge. They are very much like frames, only the values that fill the slots must be ordered. A script is a structured representation describing a stereotyped sequence of events in a particular context. Scripts are used in natural language understanding systems to organize a knowledge base in terms of the situations that the system should understand.

The classic example of a script involves the typical sequence of events that occurs when a person dines in a restaurant: finding a seat, reading the menu, ordering from the waitstaff. In script form, these would be decomposed into conceptual-dependency primitives such as MTRANS and PTRANS, which refer to mental transfers [of information] and physical transfers [of location].
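As a rough sketch of the idea (not Schank's actual notation), a script can be represented as an ordered list of events, each tagged with a conceptual-dependency primitive; the event names and the helper below are illustrative only.

```python
# A minimal sketch of how a Schank-style script might be represented:
# an ordered list of slots, each tagged with a conceptual-dependency
# primitive such as PTRANS (physical transfer) or MTRANS (mental transfer).
# The event names and structure here are illustrative, not Schank's notation.

from collections import namedtuple

Event = namedtuple("Event", ["primitive", "actor", "description"])

restaurant_script = [
    Event("PTRANS", "customer", "customer moves to a seat"),
    Event("MTRANS", "customer", "customer reads the menu"),
    Event("MTRANS", "customer", "customer tells the waitstaff the order"),
    Event("PTRANS", "waitstaff", "waitstaff brings the order"),
    Event("ATRANS", "customer", "customer transfers payment"),
    Event("PTRANS", "customer", "customer leaves the restaurant"),
]

def next_expected_event(script, observed_so_far):
    """Given the events observed so far, predict the next event in the script."""
    return script[len(observed_so_far)] if len(observed_so_far) < len(script) else None

print(next_expected_event(restaurant_script, restaurant_script[:2]).description)
```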

Schank, Abelson and their colleagues tackled some of the most difficult problems in artificial intelligence (i.e., story understanding), but ultimately their line of work ended without tangible success. This type of work received little attention after the 1980s, but it was very influential on later knowledge representation techniques, such as case-based reasoning.

Scripts can be inflexible. To deal with this inflexibility, smaller modules called memory organization packets (MOPs) can be combined in a way that is appropriate for the situation. The behavior design is the roadmap for our AI scripting, giving us a clear set of coding elements to complete before its AI takes shape.

Although there are many different methods of designing AI, including sophisticated ones such as neural networks and artificial evolution, with a simple creature like this our tutorial will follow the most basic and standard method used by all stock Unreal AI: conditional decision-making. This is the method that takes a stimulus as a condition or trigger to either do this or that; also known as "if this, then do that", or "if-then" decision-making. A common example of this kind of AI is the AI Script in a Scripted Sequence, but even Bots use this method, just in an extremely complex form.
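A minimal sketch of if-then decision-making might look like the following; the stimuli and actions are invented placeholders rather than anything from Unreal's actual API.

```python
# A minimal sketch of "if this, then do that" decision-making for an NPC.
# The stimuli and actions are invented placeholders, not Unreal's actual API.

def decide(npc):
    if npc["sees_player"] and npc["health"] > 30:
        return "attack"
    elif npc["sees_player"]:
        return "retreat"
    elif npc["heard_noise"]:
        return "investigate"
    else:
        return "patrol"

print(decide({"sees_player": True, "health": 20, "heard_noise": False}))  # retreat
```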

We have some features of our NPC already defined for us: its model, textures, animation and sounds. Normally, a custom character would be started by defining its purpose or goal and then designing those features around it. But for this behavior design, we will work "backwards", starting with those features and finding the purpose and behavior design that fits them.

There are misconceptions about DSL development. DSLs are, by definition, simple and to the point. They eliminate the bulk of the issues that plague generic languages and require far less infrastructure to deploy effectively. They also require much less up-front design and development work.

Programmers in general are lazy and often try to find the simplest approach. This isn't necessarily a vice; after all, technology is a means of maximizing efficiency by decreasing the energy and time required to do something. This is a godsend for game developers, since better technology enables them to produce the best software and interface possible in shorter time spans, given that higher-ups hand them specific deadlines. Scripts are ideal because of their flexibility: programmers can make the necessary changes in time, whereas with hard code it would be nearly impossible.

A look at Prince of Persia: The Forgotten Sands game engine

In my last Game Engine Design class, which was on Halloween, the disappointment of my professor's lack of costume was neutralized when he showed us videos of two Prince of Persia games: The Forgotten Sands and Warrior Within. We were tasked with identifying different aspects of the former's game engine, often criticizing some of its inner mechanics despite the game's blockbuster production values and technological achievement.


Prince of Persia: The Forgotten Sands is a 2010 multi-platform game developed by Ubisoft. Any developer will tell you how taxing it is to design an engine to work with certain consoles; much like car engines, they need to be carefully constructed with those consoles' capabilities and limitations in mind. The version we looked at, to my recollection, was either the PS3 or Xbox 360 version.

My primary grievance with the engine is its sloppily put-together animation layering. Most of what I know about animation layering comes from Uncharted 2 and the hierarchical structure of its character kinematics. How it works is that limbs, hands, feet and other body parts are each animated separately, with triggers switching between animation states according to your character's relation to the world and/or your character's action.
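As a rough illustration of the concept (and not Uncharted 2's or Naughty Dog's actual implementation), each layer can be modelled as its own small state machine that reacts to gameplay triggers independently; the layer names, states and triggers below are made up.

```python
# A rough sketch of animation layering: each body part runs its own state,
# and triggers fired by gameplay switch the state per layer. This illustrates
# the idea only, not Uncharted 2's actual implementation.

class AnimationLayer:
    def __init__(self, name, transitions):
        self.name = name
        self.state = "idle"
        self.transitions = transitions  # {(state, trigger): next_state}

    def fire(self, trigger):
        self.state = self.transitions.get((self.state, trigger), self.state)

layers = {
    "legs":  AnimationLayer("legs",  {("idle", "move"): "run", ("run", "stop"): "idle"}),
    "hands": AnimationLayer("hands", {("idle", "near_ledge"): "grip", ("grip", "release"): "idle"}),
}

# Gameplay code fires triggers; each layer reacts independently.
for trigger in ["move", "near_ledge"]:
    for layer in layers.values():
        layer.fire(trigger)

print({name: layer.state for name, layer in layers.items()})  # legs: run, hands: grip
```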

The issue with Prince of Persia: The Forgotten Sands's animation layering is that it lacks detail and feels rushed. For example, when your character grabs onto ledges or walks along the wall for a few seconds, the hands don't seem to grip anything; it feels like that Game Jam game Surgeon Simulator, where your hands can't grip anything either.

Spider Prince, spider prince, the bad layering and lack of collision make me wince.


A major part of gaming is investment, and it's difficult to invest in your player's actions if something like grabbing onto a ledge doesn't feel solid. Internet critic and entertainer Doug Walker, under his persona the Nostalgia Critic, attributed Tom & Jerry's effectiveness in its craft to how solid the animators could make objects feel, which enhances the slapstick.

This technique also applies to games. Take Ninja Gaiden Sigma, for example: what Prince of Persia: The Forgotten Sands does wrong, Ninja Gaiden Sigma does right. When Ryu climbs, runs along, or jumps off walls, you can feel every one of his steps colliding with force against the wall. An example of a lack of solidity is Sonic Heroes, a game I have loved since my youth; its enemies feel so insubstantial that you can almost breeze through them like air once your characters get strong enough (I may blog about that game's engine some time in the future). Naruto Shippuden: Ultimate Ninja Storm 3 is a game that manages to have both soft and hard collisions. A combat-based game at its core, it gives you strong, medium and weak attacks, and naturally they produce collisions of the same level of force, provided your attack damages your opponent.

It's difficult to convey the force of collisions in a game to people who haven't actually played it, though gaming veterans and people working in the field should be able to identify such deficiencies immediately.

The game's biggest issue is its AI and how incompetent it is. You know how in movies every enemy attacks one at a time rather than all at once? That's this game's AI to a tee. If that's what the programmers were going for, then I'd still protest that they robbed players of a challenge. It doesn't help that the AI moves at a snail's pace, both in traversing towards you and in its attacks. It takes them, no lie, 4 seconds to land a hit on you. Though your attacks are delayed as well, so it all balances out, right? WRONG! That's just bad combat. It's not even satisfying to kill them, because the sound effects make it sound like you're being blocked rather than tearing away at their flesh.


Yeah just stand there and look intimidating, I’m sure that’ll scare the guy who can move himself more than 2 meters per second.

Once again I must refer to Ninja Gaiden Sigma: when your attacks are blocked, you hear a metallic sound effect, but when you land a hit, you hear your weapon tearing flesh from bone or bones breaking. It also helps that the game has brilliantly programmed AI that is nearly as capable as your character, presenting a lot of challenge, a demand for skill on your part and the satisfaction of overcoming that challenge. Which means you'd better bring it in the boss battle.
Seriously, have I hammered it into your brains already? Go play any of the Ninja Gaiden Sigma games.

I've been pretty hard on Prince of Persia: The Forgotten Sands so far. The truth is, despite its shortcomings, its engine has many positive aspects. One of them is its marriage of the camera system and the trigger system. The camera manages to follow your character all over the level, placing itself at dynamic and interesting angles whilst capturing the mood of the environment and situation. This means that when an explosion happens, the camera moves to emphasize the impact.


Perhaps the best example is when the character swings on poles: the camera very subtly moves along with him as he swings, seemingly taking you, the player, along for the ride. In parts where you're assigned an objective through an in-game cutscene, the camera pans to point out where you're supposed to accomplish your task.

Despite the faults of the animation layering, the kinematics are above average by AAA industry standards. The screen-space ambient occlusion is top notch, and the gameplay is most likely as fun as ever, though I wouldn't know since I haven't played it. The modelling, texturing, shader rendering and movement are all top notch as well.


Warrior Within proves to be the superior product despite its technological inferiority; its engine provides much better animation layering, combat kinetics and overall collisions, for all the reasons opposite to those of The Forgotten Sands. The older game simply has more polished mechanics. In this battle for supremacy, the old proved superior to the new; hopefully developers take note of what it means to deliver a polished engine.

Navigation mesh as used in Insomniac games

A navigation mesh, or navmesh, is an abstract data structure utilized in AI applications to aid agents in path-finding through vast spaces. Meshes that don’t map to static obstacles in the environment they model offer the additional advantage that agents with access to the mesh will not consider these obstacles in path-finding, reducing computational effort and making collision detection between agents and static obstacles moot. Meshes are typically implemented as graphs, opening their use to a large number of algorithms defined on these structures.

One of the most common uses of a navigation mesh is in video games, to describe the paths that a computer-controlled character can follow. It is usually represented as a volume or brush that is processed and computed during level compilation.

Games strive for a more immersive and fun experience, and non-playable characters play a huge role in that. Giving NPCs the ability to perceive the world around them and react according to the situation is a fundamental element of a game.

How an AI navigates is often how the design gives personality to an NPC. Designers shouldn't be burdened with movement implementation; they should be free to worry about the higher-level aspects of the game, as it should be. The industry is moving away from heavily scripted AI toward more dynamic, innate movement solutions, where AI usage comes into the system as just a few high-level commands.

NPCs also use the system to scan for targets and feed the results back to the AI logic to monitor threat perception by both the AI and the player. It is also used to determine "spawn" locations in the level. Resistance single-player levels were reused by the multiplayer designers, and various world representations have been used as a result.

Navigation meshes have gained considerable traction recently. The A* algorithm and its variants are typically used for path-finding on the navigation mesh. The following covers Insomniac's experience, implementation and progress; their early path-finding (A*) used volumes as nodes.


The designer laid out the entire level's navigation mesh in Maya, with polygons represented as nodes and edges as connections for A*. Navigation was one of the big bottlenecks on the PPU, with a soft limit of a maximum of 8 navigating NPCs at a given time. An LOD system was put in place so NPCs at a distance did not use nav, and they went to great lengths to stagger the nav queries across frames even with the above limits.
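As a sketch of that representation, a navigation mesh can be stored as polygons (nodes) plus an adjacency list (connections), with traversal cost approximated from polygon centres; the polygon IDs and coordinates below are made-up sample data, not anything from Insomniac's tools.

```python
# A minimal sketch of a navigation mesh represented as a graph for A*:
# each polygon is a node and each shared edge is a connection, as described
# above. The polygon IDs, centers and adjacency here are made-up sample data.

import math

nav_polys = {
    # poly_id: (center_x, center_y)
    0: (0.0, 0.0),
    1: (4.0, 0.0),
    2: (4.0, 4.0),
    3: (8.0, 4.0),
}

# Connections between polygons that share an edge.
adjacency = {
    0: [1],
    1: [0, 2],
    2: [1, 3],
    3: [2],
}

def edge_cost(a, b):
    """Approximate traversal cost as distance between polygon centers."""
    (ax, ay), (bx, by) = nav_polys[a], nav_polys[b]
    return math.hypot(bx - ax, by - ay)

print(edge_cost(1, 2))  # 4.0
```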

It was a different production cycle for the PS3 launch. Three RFOM (Resistance: Fall of Man) levels shared a theme, and they wanted to turn those into one Resistance 2 level. Level streaming took care of rendering, but nav was monolithic for the whole level, and the design planned to increase nav poly density by 3x. They were also making an 8-player co-op mode, which implied enough NPCs to keep 8 players engaged in some encounters.


In the new scheme, tri-edges became the A* graph nodes and polys the connections/links in the search graph, which made customization possible: A* could now be parametrized by abilities like jump-up and jump-down distance. They also introduced hierarchical path-finding, running A* among poly-clusters to calculate a coarse path and then refining the NPC's immediate path within a poly-cluster as needed.

They ran into an inherent disadvantage of the scheme: it doesn't give the shortest path the designer expects. As a result, path caching was added, where an NPC's new path request was checked against its last successful path (a high hit ratio). They also made the cost estimates in A* do more work: instead of edge-midpoint-to-edge-midpoint estimates, they computed the point on each edge that (greedily) minimizes the cost, so the output path hardly needed smoothing.

The cost of A* was by then so low that they took hierarchical path-finding back out; hardly 10% of the navigation shader budget was spent on A*, even in co-op mode with ~100 NPCs using navigation on screen. Path-finding was also changed to consider more parameters, such as the ability to jump up or down various distances, since one navigation mesh is used for NPCs of varied sizes.

A* also considered the NPC's clearance threshold when searching, and the design used this feature to selectively path different AI. They preferred a corner radius (maintaining a radius away from the boundary) and full-frame deferred navigation queries. The gameplay code was changed to use navigation data through queries: the AI synced the previous frame's results and set up queries for the next frame's needs. Code was streamlined to batch all navigation queries, and navigation query data access was isolated for concurrency. This readied the system for shader offloading of navigation processing: a find-point-on-mesh query, given a volume and parameters, outputs the closest point on the mesh while maintaining the NPC's tolerance radius away from the boundary. The navigation-mesh clusters' AABBs are kept in local store for the broad phase.

Each frame, the navigation query results from the previous frame are read, steering is applied, and navigation queries are set up for the next frame during the animation and physics setup update. Animation shader jobs, physics shader jobs and navigation shader jobs are all added; the navigation jobs could run anywhere from after pre-update to the next frame's pre-update.

All navigation queries were full-frame deferred (gameplay reads the results of the previous frame's issued query, steers, and sets up for the next frame if it has to).
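A rough sketch of that full-frame deferral pattern might look like the following; the class and function names are invented for illustration and this is not Insomniac's actual code.

```python
# A rough sketch of full-frame deferred navigation queries: gameplay code
# reads the results of queries issued last frame, then batches new queries
# to be processed for the next frame. Names here are invented for illustration.

class DeferredNavQueries:
    def __init__(self):
        self.pending = []      # queries issued this frame
        self.results = {}      # results computed from last frame's queries

    def issue(self, npc_id, goal):
        self.pending.append((npc_id, goal))

    def read(self, npc_id):
        # Gameplay consumes last frame's answer (may be None on the first frame).
        return self.results.get(npc_id)

    def process(self, find_path):
        # Run at the end of the frame (e.g. on a worker/"shader" job):
        # compute all batched queries, publish results for next frame.
        self.results = {npc_id: find_path(npc_id, goal) for npc_id, goal in self.pending}
        self.pending = []

nav = DeferredNavQueries()
nav.issue("npc_1", (10, 2))
nav.process(lambda npc_id, goal: ["start", goal])  # stand-in pathfinder
print(nav.read("npc_1"))  # ['start', (10, 2)]
```

The pay-off is that all queries can be batched into one job and gameplay code never blocks waiting for a path.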


One mesh is used for AI of all sizes, so string-pulling doesn't maintain the NPC's tolerance distance at corners. This was an issue especially for big NPCs like the Titan in Resistance, which would try to cut across the edge.

The NPC has to not only get to the first bend point but also orient itself towards the next bend point in the path. A navigation shader job computed a bezier approach curve for the bend point.


The start tangent of the curve is computed by doing a swept-sphere check for a specified distance in the NPC's current facing direction. The end tangent is computed by doing a back-projected swept-sphere check from the first bend point in the direction opposite the second bend point. The bezier curve is computed using the above tangents, and the NPC always aims for the mid-point of the curve for a smoother approach.
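Here is a simplified sketch of that approach-curve idea. The swept-sphere checks are replaced with a fixed tangent distance, and the helper names are invented; it only illustrates building a cubic bezier from the NPC's facing tangent and a tangent back-projected from the first bend point, then aiming at the curve's mid-point.

```python
# A simplified sketch of the approach-curve idea: build a cubic Bezier from the
# NPC position toward the first bend point, with one tangent along the NPC's
# current facing and the other back-projected from the bend point away from the
# second bend point. The swept-sphere checks are replaced with a fixed tangent
# distance; the NPC then steers toward the curve's mid-point (t = 0.5).

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def scale(v, s): return (v[0] * s, v[1] * s)
def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return (v[0] / length, v[1] / length)

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t."""
    u = 1.0 - t
    return tuple(
        u**3 * p0[i] + 3 * u**2 * t * p1[i] + 3 * u * t**2 * p2[i] + t**3 * p3[i]
        for i in range(2)
    )

def approach_midpoint(npc_pos, npc_facing, bend1, bend2, tangent_dist=1.5):
    p0 = npc_pos
    p1 = add(npc_pos, scale(normalize(npc_facing), tangent_dist))       # forward tangent
    p2 = add(bend1, scale(normalize(sub(bend1, bend2)), tangent_dist))  # back-projected tangent
    p3 = bend1
    return bezier_point(p0, p1, p2, p3, 0.5)  # NPC aims here for a smooth approach

print(approach_midpoint((0, 0), (1, 0), (5, 0), (5, 5)))
```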

To support a higher volume of NPCs, especially for 8-player co-op, steering used a simple dynamic avoidance scheme in which the shader computes escape tangents for each obstacle it comes across.


Steering spans all escape tangents in a circle around the NPC, and the direction picked is the valid vacant hole closest to the desired direction towards the path's bend point. This simplicity was a trade-off made in order to support a huge number of NPCs running, and it held up just fine, although NPCs could still get stuck, since picking the closest escape tangent could make them run into a wall.
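A simplified version of that avoidance idea is sketched below: sample candidate directions in a circle around the NPC, discard those blocked by a nearby obstacle, and pick the free direction closest to the desired heading. This approximates the escape-tangent scheme for illustration only and is not Insomniac's implementation.

```python
# A simplified sketch of the avoidance idea described above: sample candidate
# directions in a circle around the NPC, discard those blocked by a nearby
# obstacle, and pick the free direction closest to the desired heading.

import math

def pick_direction(npc_pos, desired_angle, obstacles, npc_radius=0.5, samples=16):
    best, best_diff = None, math.inf
    for i in range(samples):
        angle = 2 * math.pi * i / samples
        dir_x, dir_y = math.cos(angle), math.sin(angle)
        probe = (npc_pos[0] + dir_x, npc_pos[1] + dir_y)  # probe one unit ahead
        blocked = any(
            math.hypot(probe[0] - ox, probe[1] - oy) < orad + npc_radius
            for ox, oy, orad in obstacles
        )
        diff = abs(math.atan2(math.sin(angle - desired_angle),
                              math.cos(angle - desired_angle)))
        if not blocked and diff < best_diff:
            best, best_diff = angle, diff
    return best  # None means the NPC is boxed in

# Desired heading is straight ahead (0 rad); an obstacle sits directly in front.
print(pick_direction((0, 0), 0.0, obstacles=[(1.0, 0.0, 0.4)]))
```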

A Change of Pace: Back Alley Brawl

“This bear comes out of nowhere and then slaps us around like we’re his hoes.” – Stan Burdman, professional reviewer


It's easy to comment and blog about professional games with well-made mechanics and competent programming. So let's pick on a lousy PC game by the name of Back Alley Brawl, an incompetently made fighting game that's available to download for free, as no one would pay for this; they wouldn't even accept payment to play it.

Given that the hot topic of my Game Engine Design class this week is A.I., Back Alley Brawl is an example of how NOT to design opponent A.I. One of the most important components of a fighting game is challenge; it's important that your opponent can fight at your level to keep the game interesting. The programmers of the A.I. in this game are the equivalent of parents who let their child wander all over the place. While you're fighting, your opponent will walk around aimlessly and even keep walking into a wall. Someone hasn't heard of the A* algorithm.


Not sure if complete moron or training like the protagonist from Turkish Star Wars.

I apologize for comparing the opponent's A.I. to a child; even a child knows not to walk into a wall. I remember my Introduction to Programming professor Amin Ibrahim commenting on the first day how a child will learn not to put its hand in a fire after getting hurt once, while a program or machine won't learn unless its code tells it otherwise. The first rule of coding, broken by people who could develop a functional game.


Where’s the playability? WHERE IS IT?

It's quite the far cry from Namco Bandai's Dragon Ball Z and Naruto games, where the characters are always facing each other, all their movements are based on a grid, the camera manages to keep both players on screen, and the opponent A.I. doesn't go wandering around. Though it helps that those games were programmed by decent human beings, rather than whichever chimp programmed this. He even inserted himself into the roster, Shyamalan style, along with special guest character Miley Cyrus; another reason to play Namco's fighters is that the guest characters they include in Soul Calibur have had GOOD games.


The reason I chose to blog about this game is that, unfortunately, at our level our GDW games are closer to the quality of this game than to something like Tekken. Whereas Tekken managed to make fighting a bear a popular and plausible contest outside the Russian Federation, this game stands as proof of PC gaming's inferiority to consoles, where you can download Tekken Revolution for free. Sorry if I keep mentioning Namco a lot in this blog; being away from home for college causes withdrawal symptoms.

When you wish upon A*, programmers will wonder what you are.


Unlike sports or board games, the great thing about video games is that you can play them alone thanks to AI, the artificial intelligence that presents you with a challenge by means of opposing enemies. AI is in nearly every game, and with our ever-ascending technological breakthroughs, developers are blessed with the opportunity to further improve it, making it smarter and more capable. One day it will be smart enough to overthrow and enslave us; nothing to worry about, a kid born out of a confusing time paradox and a robot with a funny accent will save us all. Coincidentally related to the previous sentence, one of the many ways to implement AI is with the A* algorithm.


A* is a pathfinding and graph traversal algorithm that allows an object to follow a path toward something. This is done by planning out a traversable path between nodes. It uses a best-first search and finds the shortest (or rather, least costly) path from an initial node to a goal node. It basically follows the path of the lowest expected total cost whilst keeping a sorted priority queue of alternate paths, identifying all possible routes that lead to the goal.

A* takes into account the distance it has already travelled; the g(n) part of the heuristic is the cost from the starting point. The formula for A* is f(n) = g(n) + h(n), where g(n) is the cost of reaching n from the starting node, and h(n) is an estimate of the cost of the cheapest path from n to the goal node.

Take this unweighted graph for instance:


We want a path 5 → v1 → v2 → … → vn → 20.

As such, g(20) = cost(5 → v1) + cost(v1 → v2) + … + cost(vn → 20).

A* generates an optimal solution if h(n) is an admissible heuristic and the search space is a tree: h(n) is admissible if it never overestimates the cost to reach the destination node. It also generates an optimal solution if h(n) is a consistent heuristic and the search space is a graph: h(n) is consistent if, for every node n and every successor n' of n, h(n) <= c(n, n') + h(n').
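Here is a compact sketch of A* on a 2D grid using f(n) = g(n) + h(n), with Manhattan distance as the (admissible) heuristic; the grid layout is sample data, with 0 for open tiles and 1 for blocked ones.

```python
# A compact A* sketch on a 2D grid using f(n) = g(n) + h(n), with Manhattan
# distance as the (admissible) heuristic. Grid layout is sample data:
# 0 = open, 1 = blocked.

import heapq

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # no path exists

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the wall on row 1
```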

Have you ever played a PC game where the computer always knows exactly what path to take, even though the map hasn't been explored yet? Depending on the game, pathfinding that is too good can be unrealistic. Fortunately, this is a problem that can be handled fairly easily.

The answer is to create a separate knownWalkability array for each player and computer opponent. Each array contains information about the areas that the player has explored, with the rest of the map assumed to be walkable until proven otherwise. Using this approach, units will wander down dead ends and make similar wrong choices until they have learned their way around. Once the map is explored, however, pathfinding works normally. F.E.A.R. is an example of a game that uses A* to plan its searches.
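A minimal sketch of that per-agent knownWalkability idea is shown below; the helper names and map data are invented for illustration.

```python
# A sketch of the per-agent knownWalkability idea: each agent keeps its own map
# in which unexplored tiles are assumed walkable; the true map only overrides
# that assumption for tiles the agent has actually seen. Names are illustrative.

def make_known_map(rows, cols):
    # 0 = assumed walkable everywhere until proven otherwise.
    return [[0] * cols for _ in range(rows)]

def explore(known, true_map, position, radius=1):
    """Copy the true walkability of tiles within `radius` of the agent."""
    r, c = position
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(true_map) and 0 <= cc < len(true_map[0]):
                known[rr][cc] = true_map[rr][cc]

true_map = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
known = make_known_map(3, 3)
explore(known, true_map, (0, 0))
print(known)  # only the tiles near (0, 0) reflect the real walls so far
```

The agent would then run A* (like the sketch earlier) over its known map rather than the true map, re-planning as new walls are discovered.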


I've programmed an A* algorithm before; it's a basic but tremendously useful search algorithm that can be used in anything from simple Flash games to AAA titles. It's like the Liam Neeson of algorithms: for the first time in your life you could get people to willingly come after you, well, your game object at least. A programmer can dream, can't he?

P.S. You can thank Brett Gilespe for the joke in the title of this blog; that was a good one.

A brief overview of the Havok Physics Engine

"At 2K Czech, our games demand a physics solution that can scale efficiently and handle highly detailed interactive environments. Having recently moved to the next generation of Havok Physics, we've been blown away by how Havok's new physics technology is able to make highly efficient utilization of all available hardware cores with a very lean runtime memory footprint. This combination allows us to deliver the high quality simulation at the scale we need, and we are really looking forward to making some incredible games with the new technology." – Laurent Gorga, Technical Director at 2K Czech.

Developed by the Irish company Havok, the eponymous Havok Physics is a physics engine designed for video games to allow for real-time interaction between objects in 3D. The engine uses dynamic simulation to allow for ragdoll physics. The company was founded by Hugh Reynolds and Dr. Steven Collins in 1998 in Trinity College Dublin where much of its development is still carried out. Dr. Steven Collins currently acts as course director to Interactive Entertainment Technology in Trinity College Dublin, as well as lecturing in real-time rendering. Havok can also be found in Autodesk 3ds Max as a bundled plug-in called Reactor. A plugin for Autodesk Maya animation software and an extra for Adobe Director’s Shockwave are also available.

Havok offers fast and robust collision detection and physical simulation technology, which is why it has become a go-to physics engine within the games industry and has been used by leading game developers in over 400 launched titles, with many more in development.

Havok Physics is fully multi-threaded and cross-platform optimized for leading game platforms including, Xbox 360™, Playstation® 4, PlayStation®3 computer entertainment system, PC Games for Windows, PlayStation Vita®, Wii™, Wii U™, Android™, iOS, Apple Mac OS and Linux.

In 2008 Havok released details of its new Cloth and Destruction middleware. Cloth deals with simulation of character garments and soft bodies while Destruction provides tools for creation of destructible and deformable environments. Havok Cloth was the company’s most widely adopted and bestselling piece of software to date.

At GDC 2009 Havok showcased the first details of its forthcoming artificial intelligence tools, which will allow for better-performing AI in games without the need for developers to create their own AI models.

Whenever people say "Havok Physics", all I can think of is The Elder Scrolls: Oblivion. One of its major selling points was Havok Physics. Anyone who has played that game knows how messed up Havok Physics can get, and what silly situations you can create with it. Grand Theft Auto 4 also uses Havok Physics.

It was also used throughout the Halo games from Halo 2 onwards. The Halo 3 physics engine runs calculations on every single frame of animation, similarly to the collision detection engine. The engine is capable of calculating, among other things, elasticity on portions of character models and bullet ricochet.

Character models are quite elastic at points, a characteristic that is clearly demonstrated by the Character Stretch Glitch’s presence in the game. The elasticity helps to improve realism at slower speeds. Only some parts of a character’s model are elastic; if you look closely at screenshots of the aforementioned glitch, you will find that the rigid parts of Spartans’ and Marines’ armor don’t stretch.

The physics engine utilizes an optimization found in many video game physics engines: objects that remain at rest for several seconds are temporarily exempted from physics calculations (but not collision detection) until they are disturbed again; this is why floating Crates and Fusion coils can remain floating in the air until the round is restarted or the items are disturbed. An object is considered “disturbed” if it is moved, picked up (in Forge), or if something collides with it.[5] The optimization is likely based on the premise that an object that isn’t moving now isn’t likely to move in the near future unless something moves it or it moves on its own.
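A rough sketch of that sleeping optimization, in the abstract, is shown below. The thresholds and class are invented for illustration and this is not Havok's actual implementation.

```python
# A rough sketch of the "sleeping" optimization described above: a body that has
# been at rest for a few seconds is skipped by the physics step until something
# disturbs it. This illustrates the general technique, not Havok's actual code.

SLEEP_AFTER_SECONDS = 3.0

class RigidBody:
    def __init__(self):
        self.velocity = (0.0, 0.0, 0.0)
        self.rest_time = 0.0
        self.asleep = False

    def disturb(self):
        # Called on collision, pickup, or an applied force.
        self.asleep = False
        self.rest_time = 0.0

    def step(self, dt):
        if self.asleep:
            return  # exempt from physics (collision detection still runs elsewhere)
        if all(abs(v) < 1e-3 for v in self.velocity):
            self.rest_time += dt
            if self.rest_time >= SLEEP_AFTER_SECONDS:
                self.asleep = True
        else:
            self.rest_time = 0.0
        # ... integrate forces and velocity here ...

body = RigidBody()
for _ in range(200):          # ~3.3 seconds at 60 Hz
    body.step(1.0 / 60.0)
print(body.asleep)            # True: the body is now skipped until disturbed
```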

Havok has announced the launch of the third major iteration of its Havok Physics technology, which features "significant technical innovations" in performance, memory utilization and usability, and is a "major leap forward" for in-game physics simulation. The release specifically targets next-generation consoles, mobile devices and PCs, with full compatibility and support for current devices.

According to Andrew Bond, Vice President of Technology for Havok, this version has resulted in a "new engine core built around fully continuous simulation that enables maximum physical fidelity with unprecedented performance speeds. Beta versions of the technology have been in the hands of a number of leading developers for some time and we have seen dramatic performance gains with simulations running twice as fast or more, and using up to 10 times less memory. Additionally the new core's performance is extremely predictable, eliminating performance spikes."