Dark side of programming: hard coding, or why it’s more code now than game

This post covers the definition of a DSL, the applicability of DSLs to AI in particular, the practical side of constructing and deploying DSLs for AI, and the benefits (and beneficiaries) of DSLs for AI. Lastly, it concludes with some advice from hard experience on the subject.

A Domain-Specific Language is simply a language designed to be highly adept at solving a specific class of problems. Some languages are more specialized than others, some are designed to execute in certain environments, and some are intended to be as widely applicable as possible.
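
To make this concrete, here is a minimal sketch of what a tiny DSL might look like; the “when … do …” syntax, the rule names, and the interpreter are all invented for this example, not taken from any real system:

```python
# A hypothetical one-line-per-rule DSL for guard behavior:
#   when <stimulus> do <action>
DSL_SOURCE = """
when sees_player do attack
when hears_noise do investigate
when low_health do flee
"""

def parse_rules(source):
    """Parse 'when X do Y' lines into (stimulus, action) pairs."""
    rules = []
    for line in source.strip().splitlines():
        _, stimulus, _, action = line.split()
        rules.append((stimulus, action))
    return rules

def decide(rules, stimuli):
    """Return the action of the first rule whose stimulus is active."""
    for stimulus, action in rules:
        if stimulus in stimuli:
            return action
    return "patrol"  # default behavior when nothing triggers

rules = parse_rules(DSL_SOURCE)
print(decide(rules, {"hears_noise"}))  # -> investigate
```

The point is that the DSL source reads directly in the vocabulary of the problem (stimuli and actions), while the general-purpose plumbing is hidden in a small interpreter.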

The first major principle that motivates our application of DSLs is code cleanliness and the effect of mapping code directly into the problem space (versus code that is too generalized, too abstract, or too specific, i.e. “hard coded”). The second major principle that motivates DSL deployment is the elimination of irrelevant concerns. Our third motivating principle is removing excess overhead.

Scripts were developed in early AI work by Roger Schank, Robert P. Abelson, and their research group as a method of representing procedural knowledge. They are very much like frames, except that the values filling the slots must be ordered. A script is a structured representation describing a stereotyped sequence of events in a particular context. Scripts are used in natural language understanding systems to organize a knowledge base in terms of the situations that the system should understand.

The classic example of a script involves the typical sequence of events that occurs when a person dines in a restaurant: finding a seat, reading the menu, ordering from the waitstaff. In script form, these would be decomposed into conceptual primitives, such as MTRANS and PTRANS, which refer to mental transfers (of information) and physical transfers (of location).
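
As a loose sketch (simplified, not Schank and Abelson’s actual notation), a restaurant script can be modeled as an ordered sequence of events tagged with conceptual primitives, which a story-understanding system can use to infer unstated events:

```python
# A simplified restaurant script: an ordered sequence of events,
# each tagged with a conceptual primitive (PTRANS = physical transfer,
# MTRANS = mental transfer of information, ATRANS = transfer of possession).
RESTAURANT_SCRIPT = [
    ("PTRANS", "customer moves to a seat"),
    ("MTRANS", "customer reads the menu"),
    ("MTRANS", "customer tells order to waitstaff"),
    ("PTRANS", "waitstaff brings the order"),
    ("ATRANS", "customer pays the bill"),
]

def fill_gaps(observed):
    """Infer unstated events: anything in the script that comes before
    the last observed event is assumed to have happened."""
    last = max(RESTAURANT_SCRIPT.index(e) for e in observed)
    return [e for e in RESTAURANT_SCRIPT[: last + 1] if e not in observed]

# If a story only mentions ordering, the script lets us infer that the
# customer also found a seat and read the menu.
seen = [("MTRANS", "customer tells order to waitstaff")]
print(fill_gaps(seen))
```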

Schank, Abelson, and their colleagues tackled some of the most difficult problems in artificial intelligence (i.e., story understanding), but ultimately their line of work ended without tangible success. This type of work received little attention after the 1980s, but it was very influential on later knowledge representation techniques, such as case-based reasoning.

Scripts can be inflexible. To deal with this inflexibility, smaller modules called memory organization packets (MOPs) can be combined in a way that is appropriate for the situation.

The behavior design is the roadmap for our AI scripting, giving us a clear set of coding elements to complete before the AI takes shape.

Although there are many different methods of designing AI, including sophisticated approaches such as neural networks and artificial evolution, for a simple creature like this our tutorial will follow the most basic and standard method used by all stock Unreal AI: conditional decision-making. This is the method that takes stimuli as constant conditions or triggers to do one thing or another, also known as “if this, then do that”, or “if-then” decision-making, as sketched below. A common example of this kind of AI is the AI Script in a Scripted Sequence, but even Bots use this method, just in an extremely complex form.
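
As a rough illustration (plain Python rather than actual UnrealScript), one tick of if-then decision-making might look like this, with invented stimuli and actions:

```python
def npc_think(sees_player, hears_noise, health):
    """Pick one action per tick with plain if-then rules:
    each stimulus is a condition that triggers an action."""
    if sees_player and health > 25:
        return "attack"
    elif sees_player:          # seen, but too hurt to fight
        return "flee"
    elif hears_noise:
        return "investigate"
    else:
        return "patrol"

# One decision per game tick, driven purely by the current stimuli.
print(npc_think(sees_player=True, hears_noise=False, health=10))  # -> flee
```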

We have some features of our NPC already defined for us: its model, textures, animation, and sounds. Normally, a custom character would be started by defining its purpose or goal and then designing those features around it. But for this behavior design, we will work “backwards”: starting with those features and finding the purpose and behavior design that fit them.

There are common misconceptions about DSL development, but DSLs are, by definition, simple and to the point. They eliminate the bulk of the issues that plague generic languages and require far less infrastructure to deploy effectively. They also require much less up-front design and development work.

Programmers in general are lazy and often try to find the simplest approach. This isn’t necessarily a vice; after all, technology is a means to maximize efficiency by decreasing the energy and time required to do something. This is a godsend for game developers, as better technology can enable them to produce the best software and interface possible in shorter time spans, given that higher-ups impose specific deadlines. Scripts are ideal because of their flexibility: programmers can make the necessary changes in time, whereas with hard-coded logic it would be nearly impossible.

The cinematics of Arkham Origins: lessons learned from making both a movie and a game

I went to MIGS this weekend and had a blast. Being surrounded by all these games, as well as fellow students, upcoming developers, and professionals from major studios, is quite frankly a dream come true.

My undisputed favorite part was the presentation by Ben Mattes of Warner Bros. Games Montreal. He talked about making a movie and a game at the same time, sharing the team’s experiences creating the cinematics of Arkham Origins.

We all saw the TV spots and trailers; those CG cutscenes looked so visually amazing that at first glance I honestly thought they were live action.

Naturally the process was very difficult. According to their stories, they only had from late last year to create everything, which is a very tight schedule. That wasn’t even the worst of it. Given that they were telling a story, they naturally had to follow a script. The problem was that the script wasn’t readily available to them from the start, as you’d expect. No, the script was written, reviewed, and approved in increments for the sake of editing flexibility, which left Mr. Mattes’ team at a disadvantage with the schedule. Considering how serious WB & DC are about their characters, it’s not as if WB Games could take any liberties of the sort. Anything having to do with the story and characters began and ended with the property owners; the rest was left to the cinematic cutscene developers.

In order to properly animate the characters of the game, they made extensive use of motion capture, shooting everything at a studio with an army of stuntmen and stuntwomen enacting the actions of the characters. Everything from Batman’s martial arts to the Joker’s over-the-top body language to Copperhead’s movements was done with motion capture. Speaking of Copperhead, things like climbing on walls were simulated with walls and rails that they built. For every movement that required a specific environment, the team built one in order to properly capture the right animations.

Indeed, they put in more effort than you would imagine, and of course it was a difficult task given the resources they had to gather. They had to go through the trouble of casting each motion capture actor to perfectly suit their role; in particular, they had to find a large man to play Bane. Developers don’t just grab people off the street for this: to be hired to do motion capture, you need to be a credible actor and/or stunt person. I even met a motion capture actor at MIGS who confirmed this. Like actors in movies, motion capture actors have schedules that they and the developers need to coordinate. This was a huge problem for the team, given the issue of getting the script on time.

There is a faster method for creating these cutscenes: an alternative to motion capture is performance capture, a recording method that encompasses body motion capture, facial motion capture, and voice recording. The problem is, as you’d expect, that it’s far too expensive.

Fortunately, the long way proved to be much more ideal in the aesthetics department. The voice acting was recorded separately with expert voice actors such as Troy Baker as the Joker. As for the facial rigging, they did that with blend shapes, changing the facial expressions manually in Maya by interpolating with Catmull-Rom splines between 9 different expressions.
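
As a hedged sketch of the underlying math (not the studio’s actual tooling), Catmull-Rom interpolation produces a smooth curve through keyed values; here the keys stand in for hypothetical blend-shape weights:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom spline segment between p1 and p2 at t in [0, 1].
    The curve passes exactly through p1 (t=0) and p2 (t=1)."""
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3
    )

# Hypothetical blend-shape weights for one facial control at four keyframes.
keys = [0.0, 0.8, 0.3, 1.0]

# Sample the in-between expression weights between the middle two keys.
for i in range(5):
    t = i / 4.0
    print(round(catmull_rom(*keys, t), 3))
```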

This ended up working better because they managed to avoid the uncanny valley while retaining the exaggerated expressions of comic book characters.

They captured all these movements with the use of a virtual camera. But it’s not a traditional virtual camera created in Maya and exported into the engine. The animators used a portable camera that shot the motion capture set, projecting the objects and animations into a virtual space. Like a regular camera, it’s handled and moved into position by a camera operator to get the exact angle they want. It’s barely different from traditional filmmaking.

Arkham Origins is one of the few games this year that made use of pre-rendered cinematics, which are higher quality but take up more disk space. After all the scenes are shot, they take them into the engine and composite them in order to have… drumroll please… SHADERS! Adding lighting effects, dust particles, and pyrotechnics creates a livelier, more realistic environment.

The lengths the animators went to in creating their cutscenes are no different from how regular films are shot: they hire actors to perform in front of a camera handled by a camera operator, they need to follow the script, and they take the scenes and add effects later in post-production. It’s uncanny how much effort they put in given the number of obstacles they encountered, and producing what they did at that caliber is to be commended. I think these cutscenes have better animation than most Pixar movies.

My only disappointment is that there wasn’t enough time to ask him questions; I had tonnes.