
The cinematics of Arkham Origins, lessons learned from making both a movie and a game

I went to MIGS this weekend and had a blast. Being surrounded by all these games, as well as fellow students, up-and-coming developers and professionals from major studios, is quite frankly a dream come true.


My undisputed favorite part was the presentation by Ben Mattes of Warner Bros. Games Montréal, in which he spoke about his team's experiences creating the cinematics of Arkham Origins: making a movie and a game at the same time.


We all saw the TV spots and trailers; those CG cutscenes looked so visually amazing that I honestly thought it was a live action movie at first glance.


Naturally the process was very difficult. According to their stories, the team only had from late last year to create everything, which is a very tight schedule, and that wasn't even the worst of it. Since they were telling a story, they naturally had to follow a script. The problem was that the script wasn't available to them in full from the start, as you'd expect. Instead, it was written, reviewed and approved in increments for the sake of editing flexibility, which left Mr. Mattes' team at a disadvantage on an already tight schedule. Considering how protective WB and DC are of their characters, WB Games couldn't take any liberties of the sort: anything having to do with the story and characters began and ended with the property owners, and the rest was left to the cinematic cutscene developers.


In order to properly animate the characters of the game, they made extensive use of motion capture, shooting everything at a studio with an army of stuntmen and stuntwomen enacting the characters' actions. Everything from Batman's martial arts to Joker's over-the-top body language to Copperhead's movements was done with motion capture. Speaking of Copperhead, actions like climbing on walls were simulated with walls and rails the team built; for every movement that required a specific environment, they built a set piece in order to capture the right animation.

Indeed, they put in more effort than you'd imagine, and of course it was a difficult task given the resources they had to gather. They had to cast each motion capture actor to perfectly suit their role; in particular, they had to find a very large man to play Bane. Developers don't just grab people off the street for this; to be hired for motion capture, you need to be a credible actor and/or stunt performer. I even met one at MIGS who told me as much. Like actors in movies, motion capture actors have schedules that they and the developers need to coordinate, which was a huge problem given the trouble getting the script on time.


There is a faster method to create these cutscenes: an alternative to motion capture is performance capture, a recording method that encompasses body motion capture, facial motion capture and voice recording simultaneously. The problem, as you'd expect, is that it's far too expensive.

Fortunately the long way proved to be more ideal in the aesthetics department. The voice acting was recorded separately with expert voice actors such as Troy Baker as Joker. As for the facial animation, they used blend shapes, adjusting the expressions manually in Maya and interpolating between nine different key expressions using Catmull-Rom splines.
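That interpolation step can be sketched in Python. The blend-shape weights below are hypothetical, but the Catmull-Rom formula itself is the standard one: given four control values, it produces a smooth curve that passes through the middle two.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation: returns a value between p1 and p2 for t in [0, 1].

    The curve passes exactly through p1 (at t=0) and p2 (at t=1), with p0 and
    p3 shaping the tangents, which is what makes it handy for keyed poses.
    """
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t
    )

# Hypothetical blend-shape weights for one facial control at four key poses:
keys = [0.0, 0.2, 0.9, 0.4]
# Sample halfway between the second and third key poses:
weight = catmull_rom(*keys, 0.5)
```

Because the spline passes through its keys, an animator stepping through the nine expressions always hits each pose exactly while the in-betweens stay smooth.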


This ended up working better, because they managed to avoid the uncanny valley and retain the exaggerated expressions of comic book characters.

They captured all these movements with the help of a virtual camera. But it's not a traditional virtual camera created in Maya and exported into the engine: the animators used a portable camera that shot the motion capture set, projecting the objects and animations into a virtual space. Like a regular camera, it's handled and moved into position by a camera operator to get the exact angle they want. It's barely different from traditional filmmaking.


Arkham Origins is one of the few games this year that made use of pre-rendered cinematics, which are higher quality but take up more disk space. After all the scenes are shot, they take them into the engine and composite them in order to apply…..drumroll please…… SHADERS! They add lighting effects, dust particles and pyrotechnics to create a more lively and realistic environment.


The lengths the animators went to in creating their cutscenes are no different from how regular films are shot: they hire actors to perform in front of a camera handled by a camera operator, they follow a script, and they take the scenes and add effects later in post-production. It's uncanny how much they accomplished given the obstacles they encountered, and producing work of that caliber is to be commended. I think these cutscenes have better animation than most Pixar movies.

My only disappointment is that there wasn't enough time to ask him questions. I had tonnes.

Shaders, the 3D photoshop Part 2

In my previous blog post, I went over the many algorithms used in both 2D and 3D computer graphics and talked about how they are essentially the same. We'll use a screenshot from my game Under the Radar that I edited in Photoshop, before and after respectively.

[Screenshot: before editing]

[Screenshot: after editing]

Drop shadows in Photoshop are the same idea as shadow mapping, which checks whether a point is visible from the light or not. If a point is visible from the light then it's obviously not in shadow; otherwise it is. The basic shadow mapping algorithm can be described as briefly as this:

– Render the scene from the light's view and store the depths as a shadow map

– Render the scene from the camera and compare depths; if the current fragment's depth is greater than the stored shadow depth, the fragment is in shadow
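The depth comparison in the second step can be sketched in plain Python. In a real renderer this runs per pixel in the fragment shader; the bias term here is a common addition (not mentioned above) to avoid self-shadowing artifacts.

```python
def is_in_shadow(fragment_depth, shadow_map_depth, bias=0.005):
    """A fragment is shadowed if something sits closer to the light than it does.

    fragment_depth:   the fragment's depth as seen from the light
    shadow_map_depth: the depth stored in the shadow map at that position
    bias:             small offset to avoid self-shadowing ("shadow acne")
    """
    return fragment_depth - bias > shadow_map_depth

# Nothing between the light and the fragment -> lit:
lit = not is_in_shadow(0.40, 0.40)
# An occluder was recorded at depth 0.25, closer to the light -> shadowed:
shadowed = is_in_shadow(0.40, 0.25)
```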

In some instances, drop shadows are used to make objects stand out from the background with an outline; in shaders this is done with Sobel edge filters.

The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image.

In theory at least, the operator consists of a pair of 3×3 convolution kernels. One kernel is simply the other rotated by 90°. This is very similar to the Roberts Cross operator.

These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined together to find the absolute magnitude of the gradient at each point and the orientation of that gradient.
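The Gx/Gy combination described above can be sketched in a few lines of Python. This is an illustrative CPU version with the standard Sobel kernels; in a shader the same sum would run per fragment by sampling neighbouring texels.

```python
import math

# Standard Sobel kernels: GX responds to vertical edges, GY to horizontal ones.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, r, c):
    """Absolute gradient magnitude at interior pixel (r, c) of a grayscale image."""
    gx = gy = 0.0
    for i in range(3):
        for j in range(3):
            pixel = img[r + i - 1][c + j - 1]
            gx += GX[i][j] * pixel
            gy += GY[i][j] * pixel
    return math.sqrt(gx * gx + gy * gy)

# A hard vertical edge between columns 1 and 2:
image = [[0, 0, 1, 1, 1] for _ in range(5)]
edge_strength = sobel_magnitude(image, 2, 2)   # sitting on the edge
flat_strength = sobel_magnitude(image, 2, 3)   # inside the flat region
```

Thresholding the magnitude then gives the outline: pixels with a strong response are edges, everything else is background.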

In Photoshop, filters like Emboss are used to fake surface relief and alter the look of an image. The equivalent in shaders is known as normal mapping. Normal maps are images that store the direction of surface normals directly in the RGB values of the image. They are much more accurate than bump maps: rather than only simulating a pixel being displaced along a single line, they can simulate that pixel being tilted in any direction, in an arbitrary way. The drawback is that unlike bump maps, which can easily be painted by hand, normal maps usually have to be generated in some way, often from higher resolution geometry than the geometry you're applying the map to.

Normal maps in Blender store a normal as follows:

  • Red maps from (0-255) to X (-1.0 – 1.0)
  • Green maps from (0-255) to Y (-1.0 – 1.0)
  • Blue maps from (0-255) to Z (0.0 – 1.0)

Since tangent-space normals all point towards the viewer, negative Z-values need not be stored; an alternative convention instead maps only the blue values (128–255) to (0.0 – 1.0). The latter convention is used in Doom 3, for example.
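The Blender mapping listed above can be sketched as a small decode function (the function name is mine, not from any particular tool):

```python
import math

def decode_normal(r, g, b):
    """Decode an 8-bit RGB normal-map texel into a unit-length normal vector."""
    x = r / 255.0 * 2.0 - 1.0   # red:   0-255 -> -1.0 .. 1.0
    y = g / 255.0 * 2.0 - 1.0   # green: 0-255 -> -1.0 .. 1.0
    z = b / 255.0               # blue:  0-255 ->  0.0 .. 1.0 (negative Z not stored)
    length = math.sqrt(x * x + y * y + z * z) or 1.0
    return (x / length, y / length, z / length)

# The typical "flat" normal-map colour (light purple) decodes to roughly +Z,
# i.e. a normal pointing straight out of the surface:
nx, ny, nz = decode_normal(128, 128, 255)
```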

Those are some of the major shader effects that parallel Photoshop effects; there's also color adjustment, which can be done with RGB-to-HSL conversion shaders, along with other sorts of effects.

Shaders, the 3D photoshop

The simplest way to describe shaders is that they are the Photoshop of 3D graphics: both are used to create effects that enhance lighting and mapping, to make images more vivid and lively, and to give bad photographers, artists and modelers a chance to redeem their miserable work.

Perhaps the greatest thing they have in common is the algorithms used to execute their operations; they're not just similar, they're the exact same math operations.


Their primary difference is that Photoshop is used to manipulate 2D images while shaders alter rendered 3D scenes; however, both kinds of images are made up of pixels.


First, the image must be processed, but how? We must define a generic method to filter the image.

[Image: example 3×3 convolution kernel]

As you can see, the elements in the kernel must sum to 1. If they don't, we normalize by dividing every element by their sum, in the same way we normalize a vector.

The kernel's central element, which in the example above is 6, is placed over each source pixel in turn; that pixel is then replaced with a weighted sum of itself and the pixels nearby.
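That kernel pass can be sketched in Python. This is a minimal sketch that normalizes a square kernel first and only filters interior pixels, leaving the border untouched; all names are mine.

```python
def convolve(img, kernel):
    """Replace each interior pixel with the kernel-weighted sum of its neighbourhood."""
    ksum = sum(sum(row) for row in kernel)
    norm = [[v / ksum for v in row] for row in kernel]   # weights now sum to 1
    h, w, k = len(img), len(img[0]), len(kernel) // 2
    out = [row[:] for row in img]                        # border pixels stay as-is
    for r in range(k, h - k):
        for c in range(k, w - k):
            out[r][c] = sum(
                norm[i][j] * img[r + i - k][c + j - k]
                for i in range(len(kernel))
                for j in range(len(kernel))
            )
    return out

# A 3x3 box-blur kernel; blurring a constant image changes nothing:
blurred = convolve([[7.0] * 5 for _ in range(5)], [[1, 1, 1] for _ in range(3)])
```

Because the weights sum to 1, the filter never brightens or darkens the image overall, which is exactly why the normalization step matters.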

That’s how it works for images normally, but what about in shaders?

Normally we do forward rendering, a method that has been in use since the very beginning of polygon-based 3D rendering. The scene is drawn in several passes: the scene is first culled against the view frustum, and each culled renderable is then drawn with the base lighting component (ambient, light probes, etc.).


The problem is that fragment/pixel shaders output a single color at a time; a fragment shader has no access to its neighbours' results, so convolution is not directly possible.

The solution: the first pass is stored in a Frame Buffer Object (i.e. all the color ends up in a texture), and in a second pass we can sample any pixel value in that texture!

Digital images are created to be displayed on our computer monitors. Due to the limits of human vision, these monitors support up to 16.7 million colors, which translates to 24 bits. Thus, it's logical to store numeric images to match the color range of the display. For example, famous file formats like BMP or JPEG traditionally use 16, 24 or 32 bits per pixel.


Each pixel is composed of 3 primary colours: red, green and blue. So if a pixel is stored as 24 bits, each component value ranges from 0 to 255. This is sufficient in most cases, but such an image can only represent a 256:1 contrast ratio, whereas a natural scene in sunlight can expose a contrast of 50,000:1. Most computer monitors have a specified contrast ratio between 500:1 and 1000:1.

High Dynamic Range (HDR) involves the use of a wider dynamic range than usual. That means that every pixel represents a larger contrast and a larger dynamic range. Usual range is called Low Dynamic Range (LDR).


HDR is typically employed in two applications: imaging and rendering. High Dynamic Range Imaging is used by photographers and movie makers; it's focused on static images where you have full control and unlimited processing time. High Dynamic Range Rendering focuses on real-time applications like video games and simulations.
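Either way, the wide-range values eventually have to be squeezed into the display's narrow range. The post doesn't name an operator, but a common choice is Reinhard tone mapping; a minimal sketch under that assumption:

```python
def reinhard(luminance):
    """Reinhard tone mapping: compresses [0, inf) HDR luminance into [0, 1) LDR."""
    return luminance / (1.0 + luminance)

def to_8bit(luminance):
    """Quantize a tone-mapped value to a displayable 0-255 channel value."""
    return round(reinhard(luminance) * 255)

# Very bright HDR values still fit on screen instead of clipping to white,
# while mid-range values keep usable detail:
sun = to_8bit(50000.0)
shade = to_8bit(0.5)
```

The curve is monotonic and never reaches 1.0, so extreme highlights compress gracefully instead of all saturating to the same white.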


Since this is getting quite long, I'll have to explain the rest in another blog. So stay tuned for part two, where we'll go over some of the effects of Photoshop and shaders, how they're the same, and the algorithms behind them.

The art of “Art of Fighting”

Art of Fighting Anthology is a compilation of the following games: Art of Fighting, Art of Fighting 2, and Art of Fighting 3: The Path of the Warrior, released by SNK Playmore (formerly SNK) on the PlayStation 2.


The first game was released in 1992 on the Neo Geo arcade system and later ported to several home consoles like the Sega Genesis, SNES and the Neo Geo's own Neo Geo CD. The versions in this compilation are the original arcade versions, so anything I say here is in regards to those.

Released a year after Street Fighter II, it was not much different from Capcom's juggernaut fighter, but it did offer several innovations, including taunting, the use of large character sprites and depicting characters getting wounded during fights. Art of Fighting 2 doesn't differ much from its predecessor, given the mere two-year gap and no generational leap, aside from slightly more fluid and detailed animation, so I'll be talking about the first two games together until I get to the third.


At the time, the game contained some of the biggest sprites in any game. One interesting feature is the camera's ability to zoom in and out: when the two opponents are as far apart as the game allows, the camera zooms out, and it zooms in the closer the two fighters get. The sprite animations are essentially the same as those of Street Fighter II, though stiffer in contrast to Street Fighter II's much smoother movement.

[Screenshot: the game zoomed in]

[Screenshot: the game zoomed out]
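A distance-driven zoom like the one described could be sketched like this; all the names and constants are hypothetical illustrations, not taken from the actual game:

```python
def camera_zoom(fighter_a_x, fighter_b_x, min_zoom=1.0, max_zoom=2.0, full_view=400.0):
    """Zoom in as the fighters close distance, zoom out as they separate.

    full_view is the (hypothetical) separation at which the camera is fully
    zoomed out; min_zoom/max_zoom bound the zoom factor.
    """
    distance = abs(fighter_a_x - fighter_b_x)
    t = min(distance / full_view, 1.0)            # 0 = touching, 1 = max separation
    return max_zoom - t * (max_zoom - min_zoom)   # interpolate between zoom levels

close_zoom = camera_zoom(100.0, 120.0)   # fighters close together: near max zoom
far_zoom = camera_zoom(0.0, 400.0)       # fully separated: zoomed all the way out
```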

The most interesting animation tidbit in this game is the characters getting wounded as they lose health: when you look at the characters' faces you can see blemishes such as blood and swelling. I like this feature because it realistically portrays the toll of an actual fight.


This is one of the few fighting games that does this, and I'm surprised more fighters don't utilize that detail, especially with how far games have progressed in graphics and animation. Sure, Street Fighter II does depict the characters as severely battered in the aftermath, but only in the character portraits, and only the loser is scarred. You may also bring up Mortal Kombat, but other than pools of blood flying off the characters with each blow, their skin never looks punctured and their features remain intact without swelling, so it's still ridiculous. For the record, UFC games don't count, since they're sports simulators, not fighting games.

I've stated before, in my blog on Batman: Arkham City, how adding tiny details like torn cloth enhances the mood and gives a game a more cinematic feel. That detail does a lot of service in the Art of Fighting series, as the games aim to be like movies in their story mode. Before each fight in story mode, your character engages the opponent in dialogue, accompanied by minor additional animations such as throwing something or striking a pose. Let's not forget the zoom camera mentioned earlier. All these features and animations make the game cinematic despite its technological immaturity. By today's standards it may not seem like much, but back then it was really something to be fascinated by.


I said fascinating, not well written.

Art of Fighting 3: The Path of the Warrior radically differentiated itself from its predecessors. It still utilizes traditionally animated, hand-drawn sprites, but this time the game combines 2D sprites with motion capture technology and more computer graphics, allowing for more fluid and believable animation and movement.

When you look at the characters, you can see that this installment contains more frames of animation: when a character kicks, you see the leg move all the way through, as opposed to only three frames depicting rapid movement, as in Persona 4: Arena.


The introduction sequence also contains some interesting animation. The main characters are shown performing their fighting movesets, and you can see "slow in and slow out", one of the 12 principles of animation, on display. Sadly, that principle isn't displayed during gameplay. Also in the title sequence, the characters are depicted in what I first thought were polygon-rendered computer graphics, but they were really regular 2D animations, which is very impressive on their part.


Unfortunately, many features of the previous two games, like the zoom camera and getting wounded as you lose health, are omitted in the third installment. However, the game makes up for it with very beautiful, animated backgrounds; in the Quixotec Temple stage you'll bear witness to wonderfully animated waterfalls and the ripples they raise in the lake below.


Another of the 12 principles, secondary action, is on display in that stage: the ripples caused by the waterfalls.

The games have always had animated backgrounds to make the gameplay livelier, but the third game's are the best of the trilogy by far, as they rightfully should be, given that it's the latest and most advanced entry.

I'm sure people are wondering why I'm talking about an old, only semi-classic game series that doesn't contain the computer graphics we're studying. It's the same reason filmmakers study old movies: in many old movies, filmmakers used practical effects and editing tricks to get around their limitations, and I feel the developers of the first two Art of Fighting games had the same mindset. To make their game as dynamic as they could on the limited technology of the time, they used a camera that zoomed in and out to copy what filmmakers do when they shift between wide-angle and regular lenses. They added subtle blemishes to simulate what would happen in a real fight, and used sprites and dialogue boxes, in place of animated cutscenes, to let a story unfold.


It's important to take such things into account in order to appreciate how far we've come. Much like early filmmakers, our game-animating "forefathers" weren't lucky enough to have the technology we do today. We are limited like them, but in terms of time rather than technology, so we may have to resort to subtle hints and tricks to enhance a game's aesthetics. Remember, animation isn't just about what is seen; it's also about the effect it gives.

Batman Arkham City: An animation milestone

Batman: Arkham City is a 2011 game released for the PlayStation 3 and Xbox 360, developed by Rocksteady Studios and published by Warner Bros. The game uses computer-generated graphics, as most video games nowadays do. The environments are very vibrant and detailed, and the characters look very realistic. Every now and then the game is interrupted for a cutscene, and there is almost no visual difference between the two; the cutscenes are just as well rendered, a true testament to this game's graphics.

As with the source material, the graphics are stylized with a very gothic and dark atmosphere. The city is designed to give the feeling of a dangerous slum where the inmates run the asylum, where no one but yourself is to be trusted and survival is achievable only by the most capable and dangerous individuals. Even though it's a huge environment, you still feel isolated, as if it's quiet and anything could happen in a heartbeat.

When you fight enemies, you use the Freeflow combat system, in which the player's attacks gravitate towards the nearest opponent. You can also use the triangle button to counter enemy attacks; how you counter depends on your character's position as well as the enemy's method of attack. I bring it up because I find it impressive that they animated so many different attacks, with nearly every predicament taken into account.
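The nearest-opponent targeting described above might look something like this in a simplified 2D sketch; the names and positions are hypothetical, not Rocksteady's actual implementation:

```python
import math

def nearest_enemy(player, enemies):
    """Pick the enemy closest to the player: the target an attack 'gravitates' to."""
    return min(enemies, key=lambda enemy: math.dist(player, enemy))

player = (0.0, 0.0)
enemies = [(5.0, 1.0), (2.0, 2.0), (-8.0, 3.0)]
target = nearest_enemy(player, enemies)
```

A real system would also weigh the direction the player is pushing the stick, not just raw distance, which is part of why the combat feels so deliberate.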

While playing, you'll notice little details on your character, like bits of Batman's cape ripping off as you progress. I find it a nice touch because it makes the game more realistic and cinematic; it shows the struggles the player has endured and how critical the situation is.

If you look closely at the characters, you'll notice the attention to detail: wounds, sweat, dirt and other blemishes that support the game's gritty feel.

The game's setting of Arkham City is much larger than Arkham Asylum in the game's predecessor; it's been said to be about five times the virtual footprint of Arkham Island. The city shelters many villains from Batman's rogues gallery, and because of this, areas in the game are designed according to each character's theme and motif. For example, on Joker's turf you'll see graffiti on the buildings that fits Joker's persona: bright and clownish, yet violent and haunting. The enemies themselves are also varied, with every group wearing clothing coordinated with their affiliation; Joker's henchmen go around in face paint and colorful wigs, while Dr. Hugo Strange's underlings include guards in traditional riot-squad and military-style uniforms. Even within a group's ranks the enemies are varied, coming in all different shapes, sizes, ethnicities and attire, with minor aesthetic touches like tattoos, hair and face paint.

The characters are animated in a very realistic manner; their movements and mannerisms mirror how humans behave in real life during conversation and combat, though of course no human being possesses the dexterity of Batman or Nightwing.

Overall, the graphics and animation are top notch; in my opinion they can give Pixar Animation Studios a run for its money. The design provides an excellent atmosphere that accomplishes the goal of bringing the Batman universe to life. There is so much attention to detail in the environment, the characters and the combat. Video games are considered an art form, and games like this validate the statement.