The Result

I’ve learned a lot from my research. The other day I decided to put everything I have learnt throughout my research into practice and see what I could make in an hour; below is the result.


Applications that can render Normal Maps

I have been looking into the different kinds of applications available today that can render normal maps using the high-to-low-polygon projection method.

  • Autodesk 3ds Max
  • Autodesk Maya
  • Blender
  • Mudbox
  • Zbrush
  • xNormal

These are the common applications for which I could find documentation regarding the rendering of normal maps.

Out of them all I personally use and prefer xNormal: it is simple to use, and it is free.

Edge Padding

When game models are textured using a single texture sheet (decal sheet, or texture atlas), the texture will have UV’d areas (UV shells) and blank areas between them (gutters).

When a game engine renders a scene, it uses texture filtering to render the textures smoothly, repeatedly shrinking them in a process called downsampling. If the gutters have colors that are significantly different from the colors inside the shells, those colors can “bleed”, which creates seams on the model. The same thing happens when neighboring shells have different colors: as the texture is downsampled, eventually those colors start to mix.

To avoid this, edge padding should be added to the empty spaces around each UV shell. Edge padding duplicates the pixels along the inside of the UV edge and spreads those colors outward, forming a skirt of similar colors.

When the UV layout is created, the spacing between the shells should be done with edge padding in mind. If the gutters between the UV shells aren’t at least double the width of the edge padding, neighboring shells will tend to bleed together more quickly.[1]
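To make the process concrete, here is a rough sketch (in Python with NumPy; the function and variable names are my own, not from any particular tool) of what an edge-padding pass does: on each step, every empty gutter pixel that touches a filled pixel takes the average color of its filled neighbors, growing the skirt outward one pixel at a time.

```python
import numpy as np

def edge_pad(tex, mask, padding):
    """Spread the colors just inside each UV shell outward into the gutters.

    tex:     (H, W, 3) float color array
    mask:    (H, W) bool array, True inside UV shells
    padding: how many pixels to grow the skirt

    Note: np.roll wraps around the texture borders, which loosely matches
    how a tiling texture samples; this is a sketch, not production code.
    """
    tex = tex.copy()
    filled = mask.copy()
    for _ in range(padding):
        acc = np.zeros_like(tex)
        cnt = np.zeros(filled.shape, dtype=np.float64)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbor_filled = np.roll(filled, (dy, dx), axis=(0, 1))
            neighbor_color = np.roll(tex, (dy, dx), axis=(0, 1))
            # gutter pixels that sit next to an already-filled pixel
            take = neighbor_filled & ~filled
            acc[take] += neighbor_color[take]
            cnt[take] += 1.0
        grow = cnt > 0
        tex[grow] = acc[grow] / cnt[grow][:, None]
        filled = filled | grow
    return tex
```

Running this with a padding of 14 versus 1 on the same layout shows why the wider skirt survives downsampling: the gutter colors near each shell stay similar to the shell itself.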

The normal map with 14 pixels of edge padding will have no bleeding between the UV shells or from the texture outside the shells; the one with only 1 pixel of edge padding will bleed the background texture through in some game engines.


[1] – http://wiki.polycount.com/EdgePadding?action=show&redirect=Edge+Padding

Tangent Basis

Tangent-space normal maps use a special kind of vertex data called the tangent basis. This is similar to UV coordinates except that it provides directionality across the surface; it forms a surface-relative coordinate system for the per-pixel normals stored in the normal map.

Light rays are in world space, but the normals stored in the normal map are in tangent space. When a normal-mapped model is being rendered, the light rays must be converted from world space into tangent space, using the tangent basis to get there. At that point the incoming light rays are compared against the directions of the normals in the normal map, and this determines how much each pixel of the mesh is going to be lit. Alternatively, instead of converting the light rays some shaders will convert the normals in the normal map from tangent space into world space. Then those world-space normals are compared against the light rays, and the model is lit appropriately. The method depends on who wrote the shader, but the end result is the same.[1]
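As a rough illustration of those two methods, here is a Python sketch with a made-up orthonormal tangent basis (not any particular shader's code; a real basis is derived from the mesh's UVs and vertex normals):

```python
import numpy as np

# Hypothetical orthonormal tangent basis at one point on the surface:
# T (tangent) and B (bitangent) follow the UV directions, N is the normal.
T = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
N = np.array([0.0, 0.0, 1.0])
TBN = np.column_stack([T, B, N])       # maps tangent space -> world space

light_world = np.array([0.0, 0.0, 1.0])   # light ray in world space
map_normal = np.array([0.0, 0.0, 1.0])    # a "flat" normal-map texel, decoded

# Method 1: bring the light ray into tangent space and compare it with
# the normal stored in the map. For an orthonormal basis, the inverse of
# TBN is simply its transpose.
light_tangent = TBN.T @ light_world
lit1 = max(float(np.dot(map_normal, light_tangent)), 0.0)

# Method 2: push the stored normal out into world space instead, and
# compare it with the light ray there. Different shader, same result.
normal_world = TBN @ map_normal
lit2 = max(float(np.dot(normal_world, light_world)), 0.0)

print(lit1, lit2)   # 1.0 1.0 -- identical either way
```

Either direction of conversion gives the same per-pixel lighting; the choice is purely the shader author's.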

The downside to the tangent basis is that it is not standardized; different applications use different algorithms to calculate it. A normal map baked in 3ds Max, for example, might look perfect when imported into UDK, while a normal map baked in Maya and imported into UDK might look terrible. This is the only real problem with the tangent basis, and I predict that over the coming years the big software companies will get together in an attempt to standardize it.


[1] – http://wiki.polycount.com/NormalMap

Types of Normal Maps

Normal maps come in two forms: the tangent-space normal map, which I would say is used 99% of the time, and the object-space normal map, which is rarely used.

The tangent-space normal map:

Predominantly blue colors. Objects can rotate and deform. Good for deforming meshes, like characters, animals, flags, etc.

  • (+) Maps can be reused easily, like on differently-shaped meshes.
  • (+) Maps can be tiled and mirrored easily, though some games might not support mirroring very well.
  • (+) Easier to overlay painted details.
  • (+) Easier to use image compression.
  • (–) More difficult to avoid smoothing problems from the low-poly vertex normals (see Smoothing Groups and Hard Edges).
  • (–) Slightly slower performance than an object-space map (but not by much).[1]

The object-space normal map:

Rainbow colors. Objects can rotate, but usually shouldn’t be deformed, unless the shader has been modified to support deformation.

  • (+) Easier to generate high-quality curvature because it completely ignores the crude smoothing of the low-poly vertex normals.
  • (+) Slightly better performance than a tangent-space map (but not by much).
  • (–) Can’t easily reuse maps; different mesh shapes require unique maps.
  • (–) Difficult to tile properly, and mirroring requires specific shader support.
  • (–) Harder to overlay painted details because the base colors vary across the surface of the mesh. Painted details must be converted into object space to be combined properly with the map.
  • (–) They don’t compress very well, since the blue channel can’t be recreated in the shader as it can with tangent-space maps. Also, the three color channels contain very different data, which doesn’t compress well and creates many artifacts. Using a half-resolution object-space map is one option.[1]
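The color differences above come from how a normal’s XYZ components, each in the range −1 to 1, are packed into RGB channels in the range 0 to 255. A quick sketch of the decode (the function name is my own):

```python
def decode_normal(r, g, b):
    """Unpack an 8-bit RGB texel into XYZ components in [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# The typical tangent-space texel (128, 128, 255) decodes to roughly
# (0, 0, 1) -- pointing straight out of the surface, hence the dominant
# blue. Object-space maps store directions relative to the whole object,
# which swing through the entire color range: the "rainbow" look.
print(decode_normal(128, 128, 255))
```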

Here is an example of what the two normal maps look like.

[1] – http://wiki.polycount.com/NormalMap

Smoothing Groups and Hard Edges

I was attempting to create a normal map for a mesh today, but I kept encountering the same problem: the edges of the mesh looked horrible and did not have the smooth effect I had intended.

This led me to conduct some research into smoothing groups. I had no idea how big a role smoothing groups played in the creation of normal maps. Basically, any 90-degree angle on a mesh must be split using smoothing groups. The split creates a hard edge. These hard edges also have to be on separate UV islands. If those rules are not followed, you will get something like what I got below.

As you can see in the image above, the edges on the mesh look awful; this is because the hard edges are NOT on separate UV islands.

Now if you look at this example, the edges look smooth, and this is the result I was aiming for. As you can see, the UV islands are split along the hard edges, unlike the first example where they were stitched together.
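A small sketch of why this happens (my own illustration, not from the source): with one smoothing group across a 90-degree edge, the shared vertex normal is the average of the two face normals, which sits 45 degrees off from both faces.

```python
import math
import numpy as np

# Two faces meeting at a 90-degree edge: one facing up, one facing out.
n_top = np.array([0.0, 1.0, 0.0])
n_side = np.array([1.0, 0.0, 0.0])

# One smoothing group: the shared vertices get a single averaged normal.
smooth = n_top + n_side
smooth = smooth / np.linalg.norm(smooth)

# That averaged normal is 45 degrees away from BOTH face normals, so the
# baker has to bend the baked normals across the faces to compensate,
# and that strong gradient is what shows up as ugly shading on the edge.
angle = math.degrees(math.acos(float(np.dot(smooth, n_top))))
print(angle)   # ~45 degrees

# A hard edge (separate smoothing groups) duplicates the vertices so each
# face keeps its own normal -- and because the duplicated vertices carry
# different data, the edge must also be split in the UVs.
```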

The Origin of Normal Maps

The idea of taking geometric details from a high-polygon model was introduced in “Fitting Smooth Surfaces to Dense Polygon Meshes” by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996[1], where this approach was used for creating displacement maps over NURBS surfaces. In 1998, two papers were presented with key ideas for transferring details with normal maps from high- to low-polygon meshes: “Appearance-Preserving Simplification” by Cohen et al., SIGGRAPH 1998, and “A General Method for Preserving Attribute Values on Simplified Meshes” by Cignoni et al., IEEE Visualization 1998.

Normal maps did not really become common in games until the later releases on the PlayStation 2 and the original Xbox; with the arrival of the PlayStation 3 and Xbox 360 they became commonplace, and you will now find them in almost every game.

Research Through Experience

A major part of my research is going to be conducted through the creation (or attempted creation) of normal maps. I wanted to conduct my research this way because I learn best this way, and because it will be practice for my final major project.

I have spent the past week learning how to create some rather simple normal maps. I have then applied this knowledge and attempted to create something; below is the result of that learning. It is a simple floor tile I created a normal map for.

(Please click image to view the higher quality version)

Welcome to my blog

I will be using this blog to document my research for the research practice module.

The subject I will be researching is: Normal Maps

I will be focusing on three areas:

  • What are normal maps, and what is their purpose?
  • How do you create them, and what are the common technical issues?
  • Research into normal-map renderers and why the tangent algorithms are not standardized throughout the industry.
I will be using the following sources: