Edit Mesh Data

From Serious Sam Wiki

Introduction

What you see in the viewport when the mesh editor is active is not a mesh. Meshes are used for hardware accelerated rendering. They are compiled from what you are editing in the mesh editor: an edit mesh.

Note: The edit mesh is stored alongside the mesh in a .bmf file. However, the edit mesh can be removed from the .bmf file, which renders the mesh uneditable.

Edit meshes are complex data structures that define various characteristics of a render mesh. In general, an edit mesh consists of the following data type levels:

Layer maps
Layers
Polygon maps
Polygons
Vertices
Vertex maps
Elements

This guide will explain these data types in a bottom-up fashion. So let's start at the bottom level:

Elements

The core data types of any edit mesh are jointly referred to as elements. The data type of an element is defined by the vertex map that lists it. So let's take a look at

Vertex Maps

Vertex maps are collections, or lists, of elements. There are five types of vertex maps in Serious Editor:

Morph maps
contain three dimensional position elements.
Weight maps
contain one dimensional weight value elements.
Texture maps
contain two dimensional UV elements.
Color maps
contain four dimensional color elements.
Normal maps
contain three dimensional normal elements.

In this guide we will focus on morph maps and texture maps because they are the two integral vertex maps of any edit mesh that is supposed to produce a renderable mesh.

Morph map

Every edit mesh has one, and only one, morph map called 'position'. It is created automatically for every edit mesh and holds 3D position elements that are interpreted in relation to the origin of the edit mesh. Elements are added automatically and exclusively by the mesh editor.

A morph map would look something like this:

Example Morph Map 'position'

Element Number   0    1    2    3    4    5
Element Data x  -2   -2    2    2    2    2
Element Data y   0    4    0    4    0    4
Element Data z   2    2    2    2   -2   -2

For instance, element 1 of this morph map defines the position (-2, 4, 2) in relation to the origin of the model entity that is edited.
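As a data structure, a morph map is just an indexed list of 3D positions. A minimal sketch of the example morph map above (the names are illustrative, not the engine's actual API):

```python
# Hypothetical sketch of the morph map 'position' from the table above.
# Each element is a 3D offset from the origin of the edit mesh.
position = [
    (-2, 0,  2),  # element 0
    (-2, 4,  2),  # element 1
    ( 2, 0,  2),  # element 2
    ( 2, 4,  2),  # element 3
    ( 2, 0, -2),  # element 4
    ( 2, 4, -2),  # element 5
]

# Element 1 is the position referenced in the text:
assert position[1] == (-2, 4, 2)
```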

Texture Maps

The purpose of texture maps is the projection of two-dimensional textures onto the three-dimensional edit mesh. The concept used to that end is called UV Mapping. Each element of a texture map represents a 2D position that is interpreted in relation to the origin of the UV map.

An edit mesh can hold more than one texture map. Usually an edit mesh has one texture map for each texture that is supposed to be projected onto the render mesh, even though this is not required; it just makes UV mapping more convenient. Texture maps will be illustrated in section Polygon Maps because a polygon map is required for texture maps to have any effect.

Now that vertex maps have been explained let's look at the edit mesh data type that uses them:

Vertices

Vertices are nothing but collectors of vertex map entries. This cannot be stressed enough: vertices do not contain any 3D coordinates! A vertex necessarily lists one element of the morph map 'position'. On top of that, it can list elements of other vertex maps.

Note: Vertices cannot collect more than one element from each vertex map.

For illustration purposes let's assume that we have six vertices numbered #0 to #5. Each vertex lists the corresponding morph map element of the above example morph map.

Screenshot of example edit mesh. Six vertices with data of listed morph map element. Grid: 1m.
Vertex #0: Morph 'position' element #0 (→ (-2, 0, 2))
Vertex #1: Morph 'position' element #1 (→ (-2, 4, 2))
Vertex #2: Morph 'position' element #2 (→ (2, 0, 2))
Vertex #3: Morph 'position' element #3 (→ (2, 4, 2))
Vertex #4: Morph 'position' element #4 (→ (2, 0, -2))
Vertex #5: Morph 'position' element #5 (→ (2, 4, -2))

Note that in this example, the numbers of the vertices equal those of the morph map elements. This is not mandatory. Any given vertex can reference any morph map element, and the same morph map element can be referenced by more than one vertex.

It should now be clear that when the position of a vertex is changed in the viewport, the vertex data is not changed. What changes is the value of the morph map element that is listed by the edited vertex.
It also follows that the position of vertices that list the same morph map element can only be edited in conjunction.
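These two observations can be sketched in code: a vertex stores only an index into the morph map, so "moving a vertex" mutates the shared morph map element, dragging along every other vertex that lists the same element (names are illustrative, not the engine's API):

```python
# The morph map is the only place 3D coordinates live.
position = [[-2, 0, 2], [-2, 4, 2], [2, 0, 2]]

# Vertices are just references (indices) into vertex maps.
# Here vertices #2 and #3 share morph map element #2.
vertex_morph = [0, 1, 2, 2]

def move_vertex(v, new_pos):
    # "Editing a vertex" really edits the morph map element it lists.
    position[vertex_morph[v]][:] = new_pos

move_vertex(2, [3, 0, 3])
# Vertex #3 lists the same element, so it moved too:
assert position[vertex_morph[3]] == [3, 0, 3]
```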

We now know how morph maps are used by vertices. So let's move on to the next level of the edit mesh data structure:

Polygons

Polygons are collections of vertices. In order to be compiled into the render mesh, an edit mesh polygon needs a minimum of three vertices. It can list more vertices if necessary. Polygons with more than three vertices will be split into triangles during compilation of the render mesh.
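The splitting step can be illustrated with a simple fan triangulation, one common way to break a convex polygon into triangles (a sketch, not necessarily the exact algorithm the mesh compiler uses):

```python
def fan_triangulate(vertices):
    """Split a convex polygon (a list of vertex indices) into triangles
    by fanning out from the first vertex."""
    return [(vertices[0], vertices[i], vertices[i + 1])
            for i in range(1, len(vertices) - 1)]

# A quadrilateral listing vertices 0, 2, 3, 1 becomes two triangles:
assert fan_triangulate([0, 2, 3, 1]) == [(0, 2, 3), (0, 3, 1)]
```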

Screenshot of example edit mesh. Six vertices with data of listed morph map element and two polygons. Grid: 1m.

So let's add two polygons (quadrilaterals) to our example mesh. The data stored in these two polygons is:

Polygon #0: Vertex #0 (→ Morph 'position' element #0 → (-2, 0, 2))
Vertex #2 (→ Morph 'position' element #2 → (2, 0, 2))
Vertex #3 (→ Morph 'position' element #3 → (2, 4, 2))
Vertex #1 (→ Morph 'position' element #1 → (-2, 4, 2))
Polygon #1: Vertex #2 (→ Morph 'position' element #2 → (2, 0, 2))
Vertex #4 (→ Morph 'position' element #4 → (2, 0, -2))
Vertex #5 (→ Morph 'position' element #5 → (2, 4, -2))
Vertex #3 (→ Morph 'position' element #3 → (2, 4, 2))

Note that vertices are listed counter-clockwise as seen from the shaded side of the polygon. Since polygons are one-sided entities, the order of the vertices defines the direction the polygon faces.

As can be seen in the screenshot, the two polygons share vertices #2 and #3, which means they can only be edited jointly at these corners. But keep in mind that even if polygons have distinct vertices in the same position, they can still only be edited jointly if those vertices list the same morph map element!

Once more it should be stressed that changing the position of a polygon in the viewport changes neither the polygon data nor the data contained in its vertices. The only thing that changes is the data of the morph map elements referenced by the vertices of the polygon.

We've now covered the most basic concepts of an edit mesh, i.e. how polygons are positioned in mesh space. But we still don't know how texture is applied. On the next edit mesh data structure level this can finally be illustrated because texturing polygons requires

Polygon Maps

Polygon maps define edit mesh characteristics of those polygons that are assigned to them. There are seven types of polygon maps.

Surface maps
Collision maps
AIVisibility maps
Visibility Sector maps
Visibility Properties maps
Navigation maps
Occlusion maps

This article focuses exclusively on surface maps because in 99.9% of cases meshes are supposed to be rendered, which requires a surface shader. A surface map points to one or more shaders that are applied to every polygon of that surface map.

As soon as the first polygon of an edit mesh is created, a surface map called 'Default' is added to the edit mesh. Every polygon added to the edit mesh will automatically be assigned to this surface map as long as no other surface map is active. Edit mesh polygons cannot be assigned to more than one surface map.

Now, in order to render a polygon the mesh editor basically executes the following procedure:

  1. Check if the polygon is assigned to a surface map. If not, the procedure is aborted and the polygon is shaded with the default shader.
  2. Check if a shader texture is defined in the surface map that the polygon is assigned to. If not, the procedure is aborted and the polygon is shaded with the default shader.
  3. Check if a UV map is defined in the surface map that the polygon is assigned to. If not, the procedure is aborted and the polygon is shaded in white.
  4. Read the UV map elements that are listed by the vertices of the polygon and calculate the part of the shader texture that is enclosed by them.
  5. Shade the polygon with the previously calculated part of the shader texture.
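The decision logic of steps 1 to 3 can be sketched as follows. This is a hedged illustration; the surface map fields (`polygons`, `texture`, `uv_map`) are hypothetical names, not the editor's internal data layout:

```python
def pick_shading(polygon, surface_maps):
    """Decide how a polygon is shaded, mirroring steps 1-3 above.
    surface_maps is a list of dicts with hypothetical keys."""
    surface = next((s for s in surface_maps if polygon in s["polygons"]), None)
    if surface is None:                  # step 1: not assigned to a surface map
        return "default shader"
    if surface.get("texture") is None:   # step 2: no shader texture defined
        return "default shader"
    if surface.get("uv_map") is None:    # step 3: no UV map defined
        return "white"
    return "textured"                    # steps 4-5: project the texture

default_map = {"polygons": [0, 1], "texture": "Example.png", "uv_map": "Texture"}
assert pick_shading(0, [default_map]) == "textured"
assert pick_shading(2, [default_map]) == "default shader"
```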

Let's go through the first three steps with our example:

1. Surface map
First both polygons of our example must be assigned to a surface map. Let's assume this surface map is called 'Default'.
2. Shader texture of Default surface map
This is the texture we want to apply to the edit mesh:
Edit Mesh Example Texture.png
So the Default surface map shader must point to it.
3. UV map of Default surface map.
A UV map is required. Let's assume the Default surface shader points to the following texture map:
Example Texture Map 'Texture'

Element Number   0    1    2    3    4    5
Element Data x   1    0    0  0.5  0.5    1
Element Data y   0    2    0    2    0    2
4. Elements of Texture map listed by polygon vertices
Finally, the vertices of our polygons must reference elements of the Texture map. These elements can be used to define those parts of the Default surface map shader that are to be projected on the polygons. So let's add these references to the six vertices:
Vertex #0: Morph 'position' element #0 (→ (-2, 0, 2))
Texture 'Texture' element #1 (→ (0, 2))
Vertex #1: Morph 'position' element #1 (→ (-2, 4, 2))
Texture 'Texture' element #2 (→ (0, 0))
Vertex #2: Morph 'position' element #2 (→ (2, 0, 2))
Texture 'Texture' element #3 (→ (0.5, 2))
Vertex #3: Morph 'position' element #3 (→ (2, 4, 2))
Texture 'Texture' element #4 (→ (0.5, 0))
Vertex #4: Morph 'position' element #4 (→ (2, 0, -2))
Texture 'Texture' element #5 (→ (1, 2))
Vertex #5: Morph 'position' element #5 (→ (2, 4, -2))
Texture 'Texture' element #0 (→ (1, 0))
Screenshot of example edit mesh. Six vertices with data of listed texture map element and two polygons. Grid: 1m.

You can see the result on the right. Looking at this screenshot, the key features of how the texture is applied to the polygons are:

  • The entire width of the texture fits seamlessly on the surface of the two boxes.
  • In terms of height the texture is applied exactly twice.

The easiest way to illustrate why the texture is projected as in the screenshot is to look at texture projection in reverse. Instead of projecting parts of a texture map onto polygons, polygons can alternatively be projected onto a texture map. In fact, Serious Editor offers this kind of reverse view for UV mapping purposes. It is activated by pressing NUM 8.

Edit Mesh TextureMap Plain.png

Firstly, it must be noted that

  • x values of texture map elements translate to the u axis,
  • y values of texture map elements translate to the v axis, and
  • the v axis points down (rather unintuitively, but that's how it is).

The yellow rectangle on the left displays what part of the UV map is projected on polygon #0, the yellow rectangle on the right displays what part of the UV map is projected on polygon #1. That explains the appearance of the shaded polygons:

  • Both rectangles touch each other which is why the texture is applied seamlessly to both polygons.
  • On the u axis, the rectangles span from 0 to 1 which means that in width, one iteration of the texture is applied. The left half on polygon #0, the right half on polygon #1.
  • On the v axis, the rectangles span from 0 to 2 which means that in height, two iterations of the texture are applied.
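The tiling follows directly from the UV spans: a span of n along an axis means n iterations of the texture. A quick check with the example numbers (plain arithmetic over the texture map data, not engine code):

```python
# (u, v) pairs of the example texture map 'Texture', elements 0-5.
texture = [(1, 0), (0, 2), (0, 0), (0.5, 2), (0.5, 0), (1, 2)]

def uv_span(element_ids):
    """Return the (u, v) extent covered by the given texture map elements."""
    us = [texture[i][0] for i in element_ids]
    vs = [texture[i][1] for i in element_ids]
    return (max(us) - min(us), max(vs) - min(vs))

# Polygon #0 lists elements 1, 3, 4, 2; polygon #1 lists 3, 5, 0, 4.
# Each covers half the texture in u and two iterations in v:
assert uv_span([1, 3, 4, 2]) == (0.5, 2)
assert uv_span([3, 5, 0, 4]) == (0.5, 2)
```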

For further illustration purposes, let's see what happens if we detach the two rectangles in the 'Texture' map, rotate one of them and re-adjust their sizes.

Screenshot of example edit mesh. Seven vertices with data of listed texture map element and two polygons. Grid: 1m.

As can be seen, there's one more vertex now. Why is that?

The answer is quite simple: If there were just one vertex at the position in question, only one element of the UV map 'Texture' could be listed, because a vertex can list only one element of each vertex map. That is also why vertex #3 is sufficient to provide texture map data for both polygons: both polygons need the same texture map position at that world space position.

Here's the entire data structure:

Polygon #0: Vertex #0 (→ Morph 'position' element #0 → (-2, 0, 2))
(→ Texture 'Texture' element #1 → (0, 2))
Vertex #2 (→ Morph 'position' element #2 → (2, 0, 2))
(→ Texture 'Texture' element #3 → (1, 2))
Vertex #3 (→ Morph 'position' element #3 → (2, 4, 2))
(→ Texture 'Texture' element #4 → (1, 0))
Vertex #1 (→ Morph 'position' element #1 → (-2, 4, 2))
(→ Texture 'Texture' element #2 → (0, 0))
Polygon #1: Vertex #6 (→ Morph 'position' element #2 → (2, 0, 2))
(→ Texture 'Texture' element #6 → (0, 0))
Vertex #4 (→ Morph 'position' element #4 → (2, 0, -2))
(→ Texture 'Texture' element #5 → (0, 2))
Vertex #5 (→ Morph 'position' element #5 → (2, 4, -2))
(→ Texture 'Texture' element #0 → (1, 2))
Vertex #3 (→ Morph 'position' element #3 → (2, 4, 2))
(→ Texture 'Texture' element #4 → (1, 0))

Ok, I hope this clarified how vertex maps, vertices, polygons and polygon maps work together. Even though there are vertex maps and polygon maps that were not covered here, the basic principles of operation are the same.

Let's head up one level of the edit mesh data structure to the

Layers

Layers can be used to split up the edit mesh into different ... well, layers. You can also think of different layers as distinct sections of an edit mesh. A layer can contain any data type that is on a lower level, i.e. vertex maps, vertices, polygons, and polygon maps. Note that layers cannot share any of these resources; each layer needs its own vertex maps, vertices, etc.

By default, layer #1 is active. Up to ten layers are available.

Layers can be used for ease of editing because it may be more convenient to edit certain parts of the edit mesh with other parts hidden. They are used extensively by Croteam to build distinct LOD meshes and/or collision meshes in one and the same mesh.

Ok, we're reaching the top now:

Layer maps

Layers can be assigned to layer maps, thus, defining the particular purpose of that layer and its contents. There are three types of layer maps:

Rendering layer maps
They define a distance range in which the contents of the assigned layer are rendered.
Collision layer maps
They define the contents of the assigned layer as a collision mesh.
Visibility layer maps
They define the contents of the assigned layer as a visibility mesh.

A rendering layer map called 'Rendering 1' is created automatically by the mesh editor (because usually meshes are supposed to be rendered). Layer 1 is already assigned to it. If no collision layer map is created during mesh editing, the first layer with content is used as collision mesh.

And that's it, we've reached the top level of the edit mesh ... transcend!