Archive for September, 2006

Horde3D

Tuesday, September 26th, 2006

The Horde3D SDK was released today. It has an incredibly clean programming interface. Though the engine uses C++ heavily, the entire engine interface is a handful of C functions. This is really cool because it’s not only very simple but also very easily bound to any scripting language out there. The coolest part is that it uses deferred rendering. The project is open source, so it’ll be interesting to peruse the code when the source is released.

Shader Video

Thursday, September 21st, 2006

I whipped up a short video of the engine using the shader system described in a previous post.

I created a simple scene with two boxes, some shader parameters, and a touch of game code. The shader blurs the texture in two dimensions. The game code manipulates the blur_x and blur_y shader parameters on the root node, and the shader parameters then propagate automatically to the geometry nodes. Each box gets the same shader parameters, but each has a unique material defined by its own material render state. So flexible…
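For a sense of scale, the per-frame game code can be just a couple of lines. The sketch below is illustrative only; the SceneNode type and setShaderParam call are hypothetical stand-ins for the actual Catharsis interface.

```cpp
#include <cmath>
#include <string>

// Hypothetical scene node interface, standing in for the real Catharsis API.
struct SceneNode {
    void setShaderParam(const std::string& name, float value);
};

// Animate the blur amounts on the root node each frame. Both boxes pick the
// values up automatically through shader parameter propagation.
void updateBlur(SceneNode& root, float time)
{
    root.setShaderParam("blur_x", 0.5f * (1.0f + std::sin(time)));
    root.setShaderParam("blur_y", 0.5f * (1.0f + std::cos(time)));
}
```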

Human Head

Wednesday, September 20th, 2006

Here is something I found on the requirements page for programmers at Human Head — another game development studio here in Madison.

Requirements (Applicants who do not meet these requirements will not be considered)
– Demonstrated knowledge of good software engineering practices…

That delights me. I also discovered that they are an independent developer and that they make non-electronic games (board games, card games, etc.).

Engine Updates and Shaders

Wednesday, September 20th, 2006

Last week I did some work refactoring the material and shader system architecture in the Catharsis engine.

First, some background.

The Catharsis engine consists of a basic scene graph with generic properties at each node. Geometry is added as leaf nodes. The render state (material, cull, etc.) can be modified at each node in the scene graph. The render state is propagated to leaf nodes by traversing the scene graph from the root to each leaf, pushing render state as it is encountered. The geometry node then stores the collected render state for use during rendering. This is a standard implementation of a scene graph.
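A minimal sketch of that propagation pass, using illustrative types (the actual Catharsis classes aren’t shown in this post):

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative types only; the real Catharsis classes are not shown here.
struct RenderState { std::map<std::string, int> states; };   // material, cull, etc.

struct Node {
    RenderState        localState;      // render state set on this node
    std::vector<Node*> children;
    RenderState        collectedState;  // filled in for geometry leaves

    bool isGeometryLeaf() const { return children.empty(); }
};

// Walk from the root toward the leaves, pushing render state as it is
// encountered. Each geometry leaf stores the collected state for rendering.
void propagate(Node& node, RenderState inherited)
{
    for (const auto& s : node.localState.states)
        inherited.states[s.first] = s.second;   // local state overrides inherited

    if (node.isGeometryLeaf())
        node.collectedState = inherited;

    for (Node* child : node.children)
        propagate(*child, inherited);
}
```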

One of my goals with the Catharsis engine is to make it as generic and unified as possible. A decision I made at the beginning of this project was to only support video cards with vertex and pixel shaders. Shaders not only allow impressive visual effects, but also afford a remarkably simple data flow architecture. I use Nvidia’s Cg for shaders. Cg shaders are compiled to API-specific shaders at run time, so the same Cg shader can be used in OpenGL and Direct3D.
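Compiling a Cg program for the best available profile at run time looks roughly like this through the Cg OpenGL runtime (a minimal sketch; the file name and entry point are placeholders, error handling is omitted, and the Direct3D path would go through the cgD3D9 runtime instead):

```cpp
#include <Cg/cg.h>
#include <Cg/cgGL.h>

// Compile a Cg vertex program for whatever profile the current driver
// supports best. "shader.cg" and "main_vs" are placeholder names.
CGprogram loadVertexProgram(CGcontext context)
{
    CGprofile profile = cgGLGetLatestProfile(CG_GL_VERTEX);
    cgGLSetOptimalOptions(profile);

    CGprogram program = cgCreateProgramFromFile(
        context, CG_SOURCE, "shader.cg", profile, "main_vs", 0);
    cgGLLoadProgram(program);
    return program;
}

// At draw time:
//   cgGLEnableProfile(cgGLGetLatestProfile(CG_GL_VERTEX));
//   cgGLBindProgram(program);
```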

In the fixed-function pipeline you have to explicitly specify the type of each piece of vertex data: vertex position, normal, color, texture coordinate. Even worse, the type of the data completely defines how it is used. A normal can only be used for lighting, and the hardware only supports a few lighting equations. For per-pixel operations, texture combiners came along. These days, with a shader you can say exactly how you want to calculate the final color of a pixel using various textures or vertex data. If you want to add together the colors of three textures, you can write an equation. In the OpenGL fixed-function pipeline you had to set up very limited texture combiners, telling them the operation and operands you wanted to use. You “programmed”, very clumsily, by setting various states. It really sucked. For example, if you wanted to add two textures together per pixel, you told OpenGL to set the combiner function to add, told it operand 1 was texture0 and operand 2 was texture1.
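To make the contrast concrete, here is roughly what the fixed-function version of “add two textures per pixel” looks like in OpenGL, with the shader equivalent as a comment (a sketch; it assumes both textures are already bound and enabled on units 0 and 1):

```cpp
#include <GL/gl.h>

// Fixed-function version: express "texture0 + texture1" as combiner state on
// texture unit 1, whose first operand is the output of unit 0.
void setupAddCombiner()
{
    glActiveTexture(GL_TEXTURE1);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,  GL_ADD);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB,  GL_PREVIOUS);  // result of unit 0
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB,  GL_TEXTURE);   // this unit's texture
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
}

// The shader version is just the equation in the fragment program:
//   color = tex2D(tex0, uv) + tex2D(tex1, uv);
```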

I spent a lot of time creating systems for engines based on fixed-function video cards. It is frustratingly inflexible: for each data type you have to define exactly where it goes, and what states have to be set up and how. Now, shaders do all of that without imposing any restrictions.

Shaders allow for extremely generic data interfaces. Used this way, an engine can be designed as a system for binding data to shader parameters. Everything needed for rendering is a few buffers of data, some uniform parameters, and textures, tied together with a shader. A basic shader would just take a buffer of vertex positions, transform it using the current world, object, and view matrices, and output the vertex ready for rendering. Or a shader could take buffers for vertex positions, normals, colors, and texture coordinates, and you’d have basic textured and colored geometry. Or, you could put in some completely non-standard information. You could have buffers for vertex position 1, vertex position 2, an interpolation factor, a normal, and an extrusion factor. The shader would calculate the position of the vertex by interpolating between the two vertex positions and then moving the vertex along its normal by the extrusion factor. The engine no longer dictates what is possible; it only sets up the data and binds it to parameter names in the shader.
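As an illustration, the morph-and-extrude example might look something like the following Cg vertex program (held here as a C++ string; the parameter names are hypothetical, and the engine would simply bind each mesh buffer to the matching name):

```cpp
// Illustrative Cg vertex program for the morph/extrude example, stored as a
// C++ string constant. Parameter names are hypothetical.
static const char* kMorphExtrudeVS =
    "void main_vs(float4 position0     : ATTR0,\n"
    "             float4 position1     : ATTR1,\n"
    "             float  interpolation : ATTR2,\n"
    "             float3 normal        : ATTR3,\n"
    "             float  extrusion     : ATTR4,\n"
    "             uniform float4x4 modelViewProj,\n"
    "             out float4 outPosition : POSITION)\n"
    "{\n"
    "    float4 p = lerp(position0, position1, interpolation);\n"
    "    p.xyz += normal * extrusion;\n"
    "    outPosition = mul(modelViewProj, p);\n"
    "}\n";
```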

A system like this allows for a lot of experimentation. Once the basic data binding system is complete, it never needs to change. Shaders can be written that take into account a variety of data. The data itself is very simple, and the processing is defined entirely outside the engine, allowing for completely new rendering functionality without changing the engine at all.

This is how Catharsis operates. A Catharsis mesh file contains one or more named vertex buffers with float, float2, float3, or float4 data. An index buffer is optional. On load, Catharsis turns these into hardware vertex buffers. When it comes to rendering, Catharsis binds the shader, then binds the vertex buffers and textures (collected during render state propagation) to shader parameters by name. All that’s left is to call either glDrawArrays or glDrawElements, depending on whether there is an index buffer. OpenGL then renders the data using the vertex and fragment shaders.
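A rough sketch of that draw path, assuming the Cg OpenGL runtime is used to attach the named buffers (the VertexBuffer struct is hypothetical; the actual Catharsis code is not shown here):

```cpp
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <GL/gl.h>

// Hypothetical description of one named vertex buffer from a mesh file.
struct VertexBuffer {
    const char* name;        // e.g. "position", "normal", "texcoord0"
    GLuint      vbo;         // hardware buffer object created at load time
    int         components;  // 1-4 floats per vertex
};

// Bind every buffer to its shader parameter by name, then issue one draw call.
void draw(CGprogram program, const VertexBuffer* buffers, int bufferCount,
          GLuint indexBuffer, int indexCount, int vertexCount)
{
    for (int i = 0; i < bufferCount; ++i) {
        CGparameter param = cgGetNamedParameter(program, buffers[i].name);
        glBindBuffer(GL_ARRAY_BUFFER, buffers[i].vbo);
        cgGLSetParameterPointer(param, buffers[i].components, GL_FLOAT, 0, 0);
        cgGLEnableClientState(param);
    }

    if (indexBuffer != 0) {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
    } else {
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }
}
```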

This basic system allows rendering of anything. Neither the engine nor the mesh format forces the data to be used in a specific way. The mesh format provides the data, the shader provides the processing, and the engine just creates the hardware resources and sets up the data in the shader.

However, until recently there was one thing missing from Catharsis: uniform shader parameters.

Vertex buffers allow data to be specified per vertex. Textures provide data per pixel. Uniform shader parameters provide data per shader. For instance, a shader might calculate the color at each vertex with the equation color = color0 + t*(color1 - color0). Here color0 and color1 are specified using vertex buffers, and t is specified once for the shader. At run time t can be modified so that the shader interpolates the colors differently. Until recently there was no way to manipulate a shader uniform at run time in Catharsis. Well, no clean way that fit into the scene graph and render state propagation system. Now there are ShaderParam render state objects in the scene graph.

A ShaderParam stores a piece of data (float, float2, float4x4, etc) and a name. ShaderParams attach to scene nodes and propagate much the same way render states do, though there are minor differences that keep them from being render states themselves. ShaderParams are also stored in Geometry leaf nodes along with the render state. When rendering, the ShaderParams are used to bind the data defined in the ShaderParam to a named parameter in the shader.
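Sketched out, a ShaderParam is little more than a name plus a value (this is illustrative; the real class also handles the float2 through float4x4 cases, and the binding call shown assumes the Cg OpenGL runtime):

```cpp
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <string>

// Illustrative ShaderParam holding a single float; the real class also
// stores float2, float4x4, and so on.
struct ShaderParam {
    std::string name;   // e.g. "blur_x" or "t"
    float       value;

    void bind(CGprogram program) const
    {
        CGparameter param = cgGetNamedParameter(program, name.c_str());
        if (param)                       // skip if the shader has no such parameter
            cgGLSetParameter1f(param, value);
    }
};

// A geometry leaf binds its collected params just before drawing, e.g.:
//   for (const ShaderParam& p : collectedParams) p.bind(program);
```

Changing t in the color interpolation example above then just means updating the value stored in its ShaderParam between frames.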

The greatest effect of this system has been how decoupled the data becomes from the engine. Everything can be defined in Maya and stored in Maya files. When exported to the Catharsis scene format, all of that data is ready to use. The user has the ability to change both the data and the way it is processed. An infinite number of ways to process the data are now possible, all without ever needing to modify the engine.

Tools

Saturday, September 16th, 2006

Many game engines can load multiple texture formats, mesh formats, etc. Why? An engine should be a lean, well understood, rock solid collection of code. Having multiple handlers for the same type of data only increases the complexity of the engine. An engine should have exactly one input for each type of data, and it should be handled extremely well. It is the tools that should support as many formats as possible. There are a number of advantages to this:

1) The engine never cares about handling new types of data. If you want to handle a new type of texture, only the tools need to be modified. It’s always easier to modify your tools. Releasing a new version of your engine to allow loading some new image format is bad design.

2) The engine stays small. If you write many image-loading functions or use an image library to load various image formats, the engine gets pretty large. It might even be necessary to ship an extra dynamic library just to read all those image types.

3) Tools are not tied to the same programming language as the engine. Tools can be written in higher-level languages like Python instead of the engine’s high-performance language, making tool creation much faster.

4) Validation. Tools have a chance to validate data before it even reaches the engine. This is a perfect opportunity to notify the user of problems with the source asset without having to run it through the engine itself.

5) Control. Data can be organized in the most engine-friendly format possible for fast loading. Most source asset formats are not geared towards loading efficiency and require some amount of data conversion that is better placed in the tools.

EDIT: Gamasutra just posted a great article about tool development by Ben Campbell, which covers some of the things here and a lot more.
http://gamasutra.com/features/20060921/campbell_01.shtml