Archive for the ‘Uncategorized’ Category

IGDA

Thursday, October 5th, 2006

Denrei’s recent post reminded me that I should reflect on attending the IGDA…

This was the first IGDA Madison meeting and was necessarily geared towards organizing and getting acquainted. It was interesting to hear about people’s experiences. We talked with a guy at Human Head about their build system and how they are re-evaluating it for their next project. They had tried Subversion for asset management and came to the same conclusion we had: it blows. He eventually mentioned that they are now using Perforce, which is basically like Subversion but much faster. He was a rendering programmer and I would have liked to talk to him longer, but didn’t get the chance.

A startup company called Frozen Codebase was there. They recently set up shop in Green Bay and received some venture capital. Currently they have a team of seven people working on their first game, plus a few interns from the local ITT Tech Institute. I found this very interesting because it’s exactly how my current place of employment operates. Except there are only three people.

Someone asked why I didn’t just use the Torque Engine. I explained that I wanted to try developing an engine with the lightweight shader system described in a previous post. I was interested in making engines and tools with new technology, not in making games with the same tools.

Later someone said that he had heard the PS3 was hard to develop for. Compared to the Xbox 360’s development system, it must seem unbelievably difficult: the PS3 doesn’t tie directly into an IDE like Visual Studio, so you can’t hit F5 to compile and run your game. I told him I found it an interesting platform to develop for and plan on looking at the PS3 Linux development kit. He seemed shocked…

All in all it was a good meeting. Everyone there was making games in some form, so there was common ground. (At the UPL’s game SIG there was often no common ground, which was frustrating.) I’m looking forward to the next meeting even more: SIGs are going to be formed to focus on specific aspects of game development, and I will be attending the programming SIG. I can’t wait.

Provocative thought: Raven Software seemed to be the only non-independent developer in attendance.

Maya Tangents

Thursday, October 5th, 2006

Did you know that Maya 6.5 and up completely supports tangents?  They are as easy to use as normals!  Everyone upgrade.

Collada

Sunday, October 1st, 2006

A long time ago, when I was reading about the Khronos Group and enjoying their wonderful open standard specifications, I happened upon Collada, an open standard asset exchange specification. The idea behind Collada is to provide a standard intermediate format between the content creation tools (Maya, Max, etc) and the content processing tools that create engine content.

I use Feeling Software’s ColladaMaya and FCollada. Both are open source, allowing me to add extra features when necessary. Instead of dealing with each content creation tool’s file formats and APIs, I can now just work with Collada. The process goes something like this: 1) source assets are exported from the content creation tool to a Collada file; 2) lots of complicated tools process the Collada files into engine assets.

Though it does add an additional step at the very top of the asset pipeline, it has some major advantages:
1) The tools programmer only has to learn Collada’s API
2) Any content creation tool that can export to Collada can be used to create engine assets
3) The FCollada API is very easy to use

ColladaMaya is very straightforward to use. Simply load it in Maya and select Collada when exporting. ColladaMaya has a number of export options, all easily configurable at export time.

FCollada can be used to both create and read Collada files, but most people will use it to read them. It differs from the standard Collada DOM in that it loads the entire file into a tree that can be traversed and queried for data, which is a much easier way to deal with this kind of data. Once I learned the basics of the FCollada API I was able to create a basic exporter in about a day.
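
Because a Collada file is just XML, you can also peek inside one with nothing but the standard library. A quick sketch in Python (the namespace is the one COLLADA 1.4 uses; the file name is made up):

    import xml.etree.ElementTree as ET

    # COLLADA 1.4 documents live in this XML namespace.
    NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

    tree = ET.parse("crate.dae")  # hypothetical exported file
    root = tree.getroot()

    # Print every float array in every geometry: buffer id and element count.
    for geom in root.findall(".//c:library_geometries/c:geometry", NS):
        print("geometry:", geom.get("name") or geom.get("id"))
        for arr in geom.findall(".//c:float_array", NS):
            print("  ", arr.get("id"), "count =", arr.get("count"))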

All in all Collada provides much more flexibility to the asset pipeline without tying your asset processing tools to any one content creation tool.

Horde3D

Tuesday, September 26th, 2006

The Horde3D SDK was released today. It has an incredibly clean programming interface. Though the engine uses C++ heavily, the entire engine interface is a handful of C functions. This is really cool because it’s not only very simple but also very easily bound to any scripting language out there. The coolest part is that it uses deferred rendering. The project is open source, so it’ll be interesting to peruse the code when the source is released.

Shader Video

Thursday, September 21st, 2006

I whipped up a short video of the engine using the shader system described in a previous post.

I created a simple scene with two boxes, some shader parameters, and a touch of game code. The shader blurs the texture in two dimensions. The game code manipulates the blur_x and blur_y shader parameters on the root node, and the shader parameters then propagate automatically to the geometry nodes. Each box gets the same shader parameters, but each has a unique material, defined by a material render state on each box. So flexible…
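
In game code terms, the demo boils down to something like this (a sketch against a hypothetical scene API; names like set_shader_param are invented, not the real Catharsis interface):

    import math

    def update(scene, time):
        # Set the blur amounts once, on the root node; the scene graph
        # propagates them down to both boxes automatically.
        scene.root.set_shader_param("blur_x", abs(math.sin(time)) * 0.01)
        scene.root.set_shader_param("blur_y", abs(math.cos(time)) * 0.01)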

Human Head

Wednesday, September 20th, 2006

Here is something I found on the requirements page for programmers at Human Head — another game development studio here in Madison.

Requirements (Applicants who do not meet these requirements will not be considered)
– Demonstrated knowledge of good software engineering practices…

That delights me. I also discovered that they are an independent developer and they also make non-electronic games (board games, card games, etc).

Engine Updates and Shaders

Wednesday, September 20th, 2006

Last week I did some work refactoring the material and shader system architecture in the Engine.

First, some background.

The Catharsis engine consists of a basic scene graph with generic properties at each node. Geometry is added as leaf nodes. The render state (material, cull, etc) can be modified at each node in the scene graph. The render state is propagated to leaf nodes by traversing the scene graph from the root to each leaf, pushing render state as it is encountered. The geometry node then stores the collected render state for use during rendering. This is a standard implementation of a scene graph.
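
The engine itself is C++, but the traversal is easy to sketch in Python (all names here are invented):

    class Node:
        def __init__(self, render_states=None, children=None):
            self.render_states = render_states or {}  # e.g. {"material": m, "cull": c}
            self.children = children or []

    class Geometry(Node):
        def __init__(self, render_states=None):
            Node.__init__(self, render_states)
            self.collected = {}  # final render state used when drawing

    def propagate(node, inherited=None):
        # Push state from the root down; a state set deeper in the graph
        # overrides the same state set closer to the root.
        state = dict(inherited or {})
        state.update(node.render_states)
        if isinstance(node, Geometry):
            node.collected = state
        for child in node.children:
            propagate(child, state)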

One of my goals with the Catharsis Engine is to make it as generic and unified as possible. A decision I made at the beginning of this project was to only support video cards with vertex and pixel shaders. Shaders not only allow amazing visual effects, but also afford an amazingly simple data flow architecture. I use Nvidia’s Cg for shaders. The Cg shaders are compiled to API-specific shaders at run time, so the same Cg shader can be used in OpenGL and Direct3D.

In the fixed function pipeline you have to explicitly specify the type of each piece of vertex data: vertex position, normal, color, texture coordinate. Even worse, the type of the data completely defines how it is used. A normal can only be used for lighting, and the hardware only supports a few lighting equations. For per-pixel operations, texture combiners came along. You programmed (?), very clumsily, by setting various states: if you wanted to add two textures together per pixel, you told OpenGL to set the combiner function to add, told it operand 1 was texture0 and operand 2 was texture1. It really sucked. These days, with a shader, you can say exactly how you want to calculate the final color of a pixel using various textures or vertex data. If you want to add together the colors of three textures, you just write that equation.
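
For the record, here is roughly what that “add two textures” combiner setup looks like, sketched with PyOpenGL (the constants mirror the C API; this assumes a GL context exists and both textures are already bound to units 0 and 1):

    from OpenGL.GL import *

    # Unit 0: output texture0 unmodified.
    glActiveTexture(GL_TEXTURE0)
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)

    # Unit 1: add this unit's texture to the previous unit's result.
    glActiveTexture(GL_TEXTURE1)
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE)
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD)
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS)  # result of unit 0
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR)
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE)   # this unit's texture
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR)

The shader version of the same thing is a single addition of two texture samples.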

I spent a lot of time creating systems for engines based on fixed function video cards. It is frustratingly inflexible: for each data type you have to define exactly where it goes, and exactly what states to set up and how. Shaders do all of that without imposing any restrictions.

Shaders allow for extremely generic data interfaces. Used properly, an engine can be designed as a system for binding data to shader parameters. Everything needed for rendering is a few buffers of data, some uniform parameters, and textures, tied together with a shader. A basic shader would just take a buffer of vertex positions, transform it using the current world, object and view matrices, and output the vertex ready for rendering. Or a shader could take buffers of vertex positions, normals, colors, and texture coordinates, and you’d have basic textured and colored geometry. Or you could put in some completely non-standard information: a buffer for vertex position 1, vertex position 2, an interpolation factor, a normal, and an extrusion factor. The shader would calculate the position of the vertex by interpolating between the two vertex positions and then moving the vertex along its normal by the extrusion factor. The engine no longer dictates what is possible; it only sets up the data and binds it to parameter names in the shader.
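
To make the non-standard example concrete, here is that morph-and-extrude computation modeled in Python. In reality the math lives in a Cg vertex shader and the buffers are hardware vertex buffers, so everything below is purely illustrative:

    def morph_extrude(p1, p2, t, normal, extrude):
        # One vertex: interpolate between the two source positions,
        # then push the result along the vertex normal.
        return tuple(a + t * (b - a) + extrude * n
                     for a, b, n in zip(p1, p2, normal))

    # The "mesh" is nothing but named buffers; the engine never interprets
    # them, it just binds each one to the shader parameter of the same name.
    buffers = {
        "position1": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
        "position2": [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)],
        "lerp_t":    [0.25, 0.75],
        "normal":    [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)],
        "extrude":   [0.1, 0.2],
    }

    positions = [morph_extrude(p1, p2, t, n, e)
                 for p1, p2, t, n, e in zip(buffers["position1"],
                                            buffers["position2"],
                                            buffers["lerp_t"],
                                            buffers["normal"],
                                            buffers["extrude"])]
    print(positions)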

A system like this allows for a lot of experimentation. Once the basic data binding system is complete, it never needs to change. Shaders can be written that take into account a variety of data. The data itself is very simple and the functionality is defined entirely outside the engine, allowing for completely new rendering functionality without changing the engine at all.

This is how Catharsis operates. A Catharsis mesh file contains one or more named vertex buffers with float, float2, float3, or float4 data. An index buffer is optional. On load, Catharsis turns these into hardware vertex buffers. When it comes to rendering, Catharsis binds the shader, then binds the vertex buffers and textures (collected during render state propagation) to shader parameters by name. All that’s left is to call either glDrawArrays or glDrawElements, depending on whether there is an index buffer. OpenGL then renders the data using the vertex and fragment shaders.
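
The shape of that render path, again sketched in Python (the draw calls are the real OpenGL ones via PyOpenGL; the shader and geometry objects are invented):

    from OpenGL.GL import (glDrawArrays, glDrawElements,
                           GL_TRIANGLES, GL_UNSIGNED_INT)

    def render(geometry):
        shader = geometry.collected["shader"]
        shader.bind()
        # Bind each named buffer and texture to the shader parameter with
        # the same name; names the shader doesn't reference are ignored.
        for name, buf in geometry.vertex_buffers.items():
            shader.bind_buffer(name, buf)
        for name, tex in geometry.textures.items():
            shader.bind_texture(name, tex)
        if geometry.index_buffer is not None:
            glDrawElements(GL_TRIANGLES, geometry.index_count,
                           GL_UNSIGNED_INT, None)
        else:
            glDrawArrays(GL_TRIANGLES, 0, geometry.vertex_count)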

This basic system allows rendering of anything. Neither the engine nor the mesh format forces the data to be used in a specific way. The mesh format provides the data, the shader provides the processing, and the engine just creates the hardware resources and sets up the data in the shader.

However, until recently there was one thing missing from Catharsis: uniform shader parameters.

Vertex buffers allow data to be specified per vertex. Textures provide data per pixel. Uniform shader parameters provide data per shader. For instance, a shader might calculate the color at each vertex with the equation color = color0 + t*(color1-color0). Color0 and color1 are specified using vertex buffers, while t is specified once for the whole shader. At runtime t can be modified so that the shader interpolates the colors differently. Until recently there was no way to manipulate a shader uniform at run time in Catharsis. Well, no clean way that fit into the scene graph and render state propagation system. Now there are ShaderParam render state objects in the scene graph.

A ShaderParam stores a piece of data (float, float2, float4x4, etc) and a name. ShaderParams attach to scene nodes and propagate much the same way render states do, though there are minor differences that keep them from being render states themselves. ShaderParams are also stored in Geometry leaf nodes along with the render state. When rendering, each ShaderParam’s data is bound to the shader parameter with that name.
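
Continuing the earlier Python sketch (names invented again), a ShaderParam and its propagation look like this:

    class ShaderParam:
        def __init__(self, name, value):
            self.name = name    # parameter name inside the shader, e.g. "blur_x"
            self.value = value  # float, float2, float4x4, ...

    def collect_params(node, inherited=None):
        # Same push-down traversal as render states: a param nearer the
        # leaf overrides one with the same name nearer the root.
        params = dict(inherited or {})
        for p in getattr(node, "shader_params", []):
            params[p.name] = p
        if isinstance(node, Geometry):
            node.collected_params = params
        for child in node.children:
            collect_params(child, params)

    # At render time, each collected param is bound by name:
    #     for p in geometry.collected_params.values():
    #         shader.set_uniform(p.name, p.value)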

The greatest effect of this system has been how decoupled the data becomes from the engine. Everything can be defined in Maya and stored in Maya files, and when exported to the Catharsis scene format, all that data is ready to use. The user can change both the data and the way it is processed. An infinite number of ways to process the data are now possible, all without ever needing to modify the engine.

Tools

Saturday, September 16th, 2006

Many game engines can load multiple texture formats, mesh formats, etc. Why? An engine should be a lean, well understood, rock solid collection of code. Having multiple handlers for the same type of data only increases the complexity of the engine. An engine should have exactly one input for each type of data, and it should be handled extremely well. It is the tools that should support as many formats as possible; a sketch of such a tool follows the list. There are a number of advantages to this:

1) The engine never cares about handling new types of data. If you want to handle a new type of texture, only the tools need to be modified, and it’s always easier to modify your tools. Releasing a new version of your engine just to allow loading some new image format is bad design.

2) The engine stays small. An engine with many image loading functions, or one that uses an image library to load various image formats, is probably pretty large. It might even need to ship an extra dynamic library just to read all those image types.

3) Tools aren’t tied to the same programming language as the engine. They can be written very quickly in higher level languages like Python instead of the engine’s high performance language, making tool creation much faster.

4) Validation. Tools have a chance to validate data before it even reaches the engine. This is a perfect opportunity to notify the user of problems with the source asset without having to run it through the engine itself.

5) Control. Data can be organized in the most engine-friendly format possible for fast loading. Most source asset formats are not geared towards loading efficiency and require some amount of data conversion that is better done in the tools.
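
As a sketch of how cheap the tool side can be: with PIL, a few lines of Python turn dozens of source image formats into one trivially loadable engine format (the SNTX header here is invented, not a real Stolen Notebook format):

    import struct
    import sys
    from PIL import Image  # PIL reads dozens of source image formats

    def export_texture(src_path, dst_path):
        img = Image.open(src_path).convert("RGBA")  # any format -> one layout
        width, height = img.size
        with open(dst_path, "wb") as out:
            # Invented engine texture header: magic, width, height,
            # followed by raw RGBA pixels. One loader, one code path.
            out.write(struct.pack("<4sII", b"SNTX", width, height))
            out.write(img.tobytes())

    if __name__ == "__main__":
        export_texture(sys.argv[1], sys.argv[2])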

EDIT: Gamasutra just posted a great article about tool development by Ben Campbell, which covers some of the things here and a lot more.
http://gamasutra.com/features/20060921/campbell_01.shtml

Saving Time

Friday, August 25th, 2006

The last few weeks I haven’t had much time for Stolen Notebook.

I am one of two programmers at a small startup software company near Madison. I have about 5 hours of free time a day to split amongst things like eating, working out, reading, and Stolen Notebook; even less if I want to get a reasonable amount of sleep. The last few weeks have been especially hectic since we are nearing a deadline for a second round of investment and need a solid product to show. We have had two rounds of beta testing to prepare, which meant extra weekend time spent fixing problems and doing support.

Luckily, at Stolen Notebook we have reached a point in development where the workload has started to spread out considerably. The basic engine design is now very solid. Denrei can push source assets through the tools and Tony can use them in the game. Tony has been able to develop game code with very little direct support from me. This means my time can be spent adding new features instead of constantly debugging tools and engine code.

What I am looking forward to is the possibility of further automating our development process.

Some time ago Tony set up Buildbot. It automatically builds our engine, tools, and documentation whenever anyone checks code into Subversion. This has freed up a lot of time for everyone; I don’t have to worry about building the latest tools for Denrei, they’re automatically available. While this is great for code, there has never been a solid process for building game assets.

A long time ago I created an automated game asset build system called SNax (Stolen Notebook Automated eXporter). It was useful… when it worked, which wasn’t often. SNax was very difficult to maintain because the engine was developing so quickly, and it couldn’t keep up with the changing formats and features. It was eventually left by the wayside.

The next, short-lived iteration of asset building was to store source assets in Subversion. Let me just say this: SUBVERSION IS TERRIBLE FOR STORING LARGE BINARY FILES. Subversion stores two copies of every file it has under revision control: the copy you work on, and a pristine copy hidden away in the .svn directory so that you can revert at any point without copying the file from the Subversion server. This means everything takes up twice as much space and takes twice as long to check out. A secondary problem is that diffing huge binary files is very time consuming, so checking in files you’ve changed takes even more time. Denrei tried this system for a short while and spent all his time waiting, upwards of 10 minutes at a stretch, while his work was checked in. The new asset system will have none of these problems.

The major inspiration for this asset building server is this article, which describes a very simple way to use Python to detect file changes in a directory. I’ve done something like this in C before; doing it in Python is much more sane. The script sits and waits for files to be modified in a directory on the server, which is shared on the network, then exports the source asset to a game asset if necessary.

This script fits very neatly into the existing tool set. Consider snscene, the Maya scene exporter. Right now Denrei or Tony has to run snscene manually. By associating the Maya file extension (.mb) with the snscene command in the directory-watching script, assets are exported whenever Denrei changes a Maya file on the server. The tools don’t have to be modified to support this, since the script uses them the same way a user would. The tools can still be used manually, and the tools and the automated build system stay strictly separated, making maintenance easy.
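
The whole watcher fits in a page of Python. A sketch of the idea, polling modification times (the share path and the extension table are made up, and the real script may work differently):

    import os
    import subprocess
    import time

    # Map source asset extensions to the exporter that handles them.
    EXPORTERS = {".mb": "snscene"}

    WATCH_DIR = "//server/source_assets"  # hypothetical network share
    seen = {}  # path -> last modification time we exported

    while True:
        for dirpath, dirnames, filenames in os.walk(WATCH_DIR):
            for filename in filenames:
                ext = os.path.splitext(filename)[1].lower()
                if ext not in EXPORTERS:
                    continue
                path = os.path.join(dirpath, filename)
                mtime = os.path.getmtime(path)
                if seen.get(path) != mtime:
                    seen[path] = mtime
                    # Run the exporter exactly the way a user would.
                    subprocess.call([EXPORTERS[ext], path])
        time.sleep(2)  # poll every couple of seconds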

Enhancing the benefit of this automated build system is our brand new Gigabit network. Denrei tells me that he can transfer files across the network about four times faster than on our old 100 Mbit network. I believe him. Gigabit Ethernet makes the asset server feasible; it should be almost unnoticeable to Denrei that he is working on source assets across the network. Combined with the automated asset builder, every time Denrei saves, everyone should have the latest game assets in a matter of seconds. And no one has to do a thing.

Working for the Weekend

Wednesday, August 16th, 2006

Things have finally settled down a bit after moving. The new apartment is fantastic. All of Stolen Notebook is now in the same place. We haven’t been able to get back into things yet, though; we’ve been unpacking for the last few days. I just got my computer set up yesterday, and internet should be hooked up today.

There is an investor beta test at work this weekend. I have a lot of things that need to be finished before then. Hopefully by the weekend I’ll be able to spend some time doing Stolen Notebook work. I’ve been amassing a lengthy list of things to do…