The fungine chronicles

A journal to both teach and document my work, for Chen

Back in black


Things have finally slowed down enough to get back into my passion, so here I am. What have I been up to?

Well, the game is getting complex to the point that just building with Visual Studio doesn’t cut it anymore. With over 15 projects in 3 different languages (and thus different VS versions) and various interdependencies, builds were slowly getting screwed up. So I decided to switch to using MSBuild manually. Now I have complete control over how everything gets built, and the power to build the entire toolchain in a single command. Huh, toolchain? Oh…

Way back when, the game’s editor was built into the game itself. Eventually this became bothersome when expanding on the game’s features, so I looked into ways to separate the two. I had a lot of goals for the editor to cover many different areas (scripting, shader editing, level editing) and I didn’t want to make some gigantic super editor with a complex interface. I wanted to make a bunch of simpler editors that were designed to do one thing well. I also wanted all of the editors to be able to modify the game during runtime. So the problem became, how do you get a bunch of programs to talk to each other?

I looked at a few options for IPC: named pipes, shared memory, sockets. I eventually decided on named pipes via WCF. Named pipes win on performance and ease of use, and WCF makes it really easy to set up communication between multiple different programs. The way it’s set up, there’s a program that hosts the WCF service; all of the editors/tools and the game itself connect to that host and talk to each other through it.

Instead of using WCF’s serialization, I decided to use protocol buffers via protobuf-net. Protobuf-net is a great little library by Marc Gravell, a little light on documentation, but then what isn’t these days? Now, everything works well if you treat F# as a syntax-light sort of functional C#, but if you embrace the language for its merits and use things like discriminated unions (DUs) or immutable objects, things get less nice. The way to get everything working is to make surrogates for your DUs. For DUs that don’t hold any values it’s pretty simple: one surrogate type can represent every valueless DU that you come up with. But for valued DUs, you basically have to write a surrogate for each union, which contains a field for every possible value. I haven’t found a better way around this. In addition, you have to write that surrogate in C#; you can’t do it in F#. This is because discriminated unions in .NET are represented as a hierarchy of classes – but F# treats it all as one type. So for the following:

type Events =
    | Foo of int
    | Bar of int * int

There would be a class for Events, a class for Foo, and a class for Bar. Why is this important? Well, protobuf-net wants an exact conversion function from a type to its surrogate and back, and it sees any values of Foo as an instance of the Foo class, not Events. So you have to write a surrogate like:

type Surrogate() =
    static member op_Implicit(surrogate : Surrogate) =
        //extract the value from the surrogate and copy it
    static member op_Implicit(value : Foo) =
        //wrap the value in a surrogate


But you can’t say the type of value is Foo, because Foo isn’t a type to F#! The only exposed type is Events, and if you say value : Events, then protobuf-net will complain that there is no conversion operator for each case that it runs into (Foo, Bar). So the surrogate has to be written in C#, which can see all of the types.
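You can see this hierarchy for yourself with a throwaway snippet (nothing engine-specific here, just reflection on the example union):

```fsharp
type Events =
    | Foo of int
    | Bar of int * int

let e = Foo 42

// At runtime the value is an instance of the compiler-generated nested
// class for the case (Events+Foo), not of Events itself.
printfn "%s" (e.GetType().Name)                             // Foo
printfn "%b" (e.GetType() = typeof<Events>)                 // false
printfn "%b" (typeof<Events>.IsAssignableFrom(e.GetType())) // true
```

Which is exactly why protobuf-net goes looking for a conversion from Foo, not from Events.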

So with that said, now I can spend time working on a few editors so that testing and developing new game features becomes quick and easy. For now I’ll be working on an actor editor (for developing my component based design), shader editor (for work on graphics stuff), scripting editor (for my own language), and continue working on my game.




Well, not really, but Microsoft did. In Visual Studio 2012, the menus are in all-caps.

This is how you remove it for the express editions:

So what’s good about it?

For starters, the express versions come bundled with a set of languages, instead of having a different express version for every language. So Express for Desktop supports C#, C++, and Visual Basic. It would have been nice if F# was included, but that’s tucked away in Express for Web. Also included are unit testing, code analysis, and TFS integration. I’ll stick with Mercurial, but hey, it’s nice that Express is getting a lot of love.

Intellisense got a major boost. For example, intellisense now exists for C++/CLI. That is so very, very nice. As I’ve written before, I’m using a native library (Bullet) in my managed engine (Fungine), which necessitates the use of C++/CLI. That project was pretty un-fun to work with, mostly because it’s C++, but in part because of the complete lack of intellisense. Working with it was basically like working in Notepad with tabs. Now the project will get some much needed attention. And that’s not all, there’s a whole grab-bag of goodies for C++ in this release: improved reference highlighting, more semantic colorization, and la pièce de Resistance 3: C++11 support!

F# is now a first class citizen in the Express world. You used to have to use the Visual Studio Shell in combination with the F# compiler; now it comes bundled with Visual Studio Express for Web 2012, with improved intellisense and support for F# 3.0.

So much cool stuff to explore!


Upgrades, refactoring, and lessons learned

Big, big month at the laptop this month. The codebase was starting to get unpleasant to work with. The game was structured around a handful of agents that each controlled some major task: a graphics agent, a physics agent, etc. That was fine in the beginning when things were just starting off, but pretty soon the agents became too large. They were responsible for too many messages; adding more messages meant editing a large function and making painful changes to the agent’s state. So, I decided to refactor.


My first plan was to split all of the agents, slimming them down so that no agent was responsible for more than 5 messages. This keeps their message processing functions small and manageable, as well as the state they have to track. So, 5 agents turned into about 12 sleek ones. It’s a nice, flexible graph of agents. Whenever I add a major new feature, I can just write a tiny new agent for, say, lighting, and very little needs to change elsewhere. Things were bad, but now they’re good, forever!

Well… not quite. I was a little careless with how I connected the graph. When you want to tell an agent to do something and then wait for its reply, you use the PostAndReply method (or its async flavour). I got a little carried away with that. The main game loop requires that you wait for an agent to finish its task before moving onto the next task, for timing reasons. So the Boss tells the Simulation to simulate some frames, and then tells the Scene to draw, using PostAndReply. Then Scene tells the Renderer to render. And because the Renderer and the agent in control of all of the renderables share a device context, the Renderer has to wait for a lock to make sure that it’s the only thread accessing the device. The problem is, that’s a lot of waiting: the Boss waits for the Scene (blocked thread), which waits for the Renderer (blocked thread), which waits for the lock (blocked thread). At the same time, a few other agents are busy doing some other fun stuff. A nasty symptom started showing up: the game would randomly pause for one second.

Not having any idea what was wrong, I set out to collect data. Each agent would record the different types of messages it received, and how many occurrences of each, per second, to its own file. Then I whipped up a little something in C# that would graph the data, in hopes that I would notice a pattern. I’d run the game several times with a stopwatch in hand, wait for it to pause, and then check the graph to see what happened at that time. Here’s an example.

chart data

The first few seconds are fine, a few agents hum along at 60 messages/sec (the lines in the middle) while the rest are unbounded and tend to reach 120+ (lines at top). But at 19:13:27 everything comes to a crashing halt, and then a second later they all pick up again. So, at least I have data proving I’m not seeing things…

What’s the problem here? Sorry, we’re all out of threads! All of the agents run on the same ThreadPool. Every once in a while, too many agents would be busy, and some agent waiting on another through PostAndReply would have to wait a long time, almost a full second, before getting a reply. My guess is the ThreadPool was being starved: its threads were all blocked on each other, and it only spins up new threads slowly, so nothing could proceed until it did. This is bad. Very bad. (Ah ReBoot, best show ever).

So, my design needed a redesign. In particular, I was very careful to make sure that there wouldn’t be long chains of PostAndReply, at least for critical game functionality like rendering. I replaced the chain by having the Boss agent do all of the PostAndReply calls sequentially, collecting the intermediate results and passing them on to the next agent. Each agent in the critical subset is then guaranteed to spin up after one thing happens and before another. This also got rid of the lock requirement for the Renderer/Renderables, since it guaranteed that only one of them would use the device context at a time.
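The reshuffled frame loop looks roughly like this. A minimal sketch with made-up message types and canned replies, not the engine’s real ones: the point is that only the Boss ever blocks, one call at a time.

```fsharp
// Made-up message types, just to show the shape of the pattern.
type SimMsg = Simulate of AsyncReplyChannel<float>
type SceneMsg = Draw of float * AsyncReplyChannel<string>

let simulation = MailboxProcessor.Start(fun inbox -> async {
    while true do
        let! (Simulate reply) = inbox.Receive()
        reply.Reply 42.0 })   // pretend this is the new simulation state

let scene = MailboxProcessor.Start(fun inbox -> async {
    while true do
        let! (Draw (state, reply)) = inbox.Receive()
        reply.Reply (sprintf "drew state %.0f" state) })

// The Boss drives the frame one PostAndReply at a time, handing each
// agent's result to the next; no agent waits on another agent.
let runFrame () =
    let state = simulation.PostAndReply Simulate
    scene.PostAndReply (fun channel -> Draw (state, channel))
```

Each call only ever blocks the Boss’s thread, so the chain of three blocked threads from before collapses into one.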

After running the game many times, waiting a very long time, collecting a lot of data, the graphs all came back spike free. Success! On to the upgrades.


My driving principles are KISS and YAGNI. I keep things as simple as possible until they stop being simple, at which point they get refactored into a new simple thing. I learn so much about game development week by week that any “robust” design would just get changed anyways, and it’s a lot easier to change a simple design than a complex one.

That being said, the way I handled “game objects” was really simple. Basically, each of the former “mega” agents held a list of some component, and all of the components were related by an id. So the player would be id 0: the mega-graphics agent would have a box for id 0, physics would have a body for id 0, etc. In order to move the player’s graphical representation, you’d ask the mega-graphics agent to move the box with id 0. It worked fine; I reached my goal of making a simple platformer where you jump from platform to platform and collect things. But now that I’m looking into greater game object interaction, things needed to change.

The solution was… wait for it… agents! Yes, all game objects are agents now. Agents that will be able to send messages to each other, instead of having to go through the “engine” agents. One example that I’ll be implementing soon is a mechanism that requires the player to move to a certain location in order for a blocking wall to move out of the way. So once the player enters the sensor, it sends a message to the wall to lower, and when the player leaves it sends a message to raise.

I decided that I needed an event system that would allow agents to subscribe to some event and perform an action whenever it fires. It was surprisingly easy to implement using (any guesses? anyone? Bueller? Bueller?) agents! Any agent can post a subscription to whatever event it likes, and any agent can post new instances of said event from any thread, and it’ll all be ordered and pleasant through magic agents! This really helped glue together the whole design; it felt like the last remaining piece of the puzzle. Here’s a sample for the Player agent:

do keyboardEvents.Post(Subscribe (KeyPress, fun arg -> agent.Post (Move arg.Key)))

One line of code and now the player responds to keyboard input.
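The hub behind that line can be sketched as a single MailboxProcessor that owns the subscriber list. The types here are illustrative: the real engine’s Subscribe also names an event kind like KeyPress, while this sketch handles one event kind for brevity.

```fsharp
// Illustrative types; the engine's real Subscribe also carries an
// event kind such as KeyPress.
type KeyArg = { Key : char }

type EventMsg =
    | Subscribe of (KeyArg -> unit)
    | Publish of KeyArg

let keyboardEvents = MailboxProcessor.Start(fun inbox ->
    let rec loop subscribers = async {
        let! msg = inbox.Receive()
        match msg with
        | Subscribe action ->
            return! loop (action :: subscribers)
        | Publish arg ->
            // Callbacks run inside the agent, so delivery stays ordered
            // even when Publish comes in from many threads at once.
            subscribers |> List.iter (fun action -> action arg)
            return! loop subscribers }
    loop [])
```

Because the agent serializes all of its messages, publishers never need a lock, which is what makes the one-liner above safe to call from anywhere.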

Another change I added was interpolation of physics state for rendering. Since I cap the physics simulation at a fixed rate, the physics agent doesn’t perform exactly the same number of steps per frame. It’s allowed to fluctuate, performing more or fewer steps to maintain that fixed rate. It keeps an accumulator of time spent, which always has some time left over, representing how far you are between physics steps. So, the renderer takes that value and interpolates between the previous and current states to get a smooth update.
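The accumulator scheme is the classic fixed-timestep pattern; here’s a minimal sketch of it (the names are mine, not the engine’s, and the “state” is just a float standing in for a position):

```fsharp
let dt = 1.0 / 120.0   // fixed physics step, in seconds

/// Runs zero or more fixed steps, returning the two newest states and
/// the leftover time in the accumulator.
let stepPhysics simulate (previous, current) accumulator frameTime =
    let mutable acc = accumulator + frameTime
    let mutable states = (previous, current)
    while acc >= dt do
        let (_, c) = states
        states <- (c, simulate c dt)
        acc <- acc - dt
    states, acc

/// The renderer blends the two newest states by how far the leftover
/// time sits between steps, so motion looks smooth at any render rate.
let interpolate (previous : float) (current : float) leftover =
    let alpha = leftover / dt   // 0.0 .. 1.0 between physics steps
    previous * (1.0 - alpha) + current * alpha
```

If a frame delivers two and a half steps’ worth of time, the physics runs twice and the renderer draws halfway between the last two states.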

Finally, I went back and forth through the entire code base and worked through all of the todos that had built up till now. Things that were hardcoded for a specific function, like my instancer only supporting boxes, were generalized to support any shape. Cleaned up, fleshed out, optimal. Ready for my next iteration of gameplay features.

So that’s where things are now. I’m going to spend a little while working on my interface so that it’s easier to make levels than with the console, implement some Perlin noise since I’m really sick of looking at squares everywhere, and then start working on game object interactions and physics.


State of the game


A lot of changes have happened since last summer. I’ve started work on a new game. It’s a platformer: you collect things and solve puzzles. Here’s the current state of things right now:

feb 7 (number 1) [blog post]

feb 7 (number 2) [blog post]

feb 7 (number 3) [blog post]

The first change you should notice is that it’s all in stunning 3D. There’s a metric boatload and a half of things that have changed since the last game.

The Renderer

I’m using geometry instancing to draw all of the platforms. Before, I was making a separate draw call for each platform, which, even though the level shown only has about 100 platforms, took too much time for my laptop to handle. Now they’re all drawn in one draw call.

I added a feature to my shader system that allows me to edit a shader and recompile it as the game is running. You can change the shader in a provided text box and see your changes without even having to save the file. This is a recurring theme for my game: you can edit basically everything as the game is running. Shown here is a basic shader that does procedural texturing. It cuts up each platform into little squares with interleaved colours. I plan to use procedural texturing for everything in the game, since I can’t draw well and it’s just cool.

I also added a debug renderer for physics. It collects wireframe data from Bullet and draws it on top of the level geometry. This lets me see whether the physical and graphical representations of things are in sync. The way I was handling this was rather simple; my renderer had code that was basically:


(*first pass: draw all shapes*)
renderBoxes instances positions

(*second pass: draw wireframes*)
renderDebugLines debugLines

So debug rendering was hardwired into my renderer. Now I’ve abstracted it so the renderer just processes a list of commands like [ClearDepth, ClearRender, DrawInstanced, ClearDepth, DrawMesh]. It’s now a lot easier to make changes to the way I render things, and it will allow me to do some cool post processing stuff later on.
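A toy version of that command-list idea, just to show its shape (the command names and their string “effects” are made up; the real renderer obviously issues Direct3D calls instead):

```fsharp
// The renderer becomes an interpreter over a list of commands.
type RenderCommand =
    | ClearDepth
    | ClearRender
    | DrawInstanced of count : int
    | DrawMesh of name : string
    | DrawLines of count : int

let execute command =
    match command with
    | ClearDepth      -> "clear depth"
    | ClearRender     -> "clear render target"
    | DrawInstanced n -> sprintf "draw %d instances" n
    | DrawMesh name   -> sprintf "draw mesh %s" name
    | DrawLines n     -> sprintf "draw %d debug lines" n

// Debug rendering is no longer hardwired; it's just more commands
// appended to the frame's list.
let frame = [ ClearDepth; ClearRender; DrawInstanced 100; ClearDepth; DrawLines 42 ]
let log = frame |> List.map execute
```

Adding a post processing pass later just means inserting a few more commands into the list, with no changes to the interpreter.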


The console

I’ve implemented a basic console that can handle about 20 commands. This was a nice side effect of my message based engine: all the functionality was already there, so the console basically just parses text and sends the appropriate message. I’m looking into turning it into a basic scripting language, to make it more generalized and easier to add functionality to.


The future

I have a lot of plans for the future. There’s a ton of physics stuff that I want to add, like constraints, motors, springs, and different materials. Then of course I’ll add a scripting language for the game objects, which may or may not be the same as the console script. There’s also plenty of graphical stuff I’d like to explore: I’ve only scratched the surface with procedural textures, and then there’s lighting and shadows, and post processing effects like depth of field and motion blur.


L-Systems, Part 1

In Part 0 I gave a terribly basic intro to Lindenmayer Systems, or L-Systems for short. The gist of it is you’re rewriting strings using a set of rules that you define. One of the ways to get a pretty picture out of that string is to use a Turtle to interpret the string. You can think of Turtle graphics as you would a pen. You have a cursor (the pen tip) which is the location of where the pen marks will appear. You can move the cursor in different directions, and you can either keep the pen on the paper to draw a line, or lift the pen to have a blank space between the new position and the old. These are the basic operations that will allow us to get something out of our strings.

The first step is to assign operations to the symbols in our alphabet. I’ll use words instead of letters in this example to make it clear. Here’s the new alphabet for this example:

DrawLine – Move the turtle forwards by 1 cm (in the direction the turtle is facing) and draw a line connecting the old position to the new position

SkipLine – Move the turtle forwards by 1 cm (again in the proper direction), but do not draw a line

TurnLeft – Rotate the turtle 90 degrees to the left

TurnRight – Rotate the turtle 90 degrees to the right

In addition to this, we now need to keep track of the turtle’s position (x and y coordinates for this 2D example, but it works in 3D too) and orientation. So, let’s set up our productions and axiom and make ourselves a nice Koch curve.

w = DrawLine, TurnLeft, DrawLine, TurnLeft, DrawLine, TurnLeft, DrawLine

DrawLine –> DrawLine, DrawLine, TurnLeft, DrawLine, TurnLeft, DrawLine, TurnLeft, DrawLine, TurnLeft, DrawLine, DrawLine

It’s also assumed that if there’s no production for a symbol, say TurnLeft, the identity production Something –> Something is implicitly used; the symbol just doesn’t change. Here is a picture of our axiom w, followed by three rewrites using our only production, left to right.

Koch curve 3 derivations

You can rewrite the resulting string as many times as you want using your set of productions, to get a more complex result. The one thing to note here is that each result is just one really long line that you keep bending around to draw a shape. That will give you some pretty results, but to model more complex objects such as plants and trees, we need the ability to have lines branch off from each other. That will be the topic for part 2.
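For the curious, a turtle for this alphabet fits in a few lines of F#. This sketch only tracks position and heading; a full version would also record a segment every time it processes a DrawLine.

```fsharp
type Symbol = DrawLine | SkipLine | TurnLeft | TurnRight

type Turtle = { X : float; Y : float; Heading : float }   // heading in degrees

let step turtle symbol =
    match symbol with
    | DrawLine | SkipLine ->
        // Both move forward 1 unit along the heading; DrawLine would
        // additionally emit the segment from the old position to the new.
        let rad = turtle.Heading * System.Math.PI / 180.0
        { turtle with X = turtle.X + cos rad; Y = turtle.Y + sin rad }
    | TurnLeft  -> { turtle with Heading = turtle.Heading + 90.0 }
    | TurnRight -> { turtle with Heading = turtle.Heading - 90.0 }

let run word = List.fold step { X = 0.0; Y = 0.0; Heading = 0.0 } word

// The axiom w from above traces a unit square, ending back where it began.
let final = run [ DrawLine; TurnLeft; DrawLine; TurnLeft; DrawLine; TurnLeft; DrawLine ]
```

Feeding it longer rewritten words is the same fold, just over more symbols.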


Lavishing significant lecture signifying L-systems, part 0

A little while ago I started researching ways to generate content, since I’m not very artistic, and I can’t afford to pay my own salary, let alone an artist’s. On my journey I came upon L-Systems, which doubly interested me: it solves this problem, and it relates to language theory, which I used to like.

You’ve got some symbols, say p and c, which make up your alphabet A. Then you’ve got a set of rules that take one symbol (in A) and replace it with one or more symbols (again in A). The final thing you need is a list of one or more symbols which is your starting word, or axiom. As an example:

A = p, c

w = c

p –> c

c –> cp

Starting with our axiom, which is just the symbol c, we go through each symbol in the word and apply the suitable production. The key here is that all of the transformations happen at the same time, i.e. in one step every symbol is replaced, instead of only one symbol per step. So going for four steps, we have:

c

cp

cpc

cpccp

cpccpcpc
That’s definitely a tree, I can tell from the pixels, and from seeing a few trees in my time. Well, so it’s not one quite yet. This is the meat and potatoes of the idea, but the gravy comes from how we take this, and get this

l-systems blog post 1

This was done using Turtle graphics to interpret the string that our system came up with, which is what I’ll cover in my next post.
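The simultaneous rewriting itself is tiny; here’s a sketch in F# using the p and c productions above:

```fsharp
let produce symbol =
    match symbol with
    | 'p' -> "c"
    | 'c' -> "cp"
    | other -> string other   // identity production for any other symbol

/// One derivation step: every symbol in the word is replaced at once.
let rewrite (word : string) =
    word |> Seq.map produce |> String.concat ""

/// Applies the rewrite the given number of times, starting from the axiom.
let derive steps axiom =
    List.fold (fun word _ -> rewrite word) axiom [ 1 .. steps ]
```

A fun aside: with these two productions the word lengths grow like Fibonacci numbers (1, 2, 3, 5, 8, …), which is no accident; this is essentially Lindenmayer’s original algae system.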


Live post

I’m trying something new for this post. Right now I’m tracking down a large memory leak, and I figured I would write about it as I work on it. I’m using the CLRProfiler, which is a little tool that packs a lot of punch. The data it collects is insane: everything, and I mean everything, your program does, it finds out. It can give you a histogram of every byte allocated, sorted by the type of object: how many instances of that type exist, the total memory of each type and its percentage of your entire memory usage, as well as the average object size.

CLRProfiler 1

See that tiny scrollbar on the right? That list goes on for miles; at the bottom there’s a bunch of tiny 12 byte stuff with only one instance each. Apparently I’ve got 37 megs’ worth of System.Strings floating around somewhere, and that number grows quickly. Right click on that large red bar and hit “Show who allocated”, and it kicks into sudden death mode.

CLRProfiler 2

CLRProfiler 3

It’s very pretty. It’s a complete graph of every call made and the memory it allocated, every call those calls made, ad infinitum. I’m pretty sure if I printed this graph out, it would cover most of Toronto.

What does it mean?… I have absolutely no idea. I don’t know how to use this yet. Something somewhere is allocating a metric fuckton of strings, but I don’t use strings anywhere myself. So something is making strings for me.

A little (well, a lot) of digging later, and it turns out that since I was storing things in lazy sequences and passing those around, the string related functions in my loader kept getting re-run every time a sequence was enumerated. I should have used lists instead. After switching to lists, I’m back to allocating a few megs in a few seconds, instead of a little over a gigabyte. Good enough for now.
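The mechanism, boiled down (the real loader was more involved, of course): a seq re-runs its producer on every enumeration, while a list runs it once.

```fsharp
let mutable produced = 0

let asSeq =
    seq { for i in 1 .. 3 do
            produced <- produced + 1   // stands in for the loader's string work
            yield i }

let asList =
    [ for i in 1 .. 3 do
        produced <- produced + 1       // runs three times here, and never again
        yield i ]

// Enumerate each three times, like passing it around the engine would.
for _ in 1 .. 3 do asSeq |> Seq.sum |> ignore    // +9 productions
for _ in 1 .. 3 do asList |> List.sum |> ignore  // +0 productions
```

After this runs, produced is 12: three from building the list, and three more for every enumeration of the seq.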

Now I’m going to work on physics, so the player can fly around. This part is going to hurt a lot. I’m going to use the Bullet physics library. It’s an advanced physics and collision detection library, used in a lot of commercial games and even movies. Trouble is, it’s in unmanaged C++, and its documentation is lacking. There are two unofficial managed wrappers; one is dead, the other is iffy. I’m going to look into it now, and hopefully it will be suitable. Otherwise I’ll have to write my own wrapper alongside this engine.

C++/CLI makes both angels and demons cry for their mothers, so this is going to be quite soul numbing…fingers crossed.


And now for something completely different

I took a long time to solve a problem. Then I considered posting about it, but I wasn’t confident that I had enough to show. Now so much has changed that if I keep waiting, I’ll end up writing a book in a few years instead of just a post now.

Tic tac toe has given me a good start on my engine, but I have decided to move on to my next game instead of completing it. There’s nothing more that I can extract from it. I am now working on an Asteroids type game.

The last problem I had with Tic tac toe led me onto a new path. Up until my last post, I had been concerned with just getting shapes onto the screen. Then I wanted to be able to click on a space, and create a shape there. That’s where my journey started.

Back then, I was going to just use WPF’s events for input. The problem was that the only place I could access these events was in the one function that created the window and set up some graphics stuff. How could I get the position of the mouse from the event, and then send it somewhere that would make a shape at that location? Answer: you can’t, with that engine design.

I poked around, trying to make something work. Soon I decided there was no way I could just make a small change and have it work. So I sat back, and opened my mind to brainstorm. In my mind I kept picturing the problem as passing a message around from one thing to another. That’s when I thought of the MailboxProcessor; it would be a nice, simple way to pass messages around.

So I came up with an agent based design, where each piece of the engine is an agent, with a boss that handles routing messages between them. With MailboxProcessors it works two ways: you can either send a message and then continue on with your work (useful for saying things like “hey, make something here”), or you can send a message and then wait for a reply (useful for asking things like “what is the location of something?”). I think it’ll work well, not just for now, but for future versions too.
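Both styles side by side, on a toy agent (the message names are invented for illustration, not taken from the engine):

```fsharp
type Msg =
    | MakeSomethingAt of float * float                   // fire and forget
    | WhereIsPlayer of AsyncReplyChannel<float * float>  // ask and wait

let boss = MailboxProcessor.Start(fun inbox ->
    let rec loop playerPos = async {
        let! msg = inbox.Receive()
        match msg with
        | MakeSomethingAt (x, y) -> return! loop (x, y)
        | WhereIsPlayer reply ->
            reply.Reply playerPos
            return! loop playerPos }
    loop (0.0, 0.0))

boss.Post (MakeSomethingAt (3.0, 4.0))     // returns immediately
let pos = boss.PostAndReply WhereIsPlayer  // blocks until the answer arrives
```

Because the mailbox processes messages in order, the query is guaranteed to see the position set by the Post that preceded it.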

The thing about MailboxProcessors is that they’re off working alone in their own thread. So instead of treating the agents as if they were all working on the same thread, sequentially doing their work and then sending a message that would be received in the next frame, I decided to have agents work in parallel.

This turned out to be a little harder than I thought, and created problems while I was still in the mindset of “synchronous but concurrent”. Then I threw caution to the wind and adopted the mindset that every agent is its own island. Each agent does its own thing with the data it has available. For instance, if the physics agent hasn’t finished simulating the world for a given frame, the graphics agent just renders the last known state of the world. Once the physics agent finishes the frame, it sends the new data asynchronously in a message to the graphics agent, which uses it the next time it renders.

Hopefully this new engine will make good use of multicore processors. The laptop I develop on only has an ancient, weak dual core proc, but my gaming rig has a nice beefy quad that I’d love to unleash some serious simulating and rendering on.

A few hours ago I found another benefit of this design: once the agents were all separated, it was really easy to lock each one to a fixed frame rate. It’s important to have a fixed time step for physics simulations, otherwise things tend to… explode… universe divides by zero, that kind of jazz. It also makes the simulation deterministic. And you can set different priorities for different tasks: perform physics simulation at 120Hz, poll for user input at 60Hz, update some AI at 30Hz, instead of having every task share the same frequency.

That reminds me, I got to try out a new (to me) feature in F#: units of measure. Instead of just having numbers, you can give units to those numbers, not just so that it’s clear what the numbers represent, but so the compiler can make sure you aren’t doing something wrong (adding meters to kilograms would result in a compile error, for instance).

I used units of measure when working on the fixed frame rate problem. In order to run at a fixed rate, you need to know how much time each frame gets. But a value like 0.00833… doesn’t immediately tell you what rate you chose. Sure, you could leave a comment, but then late one night you might change the value to something else and forget to update the comment.

So, I was able to say:

let lockedFramerate = 120.0<Hz>

and then in order to get the dt of one frame,

let dt = 1.0/lockedFramerate

Now I have a value lockedFramerate, which is given in hertz, and also a value dt, which is given in 1/Hz, which the compiler correctly figures out is seconds. The code after that statement treats dt as a measure of seconds, so if I made a dumb mistake such as

let dt = lockedFramerate / 1.0<s>

it would determine the unit of dt to be Hz/s, and any statement trying to treat dt as a value in seconds would generate a compile error.
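For completeness, here’s a self-contained version of that, assuming you declare the measures yourself rather than pull them in from a units library:

```fsharp
[<Measure>] type s           // seconds
[<Measure>] type Hz = / s    // hertz, defined as 1/s

let lockedFramerate = 120.0<Hz>

// 1.0 / float<Hz> has measure 1/Hz, which the compiler reduces to s,
// so the float<s> annotation typechecks.
let dt : float<s> = 1.0 / lockedFramerate

// The mistaken version from above refuses to compile:
// let bad : float<s> = lockedFramerate / 1.0<s>   // error: Hz/s, not s
```

The units vanish at runtime; they exist purely for the compiler’s benefit, so there’s no performance cost to any of this.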

<Columbo>Oh, just one more thing</Columbo>: I found a tiny bug in SlimDX while adding support for SlimDX’s RawInput to my input agent. If you use WPF with RawInput, you won’t receive any messages. Hopefully they’ll look at my ticket soon and use my solution.


A Touch of Colour

A warning: this picture will show a dramatic change from my last post.

march 4 (number 2) [coloured board]

Yes, things have colour now! But surely colour doesn’t mean much, does it? In WPF, one would simply say line.Brush = *some coloured brush*, and you’re done. In basic OpenGL tutorials you would just add glColor3f(r, g, b) inside your glBegin/glEnd pair and, again, you’re done. Not so with Direct3D11. In Direct3D11 it’s a shader’s world; there are no hardcoded variables such as colour, position, or size. Instead, it’s all values that get passed to the shaders. In the case of this picture, all of the colours are constant over the whole shape, i.e. they don’t change per-vertex, so I stored the colour data in a constant buffer. Constant buffers store per-object data: some value that should be used for the whole shape. It’s more efficient than having a copy of that value for each vertex; less memory is used, as well as less bandwidth.

So now (fun)gine supports the use of constant buffers, which is a major thing when you’re talking shader support. Right now it just supports one buffer, with one value, and it’s expected to be used as a colour, but it is trivial to generalize this to n buffers with any values stored within. Such values could be a time value for animation, information for lights/shadows, etc.

(Tic tac toe) is getting close to done. The obvious things that are left are user input and game logic.

After reading a post on AltDevBlogADay, I have rethought my approach to scripting languages. Previously, I was trying to make a semi-full-featured language that you would program in. It included a bunch of things that you would need to code something, but not what you would need to write rules.

In his post, Jake talks about unnecessary language features like complicated math support. He argues that doing computations on values is problematic: it can lead to exceptions, it’s going to be slow to interpret, and it makes scripts very dependent on map values not changing. If you have a script that does something to an object located at some point [x, y, z], and then you download a mod that moves that object, the script is now broken.

When I was designing my scripting language, I wasn’t actually considering how I would use it to make a game; I was just looking at other languages in use and basically competing with their feature lists. But Jake’s post has made me rethink this approach. Sure, having such a language would allow you or modders a great deal of flexibility, but it also gives you enough rope to hang yourself.

I’m now taking the approach of having a set amount of script functionality built into (fun)gine, and that’s it. You won’t be able to write what would amount to “engine” code in your scripts. You can only use the functions given to you by the engine to make rules and decide logic. This means you won’t have control over memory, the ability to write functions, etc. If you want more behaviour, then you’ll have to wait for an update to the engine, instead of trying to hack it in yourself.

I’m hoping that this will make scripts safe and efficient, and at the same time be easier to write game code with. This is what I imagine a Tic Tac Toe script would look like:

when player clicks a space

    make “x” at space

    if space is centre

        make “o” at row(space) diagonal

    or if space is diagonal

        and row(space) has middle

            make “o” at row(space) diagonal

Or something like that. Space would be defined similarly to the template syntax shown earlier; when/make/if would all be functions defined in the engine.


A simple picture is worth a few words

My progress so far with Tic Tac Toe

Here it is, this is what Tic Tac Toe is so far. Very, very simple and boring. But allow me to talk about what you don’t see. Tic Tac Toe is built with my F# game engine, (fun)gine. The interface uses WPF in C# and XAML. All non-interface stuff is rendered with Direct3D11. Already this should tell you that there is something more than meets the eye: combining WPF with Direct3D in a .NET language isn’t so simple. Until recently, it wasn’t even possible. Since the addition of D3DImage, WPF controls have been able to host Direct3D9 content. Hm, 9, but I’m using 11. Ah, but wait! It can also host 9Ex content. Which means you can create a normal D3D11 renderer, and then share the backbuffer with a D3D9Ex device, which WPF will then display. It’s a little more complicated than your average Tic Tac Toe.

You might also be surprised to hear that this impressive display uses a geometry shader. A geometry shader? But you aren’t rendering to a cube map, or doing shadow mapping or, well, ANYTHING! Aha, but I am creating new geometry on the fly. See, I only store line data in my data file; that is, point A and point B. I do this for simplicity: if I want to make a change to a line, I can just move the start or end point, instead of needing to do some math to figure out where each vertex should go. So I use a geometry shader to take each line and output the two triangles that make up the rectangle of the line. Yes, a boring, simple use case for a geometry shader, but this is the first time I’ve ever used one.

Another not so simple feature of my engine is the template/generator system I came up with today. See, right now my engine is only a few days old, so I haven’t started on a level editor yet, and I’m writing all the data files by hand. And I’m really lazy, so I don’t want to keep writing things over and over. So I came up with the idea of templates. A template contains some lines that you want to draw many times in different locations. Templates aren’t drawn on their own though, so if you define a bunch of templates and that’s it, nothing will be drawn. That’s where generators come in. A generator takes the name of a template and a collection of positions, and draws that template at each position. Here is an example for the “X” shape in the picture.

The template is:

<Template name="x">
	<Position X="-1.0" Y="1.0" Z="0.0"/>
	<Position X="1.0" Y="-1.0" Z="0.0"/>
	<Position X="-1.0" Y="-1.0" Z="0.0"/>
	<Position X="1.0" Y="1.0" Z="0.0"/>
</Template>

and the generator is

<Generator template="x">
	<Position X="0.0" Y="0.0" Z="0.0"/>
</Generator>

Boom, one x, ready to go. Want to draw more x’s? Just add more Positions inside that generator.
