Ephemeris Design Decisions

At the time that I started writing Ephemeris (early 1999) I searched the web for rendering engines and found very few.  Open Inventor looked promising but was commercial at the time, so I decided to write my own.  (I like writing my own things, so this was a good excuse.)

The purpose

Ephemeris was originally conceived as the basis of a screen-saver that would create random photo-realistic outdoor scenes, though not limited to scenery.  Thus the engine needed to be capable of displaying scenes (mountains, water, sky, clouds, etc.) but also capable of storing a library of objects so that the screen-saver AI could intelligently combine them into a pleasing scene.  This gave the primary goals:
  1. Photo-realism
  2. Ability to render from a database of objects
  3. Moderately fast rendering times (1 minute or less)

Rendering method

When computers get fast enough, real-time ray-tracing will probably supplant the current graphics methods, at which point POV-Ray, being free and powerful, will probably be the inevitable choice.  I didn't really want to write a ray-tracer, since it would provide nothing that POV-Ray didn't already have.  However, I did some benchmarks on a Pentium III 450 (fast at the time) and, if I remember correctly, POV-Ray took between 30 and 60 minutes to render a full-screen image (selected from some of the winning entries in the International Ray-Tracing Competition).  This was way too long for my screen-saver to generate an image (one minute or less would be acceptable), and closing a factor of 30 would take five doubling periods (approximately 7 years, according to Moore's Law), which was also too long to wait.  This also implies that real-time ray-tracing at full-screen resolution is about 5 orders of magnitude away (100,000 times), assuming that "real-time" means 30 frames per second.  Thus, for the short-term future at least, ray-tracing is not a viable option for quick rendering at large sizes.

This left either OpenGL or DirectX.  DirectX is about as portable as a bank vault, and John Carmack had blasted the DirectX API not much earlier in one of his .plan files; since OpenGL certainly seemed like the industry standard, I went with that.  (I have not had cause to regret it.)
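
Incidentally, the arithmetic above works out roughly as follows (a back-of-the-envelope sketch in C++; the 18-month doubling period is the usual statement of Moore's Law, and the frame times are from my fuzzy recollection):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Closing a 30x speed gap takes log2(30) doublings of CPU speed.
        double doublings = std::log2(30.0);                 // ~4.9
        std::printf("doublings: %.1f (~%.1f years at 18 months each)\n",
                    doublings, doublings * 1.5);            // ~7.4 years

        // The real-time gap: a 30-60 minute frame vs. 1/30 of a second.
        double gap = (30.0 * 60.0) * 30.0;                  // 54,000x at 30 minutes
        std::printf("real-time gap: %.0fx to %.0fx\n", gap, 2.0 * gap);
        return 0;
    }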

Model format

Every so often I have tried out various rendering packages, and inevitably their user interfaces are extremely hard to use.  For some reason cubes, spheres, and cones are easy to place, but anything more complicated is a mystery.  In short, they are a user interface nightmare.  (To be fair, I haven't tried out very many commercial products, both for price reasons and out of disappointment with what I had seen.)  Some non-GUI environments, such as Pixar's RenderMan software or the POV-Ray ray-tracing software, take a series of commands, which has the disadvantage that you end up programming your scene (the way that TeX makes you program your documents:  powerful, but usually you are willing to trade power for the ability to write your text without debugging it).  I wanted something stored in a plain text file where it was immediately obvious what was happening (in part so that I didn't have to write my own GUI).

Since I wanted to avoid debugging my models, I decided to experiment with an object-oriented approach.  Being a programmer, and wanting to be able to reuse my models easily, I wanted a design that permitted reuse of objects in the way that functions permit reuse of code.  Thus, the central paradigm of Ephemeris models is small, reusable objects.  There are no commands, just properties of objects.
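
As an illustration of the paradigm, here is a hypothetical C++ sketch (not the actual Ephemeris classes or syntax) of a model as a tree of objects, each carrying nothing but properties and child objects, so that reusing a model is as simple as instancing a node:

    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical sketch: an object is nothing but properties plus children.
    struct ModelObject {
        std::string type;                               // e.g. "FractalTree"
        std::map<std::string, std::string> properties;  // e.g. "color" -> "(128, 128, 0)"
        std::vector<std::shared_ptr<ModelObject>> children;
    };

    int main() {
        // A tree can be described once...
        auto tree = std::make_shared<ModelObject>();
        tree->type = "FractalTree";
        tree->properties["height"] = "12";

        // ...and reused like a function call, as many times as needed.
        ModelObject scene;
        scene.type = "Scene";
        scene.children.push_back(tree);
        scene.children.push_back(tree);  // the same object, placed twice
        return 0;
    }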

Having the models easily readable meant that there had to be fairly high-level objects, since objects like mountains and trees would otherwise be a huge mass of unreadable triangles.  Thus the FractalMountain and FractalTree objects.  Things like statues might be conveniently modelled with rotated height maps, so the ReliefSculpture was created.  I guessed that liquids could be modelled with equations;  the AnalyticSurface resulted.  Most objects have smooth surfaces, so clearly a good way of generating them needed to be found.  NURBS surfaces can represent virtually any smooth 3-dimensional surface, so they were the best candidate, especially since OpenGL already had support for them.  The fact that they are slow (except on a GeForce 3, at the time of this writing) was not a problem;  I figured that by the time I finished the engine, Moore's Law would have fixed that.  (In point of fact, I ended up avoiding the problem by pre-computing them and storing the vertices, at the expense of memory.)
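
The pre-computation amounts to evaluating the surface on a fixed grid once, then drawing the cached vertices every frame.  A minimal sketch of the idea in C++ and legacy OpenGL (evaluate() here is a flat placeholder standing in for a real NURBS evaluator; this is not the engine's actual code):

    #include <GL/gl.h>
    #include <vector>

    struct Vertex { float x, y, z; };

    // Placeholder for a real NURBS evaluator (e.g. de Boor's algorithm).
    Vertex evaluate(float u, float v) { return { u, 0.0f, v }; }

    // Evaluate the surface once on an n x n grid and keep the vertices:
    // memory traded for per-frame evaluation time.
    std::vector<Vertex> tessellate(int n) {
        std::vector<Vertex> grid;
        grid.reserve(n * n);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                grid.push_back(evaluate(i / float(n - 1), j / float(n - 1)));
        return grid;
    }

    // Every frame, just draw the cached grid as triangle strips.
    void draw(const std::vector<Vertex>& grid, int n) {
        for (int i = 0; i + 1 < n; ++i) {
            glBegin(GL_TRIANGLE_STRIP);
            for (int j = 0; j < n; ++j) {
                glVertex3fv(&grid[i * n + j].x);
                glVertex3fv(&grid[(i + 1) * n + j].x);
            }
            glEnd();
        }
    }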

Analysis

The choice to use OpenGL rather than ray-tracing was well-made:  the API is fairly clean and it is portable.  While certain kinds of realism are hard to achieve with OpenGL (proper translucency, lens effects, and recursive reflections), my inability to make realistic models and textures and the difficulty of accurately modelling physical phenomena are currently much more of a limiting factor.

The model design, while still the best choice for the requirements set out, has proven to have several problems:  text files are not that easy to make models with, and they make writing a GUI (to solve the first problem) a pain.  I discovered fairly early on that while a model file was easy to read once written, it was hard to write.  Even something as simple as making a bronze sphere takes a lot of time.  First you try yellow ("color = (0, 128, 128)"); oops, green and blue make cyan.  You fix that ("color = (128, 128, 0)"); oops, bronze isn't really yellow.  At this point you whip out some program that has an RGB color editor (the Windows default color picker is fine for this) and go through several iterations, copying down the values.  Once you get the color right, you then have to try several different shininess parameters.  Obviously, the #material command was intended to allow one to do this only once, but imagine the difficulty of making a chess piece.  (It took me about 4 hours to model:  first I had to sketch out the piece on graph paper to get the coordinates;  I spent the rest of the time fiddling with the coordinates because my sketch looked terrible on the screen.  Worse, moving a section of points meant that I had to change the coordinates for most of the other points, which took a lot of time.  With a GUI it took about 5 minutes.)
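
Underneath, a property like that presumably becomes an OpenGL material, which is where the iteration really bites, since ambient, diffuse, specular, and shininess all interact.  A sketch of a hand-tuned bronze in raw OpenGL (these values are a commonly used bronze approximation; arriving at anything like them by trial and error is the slow part):

    #include <GL/gl.h>

    // A plausible bronze.  Getting numbers like these right is the
    // tweak-render-repeat cycle that #material lets you do only once.
    void setBronzeMaterial() {
        const GLfloat ambient[]  = { 0.21f, 0.13f, 0.05f, 1.0f };
        const GLfloat diffuse[]  = { 0.71f, 0.43f, 0.18f, 1.0f };
        const GLfloat specular[] = { 0.39f, 0.27f, 0.17f, 1.0f };
        glMaterialfv(GL_FRONT, GL_AMBIENT, ambient);
        glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);
        glMaterialfv(GL_FRONT, GL_SPECULAR, specular);
        glMaterialf(GL_FRONT, GL_SHININESS, 25.6f);
    }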

When I started to write the GUI I discovered that C/C++ really prefers data structures to text.  The GUI had to have a separate parser from the engine, because the engine didn't need to keep track of where the objects were in the file:  the engine was never going to insert a new subobject, but the GUI would.  Furthermore, the properties needed some mapping from the GUI data structures to text, and keeping this in sync required constant vigilance on the part of the programmer (me).  Ultimately, however, it is extremely nice to be able to tweak the model files if something goes wrong with the GUI (which is not unlikely, since I have not focused a lot of effort on it), and it is nice having the data independent of a particular implementation's data structures.
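
Concretely, the GUI's parser has to remember where each object's text lives so that it can splice new subobjects into the file, bookkeeping the engine never needs.  A hypothetical sketch of the extra state (again, not the actual Ephemeris parser):

    #include <cstddef>
    #include <string>
    #include <vector>

    // The engine only needs the parsed data; the GUI also needs to know
    // where in the file each object came from, so it can edit in place.
    struct ParsedObject {
        std::string type;
        std::size_t fileOffset;   // where this object's text begins
        std::size_t fileLength;   // how much text it spans
        std::vector<ParsedObject> children;
    };

    // Inserting a subobject means splicing text at the right offset,
    // after which every offset recorded later in the file must shift:
    // the text/structure synchronization that demands vigilance.
    void insertSubobject(std::string& file, ParsedObject& parent,
                         const std::string& text) {
        std::size_t at = parent.fileOffset + parent.fileLength;
        file.insert(at, text);
        parent.fileLength += text.size();
        // (offsets of all objects after the parent still need shifting)
    }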

I also discovered that some of my assumptions about how the world works were incorrect.  Water does not seem to be easily modelled by simple equations, for instance.  Another discovery is that height maps were not as easy to create from photographs as I had imagined, which limits the usefulness of the ReliefSculpture.  The problem is that the intensities of the colors do not correspond to the displacement of the surface itself, and there is no good way to correct for this, since a painted sculpture could very well have the same color in both recessed and relief areas.  On the other hand, NURBS objects have become very useful, especially since the calculation can be performed once if memory is not an issue, and memory is an increasingly unlikely issue as technology improves.
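
The failed assumption is easy to state in code:  the obvious recipe reads each pixel's intensity as a height, so a dark painted stripe becomes a groove whether or not the surface actually recedes.  A sketch of that naive mapping (hypothetical helper, not the engine's code):

    #include <cstddef>
    #include <vector>

    // Naive height map: displacement proportional to pixel intensity.
    // This is the assumption that fails for painted sculpture, where
    // dark paint can sit on raised areas and light paint in recesses.
    std::vector<float> heightsFromImage(const std::vector<unsigned char>& gray,
                                        float maxHeight) {
        std::vector<float> heights(gray.size());
        for (std::size_t i = 0; i < gray.size(); ++i)
            heights[i] = (gray[i] / 255.0f) * maxHeight;
        return heights;
    }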

The decision to keep everything an object has been very powerful.  First, while programming OpenGL in the engine itself, I have discovered the problem with state programming.  Although state programming is very easy to learn and to program, debugging is difficult because there is never documentation describing how the states interact with each other or which state is a likely candidate for a particular symptom.  I have spent hours tracking down state problems that turned out to involve a state I didn't know existed, a problem to which my model files are immune.  Secondly, having objects composed of objects composed of objects (etc.) has been useful because it is easy to reuse objects yet give them an entirely different visual appearance.
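
A typical instance of the state problem:  some earlier code enables a state and never disables it, and a later draw mysteriously changes appearance.  Bracketing the change contains the damage; a small illustration in legacy OpenGL:

    #include <GL/gl.h>

    void drawTexturedQuad()  { /* glBegin(GL_QUADS) ... */ }
    void drawPlainTriangle() { /* glBegin(GL_TRIANGLES) ... */ }

    void render() {
        glPushAttrib(GL_ENABLE_BIT);   // save the enable flags
        glEnable(GL_TEXTURE_2D);
        drawTexturedQuad();
        glPopAttrib();                 // restore: texturing off again

        // Without the push/pop above, GL_TEXTURE_2D would still be
        // enabled here, and this triangle would be modulated by
        // whatever texture happened to be bound: a state you might
        // not even know existed.
        drawPlainTriangle();
    }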

Copyright © 2001 Geoffrey Prewett