Generating Beautiful 3D Outlines
Recently I needed to draw an outline around the selected object in an architectural renderer I was working on. It turns out that outlining a 3D object is not as easy as it seems it ought to be. In fact, the most common suggestion in places like Stack Overflow really does not look very good. In this article I summarize the main techniques, eventually settling on post-processing for the best results.
Offset and Scale
The most commonly suggested method for outlines is to scale the object slightly and offset it away from the camera. This works fine as a quick-and-dirty method, but it is not suitable for production work, for several reasons (a sketch of the technique follows the list):
- The scaling simply multiplies every dimension, so long objects will have a thick line on the ends of the long dimension and a thin line on the edges of the thin dimension. For example, scaling a 4 m × 0.1 m plank by 3% adds about 6 cm of outline at each end but only 1.5 mm along each side.
- Scaling also causes pieces of the model far from the center to be offset with respect to where the outline needs to be. For a proper outline of a table you want the table legs to simply become thicker. However, since scaling increases the size of the whole table, the legs also move away from the center, in addition to becoming thicker. This pretty much guarantees that your outlines will be incorrect, especially if you want the outline to be thick enough to be easily visible. It also ensures that your outline widths change as the viewing angle changes.
- Getting a fixed-width outline is tricky, since the scaling and offset amounts need to vary with the distance to the camera. (Unless you actually want the width to get thinner as the object gets farther away.) But as we have already seen, getting a consistent width in the general case is impossible anyway.
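For concreteness, here is a minimal sketch of the scale-and-offset approach as a WebGL vertex shader embedded in TypeScript. The uniform names (modelViewProjection, outlineScale, depthOffset) are my own illustrative choices, and a real implementation would want to scale about the object's center rather than the model-space origin:

// Illustrative only: not from any particular engine.
const offsetOutlineVS = `
  uniform mat4 modelViewProjection;
  uniform float outlineScale; // e.g. 1.03 for a ~3% enlargement
  uniform float depthOffset;  // small positive value, in NDC units
  attribute vec3 pos;
  void main() {
    // Enlarge the model, then push it away from the camera so that the
    // object drawn on top hides everything but the enlarged rim.
    vec4 p = modelViewProjection * vec4(pos * outlineScale, 1.0);
    p.z += depthOffset * p.w; // adds depthOffset to NDC z after the divide
    gl_Position = p;
  }
`;

The enlarged copy is drawn in the outline color and the object is drawn on top of it; the rim that survives is the outline, and its width varies for exactly the reasons above.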
Stencil and Wireframe
The oldest outlining technique is to draw the selected object into the stencil buffer, then draw the object as a thick wireframe with the stencil test set to pass only where the stencil value is 0. Since pixels belonging to the object will be 1, the only pixels drawn will be outside the object. These correspond to the edges of the wireframe, since those lines straddle the silhouette, half on the interior and half on the exterior. This was a reasonable approach in the OpenGL 1.1 days, but it has several drawbacks that make it unsuitable for many modern applications (a sketch follows the list):
- Line drawing is often not accelerated well on consumer-grade graphics cards, so this may be relatively slow.
- OpenGL ES does not require the implementation to support line widths other than 1, which basically makes this solution unusable on mobile and WebGL, unless you are okay with 0.5px outlines.
- OpenGL’s lines do not look very good. Plus, the lines are completely solid, so the edge transition will be very harsh.
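For reference, here is a hedged sketch of the two stencil passes in WebGL. The drawObject and drawWireframe callbacks are placeholders for however your engine submits geometry, and the stencil buffer is assumed to have been requested at context creation:

// Sketch only: assumes a context created with { stencil: true }.
function drawStencilOutline(gl: WebGLRenderingContext,
                            drawObject: () => void,
                            drawWireframe: () => void): void {
  gl.enable(gl.STENCIL_TEST);

  // Pass 1: write stencil value 1 wherever the object covers a pixel.
  gl.clearStencil(0);
  gl.clear(gl.STENCIL_BUFFER_BIT);
  gl.stencilFunc(gl.ALWAYS, 1, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);
  drawObject();

  // Pass 2: draw thick wireframe lines, but only where the stencil is
  // still 0, i.e. outside the object. The outer half of each edge line
  // survives and becomes the outline.
  gl.stencilFunc(gl.EQUAL, 0, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
  gl.lineWidth(5.0); // widths other than 1 are not guaranteed to work
  drawWireframe();

  gl.disable(gl.STENCIL_TEST);
}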
Post-processing
Post-processing is much more robust. The general method is:
1. Render the selected object as a solid mask into an offscreen texture.
2. Render the scene normally.
3. Draw a full-screen quad over the scene, with a fragment shader that samples the mask texture and computes the outline’s color and alpha at each pixel, blended over the scene with depth writes disabled.
Disabling depth writes when drawing the outline has the side effect that the outline will be visible even if the object is hidden behind another object. Generally this is desirable, as you usually want to know where your selection is even if it is not visible. Should this be undesirable, you can add a depth buffer to the mask texture being rendered to, then, in the post-processing step, leave depth testing enabled and have the fragment shader write the depth value from the texture as the fragment’s depth. (This requires that your OpenGL implementation supports writing the depth value; a sketch follows.)
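Here is a hedged sketch of what that variant's fragment shader could look like in WebGL 1, which needs the EXT_frag_depth extension (and, to sample the mask pass's depth at all, typically WEBGL_depth_texture). The maskDepth uniform name is mine, not from the code below:

// Sketch only: maskDepth is assumed to be the mask pass's depth buffer,
// made available as a texture.
const occludableOutlineFS = `
  #extension GL_EXT_frag_depth : require
  precision highp float;
  uniform sampler2D maskDepth;
  uniform vec4 color;
  varying vec2 texCoord;
  void main() {
    gl_FragColor = color; // outline color and alpha, computed as usual
    // Forward the object's depth so the ordinary depth test hides the
    // outline exactly where the object itself would be hidden.
    gl_FragDepthEXT = texture2D(maskDepth, texCoord).r;
  }
`;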
The post-processing step can be any image processing you desire. However, I have found that there are two qualities that help make a good outline:
- The outline should be fully solid for the first pixel or two nearest the edge. This means you can’t just do something like alpha = 1.0 - dist / width, as then the first pixel will be noticeably transparent. For example, with a 5px line, the first pixel could very well have an alpha value of 0.8. This is not difficult to fix but does complicate the math slightly.
- If you want the outline visible when the object is obscured, you probably do not want the interior to be solid; otherwise you will get a large colored blob, which can be confusing if the shape at the current viewing angle is not easily recognizable. The mask texture has a solid interior, of course, so the shader needs some logic to make the interior transparent. This means that a Gaussian blur might not give good results, although Sobel edge detection might.
I ended up using shaders similar to the following:
Vertex shader:
uniform vec2 pixelSize;
attribute vec2 pos;
varying vec2 texCoord;

void main()
{
    texCoord = pos;
    // pos ranges over [(0, 0), (1, 1)], so we need to convert to OpenGL’s
    // native clip coordinates of [(-1, -1), (1, 1)].
    gl_Position = vec4(2.0 * pos.x - 1.0, 2.0 * pos.y - 1.0, 0.0, 1.0);
}
Fragment shader:
precision highp float;

uniform sampler2D texture; // the mask texture
uniform vec2 pixelSize;    // size of one texel in UV units
uniform vec4 color;        // the outline color
varying vec2 texCoord;

void main()
{
    const int WIDTH = 5; // outline width in pixels
    bool isInside = false;
    int count = 0;
    float coverage = 0.0;
    float dist = 1e6;
    // Scan a (2 * WIDTH + 1)^2 neighborhood, accumulating how much of it
    // the mask covers and the distance to the nearest masked pixel.
    for (int y = -WIDTH; y <= WIDTH; ++y) {
        for (int x = -WIDTH; x <= WIDTH; ++x) {
            vec2 dUV = vec2(float(x) * pixelSize.x, float(y) * pixelSize.y);
            float mask = texture2D(texture, texCoord + dUV).r;
            coverage += mask;
            if (mask >= 0.5) {
                dist = min(dist, sqrt(float(x * x + y * y)));
            }
            if (x == 0 && y == 0) {
                isInside = (mask > 0.5);
            }
            count += 1;
        }
    }
    coverage /= float(count);

    float a;
    if (isInside) {
        // Interior: fade out as local coverage rises, so the inside of the
        // object stays transparent instead of becoming a solid blob.
        a = min(1.0, (1.0 - coverage) / 0.75);
    } else {
        // Exterior: fully solid for the first 30% of the width, then a
        // linear falloff, so the pixels nearest the edge are opaque.
        const float solid = 0.3 * float(WIDTH);
        const float fuzzy = float(WIDTH) - solid;
        a = 1.0 - min(1.0, max(0.0, dist - solid) / fuzzy);
    }

    gl_FragColor = color;
    gl_FragColor.a = a;
}
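For completeness, here is a hedged sketch of how the host side might drive these shaders in WebGL. Program and buffer creation are assumed to have happened elsewhere, the names prog, maskTex, and quadBuf are mine, and the mask texture is assumed to match the framebuffer size, so pixelSize is one texel in UV units:

// Sketch only: composites the outline over the already-rendered scene.
function drawOutline(gl: WebGLRenderingContext, prog: WebGLProgram,
                     maskTex: WebGLTexture, quadBuf: WebGLBuffer,
                     color: [number, number, number, number]): void {
  gl.useProgram(prog);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, maskTex);
  gl.uniform1i(gl.getUniformLocation(prog, "texture"), 0);
  gl.uniform2f(gl.getUniformLocation(prog, "pixelSize"),
               1 / gl.drawingBufferWidth, 1 / gl.drawingBufferHeight);
  gl.uniform4fv(gl.getUniformLocation(prog, "color"), color);

  // Blend the outline over the scene with depth writes disabled, as
  // described above.
  gl.depthMask(false);
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

  // quadBuf holds two triangles covering [(0, 0), (1, 1)], matching the
  // pos attribute the vertex shader expects.
  gl.bindBuffer(gl.ARRAY_BUFFER, quadBuf);
  const loc = gl.getAttribLocation(prog, "pos");
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, 6);

  gl.depthMask(true);
  gl.disable(gl.BLEND);
}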
The code for the illustrations may also be useful: see the function run_offsetOutline for the offset-and-scale technique and run_blurOutline for the post-processing technique.