Rust raytracer (school project)

2021 - 2022

This raytracer project is my current work for my apprenticeship project at the University of Wisconsin-La Crosse. It is a rewrite, in Rust, of an older project, designed for maximum performance. Over the course of this school year, I am working with Dr. Kenny Hunt to research rendering and optimization techniques as a way to further my understanding of high-performance computing, especially parallelization.

Sample render

My work with raytracers started last year, when I wrote a raytracer plugin for a game I play. The plugin exports the map players have built from the game's server into the game's save format, reads it into memory, and converts the game's shapes into raytracer primitives. The raytracer (iteration one written in JavaScript, iteration two in Crystal) would then render the scene using a variety of rendering techniques and finally either export the image to a PNG or load it back into the game by converting the image to a series of bricks; the resulting save underwent quadtree optimization to reduce how many bricks had to be loaded back in.

In 2021, I was granted the Eagle Apprentice scholarship at UW-La Crosse, which allows me to work with a professor in my department on a project, with the goal of becoming more familiar with both my major and a topic related to it. In my case, I chose to learn more about high-performance computing, so I decided to rewrite my raytracer project in a language I wanted to become more familiar with.

In its first week of life, this project has already proven much faster than my old raytracer. It is capable of rendering techniques like diffuse and specular lighting (Blinn-Phong), sun lights, point lights, reflections, textures (solid, checkerboard, and image-based), UV projections, meshes imported from OBJ files, skyboxes (solid, normal, cubemap), and a number of primitives. While the goal of this project is not to study ever more complicated rendering topics, I am still interested in researching techniques like refraction, normal maps, ambient occlusion, and possibly more advanced topics like global illumination or even caustics.
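As a rough illustration of the Blinn-Phong model mentioned above, here is a minimal Rust sketch of the diffuse and specular terms for a single light. The `Vec3` type and function names are hypothetical, not the raytracer's actual API.

```rust
// A minimal sketch of Blinn-Phong shading for one light. All names are
// illustrative; the project's own vector types will differ.

#[derive(Clone, Copy)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
    fn add(self, o: Vec3) -> Vec3 { Vec3 { x: self.x + o.x, y: self.y + o.y, z: self.z + o.z } }
    fn norm(self) -> Vec3 {
        let l = self.dot(self).sqrt();
        Vec3 { x: self.x / l, y: self.y / l, z: self.z / l }
    }
}

/// Returns (diffuse, specular) intensities for a surface normal, a direction
/// toward the light, and a direction toward the viewer.
fn blinn_phong(normal: Vec3, light_dir: Vec3, view_dir: Vec3, shininess: f64) -> (f64, f64) {
    let n = normal.norm();
    let l = light_dir.norm();
    let v = view_dir.norm();
    // Lambertian diffuse term: cosine of the angle between normal and light.
    let diffuse = n.dot(l).max(0.0);
    // Blinn's half-vector replaces the reflected ray of classic Phong,
    // which is cheaper and behaves better at grazing angles.
    let h = l.add(v).norm();
    let specular = n.dot(h).max(0.0).powf(shininess);
    (diffuse, specular)
}
```

The key design point is the half-vector: instead of reflecting the light direction about the normal, Blinn-Phong compares the normal against the normalized sum of the light and view directions.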

What's most important is that this project is fast. As a related effort, I've implemented a BVH construction technique (borrowed from sbvh-rs) to vastly improve the performance of casting rays against meshes of arbitrary triangle count and density. The raytracer can render meshes with triangle counts in the hundreds of thousands in fractions of a second.
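The reason a BVH helps so much is that each node's axis-aligned bounding box can reject a ray before any of the triangles beneath it are tested. Below is a sketch of the standard "slab test" used during traversal, with illustrative names; the project builds its hierarchy with sbvh-rs, whose actual types differ.

```rust
// Slab test: intersect the ray against each axis-aligned pair of planes
// ("slabs") of the box and keep shrinking the overlap interval. If the
// interval ever empties, the ray misses the box and the whole subtree
// can be skipped.

struct Aabb { min: [f64; 3], max: [f64; 3] }

/// Returns true if a ray (origin plus precomputed 1/direction per axis)
/// hits the box within the interval [t_min, t_max].
fn hit_aabb(aabb: &Aabb, origin: [f64; 3], inv_dir: [f64; 3],
            mut t_min: f64, mut t_max: f64) -> bool {
    for axis in 0..3 {
        let t0 = (aabb.min[axis] - origin[axis]) * inv_dir[axis];
        let t1 = (aabb.max[axis] - origin[axis]) * inv_dir[axis];
        let (near, far) = if t0 < t1 { (t0, t1) } else { (t1, t0) };
        t_min = t_min.max(near);
        t_max = t_max.min(far);
        if t_max < t_min {
            return false; // The slabs no longer overlap: the ray misses.
        }
    }
    true
}
```

Precomputing `inv_dir` once per ray turns the per-node cost into a handful of multiplies and comparisons, which is what makes hundreds of thousands of triangles tractable.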

Scene description language

Initially, scenes were described and encoded directly in the raytracer's entry point, main.rs. I needed a more convenient and developed way to describe scenes: one that was easier to write and quicker to swap out.

I first looked at writing an interface for Lua so that scenes could be described in Lua instead. I found a high-level wrapper over Lua's C API and started to implement this, but quickly ran into weird concurrency issues with it and decided it would be more interesting to try something else...

Instead, I decided I'd like to experiment with learning a little bit more about how to build my own interpreter to describe a scene. After reading a page or two from Crafting Interpreters and some general resources on parsing text into tokens and tokens into an AST, I started writing my own scene description language interpreter.
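To give a flavor of the first stage of such an interpreter, here is a toy Rust tokenizer for numbers, identifiers, and strings. The token kinds are illustrative only; the real SDL lexer handles much more (vectors, braces, comments, operators, and so on).

```rust
// A toy lexer in the style Crafting Interpreters starts with: walk the
// source character by character and group characters into tokens.

#[derive(Debug, PartialEq)]
enum Token {
    Number(f64),
    Ident(String),
    Str(String),
}

fn tokenize(src: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = src.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            chars.next();
        } else if c.is_ascii_digit() {
            // Consume a run of digits (and dots) into a number literal.
            let mut s = String::new();
            while let Some(&d) = chars.peek() {
                if d.is_ascii_digit() || d == '.' { s.push(d); chars.next(); } else { break; }
            }
            tokens.push(Token::Number(s.parse().unwrap()));
        } else if c == '"' {
            // Consume everything up to the closing quote as a string literal.
            chars.next();
            let mut s = String::new();
            while let Some(d) = chars.next() {
                if d == '"' { break; }
                s.push(d);
            }
            tokens.push(Token::Str(s));
        } else if c.is_alphabetic() || c == '_' {
            // Identifiers and keywords look the same at this stage.
            let mut s = String::new();
            while let Some(&d) = chars.peek() {
                if d.is_alphanumeric() || d == '_' { s.push(d); chars.next(); } else { break; }
            }
            tokens.push(Token::Ident(s));
        } else {
            chars.next(); // Punctuation is skipped in this simplified sketch.
        }
    }
    tokens
}
```

A parser then consumes this flat token stream and builds the AST that the renderer walks.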

Within about a day, it was capable of declaring and rendering most objects with literals like numbers (123), strings ("abc"), and vectors (<1, 2, 3>). From there, I wanted to start procedurally generating scenes with more common programming concepts, so I implemented for loops, variables, and a shunting-yard-like expression parser.
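To illustrate the shunting-yard idea, here is a toy infix evaluator over the four basic operators; the real expression parser also handles variables, function calls, and vector literals, so this is only a sketch under simplified assumptions (space-separated tokens, single-character operators).

```rust
// Shunting-yard in its evaluate-as-you-go form: numbers go on one stack,
// operators on another, and higher-precedence operators are applied
// before a lower-precedence one is pushed.

fn precedence(op: char) -> u8 {
    match op { '+' | '-' => 1, '*' | '/' => 2, _ => 0 }
}

fn apply(nums: &mut Vec<f64>, op: char) {
    let b = nums.pop().unwrap();
    let a = nums.pop().unwrap();
    nums.push(match op { '+' => a + b, '-' => a - b, '*' => a * b, _ => a / b });
}

/// Evaluates a space-separated infix expression like "1 + 2 * 3".
fn eval(expr: &str) -> f64 {
    let mut nums: Vec<f64> = Vec::new();
    let mut ops: Vec<char> = Vec::new();
    for tok in expr.split_whitespace() {
        if let Ok(n) = tok.parse::<f64>() {
            nums.push(n);
        } else {
            let op = tok.chars().next().unwrap();
            // Pop operators of equal or higher precedence first, which
            // makes the binary operators left-associative.
            while let Some(&top) = ops.last() {
                if precedence(top) >= precedence(op) {
                    apply(&mut nums, ops.pop().unwrap());
                } else {
                    break;
                }
            }
            ops.push(op);
        }
    }
    while let Some(op) = ops.pop() {
        apply(&mut nums, op);
    }
    nums.pop().unwrap()
}
```

The same precedence-driven popping is what lets an SDL expression like `cos(i / n * TAU) * dist` parse without parenthesizing every operation.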

Moreover, I added time parameterization to the SDL CLI, which injects a t variable into the global scope of an SDL file indicating the frame number the animation is on. The --sequence N flag renders the scene N times, setting t to each frame number from 0 up to N. From there, aggregating the results into a GIF is easy.

Below is an animation generated by the raytracer and its SDL, followed by the SDL code used to produce it.


# Some variables for quick customization. We insert them into our camera...
let vw = 500
let vh = 500
let fov = 40
camera { vw, vh, fov, yaw: 0.0001, pitch: 0.0002 }

# Add a basic skybox.
skybox {
    type: "cubemap",
    image: "assets/space.png"
}

# Here are some variables defined in the top-level scope.
let dist = 3
let radius = 0.5
let n = 24
let time_scale = PI / 32

# Add a basic sun light...
sun {
    vector: <-0.8, -1, -0.2>,
    intensity: 0.8
}

# A for loop, over the range [0..n)
for i in 0 to n {
    # Add a sphere in each iteration...
    sphere {
        # With a position following a circle, with radius `dist`.
        position: <
            cos(i / n * TAU) * dist * cos(t * time_scale),
            sin(i / n * TAU) * dist,
            cos(i / n * TAU) * dist * sin(t * time_scale) - 12
        >,

        # Set the radius to our variable `radius`. Leaving out a value (writing
        # `radius` instead of e.g. `radius: 1`) pulls the value from a variable of
        # the same name, in this case, one we set in the top-level scope.
        radius,

        material: {
            # Use the HSV color constructor to pick colors off of a rainbow.
            texture: solid(hsv(i / n * 360, 1, 1)),
            reflectiveness: 0.2
        }
    }
}

# Finally one shiny sphere in the middle, because why not!
sphere {
    position: <0, 0, -13>,
    radius: 2,
    material: {
        reflectiveness: 0.8
    }
}

Sample shots

All three of these sample shots were rendered on September 21st, 2021. Each is 1920x1080 and was generated in at most 0.296 seconds.

Sample render

Sample render

Sample render