Julia, Pyglet, and a GPU


A Julia fractal. Note the smooth, anti-aliased rendering. The color is picked as a function of the estimated distance to the Julia set boundary.


Pyglet is a very good Python module, wrapping OpenGL and providing access to mouse and keyboard events. The API is really easy to learn and well-documented, with that "clean & lean" feel to it. I started to use Pyglet for work, and I positively love it.

So, Pyglet allows you to summon the power of your graphics card from Python. And these days, even a puny low-end graphics card packs a lot of power: a GPU is a massively parallel computer. Although GPUs have been around for years now, I never bothered tinkering with them; lack of time, other interests. Now that I feel comfortable with OpenGL, Python and Pyglet, however, harnessing this GPU goodness requires very little effort. With years of delay, let's jump on that bandwagon already!

For a first try, I went for the done, done again, and overdone: fractals. More precisely, Julia fractals. You can see this fractal as a function iterated for each pixel of a picture, taking the coordinates of the pixel as input.

Z_{n+1} = Z_{n}^{2} + C

Z_n is a complex variable, with Z_0 being the coordinate of the pixel in the complex plane, and C a complex constant. This function is iterated until |Z_n| > 2 or a maximum iteration count is reached. You then color the pixel depending on the number of iterations performed. Consider visiting the Wikipedia page about Julia fractals; there is a lot to learn there.
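In plain Python, the escape-time iteration for a single pixel can be sketched like this (a minimal sketch; the function name and the iteration cap are my own choices, and the C value is the one used later in the demo):

```python
def julia_iterations(z, c, max_iter=256):
    """Iterate Z <- Z^2 + C until |Z| > 2 or the cap is reached.

    Returns the number of iterations performed, which drives the
    pixel color in the classical escape-time rendering.
    """
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

print(julia_iterations(2 + 2j, -0.70176 - 0.3842j))  # → 0 (escapes immediately)
print(julia_iterations(0j, 0j))                      # → 256 (never escapes)
```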

And you get a nice picture, a fractal. What happens if you play with the parameter C? The Julia fractal is fairly intensive to compute, and rendering it in real time requires some significant engineering... or a GPU, as the computations for each pixel are completely independent. So what I did is render the Julia fractal in a fragment shader, and control the C parameter with the mouse. The code for this turns out to be dead simple, assuming you are familiar with the OpenGL API.

import pyglet
from pyglet.gl import *
from shader import Shader

class JuliaWindow(pyglet.window.Window):
  def __init__(self):
    super(JuliaWindow, self).__init__(caption = 'julia', width = 512, height = 512)
    self.C = (-0.70176, -0.3842)
    shader_path = 'julia'
    self.shader = Shader(
      open('%s.v.glsl' % shader_path).read(),
      open('%s.f.glsl' % shader_path).read())

  def on_mouse_motion(self, x, y, dx, dy):
    self.C = (6. * ((float(x) / self.width) - .5), 6. * ((float(y) / self.height) - .5))

  def on_draw(self):
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(-1., 1., 1., -1., 0., 1.)

    self.shader.bind()
    self.shader.uniformf('C', *self.C)

    glBegin(GL_QUADS)
    glTexCoord2f(-2., -2.)
    glVertex2i(-1, -1)
    glTexCoord2f(2., -2.)
    glVertex2i(1, -1)
    glTexCoord2f(2., 2.)
    glVertex2i(1, 1)
    glTexCoord2f(-2., 2.)
    glVertex2i(-1, 1)
    glEnd()

    self.shader.unbind()


window = JuliaWindow()
pyglet.app.run()

This code creates a window and displays a quad polygon that fills it. The texture for the quad is a procedural one, computed by a shader. Here, I use a neat utility class that encapsulates the OpenGL shader API: shader.py, by Tristam MacDonald. The C parameter of the fractal is controlled by the mouse position in the window.

Of course, most of the magic happens inside the shaders. The vertex shader, julia.v.glsl, is not exactly rocket science.

#version 110

varying vec2 uv;

void main() {
  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
  uv = vec2(gl_MultiTexCoord0);
}

All the actual work is done in the fragment shader, julia.f.glsl. Basically, I iterate Z until it escapes or an iteration cap is reached, and use the result to pick a color. C can be controlled from outside the shader, as a uniform variable. For the sake of simplicity, I hard-coded the color scheme. One nice improvement would be to pick the color from a 1D texture, itself procedurally generated.

#version 110

varying vec2 uv;

int max_iter_count;
uniform vec2 C;

int iter_count;
vec2 Z, dZ;
vec2 sqr_Z;
float sqr_norm_Z, sqr_norm_dZ;

void main() {
  max_iter_count = 1024;

  Z = uv;
  dZ = vec2(1., 0.);

  for(iter_count = 0; iter_count < max_iter_count; ++iter_count) {
    sqr_Z = vec2(Z.x * Z.x, Z.y * Z.y);
    if (sqr_Z.x + sqr_Z.y > 1e7)
      break;

    dZ = vec2(
      2. * (Z.x * dZ.x - Z.y * dZ.y) + 1.,
      2. * (Z.x * dZ.y + Z.y * dZ.x));
    Z = vec2(
      sqr_Z.x - sqr_Z.y,
      2. * Z.x * Z.y) + C;
  }

  sqr_norm_Z = Z.x * Z.x + Z.y * Z.y;
  sqr_norm_dZ = dZ.x * dZ.x + dZ.y * dZ.y;
  vec4 color0 = vec4(1.0, 0.0, 0.0, 1.0);
  vec4 color1 = vec4(1.0, 1.0, 0.0, 1.0);

  gl_FragColor = mix(
    color0,
    color1,
    clamp(sqrt(sqrt(sqr_norm_Z / sqr_norm_dZ) * .5 * log(sqr_norm_Z)), 0., 1.));
}

Note that here I use a variant on the classical Julia rendering: I render a distance to the boundary of the fractal, a trick from the amazing Inigo Quilez. It gives me anti-aliasing at a much cheaper cost than super-sampling. A few minutes of coding, and I got that instant enlightenment, with a feeling of achievement on top of it. Those GPUs are indeed great little pieces of silicon!
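For reference, the distance-estimate trick can also be sketched on the CPU. The estimate is d ≈ |Z| ln|Z| / |dZ|, with dZ the running derivative iterated alongside Z, mirroring the fragment shader (a hedged sketch; the bailout radius and iteration cap are arbitrary choices):

```python
import math

def julia_distance(z, c, max_iter=1024):
    """Estimate the distance from z to the Julia set boundary.

    Iterates Z <- Z^2 + C together with dZ <- 2 * Z * dZ + 1,
    then applies d = |Z| * log(|Z|) / |dZ|.
    """
    dz = 1 + 0j
    for _ in range(max_iter):
        if abs(z) > 1e3:  # bailout radius
            break
        # Both updates use the pre-update value of z.
        z, dz = z * z + c, 2 * z * dz + 1
    r, dr = abs(z), abs(dz)
    if dr == 0.0 or r == 0.0:
        return 0.0
    return r * math.log(r) / dr
```

A point far outside the set gets a clearly positive distance; points near the boundary get values close to zero, which is what makes the cheap anti-aliasing possible.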

The full code is available for download: julia-explorer.tar.bz2

Multi-page PDF documents with Matplotlib

Matplotlib, what is that? It is a software package to make plots; yet another one... but a really good one. Since Matplotlib is a Python module, plots are described in Python rather than in a (usually clumsy) custom language. So a script using Matplotlib can harness the full power of Python and of nice modules like Numpy: say, reading compressed data files, doing some fancy statistics, etc. Moreover, Matplotlib is rather complete, providing a wide diversity of plots and render targets such as PNG, PDF and EPS. Finally, the quality of the plots is up to the standard of scientific publication. Originally a Gnuplot user, I have been with Matplotlib for a couple of years now, and I am very happy with it. Perfect? Well, the documentation for Matplotlib is a little bit scattered. I'm working on a tutorial, a Sisyphus-grade task... Rather than waiting for that looooong tutorial to be complete, here is a short recipe today!

The problem

The problem: you want to have several plots on a single figure, you want to fit them on an A4 page, and you want the result as a PDF document. The last constraint is welcome, as the recipe works for PDF output only. It works with Matplotlib 1.1.x, and it mostly works with Matplotlib 1.0.x. It's a two-step trick.

First step: multiple pages & PDF

Only one Matplotlib back-end seems to support multiple pages: the PDF back-end. We have to explicitly instantiate that back-end and tell it that we want to go multiple pages. Here's how it's done.

from matplotlib.backends.backend_pdf import PdfPages

pdf_pages = PdfPages('my-fancy-document.pdf')

Now, for each new page we want to create, we have to create a new Figure instance. It's simple: one page, one figure. We want an A4 format, so that we can eventually print our nice document, for instance. That's not too hard; we just have to specify the proper dimensions and resolution when creating the Figure. Once we are done with a page, we have to explicitly tell the back-end. In this example, the script creates 3 pages.

from matplotlib import pyplot as plot
from matplotlib.backends.backend_pdf import PdfPages

# The PDF document
pdf_pages = PdfPages('my-fancy-document.pdf')

for i in xrange(3):
  # Create a figure instance (ie. a new page)
  fig = plot.figure(figsize=(8.27, 11.69), dpi=100)

  # Plot whatever you wish to plot

  # Done with the page
  pdf_pages.savefig(fig)

# Write the PDF document to the disk
pdf_pages.close()

Second step: multiple plots on a figure

One A4 page is a lot of space: we can squeeze a lot of plots into one single figure. Recent Matplotlib versions have a nice feature to easily lay out plots on one figure: grid layout.

plot.subplot2grid((5, 2), (2, 0), rowspan=1, colspan=1)

The first parameter of subplot2grid is the size of the grid. The second parameter is the coordinates of one element of the grid; that element will be filled with a plot. Finally, we can specify the span of the plot, so that it spans several elements of the grid. This gives a great deal of control over the layout without too many headaches. There is a small gotcha, however: the coordinates are in (row, col) order. Here's an example of subplot2grid in action.

import numpy
from matplotlib import pyplot as plot

# Prepare the data
t = numpy.linspace(-numpy.pi, numpy.pi, 1024)
s = numpy.random.randn(2, 256)

# Do the plot
grid_size = (5, 2)

# Plot 1
plot.subplot2grid(grid_size, (0, 0), rowspan=2, colspan=2)
plot.plot(t, numpy.sinc(t), c= '#000000')

# Plot 2
plot.subplot2grid(grid_size, (2, 0), rowspan=3, colspan=1)
plot.scatter(s[0], s[1], c= '#000000')

# Plot 3
plot.subplot2grid(grid_size, (2, 1), rowspan=3, colspan=1)
plot.plot(numpy.sin(2 * t), numpy.cos(0.5 * t), c= '#000000')

# Automagically fits things together
plot.tight_layout()

# Done !
plot.show()

Grid layout in action

Note the tight_layout call, which packs all the plots within a figure automatically, and does a nice job of it. But it is in Matplotlib 1.1.x only... With Matplotlib 1.0.x, I don't know yet how to do it. Why bother? Because, say, at work, you might have to deal with Linux distributions providing rather outdated versions of software in their official package repositories, with even the unofficial ones lagging. Yes, CentOS, I'm looking at you!

Putting it together

Now, we can combine subplot2grid with the multiple-page trick, and get multi-page PDF documents which look good.

import numpy

from matplotlib import pyplot as plot
from matplotlib.backends.backend_pdf import PdfPages

# Generate the data
data = numpy.random.randn(7, 1024)

# The PDF document
pdf_pages = PdfPages('histograms.pdf')

# Generate the pages
nb_plots = data.shape[0]
nb_plots_per_page = 5
nb_pages = int(numpy.ceil(nb_plots / float(nb_plots_per_page)))
grid_size = (nb_plots_per_page, 1)

for i, samples in enumerate(data):
  # Create a figure instance (ie. a new page) if needed
  if i % nb_plots_per_page == 0:
    fig = plot.figure(figsize=(8.27, 11.69), dpi=100)

  # Plot stuffs !
  plot.subplot2grid(grid_size, (i % nb_plots_per_page, 0))
  plot.hist(samples, 32, normed=1, facecolor='#808080', alpha=0.75)

  # Close the page if needed
  if (i + 1) % nb_plots_per_page == 0 or (i + 1) == nb_plots:
    plot.tight_layout()
    pdf_pages.savefig(fig)

# Write the PDF document to the disk
pdf_pages.close()

A preview of the PDF document you should get from the example. Do not mind the ugly anti-aliasing; I fumbled with the PDF to PNG conversion.

As you can see, a little bit of arithmetic is used to get the proper number of pages and to create each one at the right moment. It's a minimal example, but by now you have the recipe.

Generating a pseudo-random 2d noise texture

Let's imagine an infinite 2d grid (or, more realistically, a very large grid, larger than what I can reasonably keep in memory), and to each node of that grid, let's associate an integer value. Every time we look at one particular node of the grid, we want the same value associated to that node. The values of the grid's nodes should follow a random distribution; at least, it should look random to the naked eye: pseudo-random. Let's call this function GridNoise.

When working with procedural content generation, say, procedural textures, GridNoise is an essential building block. The integer associated to one grid node can be used to generate information; then, between the nodes of the grid, one can interpolate that information. This is how value noise and Perlin noise work, for instance. Indeed, pretty much all procedural textures use GridNoise at their core. However, I have never seen a clear, consistent method to build something like GridNoise. Since GridNoise is a low-level, essential building block, one might want something without obvious defects. I introduce here my recipe to build such a GridNoise function.
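To illustrate the interpolation idea, here is a hedged sketch of 2d value noise built on top of a GridNoise-like function. The grid_noise stand-in below is a throwaway placeholder (two arbitrary multipliers and a xor), not the hash construction discussed in this article:

```python
def grid_noise(ix, iy):
    """Placeholder deterministic node value in [0, 1); any GridNoise fits here."""
    h = (ix * 73856093) ^ (iy * 19349663)
    return (h % 65536) / 65536.0

def value_noise(x, y):
    """Bilinear interpolation of the four surrounding grid node values."""
    ix, iy = int(x), int(y)
    fx, fy = x - ix, y - iy
    v00 = grid_noise(ix, iy)
    v10 = grid_noise(ix + 1, iy)
    v01 = grid_noise(ix, iy + 1)
    v11 = grid_noise(ix + 1, iy + 1)
    top = v00 + fx * (v10 - v00)
    bottom = v01 + fx * (v11 - v01)
    return top + fy * (bottom - top)
```

At a grid node, value_noise returns exactly the node's value; between nodes it blends the four neighbors, which is the whole point of having a reliable GridNoise underneath.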

The idea is to use the coordinates of a node to build a key, and to hash that key with a hash function to get our integer. A good hash function will give us a pseudo-random distribution of integer values. Or will it? Let's take a look. I use Pearson hashing: it's a very simple hash function, fast to compute, it has good properties, and it's dead simple to code. Pearson hashing works on arrays of 8-bit integers. Here is my implementation of Pearson hashing in Python, as a functor.

import random

class PearsonHash:
  def __init__(self, rnd=random.Random()):
    self.permut = list(range(256))
    rnd.shuffle(self.permut)
    self.seed = rnd.choice(self.permut)

  def __call__(self, array):
    h = self.seed
    for value in array:
      h = self.permut[h ^ value]
    return h

As you can see, because Pearson hashing relies on a permutation table, it's not one single function, but a whole family of functions! So you can define plenty of variants of it; you are not stuck with one single hash function. That's nice: we will be able to define different varieties of GridNoise.

Now, how do we build a key from 2d integer coordinates? Let's assume 16-bit coordinates (thus defining a 65536x65536 noise texture) for this example; it would be the same with 32 bits. We can take the 16 bits of the X coordinate and the 16 bits of the Y coordinate, and put them one after the other into a 32-bit value. A 32-bit value is the same as an array of four 8-bit values. If we have X=1011 and Y=0100, then the resulting 1d coordinate would be Z=10110100.
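This concatenation scheme can be written as a small helper producing the 4-byte key, low byte first, X then Y (the function name is mine):

```python
def concat_key(x, y):
    """Concatenate two 16-bit coordinates into a 4-byte key.

    Bytes 0-1 hold X (low byte first), bytes 2-3 hold Y.
    """
    return [x & 255, (x >> 8) & 255, y & 255, (y >> 8) & 255]

# Byte layout check: X then Y, each low byte first.
assert concat_key(0x1234, 0xABCD) == [0x34, 0x12, 0xCD, 0xAB]
```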

Let's try this! Here is an implementation in Python, assuming you have a Picture object.

import array

phash = PearsonHash()
pic = Picture(512, 512)

tmp = array.array('B', [0] * 4)
for i in xrange(pic.h):
  for k in xrange(2):
    tmp[k+2] = ((i >> (8 * k)) & 255)
  for j in xrange(pic.w):
    for k in xrange(2):
      tmp[k] = (j >> (8 * k)) & 255
    pic.pixel(j, i, phash(tmp))

A 512x512 texture generated with the method described above. Can you see the ugly artifact?

Hum... It looks like white noise, but we can see some really nasty artifacts. You don't see them? Let me show you!

A 512x512 texture generated with the method described above. The two bands are emphasized with colors.

We can see vertical bands, 256 pixels wide. Each band looks noisy, but not so random: the bands have a distinct character. Not so random indeed... We can confirm this visual test with something a bit more rigorous. If we try to compress this picture, throwing lots of CPU time at it, we will find out if there is some obvious repeating pattern. I used optipng for this: it reduced the picture data size by 90%... So our random data is not very random (random data is data whose most compact description is itself). We did something wrong, but what? Pearson hashing is innocent; it's a known, well-tested hash function. So the problem is with the way we built our key from the node's coordinates. If we reverse X and Y in the 4-byte array, it just turns the vertical bands into horizontal bands. That picture does not compress very well as-is, but rotate it by 90 degrees, and suddenly it becomes very easily compressible, again with around a 90% size reduction with optipng.

So we have to build our key completely differently. Struck by a Eureka instant, I got this intuition: use the Z curve! The Z what?!

The Z curve, from order 1 (2x2 grid) to order 4 (16x16 grid)

The Z-curve is a space-filling curve, defining a map between 2d coordinates and 1d coordinates. The 2d coordinates are n-bit numbers; the 1d coordinate is a 2n-bit number, built by interleaving the bits of the 2d coordinates. If we have X=1011 and Y=0100, then the resulting 1d coordinate is Z=01100101 (X in the even bits, Y in the odd bits). The curve shown above is built by linking the nodes by increasing Z. Here is a naive implementation of the 2d to 1d conversion for the Z curve, assuming 16 bits per coordinate.

def mix(x, y):
  i = 0
  for j in xrange(16):
    i |= (x & 1) << (2 * j)
    x >>= 1
    i |= (y & 1) << (2 * j + 1)
    y >>= 1
  return i

Because of its shape, I believed that the Z curve would scramble those nasty patterns we got, giving something that looks and feels much more like actual noise. The code to generate the picture becomes

import array

phash = PearsonHash()
pic = Picture(512, 512)

tmp = array.array('B', [0] * 4)
for i in xrange(pic.h):
  for j in xrange(pic.w):
    z = mix(j, i)
    for k in xrange(4):
      tmp[k] = (z >> (8 * k)) & 255
    pic.pixel(j, i, phash(tmp))

And here is a sample of what we get from this

A 512x512 texture generated by using the Z-order curve. No visible artifacts, looks like proper noise.

No visible artifacts! Using a clever & expensive compressor does not reduce the size at all! We did it: we have a very good GridNoise function. For demonstration purposes, I used 16-bit coordinates for the grid nodes, so this GridNoise is a procedural 65536x65536 noise texture. If we use 32-bit, or even 64-bit coordinates, then we get a Universe-scale noise texture to work with. I might have overlooked some awful flaw; I would be glad to know if anybody notices one.

The approach generalizes well to N dimensions: the Z-order curve is defined in N dimensions, just interleave the bits as we did for 2 dimensions. You might have a look here to see how to interleave bits efficiently. The Z curve can be replaced by any decent space-filling curve; however, the Z curve is maybe the cheapest one to compute, while being close to the best one I know of, the Hilbert curve.
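For the record, the bit interleaving can be done without a per-bit loop, using the classical shift-and-mask "magic numbers" trick (a standard Morton-code recipe; the function names are mine):

```python
def part1by1(n):
    """Spread the 16 low bits of n over the even bit positions of a 32-bit value."""
    n &= 0x0000FFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def mix_fast(x, y):
    """Morton interleave: x in the even bits, y in the odd bits."""
    return part1by1(x) | (part1by1(y) << 1)

# Same result as the naive per-bit loop.
assert mix_fast(0b1011, 0b0100) == 0b01100101
```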

Spreading points on a disc and on a sphere

When working with computer graphics, machine vision or simulations of various kinds, two problems (among many others) which keep popping up are

  • How to have N points approximately evenly distributed over a disc
  • How to have N points approximately evenly distributed over a sphere

One way to solve those two problems is to rely on a particle system: each point is a particle, and particles repulse each other with a \frac{1}{r^2} force. Put the particles on the disc or the sphere surface, at random locations. Shake the particles vigorously, add a small drag force, let the particles reach a steady state, shake again... rinse, repeat. It is not as easy as it sounds. How to set up the forces? How to handle the boundary of the disc? Which integrator to use to compute the particles' motion? And for more than a few hundred particles, it will be terribly slow.

256 points with Vogel's method

There are much simpler and leaner algorithms to evenly distribute N points over a disc or a sphere, if one can live with a good approximation. The main idea fits in one word: spiral! Let's start with the disc. Imagine a spiral of dots starting from the center of the disc. In polar coordinates, the N points are produced by the sequence

\rho_i = \theta i

\tau_i = \sqrt{\frac{i}{N}}

\rho_i and \tau_i are respectively the angle in radians and the radius of the i-th point. Why \tau_i = \sqrt{\frac{i}{N}}? Let's try to cut a unit disc into one disc and one ring of equal areas, by cutting the unit disc at radius r_1. The small disc and the ring are of equal areas, thus \pi r_{1}^2 = \pi - \pi r_{1}^2, and r_1 = \sqrt{\frac{1}{2}}. Now, let's try to cut the unit disc into one disc and 2 concentric rings, all of equal areas. A slightly more complex calculation (a system of 2 linear equations) tells us that we should cut the unit disc at radii r_1 = \sqrt{\frac{1}{3}} and r_2 = \sqrt{\frac{2}{3}}. The calculation is a bit more complex when cutting the unit disc into N rings of equal area, but you can guess the answer now: r_1 = \sqrt{\frac{1}{N}}, r_2 = \sqrt{\frac{2}{N}}, ..., r_i = \sqrt{\frac{i}{N}}.

Equal thickness versus equal areas concentric rings
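A quick numerical check of the equal-area claim: with the cut radii r_i = \sqrt{i/N}, each of the N rings (counting the inner disc as the first "ring") has area \pi/N.

```python
import math

N = 5
radii = [math.sqrt(i / float(N)) for i in range(N + 1)]  # r_0 = 0 ... r_N = 1
# Area of each ring: pi * (r_{i+1}^2 - r_i^2)
areas = [math.pi * (radii[i + 1] ** 2 - radii[i] ** 2) for i in range(N)]
print(areas)  # N values, all equal to pi / N
```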

What about the ideal \theta? Brace yourself... It is the golden angle, \pi (3 - \sqrt{5})! That is roughly 137.508°, or about 2.39996 radians. The golden angle is the "most irrational" angle, defined as 2 \pi (1 - \frac{1}{\phi}) with \phi being the golden ratio. If \theta were a rational fraction of a full turn, we would obtain clusters of points aligned with the center of the disc; thus \theta has to be an irrational fraction of a turn. As it turns out, the golden ratio is the irrational number hardest to approximate with fractions: written as a continued fraction, it has the slowest convergence of all the irrational numbers. The golden angle therefore gives the best possible spread for the \rho_i angles. This method to spread points over a disc is called Vogel's method.

Effect of the angle step parameter, on 256 points. The two leftmost point layouts are for rational values of the angle step. The top center one is for the square root of 2; the 3 others are for other irrational values.

Vogel's method is dead simple to implement. In pure, barebones Python, it can be done like this

import math

n = 256

golden_angle = math.pi * (3 - math.sqrt(5))

points = []
for i in xrange(n):
  theta = i * golden_angle
  r = math.sqrt(i) / math.sqrt(n)
  points.append((r * math.cos(theta), r * math.sin(theta)))

Using numpy, one can use vectorized operations like this

import numpy

n = 256

radius = numpy.sqrt(numpy.arange(n) / float(n))

golden_angle = numpy.pi * (3 - numpy.sqrt(5))
theta = golden_angle * numpy.arange(n)

points = numpy.zeros((n, 2))
points[:,0] = numpy.cos(theta)
points[:,1] = numpy.sin(theta)
points *= radius.reshape((n, 1))

What about the sphere? We can reuse the golden angle spiral trick. In cylindrical coordinates, we generate N points like this

\rho_i = \theta i

\tau_i = \sqrt{1 - z_{i}^2}

z_i = \left(1 - \frac{1}{N} \right) \left(1 - \frac{2i}{N - 1} \right)

And voila, an extremely cheap method to spread points evenly over a sphere's surface! The code for this method is very close to the one for Vogel's method.

import numpy

n = 256

golden_angle = numpy.pi * (3 - numpy.sqrt(5))
theta = golden_angle * numpy.arange(n)
z = numpy.linspace(1 - 1.0 / n, 1.0 / n - 1, n)
radius = numpy.sqrt(1 - z * z)

points = numpy.zeros((n, 3))
points[:,0] = radius * numpy.cos(theta)
points[:,1] = radius * numpy.sin(theta)
points[:,2] = z

Note that you can also use this method to make a sphere mesh. The sphere will look good even with few polygons, but the indexing and the computation of the normals won't be especially fast. For this specific problem of making a sphere mesh, Iñigo Quilez has a very nice, efficient method introduced here

256 points on a sphere