Pyrender Documentation¶
Pyrender is a pure Python (2.7, 3.4, 3.5, 3.6) library for physically-based rendering and visualization. It is designed to meet the glTF 2.0 specification from Khronos.
Pyrender is lightweight, easy to install, and simple to use. It comes packaged with both an intuitive scene viewer and a headache-free offscreen renderer with support for GPU-accelerated rendering on headless servers, which makes it perfect for machine learning applications. Check out the User Guide for a full tutorial, or fork me on Github.


Installation Guide¶
Python Installation¶
This package is available via pip:
pip install pyrender
If you’re on MacOS, you’ll need to pre-install my fork of pyglet, as the version on PyPI hasn’t yet included my change that enables OpenGL contexts on MacOS.
git clone https://github.com/mmatl/pyglet.git
cd pyglet
pip install .
Getting Pyrender Working with OSMesa¶
If you want to render scenes offscreen but don’t want to have to install a display manager or deal with the pains of trying to get OpenGL to work over SSH, you have two options.
The first (and preferred) option is using EGL, which enables you to perform GPU-accelerated rendering on headless servers. However, you’ll need EGL 1.5 to get modern OpenGL contexts. This comes packaged with NVIDIA’s current drivers, but if you are having issues getting EGL to work with your hardware, you can try using OSMesa, a software-based offscreen renderer that is included with any Mesa install.
If you want to use OSMesa with pyrender, you’ll have to perform two additional installation steps:
- Install OSMesa (see Installing OSMesa).
- Install a compatible fork of PyOpenGL (see Installing a Compatible Fork of PyOpenGL).
Then, read the offscreen rendering tutorial. See Offscreen Rendering.
Installing OSMesa¶
As a first step, you’ll need to rebuild and re-install Mesa with support for fast offscreen rendering and OpenGL 3+ contexts. I’d recommend installing from source, but you can also try my .deb for Ubuntu 16.04 and up.
Installing from a Debian Package¶
If you’re running Ubuntu 16.04 or newer, you should be able to install the required version of Mesa from my .deb file.
sudo apt update
sudo wget https://github.com/mmatl/travis_debs/raw/master/xenial/mesa_18.3.3-0.deb
sudo dpkg -i ./mesa_18.3.3-0.deb || true
sudo apt install -f
If this doesn’t work, try building from source.
Building From Source¶
First, install build dependencies via apt or your system’s package manager.
sudo apt-get install llvm-6.0 freeglut3 freeglut3-dev
Then, download the current release of Mesa from here. Unpack the source and go to the source folder:
tar xfv mesa-18.3.3.tar.gz
cd mesa-18.3.3
Replace PREFIX with the path you want to install Mesa at. If you’re not worried about overwriting your default Mesa install, a good place is /usr/local.
Now, configure the installation by running the following command:
./configure --prefix=PREFIX \
--enable-opengl --disable-gles1 --disable-gles2 \
--disable-va --disable-xvmc --disable-vdpau \
--enable-shared-glapi \
--disable-texture-float \
--enable-gallium-llvm --enable-llvm-shared-libs \
--with-gallium-drivers=swrast,swr \
--disable-dri --with-dri-drivers= \
--disable-egl --with-egl-platforms= --disable-gbm \
--disable-glx \
--disable-osmesa --enable-gallium-osmesa \
ac_cv_path_LLVM_CONFIG=llvm-config-6.0
Finally, build and install Mesa.
make -j8
make install
Then, if you didn’t install Mesa in the system path, add the following lines to your ~/.bashrc file after changing MESA_HOME to your mesa installation path (i.e. what you used as PREFIX during the configure command).
MESA_HOME=/path/to/your/mesa/installation
export LIBRARY_PATH=$LIBRARY_PATH:$MESA_HOME/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$MESA_HOME/lib
export C_INCLUDE_PATH=$C_INCLUDE_PATH:$MESA_HOME/include/
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$MESA_HOME/include/
Installing a Compatible Fork of PyOpenGL¶
Next, install and use my fork of PyOpenGL
.
This fork enables getting modern OpenGL contexts with OSMesa.
My patch has been included in PyOpenGL
, but it has not yet been released
on PyPI.
git clone git@github.com:mmatl/pyopengl.git
pip install ./pyopengl
Building Documentation¶
The online documentation for pyrender is automatically built by Read The Docs. Building pyrender’s documentation locally requires a few extra dependencies – specifically, sphinx and a few plugins. To install the dependencies required, simply change directories into the pyrender source and run
$ pip install .[docs]
Then, go to the docs directory and run make with the appropriate target. For example,
$ cd docs/
$ make html
will generate a set of web pages. Any documentation files generated in this manner can be found in docs/build.
User Guide¶
This section contains guides on how to use Pyrender to quickly visualize your 3D data, including a quickstart guide and more detailed descriptions of each part of the rendering pipeline.
Quickstart¶
Minimal Example for 3D Viewer¶
Here is a minimal example of loading and viewing a triangular mesh model in pyrender.
>>> import trimesh
>>> import pyrender
>>> fuze_trimesh = trimesh.load('examples/models/fuze.obj')
>>> mesh = pyrender.Mesh.from_trimesh(fuze_trimesh)
>>> scene = pyrender.Scene()
>>> scene.add(mesh)
>>> pyrender.Viewer(scene, use_raymond_lighting=True)

Minimal Example for Offscreen Rendering¶
Note
If you’re using a headless server, make sure that you followed the guide for installing OSMesa. See Getting Pyrender Working with OSMesa.
Here is a minimal example of rendering a mesh model offscreen in pyrender. The only additional necessities are that you need to add lighting and a camera.
>>> import numpy as np
>>> import trimesh
>>> import pyrender
>>> import matplotlib.pyplot as plt
>>> fuze_trimesh = trimesh.load('examples/models/fuze.obj')
>>> mesh = pyrender.Mesh.from_trimesh(fuze_trimesh)
>>> scene = pyrender.Scene()
>>> scene.add(mesh)
>>> camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0, aspectRatio=1.0)
>>> s = np.sqrt(2)/2
>>> camera_pose = np.array([
... [0.0, -s, s, 0.3],
... [1.0, 0.0, 0.0, 0.0],
... [0.0, s, s, 0.35],
... [0.0, 0.0, 0.0, 1.0],
... ])
>>> scene.add(camera, pose=camera_pose)
>>> light = pyrender.SpotLight(color=np.ones(3), intensity=3.0,
... innerConeAngle=np.pi/16.0,
...                            outerConeAngle=np.pi/6.0)
>>> scene.add(light, pose=camera_pose)
>>> r = pyrender.OffscreenRenderer(400, 400)
>>> color, depth = r.render(scene)
>>> plt.figure()
>>> plt.subplot(1,2,1)
>>> plt.axis('off')
>>> plt.imshow(color)
>>> plt.subplot(1,2,2)
>>> plt.axis('off')
>>> plt.imshow(depth, cmap=plt.cm.gray_r)
>>> plt.show()


Loading and Configuring Models¶
The first step to any rendering application is loading your models. Pyrender implements the glTF 2.0 specification, which means that all models are composed of a hierarchy of objects.
At the top level, we have a Mesh. The Mesh is basically a wrapper of any number of Primitive types, which actually represent geometry that can be drawn to the screen. Primitives are composed of a variety of parameters, including vertex positions, vertex normals, color and texture information, and triangle indices if smooth rendering is desired. They can implement point clouds, triangular meshes, or lines depending on how you configure their data and set their Primitive.mode parameter.
Although you can create primitives yourself if you want to, it’s probably easier to just use the utility functions provided in the Mesh class.
Creating Triangular Meshes¶
Simple Construction¶
Pyrender allows you to create a Mesh containing a triangular mesh model directly from a Trimesh object using the Mesh.from_trimesh() static method.
>>> import trimesh
>>> import pyrender
>>> import numpy as np
>>> tm = trimesh.load('examples/models/fuze.obj')
>>> m = pyrender.Mesh.from_trimesh(tm)
>>> m.primitives
[<pyrender.primitive.Primitive at 0x7fbb0af60e50>]
You can also create a single Mesh from a list of Trimesh objects:
>>> tms = [trimesh.creation.icosahedron(), trimesh.creation.cylinder()]
>>> m = pyrender.Mesh.from_trimesh(tms)
>>> m.primitives
[<pyrender.primitive.Primitive at 0x7fbb0c2b74d0>,
<pyrender.primitive.Primitive at 0x7fbb0c2b7550>]
Vertex Smoothing¶
The Mesh.from_trimesh() method has a few additional optional parameters. If you want to render the mesh without interpolating face normals, which can be useful for meshes that are supposed to be angular (e.g. a cube), you can specify smooth=False.
>>> m = pyrender.Mesh.from_trimesh(tm, smooth=False)
Per-Face or Per-Vertex Coloration¶
If you have an untextured trimesh, you can color it in with per-face or per-vertex colors:
>>> tm.visual.vertex_colors = np.random.uniform(size=tm.vertices.shape)
>>> tm.visual.face_colors = np.random.uniform(size=tm.faces.shape)
>>> m = pyrender.Mesh.from_trimesh(tm)
Instancing¶
If you want to render many copies of the same mesh at different poses, you can statically create a vast array of them in an efficient manner. Simply specify the poses parameter to be a list of N 4x4 homogenous transformation matrices that position the meshes relative to their common base frame:
>>> tfs = np.tile(np.eye(4), (3,1,1))
>>> tfs[1,:3,3] = [0.1, 0.0, 0.0]
>>> tfs[2,:3,3] = [0.2, 0.0, 0.0]
>>> tfs
array([[[1. , 0. , 0. , 0. ],
[0. , 1. , 0. , 0. ],
[0. , 0. , 1. , 0. ],
[0. , 0. , 0. , 1. ]],
[[1. , 0. , 0. , 0.1],
[0. , 1. , 0. , 0. ],
[0. , 0. , 1. , 0. ],
[0. , 0. , 0. , 1. ]],
[[1. , 0. , 0. , 0.2],
[0. , 1. , 0. , 0. ],
[0. , 0. , 1. , 0. ],
[0. , 0. , 0. , 1. ]]])
>>> m = pyrender.Mesh.from_trimesh(tm, poses=tfs)
Custom Materials¶
You can also specify a custom material for any triangular mesh you create in the material parameter of Mesh.from_trimesh(). The main material supported by Pyrender is the MetallicRoughnessMaterial. The metallic-roughness model supports rendering highly-realistic objects across a wide gamut of materials. For more information, see the documentation of the MetallicRoughnessMaterial constructor or look at the Khronos documentation.
Creating Point Clouds¶
Point Sprites¶
Pyrender also allows you to create a Mesh containing a point cloud directly from numpy.ndarray instances using the Mesh.from_points() static method. Simply provide a list of points and optional per-point colors and normals.
>>> pts = tm.vertices.copy()
>>> colors = np.random.uniform(size=pts.shape)
>>> m = pyrender.Mesh.from_points(pts, colors=colors)
Point clouds created in this way will be rendered as square point sprites.

Point Spheres¶
If you have a monochromatic point cloud and would like to render it with spheres, you can render it by instancing a spherical trimesh:
>>> sm = trimesh.creation.uv_sphere(radius=0.1)
>>> sm.visual.vertex_colors = [1.0, 0.0, 0.0]
>>> tfs = np.tile(np.eye(4), (len(pts), 1, 1))
>>> tfs[:,:3,3] = pts
>>> m = pyrender.Mesh.from_trimesh(sm, poses=tfs)

Creating Lights¶
Pyrender supports three types of punctual light:
- PointLight: Point-based light sources, such as light bulbs.
- SpotLight: A conical light source, like a flashlight.
- DirectionalLight: A general light that does not attenuate with distance.
Creating lights is easy – just specify their basic attributes:
>>> pl = pyrender.PointLight(color=[1.0, 1.0, 1.0], intensity=2.0)
>>> sl = pyrender.SpotLight(color=[1.0, 1.0, 1.0], intensity=2.0,
... innerConeAngle=0.05, outerConeAngle=0.5)
>>> dl = pyrender.DirectionalLight(color=[1.0, 1.0, 1.0], intensity=2.0)
For more information about how these lighting models are implemented, see their class documentation.
Creating Cameras¶
Pyrender supports three camera types: PerspectiveCamera and IntrinsicsCamera, which render scenes as a human would see them, and OrthographicCamera, which preserves distances between points.
Creating cameras is easy – just specify their basic attributes:
>>> pc = pyrender.PerspectiveCamera(yfov=np.pi / 3.0, aspectRatio=1.414)
>>> oc = pyrender.OrthographicCamera(xmag=1.0, ymag=1.0)
For more information, see the Khronos group’s glTF camera documentation.
When you add cameras to the scene, make sure that you’re using OpenGL camera coordinates to specify their pose. See the illustration below for details. Basically, the camera z-axis points away from the scene, the x-axis points right in image space, and the y-axis points up in image space.
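If building such poses by hand is error-prone, a small look-at helper can construct them. This is not part of pyrender’s API, just a plain numpy sketch that follows the convention above:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 camera pose in OpenGL convention: the camera's
    -z axis points from eye toward target."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    z = eye - target            # camera z-axis points away from the scene
    z /= np.linalg.norm(z)
    x = np.cross(up, z)         # camera x-axis points right in image space
    x /= np.linalg.norm(x)
    y = np.cross(z, x)          # camera y-axis points up in image space
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x, y, z, eye
    return pose

# A camera two units in front of the origin, looking back at it
pose = look_at(eye=[0.0, 0.0, 2.0], target=[0.0, 0.0, 0.0])
```

The returned matrix can be passed directly as the pose argument when adding the camera to a scene.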

Creating Scenes¶
Before you render anything, you need to put all of your lights, cameras, and meshes into a scene. The Scene object keeps track of the relative poses of these primitives by inserting them into Node objects and keeping them in a directed acyclic graph.
Adding Objects¶
To create a Scene, simply call the constructor. You can optionally specify an ambient light color and a background color:
>>> scene = pyrender.Scene(ambient_light=[0.02, 0.02, 0.02],
... bg_color=[1.0, 1.0, 1.0])
You can add objects to a scene by first creating a Node object and adding the object and its pose to the Node. Poses are specified as 4x4 homogenous transformation matrices that are stored in the node’s Node.matrix attribute. Note that the Node constructor requires you to specify whether you’re adding a mesh, light, or camera.
>>> mesh = pyrender.Mesh.from_trimesh(tm)
>>> light = pyrender.PointLight(color=[1.0, 1.0, 1.0], intensity=2.0)
>>> cam = pyrender.PerspectiveCamera(yfov=np.pi / 3.0, aspectRatio=1.414)
>>> nm = pyrender.Node(mesh=mesh, matrix=np.eye(4))
>>> nl = pyrender.Node(light=light, matrix=np.eye(4))
>>> nc = pyrender.Node(camera=cam, matrix=np.eye(4))
>>> scene.add_node(nm)
>>> scene.add_node(nl)
>>> scene.add_node(nc)
You can also add objects directly to a scene with the Scene.add() function, which takes care of creating a Node for you.
>>> scene.add(mesh, pose=np.eye(4))
>>> scene.add(light, pose=np.eye(4))
>>> scene.add(cam, pose=np.eye(4))
Nodes can be hierarchical, in which case the node’s Node.matrix specifies that node’s pose relative to its parent frame. You can add nodes to a scene hierarchically by specifying a parent node in your calls to Scene.add() or Scene.add_node():
>>> scene.add_node(nl, parent_node=nc)
>>> scene.add(cam, parent_node=nm)
If you add multiple cameras to a scene, you can specify which one to render from by setting the Scene.main_camera_node attribute.
Updating Objects¶
You can update the poses of existing nodes with the Scene.set_pose() function. Simply call it with a Node that is already in the scene and the new pose of that node with respect to its parent as a 4x4 homogenous transformation matrix:
>>> scene.set_pose(nl, pose=np.eye(4))
If you want to get the local pose of a node, you can just access its Node.matrix attribute. However, if you want to get the pose of a node with respect to the world frame, you can call the Scene.get_pose() method.
>>> tf = scene.get_pose(nl)
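Conceptually, Scene.get_pose() composes the Node.matrix transforms from the scene root down to the node in question. A plain numpy sketch of the two-level case:

```python
import numpy as np

# The world pose of a node is the product of the Node.matrix
# transforms from the root down to it.
parent_matrix = np.eye(4)
parent_matrix[:3, 3] = [1.0, 0.0, 0.0]  # parent translated +1 in x

child_matrix = np.eye(4)
child_matrix[:3, 3] = [0.0, 2.0, 0.0]   # child translated +2 in y
                                        # relative to its parent

world = parent_matrix @ child_matrix    # what get_pose would return
```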
Removing Objects¶
Finally, you can remove a Node and all of its children from the scene with the Scene.remove_node() function:
>>> scene.remove_node(nl)
Offscreen Rendering¶
Note
If you’re using a headless server, you’ll need to use either EGL (for GPU-accelerated rendering) or OSMesa (for CPU-only software rendering). If you’re using OSMesa, be sure that you’ve installed it properly. See Getting Pyrender Working with OSMesa for details.
Choosing a Backend¶
Once you have a scene set up with its geometry, cameras, and lights, you can render it using the OffscreenRenderer. Pyrender supports three backends for offscreen rendering:
- Pyglet, the same engine that runs the viewer. This requires an active display manager, so you can’t run it on a headless server. This is the default option.
- OSMesa, a software renderer.
- EGL, which allows for GPU-accelerated rendering without a display manager.
If you want to use OSMesa or EGL, you need to set the PYOPENGL_PLATFORM
environment variable before importing pyrender or any other OpenGL library.
You can do this at the command line:
PYOPENGL_PLATFORM=osmesa python render.py
or at the top of your Python script:
# Top of main python script
import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'
The handle for EGL is egl, and the handle for OSMesa is osmesa.
Running the Renderer¶
Once you’ve set your environment variable appropriately, create your scene and then configure the OffscreenRenderer object with a window width, a window height, and a size for point-cloud points:
>>> r = pyrender.OffscreenRenderer(viewport_width=640,
... viewport_height=480,
... point_size=1.0)
Then, just call the OffscreenRenderer.render()
function:
>>> color, depth = r.render(scene)

This will return a (w,h,3) channel floating-point color image and a (w,h) floating-point depth image rendered from the scene’s main camera.
You can customize the rendering process by using flag options from
RenderFlags
and bitwise or-ing them together. For example,
the following code renders a color image with an alpha channel
and enables shadow mapping for all directional lights:
>>> flags = RenderFlags.RGBA | RenderFlags.SHADOWS_DIRECTIONAL
>>> color, depth = r.render(scene, flags=flags)
Once you’re done with the offscreen renderer, you need to close it before you can run a different renderer or open the viewer for the same scene:
>>> r.delete()
Google CoLab Examples¶
For a minimal working example of offscreen rendering using OSMesa, see the OSMesa Google CoLab notebook.
For a minimal working example of offscreen rendering using EGL, see the EGL Google CoLab notebook.
Live Scene Viewer¶
Standard Usage¶
In addition to the offscreen renderer, Pyrender comes with a live scene viewer.
In its standard invocation, calling the Viewer
’s constructor will
immediately pop a viewing window that you can navigate around in.
>>> pyrender.Viewer(scene)
By default, the viewer uses your scene’s lighting. If you’d like to start with some additional lighting that moves around with the camera, you can specify that with:
>>> pyrender.Viewer(scene, use_raymond_lighting=True)
For a full list of the many options that the Viewer supports, check out its documentation.

Running the Viewer in a Separate Thread¶
If you’d like to animate your models, you’ll want to run the viewer in a
separate thread so that you can update the scene while the viewer is running.
To do this, first pop the viewer in a separate thread by calling its constructor
with the run_in_thread
option set:
>>> v = pyrender.Viewer(scene, run_in_thread=True)
Then, you can manipulate the Scene
while the viewer is running to
animate things. However, be careful to acquire the viewer’s
Viewer.render_lock
before editing the scene to prevent data corruption:
>>> i = 0
>>> while True:
... pose = np.eye(4)
... pose[:3,3] = [i, 0, 0]
... v.render_lock.acquire()
... scene.set_pose(mesh_node, pose)
... v.render_lock.release()
... i += 0.01

You can wait on the viewer to be closed manually:
>>> while v.is_active:
... pass
Or you can close it from the main thread forcibly. Make sure to still loop and block for the viewer to actually exit before using the scene object again.
>>> v.close_external()
>>> while v.is_active:
... pass
Pyrender API Documentation¶
Constants¶
Classes¶
RenderFlags: Flags for rendering in the scene.
TextAlign: Text alignment options for captions.
GLTF: Options for GL objects.
Cameras¶
Classes¶
Camera([znear, zfar, name]): Abstract base class for all cameras.
PerspectiveCamera(yfov[, znear, zfar, …]): A perspective camera for perspective projection.
OrthographicCamera(xmag, ymag[, znear, …]): An orthographic camera for orthographic projection.
IntrinsicsCamera(fx, fy, cx, cy[, znear, …]): A perspective camera with custom intrinsics.
Lighting¶
Classes¶
Light([color, intensity, name]): Base class for all light objects.
DirectionalLight([color, intensity, name]): Directional lights are light sources that act as though they are infinitely far away and emit light in the direction of the local -z axis.
SpotLight([color, intensity, range, …]): Spot lights emit light in a cone in the direction of the local -z axis.
PointLight([color, intensity, range, name]): Point lights emit light in all directions from their position in space; rotation and scale are ignored except for their effect on the inherited node position.
Objects¶
Classes¶
Sampler([name, magFilter, minFilter, wrapS, …]): Texture sampler properties for filtering and wrapping modes.
Texture([name, sampler, source, …]): A texture and its sampler.
Material([name, normalTexture, …]): Base for standard glTF 2.0 materials.
MetallicRoughnessMaterial([name, …]): A material based on the metallic-roughness material model from Physically-Based Rendering (PBR) methodology.
Primitive(positions[, normals, tangents, …]): A primitive object which can be rendered.
Mesh(primitives[, name, weights, is_visible]): A set of primitives to be rendered.
Scenes¶
Classes¶
Node([name, camera, children, skin, matrix, …]): A node in the node hierarchy.
Scene([nodes, bg_color, ambient_light, name]): A hierarchical scene graph.
On-Screen Viewer¶
Off-Screen Rendering¶
Classes¶
OffscreenRenderer(viewport_width, …[, …]): A wrapper for offscreen rendering.