
Source Code for Module examples.volumerenderer

#!/usr/bin/env python

#! This file is a literate Python program. You can compile the documentation
#! using mylit (http://pypi.python.org/pypi/mylit/).
## title = "glitter Example: Volume Renderer"
## stylesheet = "pygments_style.css"

# <h1><i>glitter</i> Example: Volume Renderer</h1>

# <h2>Summary</h2>

# This program will create an invisible OpenGL context to render a volume,
# given as an <a href="http://www.hdfgroup.org/HDF5/">HDF5</a> data file, to a
# series of images that are written to files.

# We will use a very simple algorithm for volume rendering. First, the back
# faces of the volume's bounding box are rendered into a texture using
# front face culling. The texture coordinates (or positions) of the fragments
# are stored in the texture. Then, in a second pass, the front faces of the
# bounding box are rendered using back face culling. The shader reads the
# texture coordinates of the previous step from the texture, so that it now
# knows both the viewing ray's entry point into the bounding box (from the
# current texture coordinates) and the ray's exit point (from the texture
# coordinates stored in the texture). The shader then uniformly samples a 3D
# texture along the line between these points and sums up the intensities. This
# is a valid image formation model for purely emissive volumes. To simulate an
# absorbing volume illuminated uniformly from the back, an exponential
# transform can optionally be applied to the final pixel intensities.

# <h2>Front matter</h2>

# <h3>Module docstring</h3>

# The module docstring is used as a description of this example in the
# generated documentation:
"""Volume renderer that creates a set of image files from different perspectives.

@author: Stephan Wenger
@date: 2012-03-07
"""

# <h3>Imports</h3>

# We will use <a
# href="http://docs.python.org/library/itertools.html">itertools</a> to create
# the vertices of a cube:
import itertools

# <a href="http://numpy.scipy.org/">numpy</a> is used for array operations:
import numpy

# We can usually import classes and functions contained in <i>glitter</i>
# submodules directly from glitter:
from glitter import Framebuffer, ShaderProgram, RectangleTexture, VertexArray, float32, State, Texture3D

# <h2>Shaders</h2>

# The vertex shader transforms the position as usual; additionally, it stores
# the unmodified vertex coordinates for use in the fragment shader:
vertex_code = """
#version 400 core

layout(location=0) in vec4 in_position;
uniform mat4 modelview_matrix;
out vec4 ex_front;

void main() {
    ex_front = in_position;
    gl_Position = modelview_matrix * 2.0 * (in_position - 0.5) * vec4(1.0, 1.0, 0.5, 1.0);
}
"""
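
# The remapping in the last line of the vertex shader is easy to misread. The
# following small check is an illustrative aside, not part of the renderer; it
# mirrors the GLSL expression (which is evaluated left to right) and shows where
# the unit-cube corners end up. The helper name <code>clip_position</code> is
# hypothetical, and the interpretation that compressing z into [-0.5, 0.5] keeps
# the rotated cube inside the default clip volume (no projection matrix is
# applied) is an assumption.
import itertools
import numpy

def clip_position(corner, modelview=numpy.eye(4)):
    # GLSL: modelview_matrix * 2.0 * (in_position - 0.5) * vec4(1.0, 1.0, 0.5, 1.0)
    # evaluates as ((modelview * 2.0) * (p - 0.5)), then scaled componentwise by
    # (1, 1, 0.5, 1), where p is the vec4 (x, y, z, 1) built from the vertex.
    p = numpy.array([corner[0], corner[1], corner[2], 1.0])
    return (2.0 * modelview).dot(p - 0.5) * numpy.array([1.0, 1.0, 0.5, 1.0])

for corner in itertools.product((0.0, 1.0), repeat=3):
    print(corner, "->", clip_position(corner))
# With the identity modelview, x and y land in [-1, 1], z in [-0.5, 0.5], and w stays 1.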

# We need two different fragment shaders: one for the back faces and one for
# the front faces. The shader for the back faces simply stores the
# untransformed position in the color output:
back_fragment_code = """
#version 400 core
#extension GL_ARB_texture_rectangle : enable

in vec4 ex_front;
layout(location=0) out vec4 out_color;

void main() {
    out_color = ex_front;
}
"""

# The fragment shader for the front faces performs the actual integration along
# the viewing ray:
front_fragment_code = """
#version 400 core
#extension GL_ARB_texture_rectangle : enable

#define STEP_SIZE 0.001
#define MAX_LENGTH sqrt(3.0)

in vec4 ex_front;
uniform sampler2DRect back;
uniform sampler3D volume;
uniform float intensity_scale;
uniform bool absorption;
layout(location=0) out vec4 out_color;

void main() {
    vec3 p0 = texture(back, gl_FragCoord.st).xyz;
    vec3 p1 = ex_front.xyz;
    vec4 intensity = vec4(0.0);

    vec3 d = p1 - p0;
    float l = length(d);
    if (l < MAX_LENGTH) {
        vec3 s = d * STEP_SIZE / l;
        vec3 pos = p0 + 0.5 * s;

        for (float f = 0.5 * STEP_SIZE; f < l; f += STEP_SIZE) {
            intensity += texture(volume, pos);
            pos += s;
        }
    }

    out_color = absorption ? exp(-intensity_scale * intensity) : intensity_scale * intensity;
}
"""
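
# To make the accumulation loop concrete, here is a minimal CPU sketch of the
# same integration; it is an illustrative aside, not part of the renderer. It
# assumes the volume is a plain numpy array indexed in texture-coordinate order
# and replaces the GPU's texture filtering with nearest-neighbor lookups; the
# helper name <code>integrate_ray</code> is hypothetical.
import numpy

def integrate_ray(volume, p0, p1, step_size=0.001, intensity_scale=1.0, absorption=False):
    # p0 and p1 are the ray entry and exit points in texture coordinates, i.e. in [0, 1]^3.
    p0, p1 = numpy.asarray(p0, dtype=float), numpy.asarray(p1, dtype=float)
    d = p1 - p0
    l = numpy.linalg.norm(d)
    shape = numpy.array(volume.shape)
    intensity = 0.0
    # Mirror the shader's MAX_LENGTH guard; additionally skip zero-length rays.
    if 0.0 < l < numpy.sqrt(3.0):
        s = d * step_size / l
        pos = p0 + 0.5 * s
        f = 0.5 * step_size
        while f < l:
            idx = numpy.clip((pos * shape).astype(int), 0, shape - 1)
            intensity += volume[tuple(idx)]
            pos += s
            f += step_size
    # Emission only: scaled sum of samples; absorption: exponential transform of that sum.
    return numpy.exp(-intensity_scale * intensity) if absorption else intensity_scale * intensity

# A short ray through a constant-density volume, for illustration:
print(integrate_ray(numpy.full((32, 32, 32), 0.01), (0.1, 0.1, 0.1), (0.9, 0.9, 0.9), intensity_scale=0.1))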

# <h2>Vertex arrays</h2>

# Here we define the faces of a cube that will serve as a bounding box for the
# volume:
cube_vertices = tuple(itertools.product((0.0, 1.0), (0.0, 1.0), (0.0, 1.0)))
cube_indices = ((0, 1, 3), (0, 2, 6), (0, 3, 2), (0, 6, 4), (3, 1, 5), (3, 5, 7),
        (4, 1, 0), (4, 5, 1), (5, 4, 6), (5, 6, 7), (7, 2, 3), (7, 6, 2))
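
# As an illustrative aside (not part of the renderer): <code>itertools.product</code>
# enumerates the corners in binary-counting order, so vertex i is simply the binary
# expansion of i, and each index triple above lies in one of the six face planes of
# the cube (two triangles per face). A quick sanity check:
assert all(cube_vertices[i] == ((i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(8))
for triangle in cube_indices:
    # One coordinate is constant across the triangle's three vertices, i.e. the
    # triangle lies in a face plane of the cube.
    assert any(len(set(cube_vertices[i][axis] for i in triangle)) == 1 for axis in range(3))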

# <h2>Volume renderer</h2>

# The <code>VolumeRenderer</code> class performs the actual volume rendering:
class VolumeRenderer(object):
    # Some default values for instance variables can be declared as class
    # variables (these values are all immutable, so no problem here):
    modelview_matrix = tuple(map(tuple, numpy.eye(4)))
    intensity_scale = 1.0
    absorption = False

    # <h3>Initialization</h3>
    def __init__(self, volume, **kwargs):
        assert isinstance(volume, Texture3D), "volume must be a 3D texture"

        # We need a vertex array to hold the faces of the cube:
        self.vao = VertexArray(cube_vertices, elements=cube_indices)

        # For rendering the back faces, we create a shader program and a
        # framebuffer with an attached texture:
        self.back_shader = ShaderProgram(vertex=vertex_code, fragment=back_fragment_code)
        self.back_fbo = Framebuffer(RectangleTexture(shape=volume.shape[:2] + (4,), dtype=float32))

        # For the front faces, we do the same. Additionally, the shader
        # receives the texture containing the back face coordinates in a
        # uniform variable <code>back</code>, and the 3D texture containing the
        # volumetric data in a uniform variable <code>volume</code>.
        self.front_shader = ShaderProgram(vertex=vertex_code, fragment=front_fragment_code, back=self.back_fbo[0], volume=volume)
        self.front_fbo = Framebuffer(RectangleTexture(shape=volume.shape[:2] + (4,), dtype=float32))

        # All other parameters are simply set as attributes (this might be
        # <code>modelview_matrix</code>, <code>intensity_scale</code>, or
        # <code>absorption</code>).
        for key, value in kwargs.items():
            setattr(self, key, value)

    # <h3>Rendering</h3>
    def render(self):
        # We first draw the back faces, storing their coordinates into a
        # texture. This means we have to clear and bind the back framebuffer,
        # activate face culling with cull face mode "FRONT", bind the back face
        # shader with the current modelview matrix set, and render the vertex
        # array for the cube:
        self.back_fbo.clear()
        with State(cull_face=True, cull_face_mode="FRONT"):
            with self.back_shader(modelview_matrix=self.modelview_matrix):
                with self.back_fbo:
                    self.vao.draw()

        # We then draw the front faces, accumulating the intensity between
        # front and back face. The setup is very similar to the one stated
        # before, except that the additional uniform variables
        # <code>intensity_scale</code> and <code>absorption</code> are passed
        # to the shader:
        self.front_fbo.clear()
        with State(cull_face=True, cull_face_mode="BACK"):
            with self.front_shader(modelview_matrix=self.modelview_matrix, intensity_scale=self.intensity_scale, absorption=self.absorption):
                with self.front_fbo:
                    self.vao.draw()

        # Finally, we simply return the texture into which we just rendered.
        # The client code might want to display this directly, or to download
        # the data into a numpy array.
        return self.front_fbo[0]

# <h2>Initialization and main loop</h2>

# Finally, if this program is being run from the command line, we set up all
# the previously mentioned objects and start rendering. The program expects
# two command line parameters: the name of an HDF5 input file, and a
# <code>printf</code>-style format string for the image filenames.
if __name__ == "__main__":
    # We need to read a volume filename from <code>sys.argv</code>, so import
    # <code>sys</code>.
    import sys

    # We assume the volume is stored in an <a
    # href="http://www.hdfgroup.org/HDF5/">HDF5</a> file, so import <a
    # href="http://h5py.alfven.org"><code>h5py</code></a>.
    import h5py

    # For saving the resulting images, we use <a
    # href="http://scipy.org/">SciPy</a>'s <code>imsave</code> function:
    from scipy.misc import imsave

    # We open the HDF5 file specified on the command line for reading:
    with h5py.File(sys.argv[1]) as f:
        # The volumetric data is read from the corresponding dataset in the
        # HDF5 file. Note that the name of the dataset is mere convention.
        volume = Texture3D(f["data"])
        # The data can be either emission densities only or absorption
        # densities only. If the dataset defines an attribute called
        # "absorption", we read it; otherwise we assume it is an emission-only
        # dataset.
        absorption = f["data"].attrs.get("absorption", False)

    # Now we generate a <code>VolumeRenderer</code> instance for the volumetric
    # data that has just been read.
    renderer = VolumeRenderer(volume, absorption=absorption, intensity_scale=0.1, modelview_matrix=numpy.eye(4))

    # Finally, we iterate over a number of different viewing angles, manipulate
    # the modelview matrix accordingly, render the volume, and save it to an
    # image file. All frames are normalized by the maximum intensity of the
    # first frame so that their brightness is comparable:
    maxval = None
    for idx, angle in enumerate(numpy.mgrid[0:2*numpy.pi:100j]):
        renderer.modelview_matrix[::2, ::2] = numpy.array(((numpy.cos(angle), numpy.sin(angle)), (-numpy.sin(angle), numpy.cos(angle))))
        # The <code>render()</code> function returns a <code>Texture</code>
        # object. We can access the texture data via the <code>data</code>
        # attribute. We mirror the image vertically and chop off the alpha
        # channel before storing the image in a file:
        image = renderer.render().data[::-1, :, :3]
        if maxval is None:
            maxval = image.max()
        imsave(sys.argv[2] % idx, image / maxval)

    # Note how we did not need to create an OpenGL context explicitly. The
    # first object that needs an OpenGL context (in this case the
    # <code>VertexArray</code> in the <code>VolumeRenderer</code>) will create
    # an invisible context as soon as it is initialized. This is very
    # convenient; on the other hand, it means that if you create a context
    # manually (e.g., because you need a visible window), you have to
    # initialize it before any other objects. Otherwise, these objects may end
    # up in the wrong context, and sharing data across contexts is a problem of
    # its own.
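
# One last illustrative aside (not part of the program): the slice assignment
# <code>renderer.modelview_matrix[::2, ::2] = ...</code> in the loop above overwrites
# rows and columns 0 and 2 of the 4x4 matrix, i.e. it mixes the x and z axes while
# leaving y and w untouched, which rotates the cube about the y axis:
import numpy

angle = 0.5 * numpy.pi  # a quarter turn, chosen arbitrarily for this sketch
m = numpy.eye(4)
m[::2, ::2] = numpy.array(((numpy.cos(angle), numpy.sin(angle)), (-numpy.sin(angle), numpy.cos(angle))))
print(numpy.round(m, 3))  # only m[0, 0], m[0, 2], m[2, 0] and m[2, 2] differ from the identity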