.. _volume_rendering:

Volume Rendering
================
.. versionadded:: 1.6

.. warning:: The volume renderer is beta software!  It's still being developed,
             it's kind of slow, and the interface is still rough around the edges.
             Watch this space, though!

Status
------

The volume renderer has undergone many changes since the last release: not only
has it been optimized, but support for a variety of rendering schemes has been
added.  While the volume renderer is still only a secondary project, acting as
a supplement to the primary functionality of ``yt``, it shows promise for
exploring data and creating visualizations of simulations.

The volume renderer is implemented in a hybrid of Python and Cython, which is
Python-like code compiled down to C.  It has been optimized, but it is still a
*software* volume renderer: it does not currently operate on graphics
processing units (GPUs).  However, while the rendering engine itself may not
directly translate to GPU code (OpenCL, CUDA, or OpenGL), the Python structures
-- partitioning, transfer functions, display, and so on -- may be useful in the
future for transitioning the rendering to the GPU.

Method
------

Direct ray casting through a volume enables the generation of new types of
visualizations and images describing a simulation.  ``yt`` now has the facility
to generate volume renderings by a direct ray casting method.  This comes with
several important caveats, as well as pointers to other resources that may
provide better images faster.  However, the ability to create volume renderings
informed by other analysis mechanisms -- for instance, halo location,
angular momentum, or spectral energy distributions -- is useful.

The volume rendering in ``yt`` follows a relatively straightforward approach.

#. Create a set of transfer functions governing the emission and absorption as
   a function of one or more variables. (:math:`f(v) \rightarrow (r,g,b,a)`)
   These can be functions of any field variable, weighted by independent
   fields, and even weighted by other evaluated transfer functions.  (See
   :ref:`transfer_functions`.)
#. Partition all grids into non-overlapping, fully domain-tiling "bricks."
   Each of these "bricks" contains the finest available data at any location.
#. Generate vertex-centered data for all grids in the volume rendered domain.
#. Order the bricks from back-to-front.
#. Construct a plane of rays parallel to the image plane, with initial values set
   to zero and located at the back of the region to be rendered.
#. For every brick, identify which rays intersect.  These are then each 'cast'
   through the brick.

   #. Every cell a ray intersects is sampled 5 times (adjustable by parameter),
      and data values at each sampling point are trilinearly interpolated from
      the vertex-centered data.
   #. Each transfer function is evaluated at each sample point.  This gives us,
      for each channel, both emission (:math:`j`) and absorption
      (:math:`\alpha`) values.
   #. The value for the pixel corresponding to the current ray is updated with
      new values calculated by rectangular integration over the path length:

      :math:`v^{n+1}_{i} =  j_{i}\Delta t + (1 - \alpha_{i}\Delta t )v^{n}_{i}`

      where :math:`n` and :math:`n+1` represent the pixel value before and
      after passing through a sample, :math:`i` is the color channel (red,
      green, or blue), and :math:`\Delta t` is the path length between samples.
#. The image is returned to the user:

.. image:: _images/vr_sample.png
   :width: 512
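
The integration step above can be sketched in plain NumPy.  This is a
conceptual sketch of the update rule only, not yt's Cython implementation;
``march_ray`` and its argument layout are invented for illustration:

.. code-block:: python

   import numpy as np

   def march_ray(emission, absorption, dt):
       """Accumulate a single ray back-to-front using the rectangular rule
       from the text: v_{n+1} = j * dt + (1 - alpha * dt) * v_n.

       emission, absorption: arrays of shape (n_samples, 3) holding the
       per-channel j and alpha values at each sample point, ordered from
       the back of the volume to the front.
       """
       v = np.zeros(3)  # pixel starts at zero at the back of the region
       for j, alpha in zip(emission, absorption):
           v = j * dt + (1.0 - alpha * dt) * v
       return v

   # With no absorption, the pixel simply sums emission along the path:
   j = np.ones((10, 3))
   a = np.zeros((10, 3))
   print(march_ray(j, a, 0.1))  # -> [1. 1. 1.]

Note that because each sample attenuates the accumulated value behind it,
bricks must be traversed strictly back-to-front for this update to be correct.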

.. _transfer_functions:

Transfer Functions
------------------

Transfer functions are the most essential component of the volume renderer.
Several fundamental types are provided, and there are many ways to construct
complicated expressions from them to produce visualizations and images using
the underlying machinery.

.. note::
   All of the information about how transfer functions are used and values
   extracted is contained in the functions ``TransferFunctionProxy.eval_transfer``
   and ``FIT_get_value`` in the file ``yt/_amr_utils/VolumeIntegrator.pyx``.  If
   you're curious about how to construct your own, or why you get the values
   you do, you should read the source!

There are three ready-to-go transfer functions implemented in ``yt``:
:class:`~yt.extensions.volume_rendering.ColorTransferFunction`,
:class:`~yt.extensions.volume_rendering.ProjectionTransferFunction`, and
:class:`~yt.extensions.volume_rendering.PlanckTransferFunction`.

Color Transfer Functions
++++++++++++++++++++++++

These transfer functions are the standard way to apply colors to specific
values in the field being rendered -- for instance, to draw isocontours at
specific densities.  Several mechanisms are available.  The easiest is
:meth:`~yt.extensions.volume_rendering.ColorTransferFunction.add_layers`, which
adds evenly spaced isocontours between the bounds of the transfer function.
You can also use
:meth:`~yt.extensions.volume_rendering.ColorTransferFunction.sample_colormap`,
which samples a colormap at a given value.  Additionally, you can call
:meth:`~yt.extensions.volume_rendering.ColorTransferFunction.add_gaussian`,
which lets you place a single contour and specify its colors directly.

See :ref:`cookbook-simple_volume_rendering` for an example usage.
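
Each layer is conceptually a Gaussian bump in every channel around a chosen
field value.  The following is a minimal NumPy sketch of that idea; the
function name and the exact width convention are illustrative assumptions,
not yt's implementation:

.. code-block:: python

   import numpy as np

   def gaussian_layer(values, center, width, rgba):
       """One isocontour 'layer': each channel (r, g, b, a) receives
       height * exp(-(v - center)**2 / width) at field value v.  The
       width convention here is assumed for illustration."""
       rgba = np.asarray(rgba, dtype=float)
       bump = np.exp(-((np.asarray(values, dtype=float) - center) ** 2) / width)
       return bump[:, None] * rgba[None, :]

   # A narrow red layer centered on log10(density) = -27; samples far
   # from the center contribute essentially nothing.
   samples = np.array([-28.0, -27.0, -26.0])
   layer = gaussian_layer(samples, -27.0, 0.01, [1.0, 0.0, 0.0, 1.0])

Summing several such layers at different centers is, roughly, what
``add_layers`` does for you automatically.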

Projection Transfer Function
++++++++++++++++++++++++++++

This is designed to allow you to very easily project off-axis through a region.
See :ref:`cookbook-offaxis_projection` for a simple example.  Note that the
integration here is scaled to a width of 1.0; this means that if you want to
apply a colorbar, you will have to multiply by the integration width (specified
when you initialize the volume renderer) in whatever units are appropriate.
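
Rescaling the returned image would look like the following sketch; the
function name and the choice of centimeters are assumptions for illustration:

.. code-block:: python

   import numpy as np

   def rescale_projection(image, width):
       """The ray integration is normalized to a path length of 1.0, so
       multiply by the physical width of the rendered region (in whatever
       unit your colorbar should carry, e.g. cm) to restore real values."""
       return np.asarray(image, dtype=float) * width

   # A region roughly 1 Mpc across, expressed in centimeters:
   scaled = rescale_projection([[0.5, 1.0]], 3.0857e24)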

Planck Transfer Function
++++++++++++++++++++++++

This transfer function is designed to apply a semi-realistic color field based
on temperature, emission weighted by density, and approximate scattering based
on the density.  This class is currently under-documented, and it may be best
to examine the source code to use it.

More Complicated Transfer Functions
+++++++++++++++++++++++++++++++++++

For more complicated transfer functions, you can use the
:class:`~yt.extensions.volume_rendering.MultiVariateTransferFunction` object.
This allows for a very complicated set of weightings, linkages and so on.

The Simple Volume Rendering Interface
-------------------------------------

The simplest interface to volume rendering is exposed in the recipe
:ref:`cookbook-simple_volume_rendering`.  Essentially, this mechanism will
partition and homogenize the necessary region, ray cast, and return to the user
an image.  This is suitable for situations such as time-series volume
rendering, or a single-shot volume rendering.  However, if you want to make
multiple images from the same output, or do more complicated things like
stereoscopic volume rendering, it is recommended to use the camera interface as
discussed in :ref:`volume_rendering_camera`.

The full API for generating volume renderings with this mechanism can be found
in the documentation for
:class:`~yt.extensions.volume_rendering.VolumeRendering`, but it is typically
accessed as the class-factory ``volume_rendering`` that hangs off the hierarchy
object:

.. code-block:: python

   vp = pf.h.volume_rendering(L, W, c, Nvec, tf, whole_box=True)

.. note:: If you do not import ``yt.extensions.volume_rendering`` before
          loading a parameter file, this interface will be unavailable!

.. _volume_rendering_camera:

The Camera Interface
--------------------

.. versionadded:: 1.7

The future of the volume rendering module lies in manipulation of a camera.
Volume rendering currently occurs via orthographic projection; the camera
interface will allow a move to perspective projection, and will support more
complex operation while retaining a simple interface.

The primary interface here is through the creation of an instance of
:class:`~yt.extensions.volume_rendering.Camera`, which represents a viewpoint
into a volume.  The camera optionally accepts an already-initialized instance
of :class:`~yt.extensions.volume_rendering.HomogenizedVolume`; if one is not
supplied, the camera will generate one itself.  Supplying a volume explicitly
is useful if you wish to reuse bricks between repeated calls, which can save
considerable time.

The camera interface allows the user to move the camera about the domain, as
well as providing interfaces for zooming in and out.  Furthermore, ``yt`` now
includes a stereoscopic camera
(:class:`~yt.extensions.volume_rendering.StereoPairCamera`).

.. warning:: As of the current release, when instantiating a Camera object, you
             must specify the keyword parameter ``pf``.

Here's a fully functional script that demonstrates how to use the camera
interface.  This also includes an example of how to store and re-use
partitioned volumes between executions.

.. code-block:: python

   from yt.mods import *
   import yt.extensions.volume_rendering as vr
   import yt.extensions.image_writer as iw
   import os

   pf = load("DD1701/DD1701")
   fn = "%s_partitioned.h5" % pf

   dd = pf.h.all_data()
   # Bracket the transfer function within the extrema of log10(Density)
   mi, ma = na.log10(dd.quantities["Extrema"]("Density")[0])
   mi += 0.5; ma -= 0.5  # leave a bit of room at the edges
   hv = vr.HomogenizedVolume(source = dd, pf = pf, fields=["Density"],
                             log_fields=[True])
   if not os.path.exists(fn):
       hv.initialize_source()
       hv.store_bricks(fn)
   else:
       hv.load_bricks(fn)

   tf = vr.ColorTransferFunction((mi, ma))
   tf.add_layers(8, w=0.01)
   c = na.array([0.51,0.52,0.53])
   L = na.array([1.0, 1.0, 1.0])
   W = 1.1

   N = 256

   cam = vr.Camera(c, L, W, (N,N),
                   transfer_function = tf, pf = pf, volume = hv)
   fn = "%s_image.png" % pf

   cam.zoom(1.0)
   vals = cam.snapshot()

   iw.write_bitmap(vals, fn, vals.std()*4.0)

The :class:`~yt.extensions.volume_rendering.StereoPairCamera` object has a
single primary method,
:meth:`~yt.extensions.volume_rendering.StereoPairCamera.split`, that will
return two cameras, a left and a right.
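
Conceptually, the split offsets the camera center to either side along the
axis perpendicular to both the viewing normal and the "up" vector.  Below is
a NumPy sketch of that geometry; the function and parameter names are
invented, so consult ``StereoPairCamera`` itself for the real interface:

.. code-block:: python

   import numpy as np

   def stereo_centers(center, normal, north, separation):
       """Return left- and right-eye camera centers by shifting the
       original center along the 'east' axis, perpendicular to both the
       viewing normal and the up (north) vector."""
       east = np.cross(normal, north).astype(float)
       east /= np.linalg.norm(east)
       offset = 0.5 * separation * east
       return center - offset, center + offset

   left, right = stereo_centers(np.array([0.5, 0.5, 0.5]),
                                np.array([0.0, 0.0, 1.0]),
                                np.array([0.0, 1.0, 0.0]),
                                0.01)
   # The two eye positions end up exactly `separation` apart.

Rendering a snapshot from each of the two returned cameras and combining the
images yields the stereo pair.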
