.. _image-panner:

The AMR Image Panner
====================
.. versionadded:: 1.7

yt creates images from reduced 2D AMR data by feeding that data through a
pixelization routine, which takes flattened arrays of pixel locations, widths,
and values and returns a 2D buffer of data suitable for plotting as an image.
This lets us pan and zoom virtually at will, with no extra cost from
re-projecting the simulation.

Typically, this is all hidden from the user: the plot interface mostly lets
you describe your plot, rather than describe moving around inside the plot.
With the image panning extension, however, you have more freedom: tools can be
built on top of the pixelization routine, so that as the user moves around in a
given dataset, updated buffers can be used for things like image plotting and
display.

The image panning object acts as a frontend to the data; it provides actions
to pan and zoom an imaginary porthole on the data.  When the bounds are
updated, an image corresponding to those bounds is re-created and an optional
callback is called.  The image panner object should be viewed more as a
component in a larger system than as anything particularly useful on its own.
However, the yt code already includes a few plugins that use the image panner
to do some interesting things.

An image panner is uniquely defined by:

 * Source: this is the 2D AMR data object that provides the data to be
   pixelized.  Currently, this can only be a slice or a projection.
 * Size: this is the size of the buffer to pixelize.
 * Field: this is the field to pixelize, from the source.

"Callbacks" can be provided as well.  The idea behind a callback is that every
time either the image buffer or the image viewport changes, the callbacks are
called with the new information, so that any dependent routines can then be
run.  This way, routines that rely on image or viewport information are called
only when necessary.
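The callback contract can be sketched in plain Python.  This is an
illustrative stand-in, not yt's actual panner class; the names ``TinyPanner``
and ``set_bounds`` are hypothetical:

```python
# A minimal sketch of the callback pattern described above.  The class and
# method names are hypothetical, not yt's actual implementation.
class TinyPanner:
    def __init__(self, callback=None):
        # Viewport bounds stored as (x_min, x_max, y_min, y_max)
        self.bounds = (0.0, 1.0, 0.0, 1.0)
        self.callback = callback

    def set_bounds(self, bounds):
        # Only notify when the viewport actually changes, so dependent
        # routines run only when necessary.
        if bounds == self.bounds:
            return
        self.bounds = bounds
        if self.callback is not None:
            self.callback(self.bounds)

calls = []
p = TinyPanner(callback=calls.append)
p.set_bounds((0.25, 0.75, 0.25, 0.75))  # viewport changed: callback fires
p.set_bounds((0.25, 0.75, 0.25, 0.75))  # no change: no callback
```

The second ``set_bounds`` call is a no-op, which is exactly the "only when
necessary" behavior described above.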

When the image panning extension is imported, most of the image panner classes
and subclasses are registered to hang off the hierarchy object, as all other
data objects do.  The object names are listed in the API documentation, below.

Now, for some examples!

Interactive Motion and Image Saving
-----------------------------------

This is the simplest example, where a callback is provided that saves an image
every time the pixelized image is updated.  The components of this are provided
in the distribution, so only a minimal set of code is necessary.  Here's a
fiducial example, assuming that we just want to save out every time we change
the image:

.. code-block:: python

   import yt.extensions.image_panner
   from yt.mods import *
   
   pf = load("RD0005-mine/RedshiftOutput0005")
   proj = pf.h.proj(0, "Density", "Density")
   
   saver = yt.extensions.image_panner.ImageSaver(0)
   ip = pf.h.image_panner(proj, (512, 512), "Density", callback = saver)

At this point, the *ip* object is set up, and it can accept commands.  Every
time a zoom or pan command is issued, it will re-save a file in the current
working directory called ``wimage_000.png`` (the ``000`` comes from feeding the
ImageSaver ``0`` during instantiation).  So, for instance, we start out by
initiating our first save:

.. code-block:: python

   ip.zoom(1.0)

This sets the scale nicely, and we have this image saved out:

.. image:: _images/ip_saver_0.png
   :width: 256

This is the full domain.  We can now zoom in:

.. code-block:: python

   ip.zoom(3.0)

and we re-load our ``wimage_000.png`` file:

.. image:: _images/ip_saver_1.png
   :width: 256

But now we see a little halo in the lower right, so we can pan over there.
There are two ways to pan -- either by absolute values or by values specified
relative to the current viewport size.  We'll use the latter; it looks like
moving about 25% of the viewport will get us over to that halo.

.. code-block:: python

   ip.pan_rel( (0.25, 0.25) )

And reloading our image, we see:

.. image:: _images/ip_saver_2.png
   :width: 256

We've successfully centered!
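Under the hood, ``zoom`` and ``pan_rel`` amount to simple arithmetic on the
viewport bounds.  Here is a plausible sketch of that arithmetic -- the helper
functions are hypothetical stand-ins, not yt's source -- where zooming by a
factor shrinks the viewport about its center, and relative panning shifts it
by fractions of the current viewport size:

```python
# Illustrative bounds arithmetic for zooming and relative panning.  These
# helpers are hypothetical; they only demonstrate the viewport math.
def zoom(bounds, factor):
    # bounds = (x_min, x_max, y_min, y_max); factor > 1 zooms in
    x0, x1, y0, y1 = bounds
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hw, hh = (x1 - x0) / (2.0 * factor), (y1 - y0) / (2.0 * factor)
    return (cx - hw, cx + hw, cy - hh, cy + hh)

def pan_rel(bounds, deltas):
    # Shift by fractions of the current viewport width and height
    x0, x1, y0, y1 = bounds
    dx, dy = deltas[0] * (x1 - x0), deltas[1] * (y1 - y0)
    return (x0 + dx, x1 + dx, y0 + dy, y1 + dy)

b = (0.0, 1.0, 0.0, 1.0)     # full unit domain
b = zoom(b, 3.0)             # roughly (0.333, 0.667, 0.333, 0.667)
b = pan_rel(b, (0.25, 0.25)) # shift by 25% of the zoomed viewport
```

Note that a relative pan of 25% moves a smaller absolute distance after
zooming in, since the viewport itself is smaller.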

Windowed Rendering
------------------

Another possibility is that you'd like to have multiple pixelized tiles
rendered out, but with only one controller process.  This is most useful when
the rendering occurs in separate processes, but the sequential generation of
tiles is abstracted by this interface.

Note that here, unlike above, we supply a different number to each instance of
the ImageSaver.  This lets us generate multiple tiles at a time.

.. code-block:: python

   import yt.extensions.image_panner as image_panner
   from yt.mods import *
   
   pf = load("RD0005-mine/RedshiftOutput0005")
   proj = pf.h.proj(0, "Density", "Density")
   
   ws = []
   
   saver = image_panner.ImageSaver(0)
   ws.append(pf.h.windowed_image_panner(proj, (1024, 1024), (512, 512),
             (0, 0), "Density", callback = saver))
   
   saver = image_panner.ImageSaver(1)
   ws.append(pf.h.windowed_image_panner(proj, (1024, 1024), (512, 512),
             (512, 0), "Density", callback = saver))
   
   saver = image_panner.ImageSaver(2)
   ws.append(pf.h.windowed_image_panner(proj, (1024, 1024), (512, 512),
             (0, 512), "Density", callback = saver))
   
   saver = image_panner.ImageSaver(3)
   ws.append(pf.h.windowed_image_panner(proj, (1024, 1024), (512, 512),
             (512, 512), "Density", callback = saver))
   
   mwvmp = image_panner.MultipleWindowVariableMeshPanner(ws)

The creation of windowed image panners is slightly different from that of
standard image panners: each one is explicitly told how big the final image is,
how big its own portion is, and the pixel index at which its portion starts.
Finally, this list is supplied to an instance of a controller object.
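The division of labor can be made concrete with a little arithmetic: each
window's share of the global viewport is just its pixel origin and size,
expressed as fractions of the full image.  The helper below is hypothetical,
not part of yt, but it shows the mapping:

```python
# Hypothetical helper: compute the world-space bounds of one tile, given the
# full image size, the tile size, the tile's pixel origin, and the global
# viewport bounds (x_min, x_max, y_min, y_max).
def window_bounds(full_size, window_size, origin, global_bounds):
    x0, x1, y0, y1 = global_bounds
    fx, fy = float(full_size[0]), float(full_size[1])
    wx0 = x0 + (x1 - x0) * origin[0] / fx
    wy0 = y0 + (y1 - y0) * origin[1] / fy
    wx1 = wx0 + (x1 - x0) * window_size[0] / fx
    wy1 = wy0 + (y1 - y0) * window_size[1] / fy
    return (wx0, wx1, wy0, wy1)

# The (512, 0) tile of a 1024x1024 image covers one quadrant of the
# unit domain:
b = window_bounds((1024, 1024), (512, 512), (512, 0), (0.0, 1.0, 0.0, 1.0))
# b == (0.5, 1.0, 0.0, 0.5)
```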

When the following code is executed:

.. code-block:: python

   mwvmp.zoom(1.0)

four new 512x512 images will be output covering each of the individual regions
"owned" by each windowed panner.  The final image can be constructed by
stitching these sub-images together.  Why is this useful?  Well, we can also
dispatch the rendering jobs to remote nodes!
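Stitching the sub-images back together is a simple paste into a full-size
buffer.  Here is a sketch using NumPy; the tile arrays below stand in for the
decoded ``wimage_*.png`` buffers, and the origins match the four windowed
panners above:

```python
import numpy as np

# Paste each 512x512 tile into a 1024x1024 mosaic at its pixel origin.
# These synthetic tiles stand in for the rendered wimage_*.png buffers.
origins = [(0, 0), (512, 0), (0, 512), (512, 512)]
tiles = [np.full((512, 512), i) for i in range(4)]

mosaic = np.zeros((1024, 1024))
for (ox, oy), tile in zip(origins, tiles):
    # NumPy indexes rows first, so oy selects rows and ox selects columns.
    mosaic[oy:oy + 512, ox:ox + 512] = tile
```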
   
Remote Windowed Rendering
-------------------------

The section :ref:`interactive-parallel` describes the process of launching an
IPython-based controller and engines system.  That discussion won't be repeated
here in full, but if you simply launch several ``ipengine`` instances and an
``ipcontroller`` -- either locally or remotely -- you can utilize a parallel
version of the windowed renderer.

.. warning::
   Note that by default ``ipcontroller`` opens up ports!  It is recommended
   that you supply the argument ``--engine-ip=127.0.0.1`` to ensure only local
   connections are allowed.  SSH tunnelling can then be used to connect to the
   controller.

Note that you can also supply callbacks to the remote nodes -- this is the
basis of a future enhancement, where this interface will be used to drive a
tiled display wall: each render node will supply a large pixelized image
corresponding to its own tile, to be displayed at full resolution.

The interface here is similar to the windowed rendering above, but it will
attempt to dispatch work to engines through the IPython MultiEngineClient
interface instead of creating objects locally.

.. code-block:: python

   import yt.extensions.image_panner
   from yt.mods import *
   
   pf = load("/Users/matthewturk/Research/data/RD0005-mine/RedshiftOutput0005")
   proj = pf.h.proj(0, "Density", "Density")
   
   rvmp = pf.h.remote_image_panner(proj)
   
   rvmp.add_window( (1024, 1024), (512, 512), (0, 0), "Density")
   rvmp.add_window( (1024, 1024), (512, 512), (512, 0), "Density")
   rvmp.add_window( (1024, 1024), (512, 512), (0, 512), "Density")
   rvmp.add_window( (1024, 1024), (512, 512), (512, 512), "Density")
   
   rvmp.zoom(1.0)

The implicit callback supplied to the remote objects is a saver, with the tile
ID as the image name: so all of these will save out ``wimage_XXX.png`` files,
where ``XXX`` is the tile ID of that window.

Google Maps Interface
---------------------

If you have `Chaco <http://code.enthought.com/chaco/>`_ installed, either by
itself or as part of the full Enthought Tool Suite, you can use a pan and zoom
interface like Google Maps to explore your data.

The script to initiate this is relatively simple and very similar to what
is used above:

.. code-block:: python

   import yt.extensions.image_panner
   from yt.mods import *
   
   pf = load("RD0005-mine/RedshiftOutput0005")
   proj = pf.h.proj(0, "Density", "Density")
   
   ip = pf.h.image_panner(proj, (512, 512), "Density")
   from yt.extensions.image_panner.pan_and_scan_widget \
       import VariableMeshPannerView
   vmpv = VariableMeshPannerView(panner = ip)
   vmpv.configure_traits()

Here's a screencast of me using it to explore a big dataset.

.. raw:: html

   <div id="v7570">
   <a href="http://www.macromedia.com/go/getflashplayer">Get the Flash Player</a> to see this video.
   </div>
   <script type="text/javascript" src="https://media.dreamhost.com/mp4/swfobject.js"></script>
   <script type="text/javascript">
   var swf = new SWFObject("https://media.dreamhost.com/mp4/player.swf", "mpl", "201", "211", 8);
   swf.addParam("allowfullscreen", "true");
   swf.addParam("allowscriptaccess", "always");
   swf.addVariable("file", "http://yt.enzotools.org/files/20100328_chaco_ui_conv.flv");
   swf.addVariable("image", "http://yt.enzotools.org/files/20100328_chaco_ui_conv.jpeg");
   swf.write("v7570");
   </script>

:mod:`yt.extensions.image_panner` Image Pan And Zoom Support
------------------------------------------------------------

.. module:: yt.extensions.image_panner

.. autoclass:: yt.extensions.image_panner.VariableMeshPanner
   :members:
   :inherited-members:

.. autoclass:: yt.extensions.image_panner.WindowedVariableMeshPanner
   :members:
   :inherited-members:

.. autoclass:: yt.extensions.image_panner.MultipleWindowVariableMeshPanner
   :members:
   :inherited-members:

.. autoclass:: yt.extensions.image_panner.ImageSaver
   :members:
   :inherited-members:

.. autoclass:: yt.extensions.image_panner.PanningCeleritasStreamer
   :members:
   :inherited-members:
