Hi all,
Registration is now open for the Learn yt Workshop at the University of
Illinois Urbana-Champaign, October 10-12, 2016. This workshop will
introduce attendees to the yt project (yt-project.org), a python toolkit
for the analysis and visualization of volumetric data.
We have created a website for the workshop with more information:
http://yt-project.org/workshop2016/
In addition, if you would like to register for the workshop, please fill out
this online form:
https://goo.gl/forms/c6oIzNQywU1YWgOe2
The workshop will cover basic usage of yt, including the yt data model,
yt's field system, low-level data inspection, basic visualization
workflows, and data analysis and reduction tasks. We will also cover more
advanced usages like generating simulated observations, halo finding and
halo analysis, volume rendering, advanced 3D visualizations, and advanced
data analysis and reduction tasks. Finally, we will cover how to modify and
extend yt, as well as the development and contribution process.
In addition, there will be time set aside for exploring data you bring to
the workshop, along with opportunities to work directly with yt developers
to load and explore your data.
The workshop will take place at the National Center for Supercomputing
Applications building on the north end of the UIUC campus. The NCSA
building is about a block away from the conference hotel and is next door
to a parking structure that offers metered all-day parking. There are a
number of food trucks nearby, a university-run cafeteria about 2 blocks
away, and a university business district with many good lunch restaurants
about half a mile away.
We are planning to offer funding for hotel and travel for those requesting
support. If you request funding, you will be notified of available funds
by September 15. Travel awards will be made in the form of arranged
lodging and airfare, with reservations being made by the conference
organizers.
We hope to see you there.
On behalf of the organizing committee,
Matt Turk
Nathan Goldbaum
Jill Naiman
John ZuHone
Kandace Turner

New issue 1279: Non-intuitive, undocumented methods for 3D volume rendering
https://bitbucket.org/yt_analysis/yt/issues/1279/non-intuitive-undocument...
scott_feister:
The Scene's 3D volume rendering functionality, and the camera's behavior in 'perspective' lens mode in particular, is not well documented for users.
## Two bugs: ##
1. In perspective lens mode, the typical use is moving the camera with cam.set_camera_position([x,y,z]) and pointing it with cam.set_camera_focus([x,y,z]). Calling these functions automatically adjusts the lens viewpoint appropriately. However, functions like cam.roll(), cam.rotate(), etc. currently change the viewpoint of the lens independently of the camera position. How confusing! Also, cam.zoom() doesn't "zoom" the image as you'd expect.
2. When following the Volume Rendering example, it seems like modifying the colormap via "source.tfh.set_bounds((3e-35, 5e-27))" (where source is a reference to the scene's main source) should update the scene rendering, but it doesn't. On all but the first render, many more steps are involved (see below!).
## Action items: ##
1. (Easy) The volume rendering examples posted prominently on the yt website should include examples of how to update a scene, including changing the colormap, camera angle, etc. for a perspective lens. Nathan has already written up such an example in response to my question on the yt-users list. (http://paste.yt-project.org/show/6823/)
2. (Harder) scene.render() and scene.save() should know to update the transfer function before rendering. Why should it take four steps? (See also the sketch after this list for one way this might work internally.)
```
#!python
# This method works only for the first render, and is given in the Volume Renderer example
source.tfh.set_bounds((3e-35, 5e-27))
...
sc.render()
```
```
#!python
# This method is what actually works after the first render
source.tfh.set_bounds((3e-35, 5e-27))
source.tfh.build_transfer_function()
source.tfh.setup_default()
source.transfer_function = source.tfh.tf
...
sc.render()
```
Intuitively, it should take one or two steps:
```
#!python
# Something like this is what I'm suggesting
source.tfh.set_bounds((3e-35, 5e-27))
source.update() # for example, does all three steps above (and possibly more)
...
sc.render()
```
3. (Harder) cam.roll(), cam.rotate(), etc. should, I believe, change the camera position rather than the viewpoint when the camera is in perspective mode. cam.zoom() should change the position as well.
4. (Harder) The Camera and Lens documentation could be expanded to reflect the complexity of the problem and to clearly address which settings are relevant under which lenses (e.g., the fact that set_width() doesn't do what you might think in perspective mode). The main user documentation page on Cameras and Lenses (http://yt-project.org/doc/visualizing/volume_rendering.html#camera-lenses) is where I'd expect to find a clear explanation of the various types of lenses, with good usage examples of how to move the camera around, aim it, zoom in, etc. in each case.
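As a sketch of how item 2 might work internally (hypothetical on my part, not existing yt code; only set_bounds(), build_transfer_function(), setup_default(), and tf are the real helper methods/attributes used above):
```
#!python
# Hypothetical sketch only -- not yt's actual API. Any change to the
# helper marks it stale, and the scene rebuilds lazily at render time.
class DirtyTFH:
    def __init__(self, tfh):
        self.tfh = tfh
        self.stale = True

    def set_bounds(self, bounds):
        self.tfh.set_bounds(bounds)
        self.stale = True

    def apply_if_stale(self, source):
        # scene.render() would call this before casting any rays.
        if self.stale:
            self.tfh.build_transfer_function()
            self.tfh.setup_default()
            source.transfer_function = self.tfh.tf
            self.stale = False
```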
The rest of this bug report contains the contextual description of the problem, from the emails on yt-users that led to this bug report:
While working through examples, I hit a roadblock, and I've had some difficulty finding good documentation on what the "Scene.save()" function actually does; specifically, why it does different things on the first and second call. The first example in the user tutorial page for 3D volume rendering (http://yt-project.org/doc/visualizing/volume_rendering.html) looks something like this:
```
#!python
...
sc = yt.create_scene(ds, lens_type='perspective')
...
source = sc[0]
source.tfh.set_bounds((3e-31, 5e-27))
...
sc.save('rendering.png', sigma_clip=6.0)
```
And, voila, volume rendering saved to png.
However, if I naively continue the script to re-render with new settings:
```
#!python
source.tfh.set_bounds((3e-35, 5e-27))
sc.camera.zoom(2.0)
sc.save('rendering2.png', sigma_clip=6.0)
```
I find that none of my new settings are reflected in "rendering2.png" -- it's just a duplicate of "rendering.png"! But if I start again from scratch with a new scene, the settings take hold. This leaves me (a new user) scratching my head.
So I posed the question to yt-users: Once you've created and saved a scene once, how do you change scene settings like colormap and camera angle?
Nathan Goldbaum kindly responded with the following:
Hi Scott,
So this is really two issues.
The first is that the camera.zoom() function doesn't really zoom in on the image; instead, it decreases the "width" of the volume rendering region by the zoom factor. This makes a lot of sense for plane-parallel volume renderings (the default), but not so much for perspective lenses, as you saw. For a perspective lens you should instead reposition the camera to be closer to the focus to get the same effect.
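(A minimal sketch of that repositioning, added here for illustration rather than quoted from the original email; it assumes sc is the scene from the example and uses the Camera's position, focus, and set_position():)
```
#!python
# Emulate a 2x "zoom" for a perspective lens by moving the camera
# halfway toward the focus instead of calling cam.zoom().
cam = sc.camera
cam.set_position(cam.focus + (cam.position - cam.focus) / 2.0)
```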
Next, you also saw that manipulating the TransferFunctionHelper object (source.tfh) after you've already done a volume rendering doesn't update the transfer function. That's because the TransferFunctionHelper object is only used to generate the transfer function (source.transfer_function) if it isn't set yet; if it's already set, it gets reused. So in your example, the first time you called save(), the VolumeSource saw that no one had manually created a transfer function and used the TransferFunctionHelper object to build one. Then, when you asked for the second rendering, it just reused the same one, because right now the VolumeSource doesn't track whether the TransferFunctionHelper has been updated. To do what you mean, after manipulating source.tfh you need to manually set the transfer function to be the one generated by the TransferFunctionHelper:
```
#!python
source.tfh.build_transfer_function()
source.tfh.setup_default()
source.transfer_function = source.tfh.tf
```
Using the script from the docs as an example, here's how to make the second image come out as you would expect for the perspective lens:
http://paste.yt-project.org/show/6823/
Which makes these two images:
http://i.imgur.com/gwUTWz1.png (rendering.png)
http://i.imgur.com/M1lSz0N.png (rendering2.png)
I think we could probably do a better job of detecting that the TransferFunctionHelper has been manipulated and avoid this confusion; if you'd like, I invite you to open an issue about this on our issue tracker. One might also argue that the zoom function should adjust the camera position for perspective lenses.
Sorry the volume renderer isn't totally intuitive for this use case. I did some work before the yt 3.3.1 release to improve things, but it's still definitely not perfect. I think there's a lot of power there but it also really needs some love from someone who is willing to think about corner cases and interactive workflows.
-Nathan

Hi all,
I came up with an idea to bring a particle-based approach to modeling Ray and
AbsorptionSpectrum. But before I move on to implement it in yt, I'd like to
let you know what it is and get feedback on it.
The situation is that I have an SPH simulation, and I want to model the Ray
(in order to get the AbsorptionSpectrum) as accurately as possible.
Currently, when we create a Ray object, it's always created from the
deposited grid. Although that is a good approximation to the true particle
representation, it is still not the most accurate approach. I'd like to be
able to do it the particle way (as in SPLASH). In the long term, I know
that Matt and Meagan are working on a new system for particle datasets. The
work I'm proposing could be thought of as lying on top of that: the method
could be made faster by utilizing Matt and Meagan's work, but the main
infrastructure would stay the same.
To introduce what I plan to do, let's have a look at the first figure here
<http://yt-project.org/docs/dev/analyzing/analysis_modules/light_ray_gener...>.
The core concept of a Ray object is the *path length*, *`dl`*. Basically,
if we combine the normal fields with the `dl` field, we get a Ray object.
Now imagine instead of a ray intersecting a lot of grid cells, we have a
ray intersecting a lot of SPH particles. How do we define *`dl`* then? We
could define it as the *integral of the SPH kernel along the intersection*!
And that's the whole trick. From this we could define a particle Ray that
looks just the 'same' as the original grid Ray. Then any analysis built on
top of the Ray object, AbsorptionSpectrum for example, doesn't need to
change much; it will simply work differently when provided with a different
kind of Ray object.
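To make that definition concrete, here is a minimal sketch (my own illustration, not the proposed implementation) of the per-particle `dl`, assuming the standard cubic spline kernel with support radius equal to the smoothing length `h` and an impact parameter `b` from the particle to the ray:
```
#!python
import numpy as np

def cubic_spline_kernel(q, h):
    # Standard M4 cubic spline in 3D, normalized so the support radius is h.
    sigma = 8.0 / (np.pi * h**3)
    w = np.where(q <= 0.5,
                 1.0 - 6.0 * q**2 + 6.0 * q**3,
                 2.0 * (1.0 - np.minimum(q, 1.0))**3)
    return sigma * np.where(q < 1.0, w, 0.0)

def kernel_line_integral(b, h, npts=128):
    # `dl` for one particle: integrate W along the chord the ray cuts
    # through the smoothing sphere; zero if the ray misses entirely.
    if b >= h:
        return 0.0
    tmax = np.sqrt(h * h - b * b)
    t = np.linspace(-tmax, tmax, npts)
    q = np.sqrt(b * b + t * t) / h
    return np.trapz(cubic_spline_kernel(q, h), t)
```
Note that this `dl` carries units of one over length squared, so multiplying it by a particle's mass (or a mass-weighted quantity) yields a column contribution, which is presumably what AbsorptionSpectrum would consume.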
The main difficulty in the implementation is the construction of the
particle `dl` field. Currently I'm doing it by brute force: computing `dl`
for all the particles and masking out those with zero values. Matt and
Meagan's work will accelerate this by providing neighbor information, so
the computation could then be restricted to a small set of particles. The
brute-force method is not unbearably slow, though, and that acceleration
could be saved for future work.
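For reference, here is a rough sketch of that brute-force step (again my illustration, reusing kernel_line_integral from the sketch above; end-of-segment effects, where a particle's closest approach falls beyond the ray endpoints, are ignored):
```
#!python
def particle_dl(pos, hsml, ray_start, ray_end):
    # pos: (N, 3) particle positions; hsml: (N,) smoothing lengths,
    # all in the same length units as the ray endpoints.
    ray_vec = ray_end - ray_start
    rhat = ray_vec / np.linalg.norm(ray_vec)
    rel = pos - ray_start
    t = rel @ rhat                                       # closest approach along the ray
    b = np.linalg.norm(rel - t[:, None] * rhat, axis=1)  # impact parameters
    dl = np.array([kernel_line_integral(bi, hi)
                   for bi, hi in zip(b, hsml)])
    mask = dl > 0.0  # keep only the particles the ray actually intersects
    return mask, dl[mask]
```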
I have an external implementation of the particle approach, and have used
it in my current research. I have compared results from the particle
method with those from Trident, and they agree statistically, as expected
(thanks, Cameron, for the help). Now that it looks mature, I'd like to
implement it in yt.
If anyone has any comments, opinions, or suggestions, I'd like to hear them.
Thanks for reading,
Bili

Hi all,
Today I'm going to spend some time backporting bugfixes to the stable
branch so we can cut yt 3.3.2. I expect it will be merged in next week at
the PR review hangout (which I will send out an invite for this weekend).
Please let me know if there are any bugfixes you'd particularly like this
release to include or if you'd like me to wait on something you have in the
works.
-Nathan

Hi all,
I’m working on some updates to my yt-based pyXSIM package, and I’m stuck, so I thought I would email this list for some guidance.
I’m thinking of a situation where we have a data object (a sphere, a box, whatever) that straddles a periodic boundary. I want to convert the coordinates such that they are translated with respect to some arbitrary origin (say, the center of a sphere, though in theory it could be anywhere) but are also continuous, i.e., they do not wrap at the boundary.
I’ve looked at the functions get_radius and get_periodic_rvec in yt/fields/field_functions.py, and based on them I have come up with the code below, but it doesn’t quite work for arbitrary values of the “ctr” argument (the origin). I think that’s because the functions I mentioned only need to compute absolute-value differences in order to compute a radius.
The x, y, and z arguments to the function below are the input coordinate arrays. Could anyone who is more familiar with this sort of thing point out what I should be doing?
Best,
John Z
import numpy as np

def get_periodic_coords(ds, ctr, x, y, z):
    # Translate the input coordinates relative to ctr, in kpc.
    coords = ds.arr(np.zeros((3, x.size)), "kpc")
    coords[0, :] = (x - ctr[0]).to("kpc")
    coords[1, :] = (y - ctr[1]).to("kpc")
    coords[2, :] = (z - ctr[2]).to("kpc")
    if sum(ds.periodicity) == 0:
        return coords
    dw = ds.domain_width.to("kpc")
    for i in range(3):
        if ds.periodicity[i]:
            c = coords[i, :]
            cdw = c - dw[i]
            # Keep whichever of c or c - dw[i] is closer to zero.
            mins = np.argmin([np.abs(c), np.abs(cdw)], axis=0)
            ii = np.where(mins == 1)[0]
            coords[i, ii] = coords[i, ii] - dw[i]
    return coords
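For what it's worth, one possible culprit (my guess, untested): the loop only ever considers shifting a coordinate by -dw[i], never by +dw[i], so it can pick the wrong periodic image when ctr sits near the upper domain edge and a point wraps past the lower one. A minimum-image variant of the loop that considers both shifts might look like this:
```
#!python
# Sketch of a possible fix, not a verified solution: for each axis,
# consider all three images (c, c - dw, c + dw) and keep the candidate
# closest to zero (the minimum-image convention).
for i in range(3):
    if ds.periodicity[i]:
        c = coords[i, :].d                  # raw values, already in kpc
        candidates = np.stack([c, c - dw[i].d, c + dw[i].d])
        best = np.argmin(np.abs(candidates), axis=0)
        coords[i, :] = candidates[best, np.arange(c.size)]
```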