Hello!
I've recently discovered yt, and I found the rendering examples really
encouraging. I'm now trying to visualize a large 3D dataset via volume
rendering. I have come up with the following script, which is intended to
plot part of the dataset:
---
import yt
import h5py

# Load a 256^3 sub-cube of the intensity data.
f = h5py.File('ims-p.hdf5', 'r')
intens = f['intensity'][:256, :256, :256]
print(intens.shape)

ds = yt.load_uniform_grid({'Intensity': intens}, intens.shape)
print(ds)

sc = yt.create_scene(ds, 'Intensity', lens_type='perspective')
sc[0].set_log(True)
sc[0].tfh.plot('transfer_function.png', profile_field='Intensity')

sc.camera.resolution = 1024
sc.annotate_axes(alpha=0.1)
sc.annotate_domain(ds, color=[1, 1, 1, 0.1])
sc.save_annotated('vr_grids.png', sigma_clip=4)
---
I get a really fascinating rendering (attached), and now I want to tweak
it. I would be very happy if you could help me with the following
questions:
1) What is the best way to configure the camera viewpoint so that the
complete data cube fits into the view? And how can I quickly render just
the domain axes, without performing the costly volume rendering, so that I
can find the "optimal" viewpoint experimentally?
2) What is the best way to specify the axis scales, so that the rendered
volume appears as a cuboid rather than a cube? I want the resulting cuboid
to have 3:2:2 proportions. Also, how do I specify the size of the
resulting image, e.g. to get an image of 1600x1200 pixels?
3) In my dataset the grid is not in fact uniform: the X coordinates are
given by a (non-decreasing) array rettime, and the Y coordinates by a
(non-decreasing) array mz. What is the best way to pass this information
(to load_uniform_grid)?
4) Is it possible to use an inverted transfer-function model, i.e. to
render on a white background?
5) My complete dataset is rather large and doesn't readily fit into RAM.
However, it seems that the ray-tracing algorithm shouldn't require all of
the data to be available simultaneously. Is it possible to feed the
dataset in chunks?
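For questions 2 and 3, the direction I have been experimenting with is
stretching the domain via a bbox passed to load_uniform_grid. This is just
a sketch of my reasoning (the 3:2:2 numbers are my target aspect, and
whether bbox is the intended mechanism here is exactly what I am unsure
about):

```python
import numpy as np

# Target cuboid proportions for the rendered volume (question 2).
aspect = np.array([3.0, 2.0, 2.0])

# One [lower, upper] pair per axis; stretching x by 3/2 relative to
# y and z should make the domain a 3:2:2 cuboid instead of a cube.
bbox = np.array([[0.0, a] for a in aspect])
print(bbox)

# My guess (untested) is that this would then be passed as:
#   ds = yt.load_uniform_grid({'Intensity': intens}, intens.shape, bbox=bbox)
# and that the image size could be set per axis, e.g.:
#   sc.camera.resolution = (1600, 1200)
```

But I may be misreading the docs, so corrections welcome.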
Your help is greatly appreciated!
Best wishes,
Andrey Paramonov

---------- Forwarded message ----------
From: Tomer Nussbaum <tomer.nussbaum(a)mail.huji.ac.il>
Date: Thu, Nov 23, 2017 at 6:08 PM
Subject: Cluster best configurations for YT
To: yt-users(a)lists.spacepope.org
Hi,
We have updated our cluster lately, but our yt platform runs very slowly
(loading one snapshot takes an hour when a couple of people run jobs
together...). I wanted to ask whether you can help solve this issue from
your experience. We use InfiniBand, and we see that the main problem is a
large number of random-access read requests to our hard drives, so the
problem may lie in fine-tuning yt on the nodes, or in fine-tuning the file
system.
This brings up the following issues (maybe more; if you have other ideas,
I would be thankful to know):
1. *IO reads* - Is there a way to optimize the IO read requests to the
file server?
2. *yt configuration* - Are there specific parameters in the yt
configuration for this situation?
3. *File server behavior* - Can we tune how the file server handles reads
(we use hard drives, so sequential reads are preferred)?
4. *Metadata cache* - Would using a metadata cache solve the issue?
5. *zlib usage* - How can I check whether this feature is activated, and
how important is it for yt?
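For item 2, here is how I have been inspecting our current settings so far
(a sketch; it assumes the yt 3.x config file lives at ~/.yt/config and is
plain INI format, which may not match your installation):

```python
import configparser
import os

def read_yt_config(path=os.path.expanduser("~/.yt/config")):
    """Return the [yt] section of a yt config file as a dict.

    Returns an empty dict if the file or section is missing, so it is
    safe to call on nodes that have never written a config.
    """
    cp = configparser.ConfigParser()
    cp.read(path)  # configparser silently ignores a missing file
    return dict(cp["yt"]) if cp.has_section("yt") else {}

if __name__ == "__main__":
    print(read_yt_config())
```

That at least lets us compare what each node is actually configured with
before we start tuning anything.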
I would really appreciate any help with this,
Thanx,
Tomer

On behalf of the Trident development team, I am happy to announce a new
stable release of Trident, a python-based code suite for analyzing
astrophysical hydro simulations to investigate the CGM/IGM and produce
synthetic absorption spectra. This release is accompanied by a shift in
development from mercurial/bitbucket to git/github, continuous integration
and testing with travis, and a number of bugfixes and additional features.
Our changelog (below) has a full description of the changes in this
version.
Full installation instructions are available in our documentation, as are
instructions for a number of common use cases in working with your
datasets: e.g., adding arbitrary ion fields, making column density maps,
calculating column densities of individual sightlines, generating
absorption line spectra.
There has also been recent development in the yt community to treat
particle-based datasets more natively rather than by depositing to a grid
and then processing. This effort, nicknamed "*demeshening*", is being led
by Nathan Goldbaum, Meagan Lang, and Matt Turk, with Trident contributions
by Bili Dong and myself. Not only does it produce more accurate results,
but it is also faster and uses less memory, since particle fields don't
need to be deposited onto a global grid. This code is now in beta, and users are
encouraged to use it and give bug reports. There are full instructions for
installation of it available in the Trident docs, and we expect the next
stable release of Trident will incorporate it.
Please don't hesitate to contact the mailing list with any questions or
problems you may encounter with the code, and I encourage you to share this
announcement with other scientists who may be interested in using Trident.
*Trident Page:* http://trident-project.org
*Trident Code:* https://github.com/trident-project/trident/
*Trident Docs:* https://trident.readthedocs.io/en/latest/
*Trident Paper*: http://adsabs.harvard.edu/abs/2017ApJ...847...59H
*Changelog to Trident 1.1:*
https://trident.readthedocs.io/en/latest/changelog.html
*Installation Docs for beta "demeshened" yt and Trident:*
https://nbviewer.jupyter.org/url/trident-project.org/notebooks/trident_de...
--The Trident Developer Team:
Cameron Hummels
Britton Smith
Devin Silvia
Matthew Turk
Nathan Goldbaum
Kacper Kowalik
Lauren Corlies
Bili Dong
Hilary Egan
Clayton Strawn
--
Cameron Hummels
NSF Postdoctoral Fellow
Department of Astronomy
California Institute of Technology
http://chummels.org

Hi all,
I'm trying to add a new quantity to a halo catalog, following the docs, like so:
def _total_mass(halo):
    gasmass, dmmass = halo.data_object.quantities.total_mass()
    return gasmass + dmmass

add_quantity('total_mass', _total_mass)
hc.add_quantity('total_mass')
However, it seems that add_quantity is not defined. How do I go about doing this?
Cheers,
DK