Dear yt-users,
Hi, I want to know about the function "extract_connected_sets". I want to
extract supernova-enriched bubbles (which I define as Z > -4 or something
similar) and analyze their sizes, masses, and other quantities. I'm working
with a cosmological simulation run with Enzo, and there are many SN-enriched
bubbles. What I need is the set of "bubble regions", probably as YTRegions.
For this purpose, "extract_connected_sets" seemed like the perfect function,
but I don't understand what the returned object is or how to analyze it,
because when I tried a projection plot of the objects, nothing appeared.
Could you tell me how to use "extract_connected_sets"? Or is there another
way to extract the SN-enriched regions?
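For what it's worth, here is a toy, pure-Python illustration of what connected-set extraction does (threshold a field, then label connected cells). In yt the corresponding call is roughly `ad.extract_connected_sets(field, num_levels, min_val, max_val)`, where each returned set is a data object you can analyze like any other region; the grid and threshold below are made up.

```python
import numpy as np

# Toy stand-in for extract_connected_sets: threshold a 2-D "metallicity"
# field and count the 4-connected regions ("bubbles") above the threshold.
grid = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)

def count_connected_sets(mask):
    """Flood-fill labeling; returns the number of connected regions."""
    mask = mask.copy()
    count = 0
    ni, nj = mask.shape
    for i in range(ni):
        for j in range(nj):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < ni and 0 <= b < nj and mask[a, b]:
                        mask[a, b] = False
                        stack += [(a + 1, b), (a - 1, b),
                                  (a, b + 1), (a, b - 1)]
    return count

print(count_connected_sets(grid))  # 2 bubbles
```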
Best,
YT

Dear yt-users,
I was making a plot with "radial_velocity" and noticed that it is always
positive, which is odd (there should be both inflow and outflow). The
source code is:
>>> print(ds.field_info["gas", "radial_velocity"].get_source())
def _radial(field, data):
    return data[ftype, "%s_spherical_radius" % basename]
I find that quite confusing. It also looks identical to the source shown
for ('gas', 'radial_magnetic_field') according to
http://yt-project.org/doc/reference/field_list.html
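In the meantime, the sign question is easy to check by hand: the radial velocity is just the projection of the (bulk-corrected) velocity onto the unit radius vector, so infall should come out negative. A quick NumPy check with made-up positions and velocities:

```python
import numpy as np

# Positions and velocities relative to the chosen center (made-up values).
pos = np.array([[1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0]])
vel = np.array([[-3.0, 0.0, 0.0],   # moving inward  -> v_r < 0
                [0.0, 4.0, 0.0]])   # moving outward -> v_r > 0

# v_r = v . r / |r|, computed row by row.
r = np.linalg.norm(pos, axis=1)
v_r = np.einsum("ij,ij->i", vel, pos) / r
print(v_r)  # [-3.  4.]
```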
Thank you!

Hi yt-users!
I am trying to add a field that is radius/rvir. This is an idealized
galaxy simulation with a static DM potential (no live halo), so I was
planning to put the virial radius in by hand. I am not totally sure what
is causing yt to choke: does it not like that I am putting in a plain
number? See below for details.
Thanks in advance for any help!
Best,
Stephanie
The lines of code:

def rrvir(field, data):
    return data['radius'].in_units('kpc')/(218., 'kpc')

i = 0
while i < len(loop):
    ds = yt.load("blahblah/DD" + loop[i] + "/sb_" + loop[i])
    ds.add_field(('gas', 'r_rvir'), function=rrvir)
    ....many unimportant lines....
I get this error message:

yt : [INFO ] 2018-06-14 09:02:48,783 Gathering a field list (this may take a moment.)
yt_slices_allouts_mli.py:19: UserWarning: Because 'sampling_type' not specified, yt will assume a cell 'sampling_type'
  ds.add_field(('gas','r_rvir'),function=rrvir)
Traceback (most recent call last):
  File "yt_slices_allouts_mli.py", line 19, in <module>
    ds.add_field(('gas','r_rvir'),function=rrvir)
  File "/home/stonnesen/yt-conda/src/yt-git/yt/data_objects/static_output.py", line 1221, in add_field
    deps, _ = self.field_info.check_derived_fields([name])
  File "/home/stonnesen/yt-conda/src/yt-git/yt/fields/field_info_container.py", line 366, in check_derived_fields
    fd = fi.get_dependencies(ds = self.ds)
  File "/home/stonnesen/yt-conda/src/yt-git/yt/fields/derived_field.py", line 210, in get_dependencies
    e[self.name]
  File "/home/stonnesen/yt-conda/src/yt-git/yt/fields/field_detector.py", line 108, in __missing__
    vv = finfo(self)
  File "/home/stonnesen/yt-conda/src/yt-git/yt/fields/derived_field.py", line 250, in __call__
    dd = self._function(self, data)
  File "yt_slices_allouts_mli.py", line 13, in rrvir
    return data['radius'].in_units('kpc')/(300.,'kpc')
  File "/home/stonnesen/yt-conda/src/yt-git/yt/units/yt_array.py", line 1372, in __array_ufunc__
    out=out, **kwargs)
TypeError: ufunc 'true_divide' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
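For reference, the TypeError above is exactly what happens when an array is divided by a plain (value, unit) tuple: the tuple is not a quantity. A minimal NumPy reproduction, with the probable yt-side fix sketched in the comments (dividing by something like data.ds.quan(218., 'kpc') or yt.YTQuantity(218., 'kpc') instead):

```python
import numpy as np

radii = np.array([100.0, 218.0])  # radii in kpc (made-up values)

# Dividing by a (value, unit) tuple is not a unit conversion: NumPy turns
# the tuple into a string array, and true_divide has no float/str loop.
try:
    radii / (218.0, "kpc")
except TypeError as err:
    print("TypeError:", err)

# The fix is to divide by an actual quantity, e.g. inside the field function:
#   rvir = data.ds.quan(218.0, "kpc")   # or yt.YTQuantity(218.0, "kpc")
#   return data["radius"].in_units("kpc") / rvir
print(radii / 218.0)  # dimensionless r/rvir values
```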
--
Dr. Stephanie Tonnesen
Associate Research Scientist
CCA, Flatiron Institute
New York, NY
stonnes(a)gmail.com

Hi yt-users,
I am setting up yt on a new machine, and it looks like I will have to use
the all-in-one install script. I would like to install yt-dev. To do that,
what do I need to set BRANCH= to?
Thanks!
Stephanie
--
Dr. Stephanie Tonnesen
Associate Research Scientist
CCA, Flatiron Institute
New York, NY
stonnes(a)gmail.com

Hi everyone,
I'm working on an analysis of the ~500 outputs I have from a simulation
run, so naturally I want to do it in parallel. The data live on Pleiades,
where a single node has 32 or 64 GB depending on the machine you pick.
The general code structure is to take a dataset and compute the fluxes of
multiple quantities binned in radius. Because the outputs are large, I'd
like to load one dataset per node but then use all 16 cores on the node
for the radial flux calculations.
To test my code, I'm using 2 smaller outputs of ~4.5 GB each, so they
should easily fit on one node, but I keep getting memory errors from
Pleiades. The code does run correctly on my laptop. I'm fairly certain I'm
not setting up the code correctly with the different num_proc keywords, so
it's trying to do the calculation on a single core instead of half the node.
I've posted a pared-down example of my code to pastebin that uses two
outputs from the enzo_cosmology_plus dataset. The code is named
"flux_test_parallel.py" and should run if put inside that dataset
directory. The parallel portion of the code is preceded by a line of #'s. (
http://paste.yt-project.org/show/21/)
Any advice for how to force the parallel structure to use the machine
memory correctly or general pointers for this kind of script would be
really appreciated!
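In case it helps others reading along: with yt.parallel_objects, the njobs keyword controls how many dataset-level jobs the MPI ranks are split into, and memory only has to hold one dataset per group. A toy sketch of that bookkeeping (a hypothetical reimplementation for illustration, not yt's actual code) for 32 ranks and 2 datasets:

```python
# Sketch of parallel_objects-style work division: njobs dataset-level jobs
# split n_ranks MPI ranks into njobs groups; each group loads only its own
# dataset (so one dataset per node, if ranks are placed node-by-node).
def rank_to_job(n_ranks, njobs):
    per_group = n_ranks // njobs
    return {rank: rank // per_group for rank in range(n_ranks)}

jobs = rank_to_job(32, 2)
# Ranks 0-15 work on dataset 0; ranks 16-31 work on dataset 1.
print(jobs[0], jobs[15], jobs[16], jobs[31])  # 0 0 1 1
```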
Thanks!
Lauren

Britton,
Thank you - I will experiment and report performance. I think requiring
the same number of bins in all profiles is a reasonable restriction,
even if it increases the output size significantly. If someone is
concerned with keeping the output size minimized, they can always write
halos within several mass ranges into separate outputs with numbers of
bins fixed within a single output but varied between mass ranges, and
that can be accomplished with the existing functionality straightforwardly.
n
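For anyone skimming the thread: the combining step in Britton's paste amounts to stacking same-length per-halo profile arrays into one array per field, which can then be written out in a single call (e.g. with yt.save_as_dataset). A sketch with made-up numbers:

```python
import numpy as np

# Made-up per-halo profiles; the fixed-bin-count restriction discussed
# above is what makes a plain stack possible.
per_halo = [np.array([1.0, 2.0, 3.0]),
            np.array([4.0, 5.0, 6.0])]

combined = np.stack(per_halo)  # shape: (n_halos, n_bins)
print(combined.shape)  # (2, 3)
# One could then hand {"some_profile_field": combined} plus the bin edges
# to yt.save_as_dataset in a single write (field name hypothetical).
```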
On 06/11/2018 11:31 AM, Britton Smith wrote:
> Hi Nick,
>
> Great question, I can see how this could become an issue when halo
> catalogs get large.
>
> I've sketched out a way to do this below that may be slightly hacky, but
> will get the job done.
> http://paste.yt-project.org/show/24/
>
> In the above, I create a new callback that attaches the profiles hanging
> off the halo object to the halo catalog itself, then combine them into
> single arrays at the end and save using yt.save_as_dataset. This will
> work fine if the profiles all have the same number of bins. If not, it
> would probably be better to write to an hdf5 file by hand with a single
> hdf5 group per profile. That said, HDF5 performance is known to degrade
> when the number of groups in a file gets large. In any case, hopefully,
> something like the above will work.
>
> Perhaps in the future, we can make some modifications to the actual code
> to do something like this if it seems to work well.
>
> Britton
>
> On Wed, Jun 6, 2018 at 4:02 AM Nick Gnedin <gnedin(a)fnal.gov> wrote:
>
>
> I would like to save halo profiles for a large simulation. I understand
> that I can use a simple callback like this:
>
> hc.add_callback("save_profiles", storage="virial_profiles",
> output_dir="profiles")
>
> The problem, however, is that the callback saves each profile as a
> separate h5 file, and for a large simulation the number of files may be
> prohibitive. Instead, I would like to save all of the profiles in a
> single fileset, ideally as attributes to halo objects.
>
> For example, inside my profiling callback I can do the following:
>
> def hprof_function(halo):
>     ...
>     pv = yt.create_profile(...)
>     setattr(halo, "profs", pv)
>
> However, the hc.create() function does not save all the attributes of
> halo objects, so halo.profs would not be saved.
>
> What would be a proper yt way of storing all the profiles as a single
> dataset?
>
> Thank you,
>
> n
>
> _______________________________________________
> yt-users mailing list -- yt-users(a)python.org
> To unsubscribe send an email to yt-users-leave(a)python.org
>


Hi Kyungjin,
When you create the HaloCatalog object, you can provide the following
keyword argument:
finder_kwargs={"link": 0.3}
"finder_kwargs" is a dictionary of keywords that will be passed to the halo
finder when it's run. For the FoF halo finder, the "link" keyword does what
you want.
Britton
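As a toy illustration of what the linking length controls (a 1-D sketch for intuition, not the actual FoF implementation): points separated by no more than the linking length join the same group, so raising "link" from 0.2 to 0.3 merges more structure.

```python
def fof_1d(points, link):
    """Toy 1-D friends-of-friends: chain points within `link` of a neighbor."""
    xs = sorted(points)
    groups, current = [], [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if b - a <= link:
            current.append(b)   # close enough: same group
        else:
            groups.append(current)
            current = [b]       # gap too wide: start a new group
    groups.append(current)
    return groups

pts = [0.0, 0.25, 1.0]
print(len(fof_1d(pts, 0.2)))  # 3 (no pair within 0.2 of each other)
print(len(fof_1d(pts, 0.3)))  # 2 (0.0 and 0.25 now link up)
```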
On Sat, Jun 9, 2018 at 8:33 PM, Kyungjin Ahn <kjahn(a)chosun.ac.kr> wrote:
> Hi,
>
> When doing halo identification with the FoF scheme, how does one specify a
> linking length that is not equal to 0.2? I can run the FoF analysis with
> default options, but cannot find how to change the linking length to, say, 0.3.
> There is a good reason I need to play with this number so I'd appreciate
> any help.
>
> Thanks!
> Kyungjin
>
> --
> Kyungjin Ahn
> Associate Professor
> Department of Earth Sciences, Astronomy Division
> Chosun University
> Tel: 062-230-7340
> +82-62-230-7340 (from abroad)
> homepage: www.chosun.ac.kr/kjahn
>
--
Dr. Britton Smith
Assistant Research Scientist
San Diego Supercomputer Center
University of California, San Diego
T: +1 858 822 0936
F: +1 858 534 5117