Hi all,
Currently, to verify file integrity, the install script downloads a
sidecar md5 file from yt-project.org. These files aren't versioned, and
MD5 itself is largely deprecated for integrity checking.
I've created a new version of the install script that uses sha512
hashes stored *in the install script itself* to verify file integrity.
This way, any time the files change we will be notified. I'd like it
if a few people could test this -- I have -- on pristine systems. It
changes both how the files are downloaded and how they are verified,
now using sha512sum, which should be available on most systems
(according to Kacper :).
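For the curious, the check amounts to something like this sketch (the
file name and digest below are placeholders, not the real entries in
the script):

```python
import hashlib

def sha512_of(path, chunk_size=2**20):
    """Stream a file through SHA-512 and return its hex digest."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical hard-coded table, standing in for the digests the
# install script itself carries:
EXPECTED = {
    "somefile.tar.gz": "0123abcd...",  # placeholder digest
}

def verify(path, name):
    """True if the downloaded file matches its recorded digest."""
    return sha512_of(path) == EXPECTED[name]
```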
You can do this by:
wget https://bitbucket.org/MatthewTurk/yt/raw/367ea3bfff2e/doc/install_script.sh
and then running the script, perhaps supplying an alternate directory
rather than the default.
Thanks for any feedback.
-Matt

Hi all,
The yt workshop last week in Chicago (
http://yt-project.org/workshop2012/ ) was an enormous success. On
behalf of the organizing and technical committees, I'd like to
specifically thank the FLASH Center, particularly Don Lamb, Mila
Kuntu, and Carrie Eder; the venue was outstanding and their
hospitality touching. Additionally, we're very grateful to
the Adler Planetarium's Doug Roberts and Mark SubbaRao for hosting us
on Wednesday evening -- seeing the planetarium show as well as volume
renderings made by yt users up on the dome was so much fun. The yt
workshop was supported by NSF Grant 1214147. Thanks to everyone who
attended -- your energy and excitement helped make it a success.
Thanks also to the organizing and technical committees: Britton
Smith, John ZuHone, Brian O'Shea, Jeff Oishi, Stephen Skory, Sam
Skillman, and Cameron Hummels. All talks have been recorded, and you
can clone a unified repository of talk slides and worked examples:
hg clone https://bitbucket.org/yt_analysis/workshop2012/
A few photos have been put up online, too:
http://goo.gl/g02uP
As I am able to edit and upload talks, they'll appear on the yt
youtube channel as well as on the yt homepage:
http://www.youtube.com/ytanalysis
Thanks again, and wow, what a week!
Matt

Hi all.
I'm a bit confused about the unit mechanics in yt right now. I noticed
there's a convert_function for fields, but there are also the units,
time_units, and conversion_factors attributes on StaticOutput.
For the Nyx frontend, I have added a convert_function for each field
to convert the cosmological units into CGS. This works in my fork now:
everything is in CGS and plots fine. I know I also need to update the
_set_units method of NyxStaticOutput, but I'm not sure what to do with
those attributes or how they are used later.
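To make the question concrete, here is a toy illustration of the two
layers involved (these are deliberately NOT yt's real classes, and all
factor values are invented):

```python
# Toy sketch: per-field convert_function vs. the dataset-level
# attributes that a frontend's _set_units would populate.

class ToyStaticOutput:
    def __init__(self):
        # what a _set_units method would fill in (made-up values):
        self.units = {"cm": 3.0857e24}                  # 1 code length = 1 Mpc, say
        self.time_units = {"s": 3.156e16}               # 1 code time = 1 Gyr, say
        self.conversion_factors = {"Density": 2.0e-30}  # code density -> g/cm^3

def convert_density(raw_code_units, ds):
    # the role a field's convert_function plays: multiply raw
    # code-unit data by the dataset's factor so consumers see CGS
    return raw_code_units * ds.conversion_factors["Density"]

ds = ToyStaticOutput()
density_cgs = convert_density(1.0, ds)  # 2e-30 g/cm^3
```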
Best,
Casey

For a long time we have been running into this very problem. I think
it would be appropriate to utilize this code on Kraken, Ranger, etc.
My implementation suggestion would be to put this in the new
startup_tasks, where we determine parallelism. As noted in the
docstring it will have to be modified to use mpi4py.
Britton or Stephen, this sounds like it's right up your alley, as
you run on Kraken the most often. Would one of you be willing to test
it out? My feeling is that we could simply suggest that on these
systems we use this idiom at the top of scripts (where we assume we
distribute this script with yt):
from yt.mpi_importer import mpi_import
with mpi_import():
    from yt.mods import *
I think it should recursively watch all the imports. An alternate
option would be to insert some of its logic into yt.mods, or even have
a second mods file that handles it seamlessly, like:
from yt.pmods import *
Ideas?
-Matt
---------- Forwarded message ----------
From: Dag Sverre Seljebotn <d.s.seljebotn(a)astro.uio.no>
Date: Fri, Jan 13, 2012 at 3:51 AM
Subject: [mpi4py] Fwd: [Numpy-discussion] Improving Python+MPI import
performance
To: mpi4py(a)googlegroups.com
Cc: Chris Kees <cekees(a)gmail.com>
This looks very interesting,
Dag
-------- Original Message --------
Subject: [Numpy-discussion] Improving Python+MPI import performance
Date: Thu, 12 Jan 2012 17:13:41 -0800
From: Asher Langton <langton2(a)llnl.gov>
Reply-To: Discussion of Numerical Python <numpy-discussion(a)scipy.org>
To: numpy-discussion(a)scipy.org
Hi all,
(I originally posted this to the BayPIGgies list, where Fernando Perez
suggested I send it to the NumPy list as well. My apologies if you're
receiving this email twice.)
I work on a Python/C++ scientific code that runs as a number of
independent Python processes communicating via MPI. Unfortunately, as
some of you may have experienced, module importing does not scale well
in Python/MPI applications. For 32k processes on BlueGene/P, importing
100 trivial C-extension modules takes 5.5 hours, compared to 35
minutes for all other interpreter loading and initialization. We
developed a simple pure-Python module (based on knee.py, a
hierarchical import example) that cuts the import time from 5.5 hours
to 6 minutes.
The code is available here:
https://github.com/langton/MPI_Import
Usage, implementation details, and limitations are described in a
docstring at the beginning of the file (just after the mandatory
legalese).
I've talked with a few people who've faced the same problem and heard
about a variety of approaches, which range from putting all necessary
files in one directory to hacking the interpreter itself so it
distributes the module-loading over MPI. Last summer, I had a student
intern try a few of these approaches. It turned out that the problem
wasn't so much the simultaneous module loads, but rather the huge
number of failed open() calls (ENOENT) as the interpreter tries to
find the module files. In the MPI_Import module, we have rank 0
perform the module lookups and then broadcast the locations to the
rest of the processes. For our real-world scientific applications
written in Python and C++, this has meant that we can start a problem
and actually make computational progress before the batch allocation
ends.
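For concreteness, here is a minimal sketch of the rank-0-lookup-and-
broadcast idea using Python's meta-path import machinery. This is my
own illustration, not the code in MPI_Import: the mpi4py communicator
is optional so the sketch also runs serially, and namespace packages
and C extensions are glossed over.

```python
from importlib.machinery import PathFinder
from importlib.util import spec_from_file_location

class BroadcastFinder:
    """sys.meta_path finder: rank 0 locates the module file and the
    other ranks reuse the broadcast location, instead of every rank
    issuing its own storm of failed open()/stat() calls."""

    def __init__(self, comm=None):
        # comm would be an mpi4py communicator (e.g. MPI.COMM_WORLD);
        # None means serial operation, which behaves like rank 0.
        self.comm = comm
        self.rank = 0 if comm is None else comm.Get_rank()

    def _bcast(self, payload):
        return payload if self.comm is None else self.comm.bcast(payload, root=0)

    def find_spec(self, name, path=None, target=None):
        if self.rank == 0:
            # only rank 0 walks the filesystem
            spec = PathFinder.find_spec(name, path)
            origin = spec.origin if spec and spec.origin else None
            self._bcast(origin)
            return spec
        origin = self._bcast(None)
        if origin is None:
            return None
        # packages need their search path rebuilt from __init__.py
        subdirs = [origin.rsplit("/", 1)[0]] if origin.endswith("__init__.py") else None
        return spec_from_file_location(name, origin,
                                       submodule_search_locations=subdirs)

# would be installed ahead of the normal path-based finders:
# sys.meta_path.insert(0, BroadcastFinder(MPI.COMM_WORLD))
```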
If you try out the code, I'd appreciate any feedback you have:
performance results, bugfixes/feature-additions, or alternate
approaches to solving this problem. Thanks!
-Asher

Hi--
Is there a way to get a profile to accumulate from [bin: high bin],
instead of [low bin: bin]? I poked through the source and I only see
"accumulation = True, False", but not "±1, 0".
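For reference, the reverse direction described above can be produced
by hand from the unaccumulated profile data (a numpy sketch; `counts`
is a stand-in for whatever binned field was profiled):

```python
import numpy as np

counts = np.array([4.0, 3.0, 2.0, 1.0])    # stand-in binned profile data

# forward accumulation, [low bin : bin] -- what accumulation=True gives
forward = np.cumsum(counts)                # [4, 7, 9, 10]

# reverse accumulation, [bin : high bin] -- the "-1" direction asked about
reverse = np.cumsum(counts[::-1])[::-1]    # [10, 6, 3, 1]
```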
Thanks,
d.
--
Sent from my computer.