Hi all,
I promised an email about this last Friday, and I'm now finally
getting around to it. Hooray?
A common problem is how on-disk fields are represented, how the units
and whatnot get passed into the frontends, and then how to change
this. As an example, it can be really quite tricky to tell yt that,
no, hey, turns out, *this* version of the data has *different* units
than the *other* version, or something. In almost all cases we
special-case these in the frontends -- which requires touching the yt
source code, rather than something that can be passed in.
I'd like to propose a handful of things, which may or may not be reasonable.
* fields.py currently uses tuples of values hanging off a class
definition, which are rather undocumented and also oddly mixed in
mutability and immutability. I'm still trying to think through the
best way to improve this. But the simplest problem that *needs* to
be fixed is that this scheme is fragile to changes in the arguments:
we can't add new attributes without changing everything, and the same
goes for removing arguments.
* to_cgs should instead be to_natural in almost all cases. Setting
the "natural" units of a dataset should be possible. This would
enable doing things like code-as-natural, mks-as-natural, etc. I
*think* it is straightforward, having looked at the code, but it's
also slightly out of my depth in the unit system.
* We should be able to standardize unit *override*. We should, at
the very least, be able to pass in a dict that maps field tuples to
on-disk units, which would override whatever is in fields.py.
* Having a method of dynamically defining code_something units would
be very useful; this has shown up in odd ways with things like
particle units in octree codes where the code_length is differently
defined for different fields. Having a dynamic code_* definition
would ease this, rather than mandating that there are N code_ fields,
of names blah blah blah.
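To make the override idea concrete, here's a rough sketch of the
merging behavior I have in mind (everything here -- resolve_units, the
default dict -- is hypothetical, not an existing yt API; the point is
just that a user-supplied dict of field tuples to unit strings wins
over whatever fields.py says):

```python
# Hypothetical sketch of the proposed unit-override mechanism.
# A user-supplied dict maps field tuples to on-disk unit strings
# and takes precedence over the defaults baked into fields.py.

DEFAULT_UNITS = {
    ("gas", "density"): "g/cm**3",
    ("gas", "temperature"): "K",
}

def resolve_units(units_override=None):
    """Merge user overrides over the frontend defaults."""
    resolved = dict(DEFAULT_UNITS)
    if units_override:
        resolved.update(units_override)
    return resolved

# e.g. a dataset whose density is actually stored in code units:
units = resolve_units({("gas", "density"): "code_mass/code_length**3"})
```

The nice property is that fields not mentioned in the override dict
keep their fields.py defaults, so a one-field fix stays a one-line
change on the user's side.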
OK, those are my thoughts. A longer-term thought would be to change
our field definitions to all be sympy expressions, which could then be
simplified, but that's for another day.
-Matt

Hey Mark,
I have made some progress on get_vertex_centred_data; with
simple extrapolation, we can get:
https://sketchfab.com/models/d267102434274bdbb5f75f5b9ef27b44.
As you can see, this solution does not work very well, so Matthew Turk
and I are now working on changing the parsing of the octree to include
ghost zones.
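For anyone following along, the basic operation can be sketched in a
few lines (this is just an illustration of averaging cell-centered
values onto vertices, not yt's actual implementation -- and the
boundary vertices, which have fewer than 8 cell neighbors, are exactly
where the ghost zones become necessary):

```python
import numpy as np

def vertex_centered(cc):
    """Average the 8 surrounding cell-centered values onto each
    interior vertex.  cc has shape (nx, ny, nz); the result has shape
    (nx - 1, ny - 1, nz - 1).  Vertices on the domain boundary are
    not covered, which is where ghost zones come in."""
    nx, ny, nz = cc.shape
    vc = np.zeros((nx - 1, ny - 1, nz - 1))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                vc += cc[dx:dx + nx - 1, dy:dy + ny - 1, dz:dz + nz - 1]
    return vc / 8.0
```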
It's great to have another collaborator on this part of the code.
Do you have a Slack account? Let's talk there. (I am @tussbaum)
Matt, anything to add?
Cheers,
Tomer
On Wed, Nov 25, 2015 at 2:24 PM, Mark Richardson <
Mark.Richardson(a)physics.ox.ac.uk> wrote:
> Hello,
> I see that the fact that Octree doesn't currently work with
> get_vertex_centred_data had some interest recently. Tomer, have you made
> much progress on this since early November? I'm also very interested in
> getting this to work (I use Ramses), so if you'd like, I'd enjoy coming on
> board and helping in what ways I can.
>
> To start, is there a fork I can follow on the bitbucket?
>
> Cheers,
> -Mark
>
> --
>
> Mark Richardson
> Beecroft Postdoctoral Fellow, Department of Physics
> Denys Wilkinson Building - 555F
> University of Oxford
> Mark.Richardson(a)physics.ox.ac.uk
> +44 1865 280793
>

New issue 1146: Octree construction on some SPH datasets revealing large swath of simulation space with zero density
https://bitbucket.org/yt_analysis/yt/issues/1146/octree-construction-on-s...
Cameron Hummels:
Following the discussion in PR #1880, we identified that the octree construction was not behaving correctly for some SPH datasets. Rather than having full coverage in mesh fields across the simulation volume, regions of no gas particles have their mesh fields deposited as zero (i.e. zero density, zero temperature, etc.). Formally, this should never happen, and it can lead to a lot of unpredictable behavior later in processing the data.
For more information on this problem and how to reproduce it, see Pull Request #1880.

Hi all,
I think a piece of low-hanging fruit to make the contributing process
clearer would be to add a CONTRIBUTING file to the main yt repository.
I think the text of this file mostly exists already in this document in the
docs:
http://yt-project.org/doc/developing/developing.html
One thing I could do is extract the text of that document from the docs
sources and simply add it to the root of the repository like I've done with
the coding style guide in my open style PR. This way it's still in the docs
build (since it will be included using a sphinx include statement) but the
instructions will be closer to the code location in the root of the
repository, increasing discoverability for new contributors who are not
aware of the developer guide in the docs. This CONTRIBUTING file could also
subsume the coding_styleguide.txt file.
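For what it's worth, the docs-build side of this should just be a
one-line Sphinx include in the existing developing document, something
along the lines of (the exact relative path is a guess and depends on
where the file ends up in the repository):

```rst
.. include:: ../../CONTRIBUTING
```

That way the file lives once, at the repository root, and the rendered
developer guide picks it up unchanged.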
This file could also summarize, discuss, and highlight the code of conduct:
http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0023.html
Do others think this is a good idea? Are there other things that should be
in a CONTRIBUTING file? Maybe instead of copy/pasting the content of the
developing.rst file from the docs, it could just link to the developer
guide, and the discussion of the code of conduct and any other new
material we'd like to add could live there.
Thanks for your advice,
Nathan

---------- Forwarded message ----------
From: Fernando Perez <fperez.net(a)gmail.com>
Date: Thu, Nov 19, 2015 at 3:42 PM
Subject: [IPython-User] [JOB] Project Jupyter is hiring two
postdoctoral fellows @ UC Berkeley.
To: jupyter(a)googlegroups.com, IPython Development list
<ipython-dev(a)scipy.org>, IPython User list <ipython-user(a)scipy.org>,
SciPy Developers List <scipy-dev(a)scipy.org>
Hi all,
We are delighted to announce today that Project Jupyter/IPython has
two postdoctoral fellowships available at UC Berkeley, open immediately.
Interested candidates can apply here:
https://aprecruit.berkeley.edu/apply/JPF00899
We hope to find candidates who will work on a number of challenging
questions over the next few years, as described in our grant proposal
here:
http://blog.jupyter.org/2015/07/07/project-jupyter-computational-narrativ...
Interested candidates should carefully read that proposal before
applying to familiarize themselves with the full scope of the
questions we intend to tackle.
We'd like to thank the support of the Helmsley Trust, the Gordon and
Betty Moore Foundation and the Alfred P. Sloan Foundation.
Cheers,
Brian Granger and Fernando Perez.
--
Fernando Perez (@fperez_org; http://fperez.org)
fperez.net-at-gmail: mailing lists only (I ignore this when swamped!)
fernando.perez-at-berkeley: contact me here for any direct mail
_______________________________________________
IPython-User mailing list
IPython-User(a)scipy.org
https://mail.scipy.org/mailman/listinfo/ipython-user