scons now runs tests via a testrunner to collect lists of skipped tests in build/ (examples have their own builder since they aren't unit tests)
fixed hessian regularisation tests falling over by adding a shortcircuit optional arg that returns the pde, not the solution
added missing/omitted tests back into scons tests

built boost 1.55.0 on savanna which fixes the segfaults we have been seeing
for a long time when using shared_from_this() with the intel compiler.
(While Intel couldn't reproduce it, I could on epic, magnus, and raijin w/ system
boost+python. Finally found that the 1.53+ modules on raijin no longer
exhibited the problem.)

this file is added so that we can import Symbolic by doing:
import esys.escript.symbolic as symb instead of
import esys.escriptcore.symbolic as symb
the one line contained in the file is on line five for reasons of prettiness
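A hypothetical reconstruction of what such a one-line alias module does (the actual file contents aren't shown in this log): it simply re-exports the real implementation so both import paths resolve to the same names. Since esys isn't importable here, the snippet demonstrates the same pattern with a stdlib module standing in for esys.escriptcore.symbolic.

```python
# Hypothetical sketch of the alias-module pattern. In escript the one
# line would (assumed) be:
#     from esys.escriptcore.symbolic import *
import importlib

def make_alias(real_module_name):
    """Return the real module so an alias name can simply re-export it."""
    return importlib.import_module(real_module_name)

symb = make_alias("math")  # stands in for esys.escriptcore.symbolic
print(symb.sqrt(9.0))      # the alias exposes the real module's names
```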

moved SolverOptions to c++, split into SolverOptions for the options and SolverBuddy for the state as a precursor to per-pde solving. This does break some use cases (e.g. pde.getSolverOptions().DIRECT will now fail; new value access is via SolverOptions.DIRECT). Examples and documentation updated to match.
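A minimal mock of the access change described above (these are illustrative stand-ins, not the real esys.escript classes, and setSolverMethod is an assumed method name): the named solver values now live on SolverOptions itself rather than on the per-pde options object.

```python
# Illustrative stand-ins only; the real classes live in esys.escript.
class SolverOptions:
    # named solver values are now class-level constants here
    DIRECT = 11
    PCG = 12

class SolverBuddy:
    # per-pde state; no longer carries the named values itself
    def __init__(self):
        self.method = SolverOptions.PCG
    def setSolverMethod(self, m):
        self.method = m

opts = SolverBuddy()
opts.setSolverMethod(SolverOptions.DIRECT)  # new-style access
# old style, opts.DIRECT, now fails with AttributeError
```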

Remove randomFill python method from ripley domains.
All random data objects (for all domain types) should be generated
using esys.escript.RandomData()
The only filtered random we have is gaussian on ripley but
it is triggered by passing the tuple as the last arg of RandomData().
While the interface is a bit more complicated (in that you always need
to pass in shape and functionspace) it does mean we have a
common interface for all domains.
Removed randomFill from DataExpanded.
The reasoning behind this is to force domains to call the util function
themselves and enforce whatever consistency requirements they have.
Added version of blocktools to deal with 2D case in Ripley.
Use blocktools for the 2D transfers [This was cleaner than modifying the
previous implementation to deal with variable shaped points].
Note that under MPI, ripley cannot generate random data (even unfiltered)
if any of its per-rank dimensions is <4 elements on any side.
Unit tests for these calls are in but some extra checks still needed.
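A mock sketch of the common interface described above (the signature below is an assumption for illustration, not the real esys.escript.RandomData): shape and function space are always required, and a trailing tuple is what switches on the domain-specific (gaussian, ripley-only) filter.

```python
# Hypothetical stand-in illustrating the common entry point.
import random

def RandomData(shape, functionspace, seed=0, filter_args=None):
    rng = random.Random(seed)          # deterministic for a given seed
    n = 1
    for s in shape:
        n *= s
    values = [rng.random() for _ in range(n)]
    if filter_args is not None:
        # the real implementation hands filter_args on to the domain;
        # only ripley currently supports gaussian filtering
        pass
    return values

data = RandomData((2, 3), "ContinuousFunction_stub", seed=42)
```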

This seems to fix #65 but is a bit scary.
netcdf and gdal (and others?) seem to return bytearrays instead of strings in
py3, which when fed back into a library function caused a segfault.
Not sure if I caught every case, or whether this is actually a bug in the lib...

Renamed the default options file to make it clear that it was used for g++.
There is now no default mole options file.
This is deliberate. You need to configure your environment based on the compiler you wish to use. (See scripts in /usr/local/env).
If you are doing a lot of building, symlink your chosen file to scons/mole_options.py. DO NOT COMMIT it though.

line 966
"if not args.has_key(n): args[n]=Data()" was changed to "if not args.has_key(n) and not constants.has_key(n): args[n]=Data()".
also added:
args=dict(args.items()+constants.items())
to merge the two dictionaries.
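The merge shown above makes constants win on duplicate keys, because later items overwrite earlier ones in the dict constructor. A py3-compatible rendering of the commit's py2 line (the sample keys are made up):

```python
args = {"n": 1, "shared": "from_args"}
constants = {"k": 2, "shared": "from_constants"}

# py2 original: args = dict(args.items() + constants.items())
# later items win, so constants take precedence on duplicate keys
args = dict(list(args.items()) + list(constants.items()))
```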

- Fixed unit test failure by populating missing parts of the struct
- prepared for reversing read order from grids
- worked around a segfault caused by python version in jessie (2.7.6) in
combination with gdal,pyproj and RTLD_GLOBAL in dlopen flags.

Mantis issue #675 said
Cihan:
while merging symbolic into trunk I noticed that most dudley tests are
disabled. This happened in revision 3261 where you wrote:
"Disable some unit tests for dudley until I can fix MPI failures"
Is that fixed?
=== I'm re-enabling most of the tests (which pass without difficulty).
But one of them appears to be out of date or something.
More investigation required.

renamed design.Design to design.AbstractDesign as a more explicit/descriptive name; this will break any existing custom implementations until they are changed to match
updated gmsh.Design docstring to match that of the code

With the current Intel compiler (13.5.192) on savanna any build with OMP and without MPI
will segfault in AbstractDomain->getPtr() for unknown reasons.
Trying a workaround for finley which can be adapted to dudley and ripley if it works.

WIP: ripley hand-optimisation & further rules to generator
-> drastically reduced number of constants
-> compile time of Brick.cpp less than half of what it was on savanna
-> additional runtime savings
-> to be continued...

Hopefully, this will address the interpolation problems.
New canInterpolate() function exposed to python which calls probeInterpolation
AbstractDomain now has an additional virtual method,
preferredInterpolationOnDomain().
This will return 0 if interpolation is impossible, and 1 if it is possible and preferred.
It will return -1 if interpolation is possible and preferred in the
opposite direction.
A value of -1 does not say whether the proposed interpolation is possible or not.
Rather it indicates "please use the other way".
If you really _need_ to test it that way, use probeInterpolationOnDomain
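One way to read the three return values, as a small helper (the codes come from the text above; the helper function itself is hypothetical, not part of escript):

```python
def interpret_preference(code):
    """Map the (assumed) preferredInterpolationOnDomain() codes to actions."""
    if code == 0:
        return "impossible"
    if code == 1:
        return "interpolate as proposed"
    if code == -1:
        # -1 only expresses a preference for the other direction;
        # use probeInterpolationOnDomain to test feasibility explicitly
        return "prefer the reverse direction"
    raise ValueError("unexpected code: %r" % code)
```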

Implemented interpolation from Reduced[Face]Elements to [Face]Elements and
changed regularization to compute gradient on Function instead of
ReducedFunction. Results differ slightly so this should help with the accuracy.

More work towards joint inversion. There is now one class for inversion cost function which can handle all relevant cases
for a single model inversion, cross gradient case and functional dependence of physical parameters.

some improvements to the robustness of the minimizer:
a breakdown in the orthogonalization is handled via a restart.
a restart can now be set manually
the iteration now throws an exception in case of failure.
It is also possible to set the density anomaly to zero for regions below a certain depth.

Gravity inversion is disconnected from DataSource objects now in order to allow for more flexibility
when it comes to the definition of Mapping and Regularization. GravityInversion does not hold a reference to the datasource object anymore.
use inv=GravityInversion()
inv.setUp(datasource)
rather than inv.setDataSource(datasource); inv.setUp()

Used "new" raise syntax in a few places
Fixed some tabbing
Fixed some funnies involving changes to xrange/range
added a quick and nasty __hash__ function to Symbol
def __hash__(self):
    return id(self)
This does mean that __hash__ and == do not match exactly. Not sure if that matters for our purposes

Added unit test which uses the TestDomain to check reduction.
TestDomains can no longer be created directly from python
(fixes problem I had earlier)
TestDomain can now create testdomains with arbitrarily sized
coordinate vectors.

When "werror" is set to True, the scons build complains about a compilation failure in a boost "multi-array" test case. This recently caused unit-test failures, so I have set "werror" back to False until someone who has the knowledge fixes the problem.

Add Boomer AMG as a new preconditioner
Can be used by .setPreconditioner(SolverOptions.BOOMERAMG)
and the following parallel coarsening methods can be
used together with the Boomer AMG preconditioner:
.setCoarsening(SolverOptions.CIJP_FIXED_RANDOM_COARSENING)
.setCoarsening(SolverOptions.CIJP_COARSENING)
.setCoarsening(SolverOptions.FALGOUT_COARSENING)
.setCoarsening(SolverOptions.PMIS_COARSENING)
.setCoarsening(SolverOptions.HMIS_COARSENING)
Note that Boomer AMG is only available when MPI is enabled.

Two modifications to the OpenMP versioned AMG:
(1) in the Ruge-Stueben search:
* use a smaller panel to reduce the search time for an unknown with
the largest lambda value;
* fix the mistakes made in this function (two places where set S is
expected, not set S^T).
(2) while constructing the coarse level matrix A_C:
A_C = P^T (A P), where A is the fine level matrix and P is the
interpolation operator. Since we already have the transpose of P
when we compute "A_temp = A P", the performance of the matrix product
can be improved by using P^T in the calculation.
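A minimal sketch of the Galerkin product A_C = P^T (A P) described above, in plain Python with tiny made-up matrices, reusing the already-formed transpose of P:

```python
# Hypothetical 2x2 fine matrix and 2x1 interpolation operator,
# chosen only to make the arithmetic easy to check by hand.
def matmul(X, Y):
    # naive dense matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[4.0, 1.0], [1.0, 3.0]]   # fine-level matrix (assumed)
P = [[1.0], [0.5]]             # interpolation operator (assumed)
Pt = transpose(P)              # formed once, then reused
A_temp = matmul(A, P)          # A P
A_C = matmul(Pt, A_temp)       # P^T (A P)
```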

Added/modified AMG tests:
- "self.failUnless(...)" became "self.assertTrue(...)"
because fail* is soon to be deprecated,
- fixed logical error in self.assertTrue argument,
- added another test, 'testSquare'.
Added some print statements to AMG.c in verbose mode to
identify which of the stopping conditions were fulfilled.
Changed default AMG parameters for 'setNumPreSweeps' and
'setNumPostSweeps' to 1 (huge speed improvement).
P.S. Beware of bugs in the above code; I have only proved
it correct, not tried it... Jokes!

Fixed the quad mask in a very specific case and changed saveVTK to write a
specific value for unused nodes instead of the first one which breaks unit
tests depending on the number of ranks used.
Still need to rebaseline some VTK files so the tests pass.

I've added better credits for the dev team.
There is a new page just before the contents page with current dev team
and a more detailed list in the user guide appendix.
Please check this and let me know if you think there are errors or
omissions.
Also fixed a typo.
Removed reference to python style files we no longer use from the
license file.

Fixed building with boost < 1.40 by
- removing the saveVTK method from dudley's MeshAdapter completely
(no point introducing it now when we are trying to get rid of it soon)
- adding a helper function to weipa's python layer which can be called from C++
with older boost versions.

escript now supports out-of-tree builds.
All build and test files are now created under a user-definable build_dir
directory.
This also fixes issue 291.
Removed most svn:ignore props since they are no longer required.

-Do not compare mesh variables (like owner, color etc) in the saveVTK tests
since they depend on the number of ranks.
-Fixed a problem where ranks without data samples can't know whether a
reduced mesh is to be used or not.

Phew!
-escript, finley, and dudley now use weipa's saveVTK implementation
-moved tests from finley to weipa accordingly; dudley still to do
-rebaselined all test files
-fixed a few issues in weipa.saveVTK, e.g. saving metadata without schema
-added a deprecation warning to esys.escript.util.saveVTK
-todo: change doco, tests and other places to use weipa.saveVTK

Fix the launcher so that it can give an environment even if buildvars is not present.
Environment now includes scons for standalone builds.
Fixes and cleanup for the install documentation.
scons TEMPLATES updated to define the optionfile version by default.
...mumble
finley no longer imports escript into the default namespace
...

Work on user's guide up to section 3.2 (exclusive), including some text on
new DataManager class.
Also reworked a couple of images.
This should be the last commit of the guide before changes to the latex class.

Corrections to user guide sections 1.5 and 1.6.
Converted the backgrounds of the very complex stokes flow eps files to raster
images, leaving the vectors untouched, so the change is barely visible but
makes loading the PDF page some orders of magnitude faster.

This should keep scons happy on systems without pdflatex by issuing a warning
to the user. Also updated all other doc sconscript files to use a global
release_dir prefix so we can easily move the files somewhere else if req'd.

Added checkpoint frequency setting to DataManager so that visualisation files
can be created independent of restart files. Also removed the overhead of
creating an EscriptDataset instance for the restart-only case.

Prepared weipa to use Silo's HDF5 driver & compression which performs better
than the default PDB. Code is disabled though because NetCDF 4 is currently
interfering (Unidata has been contacted). Added a note about this.

-New VisIt simulation control interface in weipa (uses VisIt's simv2)
-Rewrote restarts.py to be a more generic data manager for restarts and exports
-Removed weipa python file since the functionality is now handled by restarts.py

The MPI and sequential GAUSS_SEIDEL have been merged.
The colouring and main diagonal pointer are now managed by the pattern, which means that they are calculated only once even if the preconditioner is deleted.

Fix latex markup in pdf title.
It turns out \pdfinfo must come after \maketitle if you don't want the
latex version of the title.
This behaviour does not happen in evince 2.22.2 (standard in lenny) but does
happen in more recent versions.
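A minimal ordering sketch of the fix described above (document class, title, and author are placeholders):

```latex
\documentclass{article}
\title{Placeholder Title}
\author{Placeholder Author}
\begin{document}
\maketitle
% \pdfinfo must come AFTER \maketitle, otherwise recent viewers
% show the latex-markup version of the title in the PDF metadata
\pdfinfo{/Title (Placeholder Title) /Author (Placeholder Author)}
\end{document}
```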

Updated various nan checks to consider the windows _isnan
The default compiler flags have changed as well.
+ intel will now take -std=c99 instead of -ansi
+ gcc has -ansi removed which means it defaults to gnu99
We could have set gcc to -std=c99 as well but that gives a
warning on g++ which gets converted into an error by our
pedantic warning.
Rationale:
We need something more than ansi to get proper nan handling.
- All our current code complies with ansi, but under ansi the
nan checks don't work.
Impact:
If we want our code to still be able to compile on older compilers
(at reduced functionality) we need to be careful not to introduce other
c99-isms.
If we don't care, then it's time for some celebratory // comments.

Move definition of build_platform at t.poulet's suggestion.
Redefine the IS_NAN macro to replace versions commented as
'This does not work'
If you are not compiling with a c99 compliant compiler you will not
recognise NaNs, but this seems to be the best we can do right now
without checking for explicit bit patterns.

There is now a mechanism to pass a C function into escript and invoke it on each datapoint with:
applyBinaryCFunction
Warning: in order to use this function your escript must be compiled with
scons ... iknowwhatimdoing=yes
---------------------
Because this code relies on casts that the C standard does not allow, some code has been moved into Dodgy{.h .cpp}
Scons files have been modified to treat these files specially [Warnings are not errors for these files.]

Reincarnation of the escriptreader as a more flexible escriptexport library:
- can be initialised with instances of escript::Data and Finley_Mesh
- is now MPI aware at the EscriptDataset level including Silo writer
- now uses boost shared pointers
The lib is currently only used by the escriptconvert tool but this is going to
change...

bin/escript:
- bail out early if buildvars is not around
- quote variables to avoid syntax errors
- some work towards POSIX compliance (s/==/=/ in string comparisons)
- escript -V was still spitting out 'pre2.0' -> changed to pre4.0
- fixed minor typos

process C_TensorBinaryOperation at the level of a whole sample rather than
each datapoint when an Expanded data is operating with a Constant data. This
could improve the efficiency of the non-lazy version of escript.

inf, sup and Lsup now correctly handle +-infinity.
They also will return NaN if any part of their input is NaN.
This will break unit tests since it exposes the hidden bug (#447 in mantis)
This code relies on the ability to test for NaNs.
To do this it makes use of macros and functions from C99.
If you do not have a C99 compiler, then you will probably get the old behaviour.
That is, you won't know when you have NaNs.
Also did minor tweak to saveDataCSV doco.
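The semantics described above can be illustrated in plain Python (this mirrors the C99 isnan-based checks; lsup_like is a made-up name, not the escript function):

```python
import math

def lsup_like(values):
    """Sup of |x| that propagates NaN, as the commit describes for Lsup."""
    if any(math.isnan(v) for v in values):
        return float("nan")   # any NaN in the input poisons the result
    return max(abs(v) for v in values)

print(lsup_like([1.0, -3.0, float("inf")]))  # inf
print(lsup_like([1.0, float("nan")]))        # nan
```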

Added getInfLocator and getSupLocator to pdetools.
This means if you want to use them, you will need to import them.
These methods return a locator to a point with the smallest/largest value.
Added resolve() and delay() to utils.
Now you can do things like:
d=delay(v)
..
..
z=resolve(d+1)

Merging changes from the lapack branch.
The inverse() operation has been moved into c++. [No lazy support for this operation yet.]
Optional Lapack support has been added for matrices larger than 3x3.
service0 is set to use mkl_lapack.

Taipan and DataVector use size_type (currently long) internally to talk
about sizes.
[A lot more work needs to be done on this].
Exceptions now give nicer messages and output the problematic numbers.
Some minor doco errors fixed.
escript.util.mkDir:
Now checks to see if the pathname already exists as a non-directory.
On slanger's suggestion, it now calls makedirs.

minval and maxval are now lazy operations (they weren't before).
Whether or not Lsup, sup and inf resolve their arguments before computing answers is controlled by the escriptParam 'RESOLVE_COLLECTIVE'.
Note: integrate() still forces a resolve.
Added some unit tests for operations which weren't tested before.
Added deepcopy implementations for lazy operations which got missed somehow.

Added calls to MPI_Abort to the pythonMPI wrappers.
Now if an uncaught exception is raised on any of the nodes, the whole world should be brought down. I've tested this on a barrier and it works.
One consequence of this is that you should not call sys.exit(2) but we don't recommend using sys.exit anyway.
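A sketch of the wrapper behaviour described above (mpi_abort stands in for the real MPI_Abort(MPI_COMM_WORLD, code) call; the actual pythonMPI wrapper is not reproduced here):

```python
def run_under_mpi(main, mpi_abort):
    """Run the user's script; bring down the whole MPI world on an
    uncaught exception so other ranks don't hang at a barrier."""
    try:
        main()
        return 0
    except BaseException:
        # SystemExit lands here too, hence the advice against sys.exit
        mpi_abort(1)
        return 1

def failing_script():
    raise RuntimeError("boom")

aborted = []
rc = run_under_mpi(failing_script, aborted.append)  # abort is invoked
```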

Added a bunch of valgrind suppressions. Hopefully this will make it more useful.
Asking for an offset on a DataConstant containing no samples no longer fails under debug builds.
I've disabled the sample and dataPoint bounds checks since they don't make sense on DataConstant.

Renamed the main cookbook tex file to match our convention.
Replaced doc/cookbook/figures/heatrefraction002contqu.pdf with
a version which is actually pdf. However it needs to be regenerated since
it is sideways.
The examples have had their copyright notices fixed (dates were too early).
sb2.py has been removed since it uses pyvisi.
scons will now build the cookbook as part of a docs build.
Also in response to:
scons cookbook_pdf

Fix bug in maxGlobalDataPoint and minGlobalDataPoint.
They now give the correct answers and the datapoint ids returned are globally
correct.
Removed some #defines from before COW
Removed hasNoSamples() - I don't trust myself to use that properly let alone anybody else.

Recording some changes related to Anthony's cookbook.
These tests still do not run.
test_heatref.py is the current problem.
Current problems include:
undefined symbols
a dependence on X somehow.
I've already addressed the matplotlib X thing somewhere else.
Find and apply.

A bunch of changes related to saveDataCSV.
[Not completed or unit tested yet]
Added saveDataCSV to util.py
AbstractDomain (and MeshAdapter) have a commonFunctionSpace method to
take a group of FunctionSpaces and return something they can all be interpolated to.
Added pointToStream() in DataTypes to help print points.
added actsConstant() to data - required because DataConstant doesn't store samples the same way other Data do.

Added getMPIWorldSum function to the esys.escript module.
This function takes an integer from each member of the MPIWorld.
This will hopefully address mantis issue 359
Added unit tests for most of the c++ free functions in the module.

Compute nodes will now read the correct options file and use the correct compiler.
I have added env_export to the service0 options file.
Use this if there are variables from the calling environment which need to be passed through scons. (In this case the INTEL_LICENSE_FILE variable).
There are still some issues with icpc linking.

Updated install guide to include the correct version of python.
Updated Mac build instructions not to use numarray or frameworks.
Removed (commented out) references to MacPorts in the Mac instructions.
- These need to be updated and Artak isn't here.
Removed (commented out) references to windows in the install guide.
CSIRO won't have a windows release ready yet.

Updated builddeb script to include the architecture in the package filename.
(the architecture is also written into the control file).
Modified dependencies to not include VTK.
Added python-matplotlib as a "Recommends" dependency.
This means we can build 64bit debs now.

Split test_util_spatial_functions.py into three python files (it had three classes in it).
This reduces memory required to compile down to about 600Mb each as opposed to >1Gb.
This should not affect the tests performed.

Experimental per node cache for lazy evaluation is now available via the
LAZY_NODE_STORAGE #define
It's a bit slower and larger for small problems but a bit faster and
smaller for large (drucker prager) problems.

Modified wrapper script to correctly point at numpy in standalone builds.
Updated build from source doco for linux to take numpy into account.
Also removed backticks in favour of $() - it should be the same but latex won't eat them