Process C_TensorBinaryOperation at the level of a whole sample rather than
each data point when Expanded data operates with Constant data. This
could improve the efficiency of the non-lazy version of escript.

inf, sup and Lsup now correctly handle +-infinity.
They will also return NaN if any part of their input is NaN.
This will break unit tests since it exposes a hidden bug (#447 in mantis).
This code relies on the ability to test for NaNs.
To do this it makes use of macros and functions from C99.
If you do not have a C99 compiler, then you will probably get the old behaviour.
That is, you won't know when you have NaNs.
Also made a minor tweak to the saveDataCSV doco.

Added getInfLocator and getSupLocator to pdetools.
This means if you want to use them, you will need to import them.
These methods return a locator to a point with the smallest/largest value.
Added resolve() and delay() to utils.
Now you can do things like:
d=delay(v)
..
..
z=resolve(d+1)
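For example, to find where a Data object u takes its largest value (a minimal sketch; the import path and the Locator's getValue method are assumptions on my part):
from esys.escript.pdetools import getSupLocator
loc=getSupLocator(u)     # Locator at a data point where u is largest
print loc.getValue(u)    # the value of u at that point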

Merging changes from the lapack branch.
The inverse() operation has been moved into C++. [No lazy support for this operation yet.]
Optional LAPACK support has been added for matrices larger than 3x3.
service0 is set to use mkl_lapack.

Taipan and DataVector use size_type (currently long) internally to talk
about sizes.
[A lot more work needs to be done on this].
Exceptions now give nicer messages and output the problematic numbers.
Some minor doco errors fixed.
escript.util.mkDir now checks to see if the pathname already exists as a non-directory.
On a suggestion from slanger, it now calls makedirs.
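A minimal sketch of the intended use (the import path shown is an assumption):
from esys.escript.util import mkDir
mkDir("results")    # creates results/ if possible; the path must not already exist as a non-directory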

minval and maxval are now lazy operations (they weren't before).
Whether or not Lsup, sup and inf resolve their arguments before computing answers is controlled by the escriptParam 'RESOLVE_COLLECTIVE'.
Note: integrate() still forces a resolve.
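For example (a sketch only; that setEscriptParamInt is the setter and that 1 means "resolve first" are my assumptions):
from esys.escript import setEscriptParamInt
setEscriptParamInt("RESOLVE_COLLECTIVE",1)    # make Lsup/sup/inf resolve lazy arguments before computing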
Added some unit tests for operations which weren't tested before.
Added deepcopy implementations for lazy operations which got missed somehow.

Added a bunch of valgrind suppressions. Hopefully this will make it more useful.
Asking for an offset on a DataConstant containing no samples no longer fails under debug builds.
I've disabled the sample and dataPoint bounds checks since they don't make sense on DataConstant.

Fix bug in maxGlobalDataPoint and minGlobalDataPoint.
They now give the correct answers and the datapoint ids returned are globally
correct.
Removed some #defines left over from before COW.
Removed hasNoSamples() - I don't trust myself to use that properly, let alone anybody else.

A bunch of changes related to saveDataCSV.
[Not completed or unit tested yet]
Added saveDataCSV to util.py
AbstractDomain (and MeshAdapter) have a commonFunctionSpace method to
take a group of FunctionSpaces and return something they can all be interpolated to.
Added pointToStream() in DataTypes to help print points.
Added actsConstant() to Data - required because DataConstant doesn't store samples the same way other Data do.
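The intended usage is roughly as follows (a sketch only, since the code is not finished; the keyword-argument form and import path are my assumptions):
from esys.escript import saveDataCSV
saveDataCSV("results.csv", u=u, p=p)    # one group of columns per named Data object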

Added a getMPIWorldSum function to the esys.escript module.
This function takes an integer from each member of the MPI world and returns their sum.
This will hopefully address mantis issue 359.
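A minimal sketch (the exact call form is an assumption):
from esys.escript import getMPIWorldSum
total=getMPIWorldSum(local_count)    # local_count is this rank's integer; total is the sum over all ranks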
Added unit tests for most of the c++ free functions in the module.

Split test_util_spatial_functions.py into three Python files (it had three classes in it).
This reduces the memory required to compile each one to about 600MB, as opposed to >1GB.
This should not affect the tests performed.

An experimental per-node cache for lazy evaluation is now available via the
LAZY_NODE_STORAGE #define.
It's a bit slower and larger for small problems but a bit faster and
smaller for large (drucker prager) problems.

FileWriter added: this class takes care of writing data which are global in MPI to a file. It is recommended to use this class rather than the built-in open, as it takes care of the case of many processors.

size_t may be 64 bits, which is incompatible with MPI_INT. This problem is fixed by inserting a cast in Mesh_read.c.
Moreover, a fix has been added making sure that gmsh and triangle are executed on one processor only.

This was the troublemaker for test failures on both Mac and Debian. I think numarray has a bug in memory management, so we removed some numarray.array-related lines in the _givapp function and the tests pass again.

Modified the escript Python module.
If the escriptExitProfiling environment variable is set, then the
contents of the status file from /proc will be copied to memescript.pid.
This means we can log fun things like peak memory usage.
This only works on Linux systems recent enough to have the status file.

Misc fixes:
Added some svn:ignore properties for output files that were cluttering things up.
Lazy fixes:
Fixed shape calculations for TRACE and TRANSPOSE for rank>2.
Adjusted unit test accordingly.
Made a temporary change to DataC.cpp to test for lazy data in DataC's expanded check.
This is wrong, but would only affect people using lazy data.
The proper fix will come when the numarray removal code moves over from the branch.
Made tensor product AUTOLAZY capable.
Fixed some bugs resolving tensor products (incorrect offsets in buffers).
Macro'd some stray couts.
- It appears that AUTOLAZY now passes all unit tests.
- It will not be _really_ safe for general use until I can add COW.
- (Everything's better with COW)

Implemented a new utility function saveESD() which takes care of dumping the
given data objects with their domain and creates an ESD file containing the
required information. This removes the need to use esdcreate for single-timestep
datasets.
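The intended call is along these lines (a sketch; the keyword-argument form and import path are assumptions):
from esys.escript import saveESD
saveESD("mydata", temperature=T, velocity=v)    # writes the ESD index plus the dumped data and domain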

Addressing mantis issue #221.
Interpolation.. and probeInterpolation.. now "work" for the NullDomain.
"Work" means they throw a descriptive exception if you try to move into or out
of the NullDomain.
The bad_cast exception related to this has been fixed.

Added checks in C_GeneralTensorProduct (Data:: and Delayed forms) as
well as the DataAbstract constructor to prevent objects with rank > 4
from being created.
Moved the relevant #define into systemdep.
Removed some comments.

Fixed a warning in the cpp unit tests under dodebug.
Pointed the URL for the Python doco at shake200 rather than iservo.
Added support for trace and transpose to LazyData.
Fixed a bug in trace so that running totals are initialised.

escript/Data: Another fix for parallel var initialization. Also, the (error) return value of MPI_Gather was not used. I applied the same 'hack' as in other places in the file, namely declaring the variable beforehand but still ignoring the return value :-/

Two changes.
1. Move blocktimer from escript to esysUtils.
2. Make it possible to link to paso as a DLL or .so.
Should have no effect on 'nix's
In respect of 1., blocktimer had begun to spring up everywhere, so
for the moment I thought it best to move it to the only other library that
pops up all over the place.
In respect of 2., paso needed to be a DLL in order to use the windows intelc /fast
option, which does aggressive multi-file optimisations. Even in its current form, it either
vectorises or parallelises hundreds more loops in the esys system than appear in the pragmas.
In achieving 2. I have not been too delicate in adding
PASO_DLL_API
declarations to the .h files in paso/src. Only toward the end of the
conversion process, when the number of linker errors dropped below 20, say, did I become choosy about which
functions in a header I declared PASO_DLL_API. As a result, there are likely to be many routines
declared as external function symbols that are in fact internal to the paso DLL.
Why is this an issue? It prevents the intelc compiler from getting aggressive on the paso module.
With pain there is sometimes gain. At least all the DLL rules in windows give good
(non-microsoft) compiler writers a chance to really shine.
So, if you should see a PASO_DLL_API on a function in a paso header file,
and think to yourself, "that function is only called in paso, why export it?", then feel free to
delete the PASO_DLL_API export declaration.
Here's hoping for no breakage.....

I may get into trouble for this.
boost-python 1.34 does have a docstring_options class,
but does not have a 3-argument constructor for
it. So the test has been modified to
#if ((BOOST_VERSION/100)%1000 > 34) || (BOOST_VERSION/100000 >1)
If you wish to make things more delicate, one can define a 2-argument constructor
of docstring_options just for 1.34 (with an #elif). Probably not worth the effort, frankly.
Hope that this has not broken anything for anyone else. The SVN logs suggest this is a
little fragile.....
Also, please be aware that much of our chemistry interface code, that we wish to use with
escript, makes extensive use of boost python.
Having two different boost versions mucking with the python interpreter sounds
like a really bad idea, I'm sure you'll agree.
The problem is that it is not a simple task for us to build new versions of boost-python
on all our platforms. Consequently, it would be nice to be informed when you guys
intend to upgrade a support library of this nature so that we can plan and allocate
resources to keep up.
Cheers.

Ensure ESCRIPT_EXPORTS is defined only while compiling escript on windows.
A similar comment applies to FINLEY_EXPORTS.
SConstruct:
Fixed a comment.
reformatted the Export() call, and added IS_WINDOWS_PLATFORM to the exports.

Added a private operator= to Taipan to prevent people from trying to copy instances.
Not that we have any of that.... but if someone ever tries, we'll be ready for them.
I have come to the conclusion that -Weffc++ is not usable for us. Its output is too noisy, complaining about classes which we do not control.

A cleanup of some of the problems I found doing a -Wall compile.
Removed some commented out lines.
Swapped some member initialisers.
Removed virtual qualifiers from some methods in FunctionSpace.
Fixed some unused or (possibly) uninitialised variables.

Modified Data::toString() so it doesn't throw on DataEmpty.
Added setEscriptParamInt and getEscriptParamInt as free functions.
At the moment all they do is allow you to set the param TOO_MANY_LINES.
This is used to determine when printing a Data object will show you the
points and when it will print a summary.
I've set the default value back to 80 lines.
If you need to see more lines use (in python):
setEscriptParamInt("TOO_MANY_LINES",80000)

convection.py checkpointing uses mkdir/rmdir, and under MPI there
was a race condition.
mkdir needs to be run on only one CPU, followed by a barrier to prevent
the other processors from using the directory before it exists.
Added methods domain.MPIBarrier and domain.onMasterProcessor() to
implement this technique.
A more general solution might be possible in the future.
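The resulting pattern looks roughly like this (a sketch, assuming mydomain is the domain object in use):
import os
if mydomain.onMasterProcessor():
    os.mkdir("checkpoint.dir")
mydomain.MPIBarrier()    # no processor proceeds until the directory has been created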

Removed some commented out lines.
Modified DataExpanded to not throw when creating objects with zero
samples.
Modified toString() to report "(data contains no samples)" rather than
printing a blank line.
Modified DataExpanded::dump() and load() so that they do not attempt to
save/load the ids and data fields if the data object contains no
samples.

All about making DataEmpty instances throw.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Exposed getDim from AbstractDomain to python to fix bug.
Added an isEmpty member to DataAbstract to allow it to throw if queries are
made about a DataEmpty instance.
Added exceptions to DataAbstract, DataEmpty and Data to prevent calls
being made against DataEmpty objects.
The following still work as expected on DataEmpty instances:
copy, getDomain, getFunctionSpace, isEmpty, isExpanded, isProtected,
isTagged, setprotection.
You can also call interpolate, however it should throw if you try to
change FunctionSpaces.

Added canTag methods to FunctionSpace and AbstractDomain (and its
offspring).
This checks to see if the domain supports tags for the given type of
function space.
Constructors for DataTagged now throw exceptions if you attempt to make
a DataTagged with a FunctionSpace which does not support tags.
To allow the default constructor to work, NullDomain has a single
functioncode which "supports" tagging.
Fixed a bug in DataTagged::toString and DataTypes::pointToString.
Added FunctionSpace::getListOfTagsSTL.
algorithm(DataTagged, BinaryFunction) in DataAlgorithm now only
processes tags known to be in use.
This fixes mantis issue #0000186.
Added comment to Data.h intro warning about holding references if the
underlying DataAbstract changes.
_python_ unit tests have been updated to test TaggedData with invalid
FunctionSpaces and to give the correct answers to Lsup etc.

Added Data::copySelf() [Note: this is exposed as copy() in Python].
This method returns a pointer to a deep copy of the target.
There are C++ tests but no Python tests for this yet.
All DataAbstracts now have a deepCopy() which simplifies the
implementation of the copy methods.
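For example, from Python (the no-argument copy() returning a deep copy is as described above; u is assumed to be a Data object):
u2=u.copy()    # u2 is a deep copy of u
u2+=1          # u is unchanged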

Install() on Mac OS was naming shared libs file.dylib...not OK for our
libraries for python calling C++ (escriptcpp.so and finleycpp.so).
This is because python's dlopen() calls only look for .so files.

Make operator=() on exceptions non-virtual. Should silence the Altix compiler.
However, if an exception is cast to a parent type, and
operator=() is called with the exception so cast as the l-value of the assignment,
an "incomplete" assignment will occur, copying only the parent class members of the
l-value in the assignment.

linux_gcc_eg_options.py:
Remove the std99 option; it is no longer needed as the code compiles without
C99 extensions (the need for these extensions was eliminated in the windows port).
Turn on all warnings except unknown pragmas. Should catch a lot of stuff.
SConstruct:
Impassioned plea
system_dep.h:
Add the standard incantation for dealing with const declarations
in C code called from C and C++
blocktimer:
Get the calling interface right for C code called from C and C++
and use __const as defined in system_dep.h
(Should be re-factored into compiler_dep.h file).
MeshAdapterFactory.cpp:
Since we have (effectively) no control over netCDF policy,
cast const char *'s to char *'s

Merge in /branches/windows_from_1456_trunk_1620_merged_in branch.
You will find a preserved pre-merge trunk in tags under tags/trunk_at_1625.
That will be useful for diffing & checking on my stupidity.
Here is a list of the conflicts and their resolution at this
point in time.
=================================================================================
(LLWS == looks like white space).
finley/src/Assemble_addToSystemMatrix.c - resolve to branch - unused var. may be wrong.....
finley/src/CPPAdapter/SystemMatrixAdapter.cpp - resolve to branch - LLWS
finley/src/CPPAdapter/MeshAdapter.cpp - resolve to branch - LLWS
paso/src/PCG.c - resolve to branch - unused var fixes.
paso/src/SolverFCT.c - resolve to branch - LLWS
paso/src/FGMRES.c - resolve to branch - LLWS
paso/src/Common.h - resolve to trunk version. It's omp.h's include... not sure it's needed,
but for the sake of safety.....
paso/src/Functions.c - resolve to branch version, indentation/tab removal and return error
on bad unimplemented Paso_FunctionCall.
paso/src/SolverFCT_solve.c - resolve to branch version, unused vars
paso/src/SparseMatrix_MatrixVector.c - resolve to branch version, unused vars.
escript/src/Utils.cpp - resolved to branch, needs WinSock2.h
escript/src/DataExpanded.cpp - resolved to branch version - LLWS
escript/src/DataFactory.cpp - resolve to branch version
=================================================================================
This currently passes tests on linux (debian), but is not checked on windows or Altix yet.
This checkin is to make a trunk I can check out for windows to do tests on it.
Known outstanding problem is in the operator=() method of exceptions
causing warning messages on the intel compilers.
May the God of doughnuts have mercy on my soul.

Merge of branches/windows_from_1431_trunk.
Revamp of the exception system.
Fix unused vars and signed/unsigned comparisons.
defined a macro THROW(ARG) in the system_dep.h's to
deal with the expectations of declarations on different architectures.
Details in the logs of branches/windows_from_1431_trunk.
pre-merge snapshot of the trunk in tags/trunk_at_1452

This trunk now compiles and links under windows.
It passes the FunctionSpace.__str__() test 50000 times on windows.
It fails scons run_tests and scons_py_tests as it always did, but I have an idea why.
Please remember to add the appropriate dll linkage info to declarations that are to be visible outside a .so or .dll. Terry (BTW, PGH here) spent a lot of effort hacking blocktimer out of the rest of the code when these 5 or 6 lines would have done the trick.

Inserted sys.exit(1) into the tests so scons can detect the failure of a test.
A similar statement added earlier has been removed as it produced problems on 64-bit Linux. Previously exit(0) was called in the case of success, but this is no longer done in order to avoid a fatal end of the program. In the case of an error in a test there could still be a fatal error, but I guess that this is not really a problem.
PS: the fact that exit status 0 was returned even in the case of an error led to the illusion that all tests had completed successfully.

Completed mesh.dump(file) and mesh=LoadMesh(file) by adding TagMap and
implementing MPI parallelism.
Now allocating ElementFile for ContactElements even if there are none.
Removed file Mesh_dump.c since dump/loadMesh are in CPPAdapter/MeshAdapter*.cpp.

The MPI branch is hereby closed. All future work should be in trunk.
Previously in revision 1295 I merged the latest changes to trunk into trunk-mpi-branch.
In this revision I copied all files from trunk-mpi-branch over the corresponding
trunk files. I did not use 'svn merge', it was a copy.

Add the esys and lib directories to the repository.
Remove the IS_WINDOWS_PLATFORM from the SConscripts, and
do the logic once in SConstruct.
SConstruct now includes example options files if the hostname_options file is not present.
This needs some more work for the altix.
The tests now depend upon the build target. This is important it seems, as there appears to be the
possibility of linking different libraries
against incompatible versions of sub-libraries.
This addressed most of the exceptions we were getting on windows.
All the useNetCDF logic is now done by SConstruct.
Made the init_target part of the build alias so that __init__.py is created on a fresh checkout.
py_tests mostly pass on windows, only need to track down the exception in run_tests.

Some changes to make things run on windows. There is still a problem with netCDF and long file names on windows, but there is the suspicion that this is a bigger problem related to boost (compiler options). In fact, runs with large numbers of iterations/time steps tend to create seg faults.

As the LinearPDE uses coefficients as reduced even if they are handed
over as full, the projector runs into a problem when reduced and full
arguments are used in the same projector. Now the projector resets the
coefficients before starting the projection.

The modification fixes a problem with the garbage collection in Python.
The problem seems to be that a default value of a method argument is seen as
depending on the instance of the class. This produces a
circular dependency and can stop the garbage collector from deleting the
object. The situation becomes particularly bad if the class provides a
__del__ method, as it is then not clear where to break the circle.
We need to revisit all Python classes in escript & Co to remove this
possible problem.

In VC++ boost has problems with numarray arguments from Python. This
fixes that problem by taking python::object arguments from the Python
level and converting them into python::numeric::array on the C++ level.
This hasn't been tested with VC++ yet.
Moreover the two Data methods dealing with big numarrays as argument and
return value have been removed.

Added explicit destructors to all Exception classes.
Fixed an ifdef in TestCase.cpp.
Made the conditional definition of M_PI in LocalOps.h
depend only on M_PI being undefined.
Replaced dynamically dimensioned arrays in DataFactory & DataTagged with malloc.
The sort() method of list does not take a named argument
(despite the manual's claims to the contrary).

The set/getRefVal functions of Data objects have been removed (mainly to avoid later problems with MPI).
Moreover, faster access to the reference id of samples has been introduced. I don't think that anybody will
profit from this at this stage, but it will allow a faster dump of data objects.

escript data objects can now be saved to netCDF files, see http://www.unidata.ucar.edu/software/netcdf/.
Currently only constant data are implemented with expanded and tagged data to follow.
There are two new functions to dump a data object
s=Data(...)
s.dump(<filename>)
and to recover it
s=load(<filename>, domain)
Notice that the function space of s is recovered, but the domain is still needed.
dump and load will replace archive and extract.
The installation now needs netCDF installed.

The finley reader now accepts gmsh files (use format="gmsh").
Also, Simulations now accept Models only. Moreover, there is now a test
whether all Models targeted by a model in the simulation or a subsimulation
are included in the simulation or a subsimulation.

I have done some clarification on the functions that allow access to individual data point values in a Data object.
The term "data point number" is always local to an MPI process and refers to the value (data_point_in_sample, sample)
as a single identifier (data_point_in_sample + sample * number_data_points_per_sample). A "global data point number"
refers to a tuple of a processor id and a local data point number.
The function convertToNumArrayFromSampleNo has been removed now and convertToNumArrayFromDPNo renamed to getValueOfDataPoint.
There are two new functions:
getNumberOfDataPoints
setValueOfDataPoint
This allows you to do things like:
in_data=Data(..)
out_data=Data(..)
for i in xrange(in_data.getNumberOfDataPoints()):
    in_loc=in_data.getValueOfDataPoint(i)
    out_loc=<some operations on in_loc>
    out_data.setValueOfDataPoint(i,out_loc)
Also, mindp has been renamed to minGlobalDataPoint and there is a new function getValueOfGlobalDataPoint. Under MPI the functions getNumberOfDataPoints and getValueOfDataPoint work locally on each process (so the code above is executed in parallel), while
getValueOfGlobalDataPoint allows getting a single value across all processors.

fixes on ESySXML to get runmodel going.
* object ids are now local for read and write of XML
* ParameterSets are now written with their class name
* ParameterSets linked by other ParameterSets (previously only Models) are now written
* lists are now lists of str (rather than bools); lists of bool, int or float are mapped to numarray
* numarrays are written with generic types Bool, Int, Float (portability!)

modellib.WriteVTK has been rewritten. Instead of only three data objects scalar,
vector, tensor it now takes up to 20 data objects data0 ... data19 and writes them into a
single VTK file. It is also possible to define individual name tags name0, ..., name19.
If no name is given, the corresponding attribute name of the Link target is used.
This simplifies the usage and increases efficiency.

DataSources added to modelframe/EsysXML, and tests to run_xml.py. Currently does not actually handle data
sources, just references. Functionality is in progress.
EsysXML format (URI can be a local file reference, or a remote reference such as an ftp site; fileformat
is currently any string descriptor, such as finleyMesh or gmtdata):
<Parameter type="DataSource">
<Name>
uritest
</Name>
<Value>
<DataSource>
<URI>
somelocalfile.txt
</URI>
<FileFormat>
text
</FileFormat>
</DataSource>
</Value>
</Parameter>

Some modifications to the binary operations +, -, *, /, pow.
The code is a bit simpler now and more efficient, as there is
no resizing required now. The resizing method has been removed as
it is very, very inefficient. Even serial code should be faster now.
It is now forbidden to do an in-place update of a scalar data object with an object
of rank >0, as this is very slow (and does not make much sense).

Tensor products for Data objects are now computed by a C++ method
C_GeneralTensorProduct, which calls C function matrix_matrix_product
to do the actual calculation.
The product can be performed with either input transposed in place, meaning
without first computing the transpose in a separate step.

changes to escript/py_src/pdetools.py and /escript/src/Data.h/.cpp to
make the Locator work in MPI. escript::Data::mindp now returns a 3-tuple,
with the MPI rank of the process on which the minimum value occurs
included. escript::Data::convertToNumArrayFromDPNo also takes the ProcNo
to perform the MPI reduction.
This had to be implemented in both the MPI and non-MPI versions to allow
the necessary changes to the Python code in pdetools.py. In the non-MPI
version ProcNo is set to 0. This works for the explicit scripts tested
thus far, however if it causes problems in your scripts contact Ben or
Lutz, or revert the three files (pdetools.py, Data.h and Data.cpp) to
the previous version.
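Locator itself is still used from Python in the usual way, e.g. (a sketch; the constructor arguments and getValue method shown are my assumptions about the pdetools API):
from esys.escript.pdetools import Locator
l=Locator(mydomain, [0.5, 0.5])    # data point closest to the given coordinates
print l.getValue(u)                # value of u there, in both MPI and non-MPI builds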

Coordinates, element size and normals returned by the corresponding
FunctionSpace methods are now protected against updates. So
+=, -=, *=, /=, setTaggedValue, fillFromNumArray will throw an
exception.
The FunctionSpace class does not buffer the coordinates, element size and
normals yet.

Large number of changes to Finley for meshing in MPI.
- optimisation and neatening up of rectangular mesh generation code
- first and second order 1D, 2D and 3D rectangular meshes are now
available in finley and escript using MPI.
- reduced meshes now generated in MPI, and interpolation to and from
reduced data types now supported.

+ Updated compilation options for Cognac to squeeze out a bit more performance
+ Now compiles using the Intel Math headers (mathimf.h) rather than plain math.h on both Win32 and Linux platforms when using the Intel compiler. Gives a small boost to performance on Altix and is essential on Windows

The test for the contact normal has been modified to take into consideration the fact that the normal is unique only up to a factor of +/-1.
Now the test checks the length of the normal against 1 and the angle to the reference normal.

Changes relating to the MPI version of escript
The standard OpenMP version of escript is unchanged
- updated data types (Finley_Mesh, Finley_NodeFile, etc) to store meshes
over multiple MPI processes.
- added CommBuffer code in Paso for communication of Data associated
with distributed meshes
- updates in Finley and Escript to support distributed data and operations
on distributed data (such as interpolation).
- construction of RHS in MPI, so that simple explicit schemes (such as
/docs/examples/wave.py without IO and the Locator) can run in MPI.
- updated mesh generation for first order line, rectangle and brick
meshes and second order line meshes in MPI.
- small changes to trunk/SConstruct and trunk/scons/ess_options.py to
build the MPI version, these changes are turned off by default.

A few changes in the build mechanism and the file structure so scons can build release tar files:
* paso/src/Solver has been moved to paso/src
* all test_.py files are now run_.py files and are assumed to be passing Python tests. They can be run by
scons py_tests and are part of the release test set
* escript/py_src/test_ files are moved to escript/test/python and are installed into the build directory
(rather than the PYTHONPATH).
* all py files in test/python which don't start with run_ or test_ are now 'local_py_tests'. They are installed
but not run automatically.
* CppUnitTest is now treated as an escript module (against previous decisions).
* scons release now builds tar/zip files with the relevant source code (src and tests in separate files)
The Python tests don't pass yet due to path problems.

+ NEW BUILD SYSTEM
This commit contains the new build system with cross-platform support.
Most things work as before, though you can have more control.
ENVIRONMENT settings have changed:
+ You no longer require LD_LIBRARY_PATH or PYTHONPATH to point to the
esysroot for building and testing performed via scons
+ ACcESS altix users: It is recommended you change your modules to load
the latest intel compiler and other libraries required by boost to match
the setup in svn (you can override). The correct modules are as follows
module load intel_cc.9.0.026
export
MODULEPATH=${MODULEPATH}:/data/raid2/toolspp4/modulefiles/gcc-3.3.6
module load boost/1.33.0/python-2.4.1
module load python/2.4.1
module load numarray/1.3.3

The sparse solver can now be called by paso.
The build has been changed to reduce some code redundancy:
all scons SConscripts now import scons/esys_options.py, which
imports platform-specific settings.

Some changes required for older numarray versions. The main problem is that
operations on array objects with rank 0 sometimes return a float rather
than an array. This problem seems to be fixed in newer releases.