DORiE issues (https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues)

# Issue #140: Implement a global mass conservation operator
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/140
Author: Santiago Ospina De Los Ríos (sospinar@gmail.com) · Updated: 2020-01-09

The following discussion from !96 should be addressed:
- [ ] @lriedel started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/96#note_20802): (+12 comments)
> 2. Check global mass conservation.
>
> Similar to the mass conservation test of the Richards solver, we can run the coupled solver in multiple homogeneous and heterogeneous test cases and evaluate the solute mass conservation. Requires a separate test executable or a general check similar to the one implemented for the flux reconstruction.
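As a rough illustration of what such a global check could compute, here is a minimal sketch. The names and the cell-wise data layout are hypothetical, not DORiE's actual grid or model interface: total solute mass summed over cells, compared against the initial mass plus the net boundary influx.

```c++
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch (illustrative names, not DORiE's actual interface):
// total solute mass as the sum of concentration * water content * volume
// over all cells.
double total_mass(const std::vector<double>& concentration,
                  const std::vector<double>& water_content,
                  const std::vector<double>& cell_volume)
{
    double mass = 0.0;
    for (std::size_t i = 0; i < concentration.size(); ++i)
        mass += concentration[i] * water_content[i] * cell_volume[i];
    return mass;
}

// Global balance check: the mass after a step should equal the mass before
// it plus the net solute influx across the boundary, up to a tolerance.
bool mass_is_conserved(const double mass_before, const double mass_after,
                       const double net_boundary_influx,
                       const double rel_tol = 1e-8)
{
    const double expected = mass_before + net_boundary_influx;
    const double scale = std::max(std::abs(expected), 1.0);
    return std::abs(mass_after - expected) <= rel_tol * scale;
}
```

A general check of this kind could run after every time step of the coupled solver, independently of the specific test case.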
Labels: Solute Transport Feature, Enhancement, Model:Richards, Model:Transport, To Do
Assignee: Santiago Ospina De Los Ríos (sospinar@gmail.com)

# Issue #63: [meta] Parameters and Parametrization
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/63
Author: Santiago Ospina De Los Ríos (sospinar@gmail.com) · Updated: 2019-05-16

## Description
Today's parameter objects have several problems.
* Maybe the most important one is that the current design relies on dynamic polymorphism even though we only have one parametrization. This is simply not acceptable for functions that do virtually no computation (like `Interpolation` and `MualemVanGenuchten`), and it is even worse for functions that only have to access data. Local operators are usually memory-bound, and this kind of polymorphism directly affects their performance.
* Another problem is that they are strongly coupled both with the parameter input and with their use in the local operator. Since the base structure is an array, it directly affects the partitioning in parallel, and it produces strange artifacts in the solution at the interface between two parameter cells.
## Proposal
Change the parameter and parametrization objects such that they are based on a continuous or element-wise representation. The Gmsh and DGF readers in DUNE have ways to identify elements (in codimensions 0 and dim) and therefore also ways to attach parameters to the grid.
Thus, my proposal is to create parameter grids that contain the parameters in a cleaner and more efficient way. I already have a bit of the code, and it seems to be a good approach for any kind of input data that is
$`C^{-1}`$ or $`C^0`$. However, the main problem then would be the parameter field generator `pfg`. My approach would be to first implement this proposal such that we can attach data manually to Gmsh and DGF files, and later take a look at how to generate the data directly for the Gmsh and DGF files.
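To illustrate the performance argument above, one direction is to make each parameterization a plain value type whose evaluation is a non-virtual, inlinable member function. The class and member names below are illustrative, not the actual DORiE classes:

```c++
#include <cassert>
#include <cmath>

// Illustrative value-type parameterization (not the actual DORiE class):
// evaluation is a plain member function, so the compiler can inline it into
// the local operator loop instead of going through a virtual call.
struct MualemVanGenuchtenSketch
{
    double alpha; // [1/m] scale parameter
    double n;     // [-] shape parameter

    // Effective saturation Se(h) for matric head h (van Genuchten form).
    double saturation(const double h) const
    {
        if (h >= 0.0)
            return 1.0;
        const double m = 1.0 - 1.0 / n;
        return std::pow(1.0 + std::pow(-alpha * h, n), -m);
    }
};
```

Such value types could then be attached per element to the proposed parameter grids, avoiding virtual dispatch in the memory-bound local operator entirely.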
## Procedures
- [x] #71: Build new parameter structures on top of current implementation
- [x] #89: Introduce [`yaml-cpp`](https://github.com/jbeder/yaml-cpp) as dependency
- [x] #86: Implement new parameter input scheme (with `yaml-cpp`)
- [x] #110: Revamp scaling implementation and add input of global scaling fields
- [ ] Add deprecation warnings to branch `1.1-stable`
- [ ] Add Mualem-Brooks-Corey parameterization
Milestone: v2.0 Release
Labels: Discussion, Doing, Enhancement, Meta
Assignee: Lukas Riedel (mail@lukasriedel.com)

# Issue #52: [meta] Deploy DORiE automatically
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/52
Author: Lukas Riedel (mail@lukasriedel.com) · Updated: 2019-02-11

### Description
Let's use the CI/CD system for continuous delivery! Stable branches, the most recent version of DORiE (`master`), Docker images, and the documentation should be deployed to public hosts automatically once building finishes.
### Tasks
- [x] #57: Add license information
- [x] #58: Add contribution guide
- [x] #56: Build Docker image and deploy
- [ ] #44: Reduce size of Docker image
- ~~[ ] #51: Deploy source code to public repository~~
- ~~[ ] #50: Deploy wiki to public repository~~

Milestone: v1.0 Release
Labels: Meta, To Do

# Issue #44: Build lightweight Docker image
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/44
Author: Lukas Riedel (mail@lukasriedel.com) · Updated: 2019-07-30

Docker now offers [multi-stage builds](https://docs.docker.com/v17.09/engine/userguide/eng-image/multistage-build/) for generating images with a lower memory footprint. The idea is to compile DORiE and then move _only_ its binaries to another very small container (derived from the `alpine` image, for example). This requires the executables to be compiled as [static binaries](https://www.ianlewis.org/en/creating-smaller-docker-images-static-binaries).
The Docker image should be automatically compiled from certain pushed branches (like `release` or `master`), but only if tests succeed. This means opening up the `deploy` stage of the CI/CD pipeline. The image has to be compiled, and some tests have to be performed with it (I guess?). The image is then deployed to Docker Hub.

Milestone: v1.0 Release
Labels: To Do

# Issue #9: [meta] Include Cook Book exercises
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/9
Author: Lukas Riedel (mail@lukasriedel.com) · Updated: 2020-01-27

_Note:_ This is a meta-task. It bundles several tasks together and is only closed once all these tasks are finished.
### Aims
DORiE was used in the [`PoTS` WS 16/17 lecture](https://elearning2.uni-heidelberg.de/course/view.php?id=13481). The students received some input files for exemplary simulations and had to analyse the results. These simulations can be used for a Cook Book inside our documentation. In addition, some new documentation has to be created for the transport part.
### Tasks
* [x] ~"Model:Richards" !157 Infiltration in homogeneous medium
* [ ] ~"Model:Richards" Water retention and hydraulic conductivity curve parameterization
* [ ] ~"Model:Richards" Infiltration in heterogeneous medium
* [ ] ~"Model:Richards" Infiltration in miller-similar medium
* [ ] ~"Model:Richards" Boundary conditions
* [ ] ~"Model:Richards" Adaptive grid refinement
* [ ] ~"Model:Richards" Unstructured grids
* [ ] ~"Model:Transport" !158 Solute transport in homogeneous medium
* [ ] ~"Model:Transport" Effective hydrodynamic dispersion tensor parameterization
<!-- Remember to mention tasks with '#' here, once they are created. -->
### People involved
@sospinar
@lriedel
### Related meta-tasks
<!-- Meta-tasks of other groups that require coordination -->
<!--
PLEASE READ THIS
A meta task is used to organise and discuss several regular tasks.
When creating this meta task, please take care of the following:
- When new tasks that belong to this meta-task are created,
link them here, and add them as tasks
- Attach the correct labels
- Mention the people that should get involved
- Assign the correct milestone (if available)
-->

Milestone: v2.0 Release
Labels: Documentation, Doing, Meta, Model:Richards, Model:Transport

# Issue #193: Setting of [grid]extensions seems to not work with gmsh grids
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/193
Author: Hannes Bauser · Updated: 2020-05-25

Setting of `[grid]extensions` seems to not work with gmsh grids.
### Summary
The configuration file guide states for the extensions setting in the configuration file:
`Physical extensions of the domain in meters. Given in x, then y, then z-direction. If a mesh file is imported, they have to match its maximum extensions.`
However, if I import a mesh file, the extensions seem to not matter (I only checked this qualitatively).
### Steps to reproduce
The appended example has a gmsh file with extensions 2x2x2.1, the config file only has extensions of 1x1x1.
The results seem to have extensions of 2x2x2.1.
(The example is merely the case I was working on. If you need a minimal example, let me know).
### What is the current _bug_ behaviour?
Extensions in the config file seem to not matter when loading a gmsh file.
However, the extensions are required. Without specifying them the following error is thrown:
`Aborting DORiE after exception: RangeError [get:/Users/hbauser/DORiE/dune-common/dune/common/parametertree.hh:183]: Cannot parse value "NONE" for key "richards..grid.extensions"RangeError [parseRange:/Users/hbauser/DORiE/dune-common/dune/common/parametertree.hh:237]: as a range of items of type double (0 items were extracted successfully)`
### What is the expected _correct_ behaviour?
If the extensions do indeed not matter, it should not be required to specify them.
If they do matter, I think it would be good to have a check whether they match the gmsh file (since the impact is not immediately obvious).
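Such a consistency check could be as simple as the following sketch. This is hypothetical code, not an existing DORiE function; the mesh bounding box would come from the grid reader, the extensions from the config file:

```c++
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>

// Hypothetical check (not existing DORiE code): compare the extensions given
// in the config file with the bounding box actually read from the mesh file,
// so a mismatch can be reported instead of being silently ignored.
template <std::size_t dim>
bool extensions_match(const std::array<double, dim>& config_extensions,
                      const std::array<double, dim>& mesh_extensions,
                      const double rel_tol = 1e-6)
{
    for (std::size_t i = 0; i < dim; ++i) {
        const double scale = std::max(std::abs(mesh_extensions[i]), 1.0);
        if (std::abs(config_extensions[i] - mesh_extensions[i])
            > rel_tol * scale)
            return false;
    }
    return true;
}
```

For the appended example, a check like this would flag the config extensions 1x1x1 against the mesh extensions 2x2x2.1 at startup.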
### Relevant logs, screenshots, files...?
[gmsh_3Dslanted_dimtest.zip](/uploads/76b781d220f6b0560f602a7c8680444f/gmsh_3Dslanted_dimtest.zip)

Labels: Bug, Model:Common
Assignee: Lukas Riedel (mail@lukasriedel.com)

# Issue #192: VTK incompatible to Python 3.8
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/192
Author: Lukas Riedel (mail@lukasriedel.com) · Updated: 2020-05-14

VTK currently cannot be installed via Pip on Python 3.8. This makes configuring DORiE fail, as required Python packages cannot be installed.
### Summary
VTK 8.2.1 [does not support Python 3.8](https://gitlab.kitware.com/vtk/vtk/-/issues/17670#note_745510). VTK 9.0 does, but it has not been released yet. Users of more recent Linux distributions like Ubuntu 20.04, where Python 3.8 is the default version, have to downgrade to Python 3.7.
### Steps to reproduce
1. Use Ubuntu 20.04.
2. Follow the installation instructions in the `README.md`.
3. CMake produces an error when installing Python packages into the virtual environment:
```
ERROR: No matching distribution found for vtk (from dorie==0.1)
```
### Proposed workaround
#### on Ubuntu
Use the [deadsnakes](https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa) PPA to have any version of Python available on your system.
```
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt install python3.7
```
When building DORiE, set the Python 3.7 executable explicitly by adding the following variable to the `CMAKE_FLAGS`:
```
-DPYTHON_EXECUTABLE=/usr/bin/python3.7
```
#### on macOS
Homebrew has not upgraded to Python 3.8 yet, [it seems](https://formulae.brew.sh/formula/python#default). Everything should still work.

Labels: Upstream

# Issue #188: Avoid dangling references in lambdas returned by parameterization interface
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/188
Author: Lukas Riedel (mail@lukasriedel.com) · Updated: 2020-04-27

### Overview
The following discussion from !187 should be addressed:
- [ ] @sospinar started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/merge_requests/187#note_48281): (+1 comment)
> I have to admit that I never noticed this `this` in the parameterization interface. And now that I do, I do not like it at all. This lambda is getting passed through at least two classes with these references, and it is shouting "I will hold a dangling reference whenever you don't expect it". It is _never_ advised to export lambdas with closures that hold references to other scopes (see the [CppCoreGuidelines](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Rf-value-capture) or [Effective Modern C++](http://shop.oreilly.com/product/0636920033707.do?cmp=af-code-books-video-product_cj_0636920033707_7708709)).
>
> Now it is perhaps too late to complain and ask for a redesign, as we already have several of them. So I would be pleased with just a big warning on these functions and those in the parameter classes ("only if you know what you are doing"), or with very precise instructions on how to avoid the dangling references.
### Proposal
In the lambdas returned by `Dorie::Parameterization::Richards` and `::MualemVanGenuchten`, capture the required parameters by value. *Edit: Parameterizations in the Transport model are affected as well.*
### How to test the implementation?
Evaluate a parameterization function after destroying the parameterization interface which supplied it.
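A minimal sketch of the capture-by-value pattern and of the proposed test follows. The class and member names are illustrative placeholders, not the actual DORiE interface:

```c++
#include <cassert>
#include <cmath>
#include <functional>
#include <memory>

// Illustrative sketch (not the actual DORiE interface): the returned lambda
// captures the parameter it needs *by value*, so it remains valid after the
// parameterization object is destroyed.
struct ParameterizationSketch
{
    double porosity = 0.4;

    std::function<double(double)> water_content() const
    {
        // Capture a copy of porosity instead of [this]: the closure then
        // owns everything it evaluates.
        return [phi = porosity](const double saturation) {
            return phi * saturation;
        };
    }
};

// Mimics the proposed test: obtain the function, destroy the interface that
// supplied it, and evaluate it afterwards.
inline std::function<double(double)> make_dangling_safe_function()
{
    auto p = std::make_unique<ParameterizationSketch>();
    auto f = p->water_content();
    p.reset(); // the parameterization object is gone now
    return f;  // still safe: no reference into *p is captured
}
```

With a `[this]` capture instead, the same test would invoke undefined behaviour, which is exactly the hazard described in the discussion above.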
### Related issues

Labels: Bug, Low Priority, Model:Richards, Model:Transport
Assignee: Lukas Riedel (mail@lukasriedel.com)

# Issue #181: Include growing plant roots as a sink in the Richards source-term function
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/181
Author: Simon Lüdke · Updated: 2020-04-09
## Description
The goal of my project is to simulate plant growth depending on atmospheric forcing (precipitation or evaporation) using the [Feddes](https://library.wur.nl/WebQuery/wurpubs/fulltext/35358) root-water uptake model.
## Proposal
Include a root function, depending on the root hydraulic head, in the [Flow-source function](https://hermes.iup.uni-heidelberg.de/dorie_doc/master/doxygen/html/a01147.html 'doxygen documentation').
It would also be possible to include simpler root configurations for future use.
What I want for [my project](http://ts.iup.uni-heidelberg.de/people/simon-luedke/luedke-project/ 'Internal project page') are three connected functionalities, of which only one calculates the sink term; it is called Root.
The first of the other two, called Biomass, uses the water uptake to calculate biomass production (used for root and shoot growth), while the second, called Shoot, uses the upper boundary condition and the shoot biomass to calculate the potential transpiration for the next time step.
### Main questions
- How to read in the initial parameters and conditions
- How to access the boundary condition at the top boundary (for the calculation of T_p)
- How to access h_m
### Variables to save every time step
(possibly as arrays to enable multiple plants in the future)
- Biomass B
- potential Transpiration T_p
- Water-uptake W
- hm (is already being saved)
- (Sink-value (only for possible multiple plants))
### Flow diagram of the model idea
```mermaid
graph TB;
subgraph model
C[Shoot]-->|Tp|A;
B[Richards]-->|hm|A;
B-->|water-uptake|D;
D[Biomass production]-->|Biomass|A
D-->|Biomass|C
A[Root]==>|Sink value|B;
end
subgraph external parameters
S[Soil parameter]-.->B
P[Plant initial conditions and parameter]-.->A
P-.->C
P-.->D
Bo[Boundary conditions]-.->B
Bo-.->|only top|C
Bo-.->|grid extensions|A
end
```
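The core of the Root functionality is the Feddes reduction factor. The following sketch shows its usual piecewise-linear form; the threshold values are illustrative placeholders, not calibrated parameters, and the struct name is made up for this example:

```c++
#include <cassert>
#include <cmath>

// Sketch of the Feddes root water uptake reduction factor alpha(h): zero
// above h1 (oxygen stress) and below h4 (wilting point), one in the optimal
// range [h3, h2], and linear in between. Threshold values are illustrative
// placeholders, not calibrated parameters.
struct FeddesAlphaSketch
{
    double h1 = -0.1;  // [m] anaerobiosis point
    double h2 = -0.25; // [m] start of optimal uptake
    double h3 = -5.0;  // [m] end of optimal uptake
    double h4 = -80.0; // [m] wilting point

    double operator()(const double h) const
    {
        if (h >= h1 || h <= h4)
            return 0.0; // no uptake: too wet or too dry
        if (h <= h2 && h >= h3)
            return 1.0; // optimal range
        if (h > h2)
            return (h1 - h) / (h1 - h2); // wet-side ramp
        return (h - h4) / (h3 - h4);     // dry-side ramp
    }
};
```

The sink value passed to the Richards model would then scale the potential transpiration T_p with this factor and the local root density.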
###### Disclaimer:
This is my first issue, so please let me know what information I should add or clarify.
Labels: Model:Richards
Assignee: Simon Lüdke

# Issue #180: Homogenize formatting in the C++ code
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/180
Author: Santiago Ospina De Los Ríos (sospinar@gmail.com) · Updated: 2020-02-25

## Description
The C++ code has become a bit disordered because each of the developers is fond of a particular coding style. For example, several parts of the code use tabs instead of spaces. In other places the indentation is 2 spaces instead of 4. The opening braces of `if-else` conditions and `for` loops are sometimes placed on the same line and sometimes on the next one, and so on. None of these inconsistencies is a big issue on its own, but in the end they add up and make our code look somewhat disorganized.
## Proposal
Use automatic tools to ensure that we follow a consistent format scheme through the code.
### Automation
The tool [`clang-format`](https://clang.llvm.org/docs/ClangFormat.html) is available on many systems and can automatically apply specific formatting options. The idea is to agree on a specific format and to store it in the repository in the file `.clang-format`, so that it is clear that we follow it.
I personally use the plain Mozilla settings with the [Mozilla coding style](https://firefox-source-docs.mozilla.org/code-quality/coding-style/index.html), because it is the style most similar to the one used in DUNE. But I am open to any other proposal.
### Old code
Once decided upon specific settings, we should use the tool once over the whole repository to make existing code follow the new format.
### New code
New code may not follow the format we choose for DORiE. Hence, I see two ways to enforce these rules for new code:
* Allow any format in merge requests and run `clang-format` every now and then.
* Do not accept merge requests without the appropriate format.
The first one is relatively easy to apply for everyone but will clutter our git history. The second is more drastic, but could be enforced more easily by providing the right tools, for example a [git hook that checks and applies](hooks) the format settings. In the second case, we would have to update our [contributing guide](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/blob/master/CONTRIBUTING.md).
## Related issues
<!--
PLEASE READ THIS
Briefly explain __what__ should be changed and __propose__ how this can happen.
Adding pseudo code or diagrams would be great!
Additionally, you can:
- add suitable labels
- assign a milestone
- mention other issues
-->

Labels: Discussion

# Issue #178: Use more recent version of Ubuntu in Docker
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/178
Author: Santiago Ospina De Los Ríos (sospinar@gmail.com) · Updated: 2020-02-14

The following discussion from !112 should be addressed:
- [ ] @lriedel started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/112#note_40968): (+6 comments)
> What do you think about switching to the most recent version of Ubuntu, where we could use a much more recent compiler? Clusters tend to have very outdated software, but we actually got Utopia running on a cluster, so I'm quite confident that this could work even if we do not test against old Clang and GCC versions.

Labels: Docker

# Issue #175: Introduce Model::pre_step() method to set boundary condition times independently from step computation
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/175
Author: Lukas Riedel (mail@lukasriedel.com) · Updated: 2020-01-10

In the coupled model, the flux reconstruction at the beginning of the step uses the previous boundary conditions, but that currently does not matter because we apply the `NextStep` policy when evaluating it.
### Description
During the `Model::step()` method, the Richards model applies the current start time to its boundary condition manager in order to retrieve the correct BCs during the step. *Before* `step()` is called, the boundary condition manager therefore still provides the boundary conditions of the *previous* step.
### Proposal
Extend the base `Model` interface with a virtual `pre_step()` method which does nothing by default. Override this method in the Richards model to apply the current time to the boundary condition manager. Additionally, the flux reconstruction cache can be reset in this method rather than at the end of `step()`.
Within the coupled model, call `pre_step()` on the Richards model *before* fetching the water flux reconstruction, such that the current boundary conditions are correctly applied. Then, call `step()` and retrieve the second reconstruction. See the annotated code below for an example.
Additionally, raise a warning in the Richards model if `step()` is called without calling `pre_step()` before.
```c++
void ModelRichardsTransportCoupling<Traits>::step()
{
  // ... //

  // NOTE: Proposal. Make sure the boundary condition manager returns
  //       the BCs for this time step.
  // _richards->pre_step();

  // set initial state of the water flux to container
  auto gf_water_flux_begin = _richards->get_water_flux_reconstructed();
  igf_water_flux->push(gf_water_flux_begin, time_begin);
  // ^----- Currently applies the old BC for this time

  // always do a step of richards
  _richards->step();
  // ^----- Currently enables the correct BCs for this step

  // ... //

  // set final state of the water flux to container
  auto gf_water_flux_end = _richards->get_water_flux_reconstructed();
  igf_water_flux->push(gf_water_flux_end, time_end);
  // ^----- Applies the correct BC for this time

  // ... //
}
```
### How to test the implementation?
* Extend `test-model-base` to check if `pre_step()` is called during `run()`.
### Related issues
Labels: Bug, Low Priority, Model:Common, Model:Richards, Suggestion

## Use BCGS_AMG_SSOR solver for finite volume methods

Issue: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/169
2019-11-25, Lukas Riedel <mail@lukasriedel.com>

### Description
We currently use the `AMG_4_DG` solver in all cases. As its name suggests, the solver is optimized for solving DG problems by separating them into a CG and DG subspace. As finite volume spaces are essentially DG spaces of order zero, this solver works, but it likely performs many no-ops. Specifically, the CG subspace of the problem should be empty. We can therefore switch to a more direct solver.
### Proposal
Use the `BCGS_AMG_SSOR` solver for the finite volume method. It uses the same overall routines as the `AMG_4_DG`, which should ensure that the results remain comparable. It uses `AMG` for preconditioning, `BiCGStab` for solving the problem, and `SSOR` for smoothing the solution.
The PDELab class is called `Dune::PDELab::ISTLBackend_BCGS_AMG_SSOR` and defined in the `dune/pdelab/backend/istl/ovlpistlsolverbackend.hh` header. The class must replace the other linear solver via a conditional type definition based on the polynomial order of the problem. Initialization of the solver object is also different and must be controlled via `if constexpr`.
### How to test the implementation?
Test suite still succeeds.
### Related issues

Labels: Enhancement, Suggestion

## Improve multiprocessing support for Docker image

Issue: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/168
2020-01-27, Lukas Riedel <mail@lukasriedel.com>

### Description
The Docker image includes an OpenMPI installation and compiles DUNE with MPI. The included executables support parallel execution inside the container. However, there are a few issues with the current implementation.
* Executing the application in parallel requires `--allow-run-as-root`.
**Description:**
By default, the user inside the Docker container has root access inside the container. The Docker daemon ensures that these privileges are not transferred to the host system – in a sense, they are only "faked" to grant full user access inside the container. OpenMPI detects that the user has root access, and therefore requires the `--allow-run-as-root` flag to be passed to the `mpirun` command (running MPI with root privileges is apparently a major security issue).
To circumvent that, users have to pass the flag through the CLI, which is done by
```
dorie run --mpi-flags "--allow-run-as-root" <cfg>
```
**Proposal:**
When building the Docker image, create a new user without root privileges. Starting the container will then create a session for this user instead of the root user. However, this might entail that data cannot be written into the `/mnt` directory anymore.
Also, remove the explicit use of `--allow-run-as-root` in the parallel tests.
* Executing the application in parallel leads to errors in the MPI routine.
**Description:**
The errors take the form
```
Read -1, expected <number>, errno = 1
```
This apparently stems from Vader, OpenMPI's shared-memory transport component, which cannot operate as intended. According to [this thread](https://www.bountysource.com/issues/56328379-vader-in-a-docker-container), one option *inside* the container is to disable CMA (whatever that is) via
```
export OMPI_MCA_btl_vader_single_copy_mechanism=none
```
which apparently decreases performance. Alternatively, one needs to grant the container the `ptrace` capability when starting it via
```
docker run --cap-add=SYS_PTRACE <...>
```
which always has to be done at the user side.
**Proposal:**
Have the DORiE CLI detect if `SYS_PTRACE` is enabled (not sure how this works). If not, deactivate the CMA via the above `export` command and warn the user.
### How to test the implementation?
Testing the final Docker image currently is not part of our tests, so there is no way to ensure this automatically.
### Related issues
Labels: Docker, Support

## Add spline MvG parameterisation

Issue: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/155
2019-07-04, Santiago Ospina De Los Ríos <sospinar@gmail.com>

### Description
Prof. Roth has suggested several times that we implement the MvG parameterisation as a spline in order to reduce the computational cost. Ole has also observed that the power operations are a bottleneck in the overall runtime.
### Proposal
None yet. Feel free to start a discussion on this.
### How to test the implementation?
It should behave approximately the same as the raw MvG parameterisation.
### Related issues
See #63
Labels: Accepting MR, Enhancement, Model:Richards, To Do

## Complete documentation on how to restart an aborted simulation

Issue: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/145
2019-06-21, Lukas Riedel <mail@lukasriedel.com>

The following discussion from !130 should be addressed:
- [ ] @lriedel started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/130#note_22564):
> @sospinar, I added the document `restart.rst` you wrote to the Cookbook, because it seems to me like it covers a rather special case (at least for now). I did a bit of reformatting but otherwise left it as is, with a warning on its WIP status. Merging it this way would be fine for me.
### Description
The document `restart.rst` has missing sections and should be overhauled in the future.
We actually do not support a proper restart of a simulation. The `restart.rst` page covers the workaround of creating an initial condition H5 file from an output VTK file and inserting it into a new simulation run.
### Proposal
This issue could be extended to serve as a list for gathering WIP doc pages and other issues in the docs.
### Related issues/MRs
* !137: Create a proper cook book

Labels: Documentation, Low Priority, To Do

## Flux reconstruction yields large error on 3D simplices and 2nd order polynomials (only)

Issue: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/139
2020-01-09, Lukas Riedel <mail@lukasriedel.com>

The following discussion from !105 should be addressed:
- [ ] @sospinar started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/105#note_20006): (+13 comments)
> I'm having troubles with the $`\mathcal{RT}_1`$ elements in 3D. For simplices, the jumps are quite high, while for cubes a local matrix is throwing an error saying that is singular. Not really sure how to proceed here...
### Collected info
A single testing case for the flux reconstruction shows strongly increased flux jump residuals:
| case | error |
| ------ | ------ |
| 2D_1_cube | < 1E-17 |
| 2D_2_cube | < 1E-17 |
| 2D_3_cube | < 1E-17 |
| 2D_1_simplex | < 1E-17 |
| 2D_2_simplex | < 1E-17 |
| 2D_3_simplex | < 1E-17 |
| 3D_1_cube | < 1E-17 |
| 3D_2_cube | < 1E-17 |
| 3D_3_cube | none |
| 3D_1_simplex | < 1E-17 |
| **3D_2_simplex** | **2.70125e-11** |
| 3D_3_simplex | < 1E-17 |
@sospinar:
>>>
We have several sources of numerical inaccuracies:
* In the [DG cases](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/105#note_20066), they arise because values that should be identical are generated by different local matrices; these are `Ax=b` problems.
* In the evaluation of the fluxes, they come from the Piola transformation, which involves the following operations:
```c++
auto J = e.geometry().jacobianInverseTransposed(x); //! Geometry dependent
J.invert();
J.umtv(x,y); //! y += A^T x
y /= J.determinant();
```
>>>
### Proposal
@sospinar:
> Will forward this question to Prof. Bastian.
### How to test the implementation?
### Related issues
See #65, !105

Labels: Bug, Model:Common

## Allow flux reconstruction for non-conforming grids

Issue: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/138
2019-01-09, Santiago Ospina De Los Ríos <sospinar@gmail.com>

The following discussion from !105 should be addressed:
- [ ] @sospinar started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/105#note_19782): (+6 comments)
> I have been thinking about non-conforming grids for flux reconstruction. It is possible, but it absolutely requires doing more than local flux reconstruction (which is what this MR is about):
>
> One has to solve the same problem globally and constrain the normal fluxes to be equal on the intersections. This procedure leads to the so-called hanging nodes. This is not needed for conforming grids, because normal fluxes are trivially equal for RT elements of the same order. In that case, each grid entity has all the information to do the flux reconstruction, and the computations become local. (Notice that p-adaptivity for the flux reconstruction also needs to solve the global problem.)
>
> In that case, some of the things implemented here need to be modified:
>
> * Form a global linear system rather than a local one. This means that the Raviart-Thomas local engine has to be modified so that it forms the right linear system (or just create another local engine).
> * The Local Function Spaces (LFS) cannot be the one I implemented here: either the LFS of PDELab has to accept intersections, or the `MinimalLocalFunctionSpace` has to construct the DOFs and form a composite space (yeah, it implies using `TypeTree`).
> * One has to constrain the degrees of freedom associated with the entities of codimension 1, in a very similar way as for hanging nodes in normal FEM (PDELab has quite old code for this, not sure if it would work).
> * As for this MR, the lifting can be done separately (e.g. with this MR's infrastructure), or directly together with the flux reconstruction RT elements.
> * Check again the computations in the local operator for the non-conforming Raviart-Thomas finite elements (and their test functions).
>
> Since it is quite a lot of work, and since local flux reconstruction on non-conforming grids is not useful at all for us, I won't do what I just described and will instead disable the local flux reconstruction computation for the cube-adaptive cases. @lriedel Are you fine with it?

Labels: Low Priority, Model:Common, Model:Richards, To Do

## Support restarting of simulations

Issue: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/132
2019-05-17, Santiago Ospina De Los Ríos <sospinar@gmail.com>

### Description
From discussion at #122:
> @sospinar: **If we write `vtk`, it is logical that we must be able to read them.** It would, for example, allow us to set checkpoints on simulations or just use the end of some simulations to start a new one without any further postprocessing.
> @lriedel: I generally agree. Restarting from a written simulation would be nice to have. But there are two major issues:
>
>1. VTK is a really huge library. It does not only contain data I/O, but (mostly) functions for data visualization (!) and analysis. It is now available through most common package managers, but I can understand that the DUNE devs did not want to include it because it will be unavailable on most clusters.
>
>2. The written VTK file does not include the entire GFS information. This is a particular problem with local grid refinement, where the grid configuration cannot—at least not simply—be inferred from the VTK output. Additionally, one may choose cell or vertex data output, and different levels of subsampling. If this is evaluated as initial condition, one never has the guarantee that the assembled DOF vector is the same as in the simulation run which generated the VTK file.
> @sospinar: Well, that's better than nothing. Hopefully, subsampling can compensate for part of the error one makes when writing into `vtk`s.
> @lriedel: But that's exactly my point. Then the restart option is no real restart, but a simulation that starts from an initial condition which is _somewhat_ similar to an old solution. I'm not sure if this is really useful.
>
> I see two general options for a program restart:
>
> 1. Use program checkpoints. At a checkpoint, the full precision DOF vector is written to a file. The grid is stored in the [DUNE grid format](https://www.dune-project.org/doxygen/2.6.0/group__DuneGridFormatParser.html). However, I'm unsure about its capabilities and have never worked with it.
>
> 2. Use a solution from a previous program run as initial condition. This is the cheap way. Grid configurations don't need to match and the old solution is interpolated onto the new grid function space.
But that's better discussed in another Issue.
### Proposal
### How to test the implementation?
### Related issues
See #...
Labels: Discussion, Enhancement, Low Priority, Model:Common, To Do

## Switch to vertex data evaluation on system tests

Issue: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/130
2019-09-09, Santiago Ospina De Los Ríos <sospinar@gmail.com>

### Description
The following discussion from !128 should be addressed:
- [ ] @lriedel started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/128#note_19781):
> This requires an update of all system tests. The Python scripts only work with cell data. This is the only reason why `vertexData` is not set to `True` by default.
>
> I have a Python implementation ready which can also evaluate vertex VTKs, based on Ole's `dune-vtkdiff`, see https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/snippets/16. I'll probably use that to update the Python test evaluators "soon".
### Proposal
...
### How to test the implementation?
...
### Related issues
See #...
Labels: Enhancement, Low Priority, Model:Richards, To Do