Sustainable processes have recently attracted increasing interest in the process systems engineering literature. In industry, this kind of problem inevitably requires a multi-objective analysis to evaluate the environmental impact in addition to the economic performance. Bio-based processes have the potential to enhance the sustainability of the energy sector. Nevertheless, such processes very often operate under variable conditions and exhibit uncertain behavior. The approaches presented for solving multi-objective problems under uncertainty have neglected the potential effects of streams of different quality on the overall system. Here, an alternative approach is presented, based on a State Task Network formulation, capable of optimizing under uncertain conditions, considering multiple selection criteria, and accounting for the effect of material quality. The resulting set of Pareto solutions is then assessed using the Elimination and Choice Expressing Reality (ELECTRE) IV method, which identifies the solutions showing the best overall performance over the uncertain parameter space.

Gloss is a critical issue in many applications in the coating industry. Gloss depends on the optical and rheological properties of complex mixtures, and estimating gloss from basic properties is still a challenge. In order to predict the gloss of an industrial thickened-to-application formulation, this work presents a semi-empirical gloss-rheology modeling approach based on a gloss excess function and previous work from other authors. A new matt (low gloss) hybrid waterborne polyurethane dispersion composed of a self-matting agent (A) and a traditional silica-based matting agent (B) has been studied, and the resulting gloss of the mixture has been correlated to pure-component gloss values and dynamic viscosity at medium shear rate. Several modeling options have been tested and their goodness of fit determined. The most promising options have been selected and validated against untrained data sets.

Microgrids are energy systems that can work independently from the main grid in a stable and self-sustainable way. They rely on energy management systems to optimally schedule the distributed energy resources. Conventionally, the main research in this field has focused on scheduling problems applicable to specific case studies rather than on generic architectures that can deal with the uncertainties of renewable energy sources. This paper contributes the design and experimental validation of an adaptable energy management system implemented in an online scheme, as well as an evaluation framework for quantitatively assessing the enhancement attained by different online energy management strategies. The proposed architecture allows the interaction of measurement, forecasting and optimization modules, in which a generic generation-side mathematical problem is modeled, aiming to minimize operating costs and load disconnections. The whole energy management system has been tested experimentally on a test bench under both grid-connected and islanded modes. Also, its performance has been proven considering severe mismatches in forecast generation and load. Several experimental results have demonstrated the effectiveness of the proposed EMS, assessed by the corresponding average gap with respect to a selected benchmark strategy and the ideal boundaries of the best and worst known solutions.

Microgrids are energy systems that aggregate distributed energy resources, loads, and power electronics devices in a stable and balanced way. They rely on energy management systems to optimally schedule the distributed energy resources. Conventionally, many scheduling problems have been solved using complex algorithms that, even so, do not consider the operation of the distributed energy resources. This paper presents the modeling and design of a modular energy management system and its integration into a grid-connected battery-based microgrid. The scheduling model is a power generation-side strategy, defined as a general mixed-integer linear program that takes into account two stages for proper charging of the storage units. This model is treated as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour-ahead forecast data. The operation of the microgrid is complemented with a supervisory control stage that compensates for any mismatch between the offline scheduling process and the real-time microgrid operation. The proposal has been tested experimentally in a hybrid microgrid at the Microgrid Research Laboratory, Aalborg University.

Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software that rely on the efficient modelling of the addressed systems. The work presented here stems from the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and support its improvement. The formalization of conceptual models and the subsequent writing of technical standards are simultaneously analyzed, and guidelines are proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method.

The present study aims at proposing a kinetic model that can capture the complexity and non-linear nature of the Fenton and photo-Fenton processes in the degradation of a model pollutant. Moreover, the proposed model is also able to account for the effect of the Local Volumetric Rate of Photon Absorption (LVRPA), depending on the radiation field within the annular photoreactor and consequently including the reactor dimensions and lamp characteristics. Paracetamol (PCT) was selected as the model pollutant because it is widely used as an antipyretic and analgesic. Three kinetic parameters, accounting for the Fenton-like reaction and the hydroxyl radical attack on hydrogen peroxide and paracetamol, were estimated. Mean Square Error (MSE) and Root Mean Square Error (RMSE) were calculated to validate the model's reliability.
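
The three estimated parameters can be read as rate constants of the reaction steps named above; a plausible notation (reconstructed here, not taken from the paper) is

    r_{Fenton} = k_1\,[\mathrm{Fe^{2+}}][\mathrm{H_2O_2}], \quad
    r_{scav} = k_2\,[\mathrm{HO^{\bullet}}][\mathrm{H_2O_2}], \quad
    r_{PCT} = k_3\,[\mathrm{HO^{\bullet}}][\mathrm{PCT}]

with the photolytic contribution weighted by the LVRPA, and the fit assessed by the standard metrics

    \mathrm{RMSE} = \sqrt{\mathrm{MSE}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^{2}}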

The present work addresses the optimization of a wastewater network. This problem has generally been addressed with Nonlinear Programming (NLP) and Mixed-Integer Nonlinear Programming (MINLP) models. This study explores the potential of considering the component interrelations by taking into account some major types of lumped parameters, namely Biological Oxygen Demand (BOD5) and Chemical Oxygen Demand (COD). Moreover, the combination of Advanced Oxidation Processes (AOPs) with conventional biological processes is also investigated. Such a combination requires evaluating the increase in biodegradability by using the BOD5/COD ratio method. Hence, the BOD5/COD ratio method is included in the mathematical formulation to model the removal efficiency of lumped parameters by the treatment units. This allows investigating the role of component interrelations in the water network optimization problem. The comparison of the results obtained using a conventional mathematical formulation and the proposed new formulation has shown the importance of accounting for the component interrelations in order to ensure the applicability of the attained solutions.
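
A minimal sketch of how such a coupling can enter the formulation (illustrative symbols, not the paper's actual constraint set): for each treatment unit u and lumped parameter j,

    C^{\mathrm{out}}_{u,j} = \left(1 - \eta_{u,j}\right) C^{\mathrm{in}}_{u,j},
    \qquad \eta_{u,\mathrm{bio}} = f\!\left(\mathrm{BOD_5}/\mathrm{COD}\ \text{at the unit inlet}\right)

so that an upstream AOP raising the BOD5/COD ratio increases the removal efficiency available to a downstream biological unit.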

Sustainable processing has recently attracted increasing interest in the process systems engineering literature. From a practical perspective, addressing sustainability inevitably requires a multi-objective analysis and optimization to evaluate the environmental impact in addition to the economic performance. Bio-based processes are typically used to promote sustainability, although the quality of the biomass is usually variable and exhibits uncertain behavior. However, the approaches presented so far to address multi-objective problems under uncertainty have focused on simplifying the optimization model, reducing the set of feasible choices and neglecting the potential effects of quality variations of the streams on the overall system performance. We present here an alternative approach, based on the State Task Network formulation, capable of addressing multi-objective optimization problems under uncertain conditions while including the material quality effect. The resulting set of solutions is then assessed using the ELECTRE IV method, which identifies the ones showing the best overall performance within the uncertain parameter space.

This paper presents an unsupervised data-driven method for Fault Detection and Diagnosis (FDD) of nonlinear dynamic processes. The proposed approach is based on the combination of automatic and non-automatic clustering techniques with a data-driven observer based on Multivariate Dynamic Kriging (MDK) metamodels. The proposed framework is studied via its application to a well-known benchmark simulation case study based on the control of a three-tank system, showing promising performance in terms of accuracy, robustness and simplicity of application.

Developing data-driven fault detection systems for chemical plants requires managing uncertain data labels and dynamic attributes due to operator-process interactions. Mislabeled data is a known problem in computer science that has received scarce attention from the process systems community. This work introduces and examines the effects of operator actions on records and labels, and the consequences for the development of detection models. Using a state-space model, this work proposes an iterative relabeling scheme for retraining classifiers that continuously refines dynamic attributes and labels. Three case studies are presented: a reactor as a motivating example, flooding in a simulated de-Butanizer column as a complex case, and foaming in an absorber as an industrial challenge. For the first case, detection accuracy is shown to increase by 14% while operating costs are reduced by 20%. Moreover, for the de-Butanizer column, the performance of the proposed strategy is shown to be 10% higher than that of the filtering strategy. Promising results are finally reported regarding efficient strategies to deal with the presented problem.
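
A minimal sketch of the relabeling loop (the paper couples this with a state-space model refining dynamic attributes, which is omitted here; names and thresholds are illustrative):

    import numpy as np
    from sklearn.svm import SVC

    def iterative_relabeling(X, y, max_iter=10, confidence=0.9):
        """Retrain a classifier while revising labels it contradicts with high confidence."""
        y = np.asarray(y).copy()
        for _ in range(max_iter):
            clf = SVC(probability=True).fit(X, y)
            proba = clf.predict_proba(X)
            pred = clf.classes_[np.argmax(proba, axis=1)]
            flip = (pred != y) & (np.max(proba, axis=1) > confidence)
            if not flip.any():
                break                      # labels have stabilized
            y[flip] = pred[flip]
        return clf, y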

This work addresses the optimization of supply and distribution chains considering the effect that equipment aging can have on the performance of the facilities involved in the process. The decaying performance of the facilities is modeled by an exponential equation, thus giving rise to a novel MINLP formulation. The capabilities of the proposed approach have been tested through its application to a case study considering a simplification of the Spanish natural gas network. Results demonstrate that overlooking the effect of equipment aging can lead to solutions that are infeasible in practice, and show how the proposed model overcomes such limitations, thus becoming a practical tool to support decision-making in the distribution sector.

This paper presents a hybrid approach to enhance the performance of the data-based Pattern Classification Techniques (PCTs) used for Fault Detection and Diagnosis (FDD) of nonlinear dynamic noisy processes. The method combines kriging metamodels with PCTs (e.g. Support Vector Machines). The metamodels are used in two different ways: first, as Multivariate Dynamic Krigings (MDKs), which estimate the process dynamic behavior/outputs; second, as classical static models used for smoothing noise and imputing missing values in the measured process outputs. During process operation, the estimated and the smoothed actual outputs are compared, and residual/error signals are generated that are used by the classifier to detect and diagnose possible process faults. The method is applied to a benchmark case study, showing a large enhancement of such PCTs due to the introduction of the process dynamics information into these PCTs via the MDKs, and to the smoothing of noise and imputation of missing measurements using the static kriging.
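
A minimal, self-contained sketch of the residual-generation step (a generic Gaussian-process regressor stands in for the MDK metamodel; data and the fault are synthetic):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Lagged inputs/outputs [u(k-1), u(k-2), y(k-1), y(k-2)] -> current output y(k)
    X_lagged = rng.normal(size=(200, 4))
    y_true = X_lagged @ np.array([0.5, 0.2, 0.8, -0.3]) + 0.01 * rng.normal(size=200)

    mdk = GaussianProcessRegressor().fit(X_lagged, y_true)   # dynamic surrogate

    # Simulate a sensor-bias fault on half of the (smoothed) measurements
    y_meas = y_true.copy()
    labels = np.zeros(200, dtype=int)
    labels[100:] = 1
    y_meas[100:] += 0.5

    # Residual = smoothed measurement - model estimate; the classifier maps
    # residual patterns to fault classes (0 = normal, 1 = fault)
    residuals = (y_meas - mdk.predict(X_lagged)).reshape(-1, 1)
    fault_clf = SVC().fit(residuals, labels)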

This work investigates the application of different metamodeling techniques for enhancing the information quality of process history databases by smoothing the noise/outliers and imputing the missing data that usually contaminate such databases. The information quality enhancements are aimed at improving the training of the data-driven classification techniques used for Fault Detection and Diagnosis (FDD) of the process. A simulation case study of a Continuous Stirred Tank Reactor (CSTR) is used to produce training datasets containing noisy, outlier and missing values. Three metamodeling techniques, namely Ordinary Kriging (OK), Artificial Neural Networks (ANN) and Polynomial Regression (PR), are used to smooth the noise and outliers, and to impute the missing values. Next, the FDD performance of the Support Vector Machine (SVM) classifier trained with the datasets recuperated by the metamodels is compared to its performance when trained with the datasets still containing noisy, outlier and missing values. The results show a large enhancement in the performance of the SVM when trained with the data recuperated using the metamodels, especially when OK is exploited.
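
A minimal sketch of the data-recuperation step (sklearn's Gaussian-process regressor stands in for Ordinary Kriging; the signal is synthetic):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 120).reshape(-1, 1)            # sampling times
    signal = np.sin(t).ravel() + 0.1 * rng.normal(size=120)   # noisy record
    signal[rng.choice(120, size=10, replace=False)] = np.nan  # missing entries

    # Fit the metamodel on observed points only, then reconstruct the record:
    # predictions smooth the noise and fill the missing values in one pass
    obs = ~np.isnan(signal)
    gp = GaussianProcessRegressor().fit(t[obs], signal[obs])
    recuperated = gp.predict(t)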

Traditional methods for Fault Detection and Diagnosis (FDD) usually consider that processes operate under a single steady condition, but for several reasons (e.g. equipment aging) the operating conditions of industrial processes change continuously in practice. Under these new circumstances, the use of the originally tuned FDD system would cause false alarms and reduce the fault classification performance. In this study, the Hyperplane-Distance Support Vector Machine (HD-SVM) method is exploited for process FDD in order to maintain FDD performance when it decays because of aging. Its effectiveness is shown through simulation studies on a CSTR, for which an aging term is simulated by progressively decreasing the heat transfer coefficient (by 5%). This aging reduces the classification performance accordingly. Next, the performance of HD-SVM, Traditional Incremental Learning (TIL) and Non-Incremental Learning (NIL) (using all data) are compared. The HD-SVM incremental learning is shown to reduce the training time of the classifier while increasing its accuracy. Therefore, HD-SVM is shown to cover the weakness of traditional incremental learning algorithms of losing potentially useful information, and to improve classification performance in process FDD.
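
A minimal sketch of the hyperplane-distance selection idea (binary case; the retained set and margin threshold are illustrative simplifications, not the exact HD-SVM algorithm):

    import numpy as np
    from sklearn.svm import SVC

    def hd_incremental_update(clf, X_old, y_old, X_new, y_new, margin=1.0):
        """Retrain keeping only the old samples lying close to the hyperplane."""
        d = np.abs(clf.decision_function(X_old))   # distance-like score (binary case)
        keep = d <= margin                         # near-boundary, informative samples
        X = np.vstack([X_old[keep], X_new])
        y = np.concatenate([y_old[keep], y_new])
        return SVC(kernel="linear").fit(X, y), X, y

    rng = np.random.default_rng(2)
    X0 = rng.normal(size=(100, 2))
    y0 = (X0[:, 0] > 0).astype(int)
    clf = SVC(kernel="linear").fit(X0, y0)
    X1 = rng.normal(loc=0.3, size=(20, 2))         # data arriving after process drift
    y1 = (X1[:, 0] > 0.3).astype(int)
    clf, X0, y0 = hd_incremental_update(clf, X0, y0, X1, y1)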

Fault diagnosis (FD) using data-driven methods is essential for monitoring complex process systems, but its performance is severely affected by the quality of the information used. Additionally, processing the huge amounts of data recorded by modern monitoring systems may be complex and time-consuming if no data mining and/or preprocessing methods are employed. Thus, feature selection for FD is advisable in order to determine the optimal subset of features/variables for conducting statistical analyses or building a machine-learning model. In this work, feature selection is formulated as an optimization problem. Several relevancy indices, such as Maximum Relevance (MR), Value Difference Metric (VDM), and Fit Criterion (FC), and redundancy indices, such as Minimum Redundancy (mR), Redundancy VDM (RVDM), and Redundancy Fit Criterion (RFC), are combined to determine the optimal subset of features. Another approach to feature selection is based on the optimal performance of the classifier, which is achieved by a classifier wrapped with a genetic algorithm. The efficiency of this strategy is explored considering different classifiers, namely Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbours (KNN) and Gaussian Naïve Bayes (GNB). A genetic algorithm (GA), as a Derivative-Free Optimization (DFO) technique, has been used due to its robustness in dealing with different kinds of problems. The optimal subset of features obtained has been tested with SVM, DT, KNN, and GNB on the Tennessee-Eastman process benchmark with 19 classes. Results show that the wrapper method, using the performance of the classifier as the objective function, obtains the best feature set.
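
A minimal sketch of the GA wrapper (binary feature masks, fitness = cross-validated SVM accuracy; population sizes and rates are illustrative):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                               random_state=0)

    def fitness(mask):
        """Wrapper objective: cross-validated accuracy of the classifier."""
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

    pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)  # binary masks
    for _ in range(15):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]                   # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(10, size=2)]
            cut = rng.integers(1, X.shape[1])
            child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
            children.append(child ^ (rng.random(X.shape[1]) < 0.05))  # bit-flip mutation
        pop = np.vstack([parents, np.array(children)])

    best_mask = pop[np.argmax([fitness(m) for m in pop])]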

The integration of decision-making procedures usually assigned to different hierarchical production systems requires the use of complex mathematical models and high computational effort, in addition to the need for extensive management of data and knowledge within the production systems. This work addresses this integration problem and proposes a comprehensive solution approach, as well as guidelines for Computer Aided Process Engineering (CAPE) tools managing the corresponding cyberinfrastructure. This study presents a methodology based on a domain ontology, which is used as the connector between the introduced data, the different available formulations developed to solve the decision-making problem, and the information necessary to build the finally required problem instance. The methodology has demonstrated its capability to help exploit different available decision-making problem formulations in complex cases, leading to new applications and/or extensions of these formulations in a robust and flexible way.

Recently, energy systems have experienced a change of paradigm, from a large-scale centralized approach to the in-situ exploitation of renewable sources. Special attention has been paid to microgrids, a particular case of distributed generation where consumer nodes include generation and can be either grid-connected or isolated. This work aims to develop a general model to determine the optimal sizing of an energy system under fixed conditions and to analyze the effect of considering different cycle patterns on the solution. The proposed mixed-integer linear programming (MILP) formulation allows determining the best combination of available technologies that satisfies the demand of a given set of scenarios at minimum total cost. The model has been implemented in AIMMS and applied to a case study consisting of a five-member Mediterranean household. The results obtained reveal the need to select the most convenient time cycles for defining the scenarios of the sizing model.

Although design is a problem that has been addressed in the literature, maintaining, upgrading and expanding energy distribution networks along the entire life-cycle is a topic that has received scarce attention. The problem includes considering the long-term dependence of the efficiency of the investments along their life span. This work presents a novel model for the optimization of energy distribution networks considering the decaying performance caused by equipment aging. Increasing maintenance costs have been included to model the decaying performance, thus giving rise to an MINLP formulation. A simplified case study based on a real electricity distribution network has been used as a test bed for the proposed approach. Results show that unrealistic sizing and planning solutions are obtained when the decaying performance is not considered, and demonstrate that the proposed MINLP overcomes such limitations.

The increasing number of electrical and electronic appliances improving the lifestyle of residential consumers has led to a larger energy demand. In order to supply their energy requirements, consumers have changed the paradigm by integrating renewable energy sources into their power grid. Consumers thus become prosumers, internally generating and consuming energy in pursuit of autonomous operation. This paper proposes an energy management system for coordinating the operation of distributed household prosumers. It was found that better performance is achieved under cooperative operation with other prosumers in a neighborhood environment. Simulation and experimental results validate the proposed strategy by comparing the performance of islanded prosumers with operation in cooperative mode.

This paper presents a fast Super-Resolution (SR) algorithm based on selective patch processing. Motivated by the observation that some regions of images are smooth and unfocused and can be properly upscaled with fast interpolation methods, we locally estimate the probability of performing a degradation-free upscaling. Our proposed framework explores the usage of supervised machine learning techniques and tackles the problem using binary boosted tree classifiers. The applied upscaler is chosen based on the obtained probabilities: (1) a fast upscaler (e.g. bicubic interpolation) for those regions which are smooth, or (2) a linear regression SR algorithm for those which are ill-posed. The proposed strategy accelerates SR by only processing the regions which benefit from it, thus not compromising quality. Furthermore, all the algorithms composing the pipeline are naturally parallelizable, and further speed-ups could be obtained.
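
A minimal sketch of the patch-routing logic (a gradient-boosted tree and bicubic zoom stand in for the trained classifier and the two upscalers; features, training data and the threshold are illustrative):

    import numpy as np
    from scipy.ndimage import zoom
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(4)

    def patch_features(p):
        """Cheap descriptors of patch 'difficulty' (variance, gradient energy)."""
        gy, gx = np.gradient(p.astype(float))
        return [p.var(), (gx ** 2 + gy ** 2).mean()]

    # Toy training set: smooth patches (label 0) vs textured patches (label 1)
    smooth = [np.full((8, 8), c) + 0.01 * rng.normal(size=(8, 8)) for c in rng.random(50)]
    textured = [rng.random((8, 8)) for _ in range(50)]
    X = [patch_features(p) for p in smooth + textured]
    y = [0] * 50 + [1] * 50
    clf = GradientBoostingClassifier().fit(X, y)

    def sr_regression(p, scale):                # placeholder for the learned SR branch
        return zoom(p, scale, order=3)

    def upscale_patch(p, scale=2, threshold=0.5):
        prob = clf.predict_proba([patch_features(p)])[0, 1]
        if prob < threshold:
            return zoom(p, scale, order=3)      # fast interpolation branch
        return sr_regression(p, scale)          # ill-posed patches: SR branch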

In this paper, an energy management system is defined as a flexible architecture. This proposal can be applied to homes and residential areas that include generation units. The system has been integrated and tested in a grid-connected microgrid prototype, where optimal power generation profiles are obtained by considering economic aspects.

An important problem to be addressed by diagnostic systems in industrial applications is the estimation of faults from incomplete observations. This work discusses different approaches for handling missing data and the performance of data-driven fault diagnosis schemes. Schemes exploiting the classifier directly and combined (imputation plus classification) methods were assessed on the Tennessee-Eastman process, for which diverse incomplete observations were produced. The use of several indicators revealed the trade-off between the performances of the different schemes. Support vector machines (SVM) and C4.5, combined with k-nearest neighbours (kNN), produce the highest robustness and accuracy, respectively. Bayesian networks (BN) and centroid methods appear as inappropriate options in terms of accuracy, while Gaussian naive Bayes (GNB) is sensitive to imputation values. In addition, feature selection was explored for further performance enhancement, and the proposed contribution index showed promising results. Finally, an industrial case was studied to assess the informative level of incomplete data in terms of the redundancy ratio and to generalize the discussion.
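
A minimal sketch of one combined scheme discussed above, kNN imputation feeding an SVM (synthetic data; parameters are illustrative):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.impute import KNNImputer
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    X_miss = X.copy()
    mask = rng.random(X.shape) < 0.1          # 10% of entries go missing
    X_miss[mask] = np.nan

    # kNN imputation restores incomplete observations before classification
    X_filled = KNNImputer(n_neighbors=5).fit_transform(X_miss)
    clf = SVC().fit(X_filled, y)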

This paper presents the system integration and hierarchical control implementation in an inverter-based microgrid research laboratory (MGRL) at Aalborg University, Denmark. MGRL aims to provide a flexible experimental platform for comprehensive studies of microgrids. The structure of the laboratory, including the facilities, configurations and communication network, is first introduced. The complete control system is based on a generic hierarchical control scheme including primary, secondary and tertiary control. Primary control loops are developed and implemented in a digital control platform, while system supervision and advanced secondary and tertiary management are realized in a microgrid central controller. The software and hardware schemes are described. Several example case studies are introduced and performed in order to achieve power quality regulation, energy management and flywheel energy storage system control. Experimental results are presented to show the performance of the whole system.

The photo-Fenton process is a photochemical process that has proven to be highly efficient in degrading new potentially harmful contaminants. Despite this, real applications to wastewater treatment plants are still far away, since little attention has been paid in the past decades to the development of systematic procedures for the selection of proper control, automation and optimisation strategies. The present work aims at investigating the effectiveness of a model-based approach for the dynamic optimisation of the recipe of a photo-Fenton process performed in fed-batch mode (reactant dosage). The purpose was to find the optimal hydrogen peroxide (H2O2) dosage profile and processing time (tend) that guarantee the minimum processing cost while ensuring the required decrease in Total Organic Carbon (TOC) concentration. The simplified model proposed by Cabrera Reina et al. (2012) was adopted and properly adapted in order to describe the evolution of the system under study when a reactant dosage protocol is performed. The dynamic optimisation problem was addressed applying a direct simultaneous optimisation method. Results have shown a promising model-based optimisation approach that allows adjusting the recipe in a fast and reliable way, saving money and experimental time.

A microgrid is an energy subsystem composed of generation units, energy storage, and loads that requires power management in order to supply the load properly according to defined objectives. This paper proposes an online energy management system for a storage-based grid-connected microgrid that feeds a critical load. The optimization problem aims to minimize the operating cost while maximizing the power provided by the renewable energy sources. The power references for the distributed energy resources (DER) are scheduled using the CPLEX solver, which takes as input current measurements, stored data and adjusted weather forecast data, previously scaled in each iteration considering the current status. The proposed structure is tested in a real-time simulation platform (dSPACE 1006) for the microgrid model, using LabVIEW for data acquisition and MATLAB to implement the energy management system. The results show the effectiveness of the proposed energy management system for different initial conditions of the storage system.

To prevent process interruption and eventual losses, the need for a reliable fault detection and diagnosis (FDD) system is fully acknowledged. Besides the capability to recognize known faults automatically, a further requirement for an FDD system is adaptability. If the model cannot be adapted to deal with changes, such as variations due to external factors, decaying performance, or catalyst poisoning, the FDD system could perform misleadingly. This paper presents the advantages of an incremental learning algorithm for fault diagnosis when a support vector machine is implemented as the classifier. The incremental learning algorithm used is based on the hyperplane distance (HD) [1]. In the continuous reactor studied, two cases are compared in order to clarify the role and importance of the incremental learning algorithm. Results show the effectiveness of this method.

This paper analyses the operational decision-making procedures required to address the simultaneous management of energy supplies and requests in a microgrid scenario, in order to best accommodate arbitrary energy availability profiles resulting from an intensive use of renewable energy sources, and to extensively exploit the eventual flexibility of the energy requirements to be fulfilled. The optimization of the resulting short-term scheduling problem in deterministic scenarios is addressed through a Mixed-Integer Linear Programming (MILP) model, which includes a new hybrid time formulation developed to take advantage of the procedures based on discrete time representations, while maintaining the ability to identify solutions requiring a continuous time representation, which might be qualitatively different from the ones constrained to a fixed time grid for decision-making.
The performance of this new time representation has been studied, taking into account the granularity of the model and analyzing the associated trade-offs against other alternatives. The promising results obtained with this new formulation encourage further research on the development of decision-making tools for the enhanced operation of microgrids.
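
A minimal discrete-time sketch of the kind of scheduling MILP discussed (not the paper's hybrid time formulation; the data and the PuLP modelling layer are illustrative):

    import pulp

    T = range(24)                                  # hourly periods (discrete grid)
    price = [0.10] * 8 + [0.20] * 12 + [0.10] * 4  # grid tariff per period
    demand = [2.0] * 24                            # forecast load (kW)
    pv = [0.0] * 6 + [1.5] * 12 + [0.0] * 6        # forecast renewable output (kW)

    m = pulp.LpProblem("microgrid_scheduling", pulp.LpMinimize)
    buy = pulp.LpVariable.dicts("buy", T, lowBound=0)             # grid import
    gen = pulp.LpVariable.dicts("gen", T, lowBound=0, upBound=3)  # dispatchable unit
    on = pulp.LpVariable.dicts("on", T, cat="Binary")             # unit commitment
    ch = pulp.LpVariable.dicts("ch", T, lowBound=-2, upBound=2)   # storage (dis)charge
    soc = pulp.LpVariable.dicts("soc", T, lowBound=0, upBound=5)  # state of charge

    m += pulp.lpSum(price[t] * buy[t] + 0.15 * gen[t] + 0.05 * on[t] for t in T)
    for t in T:
        m += buy[t] + pv[t] + gen[t] == demand[t] + ch[t]   # power balance
        m += gen[t] <= 3 * on[t]                            # produce only if committed
        m += soc[t] == (soc[t - 1] if t else 2.5) + ch[t]   # storage dynamics
    m.solve()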

This work presents a novel systematic approach for the construction of domain ontologies. The suggested approach uses a semi-automatic construction methodology. For this study, parent-child concept pairs are taken from a previous work. Novel contributions include building and completing branches, introducing new relations, and resolving inconsistencies and contradictions. For the process systems engineering (PSE) domain the ISA88 Standard is chosen as a promising starting point for automatic text processing. Finally, this work concludes with a discussion of the ISA88 Standard based on the conclusions that can be obtained from the application of this semi-automatic construction methodology.

This paper investigates data based modelling of complex nonlinear processes, for which a first principle model useful for process monitoring and control is not available. These empirical models may be used as soft sensors in order to monitor a reaction’s progress, so reducing expensive offline sampling and analysis. Three different data modelling techniques are used, namely Ordinary Kriging, Artificial Neural Networks and Support Vector Regression. A simple case is first used to illustrate the problem, assess and validate the modelling approach, and compare the modelling techniques. Next, the methodology is applied to a photo–Fenton pilot plant to model and predict the reaction progress. The results show promising accuracy even when few training points are available, which results in huge savings of time and cost of the experimental work.
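
A minimal sketch of the soft-sensor idea (sklearn's Gaussian-process regressor stands in for Ordinary Kriging, alongside SVR; the data are synthetic):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(7)
    # Easily measured inputs (e.g. time, temperature) -> reaction progress
    X = rng.uniform(0.0, 1.0, size=(30, 2))          # few training points
    progress = np.sin(3 * X[:, 0]) * X[:, 1] + 0.02 * rng.normal(size=30)

    kriging = GaussianProcessRegressor().fit(X, progress)
    svr = SVR().fit(X, progress)

    x_new = np.array([[0.4, 0.7]])
    print(kriging.predict(x_new), svr.predict(x_new))   # soft-sensor estimates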

Energy is a resource that needs to be managed: decisions must be made on the production, storage, distribution and consumption of energy. Determining how much to produce, where and when, and assigning resources to needs in the most efficient way is a problem that has been addressed in several fields. Tools are available to formulate and solve this kind of problem. Using them in energy management problems requires starting with the basics of mathematical programming techniques, addressing some standard production planning problems, and adapting the solutions to new particular situations of interest.

This paper presents the development of a microgrid central controller in an inverter-based intelligent microgrid (iMG) lab at Aalborg University, Denmark. The iMG lab aims to provide a flexible experimental platform for comprehensive studies of microgrids. The complete control system applied in this lab is based on the hierarchical control scheme for microgrids and includes primary, secondary and tertiary control. The structure of the lab, including the lab facilities, configurations and communication network, is first introduced. Primary control loops are developed in MATLAB/Simulink and compiled to dSPACE platforms for local control purposes. In order to realize system supervision and proper secondary and tertiary management, a LabVIEW-based microgrid central controller is also developed. The software and hardware schemes are described. An example case is introduced and tested in the iMG lab for voltage/frequency restoration and voltage unbalance compensation. Experimental results are presented to show the performance of the whole system.

Bisphenol A (BPA; 2,2-bis(4-hydroxyphenyl)propane) is an industrial organic chemical basically used in the plastics industry as a monomer for producing epoxy resins and polycarbonates [1,2]. It is also a well-known endocrine disruptor that contaminates surface waters even at low concentrations [3].
Unfortunately, BPA cannot be entirely removed from water by conventional treatments. Additionally, in some cases, such treatments can lead to a series of by-products with a higher endocrine disrupting effect [4].
Advanced Oxidation Processes (AOPs), among them the Fenton and photo-Fenton processes, are efficient methods for BPA photodegradation [1]. However, they are energy-intensive processes and their cost ought to be reduced by shortening the reaction time and lowering the consumption of reagents.
In this work, the Fenton and photo-Fenton degradation of BPA (0.5 L, 30 mg L-1) was addressed. The process efficiency was evaluated under different H2O2 and Fe(II) initial concentrations (2.37-6.41 mM H2O2 and 1.42·10-2 to 3.92·10-2 mM iron salt), while other variables were fixed (pH = 3, 25 ºC, UV light source). The treatment performance was assessed for a series of assays from a factorial design and was quantified in terms of the decay rate of total organic carbon (TOC) and the total conversion attained, according to pseudo-first-order kinetics [5-6].
The performance of the mineralization may be characterized by determining the two parameters of the model, the maximum attainable conversion (equivalently, the asymptotic value [TOC]∞) and the rate constant k, which can be obtained by fitting the model to the experimental data under the least-squares criterion. The fitted parameter values were then plotted against each other to identify different clusters and the conditions which produce higher mineralization rates.
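
A plausible form of the pseudo-first-order mineralization model referred to above (the notation is reconstructed from the text, so treat it as an assumption):

    [\mathrm{TOC}](t) = [\mathrm{TOC}]_{\infty} + \left([\mathrm{TOC}]_{0} - [\mathrm{TOC}]_{\infty}\right)e^{-kt}

where [TOC]∞ fixes the maximum attainable conversion, 1 − [TOC]∞/[TOC]0, and k is the pseudo-first-order rate constant; both are obtained by least-squares fitting.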

The removal of sour gas components from gas streams using chemical solvents, such as MDEA, is a requirement in most hydrocarbon processing plants. The acid gas constituents (H2S and CO2) react with an aqueous solution in a high-pressure absorber. Subsequently, the solvent is stripped from the acid gas in the regenerator at elevated temperature so that it can be reused. Figure 1 illustrates a typical gas treating plant employing an alkanolamine.
One of the most frequent faults in gas sweetening is amine foaming, which results in the loss of proper vapor-liquid contact, solution hold-up and poor solution distribution. Some root causes of foaming include the accumulation of heavy hydrocarbons and solid particles in the amine, and antifoam trouble. The consequences are off-spec product, downtime, and loss of amine and energy. Whenever operators identify foaming from process measurement trends, a short-term measure is the manual injection of an antifoam agent. However, the disadvantage of this approach is misdetection, due to the existence of numerous variables and to operators' limited experience or attention. Also, other process disturbances in the system, originating downstream or upstream, can mislead the operator when detecting foaming. On the other hand, an overdose injection of antifoam has adverse effects on the filtration system.
This work proposes decision support software based on principal component analysis (PCA) for the detection of foaming in a sweetening plant. From the many process variables, operators tend to make decisions based on only one or two of them. Consequently, important information included in other variables, or in their relations, is discarded. This work proposes the use of PCA [1] to reduce the monitoring space without discarding relevant process variance. Because of the importance of process disturbances, PCA cannot be applied using its standard statistics (T2 and SPE), due to the high number of false alarms they produce. However, a 2D scatter plot in the reduced space (Fig. 2) allows an early and reliable identification of foaming, thereby supporting operator decisions and reducing operating costs. Results show fast and efficient data analysis for practical fault detection and qualitative decision-making support. Further work is underway regarding the quantitative assessment of confidence levels.
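
A minimal sketch of the reduced-space monitoring step (synthetic data; in the plant, the first two component scores would be plotted for operator inspection):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)
    X_normal = rng.normal(size=(500, 12))       # historical multivariate records

    scaler = StandardScaler().fit(X_normal)
    pca = PCA(n_components=2).fit(scaler.transform(X_normal))

    def scores(x):
        """Project new measurements onto the 2D monitoring space."""
        return pca.transform(scaler.transform(x.reshape(1, -1)))[0]

    # Foaming would appear as a drift of the score cloud away from the
    # normal-operation region in the (PC1, PC2) scatter plot.
    print(scores(X_normal[0]))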

Advanced Oxidation Processes (AOPs) have been proposed as alternative water treatments coping with recalcitrant organic pollutants [1]. AOPs are based on the in-situ generation of highly oxidant hydroxyl radicals. Particularly, in the photo-Fenton process they are produced from ferrous salts (Fe) and hydrogen peroxide (HP). However, this process has been acknowledged to suffer from inefficient reactions scavenging HP, which has motivated a large amount of research aimed at determining efficient ratios for the initial concentrations of reactants (Fe/HP).
Dosage is also reported to reduce these side reactions and improve the performance of these processes. Certainly, since they are operated batchwise, the most efficient Fe/HP ratio should not be regarded as an initial value, but as a profile that may undergo optimization. Yet, such an optimization problem has not been attempted. A large experimental effort has produced empirical models that cannot be scaled up and do not address the process dynamics, while the first-principle kinetic models that can be found in the literature [2] require a high computational cost for too simple reactions. Therefore, a first issue towards optimization is model selection.
This work adopts the kinetic model by Cabrera Reina [3] based on aggregated components. This model focuses on practical observable variables such as dissolved oxygen and total organic carbon (TOC), and provides a simplified modelling of the delayed response of TOC and of the scavenging reactions. Hence, this work expands it to semi-batch operation and addresses the simulation and subsequent optimization of the dosage profile using the Python and Modelica open-source programming languages. Python is used as the core, providing functions that Modelica lacks, while the model is implemented in Modelica to take advantage of its model-based language.
The optimization of the HP dosage profile addressed two different scenarios and objective functions. Thus, Pareto frontiers were determined to analyse trade-offs and opportunities, and to aid decision-making: i) the TOC reduction to be achieved under time and HP limitations, and ii) the total HP required to attain a given conversion (TOC) within a given time horizon.
Continuous and piecewise optimization approaches were tested and discussed. Results validate the importance of determining efficient dosage profiles for the photo-Fenton process. Model-based optimization allows exploring opportunities and trade-offs, and aids decision-making. Hence, this study fosters further work on model fitting for specific applications.
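
A minimal SciPy sketch of the piecewise dosage optimization (a toy two-state TOC/H2O2 model and a single weighted objective stand in for the Cabrera Reina kinetics and the Pareto analysis; all rates and bounds are illustrative):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    def simulate(doses, t_end=60.0):
        """Integrate a toy TOC/H2O2 model under a piecewise-constant dosage profile."""
        edges = np.linspace(0.0, t_end, len(doses) + 1)

        def rhs(t, y):
            toc, hp = y
            i = min(np.searchsorted(edges, t, side="right") - 1, len(doses) - 1)
            r = 0.05 * toc * hp                      # degradation consumes both species
            return [-r, -0.5 * r - 0.01 * hp + doses[i]]   # dosage feeds H2O2

        sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.1])
        return sol.y[0, -1], float(np.sum(doses))    # final TOC, total H2O2 dosed

    def objective(x, w=0.2):
        toc_end, hp_used = simulate(np.clip(x, 0.0, 0.05))
        return toc_end + w * hp_used                 # conversion vs reagent trade-off

    res = minimize(objective, x0=np.full(6, 0.01), method="Nelder-Mead")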

Using commercial process simulators in chemical engineering training is well established. Simulators may provide accurate estimations of chemical processes provided that parameters are thoughtfully fitted to experimental reality. Yet, contrasting experimental data and process simulation beyond thermodynamic data is hardly ever experienced: students usually attend the lab to get parameters for simple operations, while they use simulators for designing complex processes.
In this work, methods and tools have been prepared for linking these steps, from the development of the process model to parameter tuning, and the subsequent use of the simulation for process monitoring and diagnosis in a lab pilot plant. Hence, the comparison of plant variables and simulation results allows immediate identification of process malfunction.
The pilot plant consists of a spray dryer with a full data-acquisition system. The system is operated only with water and air in order to avoid problems related to modelling complex fluids such as milk. Furthermore, the system is modelled as a heat exchanger and a multifeed separator. Still, the process has a number of measurements, such as hot air temperature, compressed air pressure, and water and wet air mass flows. Additionally, condensed water can be determined through the measurement of the volume produced within a given time interval.
VMGSim is a world-class chemical process simulator for the oil, gas, and chemical industries. Although the case modelled is of limited complexity, the exercise expands to cover various technologies readily available inside VMGSim, such as steady-state simulation, dynamic simulation, model tuning, industrial OPC communications, etc. This allows exploring various concepts such as shadow plant, soft sensors, data reconciliation, real-time diagnosis, etc.
Firstly, a simplified steady-state model of the pilot plant is produced with VMGSim. The correlation of the variables involved is analysed with the Case Study tool. Then, the exercise consists of designing and executing experiments in the pilot plant to produce correlations accordingly, so that model parameters can be adjusted using the VMGSim Model Regression tool. Various objective functions to be minimised are available in VMGSim, such as Least Squares, Log Least Squares, or Absolute Error, using various optimisation methods. This step provides a starting point for the next step, which is dynamics. Again, experiments have to be designed and run to obtain transient time series to be compared with the results from the corresponding dynamic simulations; in this case, the model is adjusted through an accurate representation of the system pressure drops and a precise design of the controller parameters. This allows fitting the remaining model parameters (holdups and capacities).
Finally, useful VMGSim communication features allow easily exporting dynamic data via OPC. Since the pilot plant is expected to export data from the SCADA via OPC to a common platform in the near future, the simulation has been modified to represent a real plant until that platform is ready. A concluding exercise consists of on-line monitoring of both sources of data, and exploring the opportunities for data reconciliation, fault detection and diagnosis. In this way, tools have been prepared for training chemical engineering students in the integration of simulation models and process data, filling a gap between lab and simulation experiences.