While 3D cinema is becoming increasingly established, little effort has focused on the general problem of producing a 3D sound scene spatially coherent with the visual content of a stereoscopic-3D (s-3D) movie. The perceptual relevance of such spatial audiovisual coherence is of significant interest. In this thesis, we investigate the possibility of adding spatially accurate sound rendering to regular s-3D cinema. Our goal is to provide a perceptually matched sound source at the position of every object producing sound in the visual scene. We examine and contribute to the understanding of the usefulness and the feasibility of this combination. By usefulness, we mean that the technology should positively contribute to the experience, and in particular to the storytelling. In order to carry out experiments proving the usefulness, it is necessary to have an appropriate s-3D movie and its corresponding 3D audio soundtrack. We first present the procedure followed to obtain this joint 3D video and audio content from an existing animated s-3D movie, the problems encountered, and some of the solutions employed. Second, as s-3D cinema aims at providing the spectator with a strong impression of being part of the movie (sense of presence), we investigate the impact of the spatial rendering quality of the soundtrack on the reported sense of presence. The short 3D audiovisual content is presented with three different soundtracks. These soundtracks differ in their spatial rendering quality, from stereo (low spatial coherence) to Wave Field Synthesis (WFS, high spatial coherence). The original stereo version serves as a reference. Results show that the sound condition does not impact the sense of presence of all participants. However, participants can be classified into three different levels of presence sensitivity, with the sound condition having an effect only at the highest level (12 out of 33 participants).
Within this group, the spatially coherent soundtrack provides a lower reported sense of presence than the other custom soundtrack. The analysis of the participants' heart rate variability (HRV) shows that the frequency-domain parameters correlate with the reported presence scores. By feasibility, we mean that a large portion of the spectators in the audience should benefit from this new technology. In this thesis, we explain why the combination of accurate sound positioning and stereoscopic-3D images can lead to an incongruence between the sound and the image for multiple spectators. We then adapt to s-3D viewing a method originally proposed in the literature for 2D images in order to reduce this error. Finally, a subjective experiment is carried out to prove the efficiency of the method. In this experiment, an angular error between an s-3D video and a spatially accurate sound reproduced through WFS is simulated. The psychometric curve is measured with the method of constant stimuli, and the threshold for bimodal integration is estimated. The impact of the presence of background noise is also investigated: a comparison is made between the case without any background noise and the case with an SNR of 4 dBA. Estimates of the thresholds and the slopes, as well as their confidence intervals, are obtained for each level of background noise. When background noise is present, the point of subjective equality (PSE) is higher (19.4° instead of 18.3°) and the slope is steeper (-0.077 instead of -0.062 per degree). Because of the overlap between the confidence intervals, however, it is not possible to statistically differentiate between the two noise levels. The implications for sound reproduction in a cinema theater are discussed.
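The constant-stimuli analysis above can be sketched as a simple curve fit: present fixed angular errors many times, record the proportion of "coherent" judgments, fit a psychometric function, and read off the PSE and slope. The sketch below is illustrative only: the data points, the logistic form and the least-squares grid search are assumptions, not the thesis's actual procedure, which would typically use a maximum-likelihood fit with bootstrapped confidence intervals.

```python
import math

def psychometric(theta, pse, slope):
    """Logistic psychometric function: proportion of 'coherent' responses
    for an angular audio-visual error theta (degrees). A negative slope
    means detection of the incongruence grows with the error."""
    return 1.0 / (1.0 + math.exp(-slope * (theta - pse)))

# Hypothetical constant-stimuli data: tested angular errors and observed
# proportions of "coherent" judgments (NOT the thesis's measurements).
angles = [5, 10, 15, 20, 25, 30]
props = [0.89, 0.79, 0.65, 0.46, 0.29, 0.16]

# Coarse least-squares grid search over (PSE, slope) candidates.
best = None
for pse10 in range(100, 301):            # PSE candidates: 10.0 .. 30.0 deg
    for k1000 in range(-250, -49):       # slope candidates: -0.250 .. -0.050
        pse, k = pse10 / 10.0, k1000 / 1000.0
        err = sum((psychometric(a, pse, k) - p) ** 2
                  for a, p in zip(angles, props))
        if best is None or err < best[0]:
            best = (err, pse, k)

_, pse_hat, slope_hat = best
print(f"PSE ≈ {pse_hat:.1f}°, logistic slope ≈ {slope_hat:.3f} per degree")
```

The PSE is simply the angle at which the fitted curve crosses 0.5, i.e. the angular error at which observers judge the scene coherent only half of the time.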

After the tragedy of the 1994 genocide against the Tutsi, the Rwandan Government developed and implemented different programmes and interventions in the social protection sector in order to reduce the poverty of vulnerable populations and thus reach the Millennium Development Goals (MDGs). The different interventions carried out in the context of social security for the poor are considered means of growing the economy, integrating people into the employment market, improving human development through better access to health and education, and reducing poverty. After the context and justification of the research as well as the methods of data collection, this study focuses first on the conceptualization of resilience and related concepts. Second, it scrutinizes the impact of the three social protection programmes that were developed to reduce the poverty of their beneficiaries. The first programme is FARG, which supports the survivors of the 1994 genocide against the Tutsi in the domains of education, access to accommodation, medical care and income-generating activities. The second programme is Ubudehe-VUP-2020, which helps very poor households via direct financial transfers, job creation and access to financial services. Girinka is the third programme; it contributes to improved nutrition, subsistence means and land fertility by supplying a dairy cow to poor families. Third, this research assesses the achievements of the association of widow survivors of the genocide called Agahozo (AVEGA), as well as the successes so far reached by the widow beneficiaries. The widows consider AVEGA an important 'tutor for resilience' because it deals not only with the psychological aspects related to the tragedies they experienced, but also with the promotion and development of economic activities that can supply them with an income for their reintegration at both economic and social levels. Thus, AVEGA involves a diversity of actors in order to help the widows and implement various income-generating activities. Finally, the field research targeted the most resilient widows. These widows received support from the social protection programmes and AVEGA in order to engage in income-generating activities, among which the most developed are agriculture, animal rearing and commerce. All in all, the surveyed genocide widows commented positively on the support they received from the social protection programmes and AVEGA regarding the reinforcement of their economic resilience.

Electromechanical oscillations threaten the secure operation of power systems and, if not controlled efficiently, can lead to generator outages, line tripping and even large-scale blackouts. Different damping devices, such as Power System Stabilizers (PSSs) and Thyristor Controlled Series Compensators (TCSCs), are installed to damp these oscillations. This thesis proposes a trajectory-based supplementary control to improve the damping effect of existing controllers, which treats damping control as a multi-step optimal control problem with discrete dynamics and costs. At each control time, it collects the current system states, solves the optimal control problem and superimposes the calculated supplementary inputs on the outputs of existing damping controllers in order to enhance the damping. These supplementary signals are continuously updated, which makes it possible to adaptively adjust and coordinate a subset of the existing damping controllers, and eventually all of them. Two kinds of methods, Model Predictive Control (MPC) and Reinforcement Learning (RL), are used to embody the proposed supplementary damping control. First, a fully centralized MPC scheme is designed based on a linearized, discrete, complete state-space model. Its performance is evaluated both in ideal conditions and considering realistic state estimation errors as well as computation and communication delays. The effects of the number and type of available damping controllers are also studied. This scheme is further extended into a distributed scheme with the aim of making it more viable for very large-scale or multi-area systems. Different ways of decoupling and coordinating subsystems are analyzed. Finally, a robust hierarchical multi-area MPC scheme is proposed, introducing a second layer of MPC-based controllers at the level of individual power plants and transmission lines. Second, a tree-based batch-mode RL algorithm is applied to carry out the proposed supplementary damping control.
Using a set of four-tuples (state, action, reward, successor state) capturing the system dynamics and rewards, it constructs an approximation of the optimal $Q$-function over a given temporal horizon. The actions that are greedy with respect to this $Q$-function are applied as supplementary signals to the existing damping controllers. The scheme is first tested on a single generator, and then on multiple generators. Different reward signals and damping levels are also considered. Finally, the combined control effects of MPC and RL are investigated.
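The batch-RL loop just described (collect four-tuples, iterate a regression of bootstrapped targets, act greedily) can be sketched on a toy problem. Everything below is an illustrative assumption: the state is a discretized oscillation-energy level, the action a supplementary signal in {-1, 0, +1}, and the tree-based regressor of the thesis is replaced by a tabular average, which plays the same role on this tiny state space.

```python
import random

random.seed(0)
ACTIONS = (-1, 0, 1)

def step(state, action):
    """Toy dynamics: the right supplementary action tends to lower the
    energy level; the reward penalizes remaining oscillation energy."""
    if random.random() < 0.8:
        nxt = max(0, min(4, state - action))
    else:
        nxt = min(4, state + 1)
    return nxt, -nxt

# 1) Collect a batch of four-tuples (state, action, reward, next state).
batch = []
for _ in range(3000):
    s, a = random.randrange(5), random.choice(ACTIONS)
    s2, r = step(s, a)
    batch.append((s, a, r, s2))

# 2) Fitted Q iteration: repeatedly regress bootstrapped targets.
gamma = 0.9
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
for _ in range(50):
    targets = {}
    for s, a, r, s2 in batch:
        y = r + gamma * max(Q.get((s2, b), 0.0) for b in ACTIONS)
        targets.setdefault((s, a), []).append(y)
    Q = {sa: sum(ys) / len(ys) for sa, ys in targets.items()}

# 3) The greedy actions become the supplementary signals.
policy = {s: max(ACTIONS, key=lambda a: Q.get((s, a), 0.0)) for s in range(5)}
print(policy)
```

On a continuous power-system state space the tabular average no longer works, which is exactly why the thesis resorts to tree-based regression for the $Q$-function approximation.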

The objective of this thesis is to develop an efficient multi-scale finite element framework to capture the macroscopic localization due to the micro-buckling of cell walls and the size effect phenomena arising in structures made of cellular materials. Under compressive loading, buckling of the slender components (cell walls, cell faces) of cellular solids, so-called micro-buckling, can occur. Even if the tangent operator of the material of which the micro-structure is made is still elliptic, the presence of micro-buckling can lead to the loss of ellipticity of the resulting homogenized tangent operator. In that case, localization bands form and propagate in the macroscopic structure. Moreover, when considering a cellular structure whose dimensions are close to the cell size, the size effect phenomenon cannot be neglected, since deformations are characterized by a strain gradient. On the one hand, the classical multi-scale computational homogenization scheme (so-called first-order scheme) loses accuracy with the appearance of macroscopic localization or of the high strain gradients arising in cellular materials, because the underlying assumption of the local action principle, in which the stress state at a macroscopic material point depends only on the strain state at that point, is no longer suitable. On the other hand, the second-order multi-scale computational homogenization scheme proposed by Kouznetsova exhibits a good ability to capture such phenomena. This second-order scheme is therefore improved in this thesis with the following novelties so that it can be used for cellular materials. First, at the microscopic scale, the periodic boundary condition is used because of its efficiency. As the meshes generated from cellular materials exhibit large void parts on their boundaries and are in general non-conforming, the classical enforcement based on matching nodes cannot be applied.
A new method based on polynomial interpolation, without the requirement of matching meshes on opposite boundaries of the representative volume element (RVE), is developed. Next, in order to solve the underlying macroscopic Mindlin strain gradient continuum of this second-order scheme within a displacement-based finite element framework, the presence of higher-order terms (related to the higher-order stress and strain) leads to many complications in the numerical treatment. Indeed, the resolution requires the continuity not only of the displacement field but also of its first derivatives. This work uses the discontinuous Galerkin (DG) method to weakly impose these continuities. The proposed second-order DG-based FE2 scheme appears to be easily integrated into conventional parallel finite element codes. Finally, the proposed second-order DG-based FE2 scheme is used to model cellular materials. As instability phenomena are considered at both scales, a path-following technique is adopted to solve both the macroscopic and microscopic problems. The micro-buckling leading to macroscopic localization and the size effect phenomena can be captured within the proposed framework.

The Kou watershed, situated in the southwestern part of Burkina Faso, has suffered for a couple of decades from typically anarchic water management. With its 1,800 km², this small watershed includes the second largest city of Burkina Faso (Bobo-Dioulasso), a former state-run irrigated rice scheme and several informal agricultural zones. Despite the abundance of water resources, most water users regularly find themselves faced with shortages due to population growth and low irrigation efficiencies. Local stakeholders are hence in need of easy-to-use, low-cost decision support tools for the monitoring and exploitation of the water resources at different spatial and user levels. A top-to-bottom string of adapted water management tools has successfully been installed to tackle these problems: from the watershed (top) to the field level (bottom), not to mention the 1,200 ha irrigation scheme. Land use maps have been derived from satellite and aerial images. Combined with data from a network of hydrologic gauging stations, regional water use maps were established. Hot spots of inefficient water use could be geographically identified and more detailed actions undertaken. A Scheme Irrigation Management Information System (SIMIS) was put in place for the management of the region's irrigated rice scheme. A more equitable distribution of the ever-diminishing available water resources could be elaborated, and a public-private partnership was installed to guarantee its sustainability. Day-to-day water use on irrigated plots was monitored by soil humidity and crop canopy measurements. The simple field-crop-water balance model AquaCrop was calibrated and validated, and is used by extension workers to draft optimal irrigation charts. Each tool is applied independently, requiring only limited data, but their combined results contribute to improved integrated water management.
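At the field level, the kind of crop-water bookkeeping behind such irrigation charts can be sketched as a daily soil-water "bucket" balance. This is only a schematic sketch: AquaCrop itself is far more detailed, and every parameter value below (total available water, depletion fraction, crop evapotranspiration) is an illustrative assumption.

```python
TAW = 120.0   # total available water in the root zone (mm) - assumed
P_FRAC = 0.5  # depletion fraction before crop water stress sets in - assumed

def daily_balance(depletion, rain, irrigation, et_crop):
    """Update root-zone depletion (mm) for one day; return (depletion, stressed).
    Actual evapotranspiration is reduced linearly once the readily
    available water (P_FRAC * TAW) is exhausted."""
    raw = P_FRAC * TAW
    if depletion <= raw:
        ks = 1.0
    else:
        ks = max(0.0, (TAW - depletion) / (TAW - raw))
    et_actual = ks * et_crop
    depletion = min(TAW, max(0.0, depletion + et_actual - rain - irrigation))
    return depletion, ks < 1.0

# Ten dry days without irrigation, starting from a 40 mm depletion.
depletion, stress_days = 40.0, 0
for day in range(10):
    depletion, stressed = daily_balance(depletion, rain=0.0,
                                        irrigation=0.0, et_crop=6.0)
    stress_days += stressed
print(f"depletion {depletion:.0f} mm, {stress_days} stress days")
```

An extension worker's irrigation chart essentially answers: irrigate before the depletion crosses the readily-available-water line, so that the stress-day counter stays at zero.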

Prostate cancer is the second leading cause of cancer death in men. Although androgen ablation remains the most effective management option, most patients with advanced disease progress to castration-resistant prostate cancer (CRPC) within two years of treatment. This results, in part, from the increased expression of anti-apoptotic molecules following androgen withdrawal. Among the proteins involved in this phenomenon is clusterin, also known as testosterone-repressed prostate message-2 (TRPM-2), which exists in two forms: a pro-apoptotic nuclear form (nCLU) and a secreted survival factor (sCLU). In our study we investigated the role of the secreted form of clusterin in protecting cells from TNFα-induced apoptosis. To this end, we first generated an sCLU-inducible stable prostatic cancer MLL rat cell line using the Tet-On gene expression system. With this model we revealed a new mechanism by which sCLU promotes survival in androgen-independent prostate cancer cells, implicating its receptor megalin and the Akt survival pathway. By applying a comparative proteomic analysis to the androgen-independent epithelial cell line MLLTet-sCLU, induced to overexpress sCLU, versus non-induced control cells, we identified five proteins known to play a role in cancer. These protein candidates are the heat shock proteins Hsp90 and Hsp70, osteopontin (bone sialoprotein, OPN), proliferating cell nuclear antigen (PCNA) and ADP-ribosylation factor 1 (Arf1). Altogether, our data provide new mechanistic insight into the sCLU-dependent activation of the major survival pathway upregulated in refractory prostate cancer. The identification of these new sCLU protein targets opens new avenues for further research to elucidate the significance of clusterin in prostate cancer progression and resistance to therapy.

Forage quality, or nutritive value, is related to chemical composition, which can be determined by laboratory methods. Compared with classical methods, the NIR technique is non-destructive, non-polluting, fast and relatively inexpensive per analysis. Investigations of the nutritional quality of grasslands in the Apuseni Mountains of the Carpathians (Romania) are rarely performed with the NIR technique. Therefore, the objective of the thesis was to develop non-destructive methods for evaluating the quality of feed originating from the Gârda area of the Apuseni Mountains, and potentially of similar grasslands around the world. The first task was to study the potential of NIR spectroscopy for building a spectral database for forage quality based on a large collection of semi-natural grassland samples, using a 'local' calibration model built by the Walloon Agricultural Research Centre (CRA-W), in Belgium, to determine various parameters (e.g., protein, dry matter, ash, fibre, fat, aNDFom, ADF, lignin, digestibility, crude energy) from samples collected worldwide, outside Romania. The second task was to develop calibration models for an NIR-HSI system, which involves registering larger spectral datasets as images. Until now, analyses to determine plant species were based on the evaluation of botanical composition, including visual observation, a subjective method involving identifying plants directly in the field. Distinguishing samples of pure grassland species can be time-consuming, and it was therefore decided to build a spectral database of pure samples and then discriminate these samples in binary and ternary artificial sample mixtures. The main objective of these tasks was to identify the botanical families to which the samples belonged (Poaceae, Fabaceae and Other Botanical Families [OBF]). The focus was not on quantity monitoring, but rather on determining forage quality from stationary experiments in the grasslands.
To conclude, this research has shown that it is possible to develop calibration models not only for quality assessment, but also for sample discrimination in dry powder samples. It is intended that the mathematical models constructed and the database obtained will be used in future research.
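The family-discrimination step can be pictured as follows: each sample yields a spectrum, each botanical family has a characteristic mean profile, and a new spectrum is assigned to the nearest family. The sketch below is purely illustrative: the three-band "spectra", the profiles and the nearest-centroid rule are assumptions, whereas the thesis relies on much richer chemometric models (e.g. discriminant calibrations over full NIR spectra).

```python
import random

random.seed(1)

# Hypothetical mean "spectral" profiles for the three groups of interest.
PROFILES = {"Poaceae": [0.2, 0.8, 0.4],
            "Fabaceae": [0.7, 0.3, 0.5],
            "OBF": [0.5, 0.5, 0.9]}

def simulate(family):
    """Draw a noisy spectrum around the family's mean profile."""
    return [v + random.gauss(0.0, 0.05) for v in PROFILES[family]]

def classify(spectrum, centroids):
    """Assign the spectrum to the family with the closest centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(spectrum, c))
    return min(centroids, key=lambda f: dist(centroids[f]))

# Build centroids from a training set, then score fresh samples.
train = {f: [simulate(f) for _ in range(20)] for f in PROFILES}
centroids = {f: [sum(col) / len(col) for col in zip(*samples)]
             for f, samples in train.items()}
hits = sum(classify(simulate(f), centroids) == f
           for f in PROFILES for _ in range(30))
print(f"accuracy: {hits / 90:.2f}")
```

In practice the discriminating information is spread over hundreds of correlated wavelengths, which is why dimension-reducing methods are preferred over raw nearest-centroid distances.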

This thesis addresses the biogeochemical cycles of the Black Sea (BS) during the shifting environmental context that affected the BS over the last decades of the 20th century. The study is based on sophisticated data analysis tools and on the development and implementation of a coupled 3D biogeochemical model of the BS domain. The long-term variability of the BS hydrodynamical structure was first examined on the basis of in-situ profiles (1950-2012), satellite imagery (1985-2000) and 3D modelling (1960-2000). Profiles of temperature and salinity were used to derive vertical characteristics of the BS structure: the mixed layer depth and the cold content of the Cold Intermediate Layer. To untangle the spatial and temporal trends in this heterogeneous dataset, a general methodology was proposed and embedded in the data analysis software DIVA. The detrended climatologies and long-term time series provided by this approach were used to assess statistical relationships with local atmospheric conditions. Satellite data (sea surface temperature and altimetry) and model results were then analyzed to relate observable surface dynamics to internal hydrodynamic properties. The main multivariate modes of variability of the BS hydrodynamic structure were highlighted on the basis of Empirical Orthogonal Function analysis. Their temporal evolution was explained by the occurrence of specific atmospheric patterns, identified by neural algorithm analysis and related to the phases of well-known teleconnection systems (i.e. the North Atlantic and East Atlantic/West Russia oscillations). To study the dynamics of eutrophication on the shallow Black Sea North-Western Shelf (BS-NWS), a benthic model component was developed that considers the environmental control of diagenetic processes and the bottom shear stress restriction on organic matter deposition.
The model accurately reproduced the seasonal and spatial variability depicted by in-situ estimates of benthic nutrient and oxygen fluxes in the BS-NWS. Its outputs were used to review the role of the benthic component in BS biogeochemical cycles. The multi-decadal simulations, enabled by the low computational requirements of the benthic-pelagic coupling approach, revealed an inertial component in the dynamics of eutrophication resulting from the accumulation of organic matter during the years of high nutrient loads. This refined resolution of the BS-NWS biogeochemistry allowed us to study the phenomenon of seasonal hypoxia, which is believed to have played a part in the sudden collapse of the fishery stocks in the late 1980s. An index H, combining the spatial and temporal extent of the seasonal hypoxic event, was proposed to quantify the annual intensity of hypoxia as a pressure on benthic communities. We have shown that hypoxia was first triggered in the late 1970s by high nitrogen loads, and then sustained by sedimentary organic matter accumulation after a rapid reduction of these loads in the 1990s. After 2000, warmer summers again led to an increase of the H-index, by entraining hypoxic events of smaller spatial extent but longer duration. A practical relationship distinguishing the impacts of eutrophication and climatic drivers was proposed to assess the effect of their projected values on the future intensity of hypoxia.
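One way to combine spatial and temporal extent into a single annual number, in the spirit of the H-index above, is to integrate the daily hypoxic area over the season (km² × days). The sketch below is only schematic: the threshold, grid-cell area and synthetic oxygen "maps" are assumptions, and the exact formulation used in the thesis may differ.

```python
O2_THRESHOLD = 62.0  # hypoxia threshold, mmol O2 / m^3 (common convention)
CELL_AREA = 25.0     # assumed model grid-cell area, km^2

def daily_hypoxic_area(bottom_o2_fields):
    """Yield, for each daily bottom-oxygen field, the area below threshold."""
    for field in bottom_o2_fields:          # one flattened "map" per day
        yield CELL_AREA * sum(o2 < O2_THRESHOLD for o2 in field)

# Two synthetic summers: identical peak spatial extent (one hypoxic cell),
# but the second event lasts four times longer.
short_event = [[70, 70, 50, 70]] * 10 + [[70, 70, 70, 70]] * 80
long_event = [[70, 70, 50, 70]] * 40 + [[70, 70, 70, 70]] * 50

H_short = sum(daily_hypoxic_area(short_event))  # km^2 * days
H_long = sum(daily_hypoxic_area(long_event))
print(H_short, H_long)
```

Such an integrated index is deliberately sensitive to the post-2000 regime described above: events of smaller area but longer duration can still raise H.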

The present work aims at improving practices in the management of wastewater from the flotation of ores in the Katanga province and suggests recycling, considering its advantages from the standpoints of environmental safeguarding, sustainable management of water resources, and economy of flotation reagents. It focuses on the determination of the best process-water recycling rate in the flotation of copper-cobalt oxidised ores from the Luiswishi deposit and on the explanation of the phenomena implicated in the depression of malachite and heterogenite in the presence of recycled water. The studied ores were sulphidised (NaSH) prior to flotation with KAX, using process water recovered from the industrial effluents and a lab-scale replication of the New Concentrator in Kipushi (NCK) flow sheet to simulate the full-size plant operations. The following methodological approach has been adopted:
• Lab flotation tests of the pulps originating from the NCK grinding circuit, varying the proportion of recycled process water added to the feed water, in order to determine the proportion that gives a concentrate grading at least 2% Co at a recovery of 80% and at least 7% Co at a recovery of 60%, at the rougher and cleaner stages respectively;
• The study of the effects of the chemical components of the recycled water on the flotation of malachite and heterogenite, through flotation of the studied ores in the presence of S₂O₃²⁻, SO₄²⁻, HCO₃⁻, Ca²⁺ and Mg²⁺ introduced into the feed water (demineralised water) through dissolution of their analytical-grade salts, based on the follow-up of the Cu-Co recovery and on the mineralogical analysis of the rougher flotation concentrates by polarised light microscopy, X-ray diffraction and scanning electron microscopy;
• The study of the behaviour of malachite and heterogenite based, first, on electrochemical investigations of the pulp (pH, Eh, Es and DO), leaching tests and the sulphidisation of malachite and heterogenite with NaSH in the presence of S₂O₃²⁻, SO₄²⁻ and HCO₃⁻; and, second, on thermodynamic calculations for the establishment of the Pourbaix diagrams of the systems Cu(Co-Cu) – chemical species – water at 25 °C, and on DRIFT spectroscopic analysis (4000 to 400 cm⁻¹) of malachite after sulphidisation with NaSH and agitation with KAX in the presence of S₂O₃²⁻, SO₄²⁻ and HCO₃⁻.
The obtained results have shown that process-water recycling is successful when 20% of recycled water is added to the feed water, since one obtains a concentrate grading 2% Co at a recovery of 80% at the rougher stage. However, considering the significant drops in the grade and recovery of cobalt in the concentrate observed at the cleaner stage, a proportion of 10% has been suggested as optimal for the overall flotation circuit, because 82% of the cobalt was recovered at the rougher stage, yielding at the cleaner stage a concentrate grading 9.5% Co at a recovery of 63%. Beyond 10%, process-water recycling proved detrimental to flotation efficiency owing to the build-up of chemical species (S₂O₃²⁻, SO₄²⁻, HCO₃⁻, Ca²⁺, Mg²⁺ and Cl⁻) in the feed water, which becomes corrosive and scale-forming, leading to the depression of malachite and heterogenite. This depression results from an increase in the hydrophilicity of the valuable minerals, promoted by their strong dissolution in water in the presence of S₂O₃²⁻, SO₄²⁻ and HCO₃⁻, leading to alterations in their surface properties and to an excessive release of copper and cobalt ions into solution, responsible for the overconsumption of NaSH and KAX.

Summary: Cananga odorata (Lam.) Hook. f. & Thomson forma genuina (Annonaceae), commonly called ylang-ylang, is an essential oil tree used in the preparation of cosmetics. Although this plant is an indispensable source of income for the producing islands, and although its essential oil is valued by the cosmetic industry, only few studies have been conducted on it. The ylang-ylang essential oil sector is declining, and conservation and valorization strategies are urgently needed to preserve it. During this thesis, we tried to gather the data necessary to achieve this goal. We thus characterized the variability of ylang-ylang at the morphological, genetic and chemical levels. This study revealed an important variability in the chemical composition of ylang-ylang essential oil in the study area. In order to suggest valorization and conservation strategies tailored to the current context, we needed to know the causes of this chemical polymorphism. These causes probably lie in environmental variability; indeed, the genetic structure revealed little consistency with the chemical polymorphism. It would nevertheless be wise to further investigate these two potential causes of the chemical variability of ylang-ylang, as our study is pioneering in this field. On the basis of the data collected, we then proposed a market-oriented valorization of the genetic and chemical diversity (uniformity of the quality and composition of ylang-ylang essential oils, raw material sources and quality labels). In terms of genetic and chemical conservation, we propose a strategy mainly based on on-farm conservation in plantations. We also propose integrated valorization strategies for on-farm conservation (direct sales and tourism).

The light weight of porous nanocomposites makes them attractive materials for various applications such as thermal and sound barriers, shock absorbers, insulation and packaging, and their porous structure is very interesting for bone tissue engineering. Moreover, the incorporation of appropriate carbonaceous nanoparticles into polymeric foams not only reinforces their mechanical performance but also renders them electrically conductive, consequently extending their potential interest to electromagnetic interference (EMI) shielding and electrostatic discharge (ESD) applications, for instance. In this PhD thesis, we aim at designing various polymeric foams containing a conductive nanofiller (carbon nanotubes) and at identifying the main morphological parameters (pore size, cell density, cell wall thickness, etc.) that affect and govern the final properties of the foams. The electrical conductivity of the foams is the main property investigated, because it governs their performance as EMI absorbers, the main application targeted in this work. These morphology/electrical conductivity relationships would indeed be very useful to guide foam development towards the material with the best performance for the targeted applications. Two different foaming methods are used in this work: (i) the supercritical CO2 (scCO2) foaming technology and (ii) the freeze-drying process. The first technique produces isotropic foams with spherical closed-cell structures, and the second one, oriented anisotropic foams with cylindrical open cells. The variation of the foaming parameters allows the preparation of foams with the large panel of morphologies required for the establishment of the structure/properties relationships. In parallel to this main objective, an improvement of the overall conductive performance of the nanocomposite foams is also investigated through the optimization of the foam morphology and of the conductive nanofiller content.

ABSTRACT : Our research examines the sustainability of an acculturation heuristic discourse to Linear Algebra, prototypical gradual development of this discipline in a dialectical process with elementary geometry. Our theoretical frameworks are borrowed from mathematics Didactic : mainly, Theory of Didactic Situations by Brousseau and Anthropological Theory of Didactics by Chevallard. These theories have enabled us to problematize the studied issue and make it a teaching "phenomenon" by giving intelligibility to the observations in a way that they are interpreted in a falsifiable hypothesis ( in Popper's sense ) which may integrate them all . In this context, the strict meaning given by Lakatos to heuristic speech has been broadly extended in terms of fundamental situation and praxeologies “modeling” according to Job and Schneider. A rational reading of historical and epistemological development of linear algebra then allowed us to build a "reference epistemological model" not only allowing us to answer to the question: "what is it to do linear algebra ?", but also to legitimize the referent heuristic discourse that served us as “phénoménotechnique”. Considering that our study is well marked, we have experimented in "secondary" and "university" institutions a heuristic device based on some selected aspects of the referent heuristic discourse. The analysis of observable that empirical work has identified has led us to identify several factors that combine to determine the ecological fragility of this speech and are at different levels of scale didactic codetermination (by Chevallard) : - The institutional variation of the curriculum of secondary education part in relation to linear systems. 
This includes an emphasis on a contract of solving systems at the expense of discussing them, with a focus placed on the substitution method; little technological discourse on the principles of equivalence of systems and no intelligibility in terms of pencils of lines; little work on equations as constraints, with an impact on the lack of connection between the solution space of a system and the number of "independent" constraints; and, finally, no ecological niche for the study of systems, due to the fragmentation of the knowledge involved across multiple chapters. - The constraints inherent in the process of didactic transposition, in terms of depersonalization and "désyncrétisation" of the knowledge, but also in terms of progress in teaching time, or chronogenesis. - Constraints related to teaching: teachers' accommodation to a socio-constructivist paradigm that amounts to disguised exposition (ostension) and to some ineffective activities; a university pedagogy marked by "applicationism", protectionism and a timing organization that puts theory on a pedestal where it is no longer in question. - An allegiance to the "mathematical" institution that is manifested, for instance, by some emblematic gestures linked to mathematical rigor which have little functionality. - Finally, an influence of the work of mathematicians on the didactic transposition, dating from the era of modern mathematics, which still inspires didactic practice today and impedes the construction of linear algebra in dialectic with geometry. The scenario selected as the interpretive hypothesis articulating these factors calls into question, in turn, our entire education system, which draws too heavily on deductive theories presented in their finished, textual form.

During the last decade, extensive investigations have been performed to achieve optical phase retarders with a space-variant orientation of their fast axis. These retarders present unique behaviors and can be used for several applications such as polarization analysis, beam splitting, phase-mask coronagraphy and optical tweezers. The present thesis is dedicated to the development of space-variant retarders made of liquid crystal polymers recorded by polarization holography. The liquid crystals define the fast-axis orientation of the retarder. Polarization holography is based on the superposition of differently polarized beams to achieve the electric field required to properly align the liquid crystals without mechanical action. In the present work, we start with an introduction to space-variant retarders, their characteristics and current recording methods, which usually require mechanical action. In the second chapter, polarization holography and several simple examples are presented, as well as the liquid crystal polymers, their generic recording process and the first prototypes. Chapter three exposes the first application that we developed: a polarization analysis method based on a retarder characterized by a one-dimensional variation of its fast-axis orientation. The principle of the method, numerical simulations and the first results are presented. Chapter four addresses the second application, a polarization-state separator. The mathematical model and its application to shearography are presented. In chapter five, another kind of retarder is introduced, characterized by a rotation of its fast axis around the center of the retarder. Their properties, the recording systems and the first prototypes are detailed and analyzed.
In the last chapter, the application of our retarders to coronagraphy is presented and their performance is computed for different configurations based on experimental constraints. Finally, we conclude with possible improvements and future uses of these retarders.
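The principle exploited by such retarders can be seen in textbook Jones calculus: a half-wave retarder whose fast axis sits at angle theta rotates linear polarization by 2*theta, so a spatially varying theta shapes the beam's polarization point by point. The sketch below is standard optics, not code from the thesis.

```python
import numpy as np

def half_wave_retarder(theta):
    """Jones matrix (up to a global phase) of a half-wave plate with fast axis at theta (rad)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

horizontal = np.array([1.0, 0.0])              # horizontally polarized input
out = half_wave_retarder(np.pi / 8) @ horizontal
# Output is linear polarization rotated by 2*theta = 45 degrees.
print(out)  # ~ [0.7071, 0.7071]
```

Evaluating `half_wave_retarder` with a theta that depends on position across the aperture gives a simple numerical model of a space-variant retarder.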

Livestock is an important source of income in most developing countries. In Africa, it often makes up 10% to 20% of the gross national product. One of the major constraints to the development of this sector is animal disease, which sometimes generates significant economic losses with social consequences that are often very burdensome for farmers; animal diseases also restrict trade between countries. Reducing the impact of these constraints necessarily involves the prevention and control of diseases. For this purpose, adequate knowledge of the epidemiology of the diseases is a prerequisite to defining a strategy for their prevention and/or designing appropriate monitoring measures. One of the essential tools remains the epidemiological information network for the surveillance of animal diseases. It is also a decision-support tool in international trade involving livestock products, and its effectiveness is therefore a guarantee of its credibility. An effective network must be well organized, meet scientific standards and satisfy the efficiency criteria of sensitivity, specificity, acceptability, responsiveness and cost. In West and Central Africa, epidemiological surveillance networks for animal diseases were mostly created in the 1990s through the Pan African Program for the Control of Epizootics. A 2004 assessment carried out by this program revealed that these networks were at different stages of development and had weaknesses in their efficiency. To contribute to the improvement of these systems, an analysis of some performance parameters of these networks was conducted and suggestions for improvement were made.
To achieve this overall objective, the following specific objectives were defined: (i) analyze the technical and functional organization of epidemiological surveillance networks in West and Central Africa; (ii) compare the effectiveness of active and passive surveillance, the two main monitoring methods used by the networks in West and Central Africa, using the case of the epidemiological surveillance network in Chad, REPIMAT; (iii) assess the sensitivity of an epidemiological surveillance network through an approach based on the prevalence of a disease, here Foot and Mouth Disease (FMD); (iv) develop performance indicators for the regular monitoring of epidemiological surveillance networks for animal diseases in West and Central Africa, again taking the case of REPIMAT; and finally, (v) estimate the cost incurred by an epidemiological surveillance network in West and Central Africa, with REPIMAT as an example. Each of these specific objectives led to a specific study, the results of which are presented below. Organization of epidemiological surveillance networks in West and Central Africa. The survey on the technical and institutional organization of networks in West and Central Africa was based on a written questionnaire. It involved nine networks, of which five were in West Africa (Senegal, Burkina Faso, Ivory Coast, Togo, Guinea) and four in Central Africa (Cameroon, Central African Republic, Democratic Republic of Congo and Chad). The results of this survey showed that the oldest epidemiological surveillance network is that of Chad, REPIMAT. There are more similarities than differences between these networks. In general, the animal disease surveillance networks in West and Central Africa are technically and institutionally well formalized. The establishment and operation of the networks surveyed are mainly financed by foreign aid, and most networks monitor several diseases.
All countries surveyed have a central national laboratory for the analysis of the samples collected. However, only four countries (Cameroon, Côte d'Ivoire, Guinea and Senegal) can diagnose all the diseases selected for monitoring. The laboratories are considered partners of the surveillance networks in most of the countries surveyed. All networks use PID/ARIS for data management; in addition, countries such as Guinea, DRC, Senegal and Chad use a national database developed with Access®. On average, 26% of the veterinary stations of the networks surveyed are involved in monitoring, a proportion that varies from 7% to 91%. However, the insufficient diagnostic capacity of the laboratories and inadequately operational steering committees are the two main weaknesses of the networks concerned by this survey. Comparison between active and passive surveillance within the network of epidemiological surveillance of animal diseases in Chad. The comparison between active and passive surveillance involved 106 REPIMAT surveillance stations randomly divided into 52 active surveillance stations and 54 passive surveillance stations. The vaccination status of nine diseases and their respective prevalence levels are monitored by the network. A work plan was developed for each station. The active surveillance stations make monthly visits to four herds (villages) to look for the monitored diseases and also organise four information meetings with farmers on how to react in case of suspicion of the monitored diseases. The passive surveillance stations organise only the four monthly information meetings with farmers. Suspicions in each station are recorded on a specific form developed for each disease; the agent records whether the suspicion followed a farmer's call, a herd visit or a sensitization meeting. Monitoring lasted 24 months.
The results of this study showed that, regardless of the type of surveillance, the monitored diseases, with the exception of rare diseases (Rinderpest and Rift Valley fever), are reported by the monitoring agents. However, the proportion of suspicions recorded following farmers' calls (41%) was significantly higher (p < 0.05) than that of suspicions made during herd visits (30%) or during meetings (29%). For moderately prevalent diseases, suspicions arise mainly from farmers' calls (77%), regardless of the type of monitoring (73% for active surveillance and 84% for passive surveillance). On the other hand, for FMD, a disease with high prevalence, 37% of suspicions are recorded during farm visits. Overall, no significant difference was observed between the types of surveillance, because of a low rate of disease detection during the sensitization meetings of the active surveillance stations. Passive surveillance stimulated by awareness meetings thus appears to be a suitable and cheaper surveillance mode under the conditions of Chad. However, for rare diseases, the specific methods of active surveillance (such as, for example, sentinel flocks) seem preferable. Evaluation of the sensitivity of the animal disease epidemiological surveillance network for Foot and Mouth Disease in Chad. The study on network sensitivity was carried out in REPIMAT by taking the surveillance of FMD as an example. FMD is the disease most frequently suspected by REPIMAT; however, the reporting of cases is limited to clinical suspicion, and samples for laboratory confirmation of these suspicions are not taken. In order to assess the sensitivity of REPIMAT for this disease, a serological survey was conducted in eight of the nine regional delegations with the highest cattle populations of the country.
The samples were analyzed by the National Reference Laboratory for FMD in Brescia (Italy) with the support of the European Commission action against FMD. The 3ABC and SP-ELISA tests were used for antibody detection and virus serotyping. The number of FMD suspicions reported within the network was compared with the seroprevalence, and epidemiological information on the disease, including the serotypes circulating in Chad, was also provided. A total of 796 cattle sera were collected. The seroprevalence at the individual level was 35.6% (95% CI: 32.2 to 39.0) and at the herd level 61.9% (95% CI: 51.9 to 71.2). A strong correlation was observed between the estimated prevalence and the number of clinical suspicions reported within REPIMAT. The disease is present in all the regional livestock delegations surveyed, with a high prevalence in the delegations located in the south, the wettest area, where cross-border movements are most important. Serotypes A, O, SAT1 and SAT2 were identified. Development of operating performance indicators for Chad's epidemiological surveillance network for animal diseases, REPIMAT. Maintaining an effective disease monitoring system requires regular evaluation to identify, in a timely manner, deficiencies that may occur. For this purpose, performance indicators are essential tools. An approach for developing performance indicators, together with their application to the operation of 43 REPIMAT monitoring stations, was carried out. An analysis of the objectives and operating mechanism of REPIMAT led to the retention of three main components, namely the field workers, the animation cell and the laboratory. The activities of each of these components were listed, and the analysis of their outcomes resulted in the development of performance indicators that can be used in the operation of REPIMAT. The application of these indicators highlighted the weaknesses of each component.
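The reported confidence intervals can be sanity-checked with the standard normal-approximation (Wald) interval for a proportion; the thesis does not state which CI method was actually used, so this is only a plausibility check against the quoted figures (796 sera, 35.6% seropositive).

```python
import math

def wald_ci(p, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion p from n samples."""
    se = math.sqrt(p * (1 - p) / n)  # standard error of the estimated proportion
    return p - z * se, p + z * se

lo, hi = wald_ci(0.356, 796)
print(f"{lo:.3f} - {hi:.3f}")  # -> 0.323 - 0.389, close to the reported 32.2-39.0%
```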
Estimated cost of a network for animal disease epidemiological surveillance in Central Africa: the case of the Chad network. In sub-Saharan Africa, most networks for the epidemiological surveillance of animal diseases were temporarily financed by external aid; the sustainability of such decision-support tools should then have been ensured by national public funds. The objective of this study was to estimate the costs involved in running an animal disease epidemiological surveillance network, taking the example of the Chadian network (REPIMAT), and its weight in the state budget. These costs were then compared with those of other epidemiological surveillance networks in West Africa. The results showed that the total annual implementation and operating cost of REPIMAT is estimated at €666,349 (437,096,291 FCFA) for the entire system, comprising 106 monitoring stations constituting the local level, 26 livestock sectors and nine regional livestock delegations representing the intermediate level, and an animation cell constituting the central level. This cost represents only 3% (2% in fixed costs and 1% in variable costs) of the budget allocated to the Chadian Ministry of Livestock. Fixed costs (72%) weighed more than variable costs (28%) regardless of the level of intervention. This estimate is similar to the estimated costs of the epidemiological surveillance networks of Benin, Ghana, Mauritania and Senegal. Considering only the variable (operating) costs, the annual cost of running a surveillance station, the most important entity in the system, was only €932 (611,352 FCFA). At the local level (surveillance stations) and intermediate level (livestock sectors and regional livestock delegations), the surveillance cost is mainly driven by health surveillance activities and the equipment they require.
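The paired euro/FCFA amounts above are internally consistent with the fixed CFA franc peg of 655.957 FCFA per euro, as the quick check below shows (an arithmetic verification, not code from the thesis).

```python
PEG = 655.957  # fixed EUR -> FCFA conversion rate of the CFA franc

total_eur, total_fcfa = 666_349, 437_096_291
station_eur, station_fcfa = 932, 611_352

print(round(total_eur * PEG) == total_fcfa)      # True
print(round(station_eur * PEG) == station_fcfa)  # True
```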
This thesis analyzed several effectiveness parameters of an animal disease surveillance network, including general organization, type of surveillance, sensitivity and cost, and developed a tool for the continuous monitoring of a network's operation. It is difficult to meet all the efficiency criteria of an animal disease surveillance network; however, the interrelated parameters studied here, if used properly, will help improve the efficiency of epidemiological surveillance systems for animal diseases in sub-Saharan Africa.

Mitral valve dysfunction is a relatively common heart disease which typically requires mechanical valve replacement, with consequent high social and economic costs. More specifically, ischemic mitral insufficiency following myocardial infarction has a dynamic behavior that can lead to failure of detection in certain patients, creating a situation with increased risk of morbidity and mortality. Improving the tracking and control of valvular pathologies is therefore crucial, as it offers significant opportunities to improve care, costs and prognosis for patients with this disease. To study heart and cardiac valve dysfunction, cardiologists need detailed information about pressure and flow dynamics around and through the valves, atria and ventricles. However, non-invasive information about pressure is currently limited to indices at specific times, and invasive catheterization data, which is more traumatic for the patient, is not routinely available. One alternative is mathematical modeling of the cardiovascular system, which offers a non-invasive and inexpensive way of studying cardiac and circulatory dynamics. This is particularly beneficial where detailed, continuous measurements may not be practicable. This study consisted of the development of a multi-scale closed-loop model of the cardiovascular system that accounts for the progressive mitral valve aperture area over the entire cardiac cycle. This multi-scale model, which includes detailed mitral valve and left atrium models, was tested over a range of physiological situations and clinical data. The goal was to validate the model's ability to reproduce clinically measured physiological and pathophysiological behavior in a manner that would enable the model to be made patient-specific using available data. The resulting model was designed to be made patient-specific, and thus to capture and reproduce the patient's unique hemodynamic state on both global and local scales.
In particular, it was shown to provide significant information about the patient’s mitral valve dynamics and the detailed flow dynamics and pressure around it. These data are not currently available without extensive, invasive measurements, and this therefore represents a significant step forward in model-based sensing and diagnosis. It is hoped that the model and methods developed in this study will be a powerful tool in assisting medical teams in investigating, tracking, diagnosing and controlling the cardiovascular system. More specifically, the mitral valve, as well as other similar valves, could be directly monitored to improve the diagnosis, costs and prognosis of valvular dysfunction. Furthermore, the overall results justify detailed in vivo animal experiments to thoroughly validate these models and methods in advance of clinical trials.
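The lumped-parameter style of cardiovascular modeling described above can be illustrated by its simplest member, the two-element Windkessel: C dP/dt = Q_in(t) - P/R. This is a generic textbook sketch, not the thesis's multi-scale closed-loop model, and the resistance, compliance and inflow profile below are illustrative values.

```python
import math

R, C = 1.0, 1.5            # peripheral resistance (mmHg*s/mL), arterial compliance (mL/mmHg)
T, dt = 0.8, 1e-4          # cardiac period (s), integration time step (s)

def q_in(t):
    """Pulsatile inflow: a half-sine ejection during the first third of each cycle."""
    phase = t % T
    return 400.0 * math.sin(math.pi * phase / (T / 3)) if phase < T / 3 else 0.0

P = 80.0                   # initial arterial pressure (mmHg)
for step in range(int(10 * T / dt)):          # integrate 10 beats with forward Euler
    t = step * dt
    P += dt * (q_in(t) - P / R) / C           # C dP/dt = Q_in - P/R

print(f"pressure after 10 beats: {P:.1f} mmHg")
```

Fitting R and C to a patient's measured pressures is the lumped-model analogue of the patient-specific tuning pursued in the thesis.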

Experimentally determining the three-dimensional structure of a protein is a slow and expensive process. Nowadays, supervised machine learning techniques are widely used to predict protein structures, and in particular to predict surrogate annotations, which are much less complex than 3D structures. This dissertation presents, on the one hand, methodological contributions for learning multiple tasks simultaneously and for selecting relevant feature representations, and on the other hand, biological contributions arising from the application of these techniques to several protein annotation problems. Our first methodological contribution introduces a multi-task formulation for learning various protein structural annotation tasks. Unlike the traditional methods proposed in the bioinformatics literature, which mostly treat these tasks independently, our framework exploits the natural idea that multiple related prediction tasks should be learned simultaneously. Our empirical experiments on a set of five sequence labeling tasks clearly highlight the benefit of our multi-task approach over single-task approaches in terms of correctly predicted labels. Our second methodological contribution focuses on the best way to identify a minimal subset of feature functions, i.e., functions that encode properties of complex objects, such as sequences or graphs, into forms appropriate for learning algorithms (typically, vectors of features). Our empirical experiments on disulfide connectivity pattern prediction and disordered region prediction show that carefully selected feature functions combined with ensembles of extremely randomized trees lead to very accurate models. Our biological contributions mainly derive from the results obtained by applying our feature function selection algorithm to the problems of predicting disulfide connectivity patterns and disordered regions.
In both cases, our approach identified a relevant representation of the data that should play a role in the prediction of disulfide bonds (respectively, disordered regions) and, consequently, in protein structure-function relationships. For example, the major biological contribution made by our method is the discovery of a novel feature function which, to the best of our knowledge, had never been highlighted in the context of predicting disordered regions. These representations were carefully assessed against several baselines, such as those from the 10th Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition.
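A "feature function" in the sense used above can be sketched as a map from a complex object (here, a protein sequence window) to a feature vector usable by a learning algorithm. The encoding below, amino-acid composition, is a hypothetical illustration, not one of the representations actually selected in the thesis.

```python
# Illustrative feature function: encode a protein sequence window as the
# fraction of each of the 20 standard amino acids it contains.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_features(window):
    """Amino-acid composition of a sequence window as a 20-dimensional vector."""
    counts = [window.count(aa) for aa in AMINO_ACIDS]
    total = len(window)
    return [c / total for c in counts]

vec = composition_features("ACCGHKL")
print(len(vec))                      # 20 features, one per amino acid
print(vec[AMINO_ACIDS.index("C")])   # fraction of cysteines: 2/7
```

Feature-function selection then amounts to choosing which such encodings (composition, windows, structural profiles, ...) to feed the tree ensembles.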

Among the neurodegenerative amyloidoses, ten disorders, referred to as polyglutamine (polyQ) diseases and including Huntington's disease and several spinocerebellar ataxias, are associated with ten proteins within which a polyQ tract is expanded above a threshold of typically 35-45 glutamine residues. Such expanded polyQ tracts lead to the aggregation of the host protein into amyloid fibrils that accumulate in the nucleus of some populations of neurons; these aggregates or some of their precursors are thought to contribute to neuronal death. So far, no preventive or curative treatment exists for these devastating pathologies. While the expansion of the polyQ tract above the threshold is the determinant factor for aggregation, recent studies suggest that non-polyQ regions of these proteins can play a significant role, either preventative or facilitative, in the aggregation process. The general principles governing the complex interplay between the role of the expanded polyQ tract and the role of the non-polyQ regions in the aggregation process are not well understood yet. In order to develop therapeutic strategies, it is important to better understand this complex interplay. To contribute to this aim, we have engineered chimeric proteins via the insertion of polyQ repeats of various lengths (23, 30, 55 and 79Q) into two sites (197 and 216) of the BlaP beta-lactamase from Bacillus licheniformis 749/C. The properties of these chimeric proteins recapitulate the characteristic features of the disease-associated polyQ proteins, i.e. (i) there is a minimum number of inserted glutamines (threshold) required to trigger the aggregation of the chimeras into amyloid fibrils, and (ii) above the threshold, the longer the polyQ tract, the faster the aggregation. Interestingly, for the same polyQ length, the chimeras with insertions in position 216 have an increased propensity to form amyloid fibrils compared to their counterparts with insertions in position 197. 
These findings highlight the strong influence of the overall protein context on aggregation triggered by expanded polyQ tracts. This thesis addresses the use of the variable domains of camelid heavy-chain antibodies, referred to as nanobodies or VHHs, as structural and mechanistic probes to better understand the different aggregation properties of the two sets of BlaP-polyQ chimeras (197 and 216). We also performed limited proteolysis experiments and transglutaminase-mediated reactions on the monomeric form of the BlaP-polyQ chimeras to further investigate the effects of the polyQ insertions on the structure and dynamics of the BlaP moiety, as well as on the structure of the polyQ tract itself. From the blood of a llama immunised with BlaP197(Gln)55, we isolated more than 60 VHHs specific to the BlaP-polyQ chimeras. Twenty-eight of them were produced, purified and characterised. These VHHs were all found to be specific to the BlaP moiety and could be classified into four different groups recognising distinct epitopes on the surface of BlaP. One representative VHH of each group (i.e. cAb-A3S, cAb-H7S, cAb-F11N and cAb-G10S) was selected as a probe to investigate the aggregation mechanism of the BlaP-polyQ chimeras. The epitopes of three of them were determined by X-ray diffraction and/or NMR spectroscopy. Although they recognise distinct epitopes and exhibit different affinities for BlaP, the binding of each of the four VHHs significantly slows down the aggregation of all the BlaP-polyQ chimeras investigated (i.e. BlaP197(Gln)55, BlaP197(Gln)79 and BlaP216(Gln)79). The extent of inhibition depends, however, on the chimera and on the experimental conditions. We show that the inhibition of the aggregation of BlaP197(Gln)55 and BlaP197(Gln)79 upon binding of the four VHHs is correlated with the stabilisation of their native state.
In the case of BlaP216(Gln)79, the extent of inhibition could not be correlated solely with the stabilisation of its native state; the location of the VHH epitope is also determinant. This observation demonstrates that the lower thermodynamic stability of BlaP216(Gln)79 is not the only factor responsible for its increased aggregation propensity. It also further highlights the complexity of the aggregation mechanism of polyQ proteins and the strong influence of the non-polyQ regions on the amyloid fibril formation triggered by the expanded polyQ tract. Altogether, our results suggest that antibodies or antibody fragments raised against the non-polyQ regions of disease-associated polyQ proteins could constitute a relevant therapeutic strategy. They also further demonstrate the power of nanobodies as probes to gain deeper knowledge of the underlying mechanisms of amyloid fibril formation. The preliminary limited proteolysis and transglutamination experiments suggest that the polyQ tracts are all flexible, except that of 23 glutamines inserted in position 197 of BlaP, which seems to be more rigid than the others. The results obtained confirm that, globally, the structure of BlaP is not significantly modified by the insertions, while the 216 chimeras seem more dynamic than the 197 chimeras.

Sinonasal aspergillosis (SNA) is a severe fungal rhinosinusitis mainly affecting middle-aged, large-breed dogs. Its most common causal agent is A. fumigatus, a fungus that is widespread in the atmosphere. As of today, the diagnosis and treatment of this disease remain a challenge for the practicing veterinarian. Very little data is available to explain why such a ubiquitous fungus induces a severe rhinosinusitis in otherwise healthy dogs, while other dogs do not present any sign of fungal infection. The authors of a study analysing the expression of mRNA encoding certain cytokines and chemokines in the nasal mucosa of SNA-affected dogs proposed the hypothesis that dogs develop a protective immunity (Th1) against A. fumigatus, but that it could be blocked by an excessively intense regulatory immunity (massive production of IL-10). Indeed, it is commonly described, in humans affected by invasive aspergillosis as well as in mouse models, that the production of immunoregulatory cytokines (IL-10) should be considered a sign of disease progression and absence of remission. The objective of this thesis was to investigate the adaptive immune response of SNA-affected dogs, based on the assumption that sick subjects develop a protective immunity antagonised by a disproportionate regulatory immunity. Three axes of analysis were considered. The first examined differences in the expression and/or production of cytokines and transcription factors prototypical of the different adaptive and regulatory immune pathways, Th1 (IFN-γ and Tbet), Th2 (IL-4 and GATA3), Th17 (IL-17A and RORc) and Treg (IL-10 and FoxP3), in PBMC of affected and healthy dogs after A. fumigatus stimulation. Secondly, a microarray gene expression analysis was carried out on nasal mucosa biopsies of affected and healthy dogs. Thirdly, the promoter region of the gene encoding IL-10 in dogs was analysed by sequencing.
This study was conducted in three cohorts of dogs (Rottweiler, Labrador and Golden Retriever), each containing affected and healthy dogs. The objective was to investigate, as has been done in human medicine, the possibility of a genetic variation as a factor predisposing to SNA development. The results of the first study revealed that: (1) the PBMC of half of the control dogs and of every affected dog showed a marked overexpression of IFN-γ, and this increase was significantly greater in the PBMC of affected dogs; the analysis of IFN-γ production in culture supernatants was in accordance with these observations, and a significant increase in the expression of mRNA encoding Tbet was also observed in half of the PBMC of affected dogs; (2) a significant increase in the expression of mRNA encoding IL-4 was observed in the PBMC of most affected and healthy dogs, and was significantly higher in the PBMC of affected dogs; (3) the PBMC of most control and affected dogs also revealed an increase in the expression of mRNA encoding IL-17A, which was significantly greater in affected dogs; (4) a marked decrease in the expression of mRNA encoding IL-10 was observed in the PBMC of more than half of the affected dogs, and this expression was significantly lower in affected dogs than in healthy dogs. The microarray analysis showed that: (1) among the 49 overexpressed biological groups, 13 were associated with immunological or inflammatory processes; (2) the nasal mucosa of affected dogs presented an increased expression of genes encoding molecules involved directly (IFN-γ and STAT4) and indirectly (IL-16, CCL3, CCL4, and CXCL10) in the development of protective Th1 immunity, as well as molecules involved in the regulatory branch of the immune response (IL-16 and Ikaros).
The sequencing analysis of the promoter region of the gene encoding IL-10 revealed the presence of polymorphisms. Three polymorphisms were observed more frequently in clones belonging to the three studied cohorts, except for the clones belonging to SNA-affected Rottweilers. The polymorphisms observed in dogs were not similar to those described in humans. The first study showed an increase in the expression of mRNA encoding IFN-γ, Tbet, IL-4 and IL-17A in most of the PBMC of the affected dogs, and a decrease in the expression of IL-10 in comparison with the PBMC of healthy dogs. Similar results were observed in mice repeatedly exposed to A. fumigatus. The suggested hypothesis was that an intense Th17 immunity resulted in a massive inflammatory reaction leading to a favourable environment where A. fumigatus was able to proliferate as hyphae; in turn, hyphae would lead to the development of a non-protective Th2 immunity. It is tempting to suggest that the same hypothesis could apply to dogs affected by SNA. To reinforce this hypothesis, the expression of the different molecules involved in Th17 immunity should be compared in the nasal mucosa of affected and healthy dogs. Additionally, a kinetic study of the expression of the prototypical cytokines should be run in parallel with an analysis of the production of these cytokines in culture supernatants. Ideally, these studies should use DC and lymphocytes isolated from the nasal mucosa of affected and healthy dogs. In conclusion, a new hypothesis can be formulated: overstimulation of the Th17 branch of the immune response, rather than of its regulatory branch, could be the cornerstone of the inability of dogs to clear their SNA. The results of the microarray study were partially in accordance with the starting hypothesis.
Indeed, the results showed an overexpression of genes involved in the development of protective Th1 immunity (IFN-γ, STAT4, IL-16, CCL3, CCL4, and CXCL10) as well as genes involved in the regulatory arm of adaptive immunity (IL-16 and Ikaros). However, the results of this study did not show an increase in IL-10. No firm conclusion could be drawn from these results: they only reflect a snapshot at a given moment, and qPCR results cannot be considered an exact replica of cytokine production in the microenvironment. Nevertheless, this study pointed to new possible areas of research. The results obtained after sequencing the promoter region of the gene encoding IL-10 did not show any clear difference between affected and healthy dogs. However, this study was undertaken with a very limited number of dogs. To further assess the possibility of a genetic alteration as the cornerstone of the development of SNA, more dogs should be analysed and the sequencing analysis should be run in parallel with an ELISA analysis.
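The mRNA expression comparisons above rest on relative quantification by qPCR. As a point of reference for readers, such results are commonly expressed as fold changes through the 2^-ΔΔCt method; this is a standard assumption here, not a method the abstract names, and all cycle-threshold values below are hypothetical.

```python
def fold_change(ct_target_affected, ct_ref_affected,
                ct_target_healthy, ct_ref_healthy):
    """Relative expression of a target gene (e.g. IL-17A) in affected vs
    healthy dogs by the 2^-delta-delta-Ct method, normalised to a
    reference (housekeeping) gene."""
    delta_affected = ct_target_affected - ct_ref_affected
    delta_healthy = ct_target_healthy - ct_ref_healthy
    return 2.0 ** -(delta_affected - delta_healthy)

# Hypothetical Ct values: the target amplifies two cycles earlier
# (relative to the reference gene) in affected dogs.
print(fold_change(22.0, 18.0, 24.0, 18.0))  # 4.0, i.e. four-fold overexpression
```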

Tropical coral reefs are among the richest and most important ecosystems on Earth. This success would not be possible without the symbiosis established between corals and unicellular algae of the genus Symbiodinium, which provide them with photosynthesis-derived carbon. Unfortunately, with the climate upheaval we are witnessing today, the long-term survival of coral reefs could be in jeopardy. The massive loss of symbiotic algae, a phenomenon known as coral bleaching, is indeed becoming more and more frequent across the globe and has urged scientists to study its mechanisms for more than a decade. Their research highlighted the central role of reactive oxygen species in the collapse of the symbiosis. They also established that the expulsion of Symbiodinium from its host mainly operates through the death of the host cell. The ensuing events, although determining the eventual survival of the energetically compromised coral, are however much less well characterized. In this work, we decided to investigate these “post-bleaching” events and focused our efforts on the evaluation of cell proliferation and mucocyte number, for the roles they may respectively play in regenerative processes and heterotrophic feeding. For this purpose, we worked with the sea anemone model A. pallida, in which we analyzed the incorporation of a thymidine analogue (EdU). After preliminary experiments assessing the general distribution and the circadian variations of cell proliferation in healthy specimens, we conducted a series of bleaching experiments using a variety of stresses. Every treatment, namely cold and darkness, heat and light, or exposure to a photosynthesis inhibitor, drastically reduced the Symbiodinium density. This reduction was always accompanied by important histological modifications. In every case, we highlighted an increase in cell proliferation in both the ectodermis and the gastrodermis, as well as an increase in ectodermal mucocyte density. 
These values then returned to normal as the algae that survived the stress progressively repopulated the anemones. Further experiments showed that, following bleaching, a small fraction of the newly produced ectodermal cells migrate to the gastrodermis. Along with new gastrodermal cells, they most probably carry out a regeneration of the wounded tissue, differentiating into host cells in order to harbor new algae. Another experiment also indicated that a small but significant fraction of the newly produced ectodermal cells might differentiate into mucocytes, thereby explaining their increased density in bleached individuals. We hypothesize that the higher amount of mucus produced, in addition to providing protection against various aggravating stresses, would be a way to efficiently increase the feeding capacity of the bleached cnidarians. This heterotrophic shift would therefore allow a sufficient energy intake until full restoration of the symbiosis. This work emphasizes the need to focus more attention on the post-bleaching period, a critical time in which some modifications might be decisive for coral and coral reef survival.

Due to the overall complexity of the process, studies of percussive drilling usually focus on a limited set of the subprocesses underlying it, e.g., the hammer thermodynamics or the interaction between the bit and the rock. Following this paradigm, the performance of the process is typically assessed by considering a single percussive activation and a single interaction cycle between the bit and the rock, from arbitrary initial conditions. The need for an integrated approach to evaluate drilling performance, based on the dynamical interaction of the subprocesses underlying drilling, is evident. Such an approach requires simplified models, however, as the computational cost associated with full-scale models is prohibitive. In this thesis, three dynamical integrated models are proposed and a preliminary analysis is conducted for, and around, a reference configuration. The models couple three modules that represent: (i) the dynamics of the mechanical system, (ii) the interaction between the bit and the rock, and (iii) the activation of the mechanical system. For each module, simple representations are considered; of particular importance is the bit/rock interaction model, which generalizes to repeated interactions the experimental evidence observed for a single interaction. In the first model, the dynamics of a rigid bit is cast into a drifting oscillator and the activation is modeled as a periodic impulsive force. The second and third models account for the dynamics of the piston, with the activation resulting from the impact of the piston on the bit; they are respectively based on elastic and rigid representations of the two bodies. In the rigid model, analytical results on wave propagation in thin rods are used to represent the contact interaction between the piston and the bit. In the elastic model, wave propagation is resolved. The preliminary analysis of these models has revealed the occurrence of complex dynamical responses in the parameter space. 
Expected trends are recovered around a reference configuration corresponding to a small-size hammer, with an increase of the rate of penetration with the feed force and the percussive frequency. The latter is seen to have a strong influence on the rate of penetration. Interestingly, our analyses show that when the activation period has the same order of magnitude as the timescale associated with the bit/rock interaction, a lower power consumption is observed, indicating a possible resonance phenomenon in the drilling system. Also, the predictions of the rigid model are shown to be in good agreement with those of the elastic model in the explored range of parameters. Given the piecewise-linear nature of the proposed models, dedicated numerical tools have been developed to conduct their analysis. As such, the thesis proposes a high-order time integration scheme for structural dynamics, a novel framework to evaluate the accuracy of such schemes, and a root-solving module to perform event detection for coupling with event-driven integration strategies. Specific to the framework is the account for both structural damping and external forcing in the evaluation of the order of accuracy of the scheme. Specific to the root-solving module is the forcing of event occurrence in the localization procedure.
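As a back-of-the-envelope companion to these coupled models, the trend of penetration rate with percussive frequency can be illustrated by a crude energy balance over one blow. This is a toy sketch under strong assumptions (all impact kinetic energy goes into rock breakage at a constant yield force); every number is hypothetical and unrelated to the thesis's reference configuration.

```python
def rate_of_penetration(piston_mass, impact_velocity, rock_force, frequency):
    """Toy estimate: penetration per blow = kinetic energy of the piston
    divided by a constant rock-yield force; ROP = penetration x frequency."""
    energy_per_blow = 0.5 * piston_mass * impact_velocity ** 2   # J
    penetration_per_blow = energy_per_blow / rock_force          # m
    return penetration_per_blow * frequency                      # m/s

# Hypothetical values: 2 kg piston at 10 m/s, 100 kN yield force, 30 Hz.
print(rate_of_penetration(2.0, 10.0, 100e3, 30.0))  # 0.03 m/s
```

In this simplification the rate of penetration grows linearly with frequency, consistent with the qualitative trend reported above, though it cannot capture the resonance-like drop in power consumption, which requires the full dynamical coupling.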

After four years of doctoral research on the topic of iron in roof framing, a detailed typological timeline could be established. It covers the period between the thirteenth and the eighteenth century. Such an inventory was made possible by the development of an interdisciplinary methodology combining dendrochronology, archaeometallurgy and techniques of building archaeology. Cross-comparisons of nearly 750 armatures, recorded during the investigation of 68 roofs or attics mainly located in Wallonia, made it possible to inventory 89 different models of iron frames.

In many statistical studies, one observation is evident: the available data are regularly right-censored. Censoring arises when, for various reasons, the time of interest cannot be observed. An observation is said to be right-censored if, instead of its time of interest, only a lower bound of this time is available. For example, the duration of the study can be shorter than the time of interest, in which case the observed time corresponds to the end of the study. Moreover, such data can be obtained from a cross-sectional sampling process, which selects only the individuals in progress at a fixed time to constitute the studied sample, thereby determining the data followed for the study. Cross-sectional sampling therefore introduces left truncation. An observation is described as left-truncated if its time of interest is larger than or equal to a fixed time. It is in this context that this thesis has been developed. The estimation problems considered for such data are studied with a nonparametric or semiparametric approach. An approach is nonparametric or semiparametric if the distribution function of the time of interest is not assumed to belong to a parametric family; it relies solely on qualitative hypotheses. These estimation methods thus have the advantage of being based on weaker assumptions than parametric approaches. The aim of the different pieces of research developed in this thesis is to improve current estimation techniques. This thesis is organised in four parts. The first part (first chapter) sets the context of our research through practical examples and a significant, though not exhaustive, literature overview, as well as our motivation for the different pieces of research presented in this thesis. To conclude this first part, our contributions are briefly explained. 
The second part (second chapter) presents a new estimation procedure for the parameters of a parametric conditional variance in the heteroscedastic regression setting applied to right-censored data. This procedure constructs artificial data to replace the censored observations, exploiting a heteroscedastic regression model, and then defines the optimal parameters through the least squares method. The interest of this research is to fill a gap in the current literature. The third part (third and fourth chapters) studies, in a regression context, cross-sectional data, i.e. left-truncated and right-censored data, where the conditional truncation distribution function is supposed to be known. The innovation of the method proposed here consists in the use of the information contained in the conditional truncation distribution function within the nonparametric estimation methods. Finally, the fourth part (fifth chapter) is devoted to the examination of cross-sectional data, this time for the nonparametric estimation of the distribution function of the time of interest. In this chapter, the truncation distribution function is supposed to belong to a parametric family and is no longer known. The relevance of this approach lies in this assumption, which is weaker than the one made in the previous part. This information about the truncation distribution function is again introduced in the nonparametric estimation. This thesis concludes with a set of suggestions for possible future research in these statistical fields.
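For readers unfamiliar with right-censored data, the baseline nonparametric tool in this setting is the Kaplan-Meier product-limit estimator. The thesis's estimators are more elaborate, but a minimal sketch of this standard baseline (pure Python, toy data) shows how censored observations stay in the risk sets without being counted as events.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: observed times; events: 1 if the time of interest was observed,
    0 if the observation is right-censored.
    Returns a list of (event_time, survival_probability) pairs."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ts = [times[i] for i in order]
    es = [events[i] for i in order]
    at_risk = len(ts)
    surv = 1.0
    curve = []
    i = 0
    while i < len(ts):
        t = ts[i]
        deaths = removed = 0
        while i < len(ts) and ts[i] == t:  # group ties at the same time
            deaths += es[i]
            removed += 1
            i += 1
        if deaths:  # survival only drops at observed event times
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed  # censored subjects leave the risk set silently
    return curve

# Toy data: one censored observation at t = 2.
print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))
```

The censored observation at t = 2 reduces the risk set for later times without producing a drop of its own, which is exactly the information-preserving treatment that naive deletion of censored data would lose.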

The widespread use of antibiotics has caused bacteria to become drug resistant. Efforts have been made by many research groups to identify new molecules as inhibitors or inactivators of resistance-related proteins such as Penicillin-Binding Proteins (PBPs), or to find new targets that would allow peptidoglycan (PG) biosynthesis to be brought under control. The aim of this work is to develop a new labeling method for PG, allowing the identification of novel antibiotics and of worthwhile targets for inhibition. In Escherichia coli, meso-diaminopimelic acid (A2pm) is a key molecule for the reticulation of PG. During this work, we synthesized meso-lanthionine, the monosulfur analogue of meso-A2pm, and some of its derivatives. As these compounds are likely to enter PG biosynthesis via MurE enzymatic addition onto UDP-(S)-Ala-γ-(R)-Glu, their use in a new labeling method has been investigated. Another approach was the introduction of labeled lanthionines into PG via the recycling pathway, by replacement of the natural tripeptide (S)-Ala-γ-(R)-Glu-meso-A2pm with tripeptide analogues containing modified lanthionines. We have developed a new stereoselective synthesis of the three lanthionine diastereomers (meso-Lan, (R,R)-Lan and (S,S)-Lan) and of an α-benzylated analogue. Good yields at the gram scale and excellent diastereomeric excesses (>99%), without any enrichment step, were obtained in aqueous solution. Biological experiments showed the incorporation of meso-Lan and (R,R)-Lan into PG. Based on these results, [35S]lanthionine diastereomers could be used to study the biosynthesis of PG and its turnover in relation to cell growth and division. Unfortunately, the α-benzylated lanthionine was not incorporated. This result indicates that the introduction of lanthionines bearing a substituted aromatic group at this position is not feasible via the reaction catalysed by MurE. We have also developed a synthesis of protected lanthionines bearing an α-alkyl group, useful for the preparation of PG building-block analogues. 
An enantiomeric excess of 97% was obtained for the alkylation of an oxazoline precursor. An SN2 reaction of a cysteine at the β carbon of cyclic sulfamidate precursors was highly successful, despite the sterically crowded α carbon. In this way, we have obtained several protected α-alkyl lanthionines. These products are promising building blocks for peptide synthesis. We also describe a diastereoselective synthesis of α-substituted tripeptides. These compounds, containing a fluorescent tag or a photoactivatable group situated on an α carbon, could potentially be used to study PG biosynthesis. We have obtained chiral α-alkyl sulfamidates with enantiomeric excesses of 89–97%. These sulfamidates are excellent electrophiles, reacting with cysteine to provide lanthionines. The lanthionines were then readily converted into the tripeptides (S)-Ala-γ-(R)-Glu-(R,S/R,R)-α-benzyl-lanthionine and phenylazido analogues in a one-pot reaction. These tripeptides were substrates of Mpl in vitro. We did not observe any incorporation of these tripeptides into the PG of E. coli.

With the exception of the Johannesburg Stock Exchange, African stock markets are characterized, in their vast majority, by a marginal number of listed companies, low capitalization, low trading volume and low liquidity. These characteristics raise questions about volatility, efficiency, the optimal compensation of investors for risk and, more generally, the speed at which prices adjust to the information conveyed by trading volume. The objective of this doctoral work is to study the behavior of African stock markets through volatility, prices, returns and trading volume, in connection with returns and volatility. Regarding volatility modeling, the study shows that the EGARCH model is the most suitable process to apply to African stock market data for estimating and forecasting volatility. Regarding efficiency, the study shows that only the South African market seems to be informationally efficient in the weak form. The analysis of the risk-return relationship reveals that only the less developed markets (except Botswana) show a positive and significant risk premium. Such a result means that the least developed African stock markets in the sample adequately reward investors for the risks taken. Regarding asymmetry, the study shows that a leverage effect is present in all emerging markets, indicating that in these markets bad news has a greater impact on volatility than good news of the same magnitude. In contrast, no leverage effect is found in the less developed markets. With respect to the relationship between trading volume, returns and volatility, the study shows that emerging African stock markets behave more or less like the other emerging and developed markets of the world in terms of leverage effect, of a contemporaneous positive relationship, and of causality between trading volume and returns on the one hand, and between trading volume and volatility on the other hand. 
In contrast, for the other, less developed stock markets, the impact of volume on returns and volatility is less noticeable.
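The leverage effect that the study detects through EGARCH comes from the asymmetry term of the log-variance recursion. A minimal EGARCH(1,1) sketch (hypothetical parameters, standard-normal innovations assumed; not the study's fitted model) shows how a negative return raises next-period variance more than a positive one of the same size when the asymmetry coefficient is negative.

```python
import math

def egarch_variance(returns, omega, alpha, gamma, beta):
    """EGARCH(1,1): log s2_t = omega + beta*log s2_{t-1}
    + alpha*(|z| - E|z|) + gamma*z, with z = r/s (zero-mean returns).
    The log specification keeps the variance positive by construction."""
    e_abs_z = math.sqrt(2.0 / math.pi)  # E|z| for standard normal innovations
    log_var = omega / (1.0 - beta)      # start at the unconditional level
    variances = []
    for r in returns:
        sigma2 = math.exp(log_var)
        variances.append(sigma2)
        z = r / math.sqrt(sigma2)
        log_var = omega + beta * log_var + alpha * (abs(z) - e_abs_z) + gamma * z
    return variances

# Hypothetical parameters; gamma < 0 encodes the leverage effect.
v_neg = egarch_variance([-0.05, 0.0], -0.1, 0.1, -0.1, 0.9)
v_pos = egarch_variance([0.05, 0.0], -0.1, 0.1, -0.1, 0.9)
print(v_neg[1] > v_pos[1])  # True: bad news raises variance more
```

With gamma = 0 the two shocks would produce identical variances, which is the symmetric behaviour found in the less developed markets of the sample.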

This thesis provides methods to speed up acoustical ray tracing for room acoustics. These methods rely on three complementary axes that have been implemented, tested and analyzed: CPU vectorization, a geometry preprocessing step based on visibility, and a reduction of the number of traced rays through the use of variable-size receivers. This results in an overall speedup ranging from 120 to 150 on an 8-core CPU.
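The CPU-vectorization axis amounts to processing whole batches of rays with SIMD-style operations instead of one ray at a time. A minimal illustration of the idea, sketched here with NumPy batching rather than the thesis's actual implementation, intersects many rays with a plane in a single vectorized pass.

```python
import numpy as np

def ray_plane_hits(origins, directions, normal, d):
    """Intersect a whole batch of rays with the plane n.x + d = 0 at once.
    Returns the hit distance t per ray, or inf for parallel/backward rays."""
    denom = directions @ normal                    # n.dir for every ray
    num = -(origins @ normal + d)                  # -(n.orig + d) for every ray
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(np.abs(denom) > 1e-12, num / denom, np.inf)
    return np.where(t > 0, t, np.inf)              # keep only forward hits

# Two rays from the origin: one toward the plane z = 2, one away from it.
origins = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
directions = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
print(ray_plane_hits(origins, directions, np.array([0.0, 0.0, 1.0]), -2.0))
```

The per-ray loop disappears entirely: the dot products and divisions are carried out over the whole batch, which is the same principle that explicit CPU vector instructions exploit.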

Cell division in the Gram-negative bacterium Escherichia coli is a highly coordinated mechanism involving various physiological functions such as chromosome segregation, cell envelope invagination, peptidoglycan synthesis at the division site and separation of the daughter cells. All these functions require a high level of spatio-temporal regulation in order to preserve the physical integrity of the cell. At least 20 proteins required for proper cell division are recruited to the division site to form a supramolecular complex called the divisome. This thesis work focused on three major components of the E. coli division machinery: the N-acetylmuramyl-L-alanine amidase AmiC, the LytM factor NlpD and the lipid II flippase FtsW. These proteins are recruited at midcell at a late stage of cell division. FtsW is an integral membrane protein crucial for the translocation of the peptidoglycan precursor from the cytoplasm to the periplasm, where it is processed to produce septal peptidoglycan. AmiC acts as a septal peptidoglycan hydrolase that allows the separation of the daughter cells. This enzyme has been shown to be activated by the LytM factor NlpD. The crystal structure of AmiC from E. coli presented in this work confirms the presence of an inhibitory helix in the active site. The AmiC variant lacking this helix exhibits by itself an activity comparable to that of the wild-type AmiC activated by NlpD. Furthermore, the direct interaction between AmiC and NlpD has been detected by microscale thermophoresis, with an apparent Kd of about 13 µM. The crystal structure of AmiC also reveals the β-sandwich fold of the AMIN domain, responsible for the septal targeting of AmiC to the division site. The two symmetrical four-stranded β-sheets exhibit highly conserved motifs on their two outer faces. 
Along with the peptidoglycan-binding capacity of the AMIN domain, the results obtained so far suggest that this domain could be involved in the recognition of a specific peptidoglycan architecture or of a composition different from that of the lateral peptidoglycan. Production screenings of FtsW from different strains were carried out, and FtsW from E. coli was purified. This challenging project will require additional efforts to obtain a sufficient amount of protein for structural investigation. The information gathered in this work confirms the high level of regulation of the hydrolytic activity at the septum and gives a structural basis for a more precise molecular characterization of division-site targeting. Disruption or over-activation of these regulation mechanisms could represent a new strategy in the development of antibacterial compounds.
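The apparent Kd of about 13 µM quoted above translates directly into occupancy through the standard 1:1 binding isotherm. This is assumed here as the usual model behind such thermophoresis fits (with free ligand approximated by total ligand), not a calculation the thesis reports.

```python
def fraction_bound(nlpd_conc_um, kd_um=13.0):
    """Fraction of AmiC bound to NlpD for a simple 1:1 equilibrium,
    assuming NlpD is in excess so free ~= total concentration (in uM)."""
    return nlpd_conc_um / (kd_um + nlpd_conc_um)

print(fraction_bound(13.0))   # 0.5: half-saturation at [NlpD] = Kd
print(fraction_bound(117.0))  # 0.9: 90% bound at nine times the Kd
```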

Ionic liquids are salts with the particularity of being molten at temperatures below 100 °C. Consequently, these new solvents, mainly made up of ions, have original physicochemical properties. Within a few years, ionic liquids went from a laboratory curiosity to a major field of research, currently in full rise. Indeed, the replacement of usual organic solvents in catalytic and/or separation processes by these neoteric solvents offers many advantages, as well as new opportunities for “Green Chemistry”. However, the systematic exploitation of ionic liquids as reaction media rests in particular on the understanding of their chemical properties, some of which, such as acidity, are still scarcely known. This thesis thus aims to study the acid-base properties of (and in) these solvents, and more particularly to determine the acidity levels accessible to acid solutions (with added HOTf or HNTf2) in second-generation ionic liquids such as [HNEt3][NTf2], [BMIm][NTf2], [BHIm][NTf2], [BMIm][BF4], [BMIm][OTf], [BMIm][PF6] and [BMIm][SbF6]. In order to evaluate these acidity levels, we propose two different methods, each resting on an extra-thermodynamic assumption. The first, the Hammett acidity function H0, is based on the protonation equilibrium of indicators whose pKa values are assumed to be solvent-independent. The second, the Strehlow potentiometric function R0(H+), consists in measuring, in a given solvent, the electrochemical potential of the proton relative to the ferricinium/ferrocene redox couple, whose potential is assumed to be solvent-independent, and then referring it to the Normal Hydrogen Electrode (NHE) in water. The two methods lead to the same conclusions. 
Firstly, ionic liquids are generally contaminated by residual basic impurities (from the solvents needed for their synthesis, among others), which need to be neutralized before the acidity characteristic of the medium can be reached. The acidity levels then obtained are very high and can reach values as high as R0(H+) = -10 in the case of [BMIm][BF4]. Furthermore, the acidity level accessible in an ionic liquid depends mainly on the nature of its anion, not on that of its cation. We thus obtain the following classification, by decreasing acidity: [PF6-] > [BF4-] > [NTf2-] > [OTf-], indicating that triflate is the most solvating anion. It was found, however, that the Hammett acidity function led, for the same acid concentration, to different acidity levels depending on the indicator used. Ionic liquids would consequently be less dissociating media than estimated in the literature, and the Hammett function would then correspond to an apparent acidity (H0)app, underestimating the real acidity. Finally, a difference in acidity between HNTf2 and HOTf is observed in [BMIm][NTf2], HNTf2 showing a stronger acidic character than HOTf. On the other hand, in [BMIm][OTf] these two acids show the same acidity, since that of HNTf2 is leveled by the solvent.
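The Hammett indicator method described above boils down to one relation, H0 = pKa(InH+) + log10([In]/[InH+]). A one-line sketch with hypothetical indicator values makes the arithmetic explicit:

```python
import math

def hammett_h0(pka_indicator, conc_unprotonated, conc_protonated):
    """Hammett acidity from the protonation ratio of an indicator whose
    pKa is taken as solvent-independent (the extra-thermodynamic assumption)."""
    return pka_indicator + math.log10(conc_unprotonated / conc_protonated)

# Hypothetical: an indicator of pKa -3 found 10:1 protonated in the medium.
print(hammett_h0(-3.0, 1.0, 10.0))  # -4.0: a strongly acidic medium
```

The indicator dependence reported above corresponds to different indicators yielding different H0 values at the same acid concentration, which is why the function is reinterpreted as an apparent acidity (H0)app.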

Micturition disorders are common in veterinary medicine and can already occur in puppies. Congenital urethral sphincter mechanism incompetence is the second most common cause of urinary incontinence in puppies, but data on the development and evolution of the continence mechanism are lacking in veterinary medicine. This study could be the first step in understanding the pathophysiology of congenital urethral sphincter mechanism incompetence, which resolves spontaneously in half of the affected dogs after the first or the second oestrous cycle. The lower urinary tract has been widely investigated in adult female dogs, particularly by urodynamics. Urodynamics is a useful technique providing information on the vesico-urethral function, but some limitations have been described. The major limitation is the need for sedation or anaesthesia, as it is not feasible to obtain interpretable data from an awake animal. A second limitation is associated with the technique of retrograde cystometry, because the technique of bladder filling and the filling rate are not physiological. Therefore, techniques such as diuresis cystometry or ambulatory urodynamics have been developed to decrease the impact of these limitations. In urological research, telemetry allows the investigation of different urodynamic parameters without the need for physical or chemical restraint and over several micturition cycles. The objectives of this work were to study the development and the evolution of the continence mechanism during the prepubertal period and during the first two oestrous cycles, and to study the vesico-urethral function with telemetry. This technique was first standardized and then used to study different urological drugs commonly used in veterinary medicine. 
In the first study, we showed that the values of the urodynamic and morphometric parameters of the lower urogenital tract varied as a function of the growth of the bitches and of the different phases of the oestrous cycle. The end of the prepubertal period was characterized by an increase of the urethral pressures (maximal urethral pressure, MUP; maximal urethral closure pressure, MUCP; integrated pressure, IP) and of the functional urethral length (FPL), but also by an increase of the urethral (UL) and vaginal (VL) lengths measured by vagino-urethrography, compared with values at earlier times. The bladder function was characterized by a non-linear increase of the bladder capacity, which was reached at 6 months of age, and by an increase of the bladder threshold pressure observed from 7 months of age. During the oestrous cycle, the urethral pressures significantly decreased during oestrus and early dioestrus, which are characterized by a decrease in the oestrogen plasma concentration and by an increase in the progesterone plasma concentration. The highest values of FPL were observed during the follicular phase (oestrus, prooestrus). Values of UL could not be determined during the oestrus of the first cycle because the urethras were not identifiable on the vagino-urethrograms obtained at that time. During the second oestrous cycle, UL was significantly higher during oestrus and early dioestrus compared to anoestrus. No significant variation in the threshold pressure (Pth) was observed during the first cycle; however, during the second cycle, the lowest Pth values were observed during late anoestrus. Pth values were also lower during oestrus compared to prooestrus. The luteal phase was associated with an increase in bladder capacity, which reached its highest value during dioestrus. The VL decreased progressively during the second cycle to reach its lowest value during anoestrus. 
The MUCP and UL were significantly higher during the anoestrus of both cycles compared with the values obtained when the dogs were 9 months old. The same observation was made for IP at the first anoestrus. The increase in the values of these parameters during periods of low hormonal influence could suggest an impact of growth, but also a constant improvement of bladder function throughout the study. It was interesting to observe that the position of the bladder was variable during the prepubertal period, as it was either in an intrapelvic or in an intra-abdominal position. After the prepubertal period, the bladder was always in an intra-abdominal position. In the second study, we showed that a single administration of ephedrine or phenylpropanolamine (PPA) was able to significantly modify the urodynamic and morphometric parameters. Using conventional urodynamics, both ephedrine and PPA increased the urethral pressures for 4 hours and the FPL for 2 hours. The integrated pressure remained elevated for 6 hours with PPA, and its value was higher than the value obtained with ephedrine. On the other hand, the FPL remained elevated for 18 hours with ephedrine. Using telemetry, ephedrine, and to a lesser extent PPA, modified bladder function. The administration of ephedrine was associated with an increase in the bladder threshold volume and a decrease of the detrusor threshold pressure. An increase in bladder volume was observed after PPA administration. Both drugs significantly modified the haemodynamic parameters. Arterial pressures were significantly increased for 4 to 6 hours after the administration of either drug and were associated with a decrease in heart rate for 12 hours. These results are in agreement with previous studies and confirm that sympathomimetic drugs must be used with caution in incontinent dogs with suspected cardiovascular disease. 
In the third study, we showed that interpretable values of urodynamic parameters can be obtained with telemetry. Physiological data were obtained from the telemetric recordings because no sedative or analgesic drugs were used during the recording and bladder filling was natural. The location of the implant inside a subcutaneous cavity in the left flank did not disturb the dogs, and the quality of the signals was good when the receiver was placed on the lateral wall of the metabolic cage. Telemetry allows continuous recording of the values of urodynamic parameters (abdominal pressure, bladder pressure, detrusor pressure and bladder threshold volume) and of smooth-muscle electrical activity parameters. In the first step of the study, conventional urodynamics was compared with telemetry. The values of bladder threshold volume obtained by telemetry were significantly lower than those obtained by conventional urodynamics. No difference in the values of bladder threshold pressure was observed between the two techniques. In the second step of the study, the repeatability of the telemetric recordings was assessed between day and night. Good repeatability was obtained with the night recordings. Comparing day and night recordings, higher bladder threshold volumes and lower detrusor threshold pressures were obtained during the night. No variation of the urodynamic parameters was observed during the bladder filling phase. The urethral smooth-muscle electrical activity did not vary during the bladder filling phase. The frequency of micturition was not different between day and night, but the frequency of involuntary detrusor contractions was higher during the day. In the third step of the study, the effects of drugs currently used to treat different micturition disorders were studied. Oestriol and duloxetine increased the electrical activity at day 8 compared with days 0, 1 and 15. 
No significant effect of PPA, oestriol, oxybutynin, bethanechol or duloxetine was observed on the values of the urodynamic parameters or on the frequency of involuntary detrusor contractions. The results obtained in this study suggest that circadian variations may influence urodynamic measurements and that long-term telemetric studies of the lower urinary tract should be conducted during the night to obtain repeatable recordings. Further pharmacokinetic and pharmacodynamic studies are needed to confirm the effects of the different drugs on vesico-urethral function.

This work consists of the analysis of the palynoflora (dinoflagellates and sporomorphs) identified from four petroleum exploration wells drilled through the Nkapa Formation (Douala Basin, Cameroon). The vertical distribution of index taxa in the Moulongo, Ngata, Mamiwater and North Matanda wells allowed a biostratigraphic framework of the first half of the Paleogene of the Douala Basin to be established and the Paleocene/Eocene boundary to be identified in each well through a comparison with surrounding sedimentary basins. Seventy dinoflagellate species have been identified, among which 30 are stratigraphically informative. Four biozones have been established, three for the Paleocene and one for the Early Eocene. Biozone 1 is characterized by Cretaceous species such as Cerodinium diebelii, Lejeunecysta hyalina, Andalusiella gabonensis, Palaeocystodinium australinum and Palaeocystodinium golzowense. Biozone 2 is defined by the acme of Areoligera coronata, Adnatosphaeridium multispinosum and Glaphyrocysta ordinata. Biozone 3 is defined by the acme of several Apectodinium species (A. hyperacanthum, A. homomorphum, A. paniculatum, A. parvum, A. quinquelatum). Biozone 4 is defined by the occurrence of Eocene taxa such as Deflandrea cf. oebisfeldensis, Hystrichosphaeridium tubiferum and Wetzeliella sp. Based on the position of the Paleocene/Eocene boundary in the Moulongo well, correlations have been established with the other sedimentary sequences analysed. Our study shows that, as previously demonstrated for Nigeria in another study, the acme of Apectodinium in the Douala Basin can be assigned a Late Paleocene age and hence occurred before the carbon isotope excursion (CIE) of the Paleocene-Eocene boundary and the coeval Paleocene-Eocene thermal maximum (PETM).
The acme of Apectodinium in the Douala Basin is thus markedly diachronous with the Apectodinium acme identified in various Nordic basins, where it is contemporaneous with the PETM (earliest Eocene). Ninety-four sporomorph species have been recognized, among which fifty are stratigraphically informative. Three biozones have been established. Biozone 1 is predominantly characterized by Palmae- and Proteaceae-type pollen, the latter with Cretaceous characteristics. Biozone 2, Late Paleocene in age, is characterized by a transition from the original floras towards more "modern" floras. Biozone 3 is earliest Eocene in age and shows the early steps of the extant "Leguminosae flora" of West Africa. Together with the biostratigraphic analysis, a palaeoenvironmental reconstruction is proposed, based on dinoflagellate ecology, on the evolution of the dinoflagellate/sporomorph ratio, as well as on the reconstruction of the plant environments in the studied sedimentary sequences. The evolution of the Douala Basin during the timespan studied occurred along two geographical axes: a W-E (Moulongo-Ngata) axis and a SSW-NNE (Moulongo-Mamiwater-North Matanda) axis. During the Early and Middle Paleocene, the Douala Basin was open towards the sea and showed a coastal-estuarine environment, with marginal mangroves and lowland swamp forests. During the Late Paleocene, the marine character of the sedimentation was less prominent as brackish fluvio-lagoonal intertidal environments developed. They were accompanied by gallery forests and surrounded by dense, periodically flooded forests northwards and by forests on wet soils eastwards. During the earliest Eocene, a marine regression occurred. The palaeoenvironments included confined coastal lagoon systems with a peripheral impoverished local flora northwards, and moist, dense, evergreen forests eastwards.

Aiming to define irrigation strategies that improve the water productivity of fodder crops under water scarcity in the irrigated perimeter of Tadla (Morocco), this work combines field experimentation and modeling. The field study of crop response to water stress is important to maximize yield and improve agricultural water use efficiency (WUE) in areas where water resources are limited. For silage maize, the results showed that water deficit affected plant height growth, accelerated the senescence of the leaves and reduced the leaf area index. Dry matter yields varied from 3.9 t.ha-1 under T5 (20% ETc) to 16.4 t.ha-1 under T1 (100% ETc). The establishment of the water budget by growth phase showed that the water use efficiency was higher during the linear phase of growth. WUE calculated at harvest varied between 2.99 kg.m-3 under T1 and 1.84 kg.m-3 under T5. The actual evapotranspiration under T1 (100% ETc) was 478 mm and 463 mm in 2009 and 2010, respectively. The yield response factor (Ky) of silage maize for both growing seasons was 1.12. The ETc of silage maize was determined by drainage lysimeter at 415 mm. The mean values of the crop coefficient Kc were 0.56, 1.22 and 1.05 for the initial phase, mid-season and harvest (grain milky-pasty stage), respectively. Drip irrigation achieved dry matter yields similar to flood irrigation but with less water, saving about 30% of the irrigation water applied. Over five cycles, berseem dry biomass yields under T1 were 14.3 and 13.9 t/ha in 2009/10 and 2010/11, respectively. The yield reductions when applying 60% of the water requirements were 40% and 42% in 2009/10 and 2010/11, respectively. Berseem daily productivity increases with more water applied, with a highest value of 102 kg DM/ha/day. The dry matter content increases with water stress: the mean values ranged from 12.3% under T1 (100% ETc) to 23.7% under T4 (40% ETc).
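The yield response factors (Ky) quoted for the three crops presumably follow the usual FAO Irrigation and Drainage Paper No. 33 convention, which the abstract does not spell out; under that assumption, Ky is the slope relating relative yield loss to relative evapotranspiration deficit:

```latex
% FAO-33 yield response relation (assumed convention for Ky):
% Y_a, Y_m  = actual and maximum yield,
% ET_a, ET_m = actual and maximum crop evapotranspiration.
% Ky > 1 (e.g. 1.12 for silage maize here) means yield drops
% proportionally faster than water consumption.
\left(1 - \frac{Y_a}{Y_m}\right) = K_y \left(1 - \frac{ET_a}{ET_m}\right)
```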
The contribution of the non-irrigated cycles (rainy period) to the total annual yield varied from 35% under T1 to 52% under T4. The water balance established for each water regime shows that drainage losses increase with more water applied, especially in the first cycle. WUE is low during the first cycle, optimal in the 2nd, 3rd and 4th cycles, and decreases in the last one under water stress. The global WUE of berseem determined over the entire crop period (slope of the regression line) is 3.37 kg.m-3. The yield response factor (Ky) of berseem for both growing seasons was 1.11. Berseem ETc determined by drainage lysimeter was 520 mm. The Kc values were estimated for each cycle for all three phases: initial, development and mid-season. The maximum average yield under drip irrigation was 15.7 t/ha, obtained with 411 mm of water supply, saving 57% of water compared to the traditional irrigation technique. Comparing the behavior of the six alfalfa varieties most commonly grown in the irrigated perimeter of Tadla shows that the "Super Siriver" cultivar, followed by "Trifecta", has a higher yield potential and a higher tolerance to water deficit. The maximum annual alfalfa yield obtained was 24.2 t.ha-1. The contribution of the spring cycles to the annual yields ranged from 55% under T1 (100% ETc) to 65% under T4 (40% ETc). In addition to the water quantities, alfalfa yields depend on the timing of application during a growth cycle. WUE varies from one cycle to another and from one season to another. The maximum value, 2.57 kg.m-3, was obtained in spring 2011, while the lowest value, 0.64 kg.m-3, was obtained in winter 2010. WUE decreases with water stress, with mean values of 1.83, 1.67, 1.54 and 1.23 kg.m-3 under T1 (100% ETc), T2 (80% ETc), T3 (60% ETc) and T4 (40% ETc), respectively. The yield response factor (Ky) of alfalfa was 0.92. The alfalfa water requirements were determined on the basis of the cycle calendar over two years, 2010 and 2011.
The values found for flood irrigation were 1388 and 1364 mm for the two years, respectively. Drip irrigation achieves dry matter yields similar to flood irrigation with less water and a better agronomic efficiency. Water applied under T1 in drip irrigation, with 50 cm of spacing between drip lines, was less than the alfalfa water requirements by about 7% and 18% in 2010 and 2011, respectively. Under the same treatment in flood irrigation, the water requirements were exceeded by 16% and 21% in 2010 and 2011, respectively. Two crop models, PILOTE and CropSyst, were selected and tested for their ability to simulate the growth and yield of the studied crops under the soil and climate conditions of Tadla. Tested on silage maize, both models correctly simulated the growth and development of the crop under the different water regimes. The parameters of both models were validated and shown to be effective for the simulation of biomass, leaf area index and soil water storage. Although PILOTE requires fewer parameters and data than CropSyst, it often proves more successful in simulating the biomass of silage maize and the water balance. As for berseem, the biomass predictions of CropSyst seem to be more accurate than those of the PILOTE model. The latter was best at predicting the soil water reserve over the soil depth explored by the roots (0-80 cm). Given its ease of integrating daily climatic data over several years, the CropSyst model was chosen to test its ability to simulate the rotation of berseem and silage maize. The results show that this model correctly simulates the evolution of biomass and yields of the two crops considered in rotation during three years. The growth and production of alfalfa were modeled with both models outside the crop establishment period (seeding year).
After calibration and validation, the CropSyst model adequately simulates biomass and the soil water reserve under all the water regimes considered, while the best PILOTE simulations were limited to the non-stressed treatment T1 (100% ETc). Although the CropSyst model takes several parameters into account in the simulation of alfalfa growth, their simplification (single values) reduces its performance under stronger water stress. The smaller number of parameters considered in the PILOTE model makes its validation difficult for perennial crops such as alfalfa. The CropSyst model was used to evaluate the irrigation practices of farmers and to develop virtual irrigation scenarios for the three crops studied. The assessment shows that the virtual scenario developed for alfalfa, applying 1600 mm of irrigation water in 14 applications (six irrigations during the spring, six in summer, one in the fall and another in early winter), maximizes irrigation water efficiency (1.21 kg/m3) and achieves a yield of 23.1 t/ha (95% of the yield potential). In the case of maize, if water is available, the application of 648 mm according to the combination of 2 irrigations in the initial phase (after sowing), 2 in the linear phase and 2 in the final phase achieves a high biomass yield and a better water use. For berseem, the simulation results confirm that adopting the scenario that provides 625 mm through 7 irrigations (3 in autumn, 2 in winter and 2 in spring) yields 14.1 t/ha of dry matter, which represents 94% of the yield potential of the 6454 cultivar. This scenario allows a greater water efficiency (1.24 kg/m3) and results in low drainage losses, estimated at about 17% of the applied water. The comparison of the two cropping systems, alfalfa versus the silage maize-berseem rotation, shows that the rotation makes better use of water and mobilizes less water than alfalfa, which stands out for its profitability.
Finally, coupling the results of three years (2008 to 2011) of in situ experimentation with scenario simulations by the CropSyst and PILOTE models has proved highly effective for improving fodder crop irrigation in the Tadla irrigated area of Morocco.

Acetic acid bacteria (AAB) are used industrially to produce different kinds of bioproducts. AAB encounter very aggressive conditions during acetous fermentation (AF), including high acid and ethanol concentrations, low pH, and abrupt increases in temperature. In subtropical regions such as the central and southern parts of Iran, fruits and the by-products of the fruit processing industries are used to produce different kinds of foods. However, because of high temperatures and scarce water resources, fermentation industries face many restrictions during spring and summer. One of the main restrictions is low productivity or the cessation of fermentation due to the use of non-thermotolerant microorganisms. In this case, the initiation of a new fermentation run requires an efficient starter. In previous studies at the Walloon Center of Industrial Biology, Acetobacter senegalensis, a novel thermotolerant bacterium, was isolated and used to produce a vinegar starter and acetic acid at high temperature. However, in those studies, the viability and vitality of the starter were not evaluated under stress conditions. In addition, since most kinds of industrial vinegar have low prices, the use of high-priced nutrients for the production of a low quantity of starter is not commercially cost-effective. In the present study, with a close look at the requirements of acetous fermentation, we analyzed the fundamental and applied aspects of the resistance of A. senegalensis to stress inducers. Proteomics-based techniques and flow cytometry methods, in combination with different biomass production techniques, were used to develop a fermentation process improving cell viability and vitality during the freeze-drying process and the revitalization procedure. In addition, the trend of cell senescence during storage of the starter and its effect on some biomolecules were studied. In the first part of the study, the quality of the produced biomass was improved in order to achieve an acetic acid-tolerant biomass.
The adaptive laboratory evolution (ALE) technique enabled cells to grow rapidly at higher concentrations of acetic acid. The results of 2D-DiGE on the produced biomass revealed that structural and regulatory proteins were expressed differently under the various conditions (Chapters II and III). The use of acetic acid in combination with glucose in a fed-batch fermentation mode could induce a physiological condition in A. senegalensis close to the physiological state of cells oxidizing ethanol. In addition, the presence of acetic acid in the fermentation media could cause a cross-adaptation and improved the tolerance of cells to stressors (ethanol, low pH and acetic acid). Interestingly, by using this method for the production of biomass, the rate of growth on ethanol improved significantly. In parallel to the first part of the study, we examined the influence of different stresses on the produced biomass. Cell envelope integrity and respiration (dehydrogenase activity) were the two main targets of the adverse effects of stress. Assessment of the cell envelope integrity and respiration system of the produced biomass by multiparametric flow cytometry (MFC) (Chapter II) demonstrated that the detrimental effects of ethanol and acetic acid depended on the carbon sources and fermentation conditions used for pre-adaptation. The respiration system and cell envelope integrity of cross-adapted cells were not compromised after exposure to different concentrations of ethanol and acetic acid. Thus, according to the obtained results, by using a mixture of acetic acid and glucose as carbon sources, it is possible not only to enhance the viability of cells but also to induce tolerance to physicochemical stress during downstream processing. Our investigation of the freeze-drying process provided a better understanding of lethal and sub-lethal damage to cells (Chapter IV). The results showed that the drying process had the greatest effect on the viability and vitality of A. senegalensis, especially by affecting the cell envelope. In addition, entry into the viable but non-culturable (VBNC) state was initiated during the drying process and was enhanced during the storage period. Analysis of the proteome of stored cells by 2D-DiGE and western blotting (Chapter V) revealed that a high storage temperature could induce a kind of senescence in the cells through different modifications of the cellular proteome, such as insolubility, degradation and carbonylation of cellular proteins and shifts in isoelectric point. Carbonylation of the proteins involved in transcriptional and translational processes could cause cell death, whereas VBNC formation at low storage temperature seemed to be due to other deteriorative reactions such as fatty acid peroxidation. At the end of this dissertation, the discussion (Chapter VI) provides a general overview of the results and compares our findings with earlier studies. Potential industrial applications are reviewed and suggestions for further research are made.

The Central African forests are characterized by high species richness. While little light generally reaches their litter layer, the species encountered include major long-lived, light-demanding logged trees. These species may have established in the past because of large disturbances. Indeed, major climate changes have occurred in the past few millennia, in some areas coupled with intense human occupation. Among these light-demanding species is assamela/afrormosia (Pericopsis elata (Harms) Meeuwen, Fabaceae), a large tree of high commercial value found in African semi-deciduous moist forests. Nowadays this logged species suffers from significant regeneration problems across its natural range, from Ghana to the Democratic Republic of the Congo. Therefore, it is included in CITES Appendix II and is recorded as “Endangered A1cd” on the IUCN Red List. No convincing solution can be found in the available scientific literature to overcome the deficiency of its regeneration. In addition, little or no information is available describing, for instance, its population dynamics, genetics, silviculture or the probable origin of its stands. It is in this context that the present PhD was undertaken. In this study, we adopted a multidisciplinary approach to understanding the population dynamics of long-lived light-demanding species logged in the Congo Basin. Moreover, we have suggested ways for their management and conservation in a changing world. To do so, we adopted as a study model the case of P. elata in southeastern Cameroon. The main results of the study show that, on a local scale, physico-chemical soil parameters have no influence on the presence or absence of this clustered species. However, on the same scale, large quantities of charcoal were found in the soil, mainly inside those clusters. The anthracological analysis has also shown that the vegetation at the time of the burn was similar to that of today.
Numerous fragments of pottery were also found in the topsoil layers, only inside the clusters formed by the species. Finally, some 14C dates go back to ca. 200 years BP, which is approximately the average age of the clusters concerned. This body of evidence leads us to conclude that an ancient form of shifting cultivation is the most likely origin of the assamela populations currently present in southeastern Cameroon. In addition to its current population structure within the study area, several parameters controlling its population dynamics were estimated. While the annualized natural mortality of the species reaches about 1%, its average diameter growth rate is 0.31 cm.year-1, a relatively low value compared to other long-lived light-demanding species. Selective logging seems to have only a slight influence on the behavior of the species. On the other hand, the impact of logging on its seed tree populations is only ca. 12%. Each seed tree bears fruit on average only once every five years. The minimum diameter of reproduction and the effective flowering diameter were 32 and 37 cm, respectively. The recovery rate, which varies greatly from one country to another, is more than 100% in Cameroon, where the conservation status assigned to the species seems excessive. Sufficient regeneration by planting should allow the perpetuation of the assamela populations in the long run, as with other major logged light-demanding trees.

Classical model-checking tools verify concurrent programs under the traditional "Sequential Consistency" (SC) memory model, in which all accesses to the shared memory are immediately visible globally, and where model checking consists in verifying a given property while exploring the state space of a program. However, modern multi-core processor architectures implement relaxed memory models, such as "Total Store Order" (TSO), "Partial Store Order" (PSO), or an extension with locks such as "x86-TSO", which allow stores to be delayed in various ways and thus introduce many more possible executions, and hence errors, than those present under SC. Of course, one can force a program executed in the context of a relaxed memory system to behave exactly as under SC by adding synchronization operations after every memory access, but this totally defeats the performance advantage that is precisely the motivation for implementing relaxed memory models instead of SC. Thus, when moving a program to an architecture implementing a relaxed memory model (which includes most current multi-core processors), it is essential to have tools that help the programmer check whether correctness (e.g. a safety property) is preserved and, if not, minimally introduce the necessary synchronization operations. The proposed verification approach uses an operational store-buffer-based semantics of the chosen relaxed memory models and proceeds by using finite automata to symbolically represent the possible contents of the buffers. Store, load, commit and other synchronization operations then correspond to operations on these finite automata. The advantage of this approach is that it operates on (potentially infinite) sets of buffer contents, rather than on individual buffer configurations, and that it is compatible with partial-order reduction techniques. This provides a way to tame the explosion of the number of possible buffer configurations, while preserving the full generality of the analysis.
It is thus possible to even check designs that may contain cycles. This verification approach then serves as a basis for a memory fence insertion algorithm that finds how to preserve the correctness of a program when it is moved from SC to TSO or PSO. Its starting point is a program that is correct under the sequential consistency memory model (with respect to a given safety property), but that might be incorrect under TSO or PSO. This program is then analyzed for the chosen relaxed memory model and, when errors are found (a violated safety property), memory fences are inserted in order to avoid these errors. The approach proceeds iteratively and heuristically, inserting memory fences until correctness is obtained, which is guaranteed to happen.
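The extra behaviours that TSO introduces can be seen on the classic "store buffering" litmus test. The sketch below is not the automata-based symbolic approach described above; it is a deliberately naive explicit-state explorer (the encoding and all names are our own) that enumerates concrete store-buffer contents, just to show that the outcome r0 = r1 = 0 is unreachable under SC but reachable under TSO:

```python
from collections import deque

# Classic "store buffering" litmus test on two shared variables x, y:
#     thread 0: x := 1; r0 := y        thread 1: y := 1; r1 := x
# Under SC the outcome r0 = r1 = 0 is impossible.  Under TSO each store
# may linger in its thread's FIFO store buffer, so both loads can still
# read the old value 0 from memory.

X, Y = 0, 1                                      # variable indices
PROGS = [[("store", X, 1), ("load", Y, 0)],      # thread 0 (register 0)
         [("store", Y, 1), ("load", X, 1)]]      # thread 1 (register 1)

def explore(tso):
    """Enumerate all reachable final register valuations, under TSO
    (per-thread FIFO store buffers) or under plain SC."""
    init = ((0, 0), ((), ()), (0, 0), (None, None))  # mem, bufs, pcs, regs
    seen, frontier, outcomes = {init}, deque([init]), set()
    while frontier:
        mem, bufs, pcs, regs = frontier.popleft()
        if all(pcs[t] == len(PROGS[t]) for t in (0, 1)) and not any(bufs):
            outcomes.add(regs)                   # fully executed and flushed
        succs = []
        for t in (0, 1):
            if bufs[t]:                          # commit oldest buffered store
                var, val = bufs[t][0]
                mem2 = tuple(val if i == var else m for i, m in enumerate(mem))
                bufs2 = tuple(b[1:] if i == t else b for i, b in enumerate(bufs))
                succs.append((mem2, bufs2, pcs, regs))
            if pcs[t] < len(PROGS[t]):
                op, var, arg = PROGS[t][pcs[t]]
                pcs2 = tuple(p + 1 if i == t else p for i, p in enumerate(pcs))
                if op == "store" and tso:        # delay the store in the buffer
                    bufs2 = tuple(b + ((var, arg),) if i == t else b
                                  for i, b in enumerate(bufs))
                    succs.append((mem, bufs2, pcs2, regs))
                elif op == "store":              # SC: store hits memory at once
                    mem2 = tuple(arg if i == var else m for i, m in enumerate(mem))
                    succs.append((mem2, bufs, pcs2, regs))
                else:                            # load: own buffer first, then memory
                    pending = [v for w, v in bufs[t] if w == var]
                    val = pending[-1] if pending else mem[var]
                    regs2 = tuple(val if i == arg else r for i, r in enumerate(regs))
                    succs.append((mem, bufs, pcs2, regs2))
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return outcomes

print(sorted(explore(tso=False)))   # SC:  [(0, 1), (1, 0), (1, 1)]
print(sorted(explore(tso=True)))    # TSO: adds the relaxed outcome (0, 0)
```

A symbolic approach in the spirit of the thesis would replace the concrete buffer tuples with finite automata recognizing sets of buffer contents, keeping the exploration finite even when the buffers are unbounded.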

The common carp is one of the most important freshwater fish species in aquaculture, and its colourful subspecies koi is grown for personal pleasure and competitive exhibitions. Both subspecies are economically important. In the late 1990s, a highly contagious and lethal pathogen called koi herpesvirus (KHV) or cyprinid herpesvirus 3 (CyHV-3) began to cause severe financial losses in these two carp industries worldwide. In 2005, CyHV-3 was classified in the family Alloherpesviridae of the order Herpesvirales. Because of its economic importance and its numerous original biological properties, CyHV-3 rapidly became an attractive subject for applied and fundamental research. However, to date, there is little information on the roles of individual CyHV-3 genes in the biology of CyHV-3 infection or its pathogenesis. Moreover, there is a lack of safe and efficacious vaccines for the control of CyHV-3 disease. The goal of this thesis was to study the roles of CyHV-3 ORF134, encoding an IL-10 homologue, in the biology of the infection. CyHV-3 ORF134 has been predicted to contain an 84 bp intron flanked by 2 exons encoding together a 179 amino acid product. Transcriptomic analyses revealed that ORF134 is expressed as a spliced early-late gene. The identification of the CyHV-3 secretome was achieved using a 2D-LC MS/MS proteomic approach. This method led to the identification of 5 viral and 46 cellular proteins in concentrated infected cell culture supernatant. CyHV-3 ORF12 and ORF134 were amongst the most abundant proteins detected. To investigate the roles of ORF134 in the biology of the infection, a strain deleted for ORF134 and a derived revertant strain were produced using BAC cloning and prokaryotic recombination technologies. Comparison of these strains demonstrated that CyHV-3 ORF134 does not contribute significantly to viral growth in vitro or to virulence in vivo in the present laboratory setting.
The present study addressed for the first time the in vivo role of a vIL-10 encoded by a member of the family Alloherpesviridae. This study has been published in Veterinary Research. During the course of the first study, we obtained an unexpected recombination event while reconstituting infectious virus from mutated BAC plasmids. To generate a revertant ORF134 Del galK strain, CCB cells were co-transfected with the FL BAC ORF134 Del galK plasmid and the pGEMT-TK vector to remove the BAC cassette inserted in the ORF55 locus (encoding thymidine kinase). One of the clones obtained carried an unexpected recombination leading to the deletion of ORF56 and ORF57 in addition to the expected deletion of ORF134. Unexpectedly, this triple deleted strain replicated efficiently in vitro, exhibited an attenuated phenotype in vivo, and was shown to confer, in a dose-dependent manner, immune protection against a lethal challenge. The goal of the second experimental chapter was to investigate the role of the ORF56-57 and ORF134 deletions in the observed safety/efficacy profile of the triple deleted recombinant. To reach this goal, a collection of recombinant strains was produced using BAC cloning technologies, characterized, and tested in vivo for their safety/efficacy profile. The results obtained demonstrated that the ORF56-57 deletion is responsible for the phenotype observed and that the ORF134 deletion does not contribute to this phenotype significantly. Finally, the immune protection conferred by the ORF56-57 deleted recombinant was investigated by challenging immunized fish with a wild type strain expressing luciferase as a reporter gene. In vivo imaging system (IVIS) analyses of immunized and challenged fish demonstrated that the immune response induced by the ORF56-57 deleted strain was able to prevent subclinical infection by the challenge strain. In conclusion, the present thesis addressed both fundamental and applied aspects of CyHV-3.
For the first time, it investigated in vivo the roles of a viral IL-10 homologue encoded by a member of the family Alloherpesviridae. Importantly, it identified the ORF56-57 loci as a target for the production of safe and efficacious attenuated recombinant vaccines.

Cranial cruciate ligament rupture (CCLR) is the leading cause of osteoarthrosis in dogs. A recent publication revealed that the economic impact of CCLR repair in the United States was more than one billion dollars. Although CCLR is the most frequent cause of hindlimb lameness presented in referral practices, no technique has been shown to be superior to another. The number of publications on the topic, no less than 160 papers during the last 6 years in 5 of the most important journals in veterinary surgery, reveals the lack of consensus in this field. Intracapsular techniques and physiotherapy are the most common methods used in human medicine to treat a CCLR, but these techniques are unsatisfactory in veterinary medicine. The extracapsular techniques used for decades are progressively being replaced by dynamic stabilisation techniques, of which TPLO and TTA are the most common. The latter technique is an adaptation of a procedure used in humans and first described in 1976 by P. Maquet. Although the technique does not necessitate any implant in humans, the surgery described in dogs uses many implants in order to stabilise the advanced tibial crest. The positioning of these implants necessitates an invasive approach to the medial face of the tibia. The goal of this work is to progressively adapt the TTA originally described in dogs in order to make it as simple as the procedure performed in human medicine. This simplification is in agreement with the current trend toward minimally invasive surgery. Referring to its inventor in human medicine, we named our procedure in dogs the modified Maquet technique (MMT). In our first study, the osteotomy described for the TTA was modified in order to create a "cortical hinge" at the most distal part of the tibial crest.
Following this new osteotomy, the resistance to traction of the tibial tuberosity was evaluated in 3 scenarios: 1) intact cortical hinge with a figure-of-8 wire cerclage to maintain the tibial crest, 2) intact cortical hinge without any means of maintaining the tibial crest, and 3) accidentally broken cortical hinge with a figure-of-8 wire cerclage to maintain the crest. This biomechanical study showed that when the cortical hinge was intact, the wire cerclage added little to the resistance of the construct, and the tibial crest withstood tensions above the forces encountered within the stifle of a dog at a walk. However, when the cortical hinge was broken, resistance to traction was significantly lower compared to the other two groups. In a second study, we applied our technique to 20 dogs presented with CCLR. Despite the absence of a force plate to objectively evaluate the recovery, the MMT showed results that were encouraging and subjectively similar to those of other techniques of dynamic stabilisation. The mean healing time was less than 7 weeks. No major complication was experienced during the study. A prospective study with a force plate would allow our results to be compared with those already published on TTA. Thanks to our ex vivo as well as clinical experience, we realised that the osteotomy, as we described it in order to create the cortical hinge, was not ideal. Indeed, the hole at the distal end of the osteotomy, supposed to prevent the propagation of a potential fissure during advancement of the crest, not only failed to consistently prevent the appearance of a fissure but was frequently the location of the fracture of the crest. Instead of concentrating all the stress in the bone at the level of the cortical hinge, we designed a longer incision, parallel to the cortex distally, in order to decrease the stress. This third study allowed us to study this new osteotomy in depth.
The advancement was proportional to the bodyweight, to the angular deformation and to the width of the cortical hinge. Thanks to this new osteotomy, for every dog the permitted advancement was well above the usual clinical requirement. Besides, resistance to traction of the tibial tuberosity was superior to that of our previous osteotomy. The fourth study aimed to simplify the MMT further and reach our goal of performing the MMT as simply as the Maquet procedure is performed in human medicine. After a monotonic biomechanical study testing different materials (Kyon titanium cage, porous titanium block, or biphasic synthetic bone (BSB) of different porosities), the ones withstanding the forces encountered in vivo were submitted to cyclic testing to evaluate their subsidence within the gap of the osteotomy without any means of fixation. The porous titanium block and the 60% porosity biphasic synthetic bone block were tested cyclically. Over the 200,000 cycles, neither of them subsided, showing that friction alone was sufficient to maintain the block within the gap. Moreover, the block of BSB withstood the 200,000 cycles. Our intensive work on tibial tuberosity advancement led us to observe an underestimation of the required advancement with the current template. Although this has never been mentioned in the literature, taking this underestimation into account is mandatory to perform the surgery correctly. We first studied the intra- and inter-observer variability of the patellar tendon angle measurement, which indirectly reflects the measurement of the advancement. Then, in a second paper, we quantified the underestimation and provided tables to correct for it.

The topic of object recognition is a central challenge of computer vision. In addition to being studied as a scientific problem in its own right, it also has many direct practical applications. We specifically consider robotic applications involving the manipulation and grasping of everyday objects, in the typical situations that would be encountered by personal service robots. Visual object recognition, in the large sense, is then paramount to provide a robot with the sensing capabilities required for scene understanding, the localization of objects of interest and the planning of actions such as grasping those objects. This thesis presents a number of methods that tackle the related tasks of object detection, localization, recognition, and pose estimation in 2D images, of both specific objects and object categories. We aim at providing techniques that are as generally applicable as possible, by considering those different tasks as different sides of the same problem, and by not focusing on a specific type of image information or image features. We first address the use of 3D models of objects for continuous pose estimation. We represent an object by a constellation of points, corresponding to potentially observable features, which serve to define a continuous probability distribution of such features in 3D. This distribution can be projected onto the image plane, and the task of pose estimation is then to maximize its "match" with the test image. Applied to the use of edge segments as observable features, the method is capable of localizing and estimating the pose of non-textured objects, while the probabilistic formulation offers an elegant way of dealing with uncertainty in the definition of the models, which can be learned from observations, as opposed to being available as hand-made CAD models.
We also propose a method, framed in a similar probabilistic formulation, to obtain, or reconstruct, such 3D models from multiple calibrated views of the object of interest. A larger part of this thesis then focuses on exemplar-based recognition methods, which directly use 2D example images for training, without any explicit 3D information. The appearance of objects is again defined through probability distributions of observable features, defined in a nonparametric manner through kernel density estimation, using image features from multiple training examples as supporting particles. The task of object localization is cast as the cross-correlation of the feature distributions of the model and of the test image, which we efficiently solve through a voting-based algorithm. We then propose several techniques to perform continuous pose estimation, yielding a precision well beyond a mere classification among the discrete, trained viewpoints. One of the proposed methods in this regard consists of a generative model of appearance, capable of interpolating the appearance of learned objects (or object categories), which then allows optimizing explicitly for the pose of the object in the test image. Our model of appearance, initially defined in general terms, is applied to the use of edge segments and of intensity gradients as image features. We are particularly interested in the use of gradients extracted at a coarse scale and defined densely across images, as they can effectively represent shape by capturing the shading on smooth non-textured surfaces. This allows handling some cases, common in robotic applications, of objects of primitive shapes with little texture and few discriminative details, which are challenging to recognize with most existing methods. The proposed contributions, which all integrate seamlessly into a same coherent framework, proved successful on a number of tasks and datasets.
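The general idea of such voting-based localization can be illustrated with a minimal sketch (our own toy construction, not the thesis implementation): each model feature stores a descriptor together with its offset to the object centre, and every matching test feature casts a vote for a candidate centre; the most-voted location gives the localization.

```python
from collections import Counter

def localize(model, test_features, match_thresh=0.1):
    """model: list of (descriptor, (dx, dy) offset to the object centre).
    test_features: list of (descriptor, (x, y) position in the image).
    Returns the candidate centre that collects the most votes."""
    votes = Counter()
    for d_t, (x, y) in test_features:
        for d_m, (dx, dy) in model:
            if abs(d_t - d_m) < match_thresh:   # crude descriptor match
                votes[(x + dx, y + dy)] += 1    # vote for a candidate centre
    return votes.most_common(1)[0][0] if votes else None

# Toy example: three features of an object whose centre is at (12, 7).
model = [(0.2, (-3, 0)), (0.8, (3, 0)), (0.5, (0, -2))]
test = [(0.2, (15, 7)), (0.8, (9, 7)), (0.5, (12, 9))]
print(localize(model, test))  # -> (12, 7)
```

A real system would use multidimensional descriptors and a smoothed vote accumulator (the kernel density view mentioned above); the discrete Counter here only conveys the voting principle.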
Most interestingly, on well-studied tasks of localization in clutter and pose estimation, we obtain performance well above baseline methods, often on par with or superior to state-of-the-art methods individually designed for each of those specific tasks, whereas the proposed framework applies in the same way to a wide range of problems.

In this thesis we present an original ab initio study of the evolution of antiferrodistortive (AFD), anti-polar electric (APE), and ferroelectric (FE) instabilities in various ABO3 oxides of perovskite structure, as well as of their structural and dynamical properties. The main goal is to better understand the microscopic origin of the antiferroelectricity exhibited by these compounds. Three prototypical compounds are studied in detail: PbZrO3, NaNbO3, and SrZrO3. After a general introduction to ABO3 compounds and to ab initio techniques, we review the concept of antiferroelectricity in perovskites, highlighting some ambiguities in the usual definition and the necessity of turning to what we call a modern definition of antiferroelectricity. First, we highlight that it is the rigidity of the oxygen cage that tends to favor the FE distortion over the APE instability. Although illustrated on BaTiO3, this argument is general, and is confirmed by inspection of the phonon dispersion curves of the ABO3 compounds, in which the strongest instability of the FE/APE branch is systematically at Γ. We show that the emergence of a stable or metastable APE distortion appears naturally through a coupling with other instabilities. The presence of AFD modes turns out to be a concrete way to create mixed FE/AFD and APE/AFD phases, crucial for the emergence of antiferroelectricity (AFE). This clarifies why the known AFE compounds systematically include AFD distortions. In this context, since the FE, APE and AFD instabilities are usually in competition, the coexistence of FE, APE and AFD instabilities of strong amplitude seems required to create mixed phases combining them. This establishes a context conducive to the development of FE and AFE metastable phases close in energy. Another important element concerns the need for a first-order AFE-FE transition under electric field, producing the double hysteresis loop typical of AFE compounds.
Here too, the AFD modes could play a key role by allowing the emergence of FE/AFD and APE/AFD phases that are close in energy and develop distinct tilt patterns. These various elements give a new perspective on AFE and allow us to form a more precise idea of the origin of the AFE behavior in perovskites. We identify some key intrinsic characteristics allowing the prediction of materials with a propensity to develop an AFE behavior.

In the past few decades, various hybrid nano-vehicles have been developed as new drug delivery systems (DDS), in which inorganic and organic components are integrated within a nano-object. An ideal DDS should satisfy conflicting requirements: high stability in extracellular fluid, so that it maintains its integrity during in vivo circulation, yet lability upon activation by internal or external stimuli after targeting the disease sites, allowing the triggered release of therapeutic agents. The aim of this thesis was to build different hybrid nano-vehicles, explore the possibility of manipulating their release behaviors, and evaluate their potential biomedical applications. The first part presents an original work on reversibly crosslinked nanogels based on poly(vinyl alcohol)-b-poly(N-vinylcaprolactam) copolymers. The second part is devoted to stimuli-responsive hybrid nano-vehicles, composed of inorganic cores, e.g. maghemite nanoparticles or gold nanorods, and a stimuli-responsive polymer corona, e.g. poly(vinyl alcohol)-b-poly(acrylic acid) or poly(ethylene glycol)-b-poly(N-vinylcaprolactam). The third part focuses on core-shell nanoparticles made of a maghemite core and a mesoporous silica shell, in which phase-change molecules, e.g. 1-tetradecanol with a melting temperature of 39 °C, were introduced as gatekeepers to regulate the release behaviors. These different nanostructures were developed as DDS to accommodate cargo molecules, and the triggered cargo release upon variation in pH or temperature, activation by a reducing agent, or the presence of glucose was explored. Moreover, remote stimuli, e.g. an alternating magnetic field or near-infrared light, were also applied to trigger the release. Studies on cytotoxicity, cellular uptake and in vitro triggered release in cell culture are also described.

Until recently, breeding values were estimated based on phenotypes measured on the individual and its relatives, and on the notion that the covariance between breeding values is proportional to the kinship coefficient. Advances in genomics now allow for direct analysis of the genome and identification of the loci that determine the breeding values of individuals. As a consequence, marker-assisted selection and genomic selection have become more effective and are replacing conventional selection. The identification of loci influencing the traits of interest requires the use of advanced statistical methods that are constantly evolving. In the context of this thesis, we have (i) contributed to the development of gene mapping methods, (ii) applied these methods to map loci influencing both metric and meristic traits, and (iii) contributed to the development of methods for the integration of genomic information in livestock breeding and management. The mapping methods that we have helped develop distinguish themselves mainly by the facts that (i) they exploit haplotype information (by means of a hidden Markov model), which should increase the linkage disequilibrium with causative variants and hence detection power, (ii) they can simultaneously extract linkage information within families and linkage disequilibrium information across the population, (iii) they correct for population stratification by means of a random polygenic effect, and (iv) they can be applied to binary as well as quantitative traits. We have applied these and other methods to map loci influencing (i) quantitative hematological parameters in a porcine line-cross, and (ii) binary traits, including diseases in cattle and non-syntenic copy number variants in cattle, horses and humans. Finally, we have contributed to the development of methods for the utilization of marker information in animal selection and production.
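The link between kinship and genomic data mentioned above can be sketched numerically. The snippet below (an illustrative sketch using the widely known VanRaden formulation, not code from the thesis) builds a genomic relationship matrix G = ZZ'/(2 Σ p(1−p)) from allele counts, the marker-based analogue of pedigree kinship:

```python
def genomic_relationship(genotypes, freqs):
    """genotypes: list of n individuals, each a list of m allele counts (0/1/2).
    freqs: allele frequency of each of the m markers.
    Returns the n x n genomic relationship matrix G = ZZ' / (2*sum p(1-p))."""
    n, m = len(genotypes), len(freqs)
    denom = 2.0 * sum(p * (1.0 - p) for p in freqs)
    # Centre genotypes by twice the allele frequency (Z = M - 2P)
    Z = [[genotypes[i][k] - 2.0 * freqs[k] for k in range(m)] for i in range(n)]
    return [[sum(Z[i][k] * Z[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]

# Two individuals with opposite genotypes at three markers of frequency 0.5
G = genomic_relationship([[0, 1, 2], [2, 1, 0]], [0.5, 0.5, 0.5])
print(G[0][0], G[0][1])  # ~ 1.333 and ~ -1.333
```

In genomic selection, G replaces the pedigree-based numerator relationship matrix in the mixed-model equations used to predict breeding values.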
We have extended the haplotype-based mapping method to allow imputation and have evaluated the utility of this approach in scenarios mimicking reality. We have also contributed to the development of a method to quantify somatic cell counts in the milk of individual cows by genotyping a sample of milk from the farm's tank (hence a mixture of milk from all cows on the farm). Our work has resulted in the development of a software package ("GLASCOW") that is increasingly used by the community to map genes influencing complex traits, primarily binary ones. Using this tool, we have contributed to the localization of several trait loci in pig, cattle, horse and human. We have contributed to the development of approaches that reduce the costs of genomic analyses in livestock by, on the one hand, complementing real SNP genotypes with genotypes obtained in silico by means of imputation and, on the other hand, developing a method to deconvolute genotypes obtained on DNA pools.

Understanding the mechanisms underlying common diseases is one of the major goals of current research in medicine. As most of these disorders are linked to genetic factors, identification of the associated variants forms an excellent strategy towards the elucidation of molecular and cellular dysfunctions, and could ultimately lead to better personalised diagnostics and treatments. Genome-Wide Association Studies (GWAS) aim to discover variants spread over the genome that could lead, in isolation or in combination, to a particular trait or an unfortunate phenotype such as a disease. The basic idea behind these studies is to statistically analyse the genetic differences between groups of healthy (controls) and diseased (cases) individuals. Advances in genetic marker technology indeed allow for dense genotyping of hundreds of thousands of Single Nucleotide Polymorphisms (SNPs) per individual. This makes it possible to characterise representative samples composed of several hundreds to several thousands of cases and controls, each one characterised by up to a million genetic markers sampling the genomic variations among these individuals. The standard approach to genome-wide association studies is based on univariate hypothesis tests. In this approach, each genetic marker is analysed in isolation from the others, in order to assess its potential association with the studied phenotype, in practice by the computation of so-called p-values based on some statistical assumptions about the data-generation mechanism. Because of the very high ratio between the large number of SNPs genotyped and the limited number of individuals, multiple-testing corrections need to be applied when carrying out these analyses, leading to reduced statistical power. While this standard approach has been at the basis of many novel loci unravelled in recent years for several complex diseases, it has several intrinsic limitations.
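The univariate procedure can be made concrete with a minimal sketch (our own illustration, with made-up allele counts) of a per-SNP allelic chi-square test combined with a Bonferroni-corrected significance threshold:

```python
import math

def allelic_chi2(case_counts, ctrl_counts):
    """Chi-square statistic of the 2x2 allele-count table for one SNP.
    *_counts = (minor allele count, major allele count)."""
    a, b = case_counts
    c, d = ctrl_counts
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def p_value_1df(chi2):
    """Survival function of the chi-square distribution with 1 df."""
    return math.erfc(math.sqrt(chi2 / 2.0))

n_snps = 500_000
alpha = 0.05 / n_snps                        # Bonferroni-corrected threshold
chi2 = allelic_chi2((350, 650), (150, 850))  # hypothetical SNP counts
p = p_value_1df(chi2)
print(p < alpha)  # -> True: this SNP survives multiple-testing correction
```

Dividing the nominal significance level by the number of tests is exactly what erodes power when n_snps is large, which motivates the multivariate alternatives discussed next.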
A first limitation is that this approach does not directly account for correlations among the explanatory variables. A second intrinsic limitation of GWAS is that they cannot account for genetic interactions, i.e. causal effects that are only observed when specific combinations of mutations and/or non-mutations are present at the same time. The third limitation of univariate approaches is that they do not directly allow assessment of the genetic risk, since many of the identified markers (with similarly small p-values) actually account for the same underlying causal factor: exploiting their information to predict the genetic risk is hence far from straightforward. Within bioinformatics, machine learning has become one of the major potential sources of progress. As a matter of fact, biology has nowadays become one of the main drivers of research in machine learning, which is by itself already a very competitive research field. Among the subfields of machine learning, supervised learning and its extensions, such as semi-supervised learning, stand out as the most mature and at the same time most rapidly evolving area of research. Within this context, the purpose of this thesis was to study the application of random forest types of methods to genome-wide association studies, with the twofold goal of (i) inferring predictive models able to assess disease risk and (ii) identifying causal mutations explaining the phenotype. The choice of this family of methods was originally motivated by the fact that they are a priori well suited to this kind of analysis thanks to some of their interesting properties. They are indeed able to deal efficiently with very large amounts of data without relying on strong assumptions about the underlying mechanisms linking genetic and environmental factors to phenotypes, and they can also provide interpretable information, in the form of scorings and/or rankings of SNPs, so as to help in the identification of causal genetic loci.
In the first part of this manuscript, we analyse the state of the art in the application field of genome-wide association studies and in supervised machine learning, and subsequently describe in detail the three tree-based ensemble methods that we have implemented and applied in our research. In Part II, we report our empirical investigations, in three successive steps, namely (i) a preliminary study on simulated datasets yielding controlled conditions with known ground truth and allowing for a first sanity check of the T-Trees methods in ideal conditions; (ii) a detailed study on a given real-life dataset concerning Crohn's disease, where we try to understand the main features of the three different algorithms in terms of predictive accuracy and capability of identifying relevant genetic information, as well as their sensitivity with respect to various kinds of quality control procedures and algorithmic parameters; (iii) a systematic replication study, where we confirm, on seven different datasets from the Wellcome Trust Case Control Consortium, the main outcomes of our study on Crohn's disease, while using default parameter settings.

This study focuses on an analysis of water resource management in cities of developing countries with very rapid rates of urbanization. The objectives are twofold: firstly, to highlight the factors which caused, and which still govern, poor water resources management in these cities, and secondly, to develop a coherent, sustainable urban water resources management strategy. The study was conducted in the Abiergué watershed in Yaoundé, Cameroon. The methodology used is based on holistic and participative approaches resulting from the combination of the principles of "Integrated Water Resources Management" (IWRM) and "Ecosystem and Human Health" (ECOHEALTH). Political, economic, social, environmental, historical and mesological factors have resulted in a water resources management system marked by many inadequacies and constraints. Poor access to drinking water and sanitation, recurrent flooding, endemic waterborne diseases, and water pollution are all permanent pressures that contribute to the impoverishment of households and hamper the development of the area in particular and of the town of Yaoundé in general. The poor water resources management in the Abiergué watershed is an indicator of an urban governance crisis. Three scenarios to improve this situation were developed: maintaining the status quo, destruction of buildings, and a scenario based on IWRM and ECOHEALTH. The optimal strategy integrates principles from IWRM and ECOHEALTH with the base scenario 3.4, focused on actors. Using a sequential process, it takes into account actions on water resources and health, as well as reforms of the legislative, regulatory and institutional framework. Many constraints, such as land tenure, corruption, limited financial resources and poor urban governance, may undermine the implementation of this strategy despite the potential of the watershed. All indicators suggest that water resources management in the Abiergué watershed is poor.
There is therefore an urgent need to implement this strategy within the framework of the ongoing IWRM process in Cameroon. "Time for Solutions", the theme of the 6th World Water Forum held in Marseille in 2012, is fully translated into action through this study.

Bacillus subtilis was used to inhibit mould growth during red sorghum malting. Improved conditions for achieving good malt properties were studied, and mathematical models are proposed for the induction and repression phases of α- and β-amylase synthesis. The problems associated with the hydrolysis of β-glucans and the effect of biocontrol steeping on β-glucanase activities are discussed. The effects of the biocontrol treatment, and of the phytohormones produced by the bacterial strain used, on the synthesis of specific red sorghum enzymes are elucidated. Gibberellic acid and abscisic acid diffusion and cross-talk as factors affecting the synthesis of red sorghum malt α- and β-amylase activities are also discussed. The production of indole-3-acetic acid (IAA) by B. subtilis S499 was a further focus of this study, and the conditions for improving its production were determined.

In Vietnam, the agrarian systems have evolved considerably during the socio-economic transformation period initiated in the late 1980s with the political reform (Doi Moi). In the region around the capital, where the process of industrialization, urbanization, and international integration has been accelerating, a number of questions about the sustainability of agrarian systems are arising. By diagnosing and analysing the dynamics of agrarian systems from 1980 to 2010, this study aims to provide decision-makers with sectorial and territorial policy options enabling the sustainable development of agriculture and rural society in the new socio-economic context. Combining historical, adaptive, and systemic approaches, this study shows that farmers in Hai Duong province adapted effectively to the socio-economic and institutional changes, notably by converting part of the rice land to other agricultural uses such as fish ponds, animal buildings, vegetable fields and fruit orchards. These rapid changes, however, do not go in the direction of improving the sustainability of agrarian systems. Farm holders now face many technical and economic contradictions, while land issues are no longer related only to agricultural purposes. Competing functions in land use, fragmentation of plots, the imperfection of the land market and rising property values are all emerging issues. The prospects for sustainability of agrarian systems are analysed under different scenarios, which highlight the complexity of the policy options. Recommendations are made not only for the agricultural sector in general, but also for the different agrarian systems of specific regions.

The family of Ets factors is one of the largest families of human transcription factors. This thesis aimed at molecularly and biologically defining these well-known though not well-understood transcription factors using a systems biology approach. The combination of interactomic and bioinformatics tools gave rise to the first interactome of the human Ets factors, built on more than 400 interactions with nearly 300 interaction partners. We fragmented our interactome into 24 functional, highly intraconnected sub-networks (clusters, or CL) and highlighted a new role for Ets transcription factors in mRNA processing (CL1). Steady-state levels of mRNAs result from the balance between transcription and mRNA decay, two events sitting at both ends of the mRNA life cycle. In the traditional and still prominent view, these events are spatially, functionally and temporally independent. Here, we showed that the Erg subfamily of Ets factors (composed of the three members ERG, FLI1, and FEV) regulates the mRNA decay of specific targets via promoter-mediated mRNA imprinting with sequence-specific RNA-binding proteins (CL1) and the CCR4-NOT deadenylation complex. This constitutes the first evidence, regardless of the organism, that DNA-binding transcription factors recruit RNA-binding proteins and mRNA decay components to control the degradation of specific mRNAs. This is also the most detailed demonstration of a functional coupling between the transcription and mRNA decay machineries in humans. We showed that ERG promotes the mRNA decay of key mitotic regulators, among which are the Aurora kinases Aurora A and B. Depletion of ERG prevented degradation of Aurora A and B mRNAs during mitosis, leading to aberrant levels of Aurora proteins, accumulation of centrosome and mitotic spindle defects, and ultimately mitotic blockage.
This constitutes a significant advance in the understanding of mitosis progression, which was until now thought to be exclusively regulated by post-translational modifications and proteasomal degradation of proteins. Our results show that, contrary to bulk mRNAs, whose decay is inhibited during mitosis, the degradation of specific transcripts is a prerequisite for normal mitosis progression.

Positioning is a fundamental issue in mobile robot applications, and it can be achieved in multiple ways. Among these methods, triangulation based on angle measurements is widely used, robust, and flexible. In this thesis, we present an original beacon-based angle measurement system, an original triangulation algorithm, and a calibration method, which are parts of an absolute robot positioning system in the 2D plane. We also develop a theoretical model, useful for evaluating the performance of our system. In the first part, we present the hardware system, named BeAMS, which introduces several innovations. A simple infrared receiver is the main sensor for the angle measurements, and the beacons are common infrared LEDs emitting an on-off keying signal containing the beacon ID. Furthermore, the system does not require an additional synchronization channel between the beacons and the robot. BeAMS introduces a new mechanism to measure angles: it detects a beacon when it enters and leaves an angular window. This allows the sensor to analyze the temporal evolution of the received signal inside the angular window; in our case, this feature is used to code the beacon ID. We then provide a theoretical framework for a thorough performance analysis of BeAMS, establishing the upper bound of the variance of the angle measurements and its exact evolution as a function of the angular window. Finally, we validate our theory by means of simulated and experimental results. The second part of the thesis is concerned with triangulation algorithms. Most triangulation algorithms proposed so far have major limitations: for example, some of them need a particular beacon ordering, have blind spots, or only work within the triangle defined by the three beacons. More reliable methods exist, but they have a higher complexity or require certain spatial arrangements to be handled separately.
Therefore, we have designed our own triangulation algorithm, named ToTal, that natively works in the whole plane and for any beacon ordering. We also provide a comprehensive comparison with other algorithms, and benchmarks show that our algorithm is faster and simpler than similar algorithms. In addition to its inherent efficiency, our algorithm provides a useful and unique reliability measure, assessable anywhere in the plane, which can be used to identify pathological cases, or as a validation gate in data fusion algorithms. Finally, in the last part, we concentrate on the biases that affect the angle measurements. We show that there are four sources of errors (or biases) resulting in inaccuracies in the computed positions. We then establish a model of these errors and propose a complete calibration procedure to reduce the final bias. Based on the results obtained with our calibration setup, the angular RMS error of BeAMS has been evaluated at 0.4 deg without calibration, and at 0.27 deg after the calibration procedure. Even without calibration, BeAMS performs better than other prototypes found in the literature, and, once calibrated, BeAMS is close to state-of-the-art commercial systems.
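For illustration, a compact sketch of a three-beacon resection in the spirit of ToTal is given below (reconstructed from the published description; variable names and the test configuration are ours, not the thesis code). It computes the robot position from three bearing angles using cotangent differences, so only the differences between the measured angles matter and the robot's heading cancels out:

```python
import math

def total_triangulation(b1, b2, b3, a1, a2, a3):
    """Position of the robot from angles a1, a2, a3 (rad, in the robot frame)
    measured towards beacons b1, b2, b3 of known (x, y) coordinates."""
    T12 = 1.0 / math.tan(a2 - a1)          # cot of the angle differences
    T23 = 1.0 / math.tan(a3 - a2)
    T31 = (1.0 - T12 * T23) / (T12 + T23)
    x1, y1 = b1[0] - b2[0], b1[1] - b2[1]  # coordinates relative to beacon 2
    x3, y3 = b3[0] - b2[0], b3[1] - b2[1]
    x12, y12 = x1 + T12 * y1, y1 - T12 * x1
    x23, y23 = x3 - T23 * y3, y3 + T23 * x3
    x31 = (x3 + x1) + T31 * (y3 - y1)
    y31 = (y3 + y1) - T31 * (x3 - x1)
    k31 = x1 * x3 + y1 * y3 + T31 * (x1 * y3 - x3 * y1)
    D = (x12 - x23) * (y23 - y31) - (y12 - y23) * (x23 - x31)
    return (b2[0] + k31 * (y12 - y23) / D,
            b2[1] + k31 * (x23 - x12) / D)

# Sanity check: robot at (5, 3), bearings computed from the ground truth.
b1, b2, b3 = (0.0, 0.0), (10.0, 0.0), (5.0, 10.0)
rx, ry = 5.0, 3.0
angles = [math.atan2(by - ry, bx - rx) for bx, by in (b1, b2, b3)]
print(total_triangulation(b1, b2, b3, *angles))  # ~ (5.0, 3.0)
```

The denominator D plays the role of the reliability measure mentioned above: it tends to zero when the robot approaches the circle through the three beacons, where triangulation degenerates.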

The use of supercritical CO2 (scCO2) as an alternative to traditional organic solvents and the valorization of biomass are interesting approaches to reduce the ecological footprint of chemical processes. On the other hand, emulsions offer many advantages over bulk and solution processes for polymerization reactions, including limited environmental impact, ease of product recovery and increased reaction rates. In this context, this thesis aims to design novel fluorinated sugar-based surfactants able to stabilize water/CO2 (W/C) emulsion systems and to explore their potential as templates for polymerization reactions. Such surface-active agents were prepared either by lipase-catalyzed esterification of mannose with fluorinated acid derivatives or following chemoenzymatic approaches involving very efficient and versatile "click" chemistries such as the thiol-Michael addition or the thiol-ene/-yne reactions. The W/C interfacial activity of these novel glycosurfactants was confirmed by tensiometry, as was their ability to form stable W/C microemulsions. We then tested a range of these neutral fluorinated carbohydrate esters as stabilizers for CO2-in-water (C/W) emulsion polymerization. In particular, the radical polymerization of acrylamide was performed in the continuous aqueous phase of a C/W high internal phase emulsion (HIPE), leading to highly interconnected macroporous polymer matrices, also called polyHIPEs. In this case, we demonstrated a clear dependence of the morphology of the porous structure on the concentration and the structure of the glycosurfactant. Thanks to the electrical neutrality of these fluorinated glycosurfactants, which confers on them a lower sensitivity to ionic strength compared to their ionic counterparts, we could extend this system to the polymerization of ionic liquid monomers.
Porous poly(ionic liquid)s were thus formed by emulsion polymerization for the first time and exhibit spherical cells interconnected by pores with size (~ 1 μm) among the lowest reported for polyHIPEs produced from C/W emulsions. The emulsion C/W templating methodology based on the designed fluorinated glycosurfactants thus appears as a technique of choice for the preparation of valuable macroporous polymer matrices. [less ▲]

In this thesis, several aspects of beyond-Standard-Model (SM) physics have been treated. In particular, two categories of models have been considered: ($i$) models with extra Higgs ElectroWeak (EW) doublets (multi-Higgs-doublet models), and ($ii$) models with new fermion EW singlets (type-I seesaw models). In the first category, two problems associated with the most general two-Higgs-Doublet Model (2HDM) and with the three-Higgs-Doublet Model (3HDM) have been tackled. In the former case, the scalar mass spectrum has been derived in a basis-invariant fashion, whereas in the latter, after introducing a general procedure for the minimization of highly symmetric potentials, the minimization of an $S_4$ and of an $A_4$ 3HDM has been analyzed. In the second category, the possibility of envisaging seesaw-like models yielding sizeable lepton-flavor-violating decay rates has been investigated. With the models at hand, the corresponding charged-lepton-flavor-violating phenomenology has been studied, focusing on rare muon decays, for which forthcoming lepton-flavor-violating experiments will be able to probe large parts of the parameter space.

The numerical simulation of wave-like phenomena occurring in large or unbounded domains is a great challenge for a wide range of technological and scientific problems. A classical approach consists in considering only a limited computational domain with an artificial boundary that requires a specific treatment. In this thesis, \textit{absorbing layers} are developed and studied for time-dependent problems in order to deal with such artificial boundaries. A large part of this thesis is dedicated to the \textit{perfectly matched layers} (PMLs), which exhibit appealing properties. They are first studied in a fundamental case with non-dispersive linear scalar waves. A procedure for building PMLs is proposed for convex domains with regular boundary; it permits great flexibility in the choice of the shape of the computational domain. Next, the issue of choosing the PML parameters is addressed with the aim of optimizing the PML effectiveness in discrete contexts. The role of each parameter, including the so-called \textit{absorption function}, is highlighted by means of analytical and numerical results. A systematic comparison of different kinds of absorption functions is performed for several classical numerical schemes (based on finite differences, finite volumes or finite elements). Then, since the PMLs do not a priori account for incoming signals generated outside the computational domain, different problem formulations that account for such forcing are detailed and compared. The interest of the whole approach is finally illustrated with two- and three-dimensional numerical examples in electromagnetism and acoustics, using a discontinuous finite element scheme. In regional oceanic models, modeling open-sea boundaries brings new difficulties: additional linear and nonlinear dynamics are involved, and the external forcing is generally poorly known.
In this context, different absorbing layers and the widely used Flather boundary condition are compared by means of classical benchmarks. The choice of the absorption function and the way of prescribing the external forcing are discussed in specific marine cases.
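
To make the role of the absorption function concrete, here is a minimal sketch on the simplest model problem: the 1D wave equation closed by a sponge-type absorbing layer with a quadratic absorption profile. This is an illustrative toy, not the PML formulation or the discrete schemes studied in the thesis; all parameter values (layer width, sigma_max, grid sizes) are arbitrary choices for the example.

```python
import math

def simulate(nx=400, nt=2000, c=1.0, layer=0.2, sigma_max=40.0):
    """1D wave equation u_tt = c^2 u_xx - sigma(x) u_t on [0, 1], closed by
    a sponge (absorbing) layer of the given width on the right.  sigma is a
    quadratic absorption function: zero in the interior, ramping up smoothly
    inside the layer.  Returns the residual max |u| left in the interior."""
    dx = 1.0 / (nx - 1)
    dt = 0.5 * dx / c                                  # CFL-stable time step
    xs = [i * dx for i in range(nx)]
    sigma = [sigma_max * ((x - (1 - layer)) / layer) ** 2 if x > 1 - layer else 0.0
             for x in xs]
    # Gaussian pulse with zero initial velocity: it splits and travels both ways
    u = [math.exp(-((x - 0.3) / 0.05) ** 2) for x in xs]
    u_prev = u[:]
    for _ in range(nt):
        u_next = [0.0] * nx                            # Dirichlet ends
        for i in range(1, nx - 1):
            lap = (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2
            # semi-implicit treatment of the damping term keeps the scheme stable
            u_next[i] = (2 * u[i] - (1 - 0.5 * sigma[i] * dt) * u_prev[i]
                         + (c * dt) ** 2 * lap) / (1 + 0.5 * sigma[i] * dt)
        u_prev, u = u, u_next
    return max(abs(v) for v in u[: int((1 - layer) * nx)])

small = simulate()                  # with absorption: little energy comes back
large = simulate(sigma_max=0.0)    # without absorption: the pulse bounces back
print(small, large)
```

Raising sigma_max or widening the layer reduces the residual only up to a point: too abrupt a profile creates numerical reflections at the layer interface, which is precisely the parameter trade-off mentioned above.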

Our thesis work focuses on collaborative 3D GIS and considers two main aspects governing their implementation: a conceptual framework providing an approach to designing these systems, and a technical framework addressing the main issues of integrating multiple data sources from different partners into a collaborative 3D database.

This experimental work deals with the trajectories of sports balls. These dense projectiles are launched in air at several hundred kilometers per hour. In this situation, ball trajectories depend on the fluid flow around them, which occurs at high Reynolds number (Re > 1000). The first effect we consider is fluid drag. This friction reduces the range and gives rise to trajectories very different from parabolas, being asymmetric about their apex. Such trajectories occur in badminton for high clears. Niccolò Tartaglia was the first to draw these curves, observing the trajectories of cannonballs. However, the air does not only limit the forward motion. When balls spin, the Robins-Magnus effect produces a lateral force and curves the trajectory. This is studied in the case of clearances in soccer. Lateral aerodynamic forces also exist when the ball has no spin: the turbulent behavior of the flow around a spherical particle produces lateral forces with a complex temporal dependency. This induces zigzag trajectories, which are occasionally observed in volleyball, soccer and baseball. We examine the conditions under which this phenomenon occurs. Then, the case of non-spherical balls is considered. Such balls are used in rugby, American football and badminton. Shuttlecocks have the property of flying nose first, which obliges them to flip after each racket impact. Finally, we study the motion of a fluid particle in the particular case of a Leidenfrost liquid ring. Such an object is created by approaching an annular magnet to a paramagnetic liquid oxygen drop. The closing dynamics of this non-wetting ring is described by means of a potential flow approach.
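
The asymmetry of drag-dominated trajectories can be reproduced by integrating Newton's equations with a quadratic drag term. The sketch below is illustrative only: the parameter values (launch speed, angle, and the aerodynamic length L = 2m/(rho Cd A)) are plausible shuttlecock-like numbers chosen for the example, not measurements from this work.

```python
import math

def simulate(v0=58.0, angle_deg=52.0, L=4.6, g=9.81, dt=1e-4, drag=True):
    """Projectile with quadratic air drag: acceleration = -|v| v / L - g e_y,
    where L = 2m / (rho Cd A) is the characteristic aerodynamic length.
    Returns (range, horizontal position of the apex)."""
    x = y = 0.0
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x_apex = 0.0
    while y >= 0.0:
        k = math.hypot(vx, vy) / L if drag else 0.0
        vx += -k * vx * dt
        vy += (-g - k * vy) * dt
        x += vx * dt
        y += vy * dt
        if vy > 0.0:
            x_apex = x           # last ascending position approximates the apex
    return x, x_apex

range_drag, apex_drag = simulate(drag=True)
range_vac, apex_vac = simulate(drag=False)
print(range_drag, range_vac)
print(apex_drag / range_drag, apex_vac / range_vac)
```

In the vacuum case the apex sits at exactly half the range (the parabola is symmetric); with drag the ratio rises well above one half, and the falling part of the curve becomes much steeper than the rising part, which is the Tartaglia-like asymmetry mentioned above.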

This work started as part of a Specific Targeted Research Project, ADONIS (FP-6 of the European Commission), whose aim was the development of active-targeting gold nanoparticles for optoacoustic imaging, from chemistry to biology. A biosensor composed of antibody-functionalized gold nanorods was established on a tumor model, in our case prostate cancer. Prostate cancer is a major public health problem in industrialized countries: it is the most frequent cancer and the second leading cause of death by cancer in men [1]. A major challenge in prostate cancer oncology is to develop more accurate, precise and less invasive tools for early-stage diagnosis, including more accurate imaging assessments than those currently available. An efficient imaging technique that significantly improves the sensitivity and specificity of the diagnosis and enables prediction of the cancer's behavior would be extremely valuable to oncologists. Briefly, the developed biosensor consists of a gold nanorod – designed to convert a primary optical excitation into a detectable acoustic signal – coupled with a monoclonal antibody that targets prostate cancer cells for specific recognition. Improved access to the target can be achieved by targeting the accessible extracellular domain of a membrane protein, here the Prostate-Specific Membrane Antigen (PSMA) [2]. PSMA is a transmembrane protein considered a suitable biomarker for prostate cancer [3], and it is under intense investigation for use as an imaging and therapeutic target. PSMA is highly expressed in prostate cancers and is also expressed in the tumor-associated neovasculature of most solid cancers [4]. Before the biological assessments, the cytotoxic surfactant, essential to form rod-shaped nanoparticles, is exchanged for a mixture of two functionalized polyethylene glycol (PEG) molecules: HS-PEG-OMe for nanoparticle passivation and HS-PEG-NH2 for subsequent coupling with the antibody.
Cytotoxicity assays are performed to establish the toxic threshold of the surfactant, in order to determine what CTAB concentration may be tolerable for the cells. This point is important during the removal of the surfactant, based on successive centrifugations, because the complete elimination of CTAB appears to be time-consuming or even routinely unfeasible. Once this threshold is established, the PEGylated GNRs can be assessed on cancer cells, which seems to be a common in vitro investigation. However, unexpected issues came up during the experiments and had to be considered because of the properties of the nanomaterial. Nevertheless, after the cytotoxicity assessment of the PEGylated nanoparticles, the binding of the biosensor to targeted cells was assessed by fluorescence and scanning electron microscopy, two straightforward and flexible techniques. The antibody coupled to the gold nanorod is specific to the human prostate carcinoma LNCaP cell line, reported to express PSMA, an accepted biomarker of this cell line [5]. Finally, in order to complete the study of the specific targeting of the biosensor, the antibody-coupled gold nanorods are injected into nude mice to evaluate their biodistribution and bioaccumulation, for which inductively coupled plasma mass spectrometry (ICP-MS) is the technique of choice. Preliminary optoacoustic imaging is the ultimate step in the state of the art of the developed biosensor. Despite the promising end results, particularly the biodistribution assays, new questions arise, increasingly discussed in publications, owing to the in vivo use of nanomaterials. Given their increasingly extensive use, their nanometer size and their more or less prolonged physiological contact, controlling the interaction of nanoparticles with biological systems has become a fundamental challenge of nanomedicine [6]. Therefore, the protein opsonization of the gold nanorods is an essential study and is accomplished via mass spectrometry analyses.

Oscillators---whose steady-state behavior is periodic rather than constant---are observed in every field of science. While they have long been studied as closed systems, they are increasingly regarded as open systems, that is, systems that interact with their environment. Because their functions involve interconnection, the relevance of input--output systems theory to model, analyze, and control oscillators is clear. Yet, owing to the nonlinear nature of oscillators, methodological tools to study their systems properties remain scarce. In particular, few studies focus on the interface between two fundamental descriptions of oscillators, namely the (internal) state-space representation and the (external) circle representation. Starting with the pioneering work of Arthur Winfree, the phase response curve of an oscillator has emerged as the fundamental input--output characteristic linking both descriptions. The present dissertation aims at studying the systems properties of oscillators through the properties of their phase response curve. The main contributions of this dissertation are the following. We distinguish between two fundamental classes of oscillators, which differ in the local destabilizing mechanism that transforms the stable equilibrium of a globally dissipative system into a periodic orbit. To address input--output systems questions in the space of response curves, we equip this space with suitable metrics and develop a (local) sensitivity analysis of infinitesimal phase response curves. This main contribution of the thesis is completed by the numerical tools required to turn the abstract developments into concrete algorithms. We illustrate how these analysis tools allow us to address pertinent systems questions about models of circadian rhythms (robustness analysis and system identification) and of neural oscillators (model classification). These two biological rhythms are representative of the two main classes of oscillators.
We also design elementary control strategies to assign the phase of an oscillator. Motivated by an inherent limitation of infinitesimal methods for relaxation-type oscillators, we develop the novel geometric concept of the ``singularly perturbed phase response curve'', which exploits the time-scale separation to predict the phase response to finite perturbations. In conclusion, the present dissertation investigates the input--output systems analysis of oscillators through their phase response curve, at the interface between their external and internal descriptions, developing theoretical and numerical tools to study models arising in the biology of cellular rhythms.
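
The notion of a phase response curve can be made concrete with the classical "direct method": kick the state at a known phase, let the trajectory relax back to the limit cycle, and read off the asymptotic phase shift. The sketch below applies it to a textbook normal-form oscillator with radial isochrons (not a model from the dissertation), for which the infinitesimal PRC to a horizontal kick is known analytically to be -sin(theta).

```python
import math

def flow(state, om=1.0):
    """Normal-form oscillator: unit-circle limit cycle, angular speed om,
    radial isochrons (the phase dynamics are independent of the amplitude)."""
    x, y = state
    r2 = x * x + y * y
    return (x - om * y - r2 * x, om * x + y - r2 * y)

def integrate(state, T, dt=1e-3):
    """Classical fixed-step RK4 up to time T."""
    t = 0.0
    while t < T:
        h = min(dt, T - t)
        x, y = state
        k1 = flow((x, y))
        k2 = flow((x + 0.5 * h * k1[0], y + 0.5 * h * k1[1]))
        k3 = flow((x + 0.5 * h * k2[0], y + 0.5 * h * k2[1]))
        k4 = flow((x + h * k3[0], y + h * k3[1]))
        state = (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
                 y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)
        t += h
    return state

def prc(theta, eps=1e-4, periods=8):
    """Direct method: kick x by eps at phase theta, integrate several periods,
    and compare the asymptotic phase with the unperturbed one (theta + T)."""
    T = periods * 2.0 * math.pi
    x, y = integrate((math.cos(theta) + eps, math.sin(theta)), T)
    shift = math.atan2(y, x) - (theta + T)
    shift = (shift + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return shift / eps

for th in (0.0, math.pi / 2, math.pi):
    print(th, prc(th))   # should track -sin(th)
```

In practice, infinitesimal PRCs are more often computed from the adjoint of the variational equations along the cycle; the brute-force version above is simply the most transparent rendering of the definition.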