Workplace performance of a loose-fitting powered air purifying respirator during nanoparticle synthesis

Nanoparticles (particles with diameter ≤100 nm) are recognized as a potentially harmful size fraction in pulmonary particle exposure. During nanoparticle synthesis, the number concentrations in the process room may exceed 10 × 10<sup>6</sup> cm<sup>−3</sup>. Under such conditions, it is essential that the occupants of the room wear highly reliable, high-performance respirators to prevent inhalation exposure. Here we have studied the in-use program protection factor (PPF) of loose-fitting powered air purifying respirators while workers were coating components with TiO<sub>2</sub> or Cu<sub>x</sub>O<sub>y</sub> nanoparticles under a hood using a liquid flame spray process. The PPF was measured using condensation particle counters, an electrical low pressure impactor, and diffusion chargers. The room particle concentrations varied from 4 × 10<sup>6</sup> to 40 × 10<sup>6</sup> cm<sup>−3</sup>, and the count median aerodynamic diameter ranged from 32 to 180 nm. Concentrations inside the respirator varied from 0.7 to 7.2 cm<sup>−3</sup>; on average, tidal breathing was assumed to increase the in-respirator concentration by 2.3 cm<sup>−3</sup>. The derived PPF exceeded 1.1 × 10<sup>6</sup>, which is more than 40 × 10<sup>3</sup> times the respirator's assigned protection factor. We were unable to measure clear differences in the PPF between respirators with old and new filters or among two male and one female user, or to assess the most penetrating particle size. This study shows that the loose-fitting powered air purifying respirator provides very efficient protection against nanoparticle inhalation exposure if used properly.
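As a numerical illustration of the quantities above: a protection factor of this kind is the ratio of the room concentration to the in-respirator concentration attributable to leakage. The sketch below uses a hypothetical function name and example values; only the 2.3 cm<sup>−3</sup> tidal-breathing correction and the reported concentration ranges come from the abstract.

```python
def program_protection_factor(c_room, c_mask, c_breathing=2.3):
    """Protection factor: room concentration divided by the in-respirator
    concentration attributable to leakage, i.e. the in-mask reading minus
    the assumed tidal-breathing contribution (all in cm^-3)."""
    c_leak = max(c_mask - c_breathing, 0.0)
    if c_leak == 0.0:
        return float("inf")  # no measurable leakage above the breathing background
    return c_room / c_leak

# Hypothetical values within the ranges reported above
print(program_protection_factor(c_room=10e6, c_mask=7.2))
```

With the in-mask reading at its reported maximum of 7.2 cm<sup>−3</sup> and a mid-range room concentration, the factor already exceeds 2 × 10<sup>6</sup>, consistent with the derived PPF exceeding 1.1 × 10<sup>6</sup>.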

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Physics, Research group: Aerosol Synthesis, National Research Centre for the Working Environment, Finnish Institute of Occupational Health, Helsinki University, TNO

The health of a software ecosystem is argued to be a key indicator of the well-being, longevity and performance of a network of companies. In this paper, we address what the scientific literature actually means by the concept of ‘ecosystem health’ by selecting relevant articles through a systematic literature review. Based on the final set of 38 papers, we found that, despite a common base, the term has been used to depict a wide range of hoped-for characteristics of a software ecosystem. The number of studies addressing the topic is shown to be growing, while empirical studies are still rare. Thus, further studies should aim to standardize the terminology and concepts in order to create a common base for future work. Further work is also needed to develop early indicators that warn and guide companies on problems with their ecosystems.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Pori Department, Research group: Business Ecosystems, Networks and Innovations, Managing digital industrial transformation (mDIT), VTT Technical Research Centre of Finland, University of Turku, University of Turku, Turku School of Economics, Department of Management and Entrepreneurship, Innovation and Knowledge Economy, VTT Technical Research Centre of Finland

UX work in startups: Current practices and future needs

Startups are creating innovative new products and services while seeking fast growth with limited resources. The capability to produce software products with a good user experience (UX) can help a startup gain positive attention and revenue. Practices and needs for UX design in startups are not well understood. Research can provide insight into how to design UX with limited resources, as well as identify gaps where better practices should be developed. In this paper we describe the results of an interview study with eight startups operating in Finland. Current UX practices, challenges and needs for the future were investigated. The results show that personal networks have a significant role in helping startups gain professional UX advice as well as user feedback when designing for UX. When scaling up, startups expect usage data and analytics to guide them towards better UX design.

Using the entity-attribute-value model for olap cube construction

When utilising multidimensional OLAP (On-Line Analytic Processing) analysis models in Business Intelligence analysis, it is common that the users need to add new, unanticipated dimensions to the OLAP cube. In a conventional implementation, this would imply frequent re-designs of the cube's dimensions. We present an alternative method for the addition of new dimensions. Interestingly, the same design method can also be used to import EAV (Entity-Attribute-Value) tables into a cube. EAV tables have earlier been used to represent extremely sparse data in applications such as biomedical databases. Though space-efficient, the EAV representation can be awkward to query. Our EAV-to-OLAP cube methodology has the advantage of managing many-to-many relationships in a natural manner. A simple theoretical analysis shows that the methodology is efficient in space consumption. We demonstrate the efficiency of our approach in terms of the speed of OLAP cube re-processing when importing EAV-style data, comparing the performance of our cube design method with the performance of the conventional cube design.
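As a minimal sketch of the EAV idea discussed above (table contents and names are hypothetical; the paper's actual cube-construction method is not reproduced here), EAV triples can be pivoted so that each distinct attribute becomes a candidate dimension:

```python
from collections import defaultdict

# Hypothetical EAV rows, e.g. from a sparse biomedical table:
# one (entity, attribute, value) triple per stored fact.
eav_rows = [
    ("patient1", "diagnosis", "A"),
    ("patient1", "age_group", "30-39"),
    ("patient2", "diagnosis", "B"),
]

def eav_to_records(rows):
    """Pivot EAV triples into per-entity records: each distinct attribute
    becomes a candidate dimension, so adding a new attribute requires no
    schema redesign (absent attributes simply stay missing)."""
    records = defaultdict(dict)
    for entity, attribute, value in rows:
        records[entity][attribute] = value
    return dict(records)

print(eav_to_records(eav_rows))
```

Note how the sparsity is preserved: `patient2` never stores an `age_group` entry, which is what makes the representation space-efficient.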

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Research Community on Data-to-Decision (D2D), Helsinki Institute of Physics, European Organization for Nuclear Research

Contributors: Thanisch, P., Niemi, T., Niinimaki, M., Nummenmaa, J.

Number of pages: 14

Pages: 59-72

Publication date: 2011

Host publication information

Title of host publication: Perspectives in Business Informatics Research - 10th International Conference, BIR 2011, Proceedings

Using building simulation to model the drying of flooded building archetypes

With a changing climate, London is expected to experience more frequent periods of intense rainfall and tidal surges, leading to an increase in the risk of flooding. This paper describes the simulation of the drying of flooded building archetypes representative of the London building stock using the EnergyPlus-based hygrothermal tool 'University College London-Heat and Moisture Transfer (UCL-HAMT)' in order to determine the relative drying rates of different built forms and envelope designs. Three different internal drying scenarios, representative of conditions where no professional remediation equipment is used, are simulated. A mould model is used to predict the duration of mould growth risk following a flood on the internal surfaces of the different building types. Heating the properties while keeping windows open dried the dwellings fastest, while purpose-built flats and buildings with insulated cavity walls were found to dry slowest.

Background: Urothelial pathogenesis is a complex process driven by an underlying network of interconnected genes. The identification of novel genomic target regions and gene targets that drive urothelial carcinogenesis is crucial in order to improve our currently limited understanding of urothelial cancer (UC) on the molecular level. The inference of genome-wide gene regulatory networks (GRN) from large-scale gene expression data provides a promising approach for a detailed investigation of the underlying network structure associated with urothelial carcinogenesis. Methods: In our study we inferred and compared three GRNs by the application of the BC3Net inference algorithm to large-scale transitional cell carcinoma gene expression data sets from Illumina RNAseq (179 samples), Illumina Bead arrays (165 samples) and Affymetrix Oligo microarrays (188 samples). We investigated the structural and functional properties of the GRNs for the identification of molecular targets associated with urothelial cancer. Results: We found that the urothelial cancer (UC) GRNs show a significant enrichment of subnetworks that are associated with known cancer hallmarks including cell cycle, immune response, signaling, differentiation and translation. Interestingly, the most prominent subnetworks of co-located genes were found on chromosome regions 5q31.3 (RNAseq), 8q24.3 (Oligo) and 1q23.3 (Bead), which all represent genomic regions known to be frequently deregulated or aberrant in urothelial cancer and other cancer types. Furthermore, the identified hub genes of the individual GRNs, e.g., HID1/DMC1 (tumor development), RNF17/TDRD4 (cancer antigen) and CYP4A11 (angiogenesis/metastasis), are known cancer-associated markers. The GRNs were highly dataset-specific on the interaction level between individual genes, but showed large similarities on the biological function level represented by subnetworks. Remarkably, the RNAseq UC GRN showed twice the proportion of significant functional subnetworks. Based on our analysis of inferential and experimental networks, the Bead UC GRN showed the lowest performance compared to the RNAseq and Oligo UC GRNs. Conclusion: To our knowledge, this is the first study investigating genome-scale UC GRNs. RNAseq-based gene expression data is the data platform of choice for GRN inference. Our study offers new avenues for the identification of novel putative diagnostic targets for subsequent studies in bladder tumors.

Two models for hydraulic cylinders in flexible multibody simulations

In modelling hydraulic cylinders, the interaction between the structural response and the hydraulic system needs to be taken into account. In this chapter, two approaches for modelling flexible multibody systems coupled with hydraulic actuators, i.e. cylinders, are presented and compared. These models are the truss-element-like cylinder and the bending flexible cylinder models. The bending flexible cylinder element is a super-element combining the geometrically exact Reissner beam element, the C1-continuous slide-spring element needed for the telescopic movement, and the hydraulic fluid field. Both models are embedded with a friction model based on a bristle approach. The models are implemented in a finite element environment. The coupled stiff differential equation system is integrated in time using the L-stable Rosenbrock method.

A new figure of merit for single transverse mode operation and an accurate procedure for calculating the coupling coefficient in distributed feedback lasers with laterally-coupled ridge waveguide surface grating structures are introduced. Based on the difference in optical confinement between the pumped and un-pumped regions in the transverse plane, the single transverse mode figure of merit is effective and easy to calculate, while the improved coupling coefficient calculation procedure gives experimentally confirmed better results than the standard calculation approaches.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Optoelectronics Research Centre, Research group: Semiconductor Technology and Applications

Contributors: Uusitalo, T., Virtanen, H., Dumitrescu, M.

Number of pages: 2

Pages: 79-80

Publication date: 17 Aug 2016

Host publication information

Title of host publication: 16th International Conference on Numerical Simulation of Optoelectronic Devices, NUSOD 2016

Towards usability heuristics for games utilizing speech recognition

Speech recognition technology has reached the maturity required by serious business applications, and the game industry is increasingly adopting the technology. Since usability is one of the key elements of enjoyability and, thus, the success of games, a thorough analysis of the elements, properties and effects of this new user interface is needed. However, there seem to be no existing usability analysis methods for speech interfaces in computer games. A pragmatic and rigorous framework that the game industry could easily adopt would help the utilization of speech recognition technology. In this paper, we discuss the usefulness of voice recognition in games and propose usability heuristics for games utilizing speech recognition.

Dataflow languages enable describing signal processing applications in a platform-independent fashion, which makes them attractive in today's multiprocessing era. RVC-CAL is a dynamic dataflow language that enables describing complex data-dependent programs such as video decoders. To date, design automation toolchains for RVC-CAL have enabled creating workstation software, dedicated hardware and embedded application-specific multiprocessor implementations out of RVC-CAL programs. However, no solution has been presented for executing RVC-CAL applications on generic embedded multiprocessing platforms. This paper presents a dataflow-based multiprocessor communication model, an architecture prototype that uses it and an automated toolchain for instantiating such a platform and the software for it. The complexity of the platform increases linearly as the number of processors is increased. The experiments in this paper use several instances of the proposed platform, with different numbers of processors. An MPEG-4 video decoder is mapped to the platform and executed on it. Benchmarks are performed on an FPGA board.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Signal Processing Research Community (SPRC), Dept. of Computer Science and Engineering, Univ of Oulu

Towards an approach for evaluating the quality of requirements

In engineering design, the needs of stakeholders are often captured and expressed in natural language (NL). While this facilitates such tasks as sharing information with nonspecialists, there are several associated problems including ambiguity, incompleteness, understandability, and testability. Traditionally, these issues were managed through tedious procedures such as reading requirements documents and looking for errors, but new approaches are being developed to assist designers in collecting, analysing, and clarifying requirements. The quality of the end product is strongly related to the clarity of requirements and, thus, requirements should be managed carefully. This paper proposes to combine diverse requirements quality measures found in the literature. These metrics are coherently integrated into a single software tool. This paper also proposes a new metric for clustering requirements based on their similarity, to increase the quality of the requirement model. The proposed methodology is tested on a case study, and the results show that the tool provides designers with insight into the quality of individual requirements as well as a holistic assessment of the entire set of requirements.
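The paper does not specify its similarity metric, but the idea of clustering requirements by similarity can be sketched with a simple word-set (Jaccard) similarity and greedy grouping; all names, data and thresholds below are illustrative assumptions:

```python
def jaccard(a, b):
    """Word-set (Jaccard) similarity between two requirement sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_requirements(requirements, threshold=0.4):
    """Greedy single-pass clustering: each requirement joins the first
    cluster whose representative is similar enough, otherwise it starts
    a new cluster. Threshold and metric are illustrative choices."""
    clusters = []
    for req in requirements:
        for c in clusters:
            if jaccard(req, c[0]) >= threshold:
                c.append(req)
                break
        else:
            clusters.append([req])
    return clusters

reqs = [
    "The system shall log every login attempt",
    "The system shall log every logout attempt",
    "Reports must be exported as PDF",
]
print(cluster_requirements(reqs))  # two clusters: logging vs. reporting
```

Grouping near-duplicate requirements like the first two makes redundancy and inconsistency visible, which is the quality benefit the clustering metric aims at.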

Software Quality Assurance is a complex and time-consuming task. In this study we observe how agile developers react to just-in-time metrics about the code smells they introduce, and how the metrics influence the quality of the output.

The increasing number of cores in Systems-on-Chip (SoC) has introduced challenges in software parallelization. As an answer to this, the dataflow programming model offers a concurrent, reusability-promoting approach for describing applications. In this work, a runtime for executing Dataflow Process Networks (DPN) on multicore platforms is proposed. The main difference between this work and existing methods is letting the operating system perform central processing unit (CPU) load-balancing freely, instead of limiting thread migration between processing cores through CPU affinity. The proposed runtime is benchmarked on desktop and server multicore platforms using five different applications from the video coding and telecommunication domains. The results show that the proposed method offers significant improvements over the state of the art in terms of performance and reliability.

Topological patterns for scalable representation and analysis of dataflow graphs

Tools for designing signal processing systems with their semantic foundation in dataflow modeling often use high-level graphical user interfaces (GUIs) or text based languages that allow specifying applications as directed graphs. Such graphical representations serve as an initial reference point for further analysis and optimizations that lead to platform-specific implementations. For large-scale applications, the underlying graphs often consist of smaller substructures that repeat multiple times. To enable more concise representation and direct analysis of such substructures in the context of high level DSP specification languages and design tools, we develop the modeling concept of topological patterns, and propose ways for supporting this concept in a high-level language. We augment the dataflow interchange format (DIF) language, a language for specifying DSP-oriented dataflow graphs, with constructs for supporting topological patterns, and we show how topological patterns can be effective in various aspects of embedded signal processing design flows using specific application examples.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Signal Processing Research Community (SPRC), University of Maryland, National Instruments, Air Force Research Laboratory Information Directorate, Department of Electrical and Computer Engineering

To network or not to network? Analysis of the Finnish software industry-A networking approach

The purpose of this paper is to study the role of networking in the development and present situation of Finnish software companies. Although the target of interest of this study is Finland, the conclusions can also to some extent be applied to other countries with mature software industries. In Finland, uniquely wide longitudinal material on the software business is available: the software industry survey is an annual study targeted at the sector, and it has already been repeated for 18 consecutive years. The study shows that networking has been a key trend in the industry and also a driver for internationalization, but as it has not been covered very well in the networking literature concerning the software industry, there is a clear need for further examination of software industry networks.

Timely report production from WWW data sources

In business intelligence, reporting is perceived by users as the most important area. Here, we present a case study of data integration for reporting within the World Health Organization (WHO). WHO produces Communicable Disease Epidemiological Profiles for emergency affected countries. Given the nature of emergencies, the production of these reports should be timely. In order to automate the production of the reports, we have introduced a method of integrating data from multiple sources by using the RDF (Resource Description Framework) format. The model of the data is described using an RDF ontology, making validation of the data from multiple sources possible. However, since RDF is highly technical, we have designed a graphical tool for the end user. The tool can be used to configure the data sources of a given report. After this, data for the report is generated from the sources. Finally, templates are used to generate the reports.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Research Community on Data-to-Decision (D2D), European Organization for Nuclear Research, Helsinki Institute of Physics, World Health Organization Avenue Appia 20

Device-to-device (D2D) communications is expected to become an integral part of future 5G cellular systems. The connectivity performance of D2D sessions is heavily affected by the dynamic changes in the signal-to-interference ratio (SIR) caused by the random movement of communicating pairs over a certain bounded area of interest. In this paper, taking into account recent findings on the movement of users over a landscape, we characterize the probability density function (pdf) of the SIR under stochastic motion of communicating D2D pairs on planar fractals. We demonstrate that the pdf of the SIR depends on the fractal dimension and the spatial density of trajectories. The proposed model can be further used to investigate time-dependent user-centric performance metrics including the link data rate and the outage time.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Electronics and Communications Engineering, Research group: Emerging Technologies for Nano-Bio-Info-Cogno, Peoples’ Friendship University of Russia, Keldysh Institute of Applied Mathematics, Department of Applied Probability and Informatics, Moscow City University

Theory driven design and real prototyping of biomass pyrolytic stove

This article introduces a design approach integrating the early design phase and model-based engineering in order to develop an innovative biomass gasifier system for rural communities in Africa. The need for such a systemic perspective is imposed by the imbrication of technical, ecological and cultural issues that cannot be ignored while designing new technology. The article proposes an integrated generic design theory approach to discover and rank by order of importance the system's variables and to single out the most desired design parameters. A pre-design user requirement assessment was carried out to identify detailed stove functions. Causal-ordering diagrams were sketched for system modelling. System functions were described graphically and synthesized through simple linear algebraic matrices. Contradictions in system functions were solved using the Theory of Inventive Problem Solving (TRIZ 40). System optimization was done through the simple Taguchi experimentation method. A two-level L8 degree-of-freedom Taguchi table was used in the experimentation and optimization of the pyrolytic stove. The design approach is exemplified using the case of the "AKIBA" biomass stove.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Kenya Industrial Research and Development Institute (KIRDI), Aalto University School of Engineering, Department of Engineering Design and Production, Aalto University

Contributors: Ogeya, M. C., Coatanéa, E., Medyna, G.

Number of pages: 10

Pages: 69-78

Publication date: 2013

Host publication information

Title of host publication: Proceedings of the International Conference on Engineering Design, ICED

We consider the use of the Fisher-Snedecor F distribution, which is defined as the ratio of two chi-squared variates, to model composite fading channels. In this context, the root-mean-square power of a Nakagami-m signal is assumed to be subject to variations induced by an inverse Nakagami-m random variable. Comparisons with physical channel data demonstrate that the proposed composite fading model provides as good a fit to the data as, and in most cases a better fit than, the generalized-K composite fading model. Motivated by this result, simple novel expressions are derived for the key statistical metrics and performance measures of interest.
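A quick Monte Carlo check of the construction described above (all parameter values are arbitrary illustrations, not from the paper): the Nakagami-m signal power is Gamma distributed, and dividing it by an independent Gamma variate applies the inverse-Nakagami-m shadowing, giving a Fisher-Snedecor F distributed composite power:

```python
import random

def composite_fading_sample(m=2.0, ms=3.0, omega=1.0):
    """One composite channel power sample. The Nakagami-m multipath power
    is Gamma(m, omega/m) distributed; dividing by an independent
    Gamma(ms, 1/ms) variate applies inverse-Nakagami-m shadowing, so the
    ratio is Fisher-Snedecor F distributed (a ratio of two scaled
    chi-squared variates). Parameter values here are illustrative."""
    multipath = random.gammavariate(m, omega / m)
    shadowing = random.gammavariate(ms, 1.0 / ms)
    return multipath / shadowing

random.seed(1)
samples = [composite_fading_sample() for _ in range(100_000)]
mean = sum(samples) / len(samples)
# For ms > 1 the theoretical mean power is omega * ms / (ms - 1) = 1.5 here
print(round(mean, 3))
```

The empirical mean agrees with the closed-form moment, which is the kind of simple expression the paper derives analytically.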

The development of constructability using BIM as an intensifying technology

According to several international research and development articles, completed building plans that address constructability issues contribute to the achievement of the construction objectives of time, cost and quality. Good constructability improves construction performance, productivity and quality. Building information modeling (BIM) has a similar effect on construction. BIM simulates the construction project in a virtual environment. It is possible to make constructability adjustments in the model and to practice construction before it is actualized.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Life Cycle Effectiveness of the Built Environment (LCE@BE), Aalto University

The assessment of constructability: BIM cases

The constructability appraisal methods developed so far are based on evaluating and analyzing the major design components and systems of an entire building, such as structural systems, materials and production techniques. There is still only limited knowledge of assessment methods for constructability that use BIM as an intensifying technology. To form constructability into a more explicit and measurable concept, quantitative and qualitative assessment methods that can be applied systematically will be needed.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Life Cycle Effectiveness of the Built Environment (LCE@BE), Aalto University

Technostress and social networking services: Uncovering strains and their underlying stressors

Numerous users of social networking sites and services (SNS) suffer from technostress and its various strains that hinder well-being. Despite growing research interest in technostress, the extant studies have not explained what kinds of strains SNS use can create and how these strains can be traced back to different stressors. To address this gap in research, we employed a qualitative approach using narrative interviews. As a contribution, our findings introduce four SNS strains (concentration problems, sleep problems, identity problems, and social relation problems) and explain how they link with different underlying SNS stressors. As practical implications, the findings of this study can help technostressed users to identify their SNS strains, understand how they are created, and increase their possibilities to avoid the strains in the future.

Particle filters (PFs) have been used for nonlinear estimation for a number of years. However, they suffer from the impoverishment phenomenon. It is caused by resampling, which is intended to prevent particle degeneracy, and has therefore become an inherent weakness of the technique. To solve the problem of sample impoverishment and to improve the performance of the standard particle filter, we propose a modification to the method that adds a sampling mechanism inspired by optimisation techniques, namely pattern search, particle swarm optimisation, differential evolution and the Nelder-Mead algorithm. In the proposed methods, the true state of the target can be better expressed by the optimised particle set, and the number of meaningful particles can grow significantly. The efficiency of the proposed particle filters is demonstrated on a truck-trailer problem. Simulations show that the particle filter hybridised with Nelder-Mead search performs better than the other optimisation approaches in terms of particle diversity.
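A minimal sketch of the problem and remedy described above. Systematic resampling is a standard PF component; the move step below is a crude hill-climbing stand-in for the paper's pattern-search/PSO/DE/Nelder-Mead mechanisms, meant only to show how an optimisation-inspired move restores particle diversity lost to resampling:

```python
import math
import random

def systematic_resample(particles, weights):
    """Systematic resampling: duplicates high-weight particles. This fights
    weight degeneracy but shrinks the number of distinct particles, which
    is the sample impoverishment discussed above."""
    n = len(particles)
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(acc)
    start = random.random() / n
    out, j = [], 0
    for i in range(n):
        u = (start + i / n) * total
        while cum[j] < u:
            j += 1
        out.append(particles[j])
    return out

def move_step(particles, log_likelihood, step=0.1):
    """Optimisation-inspired move (a crude hill-climbing sketch, NOT the
    paper's algorithms): nudge each particle and keep the nudge only if
    it improves the likelihood."""
    out = []
    for x in particles:
        cand = x + random.gauss(0.0, step)
        out.append(cand if log_likelihood(cand) > log_likelihood(x) else x)
    return out

random.seed(0)
loglik = lambda x: -(x - 1.0) ** 2                 # hypothetical 1-D target at x = 1
particles = [random.gauss(0.0, 1.0) for _ in range(500)]
weights = [math.exp(loglik(x)) for x in particles]
resampled = systematic_resample(particles, weights)
refined = move_step(resampled, loglik)
print(len(set(resampled)), len(set(refined)))      # distinct particles before/after the move
```

After resampling, many particles are exact duplicates; the move step turns accepted nudges into fresh distinct values, so the count of distinct particles grows while no particle's likelihood ever decreases.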

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Signal Processing, University of Toledo, Bowling Green State University

Structural Similarity Index with Predictability of Image Blocks

Structural similarity index (SSIM) is a widely used full-reference metric for assessing the visual quality of images and remote sensing data. It is calculated in a block-wise manner and is based on the multiplication of three components: the similarity of the means of image blocks, the similarity of contrasts, and a correlation factor. In this paper, two modifications of SSIM are proposed. First, a fourth multiplicative component is introduced to SSIM (thus obtaining SSIM4) that describes the similarity of the predictability of image blocks. The predictability of a given block is calculated as the minimal value of the mean square error between the considered block and the neighboring blocks. Second, a simple scheme for calculating the metrics SSIM and SSIM4 for color images is proposed and optimized. The effectiveness of the proposed modifications is confirmed on the specialized image databases TID2013, LIVE, and FLT. In particular, the Spearman rank order correlation coefficient (SROCC) for the recently introduced FLT database, calculated between the proposed metric color SSIM4 and mean opinion scores (MOS), reaches 0.85 (the best result among all compared metrics), whereas for SSIM it is equal to 0.58.
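The predictability component described above can be sketched directly (block sizes and pixel values are illustrative; the full SSIM4 metric also includes the three standard SSIM components, which are omitted here):

```python
def block_mse(a, b):
    """Mean square error between two equally sized blocks (flattened)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def predictability(block, neighbours):
    """Predictability as defined above: the minimal MSE between the
    considered block and its neighbouring blocks (lower means the block
    is easier to predict from its surroundings)."""
    return min(block_mse(block, nb) for nb in neighbours)

# Hypothetical 2x2 blocks, flattened row by row
center = [10, 12, 11, 13]
neighbours = [[10, 12, 11, 14],   # nearly identical neighbour
              [40, 42, 41, 43],   # bright, dissimilar neighbour
              [9, 12, 11, 13]]
print(predictability(center, neighbours))  # → 0.25
```

Comparing the predictability values of the corresponding blocks in the reference and distorted images then yields the fourth multiplicative similarity component.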

Structural influence of gene networks on their inference: Analysis of C3NET

Background: The availability of large-scale high-throughput data poses considerable challenges for its functional analysis. For this reason, gene network inference methods have gained considerable interest. However, our current knowledge, especially about the influence of the structure of a gene network on its inference, is limited. Results: In this paper we present a comprehensive investigation of the structural influence of gene networks on the inferential characteristics of C3NET, a recently introduced gene network inference algorithm. We employ local as well as global performance metrics in combination with an ensemble approach. The results from our numerical study for various biological and synthetic network structures and simulation conditions, also comparing C3NET with other inference algorithms, lead to a multitude of theoretical and practical insights into the working behavior of C3NET. In addition, in order to facilitate the practical usage of C3NET, we provide a user-friendly R package, called c3net, and describe its functionality. It is available from https://r-forge.r-project.org/projects/c3net and from the CRAN package repository. Conclusions: The availability of gene network inference algorithms with known inferential properties opens a new era of large-scale screening experiments that could be equally beneficial for basic biological and biomedical research with auspicious prospects. The availability of our easy-to-use software package c3net may contribute to the popularization of such methods. Reviewers: This article was reviewed by Lev Klebanov, Joel Bader and Yuriy Gusev.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Research Community on Data-to-Decision (D2D), University of Cambridge, Computational Biology and Machine Learning Lab., Faculty of Medicine, Health and Life Sciences, Queen's University, Belfast, Northern Ireland, Cambridge Research Institute

Spectral modeling of time series with missing data

Singular spectrum analysis is a natural generalization of principal component methods for time series data. In this paper we propose an imputation method to be used with singular spectrum-based techniques, based on a weighted combination of the forecasts and hindcasts yielded by the recurrent forecast method. Besides its ease of implementation, the obtained results suggest an overall good fit of our method, which is able to yield a similar adjustment ability in comparison with the alternative method, according to several measures of predictive performance.

Software vulnerability life cycles and the age of software products: An empirical assertion with operating system products

This empirical paper examines whether the age of software products can explain the turnaround between the release of security advisories and the publication of vulnerability information. Building on the theoretical rationale of vulnerability life cycle modeling, this assertion is examined with an empirical sample that covers operating system releases from Microsoft and two Linux vendors. Estimation is carried out with a linear regression model. The results indicate that the age of the observed Microsoft products does not affect the turnaround times, and only feeble statistical relationships are present for the examined Linux releases. With this negative result, the paper contributes to the vulnerability life cycle modeling research by presenting and rejecting one theoretically motivated and previously unexplored question. The rejection is also a positive result; there is no reason for users to fear that the turnaround times would significantly lengthen as operating system releases age.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: University of Turku, Department of Information Technology

Software evolution and time series volatility: An empirical exploration

The paper presents the first empirical study to examine econometric time series volatility modeling in the software evolution context. The econometric volatility concept is related to the conditional variance of a time series rather than the conditional mean targeted in conventional regression analysis. The software evolution context is motivated by relating these variance characteristics to the proximity of operating system releases, the theoretical hypothesis being that volatile characteristics increase near new milestone releases. The empirical experiment is done with a case study of FreeBSD. The analysis is carried out with 12 time series related to bug tracking, development activity, and communication. A historical period from 1995 to 2011 is covered under a daily sampling frequency. According to the results, the time series dataset contains visible volatility characteristics, but these cannot be explained by the time windows around the six observed major FreeBSD releases. The paper consequently contributes to the software evolution research field with new methodological ideas, as well as with both positive and negative empirical results.
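The volatility concept can be illustrated with a GARCH(1,1) conditional-variance recursion; the parameter values here are assumed for illustration and are not fitted to the FreeBSD series.

```python
# Sketch of the econometric volatility concept: a GARCH(1,1) recursion for
# the conditional variance of a series, as opposed to modeling its mean.
# sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1}

def garch_variance(shocks, omega=0.1, alpha=0.2, beta=0.7):
    """Conditional variance path for a sequence of shocks."""
    sigma2 = [omega / (1 - alpha - beta)]  # start at the unconditional variance
    for e in shocks[:-1]:
        sigma2.append(omega + alpha * e ** 2 + beta * sigma2[-1])
    return sigma2

# A calm stretch, a burst of large shocks, then calm again.
series = [0.1, -0.2, 1.5, -1.4, 0.2, 0.1]
path = garch_variance(series)
print([round(v, 3) for v in path])
```

The conditional variance rises after the large shocks and then decays, which is exactly the clustering behavior that conditional-mean regression cannot capture.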

SIR distribution in D2D environment with non-stationary mobility of users

Fifth generation (5G) cellular systems are expected to rely on a set of advanced networking techniques to further enhance spatial frequency reuse. Device-to-device (D2D) communication is one of them, allowing users to establish opportunistic direct connections. The use of direct communications is primarily determined by the signal-to-interference ratio (SIR). However, depending on the users' movement, the SIR of an active connection is expected to fluctuate drastically. In this work we develop an analytical framework for predicting the channel quality between two moving entities in a field of moving interfering stations. Assuming user movement driven by the Fokker-Planck equation, we obtain the empirical probability density function of the SIR. The proposed methodology can be used to solve problems in the area of stochastic control of D2D communications in cellular networks.

SIR analysis in square-shaped indoor premises

Increasing wireless network densification has made wireless access points (APs) available in almost every indoor location (room, office, etc.). To provide complete in-building coverage, an AP is very often deployed per room. In this paper we analyze the signal-to-interference ratio (SIR) for wireless systems operating in neighboring rooms separated by walls of different materials, explicitly taking into account the propagation and wall penetration losses. Both AP and direct device-to-device (D2D) configurations are addressed. Our numerical results indicate that the performance of such a system is characterized by both the loss exponent describing the propagation environment of interest and the wall materials. We provide numerical results for typical wall widths and materials and analyze them in detail.
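The roles of the loss exponent and the wall material can be sketched with a back-of-the-envelope SIR computation; the transmit power, distances, and 10 dB wall loss below are assumed values, not the paper's measurements.

```python
# Toy SIR for two rooms separated by a wall: received powers follow a
# distance power law, and the interferer is additionally attenuated by a
# wall penetration loss in dB. All numbers are illustrative assumptions.
import math

def rx_power_dbm(tx_dbm, d, loss_exp, wall_db=0.0):
    """Received power with path-loss exponent `loss_exp` and wall loss in dB."""
    return tx_dbm - 10 * loss_exp * math.log10(d) - wall_db

def sir_db(own_d, intf_d, loss_exp=3.0, wall_db=10.0, tx_dbm=20.0):
    signal = rx_power_dbm(tx_dbm, own_d, loss_exp)
    interference = rx_power_dbm(tx_dbm, intf_d, loss_exp, wall_db)
    return signal - interference

# Own AP at 3 m in the same room; interfering AP at 6 m behind a 10 dB wall.
print(round(sir_db(3.0, 6.0), 2))  # → 19.03
```

Both knobs matter: doubling the interferer distance contributes 10·n·log10(2) dB (9 dB at n = 3), while the wall adds its penetration loss directly.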

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Department of Electronics and Communications Engineering, Research group: Emerging Technologies for Nano-Bio-Info-Cogno, Peoples’ Friendship University of Russia, Russian Academy of Sciences, Peoples' Friendship University of Russia

Simulation studies targeting high-power narrow-linewidth emission from DFB lasers are presented. The linewidth and output power calculations take into account the mirror losses, including the grating and the facets, as well as spontaneous emission noise, effective refractive index, power and carrier density variations inside the cavity. The longitudinal power and carrier density distributions have been evaluated and their effects on longitudinal spatial hole burning and possible side mode lasing are discussed.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Optoelectronics Research Centre, Research group: Semiconductor Technology and Applications

Contributors: Virtanen, H., Uusitalo, T., Dumitrescu, M.

Number of pages: 2

Pages: 153-154

Publication date: 17 Aug 2016

Host publication information

Title of host publication: 16th International Conference on Numerical Simulation of Optoelectronic Devices, NUSOD 2016

Simulations and experimental results of high-frequency photon-photon resonance are used to examine the possibilities to extend the direct modulation bandwidth in dual-mode distributed feedback lasers beyond the conventional limit set by the carrier-photon resonance.

Search reliability and search efficiency of combined Lévy-Brownian motion: Long relocations mingled with thorough local exploration

A combined dynamics consisting of Brownian motion and Lévy flights is exhibited by a variety of biological systems performing search processes. Assessing the search reliability, the probability of ever locating the target, and the search efficiency, the economy of doing so, of such dynamics thus poses an important problem. Here we model this dynamics by a one-dimensional fractional Fokker-Planck equation combining unbiased Brownian motion and Lévy flights. By solving this equation both analytically and numerically we show that the superposition of recurrent Brownian motion and Lévy flights with stable exponent α < 1, by itself implying zero probability of hitting a point on a line, leads to transient motion with finite probability of hitting any point on the line. We present results for the exact dependence of the values of both the search reliability and the search efficiency on the distance between the starting and target positions as well as the choice of the scaling exponent α of the Lévy flight component.
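The combined dynamics can also be explored numerically. The sketch below superposes Gaussian increments with symmetric Lévy-stable increments (α < 1, sampled with the Chambers-Mallows-Stuck method) and estimates the probability of reaching a small window around the target within a finite horizon, a discretized stand-in for the continuous first-hitting problem; the step scales, horizon, and window size are assumed.

```python
# Monte Carlo sketch of combined Brownian + Lévy search toward a target.
import math, random

def levy_stable(alpha):
    """Symmetric alpha-stable variate via the Chambers-Mallows-Stuck method."""
    v = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    return (math.sin(alpha * v) / math.cos(v) ** (1 / alpha)
            * (math.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

def hit_probability(x0, target=0.0, window=0.5, alpha=0.7,
                    steps=1000, trials=300, seed=1):
    """Fraction of walks entering the target window within the horizon."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        x = x0
        for _ in range(steps):
            # Superposed increments: Brownian plus scaled Lévy flight.
            x += random.gauss(0.0, 0.1) + 0.1 * levy_stable(alpha)
            if abs(x - target) < window:
                hits += 1
                break
    return hits / trials

print(hit_probability(x0=5.0))
```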

Revenue models of application developers in android market ecosystem

Mobile application ecosystems have grown rapidly in the past few years. Startups and established developers alike are increasingly offering their products in marketplaces such as Android Market and the Apple App Store. In this paper, we study the revenue models used in Android Market. For the analysis, we gathered data on 351,601 applications from their public pages at the marketplace. From these, a random sample of 100 applications was used in a qualitative study of revenue streams. The results indicate that part of the marketplace can be explained with traditional models, but free applications use complex revenue models. Based on the qualitative analysis, we identified four general business strategy categories for further studies.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Managing digital industrial transformation (mDIT), Turku Centre for Computer Science, Business and Innovation Development (BID), University of Turku

G protein-coupled receptors (GPCRs) control cellular signaling and responses. Many of these GPCRs are modulated by cholesterol and polyunsaturated fatty acids (PUFAs) which have been shown to co-exist with saturated lipids in ordered membrane domains. However, the lipid compositions of such domains extracted from the brain cortex tissue of individuals suffering from GPCR-associated neurological disorders show drastically lowered levels of PUFAs. Here, using free energy techniques and multiscale simulations of numerous membrane proteins, we show that the presence of the PUFA DHA helps helical multi-pass proteins such as GPCRs partition into ordered membrane domains. The mechanism is based on hybrid lipids, whose PUFA chains coat the rough protein surface, while the saturated chains face the raft environment, thus minimizing perturbations therein. Our findings suggest that the reduction of GPCR partitioning to their native ordered environments due to PUFA depletion might affect the function of these receptors in numerous neurodegenerative diseases, where the membrane PUFA levels in the brain are decreased. We hope that this work inspires experimental studies on the connection between membrane PUFA levels and GPCR signaling.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Physics, University of Helsinki, Institute of Organic Chemistry and Biochemistry, Academy of Sciences of the Czech Republic, Universitat Autònoma de Barcelona, University of Texas Health Science Center at Houston, MEMPHYS

Reconfigurable miniature sensor nodes for condition monitoring

Wireless sensor networks are being deployed at an escalating rate in various application fields. The ever-growing number of application areas requires a diverse set of algorithms with disparate processing needs. Wireless sensor networks also need to adapt to the prevailing energy conditions and processing requirements. These reasons rule out the use of a single fixed design; instead, a general-purpose design that can rapidly adapt to different conditions and requirements is desired. In lieu of the traditional inflexible wireless sensor node consisting of a microcontroller, radio transceiver, sensor array, and energy storage, we propose a rapidly reconfigurable miniature sensor node implemented with a transport triggered architecture processor on a low-power Flash FPGA. A comparison of power consumption and silicon area usage between 16-bit fixed point, 16-bit floating point, and 32-bit floating point implementations is also presented. The implemented processors and algorithms are intended for rolling bearing condition monitoring but can be extended to other applications as well.

Reconfigurable computing for future vision-capable devices

Mobile devices have been identified as promising platforms for interactive vision-based applications. However, this type of application still poses significant challenges in terms of latency, throughput, and energy efficiency. In this context, the integration of reconfigurable architectures on mobile devices allows dynamic reconfiguration to match the computation and data flow of interactive applications, demonstrating significant performance benefits compared to general-purpose architectures. This paper presents concepts relying on platform-level adaptability, exploring the acceleration of vision-based interactive applications through the utilization of three reconfigurable architectures: a low-power EnCore processor with a Configurable Flow Accelerator co-processor, a hybrid reconfigurable SIMD/MIMD platform, and Transport-Triggered Architecture-based processors. The architectures are evaluated and compared with current processors, analyzing their advantages and weaknesses in terms of performance and energy efficiency when implementing highly interactive vision-based applications. The results show that the inclusion of reconfigurable platforms on mobile devices can enable the computation of several computationally heavy tasks with high performance and low energy consumption while providing sufficient flexibility.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Signal Processing Research Community (SPRC), Univ of Oulu, University of Santiago de Compostela (USC)

The upcoming Reconfigurable Video Coding (RVC) standard from MPEG (ISO/IEC SC29WG11) defines a library of coding tools to specify existing or new compressed video formats and decoders. The coding tool library has been written in a dataflow/actor-oriented language named CAL. Each coding tool (actor) can be represented with an extended finite state machine, and the data communication between the tools is described as dataflow graphs. This paper proposes an approach to model the CAL actor network with Parameterized Synchronous Data Flow and to derive a quasi-static multiprocessor execution schedule for the system. In addition to proposing a scheduling approach for RVC, an extension to the well-known permutation flow shop scheduling problem that enables rapid run-time scheduling of RVC tasks is introduced.

Quantifying the non-ergodicity of scaled Brownian motion

We examine the non-ergodic properties of scaled Brownian motion (SBM), a non-stationary stochastic process with a time-dependent diffusivity of the form $D(t)\simeq t^{\alpha-1}$. We compute the ergodicity breaking parameter EB in the entire range of scaling exponents α, both analytically and via extensive computer simulations of the stochastic Langevin equation. We demonstrate that in the limit of long trajectory lengths T and short lag times Δ the EB parameter as a function of the scaling exponent α has no divergence at α = 1/2, and we present the asymptotes for EB in different limits. We generalize the analytical and simulation results for the time averaged and ergodic properties of SBM in the presence of ageing, that is, when the observation of the system starts only a finite time span after its initiation. The approach developed here for the calculation of the higher time averaged moments of the particle displacement can be applied to derive the ergodic properties of other stochastic processes such as fractional Brownian motion.
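A minimal numerical sketch of the quantities involved, assuming unit time step and illustrative parameters: trajectories follow a Langevin scheme with diffusivity D(t) ~ t^(α−1), the time-averaged MSD is computed per trajectory, and EB is the normalized variance of the time-averaged MSD over the ensemble.

```python
# Sketch: scaled Brownian motion, time-averaged MSD, and the ergodicity
# breaking parameter EB = <tamsd^2> / <tamsd>^2 - 1.
import math, random

def sbm_trajectory(alpha, n_steps, dt=1.0):
    """Langevin scheme x_{i+1} = x_i + sqrt(2 D(t_i) dt) * xi_i."""
    x, traj = 0.0, [0.0]
    for i in range(1, n_steps + 1):
        d = (i * dt) ** (alpha - 1)  # time-dependent diffusivity
        x += math.sqrt(2 * d * dt) * random.gauss(0.0, 1.0)
        traj.append(x)
    return traj

def tamsd(traj, lag):
    """Time-averaged mean squared displacement at an integer lag."""
    n = len(traj) - lag
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n)) / n

def eb_parameter(alpha=0.5, n_traj=200, n_steps=1000, lag=10, seed=7):
    random.seed(seed)
    vals = [tamsd(sbm_trajectory(alpha, n_steps), lag) for _ in range(n_traj)]
    mean = sum(vals) / len(vals)
    second = sum(v * v for v in vals) / len(vals)
    return second / mean ** 2 - 1.0

print(eb_parameter())
```

For long trajectories and a short lag the estimate is small, consistent with the Brownian-like scaling EB ~ Δ/T rather than any divergence at α = 1/2.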

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Physics, Institute for Physics and Astronomy, University of Potsdam, Akhiezer Institute for Theoretical Physics, Kharkov Institute of Physics and Technology, Institute for Physics AndAstronomy, Humboldt-Universität zu Berlin, Shahid Beheshti University

Properties of graph distance measures by means of discrete inequalities

In this paper, we investigate graph distance measures based on topological graph measures. These measures can be used to quantify the structural distance between graphs. The scientific literature shows that measuring the distance or similarity between graphs in a meaningful way is intricate. We demonstrate that our measures are well-defined and prove bounds for investigating their value domain. We also generate numerical results and demonstrate that the measures have useful properties.
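As an illustration of the general idea only: the Wiener index and the normalization below are an assumed example of a topological-measure-based distance, not necessarily the measures studied in the paper.

```python
# Compare two graphs through a topological index (the Wiener index: the
# sum of all shortest-path lengths) and normalize the difference to [0, 1).
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs (BFS)."""
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2  # each pair was counted twice

def graph_distance(g, h):
    wg, wh = wiener_index(g), wiener_index(h)
    return abs(wg - wh) / max(wg, wh)

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path on 4 vertices
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # star on 4 vertices
print(wiener_index(path4), wiener_index(star4),
      round(graph_distance(path4, star4), 3))    # → 10 9 0.1
```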

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Faculty of Biomedical Sciences and Engineering, Research group: Computational Medicine and Statistical Learning Laboratory (CMSL), Research group: Predictive Society and Data Analytics (PSDA), University of Applied Sciences Upper Austria, Nankai University, Institute for Bioinformatics and Translational Research, Laboratory of Biosystem Dynamics, Institute of Biosciences and Medical Technology, Institute for Intelligent Production, The City College of New York (CUNY)

Context: Unhandled code exceptions are often the cause of a drop in the number of users. In the highly competitive market of Android apps, users commonly stop using applications when they find some problem generated by unhandled exceptions. This is often reflected in a negative comment in the Google Play Store, and developers are usually not able to reproduce the issue reported by the end users because of a lack of information. Objective: In this work, we present an industrial case study aimed at prioritizing the removal of bugs related to uncaught exceptions. Therefore, we (1) analyzed crash reports of an Android application developed by a public transportation company; (2) classified the uncaught exceptions that caused the crashes; and (3) prioritized the exceptions according to their impact on users. Results: The analysis of the exceptions showed that seven exceptions generated 70% of the overall errors and that it was possible to solve more than 50% of the exceptions-related issues by fixing just six Java classes. Moreover, as a side result, we discovered that the exceptions were highly correlated with two code smells, namely “Spaghetti Code” and “Swiss Army Knife”. The results of this study helped the company understand how to better focus their limited maintenance effort. Additionally, the adopted process can be beneficial for any Android developer in understanding how to prioritize the maintenance effort.
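The prioritization step can be sketched as counting crashes per exception type and selecting the smallest set of types that covers a target share of all crashes; the crash log and the 70% threshold mirror the spirit of the study, but the data below are invented.

```python
# Count crashes per exception type and pick the top types until the
# chosen coverage share of all crashes is reached.
from collections import Counter

def top_covering(crash_types, coverage=0.7):
    """Smallest prefix of most-common exception types covering `coverage`."""
    counts = Counter(crash_types)
    total = sum(counts.values())
    covered, picked = 0, []
    for name, n in counts.most_common():
        picked.append(name)
        covered += n
        if covered / total >= coverage:
            break
    return picked, covered / total

# Invented crash log for illustration.
crashes = (["NullPointerException"] * 40 + ["IllegalStateException"] * 20 +
           ["IOException"] * 10 + ["ClassCastException"] * 5 +
           ["OutOfMemoryError"] * 5)
picked, share = top_covering(crashes)
print(picked, round(share, 2))  # → ['NullPointerException', 'IllegalStateException'] 0.75
```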

Preoperative simulation for the planning of microsurgical clipping of intracranial aneurysms

Introduction: The safety and success of intracranial aneurysm (IA) surgery could be improved through the dedicated application of simulation covering the procedure from the 3-dimensional (3D) description of the surgical scene to the visual representation of the clip application. We aimed in this study to validate the technical feasibility and clinical relevance of such a protocol. Methods: All patients preoperatively underwent 3D magnetic resonance imaging and 3D computed tomography angiography to build 3D reconstructions of the brain, cerebral arteries, and surrounding cranial bone. These 3D models were segmented and merged using Osirix, a DICOM image processing application. This provided the surgical scene that was subsequently imported into Blender, a modeling platform for 3D animation. Digitized clips and appliers could then be manipulated in the virtual operative environment, allowing the visual simulation of clipping. This simulation protocol was assessed in a series of 10 IAs by 2 neurosurgeons. Results: The protocol was feasible in all patients. The visual similarity between the surgical scene and the operative view was excellent in 100% of the cases, and the identification of the vascular structures was accurate in 90% of the cases. The neurosurgeons found the simulation helpful for planning the surgical approach (ie, the bone flap, cisternal opening, and arterial tree exposure) in 100% of the cases. The correct number of final clip(s) needed was predicted from the simulation in 90% of the cases. The preoperatively expected characteristics of the optimal clip(s) (ie, their number, shape, size, and orientation) were validated during surgery in 80% of the cases. Conclusions: This study confirmed that visual simulation of IA clipping based on the processing of high-resolution 3D imaging can be effective. This is a new and important step toward the development of a more sophisticated integrated simulation platform dedicated to cerebrovascular surgery.

Power Mitigation by Performance Equalization in a Heterogeneous Reconfigurable Multicore Architecture

This paper presents an integrated self-aware computing model mitigating the power dissipation of a heterogeneous reconfigurable multicore architecture by dynamically scaling the operating frequency of each core. The power mitigation is achieved by equalizing the performance of all the cores for an uninterrupted exchange of data. The multicore platform consists of heterogeneous Coarse-Grained Reconfigurable Arrays (CGRAs) of application-specific sizes and a Reduced Instruction-Set Computing (RISC) core. The CGRAs and the RISC core are integrated with each other over a Network-on-Chip (NoC) of six nodes arranged in a topology of two rows and three columns. The RISC core constantly monitors and controls the performance of each CGRA accelerator by adjusting the operating frequencies unless the performance of all the CGRAs is optimally balanced over the platform. The CGRA cores on the platform are processing some of the most computationally-intensive signal processing algorithms while the RISC core establishes packet based synchronization between the cores for computation and communication. All the cores can access each other’s computational and memory resources while processing the kernels simultaneously and independently of each other. Besides general-purpose processing and overall platform supervision, the RISC processor manages performance equalization among all the cores which mitigates the overall dynamic power dissipation by 20.7 % for a proof-of-concept test.

Structural aspects of crystalline tin oxide and its interfaces with composition Sn2O3 are considered computationally based on first-principles density functional calculations. The possibility of formation of different nonstoichiometric tin oxide crystals and SnO2/SnO interfaces is shown. The lowest total energy per Sn2O3 unit was evaluated for a layered Sn2O3 crystal, where oxygen vacancies are arranged into the (101) plane in a rutile structure system. Interface structures with orientations SnO2(101)/SnO(001) and SnO2(100)/SnO(100), corresponding to composition Sn2O3, are only slightly less stable. Their estimated interface energies are 0.15 J m−2 and 0.8 J m−2, respectively. All geometries have components similar to the well-known rutile structure SnO2 and litharge structure SnO geometries. The most stable Sn2O3 crystals include SnO6 octahedra similar to those found in rutile structure SnO2.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Institute of Physics

Contributors: Mäki-Jaskari, M. A., Rantala, T. T.

Number of pages: 9

Pages: 33-41

Publication date: Jan 2004

Peer-reviewed: Yes

Publication information

Journal: Modelling and Simulation in Materials Science and Engineering

In this paper, we present a transparent mechanical stimulation device capable of uniaxial stimulation, which is compatible with standard bioanalytical methods used in cellular mechanobiology. We validate the functionality of the uniaxial stimulation system using human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs). The pneumatically controlled device is fabricated from polydimethylsiloxane (PDMS) and provides uniaxial strain and superior optical performance compatible with standard inverted microscopy techniques used for bioanalytics (e.g., fluorescence microscopy and calcium imaging). Therefore, it allows for a continuous investigation of the cell state during stretching experiments. The paper introduces the design and fabrication of the device, characterizes its mechanical performance, and demonstrates its compatibility with standard bioanalytical analysis tools. Imaging modalities such as high-resolution live-cell phase contrast imaging, video recording, fluorescence imaging, and calcium imaging can be performed in the device. Utilizing the different imaging modalities and the proposed stretching device, we demonstrate the capability of the device for extensive further studies of hiPSC-CMs. We also demonstrate that sarcomere structures of hiPSC-CMs organize and orient perpendicular to the uniaxial strain axis, reflecting a more mature cardiomyocyte phenotype.

General information

Publication status: E-pub ahead of print

MoE publication type: A1 Journal article-refereed

Organisations: Research group: Micro and Nanosystems Research Group, BioMediTech, Risø Campus, Tampere University of Applied Sciences, Eindhoven University of Technology, Tampere University Hospital

Dataflow models of computation are widely used for the specification, analysis, and optimization of Digital Signal Processing (DSP) applications. In this paper a new meta-model called PiMM is introduced to address the important challenge of managing dynamics in DSP-oriented representations. PiMM extends a dataflow model by introducing an explicit parameter dependency tree and an interface-based hierarchical compositionality mechanism. PiMM favors the design of highly-efficient heterogeneous multicore systems, specifying algorithms with customizable trade-offs among predictability and exploitation of both static and adaptive task, data and pipeline parallelism. PiMM fosters design space exploration and reconfigurable resource allocation in a flexible dynamic dataflow context.

Performance evaluation of a flow control algorithm for network-on-chip

Network-on-chip (NoC) has been proposed for System-on-Chip (SoC) as an alternative to on-chip bus-based interconnects to achieve better performance and lower energy consumption. Several approaches have been proposed to deal with NoC design and can be classified into two main categories: design-time approaches and run-time approaches. Design-time approaches are generally tailored for an application domain or a specific application by providing a customized NoC; all parameters, such as routing and switching schemes, are defined at design time. Run-time approaches, in contrast, provide techniques that allow a NoC to continuously adapt its structure and behavior at runtime. In this paper, a performance evaluation of a flow control algorithm for congestion avoidance in NoCs is presented. This algorithm allows NoC elements to dynamically adjust their inflow by using a feedback control-based mechanism. Analytical and simulation results are reported to show the viability of this mechanism for congestion avoidance in NoCs.

Passive condition pre-enforcement for rights exporting

Condition pre-enforcement is one of the known methods for rights adaptation. In the context of integrating the rights exporting process, we identify issues introduced by condition pre-enforcement and potential risks of granting unexpected rights when exporting rights back and forth. We propose a solution to these problems in the form of a new algorithm called Passive Condition Pre-enforcement (PCP), and discuss the impact of PCP on the existing process of rights exporting.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Research Community on Data-to-Decision (D2D)

Contributors: Lu, W., Nummenmaa, J., Zhang, Z.

Number of pages: 14

Pages: 241-254

Publication date: 2015

Host publication information

Title of host publication: Perspectives in Business Informatics Research - 14th International Conference, BIR 2015, Proceedings

In recent work, a graphical modeling construct called "topological patterns" has been shown to enable concise representation and direct analysis of repetitive dataflow graph sub-structures in the context of design methods and tools for digital signal processing systems (Sane et al. 2010). In this paper, we present a formal design method for specifying topological patterns and deriving parameterized schedules from such patterns based on a novel schedule model called the scalable schedule tree. The approach represents an important class of parameterized schedule structures in a form that is intuitive for representation and efficient for code generation. Through application case studies involving image processing and wireless communications, we demonstrate our methods for topological pattern representation, scalable schedule tree derivation, and associated dataflow graph code generation.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Signal Processing Research Community (SPRC), University of Maryland, Department of Electrical and Computer Engineering

Digital predistortion (DPD) is a widely adopted baseband processing technique in current radio transmitters. While DPD can effectively suppress unwanted spurious spectrum emissions stemming from imperfections of analog RF and baseband electronics, it also introduces extra processing complexity and poses challenges on efficient and flexible implementations, especially for mobile cellular transmitters, considering their limited computing power compared to basestations. In this paper, we present high data rate implementations of broadband DPD on modern embedded processors, such as mobile GPU and multicore CPU, by taking advantage of emerging parallel computing techniques for exploiting their computing resources. We further verify the suppression effect of DPD experimentally on real radio hardware platforms. Performance evaluation results of our DPD design demonstrate the high efficacy of modern general purpose mobile processors on accelerating DPD processing for a mobile transmitter.
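The predistortion principle can be illustrated on a toy memoryless PA model; the cubic characteristic and the per-sample Newton inversion below are illustrative assumptions, whereas practical DPD (as in the paper) uses memory-polynomial models fitted to measurements and runs on parallel hardware.

```python
# Toy DPD illustration: invert a compressive PA model y = g(x) = x - 0.1*x^3
# per sample, so that the cascade PA(predistort(x)) reproduces x.

def pa(x):
    """Invented memoryless compressive PA characteristic."""
    return x - 0.1 * x ** 3

def predistort(x, iters=20):
    """Solve g(z) = x by Newton iteration (valid in the monotonic region)."""
    z = x  # initial guess
    for _ in range(iters):
        f = pa(z) - x
        df = 1 - 0.3 * z ** 2  # derivative of the PA model
        z -= f / df
    return z

samples = [0.2, 0.5, 0.9]
linearized = [pa(predistort(s)) for s in samples]
print([round(v, 6) for v in linearized])  # → [0.2, 0.5, 0.9]
```

The predistorter pre-expands the signal exactly where the PA compresses it; in a real transmitter the same idea suppresses the spectral regrowth caused by the nonlinearity.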

Overview of the MPEG reconfigurable video coding framework

Video coding technology in the last 20 years has evolved producing a variety of different and complex algorithms and coding standards. So far the specification of such standards, and of the algorithms that build them, has been done case by case providing monolithic textual and reference software specifications in different forms and programming languages. However, very little attention has been given to providing a specification formalism that explicitly presents common components between standards, and the incremental modifications of such monolithic standards. The MPEG Reconfigurable Video Coding (RVC) framework is a new ISO standard currently in its final stage of standardization, aiming at providing video codec specifications at the level of library components instead of monolithic algorithms. The new concept is to be able to specify a decoder of an existing standard or a completely new configuration that may better satisfy application-specific constraints by selecting standard components from a library of standard coding algorithms. The possibility of dynamic configuration and reconfiguration of codecs also requires new methodologies and new tools for describing the new bitstream syntaxes and the parsers of such new codecs. The RVC framework is based on the usage of a new actor/dataflow oriented language called CAL for the specification of the standard library and instantiation of the RVC decoder model. This language has been specifically designed for modeling complex signal processing systems. CAL dataflow models expose the intrinsic concurrency of the algorithms by employing the notions of actor programming and dataflow. The paper gives an overview of the concepts and technologies building the standard RVC framework and the non-standard tools supporting the RVC model from the instantiation and simulation of the CAL model to software and/or hardware code synthesis.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Signal Processing Research Community (SPRC), Department of Electrical and Computer Engineering, University of Maryland, Ericsson Research, Xilinx Research Labs, CRPP, UBL

Organizational structure and the periphery of the gene regulatory network in B-cell lymphoma.

The physical periphery of a biological cell is mainly described by signaling pathways which are triggered by transmembrane proteins and receptors that act as sentinels controlling the whole gene regulatory network of a cell. However, our current knowledge about the gene regulatory mechanisms that are governed by extracellular signals is severely limited. The purpose of this paper is threefold. First, we infer a gene regulatory network from a large-scale B-cell lymphoma expression data set using the C3NET algorithm. Second, we provide a functional and structural analysis of the largest connected component of this network, revealing that this network component corresponds to the peripheral region of a cell. Third, we analyze the hierarchical organization of network components of the whole inferred B-cell gene regulatory network by introducing a new approach which exploits the variability within the data as well as the inferential characteristics of C3NET. As a result, we find a functional bisection of the network corresponding to different cellular components. Overall, our study highlights the peripheral gene regulatory network of B-cells and shows that it is centered around hub transmembrane proteins located at the physical periphery of the cell. In addition, we identify a variety of novel pathological transmembrane proteins such as ion channel complexes and signaling receptors in B-cell lymphoma.

Optimization of Flexible Filter Banks Based on Fast Convolution

Multirate filter banks can be implemented efficiently using fast-convolution (FC) processing. The main advantage of FC filter banks (FC-FBs) over conventional polyphase implementations is their increased flexibility: the number of channels, their bandwidths, and their center frequencies can be selected independently. In this paper, an approach to optimize FC-FBs is proposed. First, a subband representation of the FC-FB is derived. Then, the optimization problems are formulated with the aid of the subband model. Finally, these problems are conveniently solved with the aid of a general nonlinear optimization algorithm. Several examples are included to demonstrate the proposed overall design scheme as well as to illustrate the efficiency and flexibility of the resulting FC-FB.
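The basic operation underlying fast-convolution processing is FFT-based block filtering. The following is a minimal overlap-save sketch of a single FC channel, not the paper's optimized FC-FB; the function name and parameters (`fc_channel`, `fft_len`) are illustrative assumptions.

```python
import numpy as np

def fc_channel(x, h, fft_len=256):
    """Overlap-save fast-convolution filtering of one channel: the core
    operation behind FC filter banks (illustrative sketch only)."""
    m = len(h)
    hop = fft_len - m + 1                      # new samples consumed per block
    H = np.fft.fft(h, fft_len)                 # filter response, zero-padded
    x = np.concatenate([np.zeros(m - 1), x])   # prime the overlap region
    out = []
    for start in range(0, len(x) - fft_len + 1, hop):
        block = np.fft.ifft(np.fft.fft(x[start:start + fft_len]) * H)
        out.append(block.real[m - 1:])         # discard the aliased prefix
    return np.concatenate(out) if out else np.array([])
```

Running one channel per subband with independently chosen bands is what gives FC-FBs the flexibility the abstract describes.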

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Electronics and Communications Engineering, Research group: Wireless Communications and Positioning

OP2A: How to improve the quality of the web portal of open source software products

Open Source Software (OSS) communities do not often invest in marketing strategies to promote their products competitively. Even the home pages of the web portals of well-known OSS products show technicalities and details that are not relevant for a fast and effective evaluation of the product's qualities. As a result, final users, and even developers, who are interested in evaluating and potentially adopting an OSS product are often negatively impressed by the quality they perceive from the product's web portal, and turn to proprietary software solutions or fail to adopt OSS that may be useful in their activities. In this paper, we define OP2A, an evaluation model, and derive a checklist that OSS developers and webmasters can use to design (or improve) their web portals with all the content expected to be of interest to OSS final users. We exemplify the use of the model by applying it to the Apache Tomcat web portal, and we apply the model to the websites of 47 well-known OSS products to highlight the deficiencies that currently characterize these portals.

On the effect of deformation twinning and microstructure to strain hardening of high manganese austenitic steel 3D microstructure aggregates at large strains

The hardening and deformation characteristics of the Hadfield microstructure are studied to investigate the effect of microstructure on material behavior. A crystal plasticity model including dislocation slip and deformation twinning is employed. The role of deformation twinning in the overall strain hardening of the material is evaluated for two different grain structures. Large compressive strains are applied to 3D microstructural aggregates representing the uniform and non-uniform grain structures of Hadfield steels. The grain structure has an effect on the strain hardening rate as well as on the overall hardening capability of the microstructure. A major cause of the difference in strain hardening is the different twin volume fraction evolution, influenced by intra-grain and inter-grain interactions. A mixture of large and small grains was found to be more favorable for twinning, thus resulting in a greater hardening capability than a uniform grain size.

In this paper, results of image-denoising efficiency prediction are given for a discrete cosine transform (DCT) based filter in the case of spatially correlated additive Gaussian noise (SCGN). The considered noise model is analyzed for different degrees of spatial correlation, which produce varying non-homogeneous noise spectra. The PSNR metric is used to assess denoising efficiency. It is shown that the prediction of denoising efficiency has high accuracy for data distorted by noise with different degrees of spatial correlation, and requires few computational resources.
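The PSNR metric mentioned above has a standard definition; the helper below is a minimal sketch of it and is not tied to the paper's implementation (the `peak` default assumes 8-bit images).

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    denoised estimate (standard definition; illustrative helper)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```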

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Department of Signal Processing, Research group: Computational Imaging-CI, Kharkiv National Aerospace University

Contributors: Rubel, O., Lukin, V., Egiazarian, K.

Number of pages: 5

Pages: 750-754

Publication date: 12 Apr 2016

Host publication information

Title of host publication: 2016 13th International Conference on Modern Problems of Radio Engineering, Telecommunications and Computer Science (TCSET)

On Confidences and Their Use in (Semi-)Automatic Multi-Image Taxa Identification

We analyzed classification confidences in biological multi-image taxa identification problems, where each specimen is represented by multiple images. We observed that confidences can be exploited to move toward a semi-automated identification process, in which images are initially classified using a convolutional neural network and taxonomic experts manually inspect only the samples with low confidence. We studied different ways to evaluate confidences and concluded that the difference between the largest and second-largest values in the unnormalized network outputs leads to the best results. Furthermore, we compared different ways to use image-wise confidences when deciding on the final identification using all the input images of a specimen. The best results were obtained using a confidence-weighted sum rule over the unnormalized outputs. This approach also outperformed the evaluated supervised decision method.
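The two measures the abstract singles out, the top-1 minus top-2 confidence and the confidence-weighted sum rule, can be sketched in a few lines. This is an illustrative reading of the abstract, not the authors' code; the function names are assumptions.

```python
import numpy as np

def confidence(logits):
    """Difference between the largest and second-largest unnormalized
    network outputs, the confidence measure the abstract found best."""
    top2 = np.sort(np.asarray(logits, float))[-2:]
    return top2[1] - top2[0]

def identify_specimen(per_image_logits):
    """Confidence-weighted sum rule over all images of one specimen:
    each image's logits are weighted by its confidence before fusion."""
    per_image_logits = np.asarray(per_image_logits, dtype=float)
    weights = np.array([confidence(l) for l in per_image_logits])
    fused = (weights[:, None] * per_image_logits).sum(axis=0)
    return int(np.argmax(fused))
```

In a semi-automated workflow, specimens whose fused confidence falls below a chosen threshold would be routed to a taxonomic expert.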

Networks for systems biology: Conceptual connection of data and function

The purpose of this study is to survey the use of networks and network-based methods in systems biology. The study starts with an introduction to graph theory and basic measures for quantifying structural properties of networks. Then, the authors present important network classes and gene networks, as well as methods for their analysis. In the last part of the study, the authors review approaches that aim at analysing the functional organisation of gene networks and the use of networks in medicine. In addition, the authors advocate networks as a systematic approach to general problems in systems biology, because networks are capable of assuming multiple roles that are very beneficial for connecting experimental data with a functional interpretation in biological terms.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Research Community on Data-to-Decision (D2D), Computational Biology and Machine Learning Lab., Faculty of Medicine, Health and Life Sciences, Queen's University, Belfast, Northern Ireland, Institute for Bioinformatics and Translational Research

Model for evaluating additive manufacturing feasibility in end-use production

In practical design work, a designer needs to consider the feasibility of manufacturing a part using additive manufacturing (AM) instead of conventional manufacturing (CM) technology. Traditionally, and by default, parts are assumed to be manufactured using CM, and using AM as an alternative needs to be justified. AM is currently often a more expensive manufacturing method than CM, but its use can be justified for a number of reasons: improved part features, faster manufacturing time, and lower cost. Improved part features usually mean reduced mass or a complex shape. However, in low-volume production, shorter manufacturing time and lower part cost may become the most important characteristics. In this paper, we present a practical feasibility model that analyses the added value of using AM for manufacturing. The approach is demonstrated on four specific parts. They represent real industrial design tasks ordered from an engineering office company. These parts were manufactured by Selective Laser Melting (SLM) technology, and the original designs made for conventional manufacturing are also presented and used for comparison purposes.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Automation Technology and Mechanical Engineering, Research area: Design, Development and LCM, Research area: Manufacturing and Automation, Enmac Ltd

Contributors: Ahtiluoto, M., Ellman, A., Coatanea, E.

Number of pages: 10

Pages: 799-808

Publication date: 2019

Peer-reviewed: Yes

Publication information

Journal: Proceedings of the International Conference on Engineering Design, ICED

Model-Based Dynamic Scheduling for Multicore Signal Processing

This paper presents a model-based design method and a corresponding new software tool, the HTGS Model-Based Engine (HMBE), for designing and implementing dataflow-based signal processing applications on multi-core architectures. HMBE provides complementary capabilities to HTGS (Hybrid Task Graph Scheduler), a recently-introduced software tool for implementing scalable workflows for high performance computing applications on compute nodes with high core counts and multiple GPUs. HMBE integrates model-based design approaches, founded on dataflow principles, with advanced design optimization techniques provided in HTGS. This integration contributes to (a) making the application of HTGS more systematic and less time consuming, (b) incorporating additional dataflow-based optimization capabilities with HTGS optimizations, and (c) automating significant parts of the HTGS-based design process using a principled approach. In this paper, we present HMBE with an emphasis on the model-based design approaches and the novel dynamic scheduling techniques that are developed as part of the tool. We demonstrate the utility of HMBE via two case studies: an image stitching application for large microscopy images and a background subtraction application for multispectral video streams.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Pervasive Computing, Research area: Computer engineering, University of Maryland, National Institute of Standards and Technology

Startups operate with limited resources under time pressure. Thus, building minimal product versions to test and validate ideas has emerged as a way to avoid the wasteful creation of complicated products that may prove unsuccessful in the market. Often, the design of these early product versions needs to be done fast and with little advance information from end-users. In this paper we introduce the Minimum Viable User eXperience (MVUX), which aims at providing users a good enough user experience already in the early, minimal versions of the product. MVUX enables communication of the envisioned product value and the gathering of meaningful feedback, and it can promote positive word of mouth. To understand what MVUX consists of, we conducted an interview study with 17 entrepreneurs from 12 small startups. The main elements of MVUX recognized are Attractiveness, Approachability, Professionalism, and Selling the Idea. We present the structured framework and the elements' contributing qualities.

Measurement theory and dimensional analysis: Methodological impact on the comparison and evaluation process

Comparison and ranking of solutions are central tasks of the design process. Designers have to deal with decisions that simultaneously involve multiple criteria. Those criteria are often inconsistent in the sense that they are expressed according to different types of metrics: usual engineering performance indicators are expressed in physical quantities (i.e. the SI system), while indicators such as preference functions can only be "measured" using other types of qualitative metrics. This aspect limits the scientific consistency of design, because a coherent scientific framework would first require the creation of a unified list of fundamental properties. A combined analysis of measurement theory, General Design Theory (GDT) and dimensional analysis gives interesting insight for creating guidelines to establish a coherent measurement system. This article establishes a list of fundamental requirements. We expect that these guidelines can help engineers and designers be more aware of the drawbacks of using wrong comparison procedures and of the limitations associated with weak measurement scales. The article analyses the fundamental aspects available in major scientific publications related to comparison, provides a synthesis of these basic concepts, and unifies them from a design perspective. A practical design methodology using the fundamental results of this article as prerequisites has been implemented by the authors.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: LGI, LGI Laboratory, Helsinki University of Technology, Aalto University

Host publication information

Title of host publication: 19th International Conference on Design Theory and Methodology and 1st International Conference on Micro and Nano Systems, presented at - 2007 ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, IDETC/CIE2007

In recent years, parameterized dataflow has evolved as a useful framework for modeling synchronous and cyclo-static graphs in which arbitrary parameters can be changed dynamically. Parameterized dataflow has proven to have significant expressive power for managing the dynamics of DSP applications in important ways. However, efficient hardware synthesis techniques for parameterized dataflow representations are lacking. This paper addresses this void; specifically, it investigates efficient field-programmable gate array (FPGA) based implementation of parameterized cyclo-static dataflow (PCSDF) graphs. We develop a scheduling technique for throughput-constrained minimization of dataflow buffering requirements when mapping PCSDF representations of DSP applications onto FPGAs. The proposed scheduling technique is integrated with an existing formal schedule model, the generalized schedule tree, to reduce schedule cost. To demonstrate our new, hardware-oriented PCSDF scheduling technique, we have designed a real-time base station emulator prototype based on a subset of long-term evolution (LTE), a key cellular standard.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Signal Processing Research Community (SPRC), National Instruments, University of Maryland, Department of Electrical and Computer Engineering

Local network-based measures to assess the inferability of different regulatory networks

The purpose of this study is to compare the inferability of various synthetic as well as real biological regulatory networks. In order to assess differences, we apply local network-based measures: instead of applying global measures, we investigate and assess an inference algorithm locally, at the level of individual edges and subnetworks. We demonstrate the behaviour of our local network-based measures with respect to different regulatory networks by conducting large-scale simulations, using ARACNE as an exemplary inference algorithm. The results from our exploratory analysis allow us not only to gain new insights into the strengths and weaknesses of an inference algorithm with respect to the characteristics of different regulatory networks, but also to obtain information that could be used to design novel problem-specific statistical estimators. [Includes supplementary material]

Lean software startup – an experience report from an entrepreneurial software business course

This paper offers blueprints for, and reports upon three years' experience from, teaching the university course "Lean Software Startup" for information technology and economics students. The course aims to give a learning experience in ideation/innovation and subsequent product and business development using the lean startup method. The course educates the students in software business, entrepreneurship, teamwork and the lean startup method. The paper describes the pedagogical design and practical implementation of the course in sufficient detail to serve as an example of how entrepreneurship and business issues can be integrated into a software engineering curriculum. The course is evaluated through learning diaries and a questionnaire, as well as the primary teacher's lessons from the three course instances. We also examine the course in the context of CDIO and show its connection points to this broader engineering education framework. Finally, we discuss the challenges and opportunities of engaging students with different backgrounds in a hands-on entrepreneurial software business course.

The lean manufacturing philosophy includes several methods that aim to remove waste from production. This paper studies lean manufacturing methods and how simulation is used to examine them. To this end, it reviews papers that study simulation together with lean methods. The reviewed papers are categorized according to the lean methods used and the types of results obtained. An analysis is performed to determine the frequency of occurrence of the different methods and result types. Typical methods in the papers are different types of value stream mapping and work-in-process models. An exploratory analysis, using association analysis, is performed to reveal the relationships between the methods and result types; it reveals the methods that are commonly studied together in the literature. The paper also lists research areas that are not considered in the literature. These areas are often related to the analysis of variation.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Department of Mechanical Engineering and Industrial Systems, Research area: Manufacturing and Automation, Aalto University, Department of Engineering Design and Production

This article documents a study on artificial neural networks (ANNs) applied to the field of engineering, and more specifically a study that takes advantage of prior domain knowledge of engineering systems to improve the learning capabilities of ANNs by reducing their dimensionality. The proposed approach ultimately leads to training a smaller ANN, offering advantages in training performance such as lower mean squared error, lower cost, and faster convergence. The article proposes to associate functional architecture, Pi numbers, and causal graphs, and presents a design process to generate optimized knowledge-based ANN (KB-ANN) topologies. The article starts with a literature survey of ANNs and their topologies. Then, an important distinction is made between system-behavior-centered topologies and ANN-centered topologies. The Dimensional Analysis Conceptual Modeling (DACM) framework is introduced as a way of implementing the system-behavior-centered topology. One case study is analyzed with the goal of defining an optimized KB-ANN topology. The study shows that the KB-ANN topology performed significantly better, in terms of the size of the required training set, than a conventional fully-connected ANN topology. Future work will investigate the application of KB-ANNs to additive manufacturing.

The present paper proposes a structured Product Development Lifecycle (PDL) model to deal with the concept design stage of complex assemblies. The proposed method provides a systematic approach to design aimed at improving requirements management, project management and communication among stakeholders, as well as at avoiding project failures and reducing project development time. This research also provides suggestions and recommendations for utilizing different analysis, synthesis and assessment methodologies along with the proposed approach. The process developed, named Iterative and Participative Axiomatic Design Process (IPADeP), is consistent with ISO/IEC 15288:2008, "Systems and software engineering", and the INCOSE Systems Engineering Handbook. It is an iterative and incremental design process, participative and requirements-driven, based on the theory of the Axiomatic Product Development Lifecycle (APDL). IPADeP provides a systematic methodology in which, starting from a set of experts' assumptions, a number of conceptual solutions are generated, analysed and evaluated. Based on the results obtained, new iterations can be performed for each level of decomposition while product requirements are refined. In this paper, we apply IPADeP to the initial phase of conceptual design activities for the DEMO divertor-to-vacuum-vessel locking system in order to propose new, innovative solutions.

Integration of dataflow-based heterogeneous multiprocessor scheduling techniques in GNU radio

As the variety of off-the-shelf processors expands, traditional implementation methods for digital signal processing and communication systems are no longer adequate to achieve design objectives in a timely manner. Designers need to easily track changes in computing platforms and apply them efficiently, while reusing legacy code and optimized libraries that target specialized features in single processing units. In this context, we propose an integration workflow to schedule and implement Software Defined Radio (SDR) protocols, developed in the GNU Radio environment, on heterogeneous multiprocessor platforms. We show how to utilize Single Instruction Multiple Data (SIMD) units provided in Graphics Processing Units (GPUs) along with vector accelerators implemented in General Purpose Processors (GPPs). We augment a popular SDR framework (i.e., GNU Radio) with a library that seamlessly allows offloading of algorithm kernels mapped to the GPU without changing the original protocol description. Experimental results show how our approach can be used to efficiently explore design spaces for SDR system implementation, and examine the overhead of the integrated backend (software component) library.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Signal Processing Research Community (SPRC), University of Maryland, Department of Electrical and Computer Engineering, Virginia Tech, Laboratory for Telecommunications Sciences

Instrumentation-Driven Validation of Dataflow Applications

Dataflow modeling offers a myriad of tools for designing and optimizing signal processing systems. A designer is able to take advantage of dataflow properties to effectively tune the system in connection with functionality and different performance metrics. However, a disparity in the specification of dataflow properties and the final implementation can lead to incorrect behavior that is difficult to detect. This motivates the problem of ensuring consistency between dataflow properties that are declared or otherwise assumed as part of dataflow-based application models, and the dataflow behavior that is exhibited by implementations that are derived from the models. In this paper, we address this problem by introducing a novel dataflow validation framework (DVF) that is able to identify disparities between an application’s formal dataflow representation and its implementation. DVF works by instrumenting the implementation of an application and monitoring the instrumentation data as the application executes. This monitoring process is streamlined so that DVF achieves validation without major overhead. We demonstrate the utility of our DVF through design and implementation case studies involving an automatic speech recognition application, a JPEG encoder, and an acoustic tracking application.

Background: Gene networks are considered to represent various aspects of molecular biological systems meaningfully because they naturally provide a systems perspective on molecular interactions. In this respect, a functional understanding of the transcriptional regulatory network is considered key to elucidating the functional organization of an organism. Results: In this paper we study the functional robustness of the transcriptional regulatory network of S. cerevisiae. We model the information processing in the network as a first-order Markov chain and study the influence of single-gene perturbations on the global, asymptotic communication among genes. Modification in the communication is measured by an information-theoretic measure that allows us to predict genes that are 'fragile' with respect to single-gene knockouts. Our results demonstrate that the predicted set of fragile genes contains a statistically significant enrichment of so-called essential genes, which are experimentally found to be necessary for yeast viability. Further, a structural analysis of the transcriptional regulatory network reveals that there are significant differences between fragile genes, hub genes, and genes with a high betweenness centrality value. Conclusion: Our study demonstrates not only that a combination of graph-theoretical, information-theoretical and statistical methods leads to meaningful biological results, but also that such methods allow the study of information processing in gene networks, rather than just their structural properties.
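The knockout analysis described above can be caricatured in a few lines: model the network as a row-stochastic transition matrix, remove one gene, and compare the asymptotic (stationary) distributions with a KL-type divergence. This is an illustrative sketch of the idea, not the paper's exact measure; all names and the divergence choice are assumptions.

```python
import numpy as np

def stationary(P, iters=500):
    """Asymptotic distribution of a row-stochastic matrix via power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

def fragility(P, gene, eps=1e-12):
    """Toy single-gene-knockout score: delete `gene`, renormalize the
    remaining rows, and compare stationary distributions with a
    KL-style divergence over the surviving genes."""
    keep = [i for i in range(P.shape[0]) if i != gene]
    Q = P[np.ix_(keep, keep)]
    Q = Q / np.maximum(Q.sum(axis=1, keepdims=True), eps)  # renormalize rows
    p = stationary(P)[keep]
    p = p / max(p.sum(), eps)
    q = stationary(Q)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

Genes whose removal changes the asymptotic communication the most would be flagged as fragile candidates.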

Influence of specimen type and reinforcement on measured tension-tension fatigue life of unidirectional GFRP laminates

It is well known that standardised tension-tension fatigue test specimens of unidirectional (UD) glass-fibre-reinforced plastic (GFRP) laminates tend to fail at the end tabs, so the true fatigue life is underestimated. The first objective of this study was to find a test specimen for UD GFRP laminates that fails in the gauge section. The second objective was to compare the fatigue performance of two laminates, one with a newly developed UD powder-bound fabric as reinforcement and the other with a quasi-UD stitched non-crimp fabric as reinforcement. In the first phase, a rectangular specimen in accordance with the ISO 527-5 standard and two slightly different dog-bone shaped specimens were evaluated by means of finite element modelling. Subsequent comparative fatigue tests were performed on the laminates with the three specimen types. The results showed that the specimen type has a significant effect on the failure mode and the measured fatigue life of the laminates. A significantly higher fatigue life was measured for the laminate with the powder-bound fabric reinforcement compared to the laminate with the stitched reinforcement.

Influence of relative humidity and physical load during storage on dustiness of inorganic nanomaterials: implications for testing and risk assessment

Dustiness testing using a down-scaled EN15051 rotating drum was used to investigate the effects of storage conditions, such as relative humidity (RH) and physical loading, on the dustiness of five inorganic metal oxide nanostructured powder materials. The tests consisted of measurements of the gravimetric respirable dustiness index and particle size distributions. Water uptake of the powders during 7 days of incubation was investigated as an explanatory factor for the changes. The consequences of these varying storage conditions for exposure modelling were tested using the control banding and risk management tool NanoSafer. Drastic material-specific effects on the powder respirable dustiness index were observed, with the change in TiO<inf>2</inf> from 30 % RH (639 mg/kg) to 50 % RH (1.5 mg/kg). All five tested materials showed a decreasing dustiness index as relative humidity increased from 30 to 70 % RH. Tests of powder water uptake showed an apparent link with the decreasing dustiness index. The effects of powder compaction appeared more material-specific, with both increasing and decreasing dustiness indices observed as an effect of compaction. Tests of control banding exposure models using the measured dustiness indices in three different exposure scenarios showed that for two of the tested materials, a 20 % change in RH shifted the exposure banding from the lowest level to the highest. The study shows the importance of powder storage conditions prior to tests for the classification of material dustiness indices. It also highlights the importance of correct storage information, and of expanding the dustiness test conditions with respect to relative humidity specifically, when using dustiness indices as a primary parameter for source strength in exposure assessment.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Materials Science, Research group: Materials Characterization, Engineering materials science and solutions (EMASS), Department of Micro and Nanotechnology, Denmark Technical University DTU, Finnish Institute of Occupational Health, CIC biomaGUNE, National Research Centre for the Working Environment

Inferring the conservative causal core of gene regulatory networks

Background: Inferring gene regulatory networks from large-scale expression data is an important problem that has received much attention in recent years. These networks have the potential to provide insights into the causal molecular interactions of biological processes. Hence, from a methodological point of view, reliable estimation methods based on observational data are needed to approach this problem practically. Results: In this paper, we introduce a novel gene regulatory network inference (GRNI) algorithm, called C3NET. We compare C3NET with four well-known methods, ARACNE, CLR, MRNET and RN, conducting in-depth numerical ensemble simulations, and demonstrate, also for biological expression data from E. coli, that C3NET performs consistently better than the best known GRNI methods in the literature. In addition, it also has low computational complexity. Since C3NET is based on estimates of mutual information values in conjunction with a maximization step, our numerical investigations demonstrate that our inference algorithm exploits causal structural information in the data efficiently. Conclusions: For systems biology to succeed in the long run, it is of crucial importance to establish methods that extract large-scale gene networks from high-throughput data that reflect the underlying causal interactions among genes or gene products. Our method can contribute to this endeavor by demonstrating that an inference algorithm with a neat design permits not only a more intuitive and possibly biological interpretation of its working mechanism, but can also produce superior results.
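The maximization step the abstract refers to can be sketched compactly: given a mutual-information (MI) matrix, each gene contributes at most one edge, to the neighbor with its largest significant MI value. This is an illustrative reduction; the actual C3NET pipeline also estimates the MI values from expression data and determines significance statistically.

```python
import numpy as np

def c3net_core(mi, threshold=0.0):
    """Sketch of C3NET's maximization step: from a symmetric MI matrix,
    keep for each gene only the edge to its strongest significant neighbor."""
    n = mi.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    m = mi.astype(float).copy()
    np.fill_diagonal(m, -np.inf)          # a gene cannot pick itself
    for i in range(n):
        j = int(np.argmax(m[i]))          # strongest neighbor of gene i
        if m[i, j] > threshold:           # keep only significant MI values
            adj[i, j] = adj[j, i] = True  # undirected edge
    return adj
```

Because each gene adds at most one edge, the resulting network is sparse, which is one reason for the algorithm's low computational complexity.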

Independent Loops Search in Flow Networks Aiming for Well-Conditioned System of Equations

We approach the problem of choosing linearly independent loops in a pipe-flow network as choosing the best-conditioned submatrix of a given larger matrix. We present some existing results from graph theory and submatrix selection problems, based on which we construct three heuristic algorithms for choosing the loops. The heuristics are tested on two pipe-flow networks that differ significantly in the distribution of pipes and nodes in the network.
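The submatrix-selection view of the problem can be made concrete with a brute-force baseline: choose the k rows whose submatrix has the smallest condition number. The paper's contribution is precisely heuristics that avoid this combinatorial search; the sketch below only illustrates the objective, and its names are assumptions.

```python
import numpy as np
from itertools import combinations

def best_conditioned_rows(A, k):
    """Exhaustive baseline for the loop-selection objective: pick the k rows
    of A whose k-row submatrix minimizes the 2-norm condition number."""
    best, best_cond = None, np.inf
    for rows in combinations(range(A.shape[0]), k):
        c = np.linalg.cond(A[list(rows), :])
        if c < best_cond:                 # keep the best-conditioned subset
            best, best_cond = rows, c
    return best, best_cond
```

For realistic networks the number of candidate loop sets explodes, which is why greedy, graph-based heuristics are needed in practice.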

Improved Session Continuity in 5G NR with Joint Use of Multi-Connectivity and Guard Bandwidth

The intermittent millimeter-wave radio links caused by human-body blockage are an inherent feature of the 5G New Radio (NR) technology by 3GPP. To improve session continuity in these emerging systems, two mechanisms have recently been proposed: multi-connectivity and guard bandwidth. The former allows establishing multiple spatially-diverse connections and switching between them dynamically, while the latter reserves a fraction of the system bandwidth for sessions changing their state from non-blocked to blocked, which ensures that ongoing sessions have priority over new ones. In this paper, we assess the joint performance of these two schemes for user- and system-centric metrics of interest. Our numerical results reveal that the multi-connectivity operation alone may not suffice to reduce the ongoing session drop probability considerably. On the other hand, the use of guard bandwidth significantly improves session continuity at the cost of a somewhat higher new-session drop probability and lower system resource utilization. Surprisingly, a 5G NR system implementing both of these techniques inherits their drawbacks. However, complementing it with an initial AP selection procedure effectively alleviates these limitations by maximizing the system resource utilization, while still providing sufficient flexibility to enable the desired trade-off between new and ongoing session drop probabilities.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Electrical Engineering, Department of Chemistry and Bioengineering, Peoples’ Friendship University of Russia

Implementation of a Multirate Resampler for Multi-carrier Systems on GPUs

Efficient sample rate conversion is of widespread importance in modern communication and signal processing systems. Although many efficient polyphase filterbank structures exist for this purpose, they are mainly geared toward serial, custom, dedicated hardware implementations of a single task. There is, therefore, a need for more flexible sample rate conversion systems that are resource-efficient and provide high performance. To address these challenges, we present an all-software, fully parallel, multirate resampling method based on graphics processing units (GPUs). The proposed approach is well suited for wireless communication systems that have simultaneous requirements of high throughput and low latency. Utilizing the multidimensional architecture of GPUs, our design allows efficient parallel processing across multiple channels and frequency bands at baseband. The resulting architecture provides flexible sample rate conversion designed to address modern communication requirements, including real-time processing of multiple carriers simultaneously.
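
As background, rational sample rate conversion by a factor L/M can be sketched in a few lines of NumPy. This naive zero-stuff/filter/decimate form is the computation that polyphase structures reorganize for efficiency; the code is illustrative, not the paper's GPU implementation:

```python
import numpy as np

def resample_poly_naive(x, L, M, taps):
    """Rational rate change by L/M: zero-stuff by L, low-pass filter,
    then keep every M-th sample. Polyphase forms avoid computing the
    inserted zeros explicitly, and the independent polyphase branches
    are what a GPU can evaluate in parallel."""
    up = np.zeros(len(x) * L)
    up[::L] = x                               # insert L-1 zeros between samples
    filtered = np.convolve(up, taps)[:len(up)]  # causal FIR filtering
    return filtered[::M]

# A DC input stays DC when the filter taps sum to L (here taps=[1,1], L=2)
y = resample_poly_naive(np.ones(8), 2, 3, np.array([1.0, 1.0]))
```

A production resampler would use a properly designed anti-imaging/anti-aliasing low-pass filter rather than the two-tap filter used here for clarity.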

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Mechanics and Design, Department of Civil Engineering, Life Cycle Effectiveness of the Built Environment (LCE@BE), Academy of Sciences of the Czech Republic, Institute of Computer Science of the Academy of Sciences of the Czech Republic, Department of Civil and Structural Engineering, Aalto University

Wireless standards are evolving rapidly due to the exponential growth in the number of portable devices along with applications with high data rate requirements. Adaptable software-based signal processing implementations for these devices can make the deployment of constantly evolving standards faster and less expensive. The flagship technology from the IEEE WLAN family, IEEE 802.11ac, aims at achieving very high throughputs in local area connectivity scenarios. This article presents a software-based implementation of the Multiple-Input Multiple-Output (MIMO) transmitter and receiver baseband processing conforming to the IEEE 802.11ac standard, which can achieve transmission bit rates beyond 1 Gbps. This work focuses on the physical layer frequency domain processing. Various configurations, including 2×2 and 4×4 MIMO, are considered for the implementation. To utilize the available data and instruction level parallelism, a DSP core with vector extensions is selected as the implementation platform. The feasibility of the presented software-based solution is then assessed by studying the number of clock cycles and the power consumption of the different scenarios implemented on this core. Such Software Defined Radio based approaches can potentially offer more flexibility, high energy efficiency, and reduced design effort, and thus shorter time-to-market cycles in comparison with conventional fixed-function hardware methods.

Hierarchical coordination of periodic genes in the cell cycle of Saccharomyces cerevisiae

Background: Gene networks are a representation of molecular interactions among genes or their products and hence form causal networks. Despite intense study in recent years, most investigations have so far focused on inferential methods for reconstructing gene networks from experimental data, or on their structural properties, e.g., degree distributions. Structural analysis aimed at gaining functional insight into the organizational principles of, e.g., pathways remains underappreciated. Results: In the present paper we analyze cell cycle regulated genes in S. cerevisiae. Our analysis is based on the transcriptional regulatory network, representing causal interactions and not just associations or correlations between genes, and a list of known periodic genes; no further data are used. Partitioning the transcriptional regulatory network according to a graph-theoretical property leads to a hierarchy in the network and, hence, in the information flow, allowing us to identify two groups of periodic genes. This reveals a novel conceptual interpretation of the working mechanism of the cell cycle and the genes regulated by this pathway. Conclusion: Aside from the results obtained for the cell cycle of yeast, our approach could serve as an example for the analysis of general pathways by exploiting the rich causal structure of inferred and/or curated gene networks, including protein or signaling networks.

We carry out the semiclassical expansion of the one-particle density matrix up to second order in ħ, using the method of Grammaticos and Voros based on the Wigner transform of operators. We show that the resulting density matrix is Hermitian and idempotent, in contrast with the well-known result of the semiclassical Kirzhnits expansion. Our density matrix leads to the same particle density and kinetic energy density as in the literature, and it satisfies the consistency criterion of the Euler equation. The derived Hermitian density matrix clarifies the ambiguity in the usefulness of gradient expansion approximations and might reignite the development of density functionals with semiclassical methods.

Harnessing the complexity of gene expression data from cancer: From single gene to structural pathway methods

High-dimensional gene expression data provide a rich source of information because they capture the expression level of genes in dynamic states that reflect the biological functioning of a cell. For this reason, such data are suitable for revealing system-level properties inside a cell, e.g., in order to elucidate molecular mechanisms of complex diseases like breast or prostate cancer. However, this depends strongly not only on the sample size and the correlation structure of a data set, but also on the statistical hypotheses tested. Many different approaches have been developed over the years to analyze gene expression data to (I) identify changes in single genes, (II) identify changes in gene sets or pathways, and (III) identify changes in the correlation structure in pathways. In this paper, we review statistical methods for all three types of approaches, including subtypes, in the context of cancer data, provide links to software implementations and tools, and also address the general problem of multiple hypothesis testing. Further, we provide recommendations for the selection of such analysis methods. Reviewers: This article was reviewed by Arcady Mushegian, Byung-Soo Kim and Joel Bader.

Graph based representation and analyses for conceptual stages

What is the fundamental similarity between investing in the stock of a company because you like its products, and selecting a design concept because you have been impressed by the esthetic quality of the presentation made by the team developing it? Beyond the fact that both decisions rest on a surface analysis of the situation, both reflect a fundamental feature of human cognition. The human brain strives to minimize the effort required to solve a cognitive task and, when possible, uses an automatic mode relying on recognition, memory, and causality. This mode is even used on occasion without the engineer being conscious of it. Such tendencies naturally push engineers to rush into known solutions, to avoid analyzing the context of a design problem, to avoid modelling design problems, and to take decisions based on isolated evidence. These behaviors are familiar to experienced teachers and engineers, and the tendency is magnified by the time pressure imposed on the engineering design process. Early phases in particular have to be kept short despite the large impact of decisions taken at this stage. Few support tools are capable of supporting a deep analysis of early design conditions and problems, given the fuzziness and complexity of the early stage. The present article hypothesizes that the natural ability of humans to deal with cause-effect relations pushes toward the extensive use of causal graph analysis during the design process, and specifically during the early phases. A global framework based on graphs is presented in this paper to efficiently support the early stages. The approach used to generate graphs, to analyze them, and to support creativity based on that analysis forms the central contribution of this paper.

To help developers during Scrum planning poker, in our previous work we ran a case study on a Moonlight Scrum process to understand whether it is possible to introduce functional size metrics to improve estimation accuracy, and to measure the accuracy of expert-based estimation. The results of that original study showed that expert-based estimations are more accurate than those obtained by means of models calculated with functional size measures. To validate these results and extend them to plain Scrum processes, we replicated the original study twice, applying an exact replication to two plain Scrum development processes. The results of the replicated study show that the effort estimated by the developers is very accurate, and more accurate than that obtained through functional size measures. In particular, SiFP and IFPUG Function Points have low predictive power and thus do not help to improve estimation accuracy in Scrum.
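
The abstract does not name its accuracy measure; one standard choice in effort-estimation studies is the mean magnitude of relative error (MMRE), sketched here purely for illustration (the effort values are hypothetical):

```python
def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error: mean of |actual - estimate| / actual.
    Lower is better; it is often paired with Pred(25), the share of
    estimates falling within 25% of the actual effort."""
    errors = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    return sum(errors) / len(errors)

# Hypothetical actual vs. estimated effort, e.g. in person-hours
score = mmre([10, 20, 40], [12, 18, 50])   # (0.2 + 0.1 + 0.25) / 3
```

Comparing such a score for expert estimates against a regression model fitted on functional size measures is one way to operationalize the comparison the study describes.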

Exploiting statically schedulable regions in dataflow programs

Dataflow descriptions have been used in a wide range of Digital Signal Processing (DSP) applications, such as multimedia processing and wireless communications. Among the various forms of dataflow modeling, Synchronous Dataflow (SDF) is geared towards static scheduling of computational modules, which improves system performance and predictability. However, many DSP applications do not fully conform to the restrictions of SDF modeling. More general dataflow models, such as CAL (Eker and Janneck 2003), have been developed to describe dynamically-structured DSP applications. Such generalized models can express dynamically changing functionality, but lose the powerful static scheduling capabilities provided by SDF. This paper focuses on the detection of SDF-like regions in dynamic dataflow descriptions, in particular in the generalized specification framework of CAL. This is an important step for applying static scheduling techniques within a dynamic dataflow framework. Our techniques combine the advantages of different dataflow languages and tools, including CAL (Eker and Janneck 2003), DIF (Hsu et al. 2005) and CAL2C (Roquier et al. 2008). In addition to detecting SDF-like regions, we apply existing SDF scheduling techniques to exploit the static properties of these regions within enclosing dynamic dataflow models. Furthermore, we propose an optimized approach for mapping SDF-like regions onto parallel processing platforms such as multi-core processors.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Signal Processing Research Community (SPRC), University of Maryland, Xilinx Research Labs, UBL, Department of Electrical and Computer Engineering

Entropy analysis of word-length series of natural language texts: Effects of text language and genre

We estimate the n-gram entropies of natural language texts in word-length representation and find that these are sensitive to text language and genre. We attribute this sensitivity to changes in the probability distribution of the lengths of single words and emphasize the crucial role of the uniformity of probabilities of having words with length between five and ten. Furthermore, comparison with the entropies of shuffled data reveals the impact of word length correlations on the estimated n-gram entropies.
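
The core computation is compact; a minimal Python sketch of word-length n-gram entropy with the plug-in (maximum likelihood) estimator. The example sentence is illustrative, and the paper's actual estimator and corpora are not reproduced:

```python
from collections import Counter
from math import log2

def ngram_entropy(seq, n):
    """Plug-in estimate of the n-gram entropy of a symbol sequence, in bits:
    H_n = -sum p(g) log2 p(g) over observed n-grams g."""
    grams = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

# Word-length representation of a short (illustrative) sentence
words = "we estimate the entropies of natural language texts".split()
lengths = [len(w) for w in words]       # [2, 8, 3, 9, 2, 7, 8, 5]
H1 = ngram_entropy(lengths, 1)          # unigram entropy of word lengths
H2 = ngram_entropy(lengths, 2)          # bigram entropy
```

Comparing such estimates against those of shuffled word-length sequences is how the impact of word-length correlations, mentioned in the abstract, can be isolated.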

Modification of a fatigue criterion valid for homogeneous multiaxial stress states to account for the beneficial effect of stress gradients is traditionally performed by modifying the stress terms in the fatigue criterion, thereby introducing new parameters that need to be calibrated. Here the stress terms are left unchanged and, instead, the parameters in the fatigue criterion are modified. This modification is performed, in principle, along the lines of Siebel and Stieler and introduces Neuber's parameter as the only new parameter; however, as soon as the ultimate strength of the material is known, Neuber's parameter is known as well. The methodology introduced therefore requires no new calibration process. Here a specific fatigue criterion valid for homogeneous multiaxial stress states is enhanced by this procedure; predictions of this simple approach are compared with a broad range of experimental data, and good accuracy is achieved. Moreover, the approach adopted can be applied to fatigue criteria other than the one considered here.

This paper investigates the performance of energy detection-based spectrum sensing over Fisher-Snedecor F fading channels. To this end, an analytical expression for the corresponding average detection probability is first derived and then extended to account for collaborative spectrum sensing. The complementary receiver operating characteristics (ROC) are analyzed for different conditions of the average signal-to-noise ratio (SNR), time-bandwidth product, multipath fading, shadowing, and number of collaborating users. It is shown that the energy detection performance is strongly linked to the severity of the multipath fading and the amount of shadowing, whereby even small variations in either of these physical phenomena significantly impact the detection probability. Also, the versatile modeling capability of the Fisher-Snedecor F distribution is verified in the context of energy detection-based spectrum sensing, as it provides considerably more accurate characterization than the conventional Rayleigh fading model. To confirm the validity of the analytical results presented in this paper, we compare them with simulation results.

Dwelling design needs to consider multiple objectives and uncertainties to achieve effective and robust performance. A multi-objective robust optimisation method is outlined and then applied to optimise a one-storey archetype in Delhi for a healthy, low-energy design. EnergyPlus is used to model a sample of selected design and uncertainty inputs. Sensitivity analysis identifies significant parameters, and a meta-model is constructed to replicate the input-output relationships. The meta-model is employed in a hybrid multi-objective optimisation algorithm that accounts for uncertainty. The results demonstrate the complexities of achieving low energy consumption together with healthy indoor environmental quality.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: University College London, University of Oxford

Contributors: Nix, E., Das, P., Taylor, J., Davies, M.

Number of pages: 8

Pages: 2093-2100

Publication date: 1 Jan 2015

Host publication information

Title of host publication: Proceedings of the 2014 Building Simulation and Optimization Conference

Effect of paint baking treatment on the properties of press hardened boron steels

This study examines the effect of a typical paint baking process on the properties of press hardened boron steels. The bake hardening response of four 22MnB5 steels with different production histories and of two other boron steels of 30MnB5 and 34MnB5 type was analyzed. In particular, the effect of steel carbon content and prior austenite grain size on the strength of the bake hardening treated steels was investigated. Press hardened steels showed a relatively strong bake hardening effect, 80–160 MPa, in terms of yield strength. In addition, a clear decrease in ultimate tensile strength, 30–150 MPa, was observed due to baking. The changes in tensile strength showed a dependency on the carbon content of the steel: in general, higher carbon content led to a larger decrease in tensile strength. Smaller prior austenite grain size resulted in a higher increase in yield strength, with the exception of the micro-alloyed 34MnB5. Transmission electron microscopy analysis carried out on the 34MnB5 revealed niobium-rich mixture carbides of (Nb, Ti)C, which most likely influenced its different bake hardening response. The present results indicate that the bake hardening response of press hardened steels depends on both prior austenite grain size and carbon content, but is also affected by other alloying elements. The observed correlation between prior austenite grain size and bake hardening response can be used to optimize the production of the standard grades 22MnB5 and 30MnB5. In addition, our study suggests that the baking process improves the post-uniform elongation and ductile fracture behavior of 34MnB5, but does not significantly influence the ductile fracture mechanisms of 22MnB5 and 30MnB5, which represent lower strength levels.

Today, software teams can deploy new software versions to users at an increasing speed, even continuously. Although this has enabled faster response to changing customer needs than ever before, automated customer feedback gathering has not yet matured to the same level. For these purposes, automated collection of quantitative data about how users interact with systems can provide software teams with an interesting alternative. When starting such a process, however, teams immediately face difficult decisions: what kind of technique should be used for collecting user-interaction data? In this paper, we describe the reasons for choosing specific collection techniques in three cases and refine a previously designed selection framework based on their data. The study is part of ongoing design science research and was conducted using case study methods. A few distinct criteria that practitioners valued the most arose from the results.

Ecosystems Here, There, and Everywhere — A Barometrical Analysis of the Roots of ‘Software Ecosystem’

This study structures the ecosystem literature by using a bibliometrical approach to analyse the theoretical roots of ecosystem studies. Several disciplines, such as innovation, management, and software studies, have established their own streams in ecosystem research. This paper reports the results of analysing 601 articles from the Thomson Reuters Web of Science database and identifies ten separate research communities that have established their own thematic ecosystem disciplines. We show that five sub-communities have emerged inside the field of software ecosystems. The software ecosystem literature draws its theoretical background from (1) technical, (2) research methodology, (3) business, (4) management, and (5) strategy oriented disciplines. The results pave the way for future research by illustrating the existing and missing links and directions in the field of software ecosystems.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Pori Department, Research group: Business Ecosystems, Networks and Innovations, VTT Technical Research Centre of Finland, University of Turku

Distant speech separation using predicted time-frequency masks from spatial features

Speech separation algorithms face the difficult task of producing a high degree of separation without introducing unwanted artifacts. The time-frequency (T-F) masking technique applies a real-valued (or binary) mask on top of the signal's spectrum to filter out unwanted components. The practical difficulty lies in the mask estimation. Often, using efficient masks engineered for separation performance leads to unwanted musical noise artifacts in the separated signal, which lowers the perceptual quality and intelligibility of the output. Microphone arrays have long been studied for the processing of distant speech. This work uses a feed-forward neural network to map a microphone array's spatial features into a T-F mask. A Wiener filter is used as the desired mask for training the neural network on speech examples in a simulated setting. The T-F masks predicted by the neural network are combined to obtain an enhanced separation mask that exploits the information regarding interference between all sources. The final mask is applied to the delay-and-sum beamformer (DSB) output. The algorithm's objective separation capability, in conjunction with the intelligibility of the separated speech, is tested with speech recorded from distant talkers in two rooms at two distances. The results show improvement in an instrumental measure of intelligibility and in frequency-weighted SNR over a complex-valued non-negative matrix factorization (CNMF) source separation approach, spatial sound source separation, and conventional beamforming methods such as the DSB and the minimum variance distortionless response (MVDR) beamformer.
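
The Wiener-filter training target mentioned above has a simple closed form in the power domain; a hedged NumPy sketch (the toy spectra below are invented for illustration, and the paper's exact target construction may differ):

```python
import numpy as np

def wiener_mask(speech_power, noise_power):
    """Oracle Wiener (ideal ratio) mask: the fraction of each T-F cell's
    power attributed to speech. Values lie in [0, 1] and are applied
    multiplicatively to the mixture spectrum."""
    return speech_power / (speech_power + noise_power)

S = np.array([[4.0, 1.0], [0.0, 9.0]])   # |speech STFT|^2 (toy values)
N = np.array([[1.0, 1.0], [2.0, 1.0]])   # |noise STFT|^2 (toy values)
M = wiener_mask(S, N)                    # [[0.8, 0.5], [0.0, 0.9]]
```

At training time such oracle masks are computable because the clean sources are known from simulation; at test time the network predicts the mask from spatial features alone.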

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Signal Processing, Research group: Audio research group

With increasing design dimensionality, it is becoming more difficult to solve multidisciplinary design optimization (MDO) problems. To reduce the dimensionality of MDO problems, many MDO decomposition strategies have been developed. However, those strategies treat the design problem as a black-box function, whereas in practice the designers usually have certain knowledge of their problem. In this paper, a method leveraging causal graphs and qualitative analysis is developed to reduce the dimensionality of an MDO problem by systematically modeling and incorporating knowledge of the design problem. A causal graph is employed to show the input-output relationships between variables. Qualitative analysis using a design structure matrix (DSM) is carried out to automatically find the variables that can be determined without optimization. According to the weights of the variables, the MDO problem is divided into two sub-problems: the optimization problem with respect to the important variables, and the one with the less important variables. The method is applied to an aircraft concept design problem, and the results show that the new dimension reduction and decomposition method can significantly improve optimization efficiency.

In this paper, we present a high data rate implementation of a digital predistortion (DPD) algorithm on a modern mobile multicore CPU containing an on-chip GPU. The proposed implementation is capable of running in real-time, thanks to the execution of the predistortion stage inside the GPU, and the execution of the learning stage on a separate CPU core. This configuration, combined with the low complexity DPD design, allows for more than 400 Msamples/s sample rates. This is sufficient for satisfying 5G new radio (NR) base station radio transmission specifications in the sub-6 GHz bands, where signal bandwidths up to 100 MHz are specified. The linearization performance is validated with RF measurements on two base station power amplifiers at 3.7 GHz, showing that the 5G NR downlink emission requirements are satisfied.
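
As an illustration of the predistortion principle only (the paper's DPD model, coefficients, GPU kernels, and learning stage are not reproduced here), a toy memoryless polynomial example: the predistorter pre-expands the signal so that a compressive amplifier model yields a more linear end-to-end response.

```python
import numpy as np

# Toy PA model with mild 3rd-order compression (illustrative, not a
# measured amplifier): y = x - 0.1 * x * |x|^2
pa = lambda x: x - 0.1 * x * np.abs(x) ** 2

def predistort(x, coeffs):
    # Odd-order memoryless polynomial DPD: z = c1*x + c3 * x * |x|^2
    return coeffs[0] * x + coeffs[1] * x * np.abs(x) ** 2

x = np.linspace(0.1, 1.0, 64)          # input amplitude sweep
z = predistort(x, [1.0, 0.1])          # pre-expand to counter compression
y = pa(z)                              # linearized cascade output

err_dpd = np.max(np.abs(y - x))        # residual nonlinearity with DPD
err_raw = np.max(np.abs(pa(x) - x))    # nonlinearity without DPD
```

In a real system the coefficients are identified adaptively (the learning stage the abstract assigns to a CPU core), and the polynomial typically includes memory terms; a single third-order term is used here only to make the mechanism visible.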

Development of an England-wide indoor overheating and air pollution model using artificial neural networks

With the UK climate projected to warm in future decades, there is an increased research focus on the risks of indoor overheating. Energy-efficient building adaptations may modify a building's risk of overheating and the infiltration of air pollution from outdoor sources. This paper presents the development of a national model of indoor overheating and air pollution, capable of modelling the existing and future building stocks along with changes to the climate, outdoor air pollution levels, and occupant behaviour. The model presented is based on a large number of EnergyPlus simulations run in parallel. A metamodelling approach is used to create a model that estimates the indoor overheating and air pollution risks for the English housing stock. The performance of neural networks (NNs) is compared to that of a support vector regression (SVR) algorithm when forming the metamodel. NNs are shown to give almost 50% better overall performance than SVR.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: University College London, London School of Hygiene and Tropical Medicine, Public Health England

Dataflow programming has received increasing attention in the age of multicore and heterogeneous computing. Modular and concurrent dataflow program descriptions enable highly automated approaches for design space exploration, optimization and deployment of applications. A great advance in dataflow programming has been the recent introduction of the RVC-CAL language. Having been standardized by the ISO, the RVC-CAL dataflow language provides a solid basis for the development of tools, design methodologies and design flows. This paper proposes a novel design flow for mapping RVC-CAL dataflow programs to parallel and heterogeneous execution platforms. Through the proposed design flow the programmer can describe an application in the RVC-CAL language and map it to multi- and many-core platforms, as well as GPUs, for efficient execution. The functionality and efficiency of the proposed approach is demonstrated by a parallel implementation of a video processing application and a run-time reconfigurable filter for telecommunications. Experiments are performed on GPU and multicore platforms with up to 16 cores, and the results show that for high-performance applications the proposed design flow provides up to 4 × higher throughput than the state-of-the-art approach in multicore execution of RVC-CAL programs.

Data Vault Mappings to Dimensional Model Using Schema Matching

In data warehousing, business-driven development defines the data requirements needed to fulfill reporting needs. A data warehouse stores current and historical data in one single place. Data warehouse architecture consists of several layers, each with its own purpose: a staging layer is a data storage area that assists data loading, a data vault modelled layer is the persistent storage that integrates data and stores history, and a publish layer presents data using a vocabulary familiar to the information users. Following a process that is driven by business requirements and starts from the publish layer structure creates a situation where the manual work requires a specialist who knows the data vault model. Our goal is to reduce the number of entities that can be selected in a transformation so that an individual developer does not need to know the whole solution, but can focus on a subset of entities (a partial schema). In this paper, we present two different schema matchers, one based on attribute names and another based on data flow mapping information. Schema matching based on data flow mappings is a novel addition to the current schema matching literature. Through the example of Northwind, we show how these two matchers affect the formation of a partial schema of transformation source entities. Based on our experiment with Northwind, we conclude that combining schema matching algorithms produces the correct entities in the partial schema.
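
As an illustration of the attribute-name matcher idea only (the similarity measure, threshold, and attribute names below are assumptions, not the paper's exact algorithm):

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    # Case-insensitive string similarity in [0, 1]
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_attributes(source_attrs, target_attrs, threshold=0.6):
    """For each source attribute, pick the most similar target attribute
    name; keep the pair only if it clears the similarity threshold."""
    matches = []
    for s in source_attrs:
        best = max(target_attrs, key=lambda t: name_similarity(s, t))
        if name_similarity(s, best) >= threshold:
            matches.append((s, best))
    return matches

pairs = match_attributes(["CustomerID", "OrderDate"],
                         ["customer_id", "order_date", "ship_city"])
# -> [("CustomerID", "customer_id"), ("OrderDate", "order_date")]
```

A data-flow-mapping matcher, the paper's novel addition, would instead follow lineage metadata from publish-layer columns back to data vault entities rather than comparing names.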

Data Flow Algorithms for Processors with Vector Extensions: Handling Actors With Internal State

Full use of the parallel computation capabilities of present and expected CPUs and GPUs requires the use of vector extensions. Yet many actors in data flow systems for digital signal processing have internal state (or, equivalently, an edge that loops from the actor back to itself), which imposes serial dependencies between actor invocations that make vectorizing across actor invocations impossible. Ideally, the inter-thread coordination required by serial data dependencies should be handled by code written by parallel programming experts, kept separate from the code specifying the signal processing operations. The purpose of this paper is to present one approach for doing so in the case of actors that maintain state. We propose a methodology for using the parallel scan (also known as prefix sum) pattern to create algorithms for multiple simultaneous invocations of such an actor that result in vectorizable code. Two examples of applying this methodology are given: (1) infinite impulse response (IIR) filters and (2) finite state machines (FSMs). The correctness and performance of the resulting IIR filters and of one class of FSMs are studied.
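
The prefix-sum idea for stateful actors can be illustrated on a first-order IIR recurrence y[n] = a*y[n-1] + x[n]: each input sample defines an affine map of the state, and composing affine maps is associative, so a scan applies. A minimal sequential Python sketch (a vectorized implementation would replace the loop with a log-depth parallel scan; this is an illustration of the pattern, not the paper's algorithm):

```python
def scan(op, items):
    """Inclusive prefix scan. Because op is associative, the same result
    can be computed by a log-depth parallel scan; it is written
    sequentially here for clarity."""
    out = [items[0]]
    for it in items[1:]:
        out.append(op(out[-1], it))
    return out

def iir_via_scan(x, a):
    """y[n] = a*y[n-1] + x[n], recast as composition of affine maps.
    Each sample contributes the map s -> a*s + x[n], encoded as (a, x[n]);
    composing map f then map g gives (g0*f0, g0*f1 + g1)."""
    combine = lambda f, g: (g[0] * f[0], g[0] * f[1] + g[1])
    prefixes = scan(combine, [(a, xn) for xn in x])
    return [b for (_, b) in prefixes]   # outputs assuming zero initial state

y = iir_via_scan([1.0, 1.0, 1.0, 1.0], 0.5)   # [1.0, 1.5, 1.75, 1.875]
```

The same trick generalizes to the paper's other example: FSM transition functions under composition are also associative, so batches of invocations can be combined in parallel before the serial state is threaded through.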

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Pervasive Computing, Research area: Computer engineering, Signal Processing Research Community (SPRC), Keysight Technologies, University of Maryland

In this paper we present ensembles of classifiers for automated animal audio classification, exploiting different data augmentation techniques for training convolutional neural networks (CNNs). The specific animal audio classification problems are (i) bird and (ii) cat sounds, whose datasets are freely available. We train five different CNNs on the original datasets and on versions augmented by four augmentation protocols working on the raw audio signals or on their spectrogram representations. We compare our best approaches with the state of the art, showing that we obtain the best recognition rate on the same datasets without ad hoc parameter optimization. Our study shows that different CNNs can be trained for the purpose of animal audio classification and that their fusion works better than the stand-alone classifiers. To the best of our knowledge, this is the largest study on data augmentation for CNNs on animal audio classification datasets using the same set of classifiers and parameters. Our MATLAB code is available at https://github.com/LorisNanni.

A wealth of literature exists on computing and visualizing cuts for the magnetic scalar potential of a current carrying conductor via Finite Element Methods (FEM) and harmonic maps to the circle. By a cut we refer to an orientable surface bounded by a given current carrying path (such that the flux through it may be computed) that restricts contour integrals on a curl-zero vector field to those that do not link the current-carrying path, analogous to branch cuts of complex analysis. This work is concerned with a study of a peculiar contour that illustrates topologically unintuitive aspects of cuts obtained from a trivial loop and raises questions about the notion of an optimal cut. Specifically, an unknotted curve that bounds only high genus surfaces in its convex hull is analyzed. The current work considers the geometric realization as a current-carrying wire in order to construct a magnetic scalar potential. Moreover, we consider the problem of choosing an energy functional on the space of maps, suggesting an algorithm for computing cuts via minimizing a conformally invariant functional utilizing Newton iteration.

Continuum approach to high-cycle fatigue. The finite life-time case with stochastic stress history

In this paper, we consider a continuum approach to high-cycle fatigue in the case where the life-time is finite. The method is based on differential equations, and all basic concepts are explained. The stress history is assumed to be a stochastic process, which leads us to the theory of stochastic differential equations. The life-time is the quantity that tells us when the breakdown of the material happens; in this method, it is naturally a random variable. The basic assumption is that the distribution of the life-time is log-normal or Weibull. We give a basic numerical example to demonstrate the method.

A finite control set model predictive control strategy for the control of the stator currents of a synchronous reluctance motor driven by a three-level neutral point clamped inverter is presented in this paper. The presented algorithm minimizes the stator current distortions while operating the drive system at switching frequencies of a few hundred hertz. Moreover, the power electronic converter is protected from overcurrents and/or overvoltages owing to a hard constraint imposed on the stator currents. To efficiently solve the underlying integer nonlinear optimization problem, a sphere decoding algorithm serves as the optimizer. To this end, a numerical calculation of the unconstrained solution of the optimization problem is proposed, along with modifications to the algorithm proposed in [1] so as to meet the above-mentioned control objectives. Simulation results show the effectiveness of the proposed control algorithm.

Multiple loop formation in polymer macromolecules is an important feature of chromatin organization and DNA compactification in the nuclei. We analyse the size and shape characteristics of complex polymer structures containing, in general, f<inf>1</inf> loops (petals) and f<inf>2</inf> linear chains (branches). Within the framework of a continuous model of a Gaussian macromolecule, we apply the path-integration method and obtain estimates for the gyration radius R<inf>g</inf> and the asphericity Â of the typical conformation as functions of the parameters f<inf>1</inf>, f<inf>2</inf>. In particular, our results qualitatively reveal the extent of anisotropy of star-like topologies as compared to rosette structures of the same total molecular weight.
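For a discrete chain of N beads, the squared gyration radius is the mean squared distance of the beads from their centre of mass; a hedged numerical sketch for an ideal linear chain (a sanity check against the known scaling, not the paper's path-integral calculation for rosettes):

```python
import numpy as np

def gyration_radius_sq(positions):
    """R_g^2: mean squared distance of the monomers from their centre of mass."""
    centroid = positions.mean(axis=0)
    return np.mean(np.sum((positions - centroid) ** 2, axis=1))

rng = np.random.default_rng(1)
N = 1000  # beads per chain

# Ensemble average over ideal (Gaussian) linear chains; theory gives
# <R_g^2> ~ N b^2 / 6, i.e. ~ N / 2 here since b^2 = 3 for
# unit-variance steps in 3D.
rg2 = np.mean([
    gyration_radius_sq(np.cumsum(rng.normal(size=(N, 3)), axis=0))
    for _ in range(200)
])
```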

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Physics, Institute for Physics and Astronomy, University of Potsdam, Institute for Condensed Matter Physics, National Academy of Sciences of Ukraine

Concerted regulation of NPC2 binding to endosomal/lysosomal membranes by bis(monoacylglycero)phosphate and sphingomyelin

Niemann-Pick Protein C2 (NPC2) is a small soluble protein critical for cholesterol transport within and from the lysosome and the late endosome. Intriguingly, NPC2-mediated cholesterol transport has been shown to be modulated by lipids, yet the molecular mechanism of NPC2-membrane interactions has remained elusive. Here, based on an extensive set of atomistic simulations and free energy calculations, we clarify the mechanism and energetics of NPC2-membrane binding and characterize the roles of physiologically relevant key lipids associated with the binding process. Our results capture in atomistic detail two competitively favorable membrane binding orientations of NPC2 with a low interconversion barrier. The first binding mode (Prone) places the cholesterol binding pocket in direct contact with the membrane and is characterized by membrane insertion of a loop (V59-M60-G61-I62-P63-V64-P65). This mode is associated with cholesterol uptake and release. On the other hand, the second mode (Supine) places the cholesterol binding pocket away from the membrane surface, but has overall higher membrane binding affinity. We determined that bis(monoacylglycero)phosphate (BMP) is specifically required for strong membrane binding in the Prone mode, and that it cannot be substituted by other anionic lipids. Meanwhile, sphingomyelin counteracts BMP by hindering the Prone mode without affecting the Supine mode. Our results provide concrete evidence that lipids modulate NPC2-mediated cholesterol transport either by favoring or disfavoring the Prone mode, and that they impose this by modulating the accessibility of BMP for interacting with NPC2. Overall, we provide a mechanism by which NPC2-mediated cholesterol transport is controlled by the membrane composition and how NPC2-lipid interactions can regulate the transport rate.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Physics, Research group: Biological Physics and Soft Matter, University of Helsinki, FIN-00014 University of Helsinki, Minerva Foundation Institute for Medical Research Helsinki, Memphys—Center for Biomembrane Physics, Laboratory of Physics

The model predictive control problem of linear systems with integer inputs results in an integer optimization problem. In the case of a quadratic objective function, the optimization problem can be cast as an integer least-squares (ILS) problem. Three algorithms to solve this problem are proposed in this paper. Optimality can be traded off to reduce the computation time. An industrial case study, an inverter-driven electrical drive system, is considered to demonstrate the effectiveness of the presented techniques.
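A hedged sketch of the ILS problem itself, minimising ||Ax − b||² over integer vectors; brute force over a small box stands in for the paper's faster algorithms, and the matrix and bounds below are arbitrary toy values:

```python
import itertools
import numpy as np

def ils_bruteforce(A, b, lo, hi):
    """Exhaustively minimise ||A x - b||^2 over integer vectors x with
    entries in [lo, hi]. Only feasible for tiny problems; practical ILS
    algorithms prune this search instead."""
    n = A.shape[1]
    best_x, best_cost = None, np.inf
    for cand in itertools.product(range(lo, hi + 1), repeat=n):
        x = np.array(cand)
        cost = np.sum((A @ x - b) ** 2)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x, best_cost

A = np.array([[2.0, 0.5], [0.3, 1.5]])
b = A @ np.array([1, -2]) + 0.1   # a true integer point plus a small offset
x_opt, cost = ils_bruteforce(A, b, lo=-3, hi=3)
```

Trading off optimality, as the paper suggests, amounts to stopping such a search early or rounding the unconstrained least-squares solution.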

Comparing requirements decomposition within the Scrum, Scrum with Kanban, XP, and Banana development processes

Context: Eliciting requirements from customers is a complex task. In Agile processes, the customer talks directly with the development team and often reports requirements in an unstructured way. The requirements elicitation process is up to the developers, who split the requirements into user stories by means of different techniques. Objective: We aim to compare the requirements decomposition process of an unstructured process and three Agile processes, namely XP, Scrum, and Scrum with Kanban. Method: We conducted a multiple case study with a replication design, based on the project idea of an entrepreneur, a designer with no experience in software development. Four teams developed the project independently, using four different development processes. The requirements were elicited by the teams from the entrepreneur, who acted as product owner and was available to talk with the four groups during the project. Results: The teams decomposed the requirements using different techniques, based on the selected development process. Conclusion: Scrum with Kanban and XP resulted in the most effective processes from different points of view. Unexpectedly, decomposition techniques commonly adopted in traditional processes are still used in Agile processes, which may reduce project agility and performance. Therefore, we believe that decomposition techniques need to be addressed to a greater extent, both from the practitioners' and the research points of view.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Free University of Bolzano-Bozen, University of Oulu, Former organisation of the author

Comb Model with Slow and Ultraslow Diffusion

We consider a generalized diffusion equation in two dimensions for modeling diffusion on comb-like structures. We analyze the probability distribution functions and derive the mean squared displacement in the x and y directions. Different forms of the memory kernels (Dirac delta, power-law, and distributed order) are considered. It is shown that anomalous diffusion may occur along both the x and y directions. Ultraslow diffusion and some more general diffusive processes are observed as well. We give the corresponding continuous time random walk model for the considered two-dimensional diffusion-like equation on a comb, and we derive the probability distribution functions which subordinate the process governed by this equation to the Wiener process.
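The comb geometry can be mimicked by a lattice walk in which x-steps are allowed only on the backbone (y = 0); trapping in the teeth then produces the anomalous scaling of the mean squared displacement. A hedged simulation sketch (the step rule is a common discretization, not the paper's continuum equation):

```python
import numpy as np

def comb_walk(steps, rng):
    """Walk on a 2-D comb lattice: steps along x are allowed only on the
    backbone (y = 0); otherwise the walker moves along a tooth (in y)."""
    x, y = 0, 0
    for _ in range(steps):
        if y == 0 and rng.random() < 0.5:
            x += 1 if rng.random() < 0.5 else -1
        else:
            y += 1 if rng.random() < 0.5 else -1
    return x

rng = np.random.default_rng(0)
t = 2000
msd_x = np.mean([comb_walk(t, rng) ** 2 for _ in range(200)])
# Trapping in the teeth makes <x^2> grow like sqrt(t), far below the
# ~t scaling of free diffusion.
```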

Color Constancy Convolutional Autoencoder

In this paper, we study the importance of pre-training for the generalization capability in the color constancy problem. We propose two novel approaches based on convolutional autoencoders: an unsupervised pre-training algorithm using a fine-tuned encoder and a semi-supervised pre-training algorithm using a novel composite-loss function. This enables us to solve the data scarcity problem and achieve results competitive with the state of the art, while requiring far fewer parameters, on the ColorChecker RECommended dataset. We further study the over-fitting phenomenon on the recently introduced version of the INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which has both field and non-field scenes acquired by three different camera models.

The programming capabilities of the Web can be viewed as an afterthought, designed originally by non-programmers for relatively simple scripting tasks. This has resulted in a cornucopia of partially overlapping options for building applications. Depending on one's viewpoint, a generic standards-compatible web browser supports three, four or five built-in application rendering and programming models. In this paper, we give an overview and comparison of these built-in client-side web application architectures in light of established software engineering principles. We also reflect on our earlier work in this area and provide an expanded discussion of the current situation. In conclusion, while the dominance of the base HTML/CSS/JS technologies cannot be ignored, we expect Web Components and WebGL to gain more popularity as the world moves towards increasingly complex web applications, including systems supporting virtual and augmented reality.

Characterizing trustworthy digital rights exporting

Digital Rights Management (DRM) is an important business enabler for the digital content industry. Rights exporting is one of the crucial tasks in providing the interoperability of DRM. Trustworthy rights exporting is required by both the end users and the DRM systems. We propose a set of principles for trustworthy rights exporting by analysing the characteristics of rights exporting. Based on the principles, we provide some suggestions on how trustworthy rights exporting should be performed.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Research Community on Data-to-Decision (D2D)

Contributors: Lu, W., Zhang, Z., Nummenmaa, J.

Number of pages: 11

Pages: 85-95

Publication date: 2012

Host publication information

Title of host publication: Perspectives in Business Informatics Research - 11th International Conference, BIR 2012, Proceedings

Platforms are defined as multisided marketplaces with business models that enable producers and users to create value together by interacting with each other. In recent years, platforms have benefited from the advances of digitalization. Hence, digital platforms continue to triumph and remain attractive for companies, including startups. In this paper, we first explore the research on platforms compared to digital platforms. We then proceed to analyze digital platforms as business models, in the context of startups looking for business model innovation. Based on interviews conducted at a technology startup event in Finland, we analyzed how 34 startups viewed their business model innovations. Using the 10 sub-constructs from the business model innovation scale by Clauss (2016), we found that the idea of business model innovation resonated with startups, as all of them were able to identify the source of their business model innovation. Furthermore, the results indicated the complexity of business model innovation, as 79 percent of the respondents explained it with more than one sub-construct. New technology/equipment, new processes, and new customers and markets received the most mentions as sources of business model innovation. Overall, the emphasis at startups is on value creation innovation, with new proposition innovation receiving less emphasis and value capture innovation the least.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Research group: Business Ecosystems, Networks and Innovations, Industrial and Information Management, VTT Technical Research Centre of Finland

Previous studies have demonstrated that creative design activities benefit from stimuli and that textual prompts might extend the exploration of the design space. However, the number of stimuli needed for a wide exploration is large, and the support of an ICT platform proves necessary to manage a creative task effectively because of the presumably large number of generated ideas. Within a project named Startled, a very simple first release of a web application has been developed that supports ideation activities by means of stimuli. Dozens of students enrolled in different courses and universities have tested the platform and answered a questionnaire, which aimed to elucidate their self-efficacy and perceived workload, and the ease of use and utility of the present version of the web application. The outcomes show, beyond a few differences between students with diverse backgrounds, a majority of neutral and slightly positive answers. The results are not fully satisfactory, and the authors intend to make the ICT-supported creative tool more guided, user-friendly and intuitive.

Automatic word count estimation (WCE) from audio recordings can be used to quantify the amount of verbal communication in a recording environment. One key application of WCE is to measure language input heard by infants and toddlers in their natural environments, as captured by daylong recordings from microphones worn by the infants. Although WCE is nearly trivial for high-quality signals in high-resource languages, daylong recordings are substantially more challenging due to the unconstrained acoustic environments and the presence of near- and far-field speech. Moreover, many use cases of interest involve languages for which reliable ASR systems or even well-defined lexicons are not available. A good WCE system should also perform similarly for low- and high-resource languages in order to enable unbiased comparisons across different cultures and environments. Unfortunately, the current state-of-the-art solution, the LENA system, is based on proprietary software and has only been optimized for American English, limiting its applicability. In this paper, we build on existing work on WCE and present the steps we have taken towards a freely available system for WCE that can be adapted to different languages or dialects with a limited amount of orthographically transcribed speech data. Our system is based on language-independent syllabification of speech, followed by a language-dependent mapping from syllable counts (and a number of other acoustic features) to the corresponding word count estimates. We evaluate our system on samples from daylong infant recordings from six different corpora consisting of several languages and socioeconomic environments, all manually annotated with the same protocol to allow direct comparison. We compare a number of alternative techniques for the two key components in our system: speech activity detection and automatic syllabification of speech. 
As a result, we show that our system can reach relatively consistent WCE accuracy across multiple corpora and languages (with some limitations). In addition, the system outperforms LENA on three of the four corpora consisting of different varieties of English. We also demonstrate how an automatic neural network-based syllabifier, when trained on multiple languages, generalizes well to novel languages beyond the training data, outperforming two previously proposed unsupervised syllabifiers as a feature extractor for WCE.
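The language-dependent mapping from syllable counts to word counts can be as simple as a least-squares linear fit; a hedged toy sketch with synthetic counts (the actual system combines syllable counts with additional acoustic features):

```python
import numpy as np

# Toy data: detected syllable counts and transcribed word counts for a
# handful of utterances (synthetic; real data would come from transcripts).
syllables = np.array([12, 30, 7, 45, 22, 16], dtype=float)
words = np.array([8, 19, 5, 28, 14, 10], dtype=float)

# Fit words ~ a * syllables + b by ordinary least squares.
A = np.vstack([syllables, np.ones_like(syllables)]).T
(a, b), *_ = np.linalg.lstsq(A, words, rcond=None)

def estimate_word_count(syllable_count):
    """Map a syllable count to an estimated word count (language-dependent)."""
    return a * syllable_count + b
```

Adapting the system to a new language then amounts to re-fitting this mapping on a limited amount of transcribed speech.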

RVC-CAL is an actor-based dataflow language that enables concurrent, modular and portable description of signal processing algorithms. RVC-CAL programs can be compiled to implementation languages such as C/C++ and VHDL for producing software or hardware implementations. This paper presents a methodology for automatic discovery of piecewise-deterministic (quasi-static) execution schedules for RVC-CAL program software implementations. Quasi-static scheduling moves computational burden from the implementable run-time system to design-time compilation and thus enables making signal processing systems more efficient. The presented methodology divides the RVC-CAL program into segments and hierarchically detects quasi-static behavior from each segment: first at the level of actors and later at the level of the whole segment. Finally, a code generator creates a quasi-statically scheduled version of the program. The impact of segment based quasi-static scheduling is demonstrated by applying the methodology to several RVC-CAL programs that execute up to 58 % faster after applying the presented methodology.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Signal Processing Research Community (SPRC), Dept. of Computer Science and Engineering, Univ of Oulu, UBL

Atomistic fingerprint of hyaluronan–CD44 binding

Hyaluronan (HA) is a polyanionic, megadalton-scale polysaccharide, which initiates cell signaling by interacting with several receptor proteins including CD44, which is involved in cell-cell interactions and cell adhesion. Previous studies of the CD44 hyaluronan binding domain have identified multiple widespread residues to be responsible for its recognition capacity. In contrast, the X-ray structural characterization of CD44 has revealed a single binding mode associated with interactions that involve just a fraction of these residues. In this study, we show through atomistic molecular dynamics simulations that hyaluronan can bind CD44 with three topographically different binding modes that in unison define an interaction fingerprint, thus providing a plausible explanation for the disagreement between the earlier studies. Our results confirm that the known crystallographic mode is the strongest of the three binding modes. The other two modes represent metastable configurations that are readily available in the initial stages of binding, and they are also the most frequently observed modes in our unbiased simulations. We further discuss how CD44, fostered by the weaker binding modes, diffuses along HA when attached. This 1D diffusion combined with the constrained relative orientation of the diffusing proteins is likely to influence the aggregation kinetics of CD44. Importantly, CD44 aggregation has been suggested to be a possible mechanism in CD44-mediated signaling.

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Physics, Research group: Biological Physics and Soft Matter, University of Helsinki, MEMPHYS - Centre for Biomembrane Physics, University of Southern Denmark, Institute of Organic Chemistry and Biochemistry, Academy of Sciences of the Czech Republic

Assessment of mutation probabilities of KRAS G12 missense mutants and their long-timescale dynamics by atomistic molecular simulations and Markov state modeling

A mutated KRAS protein is frequently observed in human cancers. Traditionally, the oncogenic properties of KRAS missense mutants at position 12 (G12X) have been considered equal. Here, by assessing the probabilities of occurrence of all KRAS G12X mutations and KRAS dynamics, we show that this assumption does not hold true. Instead, our findings reveal an outstanding mutational bias. We conducted a thorough mutational analysis of KRAS G12X mutations and assessed to what extent the observed mutation frequencies follow a random distribution. Specific mutations, especially G12R, display unique tissue-specific frequencies that cannot be explained by random probabilities. To clarify the underlying causes of these nonrandom probabilities, we conducted extensive atomistic molecular dynamics simulations (170 μs) to study the differences between G12X mutations on a molecular level. The simulations revealed an allosteric hydrophobic signaling network in KRAS and showed that protein dynamics is altered among the G12X mutants in a mutation-specific manner that differs from the wild type. The shift in long-timescale conformational dynamics was confirmed with Markov state modeling. A G12X mutation was found to modify KRAS dynamics in an allosteric way, which is especially manifested in the switch regions that are responsible for effector protein binding. The findings provide a basis for better understanding the oncogenic properties of KRAS G12X mutants and the consequences of the observed nonrandom frequencies of specific G12X mutations.
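Markov state modeling reduces a long trajectory to transitions between discrete conformational states; a hedged minimal sketch with a two-state toy trajectory (a real MSM would first cluster MD frames into states and validate lag times):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalised count matrix of observed state-to-state transitions."""
    counts = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        counts[s, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy trajectory generated from a known two-state chain.
rng = np.random.default_rng(3)
true_T = np.array([[0.9, 0.1], [0.2, 0.8]])
traj = [0]
for _ in range(20_000):
    traj.append(rng.choice(2, p=true_T[traj[-1]]))

T = transition_matrix(traj, 2)
# Stationary distribution: the left eigenvector of T for eigenvalue 1.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
```

The stationary distribution recovered this way corresponds to the equilibrium populations of the conformational states.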

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Physics, Research group: Biological Physics and Soft Matter, University of Eastern Finland, University Hospital Tuebingen, Eberhard-Karls University Tuebingen, University of Helsinki, MEMPHYS-Center for Biomembrane Physics

A simulation case study of production planning and control in printed wiring board manufacturing

Production planning and control in printed wiring board (PWB) manufacturing is becoming more difficult as PWB technology develops and production routings become more complex. Simultaneously, the strategic importance of delivery accuracy, short delivery times, and production flexibility is increasing with the highly fluctuating demand and short product life cycles of end products. New principles that minimize throughput time while guaranteeing excellent customer service and adequate capacity utilization are needed for production planning and control. Simulation is needed in order to develop the new principles and test their superiority. This paper presents an ongoing simulation project that aims at developing the production planning and control of a PWB manufacturer. In the project, a discrete event simulation model is built of a pilot case factory. The model is used for comparing the effects of scheduling, queuing rules, buffer policies, and lot sizes on customer service and cost efficiency.

A prospect for computing in porous materials research: Very large fluid flow simulations

Properties of porous materials, abundant both in nature and industry, have broad influences on societies via, e.g. oil recovery, erosion, and propagation of pollutants. The internal structure of many porous materials involves multiple scales which hinders research on the relation between structure and transport properties: typically laboratory experiments cannot distinguish contributions from individual scales while computer simulations cannot capture multiple scales due to limited capabilities. Thus the question arises how large domain sizes can in fact be simulated with modern computers. This question is here addressed using a realistic test case; it is demonstrated that current computing capabilities allow the direct pore-scale simulation of fluid flow in porous materials using system sizes far beyond what has been previously reported. The achieved system sizes allow the closing of some particular scale gaps in, e.g. soil and petroleum rock research. Specifically, a full steady-state fluid flow simulation in a porous material, represented with an unprecedented resolution for the given sample size, is reported: the simulation is executed on a CPU-based supercomputer and the 3D geometry involves 16,384<sup>3</sup> lattice cells (around 590 billion of them are pore sites). Using half of this sample in a benchmark simulation on a GPU-based system, a sustained computational performance of 1.77 PFLOPS is observed. These advances expose new opportunities in porous materials research. The implementation techniques here utilized are standard except for the tailored high-performance data layouts as well as the indirect addressing scheme with a low memory overhead and the truly asynchronous data communication scheme in the case of CPU and GPU code versions, respectively.

Applying SCRUM in an OSS development process: An empirical evaluation

Open Source Software development often resembles Agile models. In this paper, we report on our experience in using SCRUM for the development of an Open Source Software Java tool. With this work, we aim at answering the following research questions: 1) is it possible to switch successfully to the SCRUM methodology in an ongoing Open Source Software development process? 2) is it possible to apply SCRUM when the developers are geographically distributed? 3) does SCRUM help improve the quality of the product and the productivity of the process? We answer these questions by identifying a set of measures and by comparing the data we collected before and after the introduction of SCRUM. The results seem to show that SCRUM can be introduced and used in an ongoing geographically distributed Open Source Software process and that it helps control the development process better.

General information

Publication status: Published

MoE publication type: A4 Article in a conference publication

Organisations: Università degli Studi Dell'Insubria, Former organisation of the author

An origami inspired reconfigurable spiral antenna

Modern day systems often require reconfigurability in the operating parameters of the transmit and receive antennas, such as the resonant frequency, radiation pattern, impedance, or polarization. In this work a novel approach to antenna reconfigurability is presented by integrating antennas with the ancient art of origami. The proposed antenna consists of an inkjet printed center-fed spiral antenna, which is designed to resonate at 1.0GHz and have a reconfigurable radiation pattern while maintaining the 1.0GHz resonance with little variation in input impedance. When flat, the antenna is a planar spiral exhibiting a bidirectional radiation pattern. By a telescoping action, the antenna can be reconfigured into a conical spiral with a directional pattern and higher gain, which gives the antenna a large front-to-back ratio. Construction of the antenna in this manner allows for a simple, lightweight, transportable antenna that can expand to specifications in the field.

An image generator platform to improve cell tracking algorithms: simulation of objects of various morphologies, kinetics and clustering

Several major advances in Cell and Molecular Biology have been made possible by recent advances in live-cell microscopy imaging. To support these efforts, automated image analysis methods such as cell segmentation and tracking during a time-series analysis are needed. To this end, one important step is the validation of such image processing methods. Ideally, the "ground truth" should be known, which is possible only by manually labelling images or in artificially produced images. To simulate artificial images, we have developed a platform for simulating biologically inspired objects, which generates bodies with various morphologies and kinetics and that can aggregate to form clusters. Using this platform, we tested and compared four tracking algorithms: Simple Nearest-Neighbour (NN), NN with Morphology, and two DBSCAN-based methods. We show that Simple NN works well for small object velocities, while the others perform better at higher velocities and when clustering occurs. Our new platform for generating benchmark images to test image analysis algorithms is openly available at http://griduni.uninova.pt/Clustergen/ClusterGen-v1.0.zip.
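Simple Nearest-Neighbour tracking links each object in the current frame to the closest unused object in the previous frame; a hedged greedy sketch (the platform's tested implementations may differ in detail):

```python
import numpy as np

def nn_tracking(prev, curr):
    """Greedily match each current centroid to the nearest unused previous
    centroid; returns a list of (prev_index, curr_index) pairs."""
    used, matches = set(), []
    for j, c in enumerate(curr):
        dists = [(np.hypot(*(c - p)), i)
                 for i, p in enumerate(prev) if i not in used]
        if dists:
            d, i = min(dists)
            used.add(i)
            matches.append((i, j))
    return matches

# Two objects whose order is swapped between frames.
prev = np.array([[0.0, 0.0], [10.0, 10.0]])
curr = np.array([[10.5, 9.5], [0.5, 0.2]])
matches = nn_tracking(prev, curr)
```

Such a greedy scheme degrades at high velocities or when objects cluster, which is exactly where the morphology- and DBSCAN-based variants are reported to do better.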

Analysis of the damping characteristics of two power electronics-based devices using ‘individual channel analysis and design’

A comparison of the capabilities of two quite distinct power electronics-based ‘flexible AC transmission systems’ devices is presented. In particular, the damping of low frequency electromechanical oscillations is investigated aiming at improving the performance of power systems. The comparison is made using frequency domain methods under the ‘individual channel analysis and design’ framework. A synchronous generator feeding into a system with large inertia is used for such a purpose. Two system configurations including compensation are analysed: (a) in series in the form of a thyristor-controlled series compensator, and (b) in shunt through a static VAr compensator featuring a damping controller. Analyses are carried out to elucidate the dynamic behaviour of the synchronous generator in the presence of the power electronics-based controllers and for the case when no controller is present. Performance and robustness assessments are given particular emphasis. The crux of the matter is the comparison between the abilities of the static VAr compensator and the thyristor-controlled series compensator to eliminate the problematic switch-back characteristic intrinsic to synchronous generator operation by using the physical insight afforded by ‘individual channel analysis and design’.

Analysis of common rail pressure signal of dual-fuel large industrial engine for identification of injection duration of pilot diesel injectors

In this paper, we address the problem of identification of the injection duration of common rail (CR) diesel pilot injectors of dual-fuel engines. In these pilot injectors, the injected volume is small, and the repeatability of injections and the identification of injector drifts are important factors which need to be taken into account in order to achieve good shot-to-shot repeatability in every cylinder and therefore a well-balanced engine and reduced overall wear. This information can then be used for calibration and diagnostics purposes to guarantee engine longevity facilitated by consistent operating conditions throughout the life of the unit. A diagnostics method based on analysis of the CR pressure, with experimental results, is presented in this paper. Using the developed method, the relative duration of injection events can be identified for multiple injectors. We use the drop in rail pressure due to an injection event as a feature of the injection process. The method is based on filtered CR pressure data during and after the injection event. First, the pressure signal during injection is extracted after the control of each injection event. After that, the signal is normalized and filtered. Then a derivative of the filtered signal is calculated. A change in the derivative of the filtered signal larger than a predefined threshold indicates an injection event, which can be detected and whose relative duration can be identified. We present experimental results and demonstrate the efficacy of the proposed method using two different types of pressure sensors. We are able to properly identify a change of ≥10 μs (2% of 500 μs) in injection time. This shows that the developed method detects drifts in injection duration and the magnitude of the drift. This information can be used for adaptive control of the injection duration, so that the injected fuel volume finally matches the original target.
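A hedged sketch of the detection idea on synthetic data: normalize and filter the rail-pressure trace, then flag samples where the derivative falls below a threshold (the moving-average filter, sigmoid dip, and threshold value here are illustrative, not the paper's calibration):

```python
import numpy as np

def detect_injection(pressure, threshold, window=5):
    """Normalise and smooth a rail-pressure trace, then flag samples where
    the derivative of the filtered signal drops below -threshold."""
    norm = (pressure - pressure.mean()) / pressure.std()
    kernel = np.ones(window) / window
    smooth = np.convolve(norm, kernel, mode="same")
    deriv = np.gradient(smooth)
    return np.where(deriv < -threshold)[0]

# Synthetic trace: constant rail pressure with a dip caused by an injection.
t = np.arange(400)
pressure = 1600.0 - 40.0 / (1.0 + np.exp(-(t - 200) / 5.0))  # bar
pressure += np.random.default_rng(7).normal(0, 0.5, t.size)  # sensor noise

events = detect_injection(pressure, threshold=0.05)
```

The span of flagged samples is a proxy for the relative injection duration, so a drift in duration shows up as a change in that span.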

ALMARVI System Solution for Image and Video Processing in Healthcare, Surveillance and Mobile Applications

ALMARVI is a collaborative European research project funded by Artemis, involving 16 industrial and academic partners across 4 countries working together to address various computational challenges in image and video processing in 3 application domains: healthcare, surveillance and mobile. This paper is an editorial for a special issue discussing the integrated system created by the partners to serve as a cross-domain solution for the project. The paper also introduces the partner articles published in this special issue, which discuss the various technological developments achieved within ALMARVI spanning all system layers, from hardware to applications. We illustrate the challenges faced within the project based on use cases from the three targeted application domains, and how these can meet the 4 main project objectives, which address 4 challenges faced by high-performance image and video processing systems: massive data rates, low power consumption, composability and robustness. We present a system stack composed of algorithms, design frameworks and platforms as a solution to these challenges. Finally, the use cases from the three application domains are mapped onto the system stack solution and are evaluated based on their performance with respect to each of the 4 ALMARVI objectives.

The classification of protein structures is an important and still outstanding problem. The purpose of this paper is threefold. First, we utilize a relation between the Tutte and HOMFLY polynomials to show that the Alexander-Conway polynomial can be algorithmically computed for a given planar graph. Second, as special cases of planar graphs, we use polymer graphs of protein structures. More precisely, we use three building blocks of three-dimensional protein structure - the α-helix, the antiparallel β-sheet, and the parallel β-sheet - and calculate, for their corresponding polymer graphs, the Tutte polynomials analytically by providing recurrence equations for all three secondary structure elements. Third, we present numerical results comparing our analytical calculations with the numerical results of our algorithm - not only to test consistency, but also to demonstrate that all assigned polynomials are unique labels of the secondary structure elements. This paves the way for an automatic classification of protein structures.
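The algorithmic computation of graph polynomials rests on the deletion-contraction recurrence. The sketch below implements that generic recurrence for the Tutte polynomial of a small multigraph; it is not the paper's recurrence equations for secondary structure elements, and the dict-of-exponents polynomial representation is an implementation choice for this example.

```python
# A polynomial in x and y is stored as {(i, j): coeff},
# meaning the sum of coeff * x**i * y**j.
def _shift(p, di, dj):
    """Multiply a polynomial by x**di * y**dj."""
    return {(i + di, j + dj): c for (i, j), c in p.items()}

def _add(p, q):
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + c
    return r

def _connected(u, v, edges):
    """True if u and v lie in the same component of the multigraph."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {u}, [u]
    while stack:
        for m in adj.get(stack.pop(), ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return v in seen

def _contract(edges, u, v):
    """Merge vertex v into u; parallel (u, v) edges become loops."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def tutte(edges):
    """Tutte polynomial T(G; x, y) of a connected multigraph by
    deletion-contraction: loops give a factor y, bridges a factor x,
    and ordinary edges satisfy T(G) = T(G - e) + T(G / e)."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                               # loop
        return _shift(tutte(rest), 0, 1)
    if not _connected(u, v, rest):           # bridge
        return _shift(tutte(_contract(rest, u, v)), 1, 0)
    return _add(tutte(rest), tutte(_contract(rest, u, v)))

# The triangle C3 has the well-known value T(C3; x, y) = x**2 + x + y.
triangle = [(0, 1), (1, 2), (2, 0)]
```

The analytical recurrences in the paper avoid this exponential branching for the repetitive polymer graphs of α-helices and β-sheets.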

Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to exploit parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that increases programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3x and 1.8x speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
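HTGS itself is a C++ framework; the Python sketch below only illustrates the underlying execution model - tasks connected by queues so that pipeline stages run concurrently and dependencies are handled by data flow - and does not reproduce the HTGS API. The two-stage graph and its functions are arbitrary examples.

```python
import queue
import threading

def task(in_q, out_q, fn):
    """Run fn on items from in_q and pass results to out_q.

    A None item is the termination signal and is forwarded downstream,
    so shutdown propagates through the whole graph.
    """
    while True:
        item = in_q.get()
        if item is None:
            out_q.put(None)
            return
        out_q.put(fn(item))

# Two-stage graph: the "+1" stage feeds the "square" stage while
# both run concurrently on their own threads.
q1, q2, done = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=task, args=(q1, q2, lambda v: v + 1)).start()
threading.Thread(target=task, args=(q2, done, lambda v: v * v)).start()

for v in range(5):
    q1.put(v)
q1.put(None)

results = []
while True:
    r = done.get()
    if r is None:
        break
    results.append(r)
```

Because each stage owns its queue, producers never wait for the whole pipeline - the overlap of stages is what HTGS exploits (with GPU memory rules and I/O tasks added) to keep all compute resources busy.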

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Pervasive Computing, Research area: Computer engineering, University of Maryland Baltimore County, National Institute of Standards and Technology, Department of Electrical and Computer Engineering, University of Maryland

A hybrid optimization grey model based on segmented GRA and multi-strategy contest for short-term power load forecasting

In this paper, a hybrid grey model with both internal and external optimization is proposed to forecast short-term power load, which is characterized by nonlinear fluctuation and random growth. The internal optimization consists of a modeling feasibility test and correction of the parameter a. The external optimization includes three aspects. First, the original series are selected from different viewpoints to construct different forecasting strategies. Second, the predicted day is divided into several smooth segments that are forecast separately. Finally, the different forecasting strategies are applied to the different segments through a grey correlation contest. A practical application verifies that the proposed model achieves higher forecasting accuracy and is independent of the choice of initial value.
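The core of any such approach is the standard GM(1,1) grey model; the sketch below shows that core (accumulation, background values, least-squares parameters a and b, and de-accumulated forecast) on a synthetic growing "load" series. The internal/external optimization steps of the paper are not reproduced here.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a standard GM(1,1) grey model to series x0 and forecast ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated (1-AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    # Least squares for x0[k] = -a * z1[k] + b  ->  parameters a, b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)     # back to the original scale
    return x0_hat[len(x0):]

load = [100 * 1.1 ** k for k in range(6)]     # synthetic exponential load
nxt = gm11_forecast(load, steps=1)[0]         # true continuation: ~177.2
```

For a purely exponential series the fitted relation is exact, and the one-step forecast lands close to the true continuation; real load series need exactly the kind of segmentation and strategy selection the paper proposes.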

Ageing first passage time density in continuous time random walks and quenched energy landscapes

We study the first passage dynamics of an ageing stochastic process in the continuous time random walk (CTRW) framework. In such CTRW processes the test particle performs a random walk, in which successive steps are separated by random waiting times distributed in terms of the waiting time probability density function φ(t) ≃ t<sup>-1-α</sup> (0 ≤ α ≤ 2). An ageing stochastic process is defined by the explicit dependence of its dynamic quantities on the ageing time t<inf>a</inf>, the time elapsed between its preparation and the start of the observation. Subdiffusive ageing CTRWs with 0 < α < 1 describe systems such as charge carriers in amorphous semiconductors, tracer dispersion in geological and biological systems, and the dynamics of blinking quantum dots. We derive the exact forms of the first passage time density for an ageing subdiffusive CTRW in the semi-infinite, confined, and biased cases, finding different scaling regimes for weakly, intermediately, and strongly aged systems; these regimes, with different scaling laws, are also found for scaling exponents in the range 1 < α < 2 at sufficiently long t<inf>a</inf>. We compare our results with the ageing motion of a test particle in a quenched energy landscape. Testing our theoretical results against simulations in the quenched landscape, we find that only when the bias is strong enough do the correlations from returning to previously visited sites become insignificant, so that the results approach the ageing CTRW results. With small or no bias, the ageing effects disappear and a change in the exponent compared to the completely annealed landscape is found, reflecting the build-up of correlations in the quenched landscape.
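A minimal simulation sketch of the setting is given below: heavy-tailed waiting times with density ~ t<sup>-1-α</sup>, an ageing period of length t<inf>a</inf> before observation starts, and a biased walk to a barrier. The parameter values (α = 0.5, bias, barrier) are illustrative assumptions; the exact first passage densities derived in the paper are not reproduced.

```python
import random

def waiting_time(alpha, rng):
    """Heavy-tailed (Pareto-type) waiting time with density ~ t**(-1 - alpha)
    for t >= 1, obtained by inverse-transform sampling."""
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def aged_first_passage(alpha, t_a, barrier, p_right, rng):
    """First passage time over `barrier`, with observation starting at age t_a.

    The process is prepared at t = 0; renewals before t_a are part of the
    ageing period, and the displacement is counted from t_a onwards.
    """
    t = 0.0
    while t < t_a:                      # age the process
        t += waiting_time(alpha, rng)
    x = 0                               # position at the start of observation
    while x < barrier:                  # jump at each renewal epoch
        x += 1 if rng.random() < p_right else -1
        if x < barrier:
            t += waiting_time(alpha, rng)
    return t - t_a

rng = random.Random(7)
# Strongly biased case (p_right = 0.7), aged by t_a = 100 before observation.
fpt = [aged_first_passage(0.5, 100.0, 3, 0.7, rng) for _ in range(50)]
```

The first recorded event after t<inf>a</inf> occurs only at the next renewal, which is the forward waiting time responsible for the ageing-induced delay of first passage.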

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Physics, Institute for Physics and Astronomy, University of Potsdam, National Institute of Chemistry Ljubljana

Adaptive autoregressive model for reduction of noise in SPECT

This paper presents improved autoregressive modelling (AR) to reduce noise in SPECT images. An AR filter was applied to prefilter projection images and postfilter ordered subset expectation maximisation (OSEM) reconstruction images (AR-OSEM-AR method). The performance of this method was compared with filtered back projection (FBP) preceded by Butterworth filtering (BW-FBP method) and the OSEM reconstruction method followed by Butterworth filtering (OSEM-BW method). A mathematical cylinder phantom was used for the study. It consisted of hot and cold objects. The tests were performed using three simulated SPECT datasets. Image quality was assessed by means of the percentage contrast resolution (CR%) and the full width at half maximum (FWHM) of the line spread functions of the cylinders. The BW-FBP method showed the highest CR% values and the AR-OSEM-AR method gave the lowest CR% values for cold stacks. In the analysis of hot stacks, the BW-FBP method had higher CR% values than the OSEM-BW method. The BW-FBP method exhibited the lowest FWHM values for cold stacks and the AR-OSEM-AR method for hot stacks. In conclusion, the AR-OSEM-AR method is a feasible way to remove noise from SPECT images. It has good spatial resolution for hot objects.
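The AR filtering idea can be illustrated in one dimension: estimate AR coefficients from the noisy signal via the Yule-Walker equations, then replace each sample with its one-step linear prediction. This is a generic sketch on synthetic data, not the paper's 2-D projection-image implementation; the model order, signal, and noise level are arbitrary assumptions.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients from the sample autocorrelation."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    r /= len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def ar_filter(x, order=4):
    """Replace each sample with its AR one-step prediction (noise reduction).

    The first `order` samples are left unchanged because they have no
    complete prediction history.
    """
    a = yule_walker(x, order)
    m = x.mean()
    y = x.astype(float).copy()
    for n in range(order, len(x)):
        # Predict x[n] from the `order` preceding samples (most recent first).
        y[n] = m + np.dot(a, (x[n - order:n] - m)[::-1])
    return y

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 400)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(400)
filtered = ar_filter(noisy)
```

Because white noise is unpredictable from past samples, the AR prediction retains mostly the correlated (signal) component, which is the mechanism behind both the prefiltering and postfiltering steps above.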

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Department of Automation Science and Engineering, Division of Nuclear Medicine, Department of Diagnostic Radiology, Oulu University Hospital, Department of Clinical Physiology and Nuclear Medicine, Joint Authority for Päijät-Häme Social and Health Care

Active scanner control on paper machines

The cross-directional (CD) basis weight control on paper machines is improved by optimizing the path of the scanning measurement. The optimal path results from an LQG problem and depends on how the uncertainty of the present estimate of the basis weight and the intensity of process noise vary in CD. These factors are assessed by how accurately the CD basis weight estimate predicts the measured optical transmittance with a linear adaptive model on synchronized basis weight and transmittance data. Simulations on optimized scanner path in disturbance scenarios are presented, and the practical implementation of scanner control is discussed.

Action and power efficiency in self-organization: The case for growth efficiency as a cellular objective in Escherichia coli

Complex systems of different natures self-organize through common mechanisms, one of which is an increase in their efficiency. The level of organization of such systems can be measured by the efficiency of the product of time and energy for an event, i.e. the amount of physical action it consumes. Here we apply a method developed in physics to study the efficiency of biological systems. The identification of cellular objectives is one of the central topics in research on microbial metabolic networks. In particular, information about the cellular objective is needed in flux balance analysis, a commonly used constraint-based metabolic network analysis method for predicting cellular phenotypes. The cellular objective may vary depending on the organism and its growth conditions. Nutritionally scarce conditions are probably very common in nature, and, in order to survive in them, cells exhibit various highly efficient nutrient-processing systems such as enzymes. In this study, we explore the efficiency of a metabolic network in transforming substrates into new biomass, and we introduce a new objective function simulating growth efficiency. We are searching for general principles of self-organization across systems of different natures. The objective of increasing the efficiency of physical action has previously been identified as driving systems toward higher levels of self-organization. The flow agents in those networks are driven toward their natural state of motion, which is governed by the principle of least action in physics; we connect this to a power efficiency principle. Systems structure themselves so as to decrease the average amount of action or power per event in the system. In this particular example, action efficiency is examined in the case of the growth efficiency of E. coli.
We derive the expression for growth efficiency as a special case of action (power) efficiency, justifying it through first principles in physics. That growth efficiency as a cellular objective of E. coli coincides with previous research on complex systems and is justified by first principles is an expected and confirmed outcome of this work. We examined the properties of growth efficiency using a metabolic model of Escherichia coli. We found that maximal growth efficiency is obtained at a finite nutrient uptake rate; the rate is substrate dependent and typically does not exceed 20 mmol/h/gDW. We further examined whether maximal growth efficiency could serve as a cellular objective function in metabolic network analysis and found that cellular growth in batch cultivation can be predicted reasonably well under this assumption. The fit to experimental data was slightly better than with the commonly used objective function of maximal growth rate. Based on our results, we suggest that maximal growth efficiency can be considered a plausible optimization criterion in metabolic modeling of E. coli. In the future, it would be interesting to study growth efficiency as an objective in other cellular systems and under different cultivation conditions.
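The qualitative point - that growth efficiency (growth per unit uptake) peaks at a finite uptake rate - can be illustrated with a deliberately tiny toy model, not the genome-scale E. coli model of the paper: two pathways to biomass with different yields share a limited flux budget, so high uptake forces flux onto the low-yield pathway. All yields, costs, and the budget below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

YIELD = (1.0, 0.5)   # biomass yield of pathways v1, v2 (assumed)
COST = (2.0, 1.0)    # flux-budget cost of each pathway (assumed)
BUDGET = 10.0        # shared "enzyme capacity" budget (assumed)

def max_growth(uptake):
    """FBA-style LP: maximize growth = y1*v1 + y2*v2
    subject to the mass balance v1 + v2 = uptake and the flux budget."""
    res = linprog(
        c=[-YIELD[0], -YIELD[1]],                  # minimize -growth
        A_ub=[[COST[0], COST[1]]], b_ub=[BUDGET],  # enzyme/flux budget
        A_eq=[[1.0, 1.0]], b_eq=[uptake],          # substrate mass balance
        bounds=[(0, None), (0, None)],
    )
    return -res.fun

uptakes = np.arange(1.0, 10.0, 1.0)
efficiency = [max_growth(u) / u for u in uptakes]  # growth per unit uptake
```

In this toy model growth saturates once the budget binds, so efficiency stays at its maximum only up to a finite uptake and then decays - the same qualitative behaviour reported for the substrate-dependent optimum above.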

Acoustic Modelling

Let us examine the behaviour of sound in a gas or a liquid medium. From a physical point of view, the sound we hear is created by pressure changes in the surrounding medium, which are sensed by our ears. The equations describing the behaviour of a liquid or a gas are based on the well-known equations of fluid mechanics; therefore, in acoustics, both are often referred to as fluids. In the following sections we present the simple wave equation, the simplest of the (linear) equations used to model acoustical phenomena. Even though the wave equation is quite a simplified model, it has proven extremely useful for describing the behaviour of sound in the most common fluid we face every day, namely air.
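For reference, the linear wave equation mentioned above can be written for the acoustic pressure perturbation as:

```latex
% Linear acoustic wave equation for the pressure perturbation p(x, t):
\[
  \frac{\partial^2 p}{\partial t^2} = c^2 \, \nabla^2 p ,
\]
% where c is the speed of sound in the fluid
% (approximately 343 m/s in air at 20 degrees Celsius).
```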

A comparison between joint regression analysis and the AMMI model: A case study with barley

Joint regression analysis (JRA) and additive main effects and multiplicative interaction (AMMI) models are compared in order to (i) assess their ability to describe genotype-by-environment interaction effects and (ii) evaluate the agreement between the winners of the mega-environments obtained from the AMMI analysis and the genotypes in the upper contour of the JRA. An iterative algorithm is used to obtain the environmental indexes for JRA, and standard multiple comparison procedures are adapted for genotype comparison and selection. This study includes three data sets from a spring barley (Hordeum vulgare L.) breeding programme carried out between 2004 and 2006 in the Czech Republic. The results of both techniques are integrated in order to help plant breeders, farmers and agronomists make better genotype selections and predictions for new years and/or new environments.
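The core AMMI decomposition - additive main effects plus a singular value decomposition of the interaction residuals - can be sketched as follows. The genotype-by-environment table here is synthetic random data; a real analysis, as in the paper, works from replicated trial data with an ANOVA stage and significance tests for the retained axes.

```python
import numpy as np

rng = np.random.default_rng(42)
Y = rng.normal(5.0, 1.0, size=(8, 4))  # yields: 8 genotypes x 4 environments

# Additive main effects: grand mean plus genotype and environment deviations.
grand = Y.mean()
g_eff = Y.mean(axis=1) - grand
e_eff = Y.mean(axis=0) - grand

# Interaction residuals after removing the additive part.
resid = Y - grand - g_eff[:, None] - e_eff[None, :]

# Multiplicative interaction terms from the SVD of the residual matrix;
# the leading axes are the IPCA scores used to delineate mega-environments.
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
k = 2                                   # number of retained IPCA axes
ammi_fit = (grand + g_eff[:, None] + e_eff[None, :]
            + (U[:, :k] * s[:k]) @ Vt[:k])
```

Retaining all axes reproduces the table exactly; truncating to a few axes gives the parsimonious AMMI fit whose environment-wise winners are compared with the upper contour of the JRA.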

General information

Publication status: Published

MoE publication type: A1 Journal article-refereed

Organisations: Research Community on Data-to-Decision (D2D), Depto. de Matemática, NOVA University of Lisbon, Dept. of Mathematical and Statistical Methods