Advanced search

Advanced search is divided into two main parts, each containing one or more groups. The main parts are "Search for" (including) and "Remove from search" (excluding). (The excluding part may not be visible until you hit "NOT" for the first time.) You can add new groups to the including and excluding parts with the "OR" and "NOT" buttons respectively, and you can add more search options to any group through the drop-down menu on its last row.

For a result to be included in the search result, it must fit all parameters of at least one including group, and it must not fit all parameters of any excluding group. This system of two main parts, each with its own groups, makes it possible to combine two (or more) distinct searches into one search result, while remaining flexible about removing results from the final list.
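The combination rules above can be illustrated with a small sketch (the record fields and conditions below are hypothetical examples, not taken from the actual search interface):

```python
# A result matches if it satisfies ALL conditions in at least one
# including group, and does not satisfy ALL conditions of any excluding group.

def matches(record, including_groups, excluding_groups):
    included = any(all(cond(record) for cond in group) for group in including_groups)
    excluded = any(all(cond(record) for cond in group) for group in excluding_groups)
    return included and not excluded

# Example: (title contains "plasma") OR (author is "Smith"), NOT (year < 2000)
record = {"title": "Plasma shell instability", "author": "Jones", "year": 2014}
including = [
    [lambda r: "plasma" in r["title"].lower()],
    [lambda r: r["author"] == "Smith"],
]
excluding = [
    [lambda r: r["year"] < 2000],
]
print(matches(record, including, excluding))  # True
```

Each inner list is one group (its conditions are AND-ed), and the groups within a part are OR-ed, mirroring the "OR"/"NOT" buttons described above.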

Ab initio electronic structure theory is known as a useful tool for the prediction of materials properties. However, the majority of simulations still deal with calculations in the framework of density functional theory with local or semi-local functionals, carried out at zero temperature. We present new methodological solutions, which go beyond this approach and explicitly take finite-temperature, magnetic, and many-body effects into account. Considering Ti-based alloys, we discuss limitations of the quasiharmonic approximation for the treatment of lattice vibrations, and present an accurate and easily extendable method to calculate free energies of strongly anharmonic solids. We underline the necessity of going beyond the state-of-the-art techniques for the determination of effective cluster interactions in systems exhibiting a metal-to-insulator transition, and describe a unified cluster expansion approach developed for this class of materials. Finally, we outline a first-principles method, disordered local moments molecular dynamics, for calculations of thermodynamic properties of magnetic alloys, like Cr1-xAlxN, in their high-temperature paramagnetic state. Our results unambiguously demonstrate the importance of finite-temperature effects in theoretical calculations of thermodynamic properties of materials.

We review recent developments in the field of first-principles simulations of magnetic materials above the magnetic order-disorder transition temperature, focusing mainly on 3d-transition metals, their alloys and compounds. We review theoretical tools that allow for a description of a system with local moments, which survive, but become disordered, in the paramagnetic state, focusing on their advantages and limitations. We discuss applications of these theories to calculations of thermodynamic and mechanical properties of paramagnetic materials. The presented examples include, among others, simulations of the phase stability of Fe, Fe-Cr and Fe-Mn alloys; formation energies of vacancies, substitutional and interstitial impurities, as well as their interactions in Fe; and calculations of equations of state and elastic moduli for 3d-transition metal alloys and compounds, like CrN and steels. The examples underline the need for a proper treatment of magnetic disorder in these systems. (C) 2015 Elsevier Ltd. All rights reserved.

We report on the experimental observation of the instability of a plasma shell, which formed during the expansion of a laser-ablated plasma into a rarefied ambient medium. By means of a proton radiography technique, the evolution of the instability is temporally and spatially resolved on a timescale much shorter than the hydrodynamic one. The density of the thin shell exceeds that of the surrounding plasma, which lets electrons diffuse outward. An ambipolar electric field grows on both sides of the thin shell that is antiparallel to the density gradient. Ripples in the thin shell result in a spatially varying balance between the thermal pressure force mediated by this field and the ram pressure force that is exerted on it by the inflowing plasma. This mismatch amplifies the ripples by the same mechanism that drives the hydrodynamic nonlinear thin-shell instability (NTSI). Our results thus constitute the first experimental verification that the NTSI can develop in colliding flows.

This work presents the results from an evaluation of stereoscopic versus monoscopic 3D parallel coordinates. The objective of the evaluation was to investigate if stereopsis increases user performance. The results show that stereoscopy has no effect at all on user performance compared to monoscopy. This result is important when it comes to the potential use of stereopsis within the information visualization community.

Many applications (such as system and user monitoring, runtime verification, diagnosis, observation-based decision making, and intention recognition) require detecting the occurrence of an event in a system, which entails the ability to observe the system. Observation can be costly, so it makes sense to try to reduce the number of observations, without losing full certainty about the event’s actual occurrence. In this paper, we propose a formalization of this problem. We formally show that, whenever the event to be detected follows a discrete spatial or temporal pattern, it is possible to reduce the number of observations. We discuss exact and approximate algorithms to solve the problem, and provide an experimental evaluation of them. We apply the resulting algorithms to the verification of linear temporal logic formulæ. Finally, we discuss possible generalizations and extensions and, in particular, how event detection can benefit from logic programming techniques.

The purpose of this thesis was to visualize the 1.7 billion stars released by the European Space Agency, as the second data release (DR2) of their Gaia mission, in the open source software OpenSpace with interactive framerates and also to be able to filter the data in real-time. An additional implementation goal was to streamline the data pipeline so that astronomers could use OpenSpace as a visualization tool in their research.

An out-of-core rendering technique has been implemented where the data is streamed from disk during runtime. To be able to stream the data, it first has to be read, sorted into an octree structure, and then stored as binary files in a preprocessing step. The results of this report show that the entire DR2 dataset can be read from multiple files in a folder and stored as binary values in about seven hours. This step determines which values the user will be able to filter by and only has to be done once for a specific dataset. An octree can then be created in about 5 to 60 minutes, where the user can define whether the stars should be filtered by any of the previously stored values. Only values used in the rendering will be stored in the octree. If the created octree fits in the computer’s working memory, the entire octree is loaded asynchronously on start-up; otherwise, only a binary file with the structure of the octree is read during start-up, while the actual star data is streamed from disk during runtime.

When the data has been loaded, it is streamed to the GPU. Only stars that are visible are uploaded, and the application also keeps track of which nodes have already been uploaded, to eliminate redundant updates. The inner nodes of the octree store the brightest stars among all their descendants as a level-of-detail cache that can be used when the nodes are small enough in screen space.
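This level-of-detail traversal idea can be sketched in a few lines (a simplified illustration with assumed names and a toy screen-size test, not the actual OpenSpace implementation): a node's cached brightest stars are drawn once its projected size drops below a threshold; otherwise its children are visited for full detail.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    depth: int
    children: list = field(default_factory=list)

    @property
    def is_leaf(self):
        return not self.children

def collect_render_nodes(node, screen_size_of, threshold, out):
    # Draw the node's cached brightest stars when it is small enough on
    # screen (or is a leaf); otherwise descend into the children.
    if node.is_leaf or screen_size_of(node) < threshold:
        out.append(node)
    else:
        for child in node.children:
            collect_render_nodes(child, screen_size_of, threshold, out)

# Toy example: a root with eight children; pretend the projected size
# halves with every level of depth.
root = Node(0, [Node(1) for _ in range(8)])
drawn = []
collect_render_nodes(root, lambda n: 100 / (2 ** n.depth), 80, drawn)
print(len(drawn))  # root is large (100 px), so all 8 small children are drawn
```

The same traversal naturally doubles as the upload list: any node that ends up in `drawn` but is not yet resident on the GPU is the one that needs streaming.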

The previous star rendering in OpenSpace has been improved by dividing the rendering phase into two passes. The first pass renders into a framebuffer object while the second pass then performs a tonemapping of the values. The rendering can be done either with billboard instancing or point splatting. The latter is generally the faster alternative. The user can also switch between using VBOs or SSBOs when updating the buffers. The latter is faster but requires OpenGL 4.3, which Apple products do not currently support.

The rendering runs at interactive framerates for both flat and curved screens, such as domes/planetariums. The user can also switch dataset during rendering, as well as rendering technique, buffer objects, color settings and many other properties. It is also possible to turn time on and see the stars move with their calculated space velocity, or transverse velocity if the star lacks radial velocity measurements. The calculations omit the gravitational rotation. The purpose of the thesis has been fulfilled, as it is possible to fly through the entire DR2 dataset on a moderate desktop computer and filter the data in real-time. However, the main contribution of the project may be that the groundwork has been laid in OpenSpace for astronomers to actually use it as a tool when visualizing their own datasets and for continuing to explore the coming Gaia releases.

This new publication in the Models and Modeling in Science Education series synthesizes a wealth of international research on using multiple representations in biology education and aims for a coherent framework for using them to improve higher-order learning. Addressing a major gap in the literature, the volume proposes a theoretical model for advancing biology educators’ notions of how multiple external representations (MERs), such as analogies, metaphors and visualizations, can best be harnessed to improve teaching and learning in biology at all pedagogical levels. The content tackles the conceptual and linguistic difficulties of learning biology at each level (macro, micro, sub-micro, and symbolic), illustrating how MERs can be used in teaching across these levels and in various combinations, as well as in differing contexts and topic areas. The strategies outlined will strengthen students’ reasoning and problem-solving skills, enhance their ability to construct mental models and internal representations, and, ultimately, assist in increasing public understanding of biology-related issues, a key goal in today’s world of pressing concerns over societal problems regarding food, environment, energy, and health. The book concludes by highlighting important aspects of research in biological education in the post-genomic, information age.

Decision support tools for efficient dispatching of fire and rescue resources are developed and evaluated. The tools can give suggestions about which resources to dispatch to new accidents, and help the decision makers in evaluating the current preparedness for handling future accidents. The tools are evaluated using simulation game based experiments, with players from the fire and rescue services. The results indicate that the tools can help the fire and rescue services in identifying the closest resources to new accidents, and to select resources that preserve the preparedness in the area. However, the results also indicate that there is a risk that the tools increase the decision time.

The importance of living a healthy life in an everyday context is promoted in schools and preschools. The discussion often focuses on what food is healthy, and that one should eat enough but not too much. The connection between food and beverages and their role in the body is seldom discussed. Students’ ideas about how the human body functions have been investigated in several studies, but few have focused on young children. In this study, we investigate young children’s conceptions related to this topic and how their ideas develop. Seventy-nine pre- and primary school children, aged 4-11, participated in individual focus interviews wherein the children were asked to draw and explain their understanding. Our results confirm several findings observed by other workers. However, in contrast with earlier studies, 10 of seventeen 4-5-year-old children indicated the stomach, and more than half of those children described how food can be utilized in the body to extract energy. Furthermore, the brain was among the most commonly mentioned organs across all age groups. Interestingly, the level of expertise varied and did not covary with age. For example, five of eight 4-year-old children drew 5-8 organs, while a single 10-year-old child could mention only three. Similarly, two of thirteen 7-year-old children provided an almost completely correct description of the digestive tract and its function, while most of the older children expressed a much less developed understanding. The results reflect the wide range of different conceptual ideas that teachers confront in a day-to-day classroom context.

Understanding the concept ‘life’ and what characterises ‘living things’ is important as a foundation for learning in biology. In a more general view, this understanding can help children develop awareness, respect and responsibility for life as members of a society and in decision making for sustainable development. The present pilot study aims to investigate 5-6-year-old pre-school children’s reasoning and representations about living and non-living things. In cognitive developmental research, the concept of life is well investigated, but questions still remain regarding how children reason around and represent these concepts. Previous research has found that children have difficulties in including plants as living things. Moreover, it has been found that young children include, e.g., the sun, clouds and rocks as living things. The methods that have been used are often quantitative and use picture cards with different objects for the children to categorize. In the present pilot study a modified methodology was applied. Children’s drawings of what they consider living and non-living were collected, and picture cards were used as a point of departure for reasoning. In the interviews the children were encouraged to explain and express their ideas. The drawings and the cards mainly worked as a meaning-making tool for the children. Results from the study will be presented and discussed.

Turbulence and flow eccentricity can be measured by magnetic resonance imaging (MRI) and may play an important role in the pathogenesis of numerous cardiovascular diseases. In the present study, we propose quantitative techniques to assess turbulent kinetic energy (TKE) and flow eccentricity that could assist in the evaluation and treatment of stenotic severities. These hemodynamic parameters were studied in an aortic coarctation (CoA) before treatment and after several virtual interventions, using computational fluid dynamics (CFD), to demonstrate the effect of different dilatation options on the flow field. Patient-specific geometry and flow conditions were derived from MRI data. The unsteady pulsatile flow was resolved by large eddy simulation (LES), including non-Newtonian blood rheology. Results showed an inverse asymptotic relationship between the total amount of TKE and the degree of dilatation of the stenosis, where turbulent flow proximal to the constriction limits the possible improvement from treating the CoA alone. Spatiotemporal maps of TKE and flow eccentricity could be linked to the characteristics of the jet, where improved flow conditions were favored by an eccentric dilatation of the CoA. By including these flow markers in a combined MRI-CFD intervention framework, CoA therapy can not only rely on predictions from simulation, but can also be validated pre- and immediately post-treatment, as well as during follow-up studies.

Turbulent blood flow is often associated with some form of cardiovascular disease, arising at, e.g., sharp bends and/or sudden constrictions/expansions of the vessel wall. The energy losses associated with the turbulent flow may increase the heart's workload in order to maintain cardiac output (CO). In the present study, the amount of turbulent kinetic energy (TKE) developed in the vicinity of an aortic coarctation was estimated pre-intervention and in a variety of post-intervention configurations, using scale-resolved image-based computational fluid dynamics (CFD). TKE can be measured using magnetic resonance imaging (MRI) and has also been validated against CFD simulations [1]; i.e., it is a parameter that can not only be quantified using simulations but can also be measured by MRI.
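For reference, the standard definition underlying MRI- and CFD-based TKE estimates sums the variances of the velocity fluctuations in the three directions at each voxel, scaled by density. A minimal sketch with made-up sample values (not data from this study):

```python
import statistics

# TKE per unit volume: 1/2 * rho * (var(u) + var(v) + var(w)),
# computed from velocity fluctuations about the mean.

def tke(samples_u, samples_v, samples_w, rho=1060.0):
    """TKE in J/m^3; rho is blood density in kg/m^3."""
    return 0.5 * rho * sum(
        statistics.pvariance(s) for s in (samples_u, samples_v, samples_w)
    )

# Toy velocity samples (m/s) at one voxel across repeated cardiac cycles:
print(round(tke([1.0, 1.2, 0.8], [0.1, -0.1, 0.0], [0.0, 0.0, 0.0]), 1))
```

In the MRI case the variances come from the measured intravoxel velocity distribution, while in the CFD case they come from the resolved velocity fluctuations, which is what makes the two directly comparable.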

Patient-specific geometry and inlet flow conditions were obtained using contrast-enhanced MR angiography and 2D cine phase-contrast MRI, respectively. The intervention procedure was mimicked using an inflation simulation, where six different geometries were obtained. A scale-resolving turbulence model, large eddy simulation (LES), was utilized to resolve the largest turbulent scales and also to capture the laminar-to-turbulent transition. All cases were simulated using baseline CO and with a 20% CO increase to simulate a possible flow adaption after intervention.

For this patient, the results show a non-linear decay of the total amount of TKE, integrated over the cardiac phase, as the stenotic cross-sectional area is increased by the intervention. Figure 1 shows the original segmented geometry and two dilated coarctations with corresponding volume renderings of the TKE at peak systole. Due to turbulent transition at a kink upstream of the stenosis, further dilation of the coarctation tends to restrict the TKE to a plateau, and continued vessel expansion may therefore only induce unnecessary stresses on the arterial wall.

This patient-specific non-invasive framework has demonstrated the impact of geometry on the TKE estimates. New insight into turbulence development indicates that the studied coarctation can only be improved to a certain extent, and that focus should be on the upstream region if further TKE reduction is warranted. The possibility of including MRI in a combined framework could have great potential for future intervention planning and follow-up studies.

Level set methods are a popular way to solve the image segmentation problem. The solution contour is found by solving an optimization problem where a cost functional is minimized. Gradient descent methods are often used to solve this optimization problem since they are very easy to implement and applicable to general nonconvex functionals. They are, however, sensitive to local minima and often display slow convergence. Traditionally, cost functionals have been modified to avoid these problems. In this paper, we instead propose using two modified gradient descent methods, one using a momentum term and one based on resilient propagation. These methods are commonly used in the machine learning community. In a series of 2-D/3-D-experiments using real and synthetic data with ground truth, the modifications are shown to reduce the sensitivity for local optima and to increase the convergence rate. The parameter sensitivity is also investigated. The proposed methods are very simple modifications of the basic method, and are directly compatible with any type of level set implementation. Downloadable reference code with examples is available online.
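The two modified update rules can be illustrated on a toy one-dimensional cost. This is only a sketch of the generic momentum and Rprop updates the paper borrows from machine learning, not the level-set implementation itself; the step-size constants are arbitrary choices.

```python
# Toy cost f(x) = (x - 3)^2 with gradient 2 * (x - 3); the minimum is x = 3.
def grad(x):
    return 2.0 * (x - 3.0)

# 1) Gradient descent with a momentum term: each step reuses a fraction
#    of the previous step, damping oscillations and speeding convergence.
xm, v = 0.0, 0.0
for _ in range(200):
    v = 0.9 * v - 0.1 * grad(xm)
    xm += v

# 2) Resilient propagation (Rprop): only the SIGN of the gradient is used;
#    the step size grows while the sign is stable and shrinks when it flips.
xr, step, prev_g = 0.0, 0.1, 0.0
for _ in range(200):
    g = grad(xr)
    if g * prev_g > 0:
        step = min(step * 1.2, 1.0)     # same sign: accelerate
    elif g * prev_g < 0:
        step = max(step * 0.5, 1e-6)    # sign flipped: overshoot, back off
    xr -= step * (1 if g > 0 else -1 if g < 0 else 0)
    prev_g = g

print(xm, xr)  # both approach the minimum at 3.0
```

In the level-set setting the scalar `x` becomes the embedding function and `grad` the functional gradient, but the update rules themselves are unchanged, which is why they drop into any existing level-set implementation.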

The municipality of Norrköping is a large organization with many different areas of expertise. Making sure that the values are the same throughout the organization is a heavy but important task if the brand is to be communicated in the right way in all parts of the organization. An internal review of the employees' opinions takes place every other year, the last one in 2013. The results from the review made the municipality of Norrköping realise that it needed to increase its internal work with brand knowledge. The municipality of Norrköping took action and created a communications platform called Let's Create Norrköping. The problem it now faces is that it is not sure whether the employees have interpreted the information regarding the brand in the correct way. This case study aims to investigate how the brand Norrköping is communicated through the platform Let's Create Norrköping and how the employees perceive the brand of the municipality of Norrköping.

To address the aim of the report, data was collected through interviews divided into two parts. The first part consists of an interview with Tina Vennerholm, brand developer at the communications department of the municipality of Norrköping. The main objective of this interview was to identify the main objectives of the platform Let's Create Norrköping. The second part was conducted with five employees within the organization of the municipality of Norrköping, to capture their view of the platform Let's Create Norrköping. Conclusions that can be drawn from the study include that the employees lack the presence of representatives of the municipality of Norrköping at their workplaces. The employees have a positive opinion of the platform Let's Create Norrköping, although they have not received any internal information regarding it. The reason for their positive opinions is likely that they have been affected by external marketing regarding the platform. This leads to the employees having a hard time implementing the message of the platform in their work.

The recommendations given to the municipality of Norrköping are to be visible at the various workplaces, to show the workers which people are responsible for the communication at their workplace, and to give the employees a personal connection to the source of the information, which makes it easier for the workers to accept and implement it.

The employees all have positive connections to the city of Norrköping. That is why the conductors of this study recommend that the municipality of Norrköping connect the work done by the employees in all the different areas of the organization to the city. This would engage and motivate the employees and make them see and understand the connection between the city and the organization and the part that they play in it.

Signaling data from the cellular networks can provide a means of analyzing the efficiency of a deployed transportation system and assisting in the formulation of transport models to predict its future use. An approach based on this type of data can be especially appealing for transportation systems that need massive expansions, since it has the added benefit that no specialized equipment or installations are required, hence it can be very cost efficient.

Within this context, in this paper we describe how such data can be processed and used as an enabler for traditional transportation analysis models. We outline a layered, modular architectural framework that encompasses the entire process and present results from an initial analysis of mobile phone call data in the context of mobility, transport and transport infrastructure. Finally, we introduce the Mobility Analytics Platform, developed by Ericsson Research and tailored for mobility analysis, discuss techniques for analyzing transport supply and demand, and indicate how cell phone use data can be used directly to analyze the status and use of the current transport infrastructure.

The following blog post is edited from an email conversation between the authors about the concept of interactive form, which incidentally is the name of a course given at Linköping University. If you do teach a course, it might be a good idea to understand the meaning of the course name.

Designers need to survey the competition and analyze precedent designs, but methods for that purpose have not been evaluated in earlier research. This paper makes a comparative evaluation of competitive analysis and genre analysis. A randomized between-group experiment was conducted in which graphic design students performed one of the two analysis methods. There were 13 students in one group and 16 in the other. The results show that genre analysis produced more detailed descriptions of precedent designs, but its process was more difficult to understand. It is concluded that genre analysis can be integrated into competitive analysis, to make use of the strengths of both methods in the analysis of precedents.

This paper compares numerical predictions of turbulence intensity with in vivo measurements. Magnetic resonance imaging (MRI) was carried out on a 60-year-old female with a restenosed aortic coarctation. Time-resolved three-directional phase-contrast (PC) MRI data was acquired to enable turbulence intensity estimation. A contrast-enhanced MR angiography (MRA) and a time-resolved 2D PCMRI measurement were also performed to acquire data needed to perform subsequent image-based computational fluid dynamics (CFD) modeling. A 3D model of the aortic coarctation and surrounding vasculature was constructed from the MRA data, and physiologic boundary conditions were modeled to match 2D PCMRI and pressure pulse measurements. Blood flow velocity data was subsequently obtained by numerical simulation. Turbulent kinetic energy (TKE) was computed from the resulting CFD data. Results indicate relative agreement (error ≈ 10%) between the in vivo measurements and the CFD predictions of TKE. The discrepancies in modeled vs. measured TKE values were within expectations due to modeling and measurement errors.

The increase in vehicular traffic has created new challenges in determining the behavior and performance of data and safety measures in traffic. Traffic signals at intersections are therefore used as cost-effective and time-saving tools for traffic management in urban areas. On the other hand, signalized intersections in congested urban areas are a key source of high traffic density and slow traffic. High traffic density lowers the network data rate of vehicle-to-vehicle and vehicle-to-infrastructure communication. Among the emerging technologies, LTE takes the lead with good packet delivery and adaptability to changes in the network caused by vehicular movement and density.

This thesis analyzes an LTE implementation based on a road traffic density model. The aim of the thesis work is to use a probability distribution function to calculate density values and to develop a realistic traffic scenario in an LTE network using those density values.

In order to analyze the traffic behavior, the Aimsun simulator software was used to represent a real traffic density situation at a model intersection. For a realistic traffic density model, field measurements were used as input data. After a calibration and validation process, close-to-reality results were extracted, and a logistic curve of a probability distribution function was used to determine the density on each part of the intersection. Similar traffic scenarios were implemented in a MATLAB-based LTE system-level simulator.

Results were obtained for the whole 90-second traffic scenario by calculating the throughput at every traffic signal time and section.

It is quite evident from the results that the LTE system adapts to the changing traffic behavior in a dynamic manner and allocates more bandwidth where it is most needed.

In this work, we address the challenge of seamlessly visualizing astronomical data exhibiting huge scale differences in distance, size, and resolution. One of the difficulties is accurate, fast, and dynamic positioning and navigation to enable scaling over orders of magnitude, far beyond the precision of floating point arithmetic. To this end we propose a method that utilizes a dynamically assigned frame of reference to provide the highest possible numerical precision for all salient objects in a scene graph. This makes it possible to smoothly navigate and interactively render, for example, surface structures on Mars and the Milky Way simultaneously. Our work is based on an analysis of tracking and quantification of the propagation of precision errors through the computer graphics pipeline using interval arithmetic. Furthermore, we identify sources of precision degradation, leading to incorrect object positions in screen-space and z-fighting. Our proposed method operates without near and far planes while maintaining high depth precision through the use of floating point depth buffers. By providing interoperability with order-independent transparency algorithms, direct volume rendering, and stereoscopy, our approach is well suited for scientific visualization. We provide the mathematical background, a thorough description of the method, and a reference implementation.
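The precision problem the method addresses can be reproduced in a few lines. The sketch below (illustrative numbers, not the paper's implementation) shows why re-expressing coordinates relative to a dynamically chosen reference point in high precision, before rounding to GPU single precision, preserves nearby detail that absolute coordinates lose:

```python
import struct

def to_float32(x):
    # Round a Python float (double precision) to IEEE-754 single precision.
    return struct.unpack("f", struct.pack("f", x))[0]

camera = 1.496e11            # ~1 au from the origin, in meters
surface_a = camera + 1.0     # two points one meter apart near the camera
surface_b = camera + 2.0

# Naive: convert absolute coordinates to float32, then subtract on the "GPU".
naive = to_float32(surface_b) - to_float32(surface_a)

# Rebased: subtract the reference point in double precision first.
rebased = to_float32(surface_b - camera) - to_float32(surface_a - camera)

print(naive, rebased)  # the 1 m separation is lost in the naive version
```

At 1.496e11 m the spacing between adjacent float32 values is about 16 km, so meter-scale structure collapses unless the frame of reference is moved close to the objects being rendered, which is the intuition behind the dynamically assigned reference frame.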

This thesis describes the work done by two students from Linköping University during a five-month stay at the Community Coordinated Modelling Center (CCMC) at the National Aeronautics and Space Administration (NASA). The work includes the implementation of algorithms for rendering time-varying volume simulation data from space weather simulations hosted by the CCMC, as well as visualizing photo sequences taken by the Solar Dynamics Observatory (SDO) satellite orbiting Earth. Both these capabilities are added to the OpenSpace software to create a multi-modal visualization where scientists, as well as museum audiences, can observe the Sun’s activity and its effects on the heliosphere as a whole.

Both the simulation data and the image sequences provided by the SDO are typically larger than what fits in the main memory of modern computers, which requires the data to be streamed from disk. Due to limitations in disk and GPU bandwidth, it is not possible to stream the full-resolution data sets at interactive frame rates.

A multi-resolution bricking scheme is implemented to allow for interactive visualization of the large volumetric data sets. To decrease GPU memory usage and minimize data streaming, subvolumes are represented using different spatial and temporal resolution depending on their relative importance to the visual quality. By introducing the concept of a memory budget and a streaming budget, the algorithm allows the user to control how the limited memory and streaming resources are utilized.
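One way to picture such a budget-driven scheme is as a greedy refinement: start from the coarsest brick and repeatedly split the most "important" resident brick into finer children, as long as both budgets allow it. This is a simplified sketch with assumed names, not the thesis implementation:

```python
import heapq

def select_bricks(root_importance, memory_budget, streaming_budget, children_of):
    """Greedily refine bricks by importance under a memory budget
    (bricks resident at once) and a streaming budget (bricks uploaded)."""
    heap = [(-root_importance, 0)]   # max-heap via negated importance
    resident, uploads = 1, 1
    chosen = []
    while heap:
        _, brick = heapq.heappop(heap)
        kids = children_of(brick)
        # Splitting replaces one resident brick with len(kids) finer ones.
        if kids and resident - 1 + len(kids) <= memory_budget \
                and uploads + len(kids) <= streaming_budget:
            resident += len(kids) - 1
            uploads += len(kids)
            for child_id, importance in kids:
                heapq.heappush(heap, (-importance, child_id))
        else:
            chosen.append(brick)
    return chosen

# Toy tree: root brick 0 with eight children of decreasing importance.
def children_of(brick):
    return [(i, 10 - i) for i in range(1, 9)] if brick == 0 else []

print(select_bricks(5.0, memory_budget=8, streaming_budget=16,
                    children_of=children_of))   # all eight children fit
print(select_bricks(5.0, memory_budget=4, streaming_budget=16,
                    children_of=children_of))   # too tight: keep coarse root
```

Raising either budget lets more important subvolumes be refined, which is the frame-rate-versus-quality trade-off described above.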

To decrease the amount of data to be streamed when visualizing image sequences from SDO, a simpler multi-resolution bricking scheme has been implemented. Spatial resolution of different subregions of the image is varied based on their visibility and projected size on the screen. Results show that the presented implementations enable interactive visualization of volumetric space weather data and satellite data. By varying the streaming budget and memory budget for a volumetric simulation, frame rate can be traded against visual quality.

The research work presented in this thesis is concerned with the analysis of the human body as a calibration platform for estimation of a pinhole camera model used in Augmented Reality environments mediated through an optical see-through head-mounted display. Since the quality of the calibration ultimately depends on a subject’s ability to construct visual alignments, the research effort is initially centered on user studies investigating human-induced noise, such as postural sway and head aiming precision. Knowledge about subject behavior is then applied to a sensitivity analysis in which simulations are used to determine the impact of user noise on camera parameter estimation.

Quantitative evaluation of the calibration procedure is challenging since the current state of the technology does not permit access to the user’s view and measurements in the image plane as seen by the user. In an attempt to circumvent this problem, researchers have previously placed a camera in the eye socket of a mannequin, and performed both calibration and evaluation using the auxiliary signal from the camera. However, such a method does not reflect the impact of human noise during the calibration stage, and the calibration is not transferable to a human as the eyepoint of the mannequin and the intended user may not coincide. The experiments performed in this thesis use human subjects for all stages of calibration and evaluation. Moreover, some of the measurable camera parameters are verified with an external reference, addressing not only calibration precision, but also accuracy.

Abstract [en]

The precision with which users can maintain boresight alignment between visual targets at different depths is recorded for 24 subjects using two different boresight targets. Subjects' normal head stability is established using their Romberg coefficients. Weibull distributions are used to describe the probabilities of the magnitude of head positional errors, and the three-dimensional cloud of errors is displayed by orthogonal two-dimensional density plots. These data will lead to an understanding of the limits of user-introduced calibration error in augmented reality systems.
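Fitting a Weibull distribution to error magnitudes, as done in the abstract above, can be sketched as follows. The sample itself is synthetic (the study's data is not reproduced here), and the shape/scale values and the 5 mm threshold are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

# Hypothetical head positional error magnitudes (mm), drawn from a Weibull
# distribution as a stand-in for measured data
errors = stats.weibull_min.rvs(c=1.8, scale=2.5, size=500, random_state=0)

# Fit a two-parameter Weibull: location fixed at zero, since error
# magnitudes are non-negative by construction
shape, loc, scale = stats.weibull_min.fit(errors, floc=0)

# With a fitted model one can ask, e.g., how likely an alignment error
# is to exceed 5 mm (survival function = 1 - CDF)
p_exceed = stats.weibull_min.sf(5.0, shape, loc=loc, scale=scale)
print(shape, scale, p_exceed)
```

Such a fitted model is what allows calibration-error probabilities to be stated for an AR system, rather than just reporting raw standard deviations.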

Abstract [en]

The postural sway in 24 subjects performing a boresight calibration task on a large format head-up display is studied to estimate the impact of human limits on boresight calibration precision and, ultimately, on static registration errors. The dependent variables, accumulated sway path and omni-directional standard deviation, are analyzed for the calibration exercise and compared against control cases where subjects are quietly standing with eyes open and eyes closed. Findings show that postural stability deteriorates significantly during boresight calibration compared to when the subject is not occupied with a visual task. Analysis over time shows that the calibration error can be reduced by 39% if calibration measurements are recorded in a three-second interval approximately 15 seconds into the calibration session, as opposed to an initial reading. Furthermore, parameter optimization on experiment data suggests a Weibull distribution as a possible error description and estimation for omni-directional calibration precision. This paper extends previously published preliminary analyses, and the conclusions are verified with experiment data that has been corrected for the subject's inverted-pendulum compensatory head rotation, providing a better estimate of the position of the eye. With this correction the statistical findings are reinforced.

Abstract [en]

The quality of visual registration achievable with an optical see-through head mounted display (HMD) ultimately depends on the user’s targeting precision. This paper presents design guidelines for calibration procedures based on measurements of users’ head stability during visual alignment with reference targets. Targeting data was collected from 12 standing subjects who aligned a head-fixed cursor presented in a see-through HMD with background targets that varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Their data showed that: 1) both position and orientation data will need to be used to establish calibrations based on nearby reference targets, since eliminating body sway effects can improve calibration precision by a factor of 16 and eliminate apparent angular anisotropies; 2) compensation for body sway can speed the calibration by removing the need to wait for the body sway to abate; and 3) calibration precision can be less than 2 arcmin even for head directions rotated up to 60° with respect to the user’s torso, provided body sway is corrected. Users of Augmented Reality (AR) applications overlooking large distances may avoid the need to correct for body sway by boresighting on markers at relatively long distances, >> 10 m. These recommendations contrast with those for heads-up displays using real images as discussed in previous papers.

Abstract [en]

The mitigation of registration errors is a central challenge for improving the usability of Augmented Reality systems. While the technical achievements within tracking and display technology continue to improve the conditions for good registration, little research is directed towards understanding the user’s visual alignment performance during the calibration process. This paper reports 12 standing subjects’ visual alignment performance using an optical see-through head mounted display for viewing directions varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Although viewing direction has a statistically significant effect on the shape of the distribution, the effect is small and negligible for practical purposes, and can be approximated by a circular distribution with a standard deviation of 0.2° for all viewing directions studied in this paper. In addition to quantifying head aiming accuracy with a head-fixed cursor and illustrating the deteriorating accuracy of boresight calibration with increasing viewing direction extremity, the results are applicable to filter design determining the onset and end of head rotation.

Ellis, Stephen

2010 (English). In: Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society, 2010. Conference paper, Published paper (Refereed)

Abstract [en]

The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user’s eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. By contrast, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte-Carlo simulations we show that it is particularly difficult to estimate the user’s eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.
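A Monte-Carlo study of this kind can be sketched as follows. The intrinsics, the correspondence-point volume, and the 5 px alignment noise are illustrative assumptions, and a plain Direct Linear Transform (without normalization or refinement) stands in for whatever calibration routine the paper actually used; the point is only to show depth being the poorly constrained direction of the eyepoint.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth pinhole camera: intrinsics K, identity rotation,
# eyepoint C at the origin
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
R = np.eye(3)
C = np.array([0.0, 0.0, 0.0])
P_true = K @ np.hstack([R, (-R @ C)[:, None]])

def project(P, X):
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def dlt(X, x):
    """Direct Linear Transform: two linear equations per 2D-3D correspondence;
    the camera matrix is the null vector of the stacked system."""
    A = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)
        A.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def camera_center(P):
    _, _, Vt = np.linalg.svd(P)   # center = right null vector of P
    c = Vt[-1]
    return c[:3] / c[3]

# 25 correspondence points in a frustum 1-3 m in front of the eye
X = rng.uniform([-0.5, -0.5, 1.0], [0.5, 0.5, 3.0], size=(25, 3))
x_clean = project(P_true, X)

# Monte-Carlo: human alignment noise of ~5 px std on every 2D alignment
depth_err, lateral_err = [], []
for _ in range(200):
    x_noisy = x_clean + rng.normal(0, 5.0, x_clean.shape)
    C_est = camera_center(dlt(X, x_noisy))
    lateral_err.append(np.abs(C_est[:2] - C[:2]).mean())
    depth_err.append(abs(C_est[2] - C[2]))

print(np.mean(lateral_err), np.mean(depth_err))
```

In this toy setup the mean depth error of the recovered eyepoint exceeds the lateral error, mirroring the paper's observation; widening the depth range of `X` shrinks the gap.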

Abstract [en]

The parameter estimation variance of the Single Point Active Alignment Method (SPAAM) is studied through an experiment where 11 subjects are instructed to create alignments using an Optical See-Through Head Mounted Display (OSTHMD) such that three separate correspondence point distributions are acquired. Modeling the OSTHMD and the subject's dominant eye as a pinhole camera, findings show that a correspondence point distribution well spread along the user's line of sight yields less variant parameter estimates. The estimated eye point location is studied in particular detail. The findings of the experiment are complemented with simulated data, which show that image plane orientation is sensitive to the number of correspondence points. The simulated data also illustrate some interesting properties of the numerical stability of the calibration problem as a function of alignment noise, number of correspondence points, and correspondence point distribution.

Abstract [en]

This paper studies the accuracy of the estimated eyepoint of an Optical See-Through Head-Mounted Display (OST HMD) calibrated using the Single Point Active Alignment Method (SPAAM). Quantitative evaluation of calibration procedures for OST HMDs is complicated as it is currently not possible to share the subject’s view. Temporarily replacing the subject’s eye with a camera during the calibration or evaluation stage has been proposed, but the uncertainty of a correct eyepoint estimation remains. In the experiment reported in this paper, subjects were used for all stages of calibration and the results were verified with a 3D measurement device. The nine participants constructed 25 visual alignments per calibration after which the estimated pinhole camera model was decomposed into its intrinsic and extrinsic parameters using two common methods. Unique to this experiment, compared to previous evaluations, is the measurement device used to cup the subject’s eyeball. It measures the eyepoint location relative to the head tracker, thereby establishing the calibration accuracy of the estimated eyepoint location. As the results on accuracy are expressed as individual pinhole camera parameters, rather than a compounded registration error, this paper complements previously published work on parameter variance as the former denotes bias and the latter represents noise. Results indicate that the calibrated eyepoint is on average 5 cm away from its measured location and exhibits a vertical bias which potentially causes dipvergence for stereoscopic vision for objects located further away than 5.6 m. Lastly, this paper closes with a discussion on the suitability of the traditional pinhole camera model for OST HMD calibration.
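Decomposing an estimated pinhole camera matrix into intrinsic and extrinsic parameters, as the abstract describes, is commonly done with an RQ factorization of the left 3x3 block plus a null-space computation for the eyepoint. A minimal sketch (the camera values are hypothetical, and this is one of several standard decomposition methods, not necessarily either of the two the paper compared):

```python
import numpy as np
from scipy.linalg import rq

def decompose(P):
    """Split a 3x4 pinhole camera matrix P into intrinsics K, rotation R,
    and camera center C (the eyepoint)."""
    M = P[:, :3]
    K, R = rq(M)                         # M = K @ R, K upper triangular
    D = np.diag(np.sign(np.diag(K)))     # resolve the sign ambiguity of RQ
    K, R = K @ D, D @ R                  # (in general also check det(R) = +1)
    K = K / K[2, 2]                      # normalize so K[2,2] == 1
    _, _, Vt = np.linalg.svd(P)          # center = right null vector of P
    C = Vt[-1][:3] / Vt[-1][3]
    return K, R, C

# Round-trip check on a hypothetical camera with a small eyepoint offset
K0 = np.array([[900.0, 0, 320], [0, 900, 240], [0, 0, 1]])
R0 = np.eye(3)
C0 = np.array([0.02, -0.01, 0.05])       # metres, e.g. eye relative to tracker
P = K0 @ np.hstack([R0, (-R0 @ C0)[:, None]])
K, R, C = decompose(P)
print(np.allclose(K, K0), np.allclose(C, C0))
```

Comparing the recovered `C` against an externally measured eyepoint, rather than a compounded registration error, is exactly what lets accuracy be reported per parameter as in the paper.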


Physics in interactive environments, such as computer games and simulations, requires well-made and accurate bounding volumes in order to behave both realistically and fast. Today it is common either to use inaccurate boxes or spheres as bounding volumes, or to model the volume by hand. These methods are either too inaccurate or require too much time to be used in real-time, accurate virtual environments. This thesis presents a method to automatically generate collision hulls for both manifolds and non-manifolds. This allows meshes to be used in a physical environment in just a few seconds while still behaving realistically. The method performs Approximate Convex Decomposition by iteratively dividing the mesh into smaller, more convex parts. Every part is wrapped in a convex hull. Together the hulls form an accurate, but low-cost, convex representation of the original mesh. The convex hulls are stored in a bounding volume hierarchy tree structure that enables fast testing for collision with the mesh.
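The iterative divide-and-wrap step can be illustrated on a point cloud. This is a deliberately crude stand-in for the thesis method: real Approximate Convex Decomposition splits the mesh surface guided by a concavity measure, whereas here the split criterion (largest hull volume, median cut along the longest axis) is an assumption chosen only to keep the sketch short.

```python
import numpy as np
from scipy.spatial import ConvexHull

def approx_convex_decomposition(points, max_parts=4):
    """Illustrative sketch: repeatedly split the part whose convex hull has
    the largest volume, cutting along its longest bounding-box axis at the
    median, then wrap every resulting part in its own convex hull."""
    parts = [np.asarray(points, dtype=float)]
    while len(parts) < max_parts:
        idx = max(range(len(parts)), key=lambda i: ConvexHull(parts[i]).volume)
        worst = parts.pop(idx)
        axis = int(np.argmax(np.ptp(worst, axis=0)))
        median = np.median(worst[:, axis])
        left = worst[worst[:, axis] <= median]
        right = worst[worst[:, axis] > median]
        if len(left) < 4 or len(right) < 4:   # too few points for a 3D hull
            parts.append(worst)
            break
        parts += [left, right]
    return [ConvexHull(p) for p in parts]

# Toy non-convex (L-shaped) cloud: two overlapping boxes of random points
rng = np.random.default_rng(3)
pts = np.vstack([rng.uniform([0, 0, 0], [2, 1, 1], (200, 3)),
                 rng.uniform([0, 1, 0], [1, 2, 1], (200, 3))])
hulls = approx_convex_decomposition(pts, max_parts=4)
print(len(hulls), sum(h.volume for h in hulls), ConvexHull(pts).volume)
```

The summed volume of the part hulls is smaller than the volume of one hull around everything, which is the accuracy gain the decomposition buys over a single convex bound.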

This thesis presents a GPU-accelerated method to compress light fields or light field videos. The implementation is based on earlier work on a full light field compression framework. The large amount of data produced when capturing light fields makes compression challenging, and we seek to accelerate the encoding part. We compress by projecting each data point onto a set of dictionaries, seeking the sparse representation with the least error. An optimized greedy algorithm suited to computation on the GPU is presented. We exploit the structure of the algorithm by encoding the data in parallel segments for faster computation while maintaining quality. The results show a significantly faster encoding time compared to other results in the same research field. We conclude that there is room for further speed improvements, and thus an interactive compression speed is not far off.
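The "project onto dictionaries, keep a sparse representation" idea can be sketched with generic matching pursuit. This is not the thesis' optimized encoder: the dictionary, signal sizes, and stopping rule are illustrative, and the reference version below is plain NumPy, though the dominant per-iteration cost (a dense matrix-vector product) is the part that maps well to a GPU.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=20):
    """Greedy sparse coding: each step picks the dictionary atom (assumed
    unit-norm column) most correlated with the current residual, adds its
    coefficient, and subtracts its contribution from the residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual   # the GPU-friendly step
        k = int(np.argmax(np.abs(correlations)))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy example: a signal that is an exact combination of 3 dictionary atoms
rng = np.random.default_rng(2)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                   # unit-norm columns
truth = np.zeros(256)
truth[[3, 40, 100]] = [1.0, -0.5, 0.8]
y = D @ truth
coeffs, residual = matching_pursuit(y, D)
rel_err = np.linalg.norm(residual) / np.linalg.norm(y)
print(rel_err)
```

Encoding many signals (or segments of one light field) is embarrassingly parallel, since each runs this loop independently, which is the property the segmental parallel encoding in the thesis exploits.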