This thesis consists of four research papers presenting a microdata analysis approach to assessing and evaluating Parkinson’s disease (PD) motor symptoms using smartphone-based systems. PD is a progressive neurological disorder characterized by motor symptoms. It is a complex disease that requires continuous monitoring and multidimensional symptom analysis. Both patients’ perceptions of common symptoms and their motor function need to be related through repeated, time-stamped assessments; only then can the full extent of a patient’s condition be revealed. Smartphones enable and facilitate remote, long-term and repeated assessment of PD symptoms. Two sets of smartphone-collected data were used: one gathered during a three-year clinical study and another during a one-day clinical study. The data were collected from a series of tests consisting of tapping and spiral motor tests. During the one-day data collection, alongside the smartphone-based measurements, patients were video recorded while performing standardized motor tasks according to the Unified Parkinson’s Disease Rating Scale (UPDRS).

The first objective of this thesis was to survey the state of the art in sensor systems and measures used to detect, assess and quantify the four cardinal motor symptoms and dyskinesia. This was done through a review study. The review showed that smartphones are preferred as the new generation of sensing devices since they are part of patients’ daily accessories, they are widely available, and they capture high-resolution activity data. Smartphones can capture important measures such as forces, acceleration and radial displacements that are useful for assessing PD motor symptoms.

Building on the insights from the review study, the second objective of this thesis was to investigate whether a combination of tapping and spiral drawing tests could be used to quantify dexterity in PD. More specifically, the aim was to develop data-driven methods to quantify and characterize dexterity in PD. The results from this study showed that tapping and spiral drawing tests collected with a smartphone can reasonably well detect movements related to under- and over-medication.

The thesis continued by developing an Approximate Entropy (ApEn)-based method aimed at measuring the amount of temporal irregularity during spiral drawing tests. One of the disabilities associated with PD is an impaired ability to accurately time movements. The increased timing variability among patients compared to healthy subjects suggests that the basal ganglia (BG) play a role in interval timing. The ApEn method was used to compute a temporal irregularity score (TIS), which significantly differentiated healthy subjects from patients at different stages of the disease. This method was compared to two other methods measuring overall drawing impairment and shakiness; TIS had better reliability and responsiveness. However, in contrast to the other methods, the mean scores of the ApEn-based method improved significantly during the 3-year clinical study, indicating a possible impact of pathological BG oscillations on temporal control during spiral drawing tasks. In addition, due to the data collection scheme, the study was limited by the lack of a gold standard for validating TIS. The findings were therefore further investigated using a different screen resolution, a new dataset, new patient groups, and shorter-term measurements. The new dataset included clinical assessments of patients while they performed tests according to the UPDRS. The results of this study confirmed the findings of the previous study. Further investigation of the correlation between TIS and clinical ratings showed that the amount of temporal irregularity present in spiral drawings cannot be detected during clinical assessment, since TIS is an upper-limb, high-frequency-based measure.
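The core ApEn computation behind a score such as TIS can be sketched as follows. This is a generic implementation of Approximate Entropy (Pincus's formulation) for a one-dimensional signal; the embedding dimension m, the tolerance r, and the example signals are illustrative assumptions, not the thesis's exact TIS pipeline.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate Entropy of a 1-D series; higher values mean more
    temporal irregularity. Tolerance r defaults to 0.2 * std(x),
    a common heuristic (assumption, not the thesis's setting)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # All overlapping m-length templates of the series.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A regular signal (sine wave) should score lower than white noise.
t = np.linspace(0, 8 * np.pi, 400)
regular = np.sin(t)
noisy = np.random.default_rng(0).standard_normal(400)
print(approximate_entropy(regular) < approximate_entropy(noisy))  # True
```

In a TIS-like application the input series would be derived from the time-stamped spiral drawing samples rather than from the synthetic signals used here.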

Parkinson’s disease (PD) is a degenerative, progressive disorder of the central nervous system that mainly affects motor control. The aim of this study was to develop data-driven methods and test their clinimetric properties to detect and quantify PD motor states using motion sensor data from leg agility tests. Nineteen PD patients were recruited in a levodopa single-dose challenge study. PD patients performed leg agility tasks while wearing motion sensors on their lower extremities. Clinical evaluation of video recordings was performed by three movement disorder specialists who used four items from the motor section of the Unified PD Rating Scale (UPDRS), the treatment response scale (TRS) and a dyskinesia score. Spatiotemporal features were calculated from the sensor data and relevant features were chosen by feature selection. Machine learning methods, including support vector machines (SVM), decision trees and linear regression, were trained with 10-fold cross-validation to predict the motor states of the patients. SVM showed the best convergent validity, with correlation coefficients of 0.81 to TRS, 0.83 to UPDRS #31 (body bradykinesia and hypokinesia), 0.78 to SUMUPDRS (the sum of the UPDRS items #26 leg agility, #27 arising from chair and #29 gait), and 0.67 to dyskinesia. Additionally, the SVM-based scores had similar test-retest reliability to the clinical ratings. The SVM-based scores were less responsive to treatment effects than the clinical scores, particularly with regard to dyskinesia. In conclusion, the results from this study indicate that using motion sensors during leg agility tests may lead to valid and reliable objective measures of PD motor symptoms.
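The evaluation loop described above (spatiotemporal features mapped to clinical ratings by an SVM, assessed with 10-fold cross-validation) can be sketched with scikit-learn. The feature matrix and the TRS-like target below are synthetic stand-ins; the dimensions, kernel and hyperparameters are assumptions, not the study's actual settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, KFold

rng = np.random.default_rng(42)
n_tests, n_features = 150, 12          # e.g. repeated leg agility test occasions
X = rng.standard_normal((n_tests, n_features))
# Simulated TRS-like target (-3 .. 3) partly explained by the features.
trs = np.clip(X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n_tests), -3, 3)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
cv = KFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(model, X, trs, cv=cv)

# Convergent validity: correlation between predicted and clinical scores.
r = np.corrcoef(pred, trs)[0, 1]
print(f"cross-validated correlation to TRS: {r:.2f}")
```

Using `cross_val_predict` ensures every predicted score comes from a model that never saw that test occasion, which is what makes the resulting correlation a fair validity estimate.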

Objective: To develop and evaluate machine learning methods for assessment of Parkinson’s disease (PD) motor symptoms using leg agility (LA) data collected with motion sensors during a single dose experiment.

Background: Nineteen advanced PD patients (Gender: 14 males and 5 females, mean age: 71.4, mean years with PD: 9.7, mean years with levodopa: 9.5) were recruited in a single center, open label, single dose clinical trial in Sweden [1].

Methods: The patients performed up to 15 LA tasks while wearing motion sensors on their ankles. They performed tests at pre-defined time points starting from baseline, at the time they received a morning dose (150% of their levodopa equivalent morning dose), and at follow-up time points until the medication wore off. The patients were video recorded while performing the motor tasks, and three movement disorder experts rated the observed motor symptoms using four items from the Unified PD Rating Scale (UPDRS) motor section, namely UPDRS #26 (leg agility), UPDRS #27 (arising from chair), UPDRS #29 (gait) and UPDRS #31 (body bradykinesia and hypokinesia), together with a dyskinesia scale. In addition, they rated the overall mobility of the patients using the Treatment Response Scale (TRS), ranging from -3 (very off) to 3 (very dyskinetic). The sensor data were processed and their quantitative measures were used to develop machine learning methods, which mapped them to the mean ratings of the three raters. The quality of measurements of the machine learning methods was assessed by convergent validity, test-retest reliability and sensitivity to treatment.

Results: Results from the 10-fold cross-validation showed good convergent validity of the machine learning methods (Support Vector Machines, SVM), with correlation coefficients of 0.81 for TRS, 0.78 for UPDRS #26, 0.69 for UPDRS #27, 0.78 for UPDRS #29, 0.83 for UPDRS #31, and 0.67 for the dyskinesia scale (P<0.001). There were good correlations between scores produced by the methods during the first (baseline) and second tests, with coefficients ranging from 0.58 to 0.96, indicating good test-retest reliability. The machine learning methods had lower sensitivity to treatment than the mean clinical ratings (Figure 1).

Conclusions: The presented methodology assessed motor symptoms in PD well, comparably to movement disorder experts. However, the leg agility test did not reflect treatment-related changes.

The aim of this paper is to develop and evaluate a multi-sensor data fusion platform for quantifying Parkinson’s disease (PD) motor states. More specifically, the aim is to evaluate the clinimetric properties (validity, reliability, and responsiveness to treatment) of the method, using data from motion sensors during lower- and upper-limb tests.

Methods: Nineteen PD patients and 22 healthy controls were recruited in a single-center study. Subjects performed standardized motor tasks of the Unified PD Rating Scale (UPDRS), including leg agility, hand rotation, and walking, while wearing motion sensors on their ankles and wrists. PD patients received a single levodopa dose and were assessed before and at follow-up time points after the dose administration. Patients were video recorded and their motor symptoms were rated by three movement disorder experts. The experts rated each test occasion based on six items of the UPDRS-III (motor section), the treatment response scale (TRS) and a dyskinesia score. Spatiotemporal features were extracted from the sensor data, and features from the lower and upper limbs were fused. The feature selection methods stepwise regression (SR), Lasso regression and principal component analysis (PCA) were used to select the most important features. The machine learning methods linear regression (LR), decision trees and support vector machines were examined and their clinimetric properties assessed.
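The fusion-and-selection step can be sketched as follows: features from lower- and upper-limb sensors are concatenated, a Lasso screens out uninformative ones, and a linear model maps the survivors to a TRS-like target. All data, dimensions and coefficients here are synthetic assumptions, not study data.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(1)
n = 120
lower_limb = rng.standard_normal((n, 8))    # e.g. leg agility features
upper_limb = rng.standard_normal((n, 6))    # e.g. hand rotation features
X = np.hstack([lower_limb, upper_limb])     # feature-level fusion
# Synthetic target driven by one lower-limb and one upper-limb feature.
y = 1.2 * X[:, 0] - 0.8 * X[:, 9] + 0.2 * rng.standard_normal(n)

# Lasso with cross-validated regularization strength as feature selector.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
print("selected feature indices:", selected)

# Final linear model on the selected features only.
final = LinearRegression().fit(X[:, selected], y)
print("R^2 on training data:", round(final.score(X[:, selected], y), 2))
```

In the study, stepwise regression played the analogous screening role for the best-performing model; Lasso is shown here because it has a compact off-the-shelf implementation.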

Results: Treatment response index from multimodal motion sensors (TRIMMS) scores were obtained from the most valid method, LR using data from all tests with features selected by SR; this method resulted in r = 0.95 to TRS. The test-retest reliability of TRIMMS was good, with an intra-class correlation coefficient of 0.82. The responsiveness of TRIMMS to levodopa treatment was similar to that of TRS.

Conclusions: The results from this study indicate that fusing motion sensors data gathered during standardized motor tasks leads to valid, reliable and sensitive objective measurements of PD motor symptoms. These measurements could be further utilized in studies for individualized optimization of treatments in PD.

Objective: To assess the feasibility of measuring Parkinson’s disease (PD) motor symptoms with a multi-sensor data fusion method. More specifically, the aim is to assess validity, reliability and sensitivity to treatment of the methods.

Background: Data from 19 advanced PD patients (Gender: 14 males and 5 females, mean age: 71.4, mean years with PD: 9.7, mean years with levodopa: 9.5) were collected in a single center, open label, single dose clinical trial in Sweden [1].

Methods: The patients performed leg agility and 2-5 meter straight walking tests while wearing motion sensors on their limbs. They performed the tests at baseline, at the time they received the morning dose, and at pre-specified time points until the medication wore off. While performing the tests the patients were video recorded. The videos were observed by three movement disorder specialists who rated the symptoms using a treatment response scale (TRS), ranging from -3 (very off) to 3 (very dyskinetic). The sensor data consisted of lower limb data during leg agility, upper limb data during walking, and lower limb data during walking. Time series analysis was performed on the raw sensor data from 17 patients to derive a set of quantitative measures, which machine learning methods then mapped to the mean ratings of the three raters on the TRS scale. Different combinations of data were tested during the machine learning procedure.

Results: Using data from both tests, the Support Vector Machines (SVM) could predict the motor states of the patients on the TRS scale with good agreement in relation to the mean ratings of the three raters (correlation coefficient = 0.92, root mean square error = 0.42, p<0.001). Additionally, there was good test-retest reliability of the SVM scores between the baseline and second tests, with an intra-class correlation coefficient of 0.84. Sensitivity to treatment for SVM was good (Figure 1), indicating its ability to detect changes in motor symptoms. The upper limb data during walking were more informative than the lower limb data during walking, since the SVMs had a higher correlation coefficient to the mean ratings.

Conclusions: The methodology demonstrates good validity, reliability, and sensitivity to treatment. This indicates that it could be useful for individualized optimization of treatments among PD patients, leading to an improvement in health-related quality of life.

Parkinson's disease (PD) is a progressive movement disorder caused by the death of dopamine-producing cells in the midbrain. Frequent symptom assessment is needed, since treatment must be individualized as the disease progresses. The aim of this paper was to verify and further investigate the clinimetric properties of an entropy-based method for measuring PD-related upper-limb temporal irregularities during spiral drawing tasks. More specifically, the properties of a temporal irregularity score (TIS) were investigated for patients at different stages of PD and at different medication time points. Nineteen PD patients and 22 healthy controls performed repeated spiral drawing tasks on a smartphone. Patients performed the tests before a single levodopa dose and at specific time intervals after the dose was given. Three movement disorder specialists rated videos of the patients based on the Unified PD Rating Scale (UPDRS) and the dyskinesia scale. Differences in mean TIS between the groups of patients and healthy subjects were assessed, the test-retest reliability of TIS was measured, the ability of TIS to detect changes from baseline (before medication) to later time points was investigated, and correlations between TIS and clinical rating scores were assessed. The mean TIS was significantly different between healthy subjects and patients in advanced groups (p-value = 0.02). Test-retest reliability of TIS was good, with an intra-class correlation coefficient of 0.81. When assessing changes in relation to treatment, TIS captured some of the changes from Off to On and wearing-off effects. However, the correlations between TIS and the clinical scores (UPDRS and dyskinesia) were weak. TIS was able to differentiate spiral drawings drawn by patients in an advanced stage from those drawn by healthy subjects, and it had good test-retest reliability; it was somewhat responsive to single-dose levodopa treatment. Since TIS is an upper-limb, high-frequency-based measure, the temporal irregularity it captures cannot be detected during clinical assessment.

Objectives: The aim of this paper is to investigate whether a smartphone-based system can be used to quantify dexterity in Parkinson’s disease (PD). More specifically, the aim was to develop data-driven methods to quantify and characterize dexterity in PD. Methods: Nineteen advanced PD patients and 22 healthy controls participated in a clinical trial in Uppsala, Sweden. The subjects were asked to perform tapping and spiral drawing tests using a smartphone. Patients performed the tests before, and at pre-specified time points after, they received 150% of their usual levodopa morning dose. Patients were video recorded and their motor symptoms were assessed by three movement disorder specialists using three Unified PD Rating Scale (UPDRS) motor items from part III, the dyskinesia score and the treatment response scale (TRS). The raw tapping and spiral data were processed and analyzed with time series analysis techniques to extract 37 spatiotemporal features. For each of the five scales, separate machine learning models were built and tested using principal components of the features as predictors and the mean ratings of the three specialists as target variables. Results: There were weak to moderate correlations between smartphone-based scores and mean ratings of UPDRS item #23 (0.52; finger tapping), UPDRS #25 (0.47; rapid alternating movements of hands), UPDRS #31 (0.57; body bradykinesia and hypokinesia), the sum of the three UPDRS items (0.46), dyskinesia (0.64), and TRS (0.59). When assessing the test-retest reliability of the scores, it was found that, in general, the clinical scores had better test-retest reliability than the smartphone-based scores. Only the smartphone-based predicted scores on the TRS and dyskinesia scales had good repeatability, with intra-class correlation coefficients of 0.51 and 0.84, respectively. Clinician-based scores had higher effect sizes than smartphone-based scores, indicating better responsiveness in detecting changes in relation to treatment interventions. However, the first principal component of the 37 features was able to capture changes throughout the levodopa cycle and had trends similar to the clinical TRS and dyskinesia scales. Smartphone-based scores differed significantly between patients and healthy controls. Conclusions: Quantifying PD motor symptoms via instrumented dexterity tests employed on a smartphone is feasible, and data from such tests can also be used for measuring treatment-related changes in patients.
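The tapping/spiral modelling step above (37 spatiotemporal features compressed with PCA, with the leading components predicting a mean clinical rating) can be sketched as follows. The feature matrix and target are synthetic: the number 37 is taken from the text, while the latent-factor structure and the number of retained components are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_tests = 200
# Synthetic correlated features: a few latent motor factors (e.g. speed,
# amplitude, tremor) drive all 37 observed features.
latent = rng.standard_normal((n_tests, 3))
loadings = rng.standard_normal((3, 37))
features = latent @ loadings + 0.1 * rng.standard_normal((n_tests, 37))
# Simulated mean clinical rating driven mainly by the first latent factor.
mean_rating = latent[:, 0] + 0.3 * rng.standard_normal(n_tests)

pca = PCA(n_components=5)               # number of retained PCs: assumption
components = pca.fit_transform(features)

model = LinearRegression().fit(components, mean_rating)
print("variance explained by 5 PCs:", round(pca.explained_variance_ratio_.sum(), 2))
print("R^2:", round(model.score(components, mean_rating), 2))
```

Because the features are intercorrelated, a handful of principal components carries almost all of their variance, which is what makes them usable as compact predictors of the ratings.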

Parking a vehicle can often lead to frustration, air pollution and congestion due to the limited availability of parking spaces. With increasing population density this problem will certainly grow unless addressed. Parking lots occupy large areas of scarce land, so it is necessary to understand driving behaviour in parking lots in order to improve it. This paper studies driving behaviour in parking lots; to this end, direct observation was conducted in three parking lots, and GPS data collected prior to this study by the University of Dalarna were used.

To evaluate driving behaviour in the parking lot, direct observation was conducted to obtain overall indices of vehicle movement in the parking lot. The parking route taken by the driver was compared with the optimal path to characterize driving behaviour in terms of distance. The collected data were evaluated, filtered and analysed to identify the route, the distance and the time a vehicle takes to find a parking space.

The outcome of the study shows that driving behaviour in parking lots varies significantly among parking users, with most of the observed vehicles taking an unnecessarily long time to complete their parking. The study shows that 56% of the 430 observed vehicles demonstrated inefficient driving behaviour, taking a longer driving path than the optimal one. The study traces this behaviour to two factors: first, the absence of parking guidance in the parking lots, and second, drivers' selectivity when choosing a parking space.

The study also shows that the ability of GPS data to identify driving behaviour in parking lots varies with the logging time interval and the type of device used. The smaller the time interval, the more accurately the GPS data detect driving behaviour in the parking lots.

The mobility of people, freight and information is fundamental to economic and social activities such as commuting, manufacturing, distributing consumer goods and supplying energy. There are two major problems that arise as a result of mobility. The first is economic cost and the second is environmental impact which is of increasing concern in sustainable development due to emission levels, particularly as a result of car use. This study focuses on constructing a network of induced emissions (NOIEs) by using three models and checking the robustness of NOIEs under varying parameters and models. The three models are Stead’s model, the NAEI model, and Oguchi’s model. This study uses the Swedish city of Borlänge as the case study.

Calculating CO2 emissions by constructing the NOIEs using Stead’s model appears to give an underestimation when compared to results from a NOIEs which applies Oguchi’s model. Results when applying the NAEI model in constructing a NOIEs also give an underestimation compared to a NOIEs applying Oguchi’s model. Applying the NAEI model is, however, more accurate than applying Stead’s model in constructing a NOIEs.

The outcomes of this study show that constructing a NOIEs is robust using Oguchi’s model. This model is preferable since it takes into account more important variables such as driving behavior and the length of the road segments which have a significant impact when estimating CO2 emissions.

Extracting knowledge from user-generated content (UGC) on social media platforms is an active research topic in machine learning. The main challenge is that UGC carries inference, abstraction and subjectivity alongside objectivity. Recognising the importance of subjectivity as an influential aspect for producing human-like results from a machine learning algorithm, this study proposes a novel approach to improve Instagram hashtag recommendation by considering the sentiment that can be expressed for images. Two main points are studied in this thesis: the relatedness of Instagram images to hashtags for both objective and subjective image features, and the effect of sentiment on this relatedness. This work examines three machine learning methods for hashtag recommendation: an AWS service and algorithms developed with and without sentiment considerations. The models are tested on a collected dataset of de-identified Instagram posts located in London, gathered from public profiles. The results show that considering sentiment significantly improves Instagram hashtag recommendation.

It would be in the interest of both patients and clinicians if the diagnosis of Parkinson’s disease (PD), as well as follow-up methods, were perfectly sensitive, accurate, reproducible and capable of objectively classifying PD motor symptoms. This is an arduous task due to the possible subjectivity of clinical evaluations. In the past decade, attention has turned to a multitude of technology-based measures (TBMs) to address this need, among which the method of this research is positioned. The author hopes to contribute a motor assessment method that not only addresses the subjectivity of measurement, but also requires no extensive installation and is easy to use. For this study, data from a clinical trial conducted at Uppsala University Hospital, Sweden in 2015 are used. Seven PD patients and seven healthy controls each performed the same motor gait test 7-13 times, which was video recorded. These recordings were shown to clinicians, who rated the subjects’ gait and possible dyskinesia on the Unified Parkinson's Disease Rating Scale (0-4 rating). The aim of this research was thus to imitate and automate the tasks of clinicians when diagnosing PD and its symptoms through motor ratings, using various gait features. These gait features were obtained by quantifying, through image processing, signals from different body parts while the patient performs a walking test. Diagnosis was twofold: first identifying whether a subject has PD, and second predicting the severity of PD patients’ symptoms. When classifying subjects into healthy controls and PD patients, classification trees and support vector machines were deployed, achieving 76-85% accuracy depending on the features selected. The subsequent focus was to assess the severity of PD among patients, using the clinicians’ UPDRS ratings as the target variable for supervised learning. Here, linear regression was deployed; the average absolute prediction error was 0.25 and the correlation between UPDRS ratings and predicted values was 0.84.

It seems that a paper of mine appearing in Computational Statistics & Data Analysis (Carling, 2000) has prompted the development of outlier detection methods for highly skewed data. However, I wrote the paper in the spirit of Exploratory Data Analysis (Tukey, 1977), and I shared, and still hold, Tukey’s opinion that skewed data are better transformed for approximate symmetry prior to the detection of outliers (or other data analyses).

In this chapter, I consider the cognitive biases arising in judgment under uncertainty that jeopardize good decision-making aligned with normative decision theories. This problem raises objections towards intuitive and fast decision-making. It would be appealing if training could be devised for reducing the biases, and I argue that such training is feasible. I relate best-practice in such training and advocate a number of topics to be included in such training for good, intuitive decision-making skills.

To finance transportation infrastructure and to address the negative social and environmental externalities of road transport, several countries have recently introduced, or are considering, a distance-based tax on trucks. In competitive retail and transportation markets, such a tax can be expected to lower demand and thereby reduce the CO2 emissions of road transport. However, as we show in this paper, such a tax might also slow down the transition towards e-tailing. Considering that previous research indicates that a consumer switching from brick-and-mortar shopping to e-tailing reduces her CO2 emissions substantially, the direction and magnitude of the environmental net effect of the tax is unclear. In this paper, we assess the net effect in a Swedish regional retail market where the tax is not yet in place. We predict the net effect on CO2 emissions to be positive, but offset by about 50% because of a slower transition to e-tailing.

The synthetic control method (SCM) is a new, popular method developed for estimating the effect of an intervention when only a single unit has been exposed. Other similar, unexposed units are combined into a synthetic control unit intended to mimic the evolution of the exposed unit, had it not been subject to exposure. As the inference relies on only a single observational unit, statistical inference is a challenge. In this paper, we examine the statistical properties of the estimator, study a number of features potentially yielding uncertainty in the estimator, discuss the rationale for statistical inference in relation to SCM, and provide a Web-app to aid researchers in deciding whether SCM is powerful for a specific case study. We conclude that SCM is powerful with a limited number of controls in the donor pool and a fairly short pre-intervention time period. This holds as long as the parameter of interest is a parametric specification of the intervention effect, the duration of the post-intervention period is reasonably long, and the fit of the synthetic control unit to the exposed unit in the pre-intervention period is good.
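The core SCM step can be sketched as a constrained least-squares problem: choose non-negative donor weights summing to one so the weighted donors track the exposed unit in the pre-intervention period. The data below are synthetic, and real applications also match on covariates and use specialized software; this is only a minimal illustration of the weight estimation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T_pre, n_donors = 12, 6                 # short pre-period, small donor pool
# Donor outcome paths as random walks (rows: time, columns: donors).
donors = rng.standard_normal((T_pre, n_donors)).cumsum(axis=0)
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])
exposed = donors @ true_w + 0.05 * rng.standard_normal(T_pre)

def loss(w):
    # Pre-intervention fit: squared gap between exposed unit and synthetic control.
    return np.sum((exposed - donors @ w) ** 2)

res = minimize(
    loss,
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,                       # weights in [0, 1]
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},  # sum to one
    method="SLSQP",
)
weights = res.x
print("estimated donor weights:", np.round(weights, 2))
```

The post-intervention effect estimate is then the gap between the exposed unit's observed path and `donors @ weights` extended past the intervention date.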

This study examines the effects of shadow banking on bank efficiency using data on Chinese commercial banks during the period 1998–2012. I focus on two aspects: shadow banking activities inside and outside the commercial banks. Stochastic frontier analysis (SFA) is used to analyze the effects of shadow banking on cost-efficiency. The empirical results indicate that the higher the relative size of shadow banking inside the commercial banks, the higher bank cost-efficiency is, while the higher the relative size of shadow banking outside the commercial banks, the lower cost-efficiency is. This shows that there are gains from shadow banking for the Chinese financial system. It is important for policymakers to realize this but at the same time understand that shadow banking likely implies a tradeoff between flexibility for the banking sector and higher risks.

This paper examines the electricity demand, and its determinants, in 29 European countries during the liberalization of the electricity market. Based on panel data for these countries for the years 1995–2015 and using a dynamic partial adjustment model, price elasticities are estimated for both residential and industrial electricity demand. These elasticities and effects of other variables on electricity consumption are estimated using both GMM (generalized method of moments) and ML (maximum likelihood) approaches. It is found that the price elasticities are very small, especially in the short run, while the income elasticities are relatively large, especially for households and in the long run.

The effects of a new IKEA store on retail revenues, employment and inflow of purchasing power in the entry municipalities as well as in neighbouring municipalities were investigated using data from 2000–11. A propensity score-matching method was used to find non-IKEA entry municipalities that were as similar as possible to the entry municipalities based on the situation before entry. The results indicate that IKEA entry increased entry municipality durable goods revenues by about 20% and employment by about 17%. Only small and, in most cases, statistically insignificant effects were found in neighbouring municipalities.
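The matching idea used in the study can be sketched as follows: a propensity score (the probability of "entry" given pre-entry characteristics) is estimated with logistic regression, and each treated unit is paired with the untreated unit closest in score. Data, covariates and the treatment assignment rule below are synthetic assumptions, not the study's municipality data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 300
covariates = rng.standard_normal((n, 3))     # e.g. population, revenue, wages
# Synthetic treatment assignment correlated with the first covariate.
treated = (covariates[:, 0] + rng.standard_normal(n) > 1.0).astype(int)

# Propensity score: estimated probability of treatment given covariates.
ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# Nearest-neighbour matching on the propensity score.
matches = {}
control = np.flatnonzero(treated == 0)
for t in np.flatnonzero(treated == 1):
    matches[t] = control[np.argmin(np.abs(ps[control] - ps[t]))]

gaps = [abs(ps[t] - ps[c]) for t, c in matches.items()]
print("treated units matched:", len(matches))
print("mean propensity-score gap:", round(float(np.mean(gaps)), 3))
```

After matching, outcome differences (here, revenue or employment growth) between each treated unit and its match estimate the treatment effect on the treated.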

Forgetting is an oft-forgotten art. Many artificial intelligence (AI) systems deliver good performance when first implemented; however, as the contextual environment changes, they become out of date and their performance degrades. Learning new knowledge is part of the solution, but forgetting outdated facts and information is a vital part of the process of renewal. However, forgetting proves to be a surprisingly difficult concept to either understand or implement. Much of AI is based on analogies with natural systems, and although all of us have plenty of experiences with having forgotten something, as yet we have only an incomplete picture of how this process occurs in the brain. A recent judgment by the European Court concerns the "right to be forgotten" by web index services such as Google. This has made debate and research into the concept of forgetting very urgent. Given the rapid growth in requests for pages to be forgotten, it is clear that the process will have to be automated and that intelligent systems of forgetting are required in order to meet this challenge.

The performance of supervised machine learning algorithms is highly dependent on the distribution of the target variable. Infrequent values are more difficult to predict, as there are fewer examples from which the algorithm can learn patterns that contain those values. These infrequent values are a common problem with real data, being the object of interest in many fields such as medical research, finance and economics, to mention just a few. Problems regarding classification have been comprehensively studied. For regression, on the other hand, few contributions are available. In this work, two ensemble methods from classification are adapted to the regression case. Additionally, existing oversampling techniques, namely SmoteR, are tested. The aim of this research is therefore to examine the influence of oversampling and ensemble techniques on the accuracy of regression models when predicting infrequent values. To assess the performance of the proposed techniques, two data sets are used: one concerning house prices, the other regarding patients with Parkinson's disease. The findings corroborate the usefulness of the techniques for reducing the prediction error of infrequent observations. In the best case, the proposed Random Distribution Sample Ensemble reduced the overall RMSE by 8.09% and the RMSE for infrequent values by 6.44% compared with the best-performing benchmark for the housing data set.
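The oversampling idea can be illustrated with a simplified SmoteR-style sketch: cases with rare (here, unusually high) target values are interpolated with their nearest rare neighbours to create synthetic examples, and the target is interpolated the same way. Selecting rare cases by a quantile threshold is a simplification of SmoteR's relevance function; all parameters below are illustrative assumptions.

```python
import numpy as np

def smoter_sketch(X, y, rare_quantile=0.9, n_new=50, k=3, seed=0):
    """Append n_new synthetic examples interpolated between rare cases."""
    rng = np.random.default_rng(seed)
    thr = np.quantile(y, rare_quantile)
    rare = np.flatnonzero(y >= thr)          # infrequent, high-value cases
    X_new, y_new = [], []
    for _ in range(n_new):
        i = rng.choice(rare)
        # k nearest rare neighbours of case i (position 0 is i itself).
        d = np.linalg.norm(X[rare] - X[i], axis=1)
        nn = rare[np.argsort(d)[1:k + 1]]
        j = rng.choice(nn)
        g = rng.random()                     # interpolation gap in [0, 1)
        X_new.append(X[i] + g * (X[j] - X[i]))
        y_new.append(y[i] + g * (y[j] - y[i]))
    return np.vstack([X, X_new]), np.concatenate([y, y_new])

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
y = rng.exponential(size=200)                # right-skewed target
X_aug, y_aug = smoter_sketch(X, y)
print(X_aug.shape, y_aug.shape)              # (250, 4) (250,)
```

A regression model trained on the augmented set sees proportionally more of the rare region of the target, which is what reduces its error on infrequent values.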

This article explores whether GTS (gaze time on screen) can be useful as an engagement measure in the screen-mediated learning context. Research that exemplifies ways of measuring engagement in the online education context usually does not address engagement metrics and engagement evaluation methods that are unique to the diverse contemporary instructional media landscape. Nevertheless, unambiguous construct definitions of engagement and standardized engagement evaluation methods are needed to leverage instructional media's efficacy. By analyzing the results of a mixed-methods eye-tracking study of fifty-seven participants, evaluating their visual and assembly performance in relation to three visual procedural instructions that are versions of the same procedural instruction, we found that the mean GTS values in each group were rather similar. However, the original GTS values output by the eye-tracking computer were not entirely correct and needed to be manually checked and cross-validated. Thus, GTS appears not to be a reliable, universally applicable automatic engagement measure in screen-based instructional efforts. Still, we could establish that the overall performance of learners was somewhat negatively impacted by lower-than-mean GTS scores when checking the performance levels of the entire group (N = 57). When checking the stimuli groups individually (N = 17, 20, 20), the structural diagram group's assembly time durations were positively influenced by higher-than-mean GTS scores.

Purpose: To explore consumers’ attitudes towards e-commerce, in particular online grocery shopping, and its delivery in non-dense areas for the purpose of designing smart last-mile solutions.

Approach: The state-of-the-art of smart e-commerce delivery in dense areas was identified by a review of the literature. It was expected that this knowledge could be transferred to non-dense areas. This prediction was examined and explored further by making use of four focus groups recruited in a Swedish mid-sized town.

Findings: Respondents were generally positive towards e-commerce, although mixed attitudes were found with regard to online grocery shopping. Further, the willingness to pay for flexible, smart and sustainable delivery was low, with a notable exception for local produce.

Originality: The knowledge acquired and the solutions developed in dense areas are not readily transferable to non-dense areas. There is scope for developing new business models for the supply chain of local produce. For prototype testing and roll-out of smart e-commerce delivery platforms, the online local produce market is recommended.

This study explores mathematics teachers’ conceptions of how the physical environment in classrooms affects their students’ chances for learning. Semi-structured interviews were performed with a few Swedish teachers with experience of tackling different physical settings when teaching mathematics. Preliminary findings from the analysis of the interview transcripts are that: teachers appreciate flexibility and control over the physical settings in the classroom; inadequate acoustics are particularly problematic in mathematical activities involving verbal interaction between students in small groups; and task solving in peace and quiet, a common part of mathematics lessons, is easily disturbed by external noise.

The problem addressed in this thesis is that a considerable proportion of students around the world attend school in inadequate facilities, which is detrimental to the students’ learning outcomes. The overall objective of this thesis is to develop a methodology, with a novel approach to involving teachers, that generates a valuable basis for decisions regarding the design and improvement of the physical school environment, based on the expressed needs of a specific school, municipality, or district as well as evidence from existing research. Three studies have been conducted to fulfil the objective: (1) a systematic literature review and the development of a theoretical model for analysing the role of the physical environment in schools; (2) semi-structured interviews with teachers to elicit their conceptions of the physical school environment; (3) a stated preference study with an experimental design, administered as an online survey. Wordings from the interview transcripts were used when designing the survey form. The aim of the stated preference study was to examine the usability of the method when applied in this new context of the physical school environment. The result is a methodology with a mixed-method chain: the first step involves a broad investigation of the circumstances and conceptions of the specific school, municipality, or district. The second step is to use the developed theoretical model and the results from the literature study to analyse the results from the first step and transform them into a format that fits the design of a stated preference study. The final step is a refined version of the procedure of the performed stated preference study.

In this paper, we develop an analytical tool for the role of the physical environment in mathematics education. We do this by extending the didactical triangle with the physical environment as a fourth actor and test it in a review of literature concerning the physical environment and mathematics education. We find that one role played by the physical environment, in relation to mathematical content, is to portray the content in focus, such as geometry and scale. When focusing on teachers, students, and the interaction between them, the role of the physical environment appears to be a precondition, either positive (enabling) or negative (hindering). Many of the findings are valid for education in general as well, such as the importance of building status.

Many individuals and businesses make decisions based on freely and easily accessible online reviews. This provides incentives for the dissemination of fake reviews, which aim to deceive the reader into holding undeserved positive or negative opinions about an establishment or service. With that in mind, this work proposes machine learning applications to detect fake online reviews from the hotel, restaurant and doctor domains. To filter these deceptive reviews, Neural Networks and Support Vector Machines are used. Both algorithms' parameters are optimized during training, and the parameters that yield the highest accuracy for each combination of data and feature set are selected for testing. As input features for both machine learning applications, unigrams, bigrams and the combination of both are used. The advantage of the proposed approach is that the models are simple yet yield results comparable with those found in the literature using more complex models. The highest accuracy was achieved with a Support Vector Machine using the Laplacian kernel, which obtained an accuracy of 82.92% for hotel, 80.83% for restaurant and 73.33% for doctor reviews.
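As a hedged illustration of the feature and kernel setup described above, and not the paper's actual pipeline or data, the following sketch builds unigram-plus-bigram count features with scikit-learn and trains an SVM on a precomputed Laplacian kernel. The toy reviews and labels are invented stand-ins.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import laplacian_kernel
from sklearn.svm import SVC

# Toy stand-ins for labelled reviews (1 = deceptive, 0 = truthful).
train_texts = ["amazing stay best hotel ever", "room was fine staff helpful",
               "absolutely perfect flawless wonderful", "bed comfortable area quiet"]
train_labels = [1, 0, 1, 0]
test_texts = ["best most amazing perfect hotel", "staff was helpful room quiet"]

# Unigram + bigram counts, mirroring the feature sets described above.
vec = CountVectorizer(ngram_range=(1, 2))
X_train = vec.fit_transform(train_texts).toarray()
X_test = vec.transform(test_texts).toarray()

# Precompute the Laplacian kernel matrix and hand it to the SVM.
clf = SVC(kernel="precomputed", C=1.0)
clf.fit(laplacian_kernel(X_train, X_train), train_labels)
pred = clf.predict(laplacian_kernel(X_test, X_train))
```

Because scikit-learn's `SVC` has no built-in Laplacian kernel, the precomputed-kernel route is the natural way to reproduce this setup; in practice the `C` parameter and kernel width would be tuned on held-out data, as the abstract describes.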

The overall aim of this study is to understand how individuals’ average daily accessibility to other individuals has changed in Sweden, and what impact changes in intra-travel time have had on changes in individuals’ daily accessibility in Dalarna County.

This thesis applies a quantitative research method based on secondary data. The required data were collected from Statistics Sweden (SCB), the Swedish Road Administration (NVDB) and the Swedish National Travel Survey (RVU). The target population for this study is the Swedish workforce population, aged 20-64. For the first part of the aim, the entire research population was investigated; for the second part, a non-probability (purposive) sampling method was applied. The datasets were used to compute the different variables, using formulas extracted from previous empirical studies together with GIS and R software. The relationship between the response and predictor variables was analysed statistically by multiple linear regression.

The findings indicate that average daily accessibility increased within Sweden between 1990 and 2008, with the largest increase occurring between 1995 and 2000. The statistical analysis also showed that the relationship between changes in average intra-travel time and changes in average daily accessibility was not significant in the municipalities of Dalarna County. Among the predictor variables, however, changes in average daily mobility had a significant relationship with changes in average daily accessibility to other individuals within the municipalities of Dalarna County.

Data issues due to nonresponse or missing data often arise in company surveys and firm data, and both cause bias. Another problem that causes bias is omitted variables. Together, these problems can lead to wrong conclusions, and the idea behind this licentiate thesis is to address them. The aim is to develop insight into how such common problems can be solved by transforming the data and changing the statistical method. There is no claim that the methods suggested in the papers are always optimal; rather, the goal is to raise awareness of problems that occur in quantitative business research.

The solar energy share in Sweden is expected to grow significantly in the next few decades. Such a transition offers not only great opportunities but also uncertainties for emerging solar photovoltaic/thermal (PV/T) technologies. This paper therefore conducts a techno-economic evaluation of a reference solar PV/T concentrator for building applications in Sweden. An analytical model is developed that combines Monte Carlo simulation techniques with multiple energy-balance and financial equations, taking into account the integrated uncertainties and risks of the input variables. The model considers 11 essential inputs, i.e. average daily solar irradiance, electrical/thermal efficiency, prices of electricity/heating, operation & management (OM) cost, PV/T capital cost, debt-to-equity ratio, interest rate, discount rate, and inflation rate, while the economic evaluation metrics assessed are the levelized cost of energy (LCOE), net present value (NPV), and payback period (PP). According to the analytical results, the mean values of LCOE, NPV and PP of the reference PV/T concentrator are 1.27 SEK/kWh (0.127 €/kWh), 18,812.55 SEK (1,881.255 €) and 10 years over its 25-year lifespan, given a project size of 10.37 m2 and a capital cost of 4482–5378 SEK/m2 (448.2–537.8 €/m2). The positive NPV indicates that investment in the selected PV/T concentrator will be profitable, as the projected earnings exceed the anticipated costs according to the NPV decision rule. The sensitivity analysis and the parametric study illustrate that the economic performance of the reference PV/T concentrator in Sweden is mostly proportional to solar irradiance, debt-to-equity ratio and heating price, but inversely related to capital cost and discount rate.
Together with an additional market analysis of PV/T technologies in Sweden, this paper is expected to clarify the economic situation of PV/T technologies in Sweden and to provide a useful model for further investment decisions towards a sustainable and low-carbon economy, with an expanded quantitative discussion of the real economic and policy scenarios that may lead to those outcomes.
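A minimal sketch of the Monte Carlo logic is shown below: sample uncertain inputs, propagate them through discounted cash-flow equations, and read off NPV and LCOE distributions. The project size and capital-cost range come from the text above, but the remaining distributions and parameter values are hypothetical placeholders, and the model is reduced to a handful of the 11 variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, lifetime, area = 10_000, 25, 10.37        # 25-year lifespan, 10.37 m2
capex = rng.uniform(4482, 5378, n_sims) * area    # SEK, capital cost per m2 x area

# Hypothetical input distributions -- stand-ins for the paper's 11 variables.
irradiance = rng.normal(1000, 100, n_sims)        # kWh/m2/year
efficiency = rng.normal(0.45, 0.05, n_sims)       # combined electrical + thermal
price = rng.normal(1.5, 0.2, n_sims)              # SEK/kWh for delivered energy
opex = 0.01 * capex                               # yearly O&M, assumed 1% of capex
discount = 0.05

energy = irradiance * area * efficiency           # kWh/year, one value per draw
disc = (1 + discount) ** -np.arange(1, lifetime + 1)   # yearly discount factors

# NPV: upfront cost plus discounted net yearly cash flows, per simulation.
npv = -capex + np.outer(energy * price - opex, disc).sum(axis=1)
# LCOE: discounted lifetime cost divided by discounted lifetime energy.
lcoe = (capex + np.outer(opex, disc).sum(axis=1)) / np.outer(energy, disc).sum(axis=1)
```

The resulting `npv` and `lcoe` arrays are distributions, so mean values, payback percentiles and sensitivity analyses can be read off directly, which is the essential advantage of the Monte Carlo approach over a single deterministic calculation.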

Generalized linear models and their extensions are widely used for analysing non-normal data, but a Poisson mixed model may exhibit inadequate fit and inference when faced with excessive zero counts. A mixed hurdle model is a preferable way to handle this problem. Nevertheless, it remains a challenge to use the mixed hurdle model for correlated data: few computational procedures are available for hurdle models, particularly for models in which the random effects are correlated between the non-zero and zero response parts. In this paper we present a method to fit a hurdle model with conditionally autoregressive random effects for spatial data. Based on the extended algorithm, some modifications are made to an existing procedure in R to fit the data. We conduct a Monte Carlo simulation to study the finite-sample properties of our model. The results show that the new procedure fits the model well, and the estimation improves as the number of measurements per subject increases. Finally, we apply the new procedure to a real problem: a dataset on the spatial distribution of reindeer in relation to wind power establishments at Storliden Mountain in northern Sweden. The new procedure gives a better fit to this problem than the usual Poisson mixed model.
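To make the two-part structure concrete, here is a minimal hurdle-model sketch in Python: a Bernoulli part for zero versus non-zero and a zero-truncated Poisson likelihood for the positive counts, fitted on simulated data. It deliberately omits the random effects and the conditionally autoregressive spatial structure that are the contribution of this paper, and the simulation parameters are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Simulate hurdle data: zero with probability 0.4, else zero-truncated Poisson(3).
n = 2000
nonzero = rng.random(n) > 0.4
y = np.zeros(n, dtype=int)
k = int(nonzero.sum())
draws = rng.poisson(3.0, size=(k, 20))            # oversample, keep first positive
y[nonzero] = [row[row > 0][0] for row in draws]

# Part 1: hurdle probability (intercept-only, so the MLE is the proportion).
p_hat = (y > 0).mean()

# Part 2: zero-truncated Poisson MLE for the positive counts.
pos = y[y > 0]
def neg_loglik(lam):
    # log P(Y = y | Y > 0) = y log(lam) - log(y!) - lam - log(1 - e^{-lam})
    return -(pos * np.log(lam) - gammaln(pos + 1) - lam
             - np.log(1 - np.exp(-lam))).sum()
lam_hat = minimize_scalar(neg_loglik, bounds=(0.1, 20), method="bounded").x
```

Because the two likelihood parts are separable when their parameters are unrelated, they can be maximized independently as here; the difficulty the paper addresses arises precisely when random effects correlate the two parts, so that joint estimation is required.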

This thesis contributes to the heuristic optimization of the p-median problem and Swedish population redistribution.

The p-median model is the most representative model in location analysis. When facilities are located to serve a population geographically distributed over Q demand points, the p-median model systematically considers all the demand points, such that each demand point has an effect on the location decision. However, a series of questions arise. How do we measure the distances? Does the number of facilities to be located have a strong impact on the result? What scale of the network is suitable? How good is our solution? We scrutinize many issues of this kind. The reason we are interested in these questions is that there is considerable uncertainty in the solutions; we cannot guarantee that a solution is good enough for making decisions. The technique of heuristic optimization is formulated in the thesis.
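One way to make the heuristic-optimization theme concrete is the classic greedy-plus-interchange chain for the p-median problem. The sketch below uses random planar points and Euclidean distances as a stand-in for a road network; it is an illustrative baseline under those assumptions, not the thesis's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
Q, p = 60, 4                          # Q demand points, p facilities to locate
pts = rng.random((Q, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)   # Euclidean stand-in

def cost(facilities):
    # Each demand point is assigned to its nearest open facility.
    return D[:, list(facilities)].min(axis=1).sum()

# Greedy construction: repeatedly add the site that most reduces total distance.
chosen = []
for _ in range(p):
    best = min((j for j in range(Q) if j not in chosen),
               key=lambda j: cost(chosen + [j]))
    chosen.append(best)

# Vertex substitution: swap one open site for a closed one while it improves.
improved = True
while improved:
    improved = False
    for i in range(p):
        for j in range(Q):
            if j in chosen:
                continue
            trial = chosen[:i] + [j] + chosen[i + 1:]
            if cost(trial) < cost(chosen):
                chosen, improved = trial, True
```

The interchange step only guarantees a local optimum, which is exactly why the questions raised above, how good is our solution, and how sensitive is it to the distance measure and network scale, matter for decision-making.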

Swedish population redistribution is examined by a spatio-temporal covariance model. A descriptive analysis is not always enough to describe the moving effects of the neighbouring population; a correlation or covariance analysis shows the tendencies more explicitly. Similarly, an optimization technique for the parameter estimation is required and is executed within the framework of statistical modelling.

The aim of this paper is to analyse labour turnover in retail firms with stores in different city locations. This case study of a Swedish mid-sized city uses comprehensive longitudinal register data on individuals. In a first step, an unconditional descriptive analysis shows that labour turnover in retail is higher in out-of-town locations compared to more central locations in the city. In a second step, a generalized linear model (GLM) analysis is conducted in which labour turnover in downtown and out-of-town locations is compared. Firm-internal and industry factors, as well as employee characteristics and location-specific factors, are controlled for. The results indicate that commuting costs and intra-urban location have no statistically significant effect on labour turnover in retail firms. Instead, firm-internal factors, such as human resource management, have a major influence on labour turnover rates. The findings indicate that firms with multiple locations in particular may need to pay extra attention to work conditions across stores in different parts of a city, in order to avoid diverging levels of labour mobility. This paper complements previous survey-based studies on labour turnover by using a comprehensive micro-level dataset to analyse revealed rather than stated preferences concerning job-to-job mobility. An elaborated measure of labour turnover is used to analyse differences between shopping areas in different locations within the city. The particular research design makes it possible to isolate the effect of intra-organizational conditions by analysing mobility within firms with workplaces in both downtown and out-of-town locations. This is the first comprehensive study of labour turnover and mobility with an intra-urban perspective in the retail sector.

Energy use leads directly to the consumption of large amounts of non-renewable fossil resources, and exploiting fossil energy resources influences both climate and health via ineluctable emissions. Raising awareness, choosing alternative energy sources and developing energy-efficient equipment all contribute to reducing the demand for fossil energy, but their implementation usually takes a long time. Since buildings account for around one-third of global energy consumption, and systems in buildings, e.g. HVAC, can be controlled by individual building management, advanced and reliable control techniques for buildings are expected to make a substantial contribution to reducing global energy consumption. Among those control techniques, the model-free, data-driven reinforcement learning method appears distinctive and applicable. The success of reinforcement learning in many artificial intelligence applications is a clear indication that the method can be applied to building energy control, and a rich family of algorithms complement each other to guarantee the quality of the optimisation. As the central brain of a smart building automation system, the control technique directly affects the performance of the building. However, a systematic examination of previous work based on reinforcement learning methodologies is not available and, moreover, how the algorithms can be developed further is still vague. Therefore, this paper briefly analyses the empirical applications from a methodological point of view and proposes future research directions.

This paper presents the first results from a newly developed BI (business intelligence) model for the Swedish horse industry. Compared to previous studies of the impact of the horse industry, we are able to present figures both at the national level and decomposed to regional levels.

The size of the horse industry in Sweden in 2016 is measured using the expenditure approach, i.e. summing the final use of horse-related goods and services. One implication of this approach is that the results are comparable with a country's overall GDP figures and with other subsectors of the economy, e.g. the tourism industry or the car-producing industry. The model has two main inputs: first, estimates of the geographical position of all of Sweden's 355,500 horses of different types and uses, based on JBV's statistics and postal codes from horse associations; second, estimates of horse owners' consumption patterns related to their leisure or professional use. Other horse-related activities, such as riding schools, education, race tracks and betting, are treated separately, measured and added to the overall calculation.

The preliminary results indicate that the horse industry in Sweden amounts to somewhere in the interval of 26-32 billion SEK, corresponding to approximately 0.5-0.6 percent of Swedish GDP. Looking at regional variations, the region of Skåne has the most horses and is consistently also the region with the largest share of the horse industry.

The development of large discount retailers, or big boxes as they are sometimes referred to, is often subject to heated debate, and their entry into a market is greeted with either great enthusiasm or dread. For instance, the world's largest retailer Wal-Mart (Forbes 2014) has a number of anti- and pro-groups dedicated to its being, and the event of a Wal-Mart entry tends to be met with protests and campaigns (Decamme 2013) but also welcomed by, for instance, consumers (Davis & DeBonis 2013). In Sweden too, the entry of a big box is a hot topic: before IKEA's opening in Borlänge in 2013, the first in Sweden in more than five years, great expectations were mixed with worry (Västerbottens-Kuriren 2011).

The presence of large-scale discount retailers is not, however, a novel phenomenon but part of a long-term change in retailing that has taken place globally over the past couple of decades (Taylor & Smalling, 2005). As noted by Dawson (2006), the trend in Europe has over the past few decades gone towards an increasing concentration of large firms along with a decrease of smaller firms. This trend is also detectable in the Swedish retail industry. Over the past decade, the retailing industry in Sweden has grown by around 190 billion SEK, its share of GDP has risen from 2.7% to 2.9%, and the number of employees has increased from 200,000 to 250,000 (HUI 2013). This growth, however, has not been distributed evenly but has been oriented mainly towards out-of-town retail clusters. Parallel to this development, the number of large retailers has risen at the expense of the market shares of smaller independent firms (Rämme et al 2010). The presence of large-scale retailers is thus simply part of a changing retail landscape. The effects of this development, where large-scale retailing agents relocate shopping to out-of-town shopping areas, have been heavily debated.
On the one hand, the big boxes are accused of displacing independent small retail businesses in the city centers and the residential areas, resulting, to some extent, in reduced employment opportunities and less availability for consumers, especially the elderly (Ljungberg et al 2006). In addition, as access to shopping now tends to require some sort of motorized vehicle, environmental aspects have entered the discussion. Ultimately, these concerns have resulted in calls for regulations against this development (Olsson 2010). On the other hand, the proponents of the new shopping landscape argue that this evolution implies productivity gains, the benefits of lower prices and an increased variety of products (Maican & Orth 2012). Moreover, it is argued that it leads to, for instance, better services (such as longer opening hours) and a creative-destruction transformation pressure on retailers, which brings about a renewal of city-center retail and services, increasing their attractiveness (Bergström 2010). The belief in the benefits of a big-box entry can be exemplified by the attractiveness of IKEA and the fact that municipalities are prepared to commit to expenses amounting to hundreds of millions in order to attract this big box. Borlänge municipality, for instance, agreed to expenses of about 350 million SEK in order to secure the entry of IKEA, which opened in 2013 (Blomgren 2009).

Against this backdrop, the overall effects of large discount retailers become important: are the economic benefits enough to warrant subsidies, or are there, on the contrary, compelling grounds for regulations against these types of establishments? In other words, how is overall retail in a region affected when a store like IKEA enters? And how are local retail firms affected? In order to answer these questions, the purpose of this thesis is to study how the entry of a big-box retailer affects the entry region.
The object of this study is IKEA, one of the world's largest retailers, with 345 stores, active in over 40 countries and with profits of about 3.3 billion (IKEA 2013; IKEA 2014). By studying the effects of IKEA entry, both at an aggregated level and at the firm level, this thesis intends to find indications of how large discount retail establishments in general can be expected to affect economic development, both in the region overall and at the local firm level, something which is of interest to policymakers as well as the retailing industry in general.

The first paper examines the effects of IKEA on retail revenues and employment in the municipalities that IKEA chose to enter between 2000 and 2011: Gothenburg, Haparanda, Kalmar and Karlstad. By means of a matching method, we first identify non-entry municipalities that have a similar probability of IKEA entry as the true entry municipalities. Then, using these non-entry municipalities as a control group, the causal effects of IKEA entry can be estimated using a treatment-control approach. We also extend the analysis to examine the spatial impact of IKEA by estimating the effects on retail in neighboring municipalities. It is found that a new IKEA store increases revenues in durable-goods trade by 20% in the entry municipality and the number of employees by 17%. Only small, and in most cases statistically insignificant, negative effects were found in neighboring municipalities. There thus appears to be a positive net effect on durables retail sales and employment in the entry municipality. However, the analysis is based on data at an aggregated municipality level, and it thereby remains unclear if and how the effects vary within the entry municipalities. In addition, the data used in the first study include the sales and employment of IKEA itself, which could account for the majority of the increases in employment and retail.
The potential spillover effects on incumbent retailers in the entry municipalities can therefore not be discerned in the first study. To examine the effects of IKEA entry on incumbent retail firms, the second paper in this thesis analyses how IKEA entry affects the revenues and employment of local retail firms in three municipalities, Haparanda, Kalmar and Karlstad, which experienced entry by IKEA between 2000 and 2010. In this second study, we exclude Gothenburg due to the fact that big-box entry appears to have weaker effects in metropolitan areas (as indicated by Artz & Stone 2006); by excluding Gothenburg we aim to reduce the geographical heterogeneity in our study. We obtain control municipalities that are as similar as possible to the three entry municipalities using the same method as in the previous study, but including a slightly different set of variables in the selection equation. Using similar retail firms in the control municipalities as our comparison group, we estimate the impact of IKEA entry on revenues and employment for retail firms located at varying distances from the IKEA entry site.

The results imply that entry by IKEA increases revenues of incumbent retail firms by, on average, 11% in the entry municipalities. We do not find any significant impact on retail revenues in the city centers of the entry municipalities. However, retail firms within 1 km of the IKEA experience increases in revenues of about 26%, which indicates large spillover effects in the area near the entry site. As expected, this impact decreases as we expand the buffer zone: firms located 0-2 km away experience a 14% increase, and firms 2-5 km away an increase of 10%. We do not find any significant impacts on retail employment.

This thesis provides an understanding of how new audit regulation affects firm growth and how audits affect the cost of capital. To investigate the effect of audit reforms on employment growth, we exploit a Swedish reform from November 2010 that gave certain firms the option to opt out of previously imposed statutory audits. We find that firms which fulfilled the requirements for voluntary auditing, compared to a control group of similar firms that did not, increased their employment growth rate by 0.39%. The reform was also exploited to investigate whether audited financial statements add value for firms in the private debt market. We find that firms with audited financial statements, on average, save 1.26 percentage points on the cost of debt compared to firms with unaudited financial statements. Thus, the reform creates a possibility for firms with an ambition to grow in employment to do so by not auditing, and for those who want to grow through investments in capital to do so by auditing and thereby reducing the cost of such investments. However, the current ceiling of the reform is also likely to force some firms to operate at sub-optimal levels: those without the option to opt out of auditing, even though they might not accrue any benefit from it, at least in the short run. One can argue that this is partly due to how institutions evolve, generally more slowly than other actors in society.

Many European countries have abolished mandatory audits for small firms, but we still lack knowledge on whether this affects small firm growth. A Swedish reform in 2010 made audits voluntary for small firms fulfilling certain requirements, while firms that did not meet these requirements still had mandatory audits. We argue that this regulatory change created an almost perfect natural experiment, which can be exploited to evaluate the effects of the reform on employment growth using a difference-in-difference estimator. Our results show that firms that fulfil the requirements for voluntary auditing, as compared to a control group of firms that do not, increased their employment growth rates by on average 0.39%, corresponding to 5,500 jobs created in the three years following the reform. It thus seems that voluntary audits reduce the regulatory burden for small firms, making resources available that can be used to increase the number of employees. The current threshold levels for mandatory audits are still significantly lower in Sweden than in most other European countries, which implies that Swedish policymakers could create more jobs in small and medium-sized firms by raising the size thresholds for mandatory audits.

Many European countries have abolished mandatory audits for small firms to reduce the regulatory and administrative burden on these firms. However, we still lack knowledge on whether such legislative changes affect employment growth for the firms that become free to choose whether to have external audits. We investigate this question using a Swedish reform that made audits voluntary for small firms fulfilling certain requirements. The reform created an almost ideal natural experiment, which we use to evaluate the effects of voluntary audits on employment growth for small firms using a difference-in-difference estimator. We find that firms that fulfilled the requirements for voluntary auditing, compared to a control group of similar firms that did not, increased their employment growth rate by 0.39%. This corresponds to 1,830 jobs created in the year following the reform, suggesting that mandatory audits act as a growth barrier for small firms.
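The difference-in-difference logic used in these studies can be sketched on simulated data: compare the before/after change in growth for treated firms with the same change for control firms. The data-generating numbers below are invented for illustration (with a built-in treatment effect of 0.4), not the estimates reported above.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
treated = rng.random(n) < 0.5          # firms eligible for voluntary audits
post = rng.random(n) < 0.5             # observations after the reform

# Simulated employment growth: common post-period trend, a treated-group
# level difference, and a 0.4 treatment effect only for treated firms post-reform.
growth = (1.0 + 0.5 * post + 0.3 * treated
          + 0.4 * (treated & post) + rng.normal(0, 1, n))

def did(y, d, t):
    """(post - pre) change for treated minus the same change for controls."""
    return ((y[d & t].mean() - y[d & ~t].mean())
            - (y[~d & t].mean() - y[~d & ~t].mean()))

effect = did(growth, treated, post)
```

Because the common post-period trend and the fixed treated-group difference both cancel in the double difference, `effect` recovers (up to sampling noise) only the treatment effect, which is the identification argument behind the natural-experiment design described above.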