Overweight people claim to be mistreated by the fashion industry. If they were, it would be in line with branding theory, which supports the idea of rejecting fat consumers to improve user imagery for fashion brands. However, fashion companies do not admit to such practices. To shed some light on the subject, I have conducted two studies. The first attempts to illustrate what effect, if any, user imagery has on fashion brands. It is an experiment designed to show how the weight of users affects consumers' perceptions of mass market fashion brands. The findings show that consumers' impressions of mass market fashion brands are significantly affected by the weight of their users. The effect of male user imagery is ambiguous; for women's fashion, on the other hand, slender users are to be preferred. In the second study I examine what consequences these effects have for assortments. I compare the sizes of mass market clothes to the body sizes of the population. No evidence of discrimination against overweight or obese consumers was found: quite the contrary. These unexpected findings may be explained by the requirements a brand must fulfil for management of the customer base for user imagery purposes to be viable. The brand must be sensitive to user imagery, a requirement that mass market fashion fulfils. However, it must also be feasible for a company to exclude customers, and while garment sizes can be restricted to achieve this, the high-volume sales strategy of mass market fashion apparently cannot accommodate such exclusion.

Future vehicles are expected to be equipped with wireless communication technology that enables them to be "connected" to each other and to road infrastructure. Complementing current autonomous vehicles and automated driving systems, wireless communication allows vehicles to interact, cooperate, and be aware of their surroundings beyond their own sensors' range. Such systems are often referred to as Cooperative Intelligent Transport Systems (C-ITS), which aim to provide extra safety, efficiency, and sustainability to transportation systems. Several C-ITS applications are under development and will require thorough testing and evaluation before their deployment in the real world. C-ITS depend on several sub-systems, which increases their complexity and makes them difficult to evaluate.

Simulations are often used to evaluate many different automotive applications, including C-ITS. Although simulations have been used extensively, simulation tools dedicated to evaluating all aspects of C-ITS are rare; human factors aspects in particular are often ignored. The majority of simulation tools for C-ITS rely heavily on different combinations of network and traffic simulators. Human factors issues have been covered in only a few C-ITS simulation tools, namely those that involve a driving simulator. Therefore, in this thesis, a C-ITS simulation framework that combines driving, network, and traffic simulators is presented. The simulation framework is able to evaluate C-ITS applications from three perspectives: a) the human driver; b) wireless communication; and c) traffic systems.

Cooperative Adaptive Cruise Control (CACC) and its applications are chosen as the first set of C-ITS functions to be evaluated. Example scenarios from CACC and platoon merging applications are presented and used as test cases for the simulation framework, as well as to elaborate its potential usages. Moreover, approaches, results, and challenges from composing the simulation framework are presented and discussed. The results show the usefulness of the proposed simulation framework.
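The lock-step coupling of simulators that such a framework requires can be sketched as follows. This is a minimal illustration, not the thesis's actual framework: all class and function names are assumptions, and a real system would exchange vehicle states, packets, and traffic flows instead of stub messages.

```python
# Illustrative sketch of a co-simulation loop coupling three simulators
# (driving, network, traffic) on a shared clock. Names are assumptions.

class StubSimulator:
    """Stand-in for a real simulator that advances in fixed time steps."""
    def __init__(self, name):
        self.name = name
        self.time = 0.0
        self.outbox = []          # messages produced this step

    def step(self, dt, inbox):
        # A real simulator would update vehicle states, deliver packets,
        # or move traffic here, reacting to `inbox`; the stub only
        # advances its clock and emits a status message.
        self.time += dt
        self.outbox = [(self.name, self.time)]

def co_simulate(simulators, dt, n_steps):
    """Advance all simulators in lock-step, exchanging messages between
    steps so each simulator sees the others' latest outputs."""
    mailbox = []
    for _ in range(n_steps):
        for sim in simulators:
            sim.step(dt, mailbox)
        # Gather every simulator's output for the next step.
        mailbox = [m for sim in simulators for m in sim.outbox]
    return simulators[0].time

sims = [StubSimulator(n) for n in ("driving", "network", "traffic")]
final_t = co_simulate(sims, dt=0.1, n_steps=10)   # simulate 1 s
```

In practice the three simulators run at different native time scales, so a real framework also needs a synchronization policy (e.g. the network simulator sub-stepping within each driving-simulator frame); the lock-step loop above is the simplest such policy.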

In the newspaper printing industry, offset is the dominating printing method, and the use of multicolour printing in newspapers has increased rapidly during the last decade. The offset printing process relies on the assumption that a uniform film of ink of the right thickness is transferred onto the printing areas. The quality of reproduction of colour images in offset printing depends on a number of parameters in a chain of steps; in the end, it is the amount and the distribution of ink deposited on the substrate that create the sensation and thus the perceived colours. We identify three control points in the offset printing process and present methods for assessing the printing process quality at two of these points:

• Methods for determining if the printing plates carry the correct image
• Methods for determining the amount of ink deposited onto the newsprint

A new concept of colour impression is introduced as a measure of the amount of ink deposited on the newsprint. Two factors contribute to the values of the colour impression: the halftone dot size and the ink density. Colour impression values are determined on gray bars using a CCD-camera-based system. Colour impression values can also be determined in an area containing an arbitrary combination of cyan, magenta, and yellow inks. The correct amount of ink is known either from a reference print or from prepress information. Thus, the deviation in the amount of ink can be determined, which can be used as a control value by a press operator or as input to a control system. How a closed-loop controller can be designed based on the colour impression values is also shown. It is demonstrated that the methods developed can be used for off-line print quality monitoring and ink feed control, or preferably in an online system in a newspaper printing press.
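As a hedged illustration of how a closed-loop controller could use colour impression deviations, the sketch below applies a simple proportional correction to an ink feed setting. The gain and the linear "press" response are invented for this example and do not come from the thesis.

```python
# Illustrative proportional ink-feed control loop. The press model and
# gain below are assumptions, not the thesis's actual press dynamics.

def ink_feed_controller(measured, reference, feed, gain=0.5):
    """One control step: adjust the ink feed proportionally to the
    deviation between the measured and reference colour impression."""
    deviation = reference - measured
    return feed + gain * deviation

def press(feed):
    """Toy plant: colour impression responds linearly to ink feed."""
    return 0.8 * feed

feed, reference = 1.0, 1.2
for _ in range(30):
    measured = press(feed)
    feed = ink_feed_controller(measured, reference, feed)
# The feed settles at the value whose measured colour impression
# matches the reference (here reference / 0.8 = 1.5).
```

A real controller would add integral action to remove steady-state error and rate limits to respect the press's ink-feed actuators; the proportional step above only shows where the measured deviation enters the loop.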

Advertisements (ads) in social networking sites (SNSAs) have been considered by many researchers as a crucial area of research. However, the scope of the existing studies on consumers' assessments of SNSAs has been very limited. Most of the existing studies on assessing SNSAs have focused on Ducoffe's (1996) model with its three variables and have ignored other related variables, such as the credibility value and interactivity value of the advertisement, which are more logically related to SNSAs than to traditional ads. Moreover, most of these studies have been skewed towards younger users and have ignored social networking site (SNS) users from other age categories. Finally, previous studies about the assessment of SNSAs have depended on data collected from users of popular SNSs and have ignored active users from brand communities (fans of brands on SNSs). In this thesis, the present author has emphasized these three points as the major gaps in the literature about assessing SNSAs. Moreover, to deepen our understanding of how SNS users assess SNSAs, this study presents the research findings of three published papers with three different purposes and different levels of analysis.

The first article aimed to extend Ducoffe's (1996) model, which was used in the previous literature on assessing SNSAs, by considering the ads' credibility and interactivity values in addition to Ducoffe's (1996) three variables of information value, entertainment value, and irritation value. A multiple regression analysis was used to test the modified model, and based on the regression analysis of the five predictors, the model without the irritation value had the best coefficient of determination (R2). Moreover, a coefficient analysis was used to test the given hypotheses and to determine the coefficients of the predictors. According to this survey study, the four primary variables that predicted the consumers' assessment of SNSAs were the information value, entertainment value, credibility value, and interactivity value. As perceived by the SNS users, the interactivity value was the strongest of the four predictors.

Based on the unexpected result for the irritation value in the first paper, the second paper focused on testing the extended model of the assessment of SNSAs as perceived by a different research population, in this case brand communities' consumers (BCCs). Based on the regression analysis of the five predictors, the model with all five predictors had the best coefficient of determination (R2). The coefficient analysis was used to test the given hypotheses, to determine the coefficients of the five predictors, and to form a construct equation for assessing SNSAs. Based on this survey study, the four variables with significant positive effects on the consumers' assessment of SNSAs were informativeness, entertainment value, credibility value, and interactivity value, while the fifth dimension (irritation value) had a significant negative coefficient. Moreover, that study provided a deeper understanding of how BCCs assess SNSAs, and it contributed to identifying the main characteristics of BCCs on an SNS.

The third paper focused on exploring the effect of national culture on the consumers' assessment of SNSAs. The cultural features of the respondents in that study gave additional evidence about how a nation's cultural characteristics can influence the consumers' assessment of SNSAs. This study helped to identify how SNS users from Egypt, the Netherlands, and the United Kingdom assess SNSAs. A one-way analysis of variance with post hoc tests was used to compare the assessments of the three nations. Based on the empirical findings of this survey study, the three groups showed significant F-ratios for their perceptions of four of the five variables for assessing SNSAs. Their perceptions of the entertainment value did not differ significantly among the three groups, while the interactivity value had the strongest F-ratio.

The overall purpose of this study was to deepen our understanding of how SNS users assess SNSAs in different settings by considering SNS users, BCCs, and users from various nations. All of the studies presented here have focused on variables for assessing ads that have been used by other researchers in different research contexts.
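The kind of multiple regression analysis described above can be sketched on synthetic data. Everything below is illustrative: the Likert-style scales, the effect sizes (chosen so that interactivity is strongest, mirroring the reported ordering), and the sample size are assumptions, not the thesis's data.

```python
# Illustrative OLS fit of an ad-assessment score on five predictors:
# information, entertainment, irritation, credibility, interactivity.
# All data are synthetic; the coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(1, 7, size=(n, 5))              # Likert-like 1-7 scales
true_beta = np.array([0.4, 0.3, -0.2, 0.3, 0.5])  # assumed effects
y = X @ true_beta + rng.normal(0, 0.3, size=n)  # noisy assessment score

# Ordinary least squares with an intercept column.
Xd = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)

# Coefficient of determination (R^2).
resid = y - Xd @ beta_hat
r2 = 1 - resid.var() / y.var()
```

Comparing models with and without a predictor, as the first two papers do, amounts to refitting with that column dropped and comparing the resulting R2 values; the coefficient analysis corresponds to inspecting the entries of `beta_hat` and their significance.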

A fleet of commercial heavy-duty vehicles is a very interesting application arena for fault detection and predictive maintenance. With a highly digitized electronic system and hundreds of sensors mounted on board a modern bus, a huge amount of data is generated from daily operations.

This thesis and the appended papers present a study of an autonomous framework for fault detection, using data gathered from the regular operation of vehicles. We employed an unsupervised deviation detection method, called Consensus Self-Organising Models (COSMO), which is based on the concept of the 'wisdom of the crowd'. It assumes that the majority of the group is 'healthy'; by comparing individual units within the group, deviations from the majority can be considered potentially 'faulty'. Information regarding detected anomalies can be utilized to prevent unplanned stops.

This thesis demonstrates how knowledge useful for detecting faults and predicting failures can be autonomously generated based on the COSMO method, using different generic data representations. The case study in this work focuses on air system problems in a commercial fleet of city buses. We propose an approach to evaluate the COSMO method and show that it is capable of detecting various faults and of indicating upcoming air compressor failures. A comparison of the proposed method with an expert-knowledge-based system shows that both methods perform equally well. The thesis also analyses the usage and potential benefits of the Echo State Network as a generic data representation for the COSMO method, and demonstrates the capability of the Echo State Network to capture interesting characteristics when detecting different types of faults.
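The 'wisdom of the crowd' idea behind COSMO can be sketched as follows: summarize each unit with a simple model, measure each unit's distance to the rest of the fleet, and flag the most distant unit as potentially deviating. The histogram model, the Hellinger distance, and the synthetic fleet below are illustrative assumptions, not the thesis's exact representations or signals.

```python
# Illustrative consensus-style deviation detection over a toy fleet.
# Models, distance measure, and data are assumptions for this sketch.
import random

def histogram(signal, bins=10, lo=0.0, hi=1.0):
    """Summarize a unit's signal as a normalized histogram (its 'model')."""
    counts = [0] * bins
    for x in signal:
        i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    total = float(len(signal))
    return [c / total for c in counts]

def hellinger(p, q):
    """Distance between two normalized histograms, in [0, 1]."""
    return (0.5 * sum((pi**0.5 - qi**0.5) ** 2 for pi, qi in zip(p, q))) ** 0.5

random.seed(1)
# Nine 'healthy' buses with similar signal distributions...
fleet = {f"bus{i}": [random.uniform(0.4, 0.6) for _ in range(500)]
         for i in range(9)}
# ...and one unit whose signal has drifted (a potential fault).
fleet["bus9"] = [random.uniform(0.7, 0.9) for _ in range(500)]

models = {k: histogram(v) for k, v in fleet.items()}

def mean_dist(k):
    """Average distance from unit k to the rest of the fleet."""
    return sum(hellinger(models[k], models[j])
               for j in models if j != k) / (len(models) - 1)

most_deviating = max(models, key=mean_dist)
```

In a deployed system this comparison is repeated over time, and a unit that deviates persistently (rather than once) is reported for maintenance, which is what allows upcoming failures to be indicated before an unplanned stop.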

The arrival of manycore systems enforces new approaches to developing applications in order to exploit the available hardware resources. Developing applications for manycores requires programmers to partition the application into subtasks, consider the dependences between the subtasks, understand the underlying hardware, and select an appropriate programming model. This is complex, time-consuming, and prone to error.

In this thesis, we identify and implement abstraction layers in compilation tools to decrease the burden on the programmer, to increase programming productivity and program portability for manycores, and to analyze their impact on performance and efficiency. We present compilation frameworks for two concurrent programming languages, occam-pi and the CAL Actor Language, and demonstrate the applicability of the approach with application case studies targeting three different manycore architectures: STHorm, Epiphany, and Ambric.

For occam-pi, we have extended the Tock compiler and added a backend for STHorm. We evaluate the approach using a fault tolerance model for a four-stage 1D-DCT algorithm, implemented using occam-pi's constructs for dynamic reconfiguration, and the FAST corner detection algorithm, which demonstrates the suitability of occam-pi and the compilation framework for data-intensive applications. We also present a new CAL compilation framework with a front end, two intermediate representations, and three backends: for a uniprocessor, Epiphany, and Ambric. We show the feasibility of our approach by compiling a CAL implementation of the 2D-IDCT for the three backends. We also present an evaluation and optimization of code generation for Epiphany by comparing the code generated from CAL with a hand-written C implementation of the 2D-IDCT.

This thesis and the appended papers present the process of tackling the problem of environment modeling for autonomous agents. More specifically, the focus of the work has been semantic mapping of warehouses. A semantic map for this purpose is expected to be layout-like and to support semantics of both the open spaces and the infrastructure of the environment. The representation of the semantic map is required to be understandable by all involved agents (humans, AGVs, and the WMS), and the process of semantic mapping should lean towards full autonomy, with minimal input required from the human user. To that end, we studied the problem of semantic annotation over two kinds of spatial maps from different modalities. We identified the properties, structure, and challenges of the problem, and we developed representations and accompanying methods that meet the set criteria. The overall objective of the work is "to develop and construct a layer of abstraction (models and/or decomposition) for structuring and facilitating access to salient information in the sensory data. This layer of abstraction connects high-level concepts to low-level sensory patterns." Relying on modeling and decomposition of sensory data, we present our work on abstract representations for two modalities (laser scanner and camera) in three appended papers. The feasibility and performance of the proposed methods are evaluated on data from a real warehouse. The thesis concludes by summarizing the presented technical details and outlining future work.

With increased demands on productivity and safety in industry, new issues in terms of automated material handling arise. As a result, industries do not have a homogeneous fleet of trucks; driven and driverless trucks are mixed in a dynamic environment. Driven trucks are more flexible than driverless trucks, but are also involved in more accidents. A transition from driven to driverless trucks can increase safety as well as productivity, in terms of fewer accidents and more accurate delivery. Hence, reliable and standardized solutions that avoid accidents are important for achieving high productivity and safety. There are two different safety standards for driverless trucks, for Europe (EN1525) and the U.S. (B56.5–2012), and they have developed differently. In terms of obstacles, both consider contact with humans. However, a machinery-shaped object has recently been added to the U.S. standard (B56.5–2012). The U.S. standard also considers different materials for different sensors, as well as non-contact sensors. For obstacle detection, both the historical contact-sensitive mechanical bumpers and the traditional laser scanners used today have limitations: they do not detect hanging objects. In this work we have identified several thin objects that are of interest in an industrial environment. A test apparatus with a thin structure is introduced for a more uniform way to evaluate sensors. To detect thin obstacles, we used a standard stereo setup and developed it further into a trinocular system (a stereo system with three cameras). We also propose a method to evaluate 3D sensors based on the information from a 2D range sensor, in which a 3D model is created by measuring the position of a reflector with a known position relative to an object of known size. The trinocular system, a 3D TOF camera, and a Kinect sensor were evaluated with this method. The results showed that the method can be used to evaluate sensors. They also showed that 3D sensor systems have the potential to be used on driverless trucks to detect obstacles, initially as a complement to existing safety-classed sensors. To improve safety and productivity, the European and U.S. safety standards need to be harmonized. Furthermore, sensor systems and standards need to be developed in parallel to make use of state-of-the-art sensor technology.
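As background to the stereo and trinocular systems discussed above, depth recovery from disparity follows the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between cameras, and d the disparity. The focal length and baseline values in the sketch below are invented for illustration.

```python
# Standard stereo depth-from-disparity relation; the numeric values
# are assumptions for illustration, not the thesis's camera setup.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres from a pixel disparity; a larger disparity
    means the obstacle is closer to the cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# An object whose matched features are 35 px apart, seen by cameras
# with f = 700 px and a 0.1 m baseline, lies at 2 m.
z = depth_from_disparity(35.0, focal_px=700.0, baseline_m=0.1)
```

The relation also explains why thin objects are hard: they cover few pixels, so few reliable disparity matches exist, which is one motivation for adding a third camera in a trinocular setup.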

Many consumer products, in areas such as computer graphics, digital signal processing, communication systems, robotics, navigation, astrophysics, and fluid physics, demand high computational performance as a consequence of the increasingly advanced algorithms used in these applications. Until recently, the down-scaling of hardware technology was able to meet these higher demands with higher clock rates on the chips. The stagnation of hardware performance development has shifted interest towards the implementation of algorithms in hardware. Especially within wireless communication, the desire for higher transmission rates has increased the interest in algorithm implementation methodologies. The scope of this thesis is mainly the developed methodology of parabolic synthesis, a methodology for implementing approximations of unary functions in hardware. The methodology is described together with the criteria that have to be fulfilled to perform an approximation of a unary function. The hardware architecture of the methodology is described, together with special hardware that performs the squaring operation. The outcome of the presented research is a novel methodology for implementing approximations of unary functions such as trigonometric functions, logarithmic functions, as well as square root and division functions. The architecture of the processing part automatically gives a high degree of parallelism. The methodology is founded on operations that are simple to implement in hardware, such as addition, shifts, and multiplication, which makes the hardware implementation simple to perform. The hardware architecture is characterized by a high degree of parallelism that gives a short critical path and fast computation. The structure of the methodology also ensures an area-efficient hardware implementation.
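Parabolic synthesis itself is defined in the thesis; as a generic illustration of the underlying idea of approximating a unary function with only hardware-friendly operations, the sketch below fits a single second-order polynomial to sin on [0, π/2] and evaluates it in Horner form, using just additions and multiplications. The choice of function and interpolation points is an assumption for this example, not the thesis's method.

```python
# Generic illustration only (not parabolic synthesis itself): a
# second-order approximation of sin on [0, pi/2], evaluated with only
# the add/multiply operations highlighted as cheap in hardware.
import math

# Interpolate sin at x = 0, pi/4, pi/2 with a parabola a + b*x + c*x^2
# (Newton divided differences converted to monomial form).
x0, x1, x2 = 0.0, math.pi / 4, math.pi / 2
y0, y1, y2 = math.sin(x0), math.sin(x1), math.sin(x2)
f01 = (y1 - y0) / (x1 - x0)
f12 = (y2 - y1) / (x2 - x1)
c = (f12 - f01) / (x2 - x0)
b = f01 - c * (x0 + x1)
a = y0 - f01 * x0 + c * x0 * x1

def approx_sin(x):
    # Horner form: two multiplies and two adds, a short critical path.
    return a + x * (b + x * c)

# Maximum error over the interval, sampled at 101 points.
max_err = max(abs(approx_sin(t * math.pi / 200) - math.sin(t * math.pi / 200))
              for t in range(101))
```

A single parabola already keeps the error below a few percent here; methodologies such as parabolic synthesis obtain much higher accuracy by combining several simple sub-functions, while retaining exactly this kind of add/shift/multiply datapath.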