Designing an application from top to bottom is a challenge for any software architect. Designing an application to be deployed in the cloud adds extra complexity and a variety of questions to the task. One of these questions is how to deploy the application. The most popular choices at this time are Docker containers and serverless functions. This report presents a comparison between the two deployment methods based on cost and performance. The comparison did not yield a conclusive winner, but it did offer some key pointers to help with the decision. Docker containers offer a standardized deployment method for a low price and with good performance. Before choosing Docker, however, the intended market needs to be evaluated, since the price increases with each region the containers must serve. Serverless functions offer auto-scaling and easy global deployments but suffer from high complexity, slower performance, and an uncertain monthly price tag.

Since JavaScript code executed by the Node.js runtime environment runs in a single thread, without fully utilizing the power of multi-core systems, several fairly new approaches attempt to remedy this. Some of these approaches are considered well tested publicly and are widely used at the time of writing. The objectives of this study are to determine which of these approaches achieves the better scalability with respect to the number of handled requests, and to what extent those approaches utilize multi-core power compared to the plain Node.js environment with normal CPU scheduling.

Context awareness means sending the right information to the right user at the right time. Context is our environment, which can be anything around us, such as location, light, or noise. For the context to interact with our mobile devices or sensors, there must be protocols for communication and data formats for the sent or received contextual information, so that very specific context information can be given to the user. Since this communication and adaptation part is not well understood, in this paper we are interested in investigating the technology used for adaptation. We will also explain how this technology works to adapt itself to changes in the environment.

This study aims to find out how an online interactive video editing tool for teachers should be designed. To this end, students studying to become teachers and experienced teachers were interviewed, observed, and involved in usability testing of a prototype. In total there were 27 unique data-gathering situations with 11 unique participants. The five participating teacher students all studied at Linnaeus University in Växjö. The six experienced teachers have been teaching for many years and currently lecture teachers about new technology that can be used in the classroom. The results from interviews, observations, and a literature search contributed to a list of requirements, which in turn became a prototype. What has been discovered is that teachers need a tool that is easy to use, with interactions and functions such as adding clickable annotations to clips and creating playlists, which will help teachers plan ahead and save time during lectures.

As the number of Internet of Things (IoT) devices in daily use increases, the inadequacy of cloud computing to provide necessary IoT-related features, such as low latency, geographic distribution, and location awareness, is becoming more evident. Fog computing is introduced as a new computing paradigm to solve this problem by extending the cloud's storage and computing resources to the network edge. However, the introduction of this new paradigm is also confronted by various security threats and challenges, since the security practices implemented in cloud computing cannot be applied directly to this new architectural paradigm. To this end, various papers have been published in the context of fog computing security, in an effort to establish the best security practices towards the standardization of fog computing. In this thesis, we perform a systematic literature review of current research in order to provide a classification of the various security threats and challenges in fog computing. Furthermore, we present the solutions that have been proposed so far and which security challenges they address. Finally, we attempt to distinguish common aspects between the various proposals, evaluate current research on the subject, and suggest directions for future research.

Self-adaptive systems are capable of autonomously adjusting their behavior at runtime to accomplish particular adaptation goals. The most common way to realize self-adaptation is a feedback loop comprising four actions: collect runtime data from the system and its environment, analyze the collected data, decide whether an adaptation plan is required, and act according to that plan to achieve the adaptation goals. Existing approaches achieve the adaptation goals by using formal methods and exhaustively verify all the available adaptation options, i.e., the adaptation space. However, verifying the entire adaptation space is often not feasible, since it requires considerable time and resources. In this thesis, we present an approach that uses machine learning to reduce the adaptation space in self-adaptive systems. The approach integrates with the feedback loop and selects a subset of the adaptation options that are valid in the current situation. The approach is applied to the simulator of a self-adaptive Internet of Things application deployed in KU Leuven, Belgium. We compare our results with a formal-model-based self-adaptation approach called ActivFORMS. The results show that, on average, the adaptation space is reduced by 81.2% and the adaptation time by 85% compared to ActivFORMS, while achieving the same quality guarantees.
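The pruning step such an approach performs can be sketched in a few lines. The linear model, the feature encoding, and the toy options below are illustrative assumptions, not the thesis's actual classifier or data:

```python
# Sketch: use a learned classifier to select a subset of adaptation
# options worth verifying, instead of verifying the whole space.
# The linear model and features below are illustrative assumptions.

def predict_valid(option, weights, bias):
    """Linear classifier: predict whether an option is likely to
    satisfy the adaptation goals in the current situation."""
    score = sum(w * x for w, x in zip(weights, option["features"])) + bias
    return score > 0

def reduce_adaptation_space(options, weights, bias):
    """Return only the options the model predicts to be valid; only
    these would be passed on to the (expensive) formal verifier."""
    return [o for o in options if predict_valid(o, weights, bias)]

# Toy adaptation options: features could encode e.g. expected packet
# loss and energy consumption of an IoT network configuration.
options = [
    {"id": 1, "features": [0.9, 0.1]},
    {"id": 2, "features": [0.2, 0.8]},
    {"id": 3, "features": [0.7, 0.3]},
]
weights, bias = [1.0, -1.0], 0.0  # assumed pre-trained parameters
subset = reduce_adaptation_space(options, weights, bias)
print([o["id"] for o in subset])  # [1, 3]
```

Only the predicted-valid subset is verified, which is where the reported reduction in adaptation time comes from.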

Software testing is very common and is done to increase the quality of, and confidence in, a piece of software. In this report, an idea is proposed to create software for GUI regression testing that uses image recognition to perform the steps of test cases. The problem with such a solution is that if a GUI has been changed, many test cases might break. For this reason, REGTEST was created: a GUI regression testing tool able to handle one type of change to a GUI component, such as a change in color, shape, location, or text. This kind of solution is interesting because setting up tests with such a tool can be very fast and easy, but one previously big drawback of using image recognition for GUI testing has been that it could not handle changes well. It can be compared to tools that use IDs to perform a test, where the actual visualization of a GUI component does not matter as long as the ID stays the same; however, such tools either require underlying knowledge of the GUI component naming conventions or rely on tools that automatically construct XPath queries for the components. To verify that REGTEST can work as well as existing tools, a comparison was made against two professional tools, Ranorex and Kantu. In these tests, REGTEST proved very successful and performed close to, or better than, the other software.
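The tolerance to a single change can be illustrated abstractly. Representing a component by four discrete attributes is an assumption made here for illustration; it is not REGTEST's actual image-recognition matching:

```python
# Sketch: match a GUI component against its baseline, tolerating at
# most one changed attribute (color, shape, location or text).
# The attribute representation is an illustrative assumption.

ATTRIBUTES = ("color", "shape", "location", "text")

def matches(baseline, candidate, max_changes=1):
    """True if the candidate differs from the baseline in at most
    `max_changes` attributes."""
    changed = sum(1 for a in ATTRIBUTES if baseline[a] != candidate[a])
    return changed <= max_changes

baseline = {"color": "blue", "shape": "rect", "location": (10, 20), "text": "OK"}
moved    = dict(baseline, location=(40, 20))           # one change
restyled = dict(baseline, color="red", text="Apply")   # two changes

print(matches(baseline, moved))     # True: a single change is tolerated
print(matches(baseline, restyled))  # False: two changes break the match
```

A strict pixel-perfect matcher would reject both candidates; tolerating one changed property is what keeps a test case alive across a minor GUI revision.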

Natural Language Processing (NLP) is a field studying computer processing of human language. Recently, neural network language models, a subset of machine learning, have been used to great effect in this field. However, research remains focused on the English language, with few implementations in other languages of the world. This work focuses on how NLP techniques can be used for the task of grammar and spelling correction in the Swedish language, in order to investigate how language models can be applied to non-English languages. We use a controlled experiment to find the hyperparameters most suitable for grammar and spelling correction on the Göteborgs-Posten corpus, using a Long Short-term Memory Recurrent Neural Network. We present promising results for Swedish-specific grammar correction tasks using this kind of neural network; specifically, our network has a high accuracy in completing these tasks, though the accuracy achieved for language-independent typos remains low.

Artificial general intelligence is not well defined, but attempts such as the recent list of “Ingredients for building machines that think and learn like humans” are a starting point for building a system considered as such [1]. Numenta is attempting to lead the new era of machine intelligence with their research to re-engineer principles of the neocortex. It is to be explored how the ingredients are in line with the design principles of their algorithms. Inspired by DeepMind's commentary about an autonomy ingredient, this project created a combination of Numenta's Hierarchical Temporal Memory theory and Temporal Difference learning to solve simple tasks defined in a browser environment. An open-source software package, based on Numenta's intelligent computing platform NuPIC and OpenAI's framework Universe, was developed to allow further research on HTM-based agents on customized browser tasks. The analysis and evaluation of the results show that the agent is capable of learning simple tasks and that there is potential for generalization inherent to sparse representations. However, they also reveal the infancy of the algorithms, which are not capable of learning dynamic, complex problems, and that much future research is needed to explore whether they can create scalable solutions towards a more general intelligent system.

The Internet of Things is becoming more and more popular in healthcare, as it brings benefits that improve efficiency in saving lives and reduce costs, but it also presents a new attack vector that can be used to steal or manipulate the information sent between devices. This report focuses on three properties in the definition of security: confidentiality, integrity, and access control. The report looks into the challenges in healthcare IoT today through a literature review and, from those challenges, into what could minimise them before a device goes into production. The report found that the lack of standardisation has led to errors that could easily be prevented by following a guideline of tests, such as those from the European Union Agency for Network and Information Security, or by running a penetration test on the device with the tools brought up in the report to see what vulnerabilities are present.

For the majority of the computer's existence, we humans have interacted with computers in a similar way, usually with a strict one-to-one relationship between user and machine. This is reflected in the design of most computers, operating systems, and user applications on the market today, which are typically intended to be operated by a single user. When computers are used for teamwork and cooperation, this design philosophy can be restricting and problematic. This paper investigates the development of shared software intended for multiple users and the impact of the single-user bias in this context. A prototype software system was developed in order to evaluate different development methods for shared applications and discover potential challenges and limitations of this kind of software. It was found that the development of applications for multiple users can be severely limited by the target operating system and hardware platform. The authors conclude that new platforms are required to develop shared software more efficiently. These platforms should be tailored to provide robust support for multiple concurrent users. This work was carried out together with SAAB Air Traffic Management in Växjö, Sweden, and is a bachelor's thesis in computer engineering at Linnaeus University.

WebGL is a technique that allows the browser to run 3D applications with the help of the GPU. Voronoi diagrams are sets of polygons that can be used to illustrate worlds of islands. In a web application that uses Voronoi polygons to create two-dimensional worlds, there is a future vision of enabling three-dimensional behavior. There are multiple frameworks and libraries that can be used to simplify the process of creating 3D applications in the browser. Because 3D applications can be demanding on performance, an experiment was conducted with BabylonJS and Three.js. To evaluate which of the two performed better, RAM, GPU, and CPU usage were measured while translating two-dimensional Voronoi heightmaps into a 3D application. The results from this stress test show that Three.js outperformed BabylonJS.

With the rise of social media, people have gained a platform to express opinions and discuss current subjects with others. This thesis investigates whether a simple sentiment analysis — determining how positive a tweet about a given party is — can be used to predict the results of the Swedish general election, and compares the results to betting odds and opinion polls. The results show that while the idea is an interesting one, and the data can sometimes point in the right direction, it is far from a reliable source for predicting election outcomes.
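A minimal version of such a sentiment analysis can be sketched with a word lexicon and per-party averaging. The word lists and tweets below are invented for illustration; the thesis's actual classifier and data differ:

```python
# Sketch: lexicon-based sentiment score per party, as a crude
# popularity signal. Lexicon and tweets are invented examples.

POSITIVE = {"great", "good", "win", "support"}
NEGATIVE = {"bad", "scandal", "lose", "against"}

def tweet_score(tweet):
    """Positive word count minus negative word count."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def party_sentiment(tweets_by_party):
    """Average tweet score per party."""
    return {party: sum(map(tweet_score, tweets)) / len(tweets)
            for party, tweets in tweets_by_party.items()}

tweets = {
    "A": ["great debate, A will win", "good policies from A"],
    "B": ["another scandal for B", "B is bad for the economy"],
}
print(party_sentiment(tweets))  # {'A': 1.5, 'B': -1.0}
```

Ranking parties by such averages is exactly the kind of signal the thesis found to be suggestive but unreliable compared to polls and betting odds.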

This thesis proposes an implementation of battery-powered, time-synchronized wireless nodes that can be deployed in a wireless network topology. Wireless sensor networks are used in a wide variety of scenarios where emphasis is placed on the wireless nodes’ battery life. The main area of focus in this thesis is to examine how wireless nodes can save battery power by utilizing a deep sleep mode and waking up simultaneously, using time synchronization, to carry out their data communication. This was achieved by deploying five time-synchronized, battery-powered nodes in a wireless network topology. The difference in battery current draw between continuously running nodes and sleep-enabled nodes was measured, as well as the time duration needed by the nodes to successfully send their payloads and route other nodes’ data. The nodes needed between 1502 ms and 3273 ms on average to carry out their data communication, depending on where they were located in the network topology. Measurements show that sleep-enabled nodes on average draw substantially less current than continuously running nodes during a complete data communication cycle. When sleep-enabled nodes were powered by two AA batteries, an increase in battery life of up to 1800% was observed.
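The duty-cycle arithmetic behind gains of this magnitude can be sketched as follows. The current-draw figures, cycle timing, and battery capacity below are assumptions chosen for illustration, not the measurements from the thesis:

```python
# Sketch: estimating battery life from duty-cycled operation.
# All figures are illustrative assumptions, not measured values.

def average_current_ma(active_ma, sleep_ma, active_s, cycle_s):
    """Duty-cycle weighted average current over one communication cycle."""
    sleep_s = cycle_s - active_s
    return (active_ma * active_s + sleep_ma * sleep_s) / cycle_s

CAPACITY_MAH = 2000            # assumed capacity of an AA cell
ACTIVE_MA, SLEEP_MA = 80, 0.01 # assumed radio-on vs deep-sleep draw
ACTIVE_S, CYCLE_S = 3, 60      # ~3 s of communication per 60 s cycle

avg = average_current_ma(ACTIVE_MA, SLEEP_MA, ACTIVE_S, CYCLE_S)
life_sleep = CAPACITY_MAH / avg        # hours with deep sleep enabled
life_awake = CAPACITY_MAH / ACTIVE_MA  # hours running continuously
print(f"{life_sleep / life_awake:.1f}x battery life")
```

With these assumed numbers the node sleeps 95% of each cycle and the average draw falls from 80 mA to about 4 mA, a roughly twentyfold battery-life increase, which is the same order of magnitude as the up-to-1800% gain reported above.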

The buzz surrounding artificial intelligence continues to grow. AI is currently used in a wide variety of systems and appliances, such as video games, virtual personal assistants, and self-driving cars. This paper explores the possibility of a self-learning AI that can play the classic arcade game Q*BERT using only screenshots as input. It is tested on several different screen sizes, and the results are collected and compared to those of a human player, as well as to results from previous research. The results are fairly positive: while the AI had a hard time matching the human player's average score, it did come close to the highest score.

The service worker is a programmable proxy that allows clients to keep parts of websites, or even whole domains, available offline, to receive push notifications, to perform background synchronization, and more. All of these features are available without installing an application - the user only visits a website. The service worker has gained popularity as a key component of Progressive Web Applications (PWAs). PWAs have already proven to drastically increase the number of visits and the duration of browsing for websites such as Forbes [1], Twitter [2], and many others. The service worker is a powerful tool, yet it is hard for clients to understand its security implications. Therefore, all modern browsers install service workers without asking the client. While this offers many conveniences to the user, this powerful technology introduces new security risks. This thesis takes a closer look at the structure of the service worker and focuses on the vulnerabilities of its components. After a literature analysis and some testing using the demonstrator developed during this project, the vulnerabilities of the service worker components are classified and presented in the form of a vulnerability matrix; mitigations for the vulnerabilities are then outlined, and the two are summarized in the form of security guidelines.

A new high-level language is sought for implementing and mocking functionality on the Axis Communications platform. We analyze what impact the Node.js runtime environment has regarding performance and its ability to provide this functionality. Performance refers to metrics on CPU, memory, free disk space, and response times, and to what effect an added Node.js runtime has on the platform. The functionality is based on Axis' ideas about having Node.js run high-level services. A test plan validates the functionality of a JavaScript service implemented as an API with JSON objects in POST and GET methods. To test the performance, a test suite was developed that samples the data on a device and saves it as log files on a client. The variable is three different stages, where the current device serves as the baseline. To find out what impact Node.js itself has, the second stage has Node.js present, and the third stage represents a device where Node.js and the JavaScript service are put under load. The results show that it is possible to implement a JavaScript service running under Node.js, since the test plan with its assertions passed all tests. Regarding performance and response time, we did see a decrease in CPU idle time and free memory, and an increase in response time compared to the baseline.

LoRaWAN is an open networking technology designed for IoT devices that allows wireless data transmission over longer ranges than some other wireless technologies, like Wi-Fi or Bluetooth, for devices that are constrained in terms of size, price, and available power. The current design of roaming among networks in LoRaWAN is heavily inspired by that of mobile networks, as the use of roaming agreements is mandated. Roaming agreements create unnecessary administrative overhead that hinders deployments. A roaming model that is quicker and simpler to deploy could save money for current users and could even attract new users to the technology. To circumvent the necessity of roaming agreements, a new, scalable, agreement-less roaming model should be proposed. In this thesis project, a literature survey is conducted, investigating similar technologies to find hints or inspiration for a new roaming model. It is found that the broker software architecture pattern, put in the context of roaming in LoRaWAN, suits the requirements quite well, so the new roaming model has been developed based on it. A software simulation has been implemented to gather data regarding the scalability of the model. It has been found that the proposed model is both scalable and agreement-less.
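The broker pattern at the core of such a model can be sketched as a lookup-and-forward service. The registration API, message fields, and device addresses below are illustrative assumptions, not the thesis's actual protocol:

```python
# Sketch: a broker routing uplink traffic between LoRaWAN networks
# without pairwise roaming agreements. Networks register the devices
# they serve; visited networks forward foreign traffic to the broker.
# All names and message formats are illustrative assumptions.

class Broker:
    def __init__(self):
        self.home_network = {}  # device address -> home network handler

    def register(self, dev_addr, network_handler):
        """A home network registers a device it serves."""
        self.home_network[dev_addr] = network_handler

    def forward(self, dev_addr, payload):
        """A visited network forwards a roaming device's uplink; the
        broker looks up the home network and delivers the payload."""
        handler = self.home_network.get(dev_addr)
        if handler is None:
            return False  # unknown device: drop
        handler(dev_addr, payload)
        return True

received = []
broker = Broker()
broker.register("dev-01", lambda dev, data: received.append((dev, data)))

print(broker.forward("dev-01", b"\x17"))  # True: routed to home network
print(broker.forward("dev-99", b"\x00"))  # False: no registered home
```

Because every network talks only to the broker, adding an N+1th network requires one registration step rather than N bilateral agreements, which is where the scalability claim comes from.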

Digitalization leads to many physical solutions being replaced by digital ones. In the surveillance and security business especially, humans have been replaced by cameras that are monitored from a remote location. As the power of computers has increased, live video can be analyzed to inform the controller about anomalies that a human eye could have missed. SAAB Air Traffic Management has a digital solution for providing Air Traffic Service called Remote Tower. This project provides a recommendation on how SAAB can dynamically render an overlay on live video that marks the runways and taxiways on the airfield.

This research explores the Automation Interface created by Beckhoff through the introduction of a compiler solution. Today, machine builders have to be able to build machines or plants in different sizes and provide many variations of the machine or plant types. Automatic code generation can be used to reuse code that has been tested and is configurable to match the desired functionality. Additionally, the use of a pre-existing API could potentially result in fewer engineering resources wasted in developing automatic code generation. This thesis aims to evaluate the Automation Interface (AI) tool created by Beckhoff. This is accomplished by incorporating the API functions into a compiler solution. The solution is designed to export the required information through an XML file to generate PLC applications. The generated PLC code is in Structured Text. In order to create a functional PLC application, software requirements and test cases were established. The solution was then validated by generating a data logger to illustrate the usage. The exploratory research revealed both the benefits and drawbacks of applying the Automation Interface to a compiler solution. The evaluation indicated that the Automation Interface can reduce the engineering effort needed to produce a compiler solution, but the learning curve of understanding the underlying components that work with the API required a great deal of effort.

Computer applications are no longer local installations on our computers. Many modern web applications and services rely on an internet connection to a centralized server to access the full functionality of the application. High availability architectures can be used to provide redundancy in case of failure, ensuring customers always have access to the server. Due to the complexity of such systems and the need for stability, deployments are often avoided, and new features and bug fixes cannot be delivered to the end user quickly. In this project, an automation system is proposed that allows deployments to a high availability architecture while maintaining availability. The proposed automation system is then tested in a controlled experiment to see if it can deliver what it promises. During low amounts of traffic, the deployment system showed it could make a deployment with a statistically insignificant change in error rate compared to normal operations. Similar results were found during medium to high levels of traffic for successful deployments, but if the system had to recover from a failed deployment, there was an increase in errors. However, the experiment also showed that deployments had a significant effect on the response time of the web application, resulting in the availability being compromised in certain situations.

Classification of scientific bibliographic data is an important and increasingly time-consuming task in a “publish or perish” paradigm where the number of scientific publications is steadily growing. Apart from being a resource-intensive endeavor, manual classification has also been shown to be performed with a rather high degree of inconsistency. Since many bibliographic databases contain a large number of already classified records, supervised machine learning for automated classification might be a solution for handling the increasing volumes of published scientific articles. In this study, automated classification of bibliographic data based on two different machine learning methods, Naive Bayes and Support Vector Machine (SVM), was evaluated. The data used in the study were collected from the Swedish research database SwePub, and the features used for training the classifiers were based on the abstracts and titles of the bibliographic records. The accuracy achieved ranged from a lowest score of 0.54 to a highest score of 0.84. The classifiers based on Support Vector Machines consistently received higher scores than the classifiers based on Naive Bayes. Classification performed at the second level of the hierarchical classification system clearly resulted in lower scores than classification performed at the first level. Using abstracts as the basis for feature extraction yielded overall better results than using titles; the differences were, however, very small.
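The Naive Bayes side of the comparison can be sketched in pure Python. The training snippets and labels below are invented, not SwePub records, and the thesis additionally evaluated an SVM:

```python
# Sketch: multinomial Naive Bayes over bibliographic text, with
# Laplace smoothing. Training data is invented for illustration.
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label); returns a model for predict()."""
    counts, n_docs, vocab = {}, Counter(), set()
    for text, label in docs:
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        n_docs[label] += 1
        vocab.update(words)
    return counts, n_docs, vocab

def predict(model, text):
    """Return the label maximizing log P(label) + sum log P(word|label)."""
    counts, n_docs, vocab = model
    total_docs = sum(n_docs.values())
    best, best_lp = None, float("-inf")
    for label, wc in counts.items():
        lp = math.log(n_docs[label] / total_docs)
        total = sum(wc.values())
        for w in text.lower().split():
            # Laplace smoothing over the vocabulary
            lp += math.log((wc[w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [
    ("neural network training loss", "machine learning"),
    ("deep learning model accuracy", "machine learning"),
    ("protein cell membrane receptor", "biology"),
    ("gene expression in cell tissue", "biology"),
]
model = train(docs)
print(predict(model, "deep neural network model"))  # machine learning
```

In the study, features were extracted from abstracts or titles in the same bag-of-words spirit, with abstracts giving slightly better accuracy.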

The following bachelor thesis explores the design of a GDPR (General Data Protection Regulation) compliant graphical user interface for an administrative school system. The work presents the process of developing and evaluating a web-based prototype, a platform chosen because of its availability. The aim is to investigate whether the design increases the caregivers' perception of being in control of personal data, both their own and data related to children in their care. The methods for investigating this subject are grounded in real-world research, using both quantitative and qualitative methods.

The results indicate that the users perceive the prototype to be useful, easy to use, and easy to learn, and that they are satisfied with it. The results also point towards the users feeling in control of both their own and their child's personal data when using the prototype. The users agree that a higher sense of control also increases their sense of security.

This thesis aims to explore the possibilities of improving the viewer experience of a broadcast football game with additional information, applied to the footage using motion tracking.

This work was conducted in collaboration with SVT Design, the in-house department of the Swedish television network SVT tasked with designing, developing, and creating graphical solutions. One of the systems developed by SVT Design is the character generator Caspar CG, an open-source CG system used worldwide for broadcast productions. In the spring of 2018, SVT Design presented the idea of incorporating a motion tracking feature within Caspar CG. This would be a feature that could be used during broadcast sporting events to provide viewers with additional information about the ongoing event. With the use of motion tracking, the additional information could be presented dynamically, in the sense that the information would follow the motion of the tracked object.

This thesis aimed to answer the following three research questions: what type of information could be displayed? When and how could this information be displayed? And lastly, how could the addition of information change the viewer's experience of the football game? The conclusions aimed to provide SVT Design with a set of guidelines and requirements regarding the design and implementation of the additional information in a manner that would promote a positive viewer experience.

The methodology applied in this thesis was qualitative, utilizing research activities such as semi-structured interviews with three staff members of SVT's department of sport productions, observation of two broadcast football games, and two focus groups in which the participants were presented with a prototype developed in Adobe After Effects, consisting of footage from the 2010 FIFA World Cup with additional information applied using motion tracking.

Through the analysis of the collected data, several recurring keywords and notions were identified and translated into requirements, which were structured around the three research questions. For example, the information needs to be player-specific and to provide insight into the potential outcome of the game; another requirement is that the information should be displayed when there is a break in the action during the game. The results from this thesis indicate that if the specified requirements are met, the additional information applied during the broadcast could improve the viewer's experience of watching the broadcast football game.

Recent research within the field of software engineering has used GitHub, the largest hub for open-source projects with almost 20 million users and 57 million repositories, to mine large amounts of source code in order to get more trustworthy results when developing machine and deep learning models. Mining GitHub comes with many challenges, since the dataset is large and does not only contain quality software projects. In this project, we mine projects from GitHub based on earlier research by others and validate their quality by comparing the projects with a small subset of quality projects with the help of software complexity metrics.

Cybersecurity threats have surged in the past decades. Experts agree that conventional security measures will soon not be enough to stop the propagation of more sophisticated and harmful cyberattacks. Recently, there has been a growing interest in mastering the complexity of cybersecurity by adopting methods borrowed from Artificial Intelligence (AI) in order to support automation. Moreover, entire security frameworks, such as DETECT (Decision Triggering Event Composer and Tracker), are designed for the automatic and early detection of threats against systems, using model analysis and recognising sequences of events and other signatures inherent to attack patterns.

In this project, I concentrate on cybersecurity threat assessment by the translation of Attack Trees (AT) into probabilistic detection models based on Bayesian Networks (BN). I also show how these models can be integrated and dynamically updated as a detection engine in the existing DETECT framework for automated threat detection, hence enabling both offline and online threat assessment. Integration in DETECT is important to allow real-time model execution and evaluation for quantitative threat assessment. Finally, I apply my methodology to some real-world case studies, evaluate models with sample data, perform data sensitivity analyses, then present and discuss the results.
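The translation idea can be illustrated on a toy attack tree, where leaves are basic attack steps with success probabilities and AND/OR gates act like the deterministic conditional probability tables of a Bayesian network. The tree structure, the numbers, and the brute-force enumeration below are illustrative assumptions, not the thesis's models or the DETECT engine:

```python
# Sketch: probability of attack success for a tiny attack tree,
# root = (phish AND exploit) OR physical, by exact enumeration of
# all leaf outcomes. Structure and probabilities are invented.
from itertools import product

leaves = {"phish": 0.3, "exploit": 0.2, "physical": 0.05}

def root_succeeds(state):
    """Deterministic AND/OR gates of the toy attack tree."""
    return (state["phish"] and state["exploit"]) or state["physical"]

def attack_probability():
    """Sum the probability of every leaf-outcome combination in which
    the root of the attack tree succeeds."""
    p_success = 0.0
    names = list(leaves)
    for outcome in product([True, False], repeat=len(names)):
        state = dict(zip(names, outcome))
        p = 1.0
        for name, happened in state.items():
            p *= leaves[name] if happened else 1 - leaves[name]
        if root_succeeds(state):
            p_success += p
    return p_success

print(round(attack_probability(), 4))
```

A Bayesian network engine would compute the same quantity by message passing instead of enumeration, and, as in the dynamic updating described above, would also allow conditioning on observed events to revise the threat level online.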

Artificial neural networks have been gaining attention in recent years due to their impressive ability to map out complex nonlinear relations within data. In this report, an attempt is made to use a Long Short-Term Memory neural network for detecting anomalies within electrocardiographic records. The hypothesis is that if a neural network is trained on records of normal ECGs to predict future ECG sequences, it is expected to have trouble predicting abnormalities not previously seen in the training data. Three different LSTM model configurations were trained using records from the MIT-BIH Arrhythmia database. Afterwards, the models were evaluated for their ability to predict previously unseen normal and anomalous sections. This was done by measuring the mean squared error of each prediction and the uncertainty of overlapping predictions. The preliminary results of this study demonstrate that recurrent neural networks with LSTM units are capable of detecting anomalies.
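The detection criterion itself, a large prediction error on anomalous sections, can be sketched without the network. Here a trivial previous-value predictor stands in for the trained LSTM, and the signal and threshold are invented for illustration:

```python
# Sketch: flag anomalies where a next-value predictor's squared
# error exceeds a threshold. A "predict the previous sample" model
# stands in for the LSTM; signal and threshold are invented.

def prediction_errors(signal):
    """Squared error of predicting each sample as the previous one."""
    return [(signal[i] - signal[i - 1]) ** 2 for i in range(1, len(signal))]

def anomaly_indices(signal, threshold):
    """Indices where the prediction error exceeds the threshold."""
    return [i + 1 for i, e in enumerate(prediction_errors(signal))
            if e > threshold]

normal_beat = [0.0, 0.1, 0.2, 0.1, 0.0]
ecg = normal_beat * 3 + [0.0, 2.5, 0.0] + normal_beat  # spike = anomaly
print(anomaly_indices(ecg, threshold=1.0))  # [16, 17]
```

A model trained only on normal beats predicts them well (low error) and fails on the unseen spike (high error), which is exactly the hypothesis the thesis tests with its LSTM configurations.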

The volumes of data that Big Data applications have to process are constantly increasing. This requires the development of highly scalable systems. Microservices are considered one of the solutions to the scalability problem. However, the literature on practices for building scalable data-intensive systems is still lacking.

This thesis aims to investigate and present the benefits and drawbacks of using a microservices architecture in big data systems. Moreover, it presents other practices used to increase scalability, including containerization, shared-nothing architecture, data sharding, load balancing, clustering, and stateless design. Finally, an experiment comparing the performance of a monolithic application and a microservices-based application was performed.
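One of the listed practices, data sharding, can be sketched as a deterministic hash-based placement of records. The key naming and shard count below are illustrative assumptions:

```python
# Sketch: hash-based data sharding. Each record key is mapped
# deterministically to one of n shards, spreading load roughly
# evenly across them. Keys and shard count are illustrative.
import hashlib
from collections import Counter

def shard_for(key, n_shards):
    """Deterministically map a record key to one of n_shards."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_shards

keys = [f"user-{i}" for i in range(1000)]
placement = {k: shard_for(k, 4) for k in keys}

# Every shard receives a roughly even share of the 1000 keys.
print(Counter(placement.values()))
```

Because the mapping is a pure function of the key, every stateless service instance can locate a record's shard without coordination, which is how sharding combines with the shared-nothing and stateless-design practices mentioned above.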

The results show that with an increasing load, microservices perform better than the monolith. However, to cope with the constantly increasing amount of data, additional techniques should be used together with microservices.

Companies today often have a variety of applications used in daily work. The problem companies face with these applications is that they are often brought in to deal with a specific task, and often at different times by different third-party developers. This results in the applications being independent units that integrate poorly with each other, making work and maintenance with the applications inefficient. To improve efficiency, the applications need better integration with each other. Better integration can be achieved either by replacing the current applications with new software or by developing software that helps the applications communicate.

This project covers the development of the latter: an API to improve efficiency at Volvo Construction Equipment in Braås. The API is developed with the Enterprise Service Bus (ESB) as inspiration; the purpose of the ESB is to act as middleware for the applications. Due to time limitations for the project, integration between the applications wasn't achieved. Instead, the focus was set on improving one of the steps in the work process at Volvo: verifying information between applications. This verification is done manually today, which makes it time-consuming, and this is what the API set out to address. The API still needs manual input of data from the applications, but it has automated the verification of the information between the applications, resulting in hours of reduced work for the staff at Volvo.

Distributed ledger technology (DLT) is one of the latest in a long list of digital technologies that appear to be heading towards a new industrial revolution. DLT became very popular with the publication of the Bitcoin Blockchain in 2008. However, when we consider its suitability for dynamic networking environments, such as the Internet of Things, issues like transaction fees, scalability, and offline accessibility have not been resolved. The IOTA Foundation has designed the IOTA protocol, a data and value transfer layer for the Machine Economy. The IOTA protocol uses an alternative, blockless Blockchain which claims to solve the previous problems: the Tangle.

This thesis first inquires into the theoretical concepts of both technologies, the Tangle and the Blockchain, to understand them and identify the reasons each is or is not compatible with Internet of Things networking environments. After the analysis, the thesis focuses on the proposed implementation as a solution to the connectivity issue suffered by the IOTA network. The answer to the problem is the development of a Neighbor Discovery algorithm, designed to fulfill the requirements demanded by the IOTA application.

Working on the IOTA network setup can be very interesting for a community that looks for new improvements at each release. Testing the solution in a peer-to-peer simulator (PeerSim), with different networking scenarios, allowed us to get valuable and more realistic information. Thus, after analyzing the results, we were able to determine the appropriate IOTA network configuration to build a more reliable and long-lasting network.

Manual deployment and testing of code can be both time-consuming and error-prone. Many repetitive manual steps can lead to critical tests or necessary configuration being forgotten or skipped due to time constraints, resulting in software that doesn't work as intended when deployed to production. The purpose of this report is to examine whether continuous delivery (CD) could lead to any positive effects, and whether there are any obstacles to CD in an Episerver project at Sigma ITC. The study was done by implementing a CD pipeline for a project similar to a real project at Sigma, then letting the developers work with it and interviewing them about their current workflow, their attitude towards the different steps involved, and whether they saw any problems with CD for their project. Even if the developers in general were positive towards CD, they had some reservations about it in their current projects. The main obstacles the developers saw were the time/cost, which would affect the customer, and some uncertainty around the complexity of testing Episerver. The results show that there could be positive effects of CD even if the project type is not optimal for reaping all the CD benefits; it all depends on the people involved seeing a value in testing and on the questions around testing in Episerver being straightened out.

Sony's Support Application team wanted an experiment conducted by which they could determine whether it was suitable to use machine learning to improve the quantity and quality of the search results of the in-application search tool. By improving the quantity and quality of the results, the team wanted to improve the customer's journey. A supervised machine learning model was created to classify articles into four categories: Wi-Fi & Connectivity, Apps & Settings, System & Performance, and Battery Power & Charging. The same model was used to create a service that categorized search terms into one of the four categories. The classified articles and the classified search terms were used to complement the existing search tool. The baseline for the experiment was the result of the search tool without classification. The results of the experiment show that the number of articles did indeed increase but that, mainly due to the broadness of the categories, the search results held low quality.
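As an illustration of the shared classification step, the sketch below uses a toy keyword-overlap classifier over the four categories. It is a hypothetical stand-in for the supervised model actually used (which is not specified above), and the training snippets are invented:

```python
from collections import Counter

CATEGORIES = ["Wi-Fi & Connectivity", "Apps & Settings",
              "System & Performance", "Battery Power & Charging"]

def train(labelled_docs):
    """Build per-category word counts from (text, category) pairs."""
    counts = {c: Counter() for c in CATEGORIES}
    for text, category in labelled_docs:
        counts[category].update(text.lower().split())
    return counts

def classify(text, counts):
    """Pick the category whose training vocabulary best matches the text.

    The same function can label both support articles and search terms,
    so the two can then be paired by category.
    """
    words = text.lower().split()
    return max(CATEGORIES, key=lambda c: sum(counts[c][w] for w in words))
```

Classifying both the articles and the incoming search terms with one model is what lets such a service pair a query with the articles from the matching category.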

This thesis investigates which type of user-related or user-generated personal information is appropriate to share on an interactive public display in a public environment, e.g., users' names and images, as a means to enhance interactions with public display applications. This investigation is two-fold: how the content on the public display could be personalized, and how interactions with the application can be emphasized. As a specific case of a public display application, an interactive shared music system with a collaborative playlist was chosen. A survey with static prototypes was created and sent out to identify which information users find appropriate to share, regarding privacy and xyz, but also what information they find interesting to share. 47 participants answered the survey, and the results informed an iterative design process that generated a series of static and four interactive prototypes.

Four groups with three participants each (the third group with only two) were used to discuss the interactive prototypes, highlighting the implemented features. In a focus-group-style setting, the participants were asked various questions for each of the four prototypes addressing this and that. During the sessions, notes were taken and the discussions were audio-recorded. The data from all groups were analyzed and then compared between the four groups.

The results showed that people are okay with sharing their username and first name. The content on the music system can be personalized with pop-up notifications showing information about users' choices, such as which song they vote for or which song they add to the playlist. Furthermore, the new features indicated a positive effect.

Multimedia learning is today a part of everyday life. Learning from digital sources on the internet is probably more common than learning from printed material. The goal of this project is to determine whether measuring user interaction in an interactive manual can be used to evaluate the effectiveness of the manual. Since feedback on multimedia learning materials is costly to obtain in face-to-face interaction, automatic feedback data might be useful for evaluating and improving the quality of multimedia learning materials.

In this project, an interactive manual was developed for a real-world report-generating application. The manual was then tested on 21 test users. Using the k-nearest-neighbour machine learning algorithm, the results show that the time taken on each step and the number of views of each step did not provide a good evaluation of the manual. The number of faults made by the user was good at predicting whether the user would abort the manual, and in combination with the number of acceptable interactions, the usability data did provide a better classification than ZeroR classification. The conclusions can be questioned given the small dataset used in this project.
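The comparison against ZeroR can be sketched with a small, self-contained k-nearest-neighbour classifier. The feature choice (faults and acceptable interactions per user) follows the description above, while the helper names and the toy data are invented:

```python
import math
from collections import Counter

def zero_r(train_labels):
    """ZeroR baseline: always predict the most common class."""
    return Counter(train_labels).most_common(1)[0][0]

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training points nearest to x."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

With feature vectors such as (faults, acceptable interactions) per user, k-NN can beat the constant ZeroR guess whenever the faults dimension actually separates aborting from completing users, which mirrors the result reported above.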

360-degree videos offer an immersive experience which is hard to find in traditional videos. The entire scene floats around the viewer, and a feeling of being there is common. However, something traditional videos have compared to 360-degree videos is control of the outcome: the filmmakers decide what they want to show and how they want to guide the viewer. This control is still an issue in 360-degree videos. In this thesis, the focus is on how a viewer can be attracted to an important part of a scene. The work concentrates on methods and techniques in the post-production part of video production, mainly video effects.

The user tests involved 16 participants with different backgrounds, including an expert in the field. The participants watched three 360-degree videos, each with the same content but with different post-production techniques to guide them: one video with graphical elements, one with light effects, and one with colour effects. Interviews gave a deeper insight into the participants' experience and opinions on the three videos.

The video effects affected the participants both positively and negatively. The participants were mostly satisfied with effects consisting of graphical elements, but not as much with colour. The users lost a bit of their freedom to explore the scene with the light effects, but these were useful when it came to guiding towards something. The participants found the guiding lines and the spotlight the most suitable methods to attract attention, with the spotlight the more preferred of the two. The red circle effect and the warm/cold colour effect were the least preferred, with the warm/cold colour effect the least preferred of all.

The effects helped to attract the viewer to a section of the video, and the users got a better understanding of the concept. However, more research needs to be done on drawing attention towards something. A combination of elements like light effects and graphical element effects could improve the post-production part. Future research on combining techniques from an entire video production needs to be conducted to find a significantly more effective way to attract attention to an important part of a scene without the viewers losing their freedom of exploring; this includes both the post-production side and methods to attract attention on set.

This project is about creating a dashboard with suitable data models containing support ticket statistics for the company Sigma IT Consulting. The workflow used by Sigma today is to manually log in to the system to see the support ticket statistics, which can be a tedious and time-consuming process. Furthermore, Sigma does not have any monitoring system for checking the health of their web application services. They need an internal dashboard containing this information, with regular updates. Our solution is to design suitable data models and implement them within a dashboard application.

The availability of prospective-customer information on social media platforms has led many marketing and customer-facing departments to utilize social media data in processes such as demographics research and sales and campaign planning. However, if business needs require further filtration of the data, beyond what the existing filters provide, the volume and rate at which data can be manually sifted is constrained by the speed, accuracy, and digital competency of employees. The repetitive nature of filtration work lends itself to automation, which ultimately has the potential to alleviate large productivity bottlenecks, enabling organizations to distill larger volumes of unfiltered data faster and with greater precision.

This project employs automation and artificial intelligence to filter LinkedIn profiles using customized selection criteria beyond what is currently available, such as nationality and age. By introducing the ability to produce tailored indices of social media data, automated filtration offers organizations the opportunity to better utilize rich prospect data for more efficient customer review and targeting.

The concept of Computer Vision is not new or fresh; on the contrary, ideas have been shared and worked on for almost 60 years. Many use cases have been found throughout the years and various systems developed, but there is always room for improvement. An observation was made that the methods used today are generally focused on a single purpose and implemented with expensive technology, which could be improved. In this report, we conduct extensive research to find out whether professionally sold, expensive software can be replaced by an off-the-shelf, low-cost solution entirely designed and developed in-house. To do that, we look at the history of Computer Vision, examples of applications and algorithms, and general scenarios or computer vision problems which can be solved. We then take a step further and define solid use cases for each of the scenarios found. Finally, a prototype solution is designed and presented. After analysing the gathered results, we aim to convince the reader that such an application can be developed and work efficiently in various areas, saving businesses their investments.