Data is essential to the capabilities of data-driven Artificial Intelligence (AI), Deep Learning and Big Data analysis techniques. The use of data, however, intrinsically raises concerns about data privacy, in particular for the individuals who provide the data. Hence, data privacy is considered one of the main non-functional features of the Next Generation Internet. This paper describes the privacy challenges and requirements for collaborative AI application development. We investigate the constraints of using digital rights management to support collaboration while addressing the privacy requirements of the regulation.

More and more companies experience problems with the maintainability and time-consuming development of automated testing tools. The MPC department at Ericsson Software Technology AB uses methods and tools that were often developed under time pressure, which results in time-consuming testing and requires more effort and resources than planned. The tools are also of such a nature that they are hard to expand and maintain, and in some cases they have been thrown out between releases. For this reason, we identified two major objectives that MPC wants to achieve: efficient and maintainable test automation. Efficient test automation is mainly about how to perform tests with less effort or in a shorter time. Maintainable test automation aims to keep tests up to date with the software. In order to decide how to achieve these objectives, we investigated which tests to automate, what should be improved in the testing process, which techniques to use, and finally whether or not the use of automated testing can reduce the cost of testing. These issues are discussed in this paper.

The role of information systems has become very important in today’s world. It is not only business organizations that use information systems; governments also possess very critical information systems. The need is to make information systems available at all times and under any circumstances. Information systems must be able to resist threats to their services, performance and existence, and to recover to their normal working state with the available resources in catastrophic situations. Information systems with such a capability can be called resilient information systems. This thesis defines resilient information systems, suggests a meta-model for them, and explains how existing technologies can be utilized for the development of resilient information systems.

In recent years, the transport of multimedia sessions, such as audio streams and video conferences, over IP has attracted a lot of attention, since most communication technologies are migrating to work over IP. However, sending media streams over IP networks has encountered some problems related to signaling. Ongoing research in this area has produced some solutions to this subject. The Internet Engineering Task Force (IETF) has introduced the Session Initiation Protocol (SIP), which has proved to be an efficient protocol for controlling sessions over IP. While a great deal of research has been performed on evaluating the performance of SIP and comparing it with competing protocols such as H.323, the delay caused by initiating the session has received less attention. In this document, we address the SIP session setup delay problem. In the lab, we built a test bed for running several SIP session scenarios. Using different models for those scenarios, we measured the session setup delay for each model. The analysis performed for each model showed that we could propose some models to be applied to the SIP session setup delay components.
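As a minimal illustration of the delay components in question, the sketch below computes a session setup delay from SIP signaling timestamps. Per RFC 3261, a basic call setup runs INVITE, 100 Trying, 180 Ringing, 200 OK, ACK, and setup delay is commonly taken from the INVITE to the 200 OK. The timestamps are hypothetical, not measurements from the test bed described above.

```python
# Hypothetical message timestamps (seconds) for one SIP call setup.
events = {
    "INVITE":      0.000,
    "100 Trying":  0.012,
    "180 Ringing": 0.047,
    "200 OK":      1.530,   # callee answers
    "ACK":         1.541,
}

# Setup delay visible to signaling: INVITE -> 200 OK.
setup_delay = events["200 OK"] - events["INVITE"]

# First-response component: INVITE -> 100 Trying (provisional response).
proxy_delay = events["100 Trying"] - events["INVITE"]

print(f"session setup delay: {setup_delay:.3f} s")
```

In a real measurement the timestamps would come from packet captures of many call attempts, and the components would be averaged per scenario.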

User Equipment (UE) nowadays is able to provide various internet applications and services, which raises the demand for high-speed data transfer and Quality of Service (QoS). Accordingly, next-generation mobile communication systems driven by these demands are expected to provide higher data rates and better link quality than existing systems. Orthogonal Frequency Division Multiple Access (OFDMA) and Single Carrier Frequency Division Multiple Access (SC-FDMA) are strong multiple access candidates for the uplink of International Mobile Telecommunications-Advanced (IMT-Advanced). These multiple access techniques, in combination with other promising technologies such as multi-hop transmission and Multiple-Input-Multiple-Output (MIMO), will be utilized to reach the targeted IMT-Advanced system performance. In this thesis, OFDMA and SC-FDMA are adopted and studied in the uplink of Long Term Evolution (LTE). Two transmission scenarios are considered, namely single-hop transmission and relay-assisted transmission (two hops). In addition, a hybrid multiple access technique that combines the advantages of OFDMA and SC-FDMA, in terms of low Peak-to-Average Power Ratio (PAPR) and better link performance (in terms of Symbol Error Rate (SER)), is proposed for the relay-assisted transmission scenario. Simulation results show that the proposed hybrid technique achieves better end-to-end link performance than the pure SC-FDMA technique and maintains the same PAPR value on the access link. In addition, a lower PAPR is achieved compared to the OFDMA case, which is an important merit in uplink transmission due to the UE’s constrained power resources (limited battery power).
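The PAPR advantage of SC-FDMA over OFDMA can be reproduced with a small numerical sketch, modelling SC-FDMA as DFT-spread OFDM with localized subcarrier mapping. The parameters (64 occupied subcarriers, 256-point IFFT, QPSK, 2000 symbols) are illustrative assumptions, not the LTE simulation settings used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc, n_fft, n_sym = 64, 256, 2000   # occupied subcarriers, IFFT size, symbols

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

ofdma, scfdma = [], []
for _ in range(n_sym):
    # random QPSK data symbols
    d = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
    # OFDMA: map data directly onto subcarriers, then IFFT
    X = np.zeros(n_fft, complex); X[:n_sc] = d
    ofdma.append(papr_db(np.fft.ifft(X)))
    # SC-FDMA: DFT-precode (spread) the data before localized mapping
    X = np.zeros(n_fft, complex); X[:n_sc] = np.fft.fft(d) / np.sqrt(n_sc)
    scfdma.append(papr_db(np.fft.ifft(X)))

print(f"mean PAPR  OFDMA: {np.mean(ofdma):.2f} dB   SC-FDMA: {np.mean(scfdma):.2f} dB")
```

The DFT precoding step restores a single-carrier-like envelope, which is why SC-FDMA shows the lower PAPR that matters for battery-limited uplink transmission.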

The purpose of this thesis project is to study and evaluate a UWB Synthetic Aperture Radar (SAR) image formation algorithm that was previously less familiar but has recently received much attention in this field; certain of its properties have earned it a place in radar signal processing. This is a fast time-domain algorithm named Local Backprojection (LBP). The LBP algorithm has been implemented for SAR image formation and simulated in MATLAB using standard values of the pertinent parameters. An evaluation of the LBP algorithm has then been performed, with all comments, estimations and judgments based on the resulting images. LBP has also been compared with the basic time-domain algorithm, Global Backprojection (GBP), with respect to the SAR images. The distinguishing feature of LBP is its reduced computational load compared to GBP. LBP is a two-stage algorithm: it first forms a beam for a particular subimage and, in a later stage, forms the image of that subimage area. The signal data collected from the target are processed and backprojected locally for every subimage individually; this is the reason for naming it Local Backprojection. After the formation of all subimages, they are arranged and combined coherently to form the full SAR image.
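For reference, the GBP baseline against which LBP is compared can be sketched as a direct pixel-by-pixel integration over the full aperture: each pixel accumulates the range-compressed echo sampled at its round-trip range for every aperture position. The geometry, sampling rate and ideal point-target echo below are illustrative assumptions; a real implementation would interpolate between range bins rather than round to the nearest one.

```python
import numpy as np

c = 3e8
fs = 200e6                           # range sampling -> bin size c/(2*fs) = 0.75 m
apt = np.linspace(-50, 50, 101)      # aperture (platform) positions along x, y = 0
tx, ty = 5.0, 100.0                  # hypothetical point target
bin_m = c / (2 * fs)

# Ideal range-compressed data: one unit echo per aperture position.
n_bins = 400
data = np.zeros((len(apt), n_bins))
for i, xa in enumerate(apt):
    data[i, int(round(np.hypot(tx - xa, ty) / bin_m))] = 1.0

# Global Backprojection: every pixel integrates over the *full* aperture.
xs = np.arange(0, 11, 1.0)
ys = np.arange(95, 106, 1.0)
img = np.zeros((len(ys), len(xs)))
for i, xa in enumerate(apt):
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            b = int(round(np.hypot(x - xa, y) / bin_m))
            if b < n_bins:
                img[iy, ix] += data[i, b]

iy, ix = np.unravel_index(img.argmax(), img.shape)
print("image peak at", xs[ix], ys[iy])   # coincides with the target position
```

LBP reduces this load by forming beams over subapertures for each subimage first, so the full aperture sum is only done once per beam rather than once per pixel.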

Defect prevention (DP) in the early stages of the software development life cycle (SDLC) is much more cost-effective than in later stages. The requirements elicitation and analysis & negotiation (E and A&N) phases of the requirements engineering (RE) process are critical and a major source of requirements defects. A poor E and A&N process may lead to a software requirements specification (SRS) full of defects such as missing, ambiguous, inconsistent, misunderstood, and incomplete requirements. If these defects are only identified and fixed in later stages of the SDLC, they can cause major rework at extra cost and effort. Organizations spend about half of their total project budget on avoidable rework, and the majority of defects originate from RE activities. This study is an attempt to prevent requirements-level defects from penetrating into later stages of the SDLC. For this purpose, empirical and literature studies are presented in this thesis. The empirical study was carried out with the help of six companies from Pakistan and Sweden by conducting interviews, and the literature study was done through literature reviews. This study explores the most common requirements defect types, their causes, the severity level of defects (i.e. major or minor), the DP techniques (DPTs) and methods and defect identification techniques that are being used in the software development industry, and the problems with these DPTs. This study also describes possible major differences between Swedish and Pakistani software companies in terms of defect types and the rate of defects originating from the E and A&N phases. On the basis of the study results, some solutions are proposed to prevent requirements defects during the RE process. In this way we can minimize the defects originating from the E and A&N phases of RE in bespoke requirements engineering (BESRE).

Abstract Title: Client Information Needs of MFIs: A Case Study of ASA Bangladesh Author: Juber Ahmed Academic Advisor: Dr. Klaus Solberg Søilen Department: School of Management, Blekinge Institute of Technology Course: Master Thesis in Business Administration Purpose: To enrich the knowledge base on clients’ needs for financial services by assessing the tools MFIs use to collect client information and how they utilize that information to develop new products and services, or to modify existing products and services or their terms and conditions, to meet the financial service needs of their clientele; also, how MFIs organize and manage the information and how they categorize their clients using it. Method: The investigation was conducted from both a theoretical and an empirical point of view. A deductive approach was used for the study and the case study method was deployed. I studied ASA, an MFI renowned in Bangladesh and beyond. First, I carried out secondary research to collect a number of successful methods and standard types of information used by successful MFIs from the existing literature. In the primary research, I interviewed 10 managers (Assistant Directors) of ASA to determine which of the methods found in the literature were more effective for collecting client information for them, and also asked them to add their own ideas to the list. Finally, I asked the interviewees to rate the methods, and the results are presented in this paper. Theory: This study was an exploratory one in which I discussed the aspects relevant to the study: microfinance, client assessment, clients of microfinance, information needs and management information systems. Findings: The study showed that ASA utilized client information to develop its credit products and services, and that based on the number of loans taken by clients it categorized its clients and modified or developed new products and services for each category of clients.
Although ASA employed several tools for collecting client information, the managers considered their staff’s collection of information at regular meetings with clients to be more effective than the other tools for modifying product terms and conditions and for modifying or developing new products and services for their women and small-enterprise clients. The study also revealed that in ASA an impact study was necessary to know clients’ overall level of satisfaction, but management needed specific information on which aspects of ASA and its credit products and services clients did and did not prefer, and the reasons for those preferences. They also needed an action plan to address clients’ specific concerns, so the information was needed on a continual basis, and they succeeded in achieving this continuous flow of information. For ASA, the best way to get this type of information would be through client satisfaction Focus Group Discussions (FGDs), although they utilized several tools, but not often, as discussed in part 3 of chapter 5. ASA owned an MIS (AMMS) for monitoring and managing client information, and utilized it to categorize clients based on the collected information about their number of loans. Conclusion: This study revealed that ASA served only women and small-enterprise clientele, which included the vulnerable non-poor and could contribute to ASA’s profitability. There was no attempt to diversify the products to include all of the poor, which should be the goal of microfinance in order to alleviate poverty. Moreover, each client was treated as an individual, although the loans were used to fulfill the household or family needs of the clients. There were tools for collecting household-level information about the impact of credit program participation, but they seldom made an effort to collect information on household money management, in other words how clients utilized the loans for a variety of household needs.
There is a lack of access to a variety of financial services for poor clients, as MFIs mostly serve the vulnerable non-poor instead of taking all categories of the poor into consideration. The study revealed that MFIs could gain long-term success by serving a specific market segment, but this should not be their only focus; their initiative should be to include all of the poor in their client profile, with priority given to a specific market segment. This could help them become sustainable and minimize risks by spreading them across different market segments. The study found that ASA considered FGDs an effective tool for collecting client information, as its staff and managers were familiar with this tool; moreover, it was cost-effective for them. It was observed that they seldom followed a tool selection process; it was top management that decided on the tools, and the decision might be influenced by internal and external interest groups and by competition. MFIs should organize client information in such a way that they can retrieve specific client information to serve clients better and make effective decisions, although it is fair to argue that they may prefer to serve wealthier clients. This research paper also presents some important findings from the existing microfinance literature and a number of recommendations, based on the study experience and on scholars’ opinions from existing microfinance studies, that may help MFIs prepare to adopt a client-oriented approach by utilizing client assessment tools to fulfill the financial service needs of their clients, hopefully including all of the poor irrespective of their categories.

The Sub-Saharan African country of Ghana is growing at a rapid pace. The construction industry is striving to keep up with the increasing demand for housing and commercial and industrial space while simultaneously protecting the physical environment and social well-being of the country, a challenge becoming known in the industry as ‘sustainable construction.’ This paper proposes a strategic approach to managing these twin challenges, consisting of two parts: a building rating system and a participatory method called multi-stakeholder dialogue (MSD). The combined rating system and MSD process was presented to the industry to determine its potential effectiveness in assisting the industry to move towards sustainability. The industry’s response indicates that the proposal could be of value, with certain noted limitations. This paper describes the rating system-MSD proposal, the industry’s response, and the implications for the construction industry in Ghana moving forward.

This project presents the description, design and implementation of a 4-channel microphone array implementing an adaptive sub-band generalized sidelobe canceller (GSC) beamformer, used for video conferencing, hands-free telephony, etc., in a noisy environment, for speech enhancement as well as noise suppression. The sidelobe canceller was evaluated with both Least Mean Square (LMS) and Normalized Least Mean Square (NLMS) adaptation. A testing setup is presented, which involves a linear 4-microphone array used to collect the data. Tests were done using one target signal source and one noise source. The data from each microphone were aligned via fractional time-delay filtering and divided into sub-bands, and the GSC was applied to each of the sub-bands. The overall Signal-to-Noise Ratio (SNR) improvement is determined from the main signal and noise input and output powers, with signal-only and noise-only inputs to the GSC. The NLMS algorithm significantly improves the speech quality, with noise suppression levels of up to 13 dB, while the LMS algorithm gives up to 10 dB. All of the processing for this thesis is implemented on a computer using MATLAB and validated by considering different SNR measures under various types of blocking matrix, different step sizes, different noise locations and variable SNR with noise.
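The NLMS update used in the adaptive branch of a GSC can be sketched in isolation as a system-identification loop: the filter weights are driven by the a-priori error, with the step normalized by the input power. The unknown filter and white-noise signals below are illustrative, not the microphone recordings from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

h_true = np.array([0.6, -0.3, 0.1])          # unknown system to identify
x = rng.standard_normal(5000)                # reference input
d = np.convolve(x, h_true)[: len(x)]         # desired signal = filtered reference

M, mu, eps = 3, 0.5, 1e-8                    # taps, step size, regularizer
w = np.zeros(M)
xbuf = np.zeros(M)
for n in range(len(x)):
    xbuf = np.r_[x[n], xbuf[:-1]]            # newest sample first
    e = d[n] - w @ xbuf                      # a-priori error
    w += mu * e * xbuf / (xbuf @ xbuf + eps) # normalized LMS update

print(w)                                     # approaches h_true
```

The normalization by the instantaneous input power is what makes NLMS less sensitive to signal level than plain LMS, consistent with the better suppression figures reported above.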

Knowledge management (KM) is essential for success in Global Software Development (GSD), Distributed Software Development (DSD) and Global Software Engineering (GSE). Software organizations are managing knowledge in innovative ways to increase productivity. One of the major objectives of KM is to improve productivity through effective knowledge sharing and transfer. Therefore, to maintain effective knowledge sharing in distributed agile projects, practitioners need to adopt different types of knowledge sharing techniques and strategies. Distributed projects introduce new challenges to KM, so practices that are used in agile teams become difficult to put into action in distributed development. Although informal communication is the key enabler of knowledge sharing, when an agile project is distributed, informal communication and knowledge sharing are challenged by the low communication bandwidth between distributed team members, as well as by social and cultural distance. In the work presented in this thesis, we give an overview of empirical studies of knowledge management in distributed agile projects. Based on the main theme of this study, we have categorized and reported our findings on major concepts that need empirical investigation. We have classified the main research theme of this thesis into two sub-themes: • RT1: Knowledge sharing activities in distributed agile projects. • RT2: Spatial knowledge sharing in a distributed agile project. The main contributions are: • C1: Empirical observations regarding knowledge sharing activities in distributed agile projects. • C2: Empirical observations regarding spatial knowledge sharing in a distributed agile project. • C3: Process improvement scope and guidelines for the studied project.

Governments in both developing and developed countries provide services to their citizens; however, some provide better services than others. Public services generally include health, education, electricity and water supply, social welfare, transportation, communication and other services. Countries face many challenges in providing these services, and many developing countries have failed to deliver public services efficiently to their citizens. One of the issues is corruption. Developing countries need to adopt e-governance and Information and Communication Technology (ICT) to overcome these issues. E-governance is the process of improving, through Information Technology (IT), the way government works, shares information, interacts with clients (citizens, businesses, government) and provides services to them. Our thesis investigates whether ICT can combat corruption and improve Public Service Delivery (PSD) in developing countries. The findings of our study will help developing countries in combating corruption and improving public services. We have reviewed the literature related to public service delivery, corruption and its adverse effects, and the use of ICT and e-governance to combat corruption. We adopted a case study approach to see what sorts of ICT initiatives are being taken by the government of Punjab to reduce corruption and improve public services. In this regard we explored the e-governance projects of PITB (the Information Technology department of the Government of Punjab), including a recent PITB project, the Citizen Feedback Model (CFM), which has received international attention for combating corruption and improving public service delivery. We performed a quantitative analysis on a six-month data set from the CFM, comprising over one hundred and seventy thousand responses categorized into predefined feedback categories. Through the quantitative analysis of the CFM data, we found that this particular e-governance project is reducing corruption among public officials and improving public service delivery. Further explanations are provided through information gathered in interviews. Finally, we tie our results to findings in the literature and suggest implications.

Context: A District Heating System (DHS) uses a central heating plant to produce and distribute hot water in a community. Such a plant is connected to consumers’ premises to provide them with hot water and space heating. Variations in the consumption of heat energy depend on different factors such as differences in energy prices, living standards, environmental effects and economic conditions. These factors can be managed intelligently with advanced Information and Communication Technology (ICT) tools such as smart metering. Smart metering is a new and emerging technology, normally used for the metering of District Heating (DH), district cooling, electricity and gas. Traditional meters measure overall energy consumption, whereas smart meters can frequently record and transmit energy consumption statistics to both energy providers and consumers by using their communication networks and network management systems. Objectives: The first objective of the study was to provide energy consumption/saving suggestions on the smart metering display for the accepted consumer behavior proposed by the energy providers. Our second objective was to analyze the financial benefits for the energy providers that could be expected from better consumer behavior. The third objective was to analyze the energy consumption behavior of residential consumers and how we can support it. The fourth objective of the study was to use the extracted consumer behavior suggestions to propose an Extended Smart Metering Display for improving energy economy. Methods: In this study, a background study was conducted to develop a basic understanding of District Heat Energy (DHE), smart meters and their existing displays, consumer behaviors and their effects on energy consumption. Moreover, interviews were conducted with representatives of a smart heat meter manufacturer, energy providers and residential consumers.
The interview findings enabled us to propose an Extended Smart Metering Display that satisfies the recommendations received from all the interviewees and from the background study. Further in this study, a workshop was conducted for the evaluation of the proposed Extended Smart Metering Display, which involved representatives of the smart heat meter manufacturer and residential energy consumers. DHE providers also contributed to this workshop through comments in an online conversation, for which an evaluation request was sent to member companies of the Swedish District Heating Association. Results: The informants in this research have different levels of experience. Through a systematic procedure we obtained and analyzed findings from all the informants. To fulfill energy demands during peak hours, the informants emphasized presenting efficient energy consumption behavior on the display of the smart heat meters. According to the informants, efficient energy consumption behavior can be presented through energy consumption/saving suggestions on the smart meter display. These suggestions relate to daily life activities such as bathing and showering, cleaning, washing and heating usage. We found that the efficient energy consumption behavior recommended by the energy providers can provide financial improvements for both the energy providers and the residential consumers. On the basis of these findings, we proposed an Extended Smart Metering Display that presents information in a simple and interactive way. Furthermore, the proposed Extended Smart Metering Display can also help measure consumers’ energy consumption behavior effectively. Conclusions: After answering the research questions, we concluded that extending the existing smart heat meter display can effectively help the energy providers and the residential consumers to utilize resources efficiently.
That is, it will not only reduce energy bills for the residential consumers, but will also help the energy providers to save scarce energy and enable them to serve consumers better during peak hours. After deployment of the proposed Extended Smart Metering Display, the energy providers will be able to support consumer behavior in a reliable way, and the consumers will be able to find and follow the energy consumption/saving guidelines easily.

Context. A reminder system offers flexibility in daily life activities and helps users remain independent. A reminder system not only helps in reminding about daily life activities, but also serves to a great extent people who deal with health care issues, for example a health supervisor who monitors people with different health-related problems, such as people with disabilities or mild dementia. Traditional reminders, which are based on a set of predefined activities, are not enough to address this necessity in a wider context. To make a reminder more flexible, the user’s current activities or contexts need to be considered. To recognize the user’s current activity, different types of sensors can be used. These sensors are available in Smartphones, which can assist in building a more contextual reminder system. Objectives. To make a reminder context-based, it is important to identify the context, and the user’s activities need to be recognized at a particular moment. Keeping this notion in mind, this research aims to understand the relevant contexts and activities, to identify an effective way to recognize a user’s three different activities (drinking, walking and jogging) using Smartphone sensors (accelerometer and gyroscope), and to propose a model that uses the properties of the recognized activities. Methods. This research combined a survey and interviews with an exploratory Smartphone sensor experiment to recognize users’ activities. An online survey was conducted with 29 participants, and interviews were held in cooperation with Karlskrona Municipality; four elderly people participated in the interviews. For the experiment, data for three different user activities were collected using Smartphone sensors and analyzed to identify the pattern of each activity. Moreover, a model is proposed to exploit the properties of the activity patterns. The performance of the proposed model was evaluated using the machine learning tool WEKA. Results.
The survey and interviews helped us understand the important activities of daily living that should be considered in designing the reminder system, and how and when it should be used. For instance, most of the participants in the survey already use some sort of reminder system, most of them use a Smartphone, and one of the most important tasks they forget is to take their medicine. These findings informed the experiment. From the experiment, different patterns were observed for the three different activities. For walking and jogging, the pattern is distinct; for the drinking activity, on the other hand, the pattern is complex and can sometimes overlap with other activities or become noisy. Conclusions. The survey, interviews and background study provided a set of evidence that a reminder system based on users’ activities is essential in daily life. The large number of Smartphone users motivated this research to use Smartphone sensors to identify users’ activities, with the aim of developing an activity-based reminder system. The study identified the data patterns by applying some simple mathematical calculations to the recorded Smartphone sensor (accelerometer and gyroscope) data. The approach was evaluated with 99% accuracy on the experimental data. The study concluded by proposing a model that uses the properties of the recognized activities and by developing a prototype of a reminder system. This study performed preliminary tests on the model, but further empirical validation and verification of the model are needed.
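The kind of simple calculation used to separate such activities can be sketched as a windowed standard deviation of the accelerometer magnitude: vigorous activities produce high per-window variance, sedentary ones low variance. The synthetic signals, amplitudes and thresholds below are assumptions for illustration, not the recorded data or the model actually evaluated in WEKA.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 50                        # Hz, a typical Smartphone accelerometer rate
t = np.arange(0, 10, 1 / fs)

# Synthetic accelerometer-magnitude signals (m/s^2) for three activities.
def synth(step_hz, amp):
    return 9.81 + amp * np.sin(2 * np.pi * step_hz * t) + 0.2 * rng.standard_normal(len(t))

signals = {"drinking": synth(0.3, 0.3),   # slow arm movement
           "walking":  synth(2.0, 2.0),   # ~2 steps/s, moderate amplitude
           "jogging":  synth(2.8, 6.0)}   # faster, much larger amplitude

def classify(mag, win=fs):
    # feature: mean of the magnitude's standard deviation over 1-s windows
    s = np.mean([mag[i:i + win].std() for i in range(0, len(mag) - win, win)])
    if s < 1.0:
        return "drinking"
    return "walking" if s < 3.0 else "jogging"

labels = {name: classify(sig) for name, sig in signals.items()}
print(labels)
```

A real pipeline would extract several such features per window (from both accelerometer and gyroscope) and feed them to a trained classifier instead of fixed thresholds.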

This thesis proposes a Medium Access Control (MAC) protocol for wireless networks, termed CD-MMAC, that utilizes multiple channels and incorporates opportunistic cooperative diversity dynamically to improve performance. The IEEE 802.11b standard allows the use of the multiple channels available at the physical layer, but its MAC protocol is designed for only a single channel. The proposed protocol utilizes multiple channels using a single interface and incorporates opportunistic cooperative diversity through a cross-layer MAC design. The new protocol leverages the multi-rate capability of IEEE 802.11b and allows wireless nodes far away from the destination node to transmit at a higher rate by using intermediate nodes as relays. The protocol improves network throughput and packet delivery ratio significantly and reduces packet delay. The performance improvement is further evaluated by simulation and analysis.
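The benefit of relaying through an intermediate node can be checked with a back-of-the-envelope calculation: over two hops the packet is transmitted twice, so the end-to-end rate is the harmonic combination of the hop rates. The rates below are from the 802.11b rate set (1 and 11 Mb/s); the scenario itself is illustrative, not a result from the thesis.

```python
# End-to-end rate over sequential hops: total time per bit is the sum of
# the per-hop times, so the effective rate is 1 / sum(1/r_i).
def effective_rate(*hop_rates_mbps):
    return 1 / sum(1 / r for r in hop_rates_mbps)

direct = 1.0                            # far node falls back to 1 Mb/s direct
relayed = effective_rate(11.0, 11.0)    # two fast 11 Mb/s hops via a relay

print(f"direct: {direct} Mb/s, via relay: {relayed} Mb/s")
```

Even with the two-transmission penalty, the relayed path wins whenever both hop rates are high enough, which is the condition an opportunistic cooperative MAC tests before recruiting a relay.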

The transport sector is an essential driver of economic development and growth and, at the same time, one of the biggest contributors to climate change, responsible for almost a quarter of global carbon dioxide emissions. The sector is 95 percent dependent on fossil fuels. International Energy Agency (IEA) scenarios present different mixes of fuels to decrease both the dependence on fossil fuels and emissions, leading to a more sustainable future. The main alternative fuels proposed in the Blue Map scenario, presented in Energy Technology Perspectives 2008, were hydrogen and second-generation ethanol. An assessment of these fuels was made using the tools SLCA (Sustainability Life Cycle Assessment) and SWOT analysis. A Framework for Strategic Sustainable Development (FSSD) was used as the background to guide the assessment and to help structure the results and conclusions. The results aim to alert transport sector stakeholders to the sustainability gaps of the scenario, so that decisions can be made to lead society towards a sustainable future.

Today’s software is more vulnerable to attacks due to increases in complexity, connectivity and extensibility. Securing software is usually considered a post-development activity, and not much importance is given to it during development. However, the losses that organizations have incurred over the years due to security flaws in software have led researchers to find better ways of securing software. In the light of research done by many researchers, this thesis presents how software can be secured by considering security in the different phases of the software development life cycle. A number of security activities needed to build secure software have been identified, and it is shown how these security activities relate to the software development activities of the software development life cycle.

Context. Software testing is one of the crucial phases of the software development life cycle (SDLC). Among the manual testing methods, exploratory testing (ET) uses no predefined test cases to detect defects. Objectives. The main objective of this study is to test the effectiveness of ET in detecting defects at different software test levels. The objective is achieved by formulating hypotheses, which are then tested for acceptance or rejection. Methods. The methods used in this thesis are a literature review and an experiment. The literature review was conducted to gain in-depth knowledge on the topic of ET and to collect data relevant to it. The experiment was performed to test hypotheses specific to the three different testing levels: unit, integration and system. Results. The experimental results showed that ET did not find all the seeded defects at the unit, integration and system levels. The results were analyzed using statistical tests and interpreted with the help of bar graphs. Conclusions. We conclude that more research is required to generalize the benefits of ET at different test levels. In particular, a qualitative study to highlight the factors responsible for the success and failure of ET is desirable. We also encourage a replication of this experiment with subjects having sound technical and domain knowledge.

Fuzzy relation equations are important for investigating optimal solutions of the inverse problem, even though a restrictive condition governs the existence of solutions to such inverse problems. We discuss methods for finding the optimal (maximum and minimum) solutions of the inverse problem for fuzzy relation equations of the form $R \circ Q = T$, where R and Q are in turn treated as the unknown, using different operators (e.g. alpha, sigma). The aim of this study is to select the best project among a set of candidate projects in the field of civil engineering, depending on factors such as capital cost and risk management. To accomplish this aim, two linguistic variables are introduced to deal with the uncertainty that appears in civil engineering problems. Alpha-composition is used to compute the solution of the fuzzy relation equation, and the projects are then evaluated by defuzzifying the obtained results. The usefulness of this approach in the field of civil engineering is demonstrated by an example.
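As a concrete illustration of the alpha-composition mentioned above, the following sketch computes the greatest solution R of $R \circ Q = T$ under max-min composition. The membership values are made up for illustration, and the alpha operator here is the Gödel implication; the thesis may use additional operators (e.g. sigma) not shown.

```python
def alpha(a, b):
    """Goedel implication: alpha(a, b) = 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def maxmin(R, Q):
    """Max-min composition (R o Q)."""
    return [[max(min(R[i][k], Q[k][j]) for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(R))]

def greatest_solution(Q, T):
    """Greatest R satisfying R o Q = T (when the equation is solvable):
    R_hat[i][k] = min over j of alpha(Q[k][j], T[i][j])."""
    return [[min(alpha(Q[k][j], T[i][j]) for j in range(len(T[0])))
             for k in range(len(Q))] for i in range(len(T))]

# Illustrative data: T was produced by composing some R with Q,
# so the inverse problem is solvable and R_hat reproduces T exactly.
Q = [[0.5, 0.7], [0.3, 0.9]]
T = [[0.5, 0.6], [0.3, 0.8]]
R_hat = greatest_solution(Q, T)
```

Checking `maxmin(R_hat, Q) == T` verifies that the recovered relation is indeed a solution.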

Using the eyes as an input modality for different control environments is an area of great interest for enhancing the bandwidth of human-machine interaction and for providing interaction when the use of the hands is not possible. Interface design requirements in such implementations are quite different from conventional application areas. Command-execution and feedback-observation tasks may be performed by the eyes simultaneously. To control the motion of a mobile robot by operator gaze interaction, gaze-contingent regions in the operator interface are used to execute robot movement commands, with different screen areas controlling specific directions. Dwell time is one of the most established techniques for performing an eye click analogous to a mouse click. However, repeated dwell times while switching between gaze-contingent regions and feedback regions decrease the performance of the application. We have developed a dynamic gaze-contingent interface in which gaze-contingent regions are merged with feedback regions dynamically. This technique has two advantages: first, it improves the overall performance of the system by eliminating repeated dwell time; second, it reduces operator fatigue by providing a bigger area to fixate in. The operator can monitor feedback with more ease while sending commands at the same time.
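The dwell-time selection technique discussed above can be sketched as follows. The region coordinates and the 300 ms threshold are illustrative assumptions, not parameters from the thesis; note how leaving the region resets the timer, which is exactly the cost that repeated dwell times impose when switching between command and feedback regions.

```python
def dwell_select(samples, region, dwell_ms):
    """Detect a dwell-time 'eye click'.

    samples: iterable of (t_ms, x, y) gaze points in time order.
    region:  (x0, y0, x1, y1) rectangle of a gaze-contingent area.
    Returns the activation time in ms, or None if no dwell completed.
    """
    x0, y0, x1, y1 = region
    start = None
    for t, x, y in samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            if start is None:
                start = t          # fixation on the region begins
            if t - start >= dwell_ms:
                return t           # dwell threshold reached: click
        else:
            start = None           # leaving the region resets the timer
    return None

region = (0, 0, 100, 100)
steady = [(i * 50, 10, 10) for i in range(11)]   # uninterrupted fixation
```

With an uninterrupted fixation the command fires after 300 ms; a glance away to a feedback region forces the full dwell to be repeated.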

To protect our health and environment from pollution, regulatory agencies in the European Union (EU) and legislation from the U.S. Environmental Protection Agency (EPA) have required that pollutants produced by diesel engines - such as nitrogen oxides (NOx), hydrocarbons (HC) and particulate matter (PM) - be reduced. The key emission reduction and control technologies available for NOx control on diesel engines are a combination of Exhaust Gas Recirculation (EGR) and Selective Catalytic Reduction (SCR). SCR addresses emission reduction through the use of Diesel Exhaust Fluid (DEF), sold under the trade name AdBlue, a solution of 32.5% high-purity urea and 67.5% deionized water. In the hot exhaust gas, AdBlue decomposes into ammonia (NH3), which then reacts on the surface of the catalyst to produce harmless nitrogen (N2) and water (H2O). The highest NOx conversion ratios, while avoiding ammonia slip, are achieved by an efficient SCR and an accurate urea dosing system (UDS); it is therefore critical to model and simulate the UDS in order to analyze and gain a holistic understanding of its dynamic behavior. Modeling and simulating the urea dosing system is the result of a compromise between two opposing trends: on the one hand, one needs enough mathematical modeling to correctly describe the fundamental principles of fluid dynamics, namely that (1) mass is conserved, (2) Newton's second law holds and (3) energy is conserved; on the other hand, the model needs to be as simple as possible in order to express a simple and useful picture of the real system.
A numerical model for the simulation of the urea dosing system is implemented in the GT-Suite® environment. It is a complete UDS model (hydraulic circuit and dosing unit) and stands out for its ease of use and simulation speed. The UDS model has been developed and validated using the Hilite airless dosing system at the ATC Lab as reference. The results provided by the model allow analysis of the UDS pump operation as well as the complete system, showing the trends of important parameters that are difficult to measure, such as viscosity, density and Reynolds number, and giving plenty of useful information for understanding the influence of the main design parameters of the pump, such as volumetric efficiency, speed and flow relations.
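The Reynolds number reported by such a model follows the standard definition Re = ρvd/μ. A minimal sketch is shown below; the AdBlue property values (roughly 1090 kg/m³ density and 1.4 mPa·s viscosity at room temperature) and the line diameter and velocity are assumptions for illustration, not measurements from the thesis.

```python
def reynolds(rho, velocity, diameter, viscosity):
    """Reynolds number Re = rho * v * d / mu (dimensionless).

    rho:       fluid density [kg/m^3]
    velocity:  mean flow velocity [m/s]
    diameter:  characteristic (pipe) diameter [m]
    viscosity: dynamic viscosity [Pa*s]
    """
    return rho * velocity * diameter / viscosity

# Assumed AdBlue properties and an illustrative 2 mm dosing line:
re_fast = reynolds(1090.0, 2.0, 0.002, 0.0014)   # transitional/turbulent
re_slow = reynolds(1090.0, 0.5, 0.002, 0.0014)   # laminar (Re < 2300)
```

The flow regime (laminar below roughly Re ≈ 2300) determines which pressure-loss correlations the hydraulic model should apply.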

Understanding and capturing the game play experiences of players have been of great interest for some time, both in academia and industry. Methods used for eliciting game play experiences have involved the use of observations, biometric data and post-game techniques such as surveys and interviews. This is true for games that are played in fixed settings, such as computer or video games. Pervasive games, however, provide a greater challenge for evaluation, as they typically engage players in outdoor environments, which can mean constant movement and a great deal of the players' motor skills being engaged for several hours or days. In this project I explored a new method for eliciting different aspects of the game play experience of pervasive game players, specifically focusing on emotional states and different qualities of immersion. I have centered this work on self-reporting as a means of reporting these aspects of the game play experience. However, this required an approach to self-reporting that is non-obtrusive, does not take too much of the players' attention from the game activities, and provides ease of use. To understand the challenges in introducing a new method into a gaming experience, I focused my research on understanding experience, which is a subjective concept. Even though there are methods aiming at capturing physiological changes during game play, they do not capture players' interpretations of the gaming situation. By combining this with objective measurements, I was able to gain a comprehensive understanding of the context of use. The resulting designs were two tools, iteratively developed and pre-tested in a tabletop role-playing session before a test run in the pervasive game Interference. From my findings I was able to conclude that self-reporting tools used by players while playing were successful, especially as the data derived from the tools supported post-game interviews. There were, however, challenges regarding the design and functionality, in particular in outdoor environments, that suggest improvements, as well as considerations on the use of self-reporting as an additional method for data collection.

Background. This thesis focuses on the task of historical document semantic segmentation with recurrent neural networks. Document semantic segmentation involves the segmentation of a page into different meaningful regions and is an important prerequisite step of automated document analysis and digitisation with optical character recognition. At the time of writing, convolutional neural network based solutions are the state-of-the-art for analyzing document images while the use of recurrent neural networks in document semantic segmentation has not yet been studied. Considering the nature of a recurrent neural network and the recent success of recurrent neural networks in document image binarization, it should be possible to employ a recurrent neural network for document semantic segmentation and further achieve high performance results.

Objectives. The main objective of this thesis is to investigate if recurrent neural networks are a viable alternative to convolutional neural networks in document semantic segmentation. By using a combination of a convolutional neural network and a recurrent neural network, another objective is also to determine if the performance of the combination can improve upon the existing case of only using the recurrent neural network.

Methods. To investigate the impact of recurrent neural networks in document semantic segmentation, three different recurrent neural network architectures are implemented and trained, and their performance is evaluated with Intersection over Union. Their segmentation results are then compared to those of a convolutional neural network. By performing pre-processing on the training images and multi-class labeling, prediction images are ultimately produced by the employed models.
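The Intersection over Union metric used above can be sketched generically as follows. This is a standard implementation for label maps, not the thesis code; the class labels and arrays are illustrative.

```python
import numpy as np

def class_iou(pred, target, cls):
    """Intersection over Union for one class label in two label maps."""
    p, t = (pred == cls), (target == cls)
    union = np.logical_or(p, t).sum()
    if union == 0:
        return float("nan")          # class absent from both maps
    return np.logical_and(p, t).sum() / union

def mean_iou(pred, target, classes):
    """Mean IoU across a list of class labels."""
    scores = [class_iou(pred, target, c) for c in classes]
    return sum(scores) / len(scores)

# Tiny illustrative 2x2 "segmentations" with classes 0 (background) and 1:
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
```

Per-class IoU penalizes both false positives and false negatives, which is why it is preferred over plain pixel accuracy for segmentation.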

Results. The gathered performance data show a 2.7% performance difference between the best recurrent neural network model and the convolutional neural network. Notably, this recurrent neural network model has a more consistent performance than the convolutional neural network, but comparable performance overall. For the other recurrent neural network architectures, lower performance results are observed, which is connected to the complexity of these models. Furthermore, by analyzing the performance of a model combining a convolutional neural network and a recurrent neural network, it can be seen that the combination performs significantly better, with a 4.9% performance increase compared to using only the recurrent neural network.

Conclusions. This thesis concludes that recurrent neural networks are likely a viable alternative to convolutional neural networks in document semantic segmentation but that further investigation is required. Furthermore, by combining a convolutional neural network with a recurrent neural network it is concluded that the performance of a recurrent neural network model is significantly increased.

In the investigation of important options for cost reduction, offshore outsourcing is found interesting: the current economic downturn, decreasing oil prices and credit crunch are causing intense competition, so this study is an attempt to understand the driving forces for offshore outsourcing in the engineering services industry. This project studies the factors behind the offshore outsourcing practices of engineering services companies in the UK. The scope of the study is limited to industrial engineering services, primarily in the oil and gas industry, so the project also aims to understand the offshore outsourcing practices of the UK engineering services sector. The purpose of the research is to review the available literature on offshore outsourcing and to understand offshore outsourcing situations. The discussion of the findings focuses on why a company needs offshore outsourcing and how different factors help engineering services companies make offshore outsourcing decisions.

Recent advancements in signal and image processing have reduced the time of diagnoses, effort and pressure on the screeners by providing auto diagnostic tools for different diseases. The success rate of these tools greatly depend on the quality of acquired images. Bad image quality can significantly reduce the specificity and the sensitivity which in turn forces screeners back to their tedious job of manual diagnoses. In acquired fundus images, some areas appear to be brighter than the other, that is areas close to the center of the image are always well illuminated, hence appear very bright while areas far from the center are poorly illuminated hence appears to be very dark. Several techniques including the simple thresholding, Naka Rushton (NR) filtering technique and histogram equalization (HE) method have been suggested by various researchers to overcome this problem. However, each of these methods has limitations at their own and hence the need to develop a more robust technique that will provide better performance with greater flexibility. A new method of compensating uneven (irregular) illumination in fundus images termed global-local adaptive histogram equalization using partially-overlapped windows (GLAPOW) is proposed in this paper. The developed algorithm has been tested and the results obtained show superior performance when compared to other known techniques for uneven illumination correction.
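For reference, the baseline global histogram equalization (HE) method mentioned above can be sketched as a cumulative-distribution lookup table. This is a generic 8-bit implementation of plain HE, not the proposed GLAPOW algorithm, whose global-local windowing is beyond this sketch.

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Builds the cumulative distribution of gray levels and maps it
    linearly onto [0, 255], stretching the image contrast.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # first non-empty bin
    lut = (cdf - cdf_min) * 255.0 / (img.size - cdf_min)
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]

# A tiny low-contrast "image" whose levels get stretched to [0, 255]:
img = np.array([[52, 55], [61, 59]], dtype=np.uint8)
out = equalize_hist(img)
```

Because a single global mapping is applied, HE cannot brighten the dark periphery of a fundus image without also distorting the well-lit center, which motivates local, window-based variants.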

Automatic diagnosis and display of diabetic retinopathy from images of the retina, using digital signal and image processing techniques, is presented in this paper. The acquired images undergo pre-processing to equalize the uneven illumination associated with fundus images; this stage also removes noise present in the image. The segmentation stage clusters the image into two distinct classes, while the abnormality detection stage distinguishes between candidate lesions and other information. Methods for the diagnosis of red spots and bleeding and for the detection of vein-artery crossover points have also been developed in this work, using the color information, shape, size and object length-to-breadth ratio contained in the acquired digital fundus image. Furthermore, two graphical user interfaces (GUIs) have been developed during this work: the first is for the collection of lesion data and was used by the ophthalmologist to mark images for the database, while the second automatically diagnoses and displays the results in a user-friendly manner. The algorithm was tested with a separate set of 25 fundus images. The results obtained for microaneurysm and haemorrhage diagnosis show the appropriateness of the method.

The use of vascular intersection aberration as one of the signs for monitoring and diagnosing diabetic retinopathy from retinal fundus images (FIs) has been widely reported in the literature. In this paper, a new hybrid approach called the combined cross-point number (CCN) method, able to detect vascular bifurcation and intersection points in FIs, is proposed. The CCN method makes use of two vascular intersection detection techniques, namely the modified cross-point number (MCN) method and the simple cross-point number (SCN) method. Our proposed approach was tested on images obtained from two different, publicly available fundus image databases. The results show very high precision, accuracy and sensitivity, and a low false rate, in detecting both bifurcation and crossover points compared with both the MCN and SCN methods.
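The cross-point (crossing) number idea underlying the SCN method can be sketched as follows, assuming a binarized, skeletonized vessel map: the eight neighbors of a pixel are visited in circular order, and half the number of 0/1 transitions classifies the pixel (1 endpoint, 2 ordinary vessel segment, 3 bifurcation, 4 or more crossover). This is a generic illustration of the classic crossing-number rule, not the paper's CCN implementation.

```python
def crossing_number(patch):
    """Crossing number of the center pixel of a 3x3 binary patch.

    Neighbors are traversed in circular order; half the number of
    0->1 / 1->0 transitions gives: 1 = endpoint, 2 = vessel segment,
    3 = bifurcation, >= 4 = crossover.
    """
    order = [(0, 1), (0, 2), (1, 2), (2, 2),
             (2, 1), (2, 0), (1, 0), (0, 0)]   # clockwise ring
    vals = [patch[r][c] for r, c in order]
    return sum(abs(vals[i] - vals[(i + 1) % 8]) for i in range(8)) // 2

# A Y-shaped junction (bifurcation) and a straight vessel segment:
y_branch = [[0, 1, 0],
            [1, 1, 1],
            [0, 0, 0]]
line = [[0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]]
```

Scanning every skeleton pixel with this rule yields candidate bifurcation and crossover points, which hybrid methods such as CCN then refine.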

The aim of this thesis is to investigate the possibility of forming high-resolution Synthetic Aperture Radar (SAR) images using the Global Navigation Satellite Systems (GNSS) Galileo, GPS and GLONASS. In particular, the thesis studies the GPS signal and evaluates its properties for the bistatic case. The report builds on the fact that Galileo and GPS are both positioning systems with similar characteristics; the difference is mainly that the Galileo system uses a larger number of satellites and a different modulation scheme to improve the efficiency of the system, resulting in better accuracy. On the topic of GNSS SAR, the report describes modes, resolution, geometry and algorithms. Space-surface bistatic radar is also explained, with two particular cases: parallel and non-parallel paths.

Usability is one of the most important quality attributes in the new generation of software applications and computational devices. Model-View-Controller (MVC), on the other hand, is a well-known software architectural pattern, widely used in its original form or its variations. The relationship between usability and the use of MVC, however, is still unknown. This thesis contributes to this research question with the outcomes of a case study in which a prototype was developed in two versions: one using MVC and another using a widely known object-oriented guideline, the GRASP patterns. Both prototypes were developed from a non-functional prototype with a good level of usability. The prototypes were then compared based on their design and on the usability heuristics proposed by Nielsen. From this study, we discovered that the use of MVC brings more advantages and disadvantages for the usability of the system than those found in the literature review. In general, the relationship between MVC and usability is beneficial, easing the implementation of usability features such as validation of input data, evolutionary consistency, multiple views, informing the result of actions and skipping steps in a process.

Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise as one of the critical resources for high-quality work.

Objectives. We aim at understanding whether software teams in different contexts develop transactive memory systems (TMS) and whether a well-developed TMS leads to performance benefits, as suggested by research conducted in other knowledge-intensive disciplines. Because multiple factors may influence the development of TMS, based on related TMS literature we also focus on task allocation strategies, task characteristics and management decisions regarding project structure, team structure and team composition.

Methods. We use data from two large-scale distributed development companies and 9 teams, including quantitative data collected through a survey and qualitative data from interviews, to measure transactive memory systems and their role in determining team performance. We measure the teams' TMS with a latent variable model. Finally, we use focus group interviews to analyze different organizational practices with respect to team management, viewed as a set of decisions on two aspects: team structure and composition, and task allocation.

Results. Data from two companies and 9 teams are analyzed, and a positive influence of well-developed TMS on team performance is found. We found that in large-scale software development, teams need not only a well-developed internal TMS but also a well-developed and effective external TMS. Furthermore, we identified practices that help or hinder the development of TMS in large-scale projects.

Conclusions. Our findings suggest that teams working in large-scale software development can achieve performance benefits if transactive memory practices within the team are supported with networking practices in the organization.

Corporate social responsibility has been a very topical issue in contemporary times, but an in-depth understanding of this salient concept is questionable for many, including actors in corporate spheres. This can be attributed to ignorance or to the limited research propagating what corporate social responsibility means and the benefits it may bring if properly adhered to by corporations. The fundamental focus of this paper is thus to clarify and enhance the understanding of corporate social responsibility, as well as the extent to which corporations adhere to the ever-increasing demands for social responsibility emanating from stakeholders (consumers, governments, employees, environmental activists, etc.). The extent to which corporate social responsibility is a valid criterion for judging the actions of corporations is also of special interest in this research. The paper adopts a theoretical and empirical approach to provide a better understanding of corporate social responsibility, while a broad-based exploratory case study approach is used to investigate the activities of selected multinationals across the globe. We used a three-category management responsibility technique to bridge our findings to our conclusions: the management style of the multinationals examined falls under the immoral category, associated with socially irresponsible companies that typically do not consider stakeholder interests. The paper unravels the devastating effects of their activities on consumers, the environment, employees, and the local communities whose interests and well-being they should enhance from a corporate social responsibility perspective. The paper concludes with some limitations encountered in this study and suggestions for further research.

The channel characterization of mobile satellite communication, an important and fast-growing arm of wireless communication, plays an important role in transmitting information through a propagation medium from the transmitter to the receiver with a minimum bit error rate, taking into consideration the channel impairments of different geographical locations such as urban, suburban, rural and hilly areas. The information transmitted from a satellite to mobile terminals suffers amplitude attenuation and phase variation caused by multipath fading and the signal shadowing effects of the environment. These channel impairments are commonly described by three fading phenomena, Rayleigh fading, Rician fading and log-normal fading, which characterize signal propagation in different environments. They are mixed in different proportions by different researchers to form models describing particular channels. In this thesis, a general overview of mobile satellite communication is given, including the classification of satellites by orbit, the channel impairments, and the advantages of mobile satellite communication over terrestrial systems. Some of the major existing statistical models used to describe different types of channels are examined, and the best of them, the Lutz model [6], is implemented. Simulating the Lutz model, which describes all possible environments using two states representing non-shadowed (LOS) and shadowed (NLOS) conditions, shows that the BER is predominantly affected by the shadowing factor.
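The two-state idea can be sketched as follows: in the good (LOS) state the envelope is Rician with Rice factor K and unit mean power, while in the bad (NLOS) state it is Rayleigh with a lognormally shadowed mean power. Note that the actual Lutz model drives the state with a Markov chain whose dwell times matter for burst errors; this sketch draws states independently per sample, and all parameter values are illustrative.

```python
import numpy as np

def two_state_envelope(n, shadow_frac=0.25, rice_k=10.0,
                       mu_db=-10.0, sigma_db=3.0, seed=0):
    """Simplified two-state (Lutz-style) fading envelope.

    shadow_frac: time share of the shadowed state
    rice_k:      Rice factor (linear) of the LOS state, unit mean power
    mu_db/sigma_db: lognormal shadowing of the NLOS mean power (dB)
    """
    rng = np.random.default_rng(seed)
    bad = rng.random(n) < shadow_frac            # i.i.d. state draws
    # Good state: Rician envelope with unit mean power.
    s = np.sqrt(rice_k / (rice_k + 1.0))         # LOS component
    sig = np.sqrt(1.0 / (2.0 * (rice_k + 1.0)))  # diffuse component
    good = np.abs(s + sig * (rng.standard_normal(n)
                             + 1j * rng.standard_normal(n)))
    # Bad state: Rayleigh with lognormally distributed mean power.
    mean_pow = 10.0 ** ((mu_db + sigma_db * rng.standard_normal(n)) / 10.0)
    ray = np.abs(rng.standard_normal(n) + 1j * rng.standard_normal(n))
    badv = np.sqrt(mean_pow / 2.0) * ray
    return np.where(bad, badv, good), bad

env, bad = two_state_envelope(20000)
```

The sharp drop in mean received power during shadowed intervals is what makes the shadowing time share the dominant factor in the simulated BER.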

Purpose. The purpose of the study is to scrutinize the practices in the investment decision-making procedures within enterprises applying the evaluation methods. Furthermore, by comparing these procedures with similar later evaluations within the same enterprises, the reliability of the method is appraised. The aim is also to answer the question of to what extent conventional investment calculations are applicable in the assessment of system values. Methods. Out of the supply of enterprises applying the method, I have chosen three objects for my interviews. I have also interviewed two recruitment tool producers (software developers), in order to deepen the knowledge behind the solutions and to grasp the experience achieved. Conclusions. The methods applied to produce the basis for decision making vary from case to case; there are no systematics to point out in this context. Surprisingly, several of the interviewed enterprises have disregarded cost-benefit evaluations in the decision process, basing their choices instead on undefined hopes about the cost-saving effect the system is expected to bring about. Nevertheless, the background information and facts upon which the decisions have been based have been correct. In this respect, a reservation has to be made regarding reference facts, i.e. the starting point in relation to which the system is to be evaluated.

Today more than ever, the computerized trouble ticketing system is becoming a booming information technology system that can make the difference for staying in business in a competitive global telecommunications arena.

This quantitative exploratory survey utilised conveniently selected research subjects to explore the computerized trouble ticketing system and its inherent benefits at Vodafone Ghana Plc. A cross-section of vital data collected with the aid of structured questionnaires was analyzed using a descriptive statistics model.

The study revealed that effective and efficient usage of computerized trouble ticketing systems benefits the company in terms of customer satisfaction, competitive advantage and business intelligence in the competitive telecom arena. Nevertheless, the smooth realization of these inherent benefits is constantly challenged by the complexity of managing the volumes of data generated, an intense era of competition, the high cost of trouble ticketing systems, as well as rapid technological obsolescence in computerized trouble ticketing applications in the telecommunication market.

The study recommends the quick and effective adoption of a differentiation strategy, a cost leadership strategy and customer relationship management, customer-centric measures that can build a sustainable long-term customer relationship creating value for the company as well as for the customers.

Optical free-space communication systems face a major challenge from atmospheric conditions. Signals are received at the ground station using two different types of receivers: coherent detection, and intensity modulation with direct detection (IM/DD). Coherent detection uses a PIN photodetector at the receiver end to attain higher sensitivity; the received carrier signal is mixed with a local oscillator signal and down-converted to an intermediate-frequency signal. IM/DD uses an avalanche photodetector (APD) at the receiver end to attain higher sensitivity; the received carrier signal is directly demodulated at the receiver back into the original signal. Signals received at the ground station from the aircraft are affected by various types of noise, such as shot noise and thermal noise. The noise in coherent detection is not exactly the same as in IM/DD; some noise varies with the electrical circuit noise produced on the receiver side. By deriving the signal-to-noise ratio, the background noise in the desired signal can be calculated. One of the main goals is to derive a probability density function (PDF) of the signal-to-noise ratio (SNR) for each type of receiver in order to check the efficiency of the receivers. The optical signal transmitted from the aircraft suffers data loss due to atmospheric turbulence; this loss is identified using the probability-of-error method, and a bit error rate (BER) derivation is carried out to calculate and identify the data loss in the received signal. The project deals with measuring the efficiency and sensitivity of these two optical receivers and checking their robustness against scintillation (power fades and surges) effects.
In this work, the performance of the coherent receiver and the IM/DD receiver using an APD is compared for different system characteristics. The sensitivity and performance of both receivers are calculated with the same fading vector. The signal-to-noise ratio and bit error rate are theoretically derived and numerically analyzed for the case of atmospheric turbulence. Numerical results predict the performance of both receivers.
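For orientation, textbook AWGN bit-error-rate expressions already show the sensitivity gap between coherent and direct detection. These idealized formulas ignore APD excess noise and the turbulence-conditioned SNR statistics derived in the thesis; the 3 dB penalty of on-off keying (OOK) relative to coherent BPSK is the standard idealized result.

```python
from math import erfc, sqrt

def ber_coherent_bpsk(ebn0):
    """Textbook AWGN BER for coherent BPSK: 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(ebn0))

def ber_ook_direct(ebn0):
    """Textbook AWGN BER for OOK with direct detection: a 3 dB
    average-power penalty relative to coherent BPSK."""
    return 0.5 * erfc(sqrt(ebn0 / 2.0))

# At the same average Eb/N0, the coherent receiver is more sensitive:
snr = 4.0  # illustrative linear Eb/N0
```

In the turbulent channel these conditional BERs would additionally be averaged over the fading (SNR) PDF of each receiver.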

Sensor networks are increasingly deployed, either manually or randomly, to monitor physical environments in applications such as the military, agriculture, medical transport and industry. Among monitoring applications, the most important is the monitoring of critical conditions: sensing information during an emergency state in the physical environment where the sensor network is deployed. In order to respond within a fraction of a second to critical conditions such as explosions, fire and leaking toxic gases, the system must be fast enough. A big challenge for sensor networks is providing a fast, reliable and fault-tolerant channel to the sink (base station) that receives the events during emergency conditions. The main focus of this thesis is to discuss and evaluate the performance of two routing protocols, Ad hoc On-demand Distance Vector (AODV) and Dynamic Source Routing (DSR), for the monitoring of critical conditions, with the help of important metrics such as throughput and end-to-end delay in different scenarios. On the basis of the simulation results, a conclusion is drawn comparing the two routing protocols with respect to end-to-end delay and throughput.
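The two evaluation metrics can be computed from a trace of delivered packets as follows. This is a generic sketch, not the simulator's own trace analysis; the timestamps and packet sizes are illustrative.

```python
def link_metrics(delivered):
    """Average end-to-end delay and throughput from a delivery trace.

    delivered: list of (send_s, recv_s, payload_bytes) tuples for
    packets that actually reached the sink, timestamps in seconds.
    Returns (average delay in s, throughput in bit/s over the trace span).
    """
    delays = [r - s for s, r, _ in delivered]
    avg_delay = sum(delays) / len(delays)
    span = max(r for _, r, _ in delivered) - min(s for s, _, _ in delivered)
    throughput = sum(b for _, _, b in delivered) * 8 / span
    return avg_delay, throughput

# Two 1000-byte packets delivered with 100 ms and 200 ms delays:
trace = [(0.0, 0.1, 1000), (0.5, 0.7, 1000)]
avg_delay, throughput = link_metrics(trace)
```

For emergency monitoring, the average (and tail) end-to-end delay is usually the decisive metric, since an alarm that arrives late is as bad as one that is lost.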

The advancements in broadband and mobile communication have given many privileges to subscribers, for instance high-speed data connectivity and voice and video applications at economical rates with good quality of service. WiMAX is an eminent technology that provides broadband and IP connectivity in the “last mile” scenario. It offers both line-of-sight and non-line-of-sight wireless communication. Orthogonal frequency division multiple access (OFDMA) is used by WiMAX on its physical layer, together with an adaptive modulation technique and a cyclic prefix that adds additional samples at the transmitter end. The signal is transmitted through the channel and received at the receiver end, where these additional samples are removed in order to minimize inter-symbol interference, improve the bit error rate and reduce the power spectrum. In our research work, we investigated the physical-layer performance on the basis of bit error rate, signal-to-noise ratio, power spectral density and error probability. These parameters are discussed in two different models: the first is a simple OFDM communication model without the cyclic prefix, while the second model includes the cyclic prefix.
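The role of the cyclic prefix can be illustrated with a minimal OFDM round trip: when the prefix is at least as long as the channel memory, the linear convolution with the multipath channel becomes circular over the symbol body, so a one-tap equalizer per subcarrier recovers the transmitted symbols exactly. This is a generic sketch; the subcarrier count, prefix length and channel taps are illustrative, not WiMAX parameters.

```python
import numpy as np

def ofdm_roundtrip(h, n=64, cp=16, seed=1):
    """Transmit one OFDM symbol with a cyclic prefix through a
    multipath channel h and equalize it per subcarrier."""
    rng = np.random.default_rng(seed)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    syms = rng.choice(qpsk, size=n)              # frequency-domain symbols
    tx = np.fft.ifft(syms)                       # time-domain OFDM symbol
    tx_cp = np.concatenate([tx[-cp:], tx])       # prepend cyclic prefix
    rx = np.convolve(tx_cp, h)[:n + cp]          # multipath channel
    rx_body = rx[cp:cp + n]                      # remove cyclic prefix
    est = np.fft.fft(rx_body) / np.fft.fft(h, n) # one-tap equalizer
    return syms, est

h = np.array([0.9, 0.3, 0.1])                    # 3-tap channel, memory 2
syms, est = ofdm_roundtrip(h)
```

Dropping the prefix (the first model in the text) makes the convolution linear rather than circular, and the resulting inter-symbol interference is what degrades the bit error rate.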

Annually, road accidents cause more than 1.2 million deaths, 50 million injuries, and US$ 518 billion of economic cost globally. About 90% of the accidents occur due to human errors such as poor awareness, distraction, drowsiness, low training and fatigue. These human errors can be minimized by using an advanced driver assistance system (ADAS), which actively monitors the driving environment and alerts the driver to forthcoming danger; examples include adaptive cruise control, blind spot detection, parking assistance, forward collision warning, lane departure warning, driver drowsiness detection and traffic sign recognition. Unfortunately, these systems are provided only with modern luxury cars, because they are very expensive due to the numerous sensors employed. Therefore, camera-based ADAS are being seen as an alternative, because a camera has a much lower cost, higher availability, can be used for multiple applications and can be integrated with other systems. Aiming at developing a camera-based ADAS, we performed an ethnographic study of drivers to find what information about the surroundings could help drivers avoid accidents. Our study shows that information on the speed, distance, relative position, direction, and size and type of nearby vehicles and other objects would be useful for drivers, and sufficient for implementing most ADAS functions. After considering available technologies such as radar, sonar, lidar, GPS and video-based analysis, we conclude that video-based analysis is the most suitable technology, providing all the essential support required for implementing ADAS functions at very low cost. Finally, we propose a Smart-Dashboard system that puts technologies such as a camera, a digital image processor and a thin display into a smart system offering all advanced driver assistance functions.
A basic prototype, demonstrating three functions only, is implemented in order to show that a full-fledged camera-based ADAS can be implemented using MATLAB.

Road accidents cause great loss of human lives and assets. Most accidents occur due to human errors, such as poor awareness, distraction, drowsiness, inadequate training, and fatigue. An advanced driver assistance system (ADAS) can reduce these human errors by keeping an eye on the driving environment and warning the driver of upcoming danger. However, these systems come only with modern luxury cars because of their high cost and the complexity of the several sensors employed. Therefore, camera-based ADAS are becoming an option due to their lower cost, higher availability, numerous applications, and ability to combine with other systems. Aiming at designing a camera-based ADAS, we conducted an ethnographic study of drivers to learn what information about the driving environment would be useful in preventing accidents. It turned out that information on the speed, distance, relative position, direction, and size and type of nearby objects would be useful, and enough for implementing most of the ADAS functions. Several camera-based techniques are available for capturing the required information. We propose a novel design of an integrated camera-based ADAS that puts technologies (five ordinary CMOS image sensors, a digital image processor, and a thin display) into a smart system offering a dozen advanced driver assistance functions. A basic prototype is also implemented using MATLAB. Our design and the prototype testify that all the required technologies are now available for implementing a full-fledged camera-based ADAS.

Change is essential for a software system to survive longer in the market. Change is implemented through maintenance, and successful software maintenance gives rise to a new software release that is a refined form of the previous one. This phenomenon is known as software evolution. To transform software into a better form, maintainers have to become familiar with particular aspects of the software, namely its source code and documentation. Due to the poor quality of documentation, maintainers often have to rely on the source code alone, so a thorough understanding of source code is necessary for effective change implementation. This study explores the code comprehension problems discussed in the literature and prioritizes them according to the severity levels assigned by maintenance personnel in industry. Along with prioritizing the problems, the study also presents the methodologies suggested by maintenance personnel for improving code comprehension. Considering these suggestions during development might help shorten maintenance and evolution time.

In this thesis, the concept of virtual reality is elaborated in the context of games, industrial design, and manufacturing. The main purpose of this master's thesis is to create a virtual environment for games that is close to reality and suited to human nature, through aspects such as a better interface, simulation, lighting, shadow effects, and their types. The importance of these aspects for a realistic virtual environment is demonstrated through a comparison between two environments, desktop and CAVE, on a flight simulation program.

Debugging is an important and critical phase of the software development process. Software debugging is a serious and demanding practice in functional test-driven development. Software vendors encourage their programmers to practice test-driven development during the initial development phases to capture bug traces and the code coverage affected by diagnosed bugs. Application source code with fewer threats of bug existence or faulty executions is assumed to be highly efficient and stable, especially for real-time software products. Because the development of software projects relies on a great number of users and testers, an effective fault localization technique is required: one that can highlight the most critical areas of the software system at the code as well as the modular level, so that a debugging algorithm can be used to debug the application source code. Nowadays many software systems, simple and complex, are connected to open bug repositories to localize bugs. Any inconsistency or imperfection in an early development phase of a software product results in a less efficient and less reliable system. Statistical debugging of program source code for visualization of faults is an important and efficient way to select and rank suspicious lines of code. This research provides guidelines for practicing statistical debugging on programs coded in the Ruby programming language, and presents the statistical debugging techniques available for dynamic programming languages. Firstly, statistical debugging techniques were thoroughly examined, along with the different predicate-based approaches followed in previous work in the subject area. Secondly, a new process of statistical debugging for programs coded in Ruby is introduced, based on generating dynamic predicates.
Results were analyzed by instrumenting multiple programs written in Ruby with different complexity levels. The analysis of the experimentation performed on the candidate programs shows that SOBER is more efficient and accurate in bug identification than the Cause Isolation Scheme. It is concluded that, despite extensive research in the field of statistical debugging and fault localization, it is not possible to identify the majority of bugs. Moreover, SOBER and the Cause Isolation Scheme are found to be the two most mature and effective statistical debugging algorithms for bug identification within software source code.
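The core idea behind predicate-based statistical debugging can be sketched as follows. This is a highly simplified illustration (written in Python rather than Ruby): each run records how often each instrumented predicate evaluated to true, and predicates whose evaluation bias differs most between failing and passing runs rank as most suspicious. The real SOBER algorithm compares the full bias distributions statistically; this sketch uses only a mean-difference score, and all predicate names and counts below are hypothetical.

```python
def evaluation_bias(true_count, total_count):
    """Fraction of this run's evaluations in which the predicate held."""
    return true_count / total_count if total_count else 0.0

def rank_predicates(runs):
    """runs: list of (counts, passed); counts maps a predicate name to a
    (true_count, total_count) pair observed in one program run."""
    names = {p for counts, _ in runs for p in counts}
    scores = {}
    for p in names:
        passing = [evaluation_bias(*c[p]) for c, ok in runs if ok and p in c]
        failing = [evaluation_bias(*c[p]) for c, ok in runs if not ok and p in c]
        if passing and failing:
            # Suspiciousness: how far the mean bias shifts in failing runs
            scores[p] = abs(sum(failing) / len(failing) - sum(passing) / len(passing))
        else:
            scores[p] = 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# A predicate that holds far more often in failing runs floats to the top
runs = [
    ({"x > 0": (1, 10), "y.nil?": (0, 5)}, True),   # passing run
    ({"x > 0": (2, 10), "y.nil?": (0, 5)}, True),   # passing run
    ({"x > 0": (9, 10), "y.nil?": (0, 5)}, False),  # failing run
]
print(rank_predicates(runs)[0][0])  # x > 0
```

The ranked list points the maintainer at the code locations guarding the most suspicious predicates, which is the "select and rank the suspicious lines of code" step described above.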

Synthetic Aperture Radar (SAR) image processing involves a two-dimensional (2-D) Fourier Transform (FT), and the spectrum shape introduces high-intensity sidelobes that may severely distort the image. Apodization techniques can decrease the sidelobe level while preserving the image resolution. However, in ultra-wideband (UWB) SAR imaging, both orthogonal and non-orthogonal sidelobes have to be reduced. In this paper, a new linear window function is presented, based on an analysis of different linear and non-linear apodization techniques. This new linear method controls both orthogonal and non-orthogonal sidelobes better than other conventional window functions. The method has been applied to and verified on a real SAR image.
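The trade-off that linear apodization makes can be seen in one dimension. The sketch below is illustrative only (it uses a standard Hanning window, not the new window function of this paper, and the sizes and the sidelobe-measurement helper are assumptions): weighting a flat spectrum lowers the peak sidelobe of the impulse response at the cost of a wider mainlobe.

```python
import numpy as np

N, pad = 64, 1024                  # spectrum samples, zero-padded IPR length (assumed)
spectrum = np.ones(N)              # idealised flat spectrum support

def peak_sidelobe_db(spec):
    """Peak sidelobe level (dB) of the impulse response of `spec`."""
    ir = np.abs(np.fft.fft(spec, pad))
    ir /= ir.max()
    i = 1
    while ir[i + 1] < ir[i]:       # walk down the mainlobe to the first null
        i += 1
    return 20 * np.log10(ir[i:pad // 2].max())

psl_uniform = peak_sidelobe_db(spectrum)                  # roughly -13 dB
psl_hanning = peak_sidelobe_db(spectrum * np.hanning(N))  # roughly -31 dB
print(psl_uniform, psl_hanning)
```

The unweighted spectrum gives the familiar sinc response with sidelobes near -13 dB; Hanning weighting pushes them to roughly -31 dB while widening the mainlobe, which is exactly the sidelobe-versus-resolution trade-off a better window function aims to improve.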

Ultra-wideband (UWB) Synthetic Aperture Radar (SAR) holds huge possibilities for sensing both terrestrial and celestial objects in excellent detail, which assists science and technology. SAR systems with large antenna beamwidth, large signal bandwidth, and low operating frequencies in the VHF/UHF region are becoming gradually more popular because of their rising number of applications in ground-penetrating radar (GPR) and foliage penetration radar (FOPEN). Apodization techniques for UWB SAR imaging have attracted significant interest in recent years for sidelobe suppression in SAR images. These techniques split into two groups: linear apodization and non-linear apodization. Linear apodization applies amplitude weighting functions in the frequency domain prior to the final inverse Fourier transform required to properly focus the SAR image. Both linear and non-linear techniques can be used to suppress the sidelobe level. Frequently used linear weighting functions are Hanning, Hamming, and Blackman. Linear techniques can control the sidelobe level, but the image resolution degrades simultaneously. Non-linear techniques, such as Spatially Variant Apodization (SVA), Complex Dual Apodization, and Dual Apodization, can suppress sidelobes and preserve the spatial resolution concurrently. However, for these methods it can be hard to understand how the output signal relates to the input signal, and the phase information of the image is lost. The main focus of this thesis is on apodization techniques: to propose a new weighting function for sidelobe apodization and to investigate it on real SAR images. In this thesis, we also study the impulse response (IPR) function for UWB SAR image processing. A two-dimensional sinc function is used as the impulse response function for a narrowband (NB) SAR system; it can be obtained from a two-dimensional Fourier transform of a SAR image.
This rectangular approximation is reasonable for narrowband and narrow-beam SAR, but for large bandwidths and wide integration angles the approximation of the UWB SAR spectrum is not valid and can give erroneous SAR image quality measurements. To obtain precise image quality measurements, SAR images need to be generated for a range of different integration angles, as UWB SAR systems are associated with large integration angles to maintain azimuth focusing. So, in this work, the choice of optimum windows has been investigated at different integration angles in order to see whether there are large differences between NB SAR apodization and UWB SAR apodization.
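The 2-D sinc approximation discussed above rests on the NB spectrum support being approximately rectangular, in which case the 2-D impulse response separates exactly into two 1-D sinc-like responses. A small numerical check of that separability, with illustrative sizes (a UWB spectrum support is not rectangular, so this factorisation is precisely what breaks down at wide integration angles):

```python
import numpy as np

Nr, Na, pad = 32, 48, 256              # range/azimuth support, zero-padding (assumed)
spectrum = np.ones((Nr, Na))           # idealised rectangular NB spectrum support

ipr_2d = np.fft.fft2(spectrum, s=(pad, pad))
ipr_r = np.fft.fft(np.ones(Nr), pad)   # 1-D impulse response in range
ipr_a = np.fft.fft(np.ones(Na), pad)   # 1-D impulse response in azimuth

# The 2-D IPR factorises exactly into the two 1-D (periodic-sinc) responses
print(np.allclose(ipr_2d, np.outer(ipr_r, ipr_a)))  # True
```

Because the factorisation holds only for a rectangular support, image quality metrics derived from the 2-D sinc model must be recomputed per integration angle for UWB systems, as argued above.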

The study aimed to investigate and critically analyze claims management, an ethical issue in insurance companies in Nigeria, to find out whether these insurance companies recognize it as an ethical issue and how they handle insureds' claims. A qualitative research method was used in carrying out this study; data was sourced through interviews and through secondary data using literature from books, journals, articles, and websites. The researchers used purposive sampling to select some top insurance companies in Nigeria; in these companies, primarily personnel working in the claims department were interviewed, along with sales agents from two of the companies. Data was also sourced from two insurance broking firms in Nigeria by interviewing their top personnel, and some members of the insuring public, with and without insurance policies, were interviewed. The analytical strategy adopted in this research work was to rely on theoretical propositions, making use of Jones's (1991) moral intensity model. Based on the analysis of the data collected during the interviews, the study revealed that insurance personnel in claims administration who take decisions on insureds' claims in Nigeria recognize the moral dilemma in their work and discharge this responsibility professionally and ethically, sticking to the rules of the business. Also, the characteristics that constitute the moral intensity model (proximity, social context, probability of effect, concentration of effect, and magnitude of consequence) offered by Jones (1991) influence the moral decision-making process and moral behavior of claims personnel in Nigerian insurance companies. But due to some challenges faced by these personnel in discharging their duty, and some lapses on their side and on the insureds', there have always been complaints about claims.
However, they acknowledge that no one is perfect and are therefore open to feedback from their clients on how they feel about their claims, which they look into, making necessary amendments where needed. This study concludes with a proposition for future researchers: to look into how the challenges encountered by personnel managing insureds' claims in insurance companies in Nigeria can be dealt with, and to find out how these insurance companies can raise the awareness of the insuring public and help them understand the terms and conditions of insurance services.