Proceedings from the Journal on Computing (JoC) Vol.1 August 2010

Academic Conferences

By : Global Science & Technology Forum

Date : 2010

Location : United States / New York

PDF, 182 pages

Description :

The Global Science and Technology Forum (GSTF) presents peer-reviewed scholarly articles rigorously selected from conferences. These articles are the end result of scholars’ tremendous effort and creativeness in exploring the theory, application, and social implications of diverse frontier research areas in science and technology.

Abstracts from the papers in the Journal on Computing (JoC) Vol.1 August 2010 are listed below.

1. The Design and Implementation of a Testbed for Comparative Game AI Studies

Hollie Boudreaux, Jim Etheredge, Ashok Kumar

An essential component of realism in video games is the behavior exhibited by the non-player character (NPC) agents in the game. Most development efforts employ a single artificial intelligence (AI) method to determine NPC agent behavior during gameplay. This paper describes an NPC AI testbed under development which will allow a variety of AI methods to be compared under simulated gameplay conditions. Two squads of NPC agents are pitted against each other in a game scenario. Multiple games using the same starting AI assignments form an epoch. The testbed allows for the testing of a variety of AI methods in three dimensions. Individual agents can be assigned different AI methods. Individual agents can use different AI methods at different times during the game. And finally, the AI used by one type of agent can be made to differ from the AI used by another agent type. Extensive data is collected for all agent actions in all games played in an epoch. This data forms the basis of the comparative analysis.
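
The epoch structure described above can be sketched as follows. This is a rough illustration only: the agent names, AI method labels, and logging format are invented, not the testbed's actual design.

```python
import random

# Hypothetical sketch of the testbed's epoch structure: two squads of NPC
# agents, each agent assigned an AI method, with per-action data logged for
# later comparative analysis.
AI_METHODS = ["fsm", "fuzzy", "neural"]

def run_game(squad_a, squad_b, rng):
    """Simulate one game; return a log of (agent, ai_method, action) records."""
    log = []
    for agent, ai in list(squad_a.items()) + list(squad_b.items()):
        # A real testbed would run the full game loop; here we log one dummy action.
        log.append((agent, ai, rng.choice(["move", "fire", "hide"])))
    return log

def run_epoch(squad_a, squad_b, games=10, seed=0):
    """An epoch is multiple games played with the same starting AI assignments."""
    rng = random.Random(seed)
    return [run_game(squad_a, squad_b, rng) for _ in range(games)]

squad_a = {"a1": "fsm", "a2": "fuzzy"}
squad_b = {"b1": "neural", "b2": "fsm"}
epoch = run_epoch(squad_a, squad_b, games=5)
```

The collected logs would then feed the comparative analysis across AI assignments.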

2. Continuous and Reinforcement Learning Methods for First-Person Shooter Games

Tony C. Smith and Jonathan Miles

Machine learning is now widely studied as the basis for artificial intelligence systems within computer games. Most existing work focuses on methods for learning static expert systems, typically emphasizing candidate selection. This paper extends this work by exploring the use of continuous and reinforcement learning techniques to develop fully-adaptive game AI for first-person shooter bots. We begin by outlining a framework for learning static control models for tanks within the game BZFlag, then extend that framework using continuous learning techniques that allow computer controlled tanks to adapt to the game style of other players, extending overall playability by thwarting attempts to infer the underlying AI. We further show how reinforcement learning can be used to create bots that learn how to play solely through trial and error, providing game engineers with a practical means to produce large numbers of bots, each with individual intelligences and unique behaviours, all from a single initial AI model.
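
As a rough illustration of trial-and-error learning of the kind described above, the following is a minimal tabular Q-learning sketch on a toy "reach the target" task. It is not the authors' framework: the task, constants, and reward values are all invented for illustration.

```python
import random

# Minimal tabular Q-learning sketch: a bot learns solely by trial and error
# on a toy 1-D task where the goal is the rightmost cell.
ACTIONS = [-1, +1]          # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def train(episodes=500, size=6, seed=1):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(size) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != size - 1:
            # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), size - 1)
            r = 1.0 if s2 == size - 1 else -0.01   # reward only at the goal
            best = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best - q[(s, a)])
            s = s2
    return q

q = train()
# The greedy policy learned from experience alone: which way to step in each state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(5)}
```

Seeding many bots with different exploration seeds is one simple way a single initial model can yield individually behaving agents.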

‘Games with a purpose’ is a paradigm in which games are designed to computationally capture the essence of the underlying collective human conscience or common sense that plays a major role in decision-making. This human computing method ensures the spontaneous participation of players who, as a byproduct of playing, provide useful data that is impossible to generate computationally and extremely difficult to collect through extensive surveys. In this paper we describe a game that allows us to collect data on human perception of character body shapes. The paper describes the experimental setup, related game design constraints, art creation, and data analysis. In our interactive role-playing detective game titled Villain Ville, players are asked to characterize different versions of full body color portraits of three villain characters. They are later asked to correctly match their character-trait ratings to a set of characters represented only by outlines of primitive vector shapes.

By transferring human intelligence tasks into core game-play mechanics, we have successfully managed to collect data from motivated players. Preliminary analysis of game data generated by 50 secondary school students shows convergence to some common perceptual associations between role, physicality and personality. We hope to harness this game to discover perceptions of a wide variety of body shapes and build an intelligent shape trait-role model, with applications in tutored drawing, procedural character geometry creation and intelligent retrieval.

This paper shows how college students without prior experience in video game design can create an interesting video game. Video game creation is a task that requires weeks if not months of dedication and perseverance to complete. However, with Alice, a group of three sophomore students who had never designed a game could create a full-fledged video game from given specifications. Alice is interactive 3D graphics animation software which is well-tried and proven to be an enjoyable learning environment. At the start of this project, students are given guidelines that describe the expected outcomes.

With minimal supervision, a working program that matches the guidelines was accomplished in three days. In an additional two days, students enhanced the quality with better graphics design and music. With this experience, 3D graphics interactive animation software like Alice is demonstrated to be a useful teaching tool for academic courses in game development and design. This paper not only discusses how the video game was created, but also speaks of the difficulties the team overcame easily with Alice.

In the following we discuss a cost effective immersive gaming environment and its implementation in Blender, an open source game engine. This extends traditional approaches to immersive gaming, which tend to concentrate on multiple flat screens, sometimes surrounding the player, or cylindrical [2] displays. In the former there are unnatural gaps between each display due to screen framing; in both cases they rarely cover a 180-degree horizontal field of view and are even less likely to cover the vertical field of view required to fully engage the human visual system. The solution introduced here concentrates on seamless hemispherical displays, planetariums in general and the iDome as a specific case study. The methodology discussed is equally appropriate to other realtime 3D environments that are available in source code form or have a suitably powerful means of modifying the rendering pipeline.

6. Towards Teaching Secondary School Physics in an Immersive 3D Game Environment

Bill Rogers, The University of Waikato, Hamilton, New Zealand

Dacre Denny, The University of Waikato, Hamilton, New Zealand

Jonathan Stichbury, The University of Waikato, Hamilton, New Zealand

Laboratory exercises are an important part of secondary school physics classes and make a significant contribution to student learning. Virtual laboratories have the advantage of allowing experiments that might be too dangerous or too costly in the real world. We present Gary’s Lab, an experimental immersive 3D laboratory environment using computer game technology. Our system allows students considerable freedom in constructing apparatus and in running qualitative and quantitative experiments using that apparatus. We argue that the process of constructing experiments in interesting contexts might be expected to help students engage with their lessons, focusing their attention on the apparatus and the methods of measurement used.

This paper presents the development of a real-time perception enhanced virtual environment for maritime applications which simulates real-time six degrees of freedom ship motions (pitch, heave, roll, surge, sway, and yaw) under user interactions, environmental conditions and various threat scenarios. This simulation system consists of a reliable ship motion prediction system and a perception enhanced immersive virtual environment with greater ecological validity. This virtual environment supports multiple-display viewing, which can greatly enhance user perception, and we developed the ecological environment for a strong sensation of immersion. In this virtual environment it is possible to incorporate real world ships, geographical sceneries, several environmental conditions and a wide range of visibility and illumination effects. This system can be used for both entertainment and educational applications such as console-level computer games, teaching & learning applications and various virtual reality applications. In particular, this framework can be used to create immersive multi-user environments.

8. Multi-view Rendering using GPU for 3-D Displays

François de Sorbier, Vincent Nozick, and Hideo Saito

Creating computer graphics based content for stereoscopic and auto-stereoscopic displays requires rendering a scene several times from slightly different viewpoints. In that case, maintaining real-time rendering can be a difficult goal if the geometry reaches thousands of triangles. However, similarities exist among the vertices belonging to the different views, such as the texture, some transformations, or parts of the lighting. In this paper, we present a single pass algorithm using the GPU that speeds up the rendering of stereoscopic and multi-view images. The geometry is duplicated and transformed for the new viewpoints using a shader program, which avoids redundant operations on vertices.

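
The per-view duplication can be illustrated on the CPU side as follows. This is a hedged sketch only: a shader would perform these transforms on the GPU, and the eye-separation value and translation-only camera model here are invented simplifications, not the paper's algorithm.

```python
# Sketch of the per-view transform applied when duplicating shared geometry
# for N slightly offset viewpoints of a multi-view display. Camera offsets
# along the x-axis approximate eye separation; all values are hypothetical.
def view_offsets(n_views, eye_separation=0.065):
    """Symmetric horizontal camera offsets for n_views viewpoints."""
    mid = (n_views - 1) / 2.0
    return [(i - mid) * eye_separation for i in range(n_views)]

def transform_vertex(vertex, offset):
    """Shift a vertex into one view's camera space (translation only)."""
    x, y, z = vertex
    return (x - offset, y, z)

def duplicate_geometry(vertices, n_views):
    """One shared vertex list in, n_views transformed copies out."""
    return [[transform_vertex(v, o) for v in vertices]
            for o in view_offsets(n_views)]

views = duplicate_geometry([(0.0, 0.0, -1.0)], 4)
```

Doing this duplication in a single pass is what avoids re-submitting and re-processing the shared geometry once per view.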
9. ToyBox Futuristi: An Expression of Futurism via Digital Puppetry

Goutham Dindukurthi, Designer, Programmer, Pittsburgh, PA

Cory Garfin, Designer, Co-Producer, Pittsburgh, PA

Carlos Hurtado, Designer, Programmer, Pittsburgh, PA

EJ Lee, Artist, Pittsburgh, PA

Matt McLean, Animator, Co-Producer, Pittsburgh, PA

Kana Otaki, Artist, Pittsburgh, PA

Francisco Souki, Designer, Programmer, Pittsburgh, PA

ToyBox Futuristi is a digital puppeteering toolkit created by a student team from Carnegie Mellon University’s Entertainment Technology Center (known from this point forward as ETC). The team took its inspiration from the work of Italian Futurist Fortunato Depero, in particular Balli Plastici, the ‘plastic dance’ he created in 1918. Depero’s marionettes encapsulate the Futurist ideal of machinery striving to break free of human control, a theme expanded upon by the use of contemporary technologies. The project was driven by two deliverables: the digital re-imagining of Balli Plastici, a version of which was shown at the Performa 09 Arts Festival in New York City, and the development of ToyBox Futuristi, the puppetry software that enables users to create their own versions of the ballet. This paper will focus on the creation of ToyBox Futuristi, including how the creative team took inspiration from the work of Depero and the Futurists while simultaneously establishing a sense of style and functionality adapted to the digital age.

English language, Mathematics and Science for life are mandatory subjects that Thai students must pass to finish their primary school studies. Given the unsatisfactory results of the annual assessment, many students fail these subjects every year. This paper therefore proposes an educational computer game to enhance learning in the English language, Mathematics and Science subjects. The proposed game integrates the concept of collaborative learning to promote better understanding of the content and familiarize students with teamwork, while still providing players with joy and challenge. The proposed game is designed as a multi-player online game. All players compete with each other to be the leader and conduct the game, with help from team members, to achieve the goal. The developed game is evaluated on two aspects: learning efficiency and student satisfaction.

The empirical study is conducted with 100 students from 3 different primary schools in Chiang Rai, Thailand. These students are divided into two groups: one playing the game individually and the other playing collaboratively. The first group has 25 students, while the second consists of 15 teams of 5 students each. The results reveal that the students playing the game collaboratively achieve higher learning efficiency than the students playing individually. Moreover, the collaborative game received a ‘Good’ satisfaction level from the students.

Quality of service in game servers with excessive numbers of users is often maintained by increasing the number of game servers. While this method is straightforward, it requires investment in new machines without fully utilizing the resources available in the current machines. In this paper, we argue that a graphics processor (GPU) working in parallel with the local central processor (CPU) inside a machine can be a good candidate for reducing the workload before attempting to distribute it to other machines. Through an empirical study, we investigate to what extent the GPU can benefit different types of online game servers. As a result, we can suggest how the GPU should be involved so that the performance of game services can be improved. We believe that this result can benefit online game developers who want to improve the performance of their applications without requiring extra resources.

12. Using a Game Engine to Integrate Experimental, Field, and Simulation Data for Science Education: You Are the Scientist!

The purpose of this project is to use a game engine to integrate geo-referenced research data, whether experimental or simulated, and present it interactively to the user. Geo-referenced means that every image, video, or sound file, every pressure map, and every simulated temperature chart is attached to a specific point on a map or body. These data may also be time referenced, so that different data sets may be available at the same location for different times of the day or seasons of the year. The target users for the interactive applications are high-school and college students, who can then conduct their own “experiments” or “explorations” as a way to get exposed to the problems and methodologies of science and research. We use two example projects to illustrate the approach.
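
A minimal sketch of the geo-referenced, time-referenced data store described above: every asset is keyed by a map location plus a time reference, so the engine can look up what to present at a given point. The keys, coordinates, and asset names are illustrative assumptions, not the project's actual data model.

```python
# Toy geo-referenced asset store: assets (images, videos, pressure maps, ...)
# are attached to a (location, time_ref) key on the map.
def add_asset(store, location, time_ref, asset):
    """Attach an asset to a map point for a given time of day or season."""
    store.setdefault((location, time_ref), []).append(asset)

def assets_at(store, location, time_ref):
    """Everything attached to this map point for this time reference."""
    return store.get((location, time_ref), [])

store = {}
add_asset(store, (41.2, -111.9), "summer", "temperature_chart.png")
add_asset(store, (41.2, -111.9), "winter", "snowpack_video.mp4")
add_asset(store, (41.2, -111.9), "summer", "stream_audio.wav")
```

A game engine would walk the player to a location and query the store for that location and the currently selected season.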

13. Sustainable Business Strategies and PESTEL Framework

Dr Tomayess Issa, Curtin University of Technology, Perth WA

A/Prof. Vanessa Chang, Curtin University of Technology, Perth WA

Dr Theodora Issa, Curtin University of Technology, Perth WA

This paper aims to provide a brief review of cloud computing, followed by an analysis of the cloud computing environment using the PESTEL framework. The future implications and limitations of adopting cloud computing as an effective eco-friendly strategy to reduce carbon footprint are also discussed. The paper concludes with a recommendation to guide researchers in further examining this phenomenon.

Organizations today face tough economic times, especially following the recent global financial crisis and the evidence of catastrophic climate change. International and local businesses find themselves compelled to review their strategies. They need to consider their organizational expenses and priorities and to think strategically about how best to save. Traditionally, the Information Technology (IT) department is one area that would be affected negatively in such a review. Yet continuing to fund strategic technologies during an economic downturn is vital to organizations.

It is predicted that in coming years IT resources will only be available online. More and more organizations are looking at operating smarter businesses by investigating technologies such as cloud computing, virtualization and green IT to find ways to cut costs and increase efficiencies.

Cloud computing is widely associated with major capital investment in mega data centres, housing expensive blade servers and storage area networks. In this paper we argue that a modular approach to building local or regional data centres using commodity hardware and open source software can produce a cost effective solution that better addresses the goals of cloud computing, and provides a scalable architecture that meets the service requirements of a high quality data centre.

In support of this goal, we provide data that supports three research hypotheses:

1. that central processor unit (CPU) resources are not normally limiting;
2. that disk I/O transactions (TPS) are more often limiting, but this can be mitigated by maximizing the TPS-CPU ratio;
3. that customer CPU loads are generally static and small.

Our results indicate that the modular, commodity hardware based architecture is near optimal. This is a very significant result, as it opens the door to alternative business models for the provision of data centres that significantly reduce the need for major up-front capital investment.
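
The interplay of hypotheses 1-3 can be illustrated with a toy bottleneck calculation. This is a hedged sketch under invented capacity figures, not the paper's measurement methodology: whichever resource is closer to saturation for a workload is the limiting one.

```python
# Illustration of hypothesis 2: for a workload with a given CPU fraction and
# disk transaction rate, the limiting resource is the one that saturates
# first. All capacity figures below are hypothetical.
def limiting_resource(cpu_demand, tps_demand, cpu_capacity=1.0, tps_capacity=300):
    """Return which resource saturates first for a given workload."""
    cpu_util = cpu_demand / cpu_capacity
    tps_util = tps_demand / tps_capacity
    return "disk I/O (TPS)" if tps_util > cpu_util else "CPU"

# A small, static CPU load (hypothesis 3) paired with a heavy transaction
# rate is disk-limited; raising the TPS-CPU ratio of the node mitigates it.
bottleneck = limiting_resource(cpu_demand=0.15, tps_demand=250)
```

Under this toy model, adding disk capacity per unit CPU (maximizing the TPS-CPU ratio) moves the bottleneck away from disk I/O, matching hypothesis 2.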

In the current era of a competitive business world and stringent market share and revenue sustenance challenges, organizations tend to focus more on their core competencies than on the functional areas that support the business. However, traditionally this has not been possible in the IT management area, because the technologies and their underlying infrastructures are significantly complex, requiring dedicated and sustained in-house efforts to maintain the IT systems that enable core business activities.

Senior executives of organisations are forced in many cases to conclude that it is too cumbersome, expensive and time consuming for them to manage internal IT infrastructures. This takes the focus away from their core revenue making activities. This scenario creates the need for external infrastructure hosting, external service provision and outsourcing capability. This trend has resulted in the evolution of IT outsourcing models. The authors analyse the option of leveraging the cloud computing model to facilitate this common scenario.

This paper initially discusses the characteristics of cloud computing, focusing on scalability and delivery as a service. The model is evaluated using two case scenarios: an enterprise client with 30,000 worldwide customers, followed by small-scale subject matter expertise delivered through small to medium enterprise (SME) organisations. The paper evaluates the findings and develops a governance framework to articulate the value proposition of cloud computing. The model takes into consideration the financial aspects, and the behaviors and IT control structures of an IT organisation.

16. A Conceptual Analysis on the Taxation System for Highly Virtual Enterprises

Guozhen George Huang, The Open Polytechnic of New Zealand

A highly virtual enterprise (HVE), as a new form of business, has existed in the cloud for a long time, but its taxation system is far from established. This paper analyses the characteristics of an HVE in the cloud environment, suggests the criteria for determining its tax residency, and recommends a framework for accounting, auditing and legislation in New Zealand. It further proposes a network system for monitoring HVE activities and accounting for their website transactions. It also provides deliverables for the future development of cloud technology.

17. Cloud Computing and Virtualization: The “Entrepreneur without Borders” Workbench for 21st Century Enterprise Development

Dr. William J. Lawrence, Director NYIT Center for Entrepreneurial Studies, Professor of Economics and Entrepreneurship, New York Institute of Technology, NY, USA

This paper looks at how entrepreneurial innovation and creativity can and will become a primary catalyst in the development and implementation of cloud systems in a wide variety of decision making environments. Following a brief introduction to the historical evolution towards Cloud Computing and Virtualization, we explore some of the original developments that have led to how this technological and philosophical way of thinking about and planning our personal and professional lives has begun to come of age.

18. Decision Support for Selection of Cloud Service Providers

Tomas Sander, HP Labs, Princeton, US

Siani Pearson, HP Labs, Bristol, UK

Clear and consistent assessment of the various capabilities of cloud service providers (CSPs) will become an essential factor in deciding which CSPs to use in the future, particularly as cloud service provision expands further into more sensitive and regulated areas. This paper describes an approach that is useful in this regard. Specifically, we describe a mechanism in which context is gathered relating to CSPs; this is input to a rule-based system, and decisions are output about the suitability of each CSP, including an analysis of privacy and security risk and recommended stipulations to be taken into account when negotiating contracts and SLAs.
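
The context-in, decision-out mechanism can be sketched as a small rule table. This is an illustrative sketch only: the rules, risk scores, field names, and threshold below are all invented, not the system's actual rule base.

```python
# Toy rule-based CSP assessment: gathered context is run through rules that
# accumulate risk points and emit stipulations for contract negotiation.
RULES = [
    # (predicate over context, risk points, stipulation to add)
    (lambda c: not c["encrypts_at_rest"], 3, "Require encryption at rest"),
    (lambda c: c["data_location"] not in c["allowed_locations"], 5,
     "Require data residency guarantees"),
    (lambda c: not c["breach_notification_sla"], 2,
     "Add breach-notification clause to SLA"),
]

def assess_csp(context, risk_threshold=5):
    """Return a suitability decision, a risk score, and SLA stipulations."""
    risk, stipulations = 0, []
    for predicate, points, stipulation in RULES:
        if predicate(context):
            risk += points
            stipulations.append(stipulation)
    return {"suitable": risk < risk_threshold, "risk": risk,
            "stipulations": stipulations}

result = assess_csp({
    "encrypts_at_rest": True,
    "data_location": "EU",
    "allowed_locations": {"EU", "UK"},
    "breach_notification_sla": False,
})
```

Keeping the rules declarative makes the assessment consistent across providers, which is the property the paper argues for.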

Virtualization is a key technology in cloud computing for rendering on-demand provisioning of virtual services. Xen, an open source paravirtualized virtual machine monitor (hypervisor), has been adopted by many of the world's leading data centers today. A scheduler in Xen handles CPU resource sharing among the virtual machines hosted on the same physical system. This study focuses on the scheduler in the current Xen release - the Credit scheduler. Credit uses two parameters (weight and cap) to fine-tune CPU resource sharing. Previous studies have shown that these two parameters can impact various performance measures of virtual machines hosted on Xen. In this study, we present a holistic procedure for establishing performance models of virtual machines. Empirical data for two commonly used measures, namely calculation power and network throughput, were collected through simulations under various settings of weight and cap. We then employed a powerful machine learning tool (multi-kernel support vector regression) to learn performance models from the empirical data. These models were evaluated satisfactorily using established procedures in machine learning.
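
How weight and cap shape CPU shares can be sketched with a simplified proportional-share model. This is not Xen's scheduler code: it only illustrates the documented semantics that CPU time is proportional to weight and truncated by the cap (where a cap of 0 means "no cap").

```python
# Simplified model of Xen Credit scheduler shares: each VM gets CPU time in
# proportion to its weight, then its cap (a hard percentage limit) is applied.
def credit_shares(vms, total_cpu=100.0):
    """vms: {name: (weight, cap_percent)} -> {name: cpu_percent}."""
    total_weight = sum(w for w, _ in vms.values())
    shares = {}
    for name, (weight, cap) in vms.items():
        share = total_cpu * weight / total_weight
        if cap > 0:                   # cap == 0 means uncapped in Xen
            share = min(share, cap)
        shares[name] = share
    return shares

# vm3 has twice the weight of the others; vm2 is additionally capped at 20%.
shares = credit_shares({"vm1": (256, 0), "vm2": (256, 20), "vm3": (512, 0)})
```

The paper's performance models effectively learn how measures like throughput respond as these two knobs vary, without assuming a closed-form model like this one.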

20. Formulating a Security Layer of Cloud Data Storage Framework Based on Multi Agent System Architecture

The tremendous growth of cloud computing environments requires a new architecture for security services. In addition, these computing environments are open, and users may be connected or disconnected at any time. Cloud Data Storage, like any other emerging technology, is experiencing growing pains. It is immature, it is fragmented and it lacks standardization. To verify the correctness, integrity, confidentiality and availability of users’ data in the cloud, we propose a security framework. This security framework consists of two main layers: an agent layer and a cloud data storage layer. The proposed multi-agent system (MAS) architecture includes five types of agents: User Interface Agent (UIA), User Agent (UA), DER Agent (DERA), Data Retrieval Agent (DRA) and Data Distribution Preparation Agent (DDPA). The main goal of this paper is to formulate our secure framework and its architecture.

21. Cloud Computing Tipping Point Model

Chris Peiris, Faculty of Information Sciences and Engineering, University of Canberra, Australia

Bala Balachandran, Faculty of Information Sciences and Engineering, University of Canberra, Australia

Dharmendra Sharma, Faculty of Information Sciences and Engineering, University of Canberra, Australia

Recently, a continuing trend toward IT industrialization has grown in popularity. IT services delivered via hardware, software and people are becoming repeatable and usable by a wide range of customers and service providers. This is due, in part, to the commoditization and standardization of technologies, virtualization and the rise of service-oriented software architectures, and (most importantly) the dramatic growth in popularity/use of the Internet and the Web. Taken together, they constitute the basis of a discontinuity that amounts to a new opportunity to shape the relationship between those who use IT services and those who sell them. The discontinuity implies that the ability to deliver specialised services in IT can be paired with the ability to deliver those services in an industrialised and pervasive way.

The reality of this implication is that users of IT related services can focus on what the services provide them, rather than how the services are implemented or hosted. Analogous to how utility companies sell power to subscribers, and telephone companies sell voice and data services, some IT services such as network security management, data centre hosting or even departmental billing can now be easily delivered as a contractual service. This notion of cloud computing capability is gathering momentum rapidly. However, the governance and enterprise architecture to obtain repeatable, scalable and secure business outcomes from cloud computing is still greatly undefined.

This paper attempts to evaluate the enterprise architecture features of cloud computing and investigates a model that an IT organisation can leverage to predict and evaluate the ‘tipping point’ at which an organisation can make an objective decision to invest in cloud computing. Our current research attempts to build a quantitative and qualitative service centric framework by mapping cloud computing features to the Val IT and COBIT industry best practices.

22. Implications of Cloud Computing on Digital Forensics

Vincent Urias, Socorro

John Young, Washington, D.C

Sherelle Hatcher, Owings Mills, MD

Cloud computing is a paradigm for computing services that are delivered to users over the Internet. In cloud computing, users rent rather than buy their computing resources. Cloud computing likely represents the next stage in the evolution of the Internet. But the cloud computing paradigm is still developing, with numerous unknowns and many questions open for research. One critical question that has not received much attention is security. A significant subset of security is digital forensics, that is:

(1) the discovery of evidence remaining on a computer after a security breach or attack and
(2) the use of that evidence to investigate the event and establish facts for use in legal proceedings.

This paper discusses the impact that cloud computing will have on digital forensics. From a forensic perspective, cloud computing raises a number of concerns. Most immediate is whether or not forensic practitioners will be able to analyze the Cloud using existing techniques of digital forensics. During a traditional forensic examination, files on the storage media are examined along with the entire file system structure. But this may not be a practical model for examinations in the Cloud, where the computer is virtual, that is, where numerous heterogeneous resources, often geographically distributed, are combined. Other concerns include protecting evidence against contamination and anticipating the legal issues that will be raised by the Cloud paradigm, with its resources spread over diverse administrative and geopolitical domains. Comprehensive security services to protect not only the Cloud’s resources but also the data that resides on them may need to be instituted. The open literature to date has yet to address any of these challenges.

Cloud technologies are predicted to cause a paradigm shift in digital forensic techniques. This paper discusses the application of traditional digital forensic examinations to cloud forensics.

23. An Approach to Enable Cloud-Computing by the Abstraction of Event-Processing Classes

Jonathan Eccles, Department of Computer Science and Information Systems, Birkbeck, University of London, London

George Loizou, Department of Computer Science and Information Systems, Birkbeck, University of London, London

Following our introduction of the concept of Abstraction Classes, we present herein their realisation within a cloud environment. This is achieved using a combination of integrated service-location models, including Knowledge-Based Systems, and distributed metadata using XML. This is complemented by service control software invoked at the level of Abstraction Classes.

24. Analysis of Advanced Encryption Standards

Minal Moharir, Lecturer, Dept of ISE, R.V. College of Engg., Bangalore

Dr. A V Suresh, Prof. & Head, Dept of IEM, R.V. College of Engg., Bangalore

The Advanced Encryption Standard (AES), the block cipher ratified as a standard by the National Institute of Standards and Technology of the United States (NIST), was chosen using a process markedly more open and transparent than that of its predecessor, the aging Data Encryption Standard (DES).

Fifteen algorithms were submitted to NIST in 1998, and NIST chose five finalists. NIST's primary selection criteria were security, performance, and flexibility. This paper addresses the last two criteria, discussing the software performance of the five AES finalists.

The paper specifically compares the performance of the five AES finalists on a variety of common software platforms: 32-bit CPUs (both large and smaller microprocessors, smart cards, embedded microprocessors) and high-end 64-bit CPUs.
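
A performance comparison of this kind boils down to timing each cipher over many block encryptions. The harness below is a hedged sketch of that measurement loop; the "ciphers" in it are deliberately trivial stand-ins, not implementations of the AES finalists.

```python
import time

# Hypothetical micro-benchmark harness for comparing block cipher software
# performance. The toy functions below stand in for real cipher candidates.
def toy_xor_cipher(block, key=0x5A):
    """Stand-in cipher: XOR every byte with a fixed key (involutive)."""
    return bytes(b ^ key for b in block)

def toy_rot_cipher(block, shift=3):
    """Stand-in cipher: rotate every byte left by `shift` bits."""
    return bytes(((b << shift) | (b >> (8 - shift))) & 0xFF for b in block)

def benchmark(cipher, block, iterations=1000):
    """Return the seconds taken to encrypt `block` `iterations` times."""
    start = time.perf_counter()
    for _ in range(iterations):
        cipher(block)
    return time.perf_counter() - start

block = bytes(range(16))                      # one 128-bit block
timings = {c.__name__: benchmark(c, block)
           for c in (toy_xor_cipher, toy_rot_cipher)}
```

A real study would additionally control for compiler, key-setup cost, and block counts per platform, which is what makes cross-platform comparisons like the paper's non-trivial.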

25. A Decision Support System for Moving Workloads to Public Clouds

Mohammad Firoj Mithani, Unisys Global Services, India

Michael A. Salsburg, Ph.D. Unisys Corporation, USA

Shrisha Rao, Ph.D. IIIT-Bangalore, India

The current economic environment is compelling CxOs to look for better IT resource utilization in order to get more value from their IT investments and reuse existing infrastructure to support growing business demands. How to get more from less? How to reuse the resources? How to minimize the Total Cost of Ownership (TCO) of the underlying IT infrastructure and data center operation cost? How to improve Return On Investment (ROI) to remain profitable and transform the IT cost center into a profit center? All of these questions are now being considered in light of emerging ‘Public Cloud Computing’ services. Cloud Computing is a model for enabling resource allocation to dynamic business workloads in real time from a pool of free resources in a cost effective manner. Providing resources on demand at cost effective pricing is not the only criterion when determining if a business service workload can be moved to a public cloud. So what else must CxOs consider before they migrate to public cloud environments? There is a need to validate the business applications and workloads in terms of technical portability and business requirements/compliance so that they can be deployed into a public cloud without considerable customization. This validation is not a simple task.

In this paper, we will discuss an approach and the analytic tooling which will help CxOs and their teams to automate the process of identifying business workloads that should move to a public cloud environment, as well as understanding its cost benefits. Using this approach, an organization can identify the most suitable business service workloads which could be moved to a public cloud environment from a private data center without re-architecting the applications or changing their business logic. This approach helps automate the classification and categorization of workloads into various categories. For example, Business Critical (BC) and Non-business Critical (NBC) workloads can be identified based on the role of business services within the overall business function. The approach helps in the assessment of public cloud providers on the basis of features and constraints. This approach provides consideration for industry compliance and the price model for hosting workloads on a pay-per-use basis. Finally, the inbuilt analytics in the tool find the ‘best-fit’ cloud provider for hosting the business service workload. ‘Best-fit’ is based on the analysis and outcomes of the previously mentioned steps.

Today, the industry follows a manual, time-consuming process for workload identification, workload classification and cloud provider assessment to find the best fit for business service workload hosting. The suggested automated approach enables an organization to reduce cost and time when deciding to move to a public cloud environment, accelerating the entire process of leveraging cloud benefits through an effective, informed, fact-based decision process.
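
The classification and best-fit matching steps described above can be sketched as a small rule-based program. This is a hypothetical illustration only; the thresholds, attribute names, and provider data below are assumptions, not taken from the paper's tool.

```python
# Hypothetical sketch of rule-based workload classification (BC vs NBC)
# and 'best-fit' provider matching; all names and numbers are illustrative.

def classify_workload(workload):
    """Label a workload Business Critical (BC) or Non-business Critical (NBC)
    based on its role within the overall business function."""
    critical_roles = {"revenue", "customer-facing", "regulatory"}
    return "BC" if workload["role"] in critical_roles else "NBC"

def best_fit_provider(workload, providers):
    """Score providers on feature coverage, compliance and pay-per-use price;
    compliance is a hard constraint, price lowers the score."""
    def score(p):
        if not workload["compliance"] <= set(p["certifications"]):
            return -1  # non-compliant providers are excluded outright
        features = len(set(workload["needs"]) & set(p["features"]))
        return features - p["price_per_hour"]
    ranked = sorted(providers, key=score, reverse=True)
    return ranked[0]["name"] if score(ranked[0]) >= 0 else None

workload = {"role": "revenue", "needs": {"autoscale", "vpn"},
            "compliance": {"ISO27001"}}
providers = [
    {"name": "CloudA", "features": {"autoscale"},
     "certifications": {"ISO27001"}, "price_per_hour": 0.10},
    {"name": "CloudB", "features": {"autoscale", "vpn"},
     "certifications": {"ISO27001", "PCI"}, "price_per_hour": 0.12},
]
print(classify_workload(workload))             # BC
print(best_fit_provider(workload, providers))  # CloudB
```

In this toy example, CloudB wins because it covers both required features while remaining compliant, despite the marginally higher price; the paper's analytics would weigh many more factors.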

Monitoring solutions for virtualized infrastructure (VI) should evolve to collect, analyze and provide configuration recommendations based on a broader range of operational metrics. A virtualized infrastructure is a complex interaction of hardware (servers, network and storage) hosting a variety of multi-tier applications with specific service level requirements, governed by their security and compliance policies. Most existing solutions monitor and analyze only a subset of these interactions. The resulting analyses and recommendations tend to optimize only particular aspects of the infrastructure and can potentially introduce violations for the others. A virtualized infrastructure is dynamic in nature, providing immense opportunities to automate configuration changes to virtual machines, networks and storage. It delivers the capability to administer the whole infrastructure as a large resource pool shared by multiple workloads. Monitoring solutions that look at only a few aspects end up forcing administrators to create silos within the infrastructure, specially designed to ensure that business service requirements are met for the specific applications running there. A monitoring solution that can collect and analyze multiple aspects to assist in decision making and process automation can deliver greater efficiency to the virtualized infrastructure.

In this paper we argue the importance of having a monitoring solution that provides a holistic view of the virtualized infrastructure. We discuss the need for solutions capable of monitoring and analyzing a broader set of metrics: the health of infrastructure components; the performance of the operating environment, such as hypervisors, operating systems and the applications running on them; capacity utilization indicators for servers, networks and storage; and the information available in configuration and change management databases, including security and compliance policies. We also take a look at what this broader set of metrics comprises and who would be interested in it. The paper further proposes a monitoring framework for collecting and analyzing the above-mentioned aspects of a virtual infrastructure to develop a more complete solution.
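
The core idea, that a recommendation should consider health, performance, capacity and policy metrics together rather than optimizing one in isolation, can be illustrated with a minimal sketch. The field names and thresholds are assumptions, not the paper's framework.

```python
# Illustrative sketch: one record per host combines health, performance,
# capacity and policy metrics, so a placement rule can check all of them.
# A narrower monitor that ignored policies could recommend a host that
# violates compliance. All fields and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HostMetrics:
    host: str
    health_ok: bool            # hardware health (server, network, storage)
    cpu_util: float            # hypervisor/OS performance indicator, 0..1
    capacity_headroom: float   # fraction of capacity still free, 0..1
    policies: set = field(default_factory=set)  # security/compliance tags

def placement_candidates(hosts, required_policy="pci"):
    """Hosts a VM could be moved to: healthy, not overloaded, with
    headroom, AND satisfying the required compliance policy."""
    return [h.host for h in hosts
            if h.health_ok
            and h.cpu_util < 0.7
            and h.capacity_headroom > 0.2
            and required_policy in h.policies]

hosts = [
    HostMetrics("esx1", True, 0.45, 0.5, {"pci"}),
    HostMetrics("esx2", True, 0.30, 0.6, set()),     # fast but non-compliant
    HostMetrics("esx3", False, 0.10, 0.9, {"pci"}),  # compliant but unhealthy
]
print(placement_candidates(hosts))  # ['esx1']
```

Only `esx1` qualifies: `esx2` would be chosen by a capacity-only monitor but fails the compliance check, which is exactly the kind of cross-aspect violation the paper argues a holistic monitor prevents.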

27. Multicast Delivery of IPTV Over the Internet

Dane L. Jackson, Raymond A. Hansen, and Anthony H. Smith

Television represents one of the great advancements in information delivery. Traditionally, television service has been delivered using dedicated communication methods such as terrestrial and satellite-based wireless transmissions and fixed cable-based transmissions. Some of these delivery mechanisms have advanced and now provide services including voice and Internet access. Another communication method, traditional telephone service, has greatly improved and expanded to deliver services such as television and Internet access.

This convergence of services provides cost savings, allowing providers to utilize existing communication networks to deliver additional services to their customers, often at minimal or zero infrastructure cost. One disadvantage of this method is that customer reach is still limited to those with access to dedicated service provider networks. The ability to disengage television service from these dedicated networks and move it to a more ubiquitous network would greatly improve the customer reach of the providers.

The most obvious network choice for a delivery medium is the Internet. Given that television delivery mechanisms have already started the progression towards IPTV, the service is a natural fit. One issue hindering this transition is bandwidth availability. In private delivery networks, the issue of bandwidth availability for IPTV is often combated through the use of IP Multicasting. Considering that the Internet is already believed to be bandwidth constrained, the use of multicasting could be deemed a requirement. This paper explores current issues with deploying IPTV over the Internet, the use of multicast to combat some of these problems, and the inherent challenges of pushing multicast-based IPTV services over the Internet.

28. Networked Games based on Web Services

Chong-wei Xu, Computer Science and Information Systems, Kennesaw State University

Hongwei Lei, Computer Science and Information Systems, Kennesaw State University

On one hand, web services have demonstrated their important role in the field of computing. On the other, networked games need server support, which is usually based on socket programming. For example, in a two-player turn-taking game using the TCP protocol, a server communicates with and coordinates the two game GUIs used by the two players. This gives rise to an important research question: “Can the server take advantage of web services to replace the sockets while supporting networked games?” This article describes some technical aspects of accomplishing this goal.
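
The contrast the abstract draws can be sketched as follows: instead of a socket protocol, the server's coordination logic is written as plain stateless-call operations, each of which could be exposed through any SOAP/REST toolkit, with clients polling rather than holding a persistent connection. The class and method names below are illustrative, not the authors' API.

```python
# Minimal sketch of server-side coordination for a two-player turn-taking
# game, written so each method could be published as a web-service
# operation (join / submit_move / poll) instead of a socket protocol.
# Names are hypothetical, not taken from the article.
class TurnCoordinator:
    def __init__(self):
        self.players = []   # joined player ids, at most two
        self.turn = 0       # index of the player whose move it is
        self.moves = []     # move history that both game GUIs can poll

    def join(self, player_id):
        """Operation: register a player; returns True once both have joined."""
        if len(self.players) < 2 and player_id not in self.players:
            self.players.append(player_id)
        return len(self.players) == 2

    def submit_move(self, player_id, move):
        """Operation: accept a move only from the player whose turn it is,
        then pass the turn to the other player."""
        if len(self.players) < 2 or self.players[self.turn] != player_id:
            return False  # out of turn or game not started: rejected
        self.moves.append((player_id, move))
        self.turn = 1 - self.turn
        return True

    def poll(self):
        """Operation: clients poll for state instead of holding a socket."""
        nxt = self.players[self.turn] if len(self.players) == 2 else None
        return {"moves": list(self.moves), "next": nxt}

c = TurnCoordinator()
c.join("alice"); c.join("bob")
print(c.submit_move("bob", "e5"))    # False - not bob's turn yet
print(c.submit_move("alice", "e4"))  # True
print(c.poll()["next"])              # bob
```

The trade-off this sketch makes visible is the one the research question turns on: web-service calls are request/response, so the server cannot push moves to the other GUI the way a socket can; clients must poll or use a callback mechanism.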

29. MCQ Exams Correction in an Offline Network Using XML

Jehad Al-Sadi, Arab Open University

Daed Al-Halabi

Hasan Al-Halabi

One of the vital subjects for any educational institution is student assessment, and a common way to evaluate student work is through exams. Class sizes tend to expand in some societies, so quick, accurate evaluation is increasingly in demand. Computerized questions make the process of taking an exam easier and smoother, which has driven the move towards multiple-choice questions (MCQ). The rapid adoption of XML (Extensible Markup Language) for large amounts of structured data, owing to its ability to save time and manipulate data, makes it suitable for MCQ exam environments. Moreover, XML can cope with networks suffering from failures. The main contribution of this paper is an efficient method for transferring data related to online questions between the server and client stations without being affected if the connection fails during the exam. An analytical study of the efficiency of the module is also presented.
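
A hedged illustration of the idea: if the questions and answer key travel as a structured XML document, a client station can cache the exam locally and grading can proceed entirely offline, surviving a dropped connection mid-exam. The schema below is an assumption for illustration, not the paper's actual format.

```python
# Sketch: an MCQ exam as an XML document, graded offline with the
# standard library. The element and attribute names are hypothetical.
import xml.etree.ElementTree as ET

EXAM_XML = """
<exam id="CS101-mid">
  <question id="q1" answer="b">
    <text>Which transport protocol is connection-oriented?</text>
    <choice id="a">UDP</choice>
    <choice id="b">TCP</choice>
  </question>
  <question id="q2" answer="a">
    <text>XML stands for?</text>
    <choice id="a">Extensible Markup Language</choice>
    <choice id="b">Extra Modern Language</choice>
  </question>
</exam>
"""

def grade(exam_xml, responses):
    """Compare a student's locally saved responses against the answer key
    embedded in the exam document; no server connection is needed."""
    root = ET.fromstring(exam_xml)
    questions = root.findall("question")
    correct = sum(1 for q in questions
                  if responses.get(q.get("id")) == q.get("answer"))
    return correct, len(questions)

print(grade(EXAM_XML, {"q1": "b", "q2": "b"}))  # (1, 2)
```

In a real deployment the answer key would of course be kept server-side or encrypted; the point here is only that a self-describing XML payload lets the client resume and grade without the live connection the abstract says may fail.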

The Design and Implementation of a Testbed for Comparative Game AI Studies

An essential component of realism in video games is the behavior exhibited by the non-player character (NPC) agents in the game. Most development efforts employ a single artificial intelligence (AI) method to determine NPC agent behavior during gameplay. This paper describes an NPC AI testbed under development which will allow a variety of AI methods to be compared under simulated gameplay conditions. Two squads of NPC agents are pitted against each other in a game scenario. Multiple games using the same starting AI assignments form an epoch. The testbed allows for the testing of a variety of AI methods in three dimensions. Individual agents can be assigned different AI methods. Individual agents can use different AI methods at different times during the game. And finally, the AI used by one type of agent can be made to differ from the AI used by another agent type. Extensive data is collected for all agent actions in all games played in an epoch. This data forms the basis of the comparative analysis.

Continuous and Reinforcement Learning Methods for First-Person Shooter Games

Machine learning is now widely studied as the basis for artificial intelligence systems within computer games. Most existing work focuses on methods for learning static expert systems, typically emphasizing candidate selection. This paper extends this work by exploring the use of continuous and reinforcement learning techniques to develop fully-adaptive game AI for first-person shooter bots. We begin by outlining a framework for learning static control models for tanks within the game BZFlag, then extend that framework using continuous learning techniques that allow computer-controlled tanks to adapt to the game style of other players, extending overall playability by thwarting attempts to infer the underlying AI. We further show how reinforcement learning can be used to create bots that learn how to play solely through trial and error, providing game engineers with a practical means to produce large numbers of bots, each with individual intelligence and unique behaviours, all from a single initial AI model.

‘Games with a purpose’ is a paradigm where games are designed to computationally capture the essence of the underlying collective human conscience or common sense that plays a major role in decision-making. This human computing method ensures spontaneous participation of players who, as a byproduct of playing, provide useful data that is impossible to generate computationally and extremely difficult to collect through extensive surveys. In this paper we describe a game that allows us to collect data on human perception of character body shapes. The paper describes the experimental setup, related game design constraints, art creation, and data analysis. In our interactive role-playing detective game titled Villain Ville, players are asked to characterize different versions of full-body color portraits of three villain characters. They are later asked to correctly match their character-trait ratings to a set of characters represented only with outlines of primitive vector shapes.
By transferring human intelligence tasks into core game-play mechanics, we have successfully managed to collect motivated data. Preliminary analysis on game data generated by 50 secondary school students shows a convergence to some common perception associations between role, physicality and personality. We hope to harness this game to discover perception for a wide variety of body-shapes to build up an intelligent shape trait-role model, with application in tutored drawing, procedural character geometry creation and intelligent retrieval.

This paper shows how college students without prior experience in video game design can create an interesting video game. Video game creation is a task that requires weeks, if not months, of dedication and perseverance to complete. However, with Alice, a group of three sophomore students who had never designed a game was able to create a full-fledged video game from given specifications. Alice is 3D graphics interactive animation software, well-tried and proven to be an enjoyable learning environment. At the start of this project, students were given guidelines that describe the expected outcomes.
With minimum supervision, a working program matching the guidelines was completed in three days. In an additional two days, the students enhanced its quality with better graphics design and music. With this experience, 3D graphics interactive animation software like Alice is demonstrated to be a useful teaching tool for academic courses on game development and design. This paper not only discusses how the video game was created, but also speaks of the difficulties the team overcame easily with Alice.

In the following we discuss a cost-effective immersive gaming environment and its implementation in Blender, an open source game engine. This extends traditional approaches to immersive gaming, which tend to concentrate on multiple flat screens, sometimes surrounding the player, or cylindrical [2] displays. In the former there are unnatural gaps between each display due to screen framing; in both cases they rarely cover the 180-degree horizontal field of view, and are even less likely to cover the vertical field of view required to fully engage the human visual system. The solution introduced here concentrates on seamless hemispherical displays, planetariums in general and the iDome as a specific case study. The methodology discussed is equally appropriate to other realtime 3D environments that are available in source code form or have a suitably powerful means of modifying the rendering pipeline.

Towards Teaching Secondary School Physics in an Immersive 3D Game Environment

Laboratory exercises are an important part of secondary school physics classes and make an important contribution to student learning. Virtual laboratories have the advantage of allowing experiments that might be too dangerous or too costly in the real world. We present Gary’s Lab, an experimental immersive 3D laboratory environment using computer game technology. Our system allows students considerable freedom in constructing apparatus and running qualitative and quantitative experiments using that apparatus. We argue that the process of constructing experiments in interesting contexts might be expected to help students engage with their lessons, focusing their attention on the apparatus and the methods of measurement used.

This paper presents the development of a real-time, perception-enhanced virtual environment for maritime applications which simulates real-time six-degrees-of-freedom ship motions (pitch, heave, roll, surge, sway, and yaw) under user interactions, environmental conditions and various threat scenarios. The simulation system consists of a reliable ship-motion prediction system and a perception-enhanced immersive virtual environment with greater ecological validity. The virtual environment supports multiple-display viewing, which can greatly enhance user perception, and we developed the ecological environment for a strong sensation of immersion. In this virtual environment it is possible to incorporate real-world ships, geographical scenery, several environmental conditions and a wide range of visibility and illumination effects. The system can be used for both entertainment and educational applications such as console-level computer games, teaching and learning applications and various virtual reality applications. In particular, this framework can be used to create immersive multi-user environments.

Creating computer-graphics-based content for stereoscopic and auto-stereoscopic displays requires rendering a scene several times from slightly different viewpoints. In that case, maintaining real-time rendering can be a difficult goal if the geometry reaches thousands of triangles. However, similarities exist among the vertices belonging to the different views, such as the texture, some transformations or parts of the lighting. In this paper, we present a single-pass algorithm using the GPU that speeds up the rendering of stereoscopic and multi-view images. The geometry is duplicated and transformed for the new viewpoints using a shader program, which avoids redundant operations on vertices.
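
The duplication step can be illustrated with a small sketch: each vertex is replicated once per view and offset along the camera's horizontal axis by half the eye separation, the kind of transform a vertex or geometry shader would apply on the GPU. Plain Python stands in for the shader here, and the eye-separation value is an assumption, not from the paper.

```python
# Illustrative stand-in for the per-view geometry duplication a shader
# performs: replicate each vertex for the left and right views, shifted
# by +/- half the eye separation along x. Values are hypothetical.
def duplicate_for_views(vertices, eye_separation=0.065):
    """vertices: list of (x, y, z). Returns {'left': [...], 'right': [...]}
    with each vertex offset horizontally for its view."""
    half = eye_separation / 2.0
    views = {"left": -half, "right": +half}
    return {name: [(x + dx, y, z) for (x, y, z) in vertices]
            for name, dx in views.items()}

tri = [(0.0, 0.0, -1.0), (1.0, 0.0, -1.0), (0.0, 1.0, -1.0)]
views = duplicate_for_views(tri)
print(views["left"][0])   # (-0.0325, 0.0, -1.0)
print(views["right"][0])  # (0.0325, 0.0, -1.0)
```

Doing this replication on the GPU, as the paper proposes, means per-vertex work shared by all views (texturing, parts of the lighting) is submitted once rather than once per view.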

ToyBox Futuristi is a digital puppeteering toolkit created by a student team from Carnegie Mellon University’s Entertainment Technology Center (known from this point forward as ETC). The team took its inspiration from the work of Italian Futurist Fortunato Depero, in particular Balli Plastici, the ‘plastic dance’ he created in 1918. Depero’s marionettes encapsulate the Futurist ideal of machinery striving to break free of human control, a theme expanded upon by the use of contemporary technologies. The project was driven by two deliverables: the digital re-imagining of Balli Plastici, a version of which was shown at the Performa 09 Arts Festival in New York City, and the development of ToyBox Futuristi, the puppetry software that enables users to create their own versions of the ballet. This paper focuses on the creation of ToyBox Futuristi, including how the creative team took inspiration from the work of Depero and the Futurists while simultaneously establishing a sense of style and functionality adapted to the digital age.

English language, Mathematics and Science for life are mandatory subjects for Thai students to finish their primary school studies. Judging by the unsatisfactory results of the annual assessment, many students fail these subjects every year. This paper therefore proposes an educational computer game to enhance learning of the English language, Mathematics and Science subjects. The proposed game employs the concept of collaborative learning, integrated into the game to promote better understanding of the content and familiarization with teamwork, while the players still experience joy and challenge. The proposed game is designed as a multi-player online game. All players compete with each other to be a leader and conduct the game, with help from team members, to achieve the goal. The developed game is evaluated on two aspects: learning efficiency and student satisfaction.
The empirical study is conducted with 100 students from 3 different primary schools in Chiang Rai, Thailand. The students are divided into two groups: one playing the game individually and the other collaboratively. The first group has 25 students, while the second comprises 15 teams of 5 students each. The results reveal that the students playing the game collaboratively achieve higher learning efficiency than the students playing individually. Moreover, the collaborative game receives a good satisfaction rating from the students.

Quality of service in game servers with excessive users is often maintained by increasing the number of game servers. While this method is straightforward, it requires investment in a number of new machines without fully utilizing the resources available in the current ones. In this paper, we argue that the graphics processor (GPU), working in parallel with the local central processor (CPU) inside a machine, can be a good candidate for reducing the workload before attempting to distribute it to other machines. Through an empirical study, we investigate to what extent the GPU can benefit different types of online game servers. As a result, we can suggest how the GPU should be involved so that the performance of game services can be improved. We believe this result can benefit online game developers who want to improve the performance of their applications without requiring any extra resources.

Using a Game Engine to Integrate Experimental, Field, and Simulation Data for Science Education

The purpose of this project is to use a game engine to integrate geo-referenced research data, whether experimental or simulated, and present it interactively to the user. Geo-referenced means that every image, video, or sound file, every pressure map, and every simulated temperature chart is attached to a specific point on a map or body. These data may also be time-referenced, so that different data sets may be available at the same location for different times of the day or seasons of the year. Target users for the interactive applications are high-school and college students, who can then conduct their own “experiments” or “explorations” as a way to get exposed to the problems and methodologies of science and research. We use two example projects to illustrate the approach.

This paper aims to provide a brief review of cloud computing, followed by an analysis of the cloud computing environment using the PESTEL framework. The future implications and limitations of adopting cloud computing as an effective eco-friendly strategy to reduce carbon footprint are also discussed. The paper concludes with a recommendation to guide researchers in further examining this phenomenon.
Organizations today face tough economic times, especially following the recent global financial crisis and the evidence of catastrophic climate change. International and local businesses find themselves compelled to review their strategies. They need to consider their organizational expenses and priorities and to strategically consider how best to save. Traditionally, the Information Technology (IT) department is one area that would be affected negatively in such a review, yet continuing to fund strategic technologies during an economic downturn is vital to organizations.
It is predicted that in coming years IT resources will only be available online. More and more organizations are looking at operating smarter businesses by investigating technologies such as cloud computing, virtualization and green IT to find ways to cut costs and increase efficiencies.

Cloud computing is widely associated with major capital investment in mega data centres, housing expensive blade servers and storage area networks. In this paper we argue that a modular approach to building local or regional data centres using commodity hardware and open source software can produce a cost-effective solution that better addresses the goals of cloud computing, and provides a scalable architecture that meets the service requirements of a high-quality data centre.

In the current era of a competitive business world and stringent market-share and revenue-sustenance challenges, organizations tend to focus more on their core competencies than on the functional areas that support the business. Traditionally, however, this has not been possible in the IT management area, because the technologies and their underlying infrastructures are significantly complex, requiring dedicated and sustained in-house efforts to maintain the IT systems that enable core business activities.
Senior executives of organisations are forced in many cases to conclude that it is too cumbersome, expensive and time consuming for them to manage internal IT infrastructures. This takes the focus away from their core revenue-making activities. This scenario creates the need for external infrastructure hosting, external service provision and outsourcing capability, a trend that resulted in the evolution of IT outsourcing models. The authors analyse the option of leveraging the cloud computing model to address this common scenario.
This paper initially discusses the characteristics of cloud computing, focusing on scalability and delivery as a service. The model is evaluated using two case scenarios: an enterprise client with 30,000 worldwide customers, followed by small-scale subject matter expertise delivered through small to medium enterprise (SME) organisations. The paper evaluates the findings and develops a governance framework to articulate the value proposition of cloud computing. The model takes into consideration the financial aspects, and the behaviours and IT control structures of an IT organisation.

A Conceptual Analysis on the Taxation System for Highly Virtual Enterprises

The highly virtual enterprise (HVE), a new form of business, has existed in the cloud for a long time, but its taxation system is far from established. This paper analyses the characteristics of an HVE in the cloud environment, suggests criteria for determining its tax residency, and recommends a framework for accounting, auditing and legislation in New Zealand. It further proposes a network system for monitoring HVE activities and accounting for their website transactions. It also provides deliverables for cloud technology to develop in the future.

Cloud Computing and Virtualization: The “Entrepreneur without Borders” Workbench for the 21st Century

This paper looks at how entrepreneurial innovation and creativity can and will become a primary catalyst in the development and implementation of cloud systems in a wide variety of decision-making environments. Following a brief introduction to the historical evolution towards Cloud Computing and Virtualization, we explore some of the original developments that have led to how this technological and philosophical way of thinking about and planning our personal and professional lives has begun to come of age.

Clear and consistent assessment of the various capabilities of cloud service providers (CSPs) will become an essential factor in deciding which CSPs to use in the future, particularly as cloud service provision expands further into more sensitive and regulated areas. This paper describes an approach that is useful in this regard. Specifically, we describe a mechanism in which context is gathered relating to CSPs; this is input to a rule-based system, and decisions are output about the suitability of each CSP, including an analysis of privacy and security risk and recommended stipulations to be taken into account when negotiating contracts and SLAs.

Virtualization is a key technology in cloud computing for rendering on-demand provisioning of virtual services. Xen, an open source paravirtualized virtual machine monitor (hypervisor), has been adopted by many leading data centers today. A scheduler in Xen handles the sharing of CPU resources among virtual machines hosted on the same physical system. This study focuses on the scheduler in the current Xen release, the Credit scheduler. Credit uses two parameters (weight and cap) to fine-tune CPU resource sharing. Previous studies have shown that these two parameters can impact various performance measures of virtual machines hosted on Xen. In this study, we present a holistic procedure for establishing performance models of virtual machines. Empirical data for two commonly used measures, namely calculation power and network throughput, were collected by simulations under various settings of weight and cap. We then employed a powerful machine learning tool (multi-kernel support vector regression) to learn performance models from the empirical data. These models were evaluated satisfactorily using established procedures in machine learning.
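
The modelling step, fitting a function from the scheduler settings (weight, cap) to a measured performance value, can be sketched in miniature. The study uses multi-kernel support vector regression; as a self-contained stand-in, the sketch below fits a plain linear model throughput ≈ a + b·weight + c·cap by least squares to synthetic measurements. Both the linear form and the data are assumptions made for illustration.

```python
# Stand-in for the paper's SVR modelling step: least-squares fit of
# throughput ~ a + b*weight + c*cap on synthetic (weight, cap) data,
# solving the 3x3 normal equations with Gaussian elimination.
def fit_linear(samples):
    """samples: list of (weight, cap, measured). Returns [a, b, c]."""
    X = [[1.0, w, c] for (w, c, _) in samples]
    y = [m for (_, _, m) in samples]
    n = len(X)
    # Normal equations: (X^T X) beta = X^T y
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for j in range(i, 3):
                A[r][j] -= f * A[i][j]
            b[r] -= f * b[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, 3))) / A[i][i]
    return beta

# Synthetic measurements generated from throughput = 10 + 2*weight + 5*cap
data = [(w, c, 10 + 2 * w + 5 * c)
        for w in (1, 2, 4) for c in (0.25, 0.5, 1.0)]
a, bw, cc = fit_linear(data)
print(round(a, 3), round(bw, 3), round(cc, 3))  # 10.0 2.0 5.0
```

A kernel-based regressor like the SVR used in the study would additionally capture the non-linear interactions between weight and cap that a linear fit cannot; the sketch shows only the shape of the fit-then-predict workflow.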

The tremendous growth of cloud computing environments requires new architectures for security services. In addition, these computing environments are open, and users may connect or disconnect at any time. Cloud data storage, like any other emerging technology, is experiencing growing pains: it is immature, fragmented and lacks standardization. To verify the correctness, integrity, confidentiality and availability of users’ data in the cloud, we propose a security framework. This security framework consists of two main layers: an agent layer and a cloud data storage layer. The proposed MAS architecture includes five types of agents: User Interface Agent (UIA), User Agent (UA), DER Agent (DERA), Data Retrieval Agent (DRA) and Data Distribution Preparation Agent (DDPA). The main goal of this paper is to formulate our secure framework and its architecture.

Recently, a continuing trend toward IT industrialization has grown in popularity. IT services delivered via hardware, software and people are becoming repeatable and usable by a wide range of customers and service providers. This is due, in part, to the commoditization and standardization of technologies, virtualization and the rise of service-oriented software architectures, and (most importantly) the dramatic growth in the popularity and use of the Internet and the Web. Taken together, they constitute the basis of a discontinuity that amounts to a new opportunity to shape the relationship between those who use IT services and those who sell them. The discontinuity implies that the ability to deliver specialised IT services can be paired with the ability to deliver those services in an industrialised and pervasive way.
The reality of this implication is that users of IT-related services can focus on what the services provide them, rather than how the services are implemented or hosted. Analogous to how utility companies sell power to subscribers, and telephone companies sell voice and data services, some IT services such as network security management, data centre hosting or even departmental billing can now be easily delivered as a contractual service. This notion of cloud computing capability is gathering momentum rapidly. However, the governance and enterprise architecture needed to obtain repeatable, scalable and secure business outcomes from cloud computing remain largely undefined.
This paper attempts to evaluate the enterprise architecture features of cloud computing and investigates a model that an IT organisation can leverage to predict and evaluate the ‘tipping point’ at which it can make an objective decision to invest in cloud computing. Current research is attempting to build a quantitative and qualitative service-centric framework by mapping cloud computing features to the Val IT and COBIT industry best practices.

Cloud computing is a paradigm for computing services that are delivered to users over the Internet. In cloud computing, users rent rather than buy their computing resources. Cloud computing likely represents the next stage in the evolution of the Internet. But the cloud computing paradigm is still developing, with numerous unknowns and many questions open for research. One critical question that has not received much attention is security. A significant subset is digital forensics, that is,
(1) the discovery of evidence remaining on a computer after a security breach or attack and
(2) the use of that evidence to investigate the event and establish facts for use in legal proceedings.
This paper discusses the impact that cloud computing will have on digital forensics. From a forensic perspective, cloud computing raises a number of concerns. Most immediate is whether or not forensic practitioners will be able to analyze the Cloud using existing techniques of digital forensics. During a traditional forensic examination, files on the storage media are examined along with the entire file system structure. But this may not be a practical model for examinations in the Cloud, where the computer is virtual, that is, where numerous heterogeneous resources, often geographically distributed, are combined. Other concerns include protecting evidence against contamination and anticipating the legal issues that will be raised by the Cloud paradigm, with its resources spread over diverse administrative and geopolitical domains. Comprehensive security services to protect not only the Cloud’s resources but also the data that resides on them may need to be instituted. The open literature to date has yet to address any of these challenges.
Cloud technologies are predicted to cause a paradigm shift in digital forensic techniques. This paper discusses the application of traditional digital forensic examinations to cloud forensics.

An Approach to Enable Cloud-Computing by the Abstraction of Event-Processing Classes

Following our introduction of the concept of Abstraction Classes, we present herein their realisation within a cloud environment. This is achieved using a combination of integrated service-location models, including Knowledge-Based Systems, and distributed metadata using XML. This is complemented by service control software invoked at the level of Abstraction Classes.

Description : The Advanced Encryption Standard (AES), the block cipher ratified as a standard by the National Institute of Standards and Technology (NIST) of the United States, was chosen through a process markedly more open and transparent than that of its predecessor, the aging Data Encryption Standard (DES).
Fifteen algorithms were submitted to NIST in 1998, from which NIST chose five finalists. NIST's primary selection criteria were security, performance, and flexibility. This paper examines the last two criteria, discussing the software performance of the five AES finalists.
Specifically, the paper compares the performance of the five finalists on a variety of common software platforms: 32-bit CPUs (both large microprocessors and smaller ones such as smart cards and embedded microprocessors) and high-end 64-bit CPUs.
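A software performance comparison of this kind reduces to timing each candidate's encryption routine over a fixed workload on each platform. Python's standard library ships no AES, so the sketch below times a stand-in `toy_cipher` (a placeholder XOR routine, not one of the finalists) purely to illustrate the throughput-measurement harness; in a real study each finalist's reference implementation would be substituted:

```python
import time

def toy_cipher(block, key):
    """Placeholder 16-byte 'cipher' (XOR) standing in for an AES finalist."""
    return bytes(b ^ k for b, k in zip(block, key))

def throughput_mbps(cipher, blocks=20_000, block_size=16):
    """Encrypt `blocks` blocks of `block_size` bytes; report megabits/second."""
    key = bytes(range(block_size))
    data = bytes(block_size)
    start = time.perf_counter()
    for _ in range(blocks):
        data = cipher(data, key)
    elapsed = time.perf_counter() - start
    return (blocks * block_size * 8) / (elapsed * 1e6)

rate = throughput_mbps(toy_cipher)
print(f"{rate:.1f} Mbit/s")
```

Running the same harness on 32-bit and 64-bit machines is what makes the per-platform comparison the paper describes.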

Description : The current economic environment is compelling CxOs to look for better IT resource utilization in order to get more value from their IT investments and to reuse existing infrastructure to support growing business demands. How to get more from less? How to reuse resources? How to minimize the Total Cost of Ownership (TCO) of the underlying IT infrastructure and data center operation costs? How to improve Return on Investment (ROI) to remain profitable and transform the IT cost center into a profit center? All of these questions are now being considered in light of emerging 'Public Cloud Computing' services. Cloud Computing is a model for allocating resources from a pool of free resources to dynamic business workloads in real time and in a cost-effective manner. Providing resources on demand at cost-effective pricing, however, is not the only criterion when determining whether a business service workload can be moved to a public cloud. So what else must CxOs consider before they migrate to public cloud environments? Business applications and workloads must be validated for technical portability and for business requirements and compliance, so that they can be deployed into a public cloud without considerable customization. This validation is not a simple task.
In this paper, we discuss an approach, and the analytic tooling behind it, that helps CxOs and their teams automate the process of identifying business workloads that should move to a public cloud environment, as well as understanding the cost benefits. Using this approach, an organization can identify the business service workloads most suitable for moving from a private data center to a public cloud environment without re-architecting the applications or changing their business logic. The approach helps automate the classification and categorization of workloads: for example, Business Critical (BC) and Non-business Critical (NBC) workloads can be identified based on the role of business services within the overall business function. It also supports the assessment of public cloud providers on the basis of their features and constraints, taking into account industry compliance and the pricing model for hosting workloads on a pay-per-use basis. Finally, the analytics built into the tool find the 'best-fit' cloud provider for hosting the business service workload, where 'best fit' is based on the analysis and outcomes of the preceding steps.
Today, the industry follows a manual, time-consuming process for workload identification, workload classification, and cloud provider assessment to find the best fit for hosting a business service workload. The suggested automated approach enables an organization to reduce the cost and time of deciding to move to a public cloud environment, and accelerates the entire process of leveraging cloud benefits through an effective, informed, fact-based decision process.
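The classification and best-fit steps described above can be pictured as two small functions: one tagging a workload BC or NBC, one filtering providers by constraints and ranking by price. The attribute names, categories, and scoring rule below are illustrative assumptions, not the tool's actual model:

```python
# Illustrative sketch of workload classification and best-fit provider
# selection; field names and the scoring rule are assumptions.

def classify(workload):
    """Tag a workload Business Critical (BC) or Non-business Critical (NBC)."""
    return "BC" if workload["revenue_impacting"] else "NBC"

def best_fit(workload, providers):
    """Pick the cheapest provider satisfying the workload's constraints."""
    eligible = [
        p for p in providers
        if (p["compliant"] or not workload["needs_compliance"])
        and p["max_iops"] >= workload["iops"]
    ]
    if not eligible:
        return None
    return min(eligible, key=lambda p: p["price_per_hour"])["name"]

workload = {"revenue_impacting": True, "needs_compliance": True, "iops": 500}
providers = [
    {"name": "cloudA", "compliant": True,  "max_iops": 1000, "price_per_hour": 0.40},
    {"name": "cloudB", "compliant": False, "max_iops": 2000, "price_per_hour": 0.10},
    {"name": "cloudC", "compliant": True,  "max_iops": 800,  "price_per_hour": 0.35},
]
print(classify(workload), best_fit(workload, providers))
```

Here the cheapest provider (cloudB) loses on compliance, so the best fit is the cheapest compliant one: the kind of constrained trade-off the paper's analytics automate.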

Description : Monitoring solutions for virtualized infrastructure (VI) should evolve to collect, analyze, and provide configuration recommendations based on a broader range of operational metrics. A virtualized infrastructure is a complex interaction of hardware (servers, network, and storage) hosting a variety of multi-tier applications, each with specific service-level requirements and governed by its own security and compliance policies. Most existing solutions monitor and analyze only a subset of these interactions. The resulting analysis and recommendations tend to optimize only particular aspects of the infrastructure and can potentially introduce violations in the others. A virtualized infrastructure is dynamic in nature, providing immense opportunities to automate configuration changes to virtual machines, networks, and storage. It delivers the capability to administer the whole infrastructure as a large resource pool shared by multiple workloads. Monitoring solutions that look at only a few aspects end up forcing administrators to create silos within the infrastructure, specially designed to ensure that business service requirements are met for the specific applications running there. A monitoring solution that can collect and analyze multiple aspects, assisting in decision making and process automation, can deliver greater efficiency to the virtualized infrastructure.
In this paper we argue the importance of a monitoring solution that provides a holistic view of the virtualized infrastructure. We discuss the need for solutions capable of monitoring and analyzing a broader set of metrics, such as the health of infrastructure components; the performance of the operating environment, including hypervisors, operating systems, and the applications running on them; capacity utilization indicators for servers, networks, and storage; and the information available in the configuration and change management database, including security and compliance policies. We also take a look at what this broader set of metrics comprises and who would be interested in it. The paper further proposes a monitoring framework for collecting and analyzing the above-mentioned aspects of a virtual infrastructure, toward a more complete solution.
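The holistic analysis argued for above amounts to evaluating every aspect of a host (health, capacity, compliance) in one pass, so a recommendation that fixes one dimension cannot silently violate another. A minimal sketch, where the metric names and thresholds are illustrative assumptions rather than the proposed framework's schema:

```python
# Sketch of a collector that merges several monitoring aspects into one
# view; metric names and thresholds are illustrative assumptions.

def analyze(snapshot):
    """Return (host, issue) findings across health, capacity, and compliance."""
    findings = []
    for host, m in snapshot.items():
        if m["cpu_util"] > 0.85:
            findings.append((host, "capacity: CPU saturated"))
        if not m["policy_ok"]:
            findings.append((host, "compliance: policy violation"))
        if m["health"] != "green":
            findings.append((host, f"health: {m['health']}"))
    return findings

snapshot = {
    "hv-01": {"cpu_util": 0.92, "policy_ok": True,  "health": "green"},
    "hv-02": {"cpu_util": 0.40, "policy_ok": False, "health": "amber"},
}
for host, issue in analyze(snapshot):
    print(host, issue)
```

A capacity-only tool would migrate load off hv-01 onto hv-02 without noticing hv-02's compliance and health findings; a merged view surfaces all three at once.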

Description : Television represents one of the great advancements in information delivery. Traditionally, television service has been delivered over dedicated communication channels such as terrestrial and satellite wireless transmissions and fixed cable transmissions. Some of these delivery mechanisms have advanced and now provide additional services, including voice and Internet access. Another communication method, traditional telephone service, has greatly improved and expanded to deliver services such as television and Internet access.
This convergence of services provides cost savings, allowing providers to use existing communication networks to deliver additional services to their customers, often at minimal or zero infrastructure cost. One disadvantage of this model is that customer reach remains limited to those with access to the dedicated service provider networks. The ability to disengage television service from these dedicated networks and move it to a more ubiquitous network would greatly improve providers' customer reach.
The most obvious choice of delivery medium is the Internet. Given that television delivery mechanisms have already started the progression towards IPTV, the service is a natural fit. One issue hindering this transition is bandwidth availability. In private delivery networks, the issue of bandwidth availability for IPTV is often combated through the use of IP Multicasting. Considering that the Internet is already believed to be bandwidth constrained, the use of multicasting could be deemed a requirement. The following paper explores current issues with deploying IPTV over the Internet, the use of multicast to combat some of these problems, and the inherent challenges of pushing multicast-based IPTV services over the Internet.
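IP multicast reserves the 224.0.0.0/4 range, and a receiver subscribes to a channel by handing the kernel a group-membership request. The sketch below only classifies an address and packs the `ip_mreq` structure a receiver would pass to `setsockopt(IP_ADD_MEMBERSHIP)`; it stops short of an actual join, which requires a routable interface. The group address is illustrative:

```python
import ipaddress
import socket
import struct

GROUP = "239.255.42.1"   # illustrative group in the administratively scoped range

# Multicast addresses occupy 224.0.0.0/4.
addr = ipaddress.ip_address(GROUP)
print(addr.is_multicast)

# The ip_mreq structure a receiver would pass to IP_ADD_MEMBERSHIP:
# 4 bytes of group address + 4 bytes of local interface (INADDR_ANY here).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
print(len(mreq))
```

The efficiency argument is that a provider transmits one stream per channel regardless of audience size; the difficulty over the public Internet is that every router on the path must honor such memberships, which is far from universally the case.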

Description : On one hand, web services have demonstrated their important role in the field of computing. On the other, networked games need server support, which is usually based on socket programming. For example, in a two-player, turn-taking game using the TCP protocol, a server communicates with and coordinates the two game GUIs used by the two players. This gives rise to an important research question: "Can the server take advantage of web services to replace the sockets while still supporting networked games?" This article describes some technical aspects of accomplishing this goal.
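The essential change is that instead of holding a socket open to each GUI, the server exposes stateless operations that clients invoke (and poll) over HTTP. A minimal sketch of the server-side coordination state such a service could wrap; the operation names `join`, `move`, and `state` are assumptions for illustration, not the article's actual interface:

```python
# Coordination logic a web service could expose in place of raw TCP sockets.

class TurnServer:
    """Server-side state for a two-player, turn-taking game."""

    def __init__(self):
        self.players = []
        self.moves = []
        self.turn = 0

    def join(self, name):
        """Register a player; returns the player's index (0 or 1)."""
        if len(self.players) >= 2 or name in self.players:
            raise ValueError("game full or name taken")
        self.players.append(name)
        return len(self.players) - 1

    def move(self, player_index, payload):
        """Record a move if it is this player's turn, then pass the turn."""
        if player_index != self.turn:
            raise ValueError("not your turn")
        self.moves.append((self.players[player_index], payload))
        self.turn = 1 - self.turn
        return self.state()

    def state(self):
        """Snapshot each GUI would poll over HTTP instead of reading a socket."""
        return {"turn": self.players[self.turn] if len(self.players) == 2 else None,
                "moves": list(self.moves)}

srv = TurnServer()
a, b = srv.join("alice"), srv.join("bob")
srv.move(a, "e4")
print(srv.state()["turn"])
```

The trade-off versus sockets is that the server can no longer push updates: each GUI must poll `state` for its turn, which is exactly the kind of technical aspect the article examines.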

Description : One of the vital concerns of any educational institution is student assessment. A common way to evaluate students' work is through examinations. Class sizes tend to expand in many societies, so the demand for quick, accurate evaluation keeps growing. Computerized questions make the process of taking an exam easier and smoother, which has driven the move towards multiple-choice questions (MCQs). The rapid adoption of XML (Extensible Markup Language) for large amounts of structured data, owing to the time it saves and the ease with which it can be manipulated, makes it suitable for an MCQ exam environment. Moreover, XML copes well with networks that suffer connection failures. The main contribution of this paper is an efficient method for transferring online question data between the server and client stations without being affected if the connection fails while an exam is being taken. An analytical study of the efficiency of the module is also presented.
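One way to see why a structured XML payload helps with connection failures: because each question is a self-contained element with an identifier, the server can resume delivery from the last question the client acknowledged instead of restarting the exam. A hedged sketch (the element and attribute names are assumptions, not the paper's actual schema):

```python
import xml.etree.ElementTree as ET

# Illustrative MCQ payload; element and attribute names are assumptions.
EXAM_XML = """
<exam id="cs101-final">
  <question id="q1">
    <text>Which protocol guarantees in-order delivery?</text>
    <choice key="a">UDP</choice>
    <choice key="b" correct="true">TCP</choice>
  </question>
  <question id="q2">
    <text>What does XML stand for?</text>
    <choice key="a" correct="true">Extensible Markup Language</choice>
    <choice key="b">Excess Markup Language</choice>
  </question>
</exam>
"""

def unanswered(exam_xml, answered_ids):
    """Return the question IDs still to send: after a dropped connection,
    the server resumes from the client's last acknowledged question."""
    root = ET.fromstring(exam_xml)
    return [q.get("id") for q in root.findall("question")
            if q.get("id") not in answered_ids]

print(unanswered(EXAM_XML, {"q1"}))
```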

GSTF provides a global intellectual platform for top notch academics and industry professionals to actively interact and share their groundbreaking research achievements. GSTF is dedicated to promoting research and development and offers an inter-disciplinary intellectual platform for leading scientists, researchers, academics and industry professionals across Asia Pacific to actively consult, network and collaborate with their counterparts across the globe.