Purpose – The purpose of this paper is to provide a case study of the selection and implementation of the ReSearcher Suite, outlining why and how the suite was implemented at the Library of the Institute of Technology, Tallaght.
Design/methodology/approach – Case study – single site.
Findings – The Simon Fraser University (SFU) hosted ReSearcher Suite (with support) provides an open source solution for OpenURL linking integrated with inter-library loan submission, federated searching, a knowledge base, coverage data, and A-Z listings for journals and databases. While it does not have the full integration with the library management system that a vendor-supplied product would, the functionality is strong enough on the user end to offer a viable alternative.
Practical implications – Provides a starting point for similar projects.
Originality/value – The site studied is an international customer for SFU, while the suite is still in beta. The option ...

Recent years have seen the emergence of Service-Oriented Architecture (SOA) as a dominant architecture for implementing enterprise-scale distributed systems. Two main styles of SOA exist, namely SOAP-based services and RESTful services. There has been much comment and debate on the pros and cons of each approach to implementing an SOA, much of it surrounding the performance characteristics of both approaches. In this paper, the authors present the results of a performance analysis conducted on a set of test SOA scenarios implemented using both SOAP and RESTful approaches; in particular, the caching capabilities of REST have been exploited with significant benefits accruing, an option not available with SOAP-based approaches.
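The caching advantage described here can be sketched in miniature. Assuming a simple ETag-revalidation scheme (the `ResourceServer` class and its payload are hypothetical illustrations, not the paper's test harness), a conditional GET lets an unchanged resource be answered with 304 Not Modified and no body, a saving unavailable to SOAP calls tunnelled through POST:

```python
import hashlib

class ResourceServer:
    """Toy RESTful server: GET responses are cacheable via ETag validation."""
    def __init__(self, data):
        self.data = data

    def etag(self):
        # Validator derived from the current representation.
        return hashlib.md5(self.data.encode()).hexdigest()

    def get(self, if_none_match=None):
        tag = self.etag()
        if if_none_match == tag:
            return 304, None, tag   # client copy still valid: no body sent
        return 200, self.data, tag

server = ResourceServer("<order id='42'>shipped</order>")

# First GET: full payload plus validator.
status, body, tag = server.get()
# Revalidation: conditional GET transfers no body while the resource is unchanged.
status2, body2, _ = server.get(if_none_match=tag)
```

In a real deployment the 304 path is handled by standard HTTP intermediaries (browser caches, proxies), which is precisely what a SOAP endpoint cannot exploit.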

In single-equation LS regression the common practice is to test goodness-of-fit by the standard error of estimate s and probable absence of residual autoregression by the Durbin-Watson d, or the more recent count of sign changes r. With a wide choice of causative (or independent) variables (indvars) and with access to a computer, a multitude of regressions can be produced, one for each set of indvars selected. We usually pick the regression with the lowest s and a satisfactory d or r as the 'best', unless there are very compelling a priori reasons for picking some other set. Truth to say, there is still much empiricism in regression practice; in it art has a place as well as science.
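The two diagnostics named above can be computed directly from a residual series. A minimal sketch (plain Python; the residual values in the usage line are hypothetical):

```python
def durbin_watson(residuals):
    """d = sum((e_t - e_{t-1})^2) / sum(e_t^2); values near 2 suggest no
    first-order autocorrelation, values near 0 or 4 suggest positive or
    negative autocorrelation respectively."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

def sign_changes(residuals):
    """r: the count of sign changes along the residual sequence."""
    return sum(1 for t in range(1, len(residuals))
               if residuals[t] * residuals[t - 1] < 0)

# Perfectly alternating residuals: d = 3.0, r = 3 (negative autocorrelation).
d = durbin_watson([1, -1, 1, -1])
r = sign_changes([1, -1, 1, -1])
```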

The number of programming languages is large [1] and steadily increasing [2]. However, little structured information and empirical evidence is available to help software engineers assess the suitability of a language for a particular development project or software architecture.
We argue that these shortages are partly due to a lack of high-level, objective programming language feature assessment criteria: existing advice to practitioners is often based on ill-defined notions of 'paradigms' [3, p.xiii] and 'orientation' [4], while researchers lack a shared common basis for generalisation and synthesis of empirical results.
This paper presents a feature model constructed from the programmer's perspective, which can be used to precisely compare general-purpose programming languages in the actor-oriented, agent-oriented, functional, object-oriented, and procedural categories. The feature model is derived from the existing literature on general concepts of programming, and validated wit...

This paper discusses the design, application and generalisation of a Linked Data vocabulary to describe historical events of political violence. The vocabulary was designed to capture the United States political violence 1795-2010 dataset created by Prof. Peter Turchin in the course of his social science research into Cliodynamics. The vocabulary has been generalised to support a semi-automated data collection process suitable for the creation of a complementary dataset of political violence events in the UK and Ireland.
Both datasets will be published as managed linked data that is inter-connected with other web-based datasets such as DBpedia, a computer-readable version of Wikipedia. The lifecycle of the datasets will be actively managed with tool support for further harvesting, evolution and consistency checking.
The creation of the political violence vocabulary required the evaluation of pre-existing vocabularies for potential reuse and compatibility. The original US political v...

Weighted Markov decision processes (MDPs) have long been used to model quantitative aspects of systems in the presence of uncertainty. However, much of the literature on such MDPs takes a monolithic approach, by modelling a system as a particular MDP; properties of the system are then inferred by analysis of that particular MDP. In contrast, in this paper we develop compositional methods for reasoning about weighted MDPs, as a possible basis for compositional reasoning about their quantitative behaviour. In particular we approach these systems from a process algebraic point of view. For these we define a coinductive simulation-based behavioural preorder which is compositional in the sense that it is preserved by structural operators for constructing weighted MDPs from components. For finitary convergent processes, which are finite-state and finitely branching systems without divergence, we provide two characterisations of the behavioural preorder. The first uses a novel quantitative ...

2012 3rd IEEE PES Innovative Smart Grid Technologies Europe (ISGT Europe), Berlin, Germany.
This paper presents the use of induction generator turbine machines with simplified frequency control as a direct drive solution for wind energy conversion. An offshore wind farm system is proposed utilising a VSC-HVDC connection. The wind farm will contain variable speed wind turbines driving Squirrel Cage Induction Generators (SCIG). The study will look at the electrical performance of the generators with real wind data and the design control implications with a VSC-HVDC link. The performance of the system is verified by computer simulation using the Dymola/Modelica software platform with the ObjectStab power systems analysis toolbox. This paper presents the design of independently developed optimised power system models for variable speed wind turbine machines with simplified pitch angle and frequency control with a VSC-HVDC link for grid interconnection.
Science Foundation Irelan...

"Nanoscience" is recognized as an emerging science of objects that have at least one dimension ranging from a few nanometers to less than 100 nm. Through the manipulation of organic and inorganic materials at the atomic level, novel materials can be prepared with different thermal, optical, electrical, and mechanical properties, compared to the bulk state of the same materials. Nanoscale entities are abundant in biological systems and include diverse entities such as proteins, small-molecule drugs, metabolites, viruses, and antibodies. In the past 20 years, there has been a rapid expansion in the number of engineered nanosystems that have been developed for biological and medical applications. Nanotechnology is a demanding new field based on the convergence of technical disciplines such as physics, chemistry, engineering and computer sciences, cell biology, and neuroscience. Nanotechnology is recognized as the design, preparation, characterization, and applications of mate...

The Syria conflict has been described as the most socially mediated in history, with online social media playing a particularly important role. At the same time, the ever-changing landscape of the conflict leads to difficulties in applying analytical approaches taken by other studies of online political activism. Therefore, in this paper, we use an approach that does not require strong prior assumptions or the proposal of an advance hypothesis to analyze Twitter and YouTube activity of a range of protagonists to the conflict, in an attempt to reveal additional insights into the relationships between them. By means of a network representation that combines multiple data views, we uncover communities of accounts falling into four categories that broadly reflect the situation on the ground in Syria. A detailed analysis of selected communities within the anti-regime categories is provided, focusing on their central actors, preferred online platforms, and activity surrounding "real world...

The research presented in this thesis was developed as part of a project called Intelligent Agent Based Collaborative Design Information Management and Support Tools (I-DIMS). The I-DIMS project was funded by the Irish Research Council for Science, Engineering and Technology (IRCSET) as a partnership project between Galway-Mayo Institute of Technology and the Computer Integrated Manufacturing Research Unit (CIMRU), National University of Ireland, Galway. The project aimed to investigate the use of software agents to support the synthesis and presentation of information for distributed teams for the purposes of enhancing design, learning, creativity, communication and productivity.

In this paper we present an investigation into the run-time behaviour of objects in Java programs, using specially adapted coupling metrics. We identify objects from the same class that exhibit non-uniform coupling behaviour when measured dynamically.
We define a number of object-level run-time metrics, based on the static Chidamber and Kemerer coupling between objects (CBO) measure. These new metrics seek to quantify coupling at different layers of granularity, that is, at class-class and object-class level. We outline our method of collecting such metrics and present a study of the programs from the JOlden benchmark suite as an example of their use.
A number of statistical techniques, principally agglomerative hierarchical clustering analysis, are used to facilitate the identification of such objects.
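As an illustration of the clustering step only, not the authors' actual analysis, a naive single-linkage agglomerative routine over one-dimensional coupling measurements (the sample values below are hypothetical) might look like:

```python
def single_linkage(points, k):
    """Naive agglomerative hierarchical clustering: repeatedly merge the
    two clusters whose closest members are nearest, until k clusters
    remain. Distance between clusters is the minimum pairwise distance
    (single linkage)."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the two nearest clusters
    return [sorted(c) for c in clusters]

# Objects with coupling counts 1, 2, 10, 11 and an outlier at 50
# separate cleanly into three groups.
groups = single_linkage([1, 2, 10, 11, 50], 3)
```

In practice one would cut a full dendrogram rather than fix k in advance, but the merge loop above is the essence of the technique.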

It gives us great pleasure to present this special issue, containing papers from the conference on the Principles and Practice of Programming in Java held in Kilkenny City, Ireland, in June 2003. All authors of full papers presented at PPPJ 2003 were invited to submit revised and extended versions of their papers for this special issue. These papers were rigorously reviewed, resulting in the six papers presented here.

We present a structured reformulation of the seminal algorithm for automatic generation of test cases for a context-free grammar. Our reformulation simplifies the algorithm in several ways. First, we provide a structured reformulation so that it is obvious where to proceed at each step. Second, we partition the intricate third phase into five functions, so that the discussion and comprehension of this phase can be modularized. Our implementation of the algorithm provides information about the grammatical, syntactic and semantic correctness of the generated test cases for two important languages in use today: C and C++.
The results of our study of C and C++ highlight a lacuna latent in the research to date. In particular, if one or more of the automatically generated test cases is syntactically or semantically incorrect, then the confidence of structural "coverage" may be compromised for the particular grammar-based tool under test. Our ongoing work focuses on a solution to this p...
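A much-simplified sketch of per-production test-case generation may help fix ideas; the toy expression grammar below is a hypothetical stand-in for C/C++, and the routine covers each production once using shortest sub-derivations, a simplified take on the goal of the seminal algorithm rather than a faithful reformulation of it:

```python
# Grammar: nonterminal -> list of alternatives; each alternative is a
# list of symbols. Symbols not appearing as keys are terminals.
GRAMMAR = {
    "E": [["E", "+", "T"], ["T"]],
    "T": [["T", "*", "F"], ["F"]],
    "F": [["(", "E", ")"], ["id"]],
}

def shortest_strings(grammar):
    """Fixpoint computation of a shortest terminal string derivable
    from each nonterminal."""
    short = {}
    changed = True
    while changed:
        changed = False
        for nt, alts in grammar.items():
            for alt in alts:
                if all(s in short or s not in grammar for s in alt):
                    cand = [tok for s in alt
                            for tok in (short[s] if s in grammar else [s])]
                    if nt not in short or len(cand) < len(short[nt]):
                        short[nt] = cand
                        changed = True
    return short

def per_production_tests(grammar):
    """One test case per production: expand the chosen alternative once,
    completing the remaining nonterminals with their shortest strings."""
    short = shortest_strings(grammar)
    return [" ".join(tok for s in alt
                     for tok in (short[s] if s in grammar else [s]))
            for nt, alts in grammar.items() for alt in alts]

cases = per_production_tests(GRAMMAR)   # six cases, one per production
```

Each generated case is then fed to a real compiler front end to check the grammatical, syntactic and semantic correctness the abstract discusses.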

This paper highlights the problem of access rights as a part of information security in enterprises with many information systems and their human users. In many organisations, users often write down their user names and passwords, thus enabling outsiders to enter information systems without proper authorisation. Furthermore, access rights commonly remain active after their possessors have left the organisation or after roles in the organisation have changed. In addition, there are instances in enterprises where access rights are managed with severe deficiencies. In this study we discuss a case where this issue was found to be in a critical state when the organisation planned to extend and specialise its business abroad. The literature exposed several approaches and concepts to be concerned with. In our paper, we describe how we approached the problem with a pragmatic contextual view. Based on prior research we explored access rights perceived in the enterprise with the help of a...

Machine-understandable data constitutes the basis for the Semantic Desktop. We provide in this paper means to author and annotate Semantic Documents on the Desktop. In our approach, the PDF file format is the basis for semantic documents, which store both a document and the related metadata in a single file. To achieve this, we provide a framework, SALT, that extends the LaTeX writing environment and supports the creation of metadata for scientific publications. SALT lets the scientific author create metadata while putting together the content of a research paper. We discuss some of the requirements one has to meet when developing such an ontology-based writing environment and we describe a usage scenario.

This position paper sets out the argument that an interesting avenue for the exploration and study of universals and variation in spatial reference is to address this topic in terms of the universals in human perception and attention, and to explore how these universals impact spatial reference across cultures and languages.

A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms is present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to set up in a spreadsheet and is easily adapted for the prediction of highly accurate Madelung constants. Calculations performed out to the 10th cube return a value for the Madelung constant exact to 5 decimal places. From an educational point of view, the method presents students with the possibility of directly observing the difference between doing a "crude" lattice-sum calculation compared to what results under the "zero-net charge" condition. Caveats, such as truncation errors that must be considered by students when comparing the results of their limited cube calculations with literature pair-distribution functions, are als...
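The expanding-cube sum under the zero-net-charge condition can be sketched programmatically. The version below uses Evjen's fractional-weight scheme for the NaCl lattice (ions on the cube's faces, edges and corners count 1/2, 1/4 and 1/8), which is the standard zero-net-charge construction and may differ in detail from the spreadsheet method described in the paper:

```python
from itertools import product

def madelung_evjen(n):
    """NaCl Madelung constant by Evjen's expanding-cube method: sum over
    all lattice sites (i, j, k) within a cube of half-side n, weighting
    boundary sites fractionally so each cube carries zero net charge."""
    total = 0.0
    for i, j, k in product(range(-n, n + 1), repeat=3):
        if i == j == k == 0:
            continue
        weight = 1.0
        for c in (i, j, k):
            if abs(c) == n:
                weight *= 0.5          # face 1/2, edge 1/4, corner 1/8
        # Alternating charges: sign depends on the parity of i + j + k.
        total -= ((-1) ** ((i + j + k) % 2)) * weight / (i*i + j*j + k*k) ** 0.5
    return total

# The n = 1 cube already gives ~1.456; larger cubes converge rapidly
# toward the literature value 1.7475646...
m1, m10 = madelung_evjen(1), madelung_evjen(10)
```

This makes the abstract's contrast concrete: dropping the fractional boundary weights (the "crude" sum) converges far more slowly, if at all.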

24th Irish Conference on Artificial Intelligence and Cognitive Science (AICS'16), University College Dublin, Dublin, Ireland, 20-21 September 2016.
Modern computer systems generate large volumes of log data as a matter of course, and the analysis of this log data is seen as one of the most promising opportunities in big data analytics. Moodle is a Virtual Learning Environment (VLE) used extensively in third-level education that captures a significant amount of log data on student activity. In this paper we present an analysis of Moodle data that reveals interesting differences in student work patterns. We demonstrate that, by clustering activity profiles represented as time series using Dynamic Time Warping, we can uncover meaningful clusters of students exhibiting similar behaviours. We use these clusters to identify distinct activity patterns among students, such as Procrastinators, Strugglers, and Experts. We see educators as the potential users of a tool that might resul...
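Dynamic Time Warping, the distance used here to compare activity time series, has a compact textbook formulation (this is a generic implementation, not the study's code):

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two numeric sequences,
    with |x - y| as the local cost. Unlike Euclidean distance, DTW
    aligns similar shapes even when they are shifted or stretched
    in time, which suits comparing students' activity profiles."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A repeated sample is absorbed by the warping: distance 0, where
# Euclidean comparison would not even be defined for unequal lengths.
zero = dtw([1, 2, 3], [1, 2, 2, 3])
```

A matrix of such pairwise distances is what a hierarchical or k-medoids clustering of activity profiles would consume.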

We describe a tool chain that enables experimentation and study of real C++ applications. Our tool chain enables reverse engineering and program analysis by exploiting gcc, and thus accepts any C++ application that can be analysed by the C++ parser and front end of gcc. Our current test suite consists of large, open-source applications with diverse problem domains, including language processing and gaming. Our tool chain is designed using a GXL-based pipe-filter architecture; therefore, the individual applications and libraries that constitute our tool chain each provide a point of access. The preferred point of access is the g4api Application Programming Interface (API), which is located at the end of the chain. g4api provides access to information about the C++ program under study, including information about declarations, such as classes (including template instantiations); namespaces; functions; and variables, statements and some expressions. Access to the information is via eit...

Earth System Science (ESS) observational data are often inadequately semantically enriched by geo-observational information systems in order to capture the true meaning of the associated data sets. Data models underpinning these information systems are often too rigid in their data representation to allow for the ever-changing and evolving nature of ESS domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in a computable way.
Object-oriented techniques typically employed to model data in a complex domain (with evolving domain concepts) can unnecessarily exclude domain specialists from the design process, invariably leading to a mismatch between the needs of the domain specialists and how the concepts are modelled. In many cases, an oversimplification of the domain concept is captured by the computer scientist.
This paper proposes that two-level modelling methodologies developed...

Whether or not we believe that science can one day provide us with computers that can mimic human behaviour, we, as people engaging in art practices, should be, probably more than anyone else, very careful in dealing with such a topic, especially when ideas such as artificial intelligence (AI) and creativity or consciousness are presented in the same context. Beyond sci-fi plots, the relevant scientific and aesthetic literature too often offers texts in which computer programs generating art are addressed as creative entities (or agents). This is, in my opinion, a cultural trap caused by a wealth of concomitant factors, from sociological, psychological and historical to philosophical, all more or less connected to a long-lasting tradition of a so-called positivist attitude in relation to knowledge. It is not my intention to re-iterate here the many historical arguments presented for and against the feasibility of intelligent machines. Nor is it my intention to suggest that much o...

Virtual machines (VMs) are a popular target for language implementers. A long-running question in the design of virtual machines has been whether stack or register architectures can be implemented more efficiently with an interpreter. Many designers favour stack architectures since the location of operands is implicit in the stack pointer. In contrast, the operands of register machine instructions must be specified explicitly. In this paper, we present a working system for translating stack-based Java virtual machine (JVM) code to a simple register code. We describe the translation process, the complicated parts of the JVM which make translation more difficult, and the optimisations needed to eliminate copy instructions. Experimental results show that a register format reduced the number of executed instructions by 34.88%, while increasing the number of bytecode loads by an average of 44.81%. Overall, this corresponds to an increase of 2.32 loads for each dispatch removed. We believ...
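The core of a stack-to-register translation can be illustrated with a toy bytecode (hypothetical opcodes, not real JVM instructions, and not the authors' translator): because the stack depth is known at each point during translation, stack slot d maps to a virtual register rd, making the implicit operands explicit:

```python
def stack_to_register(stack_code):
    """Translate a tiny stack bytecode to register form by tracking the
    operand-stack depth at translation time: the value in stack slot d
    lives in virtual register r<d>."""
    out, depth = [], 0
    for op in stack_code:
        if op[0] == "push":
            out.append(f"move r{depth}, #{op[1]}")
            depth += 1
        elif op[0] == "add":
            # Stack form pops two operands and pushes the sum; the
            # register form names all three operands explicitly.
            out.append(f"add r{depth - 2}, r{depth - 2}, r{depth - 1}")
            depth -= 1
        elif op[0] == "store":
            out.append(f"move {op[1]}, r{depth - 1}")
            depth -= 1
    return out

# (push 2) (push 3) (add) (store x) -> register code with explicit operands:
code = stack_to_register([("push", 2), ("push", 3), ("add",), ("store", "x")])
```

The `move` instructions this naive scheme emits are exactly the copy instructions whose elimination the abstract says requires further optimisation.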