<h1>Overview of content related to 'digital library'</h1>
<p><a href="http://www.ariadne.ac.uk/taxonomy/term/88/all?article-type=&amp;term=&amp;organisation=&amp;project=&amp;author=&amp;issue=">http://www.ariadne.ac.uk/taxonomy/term/88/all</a></p>
<p>RSS feed with Ariadne content related to specified tag</p>
<h2><a href="http://www.ariadne.ac.uk/issue73/henshaw-et-al">Automating Harvest and Ingest of the Medical Heritage Library</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue73/henshaw-et-al#author1"><u><font color="#0066cc">Christy Henshaw</font></u></a>, <a href="/issue73/henshaw-et-al#author2"><u><font color="#0066cc">Dave Thompson</font></u></a> and <a href="/issue73/henshaw-et-al#author3"><u><font color="#0066cc">João Baleia</font></u></a> describe an automated process to harvest medical books and pamphlets from the Internet Archive into the Wellcome Library’s Digital Services environment.</p>
</div>
</div>
</div>
<h2 id="Overview_of_the_UK_Medical_Heritage_Library_Project">Overview of the UK Medical Heritage Library Project</h2>
<p>The aim of the UK Medical Heritage Library (UK-MHL) Project is to provide free access to a wealth of medical history and related books from UK research libraries. There are already over 50,000 books and journal issues in the Medical Heritage Library drawn from North American research libraries. The UK-MHL Project will expand this collection considerably by digitising a further 15 million pages for inclusion in the collection.</p>
<p><a href="http://www.ariadne.ac.uk/issue73/henshaw-et-al" target="_blank">read more</a></p>
<p>Published: Fri, 13 Feb 2015 15:35:57 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue71/editorial2">Editorial Introduction to Issue 71</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue71/editorial2#author1">The editor</a> introduces readers to the content of <em>Ariadne</em> Issue 71.</p>
</div>
</div>
</div>
<p>As I depart this chair after the preparation of what I thought would be the last issue of <em>Ariadne</em> [<a href="#1">1</a>], I make no apology for the fact that I did my best to include as much material in her ‘swan song’ as possible. With the instruction to produce only one more issue this year, I felt it was important to publish as much of the content in the pipeline as I could.</p>
<p><a href="http://www.ariadne.ac.uk/issue71/editorial2" target="_blank">read more</a></p>
<p>Published: Wed, 17 Jul 2013 19:01:02 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue71/henshaw-kiley">The Wellcome Library, Digital</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue71/henshaw-kiley#author1">Christy Henshaw</a> and <a href="/issue71/henshaw-kiley#author2">Robert Kiley</a> describe how the Wellcome Library has transformed its information systems to support mass digitisation of historic collections.</p>
</div>
</div>
</div>
<p>Online access is now the norm for many spheres of discovery and learning. What benefits bricks-and-mortar libraries have to offer in this digital age is a subject of much debate and concern, and will continue to be so as learning resources and environments shift ever more from the physical to the virtual. In order to maintain a place in this dual environment, most research libraries strive to replicate their traditional offerings in the digital world.</p>
<p><a href="http://www.ariadne.ac.uk/issue71/henshaw-kiley" target="_blank">read more</a></p>
<p>Published: Tue, 18 Jun 2013 14:52:03 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue71/kehoe-gee">eMargin: A Collaborative Textual Annotation Tool</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue71/kehoe-gee#author1">Andrew Kehoe</a> and <a href="/issue71/kehoe-gee#author2">Matt Gee</a> describe their Jisc-funded eMargin collaborative textual annotation tool, showing how it has widened its focus through integration with Virtual Learning Environments.</p>
</div>
</div>
</div>
<p>In the Research and Development Unit for English Studies (RDUES) at Birmingham City University, our main research field is Corpus Linguistics: the compilation and analysis of large text collections in order to extract new knowledge about language. We have previously developed the WebCorp [<a href="#1">1</a>] suite of software tools, designed to extract language examples from the Web and to uncover frequent and changing usage patterns automatically. eMargin, with its emphasis on <em>manual</em> annotation and analysis, was therefore somewhat of a departure for us.</p>
<p>The eMargin Project came about in 2007 when we attempted to apply our automated Corpus Linguistic analysis techniques to the study of English Literature. To do this, we built collections of works by particular authors and made these available through our WebCorp software, allowing other researchers to examine, for example, how Dickens uses the word ‘woman’, how usage varies across his novels, and which other words are associated with ‘woman’ in Dickens’ works.</p>
<p>What we found was that, although our tools were generally well received, there was some resistance amongst literary scholars to this large-scale automated analysis of literary texts. Our top-down approach, relying on frequency counts and statistical analyses, was contrary to the traditional bottom-up approach employed in the discipline, relying on the intuition of literary scholars. In order to develop new software to meet the requirements of this new audience, we needed to gain a deeper understanding of the traditional approach and its limitations.</p>
<p style="text-align: center; "><img alt="logo: eMargin logo" src="http://ariadne-media.ukoln.info/grfx/img/issue71-kehoe-gee/emargin-logo.png" style="width: 250px; height: 63px;" title="logo: eMargin logo" /></p>
<h2 id="The_Traditional_Approach">The Traditional Approach</h2>
<p>A long-standing problem in the study of English Literature is that the material being studied – the literary text – is often many hundreds of pages in length, yet the teacher must encourage class discussion and focus this on particular themes and passages. Compounding the problem is the fact that, often, not all students in the class have read the text in its entirety.</p>
<p>The traditional mode of study in the discipline is ‘close reading’: the detailed examination and interpretation of short text extracts down to individual word level. This variety of ‘practical criticism’ was greatly influenced by the work of I.A. Richards in the 1920s [<a href="#2">2</a>] but can actually be traced back to the 11<sup>th</sup> Century [<a href="#3">3</a>]. What this approach usually involves in practice in the modern study of English Literature is that the teacher will specify a passage for analysis, often photocopying this and distributing it to the students. Students will then read the passage several times, underlining words or phrases which seem important, writing notes in the margin, and making links between different parts of the passage, drawing out themes and motifs. On each re-reading, the students’ analysis gradually takes shape (see Figure 1). Close reading takes place either in preparation for seminars or in small groups during seminars, and the teacher will then draw together the individual analyses during a plenary session in the classroom.</p>
<p><a href="http://www.ariadne.ac.uk/issue71/kehoe-gee" target="_blank">read more</a></p>
<p>Published: Thu, 04 Jul 2013 17:20:45 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue71/rumsey-jefferies">DataFinder: A Research Data Catalogue for Oxford</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue71/rumsey-jefferies#author1">Sally Rumsey</a> and <a href="/issue71/rumsey-jefferies#author2">Neil Jefferies</a> explain the context and the decisions guiding the development of DataFinder, a data catalogue for the University of Oxford.</p>
</div>
</div>
</div>
<p>In 2012 the University of Oxford Research Committee endorsed a university ‘Policy on the management of research data and records’ [<a href="#1">1</a>]. Much of the infrastructure to support this policy is being developed under the Jisc-funded Damaro Project [<a href="#2">2</a>]. The nascent services that underpin the University’s RDM (research data management) infrastructure have been divided into four themes:</p>
<p><a href="http://www.ariadne.ac.uk/issue71/rumsey-jefferies" target="_blank">read more</a></p>
<p>Published: Thu, 13 Jun 2013 20:23:22 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue71/white">Mining the Archive: eBooks</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue71/white#author1">Martin White</a> looks through the <em>Ariadne</em> archive to track the development of ebooks.</p>
</div>
</div>
</div>
<p>My definition of being rich is being able to buy a book without looking at the price. I have long since lost count of the number of books in my house. The reality is that if I did carry out a stock-take I might be seriously concerned about both the total number and the last known time I can remember reading a particular book. Nevertheless I have few greater pleasures than being asked a question and knowing in which of our two lofts one or more books will be found with the answer. On many occasions I have found a definitive answer much more quickly than using Google.</p>
<p><a href="http://www.ariadne.ac.uk/issue71/white" target="_blank">read more</a></p>
<p>Published: Wed, 12 Jun 2013 19:21:11 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue71/eclap-rpt">ECLAP 2013: Information Technologies for Performing Arts, Media Access and Entertainment</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue71/eclap-rpt#author1">Marieke Guy</a> reports on the second international conference held by ECLAP, the e-library for performing arts.</p>
</div>
</div>
</div>
<p>The beautiful city of Porto was the host location for ECLAP 2013 [<a href="#1">1</a>], the 2nd International Conference on Information Technologies for Performing Arts, Media Access and Entertainment. Porto is the second largest city in Portugal after Lisbon and home of the Instituto Politécnico do Porto (IPP), the largest polytechnic in the country, with over 18,500 students.</p>
<p><a href="http://www.ariadne.ac.uk/issue71/eclap-rpt" target="_blank">read more</a></p>
<p>Published: Thu, 04 Jul 2013 20:46:57 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue71/will-rvw">Book Review: Powering Search - The Role of Thesauri in New Information Environments</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue71/will-rvw#author1">Leonard Will</a> reviews a comprehensive survey of the literature on the use of thesauri in information search processes and interfaces.</p>
</div>
</div>
</div>
<p><em>Powering Search</em> is a comprehensive review and synthesis of work that has been done over the past 50 years on the use of thesauri to make searching for information more effective. The book does not discuss the principles and practice of construction of information retrieval thesauri in any detail, but concentrates on the search process and on the user interface through which a searcher interacts with a body of information resources. It is written clearly: each chapter begins and ends with a summary of its content, and the first and last chapters summarise the whole book. There are copious references throughout and a full index.</p>
<p>As the author says in his conclusion:</p>
<blockquote><p>'This book has taken a new approach to thesauri by critiquing the relevant literatures of a variety of communities who share an interest in thesauri and their functions but who are not, it should be noted, closely collaborating at this time – research communities such as library and information science, information retrieval, knowledge organization, human-computer interaction, information architecture, information search behavior, usability studies, search user interface, metadata-enabled information access, interactive information retrieval, and searcher education.'</p>
</blockquote>
<p>One consequence of these disparate approaches is that terminology varies across communities: there are many interpretations of the meaning of <em>facet, category, keyword </em>or<em> taxonomy</em>, for example, which the author acknowledges, but he then uses these terms without saying precisely what definition he gives them.</p>
<h2 id="Information_Search_Processes">Information Search Processes</h2>
<p>Chapters 2 and 3 review studies on how people go about searching for information, leading to the perhaps self-evident conclusion that there are two types of approach. If a specific and well-defined piece of information is sought, people will amend and refine their queries in the light of initial results to get closer to what they seek. On the other hand, if the search requirement is less well defined, a browsing or 'berrypicking' approach is adopted to explore a subject area, picking up and assembling pieces of information and changing the destination as the exploration progresses. Both these approaches use an iterative procedure, within which a thesaurus can serve to make a search more precise, in the first case, or to show the broader context, in the second.</p>
<p>Chapter 4 deals with thesauri in Web-based search systems, and gives several examples of thesauri in digital libraries, subject gateways and portals, digital archives and linked data repositories. This is one way of grouping these examples, but it is not clear that there is any distinction in principle between the way thesauri can be used in each of them, or indeed in search interfaces to other types of document collections. The main distinction, which is not fully addressed, is whether the information resources being searched have been indexed with terms from the thesaurus being used, or whether the thesaurus is just a source of possible terms for searching the text, and possibly the metadata, of documents. More weight needs to be given to the statement in the introduction to ISO 25964-1:</p>
<blockquote><p>'If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design ...'</p>
</blockquote>
<p>In fact the book generally talks about <em>terms</em> rather than the approach taken by the current standards of considering unambiguously defined <em>concepts</em>, with terms just serving as convenient labels for these. Each concept may have many labels by which it can be retrieved, including one chosen as <em>preferred</em> for each language covered by the thesaurus.</p>
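The ISO 25964 principle quoted above, and the concept-versus-label distinction the review draws, can be illustrated with a toy sketch. All the data here is invented for illustration; it is not drawn from any real thesaurus.

```python
# Toy illustration: a thesaurus maps many labels (preferred and
# non-preferred terms) onto one concept, so an indexer and a searcher
# converge on the same concept even when they choose different words.
THESAURUS = {
    "automobile": "car",                        # non-preferred -> preferred label
    "motorcar": "car",
    "car": "car",                               # preferred label maps to itself
    "taxonomy": "classification scheme",
    "classification scheme": "classification scheme",
}

def concept(term):
    """Resolve any label to its preferred label, which stands in for the concept."""
    return THESAURUS.get(term.lower(), term.lower())

# The indexer tagged a document with "motorcar"; the searcher types "automobile".
index = {concept("motorcar"): ["doc-42"]}
results = index.get(concept("automobile"), [])
print(results)  # ['doc-42']
```

Because both labels resolve to the same concept, the relevant document is retrieved; searching the raw term strings instead would have missed it, which is exactly the gap the review notes between concept-based standards and term-based discussion.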
<p><a href="http://www.ariadne.ac.uk/issue71/will-rvw" target="_blank">read more</a></p>
<p>Published: Wed, 26 Jun 2013 18:00:00 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue70/andrew">Gold Open Access: Counting the Costs</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/andrew#author1">Theo Andrew</a> presents new data on the cost of Gold OA publishing at the University of Edinburgh.</p>
</div>
</div>
</div>
<p>Research Councils UK (RCUK) have recently announced a significant amendment to their open access (OA) policy which requires all research papers that result from research partly or wholly funded by RCUK to be made open access [<a href="#1">1</a>]. To comply with this policy, researchers must either: a) publish in an open access journal, termed Gold OA, which often incurs an article processing charge (APC); or b) ensure that a copy of the post-print is deposited in an appropriate repository, also known as Green OA.</p>
<p>A subsequent clarification from RCUK stated that Gold OA is their preferred mechanism for realising open access for the outputs they fund, and announced the award of block grants to eligible institutions to achieve this aim [<a href="#2">2</a>]. Where a Gold OA option is unavailable, Green OA is also acceptable; however, RCUK have indicated that the decision as to which route to take will ultimately be left up to institutions [<a href="#3">3</a>].</p>
<p>Since RCUK are the major funder of research in the United Kingdom, this new policy will not only have a major impact on how researchers publish their work, but also huge implications for their budgets. Many research institutions funded by RCUK are currently investigating how they will implement this policy and are looking at the costs for open access publication, and how they can support the adoption of open access within their organisation. The ball is very much in the court of institutions to decide how to play the open access game.</p>
<p>One of the key factors that will affect institutions is the cost that publishers will set for their APCs. So far RCUK have steered clear of suggesting an appropriate fee, leaving individual publishers to determine the market level of APCs as at present. Meanwhile there seems to be huge variability in costs. There is a general expectation that over time APCs will settle to a reasonable rate, and that journal subscriptions will similarly fall to reflect the gradual change in business model from subscription fees to APCs. Most publishers have not yet been upfront about what impact APCs will have on journal subscriptions, if any, and it is hard to access and assess real-life data. RSC Publishing is one notable exception, since it has introduced a system of waiving a proportion of APC fees based on institutional subscription costs.</p>
<p>Much of this transition period to full open access will have to be navigated through uncharted territory, where no one has a clear handle on the costs involved. The rationale of this article is to present data on article processing charges gathered over the past five years, to report on trends seen within these data, to suggest some approaches, and generally to contribute to and inform the policy discussion.</p>
<h2 id="The_Problem">The Problem</h2>
<p>To put some rough-and-ready figures on the table, the University of Edinburgh publishes in the region of 4,000-4,500 peer-reviewed journal articles per year; this figure does not include other publication types, like working papers, not affected by the RCUK policy. Assuming an average Article Processing Charge (APC) of £1500 [<a href="#4">4</a>], the total publication costs to make all of these outputs Gold would be in the region of £6m. It is clear that even with guaranteed funding from HEFCE, and other funders of research, large research-intensive universities will not be able to pay for all of their research to be published under Gold OA. How to allocate funding to researchers is a difficult question that many institutions are currently asking themselves - will it be on a first-come-first-served basis, funder-specific, or will REF-submitted material take priority?</p>
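The arithmetic behind that ballpark can be made explicit. This is a minimal sketch using only the figures quoted above; the output volume and the £1,500 average APC are the article's own estimates, not measured values.

```python
# Back-of-the-envelope Gold OA cost for a large research-intensive
# university, using the figures quoted in the text above.
articles_low, articles_high = 4_000, 4_500  # peer-reviewed articles per year
average_apc = 1_500                         # assumed average APC in GBP [4]

cost_low = articles_low * average_apc
cost_high = articles_high * average_apc
print(f"Estimated annual cost: £{cost_low:,} - £{cost_high:,}")
# Estimated annual cost: £6,000,000 - £6,750,000
```

Even the lower bound sits at the "region of £6m" the text cites, which is the crux of the allocation problem that follows.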
<p>Equally problematic are the difficulties we face in fully assessing an institution’s total spend on open access. Whilst it is possible to find out through aggregate sources like Web of Science how many articles are published in fully open access journals, it is virtually impossible to find out the number of open access articles published in hybrid journals, as there is currently no flag in the metadata which indicates the open status of the paper. A hybrid journal is a traditional subscription journal that offers open access to individual articles upon payment of an APC. Of course it is possible to find hybrid open access content through EuropePMC.org; however, this will only give a snapshot for the biomedical and life sciences. With current systems and processes it is virtually impossible to gauge this spend accurately.</p>
<h2 id="Cost_Data">Cost Data</h2>
<p>Unfortunately, financial data about open access publishing is scarce. The University of Edinburgh (UoE) has recently implemented account codes to allow the finance systems to track this spend going forwards; however, finding out costs retrospectively remains problematic. Furthermore, institutions are not typically in the habit of publishing this data with others. The institutions that have shared data show a degree of variability. In 2010, the foremost initial supporter and enabler of Gold Open Access publishing in the UK, the Wellcome Trust, found that the&nbsp;average cost of publication under the author-pays model was $2,367 (approximately £1,500) [<a href="#4">4</a>]. RCUK in their recent press release on block grants for open access estimate the average APC as £1,727 plus VAT [<a href="#2">2</a>], whilst, based on figures in the Finch Report, the University of Nottingham paid on average £1,216 [<a href="#5">5</a>].</p>
<p>All these figures are useful as they give a ballpark figure upon which further estimates can be based. The precise cost of individual APCs levied by publishers is generally unavailable in a form which easily enables further analysis. Typically this information is available from publishers’ Web sites; however, aggregating the data is cumbersome as there is no consistent way to interrogate the Web sites, and APCs commonly vary from title to title in a publisher’s portfolio. There have been some commendable attempts to gather this information, for example the SHERPA RoMEO listing of Publishers with Paid Options for Open Access [<a href="#7">7</a>]. Here about 100 publishers have been surveyed and their APCs are listed. A large cost variance exists for some publishers’ records as individual journals often have different APCs, and institutional subscriptions/memberships can also reduce costs in a non-uniform way. It takes a lot of effort to gather these data and keep them up to date. Other approaches have tried to crowd-source this activity, for example Ross Mounce’s survey of open access publishers, publications, licences and fees. Here approximately 130 publishers’ Web sites were surveyed to find out what licences are being used on the open access content, the cost being a secondary focus of the survey. Analysis of these data shows less than 5% of publishers claiming 'open access' are fully compliant with the Budapest Declaration on Open Access [<a href="#7">7</a>].</p>
<p>The data we present here is an attempt to enrich the data available to interested parties and make them available in a reusable format for further analysis. It comprises articles funded by the Wellcome Trust at the University of Edinburgh between 2007 and 2012. In total there are 260 articles published in a mixture of open access journals and traditional subscription journals with an open access option (sometimes known as hybrid). All of the journals charged an article processing fee. Overall, the total cost incurred was £452,713.40. The mean article processing charge was £1,741.21, with the median value £1,644.22. The full data can be accessed online at the Edinburgh DataShare repository [<a href="#8">8</a>].</p>
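The summary statistics quoted above are straightforward to recompute from the deposited dataset. A minimal sketch follows, assuming the data has been downloaded as a CSV with one APC amount per row; the filename-free interface and the `apc_gbp` column name are hypothetical, not the actual DataShare field names.

```python
import csv
import statistics

def apc_summary(path, column="apc_gbp"):
    """Return (count, total, mean, median) of the APCs in a CSV file.

    `column` is a hypothetical header name; adjust it to match the
    actual dataset downloaded from Edinburgh DataShare [8].
    """
    with open(path, newline="") as f:
        costs = [float(row[column]) for row in csv.DictReader(f)]
    return len(costs), sum(costs), statistics.mean(costs), statistics.median(costs)
```

As a sanity check on the figures in the text: a total of £452,713.40 spread over 260 articles gives 452713.40 / 260 ≈ £1,741.21, matching the reported mean.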
<p><a href="http://www.ariadne.ac.uk/issue70/andrew" target="_blank">read more</a></p>
<p>Published: Mon, 03 Dec 2012 20:23:29 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue70/cox-et-al">Upskilling Liaison Librarians for Research Data Management</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/cox-et-al#author1">Andrew Cox</a>, <a href="/issue70/cox-et-al#author2">Eddy Verbaan</a> and <a href="/issue70/cox-et-al#author3">Barbara Sen</a> explore the design of a curriculum to train academic librarians in the competencies to support Research Data Management.</p>
</div>
</div>
</div>
<p>For many UK HEIs, especially research-intensive institutions, Research Data Management (RDM) is rising rapidly up the agenda. Working closely with other professional services, and with researchers themselves, libraries will probably have a key role to play in supporting RDM. This role might include signposting institutional expertise in RDM; inclusion of the topic in information literacy sessions for PhD students and other researchers; advocacy for open data sharing; or contributing to the management of an institutional data repository.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/cox-et-al" target="_blank">read more</a></p>
<p>Published: Thu, 06 Dec 2012 19:27:43 +0000</p>
<h2><a href="http://www.ariadne.ac.uk/issue70/gartner">The LIPARM Project: A New Approach to Parliamentary Metadata</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/gartner#author1">Richard Gartner</a> outlines a collaborative project which aims to link together the digitised UK Parliamentary record by providing a metadata scheme, controlled vocabularies and a Web-based interface.</p>
</div>
</div>
</div>
<p>Parliamentary historians in the United Kingdom are particularly fortunate as their key primary source, the record of Parliamentary proceedings, is almost entirely available in digitised form. Similarly, those needing to consult and study contemporary proceedings as scholars, journalists or citizens have access to the daily output of the UK's Parliaments and Assemblies in electronic form shortly after their proceedings take place.</p>
<p>Unfortunately, the full potential of this resource for all of these users is limited by the fact that it is scattered throughout a heterogeneous information landscape and so cannot be approached as a unitary resource.&nbsp; It is not a simple process, for instance, to distinguish the same person if he or she appears in more than one of these collections or, for that matter, to identify the same legislation if it is referenced inconsistently in different resources. As a result, using it for searching or for more sophisticated analyses becomes problematic when one attempts to move beyond one of its constituent collections.</p>
<p>Finding some mechanism to allow these collections to be linked and so used as a coherent, integrated resource has been on the wish-list of Parliamentary historians and other stakeholders in this area for some time. In the mid-2000s, for instance, the History of Parliament Trust brought together the custodians of several digitised collections to examine ways in which this could be done. In 2011, some of these ideas came to fruition when JISC (Joint Information Systems Committee) funded a one-year project named LIPARM (Linking the Parliamentary Record through Metadata) which aimed to design a mechanism for encoding these linkages within XML architectures and to produce a working prototype for an interface which would enable the potential offered by this new methodology to be realised in practice.</p>
<p>This article explains the rationale of the LIPARM Project and how it uses XML to link together core components of the Parliamentary record within a unified metadata scheme. It introduces the XML schema, Parliamentary Metadata Language (PML), which was created by the project and the set of controlled vocabularies for Parliamentary proceedings which the project also created to support it.&nbsp; It also discusses the experience of the project in converting two XML-encoded collections of Parliamentary proceedings to PML and work on the prototype Web-based union catalogue which will form the initial gateway to PML-encoded metadata.</p>
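The kind of linkage PML encodes can be pictured with a toy example. This is an illustrative sketch only: the element and attribute names below are invented for demonstration and are not taken from the published PML schema. The essential idea is that a contribution references a controlled-vocabulary identifier rather than a printed name string:

```python
import xml.etree.ElementTree as ET

# Hypothetical record sketch (element names invented, not the real PML schema).
record = ET.Element("proceedings", {"collection": "example-hansard"})

# A member entry carries the controlled-vocabulary identifier for the person.
member = ET.SubElement(record, "member", {"id": "person:craig-j-1871"})
ET.SubElement(member, "name").text = "Sir James Craig"

# A contribution points back at the identifier, not at the name as printed,
# so the same person can be linked across differently edited collections.
contribution = ET.SubElement(record, "contribution",
                             {"memberRef": "person:craig-j-1871"})
ET.SubElement(contribution, "topic").text = "Government of Ireland Bill"

xml_string = ET.tostring(record, encoding="unicode")
```

Because the `memberRef` value is an identifier rather than a name form, a union catalogue can join contributions from separate collections by a simple identifier match.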
<h2 id="Background:_The_Need_for_Integrated_Parliamentary_Metadata">Background: The Need for Integrated Parliamentary Metadata</h2>
<p>The UK's Parliamentary record has been the focus of a number of major digitisation initiatives which have made its historical corpus available in almost its entirety: in addition, the current publishing operations of the four Parliaments and Assemblies in the UK ensure that the contemporary record is available in machine-readable form on a daily basis. Unfortunately, these collections have limited interoperability owing to their disparate approaches to data and metadata, which renders the federated searching and browsing of their contents currently impossible. In addition, the disparity of platforms on which they are offered, and the wide diversity of user interfaces they use to present the data (as shown by the small sample in Figure 1), make any research that extends beyond the confines of a single collection a time-consuming and cumbersome process.</p>
<p style="text-align: center; "><img alt="Figure 1: Four major collections of Parliamentary proceedings, each using a different interface" src="http://ariadne-media.ukoln.info/grfx/img/issue70-gartner/liparm-figure1.png" style="width: 640px; height: 231px;" title="Figure 1: Four major collections of Parliamentary proceedings, each using a different interface" /></p>
<p style="text-align: left; "><strong>Figure 1: Four major collections of Parliamentary proceedings, each using a different interface</strong></p>
<p>A more integrated approach to Parliamentary metadata offers major potential for new research: it would, for instance, allow the comprehensive tracking of an individual's career, including all of their contributions to debates and proceedings. It would allow the process of legislation to be traced automatically, voting patterns to be analysed, and the emergence of themes and topics in Parliamentary history to be studied on a large scale.</p>
<p>One example of the linkages that could usefully be made in an integrated metadata architecture can be seen in the career of Sir James Craig, the Prime Minister of Northern Ireland from 1921 to 1940. Figure 2 illustrates some of the connections that could be made to represent his career:</p>
<p style="text-align: center; "><img alt="Figure 2: Sample of potential linkages for a Parliamentarian" src="http://ariadne-media.ukoln.info/grfx/img/issue70-gartner/figure2-james-craig-v3.jpg" style="width: 640px; height: 331px;" title="Figure 2: Sample of potential linkages for a Parliamentarian" /></p>
<p style="text-align: center; "><strong>Figure 2: Sample of potential linkages for a Parliamentarian</strong></p>
<p>The connections shown here are to the differing ways in which he is named in the written proceedings, to his tenures in both Houses, the constituencies he represented, the offices he held and the contributions he made to debates. Much more complex relationships are, of course, possible and desirable.</p>
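These linkages amount, in effect, to an authority file: every printed form of a name resolves to a single controlled identifier. A minimal sketch of that resolution step, with identifiers and name variants invented for illustration (they are not drawn from the LIPARM vocabularies):

```python
# Hypothetical authority file: several printed name forms, one identifier.
PERSON_AUTHORITY = {
    "Sir James Craig": "example:person/0001",
    "Mr. J. Craig": "example:person/0001",
    "Viscount Craigavon": "example:person/0001",
}

def person_id(name_as_printed):
    """Resolve a printed name form to its controlled-vocabulary identifier."""
    return PERSON_AUTHORITY.get(name_as_printed)

# Two collections that print the same person differently can now be
# searched as one resource: both rows resolve to the same identifier.
commons_row = {"speaker": "Mr. J. Craig", "debate": "Home Rule"}
senate_row = {"speaker": "Viscount Craigavon", "debate": "Finance Bill"}
same_person = person_id(commons_row["speaker"]) == person_id(senate_row["speaker"])
```

A federated search over both collections then needs only an identifier comparison, however inconsistently the proceedings themselves name the member.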
<p>The advantages of an integrated approach to metadata which would allow these connections to be made have long been recognised by practitioners in this field, and several attempts have been made to create potential strategies for realising them. But it was only in 2011 that these took more concrete form when a one-day meeting sponsored by JISC brought together representatives from the academic, publishing, library and archival sectors to devise a strategy for integrating Parliamentary metadata. Their report proposed the creation of an XML schema for linking core components of this record and the creation of a series of controlled vocabularies for these components which could form the basis of the semantic linkages to be encoded in the schema [<a href="#1">1</a>]. These proposals then formed the basis of a successful bid to JISC for a project to put them into practice: the result was the LIPARM (Linking the Parliamentary Record through Metadata) Project.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/gartner" target="_blank">read more</a></p>
<p><small>Published: 30 November 2012</small></p>
<h2 id="Motivations_for_the_Development_of_a_Web_Resource_Synchronisation_Framework"><a href="http://www.ariadne.ac.uk/issue70/lewis-et-al">Motivations for the Development of a Web Resource Synchronisation Framework</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/lewis-et-al#author1">Stuart Lewis</a>, <a href="/issue70/lewis-et-al#author2">Richard Jones</a> and <a href="/issue70/lewis-et-al#author3">Simeon Warner</a> explain some of the motivations behind the development of the ResourceSync Framework.</p>
</div>
</div>
</div>
<p>This article describes the motivations behind the development of the ResourceSync Framework. The Framework addresses the need to synchronise resources between Web sites. &nbsp;Resources cover a wide spectrum of types, such as metadata, digital objects, Web pages, or data files. &nbsp;There are many scenarios in which the ability to perform some form of synchronisation is required. Examples include aggregators such as Europeana that want to harvest and aggregate collections of resources, or preservation services that wish to archive Web sites as they change.</p>
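The core decision a synchronisation framework supports can be sketched very simply: a destination must work out which source resources are new or have changed since it last harvested. A minimal illustration follows, with URLs and timestamps invented for demonstration; the real ResourceSync Framework defines considerably more machinery (resource lists, change lists, and so on) than this naive comparison:

```python
from datetime import datetime, timezone

# Invented example state: what the source holds, and what the destination
# last harvested, keyed by resource URI with last-modified timestamps.
source = {
    "http://example.org/res/1": datetime(2012, 11, 1, tzinfo=timezone.utc),
    "http://example.org/res/2": datetime(2012, 11, 20, tzinfo=timezone.utc),
}
destination = {
    "http://example.org/res/1": datetime(2012, 11, 1, tzinfo=timezone.utc),
    "http://example.org/res/2": datetime(2012, 10, 5, tzinfo=timezone.utc),
}

# A resource needs harvesting if it is new at the source, or newer there
# than the copy the destination already holds.
to_fetch = [uri for uri, modified in source.items()
            if uri not in destination or destination[uri] < modified]
```

An aggregator such as Europeana, or an archiving service, repeats exactly this decision at scale; the framework's job is to let sources publish the information needed to make it efficiently.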
<p><a href="http://www.ariadne.ac.uk/issue70/lewis-et-al" target="_blank">read more</a></p>
<p><small>Published: 3 December 2012</small></p>
<h2 id="SUSHI_Delivering_Major_Benefits_to_JUSP"><a href="http://www.ariadne.ac.uk/issue70/meehan-et-al">SUSHI: Delivering Major Benefits to JUSP</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/meehan-et-al#author1">Paul Meehan</a>, <a href="/issue70/meehan-et-al#author2">Paul Needham</a> and <a href="/issue70/meehan-et-al#author3">Ross MacIntyre</a> explain the enormous time and cost benefits in using SUSHI to support rapid gathering of journal usage reports into the JUSP service.</p>
</div>
</div>
</div>
<p>A full-scale implementation of the Journal Usage Statistics Portal (JUSP) would not be possible without the automated data harvesting afforded by the Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol. Estimated time savings in excess of 97% compared with manual file handling have allowed JUSP to expand its service to more than 35 publishers and 140 institutions by September 2012. An in-house SUSHI server also allows libraries to download quality-checked data from many publishers via JUSP, removing the need to visit numerous Web sites. The protocol thus affords enormous cost and time benefits for the centralised JUSP service and for all participating institutions. JUSP has also worked closely with many publishers to develop and implement SUSHI services, pioneering work to benefit both the publishers and the UK HE community.</p>
<p style="text-align: center; "><img alt="Journal Usage Statistics Portal (JUSP)" src="http://ariadne-media.ukoln.info/grfx/img/issue70-meehan-et-al/jusp-logo.png" style="width: 145px; height: 133px;" title="Journal Usage Statistics Portal (JUSP)" /></p>
<h2 id="JUSP:_Background_to_the_Service">JUSP: Background to the Service</h2>
<p>The management of journal usage statistics can be an onerous task at the best of times. The introduction of the COUNTER [<a href="#1">1</a>] Code of Practice in 2002 was a major step forward, allowing libraries to collect consistent, audited statistics from publishers. By July 2012, 125 publishers offered the JR1 report, providing the number of successful full-text downloads. In the decade since COUNTER reports became available, analysis of the reports has become increasingly important, with library managers, staff and administrators increasingly forced to examine journal usage to inform and rationalise purchasing and renewal decisions.</p>
<p>In 2004, JISC Collections commissioned a report [<a href="#2">2</a>] which concluded that there was a definite demand for a usage statistics portal for the UK HE community; with some sites subscribing to more than 100 publishers, just keeping track of access details and downloading reports was becoming a significant task in itself, much less analysing the figures therein. There followed a report into the feasibility of establishing a ‘Usage Statistics Service’ carried out by Key Perspectives Limited and in 2008 JISC issued an ITT (Invitation To Tender). By early 2009 a prototype service, known as the Journal Usage Statistics Portal (JUSP) had been developed by a consortium including Evidence Base at Birmingham City University, Cranfield University, JISC Collections and Mimas at The University of Manchester; the prototype featured a handful of publishers and three institutions. However, despite a centralised service appearing feasible [<a href="#3">3</a>], the requirement to download and process data in spreadsheet format, and the attendant time taken, still precluded a full-scale implementation across UK HE.</p>
<p style="text-align: center; "><img alt="COUNTER" src="http://ariadne-media.ukoln.info/grfx/img/issue70-meehan-et-al/counter-header.png" style="width: 640px; height: 45px;" title="COUNTER" /></p>
<p>Release 3 of the COUNTER Code of Practice in 2009, however, mandated the use of the newly introduced Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol [<a href="#4">4</a>], a mechanism for the machine-to-machine transfer of COUNTER-compliant reports; this produced dramatic efficiencies of time and cost in the gathering of data from publishers. The JUSP team began work to implement SUSHI for a range of publishers and expanded the number of institutions. By September 2012, the service had grown significantly, whilst remaining free at the point of use, and encompassed 148 participating institutions and 35 publishers. To date more than 100 million individual points of data have been collected by JUSP, all via SUSHI, a scale that would have been impossible without such a mechanism in place or without massive additional staff costs.</p>
<p>JUSP offers much more than basic access to publisher statistics, however; the JUSP Web site [<a href="#5">5</a>] details the numerous reports and analytical tools on offer, together with detailed user guides and support materials. The cornerstone of the service though is undeniably its SUSHI implementation, both in terms of gathering the COUNTER JR1 and JR1a data and - as developed more recently - its own SUSHI server, enabling institutions to re-harvest data into their own library management tools for local analysis.</p>
<h2 id="JUSP_Approach_to_SUSHI_Development_and_Implementation">JUSP Approach to SUSHI Development and Implementation</h2>
<p>Once the decision was made to scale JUSP into a full service, the development of SUSHI capability became of paramount importance. The team had been able to handle spreadsheets of data on a small scale, but the expected upscale to 100+ institutions and multiple publishers within a short time frame meant that this would very quickly become unmanageable and costly in staff time and effort. These constraints were proving a source of worry at many institutions too: while some sites could employ staff whose role revolved around usage statistics gathering and analysis, this was not possible at every institution, nor especially straightforward for institutions juggling dozens, if not hundreds, of publisher agreements and deals.</p>
<p>Two main issues were immediately apparent in the development of the SUSHI software. The first was the lack of any standard SUSHI client software that we could use or adapt; the second, more worrying, was the lack of SUSHI support at a number of major publishers. While many publishers use an external company or platform such as Atypon, MetaPress or HighWire to collect and provide usage statistics, others had made little or no progress in implementing SUSHI support by late 2009; where SUSHI servers were in place, these were often untested or unused by consumers.</p>
<p>An ultimate aim for JUSP was to develop a single piece of software that would seamlessly interact with any available SUSHI repository and download data for checking and loading into JUSP. However, the only client software available by 2009 was written and designed to work in the Windows environment, or used Java, which can be complex to work with and in which the JUSP team had limited expertise. The challenge therefore became to develop a much simpler set of code using Perl and/or PHP, common and comparatively simple programming languages which were much more familiar to the JUSP team.</p>
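The shape of the report request such a client sends can be sketched as follows. This is a hedged illustration only: the actual JUSP clients were written in Perl/PHP, and the requestor and customer identifiers below are placeholders, not any publisher's real service details; the element names follow the general pattern of a SUSHI report request rather than any specific server's requirements.

```python
# Skeleton of a SUSHI-style report request (identifiers are placeholders).
SUSHI_REQUEST_TEMPLATE = """<ReportRequest>
  <Requestor><ID>{requestor_id}</ID></Requestor>
  <CustomerReference><ID>{customer_id}</ID></CustomerReference>
  <ReportDefinition Name="JR1" Release="3">
    <Filters>
      <UsageDateRange><Begin>{begin}</Begin><End>{end}</End></UsageDateRange>
    </Filters>
  </ReportDefinition>
</ReportRequest>"""

def build_request(requestor_id, customer_id, begin, end):
    """Fill in the per-institution, per-period details of one harvest call."""
    return SUSHI_REQUEST_TEMPLATE.format(
        requestor_id=requestor_id, customer_id=customer_id,
        begin=begin, end=end)

# One JR1 harvest call for one (invented) institution and month.
request_body = build_request("jusp-example", "inst-0042",
                             "2012-09-01", "2012-09-30")
```

Looping a builder like this over every institution, publisher and month is what replaces the manual downloading of spreadsheets, and is where the time savings described above come from.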
<p><a href="http://www.ariadne.ac.uk/issue70/meehan-et-al" target="_blank">read more</a></p>
<p><small>Published: 5 December 2012</small></p>
<h2 id="Hydra_UK_Flexible_Repository_Solutions_to_Meet_Varied_Needs"><a href="http://www.ariadne.ac.uk/issue70/hydra-2012-11-rpt">Hydra UK: Flexible Repository Solutions to Meet Varied Needs</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/hydra-2012-11-rpt#author1">Chris Awre</a> reports on the Hydra UK event held on 22 November 2012 at the Library of the London School of Economics.</p>
</div>
</div>
</div>
<p>Hydra, as described in the opening presentation of this event, is a project initiated in 2008 by the University of Hull, Stanford University, University of Virginia, and DuraSpace to work towards a reusable framework for multi-purpose, multi-functional, multi-institutional repository-enabled solutions for the management of digital content collections [<a href="#1">1</a>]. An initial timeframe for the project of three years had seen all founding institutional partners successfully implement a repository demonstrating these characteristics.&nbsp; Key to the aims of the project has always been to generate wider interest outside the partners to foster not only sustainability in the technology, but also sustainability of the community around this open source development.&nbsp; Hydra has been disseminated through a range of events, particularly through the international Open Repositories conferences [<a href="#2">2</a>], but the sphere of interest in Hydra has now stimulated the holding of specific events in different countries: Hydra UK is one of them.</p>
<p>The Hydra UK event was held on 22 November 2012, kindly hosted by the Library at the London School of Economics.&nbsp; Representatives from institutions across the UK, but also Ireland, Austria and Switzerland, came together to learn about the Hydra Project, and to discuss how Hydra might serve their digital content collection management needs.&nbsp; 29 delegates from 21 institutions were present, representing mostly universities but also the archive, museum and commercial sectors.&nbsp; Five presentations were given on Hydra, focusing on the practical experience of using this framework and how it fits into overall system architectures, and time was also deliberately given over to discussion of more specific topics of interest and to allow delegates the opportunity to voice their requirements.&nbsp; The presentations were:</p>
<ul>
<li>Introduction to Hydra</li>
<li>Hydra @ Hull</li>
<li>Hydra @ Glasgow Caledonian University</li>
<li>Hydra @ LSE</li>
<li>Hydra @ Oxford</li>
</ul>
<h2 id="Introduction_to_Hydra">Introduction to Hydra</h2>
<p>Chris Awre from the University of Hull gave the opening presentation.&nbsp; The starting basis for Hydra was mutual recognition by all the founding partners that a repository should be an enabler for managing digital content collections, not a constraint or simply a silo of content.&nbsp; Digital repositories have been put forward and applied as a potential solution for a variety of use cases over the years, and been used at different stages of a content lifecycle.&nbsp;</p>
<p style="text-align: center; "><img alt="LSE Library (Photo courtesy of Simon Lamb, University of Hull.)" src="http://ariadne-media.ukoln.info/grfx/img/issue70-hydra-2012-11-rpt/figure1-hydra-rpt-lse-library.jpg" style="width: 178px; height: 178px;" title="LSE Library (Photo courtesy of Simon Lamb, University of Hull.)" /></p>
<p style="text-align: center; "><strong>Figure 1: LSE Library</strong><br /><small>(Photo courtesy of Simon Lamb, University of Hull.)</small></p>
<p>To avoid producing a landscape of multiple repositories all having to be managed to cover these use cases, the Hydra Project sought to identify a way in which one repository solution could be applied flexibly to meet the requirements of different use cases. The idea of a single repository with multiple points of interaction came into being – Hydra – and the concept of individual Hydra ‘head’ solutions.</p>
<p>The Hydra Project is informed by two main principles:</p>
<ul>
<li>No single system can provide the full range of repository-based solutions for a given institution’s needs,
<ul><li>…yet sustainable solutions require a common repository infrastructure.</li></ul>
</li>
<li>No single institution can resource the development of a full range of solutions on its own,
<ul><li>…yet each needs the flexibility to tailor solutions to local demands and workflows.</li></ul>
</li>
</ul>
<p>The Hydra Project has sought to provide the common infrastructure upon which flexible solutions can be built, and shared.</p>
<p>The recognition that no single institution can achieve everything it might want for its repository has influenced the project from the start. &nbsp;To quote an African proverb, ‘If you want to go fast go alone, if you want to go far, go together’. Working together has been vital.&nbsp; To organise this interaction, Hydra has structured itself through three interleaving sub-communities, the Steering Group, the Partners and Developers, as shown by Figure 2.</p>
<p style="text-align: center; "><img alt="Figure 2: Hydra community structure" src="http://ariadne-media.ukoln.info/grfx/img/issue70-hydra-2012-11-rpt/hydra-community-structure-v4.jpg" style="width: 661px; height: 506px;" title="Figure 2: Hydra community structure" /></p>
<p style="text-align: center; "><strong>Figure 2: Hydra community structure</strong></p>
<p>The concept of a Hydra Partner has emerged from this model of actively working together, and the project has a Memorandum of Understanding (MoU) process for any institution wishing to have its use of, and contribution and commitment to Hydra recognised.&nbsp; Starting with the original four partners in 2008, Hydra now has 11 partners, with two more in the process of joining.&nbsp; All have made valuable contributions and helped to make Hydra better.&nbsp; Hydra partnership is not the only route to involvement, though, and there are many in the Hydra developer community who are adopters of the software, but who have not reached a stage where partnership is appropriate.</p>
<p>The technical implementation of Hydra was supported through early involvement in the project by MediaShelf, a commercial technical consultancy focused on repository solutions.&nbsp; All Hydra software is, though, open source, available under the Apache 2.0 licence, and all software code contributions are managed in this way.&nbsp; The technical implementation is based on a set of core principles that describe how content objects should be structured within the repository, and with an understanding that different content types can be managed using different workflows.&nbsp; Following these principles, Hydra could be implemented in a variety of ways: the technical direction taken by the project is simply the one that suited the partners at the time.</p>
<p>Hydra as currently implemented is built on existing open source components, and the project partners are committed to supporting these over time:</p>
<ul>
<li>Fedora: one of the digital repository systems maintained through DuraSpace [<a href="#3">3</a>]</li>
<li>Apache Solr: powerful indexing software now being used in a variety of discovery solutions [<a href="#4">4</a>]</li>
<li>Blacklight: a next-generation discovery interface, which has its own community around it [<a href="#5">5</a>]</li>
<li>Hydra plugin: a collection of components that facilitate workflow in managing digital content [<a href="#6">6</a>]</li>
<li>Solrizer: a component that indexes Fedora-held content into a Solr index</li>
</ul>
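The role Solrizer plays in this stack can be pictured with a toy sketch: flattening a repository object's metadata into a single indexable document. The field-name suffix and the in-memory "index" below are invented for illustration; the real component indexes Fedora-held content into an Apache Solr index rather than a Python list.

```python
# Stand-in for a Solr core, for illustration only.
solr_index = []

def solrize(pid, metadata):
    """Flatten one repository object's metadata into an indexable document."""
    doc = {"id": pid}
    for field, value in metadata.items():
        # The '_t' suffix mimics the common Solr convention of dynamic
        # text fields; the real field names depend on the Solr schema.
        doc[f"{field}_t"] = value
    solr_index.append(doc)
    return doc

# Index one (invented) repository object so Blacklight-style discovery
# can search over its title and creator.
doc = solrize("hull:1234", {"title": "Annual Report",
                            "creator": "University of Hull"})
```

The point of the design is the separation of concerns: Fedora remains the store of record, while the flattened documents exist purely to make discovery fast.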
<p>These components are arranged in the architecture shown in Figure 3.</p>
<p style="text-align: center; "><img alt="Figure 3: Hydra architecture" src="http://ariadne-media.ukoln.info/grfx/img/issue70-hydra-2012-11-rpt/figure3-hydra-architecture-v4.jpg" style="width: 543px; height: 258px;" title="Figure 3: Hydra architecture" /></p>
<p style="text-align: center; "><strong>Figure 3: Hydra architecture</strong></p>
<p>A common feature of the last three components in the list above is the use of Ruby on Rails as the coding language and its ability to package up functionality in discrete ‘gems’.&nbsp; This was consciously chosen for Hydra because of its agile programming capabilities, its use of the MVC (Model–View–Controller) structure, and its testing infrastructure.&nbsp; The choice has been validated on a number of occasions as Hydra has developed.&nbsp; However, it was noted that other coding languages and systems could be used to implement Hydra where appropriate.&nbsp; This applies to all the main components, even Fedora.&nbsp; Whilst a powerful and flexible repository solution in its own right, Fedora has proved to be complex to use: Hydra has sought in part to tap this capability through simpler interfaces and interactions.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/hydra-2012-11-rpt" target="_blank">read more</a></p>
<p><small>Published: 13 December 2012</small></p>
<h2 id="IFLA_World_Library_and_Information_Congress_2012"><a href="http://www.ariadne.ac.uk/issue70/ifla-2012-08-rpt">IFLA World Library and Information Congress 2012</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/ifla-2012-08-rpt#author1">Marieke Guy</a> reports on the 78th IFLA General Conference and Assembly held in Helsinki, Finland over 11-17 August 2012.</p>
</div>
</div>
</div>
<p>The Sunday newcomers session chaired by <strong>Buhle Mbambo-Thata</strong> provided us with some insight into the sheer magnitude of IFLA (as most people seem to call it) or the World Library and Information Congress (to give the formal name) [<a href="#1">1</a>]. This year’s congress had over 4,200 delegates from 120 different countries, though over a thousand of these were Finnish librarians making the most of the locality of this year’s event.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/ifla-2012-08-rpt" target="_blank">read more</a></p>
<p><small>Published: 11 December 2012</small></p>
<h2 id="International_Conference_on_Theory_and_Practice_of_Digital_Libraries_TPDL_2012"><a href="http://www.ariadne.ac.uk/issue70/tpdl-2012-rpt">International Conference on Theory and Practice of Digital Libraries (TPDL) 2012</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/tpdl-2012-rpt#author1">Anna Mastora</a> and <a href="/issue70/tpdl-2012-rpt#author2">Sarantos Kapidakis</a> report on TPDL 2012 held at Paphos, Cyprus, over 23-27 September 2012.</p>
</div>
</div>
</div>
<p>The 16<sup>th</sup> International Conference on Theory and Practice of Digital Libraries (TPDL) 2012 [<a href="#1">1</a>] was another successful event in the series of ECDL/TPDL conferences which has been the leading European scientific forum on digital libraries for 15 years. Across these years, the conference has brought together researchers, developers, content providers and users in the field of digital libraries by addressing issues in the area where theoretical and applied research meet, such as digital library models, architectures, functionality, users, and quality.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/tpdl-2012-rpt" target="_blank">read more</a></p>
<p><small>Published: 16 December 2012</small></p>
<h2 id="Online_Information_2012"><a href="http://www.ariadne.ac.uk/issue70/online-2012-rpt">Online Information 2012</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/online-2012-rpt#author1">Marieke Guy</a> reports on the largest gathering of information professionals in Europe.</p>
</div>
</div>
</div>
<p>Online Information [<a href="#1">1</a>] is an interesting conference as it brings together information professionals from both the public and the private sector. The opportunity to share experiences from these differing perspectives doesn’t happen that often and brings real benefits, such as highly productive networking. This year’s Online Information, held on 20-21 November, felt like a slightly different event to previous years.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/online-2012-rpt" target="_blank">read more</a></p>
<p><small>Published: 16 December 2012</small></p>
<h2 id="Book_Review_User_Studies_for_Digital_Library_Development"><a href="http://www.ariadne.ac.uk/issue70/aytac-rvw">Book Review: User Studies for Digital Library Development</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/aytac-rvw#author1">Selenay Aytac</a> reviews a collection of essays on user studies and digital library development that provides a concise overview of a variety of digital library projects and examines major research trends relating to digital libraries.</p>
</div>
</div>
</div>
<p><em>User Studies for Digital Library Development</em> provides a concise overview of a variety of digital library projects and examines major research trends relating to digital libraries. While there are many books on user studies and digital library development, this work operates at the junction of these two domains and stands out for its insights, balance, and quality of its case-based investigations. The book brings together points of view from different professional communities, including practitioners as well as researchers.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/aytac-rvw" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue70/azzolini-rvw">Book Review: The Embedded Librarian</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/azzolini-rvw#author1">John Azzolini</a> reviews a comprehensive overview of embedded librarianship, a new model of library service that promises to enhance the strategic value of contemporary knowledge work.</p>
</div>
</div>
</div>
<p>Librarianship as a profession is confronting a growing demand to prove its worth. Library patrons expect utility. The organisations that fund them pre-suppose a contribution to their bottom lines.</p>
<p>The calls for this proof come from librarians themselves as much as from their employers. And the tone of the questioning is persistent if not redundant. It can be distilled to a fundamental query: Can the library sustain its basic mission of effectively and efficiently fulfilling its users' information needs given the technological, social, and economic developments that are transforming how people interact with data, documents, and each other?</p>
<h2 id="Librarianship:_In_Search_of_the_Value_Proposition">Librarianship: In Search of the Value Proposition</h2>
<p>These transformations have been occurring for some time, in different areas of living and working. Though not flowing from a single source, for librarians the impacts from these changes have seemingly converged on their profession as if they were collusive forces.</p>
<p>A global financial crisis and its lingering downturns have resulted in deeper budget cuts for many departments in every type of institution, public and private. A rising trend toward direct information consumption has caused many everyday users as well as executives to believe that removing librarians from the knowledge cycle is the next logical step. Caught within the sights of cost-conscious decision makers, libraries and information centres have become vulnerable to downsizing.</p>
<p>Students enter universities - even secondary schools - wedded unconsciously to their handhelds, always connected, assuming unmitigated and near-immediate digital satisfaction for their knowledge wants. Most of them were born into this socio-technical life-world as if it were a natural order. They know and expect nothing else. In such an environment, librarians orchestrate access but need not be confronted. They maintain crucial databases and finding aids, but can do so unseen and disembodied. They can be relegated to infrastructural innards.</p>
<p>For-profit organisations, the home of law firm and business librarians, are looking upon the outsourcing of support staff with increasing favour. And while library positions have not yet been handed over wholesale to third-party providers, there is industry trepidation that it could move in that direction. The threat is vague but distinctly present.</p>
<p>Many have taken to the outlets of library opinion and prediction, warning of impending disintermediation and possible obsolescence if the field fails to embrace drastic changes in how it carries out its service mission. Blogs, journals, and conferences are animated with calls to re-conceptualise philosophies and re-direct core methods. Some commentators merely emit distress signals on behalf of the library community. They are invocations of crisis without even a stab at real solutions. Others, however, are serious attempts to map out alternative pathways to a more stable occupational future. These need to be reckoned with.</p>
<p>A common path taken by the more constructive endeavours is demonstrating how librarianship can re-establish its value in a rapidly changing environment. This value is understood to be the knowledge-creating and disseminating efficacies that libraries bring to their users more ably and with less cost than other institutions. Since libraries are housed and financially supported by parent organisations of some kind, the value is usually construed as a combination of business and mission-relevant attributes. The emphasis on mission may be more pronounced in academic and public libraries, while corporate and firm libraries stress the financial aspects, but it is ultimately about how management assesses the library's contributions to the organisation's long-term integrity. Granted, the value has a large practical component for a library's patrons; the direct benefits are the answers, leads, and guidance they obtain when visiting the reference desk or searching the collections. However, the final criterion for most libraries will be the value proposition attributed to them by upper-level decision makers. User satisfaction is a valuable standard, but in the end it is often translated into a determination of whether the library produces distinct results in light of the resources devoted to maintaining it.</p>
<p>A concrete attempt to re-assert the business and service value of librarians has been the adoption of the practice model known as embedded librarianship. Although it has been applied in libraries in one form or another for a few decades - without necessarily using the word ‘embedded’ - only in the past several years has it risen to widespread notability. Judging by the upsurge in professional discussions and published cases devoted to this approach, librarians of many types are expressing keen interest in the value-enhancing potential of embedding themselves. Its contemporary significance is fully examined by David Shumaker in <em>The Embedded Librarian: Innovative Strategies for Taking Knowledge Where It's Needed</em>. The author, an associate professor at The Catholic University of America's School of Library and Information Science in Washington, D.C., is a well-known chronicler of embedded practices. This book is the field's first attempt at a comprehensive review of embedded librarianship's shared features, variable manifestations, and elements for success among major types of libraries.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/azzolini-rvw" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue70/dobreva-rvw">Book Review: Information 2.0</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue70/dobreva-rvw#author1">Milena Dobreva</a> reviews the newly published book of Martin de Saulles which looks at the new models of information production, distribution and consumption.</p>
</div>
</div>
</div>
<p>Writing about information and the changes in the models of its production, distribution and consumption is no simple task. Besides the long-standing debate on what information and knowledge really mean, the world of current technologies is changing at a pace which inevitably influences all spheres of human activity. But the first of those spheres to tackle is perhaps that of information – how we create, disseminate, and use it.</p>
<p><a href="http://www.ariadne.ac.uk/issue70/dobreva-rvw" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue69/taylor">Making the Most of a Conference</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue69/taylor#author1">Stephanie Taylor</a> writes about how she made the most of a conference to promote and inform the work of a project.</p>
</div>
</div>
</div>
<p>I’ve been working with repositories in various ways for over five years, so I have, of course, attended the major international conference Open Repositories before. I have never actually presented anything or represented a specific project at the event, though. This year was different. This year I had a mission – to present a poster on the DataFlow Project [<a href="#1">1</a>] and to talk to people about the work we had been doing for the past 12 months and (I hoped) to interest them in using the Open Source (OS) systems we had developed during that period.</p>
<p><a href="http://www.ariadne.ac.uk/issue69/taylor" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue69/knight-et-al">Redeveloping the Loughborough Online Reading List System</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue69/knight-et-al#author1">Jon Knight</a>, <a href="/issue69/knight-et-al#author2">Jason Cooper</a> and <a href="/issue69/knight-et-al#author3">Gary Brewerton</a> describe the redevelopment of Loughborough University’s open source reading list system.</p>
</div>
</div>
</div>
<p>The Loughborough Online Reading Lists System (LORLS) [<a href="#1">1</a>] has been developed at Loughborough University since the late 1990s.&nbsp; LORLS was originally implemented at the request of the University’s Learning and Teaching Committee simply to make reading lists available online to students.&nbsp; The Library staff immediately saw the benefit of such a system in not only allowing students ready access to academics’ reading lists but also in having such access themselves. This was because a significant number of academics were bypassing the library when generating and distributing lists to their students, who were then in turn surprised when the library did not have the recommended books either in stock or in sufficient numbers to meet demand.</p>
<p>The first version of the system produced by the Library Systems Team was part of a project that also had a ‘reading lists amnesty’ in which academics were encouraged to provide their reading lists to the library which then employed some temporary staff over the summer to enter them into the new system.&nbsp; This meant that the first version of LORLS went live in July 2000 with a reasonable percentage of lists already in place.&nbsp; Subsequently the creation and editing of reading lists was made the responsibility of the academics or departmental admin staff, with some assistance from library staff.</p>
<p>LORLS was written in Perl, with a MySQL database back-end.&nbsp; Most user interfaces were delivered via the web, with a limited number of back-end scripts that helped the systems staff maintain the system and alert library staff to changes that had been made to reading lists.</p>
<p>Soon after the first version of LORLS went live at Loughborough, a number of other universities expressed an interest in using or modifying the system. Permission was granted by the University to release it as open source under the General Public Licence (GPL)[<a href="#2">2</a>].&nbsp; New versions were released as the system was developed and bugs were fixed. The last version of the original LORLS code base/data design was version 5, which was downloaded by sites worldwide.</p>
<h2 id="Redesign">Redesign</h2>
<p>By early 2007 it was decided to take a step back and see if there were things that could be done better in LORLS.&nbsp; Some design decisions made in 1999 no longer made sense eight years later.&nbsp; Indeed some of the database design was predicated on how teaching modules were supposed to work at Loughborough and it had already become clear that the reality of how they were deployed was often quite different.&nbsp; For example, during the original design, the principle was that each module would have a single reading list associated with it.&nbsp; Within a few years several modules had been found that were being taught by two (or more!) academics, all wanting their own independent reading list.</p>
<p>Some of the structuring of the data in the MySQL database began to limit how the system could be developed.&nbsp; The University began to plan an organisational restructuring shortly after the redesign of LORLS was commenced, and it was clear that the simple departmental structure was likely to be replaced by a more fluid school and department mix.</p>
<p>Library staff were also beginning to request new features that were thus increasingly awkward to implement.&nbsp; Rather than leap through hoops to satisfy them within the framework of the existing system, it made sense to add them into the design process for a full redesign.</p>
<p>It was also felt that the pure CGI-driven user interface could do with a revamp.&nbsp; The earlier LORLS user interfaces used only basic HTML forms, with little in the way of client-side scripting.&nbsp; Whilst that meant that they tended to work on any web browser and were pretty accessible, they were also a bit clunky compared to some of the newer dynamic web sites.</p>
<p>A distinct separation of the user interface from the back-end database was decided upon to improve localisation and portability of the system, as earlier versions of LORLS had already shown that many sites took the base code and then customised the user interface parts of the CGI scripts to their own look and feel.&nbsp; The older CGI scripts were a mix of user interaction elements and database access and processing, which made this task a bit more difficult than it really needed to be.</p>
<p>Separating the database code from the user interface code would let people easily tinker with one without unduly affecting the other.&nbsp; It would also allow local experimentation with multiple user-interface designs for different user communities or devices.</p>
<p>This implied that a set of application programming interfaces (APIs) would need to be defined. As asynchronous JavaScript and XML (AJAX)[<a href="#3">3</a>] interactions had been successfully applied in a number of recent projects the team had worked on, XML was chosen as the format to be used.&nbsp; At first the team experimented with simple object access protocol (SOAP)-style XML requests and responses, but it was soon realised that SOAP was far too heavyweight for most of the API calls, so a lighter ‘RESTful’ API was selected.&nbsp; The API was formed of CGI scripts that took normal parameters as input and returned XML documents for the client to parse and display.</p>
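<p>The shape of such an exchange – ordinary request parameters in, an XML document back for the client to parse – can be sketched as follows. This is a minimal illustration only: the endpoint URL, parameter names and XML element names below are hypothetical, not the actual LORLS API.</p>

```python
# Sketch of a client for a LORLS-style RESTful CGI API: plain query
# parameters go in, an XML document comes back for the client to parse.
# The URL, parameter names and element names here are hypothetical.
from urllib.parse import urlencode
from xml.etree import ElementTree


def build_request_url(base_url, **params):
    """Compose a GET request to a CGI script from ordinary parameters."""
    return base_url + "?" + urlencode(sorted(params.items()))


def parse_reading_list(xml_text):
    """Extract (title, author) pairs from a hypothetical XML response."""
    root = ElementTree.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("author"))
            for item in root.iter("item")]


url = build_request_url("https://example.ac.uk/cgi-bin/list_items",
                        module="ABC101", format="xml")
sample_response = """<list module="ABC101">
  <item><title>Programming Perl</title><author>Wall</author></item>
</list>"""
print(parse_reading_list(sample_response))
```

<p>Keeping the request format this plain is what makes the interface easy to call from AJAX code in the browser as well as from back-end scripts.</p>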
<p><a href="http://www.ariadne.ac.uk/issue69/knight-et-al" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue69/rusbridge-rvw">Book Review: I, Digital – A History Devoid of the Personal?</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue69/rusbridge-rvw#author1">Chris Rusbridge</a> reviews an edited volume that aims to fill a gap in ‘literature designed specifically to guide archivists’ thinking about personal digital materials’.</p>
</div>
</div>
</div>
<p>We are all too familiar with the dire predictions of coming Digital Dark Ages, when All Shall be Lost because of the fragility of our digital files and the transience of the formats. We forget, of course, that loss was always the norm. The wonderful documents in papyrus, parchment and paper that we so admire and wonder at, are the few lucky survivors of their times. Sometimes they have been carefully nurtured, sometimes they have been accidentally preserved. But almost all were lost!</p>
<p><a href="http://www.ariadne.ac.uk/issue69/rusbridge-rvw" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue68/elbert-et-al">Perceptions of Public Libraries in Africa</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue68/elbert-et-al#author1">Monika Elbert</a>, <a href="/issue68/elbert-et-al#author2">David Fuegi</a> and <a href="/issue68/elbert-et-al#author3">Ugne Lipeikaite</a> describe the principal findings of the study <em>Perceptions of Public Libraries in Africa</em> which served to provide evidence of how public libraries are perceived by their stakeholders.</p>
</div>
</div>
</div>
<p>This article presents a summary of some results of the study <em>Perceptions of Public Libraries in Africa</em> [<a href="#1">1</a>] which was conducted to research perceptions of stakeholders and the public towards public libraries in six African countries. The study is closely linked with the EIFL Public Library Innovation Programme [<a href="#2">2</a>], which awarded grants to public libraries in developing and transition countries to address a range of socio-economic issues facing their communities, including projects in Kenya, Ghana and Zambia.</p>
<p><a href="http://www.ariadne.ac.uk/issue68/elbert-et-al" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue68/fpw11-rpt">The Future of the Past of the Web</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue68/fpw11-rpt#author1">Matthew Brack</a> reports on the one-day international workshop 'The Future of the Past of the Web' held at the British Library Conference Centre, London on 7 October, 2011.</p>
</div>
</div>
</div>
<p>We have all heard at least some of the extraordinary statistics that attempt to capture the sheer size and ephemeral nature of the Web. According to the Digital Preservation Coalition (DPC), more than 70 new domains are registered and more than 500,000 documents are added to the Web every minute [<a href="#1">1</a>]. This scale, coupled with the Web's ever-evolving use, presents significant challenges to those concerned with preserving both its content and context.</p>
<p><a href="http://www.ariadne.ac.uk/issue68/fpw11-rpt" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue68/impact-rpt">IMPACT Final Conference 2011</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue68/impact-rpt#author1">Marieke Guy</a> reports on the two-day conference looking at the results of the IMPACT Project in making digitisation and OCR better, faster and cheaper.</p>
</div>
</div>
</div>
<p>The IMPACT Project (<strong>Imp</strong>roving <strong>Ac</strong>cess to <strong>T</strong>ext) [<a href="#1">1</a>] was funded by the European Commission back in 2007 to look at significantly advancing access to historical text using Optical Character Recognition (OCR) methods. As the project reaches its conclusion, one of its key objectives is sharing project outputs.</p>
<p><a href="http://www.ariadne.ac.uk/issue68/impact-rpt" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue68/azzolini-rvw">Book Review: The Future of Archives and Recordkeeping</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue68/azzolini-rvw#author1">John Azzolini</a> reviews an anthology of perceptive essays on the challenges presented to archival thought and practice by Web 2.0, postmodern perspectives, and cross-disciplinary interchanges.</p>
</div>
</div>
</div>
<p>Librarians, archivists, and records managers do not share identical challenges or controversies in their practical endeavours or theoretical queries. However, a common issue for all the information professions and a dominating topic of discussion in their literature is the fundamental change in the structure and distribution of knowledge caused by mass digitisation. The proliferation of daily digital content, in quantity, reach, and manifestation, is confronting them all with a disquieting role ambiguity. The expanding tools and expectations of Web 2.0 have made this self-questioning a recurrent one, but they have also stimulated invigorating debate on the purpose and direction of these fields. The perception is one of extraordinary change initiated by emerging technologies, unprecedented knowledge production and dissemination, and a new centralised role for the information user. In these galvanising changes leading library and archives practitioners are sensing opportunities for confirming the professions’ relevance, in the estimation of other scholarly disciplines and of society at large, but, perhaps most of all, in their own eyes as well.</p>
<p><a href="http://www.ariadne.ac.uk/issue68/azzolini-rvw" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue68/escidoc-rpt">eSciDoc Days 2011: The Challenges for Collaborative eResearch Environments</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue68/escidoc-rpt#author1">Ute Rusnak</a> reports on the fourth in a series of two-day conferences called eSciDoc Days, organised by FIZ Karlsruhe and the Max Planck Digital Library in Berlin over 26-27 October 2011.</p>
</div>
</div>
</div>
<p>eSciDoc is a well-known open source platform for creating eResearch environments using generic services and tools based on a shared infrastructure. This concept allows for managing research and publication data together with related metadata, internal and/or external links and access rights. Development of eSciDoc was initiated by a collaborative venture between FIZ Karlsruhe – Leibniz Institute for Information Infrastructure and the Max Planck Digital Library (MPDL) and was funded by the German Federal Ministry of Education and Research.</p>
<p><a href="http://www.ariadne.ac.uk/issue68/escidoc-rpt" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue67/editorial">Editorial Introduction to Issue 67: Changes Afoot</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue67/editorial#author1">Richard Waller</a> introduces Ariadne issue 67.</p>
</div>
</div>
</div>
<!-- start main content --><p>For readers who might have been wondering, I shall resist Mark Twain's remark about reports of his demise being exaggerated, and reassure you that while <em>Ariadne</em> has been undergoing changes to the way in which it will be delivered to the Web, it has been business as usual in the matter of the content, as you will see from the paragraphs that follow. Issue 67, while currently not looking any different, is in the process of being migrated to a new platform developed to enhance functionality and give a more user-friendly look and feel to the publication.</p>
<p><a href="http://www.ariadne.ac.uk/issue67/editorial" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue67/blackwell-hackneyBlackwell">Image 'Quotation' Using the C.I.T.E. Architecture</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue67/blackwell-hackneyBlackwell#author1">Christopher Blackwell</a> and <a href="/issue67/blackwell-hackneyBlackwell#author2">Amy Hackney Blackwell</a> describe with examples a digital library infrastructure that affords canonical citation for 'quoting' images, useful for creating commentaries, arguments, and teaching tools.</p>
</div>
</div>
</div>
<p>Quotation is the heart of scholarly argument and teaching, the activity of bringing insight to something complex by focused discussion of its parts. Philosophers who have reflected on the question of quotation have identified two necessary components: a name, pointer, or citation on the one hand and a reproduction or repetition on the other. Robert Sokolowski calls quotation a 'curious conjunction of being able to name and to contain' [<a href="#1">1</a>]; V.A. Howard is more succinct: quotation is 'replication-plus-reference' [<a href="#2">2</a>]. We are less interested in the metaphysical aspects of quotation than in the practical ones.</p>
<p>The tools and techniques described here were supported by the National Science Foundation under Grants No. 0916148 &amp; No. 0916421. Any opinions, findings and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the National Science Foundation (NSF).</p>
<h2 id="Quotation">Quotation</h2>
<p>Quotation, when accompanied by citation, allows us to bring the reader's attention to bear on a particular part of a larger whole efficiently and without losing the surrounding context. A work of Biblical exegesis, for example, can quote or merely cite 'Genesis 1:29' without having to reproduce the entire Hebrew Bible, or even the Book of Genesis; a reader can resolve that citation to a particular passage about the creation of plants, and can see that passage as a discrete node at the bottom of a narrowing hierarchy: Hebrew Bible, Genesis, Chapter 1, Verse 29. We take this for granted.</p>
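<p>The narrowing hierarchy behind a citation like 'Genesis 1:29' can be made concrete with a small parser. This is a minimal sketch of the conventional book chapter:verse form, offered purely as an illustration of how a citation resolves into discrete hierarchical parts:</p>

```python
import re


def parse_biblical_citation(citation):
    """Split a citation such as 'Genesis 1:29' into its hierarchy:
    book, then chapter, then verse."""
    match = re.fullmatch(r"(\S+(?:\s\S+)*)\s+(\d+):(\d+)", citation)
    if match is None:
        raise ValueError(f"not a book chapter:verse citation: {citation!r}")
    book, chapter, verse = match.groups()
    return {"book": book, "chapter": int(chapter), "verse": int(verse)}


print(parse_biblical_citation("Genesis 1:29"))
# {'book': 'Genesis', 'chapter': 1, 'verse': 29}
```

<p>The point is that the citation is compact, machine-actionable, and independent of any one printed edition – exactly the properties the authors want for images.</p>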
<p>Quoting a text is easy. But how can we quote an image? This remains difficult even in the 21st century where it is easy to reproduce digital images, pass them around through networks, and manipulate them on our desks.</p>
<p>A scholar wishing to refer to a particular part of an image will generally do something like this: She will open one version of an image in some editing software, select and 'cut' a section from it, and 'paste' that section into a document containing the text of her commentary or argument. She might add to the text of her argument a reference to the source of the image. The language that describes this process is that of mechanical work&nbsp;– cutting and pasting&nbsp;– rather than the language of quotation and citation. The process yields a fragment of an image with only a tenuous connection to the ontological hierarchy of the object of study. The same scholar who would never give a citation to '<em>The Bible</em>, page 12' rather than to 'Genesis 1:29' will, of necessity, cite an image-fragment in a way similarly unlikely to help readers find the source and locate the fragment in its natural context.</p>
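<p>The 'replication-plus-reference' idea for images amounts to a citation that names an image and a region within it. As a rough illustration, a region citation might append fractional coordinates to an image identifier; the URN shape below is modelled loosely on CITE image URNs, and its exact syntax should be treated as an assumption rather than the project's specification:</p>

```python
def parse_image_citation(urn):
    """Split an image URN with an optional @left,top,width,height region
    (each a fraction of the image's dimensions) into its two parts."""
    image_id, sep, region = urn.partition("@")
    if not sep:
        return image_id, None  # citation of the whole image
    left, top, width, height = (float(part) for part in region.split(","))
    return image_id, (left, top, width, height)


image_id, region = parse_image_citation(
    "urn:cite:hmt:vaimg.VA012RN-0013@0.25,0.40,0.30,0.05")
print(image_id)   # urn:cite:hmt:vaimg.VA012RN-0013
print(region)     # (0.25, 0.4, 0.3, 0.05)
```

<p>Because the region is expressed as fractions rather than pixels, the same citation resolves correctly against any resolution of the underlying image – the analogue of a verse reference working across editions.</p>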
<p><a href="http://www.ariadne.ac.uk/issue67/blackwell-hackneyBlackwell" target="_blank">read more</a></p>
<p><a href="http://www.ariadne.ac.uk/issue67/rinaldo-et-al">Retooling Special Collections Digitisation in the Age of Mass Scanning</a></p>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue67/rinaldo-et-al#author1">Constance Rinaldo</a>, <a href="/issue67/rinaldo-et-al#author2">Judith Warnement</a>, <a href="/issue67/rinaldo-et-al#author3">Tom Baione</a>, <a href="/issue67/rinaldo-et-al#author4">Martin R. Kalfatovic</a> and <a href="/issue67/rinaldo-et-al#author5">Susan Fraser</a> describe results from a study to identify and develop a cost-effective and efficient large-scale digitisation workflow for special collections library materials.</p>
</div>
</div>
</div>
<!-- start main content --><p>The Biodiversity Heritage Library (BHL) [<a href="#1">1</a>] is a consortium of 12 natural history and botanical libraries that co-operate to digitise the legacy literature of biodiversity held in their collections and to make it available for open access and responsible use as part of a global 'biodiversity commons' [<a href="#2">2</a>]. The participating libraries hold more than two million volumes of biodiversity literature collected over 200 years to support the work of scientists, researchers and students in their home institutions.</p>
<p><a href="http://www.ariadne.ac.uk/issue67/rinaldo-et-al" target="_blank">read more</a></p>
<p>Issue 67, feature article. Published Sun, 03 Jul 2011.</p>
<h2><a href="http://www.ariadne.ac.uk/issue67/dcc-2011-03-rpt">Institutional Challenges in the Data Decade</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue67/dcc-2011-03-rpt#author1">Marion Tattersall</a>, <a href="/issue67/dcc-2011-03-rpt#author2">Carmen O'Dell</a> and <a href="/issue67/dcc-2011-03-rpt#author3">John Lewis</a> report on Institutional Challenges in the Data Decade, organised by the Digital Curation Centre (DCC) in partnership with the White Rose University Consortium and held 1-3 March 2011 at the University of Sheffield.</p>
</div>
</div>
</div>
<p><a href="http://www.ariadne.ac.uk/issue67/dcc-2011-03-rpt" target="_blank">read more</a></p>
<p>Issue 67, event report. Published Sun, 03 Jul 2011.</p>
<h2><a href="http://www.ariadne.ac.uk/issue67/azzolini-rvw">Book Review: Envisioning Future Academic Library Services</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue67/azzolini-rvw#author1">John Azzolini</a> reviews a timely collection of essays that highlights the values of institutional leadership and resourcefulness in academic librarianship's engagements with Web 2.0.</p>
</div>
</div>
</div>
<p>Since networked information technology began its breathtaking transformation of knowledge practices, librarians have had a generous supply of thought leaders whose lifetime experience has permitted them to issue credible translations of the 'writing on the wall'. Recently, however, there seem to be many more analysts (and soothsayers), and much more anxious observation and published interpretation of such writing. And the message comes in red ink, in bold, and with distinct portent, when not downright ominous.</p>
<p><a href="http://www.ariadne.ac.uk/issue67/azzolini-rvw" target="_blank">read more</a></p>
<p>Issue 67, review. Published Sun, 03 Jul 2011.</p>
<h2><a href="http://www.ariadne.ac.uk/issue66/boot">Reading Van Gogh Online?</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue66/boot#author1">Peter Boot</a> shows how log analysis can be employed to assess a site's usability, usage, and users, using the Van Gogh letter edition as an example.</p>
</div>
</div>
</div>
<!-- v5 author edits, revised images and new table 4 : 2011-02-21-17-21 REW --><p>Large amounts of money are spent building scholarly resources on the web. Unlike online retailers, large publishers and banks, scholarly institutions tend not to monitor very closely the way visitors use their web sites. In this article I would like to show that a look at the traces users leave behind in the Web servers' log files can teach us much about our sites' usability and about the way visitors use them.</p>
<p>In 2009 the <a href="http://www.huygensinstituut.knaw.nl/">Huygens Institute</a> [<a href="#1">1</a>], together with the <a href="http://www.vangoghmuseum.nl/">Van Gogh Museum</a> [<a href="#2">2</a>], published a new edition of the letters of Vincent van Gogh. The complete edition was <a href="http://vangoghletters.org/vg/">published online</a> [<a href="#3">3</a>], and is accessible for free; there is also a six-volume book edition [<a href="#4">4</a>]. The online edition was reviewed in a number of publications [<a href="#5">5</a>][<a href="#6">6</a>][<a href="#7">7</a>]. I will use the server logs of the Van Gogh edition as an example of what we can learn about our visitors. I will focus not on the simple quantities, but try to assess the visitors' access patterns. When we created the edition, our assumption was that researchers would use the web site, while people who wanted to read the letters would favour the book. The desire to test that assumption was one of the reasons for embarking on this investigation.</p>
<p>When users view, or read, editions online, a constant exchange of traffic takes place between their browser (e.g. Firefox, Internet Explorer, Safari) and the web server where the edition is hosted. Web servers keep logs of this traffic, and inspecting the logs gives us an opportunity to see how people are actually using the editions that we create. When people buy a book, this shows, in some sense, their intention to use it. When people visit a web site, the server registers their visit and, depending on the design of the site, every page they read and every search they perform.</p>
<p>Most of the work on log analysis in scholarly environments has been done in the context of libraries researching the use of electronic journals [<a href="#8">8</a>]. In that context there is an obvious financial interest in accurate knowledge of usage patterns. The LAIRAH (Log Analysis of Internet Resources in the Arts and Humanities) study [<a href="#9">9</a>] used log analysis on portal sites in order to assess usage of digital resources in the arts and humanities. I believe the present article is the first reported study of actual usage data for a scholarly digital edition.</p>
<p>First I will discuss why these log data deserve investigation. I will then show what the data that we collect look like and discuss both their potential and their limitations. I will give a brief overview of the edition site, as the log data can only be understood in the context of the site's structure and navigational facilities. Finally, I will show a number of the things that can be done on the basis of the log files.</p>
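The raw material for a study like this is the server's access log: plain text, one request per line. As an illustrative sketch only (this is not the tooling used in the study, and the sample request line and path are invented), a few lines of Python can split an Apache/nginx 'combined'-format entry into the fields a usage analysis would draw on — client address, timestamp, requested page, status and referrer:

```python
import re

# Regular expression for the widely used Apache/nginx "combined" log
# format. Illustrative sketch; field names follow the format's layout.
COMBINED = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" '
    r'(?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_log_line(line: str):
    """Return a dict of fields from one combined-format log line, or None."""
    m = COMBINED.match(line)
    return m.groupdict() if m else None

# A made-up sample entry, shaped like a visit to a letter page.
sample = ('198.51.100.7 - - [21/Feb/2011:17:21:00 +0100] '
          '"GET /vg/letters/let001/letter.html HTTP/1.1" 200 5120 '
          '"http://vangoghletters.org/vg/" "Mozilla/5.0"')

fields = parse_log_line(sample)
print(fields["path"])    # /vg/letters/let001/letter.html
print(fields["status"])  # 200
```

Aggregating such records by visitor, session and page is what allows questions like 'do researchers search while casual readers browse?' to be asked of the data at all.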
<p><a href="http://www.ariadne.ac.uk/issue66/boot" target="_blank">read more</a></p>
<p>Issue 66, feature article. Published Sun, 30 Jan 2011.</p>
<h2><a href="http://www.ariadne.ac.uk/issue66/idcc-2010-rpt">International Digital Curation Conference 2010</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue66/idcc-2010-rpt#author1">Alex Ball</a> reports on the 6th International Digital Curation Conference, held on 7-8 December 2010 in Chicago.</p>
</div>
</div>
</div>
<!-- version v2: final edits after author review 2011-01-12 REW --><p>The International Digital Curation Conference has been held annually by the Digital Curation Centre (DCC) [<a href="#1">1</a>] since 2005, quickly establishing a reputation for high-quality presentations and papers. So much so that, as co-chair Allen Renear explained in his opening remarks, after attending the 2006 Conference in Glasgow [<a href="#2">2</a>] delegates from the University of Illinois at Urbana-Champaign (UIUC) offered to bring the event to Chicago.</p>
<p><a href="http://www.ariadne.ac.uk/issue66/idcc-2010-rpt" target="_blank">read more</a></p>
<p>Issue 66, event report. Published Sun, 30 Jan 2011.</p>
<h2><a href="http://www.ariadne.ac.uk/issue66/rafiq-rvw">Book Review: Practical Open Source Software for Libraries</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue66/rafiq-rvw#author1">Muhammad Rafiq</a> takes a look at a work on the open source community and open source software.</p>
</div>
</div>
</div>
<!-- v2 inserting author's final edits 2011-02-20-18-44 REW --><p>Open source (OS) usually refers to an application whose source code is made available for use or modification in line with users' needs and requirements. OS projects usually develop in the open, with contributors participating in a collaborative manner to update and refine the product. OS offers more flexibility and freedom than software purchased with licence restrictions. The OS community and the library world hold many principles in common: both promote open standards and believe in sharing.</p>
<p><a href="http://www.ariadne.ac.uk/issue66/rafiq-rvw" target="_blank">read more</a></p>
<p>Issue 66, review. Published Sun, 30 Jan 2011.</p>
<h2><a href="http://www.ariadne.ac.uk/issue65/repos-fringe-2010-rpt">Repository Fringe 2010</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue65/repos-fringe-2010-rpt#author1">Martin Donnelly</a> (and friends) report on the Repository Fringe "unconference" held at the National e-Science Centre in Edinburgh, Scotland, over 2-3 September 2010.</p>
</div>
</div>
</div>
<p>2010 saw the third Repository Fringe, slightly more formally organised than its antecedents, with a greater number of discursive presentations and less in the way of organised chaos! The proceedings began on Wednesday 1 September with a one-day, pre-event SHERPA/RoMEO API Workshop [<a href="#1">1</a>] run by the Repositories Support Project team.</p>
<p><a href="http://www.ariadne.ac.uk/issue65/repos-fringe-2010-rpt" target="_blank">read more</a></p>
<p>Issue 65, event report. Published Fri, 29 Oct 2010.</p>
<h2><a href="http://www.ariadne.ac.uk/issue65/survive-thrive-rpt">Survive or Thrive</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue65/survive-thrive-rpt#author1">Ed Fay</a> reports on a two-day conference organised by UKOLN on behalf of JISC to consider growth and use of digital content on the Web, which was held in Manchester in June 2010.</p>
</div>
</div>
</div>
<p>Survive or Thrive [<a href="#1">1</a>] is the punchy title given to an event intended to stimulate serious consideration amongst digital collections practitioners about future directions in our field - opportunities but also potential pitfalls. The event, which focused on content in HE, comes at a time of financial uncertainty, when proving value is of increasing importance in the sector, and at a point when significant investment in content creation has already been made in the UK, set against a backdrop of increasingly available content on the open Web from a multitude of sources.</p>
<p><a href="http://www.ariadne.ac.uk/issue65/survive-thrive-rpt" target="_blank">read more</a></p>
<p>Issue 65, event report. Published Fri, 29 Oct 2010.</p>
<h2><a href="http://www.ariadne.ac.uk/issue65/open-culture-rpt">Europeana Open Culture 2010</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue65/open-culture-rpt#author1">David Fuegi</a> and <a href="/issue65/open-culture-rpt#author2">Monika Segbert-Elbert</a> report on the annual Europeana Conference, held at the Westergasfabriek in Amsterdam in October 2010.</p>
</div>
</div>
</div>
<p>The Europeana Conference is a free annual event which highlights current challenges for libraries, museums, archives and audio-visual archives, and which looks for practical solutions for the future. It connects the main actors in cultural and scientific heritage in order to build networks and establish future collaborations. The Europeana Open Culture 2010 Conference [<a href="#1">1</a>] was the third annual conference and the biggest so far. It focused on how cultural institutions can create public value by making digital cultural and scientific information openly available.</p>
<p><a href="http://www.ariadne.ac.uk/issue65/open-culture-rpt" target="_blank">read more</a></p>
<p>Issue 65, event report. Published Fri, 29 Oct 2010.</p>
<h2><a href="http://www.ariadne.ac.uk/issue65/ili-2010-rpt">Internet Librarian International Conference 2010</a></h2>
<div class="field field-type-text field-field-teaser-article">
<div class="field-items">
<div class="field-item odd">
<p><a href="/issue65/ili-2010-rpt#author1">Claire Tylee</a>, <a href="/issue65/ili-2010-rpt#author2">Katrin Flemming</a> and <a href="/issue65/ili-2010-rpt#author3">Elly Cope</a> report on the two-day Internet Librarian International Conference focusing on innovation and technology in the information profession, held in London on 14-15 October 2010.</p>
</div>
</div>
</div>
<script type="text/javascript">toc_collapse=0;</script><div class="toc" id="toc1">
<div class="toc-title">Table of Contents<span class="toc-toggle-message">&nbsp;</span></div>
<div class="toc-list">
<ol>
<li class="toc-level-1"><a href="#Thursday_14_October">Thursday 14 October</a></li>
<li class="toc-level-1"><a href="#Track_A:_Looking_Ahead_to_Value">Track A: Looking Ahead to Value</a></li>
</ol>
</div>
</div><h2 id="Thursday_14_October"><a id="thursday" name="thursday"></a>Thursday 14 October</h2>
<h2 id="Track_A:_Looking_Ahead_to_Value"><a id="thursday-track-a" name="thursday-track-a"></a>Track A: Looking Ahead to Value</h2>
<h3 id="A102:_Future_of_Academic_Libraries"><a id="a102" name="a102"></a>A102: Future of Academic Libraries</h3>
<h4 id="Mal_Booth_University_of_Technology_Sydney_Australia">Mal Booth, University of Technology Sydney (Australia)</h4>
<h4 id="Michael_Jubb_Research_Information_Network_UK">Michael Jubb, Research Information Network (UK)</h4>
<p>Mal Booth from the University of Technology Sydney started the session by giving an insight into the current plans and projects underway to inform a new library building, due to open in 2015 as part of a major redevelopment of the city campus.</p>
<p><a href="http://www.ariadne.ac.uk/issue65/ili-2010-rpt" target="_blank">read more</a></p>
<p>Issue 65, event report. Published Fri, 29 Oct 2010.</p>