About Phil Reed

Data Specialist at The University of Manchester Library. Business data, digital skills, research and learning support.
I am also a Research Associate at Alliance Manchester Business School, The University of Manchester.

This month I delivered the second Digital Humanities Library Lab, a hands-on showcase of digital library collections and tools created to support innovative research using computational methods. This three-hour session followed on from a previous event I ran in March and concludes a short run of events forming part of DH@Manchester.

The aim of the workshop was to inspire researchers at all levels to gain practical experience with tools and techniques in order to go on to develop individual research projects with these or similar collections. Participants did not need any technical experience to join in, other than basic office and web browsing skills. The workshop plan and instructions are available online.

What projects and collections did we look at?

The three activities focused on image searching, analysing text and analysing colour. We looked at projects including the following.

JSTOR Text Analyzer from JSTOR Labs, a beta tool that identifies the topics of any document you give it and recommends JSTOR articles and chapters on the same topics.

Robots Reading Vogue from Yale University Library’s Digital Humanities Lab, a collection of tools to interrogate the text of the entire U.S. Vogue Archive (ProQuest) and its front covers, including a topic modeller, an n-gram viewer and various colour analysis methods.

While developing this workshop, I created a project of my own to visualise the average colour used in the front covers of all full-colour issues from Illustrated London News (Gale Cengage). Just a few short Python scripts were required to extract this information from the collection and display it in an interactive web page. This allowed us to look for trends with particular hues, such as the more common use of reds on December issues.
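The core of the colour-extraction step is simple averaging. The sketch below shows the idea only: the file handling, collection access and the pixel data are all hypothetical (in practice an imaging library such as Pillow would supply the pixels from each scanned cover).

```python
# Minimal sketch of the averaging step: given the RGB pixels of one scanned
# cover, compute its mean colour. The "cover" data below is invented for
# illustration; it is not from the Illustrated London News collection.

def average_colour(pixels):
    """Return the mean (R, G, B) of an iterable of RGB triples."""
    n = 0
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
        n += 1
    return tuple(t // n for t in totals)

# A tiny fake "cover": half pure red, half pure blue gives a purple average.
cover = [(255, 0, 0)] * 2 + [(0, 0, 255)] * 2
print(average_colour(cover))  # (127, 0, 127)
```

Running this per issue, then plotting the resulting colours by publication date, is enough to reveal trends such as the December reds mentioned above.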

What did we learn?

After each activity we discussed some of the issues raised. (Incidentally, I captured key points on a Smart Kapp digital flipchart or smart whiteboard, continuing the “Digital First” principles that Library colleagues are adopting.)

Image analysis and computer vision have many potential applications for library collections, such as identifying where printed or handwritten text occurs in an image, facial recognition, and detecting patterns or differences between editions or issues within a series.

For image analysis systems to work best, the image sets and algorithms will need to be carefully curated and trained. This is a time-consuming process.

The text analyser worked quite well but, as with the image search, was not perfect. It is important to find out precisely what “goes wrong” and why.

Other applications for the text analysis tool include checking a grant application for gaps in the topics you think should be covered, checking the development of your thesis, or letting lecturers check their students’ use of references in submitted papers.

Being able to visualise an entire collection in one display (and then dive into the content) can give a reader an idea of what is there before deciding which physical item is worth the trouble of visiting and retrieving. Whitelaw (2015) suggests that such “generous interfaces” offer the reader a broader, less prescriptive view of a collection than the traditional web search.

It could be more useful to be able to compare different collections or publications against each other. This can be difficult when multiple licence holders or publishers are involved, with different technical or legal restrictions to address.

Programming or other technical skills would need to be learned in order to develop or apply many tools. Alternatively, technical specialists would need to work in partnership with researchers, perhaps utilising the University’s Research IT service or the Library’s Digital Technologies & Services division.

Summary

Digital or computational tools and techniques are increasingly being applied to arts, humanities and social science methods. Many of the collections at The University of Manchester Library have potential for stimulating interdisciplinary research. Such Digital Scholarship projects would often require a greater level of technical knowledge or skill than many research groups might currently possess, so further training or provision for technical support might be necessary.


A new pilot workshop, the first Digital Humanities Library Lab, ran on 3 March 2017. This engaging and informative cross-discipline event offered a dozen researchers the chance to explore and discuss new tools and digital text collections from The University of Manchester Library, inspiring the development of future Digital Humanities computational research methods.

A technical prototype I developed for the Business Data Service has been used as the driving force behind a new and exciting research project post, bringing together partners from outside The University of Manchester Library.

What is the basic premise?

To develop a collection of tools that bring together commercially available databases from separate suppliers for use in leading, innovative research, applying specialist knowledge of the field for accurate and efficient execution.

Why is this new post useful?

After spending money on expensive data sets, we need to make the most out of them. It is critical to use them together in order to unlock their full research value. In the case of some specialist resources, this activity is non-trivial.

Why is joining these datasets difficult?

There is no easy way to use the data from these different sources together, no common index.

Identifying companies across different databases is difficult because the codes used within one platform usually do not correspond to those used in another. There are good reasons why a platform does this (protecting its intellectual property is one), but it makes work harder for researchers, who sometimes resort to checking company name matches by eye, one at a time!

Writing code to map these identifiers, where cross-checking is available, requires the software developer to be aware of the various identification codes in use, such as CUSIP, ISIN, SEDOL and various ticker symbols, some of which can change over time or be complicated in other ways. A close relationship with the curators of these databases at the University is required; this is found in the Library’s Business Data Service team, whose expertise is well respected and appreciated by its users.
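To make the matching problem concrete, here is a toy sketch of joining two vendor extracts: match on a shared identifier (ISIN here) first, then fall back to a crudely normalised company name. Every field name and record below is invented for illustration; real crosswalks need far more care (changing codes, multiple listings, renamed companies).

```python
# Hypothetical sketch of pairing company records from two data suppliers.
# Field names and records are made up; this is not any vendor's schema.

def normalise(name):
    """Crude name key: lower-case, keep letters/digits/spaces, drop a suffix."""
    key = "".join(c for c in name.lower() if c.isalnum() or c == " ").strip()
    for suffix in (" plc", " inc", " ltd", " corp"):
        if key.endswith(suffix):
            key = key[: -len(suffix)]
    return key.strip()

def match(records_a, records_b):
    """Pair records from two sources: by ISIN first, then by name key."""
    by_isin = {r["isin"]: r for r in records_b if r.get("isin")}
    by_name = {normalise(r["name"]): r for r in records_b}
    pairs = []
    for r in records_a:
        other = by_isin.get(r.get("isin")) or by_name.get(normalise(r["name"]))
        if other:
            pairs.append((r, other))
    return pairs

a = [{"name": "Acme PLC", "isin": "GB0000000001"}]
b = [{"name": "ACME plc", "isin": None}]
print(len(match(a, b)))  # 1 — matched on the normalised name
```

The fallback step is exactly the "checking name matches by eye" task, automated only for the easy cases; ambiguous names still need a human with knowledge of the databases.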

How will it happen?

As part of the project funding application, a new post was created. It sits outside the Library but depends on library staff’s curating skills and knowledge of the Library’s specialist financial databases. In this post I will use my skills as a software developer and my experience of working in the Library to write new tools that combine access to various datasets within the project, as the products become available and as the researchers need them.

I’ll still be working my usual job in the Library as well, so nothing is lost from the Business Data Service.

Where might it lead?

The primary objective is to publish new research on topics covering institutional investors, financial innovation and the “real economy”.

Once the research is published, we can develop new teaching topics and further broaden access to the University’s data sets with these tools, introducing them to new audiences in other subject areas.


At The University of Manchester Library we subscribe to many database resources containing vast amounts of structured data, organised by further descriptive data or metadata. These descriptions can be considered as many dimensions or variables, and it is important to focus on just a few to begin with.

Many research students frequently need to consult our large and rich selection of specialist business and financial databases to collect data and to shape their studies. There are over fifty databases I would consider particularly relevant to that field, many of which are also of interest to a wider audience. It would benefit these new researchers to have a better way to answer their initial queries quickly, saving potential hours of trawling through the wrong resource.

As an experiment, I created this diagram of specialist financial databases in the style of a topological tube network:

Specialist financial databases visualised as a tube map

I will explain the process I took to planning and constructing this diagram below, but first I will briefly explain what it shows. Seven research areas that require the use of specialist financial and business databases are represented as tube lines. The viewer can follow each of these lines through the various database products, which are shown as stations. The places where researchers must be to use each database are shown as zones.

Identifying the content

With so many factors to consider, I focused on those that are most important or usually answered first:

Research subject area (such as corporate governance, or economics)

Geographical coverage

Access location (in the Library or through the web).

Further factors that I would like to consider include:

Historical coverage

Type of companies or equities covered (quoted, private, banks)

Consideration of survivorship bias (active or dead companies)

Type of data provided (numerical, reports).

These seven questions still only scratch the surface when choosing a business data source, but it is a start. I had already created a table with a list of the 50-plus relevant databases and columns for each of those factors (Figure a) which I used to gain a better understanding of the resources I work with when I came into post.

I worked with my colleague Xia Hong to reduce this table to the 21 most important databases and the three most important factors listed above (Figure b). The research areas were marked against databases as simple yes/no matches, in preparation for deciding which lines would pass through which stations.
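The yes/no table also answers a layout question that comes up later: which database is best connected and should be drawn first. A minimal sketch, with a made-up fragment of the matrix rather than the real table:

```python
# Count how many research areas each database matches in the yes/no table,
# so the best-connected "station" can be drawn first. The matrix below is
# an invented fragment for illustration, not the actual 21-database table.

matrix = {
    "ThomsonONE.com": {"corporate governance": True, "economics": True,
                       "banking": True},
    "Bank Focus":     {"corporate governance": False, "economics": False,
                       "banking": True},
}

# Sort databases by number of matched research areas, most first.
ranked = sorted(matrix, key=lambda db: -sum(matrix[db].values()))
print(ranked[0])  # ThomsonONE.com
```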

Designing the structure

I decided to use good old pen and paper when it came to drawing out the layout. (See Figure c.) Network building software exists but I decided that the learning curve for these would be too steep for the benefits, since the hand-drawn approach worked for me. I started with the database that matched the most research areas (ThomsonONE.com) and drew outwards from there.

Next, I entered the structure into PowerPoint, as it was the fastest tool available that I knew how to use (Figure d). This clearer format was used for checking the content for accuracy and omissions. The layout of the objects was refined in this tool, before employing CorelDRAW for the final markup.

The final design has these features:

Stations: database products, with symbol “W” for those with WRDS portal entry

Lines: research subject areas, coloured with University branding

Zones: access location, with inner zone 1 for databases you need to come into the Library to use; zone 2 for web access only on-campus; and zone 3 for web access from anywhere

Position: the very top is North American coverage, the left China, above the middle is Europe, and the rest is international.

Sadly, there is no river, which I could have used to separate North America from the other continents!

Where next?

This diagram is busy enough that no more information could be added without compromising its readability. There is more information on the Library website subject guide page covering these databases, which is the first port of call for a student enquiry. After that, all current students and staff of The University of Manchester are welcome to attend a research consultation session, where an expert from the Research Services team will be available.

Summary

It is difficult to convey a large amount of structured information in a single display. If we focus on just the initial or most important factors, we can produce something that is both helpful and appealing.


I recently attended my first conference representing The University of Manchester, the annual Business Librarians Association (BLA) conference. It was a particularly good conference for a first-timer, as the themes were interesting and relevant to most academic libraries across the country, not to mention the friendly and welcoming atmosphere from fellow delegates and sponsors.

The major themes were:

Customer Service Excellence

Employability and Information Literacy

Marketing

Customer Service Excellence

Many academic libraries are thinking about Customer Service Excellence (CSE) status, which The University of Manchester Library achieved in 2013. Helen Loughran from Leeds Metropolitan University (soon to be known as Leeds Beckett University) spoke about her experiences with her institution’s application; it is the largest university in the country to achieve accreditation so far.

While Helen’s talk was of most benefit to people thinking about or currently working towards CSE status, there were certainly ideas to take away for those with it already, to maintain and continuously improve standards. She posed some thought-provoking questions:

Are customers engaging with a library’s social media platforms, asking questions? If so, are there staff allocated and trained to answer these questions?

Do all staff across the institution know who to pass online queries on to, including when received in error?

When a customer has expressed dissatisfaction, should we consider inviting them in for a chat?

To this I would add my own thought:

Could we use a computer-aided text analysis program to determine the tone of free-text (typed) feedback? There are many processes for summarising qualitative data, but putting qualitative text data through a tool like Diction could give deeper analysis.
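As a toy illustration of the idea (not how Diction actually works, and with invented word lists), tone scoring can be as simple as counting hits against positive and negative lexicons:

```python
# Toy lexicon-based tone scoring for free-text feedback comments.
# Real tools such as Diction use far richer dictionaries and categories;
# these word lists are made up for the example.

POSITIVE = {"helpful", "friendly", "excellent", "quick"}
NEGATIVE = {"slow", "confusing", "rude", "broken"}

def tone(text):
    """Return (positive hits, negative hits) for one feedback comment."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos, neg

print(tone("Staff were friendly and helpful, but the catalogue is slow."))
# (2, 1)
```

Even this crude count would let a service team triage hundreds of comments before reading the flagged ones in full.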

How do librarians improve student employability?

The position of a careers service will vary between institutions, in some cases being closely tied to the library. But I think everyone whose work is related to students will have some concern for those students’ employability. Paul Chin from the University of Hull cited a finding that 79% of students go to university to improve their job opportunities (NSS, 2011). He went on to explain that people do not always realise all the skills they already have, making this point:

In addition to developing and offering excellent training, if we can help students explain why they studied their subject and what it will enable them to do, this will ultimately improve their interview performance.

Kaye Towlson from De Montfort University ran a breakout session titled How do librarians improve student employability? In groups, we produced an image-enhanced mind map to visualise a student’s journey through the library. Our group’s mind map (pictured) includes a customer-centric hub with spoke roads leading to books, social media, technology, skills portfolio, meeting spaces and global citizenship.

Jane Secker from LSE spoke about how information literacy relates to digital literacy. It’s not just about technology, but extends out of the library into the curriculum for all students. She also made this suggestion:

Just because young people are mostly “tech-savvy”, this does not mean that they can “just do” academic research without guidance.

Tuning out the white noise

Ned Potter from the University of York gave a great presentation on library marketing, explaining how unfocused communication (noticeboards, websites, some email) is often lost or not seen at all, and suggesting ways to counter this, since we may well be offering services beyond the traditional (book lending) remit of a library.

These include ensuring your content is searchable via Google, which is used much more than a library’s website as a starting point. Ned also encouraged the use of direct, tailored and targeted communications to ensure the message about library value really hits home.