Friday 13 June 2014

Closed stacks - improved workflow

ELAG 2014
Going digital in the closed stacks – Library logistics with a smart phone
Eva Dahlbäck and Theodor Tolstoy, Stockholm University Library

(see description of the talk)

Approx. 300 orders a day. Up until recently, orders would have been printed on paper. Orders from closed stacks, interlibrary loans and missing books. Before: digital -> paper -> digital (email sent to user to let them know item available).

The workflow is essentially the same, even if the orders come in differently and come out differently. So there is now a new digital workflow: all the orders are collected into a single list managed by the Viola programme, in which they are sorted. The librarian downloads the list, goes to the stacks and comes back with the items. The whole process is managed from a mobile phone.

The list can be ordered by location. It is downloaded onto the mobile phone. Each book has a separate entry that includes the shelfmark info. The librarians also carry a small printer for slips: the phone scans the barcode, a slip is printed and goes in the book. Once a book has been scanned, it receives a green star (the stars can be various colours for various situations).
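The sorting step can be pictured with a minimal sketch. Note that the field names and data structure below are invented for illustration; Viola's actual data model is not described in the talk.

```python
# Toy sketch: sort a day's orders by stack location, then shelfmark,
# so the fetching route follows the shelf order. All field names
# and values are invented.
orders = [
    {"barcode": "30000001", "location": "Stack B", "shelfmark": "Qc 112"},
    {"barcode": "30000002", "location": "Stack A", "shelfmark": "Ab 34"},
    {"barcode": "30000003", "location": "Stack A", "shelfmark": "Aa 7"},
]

def pick_list(orders):
    """Return orders sorted by location, then shelfmark."""
    return sorted(orders, key=lambda o: (o["location"], o["shelfmark"]))

for o in pick_list(orders):
    print(o["location"], o["shelfmark"], o["barcode"])
```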

Benefits are fewer manual steps and a single, unified workflow. It's faster and easier to follow the order steps, and it requires fewer people. Viola is also connected to the invoicing system (what does this imply?...). However, collection is only twice a day...

A close collaboration between developers and librarians was necessary. They worked with "user stories" to help the collaborative work, to prioritise tasks, break them into smaller steps and follow progress. The user stories included staff roles, such as the book fetcher (what does he/she need?). The intention is to release this as open source!

The technology used was ASP.NET MVC; the database is SQL Server (replaceable thanks to the ORM PetaPoco) and there is an Android app (Xamarin/Monodroid). Most of these are open source.

Identifying and identifiers

ELAG 2014
Integrating ORCiD – A two way conversation
Tom Demeranville, software engineer specialising in digital identifiers and identities at the British Library

(see description of the talk)

ODIN (the ORCID and DataCite Interoperability Network) is a 2-year project concerned with linking authors with research output. It is also looking at datasets, grey literature, etc. What do we mean by identifying authors? The answer varies. One person can have lots of identifiers and profiles, including an institutional profile, an ISNI, a Scopus ID or an ORCiD.

So the first distinction is between an identifier and a profile. We usually think of an identifier as a unique ID, but a profile can be much more. Another important point is that no one wants to type the same thing twice. Profiles can be automated or manual. Then there is the question of whether identifiers belong to institutions or to users, with conflicting notions of control, but we all want disambiguation...

So what we need... One identifier and many profiles that solve different use cases.

ORCiD is meant to be a more open identifying system, managed for people with many different use cases. It is relevant to publishers, universities, funders and libraries, and would help systems talk to each other.
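As a small aside on what makes an ORCID iD a well-formed identifier: the last character is a check digit computed with the ISO 7064 MOD 11-2 algorithm, as documented by ORCID. A sketch of structural validation (this only checks the checksum, it does not prove the iD is registered):

```python
# Validate the structure of an ORCID iD using its ISO 7064 MOD 11-2
# check character (the final character; "X" stands for 10).
def orcid_checksum(base_digits: str) -> str:
    """Compute the check character for the first 15 digits of an iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Check length and checksum of an iD like 0000-0002-1825-0097."""
    chars = orcid.replace("-", "")
    if len(chars) != 16:
        return False
    return orcid_checksum(chars[:15]) == chars[15]

# ORCID's own documentation example iD:
print(is_valid_orcid("0000-0002-1825-0097"))  # → True
```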

EThOS is the British Library's e-theses online service. See the demo of importing theses into ORCiD at http://ethos-orcid.appspot.com

The ODIN project is working to integrate ORCiD and DataCite.

The Mechanical Curator

ELAG 2014
The Surprising Adventures of the Mechanical Curator, and Other Tales
Ben O’Steen, technical lead of British Library Labs

(see description of the talk and the slides)

This project started last year, as an accident! Taking the stuff that's technically accessible... and making it actually accessible! The Labs engage with researchers, formally and informally, through yearly competitions. What the winners get is the Labs' time and effort! The unifying theme of (pretty much) all the requests is: give us everything! But this is quite depressing: do librarians not take part in research, are they only there to provide content? Another theme is having tools to interpret the content, to be able to work on broad sweeps of content rather than one item at a time.

The Sample Generator shows the chasm between the collection and the digitised material. Not only is far less of the content digitised, but what is digitised is also not as accessible as it could be.

The challenge was that researchers didn't want to work with APIs but to access large amounts of data. An experiment was made: face detection on 19th-century illustrations. It wasn't very successful: the depictions are usually "clean" and posed, males are represented differently from females and are therefore less often detected, etc. But it gave the idea of the Mechanical Curator, which digs in the collection of digitised images and tries to find visually similar images, based on a calculated match. It has now been doing that for a couple of months (and tweets about it). An unguided way of discovering material.

The images were published on Flickr and got many views in 4 days. They are published as CC0 and there are already examples of creative re-use, such as colouring-in for children, an artist's interpretation, etc. But this doesn't bring money to the Library, which is always hard to justify. It does encourage creativity, though; it may not be research but it's no less important. An animation student, Joe Bell, used the images in a 3-D animation, Moments.

So the impact is hard to measure. Accessible is great, but can we make it more useful? A group of UCL big-data computer science students will be given access to all the book data, plus cloud computing, and will experiment with broader and more direct access to the collections.


Thursday 12 June 2014

Data visualisation

ELAG 2014
Data visualization as a library service? Examples from Chalmers library
Stina Johansson, Librarian/bibliometrician at Chalmers Library, Sweden

(see description of the talk)

Father of data? William Playfair (1759-1823), Scottish engineer and political economist, developed the first data charts. "The king at once understood the charts and was highly pleased. He said they spoke all languages..."

Graphics can make data much more "readable". A lot of communication is done through images. Chalmers Library uses a lot of visualisation to communicate about its data. E.g. topical analysis through keywords or citations networks or geospatial visualisations. E.g. Author co-citation analysis: a map representing the most cited authors, with links between them meaning various things, size or centrality of "dots" providing visual information.

Some of the visualisations are interactive - clicking brings to additional or linked info. Also visualisations of country level collaboration with environmental department data, for example.

See: http://chalmeriana.lib.chalmers.se/visuals/journal_citation/

Tools
Gephi: open source visualisation tool
Raw: open web app to create vector-based visualisations (from spreadsheets, for example), built on top of the D3.js library (JavaScript), through a simple interface
VOSviewer: can use with a raw text file, easy to use

Data has to be clean and structured though! Be creative and play!
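To make the co-citation idea concrete, here is a toy illustration of the data that sits behind such a map: authors as nodes, co-citation counts as edge weights, and a node's "size" as its weighted degree. All names and counts below are invented; tools like Gephi or VOSviewer compute far richer layouts and metrics.

```python
# Toy author co-citation network: edge weights are co-citation counts.
edges = {
    ("Author A", "Author B"): 12,
    ("Author A", "Author C"): 5,
    ("Author B", "Author C"): 9,
    ("Author C", "Author D"): 2,
}

def weighted_degree(edges):
    """Sum each author's edge weights: a simple proxy for node size."""
    degree = {}
    for (a, b), w in edges.items():
        degree[a] = degree.get(a, 0) + w
        degree[b] = degree.get(b, 0) + w
    return degree

deg = weighted_degree(edges)
print(max(deg, key=deg.get))  # the most "central" author → Author B
```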

EuropeanaBot

ELAG 2014
EuropeanaBot – using open data and open APIs to present digital collections
Peter Mayr, administrator for the ILL-system at the North Rhine- Westphalian library consortium (hbz) in Cologne

(see description of talk)

Serendipity vs standard search. The library is a "precious provider of unpredictability"

TwitterBots are a class of software. EuropeanaBot is based on the Europeana API; it uses open data collections to surface interesting things and do a kind of catalogue enrichment, e.g. the list of Nobel Prize winners, the Guardian API, place names, etc. The Guardian API allows it to get news and keywords with corresponding images. Wordnik API: every day at 1pm a word is published and EuropeanaBot looks for relevant images. The Wikipedia API works in a similar way.
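A minimal sketch of the first half of such a bot's loop: building a Europeana Search API request for a word of the day. The endpoint and parameter names follow Europeana's public Search API documentation, but the API key and query word are placeholders, and the actual EuropeanaBot code (linked below) may differ.

```python
# Build a Europeana Search API request URL for a given word.
# No network call is made here; a real bot would fetch the URL,
# pick an image from the results and tweet it.
from urllib.parse import urlencode

def europeana_search_url(query, api_key="YOUR_API_KEY"):
    """Compose a search.json request for items with media."""
    base = "https://api.europeana.eu/record/v2/search.json"
    params = {"wskey": api_key, "query": query, "media": "true"}
    return base + "?" + urlencode(params)

print(europeana_search_url("serendipity"))
```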

There's of course a Digital Persona behind the EuropeanaBot (he likes to post images of cats).

Conclusion: we hide great objects behind search forms, so we need more serendipity! Let our collections speak for themselves. Not too much work and maintenance is required and it brings results. People out there will listen.

Code behind the EuropeanaBot api: https://github.com/hatorikibble/twitter-europeanabot

The Revolution

ELAG 2014
The black box opens
Marina Muilwijk, software developer University of Utrecht

(see description of talk)

Web services replace the old combination of files and SQL, e.g. a system that talks from the repository to the digitisation process or to the catalogue, etc. Without an API it would be really complicated to do this. So you must think about what the API should do and where the data is coming from.

Example: mobile view of an individual's loans. Without needing to talk directly to the circulation system, it talks to the outside layer via the api and finds the info.

The "revolution", inspired by the book The Lean Startup: start with a hypothesis, test it and measure, i.e. build, measure, learn and so on (in a circle, with no starting point). At the startup stage you are not sure what the end product will be, and if you ask users you may not get very far... So from wrong to right: from "success is when the requirements are implemented" to "success is when users use it"!

Revolution represented here

Useful and Usable Web Services

ELAG 2014
Building Useful & Usable Web Services
Steve Meyer, Technical Product Manager OCLC WorldShare Platform

(see also description of this talk)

API = a set of tools and resources that enables developers to create software applications. For data, an example would be to create an aspirational view of what your data could look like, i.e. expose it as you want it to look. Standards should be used to bring a clear understanding of your data model, serialisation and the statements you want to make. Think of the community you're aiming at but also the one you want to belong to. A good standard will be stable.

SQL is one way of creating APIs: the language is not that distant from HTTP commands (such as POST, GET, etc.) as a means of providing access to data. As API creators we can use any respectable programming language.

WorldCat metadata as a case study: it is made of core assets with an intuitive API. Data modelling is used to carry out all sorts of tasks. An example API task would be the validation of a MARC record, with messages such as "008 must be present", etc.
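A toy version of that kind of validation message may help. Note the record structure (a plain dict keyed by field tag) and the rule list are invented for illustration; the real WorldCat Metadata API validates full MARC records against far more rules.

```python
# Toy MARC validator producing messages like "008 must be present".
def validate_marc(record: dict) -> list:
    """Return human-readable errors for missing required fields."""
    errors = []
    required = ["008", "245"]  # control field and title, as examples
    for tag in required:
        if tag not in record:
            errors.append(f"{tag} must be present")
    return errors

record = {"245": "$aExample title"}
print(validate_marc(record))  # → ['008 must be present']
```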

There are issues around authentication. In a web service context, it's not a person you're authenticating, and access to a dataset by a machine is not without consequence. An API "key" allows me to enter, but I still need to provide identification. OCLC tries to give the API access equivalent to the web service after authentication (?)

Other issues: software is never perfect and documentation always has biases... E.g. OCLC's "holding availability" API, with intended use: connect a patron to a library that holds an item; but actual use: reading high volumes of holdings info for analysis. Sometimes things go wrong.

Consider principles of usability in the way that you would in a user interface context.

Questions:
Is it best to build an API on top of your own system rather than someone else's? Yes, though not always possible.
How to guarantee openness and re-sharing? Most OCLC APIs are about operational data, so we don't think about that. But we think of how licensing and rights can be integrated in the data itself.
Who should write the documentation, the technical people, an editor, etc.? It can be done as part of the process, but that is mostly useful for highly technical people. The other option is to use people closer to the customers, such as product managers.

Europeana: Collection level description

ELAG 2014

Discovering libraries’ gold through collections-level descriptions
Valentine Charles, Data Specialist at The European Library and Europeana

(See also description of talk)

Europeana works in a large-scale aggregation ecosystem and now works in collaboration with Cendari. Digitisation still covers only the tip of the iceberg of the content of European libraries. Digital objects are displayed attractively, as they are very visual, and there is also full text available. But most of this material is disconnected from one another; mostly it is item-level description, with varying levels of quality. Wouldn't it be nice to link an image to the relevant journal page?

New strategy for collection-level description: looking at specific topics, talking to historians, surveying members. E.g. Old Slavic manuscripts: not digitised, but at least there is some description. Another example would be not so much a subject as the specific collections of a particular library, e.g. the National Library of Serbia.

The collaboration with Cendari involves libraries and archives, integrating digital data on medieval times and the First World War. Researchers working on a project should use this data, and we try to facilitate the research activity. The aim is to link the material from different libraries. An environment called the Archival Research Guide is being built to support research. The idea is: when a researcher starts on a topic, he/she writes some paragraphs and these are incorporated in the guide so that it becomes a narrative, which links to specific collections (the sources). Researchers are encouraged to use and re-use data directly in the research, rather than only talking about it. Tools are made available, such as NER (named-entity recognition) technology to help identify the entities used in the research, facets, etc. The Archival Research Guides provide access points to relevant contemporary research and connect collection descriptions to other resources via domain-specific ontologies.

Beyond collection description, the most interesting thing is to link it to other data and other types of material, e.g. with some full text, enriching with annotations, vocabularies, NER, etc. There is also interest in getting this data re-integrated in the various libraries. Cendari is a 4-year project; there are 2 more years to go.

Wednesday 11 June 2014

MIF and Europeana Inside

ELAG 2014

Metadata Interoperability Framework (MIF)
Naeem Muhammad, Software Architect at LIBIS KULeuven and Sam Alloing, Business Consultant at LIBIS KULeuven, Belgium

MIF was made for Europeana Inside, a technical project with different partners, content providers and software providers, whose aim is a better integration of the different content in Europeana. The goal is to create a component that developers of the different systems can directly add into their content management system so that it talks to Europeana. The end of this project is planned for Sept. 2014.

The content is enriched by Europeana and the content providers can get it back. This is still in discussion and development. The enriched metadata is not always correct so there are still issues to resolve.

ECK=Europeana Connection Kit. Technical providers use this to transform and push data to Europeana. The ECK local is the part to be integrated in the local system itself. Core ECK services include:
  • Metadata definition
  • Set Manager
  • Statistics
  • PID generation
  • Preview service
  • Validation of metadata
  • Data push (SWORD) / data pull (OAI-PMH): some content providers would rather push the data to Europeana than have Europeana take it, but Europeana is afraid of compatibility issues, so data pull is still the one in use
  • Mapping and Transformation
LibisCoDe (LIBIS content delivery to Europeana) is the tool used (?) for content providers to put their data in; it is then sent to Europeana. It will be developed in a way that lets users decide what data they want to take back from Europeana.

Mapping and Transformation supports MARC to EDM and LIDO to EDM because that's what is used in Europeana. LIDO = XML format used by museums; EDM = RDF format from Europeana. So the input has to be MARC XML or LIDO; the output is only EDM at the moment. There are core classes and additional classes, and EDM uses Dublin Core. For MARC, a mapping rule works like this:
[command],[marc tag + subfield],[edm field], e.g. COPY, marc506a,dc:rights
It doesn't use indicators at the moment; this could change.

Commands are: COPY, APPEND, SPLIT, COMBINE (multiple source fields can be combined in one target field), LIMIT (to limit the number of characters in a field), PUT, REPLACE and CONDITION (combine different actions and use a conditional flow; can be used with IF).
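A minimal sketch of how rules in that `[command],[marc tag + subfield],[edm field]` form could be applied. The flattened record representation (a dict keyed by tag plus subfield) and the function name are assumptions made for illustration, not the ECK's actual implementation.

```python
# Apply simple MIF-style mapping rules such as "COPY,marc506a,dc:rights"
# to a flattened MARC record, producing EDM-style key/value pairs.
def apply_rules(record: dict, rules: list) -> dict:
    edm = {}
    for rule in rules:
        command, source, target = rule.split(",")
        if command == "COPY" and source in record:
            edm[target] = record[source]
        elif command == "APPEND" and source in record:
            # APPEND adds to an existing target value.
            edm[target] = edm.get(target, "") + record[source]
    return edm

record = {"marc506a": "Open access", "marc245a": "A title"}
rules = ["COPY,marc506a,dc:rights", "COPY,marc245a,dc:title"]
print(apply_rules(record, rules))
```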

The plan is for EDM to be as easy for users as possible, even though some understanding will help. The important thing is to know the EDM fields, not the source format.

It's a web service, so there is no user interface; it is meant to be integrated in a CMS or used via a REST client. Parameters: records (can be a zip file or XML), mappingRulesFile, sourceFormat (LIDO or MARC), targetFormat (EDM).

Future plans: add input formats, such as CSV, FileMaker XML, some custom XML, etc.; update/add output formats (add EDM contextual classes, other formats...); extend/add update actions; add queuing (near future); add a mapping interface (or integrate with MINT, another Europeana project for mapping).


Details, links and interface

ELAG 2014
The LIBRIS upgrade
Niklas Lindström, Lina Westerling, Swedish National Library

Abstract: Starting in earnest in 2012, the Swedish National Library (Kungliga Biblioteket – KB) began the development of a new infrastructure and system, based at its core on Linked Data. It directly employs the linked entity description model represented by RDF, and has the capacity to mesh with other linked data on the web through minimal engineering efforts.

(see more description of talk)

There is a need for a modern product: a platform for data, for making it searchable and describing it; a method for mapping existing data to contemporary models of description; and a user interface for editing (cataloguing, curating, linking): a web-based cataloguing tool.

The platform is open source and works with all data formats, including RDF. The limits of MARC: it is especially hard to find things and to define things. RDF is not a solution in itself but a means to help solve this problem, because of how it describes data. The tool uses a simple expression independent of formats, terms, etc. MARC is transformed into JSON-LD, with prefixes and URIs.
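To give a flavour of the approach, here is an illustrative sketch of what a MARC-derived bibliographic description might look like as JSON-LD, with a prefix and URIs. The vocabulary choices, URIs and field values are invented; LIBRIS's actual model differs.

```python
# Invented example of a bibliographic description as JSON-LD:
# a prefix ("dc") plus URIs for the record and its creator.
import json

description = {
    "@context": {
        "dc": "http://purl.org/dc/terms/",
        "@vocab": "http://schema.org/",
    },
    "@id": "http://example.org/bib/12345",        # invented record URI
    "@type": "Book",
    "dc:title": "Exempelboken",
    "dc:creator": {"@id": "http://example.org/auth/678"},  # link, not a string
}

print(json.dumps(description, indent=2))
```

The point of the exercise is visible even in this toy: the creator is a link to an entity, not a repeated text string, which is what lets descriptions mesh with other linked data.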

The Utter Denormalisation: turning JSON-LD back into MARC. This is a temporary measure, needed to integrate with union catalogues, extract data, etc. But the idea is that there's a new interface and new formats.

The design is intuitive, simple, inspiring and user-centred. See the beta at devkat.libris.kb.se (test/test). It is quite similar to an end-user search tool and is based on linked data. It needs to handle all the data; normalising the catalogue will not be able to cover everything.

Doing the mapping is challenging: the data expressed in MARC isn't always normalised, so it's not clear whether a description is of an expression or a manifestation. MARC is very structured but sometimes meaningless; there's lots of convolution in the specificity, and the perspectives of different domains are not well coordinated. But there are possibilities for capturing the specificity, better coordinating the vocabularies and so on. Then, by linking to external resources, we add more value to our resources; SPARQL helps in the linking of sources. Value can also be added by linking to internal data.

Another of the main challenges is convincing people, especially cataloguers, so we need to be open.

...in the precioussss knowledge

ELAG 2014
Lord of the strings – a somewhat expected journey
  • Karen Coyle, Digital Libraries Consultant, USA
  • Rurik Greenall, Developer, Norwegian University of Science and Technology (NTNU) Library
  • Lukas Koster, Library Systems Coordinator, Library of the University of Amsterdam
  • Martin Malmsten, Head of Development and Design, National Library of Sweden/LIBRIS
  • Anders Söderback, Head of the Department of Publishing, Stockholm University Library
(see description of the talk)

Sir Marc of the SubFields thinks the question is a waste of time: where is the lingering Gold?
Whilst Richard the Evangelist of the Dubliners sold us the Discovery Tool with a Search Box.

We need to find how to turn this straw... books, into gold: Drink from the magic cup of Sir Tim then wander in The Cloud. The RDA rules... so secret no one knows what it is.

The Story of Link-a-Lot: the Format Monster follows any format, the Deep Sea Owl jumped in the water and disappeared, the wolves Frbrooooo, the non-dead Marc and twin brother Mods... all enter the story.

The answer is in GOD On Tology, Godot. Begin your search waiting for Godot.

For a complete view of the story, see The Lord of the Strings slides

Role of libraries in supporting digital scholarship

ELAG 2014
Key note: The Role of libraries in supporting digital scholarship
Stella Wisdom, Digital Curator, The British Library

(see description of the talk)

Libraries need to change their services to meet the needs of researchers. There is more and more digital content, increased collaborative working and re-purposing of content. The BL wants researchers to do innovative research with its content. The BL has been digitising for at least two decades and aims to do much more.

Digital content - examples of recent developments at the BL

  • Georeferenced maps and a new interactive tool (http://www.bl.uk/maps)
  • Europeana 1914-1918 Roadshows - visited museums in different parts of the country, showing some of their digitised images
  • Off the Map: video games festival. This followed a conference on the preservation of complex objects, which Stella attended and where she gave her ideas of what the BL can do in this area. There's a museum about video games, and the Victoria & Albert Museum also organised a competition. The BL made a special feature on the web archive and organised a competition, "Crytek Off the Map": a visual trip through 17th-century London made by six second-year students (winners of last year's competition)
  • Work done on sound collections, with permissions to re-use (under certain conditions) see the Flying Buttress
  • Organising exhibitions such as Beautiful Science (picturing data, inspiring insight)
  • Dora's lost data game
  • British Library labs: one of the main actions is a yearly competition to identify innovative ideas that showcase the Library's collections
  • The Victorian Meme Machine, to preserve Victorian jokes (one of the winners of the labs competition) - it will combine jokes with images, all coming from the BL collections