
Refining the Open Library Catalogue: My Internship Story

By Jordan Frederick AKA Tauriel063 (she/her), Canada

Photo of Jordan

When deciding where to complete my internship for my Master’s in Library and Information Science (MLIS) degree, Open Library was an obvious choice. Not only have I been volunteering as an Open Librarian since September 2022, but I have also used the library myself. I wanted to work with people who already knew me, and to work with an organisation whose mission I strongly believe in. Thus, in January 2025, I started interning at Open Library with Lisa Seaberg and Mek Karpeles as my mentors. 

At the time of writing this, I am three courses away from completing my MLIS through the University of Denver, online. During my time as both a student and Open Librarian, I gained an interest in both cataloguing and working with collections. I decided to incorporate both into my internship goals, along with learning a little about scripting. Mek and Lisa had plenty of ideas for tasks I could work on, such as creating patron support videos and importing reading levels into the Open Library catalogue, which ensured that I had a well-rounded experience (and also never ran out of things to do). 

The first few weeks of my internship centered largely around building my collection, for which I chose the topic of Understanding Artificial Intelligence (AI). Unfortunately, I can’t take credit for how well-rounded the collection looks presently, as I quickly realised that my goal to learn some basic coding was more challenging than I expected. If you happen to scroll to the bottom and wonder why there are over 80 revisions to the collection, it is because I spent frustrating hours trying, and repeatedly failing, to get books to display using the correct code. It is thanks to Mek’s and Jim Champ’s coding knowledge that the collection appears fairly robust, although I suggested many of the sections within it, such as “Artificial Intelligence: Ethics and Risks” and “History of Artificial Intelligence.” Mek has informed me that the AI collection will likely continue to receive attention from the community for the remainder of the year, as part of the project’s yearly goals. I hope to see it in much better shape by the time of our annual community celebration in October. 

Screenshot of the "Understanding Artificial Intelligence" collection. Showcases various AI books.

The Artificial Intelligence Collection. 

I successfully completed several cataloguing tasks, including adding 25 AI books to the catalogue. With the help of Scott Barnes, an engineer at the Internet Archive, I made these books readable. I also separated 36 incorrectly merged Doctor Who books and merged duplicate author and book records. Another project involved addressing bad data, where hundreds of book records had been imported into the catalogue under the category “Non renseigné,” with minimal information provided for each. While I was able to fix the previously conflated Doctor Who records, there are still over 300 records listed as “Non renseigné.” As such, this part of the project will extend beyond the scope of my internship.

Screenshot of a book record for Doctor Who - Inferno.

One of the fixed Doctor Who records.

I am particularly proud of this patron support video I created as part of my internship. It shows patrons how to create bookmarks and notes within a book they are reading, as well as how to search for certain phrases or words within a book. I also created a video on how to use the audiobook feature in Open Library. Projects like these, assigned by Lisa, tie directly into my MLIS education, allowing me to put what I have learned about meeting patrons’ needs to use. Lisa also asked me to look at Open Library’s various “Help” pages and identify any issues, such as broken links and misleading or inaccurate information. This task allowed me to practice working with library documentation and to advocate for patrons’ needs by examining the pages through the perspective of a patron rather than that of a librarian.

A screenshot from the book Anne of Green Gables, used in a patron support video. There are three bookmark options on the left-hand side.

First patron support video created.

I created a survey to determine patrons’ needs and wants regarding the AI collection. This, in the library profession, is referred to as a “patron needs assessment,” which is vital when building a collection. While it would certainly have been fun for me to create a collection purely based on my own interests, there is little point in developing a collection of interest to only one person. Mek gently reminded me of this, so I thought about how an AI collection might benefit patrons. Some potential uses I came up with for the collection were:

  • Understanding AI
  • The History of AI as a field
  • Leveraging AI for work
  • Ethics of AI

In order to determine patron perspectives on the collection, I developed a Google Forms survey, which asks questions such as: 

  • In your own words, tell us what types of books you would like to see in an AI collection (some examples might include scientific research, effects on the future, and understanding AI).
  • What are you likely to use an AI collection for (e.g. academic research, understanding how to use it, light reading)?
  • Do you have any recommendations for the collection? 

Once this survey is made live, those involved in the collection will have a better idea of how to meet patrons’ needs.

While I had initially assumed that the collection would allow me to build up some collection-building skills and would hopefully benefit patrons with an interest in AI, Mek has since informed me that this collection ties in with the library’s yearly goals. In particular, the AI collection aligns with the goal of having “at least 10 librarians creating or improving custom collection pages” [2025 Planning]. Additionally, I have spent some time bulk-tagging various books (well over 100 by now), which also ties into the team’s 2025 goals. It’s gratifying to know my efforts during my internship will have far-reaching effects.

As with the AI collection, using reading levels to enhance the K-12 collection is still a work in progress. As I worked with Mek over the course of the last nine weeks, I learned more about the JSON data format than I had ever known before (which was nothing at all), what it means to “comment out” a line when running a script, and the general idea of what a key and a value are in a dictionary. So far, we’ve been able to match more than 11,000 ISBNs from Mid-Columbia library’s catalogue to readable items in the Open Library, allowing us to import reading levels for these titles and add them to the search engine.
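The ISBN-matching step can be pictured with a small sketch: a dictionary maps each ISBN (key) to an Open Library edition identifier (value), and each partner record either finds a match or does not. Everything below (names, records, and identifiers) is illustrative, not the actual import script.

```python
# Hypothetical sketch of matching a partner library's ISBN export to
# Open Library editions so reading levels can be imported.

# Sample (ISBN, reading level) records, as might come from a JSON export.
partner_records = [
    {"isbn_13": "9780316769488", "reading_level": "Grade 9-12"},
    {"isbn_13": "9780064430937", "reading_level": "Grade 1-3"},
    {"isbn_13": "9999999999999", "reading_level": "Grade 4-6"},  # no match
]

# A dictionary: each ISBN is a key, each edition identifier is its value.
openlibrary_isbn_index = {
    "9780316769488": "/books/OL7353617M",
    "9780064430937": "/books/OL8050222M",
}

def match_reading_levels(records, isbn_index):
    """Return {edition_key: reading_level} for ISBNs found in the index."""
    matches = {}
    for record in records:
        edition_key = isbn_index.get(record["isbn_13"])
        if edition_key is not None:
            matches[edition_key] = record["reading_level"]
    return matches

matched = match_reading_levels(partner_records, openlibrary_isbn_index)
print(matched)  # two of the three sample ISBNs match
```

In the real workflow the index would come from Open Library's records rather than a hand-written dictionary, but the key/value lookup is the same idea.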

Finally, I was offered the chance to work on my leadership skills when both Mek and Lisa asked me to lead one of our Tuesday community calls. While initially caught off guard, I rose to the challenge and led the call successfully. I certainly fumbled a few times and had to be reminded of the order in which to call on people for their updates (and ironically forgot Lisa, until a few community members reminded me to give her a chance to speak). But I appreciated the chance to take on a more active role in the call and may consider doing so again in the future. 

The last nine weeks have been both intense and highly educational. I am grateful I was able to complete my internship through Open Library, as I believe strongly in the organisation’s mission, enjoy working with people within its community, and intend to continue contributing for as long as possible. I would like to thank Mek and Lisa for making this internship possible and offering their guidance, Jim Champ for his help in coding the AI collection, and Scott Barnes for taking time out of his evening and weekend to assist me with JSON scripting (and patiently answering questions).

I look forward to continuing to contribute to Open Library. 
If you’re interested in an even more in-depth view of the work I did during my internship, feel free to read my final paper.

API Search.json Performance Tuning

This is a technical post regarding a breaking change, scheduled to be deployed on January 21st, 2025, for developers whose applications depend on the /search.json endpoint.

Description: This change reduces the default fields returned by /search.json to a more restrictive and performant set that we believe will meet most clients’ metadata needs and result in faster, higher quality service for the entire community.

Change: Developers are strongly encouraged to follow our documentation and set the fields parameter on their requests to the specific fields their application requires, e.g.:

https://openlibrary.org/search.json?q=sherlock%20holmes&fields=key,title,author_key,author_name,cover_i

Those relying on the previous behavior can still access the endpoint’s previous, full behavior by setting fields=* to return every field.
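As a sketch, a client can build such a request URL with Python's standard library. The helper name below is hypothetical; the default field list mirrors the example above and should be adjusted to whatever your application actually needs.

```python
from urllib.parse import urlencode

def build_search_url(query, fields=("key", "title", "author_key", "author_name", "cover_i")):
    """Return an Open Library /search.json URL restricted to the given fields."""
    params = {"q": query, "fields": ",".join(fields)}
    return "https://openlibrary.org/search.json?" + urlencode(params)

# Requests only the fields this hypothetical client uses:
print(build_search_url("sherlock holmes"))

# Opting back into the old full-payload behavior:
print(build_search_url("sherlock holmes", fields=("*",)))
```

Because urlencode percent-encodes the commas and the asterisk, the server decodes them back to the intended field list.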

Reasoning: Our performance monitoring at Open Library has shown a high number of 500 responses related to Solr search engine performance. During our investigation, we found that some endpoints, like search.json, return payloads of up to 500 KB and often include fields with large lists of data that are not frequently used by many clients. For more details, you can refer to the pull request implementing this change: https://github.com/internetarchive/openlibrary/pull/10350

As always, if you have questions or comments, please message us on X/Twitter @openlibrary or Bluesky, open an issue on GitHub, or contact mek@archive.org.

Warmly,

The Open Library Maintainers

Improving Search, Removing Dead-Ends

Thanks to the work of 2024 Design & Engineering Fellow Meredith White, the Open Library search page now suggests Search Inside results any time a search fails to find books matching by title or author.

Before:

After:

The planning and development of this feature were led by volunteer and 2024 Design & Engineering Fellow Meredith White, who did a fantastic job bringing the idea to fruition.

Meredith writes: Sooner or later, a patron will take a turn that deviates from what a system expects. When this happens, the system might show a patron a dead-end, something like: ‘0 results found’. A good UX though, will recognize the dead-end and present the patron with options to carry on their search. The search upgrade was built with this goal in mind: help patrons seamlessly course correct past disruptive dead-ends.

Many patrons have likely experienced a case where they’ve typed in a specific search term and been shown the dreaded “0 results found” message. If the system doesn’t provide any next steps to the patron, like a “did you mean […]?” link, then this is a dead-end. When patrons are shown dead-ends, they bear the full burden of figuring out what went wrong with their search and what to do next. Is the item the patron is searching for not in the system? Is the wrong search type selected (e.g. are they searching in titles rather than authors)? Is there a typo in their search query? Predicting a patron’s intent and how they veered off course can be challenging, as each case may require a different solution. In order to develop solutions that are grounded in user feedback, it’s important to talk with patrons.

In the case of Open Library, interviewing learners and educators revealed that many patrons were unaware the platform has Search Inside capabilities.

“Several interviewees were unaware of Open Library’s [existing] full-text search, read aloud, or note-taking capabilities, yet expressed interest in these features.”

https://blog.openlibrary.org/2024/06/16/listening-to-learners-and-educators/

Several patrons were also unaware that there’s a way to switch search modes from the default “All” to e.g. “Authors” or “Subjects”. Furthermore, several patrons expected the search box to be type-agnostic.

From our conversations with patrons and reviewing analytics, we learned many dead-end searches were the result of patrons typing search inside queries into the default search, which primarily considers titles and authors. What does this experience look like for a patron? An Open Library patron might type a book quote into the default book search box, such as: “All grown-ups were once children… but only a few of them remember it”. Unbeknownst to them, the system only searches for matching book titles and authors and, as it finds no matches, the patron’s search returns an anticlimactic ‘No results found’ message. In red. A dead-end.

As a Comparative Literature major who spent a disproportionate amount of time of my undergrad flipping through book pages while muttering, “where oh where did I read that quote?”, I know I would’ve certainly benefitted from the Search Inside feature, had I known it existed. With a little brainstorming, we knew the default search experience could be improved to show more relevant results for dead-end queries. The idea that emerged is: display Search Inside results as a “did you mean?” type suggestion when a search returns 0 matches. This approach would help reduce dead-ends and increase discoverability of the Search Inside feature. Thus the “Search Inside Suggestion Card” was born.

The design process started out as a series of Figma drawings:

Discussions with the design team helped narrow in on a prototype that would provide the patron with enough links and information to send them on their way to the Search Inside results page, a book page or the text reader itself, with occurrences of the user’s search highlighted. At the same time, the card had to be compact and easy to digest at a glance, so design efforts were made to make the quote stand out first and foremost.

After several revisions, the prototype evolved into this design:

Early Results

The Search Inside Suggestion card went live on August 21st, and thanks to link tracking that I rigged up on all the clickable elements of the card, we were able to observe its effectiveness. Some findings:

  • In the first day, 2k people landed on the Search Inside Suggestion card when previously they would have seen nothing. That’s 2,000 dead-end evasion attempts!
  • Of these 2,000 users, 60% clicked on the card to be led to Search Inside results.
  • 40% clicked on one of the suggested books with a matching quote.
  • ~8% clicked on the quote itself to be brought directly into the text.

I would’ve thought more people would click the quote itself but alas, there are only so many Comparative Literature majors in this world.

Follow-up and Next Steps

To complement the efforts of the Search Inside Suggestion card’s redirect to the Search Inside results page, I worked on re-designing the Search Inside results cards. My goal for the redesign was to make the card more compact and match its styling as closely as possible to the Search Inside Suggestion card to create a consistent UI.

Before:

After:

The next step for the Search Inside Suggestion card is to explore weaving it into the search results, regardless of result count. The card will offer an alternate search path in a list of potentially repetitive results. Say you searched ‘to be or not to be’ and there happen to be several books with a matching title. Rather than scrolling through these potentially irrelevant results, the search result card can intervene to anticipate that perhaps it’s a quote inside a text that you’re searching for. With the Search Inside Suggestion card taking the place of a dead-end, I’m proud to report that a search for “All grown-ups were once children…” will now lead Open Library patrons to Antoine de Saint-Exupéry’s The Little Prince, page 174!

Technical Implementation

For the software engineers in the room who want a peek behind the curtain, working on the “Search Inside Suggestion Card” project was a great opportunity to learn how to asynchronously lazy-load “parts” of webpages, using an approach called partials. Because Search Inside results can take a while to generate, we decided to lazy-load the Search Inside Suggestion Card only after the regular search had completed.

If you’ve never heard of a partial, well, I hadn’t either. Rather than waiting to fetch all the Search Inside matches to the user’s search before the user sees anything, a ‘No books directly matched your search’ message and a loading bar appear immediately. The loading bar indicates that Search Inside results are being checked, which is UX-speak for “this partial HTML template chunk is loading.”

So how does a partial load? There are a few key players:

  1. The template (HTML file) – this is the page that initially renders with the ‘No books directly matched your search’ message. It has a placeholder div where the partial will be inserted.
  2. The partial (HTML file) – this is the Search Inside Suggestion Card.
  3. The JavaScript logic – this is the logic that says, “get that placeholder div from the template, attach it to an initialization function, and call that function.”
  4. More JavaScript logic – this logic says, “ok, show that loading indicator while I make a call to the partials endpoint.”
  5. A Python class – this is where the partials endpoint lives. When it’s called, it calls a model to send a full-text search query to the database. This is where the user’s wrong turn is at last “corrected”: their initial search in the Books tab that found no matching titles is now redirected to perform a Search Inside search to find matching quotes.
  6. The data returned from the Python class is sent back up the line and the data-infused partial is inserted into the template from step 1. Ta-da!
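The server side of this flow can be sketched roughly as follows. All class and function names here are hypothetical stand-ins, not Open Library's actual code: the real endpoint renders a template and queries the search backend, while this sketch fakes both so the shape of the request/response cycle is visible.

```python
# Illustrative sketch: a "partials" endpoint that runs the full-text
# search and returns the rendered HTML fragment as JSON.
import json

def fulltext_search(query):
    """Stand-in for the model call that queries the search-inside backend."""
    # The real system would hit the search engine; here we fake one result.
    return [{"title": "The Little Prince", "quote": query}]

def render_search_inside_card(results):
    """Stand-in for rendering the Search Inside Suggestion Card partial."""
    items = "".join(
        '<li>{}: "{}"</li>'.format(r["title"], r["quote"]) for r in results
    )
    return '<div class="search-inside-card"><ul>{}</ul></div>'.format(items)

class PartialsEndpoint:
    """Hypothetical handler for GET /partials?workflow=SearchInside&q=..."""

    def GET(self, query):
        results = fulltext_search(query)           # the model call
        html = render_search_inside_card(results)  # render the partial
        return json.dumps({"partials": html})      # sent back to the JS

payload = PartialsEndpoint().GET("All grown-ups were once children")
print(payload)
```

The JavaScript side then parses this JSON and swaps the returned HTML into the placeholder div from step 1.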

About the Open Library Fellowship Program

The Internet Archive’s Open Library Fellowship is a flexible, self-designed independent study which pairs volunteers with mentors to lead development of a high impact feature for OpenLibrary.org. Most fellowship programs last one to two months and are flexible, according to the preferences of contributors and availability of mentors. We typically choose fellows based on their exemplary and active participation, conduct, and performance within the Open Library community. The Open Library staff typically only accepts 1 or 2 fellows at a time to ensure participants receive plenty of support and mentor time. Occasionally, funding for fellowships is made possible through Google Summer of Code or Internet Archive Summer of Code & Design. If you’re interested in contributing as an Open Library Fellow and receiving mentorship, you can apply using this form or email openlibrary@archive.org for more information.

Follow each other on Open Library

By Nick Norman, Mek, et al

Subscribe to readers with complementary tastes to receive book recommendations.

Over the past few months, we’ve been rolling out the basic building blocks of the “Follow” feature: a way for readers to follow those with similar tastes and to tap into a personalized feed of book recommendations.

How does the “Follow” feature work?

Similar to following people on platforms like Facebook, Open Library’s “Follow” feature enables patrons to connect with fellow readers whose Reading Logs are set to public. When you follow other readers, their recent public reading activity will show up in your feed and hopefully help you discover interesting books to read next.

You can get to your feed from the My Books page, using the My Feed item in the left navigation menu:

What’s next?

Most of the functionality for following readers is live, but we’re still designing mechanisms for discovering readers to follow. Interested in shaping the development of this new feature? Take a look at these open GitHub issues relating to the Follow feature.

Your feedback is appreciated

Have other comments or thoughts? Please share them in the comments section below, connect with us on Twitter, and send us your feedback about the new “Follow” feature.

Let Readers Read

Mek here, program lead for OpenLibrary.org at the Internet Archive with important updates and a way for library lovers to help protect an Internet that champions library values.

Over the last several months, Open Library readers have felt the devastating impact of more than 500,000 books being removed from the Internet Archive’s lending library, as a result of Hachette v. Internet Archive.

In less than two weeks, on June 28th, the courts will hear the oral argument for the Internet Archive’s appeal.

What’s at stake is the very ability for library patrons to continue borrowing and reading the books the Internet Archive owns, like any other library.

Consider signing this open letter to urge publishers to restore access to the 500,000 books they’ve caused to be removed from the Internet Archive’s lending library and let readers read.