How I Manage 844 Feeds in My RSS Reader

On a mailing list that I am on, I recently chimed in on a thread about librarian bloggers with a mention of how I followed hundreds of blogs in my RSS reader. Someone asked what my system was for keeping up. Rather than burden that list with a huge reply, I’ve written it up here. I think I’ve found a system that is manageable for me, but I can’t claim that the regimen is for everyone. The short version is that I skim a lot, archive a lot of what I’ve enjoyed, and save a lot for reading later, not all of which I eventually get to. What follows is what I do pretty much every single day.

Check in Feedly Throughout the Day

There are a ton of reasons why I like Feedly as my feed reader. The main one is that it’s available to me in lots of places. The more places that I can check in to see what new items have turned up, the easier it is to keep up. I can get to it in my browser, on my Android phone and Android tablet, and the loaner iPad I have from work.

I check in anywhere from ten to thirty times a day to see what’s unread. Now that I’ve typed that, I realize how crazy it looks, but honestly, I wouldn’t be doing it if I didn’t find the stream of information in the feeds valuable. When I am viewing Feedly in the browser, I rely on the keyboard shortcut “J” to advance to the next item (or “K” if I realize I need to back up to a previous item). Scrolling and clicking is way too inefficient a technique for getting through the typical ten to twenty unread items that greet me when I log in. If I go more than eight or nine hours, there might be up to one hundred unread items on a weekday and thirty to forty on a weekend.

I do a lot of headline scanning as I click “J” and make lots of quick decisions about whether at that moment I:

  • am interested enough to want to linger and read the post right then
  • am happy enough gleaning what I can from the headline
  • want to read a few sentences from the start and then move on
  • want to save the post in one of two places (more on that in a moment)

On my phone, tablet, and iPad, I can swipe through posts almost as quickly as I can click “J” when in the browser.

When I Want To Save Something

I have two main places that I use to save articles and one more that is as much for saving as it is for publicly sharing what I think is interesting. If I see a post, maybe a longish one or one that requires more directed attention, I’ll save it to Instapaper, a service I dearly love. Once an item gets saved to my Instapaper account, I can download it to my gadgets for offline reading. The downloaded version is a clean, mostly text-only version of the post, which makes it really easy to read and focus on. Once I’ve read a post that I had saved in Instapaper, I delete it from my account.

In the Feedly app, you can designate what “read later” service you want your posts to be saved to. I set it up to go to Instapaper. At the top of every post in the Feedly app is a save icon I can tap that will send that item straight off to Instapaper. In the browser, when I’m plowing through posts, if I want to save a post to Instapaper, I have to use a different method; I click on the post title to open it up in a different browser tab (it opens the post on the blog itself, not in Feedly) and then use the Instapaper bookmarklet I’ve got set up on the browser toolbar.

Sometimes (maybe half of the time), after I’ve read a saved post in Instapaper, I decide that it’s interesting enough that I want to share it with the world. I find a quote from the post that’s caught my imagination (even if it’s something I don’t agree with) and post it to my Tumblr site (along with a link back to the original post). That Tumblr serves as my commonplace book as well as an archive of blog posts that I may want to get back to later. By using Tumblr to maintain my commonplace book, I can make my private acts of discovery and reading public, thereby spotlighting in a very open manner something that I think other folks in libraryland might find interesting.*

There are a handful of posts that turn up in Feedly that I want to save that are not really work-related. These posts are typically very pragmatic things that I suspect I’ll want to return to later (such as posts from Lifehacker about the top 5 pieces of software for creating DVDs). Posts like these I save to Evernote in a special notebook just for such items.

Weeding the Feeds

Over the seven to eight years that I’ve been using a feed reader (Bloglines, then Google Reader, now Feedly), I’ve been adding new feeds to my collection of subscriptions at the rate of about one to three a month. Every few months, I unsubscribe from a feed because I’ve given up on it providing any items I care about or because I’ve noticed that it’s been dormant for at least a year. I’ve not been that assiduous, though, about weeding dormant feeds all these years, as a number of blogs that I had given up as dead have sometimes come back to life. Also, it’s really not worth my time to delete dead feeds, as they really don’t take up any mental space for me; they’re invisible within my reader unless I go browse the complete list of feeds, something I rarely do.

So that’s my system. It may sound nutty to some, but I hope that the technique of passing items off to Instapaper and Evernote is useful to people who are struggling with unread items. My final word of advice is don’t sweat it at all if you decide to declare feed bankruptcy; if your inbox overflows beyond all reason, just click “mark all as read” and move on with your life. Chances are, someone in the near future will blog about that same topic or link back to one of those posts you skipped, offering you another chance at it.

* Using the IFTTT service, I’ve got it set up so that every post on my Tumblr site is copied into my Evernote account. That way, I’ve got all those posts backed up in Evernote and saved in a place where I’ve got all sorts of documents, notes, etc. squirreled away (not to mention easily refound via Evernote’s great search capabilities).

How the Mobile Web Affects Perceptions of the Traditional Web

This fall, I’m going to be working with some colleagues from other parts of CUNY to do some research that will allow us to explore how perceptions of the web on mobile devices and on desktop/laptop browsers affect the user experience. Specifically, we want to look at an interface where there is a mobile web version that is notably different from the traditional web interface. How distinct can the interfaces be before the user experience is notably degraded? And which interface will suffer more in the way it’s perceived?

I realize that one way to get around the problem of forked interfaces (one for desktop/laptop browser dimensions and one for mobile browser dimensions) is to focus instead on a responsive design that shifts content around depending on the device being used. But I still wonder where the break point is for responsive design, too. How much shifting of content on a page in a responsive design can be done before the user who visits a site on different devices regularly gets thrown off by the altered layouts?

It may be the case that most users have minimal issues in adjusting themselves as they use different devices to visit the same site. I’m interested now in finding literature that addresses these questions and hopeful that the research my colleagues and I are about to undertake will offer evidence with respect to interfaces in library resources.

Redesigning a Faculty Services Page

Today, we start the process of redesigning a page on our library’s website that details various services available to faculty members. I thought it might be useful to document that process a bit.

Some History

The library here at Baruch College launched a redesigned website on December 26, 2012. Most of the work that went into the redesign focused on student needs. Now, we’d like to take some time to rethink how the services and resources of interest to the faculty are presented on the site. The text on the current “Faculty Services” page is mostly the same text we had on the old site.

The Plan

The short version: needs assessment, redesign, usability tests, tweak design as needed.

The long version: Today, I meet with a five-person Committee on the Library, a body whose members are elected from departments across the three schools at Baruch (a business school, a school of arts and sciences, and a school of public affairs). To prepare for today’s meeting, I asked members to complete a three-question survey:

  1. What are the top three tasks that you come to the library website for?
  2. What other reasons or tasks bring you to the library website?
  3. What brings you to the library website more: your own research needs or your teaching needs?

At the meeting, I intend to:

  • review the overall plan for redesigning the page
  • review the results of my survey and a survey administered last fall to the faculty that focused on the value of the library
  • do a card sorting exercise where I ask them to arrange cards featuring services and resources into piles that make sense to them (from this, I hope to discern a useful way to chunk the content on the page, to find better wording for that content, and to learn if there are any things I forgot or that I can forget)

After the meeting is over, I expect to do the following this semester:

  • work with our web design team to come up with a new layout and new text for the page
  • make individual appointments with members of the Committee on the Library to have them do a traditional usability test that will likely take them to the redesigned faculty services page as well as other places on the website
  • analyze the results of the usability tests and then work with the web design team to further refine the faculty page

I expect that the process will be completed this spring semester. As I complete steps in the plan, I’ll try to return to my blog to write up posts about how it’s been going.

Looking for an E-Resource Ticketing System

At my library, I’ve been asked recently to look into ways that we might create an e-resource ticketing system. The main goal is to have a system that makes it easy for librarians to report a problem and to check the status of efforts to fix the problem.

Here is a list of some of the key functions and features that I’ve come up with so far:

  • librarians can report a problem with an e-resource
    • a web form would be the main way to enter requests
    • email submission would be a useful though not essential additional way to report
  • librarians can browse previously reported problems to see if the one they want to report has already been reported
    • filtering and sorting options would be preferable
  • librarians can check status of previously reported problems
  • if a login system is required for the librarians submitting or browsing tickets, it should allow us to hook up with Active Directory (I don’t want anyone to have to remember any additional user name/password combos)
  • my supervisor, the head of collection management, should be able to assign tickets to me or others as needed
  • those of us handling tickets should be able to add data to an additional field if initial troubleshooting reveals that the problem is related to a different e-resource or system (for example, a report may come in that we can’t access a particular journal, but the problem may actually turn out to be one with SFX and not the database where the journal is found)
  • those of us handling tickets should be able to update the status of the tickets and to add notes about how the resolution is progressing
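
To make the requirements concrete, here’s a minimal sketch in Python of the ticket record that the list above implies. The field names, status values, and example data are all my own invention for illustration; they aren’t taken from any particular ticketing product:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Ticket:
    resource: str                          # e-resource the problem was reported against
    description: str
    reporter: str
    status: str = "open"                   # e.g. open, in progress, resolved (hypothetical values)
    assignee: Optional[str] = None         # set by the head of collection management
    related_resource: Optional[str] = None # filled in if troubleshooting points elsewhere (e.g. SFX)
    notes: List[str] = field(default_factory=list)
    opened: date = field(default_factory=date.today)

    def add_note(self, note: str) -> None:
        """Record how the resolution is progressing."""
        self.notes.append(note)

# Example: a journal-access report that turns out to be a link-resolver problem
t = Ticket(resource="Journal of Marketing",
           description="No full text via link resolver",
           reporter="reference desk")
t.assignee = "e-resources librarian"
t.related_resource = "SFX"
t.status = "in progress"
t.add_note("Initial troubleshooting points to SFX, not the database itself.")
```

Even the boring Google Spreadsheet option mentioned below would just be this same record flattened into one row per ticket.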

This system is meant to be for internal use only and wouldn’t be made visible to our public. It’s got to be web-based, as we don’t want to be messing around with installing software. I’m also hoping that we can find a solution that is free. It doesn’t have to be a really fancy or slick system. Here are some of the options I’m thinking about:

After all this fuss, I may end up going with a boring but serviceable Google Spreadsheet/web form combination, as I can get it up and running for free in about 15 minutes. What free systems or tools would you recommend? If anyone has screenshots or a public view of some part of the system they use, I’d love to check them out.

Backup Livescribe Data in Dropbox

I’ve loved my Livescribe pen since I got it a few summers ago. It’s awesome to be able to take notes in the freest possible way–with a pen and paper–and know that regardless of which notebook I am using, all my notes will be aggregated in one, searchable, digitized place (the Livescribe desktop software). Until this week, though, I only had that searchable digitized place available on one computer (my work computer).

Thanks to this blog post by Rohan Kapoor, I’ve learned how to store the Livescribe files in Dropbox and thus make them available on any computer where I’ve installed the Livescribe desktop software. The process wasn’t as simple as just moving the files from the location on my hard drive to a folder in Dropbox. Instead, I had to install a program called Junction and create virtual pointers that said to the Livescribe software, “Hey, that archive of Livescribe files isn’t really here; it’s over there in Dropbox.” Setting it up involved typing commands at the command prompt, something that took me back to good old DOS days.

I see that Livescribe now offers a new pen that automatically transmits via wifi all your notes to your Evernote account, thereby achieving the same kind of automated cloud storage of notes that I did. If you’ve got an older pen like mine, you may want to look into the Livescribe/Dropbox set up I did.

Using Qualtrics for Usability Testing

At the marvelously helpful Usability@NYU event I was at yesterday, I learned about a great way to use survey software (Qualtrics) for usability testing. Since we have the same software here at Baruch College, I spent part of today setting up a few sandbox surveys so that I could try out different question types and get a sense of how survey data would be recorded and displayed. I’ve found three question types so far that look like they’ll be useful. All of them involve uploading screenshots to be part of the question.

Question Type: Heat Maps

Looking at a screenshot, the user gets to click somewhere on the screen in response to some question posed in the survey. The data then gets recorded in a heat map of click data; if you mouse over different parts of the heat map report, you can see how many clicks were done in that one spot. Another way you can set up the screenshot is to predefine regions that you want to name so that the heat map report not only offers the traditional heat map display but also a table below showing all the regions you defined and how many clicks landed in each of those regions.

Question Type: Hot Spots

As with the heat map question type, the hot spot question presents the user with a screenshot to click on. But this type of question requires that the person setting up the survey predefine regions on the screenshot. When the test participant is viewing the screenshot, they are again being asked to click somewhere based on the question being posed. The survey designer can either make those predefined regions have borders that are visible only on mouse over or that are always visible. By making the region borders visible to the test participant, you can draw the participant’s eye to the choices you want him/her to focus on.
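
Under the hood, the predefined regions in these two question types amount to named rectangles laid over the screenshot, and classifying a click is just a point-in-rectangle test. Here’s a rough sketch in Python; the region names and pixel coordinates are invented for illustration, not taken from Qualtrics:

```python
from collections import Counter

# Each region is a named rectangle on the screenshot: (left, top, right, bottom) in pixels.
# These names and coordinates are made up for the sake of the example.
regions = {
    "search box": (40, 10, 400, 60),
    "facet panel": (0, 80, 180, 600),
    "results list": (200, 80, 760, 600),
}

def classify_click(x, y, regions):
    """Return the name of the first region containing the point, or None if it missed them all."""
    for name, (left, top, right, bottom) in regions.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

# Tallying clicks per region is what produces the table under the heat map report.
clicks = [(100, 30), (90, 200), (500, 300), (780, 650)]
tally = Counter(classify_click(x, y, regions) for x, y in clicks)
```

A click that lands outside every named region still shows up in the tally (under `None` here), which is itself useful data: it tells you where participants expected something clickable that wasn’t.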

Question Type: Multiple Choice

Although multiple choice questions are the most lowly of question types here–no razzle dazzle–it wasn’t until today that I realized how easy it is to upload an image (such as a screenshot) to be part of the answer choice. This seems like a great way to present two or more design ideas you are toying with.

Many Uses for a Survey

As a one-person UX group at my library, I find running tests a challenge sometimes if I can’t find a colleague or two to rope into lending a hand with the test. Now I feel like I’ve got a new option for getting feedback, one that can be used in conjunction with a formal usability test or that can be used in lots of different ways:

  • Load the survey in the browser of a tablet and go up to students in the library, the cafeteria, etc. and ask for quick feedback
  • Bring up the survey at the reference desk at the close of a reference interaction when it seems like the student might be open to helping us out for a minute or two
  • Distribute the survey link through various communication channels we’ve got (library home page, email blast to all students, on a flyer, etc.)

Sample Survey

I made a sample survey here in Qualtrics that you can try out. It’s designed to show off some of the features of questions in Qualtrics, not to answer any real usability questions we currently have here at Baruch. At the close of the session, I set it up so that it offers you a summary of your response (only I can see all the responses aggregated together in a report). It’s likely that when I use Qualtrics surveys for usability, I’ll set them up so they end either by looping back to the first question (useful when I’m going up to people with my iPad in hand and the survey loaded in the browser) or by giving them some thank you message. If I get enough responses in this sample survey, I’ll write a new post to show what the report for the survey looks like. In the meantime, I’d be interested in hearing from anyone who is already using Qualtrics for usability testing or another survey tool.

First Presentation on Summon

At the CUNY IT Conference last week, I was fortunate enough to be asked to be on a panel about discovery services with a bunch of really great folks: Angela Sidman from the CUNY Office of Library Services, Nadaleen Templeman-Kluit from NYU, and Bruce Heterick from JSTOR. My presentation focused on how our pilot of Summon has been going. This was the first time since we launched Summon in January of this year that I’ve been asked to do a presentation on it. It was really useful to take some time to think about what impact we’ve seen so far and what kind of an impact we hope to see in the coming years.

Here’s the presentation on Google Drive

And here are the notes for the slides:

Slide 1

  • I’m a user experience librarian at Baruch College; do a lot of usability testing of online resources and interface tweaking
  • Mike Waldman couldn’t be here today

Slide 2

  • Like all other CUNY schools, Baruch is a commuter school
  • we have an FTE of about 14,000
  • We’re primarily a business school
  • about 80% of our materials budget is spent on electronic resources (Serials, ebooks, datasets)

Slide 3

  • Like most colleges, Baruch saw the number of databases it subscribed to multiply quickly; reference and instruction required us to tell the students to first go here to search, then go here, then go here, etc.
  • In 2008, we tried to pull access to many of those databases together into a single search screen using a federated search service called 360 Search; we called the tool “Bearcat Search” and added it to our list of databases and gave it a special high visibility location with a large graphic; over the next few years, we found the interface slow, balky, wonky, and high maintenance
  • in 2012, we swapped out our 360 Search subscription for a Summon subscription (both are products from Serials Solutions); we kept the name and placement of the links to the service as before
  • As Angela noted earlier, discovery services like Summon let you add your own local metadata from things like your catalog, your institutional repository, your digital media collections, etc., to the central index provided by the vendor (that central index is pre-populated with a massive collection of records for articles and ebooks)
  • Because this is a Baruch-only pilot project, it didn’t make sense for us to add catalog records for Baruch items, as doing so would require large nightly exports from a catalog server that is shared across the whole CUNY library system
  • One interesting local set of records that we added are our LibGuides

Slide 4

  • before talking about the impact we’ve been seeing from Summon so far, let me just highlight some notable features of it; in general, the search we present is stripped down basic box, as unintimidating as your typical search engine

Slide 5

  • Results are returned very fast in Summon (maybe loading in only 1% of the time it would take a typical 360 Search to load)
  • Let’s take a closer look at the search results page for this search for “cognitive load theory”
  • You can see the articles found from our search here; the full text of these articles may be found in any one of our databases that offers full text, so a search here may lead you to a database from JSTOR, Oxford, EBSCO, ProQuest, Cambridge, Elsevier, etc.

Slide 6

  • One clever thing Summon does is recommend subject specific databases at the top of your search results pages
  • As of a few days ago, we can now tweak the way this database recommender system makes its suggestions
  • For those who worry that a discovery system might eclipse your specialized databases, this feature shows that it can complement and even spotlight resources your students and faculty didn’t even know about in the first place

Slide 7

  • On the left side of every search page is a way to filter by format type (articles, ebooks, etc.)

Slide 8

  • Also on the left is a way to filter by subject
  • One thing that we really like about Summon is speed with which results are returned after a facet is clicked
  • Usability testing I conducted earlier this year surprised me by showing me the opposite of something I’d long assumed to be true. I’d always thought students ignored the facets and filters on the results page and focused exclusively on the list of results; instead, I saw that students instinctively used the facets to refine the search (no instruction was required!)

Slide 9

  • Another new feature this week is the “did you mean” feature that suggests a new query if it thinks you misspelled something

Slide 10

  • So let’s look at that same search for “cognitive load theory” in a very popular database, one that many colleges have long had and that is intended to search across periodicals representing a wide spectrum of subjects: Academic Search Complete
  • Like most libraries, we’ve long gone with presenting the advanced search screen as the default; there is a basic one of course, but many librarians have long assumed that students need the advanced search screen even if those students didn’t know it

Slide 11

  • Summon’s search results page isn’t really that much of a departure from the typical library database
  • Summon’s interface is a bit cleaner, though; it would be interesting to test usage levels for the facets in Summon vs. those in a traditional database like this one
  • Note that in Summon, we found 56,000 items in our search; here in Academic Search Complete, only 206

Slide 12

  • So what are the key ways that Summon is affecting our library? Here is what we know
  • For reasons that are unclear, we’re seeing much higher use of Bearcat Search now that it’s powered by Summon and not 360 Search
  • On a monthly basis, we’re now seeing about 50% more search sessions in Summon than we had in 360 Search, and more than 200% more searches being run
  • the speedier delivery of results in Summon means users are more likely to do the kind of iterative searching they are used to doing in Google (average number of searches per session is 5 compared to 2 in 360 Search)
  • The redesign of our library website that we are launching at the end of this year will feature a search box dead center on the home page and at the top of every internal page; we expect our stats will really explode after that

Slide 13

  • So we see the raw numbers going up but we don’t know yet who is using it and why
  • We hope that Summon will increase other things for us, too
  • Given the ease of using this tool, it serves underclassmen well and may make a better candidate for a database to use when teaching 1st year students how to search
  • Because the index in Summon is so huge and includes records from databases that we know rarely get used, we hope that it’s leading students to e-content that had previously been little used
  • We also hope that the database recommender feature may be yet another way to steer our students to specialized databases that they typically only think to use when a teacher or a friend recommends one
  • And finally, we hope that student satisfaction will go up as they find a tool that is easier and more pleasant to use yet still taps deep into the relevant content they need for their assignments

As I was digging into the statistics a bit while preparing my presentation, I realized I had a number of questions that I’d like to find answers for:

  • Do students use facets on the search results pages of our traditional databases more or less than they do in Summon? In my usability testing of Summon this spring, I was surprised by how often and easily students used the facets without any prompting from me. If they use facets more often in Summon than in other databases, why is that the case?
  • How can we find out if Summon is driving up access for full text journals that had previously been underutilized because the only way to find them previously was to use lesser known databases?
  • Do students find searching in Summon more or less satisfying than searching in our traditional article databases?
  • Is there a better way to present the recommended databases that frequently appear at the top of the search results pages?  Do students actually see these recommendations? What do they think of them? How often and when will they actually click through to the recommended database?
  • How do students feel about the fact that information from many of our business databases that feature specialized reports and data about companies, industries, etc. is unlikely to ever appear in the search results pages of Summon (except as recommended databases)? If they are searching in Summon for data that is only found in specialized databases, are they more likely to give up and try their luck in Google, or will they ask for help or see what other databases/tools we offer?

It looks like I could fill up the rest of my professional career as an academic librarian trying to answer all these questions. No time like the present to get started.


Sources of Information for Understanding Your Academic Library Users

As a user experience librarian, I need to make sure that I am considering all the sources of information that will help me better understand our students and faculty as library users. I want as much as possible to keep in mind the mantra that “the user is not me.”

As an exercise in making a list of the main ways that I can learn about our users in the college library where I work, I put together this little mindmap that delineates between those sources where we are actively soliciting responses from our users and those sources where we are sifting through the traces of the users’ interactions with our services and systems. Did I miss anything important?

Testing Embedded LibGuides Content on External Sites

At my library, we’re thinking of using LibGuides to manage our database lists for the redesigned library website. I’m just experimenting here to see how well the LibGuides API that lets you publish a box from a LibGuide on an external website works. Currently, we use a homegrown database to manage the display of databases in A-Z and subject breakdowns on the library site. We also use LibGuides for the usual kinds of subject guides. To help my colleagues who make LibGuides feel confident that the database links they use are the latest ones, I have a privately published LibGuide that maintains a canonical set of URLs. When librarians create new LibGuides and want to link to a given database, they don’t have to copy and paste URLs; instead, they can create a link that has a URL that is mapped to the canonical one. If I have to update the canonical URL in LibGuides, then all the LibGuides that use that mapped URL will automatically get updated with the latest URL.

With no effort to customize the look of this box from my philosophy subject guide, here’s a box republished via API:

Teaching in a Paperless Classroom

Last fall, I taught one of the library’s three-credit courses again. I decided to teach it in a way that would use as little paper as possible by using a combination of Google Docs, WordPress, and LibGuides. I have been meaning to write about this for months now. This morning, I did a presentation at the Teaching and Technology Conference here at Baruch College at which I spoke about my little experiment. I’m presenting my slides here as a way of sharing how it worked out for me. When I prepared my slides in PowerPoint, I typed out a script for what I would say in the notes for the slides; if you download the PowerPoint or PDF version of my slides, you’ll see what it was that I had intended to write as a lengthy post on this blog. If you just want to take a spin through the slides, you can find them embedded below.