Using Qualtrics for Usability Testing

At the marvelously helpful Usability@NYU event I attended yesterday, I learned about a great way to use survey software (Qualtrics) for usability testing. Since we have the same software here at Baruch College, I spent part of today setting up a few sandbox surveys so that I could try out different question types and get a sense of how survey data would be recorded and displayed. I’ve found three question types so far that look like they’ll be useful. All of them involve uploading screenshots to be part of the question.

Question Type: Heat Maps

Looking at a screenshot, the user clicks somewhere on the screen in response to a question posed in the survey. The clicks are then recorded and displayed as a heat map; if you mouse over different parts of the heat map report, you can see how many clicks landed on any one spot. You can also set up the screenshot with predefined regions that you name, so that the heat map report offers not only the traditional heat map display but also a table below showing each region you defined and how many participants clicked within it.
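
To make the region idea concrete, here’s a minimal sketch (plain Python, nothing Qualtrics-specific) of how click coordinates can be bucketed into named regions to produce the kind of region table the heat map report shows. The region names, coordinates, and click data below are all made up for illustration.

    # A minimal sketch of bucketing clicks into named regions; the regions and
    # click data are invented for illustration and are not Qualtrics internals.
    from collections import Counter

    # Each region is a named rectangle on the screenshot: (x1, y1, x2, y2) in pixels.
    REGIONS = {
        "search box": (40, 10, 400, 60),
        "databases link": (40, 120, 200, 150),
        "hours widget": (420, 10, 600, 90),
    }

    def region_for_click(x, y):
        """Return the name of the predefined region containing the click, or 'other'."""
        for name, (x1, y1, x2, y2) in REGIONS.items():
            if x1 <= x <= x2 and y1 <= y <= y2:
                return name
        return "other"

    # Hypothetical click data: one (x, y) pair per participant.
    clicks = [(120, 35), (90, 130), (500, 50), (150, 40), (700, 300)]

    counts = Counter(region_for_click(x, y) for x, y in clicks)
    for region, n in counts.most_common():
        print(f"{region}: {n} click(s)")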

Question Type: Hot Spots

As with the heat map question type, the hot spot question presents the user with a screenshot to click on. But this type of question requires that the person setting up the survey predefine regions on the screenshot. When viewing the screenshot, the test participant is again asked to click somewhere based on the question posed. The survey designer can make the borders of those predefined regions visible only on mouse over or visible at all times. By making the region borders visible to the test participant, you can draw the participant’s eye to the choices you want them to focus on.

Question Type: Multiple Choice

Although multiple choice questions are the lowliest of the question types here (no razzle dazzle), it wasn’t until today that I realized how easy it is to upload an image (such as a screenshot) as part of an answer choice. This seems like a great way to present two or more design ideas you are toying with.

Many Uses for a Survey

As a one-person UX group at my library, I sometimes find running tests a challenge if I can’t find a colleague or two to rope into lending a hand. Now I feel like I’ve got a new option for getting feedback, one that can be used in conjunction with a formal usability test or on its own in lots of different ways:

  • Load the survey in the browser of a tablet and go up to students in the library, the cafeteria, etc. and ask for quick feedback
  • Bring up the survey at the reference desk at the close of a reference interaction when it seems like the student might be open to helping us out for a minute or two
  • Distribute the survey link through various communication channels we’ve got (library home page, email blast to all students, on a flyer, etc.)

Sample Survey

I made a sample survey here in Qualtrics that you can try out. It’s designed to show off some of the question features in Qualtrics, not to answer any real usability questions we currently have here at Baruch. I set it up so that at the close of the session it offers you a summary of your response (only I can see all the responses aggregated together in a report). It’s likely that when I use Qualtrics surveys for usability testing, I’ll set them up so they end either by looping back to the first question (useful when I’m going up to people with my iPad in hand and the survey loaded in the browser) or by showing a thank-you message. If I get enough responses to this sample survey, I’ll write a new post to show what the report looks like. In the meanwhile, I’d be interested in hearing from anyone who is already using Qualtrics (or another survey tool) for usability testing.

First Presentation on Summon

At the CUNY IT Conference last week, I was fortunate enough to be asked to be on a panel about discovery services with a bunch of really great folks: Angela Sidman from the CUNY Office of Library Services, Nadaleen Templeman-Kluit from NYU, and Bruce Heterick from JSTOR. My presentation focused on how our pilot of Summon has been going. This was the first time since we launched Summon in January of this year that I’d been asked to do a presentation on it. It was really useful to take some time to think about what impact we’ve seen so far and what kind of impact we hope to see in the coming years.

Here’s the presentation on Google Drive

And here are the notes for the slides:

Slide 1

  • I’m a user experience librarian at Baruch College; I do a lot of usability testing of online resources and interface tweaking
  • Mike Waldman couldn’t be here today

Slide 2

  • Like all other CUNY schools, Baruch is a commuter school
  • We have an FTE enrollment of about 14,000
  • We’re primarily a business school
  • about 80% of our materials budget is spent on electronic resources (Serials, ebooks, datasets)

Slide 3

  • Like most colleges, Baruch saw the number of databases it subscribed to multiply quickly; at the reference desk and in instruction sessions, we had to tell students to first go here to search, then go here, then go here, etc.
  • In 2008, we tried to pull access to many of those databases together into a single search screen using a federated search service called 360 Search; we called the tool “Bearcat Search,” added it to our list of databases, and gave it a high-visibility location with a large graphic; over the next few years, we found the interface slow, balky, wonky, and high maintenance
  • In 2012, we swapped out our 360 Search subscription for a Summon subscription (both are products from Serials Solutions); we kept the name and placement of the links to the service as before
  • As Angela noted earlier, discovery services like Summon let you add your own local metadata from things like your catalog, your institutional repository, your digital media collections, etc., to the central index provided by the vendor (that central index is pre-populated with a massive collection of records for articles and ebooks)
  • Because this is a Baruch-only pilot project, it didn’t make sense for us to add catalog records for Baruch items, as doing so would require large nightly exports from a catalog server that is shared across the whole CUNY library system
  • One interesting set of local records that we added is our LibGuides

Slide 4

  • Before talking about the impact we’ve been seeing from Summon so far, let me just highlight some of its notable features; in general, the search we present is a stripped-down, basic box, as unintimidating as your typical search engine

Slide 5

  • Results are returned very fast in Summon (maybe loading in only 1% of the time it would take a typical 360 Search to load)
  • Let’s take a closer look at the search results page for this search for “cognitive load theory”
  • You can see the articles found from our search here; the full text of these articles may be found in any one of our databases that offers full text, so a search here may lead you to a database from JSTOR, Oxford, EBSCO, ProQuest, Cambridge, Elsevier, etc.

Slide 6

  • One clever thing Summon does is recommend subject specific databases at the top of your search results pages
  • As of a few days ago, we can now tweak the way this database recommender system makes its suggestions
  • For those who worry that a discovery system might eclipse your specialized databases, this feature shows that it can complement and even spotlight resources your students and faculty didn’t even know about in the first place

Slide 7

  • On the left side of every search page is a way to filter by format type (articles, ebooks, etc.)

Slide 8

  • Also on the left is a way to filter by subject
  • One thing that we really like about Summon is the speed with which results are returned after a facet is clicked
  • Usability testing I conducted earlier this year surprised me by showing the opposite of something I’d long assumed to be true. I’d always thought students ignored the facets and filters on the results page and focused exclusively on the list of results; instead, I saw that students instinctively used the facets to refine the search (no instruction was required!)

Slide 9

  • Another feature new this week is “did you mean,” which suggests a new query if it thinks you misspelled something

Slide 10

  • So let’s look at that same search for “cognitive load theory” in a very popular database, one that many colleges have long had and that is intended to search across periodicals representing a wide spectrum of subjects: Academic Search Complete
  • Like most libraries, we’ve long presented the advanced search screen as the default; there is a basic one, of course, but many librarians have assumed that students need the advanced search screen even if those students don’t realize it

Slide 11

  • Summon’s search results page isn’t really that much of a departure from the typical library database
  • Summon’s interface is a bit cleaner, though; it would be interesting to test usage levels for the facets in Summon vs. those in a traditional database like this one
  • Note that in Summon, we found 56,000 items in our search; here in Academic Search Complete, only 206

Slide 12

  • So what are the key ways that Summon is affecting our library? Here is what we know
  • For reasons that are unclear, we’re seeing much higher use of Bearcat Search now that it’s powered by Summon rather than 360 Search
  • On a monthly basis, we’re now seeing about 50% more search sessions in Summon than we had in 360 Search, and more than 200% more searches being run
  • The speedier delivery of results in Summon means users are more likely to do the kind of iterative searching they are used to doing in Google (the average number of searches per session is 5, compared to 2 in 360 Search)
  • The redesign of our library website that we are launching at the end of this year will feature a search box dead center on the home page and at the top of every internal page; we expect our stats will really explode after that

Slide 13

  • So we see the raw numbers going up but we don’t know yet who is using it and why
  • We hope that Summon will increase other things for us, too
  • Given the ease of using this tool, it serves underclassmen well and may be a better candidate for a database to use when teaching first-year students how to search
  • Because the index in Summon is so huge and includes records from databases that we know rarely get used, we hope it is leading students to e-content that had previously seen little use
  • We also hope that the database recommender feature may be yet another way to steer our students toward the specialized databases they typically think to use only when a teacher or a friend recommends one
  • And finally, we hope that student satisfaction will go up as students find a tool that is easier and more pleasant to use and that still taps deep into the relevant content they need for their assignments

As I was digging into the statistics a bit while preparing my presentation, I realized I had a number of questions that I’d like to find answers for:

  • Do students use facets on the search results pages of our traditional databases more or less than they do in Summon? In my usability testing of Summon this spring, I was surprised by how often and easily students used the facets without any prompting from me. If they use facets more often in Summon than in other databases, why is that the case?
  • How can we find out if Summon is driving up access to full-text journals that had been underutilized because the only way to find them was through lesser-known databases?
  • Do students find searching in Summon more or less satisfying than searching in our traditional article databases?
  • Is there a better way to present the recommended databases that frequently appear at the top of the search results pages?  Do students actually see these recommendations? What do they think of them? How often and when will they actually click through to the recommended database?
  • How do students feel about the fact that information from many of our business databases, which feature specialized reports and data about companies, industries, etc., is unlikely to ever appear on Summon’s search results pages (except via the recommended databases)? If they are searching in Summon for data that is only found in specialized databases, are they more likely to give up and try their luck in Google, or will they ask for help or see what other databases and tools we offer?

It looks like I could fill up the rest of my professional career as an academic librarian trying to answer all these questions. No time like the present to get started.

 

Sources of Information for Understanding Your Academic Library Users

As a user experience librarian, I need to make sure that I am considering all the sources of information that will help me better understand our students and faculty as library users. As much as possible, I want to keep in mind the mantra that “the user is not me.”

As an exercise in listing the main ways I can learn about our users in the college library where I work, I put together this little mindmap, which distinguishes between the sources where we are actively soliciting responses from our users and the sources where we are sifting through the traces of users’ interactions with our services and systems. Did I miss anything important?

Testing Embedded LibGuides Content on External Sites

At my library, we’re thinking of using LibGuides to manage our database lists for the redesigned library website. I’m just experimenting here to see how well the LibGuides API works; it lets you republish a box from a LibGuide on an external website. Currently, we use a homegrown database to manage the display of databases in A-Z and subject breakdowns on the library site. We also use LibGuides for the usual kinds of subject guides. To help my colleagues who make LibGuides feel confident that the database links they use are the latest ones, I maintain a privately published LibGuide with a canonical set of URLs. When librarians create new LibGuides and want to link to a given database, they don’t have to copy and paste URLs; instead, they can create a link whose URL is mapped to the canonical one. If I have to update the canonical URL in LibGuides, then all the LibGuides that use that mapped URL automatically get the latest URL.
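
As a rough sketch of what republishing a box on the server side could look like, here’s a bit of Python that fetches a box’s markup and drops it into a page. The endpoint URL and parameter names are placeholders I’ve made up, not the documented LibGuides API; the real values would come from the LibGuides admin screens.

    # A rough sketch, not the documented LibGuides API: the endpoint and the
    # parameter names below are placeholders standing in for whatever LibGuides
    # provides for a particular box.
    import urllib.parse
    import urllib.request

    API_ENDPOINT = "http://api.libguides.com/api_box.php"  # placeholder endpoint
    params = {
        "site_id": "123",   # hypothetical institution ID
        "page_id": "4567",  # hypothetical guide page ID
        "box_id": "89012",  # hypothetical box ID
    }

    url = API_ENDPOINT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        box_markup = response.read().decode("utf-8")

    # The returned markup could then be cached and dropped into the database
    # A-Z page template on the library site.
    print(box_markup[:500])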

With no effort to customize the look of this box from my philosophy subject guide, here’s a box republished via API:

Teaching in a Paperless Classroom

Last fall, I taught one of the library’s three-credit courses again. I decided to teach it using as little paper as possible, relying on a combination of Google Docs, WordPress, and LibGuides. I have been meaning to write about this for months now. This morning, I gave a presentation at the Teaching and Technology Conference here at Baruch College about my little experiment. I’m presenting my slides here as a way of sharing how it worked out for me. When I prepared my slides in PowerPoint, I typed out a script of what I would say in the notes for the slides; if you download the PowerPoint or PDF version, you’ll see what I had intended to write up as a lengthy post on this blog. If you just want to take a spin through the slides, you can find them embedded below.

Usability Testing Basics

The kind folks who run the Carterette Series Webinars for the Georgia Library Association invited me to do a presentation on usability testing basics. I just finished up an hour ago and wanted to share my slides as soon as possible. The webinar recording will be archived and freely available soon (check the archived sessions page). In the meanwhile, here are my slides:

If you want to see my slides with my notes, you can get the original PowerPoint slides, too.

During the presentation, I read aloud from a script we used this past January when we were testing a draft of the library website. Here’s that script:

Test Script

First of all, we’d like to thank you for coming. Before we get started, I’m going to start a recording here so that we can document this session. Please don’t worry, as this session will be kept private and you’ll remain anonymous.

[Test moderator hits CTRL-F8 on the laptop keyboard to start the audio and screen recording]

As I mentioned earlier, we’re in the process of redesigning the library web site. In order to make it as easy to use as possible, we’d like to get some input from the people who will be using it. And that’s where you come in. We’re going to ask you to perform a very simple exercise that will give us some great insight into how we can make this web site easier to use.

I want to make it clear that we’re testing the site, not you. You can’t do anything wrong here. We want to hear exactly what you think, so please don’t worry that you’re going to hurt our feelings. We want to improve it, so we need to know honestly what you think.

As we go along, I’m going to ask you to think out loud, to tell me what’s going through your mind.

If you have questions, just ask. I may not be able to answer them right away, since we’re interested in how people do when they don’t have someone sitting next to them, but I will try to answer any questions you still have when we’re done.

Do you have any questions before we begin?

[initial questions for the test subject]

Before we begin the exercise, I’d like to ask you a few quick questions:

What year are you in school?
Approximately how often do you use the library web site? (sample responses: several times a day, once a day, once a week, once a month, less than once a month)
Can you give me a list of 3-4 things you would expect to find or be able to do on the library’s website?
What type of information or services have you looked for or used on the library web site?
Have you had trouble finding any information or services on the library site?

OK, great. We’re done with the questions and we can begin the exercise. Here’s how it works. First, I’m just going to ask you to look at the home page of the test library website and tell me what you think it is, what strikes you about it, and what you think you would click on first.

And again, as much as possible, it will help us if you can try to think out loud so we know what you’re thinking about.

[Test moderator opens browser to test page]

OK. Is there anything that interests you on this page that you might click on?

Before you click, can you tell me what you expect to find when you click on the link?

[after clicking] Did you find what you expected?

[Three main tasks that test subject will complete]

I’m going to ask you to try to complete some tasks using the test library site. Please keep in mind that some of the interior pages of the library site don’t have all the text or links that they ultimately will. And as you can see from the library home page, there are some open spaces that we haven’t put content into yet.

[First task; make sure the browser is back to the home page]

OK, beginning at the library home page, pretend that you want to know what the hours are for the library next week. Where would you go to find that information?

[Second task]

Great. OK, now let’s say that you’ve checked out a book that is due back soon. You’d like to extend the loan period. Can you see a way to use the library site to help with that?

[Third task]

Great. Now let’s say that you want to find a textbook titled “Brief Calculus.” Can you see a way to do that?

Thank you so much for your time. Your help today is going to be fed right back into our redesign efforts.

[Test moderator presses CTRL-F9 on the laptop keyboard to stop the audio and screen recording]

Please feel free to reuse this script without attribution.

My 8 Years on last.fm

On March 25, 2004, I tried out a service that was new to me, last.fm, so I could hear some new music via the internet and keep track of what local files I was playing in iTunes. Eight years later, I find I’m using the service more than ever (check out my last.fm profile to see what I’ve been up to all these years). Only a few days ago, the number of songs I had played on the service or sent listening data to the service for hit 26,000 (a number that seems big until I compare it to some of my friends on the service who have been more devoted users).* This got me thinking about all the reasons why I’ve stuck with last.fm, though I have to admit that my use of it has grown steadier over the years:

1. Creation of an archive of my listening interests

I love that last.fm keeps track of all the songs you’ve played and gives you all sorts of rankings of the artists and songs you’ve listened to over different time spans (last seven days, last month, last three months, last six months, last year, overall). I’ve occasionally plugged my user name into websites and downloaded software that will further analyze and graphically present your listening habits.
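
For a taste of what those sites and tools are doing under the hood, here’s a quick Python sketch that pulls top-artist rankings from the public last.fm web API. The method and field names reflect my understanding of that API and are worth checking against the current API docs; the API key and user name are placeholders you’d swap for your own.

    # A quick sketch against the public last.fm web API. The method and JSON
    # field names are my best understanding of that API; check the current docs
    # before relying on them. LASTFM_API_KEY is a placeholder for your own key.
    import json
    import urllib.parse
    import urllib.request

    LASTFM_API_KEY = "your-api-key-here"   # placeholder
    USERNAME = "your-lastfm-username"      # placeholder

    params = {
        "method": "user.gettopartists",
        "user": USERNAME,
        "api_key": LASTFM_API_KEY,
        "period": "12month",  # other options: 7day, 1month, 3month, 6month, overall
        "limit": "10",
        "format": "json",
    }
    url = "https://ws.audioscrobbler.com/2.0/?" + urllib.parse.urlencode(params)

    with urllib.request.urlopen(url) as response:
        data = json.load(response)

    # Print a simple ranking: artist name and play count for the chosen time span.
    for artist in data["topartists"]["artist"]:
        print(f'{artist["name"]}: {artist["playcount"]} plays')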

2. Other music playing services and software send my listening data to last.fm

Over the years, I’ve sent data to last.fm about what songs I’ve played (what they refer to as “scrobbling”) via iTunes on all my computers and laptops, Pandora, Grooveshark, Amazon Music Player, Spotify, YouTube, the music player on my Android phone, and even the last.fm app I have set up on my Roku player that is connected to my stereo system and TV set. I tend to hop around to different means of playing music; whenever possible, I try to connect each one up to last.fm so the data about what I am listening to continues to aggregate. The rate at which my listening data has grown has increased over time as I have more options for scrobbling.

3. I have discovered tons of new music from the last.fm social network

I’ve got 81 friends on last.fm, many of whom have tastes that overlap with mine; browsing their listening profiles and tuning in to their personal last.fm radio stations has led me to new artists that I’ve grown to love.

4. Last.fm is a survivor

The service launched in 2002, shortly after the dot-com bubble burst. It amazes me that the company is still around after 10 years, a long time as far as websites go.

5. The more you use it, the better the recommendation engine gets

With eight years of data in my account, I feel like I get really good recommendations back out of the last.fm service. Between all the scrobbling from other music services that I’ve done and all the listening I’ve done within last.fm, the service now has tons of information about what I might like to hear next.

On this 8-year anniversary on last.fm, I decided to finally give back to the service and have started a subscription so I can lose the ads and the occasional interruptions of playback. This bit of anniversary-inflected navel gazing has got me wishing that I had a similar service that would automatically track all the books I’ve checked out of the library and bought from bookstores (and that, like last.fm, would let me remove individual items from the history for the sake of privacy or better recommendations). Although Netflix will give me recommendations based on what I’ve viewed or rated, and the Zite feed reader app on my iPad will use my ratings of read items to show me relevant and potentially interesting posts from blogs I’m not yet subscribed to, I can’t think of any other recommender service on the web that has such a rich ecosystem of inputs and that also offers so many ways to show you how and what you consumed.

* The 26,000th song was played via Spotify and was the Clash’s “Up in Heaven (Not Only Here),” from one of my favorite albums of all time, Sandinista!

Leaving Posterous and Going Back to WordPress

As of today, I’ve ported over copies of all my blog posts from Beating the Bounds, hosted on Posterous, to my personal domain at stephenfrancoeur.com, where I use WordPress. The new address is:

http://www.stephenfrancoeur.com/beatingthebounds/

I’m going to leave the Posterous site up for a while (maybe forever), but all posts after this one will exclusively be found at the new address. The URL for the RSS feed is still the same, so you shouldn’t need to change anything in your feed reader (all 40 of you).

Enabling Our Public Selves in a World of Maximal Copyright Control

With the web aflame this week with talk about legislation that aims to let the major media companies exert greater control over the content they helped create (or that they inherited, acquired, or stole, depending on the case), I was inspired again by Lewis Hyde’s vision of the need to reframe the narrative: instead of talking simply about people who create and own intellectual property, we should think about the ways we can and should be public selves while also being individuals with intellectual property rights. The discussion today is framed too much around notions of individual property and not enough around the cultural commons that we historically have had in America. Over the years, that commons has been walled off in various ways (enclosed, much as land was enclosed in England in a process that began in the 1500s and was mostly complete by the 1800s).

I was first introduced to Hyde’s ideas last year when I read his amazing book, Common as Air: Revolution, Art, and Ownership. If you want to get an introduction to his thinking, this 14 January 2011 story on the radio show On the Media about Martin Luther King Jr.’s “I Have a Dream” speech delves deep into creativity, copyright, and the commons.

Here’s a pull quote from Hyde in the On the Media piece where he is talking about public selves:

I’m interested in collective being. I’m interested in making it easier for people to be public and social selves, as Martin Luther King certainly was. The risk is that if we turn everything into private property, it becomes harder and harder for us to have these common or collective selves, which is something we need. In anthropology, there’s an interesting resurrection of an old word, which is the word “dividual.” So we live in a nation that values individuality; we live in a nation of individuals. But a dividual person is somebody who’s imagined to contain within himself or herself the community that he or she lives in. So it would be nice if we began to have a better sense of how to own and circulate art and ideas, such that we could be present in our dividuality, as well as our individuality. 

Sharing My Google Reader Finds via Tumblr

If you would like to see which blog posts I’ve found interesting in my Google Reader account (which features 839 feeds), there’s now a new place to find them: an old Tumblr blog that I haven’t done much with over the years, Stephen Francoeur’s Commonplace Book.

If you’re subscribed to me on FriendFeed, you’ll see that my Tumblr site is now connected and will automatically feed in new posts. Unfortunately, the posts on FriendFeed only offer the post title; the pull quote doesn’t appear as the first comment under the post anymore (the beauty of my old Google Reader-to-FriendFeed sharing setup was that my “note” would also get published on FriendFeed this way). If you want to see my posts and my annotations, you’ll need to subscribe to the RSS feed in a feed reader or click through to the Tumblr post.

For the moment, the Shared Items on Google Reader link blog that I had been building up for years in the old Google Reader interface is still accessible, albeit frozen in time.

For anyone still reading, I’ll mention here the least interesting part of this post. After trying out Delicious, Pinboard, and Evernote as possible replacements for the Shared Items on Google Reader link blog I was no longer able to use, I decided to use Tumblr because it’s a “share to” option in both Google Reader and Feedly and it gives me an RSS feed that I can do lots of other things with. I’m next going to look into sending every Tumblr entry to my Delicious and Evernote accounts (probably via the IFTTT service, which lets you set up these kinds of connections easily).
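
For the curious, here’s a small Python sketch of what “doing other things with the RSS feed” can look like if you’d rather script the glue yourself instead of using IFTTT. The feed URL is a placeholder following Tumblr’s usual /rss pattern, and save_elsewhere() is just a stand-in for a call to whatever target service (Delicious, Evernote, etc.) you wire up.

    # A small, self-contained sketch: read a Tumblr blog's RSS feed and hand each
    # entry off to some other service. The feed URL is a placeholder that follows
    # Tumblr's usual /rss pattern; save_elsewhere() stands in for a real API call.
    import feedparser  # third-party library: pip install feedparser

    FEED_URL = "http://example.tumblr.com/rss"  # placeholder Tumblr feed URL

    def save_elsewhere(title, link, summary):
        """Stand-in for posting the item to Delicious, Evernote, etc."""
        print(f"Would save: {title} -> {link}")

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        save_elsewhere(entry.title, entry.link, entry.get("summary", ""))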