Personalized Documentation in Reference Interactions

Does this scenario from the reference desk sound familiar? A student asks for help finding something that requires you to set up a complicated search with lots of limiters, nested terms, truncation, etc. Or maybe the search you want to demo is in one of the funkier databases where it takes a few minutes just to get the query set up (Factiva) or where it takes a lot longer (Datastream). Or worse, you know that the seemingly simple request for information is going to require them to go to two or more different databases (this one for the articles, that one for the datasets, and another for that specialized report).

As you do your best to explain to the student what you are doing, you might urge them to take some notes. Maybe you print out a screenshot and mark it up to make sure the student doesn’t forget to tick off that little checkbox in the lower right corner that is absolutely essential to the query working at all. The student, eyes glazed over and maybe looking a little fearful, thanks you and walks out the door hoping they’ll remember everything they heard by the time they get to a computer (across the library, in a lab, or worse, at home, hours later).

Wouldn’t it be great if all that demonstration you’re doing on the staff computer at the desk could be automatically recorded and uploaded to the library’s YouTube account with a private URL (maybe even one that could be password protected by you and the student)? And then, to help the student get to that URL, the screen on your computer would offer up a shortened URL and an affiliated QR code. You could print out the page with the URL for the student, or he/she could capture the URL with a QR code app on his/her smartphone. Maybe the screen would also have options that would let the user type in a mobile phone number or an email address that they’d want the URL sent to. Or, if I can really go off into fantasy land, the student could send the video to the personal research pad that the university set up for him/her on the first day of school (see my previous post for details on this).

While I’ve long done annotated screenshots on the fly for students I’ve helped at the desk (and also in email and chat reference interactions), it would be great if we could provide richer personalized help documentation. Pieces of this vision are doable now: it’s trivial to set up screencapture software or use web-based services to record your demo. It’s not super hard to upload video to sites like YouTube or Vimeo. There are free services that will generate shortened URLs and a related QR code. But what I’d like to see is a system that can automate some of these processes: click the “stop” button on your screencapture software and the system does all the rest of the steps for you quickly, minimizing the time you and the student have to wait for it to do its thing. This is the future I want.
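The automation I’m wishing for here is mostly glue code between existing pieces. As a rough sketch of that glue step (Python, with an invented lib.example short-URL base, and a hash-based slug standing in for a real URL-shortening service; the upload itself is assumed to have already happened):

```python
import hashlib


def short_slug(url: str, length: int = 7) -> str:
    """Derive a stable short slug from a long URL.

    Stands in for whatever URL-shortening service the library uses;
    a real shortener would also check for collisions.
    """
    return hashlib.sha256(url.encode("utf-8")).hexdigest()[:length]


def finish_recording(video_url: str, base: str = "https://lib.example/s/") -> dict:
    """The 'after the stop button' step: given the private video URL from the
    upload, produce everything the student needs to find the demo later."""
    short_url = base + short_slug(video_url)
    return {
        "video_url": video_url,   # private YouTube/Vimeo link from the upload step
        "short_url": short_url,   # easy to retype from a printout
        "qr_payload": short_url,  # what a QR-code generator would encode
    }


info = finish_recording("https://youtube.com/watch?v=abc123")
print(info["short_url"])
```

The point of the sketch is only that every step after “stop” is deterministic, so there is no reason a student should ever have to wait on a librarian typing them out by hand.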

Screencasts for chat reference training

A week ago, my library at Baruch College officially started sharing its subscription to QuestionPoint with the libraries at three other CUNY schools: Brooklyn College, Hunter College, and the CUNY Graduate Center. We are still in a soft-launch period, as the librarians at those other schools will need some time to get acclimated to doing chat reference as part of the 24/7 Reference Academic Cooperative that QuestionPoint offers.

For the purposes of training my colleagues in our consortium on how to use QuestionPoint for chat reference, I slapped together a wiki (I never seem to run out of reasons to launch a new wiki) with links to QuestionPoint documentation, contact info, and a shared schedule. I’ve also finally had a good reason to try Macromedia Captivate, which had sat installed but unused on my machine for a month (!), to create some screencasts. My hope is that these screencasts will cut down on the time I have to spend on the phone or composing e-mails with step-by-step instructions.

So far, I’ve got a screencast explaining how to create personal scripts in QuestionPoint and another demonstrating how to assign a resolution code at the close of a chat session. I must confess to having really enjoyed the process of recording the screencasts and then tinkering with the pacing and special effects. I hope to have a half dozen of these done by the end of winter. Any feedback (in the comments section) on what I’ve done so far would be greatly appreciated. If you feel like nosing around our humble little wiki, you can do so here.