Thinking more about yesterday’s post on how to assess the usage of the bX Recommender in our Primo VE instance, it occurred to me that the usage data I was presented with the other day is incomplete and may be asking us to compare the wrong things. The data I got included annual numbers for:
- searches run in Primo VE
- clicks on bX recommendations
- clicks on the Citation Trail feature (a related “you may also be interested in this” feature in journal article records)
The trouble with the search data is that it doesn’t get at the real question we want answered: when users viewed the full record for a journal article that included “Related Reading” links from the bX Recommender, what percentage of those record views ended with the user clicking on a recommended link? This question has an added complication: not all journal article records display the bX Recommender links, and I’m not sure why the recommendations don’t appear on every record. To get a rough idea of how often this occurs, I ran a search for “homophily,” limited the results to records from peer-reviewed journals, and then counted how many of the first fifty records had the bX Recommender links. Forty of the fifty (eighty percent) did have the links (there’s that 80/20 ratio again!).
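To make the arithmetic concrete, here is a minimal Python sketch of the calculation I have in mind. Every number in it is a made-up placeholder except the eighty percent link-display rate from my informal fifty-record sample, and the second half shows just how rough an estimate drawn from fifty records really is.

```python
import math

# Sketch of the click-through calculation described above. All inputs
# except LINK_DISPLAY_RATE are hypothetical placeholders; the real
# figures would have to come from Primo VE / Alma usage reports.
ARTICLE_RECORD_VIEWS = 100_000   # hypothetical: full record views for journal articles
BX_CLICKS = 1_200                # hypothetical: annual clicks on bX recommendations
LINK_DISPLAY_RATE = 0.80         # from my informal sample: 40 of 50 records showed links

# Only the views that actually displayed links could have produced a click
views_with_links = ARTICLE_RECORD_VIEWS * LINK_DISPLAY_RATE
ctr = BX_CLICKS / views_with_links
print(f"Estimated bX click-through rate: {ctr:.2%}")

# Wilson 95% interval for the 40-of-50 sample, to show the uncertainty
# in that eighty percent figure
n, k, z = 50, 40, 1.96
p = k / n
denom = 1 + z**2 / n
center = (p + z**2 / (2 * n)) / denom
half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
print(f"Link-display rate: {p:.0%} (95% CI {center - half:.0%} to {center + half:.0%})")
```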
Even if we did know the exact answer to that real question of recommender usage, I’m still not sure what I would make of it. There are no benchmarks of bX Recommender usage that I know of. In an ideal world, I could ask Ex Libris how our numbers stack up against all the other Primo VE instances. To make that comparison fair, you might have to come up with a number that represents clicks per FTE, and then perhaps also weight it by the percentage of each Primo VE instance’s discoverable content that is made up of scholarly journal articles (a community college library may have a notably smaller percentage of scholarly journals in its collection than a library at a large university).
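If Ex Libris ever did share cross-institution numbers, the normalization I’m imagining might look something like the sketch below. Every institution and figure in it is invented purely to illustrate the arithmetic.

```python
# Hypothetical sketch of the cross-institution normalization described
# above. None of these institutions or figures are real.
institutions = [
    # (name, annual bX clicks, student FTE,
    #  share of discoverable content that is scholarly journal articles)
    ("Our library",       1_200,  9_000, 0.55),
    ("Community college",   300,  6_000, 0.25),
    ("Large university", 15_000, 40_000, 0.70),
]

for name, clicks, fte, scholarly_share in institutions:
    per_fte = clicks / fte
    # Adjust for how much of the collection could even trigger bX links
    adjusted = per_fte / scholarly_share
    print(f"{name}: {per_fte:.3f} clicks/FTE, {adjusted:.3f} adjusted for collection mix")
```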
Since getting the ideal usage statistics is probably out of the question, I’ve been thinking that maybe the numbers can be put in perspective with annual usage data from other services that have little or nothing to do with discovery interfaces but that give some overall sense of where this service fits into the larger set of services we provide. Being able to place the bX Recommender data into the context of all the “how many people did [fill in the blank]” numbers in our library might be useful. I realize that comparing numbers from different services invites the inevitable “apples and oranges” objection, but I’m feeling a bit stuck about how to come up with a solid way of analyzing usage of this service so that we can make a recommendation about whether to keep it.
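For what it’s worth, even the apples-and-oranges comparison can be expressed simply: each service’s annual count as a share of everything we count. The services and figures below are invented placeholders, not our actual numbers.

```python
# Hypothetical service counts, used only to illustrate the comparison;
# every figure here is invented.
services = {
    "Primo VE searches": 250_000,
    "reference questions": 7_500,
    "ILL requests": 4_000,
    "bX recommendation clicks": 1_200,
}

total = sum(services.values())
for name, count in sorted(services.items(), key=lambda kv: -kv[1]):
    print(f"{name:26} {count:>8,}  ({count / total:.1%} of counted activity)")
```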
One useful point of analysis that has been recommended to me is to consider the extent to which this system supports the needs of our large undergraduate student population. Prioritizing spending that helps the college reach its stated goals of enabling student success makes sense to me, so maybe I’ll be thinking more about whether the bX Recommender is a tool that students (especially undergraduates) actually use.