More on BEAM Model for Sources

I was honored to be asked back this week to the Adventures in Library Instruction podcast along with other guests from previous episodes. I learned about a number of interesting projects and ideas from the other guests. Chad Mairn makes use of PollEverywhere in his classes and workshops and finds it valuable for getting instant feedback from his students. He’s also got an interesting “Database Troubleshooting Guide” that helps walk patrons through issues they may be having when accessing library databases. Peter Larsen spoke about using Google Docs for his library’s credit course.

Chad and Peter had to leave after a half hour of recording, which left me to ramble on at the halfway mark with the show hosts about why I find Joseph Bizup’s model for teaching source types to students so powerful, a topic I recently blogged about here.

A Better System for Classifying Sources

Keri Bertino and Heather Sample at the Writing Center at Baruch College, with whom I have been working on a series of workshops for students working on undergraduate honors theses, have completely revolutionized the way that I think about sources. This summer, my colleagues recommended an article by Joseph Bizup published in 2008 in Rhetoric Review (volume 27, number 1) titled “BEAM: A Rhetorical Vocabulary for Teaching Research-Based Writing” (behind a paywall…sorry). Arguing convincingly that the traditional model of sources that we teach to students (primary, secondary, tertiary) is limiting and confusing, Bizup goes on to suggest that we instead teach students to think about the different ways that we use sources in writing.

Specifically, he recommends divvying up source types into four categories:

  1. Background: sources you rely on to establish facts and to contextualize your claims
  2. Exhibits: sources that you offer an analysis or interpretation of
  3. Arguments: sources that are part of the discourse about your topic
  4. Method: sources from which you draw your method of analysis or the terminology you will employ

Put more succinctly, Bizup wants us to teach students that “[w]riters rely on background sources, interpret or analyze exhibits, engage arguments, and follow methods” (76). As a mnemonic aid, the system is referred to as the BEAM model. Not only is this model useful in getting students to think about how they will use their sources in their papers and whether they have the right number from each category, but it is also useful in teaching students how to analyze a source critically. In his classes, Bizup asks his students to read a source and, following the BEAM model, to indicate to what use each source is put.

As a librarian, I can recognize immediately how this model will help me when I do workshops and teach my own 3-credit course in research. But I can also see how it might help me in reference interactions where I am hoping to widen the student’s sense of what might work as a source in their research projects. All too often, students assume that sources are to be used solely as support for the claims they are making and, relatedly, that those sources must be precisely on their narrow topic (e.g., “I need to find a source that talks about the role of mothers in this poem by Dickinson and this play by Brecht”). With this model in mind, I can work with students to look at what sources they have found and whether they have found enough from each category to make the claims they want to make. In particular, I think I will be able to employ Bizup’s maxim about getting started with a research project, where he states that “[i]f you start with an exhibit, look for argument sources to engage; if you start with argument sources, look for exhibits to interpret” (82).

I would recommend that any librarian who does reference work or who teaches in classrooms take a look at this article.

Yahoo! Answers in the Wild

Yesterday, I got a chance to meet informally with the students who will be in the 3-credit course I’m teaching here in the library at Baruch College (“Information Research for the Social Sciences and the Humanities”). When I asked the students to tell me about the kinds of research they do outside of the classroom, a couple of them mentioned using Yahoo! Answers to get advice about what cell phone or laptop to buy. Although they also mentioned using things like reviews on CNET, they preferred the personal commentary from question answerers to the more polished articles on tech and gadget sites.

When my class starts next Monday, I hope to probe more deeply into this issue and find out more about how they assess the credibility of those providing answers on Q&A sites. It will be interesting to me not only as a reference librarian but also as an instructor trying to teach a semester-long course on how to find, evaluate, and use information to answer questions.

Getting More Mileage Out of Reference Interactions

A recent article in Reference & User Services Quarterly caught my eye today:

Finnell, Josh and Walt Fontane. “Reference Question Data Mining: A Systematic Approach to Library Outreach.” Reference & User Services Quarterly 49.3 (2010): 278-286. Web.

The authors describe a process developed at the library at McNeese State University in which reference statistics recorded at the desk included not just question type but also the subject of the question and the course (if any) connected to the question. Questions were later mapped to LC classification numbers, which then helped library staff to make collection development decisions. The data also led the librarians to refine their instructional offerings and to make new outreach efforts to specific departments.
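The article doesn’t include any code, but to give a rough sense of the kind of tallying involved, here is a minimal sketch of how logged reference questions might be counted by LC class to inform collection development. The file name and column names below are my own invention, not taken from the article.

```python
# Hypothetical sketch: tally logged reference questions by LC classification.
# Assumes a CSV log ("questions.csv") with columns along the lines the
# article describes: question_type, subject, course, lc_class.
import csv
from collections import Counter

def tally_by_lc_class(path):
    """Count questions per LC class letters (e.g., 'HF', 'PS')."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            lc = row.get("lc_class", "").strip().upper()
            if lc:
                counts[lc] += 1
    return counts

if __name__ == "__main__":
    # Print the ten LC classes generating the most reference questions.
    for lc_class, n in tally_by_lc_class("questions.csv").most_common(10):
        print(f"{lc_class}: {n} questions")
```

A ranked list like this could then be set alongside circulation and acquisition data to see which subject areas are generating questions the collection may not be supporting well.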