A Multi-institution Exploration of Peer Instruction in Practice

Written with Cynthia Taylor, Jaime Spacco, Andrew Petersen, Soohyun Nam Liao, and Leo Porter.
Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE), pages 308-313, 2018.
A preliminary version was presented as a poster at the 49th ACM Technical Symposium on Computer Science Education (SIGCSE), 2018.


Abstract:

Peer Instruction (PI) is an active learning pedagogy that has been shown to improve student outcomes in computing, including lower failure rates, higher exam scores, and better retention in the CS major. PI's key classroom mechanism is the PI question: a formative multiple-choice question on which students vote individually, discuss with their peers, and then vote again. While research indicates that PI questions lead to learning gains for students, relatively little is known about the questions themselves and how faculty employ them. Additionally, much of the prior work has examined PI data collected by researchers operating in a quasi-experimental setting. We examine data collected incidentally by multiple instructors using PI as a pedagogical technique in their classrooms. We look at how many questions instructors use in their courses, the difficulty level of those questions, and normalized gain, a metric that captures the increase in student correctness between the individual and group votes. We find normalized gain levels similar to those reported in the existing literature, indicating that students are learning, and that most questions, even those developed by instructors new to PI, fall within recommended difficulty levels, indicating that instructors can create good PI questions with little training. We also find that instructors add PI questions over the first several iterations of a new PI course, showing that they find PI questions valuable and suggesting that full development of PI materials for a course may take multiple semesters.
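
For reference, the normalized gain mentioned above is typically computed Hake-style from the fraction of students answering a question correctly on each vote; the notation below is illustrative, and the exact variant used in the paper may differ slightly:

    g = (c_group - c_individual) / (1 - c_individual)

where c_individual is the fraction of students answering correctly on the first (solo) vote and c_group is the fraction answering correctly on the post-discussion group vote. A value of g = 1 means every student who was initially incorrect answered correctly after discussion, while g = 0 means no improvement.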