Creating a New Core Curriculum

A blog devoted to discussion of core curriculum and general education requirements, written in the context of my service as chair of a committee to draft a new core for Santa Clara University, a Jesuit, Catholic university in Silicon Valley.

Tuesday, June 20, 2006

Assessment Part 2

The latest issue of the Papers and Proceedings of the American Economic Review contains three articles on 'research on teaching innovations' in economics. The idea is that the effects of the innovations are measurable because the economists who introduced or analyzed them used randomized trials. The papers find effects on the order of a 2 percentage point increase in final overall scores, which amounts to a half-letter-grade change for about 25% of the class (in the case of the first innovation). The three innovations are: (1) requiring that Econ students take a math skills course and test online prior to their work in the Econ class; (2) using interactive software to enable experiments and real-time feedback in the classroom; and (3) having problem sets be graded rather than optional.

The papers are striking, and typical of the assessment literature. To wit: they elevate the virtue of careful measurement of results (through randomization, in these cases) while completely ignoring the most fundamental principle taught in economics, that of opportunity cost. Each one acts as if its innovation were costless and produced a 2-point return, so why not do it everywhere? If students are taking a math skills class online, sure, their Econ grade might go up, but might they not have done worse on their history assignment? Won't grades go up with practically *any* additional assignment? Don't instructors try to strike a balance between the work they think is important for their class and an understanding that students are working on other classes at the same time?

Moreover, signing students up for an online tutorial and math exam is not free. Who is paying for that? If someone is, might the money not yield higher scores if it were spent instead on high-quality tutors? The same goes for the grading requirement: mightn't the learning effect be even greater if the professor were not grading but rather running more review sessions? And the interactive software? How many class sessions are spent teaching students obscure proprietary software that has no application outside the Econ classroom? Isn't there an opportunity cost in terms of Econ material not covered? Maybe students perform better simply because they do less 'work' and have more fun (i.e., non-work), and the exams are ratcheted downward. Anyway, food for the assessment skeptic to digest.
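To put the opportunity-cost objection in back-of-the-envelope terms, here is a minimal sketch in Python. Every figure in it is invented for illustration (the assumed loss in the history course, the assumed cash cost of the tutorial, the assumed return on hiring tutors); none of it comes from the AER papers themselves.

```python
# Hypothetical accounting for one teaching "innovation".
# All figures below are invented for illustration, not taken from the papers.

econ_gain_points = 2.0            # measured bump in the Econ final score
history_loss_points = 1.5         # assumed points lost elsewhere from reallocated study time
tutorial_cost_per_student = 25.0  # assumed cash cost of the online tutorial and exam
tutor_points_per_dollar = 0.10    # assumed return if that money hired tutors instead

net_learning_effect = econ_gain_points - history_loss_points
forgone_tutor_gain = tutorial_cost_per_student * tutor_points_per_dollar

print(f"Net effect across courses: {net_learning_effect:+.1f} points")
print(f"Points forgone by not hiring tutors: {forgone_tutor_gain:.1f}")
print(f"Effect net of opportunity costs: {net_learning_effect - forgone_tutor_gain:+.1f} points")
```

With these made-up numbers, the 'free' 2-point gain turns into a 2-point loss once the other courses and the forgone tutoring are counted. The point is not the particular figures but that the papers report only the first term.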

Thursday, June 15, 2006

Problems and remedies in assessing general education or core learning outcomes, part 1

One frequently encountered problem with the 'final reports' of an assessment of a general education or core learning outcome is that both the quantitative data presented and the narrative thrust of the report focus on average performance. But general education and core learning outcomes are concerned with qualities that all graduates of the program will have, not with their average performance. So the relevant quantitative data are the percentage of students who do not meet the threshold of quality performance, sometimes called proficiency, together with a clear communication of what that proficiency is. One rarely sees both of these elements in final assessment reports. There are obvious reasons for their absence: if the threshold is set too high, the report will end up with a statement such as, "30% of students do not meet the proficiency standard." If the threshold is set too low, the report will end up with a statement such as, "All students demonstrated a consistent ability to add four-digit numbers without a calculator in a short period of time." Do you have good and bad examples of this kind of report? Please post them in the comments section.
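As a purely hypothetical illustration of why the average is the wrong statistic here, the following Python sketch compares two simulated cohorts (the scores and the cut score of 70 are made up) that would look identical in a mean-centric report.

```python
import statistics

# Two simulated cohorts on a 0-100 scale. Both average exactly 80,
# but one has a long lower tail of students below proficiency.
cohort_a = [78, 79, 80, 80, 80, 81, 81, 80, 81, 80]
cohort_b = [55, 60, 65, 95, 95, 95, 90, 95, 75, 75]

PROFICIENCY_THRESHOLD = 70  # assumed cut score for "proficient"

for name, scores in [("Cohort A", cohort_a), ("Cohort B", cohort_b)]:
    below = sum(score < PROFICIENCY_THRESHOLD for score in scores)
    print(f"{name}: mean = {statistics.mean(scores):.1f}, "
          f"{100 * below / len(scores):.0f}% below proficiency")
```

Both cohorts report a mean of 80, but 30% of the second falls below the threshold. Only the percent-below-proficiency statement tells the reader whether the outcome is being met by all graduates.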