
Ethics Assessment


The ULO Ethics Committee worked on its assessment during 2009-10. The committee found the AAC&U's VALUE Rubric (PDF) to be the most appropriate for the project, and the rubric was adapted for Cal Poly use. The committee's goal was to measure student traits such as self-awareness, understanding of different ethical theories and concepts, the application of ethical theories and concepts, and the evaluation of different ethical perspectives and concepts. The committee created and piloted a 40-item online test to begin measuring student proficiency in ethical reasoning. Because the instrument was in development, the committee collected limited demographic information: class level, college, and location of administration (i.e., whether or not the test was administered in an ethics course). In addition, several open-ended questions asked respondents to comment on the structure and content of the test in order to collect input for further development.

Committee Membership

The committee was composed of faculty and staff and was led by Patrick Lin (Philosophy). Other members included Tal Scriven (Philosophy), Keith Abney (Philosophy), Doris Derelian (Food Science & Nutrition), Steve Mintz (Accounting), Adrienne Miller (Student Affairs), Suzanne Phelan (Kinesiology), and Charlotte Ratzlaff (School of Education).

Process and Methods

Once the use of the rubric was established, an online test was created and piloted for the project. The instrument included 37 multiple-choice questions. Six questions tested students' level of self-awareness about the origins of their ethical beliefs. These items were scored on a scale of 1 to 5, with 1 indicating "strongly disagree" and 5 "strongly agree." Because these items could not be scored as correct or incorrect, they were excluded from the total score.

Participants were recruited in two ways. University Assessment Council members, college deans, ethics committee members, and others were asked to identify appropriate courses; the plan was to recruit participants who had been formally exposed to the study of ethics at the university level. Because the resulting group was too small, committee members and others were asked to administer the test in their own classes, even if these were not related to ethics.

The final set of courses included BMED 420, BUS 424, ES 244, ES 322, PHIL 230, PHIL 231, PHYS 405, and PHYS 424. The pilot resulted in completed responses from 264 undergraduate students—more than expected—representing every college and class year (first year, second year, third year, fourth year) as well as varying levels of ethics coursework. Eleven questions tested students' understanding of different ethical theories and concepts; seven tested their ability to recognize ethical issues; six tested their ability to apply ethical theories and concepts; and seven tested their ability to evaluate different ethical perspectives and concepts. These items offered respondents four or five answer choices. Responses were coded as correct/incorrect and summed for a total test score, and the mean score for each of these traits was also computed. The full report is available in the WASC EER Report (PDF), pages 14-16.
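As a rough illustration (not the committee's actual code), the scoring procedure described above—coding each of the 31 items as correct/incorrect, summing to a total score, and computing a mean per trait—could be sketched as follows. The trait labels and sample responses are hypothetical; only the item counts come from the report.

```python
# Hypothetical sketch of the scoring method described above.
# Item counts follow the report: 11 theory items, 7 issue-recognition items,
# 6 application items, 7 evaluation items (31 scored items in total; the six
# Likert-scale self-awareness items are excluded from scoring).

TRAITS = {
    "understanding_theories": 11,
    "recognizing_issues": 7,
    "applying_theories": 6,
    "evaluating_perspectives": 7,
}

def score_respondent(coded_responses):
    """coded_responses: list of 31 ints (1 = correct, 0 = incorrect),
    ordered by trait as in TRAITS. Returns (total score, per-trait means)."""
    total = sum(coded_responses)
    means = {}
    i = 0
    for trait, n in TRAITS.items():
        means[trait] = sum(coded_responses[i:i + n]) / n
        i += n
    return total, means

# Example with made-up responses (a repeating correct/incorrect pattern):
total, trait_means = score_respondent([1, 0, 1] * 10 + [1])
```

Excluding the six Likert items from the total mirrors the report's note that they could not be coded correct/incorrect.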


The average exam score was 12.45 out of 31, meaning students answered roughly 40% of the questions correctly. Disturbingly, students who had taken an ethics course did not perform better on the test.
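The reported percentage is consistent with the mean score; a quick arithmetic check using the figures above:

```python
# Quick check of the reported figures: a mean of 12.45 on 31 scored items.
mean_score = 12.45
scored_items = 31
percent_correct = mean_score / scored_items * 100  # about 40.2%
```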


Next Steps

  • Complete the ULO Project on ethics, taking into account the need to align the instrument with the learning outcomes of ethics courses
  • It has been suggested that the recently reconfigured Academic Assessment Council will address this item

