The use of scoring rubrics: reliability, validity and educational consequences
School of Teacher Education, Malmö University (LISMA). ORCID iD: 0000-0002-3251-6082
School of Teacher Education, Malmö University.
2007 (English). In: Educational Research Review, ISSN 1747-938X, E-ISSN 1878-0385, Vol. 2, no. 2, pp. 130-144. Article in journal (Refereed). Published.
Abstract [en]

Several benefits of using scoring rubrics in performance assessments have been proposed, such as increased consistency of scoring, the possibility of facilitating valid judgment of complex competencies, and promotion of learning. This paper investigates whether evidence for these claims can be found in the research literature. Several databases were searched for empirical research on rubrics, resulting in a total of 75 studies relevant to this review. The conclusions are that: (1) the reliable scoring of performance assessments can be enhanced by the use of rubrics, especially if they are analytic, topic-specific, and complemented with exemplars and/or rater training; (2) rubrics do not facilitate valid judgment of performance assessments per se. However, valid assessment could be facilitated by using a more comprehensive framework of validity when validating the rubric; (3) rubrics seem to have the potential of promoting learning and/or improving instruction. The main reason for this potential lies in the fact that rubrics make expectations and criteria explicit, which also facilitates feedback and self-assessment.

Place, publisher, year, edition, pages
2007. Vol. 2, no. 2, pp. 130-144.
Keyword [en]
Alternative assessment, Performance assessment, Scoring rubrics, Reliability, Validity
National Category
Pedagogy; Social Sciences
Identifiers
URN: urn:nbn:se:hkr:diva-6335
DOI: 10.1016/j.edurev.2007.05.002
OAI: oai:DiVA.org:hkr-6335
DiVA: diva2:301390
Available from: 2010-03-03. Created: 2010-03-03. Last updated: 2014-08-13. Bibliographically approved.
In thesis
1. Educative assessment for/of teacher competency: a study of assessment and learning in the “Interactive examination” for student teachers
2008 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The aim of this dissertation is to explore some of the problems associated with introducing authentic assessment in teacher education. In the first part of the dissertation, a literature review investigates whether the use of scoring rubrics can support credible assessment of complex performance and, at the same time, support student learning of such performance. In the second part, the conclusions from the first part are implemented in the design of the so-called “Interactive examination” for student teachers, which is designed as an authentic assessment of teacher competency. In this examination, the students are shown short video sequences displaying critical classroom situations and are then asked to describe, analyze, and suggest ways to handle the situations, as well as to reflect on their own answers. It is investigated whether the competencies aimed for in the “Interactive examination” can be assessed in a credible manner, and whether the examination methodology supports student learning. From these investigations, involving three consecutive cohorts of student teachers (n = 462), it is argued that three main contributions to research have been made. First, by reviewing empirical research on performance assessment and scoring rubrics, a set of assumptions has been reached on how to design authentic assessments that both support student learning and provide reliable and valid data on student performance. Second, by articulating teacher competency in the form of criteria and standards, it is possible to assess students’ skills in analyzing classroom situations, as well as their self-assessment skills. Furthermore, it is demonstrated that making the assessment demands transparent greatly improves students’ performances. Third, it is shown how teacher competency can be assessed in a valid way without compromising reliability. The dissertation thus illustrates how formative and summative purposes might co-exist within the boundaries of the same (educative) assessment.

Place, publisher, year, edition, pages
Malmö: School of Teacher Education, Malmö University, 2008. 149 p.
Series
Malmö Studies in Educational Sciences, ISSN 1651-4513 ; 41
Keyword
Authentic assessment, Formative assessment, Learning, Reliability, Performance assessment, Scoring rubrics, Teacher education
National Category
Pedagogy
Identifiers
URN: urn:nbn:se:hkr:diva-6338
ISBN: 978-91-977100-3-9
Available from: 2010-03-03. Created: 2010-03-03. Last updated: 2014-06-10. Bibliographically approved.

Open Access in DiVA
No full text

Other links
Publisher's full text

Search in DiVA
By author/editor: Jönsson, Anders
In the same journal: Educational Research Review