Educative assessment for/of teacher competency: a study of assessment and learning in the “Interactive examination” for student teachers
Jönsson, Anders. Malmö University (LISMA, Learning and Teaching in Mathematics and Science). ORCID iD: 0000-0002-3251-6082
2008 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The aim of this dissertation is to explore some of the problems associated with introducing authentic assessment in teacher education. In the first part of the dissertation, a literature review investigates whether the use of scoring rubrics can support credible assessment of complex performance and, at the same time, support student learning of such performance. In the second part, the conclusions from the first part are implemented in the design of the so-called “Interactive examination” for student teachers, an authentic assessment of teacher competency. In this examination, the students are shown short video sequences displaying critical classroom situations and are then asked to describe, analyze, and suggest ways to handle the situations, as well as reflect on their own answers. It is investigated whether the competencies aimed for in the “Interactive examination” can be assessed in a credible manner, and whether the examination methodology supports student learning. From these investigations, involving three consecutive cohorts of student teachers (n = 462), it is argued that three main contributions to research have been made. First, by reviewing empirical research on performance assessment and scoring rubrics, a set of assumptions is reached about how to design authentic assessments that both support student learning and provide reliable and valid data on student performance. Second, by articulating teacher competency in the form of criteria and standards, it is possible to assess students’ skills in analyzing classroom situations, as well as their self-assessment skills. Furthermore, it is demonstrated that making the assessment demands transparent greatly improves students’ performance. Third, it is shown how teacher competency can be assessed in a valid way without compromising reliability. Thus, the dissertation illustrates how formative and summative purposes might co-exist within the boundaries of the same (educative) assessment.

Place, publisher, year, edition, pages
Malmö: School of Teacher Education, Malmö University, 2008, p. 149
Series
Malmö Studies in Educational Sciences, ISSN 1651-4513; 41
Keywords [en]
Authentic assessment, Formative assessment, Learning, Reliability, Performance assessment, Scoring rubrics, Teacher education
National Category
Pedagogy
Identifiers
URN: urn:nbn:se:hkr:diva-6338. ISBN: 978-91-977100-3-9 (print). OAI: oai:DiVA.org:hkr-6338. DiVA: diva2:301438
Available from: 2010-03-03. Created: 2010-03-03. Last updated: 2014-06-10. Bibliographically approved.
List of papers
1. The use of scoring rubrics: reliability, validity and educational consequences
2007 (English). In: Educational Research Review, ISSN 1747-938X, E-ISSN 1878-0385, Vol. 2, no. 2, p. 130-144. Article in journal (Refereed). Published.
Abstract [en]

Several benefits of using scoring rubrics in performance assessments have been proposed, such as increased consistency of scoring, the possibility of facilitating valid judgment of complex competencies, and the promotion of learning. This paper investigates whether evidence for these claims can be found in the research literature. Several databases were searched for empirical research on rubrics, resulting in a total of 75 studies relevant for this review. The conclusions are that: (1) the reliable scoring of performance assessments can be enhanced by the use of rubrics, especially if they are analytic, topic-specific, and complemented with exemplars and/or rater training; (2) rubrics do not facilitate valid judgment of performance assessments per se, but valid assessment could be facilitated by using a more comprehensive framework of validity when validating the rubric; (3) rubrics seem to have the potential to promote learning and/or improve instruction, mainly because they make expectations and criteria explicit, which also facilitates feedback and self-assessment.

Keywords
Alternative assessment, Performance assessment, Scoring rubrics, Reliability, Validity
National Category
Pedagogy; Social Sciences
Identifiers
urn:nbn:se:hkr:diva-6335 (URN). 10.1016/j.edurev.2007.05.002 (DOI)
Available from: 2010-03-03. Created: 2010-03-03. Last updated: 2017-12-12. Bibliographically approved.
2. Dynamic assessment and the “Interactive Examination”
2007 (English). In: Journal of Educational Technology & Society, ISSN 1176-3647, E-ISSN 1436-4522, Vol. 10, no. 4, p. 17-27. Article in journal (Refereed). Published.
Abstract [en]

The ability to assess one’s own actions and define individual learning needs is fundamental for professional development. Developing self-assessment skills requires practice and feedback during the course of studies. The “Interactive Examination” is a methodology aimed at assisting students in developing their self-assessment skills. The present study describes the methodology and presents the results from a multicentre evaluation study at the Faculty of Odontology (OD) and the School of Teacher Education (LUT) at Malmö University, Sweden. During the examination, students assessed their own competence, and their self-assessments were matched against the judgement of their instructors (OD) or their examination results (LUT). Students then received a personal task, to which they had to respond in written text. After submitting their response, the students received a document showing how an “expert” in the field chose to deal with the same task. They then had to prepare a “comparison document” in which they identified differences between their own answer and the “expert” answer. Results showed that students in both institutions appreciated the examination. There was a somewhat different pattern of self-assessment in the two centres, and the qualitative analysis of students’ comparison documents also revealed some interesting institutional differences.

Keywords
Assessment, Self-assessment, Oral health education, Teacher education
National Category
Pedagogy
Identifiers
urn:nbn:se:hkr:diva-6336 (URN)
Available from: 2010-03-03. Created: 2010-03-03. Last updated: 2017-12-12. Bibliographically approved.
3. Estimating the quality of performance assessments: the case of an “Interactive examination” for teacher competencies
2009 (English). In: Learning Environments Research, ISSN 1387-1579, E-ISSN 1573-1855, Vol. 12, no. 3, p. 225-241. Article in journal (Refereed). Published.
Abstract [en]

Professional schools prepare students to become competent professionals. Consequently, there is a need for assessments that can determine the acquisition of the relevant professional competencies. Although replacing traditional paper-and-pencil tests with performance assessment might provide one way forward, the use of performance assessments for summative purposes has been shown to be problematic (e.g. regarding marker consistency and construct representation). With the aid of a comprehensive framework of quality criteria for competence assessments, this article considers whether one particular existing competence assessment methodology is suitable for summative as well as formative use. It is argued that a comprehensive quality estimation of the examination procedure aids in identifying strengths and weaknesses in the assessment methodology, and that this information can be used to facilitate the inclusion of performance assessment in higher education, for both summative and formative use.

Keywords
Higher education, Performance assessment, Quality estimation, Reliability, Validity
National Category
Pedagogy
Identifiers
urn:nbn:se:hkr:diva-6324 (URN). 10.1007/s10984-009-9061-z (DOI)
Available from: 2010-03-02. Created: 2010-03-02. Last updated: 2017-12-12. Bibliographically approved.
4. The use of transparency in the "Interactive examination" for student teachers
2010 (English). In: Assessment in Education: Principles, Policy & Practice, ISSN 0969-594X, E-ISSN 1465-329X, Vol. 17, no. 2, p. 183-197. Article in journal (Refereed). Published.
Keywords
Assessment, Scoring rubrics, Self-assessment, Transparency
National Category
Pedagogy
Identifiers
urn:nbn:se:hkr:diva-6337 (URN). 10.1080/09695941003694441 (DOI)
Available from: 2010-03-03. Created: 2010-03-03. Last updated: 2017-12-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Fulltext, LIBRIS
