The purpose of this study is to investigate the validity of using multiple-choice (MC) items as a complement to constructed-response (CR) items when making decisions about student performance on reasoning tasks. CR items from a national test in physics have been reformulated into MC items, and students' reasoning skills have been analyzed in two substudies. In the first study, 12 students answered the MC items and were asked to explain their answers orally. In the second study, 102 students from five randomly chosen schools answered the same items. Their answers were scored, and the frequency of correct answers was calculated for each item. These scores were then compared to a sample of student performance on the original CR items from the national test. Findings suggest that results from MC items might be misleading when making decisions about student performance on reasoning tasks, since students draw on skills other than those the items are intended to measure. Results from MC items may also contribute to an overestimation of students' knowledge in science.