
Assessments – Developing and Beyond!

Assessments in a school are a critical part of educational instruction: they determine whether the goals of education are being met. Assessment results provide valuable information about the extent to which students and schools are meeting standards and what they need to do to improve.

There are six main types of assessments: diagnostic, formative, summative, norm-referenced, criterion-referenced, and interim/benchmarked.

And each of these can use five main question types: multiple choice, constructed response, extended constructed response, technology enhanced, and performance task.

Assessments can be delivered in three ways: paper and pencil, online, or computer adaptive testing.

Finally, the scoring can be done in three different ways: by hand, by computer, or through distributed scoring.

One of the most popular assessment formats is the multiple choice question (MCQ).

An MCQ evaluates a test-taker's skill, knowledge, and understanding of a subject, and of late we see many examinations following the MCQ style in their evaluation.

Most of us at RB are aware of the term "Bloom's level" in evaluation. But what is this Bloom's level? Curious, right?! Here we go: Benjamin Bloom was an American educational psychologist who classified learning objectives into three main domains: cognitive, affective, and psychomotor.

The cognitive domain revolves around knowledge, comprehension, and critical thinking on a particular subject.

Being in the education industry, Bloom's taxonomy of the cognitive domain is the one that excites me the most.


Bloom's Taxonomy

MCQs are good –

  • Easy to evaluate when administered to a large population or at regular intervals
  • Allow the instructor to evaluate the class’s performance on a wider range of the content taught
  • Easy for students to answer many questions in the given time, at roughly 45-60 seconds per question
  • Performance can be compared between classes and across years
  • Incorrect alternatives provide diagnostic information

MCQs are tricky to handle –

  • Each item should be short and clear
  • Independent items without overlap
  • Avoid negatively stated items
  • Avoid clues to correct answers
  • High degree of dependence on the student’s reading ability and the instructor’s writing ability
  • May encourage guessing

Once we have developed the right set of questions, we can test them against one of the key features of student assessment methods: the validity of the questions, in both content and construct.

Some of the parameters commonly used to assess the validity of MCQ items are:

Difficulty index – calculated as the percentage of students who answered the item correctly. If an item's value is above 90%, the question is too easy for the targeted audience and should not be reused.

On the other hand, if the value is below 20%, the question is too hard and has to be reviewed again for its content and construct.
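A minimal Python sketch of this calculation, with the thresholds taken from the figures above (the function name and sample responses are my own, purely illustrative):

```python
# Difficulty index: the percentage of students who answered the item correctly.

def difficulty_index(responses):
    """responses: one boolean per student (True = answered correctly)."""
    return 100 * sum(responses) / len(responses)

responses = [True] * 7 + [False] * 3      # 7 of 10 students got it right
p = difficulty_index(responses)           # 70.0

if p > 90:
    verdict = "too easy - do not reuse"
elif p < 20:
    verdict = "too hard - review content and construct"
else:
    verdict = "acceptable difficulty"
```

Here the item lands at 70%, comfortably inside the reusable band.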

Discrimination index – describes the ability of an item to distinguish between high and low scorers. The upper and lower 27% of overall exam scorers are separated out, and the item's correct-answer rates in the two groups are compared. The higher the value, the more discriminating the item is!

  • Greater than 30% – good discrimination, we can reuse the questions
  • 20 to 30% – Needs an improvement to the questions
  • Less than 20% – poor discrimination; we will have to reject these questions. This means that low-performing students selected the correct answer more often than the high scorers.
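The upper/lower-27% comparison can be sketched as follows (the function name and sample data are my own, and the result is expressed as a percentage so it lines up with the bands above):

```python
# Discrimination index: correct-rate gap between the top 27% and bottom 27%
# of overall scorers for a single item.

def discrimination_index(scores, correct):
    """scores: each student's total exam score.
    correct: whether each student answered this particular item correctly.
    Returns the gap (in %) between the top-27% and bottom-27% groups."""
    n = len(scores)
    k = max(1, round(0.27 * n))                      # size of each 27% group
    order = sorted(range(n), key=lambda i: scores[i])
    lower, upper = order[:k], order[-k:]             # bottom and top scorers
    p_upper = sum(correct[i] for i in upper) / k
    p_lower = sum(correct[i] for i in lower) / k
    return 100 * (p_upper - p_lower)

# The 3 weakest students missed the item; everyone else got it right.
d = discrimination_index(list(range(10)), [False] * 3 + [True] * 7)
```

In this toy run the top group always answers correctly and the bottom group never does, so the index comes out at 100% – well above the 30% "good discrimination" bar.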

Distractor efficiency – yet another tool to tell whether an item was well constructed. The quality of the distractors influences students' performance on a test question. Ideally, low scorers choose the distractors, whereas high scorers select the right option. All of the incorrect options, or distractors, should actually be distracting: preferably, each distractor should be selected by a greater proportion of the lower group than of the upper group. If, in a five-option multiple-choice item, only one distractor is effective, the item is, for all practical purposes, a two-option item. The existence of five options does not automatically guarantee that the item will operate as a five-choice item.
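The same idea can be sketched in Python: count how often each incorrect option was chosen in the upper and lower groups, and flag a distractor as functional when it draws a larger share of the lower group (the helper and the sample choices are hypothetical):

```python
from collections import Counter

def distractor_analysis(choices_upper, choices_lower, key):
    """choices_upper / choices_lower: the option picked by each student in the
    top and bottom scoring groups. key: the correct option.
    A distractor counts as functional when it attracts a larger share of the
    lower group than of the upper group."""
    up, lo = Counter(choices_upper), Counter(choices_lower)
    report = {}
    for opt in sorted(set(up) | set(lo)):
        if opt == key:
            continue                      # only analyse the incorrect options
        share_up = up[opt] / len(choices_upper)
        share_lo = lo[opt] / len(choices_lower)
        report[opt] = {"upper": share_up, "lower": share_lo,
                       "functional": share_lo > share_up}
    return report

# High scorers mostly pick the key "A"; low scorers spread across distractors.
report = distractor_analysis(["A", "A", "A", "B"], ["B", "C", "A", "B"], key="A")
```

In this sample both "B" and "C" pull a larger share of the lower group than of the upper group, so both distractors are doing their job.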