Can Fishes Climb Trees?

The question sounds absurd, right? But that’s exactly how we assess children’s learning in today’s schools. We all shed tears when we watched the movie ‘Taare Zameen Par’ and cheered for Aamir Khan. How many such teachers, who motivate a child in his areas of passion, do we have in the real world? And does the system even allow such teachers to exist?

We, at Report Bee, are striving hard to bridge the gap and capture the learning of a child holistically. We’re still a long way off, but we’re slowly and surely moving towards a day when a kid who wins drama competitions regularly is seen in the same light as the kid who perennially owns the “First rank” badge in class.

That day would truly be a momentous one in Report Bee’s journey.

Naughty Teachers

When we designed our product, we knew the importance of the data we were dealing with. So it was of paramount importance for the product to be rock-solid and stable. We invested heavily in storing multiple daily backups at multiple locations, so the data is never lost, while also providing a historical view.

A golden rule in system administration is: Humans fail before machines. So, any system we design should protect against human errors first, before we design for natural disasters, failing machines, etc.

We have had quite a few cases of teachers making mistakes. As soon as the teachers or school administrators have realized their mistake, they have contacted us and we have always been able to help them out using our backups. The tricky situations arise when schools realize there has been an error, but the responsible teachers profess ignorance about the cause. If true, then it would mean there is a serious flaw in our product.

We had to investigate. We picked up our magnifying glass and went sniffing through our tracking records. To understand the usage patterns of our users, we track all their activities on our website. And this usage data gave us an insight into what actually happened at the school and where the “error” originated. We don’t like to point fingers, but… awkward!

In today’s digital age, it may seem easy to hide your actions behind a veil of anonymity. But the truth is that you’re just as accountable for your actions as in the physical world.

Assessments – Developing and Beyond!

Assessments in a school are a critical part of educational instruction. They determine whether the goals of education are being met. Assessment results provide valuable information about the extent to which students and schools are meeting standards, and about what they need to do to improve.

There are six main types of assessments: diagnostic, formative, summative, norm-referenced, criterion-referenced, and interim/benchmarked.

And in each of these there can be five main question types: multiple choice, constructed response, extended constructed response, technology enhanced, and performance task.

Assessments can be delivered in three ways: paper and pencil, online, or computer-adaptive testing.

Finally, the scoring can be done in three different ways: by hand, by computer, or through distributed scoring.

One of the most popular ways to do assessment is the multiple-choice question (MCQ).

An MCQ is a form of assessment that evaluates a test-taker’s skill, knowledge and understanding of a subject. And of late, we see many examinations following the MCQ style in their evaluation.

Most of us at RB are aware of the term – Bloom’s level evaluation. But what’s this Bloom’s level? Curious, right?! Here we go: Benjamin Bloom was an American educational psychologist who classified learning objectives into three main domains: cognitive, affective and psychomotor.

The cognitive domain revolves around knowledge, comprehension and critical thinking on a particular subject.

Bloom’s taxonomy of the cognitive domain is the one that excites me the most, being in the education industry.

 

Bloom’s Taxonomy

MCQs are good –

  • Easy to evaluate when administered to a larger population or at regular intervals
  • Allow the instructor to evaluate the class’s performance on a wider range of the content taught
  • Easy for students to answer many questions in the given time (about 45–60 seconds per question)
  • Performance can be compared between classes and across years
  • Incorrect alternatives provide diagnostic information

MCQs are tricky to handle –

  • Each item should be short and clear
  • Independent items without overlap
  • Avoid negatively stated items
  • Avoid clues to correct answers
  • High degree of dependence on the student’s reading ability and the instructor’s writing ability
  • May encourage guessing

Once we have developed the right set of questions, we can test them for a few of the key features of student assessment methods: the validity of the questions – both content and construct.

Some of the parameters commonly used to assess the validity of MCQ items are:

Difficulty index – calculated as the percentage of students who answered the item correctly. If an item’s value is above 90%, the question should not be reused, as it is too easy for the target audience.

On the other hand, if the value is below 20%, the question has to be reviewed again for its content and construct, as it is too difficult.
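As a minimal sketch of the calculation above (the function name and data layout here are our own illustration, not anything from Report Bee’s actual product):

```python
def difficulty_index(responses):
    """Percentage of students who answered the item correctly.

    responses: one boolean per student (True = answered correctly).
    """
    if not responses:
        raise ValueError("an item needs at least one response")
    return 100.0 * sum(responses) / len(responses)

# Example: 8 out of 10 students answered this item correctly.
item_responses = [True] * 8 + [False] * 2
p = difficulty_index(item_responses)
print(p)  # 80.0 -> neither too easy nor too hard

if p > 90:
    print("Too easy for this audience: retire the item")
elif p < 20:
    print("Too hard: review the item's content and construct")
```

An index of 80% falls inside the reusable band, so neither warning fires for this example.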

Discrimination index – describes the ability of an item to distinguish between high and low scorers. The upper and lower 27% of scorers on the overall exam are separated out. The higher the value, the more discriminating the item is!

  • Greater than 30% – good discrimination, we can reuse the questions
  • 20 to 30% – Needs an improvement to the questions
  • Less than 20% – poor discrimination; we will have to reject these questions. This means the low-performing students selected the correct answer more often than the high scorers.
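The upper–lower 27% computation can be sketched as follows; again, this is an illustrative implementation with assumed names, not Report Bee’s code:

```python
def discrimination_index(exam_scores, item_correct):
    """Upper-lower 27% discrimination index for a single item.

    exam_scores:  each student's total score on the overall exam
    item_correct: whether that student answered this item correctly
    Returns (upper-group correct minus lower-group correct) as a
    percentage of the group size.
    """
    ranked = sorted(zip(exam_scores, item_correct),
                    key=lambda pair: pair[0], reverse=True)
    k = max(1, round(0.27 * len(ranked)))   # group size: 27% of students
    upper = [correct for _, correct in ranked[:k]]
    lower = [correct for _, correct in ranked[-k:]]
    return 100.0 * (sum(upper) - sum(lower)) / k

# The top scorers mostly got the item right, the bottom scorers did not,
# so this item discriminates perfectly.
scores  = [95, 90, 85, 80, 75, 70, 65, 60, 55, 50]
correct = [True, True, True, True, False, True,
           False, False, False, False]
print(discrimination_index(scores, correct))  # 100.0
```

A value of 100% means every student in the upper 27% answered correctly and none in the lower 27% did; real items land well below that, and the 30%/20% thresholds above apply.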

Distractor efficiency – yet another tool to tell whether the item was well constructed. The quality of distractors influences students’ performance on a test question. Ideally, low scorers would select the distractors, whereas high scorers would select the right option. All of the incorrect options, or distractors, should actually be distracting. Preferably, each distractor should be selected by a greater proportion of the lower group than of the upper group. If, in a five-option multiple-choice item, only one distractor is effective, the item is, for all practical purposes, a two-option item. The existence of five options does not automatically guarantee that the item will operate as a five-choice item.
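A per-option tally using the same upper/lower 27% split makes it easy to spot non-functioning distractors. This is a sketch under the same assumptions as before (invented names and data, not a real API):

```python
from collections import Counter

def distractor_report(choices, answer_key, exam_scores):
    """For each option, count picks by the upper and lower 27% of
    overall scorers. A distractor is 'functional' when the lower
    group picks it more often than the upper group does.
    """
    ranked = sorted(range(len(exam_scores)),
                    key=lambda i: exam_scores[i], reverse=True)
    k = max(1, round(0.27 * len(ranked)))
    upper = Counter(choices[i] for i in ranked[:k])
    lower = Counter(choices[i] for i in ranked[-k:])
    report = {}
    for option in sorted(set(choices) | {answer_key}):
        report[option] = {
            "upper": upper[option],
            "lower": lower[option],
            "functional": option != answer_key
                          and lower[option] > upper[option],
        }
    return report

# "A" is the key; B, C and D are distractors picked only by low scorers.
scores  = [100, 95, 90, 85, 80, 75, 70, 65, 60, 55]
choices = ["A", "A", "A", "B", "A", "B", "C", "B", "C", "D"]
report = distractor_report(choices, "A", scores)
print(report["B"]["functional"])  # True: drawn only from the lower group
```

Here every distractor is functional; an option with zero picks in both groups, or more picks in the upper group than the lower, would be flagged `False` and is a candidate for rewriting.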