I remember encountering multiple choice questions (MCQs) as a student – I assumed the instructor was neglecting their duties as an educator, either by outsourcing the assessment or prioritising the convenience of grading over learning effectiveness. I remain instinctively skeptical about their use.

However, three things have changed recently. First, after taking several online courses myself, I have come to see their clear advantages for asynchronous learning. Second, I have noted their effectiveness in the classroom as a pleasant way for students to check their learning and for me to monitor their progress. And third, the move to online assessment has done away with the traditional, proctored exam as an option. Now that we have better tools, more experience and weaker alternatives, it makes sense to be open-minded.

The advantages of MCQs

MCQs offer benefits to students as well as instructors, including:

  • They can be clearer and less ambiguous than open-ended questions
  • They can cover a greater amount of course content than a smaller number of open-ended questions
  • They remove discretion from marking and therefore generate more consistency and transparency
  • They automate scoring and thus eliminate human error (see the sketch below).
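
That last point is easy to see in practice. Below is a minimal sketch of automated scoring, assuming answers are collected as a simple mapping from question ID to selected option; the answer key, question IDs and responses are hypothetical, not taken from any real exam.

```python
# Minimal sketch of automated MCQ scoring (hypothetical answer key and responses).
ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}

def score_submission(responses: dict) -> float:
    """Return a percentage score: one point per question whose selected option matches the key."""
    correct = sum(1 for question, key in ANSWER_KEY.items() if responses.get(question) == key)
    return 100 * correct / len(ANSWER_KEY)

# Example: a student who answers three of the four questions correctly scores 75%.
print(score_submission({"Q1": "B", "Q2": "C", "Q3": "A", "Q4": "C"}))  # 75.0
```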

Note that I do not include “reduce instructor time” in the list above, because although multiple choice questions are quicker to grade, their construction and implementation takes more effort than traditional exam questions. Thus, MCQs benefit instructors who are comfortable shifting their workload from marking exams to designing them.

The relevance of multiple choice questions

I divide my assessment questions into three categories, based on the work of the American psychologist J P Guilford:

Command questions: these test for convergent thinking, in that students are expected to come up with the same answer as each other. Solutions must be unambiguous and objectively assessable, and different instructors should be expected to give identical scores. Verbs for convergent thinking include: to choose, select, identify, calculate, estimate and label.

Discussion questions: these test for divergent thinking, in that students are expected to provide original answers. What constitutes a correct answer can be communicated via a marking scheme, but there is significant scope for students to differ from each other in their work. Verbs for divergent thinking include: to create, write and design.

Exhibit questions: these can be either command or discussion questions, but students provide their answer in the form of a visual exhibit. For example, students might create a graph or complete a worksheet.

In terms of Bloom’s taxonomy of learning, the two upper levels, creating and evaluating, generally relate to discussion questions, and the two lower levels, remembering and understanding, are better suited to command questions. I believe that when they are well constructed, MCQs can also reach the two intermediate levels, applying and analysing. But this is not a necessary condition for their usefulness.

Provided that other assessment methods are used to test higher-order thinking, this article explores the use of multiple choice questions in their own domain.

The structure of multiple choice questions

A well-composed MCQ has two elements:

1) The stem – this is the “question” and should present a problem or situation.

2) The alternatives – a list of options from which students select, containing one correct answer and several distractors.

Some principles of good practice include:

The stem should be succinct and meaningful – it should contain only relevant information and draw attention to the learning objective without testing reading skills. It should be meaningful when read in isolation, and it should seek to provide a direct test of student understanding, rather than inviting vague consideration of a topic.

The material must relate to the content of the course – questions must strike a balance: they should relate to the course content without being a trivial memory test. If a question asks for a simple definition, or anything that can be Googled, that calls into question the value of the course. Similarly, if the instructor is using a question bank from a textbook, perhaps students should just study the textbook directly. A generic, widely used set of questions also creates an unfair advantage for students with prior background knowledge. Questions should include nuance while relating to what was taught, in order to detect whether students were actively engaged. For example, there is a difference between asking “which of the following are stakeholders” and “who did the lectures suggest should be considered stakeholders”.

Consider two-step higher-level assessment – as instructional designer Mike Dickinson explains, this is a way to increase the capacity of multiple choice questions by introducing higher-level reflection. The idea is that although students cannot “describe” a concept when given pre-assigned alternatives, they can “select the best description”. MCQs do not allow them to make an interpretation, but they can “identify the most accurate interpretation”. Think about what you want students to do and the associated verb, i.e. describe or rate, then change it to its noun form, i.e. description or rating, when writing the MCQ.

Avoid negative phrasing – providing a list of options and asking which one is not correct adds a layer of complexity and therefore difficulty, but for non-native English speakers in particular it may shift the emphasis to reading comprehension rather than subject comprehension. If negative wording is required, it is a good idea to emphasize it with italics, for example, “which of the following statements is false:”

Avoid initial or interior blanks – forcing students to fill in missing words shifts the cognitive load away from mastery of disciplinary knowledge. In many cases, stems can be rewritten to retain the purpose of the question.

All alternatives must be plausible – there is nothing wrong with using distractors as bait, but rather than listing one correct answer and several random words, each distractor should be chosen deliberately. Brame argues that “common student mistakes are the best source of distractors”, and as long as they reflect genuine mistakes rather than poorly worded questions, this is true. If I ask “which of the following, according to the course material, is an element of justice? a) Power; b) Reciprocity; c) Respect; d) Fairness”, a student who selects “c) Respect” may be frustrated not to get the point, and may cite several internet articles claiming that the concept of respect is relevant to understanding justice, as indeed it is elsewhere. But in my class we use a framework that examines four elements of justice, one of which is b) reciprocity and none of which is c) respect. The fact that one can make a plausible argument for why respect is an important part of what constitutes justice is exactly what makes it a good distractor.

The alternatives must be reasonably homogeneous – having very different options can serve as clues to the correct answer, so the alternatives provided should be reasonably similar in terms of language, form and length. Test-savvy students should not gain an advantage over students who know the material equally well.

Don’t always make B the correct answer – every time I create multiple choice questions, I remember a former student who “revealed” his strategy of always choosing option B. Since he had made it to postgraduate study, the strategy must have served him well enough.
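
A simple safeguard is to randomise the order of alternatives when generating each paper. The sketch below, which reuses the justice example from earlier, shows one way to do this; the Question structure and field names are illustrative rather than taken from any particular exam platform.

```python
import random
from dataclasses import dataclass

@dataclass
class Question:
    stem: str
    correct: str
    distractors: list

def present(question: Question, rng: random.Random) -> list:
    """Return the alternatives in a random order so the correct answer's position varies."""
    options = [question.correct, *question.distractors]
    rng.shuffle(options)
    return options

q = Question(
    stem="Which of the following, according to the course material, is an element of justice?",
    correct="Reciprocity",
    distractors=["Power", "Respect", "Fairness"],
)
print(present(q, random.Random()))  # e.g. ['Respect', 'Reciprocity', 'Fairness', 'Power']
```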

Be careful when using “all of the above” or “none of the above” options – these are not considered best practice because they allow students with partial knowledge to deduce the correct answer. However, when used deliberately, they can elicit closer engagement with the question by forcing students to read it multiple times. This is especially true with alternatives such as “a) A and B; b) B and C; c) None of the above; d) All of the above”.

Vary the number of distractors – providing four options throughout implies that random guessing is worth 25 percent of the exam. Offering more distractors reduces what students can gain from simply guessing and forces them to engage with the question. Presented with more options, students are more likely to try to answer the question themselves and then see if their answer is listed, as opposed to reverse-engineering each option to see if it is correct. However, too many options make it more difficult to maintain plausibility and consistency. So varying the number of distractors, from two to five depending on the question, is quite appropriate.
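
To make the guessing arithmetic concrete, here is a small back-of-the-envelope sketch. With k alternatives, a blind guess on an equally weighted question earns an expected 1/k of its marks; the particular mix of option counts below is invented for illustration.

```python
# Expected score from blind guessing; each entry is the number of alternatives
# offered by one equally weighted question (hypothetical exam).
option_counts = [3, 4, 4, 5, 5, 6]

guessing = 100 * sum(1 / k for k in option_counts) / len(option_counts)
print(f"Varied options:      expected {guessing:.1f}% from pure guessing")   # ~23.3%

uniform = 100 * sum(1 / 4 for _ in option_counts) / len(option_counts)
print(f"Four options always: expected {uniform:.1f}% from pure guessing")    # 25.0%
```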

Anthony J. Evans is Professor of Economics at ESCP Business School.

This advice is based on his blog, “Create good multiple-choice question exams”.