
About Validating EMT Tests

What’s a teacher to do about validating their tests?

There are a number of ways a teacher can “help” validate their classroom tests.

Content validity

One method is to ensure the items have content validity. This means the test questions come from the textbook used in class and the lectures provided during the course. Lectures should be based upon the domain of knowledge, not just one textbook.

Teachers who teach from a single textbook deny their students access to a broad base of knowledge. When information is missing from that textbook, it is up to the teacher to bring in material from other texts within their lectures.

Some teach the “PowerPoints” provided by publishers. These slides relate to a single book’s content and are often insufficient; they are a good starting point but not all-inclusive. Some just lecture (and bore their students to death!). Some teach “the skills” while neglecting the “psycho” (knowledge) part of a psychomotor skill. Students may know how to perform the skill, but not when to apply it, when to withhold it, or how to vary it depending upon patient characteristics. For example, teaching students to “apply this face mask at a flow of 10 liters per minute to every patient who has chest pain” is insufficient.

Teachers should only ask test questions on educational material the student has been exposed to and is required to know. It is up to the teacher to ensure the material covers the correct depth and breadth of knowledge required to do the job. Test questions must be tied back to the book and lectures, and every test must first have content validity.
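
One practical way to keep that link explicit is a simple blueprint check: tag every item with the lecture or textbook section it came from and verify that each content area taught in the course actually appears on the test. The sketch below is a minimal, hypothetical illustration in Python; the domain names and item tags are invented for the example, not taken from any particular curriculum.

```python
# Minimal blueprint check: every test item should trace back to course content,
# and every content domain that was taught should appear on the test.
# Domain names and item tags below are hypothetical examples.

course_domains = {"airway", "cardiology", "trauma", "medical", "operations"}

test_items = [
    {"id": 1, "domain": "airway",     "source": "Ch. 10 / Lecture 4"},
    {"id": 2, "domain": "cardiology", "source": "Ch. 17 / Lecture 9"},
    {"id": 3, "domain": "trauma",     "source": "Ch. 26 / Lecture 14"},
    # ... remaining items ...
]

# Items that cannot be tied back to the book or lectures lack content validity.
untraceable = [item["id"] for item in test_items if not item["source"]]

# Domains that were taught but never tested leave a gap in coverage.
tested_domains = {item["domain"] for item in test_items}
untested = course_domains - tested_domains

print("Items with no source reference:", untraceable)
print("Course domains never tested:", sorted(untested))
```

Even a check this simple makes it obvious when a test leans heavily on one chapter or skips a domain entirely.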

Test reliability

To obtain reliability, a test must consist of 100 or more items, and an item analysis (a statistical procedure) must be run on the students’ scored responses. What’s more, reliability can only be established for multiple-choice items, which are not necessarily the best way to measure knowledge of a particular subject.

Another problem is the interpretation of reliability outcomes. This is not as simple as it might seem, because no accepted standard exists for what counts as acceptable reliability on a teacher-made test. Most EMS educators cannot interpret reliability statistics (such as a KR-20) and cannot adjust their tests to improve reliability over time. Reliability is important for a certification examination but is of minimal importance in a teacher-made examination. Small class sizes (fewer than 20 students) also inhibit reliability measurements and can only be offset by tests with hundreds of questions. Thus, reliability in teacher-made tests is more of a myth than a reality.
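
For readers who want to see what the statistic involves, the sketch below computes a KR-20 coefficient from a matrix of dichotomously scored answers (1 = correct, 0 = incorrect). The response data are invented purely for illustration; in practice the matrix would come from an actual class.

```python
# KR-20 (Kuder-Richardson Formula 20) from a matrix of dichotomously scored items.
# Rows = students, columns = items; 1 = correct, 0 = incorrect.
# The responses below are invented for illustration only.

responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1, 1, 0, 1],
]

def kr20(matrix):
    n_students = len(matrix)
    k = len(matrix[0])                      # number of items
    totals = [sum(row) for row in matrix]   # each student's total score

    # Variance of the total scores (population variance).
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students

    # p = proportion answering each item correctly, q = 1 - p.
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / n_students
        pq_sum += p * (1 - p)

    return (k / (k - 1)) * (1 - pq_sum / var_total)

print(f"KR-20 = {kr20(responses):.2f}")
```

With only five students and eight items, as here, the estimate would be meaningless in practice; the example only shows what the coefficient is computed from, which is exactly why small classes and short tests undermine reliability.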

Entry-level competency

Courses progress over time and so does student knowledge. Accreditation standards say students must graduate “at the entry-level.” The NREMT entry requirement is that a student must “successfully” complete a course. What does that mean? How does a teacher measure those concepts (success and entry-level)?

These are difficult concepts that are based upon longitudinal judgments and data.

Some teachers have taught so many students that they “know” who is learning enough to be successful.

But what is their criterion? Likely they have given enough tests over time (longitudinal data) that they can predict (predictive validity) who will pass and who will fail the National Registry examination. These instructors know, for example, that when a student gets fewer than 65% of the items correct on his/her first test, that student is likely to fail the NREMT exam. Such an instructor’s examination possesses predictive validity when correlated with an examination of known validity: the NREMT examination. This teacher also knows what score is needed on every test he/she administers, and likely “fails” students whose scores over time fall below the level observed to predict success.
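
One way a teacher with those longitudinal records could check this kind of predictive validity is a point-biserial correlation between first-test scores and eventual NREMT outcomes. The sketch below uses invented scores and pass/fail results purely to show the computation; the 65% cut is the figure described in the paragraph above, not a universal threshold.

```python
# Predictive validity sketch: correlate first-exam scores with eventual NREMT
# outcomes (1 = passed, 0 = failed). The data below are invented for illustration.

first_test_scores = [58, 62, 66, 70, 74, 78, 82, 86, 90, 94]
nremt_passed      = [0,  0,  0,  1,  1,  1,  1,  1,  1,  1]

def point_biserial(scores, outcomes):
    """Correlation between a continuous score and a pass/fail outcome."""
    n = len(scores)
    passed = [s for s, o in zip(scores, outcomes) if o == 1]
    failed = [s for s, o in zip(scores, outcomes) if o == 0]
    p = len(passed) / n
    q = 1 - p
    mean_all = sum(scores) / n
    sd_all = (sum((s - mean_all) ** 2 for s in scores) / n) ** 0.5
    return (sum(passed) / len(passed) - sum(failed) / len(failed)) / sd_all * (p * q) ** 0.5

cut = 65  # the observed threshold described above
below = [o for s, o in zip(first_test_scores, nremt_passed) if s < cut]
print(f"Point-biserial r = {point_biserial(first_test_scores, nremt_passed):.2f}")
print(f"NREMT pass rate when first test < {cut}%: {sum(below) / len(below):.0%}")
```

A strong positive correlation, built up over several cohorts of students, is what allows an instructor to trust a cut score like the 65% figure described above.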

Some teachers, however, have no idea what the scores on their tests mean! They might be able to assign a grade, but what does that grade predict? A test could be built so that every student scores above 90% because all of the items are easy to get correct, or because the items have been “compromised.” In these cases the score likely predicts that the student will fail a test of known validity, because the teacher-made test, and therefore the score, is not valid.

“I’ve talked to hundreds of students who complained about the NREMT test being too hard, that they passed their EMT class with a 98% grade, and yet they failed the NREMT exam. The most likely reason they failed the NREMT exam is that the teacher-made tests were poorly constructed, not valid, and way too easy. Thus, the student was unable to obtain entry-level competency.”

Teachers with class sizes of 8-10 students must reuse tests across multiple years of classes in order to obtain any predictive validity for their teacher-made tests, and their items must remain “stable” over that time. An item that is memorized by students in one class and shared with future classes will not remain stable in its measurement value. Thus, teachers of small classes particularly struggle to obtain predictive validity.

One other method that may help validate a teacher-made test is to “calibrate” the items. Of course, this method also requires a test bank, something not likely to be available to the teacher. Calibrating items can be accomplished by placing the items in three “buckets”: one bucket for very difficult items, a second for moderately difficult items, and a third for easy items. The test can then be constructed so that about 10% of the items are difficult, 50% are moderate, and 40% are easy. This mix will yield a passing score of around 70%.
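
The arithmetic behind that roughly 70% figure can be made explicit. If a borderline (just-competent) student is assumed to answer about 40% of the difficult items, 65% of the moderate items, and 85% of the easy items correctly, the 10/50/40 mix works out to an expected score near 70%. Those per-bucket probabilities are assumptions chosen for illustration, not published values.

```python
# Expected score for a borderline student on a test assembled from three
# difficulty buckets. The per-bucket probabilities are assumed values chosen
# for illustration; the bucket mix (10% / 50% / 40%) comes from the text above.

bucket_mix = {"difficult": 0.10, "moderate": 0.50, "easy": 0.40}

# Assumed probability that a borderline (just-competent) student answers an
# item from each bucket correctly.
borderline_p = {"difficult": 0.40, "moderate": 0.65, "easy": 0.85}

expected_score = sum(bucket_mix[b] * borderline_p[b] for b in bucket_mix)
print(f"Expected borderline score: {expected_score:.0%}")   # about 70%
```

Different assumed probabilities shift the expected passing score up or down, which is why setting a cut score still involves judgment.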

There are ways to calibrate each individual item in a test and use those calibrations to set a pass/fail score. When individual item calibrations are used, however, the pass/fail score will vary from test to test, causing confusion for students.

The student’s role when taking a test is to get as many questions correct as possible, and not to worry about a minimum score that must be attained in order to pass.

EMT PASS and AEMT PASS calibrations

Every practice test in EMT PASS and both simulation examinations in AEMT PASS have been calibrated, which helps the student understand what his/her score means. The process used in EMT PASS and AEMT PASS exceeds that of any other test-question book or electronic application available. While a student may not be happy with his/her score, it will at least be meaningful.

Sometimes a “marginal” classroom student will fail an examination of known validity and reliability (the NREMT examination). But when an entire class fails, or only a small percentage of the class passes (for example, a 20% pass rate), the culprit is often invalid teacher-made tests.

Classroom tests should help prepare students for entry-level competency, yet developing valid teacher-made tests is difficult, takes years, and can be imprecise.

William Brown