In other words, test scores are misleading as a measure of content competency.

My original thought on the above article was to point out a "Houston, we have a problem" moment, since so few students across the U.S. seem to have quality writing skills.  Of course, it goes without saying this has been true for years, and no group or organization has focused on the issue because test scores have been all the rage.

Students who have access to computers at home and regularly use them for assignments are more likely to be strong writers, a national exam suggests. But it also says just a quarter of America's eighth- and 12th-grade students have solid writing skills. (quoted verbatim from the article)

And then I took on a very part-time position reviewing test questions for a company which sub-contracts to some other company, and I began to realize (yet again) other reasons why our students have poor writing skills.  Not only is there the dilemma of dumbing things down so students need only answer multiple-choice and T/F questions; the questions have also become significantly more about "test-taking strategies" than about the higher levels of knowledge, such as application and synthesis.  Test questions are not open-ended. The answer is embedded (four options give a 25% guess rate and five options a 20% guess rate, while T/F is a 50/50 crap shoot!), so students just need to learn better test-taking skills, the older legacy the SAT, ACT, and GRE provided before they had a writing component.  These tests demonstrated the ability to think through questions.
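The guess-rate arithmetic above can be sketched in a few lines (a minimal back-of-the-envelope illustration; the question formats and test length are generic, not taken from any particular exam):

```python
# Chance of guessing correctly on a single question, by format.
formats = {
    "4-option multiple choice": 1 / 4,   # 25%
    "5-option multiple choice": 1 / 5,   # 20%
    "true/false":               1 / 2,   # 50%
}

# Expected score on a hypothetical 100-question test answered by pure guessing.
for name, p in formats.items():
    print(f"{name}: {p:.0%} per question, ~{100 * p:.0f}/100 expected by guessing alone")
```

The point is simply that a blind guesser already starts at 20–50%, so the format rewards test-taking tactics before any content knowledge enters the picture.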

I realize the SAT and ACT now have a writing component, which is obviously important if only 25% of students in the U.S. can write a five-paragraph essay in Grade 12, and college should clearly expect more of a student.  Unfortunately, the five-paragraph essay has become so pro forma that just about anyone can learn the routine and write something; whether or not it is quality writing, it can meet the proficiency standard.

Great writers come about through reading and vice versa; the two go hand in hand. When the brain has to spend time thinking about "how" to answer a question rather than about the content of the question, the task is already dumbed down.  Reading and interpreting an M/C or T/F test question does not lead to good writing skills; it leads to memorization skills and "trick" techniques for understanding how test questions are written.  Having a computer is great if it is used to read material, not to play games and do other tasks which are multiple choice and T/F, yet easy to grade.

This issue has become most evident to me in working with foreigners who are writing test questions abroad and want them "Americanized" via grammar and the like, yet refuse to understand that the quality of the questions is still poor even when the grammar is corrected. The questions rely on nothing more than parsed-out common information and on how well a student can memorize bits and pieces and think through testing logic.   The tests have little to do with the skills we would expect in a college classroom, in a workplace, or even of students who wish to learn.

I have done this "job" a few different times for different organizations. Each time the scenario is similar: questions are produced by foreigners, and my job is to "grammatize" the commodity so the American business will think there is a new and better set of test questions in the question bank. Each and every time the problems are the same: making a question "tricky" does not make it better.  It proves the questions in the question bank are not promising.  What could make a good testing program for PRACTICE is the algorithm by which questions are selected and used.
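The point about selection algorithms can be illustrated with a minimal sketch: even an unremarkable question bank becomes a usable practice tool if questions are drawn toward what a student keeps missing. Everything here (the bank, the miss counts, the weighting rule) is hypothetical, invented purely to show the idea:

```python
import random

# Hypothetical question bank: question id -> topic.
bank = {1: "fractions", 2: "fractions", 3: "percents", 4: "percents", 5: "logic"}

# Hypothetical record of how often this student has missed each topic.
misses = {"fractions": 3, "percents": 1, "logic": 0}

def pick_practice_question(bank, misses, rng=random):
    """Weight selection toward topics the student misses most often."""
    ids = list(bank)
    # Base weight of 1 keeps every topic in rotation; misses add emphasis.
    weights = [1 + misses[bank[qid]] for qid in ids]
    return rng.choices(ids, weights=weights, k=1)[0]

next_question = pick_practice_question(bank, misses)
```

Under this (assumed) rule, a student who keeps missing fractions sees fraction questions far more often than logic questions, which is the kind of selection logic that adds value regardless of how "tricky" the individual questions are.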

This is an example of what was returned to me when I could not understand what a particular question was asking, both grammatically and in terms of material, since it was phrased in a convoluted manner: "…but it's a false question."   My response would be (should have been): why a false question (assuming a double negative) when it tests test-taking ability and logic, not content knowledge?   I was "dinged" by the question writers for my response, since by correcting the grammar I had made the question "easy."  All the process did was make me laugh about who might have the larger ego.

Obviously, when we read and interpret test scores (the nefarious spring testing ritual), we are also measuring how well our students can think through test logic, as opposed to tests that include written components.  Why is it, then, that the spring test scores give a different picture from what NAEP produces? It is not just that there are two different types of tests.

As schools (public, private, and charter) have jumped further into the cesspool of test scores based on M/C and T/F, writing has diminished. We do not expect students to reason and work through the logic of a science experiment, do error analysis on math problems, or write a critical analysis of fiction or a well-researched scientific piece (all of which is appropriate writing across multiple genres); we just need them to pick an answer.

Even worse, there are people who would like to see teachers castigated for not teaching well unless test scores go up. How about we start rewarding teachers where writing improves, in all genres and content areas?   Just imagine if test scores remained the same or rose a bit each year AND students could write compelling essays, papers, and ideas by 8th and 12th grade.

There are many organizations and businesses which would be better served offering services in which student work could be read and graded against a rubric for teachers (eliminating favoritism and the other issues of teacher grading in the classroom), rather than continuing to jump on the test-bank bandwagon.  Until we choose to change how we "do business" in the arena of testing, we are getting just what we pay for: questions written by students abroad, which are then anglicized and made acceptable for the U.S. We are not changing the "known world."