ARE THE READABILITY LEVELS OF TEXAS’ STAAR TESTS OUT OF WHACK?

Mar 31, 2019


“Are the Readability Levels of Texas’ STAAR Tests out of Whack?”

By Donna Garner

3.30.19

Recent news media articles have focused on readability studies by Szabo and Sinclair in which the authors indicate that the readability levels of questions on the Texas STAAR tests are misaligned (“Readability of the STAAR Test Is Still Misaligned” — by Susan Szabo, EdD, Professor, Texas A&M University-Commerce, Commerce, TX, and Becky Barton Sinclair, PhD, Associate Professor, Texas A&M University-Commerce, Commerce, TX — Schooling, Volume 10, No. 1, 2019 —  http://www.nationalforum.com/Electronic%20Journal%20Volumes/Szabo,%20Susan%20Readability%20of%20STARR%20%20is%20%20Misaligned%20Schooling%20V10%20N1,2019.pdf )

I appreciate the work of these authors to track the readability levels of the 2018 STAAR reading questions (Grades 3 – 8). However, having taught English / Language Arts / Reading (ELAR) in the classroom for more than 33 years, I know that the CONTENT of a reading passage makes all the difference. If the content is really interesting to the reader, he/she is motivated to read “above his/her reading level.”

Because the classic writers have stood the test of time and have proven themselves to be experts at introducing dynamic characters, fascinating plots, and enticing settings, even weak readers can read above their level when their interest is high.

All during the years that I taught, I required my students to give book reports for credit each six weeks. I gave them an extensive reading list from which they could self-select their choices. On most of the books, I listed the reading level, and I was always amazed at the students who chose books far above their tested reading level and read them successfully. Great writers motivate students to read great books. Great writers motivate students to stretch themselves. [It is my belief that more selections from the time-honored classics should be included on the STAAR/End-of-Course tests.]

Another factor to consider is that the STAAR reading passages should test all students within a grade level – those who read above grade level, at grade level, and below grade level. If all the questions are on grade level, then those who read above grade level will be neither challenged nor motivated; the STAAR reading questions will bore them. Students who are working above grade level in their classroom work will begin to feel as if their academic efforts are not being fairly measured on the STAAR. It would be a disservice not to challenge them.

I learned through my many years of teaching that if I set the “standard of excellence” low, students would settle for that standard; but if I set the “standard of excellence” high, even the weaker students would try to reach it.

Also, I wonder why the authors chose not to use the Lexile Reading Framework as one of their readability formula instruments.  As I understand it, teachers in all 50 states utilize the Lexile; and many teachers in Texas use it to help their students choose books to read.  The Texas Education Agency uses the Lexile as one of its instruments to gauge the reading levels on the STAAR tests.

One other factor to consider carefully is that the Texas Education Agency convenes large groups of classroom teachers to determine whether each and every STAAR/End-of-Course test question is suitable for a particular grade level. Since these classroom teachers work with real students in real classrooms each day, I would rather put my trust in their judgment than in an inanimate readability formula.

Passage from the Szabo/Sinclair study:

However, readability formulas cannot tell if the target audience will understand the text, as they do not measure the context, the reader’s prior knowledge, or interest level in the topic or the cohesiveness of the text (Bailin & Grafstein, 2001; Bertram & Newman, 1981; Kirkwood & Wolfe, 1980; Zamanian & Heydari, 2012). Additionally, it was found that tinkering with the text to produce acceptable readability levels may make the text more difficult to understand (Rezaei, 2000). Nevertheless, today, various readability formulas are commonly used to determine the readability of government documents, educational materials for students, newspapers, magazines and popular literature (Begeny & Greene, 2014). Readability formulas are mathematical in nature and focus on different text features. These features include the number of words in a sentence, the percentage of high frequency words on predetermined grade level word lists, the number of multisyllabic or “hard” words, and/or the number of letters in the text (Bailin & Grafstein, 2001; Begeny & Greene, 2014). For this reason, several formulas should be used and averaged when determining the readability of a selection of text, to account for the differences in formula design (Szabo & Sinclair, 2012).
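
To illustrate the kind of arithmetic the quoted passage describes, here is a minimal sketch (my own illustration, not code from the Szabo/Sinclair study) that computes two widely used readability formulas, the Flesch-Kincaid Grade Level and the Gunning Fog Index, and averages them in the spirit of the authors' recommendation. The syllable counter is a rough heuristic, and the sample text is invented for demonstration.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels; real tools use pronunciation dictionaries.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # discount a silent final "e"
    return max(count, 1)

def readability_estimates(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    words_per_sentence = len(words) / len(sentences)

    # Flesch-Kincaid Grade Level: sentence length plus syllables per word.
    fk = 0.39 * words_per_sentence + 11.8 * (syllables / len(words)) - 15.59
    # Gunning Fog Index: sentence length plus percentage of "hard" (3+ syllable) words.
    fog = 0.4 * (words_per_sentence + 100 * complex_words / len(words))

    return {"flesch_kincaid": fk, "gunning_fog": fog, "average": (fk + fog) / 2}

if __name__ == "__main__":
    sample = ("The committee reviewed the passage carefully. "
              "Students answered every question about the characters and the setting.")
    print(readability_estimates(sample))
```

Note that both formulas rely only on surface features (sentence length, syllable counts), which is exactly why, as the passage points out, they cannot account for a reader's prior knowledge or interest in the topic.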

===============

This article is based upon the January 2019 Texas State Board of Education meeting in which the new ELAR/TEKS and the accompanying STAAR/End-of-Course tests were discussed at length. Listening to Texas Commissioner of Education Morath made me realize what a fastidious and complicated process the Texas Education Agency is implementing to make sure that the grade-level-specific TEKS (Texas' curriculum standards adopted by the elected members of the SBOE) and the STAAR/EOCs are carefully aligned:

1.30.19 — “Tex. Comm. of Education Morath Leading the Way to Real Change” — By Donna Garner – EdViews.org  — http://www.educationviews.org/tex-comm-of-education-morath-leading-the-way-to-real-change/
