NOTE: Since I criticized Dr. Ivey's position in this paper, it may seem that I had a poor regard for his teaching abilities, especially with ESL students. That is not true at all; Dr. Ivey was very effective as a teacher and with those students. I have substituted "X," "here," or "this school" for the name of the college where this happened throughout.
The issue being considered
is as follows: The East-West Japanese students have taken several reading
tests and have scored far below the American standard even after being
allowed double time. Ellis Ivey, who has been teaching reading to them, feels that in the past the reading class has been taught to ESL students without any standard. He suggests that we accept a score of 17.69 on the Nelson-Denny
as a passing score since that is the average score earned by the ESL students
this year at this school on the higher of their two Nelson-Denny tests (American
students must earn 48.3).
Before giving my
reply, I would like to explain my background. I am not trained as a reading
teacher, but I did teach reading for one year in Kentucky and also for
half a quarter to ESL students at the University of Alabama. Furthermore,
I earned my second MA degree in TESL. My studies in English as a Second
Language included a look at tests and testing. More broadly, we looked
at theory and practice in the various areas of language learning and teaching,
including reading.
I have several
reasons for disliking the above proposal. First, the Nelson-Denny test
has no reliability or validity for testing ESL students. Second, even if
we substitute a test that does have reliability and validity, such as the
TOEFL or the University of Michigan test, that test has limited usefulness
for placement. Third, I do not think that any placement test should be used to determine which students pass or fail a class; a placement
test should be used for placement only, and promotion should be based on
achievement or upon reaching a well-defined standard towards which a class
is directed.
In regard to the
Nelson-Denny's reliability for testing ESL students, I need to look no
further than the scores presented by Dr. Ivey. Fifteen students first took Form F and later Form E of the Nelson-Denny test. With three of these students,
the scores remained roughly the same, and with six of the students, the
scores increased dramatically. If only these nine students had taken the
test, we would believe that three students had not learned much while the
other six had made great progress. However, the results from the other
six students indicate the unreliability of such an assessment because these
scores plunged about as much as the other scores had risen. If a student's
score doubles within one quarter, I can accept such terrific improvement
as possible, especially among ESL students. However, if such a score plunges
dramatically, as did Mamie's, Tae's, and Akiko's, I have to assume either that they were intoxicated when they retook the test or that the test is meaningless. Even with such a small set of scores, it is possible to test statistically for reliability between the two forms, but there is no reason to do so; the results are too obvious: our students did not fail the Nelson-Denny; the Nelson-Denny failed our students.
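For anyone who does want to run the numbers, alternate-forms reliability is usually estimated as the Pearson correlation between the two sets of scores. Below is a minimal sketch in Python; the pearson_r helper and both score lists are mine, and the numbers are illustrative placeholders only (the real fifteen pairs are in Dr. Ivey's data), arranged to mirror the pattern described above: three roughly stable scores, six that rose sharply, and six that plunged. With swings like these, r lands near zero, far below the 0.8 or better expected of a dependable test.

```python
# Alternate-forms reliability: Pearson correlation between Form F
# and Form E scores for the same fifteen students. A dependable
# test should yield r of roughly 0.8 or higher; scores that swing
# wildly drive r toward zero.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Illustrative placeholders, NOT the real scores: three stable pairs,
# six dramatic rises, six comparable plunges.
form_f = [20, 15, 25, 10, 12, 9, 18, 8, 6, 24, 26, 22, 30, 16, 17]
form_e = [21, 14, 26, 22, 25, 20, 31, 16, 13, 10, 12, 9, 19, 7, 8]

print(f"alternate-forms reliability r = {pearson_r(form_f, form_e):.2f}")
```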
We can deal with
the Nelson-Denny's validity in the same way, ignoring reliability this
time and looking at the average of the two tests for students whose scores were reasonably consistent. According to the two tests, Takashi and Tomoko are at the very
bottom of the class while Yuriko is second only to Kotaro. On the other
hand, if we look at the Assessment and Placement Test for Community Colleges,
Kayo has the highest score of all. While someone who has never taught these
students may accept such results, I cannot. Takashi and Tomoko demonstrate
better language skills and grades than Yuriko and Kayo.
I can give several
good reasons why ESL students test unreliably. First, their reading speed
is low. Second, their vocabulary is tiny compared to that of an American
student; kindergarten students in the US understand far more words than
these students can reasonably be expected to learn in a few years. Third,
their vocabulary is non-standard; thus, they don't know language we take
for granted. Fourth, they lose a lot in the translation. Fifth, they come
from a different culture: even when they understand the material, they
might misunderstand the question. In addition, according to what I have
read about learning, a person learns best when prepared to receive the information, but these students must tackle strange
questions in a strange language about strange situations. An ESL student
has many handicaps that an American student does not. To some extent, even
his or her education is a handicap, since it has taught the student how to cope with a different world.
The proposal has
been made to accept 17.69 as a passing score for ESL students. I see absolutely
no reason to accept such a figure. Since the Nelson-Denny is demonstrably
unreliable for testing ESL students, a score of 17.69, or any other score,
is meaningless. Six of the students earned above that score on one test
and below that score on the other. If we give them a third test, it will
be a matter of luck which of those six students pass.
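The arbitrariness of the cutoff is easy to demonstrate. The sketch below, reusing the same illustrative placeholder scores as before (not the real data), simply counts how many students pass one form and fail the other at the proposed 17.69 cutoff; each such student's fate on a third test really would be a coin flip.

```python
# Count students whose pass/fail status flips across the proposed
# cutoff between the two administrations. Scores are illustrative
# placeholders, not the real data.
CUTOFF = 17.69

form_f = [20, 15, 25, 10, 12, 9, 18, 8, 6, 24, 26, 22, 30, 16, 17]
form_e = [21, 14, 26, 22, 25, 20, 31, 16, 13, 10, 12, 9, 19, 7, 8]

flips = sum(
    (f >= CUTOFF) != (e >= CUTOFF)   # passed one form, failed the other
    for f, e in zip(form_f, form_e)
)
print(f"{flips} of {len(form_f)} students flip across the cutoff")
```

With these placeholder scores, six of the fifteen students flip, matching the pattern described above.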
During the last
meeting, it was proposed that I locate an assessment test that could be
used with ESL students. I feel a misunderstanding occurred at that point.
I did not feel at the time that what I learned would be helpful; however,
I was willing to update my knowledge and pass on whatever I learned. My own thinking was, and is, that no desirable test exists, and that even if one does, I would still prefer that we not use it.
In my course on
ESL testing a few years ago with Dr. Rebecca Oxford, I discovered that
ESL testing is usually neither valid nor reliable. My memory is (and a
former classmate agreed) that the only valid and reliable test of reading
at that time for ESL students was the TOEFL. If we want to establish a
clear and defensible measure of our students' abilities, giving the students
the TOEFL as a pre- and post-test would be perfect. In talking to Bill
Wallace, the Director of the English Language Institute at the University
of Alabama, I learned that another test exists that has been used in many
schools. This test was created at the University of Michigan, one of the
strongest schools in the US for ESL studies. We will be receiving information
in the mail soon.
However, Bill Wallace
does not use such a test at the ELI, nor do I recommend our using it. Why?
I must restrict my remarks on a test about which I have no information, so
I will explain my reasoning using the TOEFL as my example since I studied
the TOEFL at length. I can begin by giving the positive information about
the TOEFL. The reliability and validity information on the TOEFL is extensive,
and the TOEFL compares extremely well with the most accurate tests given.
Creating a valid and reliable test is very expensive; that's why the TOEFL
costs so much, and that's why other tests of its kind are so rare. However,
in spite of validity and reliability, a test such as the TOEFL is a poor
predictor of the students' success. Students who come to the US with low
TOEFL scores are almost as likely to make high grades as students who come
to the US with high TOEFL scores.
How can a valid
and reliable test be worthless? The very act of creating a valid and reliable
test excludes factors that matter far more to the student's total success than the factors the test measures. My experience teaching ESL students for almost nine years has shown time and again that their English ability correlates poorly with their academic success. At Gadsden State, I taught ESL and American students separately, speaking slowly and explaining every detail to the ESL students and having
rich dialogues with the American students. Nonetheless, on the weekly papers
and on the in-class final (unlimited time), the papers written by the ESL
students were far superior. There was no correlation between the TOEFL
scores and grades in my classes. Students with high TOEFL scores frequently
failed my English class while students with low scores often made "A's."
Some of my "A" students could barely read, talk, or understand a lecture,
but they did know how to work.
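That claim about correlation can be checked wherever score and grade records exist. Here is a minimal sketch using entirely hypothetical TOEFL scores and grade points (none of these are real students) and the standard library's statistics.correlation, available in Python 3.10 and later; a coefficient near zero is exactly what "students with high scores frequently failed while students with low scores often made A's" looks like numerically.

```python
# Predictive validity check: does the placement score predict the
# course grade? Requires Python 3.10+ for statistics.correlation.
import statistics

# Hypothetical illustrations only -- not records of real students.
toefl = [610, 525, 480, 635, 500, 560, 450, 590]    # TOEFL scores
grades = [2.0, 3.7, 1.7, 3.3, 4.0, 1.0, 3.0, 2.7]   # grade points

r = statistics.correlation(toefl, grades)
print(f"TOEFL-to-grade correlation r = {r:.2f}")
```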
No assessment test
allows the student the opportunity to utilize all of the student's resources,
and, therefore, assessment tests do not and cannot predict success.
Therefore, I recommend
against using the TOEFL or any test like it as a method of selecting incoming
students. For further evidence, just look at the students who have been
on this campus. Setsuko Morimoto, for instance, arrived with marginal English
ability and left with weak English abilities, yet still earned a 3.4 average.
I doubt that she could score a 500 on the TOEFL. However, some of our recent
arrivals, with TOEFL scores above 600, have done marginal work in English
composition and have had poor grades overall.
I have an even stronger
objection to using a placement test such as the TOEFL to determine when
a student is ready to leave a class. I had many ESL students at Gadsden
State whose reading rate must have been measured in the hundreds of words
per hour, yet who made "A's" on their term papers. While improving students'
reading abilities is a worthwhile objective, there is no established minimum
standard, nor should there be.
A second problem
with using an assessment test to exit students from a class is that such a test is norm-based, not objective-based. When I was in school many years ago, only one "A" was allowed per class to preserve a "bell curve" of grade distribution. That method of grading was unfair, partially because
it did not allow for the range of ability from class to class and even
more because it ignored whether the students actually learned anything
or not. A norm based on the entire population of students seems to avoid
these errors, but it does not really avoid the second. After all, what
is the content of the TOEFL? Does it reflect what ESL students here
need to know or do for their classes? Should we work towards getting them
to pass the placement test or should we work on preparing them for their
tests and assignments here?
Teachers of composition
have never relied on a norm. While goals and evaluation, as a result, vary
widely from teacher to teacher, each English teacher is free to set a practical
target, one that takes into account the students' level of ability and the
requirements of schools and occupations. While I have maintained the same
goals and standards from school to school, I have been free to vary the
support and the assignments to best help my students be successful. I never
judge my students' abilities; I am interested in their results.
I have tried to
accomplish the same purpose with our English composition exit grammar exam, which has a threefold advantage over the assessment test that has been used in English 081: it provides a clear, realistic, and teachable target. The assessment test, on the other hand, is based on a norm: I have no idea what is expected of the students, or whether they would know anything worthwhile if they learned only enough to pass it; I am not sure some students will ever be able to pass it; and I do not know what I need to teach them to get them to pass it.
A major failing
of all assessment tests is that they are measuring ability, and ability
under crippling circumstances, at least for some people. In one TESL class,
we student teachers were asked to take vision tests through bad glasses,
reading tests in the wrong language, and dexterity tests with thick gloves,
the wrong hand, or improper tools. Under such circumstances, we became
frustrated or disruptive or simply failed to perform well. When I ask one
of my ESL students to write a paper within a few days, the student is limited
to some extent but has freedom also. However, if I say the paper must be
written in one hour, the time becomes the most restricting factor, just
as plants in a desert find water to be the restricting factor. By adding
that one restriction, I am no longer measuring writing ability; I am measuring
the ability to write against the clock. Little can be done to improve some
abilities. For instance, I have never been able to improve my memory, dexterity,
handwriting, spelling, or ability to distinguish left and right, in spite
of being punished both at school and at home. Yet, my verbal ability, which
was high in childhood, has continued to grow. To me a fair test is one
that looks at the person's positive accomplishments. For instance,
if we were to evaluate the faculty at this college, should we use a test
of ability based on the norms for faculty members in the United States,
or would it be more fair to look at what each faculty member has accomplished?
To me, rather than
make another assessment of the students' abilities, we should make an assessment
of our students' needs. What do our students need to learn to do well on
their reading assignments for their classes and how can we best help them
reach that goal? I recognize that this second task is more difficult,
more open to question, and more unpredictable. However, because it will
establish a clearer, more realistic, and more teachable target, I think
it will improve the students' results. As far as grades are concerned,
I think the teacher can depend on his or her experience and judgment. However,
the students' grades should be based on their accomplishments, not on their
abilities.