Reflections on ICLS 2002
For me, the highlights of the conference (besides having meals with some very interesting people and seeing Colleen again) were Roy Pea's opening keynote and the special session on assessment of complex learning (Pellegrino, Bjork, Linn, Bereiter, Miyake and Shirouzu, discussant: Collins).
Apparently, the "No Child Left Behind" plan has some really negative consequences for the Learning Sciences community. It tries to force learning-sciences evaluation into the mold of the kind of evaluation that works for medicine and agriculture. Within five years, 75% of all NSF-funded research in education will use this type of evaluation; currently, less than 5% of evaluation is done this way. It basically forces us researchers into a box that isn't useful. Roy Pea said "No Child Left Behind" should be called "No Child Left Untested," since that's what it really means. Unless there is a clear road from this excessive testing to better education, I think it could seriously harm learning research.
This leads me to my second point. If we are going to test everyone and everything, we need to think more about assessment. The problem is that the kind of assessment "No Child Left Behind" encourages is built on the learning theory that knowledge consists of facts that people transfer into their heads, and that recall is the way to test that knowledge. This is a terribly naive version of how people learn. I bet, however, that most politicians who come up with policies like "No Child Left Behind" hold this theory of knowledge. The problem I have with the politicians is that they are forcing their naive model of learning onto the research community. If politicians applied this technique to other sciences, I wonder whether half of all chemistry research would be concerned with trying to convert lead into gold.
Robert Bjork told of an interesting psychology study regarding the chunking approach to education. Basically, the study compared three scenarios for learning Spanish vocabulary words. The first was three hours in one sitting. The second was one hour a day for three days in a row. The third was three one-hour sessions a month apart. On pre/post-testing, the first method was most effective and the third was least effective. In addition, students were most satisfied with the first approach and least satisfied with the third. So by this assessment, we could conclude that chunking (the first approach) is the better approach. However, when the researchers ran another post-test after one year, the scores were reversed: the third method retained the most information while the first retained the least. I think most people would agree that retention after a year is the more important assessment, so we need to be clear about the assumptions we make when we assess anything. In a field as complex as education, this seems even more applicable.