[Squeakland] Panel discussion: Can the American Mind be Opened?
alan.kay at vpri.org
Fri Nov 23 15:56:43 PST 2007
Here's what we did for City Building, Playground, and then Etoys
(I've written about this before, but I don't think on this thread).
"City Building" is a wonderful (at that time non-computer) curriculum
designed by Doreen Nelson that is very rich and has been used
successfully for many age ranges - in our case we implemented it with
Doreen's help for 3rd graders - which was the youngest group tried up
to that point. Google Doreen and "City Building" for a wealth of info
on this terrific curriculum design.
Playground was a different way to do Etoys (similar graphics model
and a different programming model). This was implemented in a grade
4-5 classroom (the school didn't have grades by age, but "clusters"
by developmental level - which works a lot better).
Doreen helped in every step of introducing "City Building" to very
willing "3rd grade" teachers. Still, it took 3 years before the deep
quality in the curriculum was manifest in the classroom and in the
students and what they did and how they did it. Photographs of each
of the three years would not reveal much visible difference. It was
what the children were concerned with, how they talked about it, and
how they went about the processes that changed profoundly. Trying to
trace all this back into "what happened?" we came to the inescapable
(and not too surprising) conclusion that the teachers had also
changed -- they had learned much more about design and systems over
the three years, and this was manifested in a "well above threshold"
assessment from Doreen and the rest of us in the 3rd year.
It's worth noting that assessments of fluency do not require control
groups because what is being judged is not a teaching method or a
curriculum per se, but results. Were the children doing deep "City
Building"? No for the first two tries, Yes from the 3rd try onwards.
Similarly, "are the children doing real math and real science or
not?" Questions like these are easily answered by people who can tell
the difference (just as musicians and coaches can assess their
learners for degrees of fluency).
The City Building experience and our long stay in this school allowed
us to try the same multiple year assessment for Playground
programming and its curriculum (with similar results). Basically,
there are just a lot of things that don't get normalized in single
trials of even worthwhile curriculum ideas that get smoothed out over
a few years. The teacher gets more knowledgeable and confident. The
curriculum is improved from some of the bugs found. The software
often requires tons of work over the three years before it is above threshold.
When we started on Etoys 10 years ago, we had the three year trial in
mind, and decided that all the initial curriculum would be tested
over three years before we wrote it up (the substance of Kim's and
BJ's book "Powerful Ideas in the Classroom" is about a dozen
projects, each of which was tested over three years).
What we don't know from this methodology is whether there are better
ways to teach Etoys and the math and science powerful ideas in these
examples. And we don't know whether the choices of the math and
science examples are the most appropriate. But what we do know is
that the processes of their book are highly likely to result in more
than 90% of a class of children getting fluent in what's in the book,
and that includes strong elements of differential vector geometry,
acceleration and Galilean gravity, etc.
This leads to interesting arguments, especially wrt young children,
of the kind "if you can get 10-11 year olds to do real math and real
science, then it doesn't much matter what the specific subject matter
is". And "if the specific subject matter can be strongly related to
adult uses and thinking about real math and real science, then all the better".
This bypasses the much more difficult problems of taking a given
theory of subject matter (school maths, etc.) and trying to contrast
different ways of teaching it. We do not do that at all, and the
Etoys work was done as part of "science time" in these classrooms (a
great place to teach real math given the difficulties with the school
math goals and processes).
The main point here is that above threshold fluency for 90%+ of the
children is one of the most important benchmarks -- and it can be
done a little more easily than trying to use specific control groups
if the subject matter is very different from school theories, yet
still recognizable by experts.
A side comment. The reactions against "the new" take partial form in
demands for "super scientific studies", and most of these are simply
not feasible, if our "three years for a good experiment" is valid.
But the largest, most devastating studies in the US are the "whole
country" results that show beyond a shadow of a doubt that the
existing educational process is not resulting in more than a small
percentage of children getting above acceptable thresholds in
reading, writing, math and science (and thinking). This is the
problem they don't want to even discuss. Contrastive studies are not
interesting unless both are above threshold. If neither is, back to
the drawing board. If one is, then a more detailed contrast is of little value.
At 05:29 PM 11/22/2007, David Corking wrote:
>Tony Forster wrote:
> > Controlled blind large studies are rarely done. This is because the lab
> > rabbits are real kids and there are real ethical concerns. We are
> > stuck with
> > anecdote and assertion for the large part. We need to critically
> > examine all
> > this, as there is little hard evidence.
>For better or for worse, our society uses real kids for blind (and
>even double blind) trials of medical treatments.
>The ethics of a pendulum swinging from 'new math' to 'new new math' to
>'back to basics' and on, based each time on anecdote, are, to the
>naive observer, as great a cause for concern as giving two matched
>groups of children differing curricula for a couple of years. Perhaps
>saying that ruins my chances of influencing education, but instead of
>advocating such trials, and dismissing current research methods, my
>next step is to understand how, as a society, we should interpret an educational study.
>What are the benchmarks a study must meet to be considered good
>evidence to support making a change (to the learning environment, the
>learning methods, and even the learning objectives, or even just to an
>individual lesson plan?) Educators like yourself work hard on these
>studies to get them through peer review, or incorporated in government
>policy, and often aim to be utterly dispassionate. So, how
>should a concerned parent (or administrator or politician) work with
>teachers in their community to separate the wheat from the chaff?