Essay: Steven Pinker's Cyborg: How Teaching Reading Became More Complicated than Rocket Science

From Conservapedia


Learning is not necessarily the same as external performance (i.e., 'task time'), because the brain commonly keeps processing performance data even after the body has ceased the performance. This is how a child learns to walk in far less 'task time' than a hypothetical robotics engineer with no exposure to toddlers might expect. Some children spend a lot of 'task time' learning to walk, while others spend comparatively little. Either way, there is so much variability in the age at which normal children attain basic proficiency in walking that some attain it only at twice the age of others. Such variability is how humans naturally differ in learning any skill. In fact, the more 'artificial' the skill, the more variability there is in the age at which, and the means by which, humans can become proficient in it without unnecessary struggle.

A basic question for reading research that I have never seen addressed is whether adults in primitive countries who grew up without exposure to print, and who are later taught to read, have the same kinds and incidences of struggles in learning to read that American children have.

Diane McGuinness admitted that some children do not seem to find learning to read difficult; she rationalized that this must be because they have the talent for working out how to do it even in the face of 'bad' instruction (Why Our Children Can't Read, pg. 30).

But less-than-unambiguous terminology used in (imposed) reading instruction is a handicap for children who do not 'instinctively know from experience' something of what human reading is (or who do, but whose instruction fails to activate that knowledge within the context imposed). Within an imposed and micro-managed 'learning' context, children whose socio-linguistic development 'lags behind' that of their average age-mates need relatively more unambiguous terminology in order to keep from acquiring robotic habits of approaching print. Yet it is these very children whom politically pressured teachers then find they must induce all the more to submit to the programming.

When an entire nation of children was forced by law into this one-sided, dictatorial, 'educational' relationship, the conveniently bureaucratic solution to children's increasing 'learning failures' was to do lots of statistical, third-person research to figure out how to improve the programming, and/or to load children down with even more of these menial mental chores within a given span of time, beginning at earlier and earlier ages.

Most of this essay is made up of parts of several of my posts on the online discussion forum EduTalk, hosted by Yahoo Groups. Those posts get little exposure to anyone but members of that forum (and little by them, too, I suspect), so I'm making an essay here out of parts of some of them. The main inspirations for those posts, and for this essay, are the following persons, in order of priority: the late John Caldwell Holt (author of nine books on education/learning/science), Frank Smith (author of Understanding Reading), Rudolf Flesch (author of Why Johnny Still Can't Read), and, to my mind, deservedly famed cognitive psychologist Steven Pinker---whom I'm nevertheless about to quote quite disparagingly. Hi ho, hi ho, a-essaying we now will go.

Look at the following set of three little shapes and tell me what you see.


Do they merely look like themselves? Or, do you tend to make them appear to be more than they are?

Are they a pair of glasses on a nose? Or maybe they're a universal sign for "bicycle-and-rider"? Or maybe a bird eating a seed? The fact that you can so easily see such a simple, static shape in different and meaningful ways shows that the human visual system is not keyed to the visual trivia constituting the shape, but to any and every part of your brain that can be brought to bear. In fact, the way you get to know a new person (visually, or in general), and thus get an ever-deepening sense of them, is by the same means by which you get to know a simple, abstract shape. It's no accident that the most published-and-read story in the world is not about a mad scientist and his inexplicable longing for a robotic companion, but about the core unit of human society: the first man and first woman.

In his book, Not Even Wrong, Paul Collins recounts his autistic son's preoccupation with letters by noting, at one point, how his son seemed to relate to letters as if they were persons. Letters, words, shapes, toys, random bits of wood: people recognize them all in the same way, a way possible only to social creatures whose brains are wired for social sense, even despite disabilities that make them much less than optimally social. You may not believe me at first, but the fact is that the way you recognize words is at least subtly informed by your sense of human faces. That's right, I'm actually saying that you see written words as if they were faces. But I mean that you see them that way only on a subtle level. What's the first thing Adam or Eve saw? I mean, deeply appreciated in the seeing? A tree? A twig? A blade of grass? A cloud? No. They saw a face. A rather human-looking face. But it was like no human face that you or I have ever seen. And these first two humans were cognitively and physically mature persons; they weren't infants looking up helplessly. They were ready to meet their other half.

Yes, that's what I'm saying: you see written words like you see a lover. Or, if you've never had a lover, or your lover has long since died, then you see them like you see your mother, your brother, or whoever is most in-your-soul. In short, you recognize written words by the very same visual faculty that allows you to recognize the most important human face in your life.

Now, for that disparaging quote I mentioned.

In the Foreword to Diane McGuinness's book, Why Our Children Can't Read, Steven Pinker says:

[Humans have] an instinctive tendency to speak, [but no human] has an instinctive tendency to write. More than a century ago, Charles Darwin got it right: Language is a human instinct, but written language is not. Language is found in all societies, present and past. All languages are intricately complicated. Although languages change, they do not improve: English is no more complex than the languages of stone age tribes; Modern English is not an advance over Old English. All healthy children master their language without lessons or corrections. When children are thrown together without a usable language, they invent one of their own. Compare all this with writing. Writing systems have been invented a small number of times in history. They originated only in a few complex civilizations, and they started off crude and slowly improved over the millennia. Until recently, most children never learned to read or write; even with today’s universal education, many children struggle and fail. A group of children is no more likely to invent an alphabet than it is to invent the internal combustion engine. Children are wired for sound, but print is an optional accessory that must be painstakingly bolted on.

Bolted on? If that analogy were true, then a human who can read could be likened to a cyborg. But, there’s more. Lots more. This essay is dedicated to demonstrating just how over-simplistically erroneous are the main ideas in the rest of that quote (and in much of the book in which it appears). And, it’s all tied in to what it means to be a social creature.

Try, some time, having your live conversations with house guests without any exchange of sensory data except what you can get from print (no watching your guests' hands or eyes, just seeing the static visual result of their having just finished writing the words they would otherwise simply speak). You will probably quickly begin to feel how utterly inconvenient and one-dimensional such a means of conversation is, since your guests will be right there, available simply to talk to and see, just as a normal hearing-seeing human has the right to expect. There is a complex set of reasons why most children do not learn to read without a minimum of overt instruction. You can find all of those reasons in that 'live'-but-inconvenient manner of conversation.

But you may find those reasons only by pondering and re-pondering the question for days or months on end, because a one-time experiment requires a lot of pondering, sorting out, and extrapolating. A baby, on the other hand, in the total process of finally becoming a basically proficient talker, feels quite free to take in as much data by 'experiment' as he wants at any moment, right from his social environment. The now-popular Rosetta Stone language-learning software replicates some of that environment, partly by making it possible to interact with a virtual language-user in whatever language you're hoping to learn. Another part makes use of the common-sense principle that the more language-relevant data you are immersed in while the language is being produced, the less 'thinking' you have to do to become proficient in it. In other words, you don't have to sit staring at a blank wall trying to remember what you heard and what it seemed to mean. This is how babies learn to understand and speak their language without also having to recall the complex cognitive and muscular details of how they did it---though they were right there the whole time experiencing it all.

Still, some people have difficulty grasping the sense in which reading, and learning to read, can be as natural-in-process as learning to understand and produce spoken language. I'll just give you a hint right here: reading and understanding speech both make use of the brain's own natural abilities, and of extensions of those abilities. In fact, there is no outward human ability that is not the product of extending an inner human ability, from driving a car, to riding a bike, to manufacturing-and-programming a Harrier jet from the raw earth to the keyboard. It's only when the brain's natural abilities are misdirected by what often merely happens to be disharmonious instruction-or-prosthetics that learning something becomes a royal frustration, whether learning to read or learning to tie your shoes. For some rare people, learning to read is the far easier of the two.

As for the idea, in some circles, that reading is unnatural and must therefore be 'bolted on' to the brain's own abilities, let me present the following hypothetical world, in which we can 'write upon the human tablet': 1) infants and adults each have a graphics screen of such form, and attached above their faces in such a way, that each person can see all the data not only on everyone else's screen but on his own screen; 2) each screen is hooked up to the corresponding person's brain so as to allow him to produce static line graphics at will; 3) no one has hands, mouths, or hearing; 4) all adults can already read and write ('write' on their screens, that is).

You will notice that, in this hypothetical world, infants can immediately identify the line graphics produced by another person as being directly associated with that person and that person's behavior. You will, I hope, also notice that this hypothetical world as I've defined it presents various automatic feedback systems related to reading and writing---feedback systems that, in the real world, are comparable to those available to hearing infants in regard to speech.

In such a hypothetical world, infants would learn to read as easily as hearing infants today learn to understand and produce voice signing (speech). If you doubt this, then let me mention two commonly known facts. One, hearing infants in the real world do not learn hand sign language if that signing is not used by the adults around them and in relation to them. Two, despite One, all infants learn hand signing as naturally as voice signing (speech) if the adults use it in relation to them---even though hand signing was *invented* and developed, and even though only *speech* is supposed to be the 'biological imperative'.

The data for all forms and mediums of signing are cognitively equivalent in nature. What differ are the ways in which each is most easily, and thus most commonly, used. As implied in the above hypothetical world, static-graphic, permanent signing in a prosthetic medium is most naturally...indirect and passive, so that to make it otherwise would require (other) prosthetics of a very high-tech form.

When a person is reading silently, you cannot see the relationship going on between the reader and the text. Reading is, in itself, both a non-social language-activity and a hidden language-activity. Static-graphic, prosthetic notation has many purely practical disadvantages compared to the more immediate, direct forms of language. Here's a simple picture to give you a clue what that means:

You can't ask for more juice in your bottle if you are unable not only to hold and manipulate a crayon upon a surface, but also to keep your mom in the room, looking at this action, while you write out your request. Besides, using your vocal cords is far less costly in terms of energy, and in its cost-to-dimensionality and cost-to-speed ratios. And you can hear your mom talking from across the room even if you haven't yet the ability to turn your head to see her, or from another room if there is a wall between you. Writing is a more intellectually abstract activity than speaking with someone directly. This is why most people never become prolific writers, even though it is technically possible for anyone who can write to become one. When you write a letter to grandma in the evening (even if it is by email), you are not interacting with anyone in that activity. Writing is most easily and usually a lone activity. But so is reading.

In the case of either hand or voice signing, the person is of course automatically identified by the infant with that person's signs. But for reading to be social, it has to be *made* social by adding the social dimension to it, either by reading aloud to someone or by talking about what you read; it is not social in itself, but removed by one or more degrees of social separation. Infants do not become language users by way of TV or radio programs that do not respond to their needs.

Reading is a lone and rather abstract activity, and the static-prosthetic medium (such as pen and paper, or paper and ink marks called letters) has no natural appeal as a medium of language. It just sits there. Give a book to a baby who has never seen one before, and he will treat it for what it is: a prosthetic to play with. It has no apparent socio-linguistic dimension. While written language can serve as a record for an isolated person, the core use of language, as such, is social.

A parallel to the relative inconvenience, inefficiency, and apparent arbitrariness of abstract ink marks can begin to be imagined for spoken language if we first appreciate what it would mean to prevent children from learning spoken language despite a speech environment (just like children who have written language all around them and yet who do not seem in the least to be learning to read or write).

Imagine a scenario with the following three negative conditions: 1) the child is never exposed to any voice signing (speech) except indirectly; 2) the child is not allowed any associated medium for the advantage of overlapping redundancies (such redundancies allow cross-checking); 3) all the acoustic information he gets is non-responsive to the child and to everything in the child's environment. An example of this negative scenario would be a radio tuned to a talk-show station that has absolutely nothing but talk. In short, language learning would be impossible were the 'information environment' to consist of nothing but unresponsive signs in one medium. Or, at least, if what I've said is missing some key points, you begin to get the drift of the matter, and that's what counts.

Many adults recall having gotten lost and then trying to recognize familiar landmarks in order to get un-lost. If you are out traveling and want to get home, but feel lost in an anxious way, you may start using extra effort to try to recognize the landmarks around you. You may try to make them be what you want them to be: landmarks that you have, in fact, seen before. But if you either fail to recognize landmarks that you do know (or mix them up in some way), or mistakenly think that some of those around you are the very ones you are hoping to see, then, finding yourself mistaken and still lost, you may remain anxious and continue to make mistakes, or you may become even more anxious and try even harder to make things look familiar. Yet if you manage to get un-lost, all that mistaken effort had no relation to your final success. The effort expended on mistaken attempts at performance does not contribute to a final success at the object. This applies to any activity or skill. Many martial arts novices tense up the whole arm during the entire execution of a punch, either without knowing it or with the feeling that it contributes to the force of impact. Such brute-force effort actually expends far more energy than needs to be spent, and does so at the expense of sensitivity to the data of natural cognitive coordination.

When a person feels pressured to try to recognize, or to learn to recognize, a particular collection of visual information (whether a letter, a sequence of letters, an animal, a landmark, etc.), he tends to try too hard in all sorts of ways that fundamentally cut down on the delicate and complex cognitive coordination of the visual recognition system. This is why imposed 'whole-word' reading instruction has generally poorer results than simply imposing on children the learning of graphemes: for the latter, there is far less visual information to learn to recognize, and the functional units are all very small.

But no one normally sees anything in a strictly brute-conscious way. That would take too much effort, because, like trying to stop your own heart from beating, it is far too narrow a task to be worth what the living organism cannot help but function toward: living life. Even if we did commonly have a strictly brute-conscious knowledge of the vastly complex visio-socio-cognitive task of recognizing the face of a human stranger as human, and not as a pixelated, treetop-like abstraction, we could not put that knowledge to natural use as easily as we already do, because such knowledge would require so much more brain glucose than we already use by accessing it in the minimal-and-subconscious way that we do. Besides, human life is not centrally what too many robotics and computer engineers today like to think it is. In any case, human reading is fundamentally more than simple visual-analysis protocols and input-sound conversion. And the human visual task is fundamentally more energy-efficient, economical, *and* per-agent-variable than anything a 'seeing machine' has ever been made to do.


"Our reading paradigm is based on the knowledge that language and literacy skills are cognitive acts. Our instruction is based on a theory of cognition rather than reading methods or strategies that do not address the global needs of learners."