Though good readers recognize words as orthographic wholes, beginning and struggling readers must learn to recognize words by using the letters within them (and, in some cases, the words surrounding them) to sound them out. For beginning and struggling readers, the process of sounding out words from their letters is confusing (see “Unnatural Confusion”). It is “unnatural” in that the whole process depends on learning to use a technological artifact (see “It’s Unnatural – It’s Technology”) to inform and instruct the brain to create a simulated language experience (see “What Is Reading?”). The central challenge faced by most beginning and struggling readers is learning to work through the confusing relationship between letters and sounds (see “Kinds of Confusion”) fast enough to sustain attentional engagement.
Resolving this confusion takes time. Taking too long to decode a word (to work through its grapheme-phoneme correspondences to recognition) is the most common bottleneck to progress in learning to read. The starts, stops, and hesitations heard in the voices of struggling readers are ‘dropouts’ in word-recognition flow, caused by the processing delays the brain incurs while working through the code’s letter-sound confusion. The greater a reader’s experience of letter-sound confusion in a word, the longer his or her attention must span while working out recognition of that word. The longer the required span of attention, the greater the stress on working memory and the greater the vulnerability to decoding mistakes. Taking too much time to decode unfamiliar words disrupts the synchronization of the brain processes required to maintain attentional engagement and, consequently, fluency and comprehension.
Though we systematically blame and shame kids, parents, and teachers (as well as improficient adults) for their difficulties, the root of those difficulties – the underlying confusion – is in no way their fault. It is the legacy effect of a series of historical accidents in the development of the English writing system itself (see “First Millennium Bug”). Many notables, including Benjamin Franklin, Noah Webster, Melvil Dewey, Theodore Roosevelt, and Mark Twain, recognized that the code’s letter-sound confusion was at the root of reading difficulties. But despite their efforts and those of hundreds of others, centuries of attempts to change the alphabet or reform English spelling – to render their relationship more simply phonetic – have failed. The central issue is inertia: any change to the alphabet or spelling would create a ‘before’ and ‘after’ disconnect in the continuity of written English, and it would be a disturbance, nuisance, and expense to everyone literate in the system as it is (for more, see COTC Thoughts about Orthographic Reform). Because changing the code – changing the alphabet or spelling – has such intolerable consequences, our conceptions of ‘teaching reading’ have been constrained to accepting the confusion as immutable and, consequently, to paradigms of reading instruction organized around training the brains of readers to deal with it (see “Paradigm Inertia”). Phonics and whole-language methods are both attempts to compensate for (work around), rather than directly address, the confusing correspondence between letters and sounds (see Alphaphon analogy).
All previous attempts to reform the code failed because they involved changing the alphabet and/or changing English spelling.
How else might we reduce the confusion between letters and sounds in our orthography? Constrained to the two-dimensional thinking of printing-press-based ‘type’, it was not possible; but with today’s modern font technology we have previously inconceivable options.
Reading is an artificially simulated language experience constructed by our brains according to the instructions and information contained in a c-o-d-e (see “What Is Reading”). Though many factors contribute to learning-to-read difficulties, what most makes learning to read (English and other deep orthographies) difficult for most beginning and struggling readers – what most challenges their brains – is the confusing relationship between the naturally evolved and naturally learned code of speaking and listening and the artificially created and artificially learned c-o-d-e of reading and writing (see “Disambiguation”).
Though it was inconceivable before the advent of modern digital typography, it is possible today to add another dimension to the visual attributes of ‘letters’ and to use variations in that dimension – without affecting spelling conventions or adding letters to the alphabet – to indicate which of a letter’s possible sounds it is actually making in each word in which it appears. In other words, without changing the alphabet or spelling, we can add another layer to modern digital typography that varies the appearance of letters in systematic ways that significantly reduce the kinds of confusion at the root of learning-to-read difficulties.
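To picture how such a layer could work in principle, here is a minimal sketch (not the actual system this article describes; the sound labels, class names, and per-word annotations are all hypothetical and supplied by hand): each letter is wrapped in a styled span keyed to the sound it makes in that particular word, so the rendering can vary the letter’s appearance while the spelling itself stays untouched.

```python
def markup_word(word: str, sound_cues: list[str]) -> str:
    """Wrap each letter of `word` in an HTML span classed by the sound
    that letter makes in this word.

    `sound_cues` has one (hand-supplied, hypothetical) label per letter.
    The spelling is unchanged; only the markup around each letter varies.
    """
    if len(word) != len(sound_cues):
        raise ValueError("need exactly one sound cue per letter")
    return "".join(
        f'<span class="snd-{cue}">{letter}</span>'
        for letter, cue in zip(word, sound_cues)
    )

# The letter 'a' spells different sounds in "cat" and "was", so it gets a
# different class (and hence could get a different visual treatment)
# in each word, even though the spelling is identical.
html_cat = markup_word("cat", ["k", "ae", "t"])
html_was = markup_word("was", ["w", "uh", "z"])
```

A stylesheet would then map each `snd-*` class to some visual variation (weight, shade, a small diacritic-like mark, an OpenType stylistic alternate), which is where real font technology would come in; the sketch only shows the annotation step.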