Canadian Journal of Educational Communication, VOL. 23, NO. 2, PAGES 89-100. ISSN 0710-4340

Writing by Talking to the Computer: Experiences, Ideas and Questions

Colin J. Laine
Michele Breen

Abstract: This paper discusses the observations and implications of using speech-recognition access to computers. The authors used themselves as subjects in this initial exploration using VoiceType2. What they learned about the system was used in the training program they have initiated with dysfunctional adults. Observations from clients are reported. The paper suggests that such access may change the way we will write in the future. It also offers thoughts on ways in which we shall have to modify writing instruction for dysfunctional adults, and describes enhancements to such systems for environments that may be disabling.
Resume: This paper offers observations on the use of speech-recognition systems as a means of access to the computer and examines their possible implications. The authors served as their own subjects in this first trial of the VoiceType2 system, then applied what they learned to a training program designed for dysfunctional adults. Observations from clients are also recorded. The study suggests that voice access to the computer could well change the way we write. It also opens avenues of reflection on new approaches that might be taken to teaching writing to dysfunctional adults. The authors describe improvements that could be made to systems of this kind for environments that present handicaps.
Adaptive vocabulary systems allow the user to access a computer by talking to it directly, thus bypassing the keyboard and 'writing' without the use of the hands. It is different from any method of writing I have ever used. "Writing" is not exactly the word that comes to mind; it feels nothing like what I was taught about the act of writing. This article is being composed by talking to the computer. What you are reading is the result of that interaction. There is no pencil to sharpen, no pen to dip in an inkwell, nor any worry about maintaining a consistent cursive slant. There will be some cheating, as not all the specialised keystrokes have been mastered yet. Have you noticed the difference? Apart from those few keystrokes, "what I say is what you get." It seems a little futuristic at times.
Shows like Star Trek have characters saying "...Computer!..." weekly. It is fiction; but is it all fantasy? Just how far away are "star dates"? Picard and his crew might look back at us here in 1993 and smile at our crude technology. They would also observe that we have defined and set up structures and models little different from those on the "Enterprise". Many technical features we see in films such as Star Trek are already here. By comparison, we see young children's attempts at language as crude; but we know that by the time they are five, most have a vocabulary of more than 5,000 words, with our complex language structures and models already in place. There are significant parallels between the ways we learn language and the computer's learning to recognise speech.

Language & Learning
We converse in language that increasingly contains acronyms that have become nouns: abstractions made concrete by the contexts in which they are used. We have also expanded how we reach, analyze and distribute our knowledge. Using the computer in these processes is changing how we learn, how we work, and how we communicate. Many learners (both children and adults) continue to grapple with the essentials of communication. Others, because of disease (e.g., Multiple Sclerosis) or injury, look for hope in technology; yet many find that same technology and rehabilitation frustrating. They have active minds in distorted bodies, yet the technology they need to use in the workplace often presents barriers rather than access.
The dysfunctional and non-literate adults with whom we work rely on a few abilities that are relatively strong. Most of these strengths are used very little in the traditional learning of scribal skills. These adults are more competent in oral language than in visual language (Laine & Geddis, 1992). Our studies show that these clients are continually frustrated with visually-based learning and make slow progress where the sole or primary form of instruction is visual. Consequently, they have problems maintaining their high initial motivation. The abilities they are expected to use in class focus on memory, on being systematic and on using symbolic content. Yet we found their strengths to be in semantic content, evaluation, and using relations (analogies) to solve problems.
Our clients voiced their frustrations at the complexities of much of the 'accessible' technology. Software that was designed to assist often had features (like very limited time delay) that caused observable frustration in clients with disabilities. There were many questions related to language levels in manuals and errors due to spasticity. A variety of assistive devices were tried out with several exceptional adult colleagues. Most of them were frustrated by the physical rules and limitations demanded by traditional keyboard access to a computer. However, one of the most positive writing-access features for those colleagues with disabilities was word prediction. The discussions concerning this assistive feature focussed on the separation of the mechanics from the semantics of writing. Into this context was raised the potential for adaptive vocabulary systems (voice activation and speech recognition) to access information and technology.
For adaptive vocabluary systems to work well, the user is presented with possible word options as responses to the speech sound the computer receives from the user. The more familiar the computer becomes with the user, the more efficient it becomes in offering the 'most correct' option first. This process paralleled the 'word prediction' feature that was a key facilitator to non-literate adults composing (Laine, 1989;Laine & Geddis, 1992). Speech-recognition technology is a very recent innovation and the literature relating the technology's application is small.

Extant Literature
There is a great deal of technical literature on how computers recognise speech sounds. It helped, but we found most of it unintelligible! Little empirical work has been completed on the interaction between speech-recognition technology and the user, its uses in learning, or its impact on independence and productivity. There is more on voice-activated devices than on speech recognition. Only one article considers disabled persons' independence through voice activation (Brown & Cavalier, 1992). One article discusses voice-activated vocabulary training with very young children (Kantrov, 1990). One describes using speech to simulate patient/doctor discussion in medical practice (Wilson, 1991) and one describes using voice-activated information-retrieval systems in libraries (Peters, 1989). Higgins & Raskind (in press) describe the use of speech recognition as a tool to help the writing skills of postsecondary students with learning disabilities. Much of the extant literature is based on assumption, and is mostly produced by vendors. The magazine articles that imply the value of speech recognition rest on supposition rather than on empirical testing.
The best way to determine the accessibility and characteristics of speech recognition was to use the research team as guinea-pigs. Extensive notes formed the basis for research questions. After four months' trial using VoiceType2, the team raised several ideas, questions and paths for potential investigation. One idea, the training and productivity of Employment Equity participants, is under way.

Technical Aspects
Before assessing writing by voice, it is important to review some technical aspects of speech recognition.

What is an adaptive vocabulary system?
There are two levels of an adaptive vocabulary system. A computer can be started by sounds (voice activation), and through a computer one can send preset commands to activate other equipment. These do not necessarily have to be speech sounds. Speech is a different matter. Recognizing speech requires that sounds be directly associated with a predictable language pattern. The computer then is not just changing switches through a single utterance; it provides the user with feedback in the same language form that one uses to compose. Even a 'grunt' can be associated with developing a word or an idea, providing it is a consistent grunt. A single consistent utterance can also access particular system commands, parts or whole forms, letters, or papers. These commands are called 'macros' and can save a user with a disability significant time and energy.

How does the computer recognize speech or particular voices?
Speech-recognition systems like Dragon Dictate or VoiceType come with a base 'vocabulary' of words and commands. The user repeats each word or command three to ten times into a microphone linked to the memory of the computer. The program digitises the phonetic presentation of a word, then compresses and stores the average of each word's repetitions as a mathematical code. VoiceType can combine these codes in sets of two (bigrams) and three (trigrams) embedded in its memory. These allow the program to recognize often-repeated patterns of digits so that it can begin the prediction process. Whereas voice-activated systems are multi-user, current speech-recognition systems are 'speaker adaptive' or speaker-dependent. This means that the program can recognise only one user at a time. Each user has a personal voice-file (about 2 Mb) that has to be loaded before the computer will recognise the speaker. Each time we switch between users, the current user exits, saving his or her voice-file, and the next user loads his or her own voice-file by name. This does not present a problem as there are only two users here, but as we accommodate more users the procedure may become more cumbersome. This could be problematic in an office or classroom, but within a year or so this should change as programs able to adapt to multiple users become available.
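The training-and-matching process just described can be sketched in rough outline. The Python fragment below is only an illustration of the template-averaging idea: the fixed-length feature vectors, the Euclidean distance, and the function names are our own assumptions, not VoiceType's actual representation or algorithm.

```python
# Illustrative sketch of speaker-adaptive template matching (assumed
# representation, not VoiceType's): each word's training repetitions are
# averaged into one stored code; recognition returns the nearest template.

def train(templates, word, repetitions):
    """Average several digitised repetitions into one template per word."""
    length = len(repetitions[0])
    templates[word] = [sum(rep[i] for rep in repetitions) / len(repetitions)
                       for i in range(length)]

def recognise(templates, utterance):
    """Return the word whose stored template is closest to the utterance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda w: dist(templates[w], utterance))

templates = {}
train(templates, "nice", [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]])
train(templates, "ly",   [[5.0, 5.0], [4.8, 5.2], [5.2, 4.8]])
print(recognise(templates, [1.1, 1.9]))  # → nice
```

In this sketch the templates belong to one voice, which is why such a system is speaker-adaptive: a second user's utterances would match poorly until his or her own templates were trained and loaded.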
How does one begin to use a speech-recognition system?
Initially, one reads in the words supplied in the base vocabulary while using the tutorial. It can take about ninety minutes for VoiceType to digitise and store the phonetic patterns. As the program is used, some patterns are repeated frequently. For example, "nice" (N AY S) and "ly" (L IY) in the base dictionary are joined as a user-defined bigram "nicely" (N AY S L IY). The more often this pattern is spoken, the more 'fixed' the bigram becomes in the computer's prediction program. The more fixed it becomes, the more accurate the response to our speech.
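The way a repeated pattern becomes 'fixed' can be modelled as a simple frequency count: the more often two base units are heard in sequence, the higher the joined form ranks among the predictions. The counting scheme below is an illustrative assumption, not VoiceType's internal model; only the "nice" + "ly" example comes from the text.

```python
# Sketch of bigram reinforcement: repeated sequences accumulate counts,
# and higher counts push that continuation to the top of the predictions.
from collections import Counter

bigrams = Counter()

def hear(first, second):
    """Record that `second` followed `first` in dictation."""
    bigrams[(first, second)] += 1

def predict(first, candidates):
    """Rank candidate continuations of `first`, most-reinforced first."""
    return sorted(candidates, key=lambda c: -bigrams[(first, c)])

# "nicely" (N AY S + L IY) is dictated five times; "nicest" once.
for _ in range(5):
    hear("nice", "ly")
hear("nice", "st")

print(predict("nice", ["st", "ly"]))  # ['ly', 'st']
```

After five repetitions the "ly" continuation outranks "st", which is the sense in which the bigram has become more 'fixed' and the first offered option more often correct.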

How is the program set up?
Any system has certain basic needs. For example, VoiceType will run on a 386 MS-DOS computer but will run more efficiently on a 486 computer with a clock speed of at least 25MHz. It requires a minimum of DOS 5.0 and 8Mb of memory. Each user's voice files will take up 1.8Mb of space. (We found it very useful to back up our voice-files on separate disks.) The computer will also need an Audio Capture and Playback Adaptor card. The system comes with its own software and a microphone that connects to the M-ACPA card. VoiceType can be set up in the base directory or within an application. When set up in the base directory, the computer becomes essentially "hands-free" and the microphone is active from the start. By saying "... voice console ...", all aspects of the computer can be activated vocally. The keyboard also remains active, so one can move between each form of input. The system tested is compatible with several programs (e.g., dBase IV, Displaywrite 5, Lotus 1-2-3 V2.2, Microsoft Word 5.0, WordPerfect 5.1, Multimate V4, Wordstar 2000+ V3.5). Voice commands (macros) are built into VoiceType to perform most of the basic functions offered by these applications.

How does it relate to our population?
We have found that the way in which the program recognises speech matches closely the ability profile we found in the non-literate adult population. The program makes analogies (RELATIONS) in the digitised speech to predict systematically the words being dictated (SYSTEMS). The program's prediction (EVALUATION) becomes more accurate with frequent use. Certain patterns become embedded in MEMORY that assist future prediction. Finally, the oral context (SEMANTICS) refines the accuracy of the letter patterns (SYMBOLS).
In summary, the technical journals and articles show that speech-recognition models resemble both the ways in which young children acquire speech and the cognitive patterns we found in our non-literate clients.

Observations on Speech Recognition
Many adults with disabilities report that their computer use and their writing are more fluent when they do not have to concern themselves with using a keyboard. Those with eye-hand coordination problems have reported being able to work more freely through direct contact with the screen. Presently, speech synthesizers are not effective at relaying what is on the screen synchronously with speech input. Therefore, at this time, blind users cannot benefit from auditory scanning, either word-by-word or by spelling, if they were to use this technology (Stoddart, 1994). As each word is impressed on the screen, it has to be checked for accuracy, because each impressed word modifies the user's voice-files. If a user cannot receive feedback until after the input is impressed, the voice-file will have to be re-examined constantly for errors between what has been said and what has been saved. Similarly, users with significant intellectual impairment are likely to be frustrated by the intellectual demands of the program.

Notes from Colin Laine
Initially, it felt very strange sitting talking to the computer, hoping no colleague would come to the office. Over the next few days, I spent thirty minutes daily reading several different passages of text. The initial accuracy rate was lower than I had expected. It was frustrating trying to remember the basic commands, how to spell words, the lack of speed and fluency. These things I ascribed to the crudeness of the technology. In my frustration, I raised the volume of my voice (at times); I even said some things that maybe I should not have said (they got erased). As the microphone was live all the time, a cough, a sigh, or any guttural noise was translated to words on the screen. This became quite fascinating. I wanted to see what a cough really looked like! (Have you any idea what a sigh looks like?) The sighs, sneezes and coughs in stories I read as a child are all wrong! They do not come out that way.
I felt that I would never learn the International Communications Alphabet, so I typed it out and pasted it to the side of the monitor ('Alpha' through 'Zulu'). I also cheated the program by getting into "spell mode", typing the word in, then saying "choose one". (Every little bit helps.) Within six sessions I found that the computer had become very smart. It had learned so much in a couple of weeks, and its accuracy rate was over 85%. We were now progressing at quite a clip. There were fewer spelling errors and I began to recall the International Communications Alphabet unprompted. When I gave commands (like "exit"), the program reacted as I wanted it to rather than just printing up the word. This was progress, and I did not pause to realise that the computer was guiding my processes to match those in its algorithmic memory. I had forgotten that this effort was interactive.
Once I came to grips with this discovery, our relationship articulated itself. My voice became slower and softer (except on occasions); its accuracy rate rapidly went over 90%. My frustration level decreased; its fluency and speed became more accommodating. I then took the bold step of designing custom macros. I can now start a letter by saying "open letter" or "open work" (for formal business letters) and four lines of address appear properly formatted at the top of the screen. "Date" puts in the day's date two lines below and I'm away. At the end of the letter, I say "close letter" and a polite salutation, four lines of space, then my title is printed. I say "print" and I get a hard copy; "save" and a spoken title files it on the disk.
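These custom macros amount to a table mapping one consistent utterance to a block of preset text or an action. The sketch below is hypothetical: the macro names mirror those in the text, but the expansions (the address lines, the salutation and title) and the dispatch mechanism are placeholders of our own, not the actual letterhead or VoiceType's macro facility.

```python
# Hypothetical voice-macro table: one utterance expands to formatted
# text or triggers an action; anything else is treated as dictation.
import datetime

MACROS = {
    "open work": "A. Writer\nFaculty Office\nThe University\nCity, Province",
    "close letter": "Yours sincerely,\n\n\n\nDirector of Research",
    "date": lambda: datetime.date.today().strftime("%d %B %Y"),
}

def expand(utterance):
    """Return a macro's expansion, or the utterance itself as plain dictation."""
    entry = MACROS.get(utterance.lower().strip())
    if entry is None:
        return utterance            # not a macro: ordinary dictated text
    return entry() if callable(entry) else entry

print(expand("open work"))          # four formatted address lines
```

The value of the table is exactly what the text describes: one vocalisation stands in for a whole set of keystrokes, saving a user with a disability significant time and energy.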
We have to become accustomed to current programs. The codes and idiosyncrasies of speech-recognition seem more complex than keystrokes, but once the main commands are known and the International Communications Alphabet is mastered, the fluency suddenly jumps. Once one experiences that boost, the program is no longer a frustration. For those who find typing difficult, using speech-recognition should be less frustrating. The primary cost is eye-strain. VoiceType had little difficulty discerning my speech even when I had a cold. Like any listener, it took a few words to understand and recognise that something was different. When that adjusted patterning was established, the accuracy rate increased significantly. After getting over the cold it took a few words to readjust.
A frequent challenge is how the computer can distinguish numbers from words, and homophones from one another. The number/word transformation is already built into most of these systems. For example, to write a number like '1993', choosing the number when you say "one" automatically sets the numeral in preference to the word when you say "nine". As to homophones, we have had to correct the computer regularly on this point. As we use VoiceType, the more frequent homophone appears as the first choice, with the lesser-used homophones farther down the list. This does not mean that the program can distinguish among homophones, but it does mean that the more frequently-used spelling will more likely be the first choice.
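This homophone behaviour can be modelled as ranking by accepted use: the program cannot tell the spellings apart acoustically, but it can offer first the one the user accepts most often. The sketch below is an assumption about the general mechanism; the phonetic code and the word list are illustrative.

```python
# Sketch of frequency-based homophone ordering: counts of accepted
# corrections reorder the candidate spellings for one phonetic code.
from collections import defaultdict

accepted = defaultdict(int)

HOMOPHONES = {"DH EH R": ["their", "there", "they're"]}

def choices(phonetic_code):
    """List candidate spellings, most frequently accepted first."""
    return sorted(HOMOPHONES[phonetic_code], key=lambda w: -accepted[w])

def accept(word):
    """The user confirms a spelling, reinforcing it for next time."""
    accepted[word] += 1

for _ in range(3):
    accept("there")
accept("their")

print(choices("DH EH R"))  # ['there', 'their', "they're"]
```

As in the text, the program still cannot distinguish the homophones; it only makes the frequently-used spelling the more likely first choice.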

Notes from Michele
When I began as a Graduate Research Assistant, little did I know that I would be talking to a computer. I did not realize that such advanced computer technology existed except on television. Initially I was both excited and intimidated by the prospect of interacting verbally with a computer. My past limited experiences with computers were not all that positive. Therefore, since I am hardly a computer whiz, I feared that VoiceType would not pick up my speech patterns or understand my Newfoundland accent. Questions that went through my mind were "What if I make a mistake? What if I ruin the program? Will it be as accurate if I do not use the program for long periods of time?" In retrospect, I am relieved to say that my apprehensions were unjustified. The program is user-friendly and did respond well to my speech patterns. I now realize that this program has a different memory for each user, so although my accent is different from Colin's it does not make any difference. I was amazed at how well it responded to my training, and I only had to repeat words three times in sequence. Now and then I have had to retrain a certain command if I found that it did not respond after two utterances. I see myself gradually training VoiceType to "tab", "indent", or "edit". It is intriguing that this program can perform any of the tasks that can be completed with a regular keyboard. However, much patience and time are required to develop the dexterity, since the manual is anything but user-friendly. Colin and I collaborate, and through trial and error we usually figure things out.
VoiceType remains accurate even if I do not use it for three or four weeks. When I returned after Christmas break it was not VoiceType that was rusty. It took me a couple of hours to get reacquainted with it, the same as it would with any computer program, I assume. VoiceType adjusts to changes in speech patterns as they occur, as long as the user is consistent when dictating and corrects mistakes as he/she goes along. When I say "consistent", I mean pronouncing the word I want the same way every time I use the computer. For example, I say "thee" instead of "thuh" when I want the word "the" to appear on the screen. If I continually interchanged these pronunciations, VoiceType would become less accurate and definitely confused.
Speaking of accuracy, VoiceType is about 90% accurate most of the time for me, and its speed is improving with practice. It has been accurate because I have been consistent in dictating and have corrected any unwanted utterances or mistakes as I have gone along. Still, when I am talking to the computer, I have come to feel that it is more than just a program in a computer, although it does not talk back! (You may have already noticed how often I have used the title VoiceType so far. I prefer to acknowledge it by name as opposed to just calling it "the program".) Unlike keyboarding, VoiceType is a more personal interaction. I have found myself scolding the computer when it did not perform my commands instantly! I have raised my voice and said "Wake up!" or "Go to sleep!". I'm sure many people outside the office were puzzled by these outbursts. The fact that the commands are human-like and down-to-earth probably contributes to this personification. I spent about sixteen training sessions with VoiceType, ranging from 45 to 90 minutes in length. I find that after ninety minutes I need a rest for my eyes and my patience. My frustrations have diminished over time. Also, as I have to pause between utterances, my speech is becoming generally slower. This is a definite change, as I am told that I usually talk too fast.
Our presentations on speech recognition have sparked a great deal of interest. Some questions posed were "Will VoiceType improve our oral skills?" and "Will such programs increase our writing skills, or will they replace written work in the future?" As a teacher, I believe that they could be extremely beneficial in classrooms, especially for disabled students. During a writing period, for example, a student who could not physically write could use VoiceType to complete assignments. This might take additional pressure off teachers and parents. If introduced to students at an early age, voice-activated computers could spark children's interest and boost their confidence in future computer usage. This is especially important in our technological and computer-oriented world today. VoiceType has helped me overcome my fear of computers. Now I feel confident to attempt any macro.

Ideas, Questions and Conclusions
Speech recognition has several features that hold much promise for all users, and especially for those with disabilities. First, writing with VoiceType is not merely dictating. It requires as much creativity and involvement in the writing process as traditional hand-operated writing systems, but in a different way. We have found that using VoiceType has led us to change the way we compose and articulate our ideas. The composition is more like a discussion with another party. Right now, I am more engaged in a discussion with the reader than I would be if I were using my keyboard. My eyes and my attention are focussed on the screen and on what I am communicating, rather than on worrying about where my fingers are or on holding a pen. Our research subjects have found this aspect a freedom from having to learn how to use a keyboard. Some have said that using a keyboard continually is a distracting, even invasive, option. They can focus their energies and attention on what is happening on the screen and interact directly with it.
The down side has been that we have found ourselves noticeably more tired after a session with VoiceType, owing to the degree of concentration and focus on the screen. We feel that the writer/user should take time out from the screen at regular intervals: just look away at more distant focal points. As an alternative, we have used speech for a period, then switched to the keyboard and back again; we now have the benefit of both options. Second, like Higgins & Raskind (in press), we have found that we use a greater variety of words and number of propositions in our writing. But then we find that, for most people, there is greater variety and complexity in oral discussion than in written work. We have to be very conscious of what we want to communicate when writing. This aspect should take advantage of the relatively stronger oral-language abilities of our non-literate population. If the hypothesis is true, we would advance the idea that use of speech-recognition systems should greatly enhance users' self-esteem and give them a greater sense of control (cf. Brown & Cavalier, 1992; Kantrov, 1991). We have begun activating the entire computer system (start-up, e-mail, virus control, scheduler) through voice and speech recognition. This investigation has opened ideas for on-line library ordering and database searching, both localised and commercial. This interface has been suggested as well-suited to these processes (Peters, 1989).

Today & Tomorrow
Currently we have four adult clients with disabilities. There is a range of disability: Multiple Sclerosis, Cerebral Palsy, severe physical and vocal disability with intermittent spasticity, and learning disability. Each adult has been using the system for at least three months. At this time, they have achieved 83% to 90% efficiency with the system. The productivity of all the adults has increased. In two cases, the time taken to write a letter or schedule appointments has more than halved. The use of macros has enabled them to design an opening, a closing, even an entire letter with a single vocalised set of keystrokes. In the early stages of training, each user was frustrated by the error rate of VoiceType's recognition. The clients soon learned that by cleaning up the errors they had originally accepted or had missed, the accuracy rate jumped. One client has marginally intelligible speech. He is more invigorated at his job because the computer can understand him better than his colleagues can. Further, he can now work uninterrupted for an average of 29 minutes, compared with stopping to rest for five minutes four times hourly in a more traditional environment.
All our clients had ten to fifteen hours of direct instruction. They also had the tutorial on video and the on-line reference that come with the system. We have also provided assistance, direct and by phone, when there have been difficulties. Over a period of three months, the calls for assistance have diminished from one or more a day in the first week to one a week or less. For anyone wanting to introduce such a system, we have concluded that VoiceType is not a self-instructional system and that a planned training program is essential (Laine & Breen, 1994). The manual uses sophisticated language and suffers from considerable vagueness. Several instructions are convoluted; some are wrong. A new user would have difficulty getting beyond the introduction without substantial computer knowledge. Even with that knowledge, a new user would have no appreciation of the critical importance of keeping voice files cleaned of phonetic or keystroke errors. The tutorials are not sufficient for self-instruction unless you have worked with some speech-recognition system before. Any user will need some direct training. Any approved dealer should provide direct instruction and tutorials to new users, plus assistance and advice.
All the clients have experienced fatigue similar to ours. Eyestrain is a factor for the user who works with VoiceType for too long. An optimum period has been about thirty to forty minutes. After this period, we have advised everyone to take a break, look out the window and refocus. However, our clients have reported they are less physically tired and can work for longer periods of time than they have been accustomed to. Two clients told us they are no longer as tired at the end of their working day as they were before using speech access to their computers.
An intriguing side benefit has been the changes in the ways the clients approach using the computer, the changes in their styles of composition, and the ways in which their monitoring strategies have changed. First, they exhibit greater confidence in their computer use. They are more animated during their work-time on the computer. They are more conscious of what they are saying and how they are saying it. Second, we are seeing changes in their written composition. Their written language is now closer to their oral language than before: an important emergent feature for teachers. There is a closer match between written and oral reports. If this feature becomes a trend, then there are implications for the way we teach writing, for the ways in which writers monitor their composition, and for the ways we evaluate written composition.
We have had a teacher say that talking to the computer is not exactly writing. Using speech recognition to access computers for composition frees the writer from many mechanical demands. This freedom should allow the writer to focus on the content and propositions. We think that should lead to improved written communication by persons with disabilities.
Our logs also reveal changes to the ways we are approaching decision-making and learning new things. Mostly, we have seen an increase in our relating back to other experiences as we try new ideas. The use of relationships instead of sequential, systematic learning may be an important feature in increasing a reflective learning stance. Our clients are less impulsive in their approach to new learning situations, and there has been an increase in peer-mentoring and cooperative working among the participants. The extent to which the introduction of a speech-recognition system has influenced these changes has yet to be documented, but I believe it has had a prominent role.
At the start, we spoke of Star Trek. Only a decade ago, this paper would have appeared to be fantasy to many readers. In the intervening years, the technology has leapt forward. The mathematical and statistical models that form the backbone of these systems have gone from monograms to multi-equation models. These models link algorithms together to predict what the new combined sound means, or will look like. Current systems work on a DOS or Mac base. Platforms like OS/2 and AIX can work with several applications simultaneously. The possibility of multi-user systems that require minimal training is at hand. Tomorrow is not a star-date away; it is not 300 years, or months, or even weeks. Tomorrow is a matter of a few hundred days. We should see accelerated speech recognition systems (simulating continuous speech) independent of speakers (allowing multiple users) as a reality very soon.
Currently we have stand-alone, speaker-adaptive versions. "Tomorrow's" systems, now being created and tested, will offer network versions coupled to multi-user platforms. These systems would allow offices or classrooms of users to access computer applications and write by voice. When the problems associated with word-by-word speech input/output are solved, then anyone in any environment should be able to access information, e-mail, emulators, information retrieval systems, or coded computer-controlled devices (especially in dangerous work environments). Alternatively, those who work in disabling ("hands-busy" or "eyes-busy") environments could work more safely with such adaptations. For example, surgeons could continue operating yet call for physical systems checks and analyses without stopping or looking up from their work.
The potential for speech-based multi-user, multi-language systems being available to ESL students in our colleges could decrease the time and costs of language training. Presently, we have to train the computer to recognize each user's speech pattern. Tomorrow's version will require little or no extended training of voice files. An oral introduction and voice-coded password would provide access to a specific voice file. The complexity of the new models will make predictions based on more intricate algorithms. With these more complex and accurate mathematical models, the computer will predict words contextually. In such a model, the challenge presented by homophones would be nearer to being solved. The computer would understand the context, provide the composer with the most appropriate word and, possibly suggest grammatical alternatives. Then how will teachers grade essays...?
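The contextual word prediction described above is, in essence, a statistical language model: given the preceding words, the recognizer scores each candidate homophone and offers the most probable one. A toy sketch of the idea follows; the bigram counts and function name are invented for illustration, and a real recognizer would estimate such counts from a large text corpus.

```python
# Hypothetical bigram frequency counts; illustrative only.
BIGRAM_COUNTS = {
    ("over", "there"): 12,
    ("over", "their"): 1,
    ("in", "their"): 15,
    ("in", "there"): 2,
}

def choose_homophone(previous_word, candidates):
    """Pick the candidate word most frequently seen after previous_word."""
    return max(candidates, key=lambda w: BIGRAM_COUNTS.get((previous_word, w), 0))

# After hearing "over", the model prefers "there" to "their".
best = choose_homophone("over", ["their", "there"])
```

Even this crude context window resolves many homophone choices; the "more intricate algorithms" the authors anticipate simply widen the context and sharpen the probability estimates.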

Postscript-
First, the prices of these systems are falling rapidly. Second, this article has taken me about 125% of the time I would take with the keyboard, as I'm still learning. The statistics tell me that VoiceType has been 91% accurate. Maybe I'll give it an 'A': after all, it has been very patient.

Abstract: In a study of two-dimensional and three-dimensional illustrations, Abed (1991) found that scanning patterns varied significantly for the two illustration types. The present study uses the two illustration types as components in illustrated texts to look at their effects on retention. Eighth graders read/viewed an illustrated text while their eye movements were recorded, and were administered a multiple-choice retention test immediately after. It was found that scanning behaviors were different for the two treatment groups, with the 3D group spending more time on the illustration, less time on the text, and interrupting reading more often than the 2D illustration group. The results support Abed (1991) as well as Dwyer's (1970) conclusion that realism in illustrations is not necessarily facilitating; in the present investigation it proved to be distracting.
In an experiment on the effects of dimensionality on eye movement, Abed (1991) compared scanning strategies used for three-dimensional and two-dimensional visuals. The three-dimensional visual stimuli consisted of color representational drawings with two components: a flat background picture plus a picture which folded out to present a three-dimensional (3D) model to the learner. The two-dimensional (2D) stimuli were photographs of the 3D stimuli. The results of the study showed significant differences between scanning behaviors associated with the two types of visuals. While the 3D treatment group fixated more on the foreground area than the 2D treatment group, the reverse situation occurred for the background area. Similarly, the left-side preference generally displayed in eye movement studies (and seen in the 2D treatment group) was disrupted by the presence of three-dimensional stimuli, so that subjects viewing 3D visuals spent roughly equal amounts of time viewing both left and right sides of the visuals. The results of this study illustrate that dimensionality influences viewing patterns. The purpose of the present study is to pursue this finding further in the context of learning; specifically, will two-dimensional and three-dimensional illustrated texts result in varying levels of retention as well as varying eye movement behaviors? Peeck (1987) has suggested the use of eye movement recordings to determine how and when viewers use illustrated text. By gathering this data along with retention test data, the investigator is able to determine the interaction between subjects' attention to specific areas and their retention of textual and pictorial information. Some eye movement data is available in connection with textbook illustrations. Fleming (1984) recorded the eye movements of graduate students as they studied illustrated texts. 
A number of variables were considered, including page layout and complexity of material, as well as the learner characteristics of sex, cognitive style and prior knowledge of material. Among other findings, Fleming reported that more transitions (from text to illustration) were made while studying complex rather than simple material, the complex material being from a scientific research journal and the simple material from an 8th grade text. That complex stimuli attract more viewer attention (eye fixations) than simple stimuli has been found by many researchers, including Nunnally (1967, 1968) and Mackworth and Morandi (1967).
In a study by Flagg, Weaver, Fenton, Gelatt, and Pray (1981), third and fifth graders' eye movements were recorded while they read passages with or without illustrations, and they were subsequently tested for recall. Duration of fixations on the text varied little for the three picture conditions (no picture, picture before text, and picture with text). In the picture with text treatment, the children spent an average of 5.6% of their total viewing time fixating or scanning the illustration. One of three approaches was used: examination of the picture before and after reading the text, examination of the picture either before or after reading the text, or interruption of reading to examine the picture. Half of the children used this last approach, and of these children half interrupted reading more than once. A few children did not look at the picture at all. Further, children who attended more to the illustration had significantly better recall of details in the text.
It is apparent, then, that different illustrations or illustration/text units can yield different eye movement patterns. Can they also influence learning in different ways? Much of the research points to the facilitating effects that illustrations can have on comprehension and retention (e.g., Bransford and Johnson, 1972; Duchastel, 1981; Peeck, 1974). But it is not certain that just any type of illustration would be useful in a given situation. As Willows (1980) pointed out, researchers often assume that "a picture is a picture is a picture." In fact, pictures vary in numerous aspects, including color, size, placement, density of information and realism vs. symbolism. These variables can be combined in any number of ways to form an illustration that may enhance textual information or detract from it.
Three-dimensionality is perhaps most closely linked to realism, which has been described by Dwyer (1978) as "the amount of stimuli available to convey information to the student." Three-dimensionality in an illustration (i.e., a foldout model) would add the real-world perspective and depth lacking in an illustration confined to the limitations of a two-dimensional surface. The realism theories put forth by Morris (1946), Dale (1946), Carpenter (1953) and others suggest that learning will be facilitated by increasing realism. Accordingly, various types of visual stimuli will enhance learning to varying degrees, with realistic photographs being the most facilitating, followed by photographs of models, detailed and shaded drawings and simple line drawings. In each case color visuals would be more realistic and facilitating than black and white. By extension of this theory, a three-dimensional illustration would be somewhere on the upper end of the continuum.
After extensive study on the subject of realism, Dwyer (1970) summarized his research comparing various types of visuals designed to teach the anatomy and physiology of the human heart, and concluded that adding realistic cues led to increased learning only under certain conditions. His visuals included simple line drawings, detailed shaded drawings, photographs of heart models, and photographs of heart specimens, all presented in color and in black and white. Of particular interest here are three findings. 1) Highly realistic visuals tended to increase learning when study time was self-paced, but not when presentation times were fixed. Dwyer speculated that the fixed viewing time did not allow students to absorb and interact with all the information provided in such complex visuals. 2) Extra details in realistic visuals sometimes served as distractors rather than enhancers, causing students to miss more important visual or textual cues. 3) Different types of visuals were more or less facilitative for different educational objectives. If one objective of the illustration is to be aesthetically pleasing and thereby maintain interest in the text, this does not necessarily imply that other cognitive objectives will be achieved as well. Parkhurst (1982) also reached this last conclusion after studying the interaction between media presentation method, visual realism, and level of reading comprehension. The visual realism variable included four possibilities: no visuals, line drawings, detailed shaded drawings, and realistic pictures of the object. Test scores increased along with the degree of realism for subjects with high comprehension levels. For those with low comprehension levels, test scores decreased as visual realism increased.
These studies necessarily objectified pictorial realism as a factor made up of certain components. In fact there are numerous components to this principle, some more qualitative than others. Kawabata (1986) lists texture gradients, perspective, shadows, and intensity gradients as some of the various cues which contribute to the perception of realism or three-dimensionality in a two-dimensional plane. Haber (1980) discusses phenomena such as the relative size of objects in the picture, convergence of visual edges toward a distant vanishing point, and lighting as signals to the viewer that the picture is intended to convey three-dimensionality. For actual three-dimensional illustrations the realistic quality stems from several sources. First, the fact that the illustration is an actual paper model which reveals visual information in a three-dimensional mode adds the depth lacking in a conventional two-dimensional format. Second, the addition of any or all other variables such as color, detail, and Kawabata's and Haber's cues would serve to increase the realistic quality of the illustration.
The present study compares the effects of two-dimensional and three-dimensional illustrated texts on eye movement strategies and on retention of information. Because of the tendency of three-dimensional illustrations and other complex visuals to alter viewing patterns (Abed, 1991; Fleming, 1984), it was hypothesized that fixation patterns would differ for the two types of visuals and that the three-dimensional illustration would attract more attention than the two-dimensional illustration. Taking into consideration the mixed results of realism studies and the fact that realistic illustrations sometimes act as distractors, it was further hypothesized that three-dimensional illustrations would fall into this category, and therefore would lead to lower retention test scores than two-dimensional illustrations.

Subjects
Twenty eighth graders from a public middle school were selected for participation in this study. All had average or above average reading skills (indicated by class placement and school records) and normal or corrected vision. Students were randomly assigned either text with three-dimensional (3D) illustrations or text with two-dimensional (2D) illustrations, so that 10 students saw each. The three-dimensional illustrations represented a greater degree of complexity and realism than the two-dimensional images.

Materials
Each subject saw three 11" x 14" stimuli covering the subjects of Cro-Magnon homes, art, and technology. Each stimulus contained a column of text on the extreme left-hand side of the page with the remainder of the page (about 80%) covered by an image depicting some aspect of the subject matter. Several librarians rated the three stimuli as being appropriate for fifth or sixth grades, which ensured that difficulty in text comprehension would not be a confounding factor in the experiment. Each stimulus contained separate information in the text (T) and the picture (P), as well as information contained in both (P+T). Students were screened for prior knowledge of the subject matter but none had any.
To understand the difference between the 2D and 3D visuals, it is important to make a distinction between real and perceived three-dimensionality. Perceived three-dimensionality is potentially available in any 2D illustration, and is characterized by the cues discussed above (see Haber, 1980 for an in-depth discussion of this concept). The objects in the illustration give it height and width, but since it is a flat illustration it is lacking the real depth (physical distance between foreground and background) available in the three-dimensional model. Such cues as shadows, source of light, and relative size of objects give the perception of three-dimensionality. Real three-dimensionality is found in a three-dimensional model and is characterized by height, width and depth. In this case depth is not merely perceived, but achieved by physical distance between foreground and background objects. For example, a two-dimensional illustration might depict a basketball in the foreground and a hoop in the background. The ball is slightly larger than the hoop to provide the viewer with the illusion of depth since both objects are illustrated on a flat surface. In the three-dimensional model the basketball is brought forward by means of a cardboard strip to provide the viewer with realistic rather than perceived depth. In other words, the ball is actually closer to the viewer than the hoop. All other cues are constant in the two illustrations; that is, the size of the ball, shadow, perspective, etc. all remain the same. The sole difference is the distance between the foreground and background.
For the 2D stimuli in the present study the perception of three-dimensionality was obtained by adhering to many of the cues listed earlier: the illustrations were fully detailed, realistic renderings of the subject matter, including such cues as color, shadow, texture, perspective, and relative size of objects. Foreground and background were not actually separated by physical distance, and the viewer relied on the aforementioned cues for the perception of three-dimensionality. For the 3D illustrations the same renderings were used, giving the same perception of three-dimensionality as in the 2D illustrations with the addition of depth. In other words, the foreground and background were physically separated by raising the foreground with a piece of cardboard. As an example, one of the illustrations depicted a group of Cro-Magnon people using spears to hunt a wooly mammoth. The foreground shows the wooly mammoth's head facing the viewer. Two hunters are standing before the animal with their backs to the viewer. The background depicts other hunters and the general terrain. In the 2D version the size and placement of the mammoth and the hunters show their relationship in space. In the 3D version the mammoth and the two hunters in front are raised to provide proximity to the viewer, adding real depth. All other cues are constant in the two illustrations.
An NAC Eye Movement Recorder Mark IV was used to collect the eye movement data. The camera was connected to a VHS video tape recorder and all data were recorded on a half-inch video cassette for later analysis.
The multiple choice test consisted of 45 randomized items (21 T, 17 P, and 7 P+T) covering all three stimuli. The items were tested for reliability (alpha = 0.71) using 15 additional students who saw three-dimensional illustrations. The mean score for this group (25.6) was compared to that of the three-dimensional treatment group (25.8) to check for any effect that the eye movement equipment might have on retention performance, but none was evident. A sampling of test questions is provided in Appendix A.
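The reliability coefficient reported for the 45-item test is Cronbach's alpha, which can be computed from per-item score columns. A minimal sketch follows; the right/wrong data in the example are invented for illustration, not taken from the study.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha from a list of per-item score lists, aligned by respondent."""
    k = len(item_scores)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(var(item) for item in item_scores)
    # total score per respondent: sum each column across items
    totals = [sum(resp) for resp in zip(*item_scores)]
    return k / (k - 1) * (1 - sum_item_var / var(totals))

# Toy right/wrong data: two items, four respondents (hypothetical).
alpha = cronbach_alpha([[1, 0, 1, 0], [1, 0, 1, 1]])
```

With 45 items and 15 pilot respondents, the same calculation yields the reported alpha of 0.71.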
Procedure

Students were tested individually in a small room provided by the school. Each student was seated at a self-determined comfortable reading distance from the easel holding the stimuli. Subjects were not allowed to turn or manipulate the stimuli in any way. For purposes of this investigation the frontal view was of primary interest, and multiple perspectives (i.e., the side or back of the 3D model) did not matter. The experimenter explained the apparatus while fitting it on the subject, and then used a target board to calibrate the equipment. The apparatus did not restrict head movement. Subjects were instructed to read silently at their own pace and to indicate to the experimenter when the stimulus should be changed. Once the reading was completed, the apparatus was removed. The experimenter instructed the students to think about both the text and the illustrations, and then the test was administered. Upon completion of the test, students were asked to refrain from discussing the experiment with others.
The eye movement data was used to compute three scores for each stimulus: 1) amount of time spent fixating the visual, 2) amount of time spent fixating the text, and 3) number of interruptions during reading of the text. The total amount of time spent fixating all three stimuli (text and illustration) was also determined. A counter (minutes, seconds, frames) was used to make the time calculations in seconds (accurate to the nearest half-second). Interruptions were defined as transitions from the text to the visual and back again. Returning to the text was deemed important in scoring reading interruptions since many subjects ended a page by fixating the visual. The retention tests were used to compute a total score, as well as separate scores for picture items, text items and picture + text items.
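The interruption score defined above (a transition from text to visual and back again) can be computed mechanically once each fixation has been coded by region. A sketch, assuming fixations have already been hand-coded as 'T' (text) or 'V' (visual); the coding scheme here is an illustrative assumption, not the study's actual scoring procedure.

```python
def count_interruptions(fixations):
    """Count reading interruptions in a coded fixation sequence.

    fixations: iterable of region labels, 'T' (text) or 'V' (visual).
    An interruption is a text -> visual -> text excursion; per the study's
    definition, the reader must return to the text for it to count, so a
    trailing look at the visual at the end of a page is not an interruption.
    """
    # Collapse consecutive repeats into a run sequence, e.g. "TTVVT" -> "TVT".
    runs = []
    for region in fixations:
        if not runs or runs[-1] != region:
            runs.append(region)
    s = "".join(runs)
    # Each overlapping "TVT" in the run sequence is one complete excursion.
    return sum(1 for i in range(len(s) - 2) if s[i:i + 3] == "TVT")

n = count_interruptions("TTTVVTTVVV")  # one excursion returns to the text
```

Note that the final visual fixations in the example do not count, matching the rule that ending a page on the illustration is not an interruption.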

RESULTS
The analysis of the data served to answer three specific questions. First, did the dimensionality variable have an effect on the retention test scores? Second, were the recorded eye movement behaviors affected by dimensionality? Third, if dimensionality did have an effect on both test scores and eye movement, were the latter two variables related?
The retention test scores were significantly different for the 2D and 3D illustration groups (t = 6.604, df = 18, p < .001), with the 2D group scoring higher. Even though study time was self-paced, as Dwyer (1970) recommended for realistic illustrations, the greater degree of realism in the 3D illustrations did not facilitate learning. Individual scores graphed in Figure 1 clearly indicate that these scores represent two different sets.
Two-tailed t-tests were used to determine whether the dimensionality factor had any effect on the type of question remembered (P, T, P+T). Table 1 gives means and standard deviations. The 2D group scored significantly higher on both T items (t = 5.12, df = 18, p < .0005) and P+T items (t = 4.02, df = 18, p < .001). This result was different from that of Flagg et al. (1981), who found that subjects attending more to the illustration had better text recall. For P items the effect was almost significant (t = 2.01, df = 18, p < .059), with the 2D group again scoring higher.

[Figure 1. Total Scores for Recall Tests]
The data suggest that dimensionality also had an effect on the recorded eye movement behaviors. Two-tailed t-tests were utilized to look at all three stimuli together. The results were significant for all three eye movement measures: illustration time (t = 6.547, df = 58, p < .0001); text time (t = 2.22, df = 58, p < .0304); and number of interruptions while reading (t = 6.438, df = 58, p < .0001).
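The statistics above are equal-variance two-sample t-tests, which can be reproduced from group summary statistics alone. A minimal sketch; the sample means and standard deviations below are hypothetical placeholders, since the study reports only the resulting t-values.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Equal-variance (pooled) two-sample t-statistic and degrees of freedom."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se, df

# Hypothetical summary statistics for two treatment groups of ten subjects
# each (as in the retention-score comparison, where df = 18).
t, df = pooled_t(32.8, 3.1, 10, 25.8, 1.4, 10)
```

The eye movement comparisons pool all three stimuli, giving thirty observations per group and hence df = 58 in those tests.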
The total amount of reading/viewing time for the combined stimuli varied little among subjects, ranging from 4 min. 13.5 sec. to 5 min. 13.5 sec. The mean reading/viewing times were 4 min. 43.7 sec. for the 2D group and 4 min. 45.4 sec. for the 3D group. Similarly, the mean reading/viewing times per stimulus varied little between groups. The reading/viewing patterns, however, differed considerably for the two groups in terms of time spent on text and illustration and the number of times reading was interrupted to view the illustration. For instance, on stimulus #1 the 2D group averaged 9.75 sec. of illustration time and 73.35 sec. of text time. The 3D group averaged 15.9 sec. of illustration time and 66.6 sec. of text time. This trend was similar for all stimuli, with the 3D group spending more time on the visual and slightly less time on the text than the 2D group. For stimuli #2 and #3 the difference in visual time was even more pronounced, with the 3D group spending more than twice as much time on the visual as the 2D group. Additionally, the mean number of interruptions (for all stimuli combined) for the 2D group was 0.4, while for the 3D group it was 1.57. This finding is similar to Fleming's (1984) results showing that complex material led to more frequent interruptions in reading, although his stimulus complexity occurred primarily in the text rather than the illustration.

Having seen that dimensionality had an effect on both retention scores and eye movement behaviors, it was left to determine whether there was a relationship between test scores and any of the following: time spent on the illustration, time spent on the text, and/or number of interruptions. All correlations were low, the highest being an r value of -.442 for the relation between text time and test score. Thus it can be concluded that the eye movement behaviors studied had little association with the retention test scores.

DISCUSSION
The data above indicate once again that dimensionality has an effect on eye movement behaviors, as Abed (1991) showed. Fixation patterns did indeed vary for the 2D and 3D illustrations, with the 3D illustration attracting more attention. Retention was also influenced by dimensionality. However, retention scores and eye movement behaviors were not found to be related to each other.
Subjects in the 3D group spent more time on the visual, less on the text, and interrupted their reading almost 4 times as often as those in the 2D group, and their retention scores were significantly lower. Like the children in the Flagg et al. (1981) study, about half the subjects in the 2D group interrupted their reading. In contrast, all the 3D subjects did this. While interrupting reading is a useful strategy for gaining visual information, the overall pattern displayed by the 3D group appears to be one of distracted behavior, which was reflected in the lower test scores for this group. This concurs with Dwyer's (1970) finding that extra details in realistic visuals sometimes served as distractors rather than enhancers, causing students to miss more important visual or textual cues.
It is interesting to note, as did Flagg et al. (1981), the various strategies used to study the stimuli. Out of 60 instances (20 students x 3 stimuli) 52 ended with the visual. For the 2D group, 28 of 30 instances began with the text; 18 of these involved students moving from the text to the visual without interruptions. This "business first" approach points to our educational emphasis on information being derived from the text, with illustrations for enjoyment rather than learning. In contrast, the 3D group was so distracted by the illustration that 21 out of 30 instances began with the visual and there were no cases of text-to-visual without interruptions.
Had this high amount of visual fixation time led to increased recall of information from the illustrations, the more realistic three-dimensional illustrations might have merited more support. However, the 2D group scored significantly higher on T and P+T questions, and the difference between the two group scores for P questions was just shy of significance (p < .059), with the 2D group again scoring higher. Given the amount of time spent studying the illustrations, it is surprising that the 3D group did not remember more information. Perhaps this tendency to fixate on the illustration reflected simple visual appeal, interest, or novelty (as in Nunnally, 1967, 1968; Mackworth and Morandi, 1967). These results are contrary to Dwyer's (1970) finding that highly realistic visuals tended to increase learning when study time was self-paced. In the present study, three-dimensionality as an example of realism only served to distract learners, leading to lower retention.
It should be noted that even with the significant difference between the retention test scores for the two treatment groups, those for the 2D group were just passing (mean of 72.9% correct), while the 3D group had a mean score of 57.3%. Several factors might have contributed to these results. Perhaps some learner variable was contributing to the retention scores. The subjects did not know they were taking a test after reading, and their scores might have improved had they known. Also, design variables might be manipulated in future research studies to determine their roles. For instance, text placement, illustration placement, and the relative sizes of text and illustration might all be varied. Another possibility might be to integrate the text more fully into the illustration, rather than having a clear demarcation between text and illustration. Short sequences of text could be used in various sections of the layout rather than one long column of text. Manipulating some of these factors might yield more favorable results, though it is possible that the kind of realism displayed in three-dimensional illustrations would simply prove to be distracting and inappropriate for this task. Dwyer (1970) cautioned that an illustration which attracts the reader does not necessarily serve a cognitive function as well, and the results of this particular investigation indicate that three-dimensional illustrations seem to serve the former goal better than the latter.

Stimulus # 3 Cro-Magnon Technology
To cut meat from the animal, the Cro-Magnons used tools made of a) many small stone blades. b) slotted stones. c) several sharp antlers. d) mammoth bones.
During the hunt, Cro-Magnons used flaming torches to a) separate one animal from the herd. b) force animals into close groups.

Abstract: The level of instructional development knowledge and its use in the instructional development process on the part of nurse educators was the subject of an ethnographic study in the province of Newfoundland. The study, completed in the summer of 1993, used a case study approach, interviewing five nurse educators, one selected randomly from each of the five provincial Schools of Nursing. Unstructured interviews of three to five hours with each subject permitted subjects to set the interview agenda and to respond in their own terminology. Based on the selected sample, results of the study indicate that nurse educators in Newfoundland have a functional knowledge of instructional development, despite never having formally studied in the area, and that they plan instruction using a systematic approach. Their knowledge-base of instructional development, and their use of instructional development in the instructional planning process, is, in fact, more consistent than that of teachers in the formal education system.

INTRODUCTION
A common concern among nurse educators is how to facilitate the students' application of knowledge to the clinical setting, and the utilization by nursing students of conceptual and factual knowledge in problem-solving and critical thinking processes. Until recently, nursing education was referred to as training, and was grounded in the apprenticeship system. The Concise Oxford Dictionary defines training as the bringing to a "desired state or standard of efficiency by instruction and practice" (p. 1354), and apprenticeship as "learning a craft, bound to service, and entitled to instruction from [an] employer for a specified term" (p. 55). Keddy and Lukan (1985) note that until the past half-century nursing education was not even well-grounded in apprenticeship, since novices learned from their peers, and not from skilled masters.
According to Griffin and Griffin (1969), the first real nursing school was established in 1860 in England by Florence Nightingale. She stressed the necessity of women being educated, and she had the insight to know that specialized training was needed to care for the sick. Her aim was to establish a career for women similar to that of medicine for men. "Nightingale contributed to the foundation of a knowledge-base unique to nursing, and stressed the importance of education for the nurse" (Cull-Wilby & Pepin, 1987, p. 516).
The first training school for nurses in Canada was established at St. Catharines, Ontario in 1874. By 1900, 20 hospital training schools of nursing had been established in Canada. This number escalated to 70 by 1909 (Duncanson, 1970). While ostensibly modelled on the Nightingale School in England, the underlying belief system of the Canadian schools differed from that of the Nightingale School. In the Nightingale School the training of nurses as professional women was paramount. In the Canadian schools, service to the hospitals to which they were attached took precedence over educational concerns. Mussallem (1965) quotes Adelaide Nutting, the first professor of nursing at Columbia University: "Heavy demands of the wards made it impossible for all students to attend their weekly lectures and it was always arranged that some students would choose to take very full notes and read them later to the assembled group of the less fortunate. Lectures came under the category of privileges like hours off duty to be granted, hospital duties permitting" (p. 6).
As a result of the exploitation of nursing students, leaders in nursing lobbied for the establishment of improved educational standards. The first initiative toward this goal was the publication of the Standard Curriculum for Schools of Nursing in the United States in 1917. This curriculum guide also became widely used in Canada (Mussallem, 1965).
In addition to the attempt to attain uniform educational standards, another thrust in trying to have nursing recognized as a profession has been the relocation of nursing education from hospital schools to the university setting. Mussallem (1965) notes "The purpose of a university school of nursing is to provide for the professional preparation of nurses through correlated programmes of liberal and professional education" (p. 80).
Early attempts to move nursing education to the university failed, but in 1919 the University of British Columbia established a nursing school. The program required two years of study at the university, followed by two years of nursing practice in a hospital, and was completed with a final year of study at the university. Students were taught primarily by medical doctors in the university setting, and were supervised by floor nurses in their clinical experiences. Because the university had no authority over students or hospitals during the hospital experience, the non-integrated arrangement promoted discontinuity and confusion among the students. This separation of knowledge and clinical practice remained a problem for university-based programs until 1942, when the University of Toronto offered the first integrated program.
With the establishment of an integrated nursing program, the issue of qualified instructors arose for the first time in nursing education. Universities could not hire faculty members lacking academic qualifications, and no graduate programs in nursing existed in Canada until 1959, when the University of Western Ontario offered the first one. Furthermore, with the establishment of schools of nursing within the university system it was assumed that academic knowledge, in this case knowledge of nursing theory and science, qualified one to teach.
Today nursing education in Canada continues to be located in two distinct settings: diploma schools, associated with hospitals or community colleges, and baccalaureate schools, located in universities. However, the Canadian Nurses Association (CNA) adopted the policy, in 1982, that all nurses entering the profession as of the year 2000 must hold a baccalaureate degree. All provincial associations supported this change in policy, and the Association of Registered Nurses of Newfoundland (ARNN) has developed new curriculum guidelines in keeping with the policy.
The ARNN by-laws (1991) have identified the minimum clinical and theoretical components of a nursing program. All provincial nursing programs, in hospital schools and in the university, must be approved by the ARNN. Presently all five schools are working collaboratively with the ARNN to develop a common curriculum for future nursing education, to meet the requirements of the year 2000 policy. Primarily this new curriculum will lead to an undergraduate degree in nursing. In 1992 the Liaison Committee on Future Nursing presented a Strategic Plan for Future Nursing Education for Newfoundland: "The goal of the Strategic Plan for Future Nursing Education is to develop a curriculum model to ensure that all future nurses entering the profession are prepared at the Baccalaureate (BN) level. As a first step toward that end, a collaborative curriculum model will be developed to make BN education accessible to all beginning nursing students." (p. 2)

Systems Approach: Instructional Development and Nursing
Ludwig von Bertalanffy, the author of general systems theory (Saettler, 1990), describes the theory as "a science of wholeness or holistic entities" (p. 353). According to Logan (1982), a system is a set of parts that relate to each other, individually or collectively, and operate in an environment for a particular purpose (p. 3).
Systems theory postulates that there are universal principles that may be applied whenever one defines a system in any discipline. A system is a set of components that mutually interact to accomplish a set of goals. The system acts as a whole but is studied in parts to provide information about its components and their relationships. (Moughton, 1982, p. 471)

Applications of the systems approach were first noted in the 1960s in the design of electronic, mechanical, military, and space systems (Romiszowski, 1981). Systems design has also been applied to instruction, and Salisbury (1989, p. 42) notes that instructional systems design is the use of systems models specifically in the production of effective and replicable instructional programs. The characteristics of instructional systems design include an integrated plan designed to solve a problem; analysis of all components in a sequential but flexible order; research-based design procedures; empirical testing followed by necessary revisions; and evaluation of the design model (Gustafson & Tillman, 1991). Brown and Kennedy (1988) refer to conceptual instructional development as the logical application of the notion of the systems approach. They note that instructional development has been functional, rather than conceptual, in its application.
While the conceptual instructional developer is involved in the usual identification of problems and the seeking of solutions through the implementation of a chosen instructional development model, (s)he is also concerned with the ongoing functioning of the system. The conceptual instructional developer is concerned with maintaining the climate for change; energies are focused on continuous monitoring of the system so that potential problems can be anticipated. The conceptual instructional developer is concerned with maintenance of the relationships established during the instructional development activity. (p. 5)

The foundation of nursing practice is known as the nursing process. It is a scientific problem-solving approach that nurses use when planning client care and making decisions in the clinical area. It is essential that nurses be able to define problems accurately, to make the best choice among possible alternatives, to safely implement a plan of care, and to evaluate the effectiveness of the intervention. The nursing process is a systematic framework composed of five phases: assessment, diagnosis, planning, implementation, and evaluation. The theoretical basis for the nursing process can be found in general systems theory, the problem-solving process, decision-making theory, the diagnostic reasoning process, and information-processing theory (Craven & Hirnle, 1992).
Nursing education must provide the opportunity for students to become proficient in the nursing process. While nurse educators have clinical competence and knowledge of nursing theory and nursing science, Kemp and Rodriguez (1992) speculate that nurse educators may not have the necessary skills to provide instruction that is consistent, systematic, and effective, since they are usually hired because of strong clinical and academic backgrounds in nursing. Reilly and Oermann (cited in Oermann & Jamison, 1989) state: "Knowledge of the subject matter and clinical competence are critical, but knowing how to teach is as important. A teacher with knowledge and expertise in clinical practice is not a teacher if unable to communicate that knowledge to students and facilitate their learning." (p. 65)

There are many benefits to using an instructional development approach in nursing education. Once applied, the principles of instructional development ensure congruence of objectives, instruction, and evaluation. Instructional development increases the effectiveness, efficiency, and relevance of instruction. More of the objectives are attained; they are attained more quickly; and students are learning what they need to know. The careful sequencing of objectives assures that students have prerequisite knowledge and skills (Gustafson & Tillman, 1991). These benefits are fundamental to the process of analysing educational problems and devising solutions to those problems. The process of instructional development can enhance the ability of nursing education to meet current challenges and the future needs of students.

The Methodology of the Study
A qualitative case study approach was used in the study, specifically that of ethnography. Merriam (1988) defines ethnography as "... a set of methods used to collect data, and it is the written record that is the product of using ethnographic techniques ... to collect data about the social order, setting, or situation being investigated" (p. 23).

This study employed interviewing, specifically structured and semi-structured interviewing, in the collection of data. Structured interviews were used initially to collect demographic data, while longer, in-depth interviews used the semi-structured format. Merriam (1988) notes: "Interviewing is necessary when we cannot observe behavior, feelings, or how people interpret the world around them ... Interviewing for case study research, especially qualitative case studies, may use [the] highly structured format to gather common sociodemographic data from respondents. For the most part, however, interviewing is more open-ended and less structured. In the semi-structured interview, certain information is desired from all respondents. These interviews are guided by a list of questions or issues to be explored, but neither the exact wording nor the order of the questions is determined ahead of time. This format allows the researcher to respond to the situation at hand, to the emerging world view of the respondent, and to new ideas on the topic." (pp. 73-74)

This study is one of a series of studies on instructional development knowledge, competency, and use among Newfoundland educators. To date these studies have focused on educators in the provincial school system. The interview guides developed for this study were adapted from the former studies of Graham (1991), Thomey (1991), Tobin (1989), and Gallant (1989).
The sample group consisted of five respondents. There are five schools of nursing in the province, with approximately seventy-five full-time faculty members. With the cooperation of the Director of each school, one faculty member was randomly selected to take part in the study. The curriculum committee in each school performs a major role in the planning and development of courses. Each committee is responsible for ensuring that all content is congruent with the school's philosophy and conceptual model. The committees also ensure that pertinent subject matter is presented in the curriculum. Because all faculty members in each school must submit to the same process for course approval, it was felt by the researcher that one faculty member from each school would be a sufficient sample for this study.
With the exception of the structured interview for demographic data collection, the interviews varied in length. In the case of the semi-structured interviews, the respondents were given the choice of having one extended interview or several short interviews. All chose the one extended interview. These interviews averaged approximately three hours per respondent. All interviews were audiotaped with the respondents' permission. The interviewer asked only broad questions to facilitate the data collection. The questions were open-ended, permitting respondents to reply in their own ways, and the interview was directed primarily by them -that is, they chose the order in which to discuss issues. There was no set order imposed by the interviewer.

Demographics of Respondents
The respondents have been identified as NE 1 through NE 5, to maintain confidentiality. Likewise the specific demographic data of each respondent is not disclosed. Demographics, in terms of the group, indicate that all respondents are female, with experience as nurse educators ranging from five to twelve years. Four respondents had served on their schools' curriculum committees. Their educational backgrounds showed some variety. Four had B.N. degrees, and one had both B.A. and B.Sc. degrees. Four of the respondents had completed some graduate work, with two holding Nursing degrees at the Master's level and one currently completing the thesis for that degree, and one holding a graduate diploma in Education. Three respondents had completed Education courses in the past, with only one having completed a course in Instructional Development. All respondents had completed university courses in the last five years (See Table 1).

Instructional Development Knowledge
Respondents were asked to describe their knowledge and understanding of instructional development theory and of specific instructional models. While fairly lengthy descriptive answers were provided, these contained no evidence of theoretical knowledge relating to instructional development, and no respondents could refer to specific models. All attempted to define instructional development in their own terms. While none of the respondents were knowledgeable about the underlying theories and principles of instructional development, nor could they discuss specific instructional development models, it was interesting to note that they all felt knowledgeable about instructional development. They felt that their professional readings and their experience working with more knowledgeable educators gave them knowledge of instructional development. As can be seen from their comments, however, they erroneously equated instructional development with classroom delivery of instruction and teaching strategies.
Perceptions re Objectives. Respondents were asked to discuss their knowledge and use of instructional or behavioral objectives. All five respondents were very comfortable with their knowledge of objectives and they used them extensively in the instructional planning process. This is not surprising, since the nursing education curriculum has been organized around and based on behavioral objectives for the past two decades, with all sets of objectives having to meet the approval of the schools' curriculum committees.
NE 1 indicated that objectives are developed for all aspects of every course, and that students receive a copy of the objectives. She was unaware of any theorists in the behavioral objectives movement, but indicated that it was "all based on the Tylerian model." She noted: "We teach in a behaviorist system, so we teach by objectives ... Yes, I use them because I do say specifically what it is that I am intending to teach and what I want the student to get out of it."

NE 2 noted that most of the objectives used originally came from a textbook. While she used objectives to guide her teaching, she felt that they were "too restrictive; they box you in too much." She was unable to recall any literature on objectives, but she remarked that there was "a certain way to write objectives." She also noted that they are used extensively for evaluation, particularly in the clinical setting.
"Objectives were originally developed by the instructors, but they weren't made up by them. They came from books. Every year we fix up our objectives to go with the book we are using."

NE 3 focuses her instructional planning on objectives. She noted: "I'm very objective-oriented. In my Master's program I did a course on teaching. That's where I learned all about writing objectives." She gets her objectives from various sources, most frequently from the textbooks and the instructional guidebooks that accompany them. She also reads numerous books on her topics and culls them for objectives, which she may modify to suit her courses. She explained that objectives are used in the evaluation of students, both in the academic and the clinical areas.
"I try to make the objectives very outcome focused rather than process focused. I do use my objectives to set my exams, so they have to be measurable. How do you test 'become sensitive to'? So I tend not to do that. They can describe, they can list, they can analyse."

NE 4 mentioned Bloom's taxonomy and recounted where she had originally learned about objectives: "... my initial use of objectives as an instructor was very much guided by Bloom's taxonomy. I learned about objectives by trial and error after I went to work at the school [of nursing] as an instructor. When I think of objectives I automatically think of behavioral objectives although I realize that's not the only kind of objectives there are. I think of behavioral objectives whereby an instructor can outline subject matter that has to be taught or behaviors that have to be seen in order to realize that the student is actually meeting the goals of the program."

NE 5 defined objectives as "guidelines that are written in a certain way that shows exactly what it is that you want to accomplish, how you are going to accomplish it, and in what length of time you are going to accomplish it." She talked at length about the various types of objectives, including short-term and long-term objectives. She felt that objectives were necessary for lecture preparation and for the evaluation process. While unable to provide literature sources on the objectives movement, she noted that she was guided by Bevis (1993) and her comments on objectives.

In all, the nurse educators had extensive practical knowledge about objectives and used them regularly as a guide in the development of their courses and lectures, and as a guide in the evaluation of students. They frequently adopted or adapted existing lists of objectives, and occasionally wrote their own.
While they seemed to see the benefit of using clearly defined objectives, they did not worry about the actual content or levels of the objectives, assuming that this was the responsibility of the curriculum committees who ultimately approved them.
Perceptions re Learner Analysis. The five nurse educators proved to have very little knowledge of the component of instructional development referred to as learner analysis. Other than through a recent awareness in the nursing education literature of learning styles (which typically refers to learner preferences rather than cognitive learning styles), they gave little consideration to the characteristics of their students. Only two of the five respondents seemed to be cognizant of the need for remediation or additional attention in cases where students' prerequisite knowledge was less than anticipated.

NE 1 described the change in her students over the years, noting that there is now a much greater cross-section of students entering nursing, including mature students, male students, single mothers, and students with various levels of post-secondary education. She equated learner analysis with learning styles: "Well, they all have their own learning styles ... Some need the visual aspect, some need to go off by themselves, some need more direction than others. As a classroom teacher you have to be aware of all these things."

NE 2 had very little to say about learner analysis, and had no knowledge of the importance of learner characteristics. She did note a change in her students over the past several years: "We have more mature students now; they're not straight out of high school. Some are single parents. They are under more pressure at home. They can't give 100% because of the home pressure. The quality of their written work is not as good."

NE 3 does attempt to assess her students and uses the information obtained when planning her teaching.
"They come from a wide variety of backgrounds. Most of them [now] have been out of school for more than 10 years. I have to consider their prerequisite knowledge, and I find that I have to do a lot of review of certain topics in class, more so than I had expected."

NE 4 also discussed the change in students over the years, which has resulted in a much larger number of mature students. But she believes that the change to admit mature students has resulted in higher academic entry levels than previously. She equated learner analysis with awareness of learning styles: "In terms of ... their desire for instruction I would say that students want everything lectured and they want to do as little as possible through independent study. They only want the bare bones. In the last year or so I've seen a difference in the students. There has been much more interest ... in group work, poster presentations, and group projects that have been assigned."

NE 5 described her students similarly to other respondents, as having a variety of backgrounds and educational experience. She deemed it important that the characteristics of students be considered when preparing instruction, but again the focus was on what she referred to as "learning styles." She noted: "We have to take into consideration all those backgrounds of the students because that has an effect on how the person learns. All those people with different backgrounds have different ways of learning."

Perceptions re Evaluation. When the nurse educators were asked to provide information about their understanding of evaluation, all were aware of the broader context of program and course evaluation, but for the most part the focus of their responses was student evaluation. In the case of examinations set for students, all of these must be approved by the schools' curriculum committees.
There are guidelines set for the percentage of examination questions allotted to each phase of the nursing process and the percentage allotted to the different dimensions of learning. Course assignments tend to evaluate the application of knowledge and critical thinking.
NE 1 noted that formal evaluation within nursing education is accomplished through examinations and assignments. For examinations, she develops the examination questions as the lectures are being prepared.
"Hopefully you will test what is relevant so when you are writing your lecture is when you are most focused about that topic. [In the clinical area] students are evaluated based on objectives. They have certain behavioral objectives they must meet in order to be successful." NE 2 described evaluation solely from the perspective of student evaluation through examinations. "All our evaluation is purely academics...You write the exam and you get this mark." She prefers to develop examination questions after a lecture has been given.
"If you have to do your exam questions first, then you have to make sure you cover that material in class. I prefer to lecture first and then do my exam questions. If I do them before I lecture, I find I emphasize that material in class. If I do the lecture first, everything is given the same emphasis. All of the exam questions will come out of the objectives."

NE 3 described evaluation as testing students' knowledge in the classroom setting, while evaluation in the clinical setting is more complex, including testing of psychomotor skills and thinking skills in addition to knowledge, and self-evaluation on the part of students. She noted that her examinations are criterion-referenced: "When I do my tests, they're all objectives-based. When they are studying, if they learn the material to answer what that particular learning objective was, then they probably know the answer to the exam question."

NE 4 described evaluation as "an internal and external overall program evaluation and then specific evaluation of your students in the formative and summative sense." She noted that the quizzes and examinations used are criterion-referenced: "For each question we put on a quiz or exam, we have to note the objective that question is testing. I would say our tests are norm-referenced.
[Researcher's note: NE 4 obviously did not understand the terms norm-referenced and criterion-referenced.] What I have said in the past is that I wish I didn't have to have the questions done before I had the topic taught. But reflecting on it now ... if we are being totally objective-driven, we probably should be able to have the items developed beforehand."

NE 5 defined evaluation as a measurement of how students are meeting objectives. She does not develop examinations by herself; several instructors have input into students' examinations.
"[Student] evaluation is based on objectives ... We use Bloom's taxonomy for blueprints for exams and we follow the nursing process. All items are multiple choice, and we get guidance from the coordinator as to how many questions are needed on a lecture topic."

Perceptions re Course Revision. The nurse educators were asked to discuss the notion of revision, in terms of the instructional development process. Information was sought about the when, why, and how of course revision. All respondents cited two main reasons for course revision: data from student evaluations and the need to update existing courses once they have been offered a number of times.
NE 1 noted that course revision included revising the objectives, the student evaluation procedures, the teaching strategies and the resources, as well as the content. She noted that all substantive revisions had to be approved by the curriculum committee.
"The course would need revision if the way it is presently set up didn't seem to be working, either for me or for the students ... Students write a formal evaluation of each course. Certainly we try to incorporate that."

NE 2 cited keeping content current and up-to-date as one of the most important reasons for course revision. She felt that revisions should be contemplated each time a course is to be offered. Student feedback would also dictate the need for course revision.
"The students get to do a course evaluation after every course. They are reviewed and we make changes to the course within reason. The faculty also evaluate the course and revise based on their own evaluations." NE 3 utilized student evaluations in revising her own courses, which she did every year. Sometimes the revisions are minor, such as adding current literature. For major content revisions she would consider that necessary every four to five years.
"... every four to five years you need to go back and see if the whole thing is still relevant. Has it shifted from the original plan? Which is what I'm sure happens if everybody changes it a little bit every year. It looks almost the same as last year, but somehow it doesn't look anything at all like it did five years ago. Content and objectives, teaching strategies, learning resources, and evaluative methods can all be revised."

NE 4 used student evaluation information when considering course revisions. Again, all components of a course are evaluated by students.

Instructional Planning Process
The five nurse educators were asked to delineate, in considerable detail, their personal approach to the planning of instruction. It was felt by the researcher that this information would disclose any activity that could be equated with an instructional development approach, in the event that respondents tacitly knew more on the subject than they could elucidate. NE 1 believed that planning was essential to the development of good instruction, and that considerable planning was required before the course was implemented.
"You have to look at what you want as the end product and you have to see where the student is coming from, and I think you have to look at both ends before you can do all that stuff in the middle: looking at the prerequisite courses, looking at what courses this one is a prerequisite for." Her planning process begins with a review of the literature and texts on her topic. She considers the philosophy, conceptual framework, and curricular threads of the school, and determines how these can be integrated in the course she is to teach. She then reviews, revises, or sets objectives, taking into account the prerequisite knowledge of the students. She then develops course materials and specific lecture notes. She develops tests during course delivery, as each unit or topic is finished.
NE 2 notes that the curriculum committee of her school determines the basic format of the course. The nurse educator then develops the lectures to meet the objectives. The faculty members are permitted input regarding the topics and objectives of each course. "We talk a lot about topics. We have so much input, but we don't make the final decision about what we teach and what goes into the curriculum." Her planning process involves reading the course textbook and other textbooks, doing a literature search on the topic, and collecting community resources. She develops lecture outlines and materials, and feels that being current in the topic means that she has planned well.

NE 3 had just taught a specific course for the first time, so she used that experience as an example in describing her approach to instructional planning.
"I was given some objectives and some course topics to cover. I looked at them and then I decided what I thought the students should cover, and I probably started with the topics first ... Next I wrote the learning objectives ... then I sat down with a calendar and mapped out the sequencing and the amount of time I would need for each concept." She planned the evaluation component of the course by going back to the objectives. Using these as a basis she had to make decisions about how many tests and assignments to set, and when they should be scheduled. She planned due dates by collaborating with other instructors, to ensure that students would not have too many assignments or examinations at the same time. The overall course planning was done prior to the beginning of the term. Actual class lectures and materials were developed throughout the term.

NE 4 begins planning before the term commences. She consults with the instructor who had previously taught the course, completes a literature review, and then decides on an approach to present the topic. She notes that her planning, at least at this stage, is content-driven. The overall course plan is done in collaboration with others who are responsible for teaching the students.
NE 5 begins by looking at or setting terminal objectives for the course. This is done through a review of the literature on the topic. She describes her process as follows: "I look at the textbook recommended to students, then at other textbooks. I go to the journals and look at the latest information and research. I would look at the time and how I can present it to get my point across." She did not elaborate on her decisions regarding the evaluation of students, but noted that evaluation is tied to the objectives and the content.

SUMMARY
The results of this study were compared with common elements of instructional development models as described by Knirk and Gustafson (1986).
Needs Assessment. Basic theoretical and clinical requirements in nursing education are provided by the ARNN. The schools of nursing evaluate societal trends and health/illness statistics in determining specific content areas to be included in nursing education.
Assessment of Learner Characteristics / Entry Skills. All students entering schools of nursing take the Scholastic Aptitude Test. Students entering the university baccalaureate program must complete a General Studies year before being accepted into that program. Therefore some basic academic prerequisite knowledge and skills are assessed. However, nursing education is a post-secondary program with a discrete theoretical and practical base with which students would have no prior experience. It should also be noted that in present programs there is little individualization to meet the diverse needs of students.
Specification of Objectives / Performance Tests. The nurse educators all use behaviorally-stated objectives when planning the course, when planning individual classes, when evaluating students, when selecting teaching methodologies, and when revising courses. A primary focus for these nurse educators was to ensure that the selected content is delivered to students within the time limitations provided. However, the content is directly linked to objectives. Students are made aware of all academic, clinical, and professional objectives at the beginning of each course.

Selecting Presentation Methods and Media. Time constraints and large classes were cited by nurse educators as the two major limitations when selecting classroom methodologies. They were concerned about their learners and the content when deciding on teaching strategies, but cited constraints as limiting their actual implementation of anything but the lecture method, augmented by audiovisual aids.
Implementation, Evaluation, and Revision Procedures. Students are evaluated frequently throughout the implementation process, both formatively and summatively. The evaluation of students, both in the academic and the clinical settings, is based on the course objectives, which all students are aware of. Courses are revised annually, based on student and faculty evaluation and on the need to update content by including the latest research as published in nursing journals.

CONCLUSIONS
This ethnographic case study of nurse educators, their knowledge of instructional development, and their instructional planning processes indicates that they do use a systematic approach in the planning of instruction. They believe that they have their own approach to instructional development. Their approach, because of the setting in which they work, focuses heavily on the delivery of content, but objectives are heavily utilized in their instructional planning and in the evaluation of students.
Their systematic approach does resemble a rudimentary, functional application of a basic instructional development model. Most of the nurse educators were unaware that they were using an instructional development approach, and were doing so in the absence of any theoretical knowledge of instructional development, or even superficial familiarity with instructional development models. Their knowledge level was very low, and seemed to be on a par with the teachers and teacher-librarians in the formal school system (Gallant, 1989; Tobin, 1989; Graham, 1991; Thomey, 1991). However, they used a very basic instructional development approach in planning instruction, which is more than that indicated by the school system studies.
Nurse educators plan their courses by committee, and they work in groups to plan instruction. They use common sets of objectives and common tests; hence their approach to instructional planning is formal, organized and structured. The use of a rudimentary instructional development process permits them to plan as a group, and to prepare instruction that is aimed at the achievement of specified objectives.
There are obvious similarities between instructional development and the nursing process, if considered from a systems perspective. It is probable that past use of the nursing process has enabled nurse educators to utilize a systems approach in planning instruction, in the absence of formal study in the area.

Learner Assessment and Text Design Strategies for Distance Education

Abstract: In this article three learner assessment strategies are described, as well as approaches that can be used by the designers of distance education instructional materials. These are: prior knowledge; reading/readability; and learning style. In addition, suggestions are made for designing learning environments and providing learners with the tools to enhance their own learning experience.

Résumé: This article describes three learner assessment strategies, as well as several approaches that can be followed by the designers of distance education instructional materials, namely prior knowledge, reading and readability, and learning style. It also suggests ways of designing learning environments and of providing learners with tools that will allow them to derive the greatest possible benefit from their learning situation.

Award of Excellence
• The writer has to produce a book or text about some subject with certain features. What strategies can be incorporated into the materials to improve the learner's likelihood of comprehension and successful learning?
• The learner has to comprehend and ultimately learn from textual materials that have been given certain features. What strategies can learners best use to reach their desired level of achievement?
These problems relate to any designer-learner combination, but they are more critical in the distance education environment. Study guides, tutorial support, teleconferencing, study groups and computer networking all can play a greater or lesser role in helping translate the designer's intentions into the students' achievement of learning goals. But unfortunately, distance education systems frequently do not provide many of these extra learning resources. This suggests that text materials slated for distance consumption should include as much strategic design as possible.
With additional information, the designer can make the text better reflect the needs of the learner. Three characteristics can be assessed relatively easily and translated into design decisions:
• prior knowledge
• reading ability and text readability
• learning style
The purpose of this article is to describe some of the techniques that are available for assessing these learner characteristics and suggest strategies that can improve the design of distance education text materials.

Prior Knowledge
Prior knowledge refers here to the specific vocabulary and concept knowledge that a student possesses about a subject prior to beginning a course of instruction or a textbook. Prior knowledge can affect almost all aspects of study behavior (e.g., reading speed and time on task) and how the designer approaches the planning of text materials.

Measuring Prior Knowledge
The designer can get a sense of students' prior knowledge from a variety of sources, especially from members of the learner population. Many of these sources are described in more detail in the next section on reading level and readability. Global methods, like casual talks with teachers, can help define the general style and scope of text materials. However, more objective and systematic procedures are needed for the designer to judge exactly what content should be included or excluded.
One method that can be used with any of a variety of knowledgeable individuals, including students, is concept rating. A sample of relevant concepts is given to raters along with a scale (e.g., ranging from do not know to definitely know). This scale and the resulting data can be quantified, depending on the precision needed for design judgments. As with most methods, it is best to obtain information from more than one source and more than one representative of that source.
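As a minimal sketch, concept ratings collected this way are simple to quantify. The 5-point scale, the concept names and the rating data below are all hypothetical, invented only for illustration:

```python
from statistics import mean

def concept_familiarity(ratings):
    """Average the raters' scores per concept (1 = do not know ... 5 = definitely know)."""
    return {concept: mean(scores) for concept, scores in ratings.items()}

# Hypothetical data: four raters, two concepts
ratings = {
    "variance": [2, 1, 3, 2],
    "mean": [5, 4, 5, 4],
}
```

Concepts averaging near the bottom of the scale would be candidates for remedial definitions or expanded treatment in the text.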
Other quantifiable student assessment procedures are: formal tests (standardized or unstandardized); unstructured free recall (specifically or globally graded); structured interviews; and structured recall tests (fill-in-the-blank or short answer). If multiple choice and true-false tests are to be used, respondents should be asked not to guess because, as in achievement testing, guessing results in ambiguity of measurement.
Students can also be asked to provide outlines or concept maps of the content being assessed, but these methods may require some instruction in technique in order to achieve reasonable uniformity in the data. Also, these techniques are very time-consuming but may yield useful qualitative data.
With any pre-structured or testing approach, adequate coverage of the content is paramount. There are two approaches that seem reasonable here: a) drawing from preliminary content under the assumption that if beginning materials are not known, then more advanced content will not be known either; b) drawing a reasonable sample from all of the content that is available. The second approach provides more certainty of what students know and do not know, while the first method requires less time and preparation.
As you will become aware in the next section on measuring readability, fill-in-the-blank tests and the cloze procedure can be similar; both require content blanks. The cloze test, explained in more detail in the next section, has the advantage of testing connected discourse, but this can also be a disadvantage, because less variety in concepts can be tested. Text designers can use any of the above techniques, but before choosing one, they should decide:
• the purpose of the testing;
• what kind of information will best facilitate the development of materials.
DESIGNING FOR PRIOR KNOWLEDGE
In texts designed for the general consumer, it is difficult to design for the actual prior knowledge of the learner. The typical approach is to grade texts as introductory, intermediate or advanced along with age, and gear the language and the concepts being taught to this assumption. For truly advanced or truly introductory texts this is by far the best approach. It is unreasonable to expect that novice learners should be exposed to material that is too advanced, or that truly advanced learners should be expected to wade through all too familiar content; you really cannot produce a single text for both. However, in the mid-range of difficulty, a strategy which balances familiar and unfamiliar can be the best approach. And it is entirely possible to design a textbook which caters to both the confident and the uncertain student. Naturally, this approach takes more space and considerably more time and thought on the part of the designer, but in the end the success rate of less knowledgeable students, in particular, will be worth the effort. Figure 1 shows this approach for designing text for intermediate students that contains both remedial content and moderately advanced content.
Providing definitions of basic terms, as well as advanced terms, is one way of solving the dilemma of variability in prior knowledge. Definitions can be set off in boxes, with special notes in text indicating their presence. Illustrations, charts and diagrams can be of great use to readers who need to see as well as read about the content. In general, it is best to provide several ways of understanding content, but there should be a clear connection between text and illustration.

Advance Organizers
When prior knowledge of a subject is low, advance organizers can be used to aid student learning. An advance organizer (Ausubel, 1968) is a short prose passage or graphical adjunct that appears before the content to be learned. The organizer provides information at a higher conceptual level than the text. In Ausubel's words, this gives the learner "ideational scaffolding" on which the new learning can be built.
It is a "top down" approach to learning based on the notion that specific detailed learning can only be remembered if it is connected to a pre-existing schema. An advance organizer can be designed as an expository organizer, which introduces new information, or as a comparative organizer, which draws on previously known content.
Expository organizers provide the conceptual framework that undergirds the upcoming material. A comparative organizer is constructed to provide a known parallel to the new information. For instance, if the learner has prior content knowledge, that schema can be used as a basis for comparison to unknown content (Jonassen, 1982).
According to Ausubel, advance organizers are not the same things as other kinds of introductory materials, such as summaries or objectives, that are often placed at the beginning of chapters. These are at the same conceptual level as the content and do not provide the higher-order conceptual organization as an advance organizer.
One of the problems of advance organizers is that they do not work for all students in all learning situations (Stone, 1983). If the student already possesses the prerequisite knowledge, the organizer is redundant. As well, certain curricular patterns such as spiral curriculum that, through design, pre-condition the learner to incoming new information, may also inhibit the effectiveness of an organizer. However, it is arguable that advance organizers cannot hurt the reader, and it is possible that they might help.

Figure 2. Differences in Organizer Type by Instructional Group.
Figure 2 shows three hypothetical learning situations (one-to-one instruction, specific group instruction, and general instruction) and suggests the kind of organizer that might be appropriate for each. The organizers become more specific moving from lower left to upper right, and fewer of them would be expected. In addition, expected benefits would decrease as a more general and more variable audience is addressed. The student's prior knowledge and possible non-use of the organizer are two of the aspects of general instruction that make organizers questionable.

Graphic Organizers
The concept of graphic organizers was developed by Barron (1969) around the same theoretical notions upon which advance organizers are based. The main difference is that the content is presented in graphical, rather than verbal, form.
Graphic organizers can be illustrative or pictorial if their form effectively communicates an organizing aspect of the content to be addressed. However, they should be labeled as organizers to improve their effectiveness (Bernard, 1990b). As with most aspects of text, graphic organizers should be pilot tested on the target audience.

READING ABILITY / READABILITY
Reading ability and readability can be viewed as two sides of the same coin. But according to Rye (1982), there are many aspects of each that impinge on both text (e.g., non-verbal elements) and the learner (e.g., motivation) which cannot be assessed by either readability or reading ability measures.

Measuring Reading Ability
The theoretical and practical literature on measuring reading ability is quite extensive, and there are a large number of instruments available. Reading tests for children are often non-verbal and bound into multi-dimensional skills packages.
Reading tests for young adult and adult groups, such as the Nelson-Denny Test, typically measure three aspects of reading ability: comprehension, vocabulary and speed, which can either be interpreted individually or as a composite based on national or locally produced norms. These norms help determine where the reader stands in relationship to the larger norming sample.
The Nelson-Denny Test, and instruments similar to it, can provide useful information to teachers and students. Having a measure of reading ability is an important piece of information to the designer, as well. Scores can be obtained through actual testing or from school records. However, a reading score by itself does not help much in specifying how text should be written and structured. Readability estimates can help with this determination.

Measuring Readability
This section includes a discussion of traditional approaches to readability measurement, followed by a discussion of the cloze procedure. An interpretation example of the cloze procedure used in conjunction with a traditional reading measure is provided.
All of the following approaches require the use of a sample of the content material to be tested.
Qualitative techniques. Content experts, instructional designers, colleagues, librarians, editors and instructors can all provide invaluable input into the assessment of draft manuscripts or selections. Each will approach the task from his/her own perspective, and this must be considered in interpretation. In the absence of a systematic way of collecting data from them, however, their opinions should be considered anecdotal and supplementary.
Talkaloud or thinkaloud approaches (Flagg, 1990) can be used with small numbers of students, preferably a selection of high and low ability readers. Talkaloud refers to the students' behavior during a testing session, where an examiner sits with students while they read aloud. The students are given the instruction that, when they encounter problems in understanding or difficulty with any aspect of the text, they are to mention it. Explanation of the difficulty is also desirable. This technique gives an enormous amount of formative information, including suggestions for structuring text, word choice and examples. This method usually requires extensive notetaking or tape recording.
Open-ended questions can be used to gather data on readability from experts or from students in the target population. Either procedure could involve using a structured questionnaire or interview protocol that reflects the kinds of information desired. Sometimes a small follow-up sample of questionnaire respondents is interviewed to obtain more information. Designers requiring information from distance education students, especially through the mail, should be aware of the possibility of a small return rate. Special consideration, such as the length of the text, should be given to questionnaires that will be sent by mail.
Students can also be asked direct questions about the text. This could be accomplished in a one-to-one testing format or in larger groups. Students could be questioned after reading with a pre-set group of questions. If test-like questions are used, they should assess factual information, and be thought of as an equivalent to the prior knowledge test from the previous section.
Single grade level formulas. The traditional way of assessing readability is to judge the text, according to a formula, without asking student or expert opinion. Klare (1969) lists 31 such readability formulas. These tests differ in the way they operationalize terms like comprehension and vocabulary, but are similar in that they all produce a grade-level equivalent readability estimate.
Three of the most important and commonly used are the Fry Readability Formula (Fry, 1968), the Flesch Test (Flesch, 1948) and the Dale-Chall formula (Dale & Chall, 1948). The Flesch Test is the easiest to use and the shortest test to conduct. The Dale-Chall formula is the most accurate, however, having undergone the most research and having produced the highest correlations with other tests and teacher ratings of text.
The Dale-Chall formula will be used as an example of a single grade-level equivalent procedure (Gilliland, 1972).
• Select 100-word samples throughout the material to be rated.
• Compute the average sentence length in words (x2).
• Compute the percentage of words outside the Dale list of 3,000 (x1). This list can be procured from Dale's work or from specialized books on the subject.
• Use the formula: X50 = 0.1579x1 + 0.0496x2 + 3.6365, where X50 is the grade level of a student who could answer 50% of the questions on the McCall-Crabbs Test (1925).
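The procedure above is easy to mechanize. In the sketch below, the `familiar_words` set is only a stand-in: the actual Dale list of 3,000 words must be obtained separately, and the tokenizing rules are simplified assumptions:

```python
import re

def dale_chall_score(text, familiar_words):
    """Raw Dale-Chall readability score: 0.1579*x1 + 0.0496*x2 + 3.6365,
    where x1 is the percentage of unfamiliar words and x2 the average
    sentence length in words (Dale & Chall, 1948)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not sentences or not words:
        raise ValueError("text must contain at least one sentence and one word")
    x1 = 100.0 * sum(1 for w in words if w not in familiar_words) / len(words)
    x2 = len(words) / len(sentences)
    return 0.1579 * x1 + 0.0496 * x2 + 3.6365
```

Samples of the text to be rated would be passed in along with the word list, and the resulting number read as a grade-level equivalent.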
Since this way of expressing grade-level equivalent provides a single reading measure, it does not easily match reading ability test scores, unless the reading test itself is expressed in grade-level equivalents. When the target audience possesses high prior knowledge of the content, a higher number does not necessarily mean that the reading level is difficult for that group.
The cloze procedure. The cloze procedure is particularly adaptable for use in formative assessment, because it measures readability in terms of an individual student's understanding of an actual passage from the text. In a sense, it is the sum of a variety of linguistic variables estimated independently by other techniques.
The cloze procedure is appropriate for distance education, since an instrument can be formed from samples of the material to be taught and administered, untimed, in settings similar to those where students usually study. A negative feature of the cloze procedure is that it produces arbitrary scores, which can be expressed in percentages, but which will require interpretation.
Since the cloze procedure is not a test, per se, or a formula, a set of guidelines has been established for producing individual testing instruments. The following list includes these guidelines (Rye, 1982) plus some comments from personal experience:
• The selected passage should be devoid of standard instructional design features, such as illustrations or inserted questions. It should also be representative of important, rather than "quirky," content. It should include instructions to students.
• Selections should be at least 250 words in length with at least 50 deletions. However, since the test is not timed, students should not become frustrated by excessive passage lengths. It must be remembered that finding the correct word for a blank is much harder than reading the passage without the blanks. It is imperative to take time constraints and the motivation of the student to complete the test into consideration in judging the length of the test. Generally speaking, the older and more mature the reader, the longer the test can be.
• Cloze tests can be made progressively more difficult by shortening the span of words before the next deletion. This technique allows the testee to become progressively more comfortable with the testing procedure, which is surprisingly difficult even for simple passages.
• Prior knowledge of the content is not a desirable aspect of using the cloze procedure. For this reason, it is better to select as many nonspecific terms and words for deletion as possible. If content items are used, the test takes on the character of a fill-in-the-blank test, and would be better used to test prior knowledge.
• It is generally better to use selective deletion when testing for readability, rather than using a semi-random procedure or a strategy which picks every nth noun or verb. Detailed schemes for assigning values to different types of deletions (i.e., nouns versus adjectives) and their position in text are available (Rye, 1982).
• Research on scoring is conflicting; if comprehension testing is the desired outcome of cloze testing, it is better to score with synonyms, as well as exact matches. However, this is the harder procedure, since someone must construct a list of acceptable synonyms, and scoring is complicated by the need for multiple scorers and inter-rater reliability estimates.
• If a more exact, and less subjective, interpretation of the test is necessary, only exact matches should be considered.
• Pilot test the instrument for time and difficulty on a few readers first, so as to avoid multiple administrations with a larger group.
A sample from a cloze test, taken from a Reader's Digest article, is shown in Figure 3.
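The mechanics of building a cloze instrument can be sketched as follows. For brevity this uses simple every-nth-word deletion; the selective-deletion guideline above would replace only the rule for choosing which words to blank:

```python
import re

def make_cloze(text, n=5, start=2):
    """Blank every nth word, starting at index `start`; return the passage
    with uniform blanks plus the answer key for scoring."""
    tokens = text.split()
    passage, answers = [], []
    for i, tok in enumerate(tokens):
        word = re.sub(r"[^\w']", "", tok)  # the scorable word, punctuation stripped
        if word and i >= start and (i - start) % n == 0:
            answers.append(word)
            passage.append(tok.replace(word, "______", 1))
        else:
            passage.append(tok)
    return " ".join(passage), answers
```

Note that the blanks are of uniform length, so their size gives the testee no clue about the deleted word; the answer key supports exact-match scoring, with synonym scoring layered on by hand.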
It is easy to confuse reading ability with readability, when the cloze procedure is used. What makes this application a test of readability and not of reading ability is that a content sample is being evaluated. An independent reading test makes interpretation much easier, because of the tendency to view the readers as having the problem, not the passage.

Interpretation of Reading Ability and Readability Data
Figure 4 (Gilliland, 1972. p. 97) shows the theoretical relationship between text difficulty, in grade level equivalents, and the development of reading ability. Notice that the relationship is linear, meaning that reading level progresses at a relatively stable rate over different text difficulties. The second line is curvilinear. At the early stages of reading a small increase in text difficulty produces greater difficulty for students. As reading skills develop, difficult text is more easily dealt with because of the history of reading skills that have been acquired by the student.
Early readers need careful planning in terms of readability in text design, whereas older, more mature readers need less consideration concerning readability. Age, however, is not always the determining factor on the scale to the left in Figure 4. A young precocious reader may be able to handle very sophisticated text, whereas an older, but poorer, reader may require considerably more development time and effort on the part of the designer.
Good readability-reading ability should fall within the gray area that marks the variability that surrounds these joint concepts. Consider the graphing  approach with some fictional data. Afterwards we will deal with the statistical interpretation.
This fictional example (the raw data are not included) describes one sample of 10 readers of differing reading ability, expressed in percentile form, and three different text selections measured in cloze units. Figure 5 shows scattergrams of the three sets of data. Notice that the first test approximates the previous illustration. This is not a requirement, but the reading test by cloze distribution has the marks of a good match. Tests 2 and 3 are too hard and too easy, respectively. Now look at these same data statistically. The means tell what we expected; the texts differ on the cloze tests. The standard deviations show us how variable the test scores are. Tests 1 and 2 are moderately variable, while Test 3 is very variable.
The mean-to-standard deviation (M-to-SD) ratio. The M-to-SD ratio helps us understand these univariate descriptive data more thoroughly. Technically, for scores that cannot be negative, the M-to-SD ratio should not be less than 1.00, because the mean should be larger than the standard deviation. Distributions with means smaller than their standard deviations are very flat and variable. An M-to-SD ratio of about 2.0 indicates a much better spread of scores around the mean. Ratios greater than 2.0 (particularly as high as 4.0) indicate a narrow distribution in which the majority of scores are close to the mean (the opposite of a large SD).
The Mean-to-Max Percentage. This statistic is simply the percentage of correct responses of the average student. For Text 1 the average student received about half of the items or 50% correct. This number is about right for this sample. However, it can be too low (30% or less in the case of our example), or too high (70% or greater).
Pearson Product Moment Correlation. The correlation coefficient tells us how much grade level and test scores match in the sample. The correlation of .83 is high, and the correlations of .50 and .56 are moderate. We would guess that the high correlation indicates that Test 1 is the best, since there is a good match between reading ability and readability.
Correlation coefficients produce a useful complement to the graphic representation (Figure 5). However, the number itself can be misleading. Several peculiarities of correlation coefficients can make their interpretation misleading (e.g., variability could be limited on the grade side as well). It is recommended that you examine both the coefficient and its accompanying scattergram.
Standard interpretation. Looking at all of the graphical and statistical information together suggests that Test 1 is preferable. It has a good graphic shape, and the M-to-SD Ratio as well as the Mean-to-Max Percentage are reasonable. Its correlation coefficient in combination with the scattergram (Figure 5) indicates that the match between grade level and readability is high. This suggests that students of different reading abilities scored about the way they should have on the readability measure.
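The descriptive statistics used in this interpretation are straightforward to compute. This sketch uses invented numbers in the spirit of the fictional example; the M-to-SD ratio here is based on the population standard deviation, an assumption since the text does not specify which SD was used:

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def summarize(cloze_scores, max_score):
    """Mean, SD, M-to-SD ratio, and mean-to-max percentage for one cloze test."""
    m, sd = mean(cloze_scores), pstdev(cloze_scores)
    return {"mean": m, "sd": sd, "m_to_sd": m / sd,
            "mean_to_max_pct": 100.0 * m / max_score}
```

Pairing `summarize` on each cloze test with `pearson_r` against the reading-ability scores reproduces the three statistics discussed above, to be read alongside the scattergrams rather than in place of them.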

DESIGNING FOR READING ABILITY/READABILITY
There is no substitute for clear, concise writing, and this above all is what makes text readable. But good learners do not just read; they display a set of behaviors that turn readability into learnability. Some of these are: organization, reiteration, rehearsal and exercise. The strategies discussed below are intended to implant, in poor readers and hence poor learners, some of the traits of good readers/learners.

Headings and Sub-headings
Headings and sub-headings are organizers, of sorts, but they are "running organizers," the organization revealing itself as the material progresses. While they are critical, their over-use creates the impression of an outline. Establishing and maintaining consistency is important (Hartley, 1988). A student should be able to count on the scheme that the designer has devised. Equally important is consistency in the placement and typographic characteristics of the scheme. Hartley has amply demonstrated the importance of proportional spacing regarding headings.
An outline of headings and sub-headings may also appear at the beginning of a section or chapter. This amounts to a "table of contents" for the chapter. A chapter outline is a summary and organizer rolled into one, but since the organization will eventually reveal itself in the text, a more descriptive summary may be preferred. The one exception to this is in scientific texts, where invariant order of topics provides advance organization.

Overviews
Overviews are descriptive passages which are intended to introduce a section or topic by summarizing it. However, overviews are often far too literal. Instead of reading, "In the following chapter you will find...", overviews could easily begin with, "In the last chapter you saw that...", and go on to say, "In this chapter we'll see how..." We'll call this a "continuity overview" for want of a better term. Summaries of this type lead knowledge to knowledge in a way that simple reiterative summaries cannot. This may sound like an advance organizer, but there is a very big difference, from Ausubel's point of view. "Continuity overviews" summarize previous and future learning rather than organize it.

Student Objectives
Student objectives, usually placed at the beginning of a section or chapter, are seldom used, or even read. Research shows that students will use them when required to, and even then they are only modestly effective (Klauer, 1984).
If the content objectives are concrete (e.g., the student will be able to insert the bulb into the socket) then behaviorally based objectives make perfect sense. But if the content deals with information that is richer in meaning, a behavioral objective such as "the student will be able to recall the steps leading to knighthood" is little more than an instructional strategy for memorization, or at best an overview of the bottom level content.
Instructional objectives, on the other hand, provide a variety of possibilities. Instructional objectives can, but do not always, convey information that is valuable from a strategy point of view. An experimental study by Bernard (1990a) pointed out how related procedural statements can be used to create an overall strategy for improving achievement. Through a series of statements that can be put into objective form, the student can be given a "pathway" through the material. Instructional objectives can be more interesting, linguistically, and more helpful, instructionally.
Take the following example:
• Learn the variance formula; it is critical to everything that follows.
• Remember the symbology: σ² = variance; Σ = summation; X = a score on a variable; M = mean of the distribution of scores; n = number of subjects in the distribution.
• Learn the form of the variance formula: "the sum of the squared deviations of scores from the mean, divided by the number of subjects," that is, σ² = Σ(X - M)² / n. You will see this form in many different disguises in future chapters.
• Explain how the variance can be converted into standard deviation units. Remember that variance, by definition, is in squared units, and SD is in unsquared units.
These objectives, admittedly, are not conventional, but they express critical relationships among concepts and tie into prior knowledge. This is just one idea that could improve the practice of using objectives.
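The variance formula the objectives describe, "the sum of the squared deviations of scores from the mean, divided by the number of subjects," renders directly into code, including the conversion to standard deviation units:

```python
def variance(scores):
    """Sum of squared deviations of scores from the mean, divided by n."""
    n = len(scores)
    m = sum(scores) / n
    return sum((x - m) ** 2 for x in scores) / n

def standard_deviation(scores):
    """Taking the square root converts the squared units of the variance
    back into the raw units of the scores."""
    return variance(scores) ** 0.5
```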

Inserted Questions in Text
Inserted questions have a long and honored research tradition. Rothkopf (1965) coined the term mathemagenics, which became equated with questioning, and this led to a spate of studies of every variant on inserted questions imaginable.
There is little question that, if assiduously used, inserted questions can improve scores on future achievement measures (Hamaker, 1986). However, the level of the question is a determining factor. Simple rote recall questions will, invariably, cue students simply to memorize. More complex, comprehension-level questions will lead to better understanding of the content, and may serve less well as models for presumed future questioning. Application and/or problem-solving questions provide the best opportunity for real growth and the use of previous knowledge. Application questions require much more than simple comprehension; they require generalization, adaptation, analogy and articulation.
Problem solving questions go one step further. They demand information from various places (presumably from within the target source) and the engineering of the information for some purpose.

Exercises and Problems
The data are mixed on the usefulness of exercises and problems placed at the end of text selections (Paschal, Weinstein & Walberg, 1984). If left ungraded, or without feedback, exercises and problems are only modestly effective. If graded, with real feedback on performance, they can be very effective. Unfortunately, the provision of appropriate feedback is outside the realm of the instructional text designer.
Good "end of chapter" exercises need not all be at the same instructional level. If memory is required (i.e., "memorize the variance formula"), then a number of memory questions are desirable. If they are to be used, it is most important for the text designer to know why particular questions are included and the presumed content and instructional veracity of each.

LEARNING STYLE
In addition to the consideration regarding prior knowledge and reading ability/readability, there are other measurable learner characteristics that can provide information to the designer of instructional text.
The past 20 years of research in the area of learning styles (e.g., Pask & Scott, 1972; Marton & Saljo, 1976; Entwistle & Ramsden, 1983; Biggs, 1985; Schmeck, 1988) has shown that certain types of characteristics are more favorable to successful learning than others. Several approaches to measuring these differences in style have been based on observations and interviews (Pask & Scott, 1972; Marton & Saljo, 1976), resulting in paper-and-pencil questionnaires or inventories.
The results of learning styles research suggest that students employing a deep-level approach, driven by intrinsic motivation and matched with good reading habits, also seem to perform better on common university tests. One inventory was devised by Entwistle and Ramsden (1983), and validated by a variety of researchers in many settings including distance education (Kember, 1989). This measure, and others like it, may help the designer determine whether the learner is operating at the level of surface (e.g., rote memorization) or deep (e.g., meaning and understanding) approaches to learning, and whether other factors, such as improvidence (i.e., search for details), operations learning (i.e., proceduralized learning), and intrinsic versus extrinsic motivation are also operating. This list of factors has been identified as either contributing to or impeding learning success (e.g., Hattie & Watkins, 1988).
Findings from these studies suggest three explicit strategies for text designers:
• include a learning style inventory as a self-diagnostic test to allow students to determine how they might improve their study habits;
• incorporate built-in text design suggestions that prompt students to take responsibility for their own learning, such as the use of meta-cognitive learning strategies;
• suggest alternative ways of viewing and representing the information, such as constructing graphical representations and designing their own learning exercises.
How, then, do you design instructional text for individual differences? The best answer is to do everything: perform careful readability assessment, provide structures that aid comprehension and memory, and provide some form of training in organizational instructional strategies such as concept mapping. Knowing something about the kind of environment in which a text will be used, as well as the objectives, the time that students will be able to invest, and their learning styles, can also inform the designer's decisions regarding how much structure to provide and how much help should be offered in the form of suggestions for study.

DESIGNING FOR SUCCESSFUL LEARNING AT A DISTANCE: A QUESTION OF STRUCTURE

Figure 6 shows the various strategies that can be used for specific learner difficulties. Most of the suggestions are designer-based, in that they are adjuncts that can be designed into the material. However, successful learning involves more than simply responding to pre-designed elements. For real learning success to occur, a gradual change from designer-structured texts to the inclusion of self-directed learning activities has been found to encourage independent, deep-level learning (Beaudoin, 1990). Independent learning, by the nature of distance education, is one of the essential requirements for success.

Figure 6. Learner Characteristics and Potential Instructional Strategies.
The problem of providing less structured approaches to the learner is greater than the reverse. Some readers prefer to take their own notes, in one form or another, but in a highly structured text these notes can simply duplicate the text's structure. Some learners, however, probably the truly deep learners on the deep-surface scale, prefer to create their own organization, independent of the organization of the text. These tend to be advanced learners in content as well as style, though this is not always the case. Most learners need to create their own organization in order for content to become meaningfully "theirs".
The question regarding how much structure and which instructional strategies to include in text is a difficult one (Grow, 1991). Figure 6 may help suggest which types of strategies are possible for different kinds of learners. But in adopting one strategy, some learners may be excluded. The real trick is knowing the learners and the nature of the structure they require. A training manual, for example, for a very homogeneous group will require a different design than a general text, intended for a heterogeneous audience.
The complexity involved in the designer's task is shown in Figure 7. Here, three variables (learning style, textual structure and prior knowledge) are displayed in relationship to one another. Learning style (deep-surface) is shown on the vertical axis, varying text structure is on the horizontal axis, and diminishing subject matter familiarity (prior knowledge, high to low) is on the depth dimension.

Figure 7. Relationship of Learning Styles to Learning Strategies.
At one time or another, most learners cross categories, so this graph is not intended to characterize the behaviors of these learners so much as to suggest a progressive strategy to both student and designer. The depth dimension in Figure 7, diminishing content familiarity, suggests that as learners' prior knowledge or readiness for the content decreases, surface-oriented strategies are likely to help even normally deep learners. Encountering new terminology, relating unfamiliar concepts, and having to form a new knowledge structure based on content frameworks or hierarchies demand many of the behaviors that characterize the surface learner. Fragile knowledge in learners demands rehearsal and gradual cognitive restructuring before even deep comprehension, much less creativity, can occur.
One current line of thinking and research suggests that a solution to the question of structure lies in guiding learners towards creating their own structure (e.g., Novak & Gowan, 1990; Patten, Chao & Reigeluth, 1986), thus allowing them to link concepts to their own personal schemata. This means that the designer needs to provide the learner with strategies rather than structures. A wide variety of such strategies have been proposed, including concept mapping, cognitive mapping, idea mapping, patterned note-taking, webbing, flow charting and networking. All of these strategies fall within the general theoretical perspective known as elaboration theory (Reigeluth & Stein, 1983).
Procedures for encouraging cognitive elaboration include planning, attending, encoding, reviewing and evaluating (Weinstein & Underwood, 1985), all of which can be activated in textual materials to improve learning achievement. A study of concept mapping (Bernard & Naidu, 1992) in distance education demonstrates this point.
One hundred forty-one female distance education subjects participated in one of four treatments: a control group (standard materials for the course); an inserted questions group (questions linked to course materials); a concept mapping instructions group; and a combined group. The concept mapping group was further divided according to the number of concept maps participants turned in and received feedback on. Achievement scores at the end of the 12-week course for highly persistent mappers were better than those of both the control condition and the inserted questions group, and approached a significant advantage over low-persistence mappers. This suggests that, even for a general audience, concept mapping built into course materials can provide genuine benefits in terms of achievement and learning self-sufficiency.

SUMMARY AND CONCLUSIONS
This article has attempted to sketch a perspective on text design that emphasizes assessment of learner characteristics from three perspectives: prior knowledge, reading ability/readability and learning style. It is argued here that for distance education, in particular, it is important that designers possess some understanding of these characteristics in their intended audience. This is because distance education materials form the basis for distance learning, especially when other kinds of learning support are lacking.
Designing materials is often not enough, however. In addition to modifying text to make it more learnable, it is often desirable for the designer to provide tools and strategies that can facilitate independent learning. Concept mapping was cited as an approach that can be designed into distance education texts, to increase learner control.
"Cool graphics! Neat sounds! This is rad!" These are just a few of the comments I have heard from my students about the new program from MECC called DinoPark Tycoon. It is not very often that you get Grade 6, 7, or 8 students showing up an hour before school starts to use the computers for something other than games. DinoPark Tycoon has had this effect on my students.
DinoPark Tycoon is a theme park simulation. In the simulation you purchase, build, operate and continually maintain a roadside dinosaur park. For direct business applications, most of the factors, events and problems are very realistic and practical. For math applications, DinoPark Tycoon includes estimation, percentages, averages, decimals, graphing, problem solving, and all of the basic math operations. There are also direct science and social studies applications with respect to the history, habitat, diet, and eras of the 19 dinosaurs that can inhabit the DinoPark.
Installing DinoPark Tycoon on your computer is very simple because the disk it comes on is equipped with an installer program. The installation process for both the Mac and MS-DOS formats is quite similar. As with other MECC products, the documentation is extensive and thorough. Upon opening the binder, the first item encountered is a product overview section containing background information on the program; then there is a getting started section for those who are impatient to "get going", a detailed product description with many screen displays for reference, and an extensive classroom resources section containing both teacher support materials and student handouts.
I found that the students enjoyed working out the everyday problems they encountered in DinoPark Tycoon. As a result, taking the concepts they had learned back to the classroom for discussion was made easy. I liked the fact that when students worked through the program they found success in steps. While it would take them some time to get past the $100 000 revenue mark, at the $300 000 mark (and others) they were faced with not only more problems, but more difficult and complex ones. In this way, forecasting and decision-making became much more relevant. Second, the successes and failures of the students were shared equally, which enabled students to comprehend why they did or did not find success. For me, the bottom line is that this program is effective and fun to use. The kids learn facts and concepts without agonizing over which is which. They are able to develop their information processing skills without even knowing they are doing it.
DinoPark Tycoon is recommended for use in Grades 3-12. From what I have seen, it is best suited for Middle Years (i.e., Grades 5-9) students. The concepts are a bit above most of the younger ones, and the older students often find the program too simple, too quick. Like all students, as they used the program, my students found tricks for being successful. Most of the tricks were realistic and appropriate. One exception, a trick everyone finds sooner or later, is that hiring the maximum possible number of security guards makes their income begin to increase dramatically and unrealistically.
Although there is little emphasis on theoretical aspects of communicating, this book would be a useful tool as an introduction to the creation of visuals or as a reference for presenters wishing to refresh their knowledge. Anyone who has to create a presentation, whether in the educational system, government or business, would find this book useful!

REVIEWER: K. Bruce Lane is President of Spectrum Design Corporation, a graphic design and audiovisual production company. He is also a Sessional Lecturer of Media Courses with the Faculty of Education, Memorial University of Newfoundland.
Reviewed by Richard F. Lewis, Ph. D.
Aspects of School Television in Europe contains 66 articles, written by producers, teachers and researchers. The editor explains that allowing authors to speak in their own voices would permit their messages to be communicated more faithfully. As a result, magazine-style articles blend with scholarly works. Short articles highlight specific viewpoints while longer pieces review literature, history and research projects. In the book, the term school television refers to programs which supplement or replace instruction directly in schools, as opposed to educational broadcasting, which includes programs broadcast directly to homes. While in-school broadcasts appear to have declined in Canada since early efforts in Newfoundland and Nova Scotia, Europeans continue to use them actively. The editor's lead article is an excellent introduction to school television in Europe, detailing how television authorities are organized, how they are governed, how liaison with schools occurs and how research is integrated into the process. Other chapters deal with specific aspects of school broadcasts. Rather than proceed with a section-by-section review, I will describe the major themes and issues in the book.
Television has been used to teach many subjects in Europe. Programs deal with science, mathematics, media literacy, language learning, health and special education. Case studies which pepper the volume cite the importance of television as a motivating force, especially in science, mathematics and language learning. In mathematics, one author mentions three goals: mathematics is fun, it is useful, and it helps you understand your world. Teachers, education authorities and government education departments set content and approaches. Most programs appear to have been effective in achieving their goals. The impact of Sesame Street is felt throughout the volume. Authors credit the research and production model and results of that program with enhancing school television all over the world.
Many authors cite the need for support services for school television. In-service and pre-service workshops, teachers' guides, and education officers (like curriculum specialists in Canada) ensure that programs can achieve their goals effectively. Many authors also comment on the need to publicize the availability of the television materials.
Research, both formative and summative, figures heavily in many articles. The needs of students and teachers, the effectiveness of the programs and utilization information all receive attention. Research ensures that money is being well-spent and that intended outcomes can be measured.
International co-production is a fact of life in the mainstream media. Authors in this text suggest some specific guidelines for international coproductions which would seem to apply to co-production of any type. Authors highlight the need for clear objectives and standards, research, pilot programs and the use of a few co-operators instead of many in program production.
Authors highlight the importance of quality control in school television. Never mentioned, but underlying the articles, is the sense that we could very easily lose school television; it is still not a need but more like a want. As a result, authors emphasize the importance of relevance, close contact with their audiences, accuracy and appropriateness. They suggest that multi-media products enhance televised material. Companion computer software, learning kits, printed material and teachers' manuals containing activities all help television programs become part of an active learning model.
The book contains a wealth of information which will be useful for provincial media producers as well as educational broadcasters. Anyone contemplating the production of any types of mediated instruction would be wise to consult it. It will also be useful as a reference for students in communication studies and educational technology programs. With the increased interest in distance education, producers and educators working within that framework would also be wise to consult the work.
Television continues to be one of the most powerful worldwide communication channels. By continuing to harness its power to present valuable information, we can counteract the effect of less-than-desirable programming. In North America, we would be wise to heed the messages contained in the book, not necessarily to re-introduce school television but to continue to use it to achieve critical educational objectives.