Early UCSF contributions to the development of multiple-channel cochlear implants

The early contributions of the UCSF cochlear implant (CI) research team to the development of multiple-channel cochlear implants from about 1971 through the mid-1980s are briefly summarized. Scientists at UCSF conducted fundamental studies related to device safety, the control of patterned electrical stimulation, and the designs of intracochlear electrode arrays, coders, and implanted multiple-channel electrode drivers. They conducted many original studies documenting parameters of hearing with cochlear implants relevant to next-generation CI designs. On these bases, the UCSF team constructed early models of multichannel devices that were progenitors of the Advanced Bionics' Clarion CI. This article is part of a Special Issue.

I was recruited to join the UCSF faculty in 1971 by the distinguished otologic surgeon Francis Sooy. Upon my arrival in San Francisco, it soon became clear that one of Dr. Sooy's objectives had been to provide a source of hearing neuroscience-informed advice and assistance for Dr. Robin Michelson, a key cochlear implant pioneer. Michelson was enthusiastic about my helping him in his cochlear implant development efforts. At the same time, weekly conversations with him frustrated collaboration, in part because of my own clinical naivety and skepticism, but also because of Michelson's limited understanding of the inner ear and of electrophysiological principles, and his strong commitment to pursuing his own implant development model. While Dr. Michelson was an exceptionally capable and informed clinician-scientist in many other ways (and a wonderful, infectiously-positive human being who later became a key scientific collaborator), his early notions about how to build a cochlear implant that could restore speech reception were, in my scientific view, naïve and impractical. His single-channel device was designed to "replace" the mass electrical potential generated by hair cells in the normal inner ear, the cochlear microphonic. To replicate the microphonic and deliver it into the chambers of the inner ear, he had developed an analog signal processor with the collaboration of a Beckman Instruments electrical engineer, Melvin Bartz. That analog processor fed a 7–8 mm long exposed pair of wires (7–8 kOhm impedance; we called it a "railroad track electrode") delivered in a molded silicone appliance designed to fill the basal-most scala tympani. Michelson hypothesized that if cochlear microphonic replacement was implemented in an optimal way, frequency-selective excitation would be the expected result, on a level that should support speech hearing recovery.
Of course a more conventional electrophysiologist's (my) view was that the frequency selectivity in the inner ear above the F0/F1 frequency range was achieved via mechano-electric cochlear tuning; that the surviving spiral ganglion cells would be excited strictly on the basis of their electrical properties; that recovery of patterned inputs capable of representing intelligible speech would require at least 11 separate channels of electrical stimulation (the minimum for achieving intelligible speech via a frequency-channel vocoder) (Flanagan, 1965); that the requisite multi-channel analog device would be plagued by inter-channel interference, especially without channel isolation; and that an effective device would have to be implemented via at least a 15–20+ mm long electrode (to engage the several octaves required to represent the input frequency span for a position-translated aural speech spectrum; see 2), whose insertion into the inner ear would probably kill it.
Fortunately, Michelson used his weekly research day to try to engage my collaboration and support, regaling me with stories about how much his single-channel cochlear implant patients could hear. After nearly a year of often-frustrating conversations with him, I finally asked him to bring his best patient to my laboratory, where we would set up an experiment designed to quantify what she could and could not hear with her device. With a week's preparation, we set up simple parametric psychoacoustic studies, supplemented by stimulus sets generated on a music synthesizer (and other environmental sound stimuli) provided by a UCSF ENT resident, Robert Petit. Michelson showed up with a wonderfully cooperative older woman who had acquired her deafness a decade or so earlier; we put her, and her single-channel cochlear implant, "to the test" over a 4–5 h testing period. I was astounded both by her listening abilities and by her great enthusiasm for her cochlear implant. We were quickly able to see that, via this presumably predominantly basal-turn acoustic nerve stimulation with a single analog stimulation channel, her device operated like a frequency-channel vocoder "baseband" (Flanagan, 1965; Schroeder, 2004). That is, she could surprisingly accurately discriminate pitch differences up to and across the first formant range for aural speech (with significant discriminative abilities up to nearly 1 kHz), could distinguish repetition/modulation rates in a not-far-from-normal manner, and identified and described different predominantly low-frequency complex sounds in terms that at least crudely paralleled descriptions of low-frequency-range hearing in normal listeners (Merzenich et al., 1973). For example, to our surprise, she was able to easily distinguish (and identify) the sound of a bassoon (with its triangular-wave acoustic musical signature) from a horn or flute in the same pitch range.
Observations with this patient, soon confirmed and extended in two other Michelson patients (Merzenich et al., 1974a; Merzenich, 1975), were transformative for me, because they strongly indicated that it should be possible to represent speech intelligibly with a multi-channel, vocoder-model device. This patient's single-channel device, presumably engaging a normally-high-frequency sector of the auditory nerve, appeared to operate like a baseband channel, i.e., it represented all frequencies, presumably via temporal coding, broadly across the F0/F1 frequency range. The Bell Laboratory scientists and engineers who created baseband frequency-channel vocoders required only 5 additional bandpass channels feeding band-centered oscillators to represent aural speech with high intelligibility (Flanagan, 1965; Schroeder, 2004). That seemed likely to be achievable with a CI (Merzenich et al., 1973, 1974a, 1974b; Merzenich, 1975). Moreover, Michelson's relatively large form-fitting silicone insert, bearing electrodes that had been continuously stimulated in this patient's inner ear for more than a year, had had no substantially destructive consequences for her surviving auditory nerve array.
After this compelling demonstration, we initiated research designed to support the development of a multi-channel CI potentially capable of representing intelligible speech, for treating patients with profound hearing loss.

On the path to developing the modern multichannel cochlear implant
In 1972–73, we took three initial steps toward achieving multichannel cochlear implant development. First, with the support of Dr. Michelson and an ENT resident, Dr. Robert Schindler, we organized and convened a workshop balancing strong CI supporters (e.g., the pioneer surgeons William House, Blair Simmons, and Robin Michelson, the distinguished Austrian otologist Heinrich Spoendlin, and the Bell Laboratories vocoder expert James Flanagan) and equally strong CI detractors (e.g., the Harvard surgeon-anatomist Harold Schuknecht; the cochlear anatomist Joseph Hawkins; the distinguished physiologist Merle Lawrence; the speech scientist/audiologist Ray Carhart) to help define a path to development (Merzenich and Sooy, 1974; Merzenich et al., 1974c). A second step was to secure the first of a long series of grants and contracts from the NIH that would support what appeared to be the most obvious studies requisite for designing and implementing practical, long-surviving multi-channel CIs. The third was to build a multidisciplinary research team capable of achieving basic research and device engineering goals.
In this era, experts were sharply divided on the likely future of CIs. At our early 1973 workshop, for example, Professors Schuknecht, Hawkins and Lawrence were critics of the future applicability of CIs, arguing that the fragility of the inner ear would disallow long-term use, and that simulating nerve response patterns via multi-site electrical stimulation, on a level that could be expected to result in speech recovery, was probably unachievable. By contrast, the three pioneer implant surgeons, and James Flanagan, one of the fathers of vocoder development, were optimistic about ultimate success (Merzenich and Sooy, 1974; Merzenich et al., 1974c).
The goals for our CI research initiative were crystallized by these discussions: 1) We had to determine how best to achieve either a simulation model of distributed auditory nerve coding or, alternatively, the localized stimulation required to generate the energy-modulated channel-specific pitches imagined to be the products of higher-frequency bandpass channels representing 2nd and 3rd formants (and other higher-frequency speech features) in a baseband or other channel vocoder implementation. 2) We had to show that implantation of the requisite 2–3 cm long intra-cochlear electrode arrays (and chronic stimulation with implanted electrodes) was safe with respect to the survival of spiral ganglion cells. 3) We had to develop implantable multi-channel devices, potentially implanted at a young age, yet assuredly surviving, or reliably replaceable, over a human lifetime.

Addressing issues of implant safety
Robert Schindler led our team's initial implant safety studies. With the collaboration of a visiting scientist from the Karolinska Institutet in Stockholm, Birgitta Bjorkroth, he showed that silicone rubber appliances mounted with bipolar scala tympani electrodes inserted about a centimeter into the basal turn induced acceptably limited damage to cochlear spiral ganglion cells in a cat model, at least over a several-month-long post-implantation period (Schindler et al., 1977). In 1976, the anatomist Patricia Leake joined our team and extended these studies with the support of Sheila Walsh (Walsh et al., 1980), ultimately showing that chronic charge-balanced stimulation with scala tympani-implanted devices did not have major deleterious impacts on the survival of 8th nerve ganglion cells in deafened animals. In the same mid-1970s period, a study led by Dr. Margaret Wong-Riley also showed that chronic electrical stimulation had positive metabolic trophic impacts for brain stem auditory nuclei (Wong-Riley et al., 1981), an outcome later extended by Dr. Leake and colleagues (Leake et al., 1999, 2013). On the basis of these more recent studies, we now know that the forms of chronic stimulation applied in CIs generally have largely neutral or positive consequences for both the auditory nerve and brain stem. By the early 1980s, we believed that we had shown that there was an acceptably low risk associated with implantation and chronic intracochlear stimulation of the appliances necessary for implementing our specific models of multichannel CIs (Merzenich, 1979; Merzenich et al., 1980a).

Addressing issues related to controlling patterns of auditory nerve inputs
In our earliest studies, we assumed that multichannel cochlear implants were most likely to be implemented in one of two forms (Merzenich, 1975; Merzenich et al., 1977; Merzenich et al., 1979; Merzenich et al., 1980a). A first model form would simulate, as faithfully as possible, the normal distributed patterns of auditory nerve inputs delivered from the inner ear to the brainstem, achieved via patterned multi-site electrical stimulation. Nelson Kiang and colleagues had already described what they imagined would be "required" to implement an 8th nerve multichannel CI response simulation strategy (Kiang and Moxon, 1972). These important Eaton-Peabody Laboratory studies were richly augmented by experiments documenting the detailed representations of simple and complex acoustic stimuli (i.e., aural speech) across the nerve array in the squirrel monkey, by Jerzy Rose and colleagues at the University of Wisconsin (Brugge et al., 1969). The second CI model form was based on the channel vocoder's capacity to represent speech in a highly information-reduced form, as described earlier. Channel vocoders were developed by the communication industry in the 1940s–1970s to determine the minimal patterns of acoustic information requisite for representing speech in an intelligible form. In the simplest version, the "frequency-channel vocoder", 11 bandpass channels modulating channel-centered oscillators, driven as a function of the integrated power of inputs in each frequency band, accurately encoded intelligible aural speech (Flanagan, 1965; Schroeder, 2004). Again, in a key vocoder variation, if the lowest-frequency oscillator was allowed to rove in frequency to track the frequency of highest energy in time for speech frequencies below about 800 Hz, only 5 higher-frequency bandpass channels were required to achieve full intelligibility.
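The frequency-channel vocoder scheme described above reduces to a short analysis-resynthesis loop. The Python below is a minimal, illustrative sketch, not any historical implementation; the function name, log-spaced band edges, and 10 ms envelope frame are my own assumptions. It splits a signal into bands, measures each band's short-time power envelope, and uses those envelopes to modulate band-centered sinusoidal oscillators.

```python
import numpy as np

def channel_vocoder(x, fs, n_channels=11, f_lo=100.0, f_hi=4000.0):
    """Crude frequency-channel vocoder sketch (names/parameters illustrative).

    Splits the input into log-spaced bands via FFT masking, measures each
    band's envelope (smoothed power over ~10 ms frames), and uses those
    envelopes to modulate band-centered sinusoidal oscillators -- the scheme
    Flanagan (1965) describes as requiring ~11 channels for intelligibility.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    t = np.arange(len(x)) / fs
    frame = max(1, int(0.010 * fs))                    # ~10 ms envelope frames
    y = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        # isolate one band by zeroing all other FFT bins
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0),
                            n=len(x))
        # envelope = square root of smoothed band power
        env = np.sqrt(np.convolve(band ** 2, np.ones(frame) / frame,
                                  mode="same"))
        fc = np.sqrt(lo * hi)                          # band-centered oscillator
        y += env * np.sin(2 * np.pi * fc * t)
    return y
```

In principle, with enough channels, such envelope-modulated oscillators carry the speech information that the vocoder literature cited above quantifies; the sketch only illustrates the information-reduction step, not a usable speech processor.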
It is important to note that in this era, auditory neuroscientists had argued that the fine spatiotemporal structure of auditory nerve array inputs contributed critically to the coding and representation of complex acoustic signals like aural speech. There were obviously great limitations in our ability to simulate those fine details by patterned electrical shocking of the auditory nerve array. On the other hand, the generation of systematically varying location-specific sound percepts achieved by selectively exciting separated nerve array sectors was believed to be achievable. That was, hypothetically, all that would be required to implement a vocoder-model device capable of generating intelligible speech.
In either implementation, we required a device that could relatively selectively excite fibers across a several-octave span (>2 cm) of the auditory nerve array: either to support the detailed (albeit cruder) representation of distributed speech-representing activity to achieve first-level auditory input pattern simulation, or to support the evocation of clear channel-specific percepts (ideally sharply tonal: the equivalent of the tuned oscillator outputs evoked by each channel of a speech-representing channel vocoder) that we hypothesized to be important for implementing vocoder-encoded speech.
It might be noted that the Stanford otolaryngologist Blair Simmons, intelligently advised by Jim Flanagan, had preceded us down this logical path (Simmons, 1966). To achieve the discrete channel-specific percepts required to implement a hypothetical vocoder-based CI, Simmons investigated the feasibility of direct auditory nerve micro-stimulation, specifically to determine whether, and how, he could achieve the kind of localized stimulation he imagined to be required for the realization of this class of device.
It might also be noted that we progressively realized, beginning early on, that stimulation channels would have to be isolated and current-controlled, and because of channel interference and electrode erosion considerations, long-surviving multi-channel devices would probably have to implement digital (brief pulsatile) stimulation coding schemes.
To determine how best to control patterned electrical stimulation to implement either coding model, we constructed cochlear electrodes that could be moved to variable locations within the scala tympani in an acute electrophysiological study. We then applied a single-unit response mapping strategy, recording in the inferior colliculus (IC), to define electrical 'tuning curves' generated by those cochlear electrodes (Merzenich et al., 1974a, 1974b, 1977, 1979; Merzenich, 1975). That electrode mapping, conducted with Mark White and Mariam Reid, was relatively easily achieved by taking advantage of the fact that there is an aligned map of sound frequency for both ears that can be easily reconstructed across the tonotopic axis of the central nucleus of the IC. With an electrode in place in a dead ear, we could systematically define electrical response threshold as a function of cochlear position relative to the contralateral normal ear, thereby defining "electrically-evoked tuning curves". In these studies, we quickly determined that it was possible to relatively discretely excite the auditory nerve array from within the scala tympani with well-placed (modiolar-facing, narrowly-separated, transversely-oriented) electrodes, using bipolar stimulation. While an excitation peak was also recorded with monopolar stimulation (i.e., for stimulation of an implanted electrode against a remote ground), the electrically evoked response pattern was more diffuse, and more highly divergent from normal tone-specific response patterns, with broad overlap in engaged sectors of the IC recorded for even widely separated cochlear electrodes. These findings were later supported by psychoacoustical studies mapping channel interference patterns in implanted patients conducted by Robert Shannon (Shannon, 1983a), and by electrode mapping studies in human patients, again using a channel-interference auditory brain stem response recording strategy.
It might be noted that later studies conducted at UCSF in a guinea pig model, using the same electrode mapping strategy but applying a scala tympani-filling silicone rubber insert carrying the electrodes (again to 'ideally' locate stimulus contacts under the bony modiolus), recorded discrete monopolar stimulation-evoked response tuning that was equivalent to that achieved with bipolar stimulation (Schoenecker et al., 2012). The origin of these sharp differences with our studies in the 1970s and early 1980s is unclear. Recording methods were not identical, but a more important difference may have been the small movable inserts used in our earlier studies, which were very different from the scala-filling rubber (insulating) appliances applied in this more contemporary experiment.
As a secondary goal of these studies, we also 'mapped' excitation patterns generated by Michelson's "railroad track electrode". There, we confirmed that, at least in our animal (cat) model, stimulation with his electrode engaged the spiral ganglion in the basal cochlea (Merzenich et al., 1974a, 1974b; Merzenich, 1975). That finding supported an important presumption also supported by vocoder and other speech model studies: while we had to distribute the representation of sound frequency across auditory nerve fibers spanning several octaves, it was probably not crucial to engage apical-turn ganglion cells normally representing the lowest-frequency octaves in order to represent the low-frequency range of speech inputs in our model CI devices.
As noted earlier, an obvious difference between electrically-evoked and acoustically-evoked inferior colliculus (IC) responses was evident in temporally evoked patterns of activity (Merzenich et al., 1974a; Merzenich, 1975; Merzenich et al., 1979; Merzenich, 1983a; White, 1984). Electrical stimuli evoked more-simultaneous nerve array excitation. The temporal dispersions (stochastic firing and systematic time-/phase-shifted responding as a function of cochlear location) of acoustically-evoked activity along the cochlear spiral were then thought to be key aspects of the encoding of both spectral and temporal acoustic features important for speech feature representation. If that was indeed the case, it posed a problem: although we recorded some dispersion at the level of the inferior colliculus, responses evoked by these simple forms of stimulation did not accurately simulate the fine structure of normal inputs. At the same time, we were clearly able to drive auditory nerve fibers to high rates, recording significant phase-locking up to the level of the IC when we applied modulated analog stimuli and rapid biphasic pulse trains up to 700–800 Hz (Merzenich et al., 1974a, 1974b; Merzenich, 1975; Merzenich, 1983a; White, 1984). These findings were consistent with observations from Michelson's patients and with later studies of temporal responses in our CI patients (Shannon, 1983b). That outcome indicated that it should be possible to represent relatively refined temporal features of acoustic inputs, potentially including stochastically and phase-ordered activation, via a CI.
From these studies, we concluded that a vocoder-model device, and possibly a crude form of a simulation-based prosthesis (or perhaps some still more effective combination of the two), was achievable (Merzenich, 1983a). We also interpreted these outcomes as demonstrating that electrode placement would be important for multi-channel implementation. If there was an 'ideal' location for stimulating electrodes, how could we be assured that it would be reliably achieved in the deaf patient, especially with insertions 20–30 mm into the scala tympani? And how could those long insertions be achieved without endangering the fragile basilar membrane or bony modiolus?

Developing long 'mechanically-ideally-positioned' CI electrodes
To develop commercially deliverable electrode array models with which we could achieve this 'ideal' stimulation in human patients, we recruited two mechanicians (Charles Byers and Stephen Rebscher) to work with our team anatomist (Patricia Leake) and surgeon (Robert Schindler) to address these key safety-function issues. Seven British ENT fellows also contributed to this research over a 6–7 year development period. This team collectively helped create and validate the use of a novel electrode form in which the mechanical properties of the rubber/multi-wire multi-electrode insert (Fig. 1c) were controlled by a central 'rib' formed by stacking flattened electrode leads down the array's backbone (Merzenich et al., 1980b; Loeb et al., 1983). Schindler designed special tools that facilitated safe, reliable electrode array delivery. With his refined implantation strategies, these spiral-form multichannel CI electrodes could be straightened in surgery and implanted to a depth at which their 'memory rib' assured that they non-destructively tracked the modiolar wall, with relaxation to a near-ideal location achieved at full insertion (see 35, for a review that also documents later UCSF research and development on CI electrode array designs).

Addressing a secondary (but potentially crucial) problem: creating a replaceable cochlear implant
In the course of this development, we came to the conclusion that long-surviving cochlear implants must have replaceable electronics. That reasoning was based on four premises. First, our chronic animal studies evaluating electrode safety indicated that replacement of a 20–30 mm long, connective tissue-ensheathed electrode array in the scala tympani could be a difficult procedure, with (we concluded) a likely low probability of full success (but see 36). Second, the mean time between failures for this class of implanted electronic devices was about 30 years in the mid-to-late 1970s. That meant that there was a high probability of re-acquired deafness in any patient implanted before about their 50th birthday. Surely, we told ourselves, there must be a way to replace failed electronics without a requirement for replacing a scala tympani electrode array that would be far less prone to failure. Third, it would be highly advantageous to initiate our multichannel studies using a percutaneous connector or otherwise-transparent receiving-stimulating appliance, to evaluate alternative electronics model forms. Ideally, these temporary through-the-skin or fully-transparent implanted appliances could later be replaced, via a connector, with a now-research-optimized implant. Fourth, it was a virtual certainty that the quality of implanted electronic encoder-stimulators would improve over the next decade or two of CI R&D. Surely, we told ourselves, it would be a great advantage to deploy a device with upgradable electronics not requiring electrode replacement, to implement such upgrades as they became available.
Led again by Charles Byers and Stephen Rebscher, but now also assisted by a new team member, Dr. Gerald Loeb (on sabbatical from the Neuroprostheses Contract Program of the NINDS/NIH), we created an implantable connector that was designed, like marine and other salt-water-environment cable connectors (then operating reliably in some marine uses for more than a century), to be safely sustained in connection by bringing a rubber-sealed appliance under very high closing pressures (Merzenich et al., 1980b; Loeb et al., 1983). Foreseeing future applications in deaf children, as we created these models we also designed and tested strategies for electrode lead extension that could enable non-destructive device adaptation to the growing head.
Device survival testing in pig and benchtop environments indicated that this strategy could be safely and effectively applied in our fully implemented multichannel cochlear implant models (Fig. 1).

Fig. 1. This 'transparent' 4-channel form, constructed in our UCSF laboratories, was designed to implement any conceivable coding/patterned-stimulation model, as a practical step toward defining 'ideal' CI implementation forms. b) Electronics (left) were mounted over an implantable connector, with the 16-lead electrode pad (right) layered beneath the electronics capsule. c) The electrode array was designed for a 22 mm insertion into the scala tympani; a central rib (not seen), formed by the flattened-lead-wire stack and dominating the mechanical-memory properties of the insert, enabled accurate control of electrode contact placement within the scala. At the time of implantation of these devices, our goal was to rapidly refine device designs given the functional versatility of this limited device form, then replace it with an 'ideal' multiplexed 8-bipolar-channel device via the implanted connector. d) The re-engineered, now-commercialized Advanced Bionics (AB) "Clarion" (incorporating many important changes contributed by Joseph Schulman, Gerald Loeb, Blake Wilson, and others). As noted earlier, the "Clarion" AB CI, implanted without a connector, was designed to be versatile in application, subject to potentially great, progressive refinement without any requirement for implant (receiver/stimulator/electrode) replacement.

Addressing other issues related to practical questions about prosthesis design
Many other practical issues had to be considered on the path to designing the encoding, transmitting, receiving and current-controlling electrode-driving electronics needed to achieve appropriately patterned stimulation of the auditory nerve array. An engineering team guided by Drs. Loeb and White and supported by David Patterson, Lindsay Vurek and Peter Zimmerman constructed external vocoder-based coding and transmitting stages, receiving electronics, a low-noise multi-channel compression system, and current-controlled electrode drivers that allowed for isolated-channel control (Merzenich et al., 1980b; Loeb et al., 1983; Loeb, 1985; White, 1986; Loeb, 1990). These models initially drove an 8-bipolar-channel array accessed via a temporary percutaneous connector, then were engineered as 4-channel CI implants (see Fig. 1a–c), and later were re-engineered by Advanced Bionics as 8–16 channel CI devices (Fig. 1d).
With our isolated-channel current-controlled analog stimulation models applied with "peak picking" stimulation (differentially engaging the 8th nerve array as a function of spectral energy) to reduce channel interference, we recorded higher speech reception abilities than had been recorded with single-channel devices (White et al., 1990; Merzenich, 1985; Schindler et al., 1987; Schindler and Kessler, 1989). Still, we realized that devices would ultimately have to be implemented using digital pulsatile stimulation to further reduce channel interference. Don Eddington had already noted that interleaved pulsing could be used to minimize it. In studies largely led by Mark White, we had directly documented interference parametrically in both animal and human models (Merzenich et al., 1974b; Shannon, 1983a; White, 1978; Schoenecker et al., 2012; White, 1984; White et al., 1984), and we well understood the conditions for minimizing or avoiding it. Those and other observations led to our development, in 1984–86, of an initial model of an interleaved pulse-processor (IPP) CI, designed to drive an 8-channel device operating at rates up to about 600 Hz. Blake Wilson was in close communication with us over this period in the mid-1980s; with the support of a new Neuroprosthesis Contract Program contract charged with comparing and iteratively idealizing speech coding strategies, he rapidly extended and expanded our studies, ultimately directly demonstrating the positive speech reception benefits (presumably driving more-natural, more-stochastic, and less interference-contaminated auditory nerve inputs) that resulted from using very brief pulses applied on an interleaved schedule at very high stimulus rates. His initial demonstration of the comparative advantages of IPP came from studies conducted in our first multichannel CI devices (the UCSF-Storz Cochlear Implant; see Fig. 1b, and below) (Merzenich et al., 1974a).
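The central constraint behind interleaved pulsatile stimulation (no two channels ever driven at the same instant) can be captured in a few lines. The sketch below is purely illustrative; the function name, microsecond units, and the fixed round-robin slot assignment are my own assumptions, not the actual IPP design.

```python
def interleaved_pulse_schedule(n_channels, rate_hz, pulse_width_us):
    """Sketch of an interleaved pulse-processor (IPP) timing plan (illustrative).

    Each channel is pulsed at `rate_hz`, but channel pulses are offset so that
    no two channels are ever driven simultaneously -- the scheme used to
    minimize inter-channel field interactions. Returns per-channel pulse onset
    times (in microseconds) within one stimulation cycle, or raises if the
    pulses cannot all fit in the cycle.
    """
    cycle_us = 1e6 / rate_hz
    # each biphasic pulse occupies two phases of `pulse_width_us`
    slot_us = 2 * pulse_width_us
    if n_channels * slot_us > cycle_us:
        raise ValueError("pulses do not fit: lower the rate or narrow the pulses")
    return [ch * slot_us for ch in range(n_channels)]
```

For example, 8 channels of 50 µs biphasic pulses at 600 Hz occupy 800 µs of each ~1667 µs cycle, so a non-overlapping schedule exists; raising the rate or widening the pulses eventually makes interleaving impossible, which is one reason very brief pulses enable the high rates noted above.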
It might be noted that we made several other discoveries in this research and development period that bore important implications for modern multi-channel CI designs. For example, we saw early on that the impedance between electrodes was not a simple or reliable function of the distance between them (White, 1978). At that time (and for several applied CI devices, well out into the future), common (non-isolated) voltage sources were used in implants, in part because it was assumed that electrode currents would flow to nearby electrodes in a systematic way. We realized that to control stimulation, isolated current-controlled stimulation would be required (Merzenich et al., 1980b; Loeb et al., 1983; Loeb, 1985). All commercial devices now use isolated, current-controlled drivers.
In studies led by Robert Shannon, Elmer Owens and Marlene Ochs, we conducted parametric studies that provided an early foundation for understanding CI-generated sound, phonemic, and speech perception (Shannon, 1981, 1983a, 1985; White et al., 1990; Merzenich, 1983b, 1985). These studies documented interference and successive-signal masking effects that were highly informative for CI processor designs. Among many other findings, we were the first group to show, in animal and then human models, that we could "steer" stimulation to create "virtual" CI channels. For example, we could differentially bias adjacent channels and thereby systematically shift the boundaries between two confusable phonemes, or roll phonemic representations back and forth across two adjacent electrodes (White et al., 1990). Strategies to improve CI function are now appropriately strongly focused on generating more continuously coded representations by using more sophisticated forms of current steering, necessarily requiring bipolar (or tri- or quadripolar) stimulus control.
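The 'virtual channel' effect reduces to a simple current-splitting rule between two simultaneously driven neighboring electrodes. The sketch below is illustrative only; the function name and microampere units are my assumptions, not a device interface.

```python
def steer_virtual_channel(total_current_uA, alpha):
    """Sketch of dual-electrode current steering (illustrative, not a device API).

    Splitting a fixed total current between two adjacent, simultaneously
    driven electrodes shifts the perceived place of stimulation continuously
    between them: alpha=0 puts all current on electrode A, alpha=1 puts all
    of it on electrode B, and intermediate values evoke 'virtual' channels
    at places in between.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    i_a = (1.0 - alpha) * total_current_uA  # current on electrode A
    i_b = alpha * total_current_uA          # current on electrode B
    return i_a, i_b
```

Sweeping alpha from 0 to 1 moves the stimulation centroid continuously from one electrode to the other, which is the basis of the phoneme-boundary shifts described above.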
We constructed low-noise multichannel compressors that are a key component of all modern CIs (Loeb, 1985), as well as transcutaneous information- and power-transmission system stages like those applied in related forms in all modern CIs. We first documented loudness summation effects that provided the basis for adjusting channel levels in multichannel CIs. Our studies first showed that progressively higher pulse-rate stimulation resulted in progressively more stochastic neural firing, and showed that this resulted in an expansion of the electrically-evoked dynamic range for individual CI channels (White, 1984). Our group first applied auditory brain stem response (ABR) recording to "map" electrode tuning in human patients. We were the first to observe that CI application could be used to actively suppress chronic tinnitus (McKerrow et al., 1991). We documented multiple aspects of the relationships between patterned stimulation and electrically evoked perceptions, systematically relating them to normal sound-evoked hearing, and showing that they challenged predominant contemporary hypotheses attributing speech feature representation to 8th-nerve auditory signal and aural speech coding alone. Up to that point, almost every aspect of coding was believed to be accounted for by simple aspects of coding on the 8th nerve itself. Robert Shannon's studies were particularly important for showing that phenomena recorded in implant patients (for example, successive-signal masking) could only be accounted for by central auditory nervous system processes.

Addressing issues of practical engineering; constructing a fully implantable prosthesis
By 1980-81, all of the prerequisites for implementing a practical multichannel implant appeared to be in place. We had demonstrated that we could safely introduce electrodes into deaf ears for long epochs, and limited evidence indicated that electrical stimulation was tolerated by the spiral ganglion. We had shown that we could discretely excite different 8th-nerve sectors, potentially to instantiate either a crude patterned-input auditory coding multichannel CI model, or a vocoder CI model. We had constructed electrode arrays that we had shown could be safely inserted the requisite long distances into the scala tympani, with what we concluded were acceptably low risks for inducing trauma to the basilar membrane or bony modiolus, and with fully inserted electrodes achieving near-optimal placements for the most controlled patterned stimulation. We had tested and validated use of an implantable connector that we believed would assure that we could replace percutaneous cables with implanted receiving electronics of progressively increasing sophistication, without a requirement for replacing effectively implanted cochlear stimulating arrays. On these bases, our UCSF team designed, constructed and implanted the first true (i.e., isolated-channel, simultaneously-engaged) multi-channel CI, in 1981 (Rebscher et al., 2008; Jackler et al., 1989; Loeb, 1985, 1990; Merzenich, 1985).

UCSF-Storz and UCSF-Minimed (Advanced Bionics) CIs
To further achieve our practical goals and to transfer the benefits of a UCSF CI to the public in need, we asked the UCSF technology transfer officers to assist us in identifying a medical device manufacturer to commercialize our implant model. They ultimately agreed to license our technology to the Storz Instruments Company. Storz was initially unwilling to sign an agreement with the University because, over the prior decade, we had chosen not to file patents on the many aspects of cochlear implant design that had originated in our laboratory, arguing as a team that the fruits of our research should be in the public domain. In our attempt to help UCSF come to an agreement with a manufacturer whose large investment would be at risk without patent protection, we realized that we had acted in error. We quickly wrote two patents covering a small body of still-undisclosed research; with that limited protection, combined with an earlier Michelson patent, Storz initiated commercialization of UCSF CI devices.
With limited engineering resources (the initial UCSF-Storz implants were actually produced in our UCSF laboratories; see Fig. 1a–c), Storz undertook the manufacture of 8-channel bipolar electrode arrays and, ultimately, implantable 4-channel devices (meant to be on the path to 2nd-generation 8-channel CIs; see Fig. 1b). Gerald Loeb led our team's device production and technology transfer processes. Electronic designs of coders and stimulus drivers for second-generation multichannel prostheses were based on outcomes achieved in patients initially implanted with percutaneous connectors. Once those designs were constructed, the percutaneous connectors were removed, and implanted receivers and electrode-driving electronics were mounted onto the earlier-described implantable connector, without disturbing the implanted electrode arrays.
After a significant period of effective use, in which, as noted earlier, we documented speech reception abilities that substantially exceeded those achieved with single-channel devices (Loeb, 1985, 1990; White et al., 1990; Merzenich, 1985; Schindler et al., 1987; Schindler and Kessler, 1989), to our dismay, implantable connectors began to fail because of unanticipated migration of ions across the narrow connector gaps, despite the high (>200 atm) connector closure pressures.
Across this UCSF-Storz device application period, the UCSF research engineering team was focused on implementing a next-generation CI model: an 8-channel digital CI using an interleaved pulse processing (IPP) coding strategy. In 1984, we described this device development strategy in detail in an NIH program project grant. Unfortunately, this grant received critical technical reviews that specifically questioned the wisdom of IPP coding! As Storz struggled with their connector failure issues, and with our failure to obtain sustaining support for this speech processor/stimulator model refinement and testing, our own IPP speech processing model was never applied in a human subject. Fortunately, Blake Wilson had been awarded a contract to continue and extend studies of coding refinement, including an IPP model, and quickly showed the functional superiority of IPP over earlier stimulation/speech coding strategies (Wilson et al., 1988). In 1986-87, Robert Schindler and Gerald Loeb worked with the UCSF licensing office to come to an agreement with a second medical device company, Minimed, which created the Advanced Bionics Corporation to re-engineer the UCSF implant into a more advanced and reliable CI model. The product of that collaboration, richly supported by continuing progress made in the evolution of interleaved pulse processors by Blake Wilson and his colleague Charles Finley (and with consultant assistance from Mark White), was Advanced Bionics' Clarion cochlear implant (Fig. 1d). This isolated-channel, current-controlled instrument flexibly supported 8-16 channels of "ideally positioned" bipolar or monopolar electrodes, minimized uncontrolled channel interactions by implementing an IPP strategy shown to generate more-stochastic auditory nerve inputs, applied a vocoder front end to control channel-by-channel signaling, and implemented the advanced form of multichannel compression that Mark White had originally designed to reduce noise interference.
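The timing idea behind interleaved pulse processing can be sketched simply: pulses are delivered round-robin across channels so that no two electrodes are ever driven at the same instant, avoiding simultaneous field interactions between channels. The following is an illustrative sketch of that scheduling principle only; the function name and the numeric values are assumptions, not any historical processor's parameters.

```python
# Sketch of an interleaved (round-robin, non-simultaneous) pulse schedule.
# Only one electrode carries a pulse at any instant, which is the core
# channel-interaction-avoiding property of IPP/CIS-style strategies.
# All numbers here are illustrative.

def interleaved_schedule(n_channels, pulses_per_channel, pulse_us):
    """Return a list of (start_time_us, channel) pairs with no overlap."""
    events = []
    t = 0
    for _cycle in range(pulses_per_channel):
        for ch in range(n_channels):   # round-robin across channels
            events.append((t, ch))
            t += pulse_us              # next pulse starts after this one ends
    return events

sched = interleaved_schedule(n_channels=8, pulses_per_channel=2, pulse_us=50)
# Every event occupies its own time slot, so no two channels ever
# stimulate simultaneously.
```

Because channels never overlap in time, their electric fields never sum instantaneously in the cochlea; this is the property that made interleaved strategies functionally superior to simultaneous analog stimulation in the Wilson et al. (1988) comparisons described above.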
Early Clarion CIs were notable for the great versatility that the Advanced Bionics team built into their design. These CI models supported very high stimulus rates; supported either 'monopolar' (common-ground) or bipolar stimulation; supported any practical analog or digital electrical stimulation form; and richly supported 'virtual channel' encoding models, among other 'transparent' design features. In effect, this versatility solved the problem that our implantable connector strategy had been designed to address: as CI designs advanced over at least the next several decades, almost any superior coding/stimulation scheme could be implemented without alteration of the implant itself.
All of these features were (with very important extensions contributed by the Wilson team and, later, by the Advanced Bionics re-engineering effort) the product of an outstanding team of UCSF engineers, neuroscientists, psychologists, technical specialists and surgeons, all working, to the best of their ability, for more than a decade in the laboratories of the University of California at San Francisco to bring the gift of speech understanding back to profoundly deaf individuals.

Goals achieved; lessons learned
As I have related elsewhere (Merzenich, 2011, 2013), working on the creation of effective multiple-channel CIs was a wonderful way to spend a research decade. I know that the Lasker Awardees share in the good feelings that come from our greater hearing research community's overall grand success in this arena. We participants on the UCSF research team were thrilled to have had a role in developing these devices. Recovering hearing in the profoundly deaf is, after all, the stuff of miracles.
For me, this research opened up other important research avenues. By the time I began focusing more on other areas of research in the late 1980s, I realized that the primary initial epoch of invention for CI development was over. CIs were becoming a practical reality, applied to overcome profound hearing loss in what would ultimately be millions of individuals in need of help.
I also realized that cochlear implants were more a 'miracle' of brain plasticity than of our device engineering. Encoding schemes that were very different from those applied in our device models (for example, the low-pulse-rate, round-robin, multiple-formant-tracking vocoder scheme applied in the Australian implant) resulted in the re-acquisition of intelligible speech that was almost as good as, or as good as, that recorded with the UCSF-Clarion device (although devices are now all closer in design to our original conceptual models). Large differences in intracochlear stimulation appliances also resulted in only small differences in patient outcomes. In a very real sense, the brain simply did not care. If the engineers applied devices that devolved acoustic inputs in place and time to an adequate degree, the plastic brain effectively recoded them, ultimately seamlessly relating these very different, new auditory system "front ends" to all of the brain's historically recorded information acquired before hearing was lost in the first place. In the end, patient after patient, independent of device, said that the speech they heard, now substantially understandable, "sounded completely normal," that is, as it did before they lost their hearing. This remarkable re-establishment of speech understanding, usage and identity remains the single most compelling adult (and later infant) brain plasticity experiment conducted in human subjects up to this time. My UCSF colleagues and other scientists across the world have subsequently conducted many studies documenting different aspects of this neurological remodeling achieved via CI use. The remarkable re-acquisition of speech understanding with CI use was a major impetus in my own professional life for re-directing my attention in an equally fruitful later career, pursuing the source of this apparent miracle: adult brain plasticity itself.
Of course, the great achievement of cochlear implant research has been the production of devices, now in three successfully commercialized forms, that have the power to establish hearing in the congenitally deaf child, or to restore useful hearing in an individual who is socially isolated by a severe, acquired deafness. We UCSF team members congratulate the three Lasker Award recipients, and acknowledge their seminal contributions in providing this great gift to so many individuals in such great need.