The Artificial Retina Project

Fred Hapgood

For decades SF writers for both print and screen have explored the consequences of a general integration of the two great information processing systems on the planet -- brains and computers. Everyone from Philip K. Dick to the writers of The Matrix has sensed that the moment is coming and that the implications will be momentous, ranging from general artificial intelligence to cloning minds to distributed consciousness to virtual immortality (because people would be able to both back up their minds and download them into new bodies) to such dystopian possibilities as thought spam.

All these writers begin their work after the transition, the point when the lines demarcating the various species of intelligence, natural, artificial, animal, copied, and augmented, have evaporated, or nearly so. (SF author Vernor Vinge refers to this point as 'the Singularity'.) None get into how the integration was actually managed. Still, given that this is almost certainly the most important engineering project of the 21st century, the one that seems likely to change the nature of the species right to its root forever, it is hard to read these stories without wondering: What was the first big step? Who took it? What were they thinking?

In this universe, this timeline, one good candidate might be the day in 1986 when Dr. Joseph Rizzo, a neurosurgeon at Massachusetts General Hospital in Boston, cut a nerve in the retina of a rabbit and was immediately struck by two thoughts. Neither then nor now did Rizzo give any thought to singularity fantasies like mind cloning; his focus was the diseases of the retina and their therapies. For two years he had been exploring the hypothesis that one possible fix might lie in transplanting cells from the retina of a healthy animal to a diseased one. If that operation were done in just the right way, those cells might take root and start to function, restoring sight. Progress had been slow to nonexistent, but the surgeon had persevered. After all, what alternatives did he have?

The central character in this story is the retina, which stretches around the back half of the eyeball, where it discharges two responsibilities: measuring light intensities at about a hundred million points and compressing those measurements so they can be carried to the brain more efficiently. It does the latter by recognizing basic features like borders, edges, contours, and peripheral movements, and sending the brain a description of the visual field in terms of those "primitives," e.g., "Contour #234 starts at point #456 and runs to #789," or something like that. (Creating these summaries may sound simpler than it is: we have been trying to do as well with our own machines for decades without success.) Upon receiving these messages the cortex unpacks and reconstitutes the images as necessary in order for the scene to appear in our consciousness.
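
It is tempting to make that compression idea concrete. The Python below is a deliberately crude caricature, invented for illustration and reflecting nothing about how the retina (or the project's hardware) actually works: it reduces a toy "visual field" to a short list of edge primitives rather than a full map of light intensities. The function name and threshold are assumptions of the sketch.

    import numpy as np

    def summarize_frame(frame, threshold=0.2):
        # Caricature of the retina's compression step: instead of shipping
        # every light measurement to the brain, emit a short list of
        # "primitives" -- here, just the points where brightness jumps sharply.
        row_jumps = np.abs(np.diff(frame, axis=0))   # vertical brightness changes
        col_jumps = np.abs(np.diff(frame, axis=1))   # horizontal brightness changes
        edges = [("edge", (int(r), int(c))) for r, c in np.argwhere(row_jumps > threshold)]
        edges += [("edge", (int(r), int(c))) for r, c in np.argwhere(col_jumps > threshold)]
        return edges   # the "Contour #234 starts at point #456..." idea, crudely

    # A tiny synthetic visual field: dark on the left, bright on the right.
    frame = np.zeros((8, 8))
    frame[:, 4:] = 1.0
    print(summarize_frame(frame))   # a handful of edge primitives, not 64 raw pixels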

On the day in question Rizzo, though working on rabbits, was thinking about the human diseases he wanted the technique to address. One (there were several) was retinitis pigmentosa (RP), the leading cause of inherited blindness. Despite its name, RP is more a disease of the rods and cones than the whole retina; the neural tissue, the processing circuitry, typically remains normal for a long time after onset of the disease (though it deteriorates eventually). At one point the surgeon snipped an axon. It was healthy, because the rabbit was healthy. It struck him that even if the rabbit had had RP the nerve would still be healthy, because RP doesn't attack axons. That meant that even if cell transplantation could be made to work, it would involve cutting perfectly good nerves. And that is an act for which good neurosurgeons, Rizzo very much included, have a visceral, almost genetic, aversion.

As it happens, the circuitry responsible for the summarizing function described above lies on top of the photosensors, not under them, where you might expect to find it. (The photosensors are not occluded because the processing circuitry is transparent; the retina is one of the few objects the retina itself cannot see.) The rationale usually given for this architecture is that rods and cones need blood, and lots of it; placing the circuitry over the receptors allows an industrial-strength circulatory infrastructure unobstructed access from underneath. So if you picture the retina as a seven-story building (because it has, at its most complex point, seven tissue layers), the vascular plumbing would be in the basement, the rods and cones on floor one, and the processing cells stacked on the floors above. A million tiny wires (axons) carry the final results of all this computation up out onto the "roof," converging towards a hole in its middle. When they reach the hole they bend down into it, wrap themselves into a cable, drop through the retina, and run out rearwards to the brain. This is of course the optic nerve. It was the axons crossing the roof whose sacrifice Rizzo was regretting.

Then a second thought exploded behind those regrets: all the circuitry was right there, directly under his hand. He was looking right at it. If the neurons, the ganglia and their axons, were healthy, why not stimulate them directly? "Oh my God," he recalls saying.

Over the next year Rizzo did his best to interest a number of engineers in building his device. He imagined it as having two parts: a camera-computer-transmitter that would be worn outside, as a pair of glasses, and a receiver-electrode array that would be permanently implanted inside the eye. The camera would look out at the world, transform the visual field into a signal, and beam it to the implant. The implant would receive, reprocess, and distribute the signal over a 2-D array of electrodes, which would broadcast an electric field, exciting the ganglion cells nearby.
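
A rough way to picture that division of labor is as a two-stage pipeline. The sketch below is only a schematic of the idea as described here; the grid size, threshold, and function names are hypothetical, not drawn from the project's actual designs.

    import numpy as np

    def external_unit(image, grid=(10, 10)):
        # Stand-in for the glasses-mounted camera/computer/transmitter:
        # reduce the visual field to one intensity value per electrode,
        # which would then be beamed to the implant.
        h, w = image.shape
        gh, gw = grid
        trimmed = image[:h - h % gh, :w - w % gw]
        return trimmed.reshape(gh, trimmed.shape[0] // gh,
                               gw, trimmed.shape[1] // gw).mean(axis=(1, 3))

    def implant(signal, threshold=0.5):
        # Stand-in for the in-eye receiver/electrode array: decide which
        # electrodes fire, each exciting the ganglion cells near it.
        return signal > threshold   # boolean firing pattern, one entry per electrode

    scene = np.random.rand(120, 160)          # whatever the camera happens to see
    firing = implant(external_unit(scene))    # the pattern the retina gets driven with
    print(firing.astype(int))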

Probably none of the engineers he spoke with used the actual words "piece of cake," but with one exception they were of the opinion that Rizzo had the right idea at the right time. People had already done something that sounded similar for deaf patients, with a technology called cochlear implants. What could be so hard about extending the idea to vision? It would be like going from radio to television, at a time when television had already been invented. "They had no doubt" the device could be built, the surgeon recalls.

On the other hand, no one actually offered to take the project on, a reluctance that in retrospect might have raised some flags. But their reactions plus Rizzo's native optimism kept him knocking on doors, until eventually he knocked on the door of John Wyatt, a professor of electrical engineering at MIT. Here he met the exception.

At the time Wyatt was one of the few electrical engineers in the country who happened to have hands-on lab experience with retinal research. As a graduate student at Berkeley, he had developed an interest in how the retina works as a signal processing system and had done work in the lab of Frank Werblin, one of the top retinal neurophysiologists in the country. "In the end," he says, "the amount of hard work per unit idea was more than I could imagine for my career, and I went back and did my doctorate in EE."

This background left him with a deep appreciation of the engineering quality of natural computational and communications systems, and an abiding sense of just how far man-made technology is from equaling it. Devices made by humans require far more energy, and therefore dissipate more heat, than anything delivering the comparable function in nature. That incompetence imposes drastic limitations on what a man-made neural prosthesis can accomplish. For all our boasting about miniaturization, function for function human machines still weigh much more than a layer of cells. What would happen when a patient with an implant bolted to the inside of his or her eyeball ran up and down a flight of stairs, week after week, month after month? Wouldn't it just rip out? Or tear the whole retina off the eyeball? The spontaneous detachment of retinas is already a health problem, even without weighing them down with a backpack of electronics.

Finally, suppose you could wave a magic wand and solve all these problems. Current technology could not hope to replicate the output of more than the tiniest fraction of the receptors. Would that be enough? Would all this effort result in anything that blind people actually found useful? Wyatt found no encouragement in the success of cochlear implants, which was in any case fairly limited: the retinal environment was both mechanically and chemically far more demanding than the inner ear, and the data-management side was far more complex.

No doubt it makes sense that the engineers who had been so sanguine had not gotten involved. (The observation that nothing is impossible for the man who doesn't have to do it himself probably dates to the pyramids.) If so, then perhaps it also makes sense that it was Wyatt, who saw nothing but immense problems ahead, who signed up for the actual engineering. "I couldn't see any clear reason why it was totally impossible," he said dryly several years later. Over the next few years he and Rizzo hammered out the project infrastructure -- grants, staff, etc. -- while organizing the preliminary research addressing Wyatt's concerns.

The immediate question was whether enough power could be pushed into the eye to do anything useful without cooking it. Signals in the natural retina are carried by individual nerves to specific ganglion cells, one at a time, exactly as needed. This level of nanotechnology is way out of reach for humans, which meant that Rizzo's electrodes had to work wirelessly, using electric fields to broadcast a sphere of energy to the neighborhood. This is a much less efficient system; fields affect everything, which means that everything sucks up a bit of their power, which means that you have to pipe in more energy and therefore dissipate more heat to get a given result. In the worst -- but perfectly possible -- case, so much heat would be released in the name of getting a useful work product that the cells doing the work would die.
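
The worry is easier to see as back-of-the-envelope arithmetic. Every number in the sketch below is a placeholder invented for illustration, not a measurement from the project, but the scaling is the point: the power budget grows with the number of electrodes and the frame rate, and with a broadcast field most of it ends up as heat in the surrounding tissue.

    # Illustrative scaling only -- every value here is a made-up placeholder.
    n_electrodes        = 100    # hypothetical array size
    frames_per_second   = 30
    energy_per_pulse_uJ = 1.0    # hypothetical energy cost per electrode firing

    # A wired, one-nerve-at-a-time system would deliver most of that energy
    # to its target. A broadcast field is absorbed by everything nearby, so
    # only a small fraction does useful work; the rest becomes heat.
    useful_fraction = 0.01       # hypothetical

    power_mW = n_electrodes * frames_per_second * energy_per_pulse_uJ / 1000.0
    heat_mW  = power_mW * (1 - useful_fraction)
    print(f"power delivered: {power_mW:.1f} mW, of which ~{heat_mW:.2f} mW becomes heat")

Whether a figure like that falls above or below what retinal tissue can tolerate was exactly the open question.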

No published research bore on the question. There was no protocol, no lab equipment specialized to the task, and no one to turn to with experience in that type of lab work. Rizzo did know an experimenter at the Southern College of Optometry in Memphis with a reputation for exceptionally meticulous research in retinal physiology. That was probably as close as he was going to get, so in the late '80s what was now the Boston Retinal Implant Project (BRIP) cut a contract with Dr. Ralph Jensen to figure out if it had a future.

Jensen would start his day by cutting a bit of retina, including the optic nerve, out of a rabbit's eye, laying it in a nutrient bath, and placing a monitoring electrode as near to the nerve as possible. He would then point a tightly focused light at the excised retina and move it around until he got some activity in the monitoring electrode. This told him the light had found the particular ganglion cell attached to the axon running closest to the monitoring electrode. He then inserted a stimulating electrode at that position and turned it on. Once the setup was working, Jensen would adjust the intensity of the field, move the stimulator around, and record the effects.

The work needed all his meticulousness. Retinas are very sticky, so if the electrode got a bit too close it would glue itself to the underlying tissue. (This was called "dragging the retina".) On the other hand, if it got just a bit too far from the tissue the connection would break, and once a connection was broken it was very hard to be certain of reconnecting with the same axon. Jensen worked on a vibration isolation table, a working surface that automatically senses and compensates for ambient motions, but even so it was usually just a matter of time before any given experimental session collapsed.

Unfortunately, the more the physiologist worked, the more complications he kept finding. Doubling the stimulation charge did not double the activation rate. A charge that gave you an effect in cell A wouldn't necessarily give you an effect in cell B. Proximity to a stimulating electrode didn't guarantee a response. How a ganglion cell processed stimulation changed over time. Each new suite of data revealed new complications; each complication required more experiments to illuminate its borders. In the event, thousands of measurements, requiring years of laboratory time, were necessary before Wyatt felt he had enough for even a preliminary set of calculations.

The news these calculations revealed was mixed: stimulating a cell with an electric field took a depressing amount of power (several hundred times more than the retina needs to perform the same function its own way), but it did seem as though a prosthesis might be able to drive a very modest array of electrodes at thirty frames a second without doing too much damage. In real life inefficiencies and various glitches would keep the levels of resolution even lower, but at least the implant project continued to pass the Wyatt test: it wasn't obviously, patently, impossible.

Jensen's results had cleared the way for Rizzo and Wyatt to grapple with by far the most mysterious difference between biological and man- made computation, which is that biology can make what we call consciousness, while human engineers don't have the faintest idea how to begin to do the same. Consciousness, defined as awareness of what are sometimes called "looks and feels," is different from any other entity in the universe in that nobody has even a bad idea as to where it comes from or how to generate it. Nobody knows how to test for its presence, let alone measure its magnitudes or classify its variations, if any, the way you can test for heat or gravity or color. Nor is there any solution on the horizon: No one knows what a consciousness meter would even look like, so there is no target, no development path to follow.

For decades working engineers had derided philosophers trying to think about this problem for wasting their time on "metaphysics," but the integration project has forced them into the trenches. Whatever an engineer might want to do on this frontier -- build an artificial hand, a third arm, an eye that can see in the infrared, a memory prosthesis, or patch a cellphone into the nervous system so users can enjoy all the pleasures of telepathy -- that device will have to register its signal outputs in consciousness, which means the designing engineers have to lose the attitude and grapple with this mystery.

Specifically, in the context of this project, imagine that the implant camera is looking at a T shape out in the world, perhaps a tree, or a beam and lintel. Imagine further that the electrode array is a rudimentary 3x3 (just to keep things simple). The implant device controller looks at the camera output and orders electrodes 1, 2, 3, 5, and 8 to fire, coarsely replicating the 'T' shape. The retina records this pattern of stimulation, compresses it, and mails its summary to the cortex. What does the patient see?
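
Before getting to that question, it helps to spell out the controller's half of the job concretely. The sketch below assumes the nine electrodes are numbered 1 through 9, left to right and top to bottom -- a numbering convention adopted here purely for illustration.

    # Hypothetical firing pattern for the 'T' example on a 3x3 array,
    # electrodes numbered 1..9 left to right, top to bottom.
    T_PATTERN = [
        [1, 1, 1],   # electrodes 1, 2, 3 -- the top bar of the T
        [0, 1, 0],   # electrode 5        -- the stem
        [0, 1, 0],   # electrode 8        -- the stem
    ]

    def electrodes_to_fire(pattern):
        # Return the 1-based electrode numbers the controller would order to fire.
        return [r * 3 + c + 1
                for r, row in enumerate(pattern)
                for c, on in enumerate(row) if on]

    print(electrodes_to_fire(T_PATTERN))   # -> [1, 2, 3, 5, 8]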

What eventually emerges in consciousness depends less on what the retina saw than on what the cortex, after consulting its vast memory stores and powers of inference and imagination, thinks the retina saw. The cortex edits, and it edits in depth. So one possibility was that the cortex would decide the retina had lost it and junk the report entirely, in which case the patient would see nothing. Recall that the team was using electric fields as their stimulating agents, as opposed to wired connections, and fields by their nature broadcast over volumes. A given field might easily stimulate several ganglion cells at once. If you think of the natural retina as playing one note at a time, an artificial retina played chords. (Wyatt likes to push the analogy further; he talks about "playing a piano with medicine balls".) A cortex might well dismiss a report composed of such "chords" as so much random static. (Indeed, the retina might do the same, depending on how strict its in-house quality control might be.)

Even if the cortex accepted the retina's email, it might jump to any conclusion at all about what the retina meant: in the case of the example, it might show a tree or a giant T, or something totally mystifying, something with no obvious "Tness" at all. If the patient did see a T, it might be organized out of single lines or stripes or solid bars or something weirder, maybe a crucifix once glimpsed in childhood. Anything was possible. Rizzo had shown that if you monitored a rabbit's visual cortex (through electrodes taped to its skull) while its retina was stimulated, the cortex did spike. So apparently the rabbit was seeing something, but what?

The researchers couldn't infer the percepts from a circuit diagram. There were no formulas that could be used to calculate them. There was no way of simulating the experiment on a computer (no matter how intelligent a computer seems to be, you can't learn about consciousness from it, since there is no way of excluding the possibility that it is "just" a smart zombie claiming to be conscious). You couldn't ask the rabbits. Medical research has a general prejudice against cutting up people who don't need it just to advance basic research, but here there was no alternative.

By the late nineties the Project had recruited five persons with RP (and one cancer patient whose otherwise normal eye was scheduled for removal) who were willing to let their eyes be cut open and have instruments, including a 4x5 electrode array, placed on or next to their retinas. All the researchers devoutly hoped to find a direct, consistent, one-to-one correlation between the pattern of stimulation and the contents of the visual field: one stimulation, one point of light in consciousness; two stimulations, two points of light with the same orientation, like a pair of headlights. Results like these would snap the working methodology together in a second: use patterns of stimulations to "spell out" the significant visual elements in the landscape, the way LEDs spell out graphics in a sign.

The time available for the sessions involving the first two volunteers (you couldn't keep poking in a person's eye forever) was consumed in learning how to do the experiments, the first of their kind. The third subject, a 68-year-old woman, had been legally blind for fifteen years. She was exposed to 50 single-stimulation trials spread over a bit more than two hours. In 33 of those trials she saw nothing. In various other cases she saw small clusters of two or three faint, flashing images; long straight lines; and "something real dim". She saw a dot or point precisely once. The other subjects had slightly more encouraging results: Volunteer five, a 28-year-old man who had been blind for eleven years, had 178 trials and reported a percept 109 times, of which 38 were dots. The champ was Volunteer six, a 47-year-old man who had been legally blind for 15 years: 88 trials, 59 percepts, 50 dots. More complicated patterns of electrode activation were tried but the results made no sense. Certainly nobody saw any geometry, no T's or L's or X's.

The bad news, as Wyatt pointed out, was that these results had no common thread. They didn't give you any handle on how to do better, how to improve the odds of a patient's seeing what they should be seeing. (There were differences among the volunteers that might account for the differences in the results, but going down that path just shrank the pool of beneficiaries.) The good news was that the bad news wasn't worse. At least the subjects had seen some detail (the worst possible outcome would have been a blank undifferentiated field), including some dots, and the results, while baffling, were generally reproducible in a given subject.

Rizzo, as always, was optimistic. You had to look at this from the cortex's point of view: for years the retina had been silent, and then one day strange, garbled messages started flooding in, signals like nothing the cortex had seen even when the system was normal, only to stop after an hour or two. How would you perform under those conditions? The neurosurgeon's experience had left him with great faith in the adaptability of the brain; give it a consistent connection between a percept and the outside world and you could trust it to do the rest. All the cortex needed was time to figure out what was going on and then it would take charge of its own development. Patients with cochlear implants kept improving their word recognition for years. It might take as long to get a grip on the potential of retinal implants.

Rizzo's plan was reasonable enough in the abstract, but it posed a pretty miserable prospect from a project management perspective. Academic engineering research is not the kind of engineering you read about in the paper, like building a new bridge or a consumer product; typically it involves a small team, often just a senior professor and two or three grad students, checking out the basic questions surrounding some innovative approach to an important problem and then writing their results up in a proof-of-concept article. If they do a physical demo it is usually a last-minute lashup that no one really expects to work (the corollary of Murphy's Law is that demos never work). The zillions of mind-numbing details that have to be addressed in making something real, otherwise known as implementation drek, are usually abandoned to a big company with lots of resources or a startup organized to do nothing but make that one idea work.

And of course the reality is that even those companies don't get all the details right initially. Typically the first few versions of a new technology are crude and fragile and error- and accident-prone. It is only after a few back-and-forths with large numbers of real users that the product begins to work as envisioned by that professor and his students when they began work years ago. For every critic who says that companies abandon unfinished products to the market out of greed and impatience, an engineer can be found to argue that you just can't do it any other way, that the complexity and variety of the experiences you get back from the market are essential to making a design of any complexity work dependably.

This model is about as basic to innovation as a paved road is for a car. However, it was not available here. If the BRIP was going to put machines into people's eyes for long periods of time (years!), those machines had to be very close to perfect the very first time they went into a subject. You couldn't keep cutting into people's eyes to tweak this or that, especially not if the object was to see how the brain adapted to extended exposures. That meant that not only did the experimental implants have to be built to a much higher order of engineering than usual, they had to be built better than most new medical devices made by big companies that are actually put out for sale in the real world.

And that level of perfection had to be achieved just to get a grasp on an issue -- the nature of the link between electrode stimulation and consciousness -- that was probably central to the basic design. The device had to be built to the highest imaginable standards of engineering without a full understanding of the device's function. It was a little -- admittedly this overstates it -- like asking a blind man not just to paint a picture, but to paint one of the best pictures ever painted.

Still, there was no real doubt that BRIP would steam on, despite the huge task ahead. For one thing, by the end of the '90s several other retinal prosthesis projects had started around the world and elsewhere in the US. In 1993 there had been two presentations on the topic at the annual meeting of the Association for Research in Vision and Ophthalmology; in 1999 there were 33. Not all of these were retinal implants: some contemplated passing stimulating electrodes right through the skull directly into the visual cortex, bypassing the retina and optic nerve entirely. (Cortical implants have the theoretical advantage of being able to address any and all diseases of the visual system, but the practical disadvantage of requiring the physical disruption of a very poorly understood and highly critical set of tissues. Besides, it is not at all clear how people will get around in daily life -- dancing, jogging, bouncing over potholes -- with needles in their brain.)

With all these other groups suiting up, the retinal prosthesis business was beginning to feel like a nascent discipline, very much including a discipline's competitive dynamics. Dropping out now would mean something worse than that the goal wouldn't be reached; it would mean someone else would reach it (if anyone did). Neither MIT nor Mass General got to where they are by hiring people comfortable with outcomes like that. Besides, kicking up the standard of academic research engineering ten notches was hard, but there was no obvious reason why it was flatly impossible. The Project continued to squeak past the Wyatt test.

So Wyatt and Rizzo turned themselves into grant-writing machines. The project staffed up, turning itself into a multi-million-dollar-a-year enterprise, with a staff of dozens drawn from eight institutions, including the renowned Cornell Nanofabrication Facility. Today five people work in administration and management alone, not counting the principals. (Major funders include the VA, the NSF, and the Foundation Fighting Blindness.)

Since the BRIP experiments, research on brain-computer integration has flowered. Work is underway throughout the world on memory and sensory prostheses, thought-actuated machinery, and large-scale neural simulations. It is no longer a subject for comment when a university opens a center for neural engineering. Still, the problem in front of the Boston researchers (of course they are not the only engineers tackling it) remains the core of the integration project.

If you step up just one level of generality you can characterize the work here as building not a retinal prosthesis but something far more ambitious: a consciousness driver. Drivers are programs that take general commands from a number of sources and translate them into instructions that work for specific devices. A printer driver can take a command to print "A" and calculate how much ink has to be sprayed by this particular brand and make of printer to form that specific letter in a specific font on a piece of paper. A display driver does the same for computer screens. The consciousness driver would work on the same principle: give it the name of a color or a shape, and it would know how to make that color or shape appear in that person's "inner eye".
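
In software terms the analogy looks something like the sketch below: a common driver interface, with the device-specific translation left as the unsolved part. The class names are invented for illustration; nothing remotely like a working consciousness driver exists.

    from abc import ABC, abstractmethod

    class Driver(ABC):
        # The general pattern: take a high-level request and translate it
        # into instructions that work for one specific device.
        @abstractmethod
        def render(self, request):
            ...

    class PrinterDriver(Driver):
        def render(self, request):
            # e.g. turn 'print "A" in 12-point Courier' into the ink-spraying
            # commands this particular make and model of printer understands.
            return f"ink commands for {request!r} on this printer"

    class ConsciousnessDriver(Driver):
        def render(self, request):
            # The goal: turn 'show a red T, upper left' into the stimulation
            # pattern that makes that shape appear in this person's inner eye.
            # Nobody knows how to fill this in -- that is the whole problem.
            raise NotImplementedError("no theory of consciousness to implement")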

The consciousness driver is critical because almost all neurocomputing applications will have to report to consciousness eventually. There is very little point in having a memory prosthesis or an unconventional sense organ or an extra limb if you cannot be conscious of its contents and behaviors. Granted, you could always channel its outputs to an external display and then take the data back in through a conventional sensory modality, like sight or hearing, but in some cases those conventional senses are not 100% intact and in others we want to go beyond them. You can see where the technology is headed in SF movies like The Terminator, where alphanumeric information is shown scrolling through the visual field of the characters. A prosthesis for people with retinal diseases is one important application of the consciousness driver, but there will be a host of them.

Since there is no theory of consciousness to guide the effort, building a consciousness driver is going to take field work. Since we can never be sure that either animals or computers are ever conscious, most of that field work will have to be done on humans. Since consciousness seems to be interactive and plastic, that field work needs to be long term. The current agenda of BRIP is to build a device that can sit inside the head, maintaining a working connection to the nervous system, for very long periods. It would allow the researchers to build up a set of basic data about how consciousness works: how it changes over time; how it interacts with experience, imagination, and memory; the degree to which consciousness can control itself; how much we can expect from what sorts of training; and the range of individual variation among consciousnesses.

You can be both optimistic and pessimistic about the prospects. Compared with other engineers working on grand challenges (like quantum computing) BRIP's researchers have the advantage of having brilliant solutions to all their problems right in front of them, in the biology; their disadvantage is that at present no one knows enough to follow this example. We are like 19th century telephone engineers employed by time travellers to figure out how to hook up their technology to the 21st century internet.

For instance, as BRIP member Luke Theogarajan, perhaps the only experienced circuit designer in the world with a concentration in organic chemistry, points out, nature communicates with neurotransmitters, not electrons. The virtue of this approach is that it allows you to mix messages addressed to many kinds of cells (there are dozens of neurotransmitters) in the same channel, whereas electrons affect everything indiscriminately. But this system depends on exact dosages; even very slight errors can and do make subjects mentally ill. You can't just add chemicals; you either have to replicate the whole system, including all the regulation and uptake pieces, or find some midway approach more compatible with what we are capable of doing now. The project has made enough progress walking this line that it hopes to have a functioning wireless test device implanted in an animal by the spring of 2005. "This will enable animal tests and perhaps further design revisions while we seek FDA approval for experimental implantation in a number of blind human volunteers a year or two thereafter," Wyatt says.

A pessimist might point out that the success of all these integration projects, including the prosthesis, depends on the differences between brains being minimal, so that a separate driver doesn't have to be written for every single person. We know these differences exist; they are the core of our individuality. If we find that every brain talks to its own body really differently, and that only a few of us, if any, are flexible enough to learn another "language," the entire integration agenda will come to a screeching halt. We would have a simple reason why these projects won't work. They would finally have flunked the Wyatt test.