About Murray Sachs

Murray Sachs

Sachs went to MIT for his undergraduate and graduate career, both in electrical engineering, receiving his PhD in 1966. He worked in the emerging field of bioengineering, in particular modeling the way sound is received, transmitted, encoded, and comprehended between the ear and the brain, via the neurons of the auditory nerve. His advisors and mentors included Moise Goldstein, Nelson Kiang, Dick Johns, Bill Peake, and Jerry Lettvin. After two years in the Navy doing work on submarine-to-submarine acoustic communication and underwater speech perception, and a year at Cambridge for a postdoc doing frequency analysis of the retina with John Robson and Jack Nachmias, he went to Johns Hopkins as a professor, where he has spent the rest of his career. His research includes recognizing that the ear comprehends sound by doing a sort of Fourier analysis on incoming signals; figuring out the mechanics of the inner ear; figuring out how sound and space are encoded into discharge patterns in the auditory nerve; and trying to understand how speech is encoded in the auditory nerve. His modeling of how the cochlear nucleus works (along with work by his colleague Eric Young) contributed to the development of cochlear implants; his current work on figuring out how speech encoding goes wrong in damaged ears may help to build hearing aids that (for example) work in noisy rooms. He has also worked on how birds process birdsong, in their ears and cortex.

Sachs became director of the Johns Hopkins biomedical engineering department in 1991. He thinks the Hopkins program is special because its long-term collaboration with the medical school gives it a fundamentally biological rather than engineering approach, and because its small number of graduate students per faculty member allows for a close, mentoring relationship. The department is a leader in physiological modeling and in biomedical instrumentation and imaging, and he is trying to direct it toward becoming a leader in interpreting images, pattern recognition, computational modeling of physiological systems, tissue engineering, and drug delivery. He believes the field in general is growing and that the trend is toward more imaging. He recommends that an oral interview be conducted with someone at the Whitaker Foundation, which has done much to develop the field nationally; at Hopkins, its support is turning the department into the Whitaker Institute for Biomedical Engineering.

About the Interview

Interview #394 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, Rutgers - the State University, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows: Murray Sachs, an oral history conducted in 2000 by Frederik Nebeker, IEEE History Center, Rutgers University, New Brunswick, NJ, USA.

Interview

Childhood, family, and educational background

Nebeker:I see that you were born in 1940 in St. Louis. Would you tell me a little about your childhood?

Sachs:My first memory must be from the age of fifteen months: Pearl Harbor. I swear I remember it.

Nebeker:Is that right?

Sachs:I have a vivid memory of sitting in my parents’ car in front of the Christmas displays in downtown St. Louis hearing it on the radio. I’m sure it could not be true, yet the memory is crystal clear. I grew up in what is now the inner city of St. Louis, near Washington University, until I was twelve years old. Then we moved to University City, where I finished high school.

Nebeker:Was your father associated with the university?

Sachs:No. My father was a lawyer by training, but always said he was the one who should have been an engineer. His father forced him to go to law school. He apparently passed the bar exam before he went to law school – while he was fixing radios to make a living. This was before the Second World War. He became partners with a man who I believe was Jimmy Hoffa’s lawyer. However my father did not go back to law after the war. He just couldn’t stand it. He started a woodworking factory that made cabinets and showcases and so on for hardware stores. He put me through college by driving trucks and running this company. As a young teenager I didn’t realize how much he was doing for me.

Nebeker:Did you have siblings?

Sachs:No, I am an only child. My mother is still alive and is ninety-one years old. My father died about fifteen years ago.

Nebeker:Were you interested in science gadgets as a youngster?

Sachs:I was not a technology geek.

Nebeker:Had your father continued his hobby of radios?

Sachs:No, but he could make anything work. When he was very old and had long been disabled he would tell me how to fix the washing machine in the basement. He used me as his hands. When I entered my sophomore year of high school, my advisor told me that because of Sputnik and other things I had to go to MIT. I graduated high school in 1958, and I felt a lot of pressure to make it to MIT. I felt I had no choice. I went to MIT. I worked for a construction company during the summers in high school and after my first year at MIT. If I had had the guts, I think I would have become an architect. I didn’t think I had enough creativity. Looking at it now, I’m not sure I’d want to be doing that.

Electrical engineering studies at MIT

Nebeker:What was your major at MIT?

Sachs:Electrical Engineering.

Nebeker:How did you make that choice?

Sachs:I think I made the choice the same way most undergraduates at Johns Hopkins make the choice: by chance. I had built Heathkits and things, so I knew a little about it. I took some classes at Northwestern the summer before my senior year in high school through an NSF program. We did a lot of chemical engineering, which I hated.

Nebeker:That one was ruled out.

Sachs:That one was ruled out. Unlike most of my classmates at MIT, I didn’t go into physics. Most went into physics, and then after their freshman year they went into the business school. I went into electrical engineering. I’m glad I did. I was there for eight years, through all of my undergraduate and graduate training.

I met my wife the summer before my sophomore year at MIT. She was from St. Louis and I visited her when I was home for Christmas. Her brother-in-law, who was a philosopher psychoanalyst, told me I ought to be studying the brain because the brain was an electronic computer. That sounded pretty good to me and I decided that I would become the best electrical engineer I could and then study the brain. I went through four years at MIT with this plan in mind.

Nebeker:Were you able to take any classes relevant to that?

Sachs:No, I stuck to my decision to become an electrical engineer. It’s an indication of the change that’s happened in biomedical engineering over the last thirty or forty years. I hadn’t thought about it in these terms. There was no such thing as biomedical engineering, although there was a group in MIT of which David Geselowitz was a part. He was on sabbatical while I was there. I started out to do a master’s thesis in some kind of radar detection theory.

Nebeker:It was still not biomedical engineering at all.

Sachs:No. I was working with Harry Van Trees. I loved Harry. He was a wonderful teacher. I think that was his name. Anyway, I got bored with it. I wanted to do something less strictly mathematical. My graduate advisor was Moise Goldstein. Dick Johns may have mentioned Moise Goldstein to you.

Nebeker:Yes, certainly.

Sachs:I have had advantages in my career. I had three of the most wonderful mentors imaginable. One of these was Moise Goldstein. He was ultimately both my master’s thesis advisor and the person who brought me to Johns Hopkins. The second was Nelson Kiang, a very tough but wonderful doctoral thesis advisor and a very dear friend of mine to this day. Thirdly, Dick Johns was my mentor and my predecessor as chairman of this department. He was my mentor, friend and leadership role model. I also had wonderful students. With that combination of mentors and students, I was hard-pressed to fail.

Nebeker:How did you come in contact with Moise Goldstein?

Sachs:He was randomly assigned as my graduate school advisor.

Nebeker:That was lucky.

Graduate research in bioengineering; Communications Biophysics Laboratory at MIT

Sachs:I got tired of doing radar kinds of things very early on – in my first semester of graduate school. I asked Moise what I could do in bioengineering – though it wasn’t called that at that time. The Communications Biophysics Laboratory at MIT had been started years before by Walter Rosenblith and was doing what we would now call bioengineering. It focused largely on processes of hearing, applying engineering, communication theory, and such computers as existed at that time to solving biological problems. There was a very famous paper in the neurophysiological literature called “What the Frog’s Eye Tells the Frog’s Brain” by Jerry Lettvin and some other people. Moise Goldstein had worked on the first computer for averaging biological transients and wrote a paper with Larry Frishkopf and Bob Capranica, both of whom were at Bell Labs. Capranica was also a graduate student with me at MIT. I think their paper, published in the Proceedings of the IEEE, was titled “What the Frog’s Ear Tells the Frog’s Brain,” something like that. It was a very influential paper because it was the first paper to demonstrate that some animals have communication systems – both receivers and transmitters – that are attuned to each other. The frog’s auditory system is closely attuned to its vocal communication system. Bullfrogs from neighboring counties have different croaks and different inner ears that filter the croaks. I went to work with Moise Goldstein studying that phenomenon in the frog and did that for my master’s thesis. That is how I got into this business.

Nebeker:Was that in MIT’s Building 20?

Sachs:That’s a dagger through my heart.

Nebeker:That it is gone?

Sachs:Yes. I’m on two visiting committees at MIT right now and every time I go back there it feels so sad.

Nebeker:Tell me a little of why you regret the passing of that building.

Sachs:It was a wonderful culture. We were all piled together. People used to say that the best advantage of being in Building 20 was that you didn’t have to look at it from the outside, but we had people like Bill Siebert, Tom Weiss, Bill Peake, Walter Rosenblith, David Geselowitz and Dennis Klatt, all of whom were giants in communication theory.

Nebeker:What groups there were doing what we would call bioengineering at that time? Was your work with Moise Goldstein of that type?

Sachs:That was the only one with which I was involved. There were others, but we were very narrowly focused. Until I became chairman of biomedical engineering here – except for the fact that I have been immersed for thirty years in this very broad department – I considered myself more of a neuroscientist than a bioengineer. I have learned a lot in the last ten years, so I feel a little more like a bioengineer now.

There was a speech group headed by Ken Stevens that I associate with bioengineering. Jerry Lettvin was in RLE at the time studying vision. Lettvin was about as good an engineer as anyone, but a biologist by training. I knew about the neurological kinds of things.

Nebeker:That lab was a good atmosphere.

Sachs:It was a very good atmosphere. Larry Rabiner, who I think you interviewed, was a student of my graduate school roommate. Thinking about it now I realize that there was a good deal of activity in biomedical instrumentation.

Nebeker:Tell me more about Goldstein’s lab.

Sachs:It was all part of the Communications Biophysics Lab, which was one big laboratory. We had to sign up to get time to work in the soundproof chambers. Moise took me on when I first became a graduate student, I guess in the late winter of 1962. In the spring of 1963, right after Spring Break, I ran into a friend who told me Moise was leaving. I said, “He can’t leave. He’s my master’s thesis advisor.” Sure enough, he was leaving to go to Johns Hopkins, which I knew very little about. My memory was that a high school friend of mine who had gone there became a dentist, so I assumed they had a dental school. However, I don’t think Hopkins ever had a dental school; he had obviously been an undergraduate there before going on to dental school.

Moise Goldstein offered me the opportunity to come to Johns Hopkins with him, but I didn’t go. I wanted to stay at MIT. He remained my thesis advisor from a distance. I really struggled through my thesis, and he was extremely supportive. One of the things I had to do – and I was the first in the Communications Biophysics Lab to do this – was to record from single neurons. It was a technology that was just coming into play, and Jerry Lettvin had dotted the i. No one in the lab knew how to make electrodes or how to stabilize preparations.

Nebeker:Dick Johns was telling me about making his own microelectrodes.

Sachs:Right.

Nebeker:Is that something that you had to do?

Sachs:Yes. Things just didn’t work at first. In retrospect, it was a wonderful experience for me. One forgets the pain. In fact, I remember that I was up all night working on one of the first experiments the night John F. Kennedy was killed.

Nebeker:That was November of 1963.

Sachs:I was walking from the lab to my office and was exhilarated by the fact that this first experiment had finally worked, but then heard that Kennedy was assassinated. Like everyone else, I remember exactly where I was when I heard the news.

One of my problems was that I didn’t know how to make electrodes, so I went down to Jerry Lettvin’s lab. Do you know much about Jerry Lettvin?

Nebeker:No, I don’t.

Sachs:He is a legend. Everyone who knew him has a Jerry Lettvin story. My story is that while in the laboratory making electrodes he said, “Come watch me polish an arrowhead on the tip of this one-micron electrode.” Sure enough, he was polishing arrowheads.

I had to learn how to make electrodes, how to make the electronics work and how to do all the neurophysiology.

Nebeker:With what kind of animal were you working?

Sachs:Frogs. I made a lot of classical mistakes and got a lot of help from Larry Frishkopf at Bell Labs. I went down to see how things were done in a professional lab and came away with the right idea about how to do things. With Moise’s help I managed to get through my master’s thesis and even got a publication out of it. In those days, electrical engineering graduate students did not often publish.

Nebeker:Had you gotten useful measurements out of that auditory nerve?

Nebeker:Had you already decided that you were going to go for a Ph.D.?

Sachs:I had decided that a long time before. I was very narrowly focused and a type-A personality – probably since kindergarten. One time my parents came to some kind of parents’ weekend at MIT when I was an undergraduate, and one of my advisors told my mother, “Everybody wants to stay at MIT, but only the best can stay here.” That goaded me on. That was the sort of pressure one worked under there, and I appreciate that to this day.

I did my doctoral thesis in the laboratory of Nelson Kiang, who was one of the pioneers in auditory physiology. He started a laboratory across the river in Boston. It’s attached to the Mass General though not part of it.

Nebeker:Did he have a lab in Building 20?

Sachs:He had worked with Moise in Building 20 and then was asked to start this lab across the river. By the time I got to him, he was working solely in what was called the Peabody Laboratory. He had a very active laboratory and wrote a very famous monograph on the coding of information in the auditory nerve.

Nebeker:Did you talk with him about your master’s work?

Sachs:Not until just before I went to work with him as a Ph.D. student. I didn’t really know him until then. A young graduate student and very good friend of mine, Russ Pfeiffer, was without question one of the brightest people ever in auditory science. He was doing interesting things in Nelson’s lab and seemed excited about it, so I asked Nelson if I could work in the lab. He apparently verified that I had succeeded in doing the frog recordings and said, “Come on over.” Thus began a very interesting period of my history. Unfortunately, Russ Pfeiffer was killed in an automobile accident about twenty years ago.

Nelson Kiang was written up in the Boston Globe many years ago with a full-page feature about how he needs only two or three hours of sleep a night. It was a feature on people that don’t need much sleep. When I first went to work in his lab he was on an enforced vacation. His wife had said, “You must be in bed with me at 8 o’clock every morning.” However she didn’t say where he had to be at 4 o’clock in the morning, and that was the time of my first meeting with him and when we laid out the subject of my thesis.

Nebeker:Are you also a person who needs very little sleep?

Sachs:Not as little as he. I can get up very early in the morning. I used to be a marathon runner. I am now a walking example of biomedical engineering. I’ve got an artificial hip. I used to get up at 5 o’clock in the morning and run ten or fifteen miles before going to work. The hours didn’t bother me. Doing neurophysiology in those days meant staying up all night anyway. I think part of the reason for the 4:00 a.m. meeting was a test to see how I functioned.

For my doctoral thesis I recorded from single auditory nerve fibers – primary afferent neurons in the auditory nerve – and studied a phenomenon we called two-tone inhibition. It is very much like center-surround inhibition in the retina. I remember my very first experiment. I guess I was doomed – or privileged – to continue to find myself in situations where I had to be the first to do things. This was the first time the LINC computer was ever used in the Peabody Laboratory.

Nebeker:Is that right?

Sachs:It was the first time. The lab had been undergoing renovations and I had to take the painters’ drop cloths off the LINC to do my experiment. I remember my excitement when I recorded from my first single unit. I called Nelson Kiang at 1 o’clock in the morning and told him, “It works, it works, it works.”

Nebeker:For what purpose were you using the LINC?

Sachs:Data processing.

Nebeker:Were you digitizing the signal?

Sachs:We used Schmitt triggers. We recorded spike trains and used a threshold device to trigger the computer to record the times of occurrence of the spikes, and then used those to construct interval and post-stimulus histograms. It was the signal processing frontier at the time. Everything was stored on LINC tapes. Those became DECtapes when DEC took over the industry. I remember transferring the LINC tapes to IBM mag tapes and schlepping them across the river to the TX-0 computer in Building 20. Then it took all night to process the data from one experiment – which would take about 20 minutes today. We had to read in all the programs with paper tape and control the computer with toggle switches. And things would go wrong. It was like boot camp; there was such a spirit of camaraderie, and a sense that we were working on the edge of something that was going to really blossom.
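[Editor's note: the pipeline Sachs describes – threshold-detected spike times turned into interval and post-stimulus-time histograms – can be sketched in a few lines today. This is a hypothetical modern reconstruction for illustration; the function names and parameters are the editor's, not the original LINC code.]

```python
import numpy as np

def pst_histogram(spike_times, stimulus_onsets, window=0.05, bin_width=0.001):
    """Post-stimulus-time histogram: spike counts binned by latency
    after each stimulus onset (times in seconds)."""
    spike_times = np.asarray(spike_times)
    latencies = []
    for onset in stimulus_onsets:
        rel = spike_times - onset
        latencies.extend(rel[(rel >= 0) & (rel < window)])
    bins = np.arange(0.0, window + bin_width, bin_width)
    counts, _ = np.histogram(latencies, bins=bins)
    return counts, bins

def interval_histogram(spike_times, max_interval=0.02, bin_width=0.0005):
    """Inter-spike-interval histogram: distribution of times between
    successive spikes."""
    intervals = np.diff(np.sort(np.asarray(spike_times)))
    bins = np.arange(0.0, max_interval + bin_width, bin_width)
    counts, _ = np.histogram(intervals, bins=bins)
    return counts, bins
```

The interval histogram summarizes the timing between successive spikes, while the post-stimulus-time histogram shows how firing probability evolves after each stimulus onset – the same two summaries built from Schmitt-trigger event times in the lab.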

Nebeker:That LINC computer is mentioned as an important part of the development of signal processing.

Sachs:That’s right, though I was not in on the ground floor. I was not one of the people who actually went to St. Louis and put them together. However I was the first one in this laboratory to use it. I remain proud of that.

Nebeker:Were you also working with frogs at that time?

Sachs:No, we were working on cats. Through Nelson’s work cats became the animal of choice for auditory neurophysiology. They are a hardy animal, able to survive long surgical procedures, and their ears are very tough in the sense of being able to deal with anesthesia for a long time. Our lab still uses cats. Many other laboratories have gone to gerbils because they are cheap and working with them is easy. However, they last only three or four hours – eight at best – in experimental situations. The experiments I was doing took three or four days, so we had to use a hardy animal.

Nebeker:How did your work go on the Ph.D. research?

Sachs:It went very well, but was not easy. Nelson Kiang was a hard taskmaster and is known as such. He will never admit this, but he would lock me away in his little office for hours at a time telling me I was smarter than 99 percent of the people in the world but that around MIT that was nothing. We developed a very close relationship.

Graduate examinations and presentations

Sachs:I had to give a seminar on my master’s thesis after my second year. The seminars at the Communications Biophysics Lab were literally called “bloody,” because they were so rough on the speakers. The speaker the week before, a visiting professor from UCLA, walked out in the middle of his seminar. He couldn’t take it anymore. Walter Rosenblith came up to me before my seminar and said, “We like to push people against the wall and see how they bounce back.”

Nebeker:I’ll bet that made you feel good right before the seminar.

Sachs:I got up to give my seminar and said, “This is my master’s thesis. It was done under the supervision of Moise Goldstein.” Then Rosenblith said, “That’s Dr. Goldstein to you.” Walter is also a very good friend of mine. From the back of the room Moise yelled, “In Baltimore they call me Mugsy.” I said, “This is a picture of the frog’s ear, and this is some structure in the frog’s ear.” Walter said, “How big is it?” I said, “I don’t know.” He asked me no further questions. Obviously, admitting that I didn’t know something was what he wanted to hear. I flew through. It was a wonderful experience and a lesson I have carried with me all my life.

I had my graduate oral exam on my thesis during the thesis proposal stage. That exam was about three hours. Have you ever met Bill Siebert? Bill is a very well known signal processing person and a legend at MIT.

Nebeker:I think I have met him.

Sachs:Bill was on my committee. He and I argued for three hours about how I would do the thesis. He had his point of view and I had my point of view. Nelson Kiang sat there doodling. He swears this never happened, but I could draw a picture of it. He was doodling a ship with SS Sack – sinking. At the end of the exam Siebert said, “Well, you think you know how you want to do it and I think I know how you should do it. Do it your way and we’ll see who’s right.” I spent about a year locked away in his office trying to understand how he thought I should do the thesis.

Then the time for my thesis exam came around. I had done a bunch of calculations for the thesis. I had made a model of the process that I studied. Interestingly, in terms of bioengineering, about three-fourths of the way through my thesis I had done what could have qualified as a neurophysiological thesis. Bill Peake was my official thesis advisor. He said, “This is not an electrical engineering thesis unless you do some modeling.” That had a profound effect on my life. I did models that I have pursued ever since and still use today.

Nebeker:Looking just at that phenomenon in the cats’ hearing, was modeling the process valuable for the field?

Sachs:Yes, and I will say so with complete immodesty. At that time it was strictly a phenomenological model, and really it still is, more or less. It describes some very complicated data very nicely. I’m reading a book called The Elegant Universe. I don’t want to compare myself with quantum and relativistic physicists, but quantum mechanics is curve-fitting in many ways. This was curve-fitting.

Nebeker:I understand you proposed a way of processing auditory inputs that would actually duplicate occurrences. Is that right?

Sachs:Yes. It was such a simple model that the calculations were all done with a slide rule. We didn’t use computers to do simulations and computations in those days; we used them for analyzing data. I did, at least. The night before my thesis defense, I decided to redo the computations. I was in the lab and my slide rule was at home, so I borrowed someone else’s slide rule. I discovered that all the computations were wrong. I have never felt more panic in my entire life. It turned out that I was using the borrowed slide rule incorrectly and the computations were right. My wife had been in a serious accident and had not driven a car for five years. I was so shaken that I couldn’t drive. She had to drive me home that night.

The next day I had my defense – and it was public. It was the first public defense this group had, and the room was filled with psychologists from other parts of RLE and the Psychology Department of MIT. They asked questions, but Bill Siebert waved them off saying, “That’s trivial. Here’s the answer.” I barely had to answer a question. After the public exam I was supposed to have the grilling where the thesis committee would really stick it to me behind closed doors. Bill Peake was the chairman of my committee. He said, “Well, let’s go for the grilling.” Bill Siebert said, “I’m satisfied.” Nobody argued with Bill Siebert. I never had to endure the feet-to-the-fire part of the exam.

Nebeker:Had you realized by that time that your original calculations were right?

Sachs:Yes. I was okay by then. I don’t think I could have stood up there – I know I couldn’t have – had they been wrong.

Expansion of thesis model as a professor

Sachs:Skipping ahead a bit: when I came here, my second student started to do some experiments that were expansions on my thesis. He got a different answer, and that panicked me. I feared my calculations were really wrong or the data were wrong. He got very unexpected results, and I would not let him publish them – not that I thought they were wrong, but I would not let them go to press until we had a model that would explain them in some rational way. Suddenly I understood how his results fit with an expansion of my thesis model. That was one of the most – if not the most – exhilarating experiences of my scientific career. I worked day and night doing the calculations needed to fit the model to the data. At the time my wife and I had weekend visitors from out of town, but I was so excited about this I could barely go home to talk to them.

Nebeker:Was it a case where at first it appeared incompatible with the earlier work, but when you worked out an expanded model it all fit?

Sachs:It all fit. It was explainable in terms of some of the underlying mechanical processes that had been discovered in the inner ear between the time I did my thesis and the time of this paper. It was a fascinating experience, because the mechanical data from the inner ear were not well accepted, due to the fact that they showed that the inner ear was non-linear. At that time no one believed biology could be non-linear; Fourier-transform methods required linear systems. I went way out on a limb with this model because it tried to relate the auditory nerve data to non-linear cochlear mechanics. It was the only time in my life that one of my papers was rejected by a journal. In fact, it was rejected by Russ Pfeiffer, who was then an editor of the Acoustical Society’s journal. He didn’t review it himself, but he accepted the reviewers’ comments. I fought back and finally got the paper published. It’s the most important paper I ever published. The experiments and modeling have been redone many times in the twenty-four years since 1976, and much more elegantly than I did them, but the answer is still the same: the non-linear theory has proven to be correct.

One of my next students did a thesis working on further expansions of these models. Early in his graduate career he asked me, “What if we find out that your model is wrong?” I said, “We’ll learn a lot more if we find out that the model is wrong. The way the model is wrong will tell us something that having a model fit the data would not have told us.” I think that’s the essence of modeling in the bioengineering tradition.

ROTC and Naval service, Vietnam War

Nebeker:Did you complete your Ph.D. in 1966?

Sachs:That’s right. Then I went into the Navy.

Nebeker:There was the draft in the sixties. Did you ask to go into the Navy?

Sachs:I was in ROTC. My father had been in the Transportation Corps during the Second World War and ended up slogging in the trenches. He had told me it was better to be on a ship, so I went into Naval ROTC.

Nebeker:Of course there was the Vietnam War at the time. Not a good time to be in the trenches.

Sachs:That’s right. I really wanted to go to Annapolis to be on the faculty there, but they wouldn’t take two-year reservists so I ended up at the Underwater Sound Lab in New London.

Nebeker:You were in ROTC but a reservist?

Sachs:Right. I had to give two years of active duty.

Nebeker:Then you still had to do four additional years of reserve duty?

Sachs:My four years of reserve duty were done in graduate school. I turned in my thesis on a Friday, and the following Monday, a hot afternoon in September of 1966, I walked in the gate of the Underwater Sound Laboratory. I was wearing my ensign’s stripes, and the guard at the gate said, “You’re a lieutenant junior grade now.” I said, “What do you mean?” He said, “You have lived two and a half years beyond the time you were made an ensign, so you are a lieutenant junior grade.” About six months later I became a lieutenant. It was a fast promotion – earned just by having lived four years beyond getting commissioned.

Nebeker:Were you commissioned before that?

Sachs:I was commissioned when I graduated MIT, so simultaneously with getting my bachelor’s degree I was commissioned. I didn’t go to meetings. I think I took one correspondence course in signaling or something that I had to do to keep my commission on the right track. I remember studying for a correspondence course in communications at a pub in Harvard Square. I loved the Navy, except for the fact that I couldn’t quit and I was always fearful that I would be sent off to Vietnam at any moment.

Underwater submarine-to-submarine acoustic communications research

Nebeker:What were you doing initially for active duty?

Sachs:The entire two years I was on active duty I was doing underwater submarine-to-submarine acoustic communications work. One of the highlights of my career there was a three-week submarine trip to the Azores where we had two submarines. We were transmitting sonar signals between the submarines, trying to communicate human voice with a bandwidth of about 10 hertz.

Nebeker:That had not been done before using sonar?

Sachs:As far as I know it still hasn’t been done.

Nebeker:Is that right?

Sachs:The bandwidths are just too low. I don’t know what the state of the art is now, but we didn’t get very far.

Nebeker:I remember hearing about the World War I effort to develop underwater signaling. That’s apparently just a very difficult thing to do. If radio doesn’t work, acoustic signaling is difficult.

Sachs:Acoustic signaling has a very limited bandwidth. The reason is that there are many reflections off the surface and the bottom of the ocean. The sound waves take many paths, and the signals get smeared out in time. When I went into the Navy I was really looking forward to doing pure engineering. I wanted to see how I would do signal processing without any biology. I hadn’t been there six months when I decided I was really interested in how these underwater channels affected our perception of speech. I ended up spending most of my time studying speech perception in these channels. I didn’t get very far from bioengineering even there.
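[Editor's note: the channel behavior Sachs describes – multiple reflection paths smearing the transmitted signal in time and thereby limiting usable bandwidth – can be illustrated with a toy simulation. The delays and gains below are made-up illustrative values, not measured ocean data.]

```python
import numpy as np

def multipath(signal, delays, gains, fs):
    """Convolve a signal with a sparse multipath impulse response:
    a direct path plus delayed, attenuated echoes (delays in seconds)."""
    h = np.zeros(int(round(max(delays) * fs)) + 1)
    for d, g in zip(delays, gains):
        h[int(round(d * fs))] += g
    return np.convolve(signal, h)

fs = 1000                 # samples per second
pulse = np.zeros(200)
pulse[0] = 1.0            # one sharp transmitted pulse
# Hypothetical direct path at t=0 plus echoes off the surface (50 ms)
# and the bottom (120 ms).
received = multipath(pulse, delays=[0.0, 0.05, 0.12],
                     gains=[1.0, 0.6, 0.3], fs=fs)
# The received pulse is smeared across ~120 ms, so pulses sent faster
# than about one per 120 ms would overlap and interfere.
```

A delay spread of roughly 120 ms corresponds to a coherence bandwidth on the order of 1/0.12 ≈ 8 Hz, the same order as the roughly 10-hertz voice bandwidth mentioned earlier in the interview.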

Nebeker:Were you allowed to investigate what you wanted?

Sachs:More or less.

Nebeker:That’s amazing.

Sachs:I was one of twelve officers in a laboratory with 1,200 civilian scientists. The lab was run by a very liberal guy and the head of my group was very forward thinking. He essentially let me do what I wanted.

Nebeker:Sounds like good active duty to me.

Underwater Sound Laboratory work environment

Sachs:It was wonderful when I finally decided they weren’t going to ship me off to Vietnam because my hair was too long and my shoes weren’t shined. I was the hippy officer. The weekend before I went on active duty my girlfriend (now wife) and I went down to look for a place for me to live. It was a Sunday, and the officer on duty took us through the lab. I had worn sandals during that visit, and had a beard like the one I have now. When I came back a week later to report for duty I had already shaved off my beard. The wives of the officers had had the captain look into the Navy regulations, and it turned out that if I kept the beard neatly trimmed I could keep it. Apparently they were greatly disappointed that they weren’t going to have a real hippy as an officer.

Nebeker:Did you know you might be permitted to keep the beard?

Sachs:I didn’t even think about it. All I wanted to do was keep my nose clean, not go to Vietnam and get out. As it turned out, I had a marvelous time. I loved the submarine trip and loved the research I was doing. It convinced me that I wanted to do bioengineering, and that I could do it as a good engineer.

Nebeker:What was your perception of that laboratory as a whole?

Sachs:It was wonderful. I very much admired both the naval and civilian administration of the laboratory. It was a tough time because of Vietnam. Although I consider myself extremely liberal politically, I was a little bit to the right of some of my friends at MIT. They couldn’t understand why I would even go into the Navy at that time. My view was totally different when I came out of the Navy. These were people who were serious about what they were doing, and they weren’t out to murder people. Many of these extremely competent scientists were as politically liberal as I. There were some who had government tenure and were hangers on, though not many. By and large it was a good group of people.

Satellite navigation system research

Sachs:

I had my first experience with Johns Hopkins while on active duty, though I didn’t realize it at the time. In the experiments we were doing at sea we had to know precisely where the two submarines were relative to one another. To do that required satellite navigation, which had been developed at the Applied Physics Lab (APL) at Johns Hopkins. We were on two old World War II submarines that didn’t have computers or satellite navigation systems. APL gave us the receivers for the Doppler signals from the satellites as well as reams and reams of computer printouts that allowed us to interpolate the readings and determine where we were.

I was sent to APL for two days to learn (a) how to use the equipment, and (b) how to use the diagnostics if something went wrong. The first night out of New London we were hit by a vicious storm. When the captain asked me where we were I said, “Oh, that’s easy. We’re 100 miles off the coast of Norfolk.” Actually we were about 15 miles off the coast of Connecticut. Needless to say, he wouldn’t listen to my satellite navigation for a while. Although I’m an electrical engineer, I’m not the most technically adept. The APL training and diagnostics were so good that even I could diagnose and fix the system within about 10 minutes – for example, by replacing a bad board.

Nebeker:Was that satellite system more accurate than the LORAN or whatever they usually used?

Sachs:Yes, it was much more accurate. It was a question of feet rather than yards. It was the kind of resolution we needed to pinpoint locations.

Nebeker:Did it actually work on that trip?

Sachs:It worked from then on, but the captain wouldn’t listen to me. He didn’t ask me again until we were out at sea about four days with no stars and no sun. The captain had to come to me and ask where we were. When we landed in the Azores I was off by about 3 feet on the location of the pier, and I swore it was the helmsman’s fault – that he didn’t really know where we were. It was amazing.

Cambridge University post-doc; transition to Johns Hopkins

Sachs:

I went to Cambridge University for my post doc for a year and went back to do research in the Navy for a year before I came to Johns Hopkins. The Navy wanted me to stay and work in the lab as a civilian, and I liked it so much that I decided I would. However after a little while away from academia I really missed it and came down to Hopkins. The first thing I did when I got down here was teach a course in physiology at the Applied Physics Lab. I subsequently realized that half the people in the class were people who had taught me how to work the satellite navigation system.

Nebeker:What did you do that year at Cambridge?

Sachs:I spent the year abroad. I say that not entirely facetiously. When I went to my thesis advisor, Nelson Kiang, he asked me with great trepidation what I wanted to do for a postdoc. I said, “Nelson, I really want to go to Europe.” That was my prime goal. And, “Secondly, I want to work in vision. I want to broaden myself away from the auditory system.” Much to my surprise, he said those were both admirable things to do and arranged this postdoc for me in Cambridge. While there I studied a sort of frequency analysis in the retina using psychophysical techniques. I worked with John Robson and Jack Nachmias. Jack was there on sabbatical from Penn, and he and I have become lifelong friends. John Robson was my preceptor and sponsor and became a lifelong friend. That was just another one of my wonderful life experiences.

To make a long story short, I lucked out once again. We did experiments and modeling computations that were probably more widely cited in the first ten years of my career than any of my work in hearing. This is largely because there are so many more people studying vision than hearing. It was another case of applying a sort of bioengineering modeling approach. It was controversial, but it worked.

Nebeker:Were the majority of the people working in the visual and auditory areas sort of classical scientist types that did not take much of an engineering approach?

Sachs:That’s right. Auditory science was really in the forefront of bioengineering approaches. That’s largely because the ear’s mechanical system is very complicated and requires very sophisticated mathematical analyses. Essentially the ear does a Fourier analysis on its incoming signal. It is a highly non-linear kind of analysis, but it can be roughly thought of as a Fourier analysis. One really has to have that kind of background to even think about the problems. We brought that kind of thinking to the visual system.
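[Editor’s note: Sachs’s description of the ear performing a rough Fourier analysis can be illustrated with a toy computation. The sketch below is purely illustrative and not his model: the signal, bin numbers, and the naive DFT are invented for the example, and the real cochlea behaves more like a highly non-linear bank of overlapping bandpass filters. Here each frequency bin plays the role of a neurone tuned to a narrow frequency band.]

```python
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform: magnitude of each frequency bin.

    A rough stand-in for the ear's frequency analysis: each bin behaves
    like a neurone responding to energy in one frequency band.
    """
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# A hypothetical two-component signal: full-amplitude tone at bin 5,
# half-amplitude tone at bin 12 (arbitrary units).
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n) + 0.5 * math.sin(2 * math.pi * 12 * t / n)
          for t in range(n)]
mags = dft_magnitudes(signal)

# The two strongest "neurones" correspond to the two input components.
peaks = sorted(sorted(range(len(mags)), key=mags.__getitem__, reverse=True)[:2])
print(peaks)  # [5, 12]
```

The analysis pulls the two tones apart even though they are superimposed in the waveform, which is the sense in which a spectral code can carry a complicated sound like a voice.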

Nebeker:Did you work with visual systems?

Sachs:No.

Nebeker:Did you return to New London?

Sachs:I stayed in New London for a year. I was offered the position at Hopkins either right before or right after I came back from Cambridge. I came to Hopkins because Moise Goldstein offered me a position here. It was a great opportunity and I took it. That was 1970, and I have been working on hearing ever since.

Research on the inner ear, encoding of speech in the auditory nerve

Nebeker:Would you tell me a little about the research that you did from the seventies until now?

Sachs:The research I’ve done has taken two main thrusts, both of which really began with my doctoral thesis. I have already talked about trying to understand how the mechanics of the inner ear worked and the encoding process from sound and space into discharge patterns in the auditory nerve. We have done a lot of experiments working out the fine details of the transformation between sound in air and auditory nerve discharge patterns, applying the kinds of models I worked on in my Ph.D. That’s been very productive in that it has allowed us to associate what’s happening in the auditory nerve with the mechanisms of what’s going on inside the inner ear. For a long time that was the only way to go, because people couldn’t make the kinds of measurements in the inner ear that we have been able to make for the past fifteen or twenty years. The window we had on what was happening in the inner ear was measurements from the nerve. Thus, a major part of my research has been trying to understand how the inner ear works by looking at discharges in auditory nerve fibers.

A second thrust was really just a wild idea I had. I’ve had two advantages: (1) wonderful mentors, and (2) wonderful students. My first student, Eric Young, is now my closest friend and colleague. He’s been a professor in this department for twenty years. I won an award about two years ago from the Association for Research in Otolaryngology. Eric presented the award, and it was his duty to roast me and mine to roast him back. One of the things I said, besides that he was my best friend, was that the best ideas I ever had were the ones he had at first said were insane.

This was one of the ideas he said was insane. The idea was to try to understand how speech is encoded in the auditory nerve. The auditory nerve consists of 30,000 neurones carrying electrical signals from the ear to the brain. Any information that the brain gets about the sound has to be carried in those electrical signals. The question was: How is the sound of the voice – which is very complicated – encoded in those patterns of those 30,000 neurones? We tried to record from as many as 300 or 400 in the same animal, which is why the experiments took three or four days. By doing this we were able to build up a picture of the coding of the sound across this whole population of 30,000 neurones. We sampled 400 in one animal and extrapolated from that – reasonably, we think. We now know how sound is coded in those discharge patterns.

Nebeker:Is it not coded in a simple-minded Fourier way?

Sachs:Yes and no. It is clearly coded as a spectrum. When we started out there were a lot of questions about how that could possibly work. There were limitations on the systems – largely dynamic range and non-linear properties – that made it questionable that the code could work. We discovered that by applying mathematical analysis to the discharge patterns, signals could be pulled out even with a highly limited dynamic range. You can hear and understand my voice if I whisper, and you can hear and understand my voice even if it hurts you when I shout at 120 decibels.

Nebeker:Is there separate coding of the intensity from the spectral information?

Sachs:Yes, in a sense. There is a gradation in thresholds for neurones. Some neurones respond to low intensities and saturate at moderate intensities, some respond at higher intensities and yet others saturate at still higher intensities.
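[Editor’s note: the staggered-threshold scheme Sachs describes can be sketched numerically. This is a minimal illustration, not his actual model: the sigmoid rate curve, the thresholds, and all parameter values are invented for the example. Each model neurone saturates over a narrow intensity range, but a population whose thresholds are staggered keeps encoding intensity over a much wider range.]

```python
import math

def firing_rate(intensity_db, threshold_db, max_rate=200.0, slope=0.3):
    """Saturating rate-vs-intensity curve for one model neurone.

    Below its threshold the neurone is nearly silent; above it the
    discharge rate rises and then saturates, giving the limited
    dynamic range of a single fiber.
    """
    return max_rate / (1.0 + math.exp(-slope * (intensity_db - threshold_db)))

# A small population with staggered thresholds (hypothetical values, in dB).
thresholds = [0.0, 20.0, 40.0, 60.0]

def population_rate(intensity_db):
    """Pooled response of the whole population."""
    return sum(firing_rate(intensity_db, th) for th in thresholds)

# One neurone is saturated well before 70 dB...
single_change = firing_rate(70.0, 0.0) - firing_rate(50.0, 0.0)
# ...but the pooled population response still grows between 10 and 70 dB.
pooled_change = population_rate(70.0) - population_rate(10.0)
print(single_change < 1.0 and pooled_change > 100.0)  # True
```

The pooled code distinguishes a whisper from a shout even though every individual neurone, taken alone, has run out of dynamic range.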

Nebeker:Are these neurones each responding to a certain bandwidth?

Sachs:That’s right. Each neurone has a fairly limited bandwidth at intensities near threshold. One of the problems is that the bandwidth gets much broader as sound intensity increases. In a way, the non-linear phenomenon I studied as a graduate student enabled the bandwidth to be kept narrow in a population of these neurones at higher intensities. Thus, the mechanistic studies we had done and are still doing played into the whole notion of Fourier analysis. It’s a highly non-linear Fourier analysis.

Nebeker:Has it been pretty completely worked out how this sound is actually encoded in these 30,000 neurones?

Sachs:I like to say that there is probably an infinite number of doctoral theses left to do on it. It is basically worked out, but there are still some second-order things.

Hearing aid design; coding in a pathological ear

Sachs:

We’re going in two directions with what we’re working on now. One is to try to understand what goes wrong with this coding in a pathological ear. Hearing aids are notoriously bad in very noisy environments. Are you familiar with Arnold Beckman?

Nebeker:Yes.

Sachs:Fifteen years ago he would have given us $15 million if we could have even suggested that we could build for him a hearing aid that would work in a noisy room. We simply could not do it. We believe the reason for this is that we don’t have a good enough understanding of what is really going wrong in the ear of a hearing impaired person. One thing is that the non-linear phenomenon seems to disappear. In a hearing impaired person, the ear gets linear. The non-linearity that enables the bandwidth to remain fairly narrow is gone. It is as though the ear is trying to listen through filters that have a bandwidth too wide to resolve individual frequency components. Therefore our current approach is to try to understand the coding in the pathological ear. We use cats that have been deafened in various ways.

Nebeker:Is this specifically with the intention of being able to design better hearing aids?

Sachs:Ultimately, yes.

Nebeker:Then this would be the equivalent of a computational wind tunnel: a good enough model of hearing to test hearing aid designs.

Sachs:That’s right. I think we’re getting there. I think that’s going to be an achievable goal and that is likely to make a major dent in the hearing aid problem.

Cochlear implants and coding; role of central nervous system in understanding speech

Nebeker:Have you had any involvement in the design of hearing aids?

Sachs:Not really, though we talk with the hearing aid community a lot. I think that in the early days of cochlear implants our work was influential in how sound was coded in those implants. That turned out to be not a particularly good idea. Do you know what I mean by cochlear implant? It is electrical stimulation of the inner ear; a bionic ear kind of thing. However the devices are so primitive, and the ears these stimulators are in so damaged, that it’s probably not a good idea to try to exactly replicate the patterns in the normal ear.

Nebeker:Then that hasn’t been terribly successful?

Sachs:The cochlear implant business has been wildly successful, far beyond what many of my basic science colleagues predicted would be possible twenty-five years ago. I had the great good fortune of keeping my mouth shut at the time. I didn’t say that they wouldn’t work, though I was skeptical. However they have worked, and much more so than visual prosthesis. There are people walking around with cochlear implants who are stone deaf without them and who can even communicate over the telephone now.

Nebeker:Is that right?

Sachs:Another thread that the research took after the initial thrust into understanding how speech was coded in the auditory nerve was trying to figure out how the code was broken by the central nervous system. We lucked out. There’s a first station in the brain that decodes speech called the cochlear nucleus. The form the code took in our characterization of it required certain kinds of signal processing having to do with intensity phenomena, the non-linearities and the ability of the ear to maintain its functioning at high sound levels. This led us to some hypotheses about how the cochlear nucleus might be organized to process speech. We did exactly the same experiments on cochlear nucleus cells that we did on auditory nerve fibers. Instead of putting our electrodes into the auditory nerve we put them into the brain, in this cochlear nucleus. We recorded responses from single cells in the cochlear nucleus. The inputs to such a cell are a number of auditory nerve fibers. The output is the neurone’s axon, which goes to higher levels in the brain.

Nebeker:And you can determine whether it is triggered?

Sachs:That’s right.

Nebeker:Are these cells in the cochlear nucleus identifiable individually? In other words, do you have a map where you can say, “Okay, we’ll look at that neurone now”?

Sachs:There are many maps. One is a frequency map. The auditory nerve itself is laid out in a very orderly tonotopic way by frequency. Thus, neighboring auditory nerve fibers respond at the same frequency ranges. Those fibers go to the same cells in the central nervous system. The central nervous system is also laid out by frequency.

Nebeker:You can map the functionality, so to speak?

Sachs:That’s right. That’s one dimension. There are many other dimensions. And in fact the cochlear nucleus has several subdivisions. Each subdivision has a number of different types of neurones that have different properties. I can’t, say, go for cell number 453, but I can go for cells that respond to frequencies in the speech range that have certain properties and be pretty sure of making contact with them. Then I can tell from the electrical recordings that a particular cell does respond to a particular frequency range and does have specific properties that identify it as a certain type of cell.

We found a subpopulation of cells in the cochlear nucleus that did exactly the kind of processing we had predicted. That was pure luck.

Nebeker:Did you predict this on the basis of explaining how something works?

Sachs:We predicted it on the basis of the model we had (1) relating the coding in the auditory nerve to the sound, (2) our hypotheses about how that code had to be broken down in order to maintain the information that was available, and (3) what information had to be extracted from that code – how the neurones would extract that information. We built an actual neuronal model that demonstrated one possibility of how that processing would occur in neural circuitry.

Modeling auditory nerve neurones and the cochlear nucleus

Nebeker:Do you actually have a model?

Sachs:A computer model.

Nebeker:You have a computer model of the cochlear nucleus?

Sachs:We have computer models of individual cells in the cochlear nucleus; not the whole cochlear nucleus. However my colleague Eric Young is now the world’s leading expert in the cochlear nucleus. He has very detailed models far beyond what I have done.

Nebeker:Was going from the auditory nerve’s neurones to the cochlear nucleus a big move?

Sachs:In some ways, yes; in some ways, no. Many people had recorded from the cochlear nucleus. In fact, quite by accident, the cochlear nucleus had been recorded from before the auditory nerve had been. They thought they were recording from the nerve but actually had their electrodes in the brain. In many ways, it’s easier to record from the nucleus. However, trying to analyze how the nucleus processes sound using models was a major step. Eric and I were the first to do that. That is what made our laboratory and the biomedical engineering approach unique. It’s not so unique anymore, because others have hopped on the bandwagon.

Nebeker:Has it become more of a general practice to construct models to reproduce what happens to the data?

Sachs:Right.

Nebeker:I’ve read that in other areas of science it’s a criterion of understanding whether one can model what is happening on the computer.

Sachs:There are two aspects to modeling. That is one aspect, and that’s exactly why I wouldn’t let my student rush to publish the data that seemed to be contradicting my thesis model. It was not that I was tied to my thesis model. I wanted to have a model that would explain the data, because they were data that could very well have been artefactual. They were bizarre in a way, and something that had not before been seen. Once we had a model, for better or for worse, I felt much more comfortable that the data were correct.

In the case of the cochlear nucleus work, the models led to questions that would not have otherwise been asked, so that the models are both confirmatory and predictive. One is able to learn much more when questions are raised that would not have been thought about had there not been a model.

Birdsong studies

Nebeker:I noticed that some of your work has dealt with birds.

Sachs:When I first came to Hopkins it was my goal to try to understand how birds process birdsong. I continued my cat studies, my fundamental studies of the inner ear and the auditory nerve, and simultaneously I did birdsong studies. I recorded from the cochlear nucleus and the auditory nerve in the bird to try to understand birdsong processing. However, specialized birdsong processing doesn’t really exist at that level. The bird is pretty much like the cat or the human. The inner ears of all these species are not that different, though there are some fine differences. When one goes farther into the central nervous system there are bigger differences.

Nebeker:Had you hypothesized that there is a special way that birds process birdsong?

Sachs:Yes, and that’s probably true. In fact it’s becoming clear that there is specialized processing at levels that might be akin to the cortex of a cat, if birds had a cortical structure like mammals. What happened was that I got so involved and excited by this speech work that I just couldn’t keep working on both. Neurophysiology is a physically demanding endeavor. In our case these experiments with the cats would go on for three or four days at a time. One has to stay with it. I never could have more than three students working with me at one time because I couldn’t have kept up the pace with more than three. It became clear that doing two major projects at once was beyond me.

Career at Johns Hopkins; strengths of the biomedical engineering department

Nebeker:You came to Johns Hopkins in 1970.

Sachs:Right. I’ve been here ever since.

Nebeker:And you became director of the department in 1991.

Sachs:Right.

Nebeker:I talked to Dick Johns about some of the ways that the Hopkins program is special. What are your views on this?

Sachs:There are a number of ways in which the Hopkins program is special. Our most distinctive feature has been our involvement with the medical school for the past thirty-five to forty years. We are one of eight basic science departments in the medical school. I don’t think there is another biomedical engineering department in the world that can say that. It’s given us a much more biological approach to problems than if we had been in an engineering school – though virtually all the faculty in the department have engineering degrees. Foremost, we are all interested in biological problems, and that’s the way our students are trained. Our Ph.D. students take the first-year medical school courses. That is like hell and boot camp for them, but they almost unanimously come out at the end of it saying they’re glad they did it. They forget the pain, and learn to think both like engineers and biologists – and really not like either but as a synthesis of the two.

Another related factor has been that, on average, most of our faculty have about two graduate students. Very few of our faculty have big laboratories with five, ten or more graduate students. Consequently our students get a much more one-on-one and classical kind of mentorship than in most engineering schools. That’s another thing that makes Hopkins special. We do have lots of argument on this issue, and there are faculty members who would like to have more graduate students, but we don’t have the facilities or resources to do that.

Nebeker:It would seem that it would depend on the type of research a person is doing.

Sachs:That’s right.

Nebeker:What you’re doing would be very difficult for a professor to have many. It is different in mathematics and some other sciences.

Sachs:Yes. And we have some faculty members who are doing things that actually require more graduate students. There are faculty members who have ten people in their laboratory. Primarily, these are the people who do modeling and the people who do tissue engineering.

Trends in imaging and modeling

Nebeker:There has been a trend in biomedical engineering over the last couple of decades toward devoting more and more attention to imaging. I’m interested in your comments on these bigger trends.

Sachs:I assume that you have talked to or will talk to someone from the Whitaker Foundation.

Nebeker:Not yet.

Sachs:You have got to do that. They have been a major factor in the development of the field, and are wonderful people. I recommend that you talk to Peter Catona, Miles Givens, Ruth Whitaker Holmes or Bert Holmes.

Obviously there are trends. This department was and still is a leader in physiological modeling, such as I have done of the auditory system. We have done a lot of modeling of the cardiovascular system. That is how I found out about Dave Geselowitz after we separated from one another at MIT. We’ve done a lot of modeling and experimental work in the vestibular, balance and other systems. That’s how the department made its name in the seventies and early eighties. In some senses we were unidimensional in that we didn’t have many people working on instrumentation, and imaging was not an issue at the time. Imaging had not yet become a trend.

In about the middle eighties we began to get into things like biomedical instrumentation and imaging. The trend went far very fast, to the point where all the graduate students coming into the department wanted to do imaging, and in particular magnetic resonance imaging. We had a very strong group built up around one person in this department and an army of people in the radiology department, which has been a very research-oriented department. What made our program unique in imaging was the fact that the people in biomedical engineering attacking the imaging problems knew the biology. That gave them a great advantage. And they were working hand-in-glove with the radiologists.

Nebeker:That would be a big advantage. It seems that there is a strong radiology department here.

Sachs:That’s right. They were biomedical engineers coming at a biomedical problem, and from both sides. It’s been very interesting to watch the evolution of imaging students and even our imaging faculty, some of whom have come into the department with less background in biology than most of our students. They learn about biology and very rapidly become experts – in our case largely in cardiac imaging – and know as much about the heart as any cardiologist or cardiovascular physiologist. Imaging has become a very major thrust in the department.

Sachs:Thanks to the Whitaker Foundation, we are growing from a department into the Whitaker Institute for Biomedical Engineering, with a new building on the other campus. We have hired ten new faculty members who are all staying down here. One other faculty member and I will be shuttling back and forth between campuses.

There are three areas of thrust on which we are focusing with our new hires, one of which is image understanding. We’re not trying to be the leaders in developing imaging technology, the optics of imaging or the magnetic resonance.

Nebeker:Your focus is on interpreting the images then.

Sachs:Interpreting the images. We’ve just hired the world expert in image understanding. I think we have also just hired someone from another university who is one of the world’s leading experts in the area of pattern recognition and applying very sophisticated mathematical analysis to images to try to understand how we can program computers to recognize images. For example, your brain and my brain may look entirely different from one another. One of the classical problems is how to form a template that will say something about your brain and my brain that allows us to distinguish normal from abnormal. This will help us to understand imaging that is done of brains with dysfunctions.

The second area of thrust is computational modeling of physiological systems. That’s interesting in that it’s a new trend in bioengineering, which is that of massive computer models.

Nebeker:Physiological modeling is longstanding.

Sachs:That’s right. What’s neat about this is that it ties to our past. However it really starts with the genome and says, “Can you make a model that starts with the genome and goes from the expression of proteins from the genes to the function of something like the heart?” There are articles all over the place about Ray Winslow, the head of our computational group, with headlines like, “He Has Heart in the Smithsonian Institution.” That’s because his computer model of the heart was voted two years ago one of the forty-two most innovative uses of computer technology. It is permanently enshrined in the Smithsonian Institution.

Just knowing the genome is not going to tell us how the system functions. It’s a very complicated system. And we’re actually able to go from what we know in limited cases to really understanding the function and pathophysiology of something like the heart.

Nebeker:That sounds extremely optimistic.

Sachs:It really is not. What’s exciting about it is that it’s going to put – and has put – biomedical engineering dead center on the frontiers of biology and medicine. To understand and make progress on function and pathology from this vast pool of data we know about genes requires an understanding like that of a biomedical engineer – one who has not only the computer tools to analyze data, but the mathematical tools to make models that make sense of the data as well. Our colleagues in the other basic science departments are beginning to realize we’re not only building devices, but we’re also helping them understand.

Nebeker:Yes. This is analogous to computational chemistry getting its own strength from quantum mechanics.

Sachs:That’s right. Well, it’s not like computational chemistry in that computational chemistry is a component. I don’t mean that as an arrogant statement. Let me get straight to what I mean. We go from computational chemistry through computation of models from the whole organ. I’ll give you a personal example. My son is a third year graduate student in our program. He started out wanting to model the heart working for Ray Winslow, but he is much more of a reductionist than most of the people in the department. Now he is working with someone in the physiology department working with models on how protein structure relates to membrane properties of cells. That’s getting down to the level of atomic interaction.

Nebeker:Yes. Starting with the protein structure something is said about the membrane.

Sachs:That’s right. He’s a biomedical engineer and views himself as a biomedical engineer. Ray Winslow is getting more into the area of genomics and working at the level of proteins and molecules. Bioengineering spans quite a lot, and it’s a very challenging process to become conversant at a deep level on this whole range of biology and engineering.

Nebeker:What’s the third area?

Sachs:Tissue engineering. That has become a big trend. All three of these have been major trends in bioengineering for ten years or more and still are today. By and large, tissue engineering is a science or engineering of trying to replace human tissues and organs with engineered tissues and organs, such as artificial skin and artificial hearts. One of our thrusts is going to be to try to understand how tissues and cells interact with materials and with one another. If a foreign material is placed in the body, what are the biohazards of such a system? How can we make use of the interactions between materials and cells and the interactions between cells and other cells to build organs and to build replacements for things like the pancreas?

Nebeker:I understand that it’s still problematic to have something piercing the skin, for instance if the heart is paced from outside.

Sachs:That’s right. One is much better off with something totally implanted.

Nebeker:The tissue and artificial material won’t form a seal.

Sachs:The problem is the seal and potential infection. Another major area in bioengineering closely related to tissue engineering is drug delivery. That is, the ability to deliver drugs and genes directly into cells. We view all of that as part and parcel of tissue engineering because one has to be understood in order to be able to do the other, and vice versa. We have people working on systems for embedding DNA in very fine bioerodible polymer particles that can actually invade cells and be taken up by the cells. The idea is to put the genes directly into the cells and have them replicate and produce normal proteins in the face of genes that are producing pathological proteins.

Nebeker:That’s amazing. Are there any other comments you would like to make on either your own work or about the field of bioengineering?

Sachs:I would like to get away from my own work and say a few words about bioengineering. I’m obviously a big advocate. It’s a field that has exploded in the thirty-five or forty years that I’ve been in it. It’s almost forty years since I started my Ph.D. It virtually didn’t exist as a field then, and now virtually every university in the world wants to have a department of bioengineering. I think I could make a living consulting for universities that want to start bioengineering departments, so eager are they to do that. That’s partly due to the Whitaker Foundation making money available and spending it very wisely.

The Whitaker Foundation is going out of business in the year 2006. They decided not to perpetuate themselves but to spend all their resources over a twenty-five-year period and make a major impact. And it’s been a remarkable endeavor that came around at a time that was right. Mr. Whitaker saw that the applications of engineering to medicine were going to be enormous. With the development of modern technology, computation, and optical imaging, bioengineering has rightfully assumed a position front and center in all of biological science.

It’s attractive to students. One of our problems is that we have too many students. We don’t have enough graduate students because of our limited space resources, but there are 500 students in our undergraduate program. Of the 1,100 or 1,200 students in the engineering school at Hopkins, 500 are bioengineers. This is also happening in places other than here. If an undergraduate bioengineering program is begun, it rapidly becomes the largest and best in the university. That is because there are a lot of kids who want to be engineers and want to be technologically involved but also want to do things for humanity that involve medicine and biology. This is the way they can do it. It also gives students a very wide range of career opportunities. They can go into industry as well as into academic careers.

Other academic fields have been criticized for producing too many Ph.D.s and just cloning themselves. In bioengineering we are lucky to have the ability to produce students who can do the kinds of things we’ve done in academics while also making them very hirable in industry, where they can make a lot of money and a big contribution. Industry is becoming a very important part of bioengineering.

When I took over as department chairman from Dick Johns nine years ago, we had $50,000 in industrial support for our research. Dick probably knew more people in industry and had more experience with industry than virtually anybody in biomedical engineering at the time. I like to characterize myself at that time as a poor country neuroscientist. Clearly this has nothing to do with me. However, we now have over $750,000 a year in industrial support. The time has come, and we are seeing that industry has resources, good people, and problems on which we like to work. Industry is beginning to see that they can make a profit, which is their bottom line, by collaborating with academia. Bioengineering really is a frontier area for that. It’s a very exciting place to be right now. I wish I were thirty years younger and starting out again.