Engineering Achievements in the 20th Century and
Challenges for the 21st
by: William Wulf
National Academy of Engineering
For two years the National Academy of Engineering worked with the engineering professional societies to identify the 20 greatest engineering achievements of the 20th century. The criterion was not the technological "Gee-whiz," but rather the greatest impact on our quality of life. The result is a pretty remarkable list. If you remove any single achievement from it, our lives would be dramatically changed, and not in a positive way.
The list covers everything from the vast electric power grid, which was Number 1, to the development of high performance materials, which was Number 20. In between were achievements that fundamentally changed the way people live (safe drinking water, for example, was Number 4), the way people work (computers were Number 8; telephones were Number 9) and the way people travel (automobiles were Number 2; airplanes were Number 3).
The impact of many of these achievements was immediate, and so it's not surprising to see automobiles and airplanes on the list. The impact of other achievements, on the other hand, was less obvious. For example, together with sanitary sewers, the availability of safe drinking water fundamentally changed the way people live and die in the United States. In the early 1900s, water-borne diseases, like typhoid fever and cholera, killed tens of thousands of people annually. Dysentery and diarrhea, the most common among those diseases, constituted the third largest cause of death in the United States. By the 1940s, water treatment and distribution systems almost totally eliminated those diseases in America and other developed countries. As a result of these and other advances, life expectancy in the U.S. rose from 46 years in 1900 to 76 today, an increase of 30 years. Two-thirds of that increase is due to clean drinking water and sanitary sewers.
Engineering is all around us. I'm not going to read you the whole list of 20 achievements, but let me note a couple that I haven't mentioned. One of them is agricultural mechanization. In 1900, 50 percent of the U.S. population lived on farms, and it took that half of the population to feed the other half. Today, two percent of the population live on farms and feed not only the other 98 percent of us, but much of the rest of the world as well.
Number 10 on the list was air conditioning and refrigeration. You probably couldn't have had the breakfast you had this morning without refrigeration. Someone in your family couldn't get the medicines they need if not for the refrigerator.
Number 15 was household appliances, which radically changed lives, especially during the first half of the century.
But let's move on from congratulating ourselves and look to the future.
I'm an optimist. I believe that quality of life in 2100 will be significantly better than it is today, and as different from today's life as today's is from that in 1900. I also believe that we will achieve a much broader distribution of that quality of life around the world. But neither of those beliefs is guaranteed, and there are challenges between here and there.
I want to take a little side track for a moment and talk about a particular program element that the Academy is undertaking; it's called Earth Systems Engineering. First, I want you to realize that in a very real sense the Earth has already become an engineered artifact. Whether you consider very large projects, like the damming of the Mississippi, or whether you consider small projects, such as paving over a parking lot and consequently interfering with an aquifer that is used several hundred miles away, the fact is we are engineering the planet. The trouble is, we're not doing it holistically. We're not doing it ethically. We don't understand the global impacts of our local actions. So, part of what I mean by Earth Systems Engineering is simply being holistic and ethical about what we are already doing.
But, secondly, I think we need to at least contemplate the possibility of intentional intervention in large-scale macroscopic ecological systems. An experiment was recently conducted off Antarctica to seed the ocean with iron; the purpose was to encourage algae formation and consequently sequester carbon. That's an example of a very large scale intervention in our ecosystems. Frankly, I think it's pretty scary. What I'm going to come back to later is, perhaps, a quantification of why I think it is scary and why I think it presents a special challenge to the engineering profession.
I could go into a great deal more depth on other such challenges, but I am not going to do that. Instead, I'm going to try to talk about just one of the challenges -- one that I believe may be the greatest engineering challenge for the 21st century. That challenge is engineering ethics.
Let me start out by trying to be very clear. I believe engineers are, on the whole, extremely ethical. There are ethics courses in virtually every engineering school. I knew there were myriad books on the subject, but I didn't know how many until I started to prepare this talk. Every professional engineering society has a code of ethics, and they almost invariably start with something like "...uphold as paramount the health and welfare of the public." Those particular words come from the National Society of Professional Engineers (NSPE), but they have been copied by virtually every other professional society. The codes of ethics are quite elaborate and go on to detail each engineer's responsibility to clients and employers, the responsibility to report illegal or dangerous acts, the responsibility to respect the public interest, and on and on. The NSPE Code of Ethics has some 170 specific points.
I have vivid memories of talking with my father and my uncle, who were engineers, and with my professors and with colleagues over the years, about the ethics of everything from safety margins in engineering products, to dealing with inappropriate pressure from management to try to cut corners. All of that is still in place. Frankly, one of the reasons I feel very proud to be an engineer is because of the strong ethical orientation.
So, why do I want to talk about ethics? Why do I believe that ethics will be the greatest engineering challenge in the 21st century? Why do I think the NAE needs to start a new program? There are two reasons that are closely intertwined.
First, the practice of engineering is changing; and, in particular, it's changing in ways that raise a different kind of ethical issue. Second, the issues that are arising from this particular nature of engineering practice are macro-ethical issues that the profession has not dealt with before.
In preparing this talk, I ran across a wonderful quote by John Ladd, an emeritus professor of philosophy at Brown University. He said, "Perhaps the most mischievous side effect of ethical codes is that they tend to divert attention from the macro-ethical problems in our profession to its micro-ethical problems." The literature on engineering ethics, the myriad books, the codes of all of the professional societies, the courses I have been able to review, all focus on individual behavior. The behavior of the individual is micro-ethics. When I say "micro," I don't mean they're small and unimportant, but simply that they are individual.
The changes in engineering practice are ones that I believe pose ethical questions for the profession rather than the individual -- these are called "macro" ethical issues. I have yet to convince you that what I just said is true, but it's the reason I believe the NAE should develop programmatic activity. Engineering has not squarely faced these macro-ethical issues before.
Macro-ethical questions are more common in some other fields--medicine, for example. The micro-ethical questions in medicine are almost identical to those in engineering--they map almost exactly word for word. But in medicine, the macro-ethical questions have been dealt with. The most common and easily explained example is allocation. Who gets the scarce organ for transplant? Who gets the physician's attention if there are more patients than can be treated? Who gets the medicine if there's not enough to go around? The individual physician cannot make that decision. The profession, or better yet, society guided by the profession, has to set up guidelines, and then it becomes a micro-ethical question for the physician to implement those guidelines.
So, I assert that engineering hasn't had to deal with macro-ethical questions, and now I'm going to talk about what is changing in engineering that has given rise to macro-ethical issues. I'm going to focus on just one change, complexity. In particular, I'm concerned with the complexity arising from information technology and biotechnology, both of which are going to show up in virtually every engineering product.
I will elaborate on this in a minute, but let me say it in a bald-faced way first. Increasingly, we're building engineering systems whose complexity is such that it is impossible to predict all of their behaviors. Let me say that again just to make the point. I am not saying it's hard to predict. I am not saying that you somehow have to think about it differently. I am saying that it is impossible to predict all of their behaviors.
There is extensive literature on engineering failures. I haven't read all of that literature, but I have a stack of it on my shelf, and I happened to pull off two volumes when I was preparing this talk. One is a 1984 text by Charles Perrow called Normal Accidents. The other is a 1997 book by Ed Tenner called Why Things Bite Back. I found these two interesting because in the 13 years that separate them there was a clear progression of thought about why failures happen and what engineers ought to do about it.
For Perrow, in 1984, the problem was simply that we weren't thinking about the possibility of multiple simultaneous failures in highly interconnected systems, and the clear implication was -- think about it! In fact, the systems engineering community and risk analysis community have been very good about doing just that. The probability that two or three simultaneous failures will take down even the most complicated system is much lower than it was 15 or 20 years ago.
In Tenner, the more recent book, we begin to see a glimmer of a notion that for very complicated systems it might be really, really hard to predict all of the possible failures. But I still get the feeling that he thinks that if we just thought a bit harder, we would be okay--we would figure out all of the possible failures, we would anticipate the problems. That's not what I am talking about.
By the way, neither Perrow nor Tenner is an engineer. One's a sociologist and the other is a historian. One of the things I find interesting is that they clearly bring their disciplinary tools to bear in looking at these problems, and their tools don't include the kinds of very sophisticated mathematics that engineers and scientists employ. So, in particular, they don't think about the fact that there might actually be a technical explanation for why systems fail. And, of course, they are probably partly right in looking back at failures and systems that weren't as complicated as the ones we are engineering now.
Over the past several decades, at places like the Santa Fe Institute, increasingly sophisticated mathematical models of complex systems have been developed. In some ways, those mathematical models are -- what shall I say -- "squishy." They're not as finely honed as the mathematics that we use in other parts of engineering. But one thing is very solid, and that is, a sufficiently complex system will exhibit properties that cannot be predicted a priori.
I said that they are squishy; that's partly deserved and partly not. The deserved part simply has to do with the fact that the models are not all that mature yet. The undeserved part is that the term is associated with a couple of other things that are questionable. The term used to describe behaviors that are not anticipated is "emergent properties," a phrase that first arose in the late 1930s in the context of sociologists trying to explain group behaviors. Those theories have pretty much been discredited. The second thing is that there's been some effort by post-modern, anti-science types to use the phrase to discredit reductionist scientific approaches.
I don't want to get too technical, but I want to give you a flavor of what I mean when I say it's impossible to predict behaviors. Let me work from my field, software, and ask the question, "Why is software so flaky?" There are lots of reasons. But one of them is not "errors" in the conventional sense of the term. It's not that the software does something it wasn't intended to do. In fact, these "errors" often happen in the course of the software's doing exactly what it was specified to do. It's just that the consequences of those specifications were not understood. The number of circumstances under which that behavior would be appropriate or inappropriate was simply impossible to contemplate.
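A toy illustration of this (my own, not from the talk): Python's built-in round follows its specification exactly -- ties go to the nearest even integer, so-called banker's rounding -- yet it surprises anyone whose mental specification is "round halves up." The code is behaving precisely as specified; the surprise lives entirely in the unexamined consequences of the specification.

```python
# Python 3's round() does exactly what its specification says:
# ties round to the nearest EVEN integer (banker's rounding).
# There is no bug here -- only a specified behavior whose
# consequences many users never contemplated.
print(round(2.5))  # prints 2, not 3
print(round(3.5))  # prints 4
print(round(0.5))  # prints 0, not 1
```

Multiply that kind of gap between specification and expectation across millions of interacting decisions, and you get software that is "flaky" without ever containing a conventional error.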
Let me try to indicate this to you with some numbers. There are probably scientists in the audience who know the right number here, but my recollection is that there's something on the order of 10 to the 100th atoms in the universe. The number of states in my laptop is 10 raised to the 10 to the 20th power -- the exponent is a 1 followed by 20 zeros. If every atom in the universe were a computer, and if every one of those computers could analyze 10 to the 100th states per second, there isn't enough time from the Big Bang until now to analyze all of the states.
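Working in base-10 logarithms makes the comparison concrete. A quick sketch, using the talk's round numbers plus an assumed age of the universe of roughly 4.3e17 seconds (these are back-of-envelope figures, not precise physics):

```python
import math

# Back-of-envelope check of the figures above (all assumed round numbers):
#   ~10^100 atoms in the universe, each acting as a computer
#   each computer analyzing 10^100 states per second
#   ~4.3e17 seconds elapsed since the Big Bang
analyzed_log10 = 100 + 100 + math.log10(4.3e17)  # log10 of states analyzed: ~217.6

# log10 of the laptop's state count, 10^(10^20)
state_space_log10 = 10**20

# The analyzed fraction is 10^(217.6 - 10^20): indistinguishable from zero.
print(analyzed_log10 < state_space_log10)  # prints True
```

Even commandeering every atom in the universe for all of time covers about 10^218 states, against a state space whose exponent alone is a hundred billion billion. That is the gap "impossible" refers to.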
That's what I mean by impossible. It's not that there isn't a process that, given enough time, could analyze the situation. The situation is so complicated, there simply isn't enough time. That's why it's impossible, or "intractable," which is the technical term.
That's what has changed. We can now build systems whose properties we cannot all predict. We can, however, predict with some certainty that any system we build will have behaviors that we can't predict, that some of those behaviors will be negative, and that some of them may even be catastrophic. We just don't know what they will be.
This wouldn't be an ethical question if we didn't know there would be negative behaviors. Legal scholars and ethicists have long agreed that if the engineer or scientist doesn't know what the consequences will be, they are not responsible. But here, we know. We know that there will be behavior we can't predict, and there's a high probability it will be negative, maybe even catastrophic. So how do we behave? How do we engineer in situations like that?
Harking back to the NAE program in Earth Systems Engineering that I mentioned earlier -- it's clear that the ecosystem of the earth is a very complex system. It is exactly the kind of system we've been talking about here. It's a system that, even if we scientifically understood the behavior of every part, and even if we understood all of the potential interactions, we could not predict all the consequences of our behavior. If we change it, there will be consequences that we cannot predict. It's a good example of a system where everything has a consequence and the effect of our behavior may show up in totally unexpected places. We certainly have examples of that.
And, yet, in many cases, we must act. Not acting is also an action. We can't just not act and say, "I'm being ethical." That may be the most disastrous act of all. So how do we do that ethically? It's clearly a deep question, clearly an ethical question, clearly a macro-ethical question. Our codes of ethics don't tell us how to behave in such circumstances.
Let me switch to another example. Last spring, Bill Joy, somebody I've known for a long time, who co-founded Sun Microsystems and is a leading Silicon Valley technologist, raised a somewhat related but different issue in what I frankly thought was an irresponsibly alarmist article. Joy mused about whether, individually or collectively, information technology, biotechnology, and nanotechnology would develop self-replicating systems that would replace humans. He then went on to ask whether, given that specter, we should stop research in all of these areas.
Frankly, I abhor the way Bill raised the question, but I think we have to deal with the fact that something like this is really at the root of public concern about cloning, about genetically modified organisms and so on. I think they are rightfully concerned. Do we, the science and engineering community, know all of the consequences of our actions?
Joy's question, should we stop doing research, is something that personally repels me. The notion that there is truth that we should not know is absolutely abhorrent to me. I can embrace the notion that there are ways by which we should not learn the truth, that some research practices are unethical. The obvious example is the way the Nazis conducted some medical experiments in World War II. I don't happen to agree with it, but I can understand the arguments of people who object to fetal-tissue research. So, I can agree with the notion that there are ways of gaining knowledge that we should prohibit. I can also embrace the notion that there are ways that we should not use knowledge. But the notion that there is truth, that there is knowledge that should not be known, is something that I find impossible to accept.
It's somewhat ironic that the first academies in western Europe, academies of science, were created because science, this empirical way of knowing, this new way to search for truth, was not accepted by the scholastic university establishment of the 17th century. Thomas Jefferson was making a radical assertion, even more than a hundred years later, when, in founding the University of Virginia, the first secular university in the United States, he said,
"This institution will be based upon illimitable freedom of the human mind. For here we are not afraid to follow the truth wherever it may lead."
That's the notion of truth that I teethed on, and yet there I was, two weeks ago, in the Academy, asking the question: is there truth we should not know? I have to admit that we don't have a stellar record on the misuse of the freedom of knowledge, but that, I think, is where controls have to go.
One could rightly ask, "Why is it that we engineers should ponder this as our ethical question?" Well, because science is about the discovery of truth -- so scientists have to deal with the question, "What are the appropriate ways to discover truth?" But engineering is about using knowledge to solve human problems. So we have to deal with the ethical question of the misuse of knowledge.
While I can't bring myself to agree with the implied answer in Bill Joy's question, I do believe it raises a very deep macro-ethical question about the use of knowledge. How do we, as engineers, ethically ensure the proper use of knowledge? It is not a question the Code of Ethics tells us anything about. It is something that society, guided by the profession, informed by the profession, needs to deal with.
The first part of my talk emphasized that engineering -- really, engineers -- have made tremendous contributions to our quality of life, particularly so far as the developed world is concerned. There is much to be done to bring that quality of life to the rest of the planet. But I am, basically, an unabashed optimist about the prospects in the 21st century for further improving that quality of life and spreading it around the globe. There are many challenges to achieving that, and I have only touched on one--the question of engineering ethics.
Projects like the further reclamation of the Everglades will be done with imperfect knowledge of the consequences of the actions being taken. They should be done with an awareness that some of the consequences might be disastrous. We've got to figure out how to cope with that ethically, how to rethink the process of engineering so that we can backtrack if we need to, so that we can adapt, so that we can work within a very complicated system.
My dad's engineering consisted of working to a specification from his boss. I don't think that works any more. You can't write a specification that will function, in all cases, the way we want it to. What we need is, somehow, to adapt ethically as we go along. Again, I want to point out that we don't have the option of choosing not to act. That is also an action. So we've got to face the question. We don't have a choice.