
Responsibilities of Scientists to Society

Panel
Robert J. Eagan, Vice President, Energy, Information & Infrastructure Surety Division, Sandia National Laboratories

Scientific Ethics for Policy Participants
Robert A. Frosch, John F. Kennedy School of Government
Harvard University

Considering the Implications and Applications of Research
Beverly K. Hartline, Acting Deputy Associate Laboratory Director
Los Alamos National Laboratory

Scientific Ethics for Policy Participants
by: Robert A. Frosch
John F. Kennedy School of Government, Harvard University

As you can see from my biography, I have spent a peculiar life, partly as a scientist, partly as an engineer, and largely as a government and industry manager of R&D. I have dealt with ethical questions on one scale or another, including macro-scale and micro-scale, every day of my professional career.

A scientist in public life is not just a scientist, in the usual sense, but may be scientist and engineer and technologist, as well as a policy official, and a personal and professional decision-maker. It is necessary to keep clear in your mind what hat you're wearing at a particular moment. When am I being a scientist as a scientist? When am I making decisions and being a policy official; what else am I doing and being?

Hence, I use 'science' in this context in a broad way, not only to mean scientific research as the creation of knowledge. I also include technology (knowing how to do something, which is different from knowing how things work), and engineering development and implementation (how to carry the job through into practice). These aspects of what is sometimes called innovation all have somewhat different ethical and professional dimensions. (Note that it is not always the case that scientific knowledge leads to technology, which leads to development. Sometimes it works backwards, from technology or product to science, and there are frequently feedback and feed-forward loops.)

Since scientists acting in the R&D world, the policy world and the industrial world see complications of the various aspects of 'science' from a personal ethical point of view, they are always setting ethical limits or asking ethical questions, or having the questions asked of, or set for, them. These questions are frequently of the following kinds: Will I or won't I pursue this question, whether it's science, technology or development? Will I or won't I argue with people about whether this question should be pursued? If the question is pursued in the way I have argued against, will I or won't I quit?

If you are in such a job, you have to think about those kinds of ethical questions and limits, and the answers arrived at may be different for each of us. You have the right to decide what you, yourself, will do. You do not have the right to decide what anybody else will do, except in a social, consultative sense. You always have the right to argue. I will return to these questions later.

The role of the scientist, now used in the narrower sense of a seeker of truth, a seeker of knowledge, is to be an advocate of scientific method, by which I mean: hypothesis tested against reality by experiment or observation. The scientist should be an advocate of scientific method not only in the areas which are normally delineated as science and technology, namely, in physics, chemistry, biology, etc. The scientist should be an advocate of scientific method applied to all aspects of decision making and policy. When people talk about finance, or when they talk about assertions concerning the results of a public policy, a scientist has an obligation to say, 'How would you know? How could you find out if you were right? What body of facts can this idea be tested against? Given the results, how sure are we? What are the errors and potential errors?' This is an ethical imperative for the scientist. In public life that's part of the obligation of the scientist: to be a scientifically oriented critic.

That critical role of the scientist is different from being a scientist purely in an academic sense. The scientist ought to be the organized skeptic, the representative of skepticism and of critical questioning. It is very important, not only to state 'the state of the science,' but to be careful to say: 'This is what we know.' 'This is what we know well.' 'This is what we kind of think we know,' and so on, all the way to: 'We don't know anything about that.' Stating that 'we don't know' something can be a crucially important public role. Going beyond such statements (in the sense of 'trans-scientific,' using Alvin Weinberg's term), we must say, when applicable: 'It is very unlikely we could ever find out about that.' Further, we may be obligated to say, as Bill Wulf has pointed out: 'We may want to know that for this decision, but it's going to be tough to find anything out. In any case, we're not going to have the knowledge in time for the decision time you have scheduled.' That's an important ethical role for the scientist to play. It also applies to the role of scientist as technologist, developer and innovator.

Scientists ought to spend more time and effort insisting they be included at the policy table where the questions are defined, particularly if the questions have scientific dimensions in the strict sense, or in the expanded senses defined above. As I just said, we have something to say there, not just with our 'I am a citizen' hat on, but with the hat: 'I am a scientist, and I have ways of criticizing and being skeptical about things that will be useful in better defining the policy questions.' If the problem is badly set out, the answer may be terrible. Social and political scientists call such question asking 'framing.' I just call it asking what the key questions are.

When we consider technology, engineering and development, the purpose is to create useful 'how-to' capabilities. That is, if I follow certain procedures, I get a strong, ductile metal; I build a strong bridge. However, in technology, engineering and development we are always impaled on what we don't know. The idea that I won't develop and apply a technology or use a machine until I know it is completely safe is a delusion about an uncapturable will-o'-the-wisp. There will always be some unknown risk. In the Pentagon, we talked about what were the knowns in a development project, and the unknowns (really the known unknowns), and the unknown unknowns. (Unknown unknowns were called 'unk-unks'.)

Knowns: We know the bridge can fall down; therefore, we do certain things that we know how to do to make sure it's strong enough to carry the expected loads.

Unknowns: We know that we don't actually know what all the loads will be. For example, we don't know what winds will blow on this bridge. We have some historical knowledge of probable winds, but we know we don't know what winds may really blow. We do know how to be reasonably careful in the face of this ignorance. (Sometimes, as in the case of "Galloping Gertie," the Tacoma Narrows Bridge, it turns out we are wrong; we didn't know what the forces, and the bridge's response to them, would really be. We learned.)

Unknown Unknowns: Well, we've never yet seen a steel bridge designed with this new suspension and this new kind of steel that actually lasted 150 years, because we've never built one before. In spite of all our theory and experiments and tests, there may be something lurking in the properties of this new suspension, or this new steel, that we haven't tested, because nobody was wise enough to think of the new possibility. Or, in a case that I know well, when you're developing an automobile with new technology in it, you only have a few months, or at most a year or so, to test the technology realistically in an actual test automobile. (And the test automobile cannot be a sample from the production line after it's been running for a while.) If there's something that's going to happen once in 100 million miles, you aren't going to have driven 100 million miles on the test track before you put the car out. Thus you don't even know what you don't know. That's an unknown unknown.

The knowledge that there are 'known unknowns' (which you can try to design for) and 'unknown unknowns' (which you cannot design for because you don't even suspect what they might be) must be considered in the ethical balancing of decisions. The ethical issues are part of the engineering question. There are ethical decisions about what you say about the project, and what cannot clearly be said because it is unknown that it is unknown. It is not always clear how to deal with this problem. It is another reason for the scientist/technologist/engineer to be at the policy table.

When you're developing something, that's where 'I will' or 'I won't' arises. Do I think what's being developed is worth developing; is this a good thing or a bad thing; should it be developed, or not be developed? Will I or won't I argue about what is being developed, and what risks it poses? Will I or won't I quit if I don't like what I'm being asked to do? Is it really all right to ask someone to work on a particular idea, given the consequences I can envision?

Even there, there are unknown unknowns, especially with regard to future possible uses of a technology. Even if I think the proposed use is OK, do I know what else might be done with it? I know of no civilian technology that I couldn't figure out how to use for some military purpose, and I have never seen a military technology that I didn't know how to use for a civilian purpose, frequently more valuable than the military purpose. That's a function of the imagination, not a function just of science and technology knowledge.

When Maxwell and Marconi and Graham Bell started what they were doing, they certainly didn't have in mind what we're doing now. They certainly didn't have in mind the telephone becoming what the telephone is, or communication beyond the wired telephone to cellular, etc. They could not plan for the unknown future 100 years away. When Thomas Midgley invented the chlorofluorocarbons, the CFCs, he was solving a terrible problem. Refrigerators were being run with poison gases, sulfur dioxide and ammonia, chosen because they had the right thermodynamic properties. Gases leaking from defective refrigerators were killing people. Refrigerators were occasionally exploding. He was finding a refrigerant to solve those problems. He had no way of knowing, because nobody knew, that there was a stratospheric ozone layer, and that it blocked the sun's ultraviolet radiation. (At the time I don't think anyone knew much, if anything, about ultraviolet radiation.) He couldn't know that the CFCs would deliver chlorine to the stratosphere, and that chlorine chemistry in the stratosphere was going to interfere with the processes that shield the earth from the sun's ultraviolet radiation. How could he anticipate the unknown unknowns of the long future of chemistry and geochemistry?

All of these problems produce ethical dilemmas that I do not believe can be solved with simple rules, or simple principles (e.g., the 'precautionary principle,' in any of its versions). One learns to solve them (sort of) with simple basic ethical principles, and with one's logic and gut feelings. I learned much at my father's knee (he was a physician). Even with guidance it's hard to learn and hard to teach to students. It certainly isn't going to be done completely with a course and a textbook, but those might expand minds, so that continued experience in practice will lead to continued ethical learning.

(In addressing an orientation class of freshmen, in my father's day, or perhaps before, a dean of Columbia College said: 'You have come to Columbia College, among other things, to open your minds. Open your minds, but, for God's sake, don't open your minds so much that your brains fall out.')

The principles of disclosure and transparency are very important. I make the assumption that anybody who has a real connection with a subject and an intellectual interest in it almost certainly has some bias or conflict of interest. You are obligated to do your best to understand your own biases and conflicts and to try to explain what they are. If it appears that there is a financial or business connection, then oversight by third parties is very useful. However, I would not like to disqualify the best possible person to do a piece of work from doing it because they may have a conflict of interest. I'd much rather have a third party help by watching over the process, keeping track of it and calling a halt if there's a problem.

(An anecdote about perceived biases and conflicts: I talked the other day with a postdoc who was working on a very interesting problem. She was trying to understand the political, psychological and social background and motivation of some global warming contrarians. She said, roughly, 'I went into this with the standard assumption that these guys are in the pockets of industry, and that they have those conflicts of interest. As I met and talked to them, I realized they are all sufficiently old and distinguished that that is probably not a relevant issue. There must be something deeper than that in their contrarianism.' As it happens, I know some of her interviewees. They are not in anybody's pocket; they are contrarians on many subjects, perhaps most subjects, and perhaps a little conservative in their politics, so their contrarianism on climate is not unusual for them, and does not imply that they have been 'bought.' Deeper explanations for their views must be sought.)

I am more concerned with clear explanations and honesty than with looking under the bed for conflicts. It is the ethical responsibility of scientists to be as clear as possible about what they think they know, to what degree they are sure, and why. One should be as clear as possible about uncertainties, what might or might not reduce them, and what is and is not known about consequences. Try to be as clear and explicit as possible about your own biases and conflicts.

On the policy scene, it is the additional obligation of the scientist to be critical (in the scientific sense) about intellectual rigor, quality of data and logic, but not to claim too much.

I have chosen to talk about this general aspect of the scientist as ethicist on the policy scene because most discussion of scientific ethics focuses on small issues and suspicions, and I wanted to discuss the general problem and its characteristics from various angles. In discussions of science and ethics there has been far too little emphasis on the large positive ethical responsibilities of the scientist in public and policy life.

Considering the Implications and Applications of Research
by: Beverly K. Hartline
Los Alamos National Laboratory

I don't have nearly the experience with these dicey issues that Bob has, but I'm perfectly capable of being nearly as controversial. One thing I can say now: It's always the people who don't need to come to meetings like this who come to meetings like this. We would get a lot farther a lot faster if people who aren't in this room were here, and one challenge is to make that happen.

We are definitely very fortunate members of society as scientists. We have specialized expertise, and use it to explore and understand the unknown. The responsibilities of scientists to society are larger, of course, than the questions we were given today, but what I'm going to do is propose some thoughts and answers, and then ask some other questions that relate to considering the implications and applications of research before one undertakes the research.

I believe very strongly that one does have a responsibility to consider possible implications and applications of research. It's not only socially responsible, but it is scientifically enriching. It's the type of thinking that can prepare the researcher to be alert to developments, connections, and details that might be missed by an investigator whose planning was restricted narrowly to the technical specifics of the study. Moreover, possible implications and applications often dominate the justification presented in a typical grant proposal for why the research is important and worth funding.

In addition, an awareness of possible applications can help the researcher communicate to peers and to the public the context and potential value of the research, as well as implement measures to prevent the worst, if the worst is potentially knowable or imaginable. A follow-on question that was not asked is whether the researcher should then eschew a line of inquiry if its result might have adverse implications or applications. If so, at what probability level of adverse implications or applications should this self-prohibition kick in?

I agree with Bob that it's basically up to every individual to make the choice. Each person should make it conscientiously, I think, and I would be interested in your thoughts on a process that could help investigators anticipate possible outcomes (obvious or unobvious, known and unknown) and thus be in a position to make a wiser choice than if they were missing something. Different people could easily make different choices, and could agree or disagree about this.

In most cases, I think my choice would be to proceed with the study and simultaneously do everything I could to minimize or mitigate possible damaging uses of the result. My reasoning is based on the fact that much of research is the investigation of the unknown. Anticipating consequences could be erroneous. I could make mistakes when I'm guessing what dire consequences could happen, and I would hate to see the creation of new scientific knowledge systematically blocked by preconceived concerns about a possible adverse impact.

Moreover, individuals or organizations, such as terrorists, could choose not to abandon the research if it occurred to them. Defense against the abuse of new technologies and knowledge is much easier when they're understood by the good guys. I like to think I'm a good guy. Instead of limiting the horizons of research, I feel we should work deliberately to create a social and political environment that somehow neutralizes the potentially damaging uses of discoveries, inventions and knowledge.

The second question we were posed was: should we become involved in developing restrictions on the use or boundaries of our research? If researchers are not involved in the development of restrictions or boundaries in areas where there's a lot of public concern (cloning, genetic engineering, nuclear power), then the likelihood of less knowledgeable people developing uninformed and unwise restrictions is extremely high.

We live in a litigious and regulation-rich nation, where the public expects the government to protect it totally from harm. I was at the White House Office of Science and Technology Policy when Dolly, the cloned sheep, was announced. Congress rapidly introduced draft legislation to outlaw human cloning, and most of the draft bills included features that would have been devastating to valuable biomedical research. At OSTP, we would have had no success opposing any restrictive legislation outright. Our challenge was to allow the acute public anxiety to abate (time does help in these cases), to help more information emerge, and to work with the system to develop alternative approaches and legislation that would provide the protection the public wanted without handicapping the research enterprise.

I'm not in biomedicine, and I wasn't really involved in biomedical policy; it was mostly my colleagues who were engaged with this issue, but, of course, there's a lot of dialogue that goes on when you have acute questions like this. What the President did was ask his already empanelled National Bioethics Advisory Commission (it had been empanelled a few months before that) to consider the development of Dolly, the sheep, and to provide advice, which it did after working on the issue for several months.

The participation of experts is the only defense against the imposition of many nonsensical and unreasonable boundaries in research. In many cases, the damage of ill-conceived restrictions to research, education, health, quality of life and the economy would far exceed the dreamed-up consequences of possible misuse of the research being regulated. As Thomas Jefferson noted back when this great nation was created: "Reason and free inquiry are the only effective agents against error. They are the natural enemy of error and error only."

Self-regulation has, in fact, been chosen in cases where there was a serious concern about a grave consequence to society. Perhaps the most noteworthy example is the secrecy associated with early fission research. Owen Chamberlain gave a talk at the University of California at Berkeley in 1969 on the social responsibility of scientists and how the physics community handled this situation in the early 1940s. The following is all in Chamberlain's words:

"The early work, the work that really started in this country sometime in the middle of 1939, was kept secret by a completely voluntary process. There was no government regulation of this. It was kept secret by a decision among scientists to have a committee of scientists who would act as secondary referees on papers that were to be published. Thus, any articles that the editors of the physics journals thought should not be published for reasons of secrecy were sent to the committee, members of which would consult with the author. As far as I know, in every case they obtained the authorís cooperation in simply not publishing material that might have a direct bearing on the possible military application."

"There was a general feeling that it was to everyoneís interest in this country to see that any military project in Germany did not accidentally get help from the people here. Although the secrecy was remarkably well maintained on a voluntary basis, as time went on and the project moved to Los Alamos, we began to encounter stricter government regulations."

The voluntary secrecy was in effect before there was any sizable government commitment to support physics or nuclear weapons, and it turns out that by the middle of 1942 only $40,000 of government funding had been spent on the Manhattan Project and its precursors. All the rest was done by university faculty harnessing graduate students and the like.

Now, you have no doubt heard about the impact of increasing government regulations and restrictions on Los Alamos and other places as time goes on, but I wasn't going to spend much time talking about that. Currently, we're in a state where scientific knowledge and the development of new technologies are opening whole new fields of inquiry, and the interface of science and society is growing at an enormous rate and becoming ever more complex. Researchers are the only people who have a hope of understanding some of the implications of the research before it is published or before it is pursued.

In my view, our active engagement as scientists and research managers is essential for defining effective mechanisms for managing this interface. Our involvement in creating only appropriate boundaries and restrictions can help society benefit, rather than suffer, from our discoveries and results, and will be essential for a promising future for both science and society.

 
