Dianne N. Irving, M.A., Ph.D., and Adil E. Shamoo, Ph.D.
Accountability in Research February 1993, 3:77-100
WHICH ETHICS FOR SCIENCE AND PUBLIC POLICY?
ABSTRACT
The problem of inaccurate, misapplied or fraudulent data could be addressed by government regulation, or by self-regulation from within science itself. To many, self-regulation implies the grounding of research activities in some "neutral" standard of "ethics" acceptable in a "pluralistic" society. Yet there is no such thing as a "neutral ethics"; and many "contemporary" theories contain such serious theoretical deficiencies and contradictions that they are practically inapplicable. As a viable alternative to these theoretical and practical problems, an objectively based realistic framework of ethics is considered, and used to ground both the individual scientific and the collective public policy decision making processes. This is an ethics of properly integrated relationships. It is then applied to an analysis of many of the causes of incorrect data, as well as of many of the internal and external pressures and abuses often experienced by scientists today. This approach respects the integrity of each decision maker as a human being and as a moral agent - which in turn better ensures the integrity of the protocol, the data, and the public policy decisions which follow - and ultimately, the integrity of the scientific enterprise itself. The alternative is government regulation.
INTRODUCTION
Contemporary science is, in many respects, big business. It consumes hundreds of billions of dollars annually from the national treasury. The structure of science and its fueling energy - the research and development operations - consume tens of billions of dollars and involve over one million scientists (Shamoo, 1989).
Many individuals are attracted to scientific careers by a desire to be useful, the excitement of exploring new territory, the hope of finding order, or the desire to test established knowledge (Kuhn, 1970). Additionally, the reward system is an important fuel component that can enhance enthusiasm and creativity. Even deep ideological drives find expression in the scientific enterprise. As the U.S. National Academy of Science has noted: "... a strong personal attachment to an idea is not necessarily a liability. It can even be essential in dealing with the great effort and frequent disappointments associated with scientific research" (U.S. National Academy of Science, 1989). (Hopefully, ideology will not inadvertently cause the scientist to "prejudice" his or her data.) In effect, the scientific enterprise will necessarily reflect the various ethical principles and social values espoused by the multitude of individuals involved in it. In acknowledging the even more basic "humanity" underlying each of these individuals, Arthur Kornberg, the Nobel Laureate biochemist, perceptively quipped: "Science is great, but scientists are still people" (Kornberg, 1992). At times it would seem that contemporary science has lost sight of the "person" behind the starched white lab coat.
One of the more serious concerns of contemporary science is how to deal with inaccurate, misapplied or fraudulent data (Irving, 1991, 1993A, 1993B; Kischer, 1993). That such data exist, that they are problematic, and that some form of oversight is necessary is no longer the issue. The scientific community no longer debates whether some form of oversight should be exercised over the scientific enterprise, but rather "how much and in what form" (Shamoo, 1992A). In other words, what is the balance to be struck between self-regulation and federal regulation? Kornberg, for example, does recognize "laxity and negligence", but he opposes any "bureaucratic procedures" (Kornberg, 1992). Others insist that scientists are incapable of any meaningful "self-regulation", and so the only real answer to scientific fraud is governmental regulation.
Several causes of such problematic data include: the inevitable conflicts of interest, the very complexity of the information, the remoteness of the information from its sources (Shamoo and Annau, 1990; Shamoo, 1991), the lack of quality control and quality assurance, and the proliferation of information and data and its reliance on the computer (Shamoo and Davis, 1990). Other serious influences on the scientist include the institutional pressures to "publish or perish"; the pressures to obtain funding - in industry, university settings, governmental agencies and in Congress; and media pressures.
However, our question here concerns a broader cause or source of error - the presence of intellectual artifacts in decision-making processes, on both the individual and the institutional levels, which ultimately and more subtly influence how we design our protocols, produce data, or analyze public policy. It is with a view toward the development of a viable theoretical and practical structure for scientific "self-regulation" that we attempt to articulate and clarify some of these intellectual artifacts here. The intent is neither to debunk nor to enshrine any particular philosopher or his teaching, but rather to examine briefly some of the major ethical frameworks available for our consideration and evaluation.
NO SUCH THING AS A "NEUTRAL ETHICS"
Often we are told that one sure way to ensure the integrity of our data, as well as the integrity of our policies, is by being "ethical". If we are only "ethical" we can relax, be accountable and responsible, and assure good data and good policies - thus also avoiding over-reaching governmental regulations. And in a democracy, where no one's values should be imposed on others, the best ethics to employ is a "neutral" ethics. But, unfortunately and incredibly, what is usually not explained by ethicists is that there is no such thing as a "neutral" ethics. Even risk/benefit analysis is grounded on variations of a utilitarian theory - which by definition is a normative ethical theory, i.e., it takes a stand on what is right and what is wrong, just as any other normative ethical theory does (Beauchamp and Childress, 1979; Beauchamp and Walters, 1978, 1982, 1989). As any good historian of philosophy will tell you, there is not even any such thing as a "neutral" logic (Gilson, 1963; Copleston, 1962).
So what are we to do? If no ethical theory is "neutral", then which ethics is to be used for science and for public policy - especially in a democratic society? No one likes the "god-squad" - particularly scientists and public policy makers! But what we want to convey is that someone is playing "god" - regardless - on every level of our collective decision making, even if it sounds "neutral". And so the question is no longer whether science and public policy should be ethical, but rather which ethics should be used. To even begin to address this question we will explore briefly the individual and institutional integrity essential for decision making concerning scientifically sound protocols, accurate data, and responsible public policy. Clearly, if scientists are really serious about "self-regulation", they need to take "integrity" seriously as well. The alternative is governmental regulation.
RECENT HISTORY OF THE PROBLEM IN ETHICS
There are several problems with some of the present theories of ethics which are likely candidates for use in science and public policy. Generally these are theories currently borrowed from bioethics. As recently pointed out (Irving, 1993C), the early "classic" bioethics texts basically restricted our considerations (for all intents and purposes) to ethical theories questionably derived from Kant or Mill. Renditions of Kant's theory - or deontology - were supposed to represent the defense of the interests of the individual; Mill's theory - or utilitarianism - was supposed to represent the defense of the interests of society. Primarily from variations of these "theories" were derived the basic guiding theoretical ethical principles so often quoted and elaborated today: autonomy, beneficence and justice (Beauchamp and Childress, 1979, 1989; Beauchamp and Walters, 1978, 1982, 1989) - often referred to fondly as the "Georgetown Mantra" (a reference to the Kennedy Institute of Ethics, Georgetown University).
Although perhaps the intentions were good, and given that the field was in its infancy - as is your field right now - over the years a sort of "empirical ethics" has slowly been taking shape. Often the identification and application of these theoretical ethical principles in routine daily medical practice did indeed help to clarify otherwise murky issues. But cracks began to form. For example, these three ethical principles were held to be prima facie (Beauchamp and Childress, 1989; Beauchamp and Walters, 1978, 1982, 1989) - that is, no one principle held precedence over the others. Yet in real-life medical situations these theoretical principles would often come into conflict (Pellegrino, 1993; Shamoo and Irving, 1993B) - with no theoretically structured means by which to resolve the conflict, or to really balance them.
Soon each of these principles began to take on a life of its own, each one approaching an "absolute", separated or split from the other two. For example, autonomy was claimed as the "ethical" ground for a patient to insist on extraordinary and often extremely expensive medical care, because it was his - or his family's - free autonomous choice. Or, a physician could rationalize the use of certain care for her patient against the patient's wishes because it was clearly medically indicated and the doctor knows best what is beneficent or good for her patient. Or again, institutionalized mentally ill patients could be used in experimentation for the benefit of others in their "class" of diseases (Shamoo and Irving, 1993C), or in purely experimental research for the sake of obtaining knowledge in general or for the "greater good of society" (see Tannenbaum and Cook, 1978; U.S. Code of Federal Regulations 45 CFR 46, 1989) - because in justice they owed society for providing them with shelter, food and care, or because this is one way in which the mentally ill could be presumed to want to "share" the burdens of society (McCormick, 1974).
However, through daily empirical observations by practitioners in the field, the inadequacies and inconsistencies in these theoretical "ethical principles" began to show. There has been a steadily growing uneasiness with their formulation and application, which sometimes seem counter-intuitive; and newer attempts to move beyond this original formulation and stage of the field have begun (Irving, 1993C; Hamel, DuBose and O'Connell, 1994).
THE HISTORY OF PHILOSOPHY REVEALS AN ARTIFACT
However, there is an even more fundamental problem with these ethical theories, although seeing it requires consulting the history of philosophy itself [Fig. # 1]. Briefly, each of the major philosophers in the history of philosophy has defined "being" or reality differently (Gilson, 1963). If one defines "being" differently, then one defines "human being" differently, and then the ethics is different. Or one defines "material being" differently, and then the science is different (Crombie, 1959). The major common source of the errors and inaccuracies which follow from some of these definitions is what is known in philosophy as a "mind/body split" (Wilhelmson, 1956; Fox, 1989; Meilander, 1987).
Closely related to these metaphysical and anthropological differences are the methodological (or, epistemological) differences [Fig. # 2]. The starting point of knowledge for some philosophers is outside the mind, i.e., the things they are investigating. Information about these things is induced through the senses, worked on by the intellect, and finally checked back with the things outside the mind which have been experienced in order to determine if there is a correspondence between the information or concepts formed in the intellect with the things outside the mind - their criteria for the truth or falsity of their information or concepts. The limited validity of both sense and intellectual cognition is acknowledged (Irving, 1992; Veatch, 1974; Klubertanz, 1963).
For others [Fig. # 3], the starting point of knowledge is inside the mind, e.g., a concept or a hypothesis; and information about reality is deduced from this internal rational starting point, and checked with the original store of knowledge systems in order to determine if the "information" or concepts cohere with, or fit in with, these systems - their criteria for the truth or falsity of their information. Only intellectual cognition is acknowledged as truly valid; the validity of sense cognition is generally rejected. Thus the major sources of error in methodology usually concern the starting points of the investigation, and the reliability of the check-backs for truth and falsity (Gilson, 1963; Wilhelmson, 1956; Irving, 1992).
All of these theoretical differences eventually trickle down to cause very different and often contradictory conclusions - not only throughout the history of philosophy, but throughout many other fields as well. This is only a very quick and brief indication of where, we would argue, these intellectual artifacts originate. Regardless of how objective we try to be concerning our observations or the information which we use, these intellectual artifacts constitute a much more subtle machine driving our scientific and public policy decisions, and one more difficult to root out (Irving, 1993A, 1993B).
What are some of the general implications of all of this [Fig. # 4]? If, like Plato (Jowett, 1937; Vlastos, 1978; Gilson, 1963), a human being is defined as two separate substances - soul and body - and if the body (as matter) is non-being, then there is not only a separation or split between the soul and the body (and therefore no interaction possible or explainable); a human being is also defined only in terms of the "rational" part of the soul, since his body (as non-being) isn't. Even Plato himself acknowledged that this mind/body split was theoretically devastating to his own philosophy and wouldn't work. But despite Plato's own candid warnings, philosophers throughout the history of philosophy have perpetuated these metaphysical and anthropological artifacts - which, in turn, have seriously influenced their ethics (as will be noted later).
The modern counterpart of Plato was Descartes (Gilson, 1963; Wilhelmson, 1956; Copleston, 1962), who also defined a human being as two separate substances - mind and body - but ultimately, again, only as mind. If you want to see a philosopher sweat once he has painted himself into a theoretical corner, you might want to consult his Sixth Meditation (Cottingham, 1984). There Descartes is trying to explain how the pain in a physical leg is expressed or "felt" in the immaterial mind. But because he has a mind/body split, he cannot explain any interaction between the immaterial mind and the physical body, because they are separated. Even his attempt with the pineal gland in the brain will not work, because the pineal gland, too, is really only a part of the physical body.
To indicate how Descartes-the-physicist's metaphysical and anthropological presuppositions - or intellectual artifacts - impacted or influenced his very ability to do science (Edwards, 1967), consider the following. The validity of the fundamental laws of physics and mathematics depended, for him, on his proving the truth of the "cogito" ("I think, therefore I am") and on the existence of God - neither of which he was successful in doing. And because he rejected the existence of a void, Descartes' material substance, i.e., Extension, is continuous. These metaphysical presuppositions, in turn, had serious consequences for his physics - especially his scientific theory of the vortex.
For example, the material world for Descartes is therefore not composed of ultimate atoms, but only of volumes, which must then move as a whole, i.e., a simultaneous movement of matter in some closed curve. Planetary motion must be explained by one infinite, three-dimensional, continuous and homogeneous extended body. If there is only one continuous extended substance constituting the whole material universe, then he can only distinguish one body from another in terms of differential volumes and secondary qualities. Therefore he cannot have a definition of density, or of viscosity.
Descartes therefore omits "matter" from his definition of motion. Motion = speed x size; but "size", for Descartes, is a continuous volume of body. Therefore his laws of impact are actually in error. Also, he cannot isolate a particular force, e.g., gravity, in terms of how a body would move if it were free from resistance, because to imagine it moving without resistance is to imagine it in a void - the existence of which he had rejected.
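To make the omission explicit, the contrast can be sketched in modern notation (a reconstruction of ours, not Descartes' own formulation):

\[
q_{\mathrm{Descartes}} = V \cdot \lvert v \rvert \quad (\text{volume} \times \text{unsigned speed}),
\qquad
\vec{p} = m\vec{v} \quad (\text{mass} \times \text{directed velocity}).
\]

Since V is mere extension rather than mass, and |v| carries no direction, the quantity Descartes takes to be conserved diverges from the genuinely conserved momentum whenever colliding bodies differ in density or move in opposite senses - which is precisely where his rules of impact break down.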
Animals have no pineal glands, for Descartes, and therefore no souls. Therefore they cannot feel any pain, or any other kind of sensation - since sensations were really only modes of thought. Animals are only physical bodies, i.e., "machines", and the only sense in which they can be hurt is to "damage" them.
Thus, not only can Descartes not explain any interaction between his immaterial Mind and his material Body; he also cannot guarantee the very laws of physics and mathematics he was so anxious to protect. He cannot substantially distinguish one body from another, or correctly define planetary motion, density, viscosity, motion, size, or gravity; and his laws of impact are in error. Even animals are incapable of feeling any pain.
Well - where does all this "theorizing" leave us? Amazingly, our "man" (including the scientist-man) is no longer a whole man; and our knower or investigator (including the scientist-investigator) can no longer know the real world! Such a fractured, broken remnant of a human being can no longer even formulate or ask the questions necessary for survival - much less those needed for performing basic research.
From Descartes came the later rationalists [Fig. # 5], who took one of Descartes' substances, i.e., "Mind", and defined a human being only in terms of "rational attributes". The later empiricists took Descartes' other substance, i.e., Extension, or matter, and defined a human being only in terms of matter or sentience (or the ability to feel pain or pleasure)(Irving, 1993A).
And here one meets up once again with the contemporary bioethical theories, which use primarily variations on Kant (the rationalist) or variations on Mill (the empiricist). The inherent theoretical and practical problems in these two current approaches have already been noted. Now one can see that many of these problems stem from the fact that these theories retain from their historical predecessors very definite and problematic metaphysical, epistemological and anthropological presuppositions - specifically, their definitions of a human being, and of how that human being comes to know material reality. Now consider the impact of these philosophical presuppositions on their ethics. What is "ethical" is now based on variations of either "Kant's" pure rational autonomy, or on variations of "Mill's" purely materialistic (although more sophisticated) utilitarian calculus of pain and pleasure (the ultimate philosophical ground of risk/benefit analysis).
But if "Kant" is right, and human persons are not to be defined with a material body, but only in terms of "rational attributes"; and if only rational "autonomous" human beings are "persons" - and therefore due ethical respect and protection; then non-autonomous human beings are not persons, e.g., Alzheimer's and Parkinson's patients, persons with mental illness, drunks, drug addicts, the comatose - even very young children. Therefore one would logically have to argue that they have no ethical standing as human persons, and perhaps no legal or social protections as well, as many bioethicists, lawyers and public policy makers do argue. These writers even conclude that therefore the infanticide of normal healthy human infants and young children is ethically permissible (Englehardt, 1985; Tooley, 1974; Robertson, 1989; Hare, 1988). Well, I would argue, that leaves a great number of human beings in serious trouble.
If "Mill" is right, and human persons are to be defined and ethically protected only in terms of "sentience" (or a material substance capable of feeling pain or pleasure); and if what is ethically relevant is only degrees of pain and pleasure; then many higher mammals which are highly sentient (e.g., dogs or chimpanzees) are persons, and many human beings (e.g., newborn human infants) are not persons. And if we do know empirically that in human beings full "rational attributes" or "full sentience" are not present until years after birth (Moore, 1982), then one would again have to conclude that the infanticide of perfectly normal healthy human infants and young children is ethically permissible - as is argued by many bioethicists, lawyers and public policy makers today (Singer, 1981; Kuhse, 1986; Lockwood, 1988).
These reality checks should be taken quite seriously. Consider the conclusions and consequences for all of us human beings today if "Kant" or "Mill" were correct. Consider what is at stake. If these theories theoretically don't work, and lead to such counter-intuitive and drastic conclusions, then why use them? Can we even justify using them? Should they be used to ground the ethics of research or public policy formulation? These are just some of the intellectual philosophical artifacts which have found their way down into the medical and scientific communities, and on which much public policy is already presently being based (Irving, 1993B).
A MORE OBJECTIVELY-BASED ETHICAL THEORY FOR SCIENCE AND PUBLIC POLICY DECISION MAKING
" ... for wickedness perverts us and causes us to be deceived about the starting-points of action. Therefore it is evident that it is impossible to be practically wise without being good." [Aristotle, Ethica Nicomachea, p. 1035]
If scientists are really serious about avoiding governmental regulation by means of "self-regulation" to prevent scientific fraud, this will require an equally serious consideration of how one goes about "self-regulating" in a way which both theoretically and practically maintains the integrity of both the "man" and the "institutions" of which he is a member. At present either no really viable constructive alternatives are proposed at all, or those which are proposed fall into one of the problematic categories above. Simply because of their theoretically and practically unsolvable problems, those theories are inherently unworkable and should clearly not be used as the ethical basis for scientific "self-regulation".
On the other hand, there is at least a third viable ethical framework one might want to consider - one in which the integrity of the "man" and of the "institutions" is maintained [Fig. # 6]. Such an ethical framework was originally advanced by Aristotle the biologist (Aristotle, "Ethica Nicomachea", in McKeon, 1941, p. 935; Veatch, 1974; Gilson, 1963; Bourke, 1951; Irving, 1992), and critiqued and improved upon through the wisdom gained by trial and error throughout the centuries. The starting point of this theory is the investigation of everything by the induction of information from objectively based sensitive and intellectual experience of the real things in the world outside our minds - and any information or concepts thus obtained must be referred back to those real things outside the mind in order to determine their truth or falsity (Aristotle, "Analytica Posteriora", pp. 136, 184-186, "Ethica Nicomachea", p. 1033, in McKeon, 1941). This method will sound very familiar to the research scientist - it is, in essence, the scientific method! Here a human being is not defined as two separated substances, but as one whole complex substance with both formal and material aspects (Aristotle, "De Anima", p. 554, in McKeon, 1941). Thus there is no mind/body split; and the human knower can both reach and know objective reality to a very sophisticated degree. The impact of these philosophical premises on one's ethics, and on one's treatment of the decision making process, is considerable.
Ethics now is based not only on pure autonomous rational choice, nor merely on physical pain and pleasure, but proximately on the whole human being, relating properly within him or herself, as well as relating properly with the society and the environment with which and in which he or she must flourish and survive (Finnis, 1980; Fagothey, 1963). It is a much more complex ethical system - but then, doesn't it more accurately match the objective reality of the very complex human beings and the very complex world in which we live? It is worth placing this ethical theory under a "microscope" for just a moment.
What is probably one of the most compelling elements is its insistence on correctly identifying the proper proximate goal or good of any human being - that is, a human being's goal or good simply by virtue of his or her being human (Aristotle, "Ethica Nicomachea", in McKeon, 1941, pp. 935-947). That end or goal is variously rendered as "happiness", "flourishing", the excellence of living well, or being the "best that you can be" - as real, live human beings. Common human goods have variously included food, shelter, education, life and even recreation (Finnis, 1980). This implies, again, no mind/body split - for these various goods represent the several real aspects of a whole, unfractured human being. It is the integrity of the whole human being - and the integrity of his relationship with society and the environment - which is at issue. It is an ethics of relationships. Here ethics is about the rightness and wrongness of human actions as they relate to our proper human goal or good.
Variations on this basic framework have further examined in more detail the decision making process itself, as well as the various criteria which should be considered in evaluating the human actions which follow from that process. The ethical aspect of our actions is not determined by any one single criterion but, more complexly, in terms of several criteria (e.g., the very nature of the action, the circumstances under which it was performed, the intention of the actor, the consequences of his or her actions, etc.) (Fagothey, 1963; McInerny, 1982). What the human good is, and what brings us closer to or takes us further from it, is not to be determined by relative opinions, but objectively, by the daily observations and experiences of real, live human beings. There is no "is/ought gap" (as the academic argument goes), if this objectively determined goal-oriented feature of Aristotle's ethics is properly understood and theoretically retained (Aristotle, "Ethica Nicomachea", in McKeon, 1941, p. 1032).
THE ROLE OF INFORMATION IN ETHICAL AND POLICY DECISION MAKING
The role that information or data play in the ethical and policy decision making processes will be considerably different within this framework. Certainly Aristotle's warning - that "a small error in the beginning leads to a multitude of errors in the end" (Aristotle, "De Caelo", in McKeon, 1941, p. 404) - captures in one sentence the gist of that role. Such a warning should give us pause here, considering that an error in information embedded deep within a policy would impact thousands of scientists - both as individuals and as policy makers.
A. Individual decision making
The individual decision making process can be broken down into several distinct components. Each should be considered separately on its own merits, although each also needs to be understood as having a direct influence on the integrity of the other components as well [Fig. # 7].
1. The starting point of the decision making process that will lead to ethical actions is "objectively" correct information - of what the physical world is, of what the proper human goal is, as well as of what different sorts of intermediate things or actions we need to consider as means to reach that goal (Aristotle, "Ethica Nicomachea", in McKeon, 1941, pp. 1030-1036). Indeed, one of the most important of the intellectual virtues (as distinct from the moral virtues) is literally none other than the virtue of "science" (Aristotle, "Ethica Nicomachea", in McKeon, 1941, pp. 1022-1024; Fagothey, 1963) - the habit of knowing correctly what the true "objective" facts of reality are (as best we can). Relative to the bench scientist's engagement in his or her experiment, we would distinguish two different criteria for obtaining "objectively correct information": the unbiased and independent observation of phenomena; and the unbiased and independent selection of data.
Unfortunately, just as there is no such thing as a "neutral ethics", there is also no such thing as a "neutral observer" or a "neutral scientific analyst". For example, during the experiment itself, personal biases could cause the scientist either to "see" or to "not see" phenomena, or to record mechanical measurements which do not accurately reflect what is actually taking place. That is, such biases could immediately affect the reliability of the observations which the scientist records. Similarly, even if these observations are accurately recorded, personal biases could cause the scientist to select for analysis only those data which fit neatly into a preconceived explanation of these observations.
Thus every attempt must be made by the conscientious scientist to be aware of, and to filter out of his or her observations and selections of data, any personal biases derived from particular backgrounds, ideologies or other intellectual artifacts; pressures imposed by professional conflicts of interest; institutional and governmental demands; media and political considerations; etc. (Irving, 1993A; Shamoo, 1991, 1992; Kischer, 1993). Otherwise these biases and pressures might in fact seriously "prejudice" his or her "data" - i.e., result in both the prejudicial recording of observations and the prejudicial selection of data to be analyzed. In effect, such biases would negate any possibility of deriving any valid or real truth or understanding of the objective reality outside his or her mind that is being investigated.
In fact, if our knowledge of our proper human goal is incorrect, or if our information about the real world is incorrect, then the entire decision making process will be wrong, because the will simply accepts as true or good what the intellect presents to it as true or good, without question.
2. Once we possess the correct relevant information, then we deliberate about the several possible things or actions which would best attain our goal.
3. Next, we prudently choose and intend those actions or means which will best achieve these goals.
4. We then perform the action, under specific particular circumstances (which themselves can alter the rightness or wrongness of an action).
5. Finally, we develop the habit of periodically going back to reflect on earlier decision-making processes and their outcomes, evaluating them both informationally and ethically, and asking whether they were properly thought out and executed.
So this decision-making process starts (hopefully) with "objectively" correct information about the world and our proper human goal as human beings. This information is presented to the will as objectively true and good. Ways and means of achieving that goal (or intermediate goals) are weighed and measured or deliberated about. One means is chosen and intended - and then we execute the action under particular circumstances. Finally, earlier decisions, conclusions and actions are reflected on and evaluated informationally and ethically. Although each step in the process is important, the rightness or wrongness of the entire process hinges on and is determined by the objective correctness of the intellectual information about the real world with which the process begins. And this is determined by the intellectual virtue of "science". Clearly, the integrity of the individual scientist's information is critical in the decision making process leading to the integrity of his data (Shamoo and Annau, 1987; Shamoo, 1991, 1992A).
There is one final interesting (and perhaps cryptic) remark which Aristotle made about this process, a remark which concerns the integrity of the "man" [Fig. # 8]. Roughly, as we have just seen, only a man who thinks well acts well; but, he adds, only a man who acts well thinks well (Aristotle, "Ethica Nicomachea", in McKeon, 1941, p. 1035). In other words, the process is actually circular. And the implication is that if a person habitually acts unethically, sooner or later this habit can affect even his or her ability to think objectively and correctly about the physical world itself - which, in turn, will cause his decision making processes to start with incorrect information about the physical world - which, in turn, will corrupt every step in his entire individual decision making process, and so on. In fact, the use of incorrect information or data can negatively impact not only scientific and public policy decision making, but even moral decision making as well. And this in turn will negatively impact the integrity of the individual and his or her relationship with society and the environment. The issue now becomes the integrity of the scientist! "A small error in the beginning ..." actually works both ways. And thus this Great-Chain-of-Decision-Making has just become a small Circle.
B. Institutional decision making
But how does this relate to decision making on the organizational or social levels [Fig. # 9]? Briefly, we need to consider the correct understanding of the "common good". Besides being an individual, "man" is, according to Aristotle, also a "social animal" (Aristotle, "Politica", in McKeon, 1941, p. 1129), and therefore requires relationships with "others" in order to flourish. But the mistake should not be made of thinking that the "common good" is anything other than the equivalent of what is objectively and commonly good for each individual human being as human. Organizations and societies are not their own raison d'être. They exist fundamentally to foster or further, on a social level, the ability of each individual human being to flourish. In other words, there is really no such thing as a big individual particular substance called a "Committee" or a "Society" which is walking down the street with its own Good. It is not a thing or an individual itself, but only a concept or a term which we use to name or designate the collection of real individual human beings who make it up. And it is in their role of fostering individuals (Aristotle, "Ethica Nicomachea", in McKeon, 1941, p. 946) that societies or institutions take their place in an even larger Circle (as will be indicated below).
In sum, viable ethical theories cannot contain principles such as autonomy, beneficence and justice which become "separated" or "split" from each other, or from the real, live, integrated whole human being, or from his or her critical relationship with society and the environment. In fact, an individual and social ethics would be more realistic and objective if based on all of these ingredients. Unfractured and whole human beings who happen to be autonomous (and who are therefore simply ethically responsible and accountable for their decisions and actions) would ultimately make decisions based on what they can know is objectively true or good or beneficent for themselves as human persons, for others who are human persons (whether autonomous or not), and for their commonly shared environment - i.e., according to what Aristotle called "right reason" (not simply pure, unadulterated, isolated "reason") (Aristotle, "Ethica Nicomachea", in McKeon, 1941, pp. 1035-1036; McInerny, 1982; Fagothey, 1963). And in so doing they act justly, and should be fostered by collective decision making which is also in accord with those legitimate human goals which human beings hold in common as objectively good.
APPLICATION: THE ROLE OF ETHICS IN SCIENCE AND PUBLIC POLICY
If scientists are really serious about "self-regulation" in order to prevent the prevalence of scientific fraud in contemporary science, such "self-regulation" will succeed only to the extent that individual scientists identify for themselves those elements of a viable individual and social ethics - an ethics which ensures the integrity of the scientist who produces the "data", as well as of those social institutions which so deeply impact on the scientific enterprise itself. What, then, is the role that such an ethics could play in science and public policy?
A. The ethical scientist
Individual "ethics" - accurately understood - does not immediately determine the accuracy of scientific data, statistical probability, computer analysis, the peer-review, grant funding, government regulatory or Congressional decisions. But it does immediately affect the integrity of the "man" who is running these machines, and his decisions - i.e., the person behind the starched white lab coat, as Kornberg might have put it! If the "man" or his decisions are factually wrong - or his or her conception of "ethics" is wrong - - - well, "a small error in the beginning ......". It is often easy to loose sight of the fact that first and foremost the "man" is a human being and a moral agent; and then he or she is a scientist, peer-reviewer, granting agent, government official, journalist or Congressman. Indeed, this is precisely why individual ethical responsibility and accountability accrues all along the entire length of the Great-Chain-of-Decision-Making-Circle. Just as important to understand is that when the individual is a part of a "collective" policy decision making process, the impact of the individual's decisions and actions on others - both positive and negative - are greatly multiplied.
To the individual scientist falls the critically important job of starting off the entire chain of decision making with the correct experimental design - which requires correct information about his or her particular field of science. Thus the first ethical duty of a scientist (because of the expertise possessed) is to be academically competent in the knowledge and information of his or her field. If the scientist is incompetent about the correct empirical facts of the objective physical world within his purview, this incompetence becomes disastrous for everyone and everything that follows. Autonomy for everyone who uses this incorrect data is immediately precluded; harm is caused instead of beneficence or justice. Therefore, in terms of the correctness of the experimental design, and the correctness of the interpretations or the accuracy of the data, the scientist him or herself takes responsibility for his or her own scientific competence. In this regard, no one has a greater ethical responsibility in the Great Chain than the individual scientist.
But deliberating about the design and analyzing the data can often be seriously affected by outside pressures as well. For example, clinical researchers have expressed their concerns that several pharmaceutical companies - all of which are simultaneously supporting them on grants - put pressure on them to design the protocol (or interpret the data) in such a way as to make their own drugs or devices look better experimentally than they really are. Or - does this scientist or clinician own stock in the drug companies, biogenetic companies, scientific or ethical software companies, etc., with which they deal, and therefore "fudge" the data for the "company's" benefit, which is thus to their benefit as well? Or, is a favor owed to the colleague in Neurosurgery - who chairs the university grants program, or who also owns stock?
The list of conflicts of interest affecting the scientist's decision making is endless - our point being that even the very design of the protocol, not to mention the production or the interpretation of the data, can be considerably influenced by various pressures which tantalizingly or threateningly weaken the scientist's resistance and lead not only to incorrect, misapplied or even fraudulent data, but to the very breakdown of his or her character, his or her own integrity as a human being, his or her very commitment to objective truth. And this in turn can become habit-forming, becoming easier to do each time the occasion arises anew. Aristotle's "reversal" kicks in - the result, bad data. No longer devoted to unravelling the ever-fascinating mysteries of nature, the scientist instead becomes a mere grovelling pawn in the hands of corrupt "others", corrupted and used instead of respected and admired.
Individual scientists themselves need to effectively identify and resist these corrupting outside pressures. Every scientist is him or herself a moral agent, responsible for his or her own decisions and actions - and should be respected as such by the "others". Better for a scientist to refuse complicity than to allow oneself to be corrupted. A scientist's integrity should come to be recognized by the scientific community and political institutions as more important than racking up publications, volumes of articles and books, millions of dollars in grants, medals, honors and accolades in boundless quantities - or even Nobel Prizes - especially if all of these are really built on a house of cards that will eventually fall, dragging him or her down with it, and causing real harm to all of those "others" along the way as well.
"Harm" is often not considered by the bench scientist. But a scientist is not an island unto him or herself; he or she does not work in a vacuum - as some would often like to believe. Particularly today, when a scientist's work is applied in myriads of ways, he or she can no longer glory in the absolute "freedom of inquiry" in which we all were educationally drenched. "Absolute Freedom of Inquiry" is as mythical as "Absolute Autonomy", "Pure Beneficence", or "Perfect Justice" are in bioethics - they are all really myths or fictions - and sometimes actually only rationalizations. There are limits or bounds to what any one can do - and that includes a scientist. If a scientist's work is going to be applied to potentially millions of innocent human beings or to our shared environment, then this cherished "vacuum" of absolute solitude evaporates - and the scientist does bear moral responsibility of harm or injury done to those "others" because of his or her scientific incompetence, or freely choosing to succumb to corrupt institutional pressures. This is particularly true of the clinical researcher, who's incompetence and moral cracks are born by his or her very vulnerable human patients whose good, or so it is espoused, is primarily the goal.
B. Ethical institutions
On the "collective" level, such "institutions" also do not immediately determine the accuracy of the information or data. But as an integral part of the Great Chain-Circle, they do immediately cause such excessive and misdirected influences and pressures that, again, these influences and pressures in turn can seriously compromise the integrity of the individual scientist and the "others" with whom he or she are affiliated, eventually causing similar harm along the entire length of the Great Chain-Circle, calling into question the integrity of the institution and indeed of the entire scientific enterprise itself. "Collectively" this is perhaps even more problematic, given the hundreds and thousands of individuals whom these institutions directly influence and pressure on a daily basis.
Rather than fostering the common good, institutions can in fact abuse as well as use the scientist - in politics, committees, or organizations; through unrealistic or overburdening regulations; or by means of the media. These outside pressures also affect the scientist's decision making processes. For example, inappropriate or inordinate pressures might sometimes be placed on the scientist to conform to supposedly "standard", currently popular, politically correct, or politically motivated "scientific" frameworks of reference or explanation - even if those basic "frameworks" might be objectively false. The "politicization" of science seems to be running at an all-time high.
Similarly, unimaginative and stubborn resistance to innovation is one thing; blocking it because a committee, organization, industry or political party with power and influence does not want the old "status quo" to be demonstrated wrong is ethically reprehensible. It corrupts the scientist as both human being and scientist by bringing sometimes unbearable pressures to bear on him or her. It also corrupts the other co-committee members, who are pressured into compromising themselves as moral human agents in concert with each other. And it often corrupts the institution itself. Such pressures or influences require nothing less than the rejection or suppression of the true facts about nature and the substitution of incorrect "facts" in their place - which "facts" in turn are used as the false starting point of the Great Chain-Circle.
Or again, a scientist can be abused by oppressive and costly governmental over-regulation, which crushes creativity and unnecessarily bogs down the entire scientific enterprise itself. Nor should undue pressures influence the scientist to produce "perfectly auditable" data, which must sometimes be made to conform artificially to a preconceived "financial set" theory. How many times in the history of science have major breakthroughs occurred because of creative and innovative approaches or explanations, or because of simple errors which were honestly recorded and admitted - and which turned out to be true, even though they threw the t-test off?
Nor should scientists be abused by pressures from a media which can hype up the public with unrealistic caricatures, supposedly promising new "miracle" cures or treatments; or which misconstrues or misinterprets a scientific "controversy" - again misinforming the public, sometimes even breaking confidentiality and destroying the reputations and careers of scientists - in order to get a "great story" out first.
On a higher "collective" level, research institutions are causing numerous pressures and challenges among themselves which at times are conflicting. For example, there are great pressures on universities to cash in on their discoveries (Sugawara, 1993); yet at the same time there are serious concerns about the presence of a conflict of interest (Shamoo, 1992A, 1993A). There are pressures on the FDA to accelerate drug approvals, e.g., for AIDS patients; yet the FDA must continue to be concerned with the safety and efficacy of these drugs (Stolley and Lasky, 1992). There are pressures from universities on funding agencies to use merit in funding research projects; yet universities themselves exercise the power of pork barrel to achieve an increasing amount of funding, bypassing all peer-review systems (Marshall, 1992).
Finally, consider the effect on the integrity of decision making of the current "fad" of "consensus ethics". Probably due initially to fear of legal liability, many group or institutional decisions are now often based on what the majority agrees upon, or the "consensus" of the group. This is not necessarily an ethical judgment or decision, and is unfortunately sometimes used only as a means by which to dilute the moral (or legal) responsibilities of the members. And given that such institutional policies themselves are often the targets of outside pressures, their construction and promulgation by others often serve to provide merely a psychological cover of semi-anonymity for many of those individuals taking part in such collective decision making processes.
It is worth considering that "majority" or "consensus" decisions - even on the national scale - have at times been simply wrong and unethical. Most of the time consensus seems to work - but not necessarily always. We need to question constantly whether the "collective" decision of the majority actually compromises the integrity of each of the individuals in the minority - or of the institution itself. Ethically, at least, each and every member of the committee or institution taking part in that "consensus" bears individual moral responsibility and accountability for his or her own decision.
In addition to "consensus ethics", we are now seeing a move toward "consensus science". But how can we ensure that this "consensus science" is actually scientifically correct? The scientific establishment wants "consensus science" to be the only science accepted in the courtrooms (Marshall, 1993). These same groups complain bitterly about government guidelines to be used for ensuring the integrity of scientific data. Such guidelines, they complain, will stifle new and creative science, and by definition these guidelines are not approved by "consensus". Yet often there is simply no consensus within science itself on important and critical issues.
For example, there is the major ongoing debate about the very definition of "scientific misconduct" itself between the two most important agencies of the federal government which fund science - the NSF and the NIH. NSF would like to retain in the definition of "scientific misconduct" the broad clause: "or other serious deviation from accepted practices in proposing, carrying out, or reporting results". Yet NIH has dropped this section from its definition (Zurer, 1993), thus narrowing the definition as well as tying the hands of future investigative bodies. The claim that scientists are usually objective and unbiased would seem to run contrary to NIH's insistence on such a narrow definition of "scientific misconduct" out of fear of potential abuse if the definition is "too broad". Such lack of objectivity within this scientific institution calls to mind Gerald Geison's comments on Pasteur's "scientific objectivity": "Pasteur's message for contemporary science" was to puncture the "hopelessly misleading" image of science as "simply objective and unprejudiced," a myth that scientists have perpetuated in order to advance their work and attain a "privileged status" (Russell, 1993). Will the scientists' call for "self-regulation" ultimately prove a myth as well?
CONCLUSIONS
One last check on reality, then. There are no easy answers to the questions about government regulation vs. self-regulation. But if overly burdensome government regulations, required to assure the integrity of scientific data - as well as of the public policies which are often based on that data - are so objectionable to scientists, then "self-regulation" is in order. Scientists can't have it both ways. Especially in view of the very real harm which scientific fraud can and does cause, it must be dealt with clearly, firmly and unambiguously - one way or the other.
Yet "self-regulation", invoked by the scientific community to assure scientific integrity, must be based on much more than what is "consensual", "efficient", "productive", or statistically valid. It involves more than considerations of protocol designs and accurate data. Cryptically or not, the wisdom of the empirical experience of the centuries is unmistakably clear: real "self-regulation" requires a consideration of the integrity of the individual scientist as a human being and as an individual decision maker and actor. It also involves consideration of the integrity of the many institutions whose decisions and actions heavily influence and supposedly foster the scientist both as an individual and as a member of a "collective". This requires a grounding in a theoretically and practically viable personal and social ethics, itself grounded in a realistic and objectively based philosophy. The only other viable alternative, we would argue, is governmental regulation.
There is more at stake, then, than the integrity of experimental designs, the integrity of information, or the integrity of the data. More fundamentally at stake is the integrity of us as human beings - whether scientist, clinical researcher, quality analyst or controller, peer-reviewer, university, industry or government funder, regulator, journalist or Congressman. The Great Chain does not just "start" with "information" - but with a human being - who produces, insures, reviews, funds, regulates, reports on or governs - or, to complete the Great Circle, who is harmed by or compromised by not only the information or data, but by each other in their various levels of relationships, decisions and actions. Ultimately, it is the scientific enterprise itself, we would suggest, that is at stake.
REFERENCES
*** An edited version of this paper was originally presented by Dr. Irving at the Third Conference on Research Policies and Quality Assurance, Baltimore MD, May 2, 1993.
1. McKeon, Richard (1941) The Basic Works of Aristotle. New York: Random House.
2. Beauchamp, Tom L. and Childress, James F. (1979) Principles of Biomedical Ethics. (1st ed.) New York: Oxford University Press, pp. 7-9, 20; ibid. (1989) (3rd ed.), p. 51.
3. Beauchamp, Tom L. and Walters, LeRoy (1978) Contemporary Issues in Bioethics. (1st ed.) Belmont, CA: Wadsworth Publishing Company, Inc., pp. 3, 51; ibid. (1982) (2nd ed.), pp. 1-2, 23; ibid. (1989) (3rd ed.), pp. 2, 12, 24, 28.
4. Bourke, Vernon J. (1951) Ethics. New York: The Macmillan Company, p. 192.
5. Burns, Stephen J. (1991) Auditing high technology ventures. Internal Auditor 48, 6:56-59.
6. Copleston, Frederick (1962) A History of Philosophy. New York: Image Books, Vols. 1-9.
7. Cottingham, John, Stoothoff, Robert and Murdoch, Dugald (trans.) (1984) The Philosophical Writings of Descartes. (Vol. 2) Meditations on First Philosophy. New York: Press Syndicate of the University of Cambridge, (Sixth Meditation) pp. 59-60.
8. Crombie, A.C. (1959) Medieval and Early Modern Science. New York: Doubleday Anchor Books.
9. Edwards, Paul (1967) The Encyclopedia of Philosophy. (Vol. 1, reprint edition 1972) New York: Collier Macmillan Publishers, pp. 352-353.
10. Engelhardt, H. Tristram (1985) The Foundations of Bioethics. New York: Oxford University Press, p. 111.
11. Fagothey, Austin (1963) Right and Reason. (3rd edition) Saint Louis, MO: The C.V. Mosby Company, pp. 92ff, 101-113, 198.
12. Finnis, John (1980) Natural Law and Natural Rights. Oxford: Clarendon Press, pp. 85-97, 134-156.
13. Gilson, Etienne (1963) Being and Some Philosophers. Toronto: Pontifical Institute of Mediaeval Studies.
14. Hamel, Ron P., DuBose, Edwin R. and O'Connell, Laurence J. (1994) A Matter of Principles? Ferment in U.S. Bioethics. Valley Forge, PA: Trinity Press International.
15. Hare, R.M. (1988) When does potentiality count? A comment on Lockwood. Bioethics 2, 3:216, 218, 219.
16. Irving, Dianne N. (1991) Philosophical and Scientific Analysis of the Nature of the Early Human Embryo. (Doctoral dissertation, Georgetown University, Washington, D.C.).
17. Irving, Dianne N. (1992) Science, philosophy, theology and altruism: the chorismos and the zygon. In Hans May, Meinfried Streignitz, Philip Hefner (eds.) Loccumer Protokolle. Rehburg-Loccum: Evangelische Akademie Loccum.
18. Irving, Dianne N. (1993A) Scientific and philosophical expertise: an evaluation of the arguments on personhood. Linacre Quarterly 60, 1:18-46.
19. Irving, Dianne N. (1993B) The impact of scientific 'misinformation' on other fields: philosophy, theology, biomedical ethics, public policy. Accountability in Research 2, 4:243-272.
20. Irving, Dianne N. (1993C) Philosophical and scientific critiques of 'autonomy-based' ethics: toward a reconstruction of the 'whole person' as the proximate ground of ethics and community. (delivered to the Third International Bioethics Institute Conference, San Francisco, CA, April 16, 1993.)
21. Jowett, B. (1937) The Dialogues of Plato. New York: Random House.
22. Kischer, C. Ward (1993) Human development and reconsideration of ensoulment. Linacre Quarterly 60, 1:57-63.
23. Klubertanz, George P. (1963) Introduction to the Philosophy of Being. New York: Meredith Publishing Co.
24. Kornberg, Arthur (1992) Science is great, but scientists are still people. Science (Editorial) 257:859.
25. Kuhn, Thomas (1970) The Structure of Scientific Revolutions (2nd Edition). Chicago: The University of Chicago Press, p. 7.
26. Kuhse, Helga and Singer, Peter (1986) For sometimes letting - and helping - die. Law, Medicine and Health Care 3:4; also Kuhse and Singer (1985) Should the Baby Live? The Problem of Handicapped Infants. Oxford: Oxford University Press, p. 138.
27. Lockwood, Michael (1988) Warnock versus Powell (and Harradine): when does potentiality count? Bioethics 2, 3:187-213.
28. Marshall, Eliot (1992) George Brown cuts into academic pork. Science 258:22.
29. Marshall, Eliot (1993) Supreme Court to weigh science. Science 259:588-590.
30. McCormick, Richard A., S.J. (1974) Proxy consent in the experimentation situation. Perspectives in Biology and Medicine 18:127.
31. McInerny, Ralph (1982) Ethica Thomistica. Washington, D.C.: The Catholic University of America Press, p.63-77.
32. Moore, Keith L. (1982) The Developing Human. Philadelphia: W.B. Saunders Company, p. 1.
33. Pellegrino, Edmund D. (1993) Ethics. Journal of the American Medical Association 270, 2:202-203.
34. Robertson, John A. (1986) Extracorporeal embryos and the abortion debate. Journal of Contemporary Health Law and Policy 2:53.
35. Russell, Cristine (1993) Louis Pasteur and questions of fraud. Washington Post Health. February 23, 1993, p. 7.
36. Shamoo, Adil E., and Annau, Z. (1989) Data audit: historical perspectives. In Principles of Research Data Audit. A.E. Shamoo (ed.). New York: Gordon and Breach Science Publishers, Inc., (Chapter 1) pp. 1-12.
37. Shamoo, Adil E., and Davis (1990) The need for integration of data audit into research and development operations. In Principles of Research Data Audit. A.E. Shamoo (ed.). New York: Gordon and Breach Science Publishers, Inc., (Chapter 3) pp. 22-38; also in Accountability in Research 1:119-128.
38. Shamoo, Adil E. (1991) Policies and quality assurances in the pharmaceutical industry. Accountability in Research 1:273-284.
39. Shamoo, Adil E. (1992A) Role of conflict of interest in scientific objectivity: a case of a Nobel Prize work. Accountability in Research 2:55-75.
40. Shamoo, Adil E. (1992B) Introductory Remarks. Accountability in Research 2:i.
41. Shamoo, Adil E. (1993A) Role of conflict of interest in public advisory councils. In Ethical Issues in Research. D. Cheney (ed.). Frederick, MD: University Publishing Group, Inc., (Chapter 17) pp. 159-174.
42. Shamoo, Adil E., and Irving, Dianne N. (1993B) The PSDA and the depressed elderly: intermittent competency revisited. Journal of Clinical Ethics 4, 1:74-80.
43. Shamoo, Adil E., and Irving, Dianne N. (1993C) Accountability in research using persons with mental illness. Accountability in Research August (present journal volume).
44. Singer, Peter (1981) Taking life: abortion. In Practical Ethics. London: Cambridge University Press, pp. 122-123.
45. Stolley, Paul D., and Lasky, Tamar (1992) Shortcuts in drug evaluation. Clinical Pharmacology &amp; Therapeutics 52:1-3.
46. Sugawara, Sandra (1993) Cashing in on medical discoveries. The Washington Post/Business. January 4, 1993, p. 1.
47. Tannenbaum, A.S., and Cook, R.A. (1978) Report on the Mentally Infirm: Appendix to Research Involving Those Institutionalized as Mentally Infirm, pp. 1-2.
48. Tooley, Michael (1974) Abortion and infanticide. In Marshall Cohen et al. (eds.) The Rights and Wrongs of Abortion. Princeton, NJ: Princeton University Press, pp. 59, 64.
49. U.S. Code of Federal Regulations (1989) 45 CFR 46.
50. U.S. National Academy of Science (1989) On Being A Scientist. Washington, D.C.: National Academy Press, p. 9.
51. Veatch, Henry B. (1974) Aristotle: A Contemporary Approach. Indiana: Indiana University Press.
52. Vlastos, Gregory (1978) Plato: A Collection of Critical Essays. Indiana: University of Notre Dame Press.
53. Wilhelmson, Frederick (1956) Man's Knowledge of Reality. New Jersey: Prentice-Hall, Inc.
54. Zurer, Pamela S. (1993) Divisive dispute smolders over definition of scientific misconduct. Chemical and Engineering News April 5, 1993, pp. 23-25.