Medical ethics is a system of moral principles that apply values and judgments to the practice of medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, and sociology.

By the 18th and 19th centuries, medical ethics emerged as a more self-conscious discourse. In England, Thomas Percival, a physician and author, crafted the first modern code of medical ethics. He drew up a pamphlet with the code in 1794 and wrote an expanded version in 1803, in which he coined the expressions "medical ethics" and "medical jurisprudence".[1] However, some see Percival's guidelines on physician consultations as excessively protective of the home physician's reputation. Jeffrey Berlant is one such critic, who considers Percival's code on physician consultations an early example of the anti-competitive, "guild"-like nature of the physician community.[2][3]

In 1815, the Apothecaries Act was passed by the Parliament of the United Kingdom. It introduced compulsory apprenticeship and formal qualifications for the apothecaries of the day under the license of the Society of Apothecaries. This was the beginning of regulation of the medical profession in the UK.

Since the 1970s, the growing influence of ethics in contemporary medicine can be seen in the increasing use of Institutional Review Boards to evaluate experiments on human subjects, the establishment of hospital ethics committees, the expansion of the role of clinician ethicists, and the integration of ethics into many medical school curricula.[5]

A common framework used in the analysis of medical ethics is the "four principles" approach postulated by Tom Beauchamp and James Childress in their textbook Principles of Biomedical Ethics. It recognizes four basic moral principles, which are to be judged and weighed against each other, with attention given to the scope of their application. The four principles are:[6]

Respect for autonomy - the patient has the right to refuse or choose their treatment. (Voluntas aegroti suprema lex.)

Beneficence - a practitioner should act in the best interest of the patient. (Salus aegroti suprema lex.)

Non-maleficence - a practitioner should not be the cause of harm to the patient. (Primum non nocere.)

Justice - concerns the distribution of scarce health resources and the decision of who gets what treatment (fairness and equality).

Values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts.

When moral values are in conflict, the result may be an ethical dilemma or crisis. Sometimes, no good solution to a dilemma in medical ethics exists, and, on occasion, the values of the medical community (i.e., the hospital and its staff) conflict with the values of the individual patient, family, or larger non-medical community. Conflicts can also arise between health care providers, or among family members. Some argue, for example, that the principles of autonomy and beneficence clash when patients refuse blood transfusions that clinicians consider life-saving; similarly, truth-telling was not strongly emphasized before the HIV era.

The principle of autonomy recognizes the rights of individuals to self-determination. This is rooted in society's respect for individuals' ability to make informed decisions about personal matters. Autonomy has become more important as social values have shifted to define medical quality in terms of outcomes that are important to the patient rather than medical professionals. The increasing importance of autonomy can be seen as a social reaction to a "paternalistic" tradition within healthcare.[citation needed] Some have questioned whether the backlash against historically excessive paternalism in favor of patient autonomy has inhibited the proper use of soft paternalism to the detriment of outcomes for some patients.[7] Respect for autonomy is the basis for informed consent and advance directives.

Autonomy is a general indicator of health. Many diseases are characterised by a loss of autonomy in various ways, making autonomy an indicator both of personal well-being and of the well-being of the profession. This has implications for the consideration of medical ethics: is the aim of health care to do good, and benefit from it? Or is the aim of health care to do good to others, and have them, and society, benefit from this? (Ethics, by definition, tries to find a beneficial balance between the activities of the individual and their effects on a collective.)

By treating autonomy as a gauge of (self) health care, both the medical and the ethical perspective benefit from the implied reference to health.

Psychiatrists and clinical psychologists are often asked to evaluate a patient's capacity for making life-and-death decisions at the end of life. Persons with a psychiatric condition such as delirium or clinical depression may not have the capacity to make end-of-life decisions. Therefore, for these persons, a request to refuse treatment may be taken in consideration of their condition and not followed. Unless there is a clear advance directive to the contrary, in general persons lacking mental capacity are treated according to their best interests. On the other hand, persons with the mental capacity to make end-of-life decisions have the right to refuse treatment and choose an early death if that is what they truly want. In such cases, psychiatrists and psychologists are typically part of protecting that right.[8]

The term beneficence refers to actions that promote the well being of others. In the medical context, this means taking actions that serve the best interests of patients. However, uncertainty surrounds the precise definition of which practices do in fact help patients.

James Childress and Tom Beauchamp in Principles of Biomedical Ethics (1978) identify beneficence as one of the core values of healthcare ethics. Some scholars, such as Edmund Pellegrino, argue that beneficence is the only fundamental principle of medical ethics. They argue that healing should be the sole purpose of medicine, and that endeavors like cosmetic surgery and euthanasia fall beyond its purview.

The concept of non-maleficence is embodied by the phrase "first, do no harm," or the Latin primum non nocere. Many consider this the main or primary consideration (hence primum): it is more important not to harm your patient than to do them good. This is partly because enthusiastic practitioners are prone to using treatments that they believe will do good without first having evaluated them adequately to ensure they do no (or only acceptable levels of) harm. Much harm has been done to patients as a result, as in the saying, "The treatment was a success, but the patient died." It is not only more important to do no harm than to do good; it is also important to know how likely it is that a treatment will harm a patient. A physician should therefore go further than not prescribing medications they know to be harmful: they should not prescribe medications (or otherwise treat the patient) unless they know that the treatment is unlikely to be harmful, or at the very least that the patient understands the risks and benefits and that the likely benefits outweigh the likely risks.

In practice, however, many treatments carry some risk of harm. In some circumstances, e.g. in desperate situations where the outcome without treatment will be grave, risky treatments that stand a high chance of harming the patient will be justified, as the risk of not treating is also very likely to do harm. So the principle of non-maleficence is not absolute, and balances against the principle of beneficence (doing good), as the effects of the two principles together often give rise to a double effect (further described in next section).

The legal definition of non-maleficence varies with a society's cultural consensus, as expressed through its religious, political, and legal systems. Violation of non-maleficence is the subject of medical malpractice litigation, so regulations differ over time and between nations.

Double effect refers to two types of consequences that may be produced by a single action,[9] and in medical ethics it is usually regarded as the combined effect of beneficence and non-maleficence.[10]

A commonly cited example of this phenomenon is the use of morphine or other analgesic in the dying patient. Such use of morphine can have the beneficial effect of easing the pain and suffering of the patient while simultaneously having the maleficent effect of shortening the life of the patient through suppression of the respiratory system.[11]

Autonomy can come into conflict with beneficence when patients disagree with recommendations that healthcare professionals believe are in the patient's best interest. When a patient's wishes conflict with the patient's welfare, different societies settle the conflict in a wide range of ways. In general, Western medicine defers to the wishes of a mentally competent patient to make their own decisions, even in cases where the medical team believes that the patient is not acting in their own best interests. However, many other societies prioritize beneficence over autonomy.

Examples include a patient refusing a treatment because of religious or cultural views. In the case of euthanasia, the patient, or relatives of the patient, may want to end the patient's life. Also, the patient may want an unnecessary treatment, as can be the case in hypochondria or with cosmetic surgery; here, the practitioner may be required to balance the medically unnecessary potential risks of the desired treatment against the patient's informed autonomy in the matter. A doctor may nevertheless prefer to respect autonomy, because refusing to honor the patient's will would harm the doctor-patient relationship.

Individuals' capacity for informed decision making may come into question during resolution of conflicts between autonomy and beneficence. The role of surrogate medical decision makers is an extension of the principle of autonomy.

On the other hand, autonomy and beneficence/non-maleficence may also overlap. For example, a breach of patients' autonomy may cause decreased confidence for medical services in the population and subsequently less willingness to seek help, which in turn may cause inability to perform beneficence.

The principles of autonomy and beneficence/non-maleficence may also be expanded to include effects on the relatives of patients or even the medical practitioners, the overall population and economic issues when making medical decisions.

The human rights era started with the formation of the United Nations in 1945, which was charged with the promotion of human rights. The Universal Declaration of Human Rights (1948) was the first major document to define human rights. Medical doctors have an ethical duty to protect the human rights and human dignity of the patient, so the advent of a document defining human rights has influenced medical ethics. Most codes of medical ethics now require respect for the human rights of the patient.

The Council of Europe promotes the rule of law and observance of human rights in Europe. The Council of Europe adopted the European Convention on Human Rights and Biomedicine (1997) to create a uniform code of medical ethics for its 47 member-states. The Convention applies international human rights law to medical ethics. It provides special protection of physical integrity for those who are unable to consent, which includes children.

No organ or tissue removal may be carried out on a person who does not have the capacity to consent under Article 5.[12]

As of December 2013, the Convention had been ratified or acceded to by twenty-nine member-states of the Council of Europe.[13]

The United Nations Educational, Scientific and Cultural Organization (UNESCO) also promotes the protection of human rights and human dignity. According to UNESCO, "Declarations are another means of defining norms, which are not subject to ratification. Like recommendations, they set forth universal principles to which the community of States wished to attribute the greatest possible authority and to afford the broadest possible support." UNESCO adopted the Universal Declaration on Human Rights and Biomedicine to advance the application of international human rights law in medical ethics. The Declaration provides special protection of human rights for incompetent persons.

In applying and advancing scientific knowledge, medical practice and associated technologies, human vulnerability should be taken into account. Individuals and groups of special vulnerability should be protected and the personal integrity of such individuals respected.[14]

There is disagreement among American physicians as to whether the non-maleficence principle excludes the practice of euthanasia.[citation needed] An example of a doctor who did not believe euthanasia should be excluded was Dr. Jack Kevorkian, who was convicted of second-degree murder in Michigan in 1999 after demonstrating active euthanasia on the TV news show 60 Minutes.

In some countries, such as the Netherlands, euthanasia is an accepted medical practice.[citation needed] Legal regulations assign this to the medical profession. In such nations, the aim is to alleviate the suffering of patients with diseases known to be incurable by the methods available in that culture. In that sense, primum non nocere is based on the belief that a medical expert's inability to offer help creates known, great, and ongoing suffering in the patient.[citation needed]

Informed consent in ethics usually refers to the idea that a person must be fully informed about and understand the potential benefits and risks of their choice of treatment. An uninformed person is at risk of mistakenly making a choice not reflective of his or her values or wishes. It does not specifically mean the process of obtaining consent, or the specific legal requirements, which vary from place to place, for capacity to consent. Patients can elect to make their own medical decisions, or can delegate decision-making authority to another party. If the patient is incapacitated, laws around the world designate different processes for obtaining informed consent, typically by having a person appointed by the patient or their next of kin make decisions for them. The value of informed consent is closely related to the values of autonomy and truth telling.

Confidentiality is commonly applied to conversations between doctors and patients. This concept is commonly known as patient-physician privilege. Legal protections prevent physicians from revealing their discussions with patients, even under oath in court.

Confidentiality is mandated in America by HIPAA laws, specifically the Privacy Rule, and various state laws, some more rigorous than HIPAA. However, numerous exceptions to the rules have been carved out over the years. For example, many states require physicians to report gunshot wounds to the police and impaired drivers to the Department of Motor Vehicles. Confidentiality is also challenged in cases involving the diagnosis of a sexually transmitted disease in a patient who refuses to reveal the diagnosis to a spouse, and in the termination of a pregnancy in an underage patient, without the knowledge of the patient's parents. Many states in the U.S. have laws governing parental notification in underage abortion.[15][16]

Traditionally, medical ethics has viewed the duty of confidentiality as a relatively non-negotiable tenet of medical practice. More recently, critics like Jacob Appel have argued for a more nuanced approach to the duty that acknowledges the need for flexibility in many cases.[17]

Confidentiality is an important issue in primary care ethics, where physicians care for many patients from the same family and community, and where third parties often request information from the considerable medical database typically gathered in primary health care.

It has been argued that mainstream medical ethics is biased by the assumption of a framework in which individuals are not simply free to contract with one another to provide whatever medical treatment is demanded, subject to the ability to pay. Because a high proportion of medical care is typically provided via the welfare state, and because there are legal restrictions on what treatment may be provided and by whom, an automatic divergence may exist between the wishes of patients and the preferences of medical practitioners and other parties. Tassano[18] has questioned the idea that Beneficence might in some cases have priority over Autonomy. He argues that violations of Autonomy more often reflect the interests of the state or of the supplier group than those of the patient.

Where such disputes arise, regulatory professional bodies and the courts of law provide valid avenues of social recourse.

Many so-called "ethical conflicts" in medical ethics are traceable to a lack of communication. Communication breakdowns between patients and their healthcare team, between family members, or between members of the medical community can all lead to disagreements and strong feelings. These breakdowns should be remedied, and many apparently insurmountable "ethics" problems can be solved with open lines of communication.[citation needed]

Often, simple communication is not enough to resolve a conflict, and a hospital ethics committee must convene to decide a complex matter.

These bodies are composed primarily of healthcare professionals, but may also include philosophers, lay people, and clergy - indeed, in many parts of the world their presence is considered mandatory in order to provide balance.

With respect to the expected composition of such bodies in the USA, Europe, and Australia, the following applies.[3]

U.S. recommendations suggest that Research and Ethical Boards (REBs) should have five or more members, including at least one scientist, one non-scientist, and one person not affiliated with the institution. The REB should include people knowledgeable in the law and in standards of practice and professional conduct. Special membership is advocated to represent handicapped or disabled concerns, if required by the protocol under review.

The European Forum for Good Clinical Practice (EFGCP) suggests that REBs include two practicing physicians with experience in biomedical research who are independent of the institution where the research is conducted; one lay person; one lawyer; and one paramedical professional, e.g. a nurse or pharmacist. They recommend that a quorum include both sexes, span a wide age range, and reflect the cultural make-up of the local community.

The 1996 Australian Health Ethics Committee recommendations were entitled "Membership Generally of Institutional Ethics Committees". They suggest that the chairperson preferably be someone not employed by or otherwise connected with the institution. Members should include a person with knowledge and experience in professional care, counselling or treatment of humans; a minister of religion or equivalent, e.g. an Aboriginal elder; a layman; a laywoman; a lawyer; and, in the case of a hospital-based ethics committee, a nurse.

The assignment of philosophers or religious clerics reflects the importance a society attaches to the basic values involved. In Sweden, for example, the appointment of the philosopher Torbjörn Tännsjö to several such committees indicates that secular trends are gaining influence.

With increasing frequency, medical researchers are studying activities in online environments such as discussion boards and bulletin boards, and there is concern that the requirements of informed consent and privacy are not applied as stringently as they should be, although some guidelines do exist.[20]

One issue that has arisen, however, is the disclosure of information. While researchers wish to quote from the original source in order to argue a point, this can have repercussions. The quotations and other information about the site can be used to identify the site, and researchers have reported cases where members of the site, bloggers and others have used this information as 'clues' in a game in an attempt to identify the site.[21] Some researchers have employed various methods of "heavy disguise,"[21] including discussing a different condition from that under study,[22][23] or even setting up bogus sites (called 'Maryut sites') to ensure that the researched site is not discovered.[24]

Culture differences can create difficult medical ethics problems. Some cultures have spiritual or magical theories about the origins of disease, for example, and reconciling these beliefs with the tenets of Western medicine can be difficult.

Some cultures do not place great emphasis on informing the patient of the diagnosis, especially when the diagnosis is cancer. Even American culture made little use of truth-telling in medical cases until the 1970s. In American medicine, the principle of informed consent now takes precedence over other ethical values, and patients are usually at least asked whether they want to know the diagnosis.

The delivery of diagnoses online can lead patients to believe that doctors in some parts of the country are at the direct service of drug companies, finding a diagnosis as convenient as whatever drug still holds patent rights. Physicians and drug companies have been found to compete for top-ten search engine rankings to lower the costs of selling these drugs, with little to no patient involvement.[25]

Physicians should not allow a conflict of interest to influence medical judgment. In some cases, conflicts are hard to avoid, and doctors have a responsibility to avoid entering such situations. However, research has shown that conflicts of interest are very common among both academic physicians[26] and physicians in practice.[27] The Pew Charitable Trusts has announced the Prescription Project for "academic medical centers, professional medical societies and public and private payers to end conflicts of interest resulting from the $12 billion spent annually on pharmaceutical marketing".

For example, doctors who receive income from referring patients for medical tests have been shown to refer more patients for medical tests.[28] This practice is proscribed by the American College of Physicians Ethics Manual.[29]

Fee splitting and the payments of commissions to attract referrals of patients is considered unethical and unacceptable in most parts of the world.

Studies show that doctors can be influenced by drug company inducements, including gifts and food.[30] Industry-sponsored Continuing Medical Education (CME) programs influence prescribing patterns.[31] Many patients surveyed in one study agreed that physician gifts from drug companies influence prescribing practices.[32] A growing movement among physicians is attempting to diminish the influence of pharmaceutical industry marketing upon medical practice, as evidenced by Stanford University's ban on drug company-sponsored lunches and gifts. Other academic institutions that have banned pharmaceutical industry-sponsored gifts and food include the Johns Hopkins Medical Institutions, University of Michigan, University of Pennsylvania, and Yale University.[33][34]

Sexual relationships between doctors and patients can create ethical conflicts, since sexual consent may conflict with the fiduciary responsibility of the physician. Doctors who enter into sexual relationships with patients face the threats of deregistration and prosecution. In the early 1990s, it was estimated that 2-9% of doctors had violated this rule.[37] Sexual relationships between physicians and patients' relatives may also be prohibited in some jurisdictions, although this prohibition is highly controversial.[38]

The concept of medical futility has been an important topic in discussions of medical ethics. What should be done if there is no chance that a patient will survive but the family members insist on advanced care? Previously, some articles defined futility as the patient having less than a one percent chance of surviving. Some of these cases are examined in court.

Advance directives include living wills and durable powers of attorney for health care. (See also Do Not Resuscitate and cardiopulmonary resuscitation.) In many cases, the "expressed wishes" of the patient are documented in these directives, and this provides a framework to guide family members and health care professionals in the decision-making process when the patient is incapacitated. Undocumented expressed wishes can also help guide decisions in the absence of advance directives, as in the Quinlan case in New Jersey.

"Substituted judgment" is the concept that a family member can give consent for treatment if the patient is unable (or unwilling) to give consent themselves. The key question for the decision making surrogate is not, "What would you like to do?", but instead, "What do you think the patient would want in this situation?".

Courts have supported families' arbitrary definitions of futility to include simple biological survival, as in the Baby K case (in which the courts ordered a child born with only a brain stem instead of a complete brain to be kept on a ventilator based on the religious belief that all life must be preserved).

In some hospitals, medical futility is referred to as "non-beneficial care."

Baby Doe Law establishes state protection for a disabled child's right to life, ensuring that this right is protected even over the wishes of parents or guardians in cases where they want to withhold treatment.

Bruckman A (2002). "Studying the amateur artist: A perspective on disguising data collected in human subjects research on the Internet". Ethics and Information Technology 4 (3): 217–31. doi:10.1023/A:1021316409277.

Swedlow A, Johnson G, Smithline N, Milstein A (1992). "Increased costs and rates of use in the California workers' compensation system as a result of self-referral by physicians". N Engl J Med 327 (21): 1502–6. doi:10.1056/NEJM199211193272107. PMID 1406882.