
Aniket Satbhai

Utilitarianism

  • Utilitarianism
    • Is a family of ethical theories and a version of consequentialism, which holds that the consequences of any action are the only standard of right and wrong
    • Conceives of “benefit” as whatever maximizes well-being across all affected individuals

(Image: Past, Future, Presence)

  • According to utilitarians, morally right actions are the ones that produce the greatest balance of benefits over harms for everyone affected.
  • Unlike other, more individualistic forms of consequentialism (such as egoism) or unevenly weighted consequentialism (such as prioritarianism), utilitarianism considers the interests of all humans equally.
  • However, utilitarians disagree on many specific questions, such as whether actions should be chosen based on their likely results (act utilitarianism), or whether agents should conform to rules that maximize utility (rule utilitarianism). There is also disagreement as to whether total (total utilitarianism), average (average utilitarianism), or minimum utility should be maximized; the short sketch below shows how these aggregation rules can disagree about the very same outcome.
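To make the disagreement concrete, here is a minimal Python sketch. All utility numbers are invented purely for illustration; they are not from the source.

```python
# Hypothetical utility scores for the individuals affected by two outcomes.
outcome_a = [9, 9, 9, 1]   # high total, but one person is left badly off
outcome_b = [6, 6, 6, 6]   # lower total, but evenly distributed

def total(utils):    # total utilitarianism: maximize the sum
    return sum(utils)

def average(utils):  # average utilitarianism: maximize the mean
    return sum(utils) / len(utils)

def minimum(utils):  # maximize the worst-off individual's utility
    return min(utils)

for rule in (total, average, minimum):
    better = "A" if rule(outcome_a) > rule(outcome_b) else "B"
    print(f"{rule.__name__:>7}: A={rule(outcome_a):.2f}, B={rule(outcome_b):.2f} -> prefer {better}")
```

Total and average utilitarianism both prefer outcome A here, while the minimum-utility rule prefers B, because under B nobody is left badly off.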

  • For utilitarians, utility – or benefit – is defined in terms of well-being or happiness.

  • Jeremy Bentham, the father of utilitarianism, characterized utility as

    "that property… (that) tends to produce benefit, advantage, pleasure, good, or happiness…(or) to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is considered."

  • Utilitarianism offers a relatively simple method for deciding whether an action is morally right:

    • Identify the various possible actions
    • Estimate the benefits and harms each action would produce for everyone affected
    • Choose the action that provides the greatest net benefit after considering costs (a minimal sketch of this procedure follows this list)
    • According to the principle of “diminishing marginal utility”, the utility of an item decreases as the supply of units increases (and vice versa); this is illustrated in the second sketch below.
    • For example, when you start to work out, at first you benefit greatly and your results improve dramatically. But the longer you continue, the smaller the impact of each individual training session. If you work out too often, the utility diminishes further and you’ll start to suffer from the symptoms of overtraining.
    • Another example: if you eat one candy, you’ll get a lot of pleasure. But if you eat too much candy, you may gain weight and increase your risk of all kinds of illnesses.
    • This paradox of benefits should always be remembered when we evaluate the consequences of actions. What is the common good now may not be the common good in the future.
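The three-step method above can be read as a simple optimization: estimate a net utility for each candidate action and pick the maximum. The Python sketch below is a hypothetical rendering of that reading; the actions and numbers are invented, and producing such estimates is precisely the hard part in practice.

```python
# Step 1: identify the various possible actions. Each action's estimated
# benefits and harms (on some common scale, summed over everyone affected)
# are invented numbers for illustration.
actions = {
    "deploy_system":  {"benefit": 120, "harm": 40},
    "delay_and_test": {"benefit": 80,  "harm": 10},
    "do_nothing":     {"benefit": 0,   "harm": 0},
}

def net_utility(consequences):
    """Step 2: estimate the balance of benefits over harms."""
    return consequences["benefit"] - consequences["harm"]

# Step 3: choose the action with the greatest net benefit.
best_action = max(actions, key=lambda a: net_utility(actions[a]))
print(best_action)  # -> deploy_system (120 - 40 = 80 beats 80 - 10 = 70)
```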

(Figure: Diminishing Marginal Utility)
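Diminishing marginal utility is often modeled with a concave function such as log(1 + x); that functional form is an illustrative assumption here, not something the text prescribes. The sketch shows each additional unit (a candy, a training session) adding less utility than the last:

```python
import math

def utility(units):
    # log(1 + x): a common textbook model of diminishing marginal utility.
    return math.log(1 + units)

for n in range(1, 6):
    marginal = utility(n) - utility(n - 1)
    print(f"unit {n}: marginal utility = {marginal:.3f}")

# Each successive unit adds less than the one before:
# 0.693, 0.405, 0.288, 0.223, 0.182
```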

  • The problems of utilitarianism

    • Utilitarianism is not a perfect account of moral decision-making.
    • It has been criticized on many grounds.
      • For example, utilitarian calculation requires that we assign values to the benefits and harm resulting from our actions and compare them with the consequences that might result from other actions. But it’s often difficult, if not impossible, to measure and compare the values of all relevant benefits and costs in advance.
    • "Risk" is commonly used to mean a likelihood of a danger or a hazard that arises unpredictably, or in a more technical sense, the probability of some resulting degree of harm. In AI ethics, harm and risks are taken to arise from design, inappropriate application, or intentional misuse of technology. Typical examples are risks such as discrimination, violation of privacy, security issues, cyberwarfare, or malicious hacking.
    • In practice, it is difficult to compare the risks and benefits for the following reasons:
      • Risks and benefits are influenced by value commitments, subjective and diverse preferences, practical circumstances, and personal and cultural factors.
      • Harms and benefits are not static.
        • The marginal utility of an item diminishes in a way that can be difficult to foresee. Moreover, a specific harm or a specific benefit may have different utility value in different circumstances.
          • For example, whether a faster vehicle is more beneficial depends on its intended use: if it is intended to be a school bus, then we should prioritize safety, but if it is used as a racing car, the answer may be different.
      • Real-world situations are typically so complex that it is difficult to foresee or compare all the risks and benefits in advance.
        • For example, let’s analyze the possible consequences of military robotics. Although contemporary military robots are largely remotely operated or semi-autonomous, over time they are likely to become fully autonomous. According to some estimates, robots reduce civilian and military casualties. But according to other estimates, they do not reduce the risk to civilians. Statistically, in the first decades of war in the 21st century, robotic weaponry has been involved in numerous killings of both soldiers and noncombatants. The possibility of using various techniques – such as adversarial patches (which disrupt a machine’s ability to properly classify images) – to fool and manipulate automated weapons complicates the situation by increasing the specific risk of causing harm to civilians. The overall level of risk also depends on the ease with which wars might be declared if robots are taking most of the physical risk.
      • Utilitarianism fails to take into account other moral aspects.
        • It is easy to imagine situations where developed technology would produce great benefits for societies, but its use would still raise important ethical questions.
          • For example, let’s think about the case of a preventive healthcare system. The system may indeed be beneficial for many, but it still forces us to ask whether fundamental human rights, such as privacy, matter. Or what happens to a citizen’s right not to know about possible health problems? (Many of us would want to know if we are in a high-risk group, but what if someone does not want to know? Can a city force that knowledge on them?) Or, how can we ensure that everyone has equal access to the possible benefits of a preventive system?
  • Nozick’s utility monster

    • Technically, utility is only a measure (a numeric quantity) that describes some kind of underlying “good” which we want to maximize.
      • Say, pleasure, or well-being (which hedonist philosophers would claim to be the same thing). Pleasure is at least to some extent a subjective experience, and utility, as a measure, should transform it into an intersubjectively comparable number. That is a high bar to reach.
    • Assuming such a measure as utility does in fact exist, philosopher Robert Nozick presents the following puzzle. There is a creature called the Utility Monster. Their hedonistic mind is wired so that, given any resource, they will receive more pleasure from it than any other individual would. They simply enjoy apples, cars, coffee, freedom, etc., more than anybody else does. This means that they gain more utility from them, and if we are morally obligated to maximize the utility produced by the resources we have, the conclusion is clear: give everything we have to the Utility Monster, and nothing to anybody else.
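A toy allocation loop makes the puzzle concrete. In this sketch all agents and numbers are hypothetical: the monster gains more utility from every unit of a resource than anyone else ever could, so a strict utility-maximizing allocator hands it everything.

```python
# Greedy utility-maximizing allocation of scarce resources. The marginal
# utilities are invented to match Nozick's thought experiment: the monster
# gets more utility from ANY unit than anyone else would.
def marginal_utility(agent, units_already_held):
    if agent == "utility_monster":
        return 100.0  # enjoys every unit enormously, without satiation
    return 1.0 / (1 + units_already_held)  # ordinary agent, diminishing returns

agents = {"utility_monster": 0, "alice": 0, "bob": 0}

for _ in range(10):  # allocate 10 units of some resource
    # Give each unit to whoever would gain the most utility from it.
    best = max(agents, key=lambda a: marginal_utility(a, agents[a]))
    agents[best] += 1

print(agents)  # -> {'utility_monster': 10, 'alice': 0, 'bob': 0}
```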

Ref.: Ethics of AI
