Utilitarian: Difference between revisions

From metawiki
[[File:Doug_Forcett.jpg|thumb|right|What if Doug Forcett wrote a Wiki?]]

The [[ethics]] of [[metaculture]] is based on [https://en.wikipedia.org/wiki/Utilitarianism Utilitarianism], which in its simplest definition seeks the ''greatest good for the greatest number of people'', based on the [[consequences]] of a given act. Maximizing that which is good is an obvious goal for any system of [[ethics]]; less obvious is how we define good, how we quantify it, and [[Theoretical people|which people are included]] in the [[Generating equation|equation]].<blockquote>''“Society between equals can only exist on the understanding that the interests of all are to be regarded equally.”''  -[[wikipedia:John_Stuart_Mill|John Stuart Mill]]</blockquote>Utilitarianism is used as a shorthand since it is the more commonly recognized term. Those who seek [[philosophical]] precision will recognize the [[ethical]] [[perspective]] described here as [[wikipedia:Consequentialism|Consequentialism]], or more specifically a version of [[wikipedia:State_consequentialism|State Consequentialism]], where ethics are considered in terms of how they would impact the [[happiness]] of a whole [[society]] if they were enforced through the [[legal]] system or via [[social norms]]. This resolves the dichotomy between [[wikipedia:Consequentialism|consequentialism]] and [[wikipedia:Deontology|Deontology]], since both seek to find a rule-based system that can be "[[wikipedia:Universal_law|universal law]]," rather than considering isolated [[ethical]] dilemmas.

Of course there are more detailed distinctions that could be made, but the point here is to demonstrate an [[ethical]] system that is robust and subject to empirical [[evidence]], so we can find more satisfying [[moral]] answers with [[science]] than we have been able to with [[philosophy]] or [[religion]].

Many of the arguments made here and on other pages reflect a similar [[point of view]] to the one expressed in [https://www.goodreads.com/book/show/7785194-the-moral-landscape The Moral Landscape by Sam Harris]. While most [[philosophers]] agree that this [[book]] leaves many relevant questions unresolved, it is a popular [[book]] that makes many of the points made here more eloquently.  

The [[wiki]] has the advantage of providing unlimited space to address all of the popular counterarguments to utilitarianism. Additional critique and response based on the arguments and criticisms of [https://www.goodreads.com/book/show/7785194-the-moral-landscape The Moral Landscape] can be found under [[Objective Morality]].<br>{{#ev:youtube|https://www.youtube.com/watch?v=-a739VjqdSI||center|Philosophy Crash Course - Utilitarianism|frame}}

The following sections address the many problems and counter-arguments to basic utilitarianism that are resolved by state [[consequentialism]] and the [[universal in-group]].

== What is Good? ==
Good is the word we use to describe the things that benefit our survival. Even those that don't believe in [[evolution]] still universally share a [[concept]] of what constitutes good that is exclusively pro-survival. [[Evolution]] is what instills your [[beliefs]] in you, regardless of whether you [[believe]] in [[evolution]].

Our emotions are the mechanism that [[evolution]] has instilled in our [[brains]] to tell us what is good and what is bad. We feel pleasure when we eat or have [[sex]]; we feel pain when we or those we [[love]] are injured. When sustained over [[time]], a good ratio of pleasure to pain results in [[happiness and well-being]].

The purpose of pleasure is to encourage your [[brain]] to do more of whatever led to the pleasure, and the purpose of pain is to discourage it.

It can therefore be concluded [[logically]] that [[evolution]] has formed our [[brains]] to direct our bodies to seek greater pleasure while minimizing pain, which happens to be the goal of Utilitarianism. It empirically matches the actual [[ethical]] calculations our [[brains]] make when our [[neurons]] decide on a course of action.

[[Happiness]] is therefore the measure of good. Beyond just pleasure or [[joy]], which can be fleeting and result in greater harms when pursued shortsightedly, [[happiness and well-being]] represent long-term, sustained positive [[emotional]] states when referred to in this [[wiki]], and have empirically been shown to be our goal in [[life]].

== You Can't Measure Happiness ==

A common objection is to dismiss the idea that [[happiness]] can be measured. But there are [https://positivepsychology.com/measure-happiness-tests-surveys/ a number of ways to measure happiness], and while they may not perfectly capture the state of mind of any individual, they do give good aggregate results that indicate whether a large population is able to achieve satisfaction in life.

Therefore, an ethical utilitarian society would seek to constantly increase the measured aggregate [[happiness]] in its people and should actively pursue this goal directly, rather than through proxy [[Meterstick|measurements]] such as [[economic]] activity and GDP, or [[Ideology|ideological]] goals that are disconnected from any [[Meterstick|metrics]].
This is discussed further on the [[happiness and well-being]] page.
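The aggregation the page describes can be made concrete. Below is a minimal illustrative sketch (not from the article, and all scores hypothetical): averaging per-person life-satisfaction responses on a 1–7 scale, the way survey instruments report population-level results even though individual answers are noisy.

```python
# Illustrative sketch: aggregating survey-based happiness scores.
# Real instruments (e.g. life-satisfaction scales) sum or average
# multiple Likert items; the numbers here are hypothetical.
from statistics import mean, stdev

def aggregate_happiness(responses):
    """Return mean, spread, and sample size for a list of 1-7 scores."""
    if not responses:
        raise ValueError("no responses to aggregate")
    return {
        "mean": mean(responses),
        "stdev": stdev(responses) if len(responses) > 1 else 0.0,
        "n": len(responses),
    }

# Individual answers are imprecise, but the aggregate tracks whether a
# population is achieving satisfaction -- the quantity the page argues
# policy should target directly, rather than proxies like GDP.
print(aggregate_happiness([5, 6, 4, 7, 5, 6]))
```

The point of the sketch is only that "measured aggregate happiness" is an ordinary statistical quantity, not something mysterious.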
== What If Hurting This Guy Makes 2 People Happy? ==

This is the most common hypothetical counterexample that entirely too many people see as disproving [[utilitarianism]]. However, like many moral hypotheticals, it posits a situation that is fundamentally impossible given the way our [[brains]] have [[evolved]] and how [[societies]] work.

Humans are empathetic, social creatures. There is no way for us to harm another human without [[Emotions|emotional]] repercussions on ourselves. Only a psychopath can blithely torture or kill another human and not feel their pain. And, being a psychopath, they're probably not feeling very [[happy]] either. So that's no way to go.

Then you must also consider the ramifications of implementing this hypothetical within the [[legal]] framework of a [[society]]. Let's say you think that killing and taking the organs from one drifter in order to save 5 parents of young children and prevent them from being orphaned yields a net positive in [[happiness]]. The drifter has no [[family]] or friends and won't be missed by anyone, while the parents who were saved will go on to have fulfilling lives, and their kids won't end up in foster care. Within this superficial closed system, it appears that [[utilitarianism]] would support this choice.

However, if it is right for one person to kill a drifter for their organs if it saves a few lives, then it has to be right for everyone to do it. And what does a [[society]] look like where it is legal to harvest the organs of the unhoused? Not like one that anyone would actually want to live in. Because it wouldn't actually make us [[happy]], it would be morbid, fearful, and lawless.
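The arithmetic behind this argument can be made explicit. A toy sketch (all numbers hypothetical, chosen only for illustration): the "closed system" sum looks positive, but once the rule is applied society-wide, a tiny per-person cost in fear and eroded trust dominates it.

```python
# Toy illustration of the organ-harvest hypothetical (hypothetical numbers).
def net_happiness(deltas):
    """Sum of happiness changes across everyone affected."""
    return sum(deltas)

# Naive closed system: one drifter loses everything, five parents gain.
naive = net_happiness([-100] + [+30] * 5)  # positive: looks like a win

# State consequentialism: the rule applies to everyone, so the whole
# society absorbs a small per-person cost of fear and distrust.
population = 1_000_000
fear_per_person = -0.01  # tiny individually...
societal = naive + population * fear_per_person  # ...but it dominates

print(naive, societal)  # the sign flips once society-wide effects count
```

The sketch is not a real moral calculation; it only shows why a rule can pass the closed-system test and still fail the society-wide one.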

Any [[society]] that benefits from harming others, whether through forced labor, unfair [[taxation]], [[grift]], or [https://en.wikipedia.org/wiki/Rent-seeking rent-seeking], will never be as [[happy]] as one that treats all people fairly and minimizes the burden of obligations that contradict a person's [[Freedom|desires]] and [[Free will|self-determination]].

While the hypotheticals posit more people winning than losing, [[historically]] these unfair arrangements have always been pyramid shaped, with the beneficiaries at the top and the people being harmed at the bottom. And a billionaire on ecstasy at a yacht party is not sufficiently [[happy]] to mathematically make up for the desperation and toil of the thousands of workers who make that lifestyle possible.

== Whose Happiness Counts? ==

People primarily want to benefit their [[in-group]]. Not necessarily at the expense of others, but usually in favor of them. The closer the [[in-group]] the more we favor them, with [[family]] taking priority over community, nation, and larger [[cultures]]. This prioritization makes sense in terms of our personal [[life choices]] as well as an [[evolutionary]] perspective. But when considering [[universal]] [[ethics]] you need to use a universal [[in-group]]. In other words, if you don't consider the [[happiness]] of everyone your system is inherently [[unjust]].

"Forward-thinking" [[crypto-bros]] think that they can cleverly circumvent the need to care about the living by contributing to the theoretical happiness of [[future people]]. This is an ethical cop-out used to justify selfish behavior, such as using [https://en.wikipedia.org/wiki/Effective_altruism Effective Altruism] as an excuse to [https://en.wikipedia.org/wiki/Bankruptcy_of_FTX steal billions of dollars in a crypto pyramid scheme]. Someday you plan to use it for charity, so the more money you get now the more good you can do later!

When calculating the utilitarian benefit of any action, the tangible effects on the living should be prioritized significantly above the needs of any [[Theoretical people|theoretical humans]], or ones you may theoretically help in the [[future]] once you get [[rich]]. While we can't ignore our impact on [[future]] [[generations]], we cannot prioritize their needs over our own either. Part of our [[happiness]] depends on knowing we are leaving a better world for our [[children]], but a better world is always one that helps the living first. Just like in an airplane, put on your own oxygen mask first before helping your children with theirs. Especially if they [[Theoretical people|haven't been born yet]]!

== A Model for Utilitarian Rights ==
One of the main arguments against utilitarianism is that it doesn't offer a framework for [[human rights]], since theoretically anyone's rights can be overridden if the [[happiness]] [[math]] justifies it. The [[legal]] systems in most countries use a rights-based model, not a utilitarian one, which is cited as evidence. Once again, state [[consequentialism]] comes to the rescue. When you compare the aggregate [[happiness]] of states that provide [[universal]] [[human rights]] with that of states that don't, the [[evidence]] overwhelmingly supports the [[Ideas|idea]] that those rights are [[universally]] good.

The fact that it is hard to base a [[legal]] system on complex moral calculations that have only recently been quantified does not mean that those calculations aren't the most accurate way to make informed [[ethical]] decisions that consider the relevant [[evidence]].

While it is theoretically possible for the utilitarian [[math]] to offer a repugnant conclusion, the actual [[mathematics]] of human [[emotion]] avoids those outcomes in the real world. If your [[Impossible hypotheticals|hypothetical]] assumes people are just going to be cool with hurting or violating the rights of others, then empathy has been ignored like friction in a Physics 101 problem.

== Emergent Utilitarianism ==
[[wikipedia:Machiavellianism_(politics)|Machiavelli]] popularized the unfortunate saying that "[https://simple.wikipedia.org/wiki/The_end_justifies_the_means the end justifies the means]": the idea that moral flexibility can be justified in the pursuit of the greater good. This rationalization is how we get to the hypothetical situations described in the previous sections.

Due to the [[Time|temporal]] realities of existence, whatever means we choose to pursue our goals turns out to be the thing we experience much more of in [[life]] than the actual goal. For the loftiest utopian visions, those goals will not even be realized in one's lifetime. But if you live your life in pursuit of that goal, you will be living with whatever ''means'' that you utilize.

This is why there can never be a justification for [[war]] or [[genocide]]. Every evil act has the ''"greater good"'' at its heart and ''"the ends justify the means"'' as an excuse. The new saying should be ''"the means are the ends"'' to reflect the fact that we must live with the choices we make in pursuit of our goals.

== Extra People is Not Extra Happy ==
== Temptation and Delayed Gratification ==

Much of morality is centered around the concepts of avoiding [[temptation]] and its corollary [[delayed gratification]]. These topics are discussed in greater detail on those pages.

== The Psychology of Enjoyment ==
== The Psychology of Enjoyment ==
Your preferences are based on your experiences and associations. Which [[food]] you like depends more on your [[Emotions|state of mind]] when you first tried the [[food]] than it does on actual taste. Once you realize this, you can make the [[conscious]] decision to choose to enjoy these things. It takes practice, but it is absolutely possible to expand the range of what you find enjoyable.

Doing so expands the possibilities of [[happiness]]. There are so many situations you will find yourself in when there is no choice available to you that is [[Salience|salient]]. The fewer things you enjoy, the more likely this is. You can't always choose to do the things you enjoy, so the best solution is to enjoy [[everything]].
== Can a TV Show Explain It? ==

For those that prefer to get their [[ethics]] from sitcoms, it's basically the point system from [https://en.wikipedia.org/wiki/The_Good_Place The Good Place]. The show is actually a robust introductory course in [[ethics]] and is truly a ''good place'' to start learning about this subject if the idea of utilitarianism is new to you.

[[metaculture]] is [[Allegory|like]] an attempt to create ''WikiChidi''.

Revision as of 09:25, 6 February 2025

What if Doug Forcett wrote a Wiki?

The ethics of metaculture is based on Utilitarianism, which in its simplest definition seeks the greatest good for the greatest number of people, based on the consequences of a given act. Maximizing that which is good is an obvious goal for any system of ethics; less obvious is how we define good, how we quantify it, and which people are included in the equation.

“Society between equals can only exist on the understanding that the interests of all are to be regarded equally.” -John Stuart Mill

Utilitarianism is used as a shorthand since it is the more commonly recognized term. Those who seek philosophical precision will recognize the ethical perspective described here as Consequentialism, or more specifically a version of State Consequentialism, where ethics are considered in terms of how they would impact the happiness of a whole society if they were enforced through the legal system or via social norms. This resolves the dichotomy between consequentialism and Deontology, since both seek to find a rule-based system that can be "universal law," rather than considering isolated ethical dilemmas.

Of course there are more detailed distinctions that could be made, but the point here is to demonstrate an ethical system that is robust and subject to empirical evidence, so we can find more satisfying moral answers with science then we have been able to with philosophy or religion.

Many of the arguments made here and on other pages reflect a similar point of view to the one expressed in The Moral Landscape by Sam Harris. While most philosophers agree that this book leaves many relevant questions unresolved, it is a popular book that makes many of the points made here more eloquently.

The wiki has the advantage of providing unlimited space to address all of the popular counterarguments to utilitarianism. Additional critique and response based on the arguments and criticisms of The Moral Landscape can be found under Objective Morality.

Philosophy Crash Course - Utilitarianism

The following sections address the many problems and counter-arguments to basic utilitarianism that are resolved by state consequentialism and the universal in-group.

What is Good?

Good is the word we use to describe the things that benefit our survival. Even those that don't believe in evolution still universally share a concept of what constitutes good that is exclusively pro-survival. Evolution is what instills your beliefs in you, regardless of whether you believe in evolution.

The mechanism that evolution has instilled in our brains in order to tell us what is good and what is bad are our emotions. We feel pleasure when we eat or have sex, we feel pain when we or those we love are injured. When sustained over time, a good ratio of pleasure to pain results in happiness and well-being.

The purpose of pleasure is to encourage your brain to do more of whatever it was doing that led to the pleasure, and the purpose of pain is to discourage.

It can therefore be concluded logically that evolution has formed our brains to direct our bodies to seek greater pleasure while minimizing pain, which happens to be the goal of Utilitarianism. It empirically matches the actual ethical calculations our brains make when our neurons decide on a course of action.

Happiness is therefore the measure of good. Beyond just pleasure or joy, which can be fleeting and result in greater harms when pursued shortsightedly, happiness and well-being represent long-term, sustained positive emotional states when referred to in this wiki, and have empirically been shown to be our goal in life.

You Can't Measure Happiness

A common objection is to dismiss the idea that happiness can be measured. But there are a number of ways to measure happiness and while they may not perfectly capture the state of mind of any individual, they do give good aggregate results that say whether a large population is able to achieve satisfaction in life.

Therefore, an ethical utilitarian society would seek to constantly increase the measured aggregate happiness in its people and should actively pursue this goal directly, rather than through proxy measurements such as economic activity and GDP, or ideological goals that are disconnected from any metrics.

This is discussed further on the happiness and well-being page.

What If Hurting This Guy Makes 2 People Happy?

This is the most common hypothetical counterexample that entirely too many people see as disproving utilitarianism. However, like many moral hypotheticals, it posits a situation that is fundamentally impossible given the way our brains have evolved and how societies work.

Humans are empathetic, social creatures. There is no way for us to harm another human without emotional repercussions on ourselves. Only a psychopath can blithely torture or kill another human and not feel their pain. And, being a psychopath, they're probably not feeling very happy either. So that's no way to go.

Then you must also consider the ramifications of implementing this hypothetical within the legal framework of a society. Let's say you think that killing and taking the organs from one drifter in order to save five parents of young children, preventing those children from being orphaned, yields a net positive in happiness. The drifter has no family or friends and won't be missed by anyone, while the parents who were saved will go on to have fulfilling lives, and their kids won't end up in foster care. Within this superficial closed system, it appears that utilitarianism would support this choice.

However, if it is right for one person to kill a drifter for their organs if it saves a few lives, then it has to be right for everyone to do it. And what does a society look like where it is legal to harvest the organs of the unhoused? Not like one that anyone would actually want to live in, because it wouldn't actually make us happy: it would be morbid, fearful, and lawless.

Any society that benefits from harming others, through forced labor, unfair taxation, grift, rent-seeking, and the like, will never be as happy as one that treats all people fairly and minimizes the burden of obligations that contradict a person's desires and self-determination.

While the hypotheticals posit more people winning than losing, historically these unfair arrangements have always been pyramid-shaped, with the beneficiaries at the top and the people being harmed at the bottom. And a billionaire on ecstasy at a yacht party is not sufficiently happy to mathematically make up for the desperation and toil of the thousands of workers who make that lifestyle possible.
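The arithmetic behind that last claim can be made concrete. This is a toy sketch, assuming (as is standard in welfare economics) that happiness rises with resources only with diminishing returns, modeled here with a logarithm; the population and dollar figures are invented for illustration.

```python
# Toy model: concave (logarithmic) utility means one person's extreme
# wealth cannot offset many people's deprivation. Numbers are made up.
import math

def happiness(resources):
    # Diminishing returns: each extra dollar adds less happiness.
    return math.log(resources)

workers = [1_000] * 10_000        # 10,000 workers with very little
billionaire = [1_000_000_000]     # one person holding the rest

unequal = sum(happiness(r) for r in workers + billionaire)

# The same total resources split evenly across everyone.
total = sum(workers + billionaire)
equal = happiness(total / 10_001) * 10_001

print(unequal < equal)  # True: the even split yields more total happiness
```

Under any concave utility function the same qualitative result holds; the pyramid shape loses the happiness math.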

== Whose Happiness Counts? ==

People primarily want to benefit their in-group. Not necessarily at the expense of others, but usually in favor of them. The closer the in-group, the more we favor it, with family taking priority over community, nation, and larger cultures. This prioritization makes sense in terms of our personal life choices as well as from an evolutionary perspective. But when considering universal ethics, you need to use a universal in-group. In other words, if you don't consider the happiness of everyone, your system is inherently unjust.

"Forward-thinking" crypto-bros think that they can cleverly circumvent the need to care about the living by contributing to the theoretical happiness of future people. This is an ethical cop-out used to justify selfish behavior, such as using Effective Altruism as an excuse to steal billions of dollars in a crypto pyramid scheme. Someday you plan to use it for charity, so the more money you get now, the more good you can do later!

When calculating the utilitarian benefit of any action, the tangible effects on the living should be prioritized significantly above the needs of any theoretical humans, or ones you may theoretically help in the future once you get rich. While we can't ignore our impact on future generations, we cannot prioritize their needs over our own either. Part of our happiness depends on knowing we are leaving a better world for our children, but a better world is always one that helps the living first. Just like in an airplane, put on your own oxygen mask first before helping your children with theirs. Especially if they haven't been born yet!

There is also no justification for harming the living in order to benefit the theoretical. To do so only creates a world in which such harm is acceptable, and that will always be a drag on happiness. The living may choose to make sacrifices for their children or future generations they may never know, but they must do so freely and without coercion.

Prioritizing the needs of the living over the theoretical is also what makes abortion an ethical choice whenever it is desired. Quality of Life is the most important pursuit.

== A Model for Utilitarian Rights ==

One of the main arguments against utilitarianism is that it doesn't offer a framework for human rights, since theoretically anyone's rights can be overridden if the happiness math justifies it. The legal systems in most countries use a rights-based model, not a utilitarian one, which is cited as evidence. Once again, state consequentialism comes to the rescue. When you compare the aggregate happiness of states that provide universal human rights to that of states that don't, the evidence overwhelmingly supports the idea that those rights are universally good.

The fact that it is hard to base a legal system on complex moral calculations that have only recently been quantified does not mean that those calculations aren't the most accurate way to make informed ethical decisions that consider the relevant evidence.

Just because it is theoretically possible for the utilitarian math to yield a repugnant conclusion does not mean it will in practice; the actual mathematics of human emotion avoids those outcomes in the real world. If your hypothetical assumes people are just going to be cool with hurting or violating the rights of others, then empathy has been ignored, like friction in a Physics 101 problem.

== Emergent Utilitarianism ==

Emergent Utilitarianism describes the way that utilitarian ethical principles are an emergent property of a brain built on operant-conditioning-based neural networks. Since neural networks are trained using reward and punishment, and they are the drivers of behavior, they will naturally seek the greatest amount of pleasure possible. Add in some mirror neurons to create empathy, and now you want the greatest good for the greatest number. It is not simply one of many possible ethical systems; it is the one built into our brains.
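The operant-conditioning claim above can be illustrated with a toy reward-learning loop: an agent whose action values are nudged toward received reward ends up preferring the action that feels better. The actions, reward values, and learning rate here are arbitrary stand-ins (with empathy modeled crudely as cruelty feeling bad), not a model of real neurons.

```python
# Toy operant conditioning: action-value estimates drift toward the
# reward each action produces, so the rewarding behavior wins out.
import random

random.seed(0)
values = {"kind": 0.0, "cruel": 0.0}     # learned preferences
rewards = {"kind": 1.0, "cruel": -1.0}   # empathy: harming others feels bad

for _ in range(200):
    action = random.choice(list(values))             # try both behaviors
    # Nudge the estimate toward the reward actually received.
    values[action] += 0.1 * (rewards[action] - values[action])

print(max(values, key=values.get))  # "kind" ends up preferred
```

The same reward-maximizing dynamic, scaled up and wired through empathy, is what the section calls ethics emerging from the architecture rather than being imposed on it.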

== The Means Are the Ends ==

Machiavelli popularized the unfortunate saying "the ends justify the means": the idea that you can justify moral flexibility in the pursuit of the greater good. This rationalization is how we get to the hypothetical situations described in the previous sections.

Due to the temporal realities of existence, the means we choose to pursue our goals turn out to be what we experience far more of in life than the goals themselves. The loftiest utopian goals will not even be realized in one's lifetime. But if you live your life in pursuit of such a goal, you will be living with whatever means you utilize.

This is why there can never be a justification for war or genocide. Every evil act has the "greater good" at the heart of it and "the ends justify the means" as an excuse. The new saying should be "the means are the ends," to reflect the fact that we must live with the choices we make in pursuit of our goals.

== Extra People is Not Extra Happy ==

Some math wizards point out that if you are adding up all the happiness in society, then adding to the population would make number go higher. This is not how Quality of Life works and you know it.
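The contrast between "make number go higher" and Quality of Life is just the difference between total and average happiness. This toy comparison uses invented 0-10 scores to show that adding low-happiness people raises the total while lowering the average.

```python
# Total vs. average happiness: adding unhappy people inflates the sum
# but drags down quality of life. Scores are arbitrary illustrations.
existing = [8, 8, 8, 8]
added = [3, 3, 3, 3]  # new people with low quality of life

before_total = sum(existing)
before_avg = before_total / len(existing)

grown = existing + added
after_total = sum(grown)
after_avg = after_total / len(grown)

print(after_total > before_total)  # True: "number go higher"
print(after_avg < before_avg)      # True: but quality of life fell
```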

== There Are No Infinities ==

The notion of a moral trump card is equivalent to having an infinite moral value on some particular action, which leads to bad math and undesirable outcomes. Read More.
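Why an infinite moral weight produces bad math can be shown directly with floating-point infinity: once any term is infinite, finite differences stop mattering, and weighing two "trump cards" against each other is undefined.

```python
# An infinite moral value swallows every finite consideration,
# and two infinities cannot be meaningfully compared.
inf = float("inf")

print(inf + 1_000_000 == inf)  # True: no finite good can change the total
print(inf - inf)               # nan: comparing two trump cards is undefined
```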

== Temptation and Delayed Gratification ==

Much of morality is centered around the concept of avoiding [[temptation]] and its corollary, [[delayed gratification]]. These topics are discussed in greater detail on those pages.

== The Psychology of Enjoyment ==

Your preferences are based on your experiences and associations. Which foods you like depends more on your state of mind when you first tried them than on their actual taste. Once you realize this, you can make the conscious decision to enjoy things. It takes practice, but it is absolutely possible to expand the range of what you find enjoyable.

Doing so expands the possibilities for happiness. You will often find yourself in situations where no appealing choice is available, and the fewer things you enjoy, the more often this happens. You can't always choose to do the things you enjoy, so the best solution is to enjoy everything.

The [[Variety]] page expands on this.

== Can a TV Show Explain It? ==

For those who prefer to get their ethics from sitcoms, it's basically the point system from ''The Good Place''. The show is actually a robust introductory course in ethics, and truly a good place to start learning about this subject if the idea of utilitarianism is new to you.

metaculture is like an attempt to create WikiChidi.

The Good Place: How Afterlife Points are Assigned

== Can You Sing a Song About It? ==

Apparently you can. If this wiki has proven one thing, it's that you can sing a song about anything.

Spoon - Utilitarian