I thought I would try and break my rather extended absence from posting by offering up a few posts that are shorter than normal. This might get me to post my thoughts more often! As always, what I post is continually up for revision and I look forward to getting some input from readers.
Attempting to undermine Strawson’s naturalist turn, Paul Russell saddles him with the “naturalist dilemma” – that if Strawson adopts type-naturalism he has failed to adequately address pessimist concerns, but if he employs token-naturalism he endorses an unreasonable form of naturalism (Russell, 151). I aim to show that Russell is not very charitable in his reading of possible responses on Strawson’s part, given that the token-naturalism Russell allows does little to conform to the shift in the language of excuse and exemption Strawson makes. First I’ll recapitulate the distinction Russell makes between type- and token- versions of both pessimism and naturalism (as far as it goes), with a brief discussion of why he sees this as a dilemma for Strawson’s position. From there I will outline why I think Russell fails to understand how Strawson’s notions of excusing and exemption allow him to provide a more robust response than the one Russell relegates him to. This is because the bifurcation of Strawson’s response to Pessimists into “rationalistic” and “naturalistic” components that are at odds with one another is simply a misunderstanding of Strawson’s project.
In giving an account of utilitarianism, J.S. Mill seeks to identify what type of proof is sufficient to accept the utilitarian principle that happiness is our only desirable end. The proof Mill offers is particular to first principles and suffers from several weaknesses. I shall first outline Mill’s proof, in two parts, of the utilitarian principle. From there I shall introduce G.E. Moore’s criticisms that Mill commits the naturalistic fallacy and conflates means with ends. Following this I will explicate Henry Sidgwick’s attack on the notion that individual pursuits of happiness amount to any exhortation to pursue the general happiness, along with a general argument against psychological hedonism. Though partial defenses exist against each criticism, overall they lack sufficient force to negate Sidgwick and Moore’s concerns.
Proof for First Principles
Describing the first principle upon which utilitarianism stands, Mill writes that “happiness is desirable, and the only thing desirable, as an end; all other things being only desirable as means to that end.”1 According to him, such first principles, those that undergird our knowledge, require a particular type of proof, and utilitarianism’s is no different. Mill’s proof of utilitarianism, then, is twofold: first, that we can know what is in itself desirable, and second that all that we desire is happiness, with all other seeming ends being but means to achieving the general happiness. It is a quick move, and admits more of analysis than exposition. The first aspect of Mill’s proof centers on an analogy with apprehending phenomena through our senses.
Beleaguered by criticisms that Jeremy Bentham’s utilitarianism debases human nature by setting mere pleasure as man’s greatest good, J.S. Mill proposes a qualitative distinction between higher and lower pleasures. However, Mill’s proposed change opens utilitarianism up to criticisms not present in Bentham’s formulation, and on these grounds ought to be rejected in favor of the original formulation (whatever its worth). I’ll begin by introducing the impetus for Mill’s proposed changes to utilitarianism and his panel-based test for higher versus lower pleasures. From there I’ll discuss key objections from Henry Sidgwick and G.E. Moore, who each argue that the nature of pleasure does not allow for nonquantitative distinctions unless they refer to some other property. Following these concerns, I’d like to introduce two additional problems for Mill’s proposal, viz. that his test for competing pleasures is plagued by ‘jury-stacking’ and that his proposed lexical scale for pleasure does not apply equally to pain.
J.S. Mill: Quality over Quantity
In setting pleasure1 as man’s highest aspiration, Bentham’s formulation of utilitarianism has been accused of debasing humans to the level of beasts. Bentham and Mill roundly reject this notion, arguing instead that the pleasures that sate beasts are not capable of sating man due to his higher faculties. Furthermore, such a view is not at all inconsistent with utilitarianism.2 Such anthropocentric pleasures, by Mill’s account, have previously been ascribed greater value due to the ease and safety with which we can promote and maintain them as opposed to physical pleasures.3
Mill argues further that an equally consistent, and preferable, claim can be made by utilitarians, viz. “some kinds of pleasure are more desirable and more valuable than others. It would be absurd that, while in estimating all other things quality is considered as well as quantity, the estimation of pleasure should be supposed to depend on quantity alone.”4 Mill states that the only possible method to test which desires are higher and which lower is a panel-based evaluation by competent judges.
Along with (a) a fundamental moral conviction that I ought to sacrifice my own happiness, if by so doing I can increase the happiness of others to a greater extent than I diminish my own, I find also (b) a conviction – which it would be paradoxical to call ‘moral,’ but which is none the less fundamental – that it would be irrational to sacrifice any portion of my own happiness unless the sacrifice is to be somehow at some time compensated by an equivalent addition to my own happiness. I find these fundamental convictions in my own thought with as much clearness and certainty as the process of introspective reflection can give: I find also a preponderant assent to them – at least implicit – in the common sense of mankind: and I find, on the whole, confirmation of my view in the history of ethical thought in England.
-Henry Sidgwick, “Some Fundamental Ethical Controversies” in Mind, 1889.
Guest Post: Mattheus von Guttenberg on an Exploration of the Validity and Necessary Content of Transcendental Argumentation
The following guest post is from Mattheus von Guttenberg, who is currently studying history and economics at Flagler College in St. Augustine, Florida and writes for the blog Economic Thought. Click here to get in touch with Mattheus!
Charles Taylor, in his seminal work Sources of the Self, puts forward an argument on the relationship between identity and moral truth using a variety of methods, but most notably that of the transcendental argument. Taylor, belonging to what might roughly be called a Neo-Aristotelian camp of moral philosophers, argues that we can derive moral truth by virtue of a moral ontology intrinsic to us as perceptive and evaluative subjects. While the transcendental argument Taylor employs does not appear to us readily and clearly, it is nonetheless the entire backbone of his argument, without which we would have no reason to accept his conclusions. D.P. Baker, of the University of Natal in South Africa, has written cogently on this topic. Because it carries such persuasive potential, I feel a devoted exploration of Taylor’s transcendental argument, as well as Baker’s contribution to the discussion, is in order. It is my opinion that Taylor does not successfully prove his claim about morality, as the content of his argument is inappropriate to the form in which he carries it.
I recently learned that John Hick has passed away at the age of 90. I have been holding on to this piece for quite some time, as I feel I haven’t quite said what I want to say, or am not saying it quite as succinctly as I would like. Regardless, I would like to post this in memory of John Hick, with whom I have almost always disagreed but always enjoyed reading nevertheless. As always, please feel free to offer your critiques and comments, especially since I view this as a fairly rough piece.
John Hick begins his explication of the Irenaean Theodicy by briefly summarizing and simultaneously discounting the Augustinian approach. I shall not spend much more time than Hick does in defining the Augustinian approach, and the only reason I do so at all is to offer a companion against which Hick’s Irenaean Theodicy might be compared as divergent from traditional Christian theodicy. In short, the Augustinian model follows a traditional Christian viewpoint of creation and the fall of man. It postulates that men (and angels) were created as perfect, free, and finite beings who fell from perfection as a consequence of their misuse of freedom. Hick states that, “the Augustinian approach…hinges upon the idea of the fall as the origin of moral evil, which has in turn brought about the almost universal carnage of nature.” An integral piece of Augustinian Theodicy inherent in thinkers all the way from St. Augustine to Alvin Plantinga is the free-will defense against the Problem of Evil. This defense chiefly rests upon the idea that God’s creation was entirely perfect and yet man and angels chose to sin of their own free choice, which resulted in the evil that we now see present in the fallen world.
Welcome to the January 30th, 2012 edition of the Philosophers’ Carnival! The goal of the Carnival is to highlight the best and most engaging blog posts in the area of philosophy – we have a lot of great submissions, so let’s dig in.
Clayton over at Think Tonk brings us a pithy post on a lack of evidence for evidentialism. Clayton argues that there exist instances wherein a person could in good faith believe she has good reason to believe that she is warranted in believing p, all the while lacking sufficient evidence for believing p. There is also a valuable exchange in the comments section of the post. An excerpt from the main post:
Here, now, is my anti-evidentialist argument. William has sufficient justification to believe that he permissibly believes that he permissibly believes God exists. William, however, does not have sufficient evidence to believe that God exists. So, according to [the positive accessibility thesis], it is permissible to believe without sufficient evidence. According to the evidentialist, it is never permissible to believe without sufficient evidence. Thus, the evidentialist view is mistaken.
Following in the vein of beliefs, Jim over at Agent Intellect presents an explication of the differences between traditional Global Skepticism à la Descartes and Plantinga’s Evolutionary Argument against Naturalism. While he admits there is a measure of truth in likening the EAAN to Global Skepticism, he claims they differ in substantial ways:
Plantinga’s EAAN is significantly different from classical global skepticism. First, we do not have to have a reason for a belief if it is properly basic, and such a belief can constitute knowledge even if we don’t know that we know it. We are justified, or our beliefs are warranted, up until the point where we have a reason for thinking them to be false. The EAAN provides just such a reason: if naturalism is true, then it is improbable or inscrutable that any given belief would be true. After this, the EAAN has the same effect as the more traditional global skeptical arguments: any reason you can give for a particular belief is itself subject to the EAAN and is therefore not trustworthy. There is no stopping the rot once it’s started. Indeed, part of the genius of Plantinga’s argument is that it amounts to a global skeptical argument that arises from within externalism.
Injecting a little bit of Hume into the mix, Maryann from the Examiner discusses the is-ought distinction, arguing that for an ought statement to be true there must exist some being to which that statement corresponds, though that correspondence does not itself justify the statement. An excerpt from the piece:
Translating from epistemology back over to ethics, there needs to be a real ought in order for there to be moral knowledge, but 1) the real ought is not justified by its correspondence to reality—that would be saying its correspondence justifies its correspondence (begging in a circle) and 2) a particular ought is not made to correspond by its justification—that would be like saying that the act of believing made something real to believe in (also begging in a circle). No, there must be ‘both’ justification ‘and’ correspondence. If one or both is lacking (by depending on the other, or for some other reason), knowledge is lacking.
Occasional Philosophy has an interesting re-imagining of Tegmark’s Quantum Suicide thought experiment, which traditionally limits hypothetical conclusions to the experimenter only. Instead, the author proposes the Quantum Homicide thought experiment, which allegedly allows outside observers to draw conclusions about many-worlds vs. Copenhagen interpretations of quantum mechanics. A snippet of the proposed tweak:
The Quantum Homicide thought experiment proposes a modification to the gun used in the experiment. In this case, if the particle is measured as spin up then the gun fires and kills the experimenter, just as before (in fact, the killing of the experimenter isn’t necessary for the experiment to work but I prefer the aesthetics of the continuity between the quantum suicide and quantum homicide cases). On the other hand, if the particle is measured as spin down then the gun fires a time travel ray, sending the experimenter one day into the past.
Noah Greenstein, the eponymous curator of Blog of Noah Greenstein, discusses the role emotional states play in hindering our reasoning. Based on this, he introduces the Future Rationality Cone, which attempts to include emotion and thought in predicting the relative rationality of future beliefs by way of their distance, as it were, from other beliefs:
Considering a person’s consciousness at some point, we can map what we consider rational and irrational based upon the potential mood and thought changes. Any possible future belief (a combination of thought and mood) will be a combination of changes in prior moods and thoughts. Beliefs that require too great a change in both thought or mood may be outside the realm of rationality for a person, while beliefs that require little effort will fall within the realm of rationality. Hence, the rationality cone.
Lewis from the group blog The Mod Squad tackles Leibniz’s views on the worth of “blind thought” i.e. cognition concerning signifiers absent an apparent regard for the signified, offering up a contrast between Locke, Berkeley, and Hume concerning blind thought:
This discussion, in which Leibniz first introduces blind thought, occurs in the midst of Leibniz’s commentary on Locke’s views on power and freedom. Specifically, it appears that Leibniz introduces the notion in response to Locke’s view that the main determinant of the will is not the prospect of a greater good, but instead, some strong present unease…As suggested by the initial illustration of algebraic reasoning, Leibniz’s stance on blind thought is not that it is always problematic. In a later discussion, relating to the purpose and origins of language, Leibniz suggests that blind thought can be of great utility.
Switching gears ever so slightly, Greg at Cognitive Philosophy expounds on the potential threat to ethics posed by genetic modification (given a biologically contingent definition of ethics).
Changing the types of biological organisms that we are could conceivably change what is or is not right to do in any particular situation. It might change the very people that we should be striving to be. Yes, it’s unlikely we’ll change ourselves to the point where harming others is a good thing (though not impossible), but to what degree our systems of ethics will have to change is not something we can predict in advance. Now, let me be clear. I’m not making the naturalistic fallacy (or at least I’m not trying to). My point is that facts about our biology and psychology are going to *constrain* our ethical theories, not wholly *determine* them. Ethics is tricky business. Philosophers have been arguing about it for thousands of years, and while we all have some intuitive notions of what is good and what is bad, what is right and what is wrong, we’re certainly not anywhere close to having all the answers. Changing who we are as human beings will cause us to have to rethink some problematic notions.
Richard from Philosophy, et cetera discusses what he views as major lacunae in a recent argument against immigration that attempts to use environmental concerns to justify its position. He argues that general increases in human welfare outweigh any alleged damage to American wages, and similarly that, if anything, mass immigration highlights rather than hides fundamental issues in countries facing an exodus:
Stepping back: If we want to get the most welfare “bang” for our ecological “buck”, barring the global poor access to economic opportunities is surely not the way to go. (It’s less extreme than outright killing them, but I think ultimately misguided for fundamentally similar reasons.) We should strive for improved efficiency in less humanly damaging ways: emissions taxes, reduced animal (esp. cattle) farming, increased urban density / efficient transit, etc. Not to mention investing in scientific research to uncover new solutions — investments which are more easily made by a wealthier, better educated populace.
Assorted Topics: Logic, and our lack of Kants
On the Logic side of philosophy, Tristan at Sprachlogic serves up a new notation for propositional modal operators. He seeks to answer the following by way of introducing a new notational method:
It is common to see the following list of four modal operators presented, sometimes as though it were exhaustive: possibility, necessity, contingency and impossibility. But reflect again that, of these four modalities, possibility is an odd one out, since it is non-commital on truth-value. Also, note that systems have been developed where other operators, e.g. one for non-contingency, are taken as primitive. This can give rise to an uneasy, lost feeling. Are the usual four modal operators just a hodge-podge? What modal operators are there (could there be)? Is there a systematic way of producing them all? And is there then a systematic way of determining logical relations between them?
Concerning philosophers themselves, Eric at Splintered Mind discusses the charge that specialization in contemporary philosophy signals the demise of interdisciplinary giants, using Kant as an example. An excerpt:
Consider by century: It seems plausible that no philosopher of at least the past 60 years has achieved the kind of huge, broad impact of Locke, Hume, or Kant. Lewis, Quine, Rawls, and Foucault had huge impacts in clusters of areas but not across as broad a range of areas. Others like McDowell and Rorty have had substantial impact in a broad range of areas but not impact of near-Kantian magnitude. Going back another several decades we get perhaps some near misses, including Wittgenstein, Russell, Heidegger, and Nietzsche, who worked ambitiously in a wide range of areas but whose impact across that range was uneven. Going back two centuries brings in Hegel, Mill, Marx, and Comte about whom historical judgment seems to be highly spatiotemporally variable. In contrast, Locke, Hume, and Kant span a bit over a century between them. But still, three within about hundred years followed by a 200 year break with some near misses isn’t really anomalous if we’re comparing a peak against an ordinary run.
-I regret to say that Common Sense Atheism is closing its digital doors, as it were. The site will remain as an archive, and the site’s author, Luke Muehlhauser, will be continuing his work in the area of artificial intelligence.
-Peter Ludlow discusses the implications of a hypothetical dissolution of the APA, courtesy of the Leiter Report.
-Gary Gutting, frequent contributor to the New York Times, discusses the purpose of philosophy in our current climate. I highlight this Stone article in particular because I don’t imagine there is a single reader who has not had to brave such questioning!
-Neal Tognazzini at Flickers of Freedom celebrates the 50th anniversary of P.F. Strawson’s Freedom and Resentment. The College of William & Mary will be hosting a two-day conference examining themes across his work.
-Daniel Dennett has been awarded the Erasmus Prize 2012. The 2012 award celebrates those who have promoted “the cultural meaning of the natural sciences.”
-Matthew Mullins at Prosblogion posts on the John Templeton Foundation’s open online submission cycle for funding inquiries. The areas of focus are philosophy and theology.