According to a recent study, decisions reached while thinking in a foreign (i.e., non-native) language are more likely to be rational.
From the abstract:
Using a foreign language reduces decision-making biases. Four experiments show that the framing effect disappears when choices are presented in a foreign tongue. Whereas people were risk averse for gains and risk seeking for losses when choices were presented in their native tongue, they were not influenced by this framing manipulation in a foreign language. Two additional experiments show that using a foreign language reduces loss aversion, increasing the acceptance of both hypothetical and real bets with positive expected value. We propose that these effects arise because a foreign language provides greater cognitive and emotional distance than a native tongue does.
For those unaware, the framing effect is a cognitive bias in psychology wherein a person’s choice or response to a question changes depending on how the same question is worded. This is often the case when one framing highlights losses and another highlights gains. This article over at Wired describes the above study as well as an experiment that exemplifies the framing effect.
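To make the study’s notion of “bets with positive expected value” concrete, here is a minimal sketch with hypothetical numbers (the paper’s actual stakes are not quoted above): a coin flip that wins $12 on heads and loses $10 on tails is worth taking on expectation, yet loss-averse participants typically decline it.

```python
def expected_value(outcomes):
    """Expected value of a gamble given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical bet: 50% chance to win $12, 50% chance to lose $10.
coin_flip_bet = [(0.5, 12.0), (0.5, -10.0)]

ev = expected_value(coin_flip_bet)
print(ev)  # 1.0 -- positive, so accepting it is the "rational" choice
```

The study’s claim, then, is that people reasoning in a foreign language accept such positive-expected-value bets more often, because the emotional sting of the potential loss is blunted.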
Check out this post at Mind Hacks that discusses a new group which will be attempting to replicate a slew of cognitive science studies from 2008. Below is an excerpt from the Chronicle of Higher Education article the post is reporting on:
If you’re a psychologist, the news has to make you a little nervous—particularly if you’re a psychologist who published an article in 2008 in any of these three journals: Psychological Science, the Journal of Personality and Social Psychology, or the Journal of Experimental Psychology: Learning, Memory, and Cognition.
Because, if you did, someone is going to check your work. A group of researchers have already begun what they’ve dubbed the Reproducibility Project, which aims to replicate every study from those three journals for that one year. The project is part of Open Science Framework, a group interested in scientific values, and its stated mission is to “estimate the reproducibility of a sample of studies from the scientific literature.” This is a more polite way of saying “We want to see how much of what gets published turns out to be bunk.”
Heard of a study whose findings are now in question? Leave a link in the comment section!
In Part I of this two-part post I introduced an extended dialogue between Timothy O’Connor and Derk Pereboom that spans physicalism, reductionism, agency theory, and quantum physics. O’Connor posits a purely physicalist theory of agency based on the formation of macroproperties that are instantiated in sets of microproperties once those microproperties reach a certain threshold of complexity. Once this level is reached, an emergent macroproperty, constituted as an agent causal power, can exert downward causal influence over its microproperties without being subject to upward causation or determination by them. Pereboom takes O’Connor to task for failing to account for the influence of distal causes, which nevertheless determine the behavior of the agent causal power; to counter the invocation of an emergent property, Pereboom alleges that even in a statistical model, rather than a deterministic one, we are still left with distal causes as the ultimate originators of action. In the comment section of a previous post, Aaron Kenna rightly makes mention of this, viz. that statistical, indeterministic, and deterministic worldviews all fail to provide the freedom required by agency theories and moral responsibility. In a future post I shall discuss this point further, using Strawson’s “basic argument” as an example. But for now, let’s turn to four points of analysis on the conversation between O’Connor and Pereboom to see what we can make of it. Read more…
I have recently come to believe that the crux of disagreements in contemporary discussions on physicalism and agency is the seemingly impassable divide between reductionist and non-reductionist positions. Perhaps one of the clearest examples of this disconnect can be seen in a dialogue between Derk Pereboom and Timothy O’Connor regarding the plausibility of a certain type of physicalist agency theory. The conversation is multi-faceted and invokes emergent agent causal powers (which I have mentioned here before, though only in passing) as well as quantum indeterminism. In this post I would like to introduce the reduction/non-reduction divide by unfolding the conversation between Pereboom and O’Connor. Part I will be heavily exegetical, but in Part II I offer up four points of analysis on the dialogue at large and the theories therein. Read more…
I have previously written on some common misconceptions regarding determinism and its implications, spurred by a post over at what is now Reasons for God, a Christian apologist blog. While updating a redirected hyperlink, I noticed a post that had previously escaped my attention, entitled “Atheism and the Denial of Freedom,” which posits that atheists, due to the nature of their beliefs, cannot in good faith (no pun intended) believe in free will. In this post I would like to once again correct a specious argument that unfairly saddles atheists with a belief in determinism.
I should first like to take to task the manner in which the author stacks his conclusions. I will ignore the particular definition of atheism the author utilizes, as it does not truly matter in this instance, and instead highlight the problematic nature of the assumptions he makes. This argument demonstrates not only the sophomoric approach applied, but also a failure to understand the robust discussion concerning the metaphysics of the universe that continues to this day in professional philosophy.
Welcome to the January 30th, 2012 edition of the Philosophers’ Carnival! The goal of the Carnival is to highlight the best and most engaging blog posts in the area of philosophy – we have a lot of great submissions, so let’s dig in.
Clayton over at Think Tonk brings us a pithy post on a lack of evidence for evidentialism. Clayton argues that there exist instances wherein a person could in good faith believe she has good reason to believe that she is warranted in believing p, all the while lacking sufficient evidence for believing p. There is also a valuable exchange in the comments section of the post. An excerpt from the main post:
Here, now, is my anti-evidentialist argument. William has sufficient justification to believe that he permissibly believes that he permissibly believes God exists. William, however, does not have sufficient evidence to believe that God exists. So, according to [the positive accessibility thesis], it is permissible to believe without sufficient evidence. According to the evidentialist, it is never permissible to believe without sufficient evidence. Thus, the evidentialist view is mistaken.
Following in the vein of beliefs, Jim over at Agent Intellect presents an explication of the differences between traditional Global Skepticism à la Descartes and Plantinga’s Evolutionary Argument against Naturalism. While he admits there is a measure of truth in likening the EAAN to Global Skepticism, he claims they differ in substantial ways:
Plantinga’s EAAN is significantly different from classical global skepticism. First, we do not have to have a reason for a belief if it is properly basic, and such a belief can constitute knowledge even if we don’t know that we know it. We are justified, or our beliefs are warranted, up until the point where we have a reason for thinking them to be false. The EAAN provides just such a reason: if naturalism is true, then it is improbable or inscrutable that any given belief would be true. After this, the EAAN has the same effect as the more traditional global skeptical arguments: any reason you can give for a particular belief is itself subject to the EAAN and is therefore not trustworthy. There is no stopping the rot once it’s started. Indeed, part of the genius of Plantinga’s argument is that it amounts to a global skeptical argument that arises from within externalism.
Injecting a little bit of Hume into the mix, Maryann from the Examiner discusses the is-ought distinction, arguing that for an ought statement to be true there must exist some being to which that statement corresponds and which it describes, though that correspondence does not itself justify the statement. An excerpt from the piece:
Translating from epistemology back over to ethics, there needs to be a real ought in order for there to be moral knowledge, but 1) the real ought is not justified by its correspondence to reality—that would be saying its correspondence justifies its correspondence (begging in a circle) and 2) a particular ought is not made to correspond by its justification—that would be like saying that the act of believing made something real to believe in (also begging in a circle). No, there must be ‘both’ justification ‘and’ correspondence. If one or both is lacking (by depending on the other, or for some other reason), knowledge is lacking.
Occasional Philosophy has an interesting re-imagining of Tegmark’s Quantum Suicide thought experiment, which traditionally limits hypothetical conclusions to the experimenter only. Instead, the author proposes the Quantum Homicide thought experiment, which allegedly allows outside observers to draw conclusions about many-worlds vs. Copenhagen interpretations of quantum mechanics. A snippet of the proposed tweak:
The Quantum Homicide thought experiment proposes a modification to the gun used in the experiment. In this case, if the particle is measured as spin up then the gun fires and kills the experimenter, just as before (in fact, the killing of the experimenter isn’t necessary for the experiment to work but I prefer the aesthetics of the continuity between the quantum suicide and quantum homicide cases). On the other hand, if the particle is measured as spin down then the gun fires a time travel ray, sending the experimenter one day into the past.
Noah Greenstein, the eponymous curator of Blog of Noah Greenstein, discusses the role emotional states play in hindering our reasoning. Based on this, he introduces the Future Rationality Cone, which attempts to include emotion and thought in predicting the relative rationality of future beliefs by way of their distance, as it were, from other beliefs:
Considering a person’s consciousness at some point, we can map what we consider rational and irrational based upon the potential mood and thought changes. Any possible future belief (a combination of thought and mood) will be a combination of changes in prior moods and thoughts. Beliefs that require too great a change in both thought or mood may be outside the realm of rationality for a person, while beliefs that require little effort will fall within the realm of rationality. Hence, the rationality cone.
Lewis from the group blog The Mod Squad tackles Leibniz’s views on the worth of “blind thought,” i.e., cognition concerning signifiers absent any apparent regard for the signified, offering up a contrast among Locke, Berkeley, and Hume concerning blind thought:
This discussion, in which Leibniz first introduces blind thought, occurs in the midst of Leibniz’s commentary on Locke’s views on power and freedom. Specifically, it appears that Leibniz introduces the notion in response to Locke’s view that the main determinant of the will is not the prospect of a greater good, but instead, some strong present unease…As suggested by the initial illustration of algebraic reasoning, Leibniz’s stance on blind thought is not that it is always problematic. In a later discussion, relating to the purpose and origins of language, Leibniz suggests that blind thought can be of great utility.
Switching gears ever so slightly, Greg at Cognitive Philosophy expounds on the potential threat to ethics posed by genetic modification (given a biologically contingent definition of ethics).
Changing the types of biological organisms that we are could conceivably change what is or is not right to do in any particular situation. It might change the very people that we should be striving to be. Yes, it’s unlikely we’ll change ourselves to the point where harming others is a good thing (though not impossible), but to what degree our systems of ethics will have to change is not something we can predict in advance. Now, let me be clear. I’m not making the naturalistic fallacy (or at least I’m not trying to). My point is that facts about our biology and psychology are going to *constrain* our ethical theories, not wholly *determine* them. Ethics is tricky business. Philosophers have been arguing about it for thousands of years, and while we all have some intuitive notions of what is good and what is bad, what is right and what is wrong, we’re certainly not anywhere close to having all the answers. Changing who we are as human beings will cause us to have to rethink some problematic notions.
Richard from Philosophy, et cetera discusses what he views as major lacunas in a recent argument against immigration that attempts to use environmental concerns to justify its position. He argues that general increases in human welfare outweigh any alleged damage to American wages, and similarly that if anything, mass immigration highlights rather than hides fundamental issues in countries facing an exodus:
Stepping back: If we want to get the most welfare “bang” for our ecological “buck”, barring the global poor access to economic opportunities is surely not the way to go. (It’s less extreme than outright killing them, but I think ultimately misguided for fundamentally similar reasons.) We should strive for improved efficiency in less humanly damaging ways: emissions taxes, reduced animal (esp. cattle) farming, increased urban density / efficient transit, etc. Not to mention investing in scientific research to uncover new solutions — investments which are more easily made by a wealthier, better educated populace.
Assorted Topics: Logic, and our lack of Kants
On the Logic side of philosophy, Tristan at Sprachlogic serves up a new notation for propositional modal operators. He seeks to answer the following by way of introducing a new notational method:
It is common to see the following list of four modal operators presented, sometimes as though it were exhaustive: possibility, necessity, contingency and impossibility. But reflect again that, of these four modalities, possibility is an odd one out, since it is non-commital on truth-value. Also, note that systems have been developed where other operators, e.g. one for non-contingency, are taken as primitive. This can give rise to an uneasy, lost feeling. Are the usual four modal operators just a hodge-podge? What modal operators are there (could there be)? Is there a systematic way of producing them all? And is there then a systematic way of determining logical relations between them?
Concerning philosophers themselves, Eric at Splintered Mind discusses the charge that specialization in contemporary philosophy signals the demise of interdisciplinary giants, using Kant as an example. An excerpt:
Consider by century: It seems plausible that no philosopher of at least the past 60 years has achieved the kind of huge, broad impact of Locke, Hume, or Kant. Lewis, Quine, Rawls, and Foucault had huge impacts in clusters of areas but not across as broad a range of areas. Others like McDowell and Rorty have had substantial impact in a broad range of areas but not impact of near-Kantian magnitude. Going back another several decades we get perhaps some near misses, including Wittgenstein, Russell, Heidegger, and Nietzsche, who worked ambitiously in a wide range of areas but whose impact across that range was uneven. Going back two centuries brings in Hegel, Mill, Marx, and Comte, about whom historical judgment seems to be highly spatiotemporally variable. In contrast, Locke, Hume, and Kant span a bit over a century between them. But still, three within about a hundred years followed by a 200-year break with some near misses isn’t really anomalous if we’re comparing a peak against an ordinary run.
-I regret to say that Common Sense Atheism is closing its digital doors, as it were. The site will remain as an archive, and the site’s author, Luke Muehlhauser, will be continuing his work in the area of artificial intelligence.
-Peter Ludlow discusses the implications of a hypothetical dissolution of the APA, courtesy of the Leiter Report.
-Gary Gutting, frequent contributor to the New York Times, discusses the purpose of philosophy in our current climate. I highlight this Stone article in particular because I don’t imagine there is a single reader who has not had to brave such questioning!
-Neal Tognazzini at Flickers of Freedom celebrates the 50th anniversary of P.F. Strawson’s Freedom and Resentment. The College of William & Mary will be hosting a two-day conference examining themes across his work.
-Daniel Dennett has been awarded the Erasmus Prize 2012. The 2012 award celebrates those who have promoted “the cultural meaning of the natural sciences.”
-Matthew Mullins at Prosblogion posts on the John Templeton Foundation’s open online submission cycle for funding inquiries. The areas of focus are philosophy and theology.