Moral Evidence

I have several times on this blog gone over my moral framework. To sketch it quickly, I argue that acting morally means acting autonomously. I argue that to act autonomously necessarily means to do what one thinks is right, and that this is the most we can ask of people. However, I feel that I make an evasion in this account that I’m still trying to work out. My account lets me answer a problem that I believe many other accounts of morality find difficult to address: it gives an account of the nature and the source of the normative. The normative is the force of our own autonomous desires. The account hinges on the notion that autonomous human nature is the good. In a certain sense I’m trying to pull off a trick in my account – I’m saying that the reason we should be good is that being good is what we truly want. To make the good desirable, I define it as our true desires.

There are two difficulties that I see with my own account.  The first problem, which I consider relatively minor, is the fact that my account depends on the deep down goodness of human nature.  I see this problem as minor because it seems to me that few moral systems can truly stand without assuming this at some level.  There may seem to be a looming kind of relativism in my account that stems from the same problem, but I can address this at least to my own satisfaction.  We each must act out our own autonomy, and I’m inclined to think that someone else acting autonomously cannot truly be considered an enemy to our own autonomous activity.  Even when someone else stands in opposition to our actions, as long as they act autonomously they stand as a puzzle that we must integrate into the world within which we act.

The second problem is that I still run up against the is/ought distinction, despite all my attempts to evade it. I tried to get away from the is/ought problem by arguing that the normative is a part of our experience. Just as we experience texture and colour and so on, we also experience the ‘what is to be done with’ of things. The problem I feel this doesn’t address is what it means to give evidence for moral ideas. Strictly speaking, when we argue we rarely point out the properties of things. I would never argue that something is red because it looks red. The true contrast with the is/ought gap is pointing out cause and effect. We can point out the conjunction of events to say that a relationship exists, and thus give the world properties beyond immediate sensation. If we want to point out an ought, however, we can point to nothing but a brute feeling. Anything more than that is merely describing the circumstance that gives rise to the feeling of ought (e.g. ‘she is drowning, therefore you should rescue her’ really means ‘I see a woman drowning, and this makes me feel an ought’). To clarify the problem I am pointing out: I’m not saying the brute feeling is invalid, but that it does not truly seem to be something that can be argued. The question I am currently wrestling with is what it means to argue a moral point. Does it mean we are merely debating the is, so as to have the proper feeling arise in response? Or is there some way in which these feelings can be debated between the autonomous?

Let me know what you think,




5 thoughts on “Moral Evidence”

  1. You pose interesting questions. Your account seems to entail that there are certain desires that autonomous agents (in the literature, philosophers who hold views similar to yours use the phrase “fully rational”) would have. Doesn’t this just amount to saying that with full knowledge we would know that there are certain desirable things, intrinsic goods if you will, and that these entities ground our moral experience? While imagining the autonomous agent may have use in reflecting on what things are of intrinsic value, it seems mistaken to say that “human nature is the good” when it seems more true to say that humans, by nature, are responsive to intrinsic goods. Moreover, goods are not intrinsic goods because we desire them; rather, they are intrinsic goods because a priori reflection shows that there is a conceptual entailment between the natural/non-moral properties of an entity and the supervening normative properties. This point avoids the problem of arbitrary normative entailment that you gestured at in your remarks about relativism — your account doesn’t immediately lead to relativism, but its dependence on autonomous desires as the ground floor, the foundation of morality, does entail that moral practices are arbitrary, suggesting that autonomous desires are not a sturdy foundation for morality, at least for a moral realist.

    It’s worth noting that the is/ought gap isn’t really that much of a problem for anyone except a moral realist who is against foundational moral knowledge. The is/ought thesis is only that from a set of non-moral premises no moral conclusion follows. The point is that if we have any derived conclusions about morality, thus constituting moral knowledge, we must have some foundational moral knowledge, which can be used as premises in moral arguments. So the real drive of the is/ought gap is that either we have no justified moral knowledge at all, or we have foundational moral knowledge provided by a priori reflection that can feature in the premises of our moral arguments, such that we can derive moral conclusions. Intrinsic goods that we know through a priori reflection as being conceptually entailed by the natural properties provide normative and moral foundational knowledge that can be the basis for premises in arguments to moral conclusions.

    • Thanks for the response ausome (I’ve been wondering, by the way, how your name is actually supposed to break down). I’ll respond to your points in order.

      “Doesn’t this just amount to saying that with full knowledge we would know that there are certain desirable things, intrinsic goods if you will, and that these entities ground our moral experience?” – I do not currently take this tack, though it is not much of a jump. It differs in two ways. First, I do not want to assume that all autonomous agents would come to the same conclusions given the same evidence. Second, I think it is very important to note that the notion of full knowledge is a theoretical one that can never occur: there is too much information and too much learning to do. In practice this means that even if we were grounded in the way you suggest, we remain as unstable as ever.

      “While imagining the autonomous agent may have use in reflecting on what things are of intrinsic value, it seems mistaken to say that “human nature is the good” when it seems more true to say that humans, by nature, are responsive to intrinsic goods.” – Here I think I differ from you based on my phenomenological inclinations. You suggest that we recognize external goods in our world. I’m inclined to say that ‘being’ only exists in experience. I don’t think that significance exists without existing in experience. I’d hardly say that anything exists outside of experience (at least, not any existence that we can actually consider). This means that exploring ‘meaning’ is, in a sense, an exploration of human nature.

      “Moreover, goods are not intrinsic goods because we desire them, rather, they are intrinsic goods because a priori reflection shows that there is a conceptual entailment between the natural/non-moral properties of an entity and the supervening normative properties.” – I may or may not understand this correctly, so let me know if I seem way off base. What I think you’re saying is that our reflection on properties reveals the normative to us. This does not, to me, seem to extricate us from the problem that I was pointing out, namely identifying how a non-moral property can be used to argue for a normative property beyond a brute feeling.

      As for the is/ought gap… I understand you to be saying that if we have innate moral knowledge then we can use that to make moral arguments. I remain puzzled, however, about what a moral argument looks like. You should not kill; therefore you should not kill him? I’m not sure what you can really present as evidence to someone with whom you have a moral disagreement, if you agree on the facts of the matter. Now, you might be saying that ultimately we cannot disagree on moral matters if we agree on the facts of the matter, but then superior morality is just a symptom of superior epistemology plus autonomy. This is, I think, the most stable structure that I’ve come up with so far, though I’m not very satisfied with it. I am inclined to say that epistemic virtue usually follows from autonomous humility – you cannot conduct good epistemology if you are a slave to the surface appearance of the world. I don’t actually know whether we disagree with each other on this last point; I have kind of managed to confuse myself as to the topic XD

      • You can call me that if you like, or Awestin.

        I agree that ‘full knowledge’ doesn’t adequately ground morality, but my point was that your ‘autonomy’ view is committed to such a conception, as locating the foundations of morality in an agent necessitates an ideally informed agent with such impossible full knowledge. If morality is grounded in an autonomous agent, how could anything short of full knowledge in that agent ground morality, if morality is not to have gaping holes due to the absence of knowledge of certain topics? This result just is an unavoidable problem of locating moral foundations in human agency. My point was that if we take morality to be grounded in intrinsic goods, which are conceptual relations that are entailed a priori by objective natural properties, we can admit that there are shortcomings in our moral knowledge without it following that the moral fabric of reality is itself incomplete where we don’t have the answers. Any constructivist epistemological view has difficulty in accounting for what is happening when we gain knowledge — but for morality it is certainly true that our phenomenological experience in coming to new moral conclusions is one of discovery, one of uncovering something that we did not yet realize, not one of creation, as moral constructivism implies. Related to this is the point that intrinsic goods are desirable for themselves, such that in recognizing the conceptual entailment from the natural properties of an experience to an intrinsic good, we understand why we desire that experience for those properties; when morality is grounded in intrinsic goods, then from the plausible premise that there are reasons to pursue intrinsic goods, it follows that there are reasons to act morally.

        Think of the is/ought thesis this way:

        This is an invalid argument:
        Stabbing a person will cause them pain.
        Jeff stabbed John.
        Therefore, it was wrong for Jeff to stab John.

        The premises are non-moral, but the conclusion is moral, and does not logically follow from those non-moral premises. The conclusion of the is/ought gap is that no moral conclusion can follow from only non-moral premises. But this would be a valid argument:

        Stabbing a person will cause them pain.
        It is wrong to cause pain.
        Jeff stabbed John.
        Therefore, it was wrong for Jeff to stab John.

        This conclusion does follow, because there is a moral premise in the argument. The question is whether we can know, on its own, that it is wrong to cause pain. I think it is, inter alia, wrong to cause pain, and that this just is seen to be true as a conceptual entailment between causing (unnecessary) pain and wrongness. This would amount to a foundational moral claim: a priori knowledge that need not be derived from non-moral premises. If we could not have foundational moral knowledge, then all moral claims would have to be derived from non-moral claims, but the is/ought thesis correctly holds that no moral claim can be derived from solely non-moral claims. So the conclusion is drastic: either we can have foundational/a priori moral knowledge, or we cannot have any moral knowledge at all.

        The is/ought thesis is only an epistemological thesis about the structure of moral knowledge, it is not a thesis about how we conduct moral arguments in everyday moral affairs.

      • Sorry for the delay in my reply. I’ve been having difficulty deciding what it is that I actually think. I think that I’m going to be disagreeing with my past self in some respects, so I’d take this response as unconnected to my past arguments.

        I agree with you that moral realism is a more firmly grounded morality than my morality based on autonomy. However, I’ve pretty much abandoned all forms of realism on epistemological grounds, and so far I have not been able to justify to myself picking it back up again. I currently take myself to have what I understand to be a somewhat Husserlian conception of ‘reality’, in that I think we should bracket off the question of whether things are ‘real’ or ‘mental’ – it is a question that I’m inclined to think is rather meaningless. Instead we should dedicate ourselves to that which we actually experience. Based on this framework I do not think I can adopt moral realism. You can probably see how my moral system flows out of this – we experience the world, and integral to our experience of the world is its normative content. The primary nature of all experience is normative in the sense that the only ‘purpose’ of our experience is for us to direct ourselves within it. Into this framework I insert my moral system – we can either be proactive or reactive in responding to our experience. The proactive (autonomous) person endeavors to pierce deeper into the world so as to unveil the deeper normative nature of experience, instead of being buffeted by surface experience. I think that the first problem, the looming relativism, stems from the fact that I’m not entirely certain how to characterize this account socially. My current inclination is to bracket that question off as well – we each have our own experiences (or at least, I have mine), and we cannot have the experiences of anyone else. We should each do what we think is right; that is the most you can morally ask of anyone (as long as they are also, in parallel, developing their sense of right). Is morality universal between agents (as in, given an infinite amount of time, would all agents come to experience identical normative forces of the world)? Given the absolute situation-dependence of my definition of the normative, I am now inclined to say that the answer is no. Are there any universal principles upon which we can all agree (and which we could therefore call objectively true)? This seems to me to be an empirical question which can only be answered with degrees of certainty. My inclination is to say that any statement which is universally agreed upon is almost certainly devoid of actual content.

        This brings me back to the is/ought gap (again, this should not be taken as following from my previous points). My account kind of maps onto your notion that we recognize normative elements of empirical experience – I’ve just tossed out the universality element. I think you would probably agree, based on all I’ve said, that the is/ought gap is kind of a problem for me, because by my account I’ve got all of these agents running around with normative inclinations that may or may not coincide. The thing that I’ve been trying to figure out is what it means for two autonomous people to argue a normative point. However, I’ve realized that based on my characterization the two people actually can’t be arguing the same point – each is in a unique normative circumstance. I’m inclined to say that the only impact either can hope to have on the other is to reveal to the other a deeper normative element of their experience. Thus I cannot directly contest a point such as ‘one should be a bane to one’s enemies’, but must instead contest the epistemological framework within which the normative is experienced (tell me, what makes someone an enemy?).

        I’m inclined to think that the root of our disagreement is ultimately metaphysical.

      • Indeed, I think given our views on epistemology we are unlikely to come to any sort of agreement here. From the point that talk of a distinction between external and mental content is meaningless I conclude in favor of realism.
