Research

Publications

Neuroscience of Moral Judgment. Forthcoming. Contemporary Approaches to Moral Psychology, Eds. Felipe de Brigard & Walter Sinnott-Armstrong (MIT Press). [Co-authored with Josh May, Clifford Workman, and Hyemin Han]

We chart how neuroscience and philosophy have together advanced our understanding of moral judgment, with implications for when it goes well or poorly. Combined with rigorous evidence from psychology and careful philosophical analysis, neuroscientific evidence can even help shed light on the extent of moral knowledge and on ways to promote healthy moral development.


Holistic rational analysis. Forthcoming. Behavioral and Brain Sciences. [Co-authored with Colin Klein]

We argue that Lieder and Griffiths’ method for analyzing rational process models cannot capture an important constraint on resource allocation, which is competition between different processes for shared resources (Klein 2018). We suggest that holistic interactions between processes on at least three different timescales—episodic, developmental, and evolutionary—must be taken into account by a complete resource-bounded explanation.


Valuation mechanisms in moral cognition. Forthcoming. Behavioral and Brain Sciences.

May (2018) cites a body of evidence suggesting that participants take consequences, personal harm, and other factors into consideration when making moral judgments, and uses this evidence to support the conclusion that moral cognition relies on rule-based inference. This commentary defends an alternative interpretation of the evidence, namely, that it can be explained in terms of domain-general valuation mechanisms.


Revising and Expanding Cushman's Learning-Based Model of Moral Cognition. Forthcoming. The Normative Implications of Contemporary Neuroscience. Eds. Geoffrey Holtzman and Elisabeth Hildt. New York: Springer.

Moral cognition refers to the human capacity to experience and respond to situations of moral significance. Recently, philosophers and cognitive scientists have turned to reinforcement learning, a branch of machine learning, to develop formal, mathematical models of normative cognition. I argue that moral cognition instead depends on three or more decision-making systems, with interactions between the systems producing its characteristic sociological, psychological, and phenomenological features.
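
To give a concrete sense of the kind of multi-system picture at issue, the sketch below combines three valuation systems often distinguished in the literature (model-free, model-based, and Pavlovian); the systems, numbers, and weights are hypothetical placeholders, not the chapter's model.

```python
# Illustrative sketch only: a moral choice as a weighted combination of three
# valuation systems commonly distinguished in the literature. All values
# and weights are hypothetical placeholders.

SYSTEMS = {
    "model_free":  {"push": -0.9, "refrain": 0.2},   # learned aversion to harmful acts
    "model_based": {"push":  0.6, "refrain": -0.4},  # outcome calculation (e.g., save five)
    "pavlovian":   {"push": -0.5, "refrain": 0.0},   # innate withdrawal from direct harm
}

def combined_value(action, weights):
    """Weighted sum of each system's valuation of the action."""
    return sum(weights[s] * SYSTEMS[s][action] for s in SYSTEMS)

weights = {"model_free": 1.0, "model_based": 1.0, "pavlovian": 1.0}
for action in ("push", "refrain"):
    print(action, round(combined_value(action, weights), 2))
# Shifting the weights shifts the judgment: the interactions between the
# systems do the explanatory work.
```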

An empirical solution to the puzzle of weakness of will. 2018. Synthese 195: 5175.

This paper presents an empirical solution to the puzzle of weakness of will. Specifically, it presents a theory of action, grounded in contemporary cognitive neuroscientific accounts of decision making, that explains the phenomenon of weakness of will without generating the puzzle.


Recovering Spinoza's Theory of Akrasia. 2015. Doing without Free Will: Spinoza and Contemporary Moral Problems. Eds. Ursula Goldenbaum and Christopher Kluz. New York: Rowman and Littlefield.

I show that Spinoza defends a causal psychological theory of akrasia, absent a concept of free will. I then challenge three contemporary discussions of Spinoza's view, as put forward by Jonathan Bennett (1984), Michael Della Rocca (1996), and Martin Lin (2006).


Drafts

Paper on moral AI (Under review)

Extant approaches to artificial moral cognition propose to build comprehensive cognitive architectures to model moral decision-making. Because these approaches build sophisticated architectures into artificial systems, however, they are difficult to scale up to model the dynamics of moral cognition. I propose a tractable, reinforcement learning-based framework for designing artificial moral cognition, complete with proposals to model ‘fairness’ and ‘honesty.’ I consider some of its core implications for addressing the value alignment problem.
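
As a rough illustration of how a reinforcement learning framework might encode something like ‘fairness,’ the sketch below adds an inequity-aversion penalty (in the spirit of Fehr and Schmidt 1999) to an agent's reward function; the function and its parameters are illustrative assumptions, not the paper's actual proposal.

```python
# Illustrative sketch only: one way a 'fairness' signal might enter an RL
# reward function, via an inequity-aversion penalty (cf. Fehr & Schmidt 1999).
# The function and parameters are hypothetical, not the paper's model.

def fair_reward(own_payoff, other_payoffs, alpha=1.0, beta=0.5):
    """Own payoff minus penalties for disadvantageous (alpha) and
    advantageous (beta) inequity relative to other agents."""
    n = len(other_payoffs)
    disadvantage = sum(max(p - own_payoff, 0) for p in other_payoffs) / n
    advantage = sum(max(own_payoff - p, 0) for p in other_payoffs) / n
    return own_payoff - alpha * disadvantage - beta * advantage

# An agent trained on fair_reward rather than raw payoff will, other things
# being equal, prefer outcomes with more equal payoff distributions:
print(fair_reward(10, [10, 10]))  # 10.0: equal split, no penalty
print(fair_reward(10, [2, 2]))    # 6.0: advantageous inequity is penalized
```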


Paper on binocular rivalry (Under review)

Hohwy et al.’s (2008) model of binocular rivalry (BR) is taken as a classic illustration of predictive coding’s explanatory power. I revisit the account and show that it cannot explain the role of reward in BR. I then consider a more recent account, based on Bayesian model averaging, which recasts the role of reward in BR in terms of optimism bias. If we accept this account, however, then we must reconsider our conception of perception. On this latter view, I argue, organisms engage in what amounts to policy-driven, motivated perception.
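
For readers unfamiliar with the machinery, the toy computation below shows how a reward-weighted (‘optimistic’) prior can break a tie between two equally supported percepts; the numbers and the exponential form of the bias are assumptions for illustration only.

```python
# Toy sketch of Bayesian inference over two rivaling percepts, with an
# 'optimism bias': the prior is tilted toward the more rewarding hypothesis.
# Numbers and the form of the bias term are illustrative assumptions.
import numpy as np

likelihood = np.array([0.5, 0.5])   # ambiguous sensory evidence for percepts A, B
reward     = np.array([1.0, 0.0])   # percept A has been associated with reward

def posterior(likelihood, reward, bias=2.0):
    """Flat prior multiplicatively tilted by expected reward (the 'optimism'
    term), combined with the likelihood and normalized."""
    prior = np.exp(bias * reward)
    prior /= prior.sum()
    post = likelihood * prior
    return post / post.sum()

print(posterior(likelihood, reward))  # percept A dominates despite equal evidence
```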


Paper on synchronic self-control (Under review)

An agent exercises instrumental rationality to the degree that she adopts appropriate means to achieving her ends. Adopting appropriate means to achieving one’s ends can, in turn, involve overcoming one’s strongest desires; that is, it can involve exercising synchronic self-control. However, contra standard approaches (Kennett and Smith 1996, Mele 2002, Sripada 2012), I deny that synchronic self-control is possible. Specifically, I draw on reinforcement learning models and empirical evidence from cognitive neuroscience to describe a naturalistic, multi-system model of the mind. On this model, synchronic self-control is impossible. Must we, then, give up on a meaningful conception of instrumental rationality? No. A multi-system view still permits something like synchronic self-control: an agent can control her very strong desires. Adopting a multi-system model of the mind thus places limitations on our conception of instrumental rationality without requiring that we abandon it.
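
The sketch below illustrates the core structural point on hypothetical numbers: if action selection at the moment of choice simply goes to the highest weighted valuation, ‘control’ can operate only by re-weighting desires in advance, not by overriding the momentarily strongest desire.

```python
# Minimal sketch of multi-system action selection. The point illustrated:
# at the moment of choice the highest weighted valuation wins, so the
# currently strongest desire cannot be overridden at that very moment
# (no synchronic self-control), though a control system can down-weight
# strong desires beforehand. Values and weights are hypothetical.

desires = {"eat_cake": 0.9, "stick_to_diet": 0.7}        # momentary valuations
control_weight = {"eat_cake": 1.0, "stick_to_diet": 1.0}

def choose(desires, weights):
    """Action selection: the desire with the highest weighted value wins."""
    return max(desires, key=lambda a: desires[a] * weights[a])

print(choose(desires, control_weight))   # 'eat_cake': strongest desire wins

# 'Control' takes effect only via prior re-weighting, not at the moment
# of action itself:
control_weight["eat_cake"] = 0.7          # earlier down-regulation
print(choose(desires, control_weight))   # now 'stick_to_diet'
```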


Paper on the nature of valuation

Research in reinforcement learning asks how an agent can learn to optimize its behavior through interaction with its environment (Sutton and Barto 1998, 2018). The research program’s plurality of analyses, findings, and theories is, at a minimum, a sign of its scientific productivity (Kitcher 1982, 35-48). In this paper, I argue for something stronger: namely, that these models and findings target a sui generis cognitive capacity. This capacity is valuation, or the goal- and context-dependent subpersonal attribution of subjective reward and value to internal and external stimuli.
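
A minimal sketch of the intended notion, on made-up numbers: valuation assigns different values to the same stimulus depending on the agent's current goal state.

```python
# Illustrative sketch of valuation as goal- and context-dependent: the same
# stimulus receives a different subjective value depending on the agent's
# current goal state. The table and numbers are hypothetical.

VALUE = {
    ("hungry",  "food"):  1.0,
    ("sated",   "food"):  0.1,
    ("hungry",  "water"): 0.3,
    ("thirsty", "water"): 1.0,
}

def valuation(goal_state, stimulus):
    """Subpersonal attribution of value to a stimulus, indexed by goal state."""
    return VALUE.get((goal_state, stimulus), 0.0)

print(valuation("hungry", "food"))   # 1.0
print(valuation("sated",  "food"))   # 0.1: same stimulus, different value
```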


Paper on valuation and desire

The reward-based theory of desire holds that “to have an intrinsic desire regarding it being the case that p is to constitute p as a reward or a punishment” (Schroeder, 2004; Schroeder and Arpaly, 2014). The theory thereby preserves the traditional folk-psychological notion of desire while specifying it in contemporary computational and empirical terms. In this paper, I defend two related theses. First, I argue that the traditional notion of desire is best expressed by the computational notions of reward and value together, rather than by reward alone, in order to capture not only intrinsic but also instrumental desire. Second, I propose that in theoretical contexts we can replace the folk-psychological notion of desire with the technical notions of reward and value, allowing these notions to play an explicit role in resolving philosophical puzzles and debates.
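
The reward/value distinction can be made vivid with a standard TD(0) computation: a state carrying no intrinsic reward (a mere means) acquires positive value because it reliably leads to a rewarded state, the computational analogue of instrumental desire. The two-state chain and parameters below are illustrative.

```python
# Illustrative two-state chain: 'means' -> 'end' -> terminal. Only 'end'
# carries intrinsic reward, yet 'means' acquires value under TD(0) learning.
reward = {"means": 0.0, "end": 1.0}
V = {"means": 0.0, "end": 0.0}
alpha, gamma = 0.1, 0.9

for _ in range(1000):   # repeated deterministic episodes
    V["means"] += alpha * (reward["means"] + gamma * V["end"] - V["means"])
    V["end"]   += alpha * (reward["end"] + gamma * 0.0 - V["end"])

print(V)  # V['end'] ~ 1.0; V['means'] ~ 0.9: value without intrinsic reward
```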


Paper on cognitive control [with Colin Klein]

We begin by proposing that there are two previously undistinguished senses of cognitive control in the cognitive neuroscience literature. The first sense, which we call ‘psychological cognitive control,’ refers to a psychological capacity posited to explain our widespread but limited ability to multi-task. By contrast, the second sense, which we call ‘connectionist-neural cognitive control,’ refers to a resource allocation mechanism in complex, dynamic systems like the brain. We argue that psychological control is implemented by connectionist control mechanisms, and we show how multiplexing, a structural feature of connectionist control in which systems reuse control representations across multiple domains, produces the signature limitations of psychological control, e.g., the inability to simultaneously do mental arithmetic and remember a three-digit number.
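
The toy computation below illustrates the multiplexing point on hypothetical vectors: when a single shared control state must serve two tasks that demand different settings, performance on both degrades, whereas serial execution does not suffer.

```python
# Toy sketch of multiplexing: a single shared control representation is reused
# across tasks, so two tasks demanding different control settings cannot be
# served well at once. Vectors and task demands are hypothetical.
import numpy as np

arithmetic_demand = np.array([1.0, 0.0])   # control state needed for mental math
memory_demand     = np.array([0.0, 1.0])   # control state needed for rehearsal

def performance(control_state, demand):
    """Task performance as similarity between the shared control state
    and the task's demanded state."""
    return float(control_state @ demand)

# One shared control state must serve both tasks simultaneously:
shared = (arithmetic_demand + memory_demand) / 2
print(performance(shared, arithmetic_demand))  # 0.5: both tasks degraded
print(performance(shared, memory_demand))      # 0.5

# Done serially, each task gets the control state it needs:
print(performance(arithmetic_demand, arithmetic_demand))  # 1.0
```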


Paper on modeling moral problems [with Colin Klein]

The normative nature of much of moral philosophy suggests that for every moral problem there is a right thing to do, and that the job of moral deliberation is to find it. This obscures important differences between how we might approach moral decisions. We propose that there are in fact two general types of moral problems: pattern-matching moral problems and adaptive moral problems, the latter defined by constant, ethically loaded adjustment to an uncertain world. We argue that these two types of problems have fundamentally different structures, and so should be modeled using different machine learning approaches.
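
In machine-learning terms, the contrast might be sketched as follows; the toy precedent table and update rule are illustrative assumptions, not the paper's proposal.

```python
# Schematic contrast between the two problem types, in machine-learning terms.
# The toy data and update rule are illustrative assumptions only.

# Pattern-matching moral problems: a fixed mapping from case features to
# verdicts, naturally modeled as supervised learning over labeled precedents.
PRECEDENTS = {("lie", "to_protect"): "permissible",
              ("lie", "for_gain"):   "impermissible"}

def pattern_match(case):
    """Precedent lookup; in practice, a fitted classifier generalizing it."""
    return PRECEDENTS.get(case, "unknown")

print(pattern_match(("lie", "to_protect")))   # 'permissible'

# Adaptive moral problems: no fixed verdict; the agent keeps adjusting an
# estimate as feedback from an uncertain world arrives (a bandit-style update).
estimate, lr = 0.5, 0.1
for feedback in [1, 1, 0, 1, 0, 1]:           # streaming, possibly non-stationary
    estimate += lr * (feedback - estimate)
print(round(estimate, 3))                     # ~0.571: a running, revisable estimate
```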