I am a philosophy researcher from Perth, Western Australia. I received my doctoral degree in Philosophy from the University of Western Australia in July 2018.
I currently teach at Curtin University and the University of Notre Dame Fremantle.
My main research topic is the philosophy of self-knowledge—currently a major topic at the intersection of epistemology and the philosophy of mind. My PhD, First-Person Authority and its Limits, defended a view of self-knowledge called the transparency method—a view which differs from traditional inward-looking accounts of introspection.
My research also focuses on the ways in which we fail to achieve self-knowledge: I engage with data from cognitive science which show that we often misattribute, or confabulate, our emotions, motivations, and decisions.
I have also written about the philosophy of artificial intelligence, specifically on the ethics of big data and AI rights, and about the history of philosophy, with a keen interest in David Hume.
Peer Reviewed Articles
This paper proposes a philosophically informed decision-making methodology, inspired by Aristotle, that encourages constructive discussions amongst employers and employees; is directed towards shared higher-level goals; is consistent with planning frameworks already in place in many businesses; can be amended over time without disruptive disputes; and accounts for the particularities of each industry, enterprise, workplace, and job. It seeks to establish a more fundamental basis for discussions about remote office work: specifically, the purpose and nature of the work of those affected. If these matters can be decided, then subsequent discussions might be focused more upon the shared outcomes to which stakeholders are committed and less upon individual preferences and ‘hunches.’
AI, BIG DATA, AND THE FUTURE OF CONSENT (COAUTHORED WITH NIN KIRKHAM & MARCO RIZZI)
(Forthcoming) AI and Society
In this paper, we discuss several problems with current Big Data practices which, we claim, seriously erode the role of informed consent as it pertains to the use of personal information. To illustrate these problems, we consider how the notion of informed consent has been understood and operationalised in the ethical regulation of biomedical research (and medical practices, more broadly) and compare this with current Big Data practices. We do so by first discussing three types of problems that can impede informed consent with respect to Big Data use. First, we discuss the transparency (or explanation) problem. Second, we discuss the re-repurposed data problem. Third, we discuss the meaningful alternatives problem. In the final section of the paper, we suggest some solutions to these problems. In particular, we propose that the use of personal data for commercial and administrative objectives could be subject to a ‘soft governance’ ethical regulation, akin to the way that all projects involving human participants (e.g., social science projects, human medical data and tissue use) are regulated in Australia through the Human Research Ethics Committees (HRECs). We also consider alternatives to the standard consent forms, and privacy policies, that could make use of some of the latest research focussed on the usability of pictorial legal contracts.
MORE THAN JUST A PASSING COGNITIVE SHOW: A DEFENCE OF AGENTIALISM ABOUT SELF-KNOWLEDGE
(Forthcoming) Acta Analytica
This paper contributes to a debate that has arisen in the recent self-knowledge literature between agentialists and empiricists. According to agentialists, in order for one to know what one believes, desires, and intends, rational agency needs to be exercised in centrally significant cases. Empiricists disagree: while they acknowledge the importance of rationality in general, they maintain that when it comes to self-knowledge, empirical justification, or warrant, is always sufficient.
In what follows, I defend agentialism. I argue that if we could only come to know our judgement-sensitive attitudes in the way described by empiricism, then we would be self-estranged from them when we acquire knowledge of them. We would relate to our own attitudes as if we were watching the movies of our inner lives unfold. Given that this is not the position we typically inhabit with respect to our judgement-sensitive attitudes, I conclude that empiricism fails. This is the self-estrangement argument against empiricism. I then consider a response that Brie Gertler, an empiricist, offers to the objection that empiricism fatally portrays us as ‘mere observers of a passing cognitive show’ (2016, p.1). I argue that her response is unsuccessful. Hence, we should endorse agentialism.
EXTENDING THE TRANSPARENCY METHOD BEYOND BELIEF: A SOLUTION TO THE GENERALITY PROBLEM
(2020) Acta Analytica
According to the Transparency Method (TM), one can know whether one believes that P by attending to a question about the world—namely, ‘Is P true?’ On this view, one can know, for instance, whether one believes that Socrates was a Greek philosopher by attending to the question ‘Was Socrates a Greek philosopher?’ While many think that TM can account for the self-knowledge we can have of such a belief—and belief in general—fewer think that TM can be generalised to account for the self-knowledge we can have of other propositional attitudes, such as our desires, intentions, wishes and so on. Call this the Generality Problem. In the present paper, I contrast my own attempt to solve the Generality Problem with several recent ones. I argue that in order to extend TM beyond belief, we must look to the concepts underpinning each kind of mental state. Doing so, I argue, reveals a series of outward-directed questions that can be attended to, in order to know what one desires, intends, wishes and so on. Call this the conceptual approach to extending TM. I support the conceptual approach in the present paper by showing how it generates Moore-Paradoxical sentences that are analogous to the case of belief.
LINK (PDF READ ONLY): https://rdcu.be/b6bfM
THE HARD PROBLEM OF AI RIGHTS
(2020) AI & Society
In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research programme is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise to experience.
To motivate this claim, I consider three ways in which to ground AI rights—namely: superintelligence, empathy, and a capacity for consciousness. I argue that appeals to superintelligence and empathy are problematic, and that consciousness should be our central focus, as in the case of animal rights. However, I also argue that AI rights is disanalogous from animal rights in an important respect: animal rights can proceed without a solution to the ‘Hard Problem’ of consciousness. Not so with AI rights, I argue. There we cannot make the same kinds of assumptions that we do about animal consciousness, since we still do not understand why brain states give rise to conscious mental states in humans.
REVISIONISM GONE AWRY: SINCE WHEN HASN’T HUME BEEN A SCEPTIC? (COAUTHORED WITH M.P. LEVINE)
(2020) Journal of Scottish Philosophy
In this paper, we argue that revisionary theories about the nature and extent of Hume’s scepticism are mistaken. We claim that the source of Hume’s pervasive scepticism is his empiricism. As earlier readings of Hume’s Treatise claim, Hume was a sceptic—and a radical one. Our position faces one enormous problem. How is it possible to square Hume’s claims about normative reasoning with his radical scepticism? Despite the fact that Hume thinks causal (inductive) reasoning is irrational, he explicitly claims that one can and should make normative claims about beliefs being ‘reasonable’. We show that even though Hume thinks that our causal (inductive) beliefs are rationally unjustified, there is nonetheless a ‘relative’ sense of justification available to Hume and that he relies on this ‘relative’ sense in those places where he makes normative claims about what we ought to believe.
CONFABULATION DOES NOT UNDERMINE INTROSPECTION FOR PROPOSITIONAL ATTITUDES
According to some, such as Peter Carruthers (2009, 2010, 2011, 2015), the confabulation data (experimental data showing subjects making false psychological self-ascriptions) undermine the view that we can know our propositional attitudes by introspection. He believes that these data favour his Interpretive Sensory-Access (ISA) theory—the view that self-knowledge of our propositional attitudes always involves self-interpretation of our sensations, behaviour, or situational cues. This paper will review some of the confabulation data and conclude that the presence and pattern of these data do not substantiate the claim that we cannot introspect our propositional attitudes. As a consequence of this discussion, I conclude that the ISA theory is not well supported by the empirical data.
SOUNDS LIKE PSYCHOLOGY TO ME: TRANSGRESSING THE BOUNDARIES BETWEEN SCIENCE AND PHILOSOPHY
(2016) Limina: A Journal of Historical and Cultural Studies
In recent years, some eminent scientists have argued that free will, as commonly understood, is an illusion. Given that questions such as ‘do we have free will?’ were once pursued solely by philosophers, how should science and philosophy coalesce here? Do philosophy and science simply represent different phases of a particular investigation—the philosopher concerned with formulating a specific question and the scientist with empirically testing it? Or should the interactions between the two be more involved? Contemporary responses to such questions have occasionally given rise to conflict amongst members of different disciplines. Some individual scientists have dismissed philosophical objections to their scientific theories on the grounds that the philosopher lacks experience in their respective field. And some individual philosophers have rejected scientific theories on a priori grounds, without giving due consideration to the empirical evidence.
In this paper, I argue that such dismissiveness, on both sides, is mistaken. I will do so by putting forward a view that is inspired by the American philosopher and psychologist William James, who has been characterised by recent commentators as having performed ‘boundary work’. Boundary work involves transgressing the dividing lines between such disciplines, and attempting to solve certain problems without being restricted to the methodology of a single discipline. To help support this position, I will examine a series of contemporary problems that are pursued in both philosophy and science that relate to moral responsibility and free will. I will argue that in order to solve such problems we need to perform boundary work.
COMPUTER-AIDED LIVES. MY REVIEW OF JAMES W. CORTADA'S BOOK: LIVING WITH COMPUTERS.
My review of James W. Cortada's book: Living with Computers: The Digital World of Today and Tomorrow. [Click to read]
REVIEW OF KANT, HUME, AND THE INTERRUPTION OF DOGMATIC SLUMBER BY ABRAHAM ANDERSON
My review of Abraham Anderson's book: Kant, Hume, and the Interruption of Dogmatic Slumber, featured in Phenomenological Reviews. [Click to read]
REVIEW OF THE STONE READER: MODERN PHILOSOPHY IN 133 ARGUMENTS BY PETER CATAPANO AND SIMON CRITCHLEY
My review of The Stone Reader: Modern Philosophy in 133 Arguments by Peter Catapano and Simon Critchley. Published in Limina: A Journal of Historical and Cultural Studies, Volume 22.1 [Click to Download]
Research and Features
LOOKING INTO THE FUTURE OF AI: BIG DATA AND INFORMED CONSENT
September 3, 2021
In this article, I discuss how we can protect our personal information in the age of artificial intelligence and big data.
It features in the following:
Muswellbrook Chronicle; Glen Innes Examiner; The Advocate; Central Western Daily; The Canberra Times; Port Macquarie News; The Rural; Margaret River Mail; Wellington Times; Oberon Review; Bega District News; Northern Rivers Review; Beaudesert Times; Border Chronicle; Maitland Mercury; The Border Mail; Busselton Mail
IMPERFECT COGNITIONS BLOG
January 28, 2020
'Confabulation Does Not Undermine Introspection' was featured on the Imperfect Cognitions Blog, a blog on delusions, memory distortions, confabulations, biased beliefs, and other imperfect cognitions. Click here for the blog.