Researcher. Teacher.
Email: adamandreotta@outlook.com
LinkedIn: https://www.linkedin.com/in/adam-andreotta-086078169/
PhilPeople: https://philpeople.org/profiles/adam-andreotta
ABOUT
I am a lecturer in the School of Management at Curtin University, with a primary research focus on the philosophy of self-knowledge, a key area at the intersection of epistemology and the philosophy of mind. Additionally, I explore the philosophy of artificial intelligence, particularly regarding the ethics of big data and how we can better secure informed consent online.
PUBLISHED WORK - BOOKS
RETHINKING INFORMED CONSENT IN THE BIG DATA AGE
December 23, 2024 by Routledge
In the “big data age”, providing informed consent online has never been more challenging. Countless companies collect and share our personal data through devices, apps, and websites, fuelling a growing data economy and the emergence of surveillance capitalism. Few of us have the time to read the associated privacy policies and terms and conditions, and thus are often unaware of how our personal data are being used. This is a problem, as in the last few years, large tech companies have abused our personal data. As privacy self-management, through the mechanism of providing online consent, has become increasingly difficult, some have argued that surveillance capitalism, and the data economy more broadly, need to be overthrown.
This book presents a different perspective. It departs from the concept of revolutionary change to focus on pragmatic, incremental solutions tailored to everyday contexts. It scrutinizes how consent is currently sought and provided online and offers suggestions about how online consent practices can be improved. These include: the possibility of subjecting consent-gathering practices to ethics committees for review; the creation of visual-based consent agreements and privacy policies, to help with transparency and engagement; the development of software to protect privacy; and the idea of automated consent functionalities that allow users to bypass the task of reading vast amounts of online consent agreements. The author suggests that these "small-scale" changes to online consent-obtaining procedures could, if successfully implemented, provide a way of self-managing our privacy that avoids a revolutionary dismantling of the data economy. In the process, readers are encouraged to rethink the very purpose of providing informed consent online.
Rethinking Informed Consent in the Big Data Age will appeal to researchers in normative ethics, applied ethics, philosophy of law, and the philosophy of AI. It will also be of interest to business scholars, communication researchers, students, and those in industry.
NEW PERSPECTIVES ON TRANSPARENCY AND SELF-KNOWLEDGE
November 22, 2024 by Routledge
This volume presents new perspectives on transparency-theoretic approaches to self-knowledge. It addresses many under-explored dimensions of transparency theories and considers their wider implications for epistemology, philosophy of mind, and psychology.
It is natural to think that self-knowledge is gained through introspection, whereby we somehow peer inward and detect our mental states. However, so-called transparency theories emphasize our capacity to peer outward at the world, hence beyond our minds, in the pursuit of self-knowledge. For all their popularity in recent decades, transparency theories have also met with myriad challenges. The chapters in this volume seek to forge new ground in debates about the role of transparency in self-knowledge. Some chapters deepen our understanding of key themes at the heart of transparency theories, such as the ways in which transparent self-knowledge is properly 'first-personal' or 'non-alienated'. Other chapters extend transparency theory to different kinds of mental states and phenomena such as memory, actions, social groups, credences, projection, second-order sincerity, and Moore’s Paradox.
New Perspectives on Transparency and Self-Knowledge will appeal to scholars and advanced students working in epistemology, philosophy of mind, and psychology.
PUBLISHED WORK - ARTICLES
Peer Reviewed Articles
AUTOMATED INFORMED CONSENT
October 19, 2024, Big Data & Society (coauthored with Björn Lundgren)
Online privacy policies or terms and conditions ideally provide users with information about how their personal data are being used. The reality is that very few users read them: they are long, often hard to understand, and ubiquitous. The average internet user cannot realistically read and understand all aspects that apply to them and thus give informed consent to the companies who use their personal data. In this article, we provide a basic overview of a solution to the problem. We suggest that software could allow users to delegate the consent process and consent could thus be automated. The article investigates the practical feasibility of this idea. After suggesting that it is feasible, we develop some normative issues that we believe should be addressed before automated consent is implemented.
PARTIAL FIRST-PERSON AUTHORITY: HOW WE KNOW OUR OWN EMOTIONS
October 19, 2023, Review of Philosophy and Psychology
This paper focuses on the self-knowledge of emotions. I first argue that several of the leading theories of self-knowledge, including the transparency method (see, e.g., Byrne 2018) and neo-expressivism (see, e.g., Bar-On 2004), have difficulties explaining how we authoritatively know our own emotions (even though they may plausibly account for sensation, belief, intention, and desire). I next consider Barrett’s (2017a) empirically informed theory of constructed emotion. While I agree with her that we ‘give meaning to [our] present sensations’ (2017a, p.26), I disagree with her that we construct our emotions, as this has some unwelcome implications. I then draw upon recent data from the science of emotions literature to advance a view I call partial first-person authority. According to this view, first-person authority with respect to our emotions is only partial: we can introspect and authoritatively know our own sensations, and beliefs, in ways others cannot; but we still need to interpret those sensations and beliefs, to know our emotions. Finally, I consider self-interpretational accounts of self-knowledge by Carruthers (2011) and Cassam (2014). I argue that while these accounts are implausible when applied to beliefs, desires, and intentions, they are more plausible when applied to our emotions.
LINK: https://link.springer.com/article/10.1007/s13164-023-00698-6
April, 2022
This paper proposes a philosophically informed decision-making methodology, inspired by Aristotle, that encourages constructive discussions amongst employers and employees; is directed towards shared higher-level goals; is consistent with planning frameworks already in place in many businesses; can be amended over time without disruptive disputes; and accounts for the particularities of each industry, enterprise, workplace, and job. It seeks to establish a more fundamental basis for discussions about remote office work: specifically, the purpose and nature of the work of those affected. If these matters can be decided, then subsequent discussions might be focused more upon the shared outcomes to which stakeholders are committed and less upon individual preferences and 'hunches.'
AI, BIG DATA, AND THE FUTURE OF CONSENT (COAUTHORED WITH NIN KIRKHAM & MARCO RIZZI)
(2022) AI & Society
In this paper, we discuss several problems with current Big Data practices which, we claim, seriously erode the role of informed consent as it pertains to the use of personal information. To illustrate these problems, we consider how the notion of informed consent has been understood and operationalised in the ethical regulation of biomedical research (and medical practices, more broadly) and compare this with current Big Data practices. We do so by first discussing three types of problems that can impede informed consent with respect to Big Data use. First, we discuss the transparency (or explanation) problem. Second, we discuss the repurposed data problem. Third, we discuss the meaningful alternatives problem. In the final section of the paper, we suggest some solutions to these problems. In particular, we propose that the use of personal data for commercial and administrative objectives could be subject to a 'soft governance' ethical regulation, akin to the way that all projects involving human participants (e.g., social science projects, human medical data and tissue use) are regulated in Australia through the Human Research Ethics Committees (HRECs). We also consider alternatives to the standard consent forms, and privacy policies, that could make use of some of the latest research focussed on the usability of pictorial legal contracts.
LINK: https://link.springer.com/article/10.1007/s00146-021-01262-5
MORE THAN JUST A PASSING COGNITIVE SHOW: A DEFENCE OF AGENTIALISM ABOUT SELF-KNOWLEDGE
(2022) Acta Analytica
This paper contributes to a debate that has arisen in the recent self-knowledge literature between agentialists and empiricists. According to agentialists, in order for one to know what one believes, desires, and intends, rational agency needs to be exercised in centrally significant cases. Empiricists disagree: while they acknowledge the importance of rationality in general, they maintain that when it comes to self-knowledge, empirical justification, or warrant, is always sufficient.
In what follows, I defend agentialism. I argue that if we could only come to know our judgement-sensitive attitudes in the way described by empiricism, then we would be self-estranged from them when we acquire knowledge of them. We would relate to our own attitudes as if we were watching the movies of our inner lives unfold. Given that this is not the position we typically inhabit, with respect to our judgement-sensitive attitudes, I conclude that empiricism fails. This is the self-estrangement argument against empiricism. I then consider a response that Brie Gertler, an empiricist, offers to the objection that empiricism fatally portrays us as 'mere observers of a passing cognitive show' (2016, p.1). I argue that her response is unsuccessful. Hence, we should endorse agentialism.
LINK: https://link.springer.com/article/10.1007%2Fs12136-021-00492-y
EXTENDING THE TRANSPARENCY METHOD BEYOND BELIEF: A SOLUTION TO THE GENERALITY PROBLEM
(2020) Acta Analytica
According to the Transparency Method (TM), one can know whether one believes that P by attending to a question about the world—namely, ‘Is P true?’ On this view, one can know, for instance, whether one believes that Socrates was a Greek philosopher by attending to the question ‘Was Socrates a Greek philosopher?’ While many think that TM can account for the self-knowledge we can have of such a belief—and belief in general—fewer think that TM can be generalised to account for the self-knowledge we can have of other propositional attitudes, such as our desires, intentions, wishes and so on. Call this the Generality Problem. In the present paper, I contrast my own attempt to solve the Generality Problem with several recent ones. I argue that in order to extend TM beyond belief, we must look to the concepts underpinning each kind of mental state. Doing so, I argue, reveals a series of outward-directed questions that can be attended to, in order to know what one desires, intends, wishes and so on. Call this the conceptual approach to extending TM. I support the conceptual approach in the present paper by showing how it generates Moore-Paradoxical sentences that are analogous to the case of belief.
LINK (PDF READ ONLY): https://rdcu.be/b6bfM
LINK: https://link.springer.com/article/10.1007/s12136-020-00447-9
THE HARD PROBLEM OF AI RIGHTS
(2021) AI & Society
In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research programme is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise to experience.
To motivate this claim, I consider three ways in which to ground AI rights—namely: superintelligence, empathy, and a capacity for consciousness. I argue that appeals to superintelligence and empathy are problematic, and that consciousness should be our central focus, as in the case of animal rights. However, I also argue that AI rights is disanalogous from animal rights in an important respect: animal rights can proceed without a solution to the ‘Hard Problem’ of consciousness. Not so with AI rights, I argue. There we cannot make the same kinds of assumptions that we do about animal consciousness, since we still do not understand why brain states give rise to conscious mental states in humans.
LINK: https://link.springer.com/article/10.1007%2Fs00146-020-00997-x
REVISIONISM GONE AWRY: SINCE WHEN HASN’T HUME BEEN A SCEPTIC? (COAUTHORED WITH M.P. LEVINE)
(2020) Journal of Scottish Philosophy
In this paper, we argue that revisionary theories about the nature and extent of Hume's scepticism are mistaken. We claim that the source of Hume's pervasive scepticism is his empiricism. As earlier readings of Hume's Treatise claim, Hume was a sceptic—and a radical one. Our position faces one enormous problem. How is it possible to square Hume's claims about normative reasoning with his radical scepticism? Despite the fact that Hume thinks causal (inductive) reasoning is irrational, he explicitly claims that one can and should make normative claims about beliefs being 'reasonable'. We show that even though Hume thinks that our causal (inductive) beliefs are rationally unjustified, there is nonetheless a 'relative' sense of justification available to Hume and that he relies on this 'relative' sense in those places where he makes normative claims about what we ought to believe.
LINK: https://www.euppublishing.com/doi/10.3366/jsp.2020.0264
CONFABULATION DOES NOT UNDERMINE INTROSPECTION FOR PROPOSITIONAL ATTITUDES
(2019) Synthese
According to some, such as Peter Carruthers (2009, 2010, 2011, 2015), the confabulation data (experimental data showing subjects making false psychological self-ascriptions) undermine the view that we can know our propositional attitudes by introspection. He believes that these data favour his Interpretive Sensory-Access (ISA) theory—the view that self-knowledge of our propositional attitudes always involves self-interpretation of our sensations, behaviour, or situational cues. This paper will review some of the confabulation data and conclude that the presence and pattern of these data do not substantiate the claim that we cannot introspect our propositional attitudes. As a consequence of this discussion, I conclude that the ISA theory is not well supported by the empirical data.
LINK: https://link.springer.com/article/10.1007/s11229-019-02373-9
SOUNDS LIKE PSYCHOLOGY TO ME: TRANSGRESSING THE BOUNDARIES BETWEEN SCIENCE AND PHILOSOPHY
(2016) Limina: A Journal of Historical and Cultural Studies
In recent years, some eminent scientists have argued that free will, as commonly understood, is an illusion. Given that questions such as ‘do we have free will?’ were once pursued solely by philosophers, how should science and philosophy coalesce here? Do philosophy and science simply represent different phases of a particular investigation—the philosopher concerned with formulating a specific question and the scientist with empirically testing it? Or should the interactions between the two be more involved? Contemporary responses to such questions have occasionally given rise to conflict amongst members of different disciplines. Some individual scientists have dismissed philosophical objections to their scientific theories on the grounds that the philosopher lacks experience in their respective field. And some individual philosophers have rejected scientific theories on a priori grounds, without giving due consideration to the empirical evidence.
In this paper, I argue that such dismissiveness, on both sides, is mistaken. I will do so by putting forward a view that is inspired by the American philosopher and psychologist William James, who has been characterised by recent commentators as having performed ‘boundary work’. Boundary work involves transgressing the dividing lines between such disciplines, and attempting to solve certain problems without being restricted to the methodology of a single discipline. To help support this position, I will examine a series of contemporary problems that are pursued in both philosophy and science that relate to moral responsibility and free will. I will argue that in order to solve such problems we need to perform boundary work.
LINK: http://www.limina.arts.uwa.edu.au/volumes/volume-22.1-2016/article-andreotta
PUBLISHED WORK
Book Reviews
COMPUTER-AIDED LIVES. MY REVIEW OF JAMES W. CORTADA'S BOOK: LIVING WITH COMPUTERS.
My review of James W. Cortada's Book: Living with computers: The digital world of today and tomorrow. [Click to read]
January 2021
REVIEW OF KANT, HUME, AND THE INTERRUPTION OF DOGMATIC SLUMBER BY ABRAHAM ANDERSON
My review of Abraham Anderson's Book: Kant, Hume, and the Interruption of Dogmatic Slumber featured in Phenomenological Reviews. [Click to read]
December 2020
REVIEW OF THE STONE READER: MODERN PHILOSOPHY IN 133 ARGUMENTS BY PETER CATAPANO AND SIMON CRITCHLEY
My review of The Stone Reader: Modern Philosophy in 133 Arguments by Peter Catapano and Simon Critchley. Published in Limina: A Journal of Historical and Cultural Studies, Volume 22.1 [Click to Download]
2016
FEATURES
Research and Features
RADIO INTERVIEW: THE PHILOSOPHER'S ZONE EPISODE - DATA PRIVACY AND INFORMED CONSENT
Broadcast Thu 16 Mar 2023 at 4:00pm
Ninety-four per cent of Australians do not read privacy policies that apply to them – because who has the time? In 2008 it was estimated that if someone read every privacy policy they were presented with in a single year, it would take them 76 working days to get through the pile. But the amount of data we all create and share has dramatic implications for privacy and safety. Informed consent is taken very seriously in the medical community; is it time for companies using AI and Big Data to follow suit?
LOOKING INTO THE FUTURE OF AI: BIG DATA AND INFORMED CONSENT
September 3, 2021
In this article I talk about protecting our personal information in the age of artificial intelligence and big data.
It features in the following:
Muswellbrook Chronicle; Glen Innes Examiner; The Advocate; Central Western Daily; The Canberra Times; Port Macquarie News; The Rural; Margaret River Mail; Wellington Times; Oberon Review; Bega District News; Northern Rivers Review; Beaudesert Times; Border Chronicle; Maitland Mercury; The Border Mail; Busselton Mail
IMPERFECT COGNITIONS BLOG
January 28, 2020
'Confabulation Does Not Undermine Introspection' was featured on the Imperfect Cognitions Blog, a blog on delusions, memory distortions, confabulations, biased beliefs, and other imperfect cognitions. Click here for the blog.