Summer 2020
Tuesdays 16:00-17:30
Date | Seminar | Venue |
---|---|---|
Jun 23 | The Truth about Free Will. Abstract: The talk will critically examine the arguments against free will. Biography: Raymond Tallis is a philosopher, poet, novelist and cultural critic, and a retired physician and clinical neuroscientist. He ran a large clinical service at Hope Hospital, Salford, and an academic department at the University of Manchester. His research focussed on epilepsy, stroke, and neurological rehabilitation. | Zoom: https://universityofsussex.zoom.us/j/94508499194 |
Jun 30 | How to Build a Conscious Machine. Abstract: Is the project of creating artificial consciousness a feasible one? It depends on what we think consciousness is. If we adopt a qualitative view of consciousness, then, I shall argue, the project is not feasible, even in principle. Even if there could be artificial consciousness, we could not deliberately set out to create it; we'd have no idea what to do or how to tell if we had succeeded. If we adopt a functional view of consciousness, on the other hand, then the project is feasible; we can see how to proceed, at least in outline. The functional view has a consequence, however: it denies the existence of the supposed qualitative properties of experience, qualia. Functionalists must say that qualia are illusory. Many people regard this view as untenable and self-defeating, but I shall argue that, properly understood, it is a coherent and attractive one. The first step in building a conscious robot is to adopt an illusionist theory of consciousness. | Zoom: https://universityofsussex.zoom.us/j/92008829270 |
Jul 7 | Don't Ask: Classification in Comparative Cognitive Science. Abstract: Many projects in comparative cognitive science (which I take to include research in both comparative psychology and artificial intelligence) are structured around what I'll call 'classificatory questions' – that is, questions about whether nonhuman cognitive systems have the same cognitive capacities as humans. These projects often generate unproductive, apparently verbal disputes about how cognitive capacities should be delineated. In part because of this, some researchers have argued that we should stop asking classificatory questions and instead adopt a 'bottom-up' approach focussed on cognitive mechanisms. Against this, I offer a defence of classificatory projects – arguing, first, that bottom-up approaches raise many of the same difficult questions about the delineation of cognitive capacities, and second, that these questions can be addressed once we recognise that researchers' theoretical interests play a role in delineating the objects of study. On this view, apparently verbal disagreements may reflect deeper disagreement about why we are engaged in classificatory projects. So this defence of classificatory projects in comparative cognitive science comes with a qualification: researchers can't sensibly pursue classificatory projects for their own sake, but only to satisfy some further theoretical interest. | Zoom: https://universityofsussex.zoom.us/j/97897408661 |
Jul 14 | Protein computation and its implications. Abstract: In the brain, the chemical processes in post-synaptic proteins perform a great deal of computation. These processes have inherent properties similar to Piaget's concepts of assimilation and accommodation, and models based on them are an important step forward for cognitive science and AI. This presentation introduces the topic of protein computation and describes the building blocks and future directions for a new model of learning and cognition inspired by protein computation processes and by the research of Seth Grant and his team. The low-level properties of protein computation make the model naturally adaptive, distributed, resilient and exploratory. Protein computation models have both technical and ethical implications for the future of AI. | Zoom: https://universityofsussex.zoom.us/j/91140256007 |
Jul 21 | Minding the Moral Gap in Human-Machine Interaction. Abstract: Given the enduring challenges of interpretability, explainability, fairness, safety, and reliability of machine learning systems, as well as expanding legal and ethical constraints imposed on such systems by regulators and standards bodies, AI/ML systems deployed in high-stakes decision contexts will, for the foreseeable future, be required to operate under human oversight – a requirement often called 'meaningful human control'. Oversight is increasingly demanded in a broad range of application areas, from medicine and banking to military uses. However, this reassuring phrase conceals grave difficulties. How can humans control or provide effective oversight for ML system operations or machine outputs for which human supervisors lack deep understanding – an understanding often precluded by the very same causes (speed, complexity, opacity and non-verifiability of machine reasoning) that necessitate human supervision in the first place? This quandary exposes a gap in AI safety and ethics governance mechanisms that existing methods are unlikely to close. In this talk I explore two dimensions of this gap that are frequently underappreciated in research on AI safety, explainable AI, or 'human-friendly AI': the absence of a capacity for 'moral dialectic' between human and machine experts, and the absence of an affective dimension to machine reasoning. | Zoom: https://universityofsussex.zoom.us/j/99931340072 |
Contact COGS
For suggestions for speakers, contact Simon Bowes.
For publicity and questions regarding the website, contact Simon Bowes.
Please mention COGS and COGS seminars to all potentially interested newcomers to the university.
A good way to keep informed about COGS Seminars is to be a member of COGS. Any member of the university may join COGS and the COGS mailing list by using the subscription form at .
Follow us on Twitter: