KEITH FRANKISH

Editor of Illusionism as a Theory of Consciousness · Cambridge University Press’ Elements in Philosophy of Mind
Author of Mind and Supermind · Consciousness

What I like about the sort of view I have is that it represents us as fully part of the world, fully part of the same world. We're not sealed off into little private mental bubbles, Cartesian theaters, where all the real action is happening in here, not out there. No, I think we're much more engaged with the world. It's not all happening in some private mental world. It's happening in our engagement with the shared world, and that is a vision I find much more uplifting, comforting, and rewarding.

RAPHAËL MILLIÈRE

Assistant Professor in Philosophy of AI · Macquarie University

I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations.

And then there are the harms from malicious use, which might result from bad actors using these systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium-term future, because we're not quite there yet, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.