As mentioned in a previous post, I was invited to speak at “AI: The Future of Us” at the British Museum earlier this month. Rather than giving a lecture, I had a “fireside chat” with Stephen Upstone, the CEO and founder of LoopMe, the AI company hosting the event. We had fun and got some good feedback, so we’re looking into doing something similar this autumn — watch this space.
Our discussion was structured around the following questions and topics posed to me:
- My background (what I do, what is Cognitive Science, how did I start working in AI, etc.)
- What is the definition of consciousness, and at what point can we say an AI machine is conscious?
- What are the ethical implications of AI? Will we ever reach a point at which we need to treat AI like a human? And how do we define AI’s responsibility?
- Where do you see AI 30 years from now? How do you think AI will revolutionise our lives? (looking at things like smart homes, healthcare, finance, saving the environment, etc.)
- So in your view, how far away are we from creating a superintelligence that surpasses humans in every aspect, from mental to physical and emotional abilities? (Will we reach a point where the line between human and machine becomes blurred?)
- So is AI not a threat? As Stephen Hawking recently said in the Guardian, “AI will be either the best or worst thing for humanity.” What do you think? Is AI something we don’t need to worry about?
You can listen to our fireside chat here.