What is generative AI? How do you create safe and capable models? Is AI overhyped? Join mathematician and broadcaster Professor Hannah Fry as she answers these questions and more in this highly praised, award-winning podcast from Google DeepMind. In this series, Hannah goes behind the scenes of the world-leading research lab to uncover the extraordinary ways AI is transforming our world. No hype. No spin. Just compelling discussions and grand scientific ambition.
In our final episode for the year, we explore Project Astra, a research prototype exploring future capabilities of a universal AI assistant that can understand the world around you. Host Hannah Fry is joined by Greg Wayne, Director in Research at Google DeepMind. They discuss the inspiration behind the research prototype, its current strengths and limitations, as well as potential future use cases. Hannah even gets the chance to put Project Astra's multilingual skills to the test.

Further reading/listening:
Gemini 2.0
Project Astra
Decoding Google Gemini with Jeff Dean
Gaming, Goats & General Intelligence with Frederic Besse

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
20/12/2024 • 46:07
In this episode, Hannah is joined by Oriol Vinyals, VP of Research at Google DeepMind and Gemini co-lead. They discuss the evolution of agents from single-task models to more general-purpose models capable of broader applications, like Gemini. Vinyals guides Hannah through the two-step process behind multimodal models: pre-training (imitation learning) and post-training (reinforcement learning). They discuss the complexities of scaling and the importance of innovation in architecture and training processes. They close on a quick whirlwind tour of some of the new agentic capabilities recently released by Google DeepMind.

Note: To see all of the full-length demos, including unedited versions, and other videos related to Gemini 2.0, head to YouTube.

Further reading/watching:
Gemini 2.0
Decoding Google Gemini with Jeff Dean
Gaming, Goats & General Intelligence with Frederic Besse

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind

Subscribe to our YouTube channel
Find us on X
Follow us on Instagram
Add us on LinkedIn
12/12/2024 • 49:25
There is broad consensus across the tech industry, governments and society that as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are current frameworks presented by government leaders headed in the right direction?

Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasises the importance of a nuanced approach to regulation, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction.

Further reading/watching:
AI Principles: https://ai.google/responsibility/principles/
Frontier Model Forum: https://blog.google/outreach-initiatives/public-policy/google-microsoft-openai-anthropic-frontier-model-forum/
Ethics of AI assistants with Iason Gabriel: https://youtu.be/aaZc-as-soA?si=0ThbYY30FlO31kKQ

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
05/12/2024 • 53:10
NotebookLM is a research assistant powered by Gemini that draws on expertise from storytelling to present information in an engaging way. It allows users to upload their own documents and generate insights, explanations, and—more recently—podcasts. This feature, also known as Audio Overviews, has captured the imagination of millions of people worldwide, who have created thousands of engaging podcasts ranging from personal narratives to educational explainers, using source materials like CVs, personal journals, sales decks, and more.

Join Raiza Martin and Steven Johnson from Google Labs, Google’s testing ground for products, as they guide host Hannah Fry through the technical advancements that have made NotebookLM possible. In this episode they explore what it means to be interesting, the challenges of generating natural-sounding speech, and exciting new modalities on the horizon.

Further reading:
Try NotebookLM here
Read about the speech generation technology behind Audio Overviews: https://deepmind.google/discover/blog/pushing-the-frontiers-of-audio-generation/

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Alex Baro Cayetano, Daniel Lazard
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
26/11/2024 • 44:18
Join Professor Hannah Fry at the AI for Science Forum for a fascinating conversation with Google DeepMind CEO Demis Hassabis. They explore how AI is revolutionizing scientific discovery, delving into topics like the nuclear pore complex, plastic-eating enzymes, quantum computing, and the surprising power of Turing machines. The episode also features a special 'ask me anything' session with Nobel Laureates Sir Paul Nurse, Jennifer Doudna, and John Jumper, who answer audience questions about the future of AI in science.

Watch the episode here, and catch up on all of the sessions from the AI for Science Forum here.
21/11/2024 • 54:23
Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.

In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.

Timecodes:
00:00 Intro
01:13 Definition of AI assistants
04:05 A utopic view
06:25 Iason’s background
07:45 The Ethics of Advanced AI Assistants paper
13:06 Anthropomorphism
14:07 Turing perspective
15:25 Anthropomorphism continued
20:02 The value alignment question
24:54 Deception
27:07 Deployed at scale
28:32 Agentic inequality
31:02 Unfair outcomes
34:10 Coordinated systems
37:10 A new paradigm
38:23 Tetradic value alignment
41:10 The future
42:41 Reflections from Hannah

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Production support: Mo Dawoud
Commissioned by Google DeepMind
11/11/2024 • 43:58
How human should an AI tutor be? What does ‘good’ teaching look like? Will AI lead in the classroom, or take a back seat to human instruction? Will everyone have their own personalized AI tutor? Join research lead Irina Jurenka and Professor Hannah Fry as they explore the complicated yet exciting world of AI in education.

Further reading:
Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Production support: Mo Dawoud
Commissioned by Google DeepMind
23/10/2024 • 39:56
In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Research at Google DeepMind, to discuss AI’s impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, materials science, weather forecasting, and mathematics to better understand how AI can enhance our scientific understanding of the world.

Further reading:
Millions of new materials discovered with deep learning
GraphCast: AI model for faster and more accurate global weather forecasting
AlphaFold: A breakthrough unfolds (S2, E1)
AlphaGeometry: An Olympiad-level AI system for geometry
AI achieves silver-medal standard solving International Mathematical Olympiad problems

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Production support: Mo Dawoud
Commissioned by Google DeepMind
09/10/2024 • 47:04
Games are a very good training ground for agents. Think about it: perfectly packaged, neatly constrained environments where agents can run wild, work out the rules for themselves, and learn how to handle autonomy. In this episode, Research Engineering Team Lead Frederic Besse joins Hannah to discuss important research like SIMA (Scalable Instructable Multiworld Agent) and what we can expect from future agents that can understand and safely carry out a wide range of tasks, online and in the real world.

Further reading:
SIMA
RTX & RT2
Interactive Agents

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
25/09/2024 • 40:22
Professor Hannah Fry is joined by Jeff Dean, one of the most legendary figures in computer science and chief scientist of Google DeepMind and Google Research. Jeff was instrumental to the field in the late 1990s, writing the code that transformed Google from a small startup into the multinational company it is today. Hannah and Jeff discuss it all, from the early days of Google and neural networks to the long-term potential of multimodal models like Gemini.

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
11/09/2024 • 53:02
Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
28/08/2024 • 38:11
Professor Hannah Fry is joined by Google DeepMind's senior research director Douglas Eck to explore AI's capacity for true creativity. They delve into the complexities of defining creativity, the challenges of AI-generated content and attribution, and whether AI can help us to connect with each other in new and meaningful ways.

Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.

Further reading:
Veo
Imagen
SynthID
An update on web publisher controls (Google-Extended)

Social channels to follow for new content:
Instagram
X
LinkedIn

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Series Editor: Rami Tzabar, TellTale Studios
Commissioner and Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Darren Carikas
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
14/08/2024 • 40:36
It has been a few years since Google DeepMind CEO and co-founder, Demis Hassabis, and Professor Hannah Fry caught up. In that time, the world has caught on to artificial intelligence—in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as ‘unreasonably effective’, and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models.

Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what he hopes for as we move closer towards artificial general intelligence.

Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.

Further reading:
Gemini
Project Astra
Google I/O 2024
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
LaMDA: our breakthrough conversation technology

Social channels to follow for new content:
Instagram
X
LinkedIn

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Series Editor: Rami Tzabar, TellTale Studios
Commissioner and Producer: Emma Yousif
Production support: Mo Dawoud
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Darren Carikas
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
14/08/2024 • 49:42
Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewee: DeepMind co-founder and CEO, Demis Hassabis

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
DeepMind, The Podcast: https://deepmind.com/blog/article/welcome-to-the-deepmind-podcast
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw
Riemann hypothesis, Wikipedia: https://en.wikipedia.org/wiki/Riemann_hypothesis
Using AI to accelerate scientific discovery by Demis Hassabis, Kendrew Lecture 2021: https://www.youtube.com/watch?v=sm-VkgVX-2o
Protein Folding & the Next Technological Revolution by Demis Hassabis, Bloomberg: https://www.youtube.com/watch?v=vhd4ENh5ON4
The Algorithm, MIT Technology Review: https://forms.technologyreview.com/newsletters/ai-the-algorithm/
Machine learning resources, The Royal Society: https://royalsociety.org/topics-policy/education-skills/teacher-resources-and-opportunities/resources-for-teachers/resources-machine-learning/
How to get empowered, not overpowered, by AI, TED: https://www.youtube.com/watch?v=2LRwvU6gEbA
15/03/2022 • 30:28
AI needs to benefit everyone, not just those who build it. But fulfilling this promise requires careful thought before new technologies are built and released into the world. In this episode, Hannah delves into some of the most pressing and difficult ethical and social questions surrounding AI today. She explores complex issues like racial and gender bias and the misuse of AI technologies, and hears why diversity and representation are vital for building technology that works for all.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind’s Sasha Brown, William Isaac, Shakir Mohamed, Kevin McKee & Obum Ekeke

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias, The Verge: https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias
Tuskegee Syphilis Study, Wikipedia: https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study
Ethics & Society, DeepMind: https://deepmind.com/about/ethics-and-society
Row over AI that 'identifies gay faces', BBC: https://www.bbc.co.uk/news/technology-41188560
The Trevor Project: https://www.thetrevorproject.org/
AI takes root, helping farmers identify diseased plants, Google: https://www.blog.google/technology/ai/ai-takes-root-helping-farmers-identity-diseased-plants/
How Can You Use Technology to Support a Culture of Inclusion and Diversity?, myHRfuture: https://www.myhrfuture.com/blog/2019/7/16/how-can-you-use-technology-to-support-a-culture-of-inclusion-and-diversity
Scholarships at DeepMind: https://www.deepmind.com/scholarships
AI, Ain’t I a Woman? Joy Buolamwini, YouTube: https://www.youtube.com/watch?v=QxuyfWoVV98
How to be Human in the Age of the Machine, Hannah Fry: https://royalsociety.org/grants-schemes-awards/book-prizes/science-book-prize/2018/hello-world/
01/03/2022 • 33:23
AI doesn’t just exist in the lab; it’s already solving a range of problems in the real world. In this episode, Hannah encounters a realistic recreation of her voice by WaveNet, the voice-synthesising system that powers the Google Assistant and helps people with speech difficulties and illnesses regain their voices. Hannah also discovers how ‘deepfake’ technology can be used to improve weather forecasting, and how DeepMind researchers are collaborating with Liverpool Football Club, aiming to take sports to the next level.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind’s Demis Hassabis, Raia Hadsell, Karl Tuyls, Zach Gleicher & Jackson Broshear; Niall Robinson of the UK Met Office

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!
Further reading:
A generative model for raw audio, DeepMind: https://deepmind.com/blog/article/wavenet-generative-model-raw-audio
WaveNet case study, DeepMind: https://deepmind.com/research/case-studies/wavenet
Using WaveNet technology to reunite speech-impaired users with their original voices, DeepMind: https://deepmind.com/blog/article/Using-WaveNet-technology-to-reunite-speech-impaired-users-with-their-original-voices
Project Euphonia, Google Research: https://sites.research.google/euphonia/about/
Nowcasting the next hour of rain, DeepMind: https://deepmind.com/blog/article/nowcasting
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
Advancing sports analytics through AI, DeepMind: https://deepmind.com/blog/article/advancing-sports-analytics-through-ai
Met Office: https://www.metoffice.gov.uk/
The village ‘washed on to the map’, BBC: https://www.bbc.co.uk/news/uk-england-cornwall-28523053
Michael Fish got the storm of 1987 wrong, Sky News: https://news.sky.com/story/michael-fish-got-the-storm-of-1987-wrong-but-modern-supercomputers-may-have-missed-it-too-11076659
22/02/2022 • 35:31
Step inside DeepMind's laboratories and you'll find researchers studying DNA to understand the mysteries of life, seeking new ways to use nuclear energy, or putting AI to the test in mind-bending areas of maths. In this episode, Hannah meets Pushmeet Kohli, the head of science at DeepMind, to understand how AI is accelerating scientific progress. Listeners also join Hannah on a [virtual] safari in the Serengeti in East Africa to find out how researchers are using AI to conserve wildlife in one of the world’s most spectacular ecosystems.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind’s Demis Hassabis, Pushmeet Kohli & Sarah Jane Dunn; Meredith Palmer of Princeton University

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!
Further reading:
Using AI for scientific discovery, DeepMind: https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw
The AI revolution in scientific research, The Royal Society: https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf
DOE Explains...Tokamaks, Office of Science: https://www.energy.gov/science/doe-explainstokamaks
How AI Accidentally Learned Ecology by Playing StarCraft, Discover: https://www.discovermagazine.com/technology/how-ai-accidentally-learned-ecology-by-playing-starcraft
Google AI can identify wildlife from trap-camera footage, VentureBeat: https://venturebeat.com/2019/12/17/googles-ai-can-identify-wildlife-from-trap-camera-footage-with-up-to-98-6-accuracy/
Snapshot Serengeti, Zooniverse: https://www.zooniverse.org/projects/zooniverse/snapshot-serengeti
The Human Genome Project, National Human Genome Research Institute: https://www.genome.gov/human-genome-project
Exploring the beauty of pure mathematics in novel ways, DeepMind: https://deepmind.com/blog/article/exploring-the-beauty-of-pure-mathematics-in-novel-ways
Predicting gene expression with AI, DeepMind: https://deepmind.com/blog/article/enformer
Using machine learning to accelerate ecological research, DeepMind: https://deepmind.com/blog/article/using-machine-learning-to-accelerate-ecological-research
Accelerating fusion science through learned plasma control, DeepMind: https://deepmind.com/blog/article/Accelerating-fusion-science-through-learned-plasma-control
Simulating matter on the quantum scale with AI, DeepMind: https://deepmind.com/blog/article/Simulating-matter-on-the-quantum-scale-with-AI
How AI is helping the natural sciences, Nature: https://www.nature.com/articles/d41586-021-02762-6
Inside DeepMind's epic mission to solve science's trickiest problem, WIRED: https://www.wired.co.uk/article/deepmind-protein-folding
How Artificial Intelligence Is Changing Science, Quanta: https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/
16/02/2022 • 33:52
Hannah meets DeepMind co-founder and chief scientist Shane Legg, the man who coined the phrase ‘artificial general intelligence’, and explores how it might be built. Why does Shane think AGI is possible? When will it be realised? And what could it look like? Hannah also explores a simple theory of using trial and error to reach AGI, and takes a deep dive into MuZero, an AI system which mastered complex board games from chess to Go and is now generalising to solve a range of important tasks in the real world.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind’s Shane Legg, Doina Precup, Dave Silver & Jackson Broshear

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
Real-world challenges for AGI, DeepMind: https://deepmind.com/blog/article/real-world-challenges-for-agi
An executive primer on artificial general intelligence, McKinsey: https://www.mckinsey.com/business-functions/operations/our-insights/an-executive-primer-on-artificial-general-intelligence
Mastering Go, chess, shogi and Atari without rules, DeepMind: https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules
What is AGI?, Medium: https://medium.com/intuitionmachine/what-is-agi-99cdb671c88e
A Definition of Machine Intelligence by Shane Legg, arXiv: https://arxiv.org/abs/0712.3329
Reward is enough by David Silver, ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0004370221000862
15/02/2022 • 32:58
Do you need a body to have intelligence? And can one exist without the other? Hannah takes listeners behind the scenes of DeepMind's robotics lab in London, where she meets robots that are trying to independently learn new skills, and explores why physical intelligence is a necessary part of intelligence. Along the way, she finds out how researchers trained their robots at home during lockdown, uncovers why so many robotics demonstrations are faking it, and learns what it takes to train a robotic football team.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind’s Raia Hadsell, Viorica Patraucean, Jan Humplik, Akhil Raju & Doina Precup

Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
Stacking our way to more general robots, DeepMind: https://deepmind.com/blog/article/stacking-our-way-to-more-general-robots
Researchers Propose Physical AI As Key To Lifelike Robots, Forbes: https://www.forbes.com/sites/simonchandler/2020/11/11/researchers-propose-physical-ai-as-key-to-lifelike-robots/
The robots going where no human can, BBC: https://www.bbc.co.uk/news/av/technology-41584738
The Robot Assault On Fukushima, WIRED: https://www.wired.com/story/fukushima-robot-cleanup/
Leaps, Bounds, and Backflips, Boston Dynamics: http://blog.bostondynamics.com/atlas-leaps-bounds-and-backflips
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
08/02/2022 • 33:43
Cooperation is at the heart of our society. Inventing the railway, giving birth to the Renaissance, and creating the Covid-19 vaccine all required people to combine efforts. But cooperation is much more than that: it governs our education systems, healthcare, and food production. In this episode, Hannah meets the researchers working on cooperative AI and hears about their work and influences, ranging from the famous American psychologist (and pigeon trainer) B.F. Skinner to the strategic board game Diplomacy.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind’s Thore Graepel, Kevin McKee, Doina Precup & Laura Weidinger

Credits:
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!

Further reading:
Machines must learn to find common ground, Nature: https://www.nature.com/articles/d41586-021-01170-0
Introduction to Reinforcement Learning, DeepMind: https://www.youtube.com/watch?v=2pWv7GOvuf0
B.F. Skinner, Wikipedia: https://en.wikipedia.org/wiki/B._F._Skinner
The Tragedy of the Commons, Wikipedia: https://en.wikipedia.org/wiki/Tragedy_of_the_commons
Staving Off The Ultimate Tragedy Of The Commons, Forbes: https://www.forbes.com/sites/georgebradt/2021/11/02/staving-off-the-ultimate-tragedy-of-the-commons-by-making-better-complex-decisions-cooperatively-in-glasgow/
Understanding Agent Cooperation, DeepMind: https://deepmind.com/blog/article/understanding-agent-cooperation
The emergence of complex cooperative agents, DeepMind: https://deepmind.com/blog/article/capture-the-flag-science

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
01/02/2022 • 34:32
In December 2019, DeepMind’s AI system, AlphaFold, solved a 50-year-old grand challenge in biology known as the protein-folding problem. A headline in the journal Nature read, “It will change everything”, and the President of the UK's Royal Society called it a “stunning advance [that arrived] decades before many in the field would have predicted”. In this episode, Hannah uncovers the inside story of AlphaFold from the people who made it happen and finds out how it could help transform the future of healthcare and medicine.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind’s Demis Hassabis, John Jumper, Kathryn Tunyasuvunakool and Sasha Brown; Charles Mowbray and Monique Wasuna of the Drugs for Neglected Diseases initiative (DNDi); & John McGeehan of the Centre for Enzyme Innovation at the University of Portsmouth

Credits:
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!
Further reading:
AlphaFold blog, DeepMind: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
AlphaFold case study, DeepMind: https://deepmind.com/research/case-studies/alphafold
It will change everything, Nature: https://www.nature.com/articles/d41586-020-03348-4
AlphaFold Is The Most Important Achievement In AI—Ever, Forbes: https://www.forbes.com/sites/robtoews/2021/10/03/alphafold-is-the-most-important-achievement-in-ai-ever/?sh=359278426e0a
Bacteria found to eat PET plastics, NewScientist: https://www.newscientist.com/article/2080279-bacteria-found-to-eat-pet-plastics-could-help-do-the-recycling/
Protein Structure Prediction Center: https://predictioncenter.org/
An interview with Professor John McGeehan, BBSRC: https://bbsrc.ukri.org/news/features/enzyme-science/an-interview-with-professor-john-mcgeehan/
John McGeehan profile, University of Portsmouth: https://researchportal.port.ac.uk/en/persons/john-mcgeehan
Drugs for Neglected Diseases initiative (DNDi): https://dndi.org/
A doctor’s dream, DNDi: https://www.youtube.com/watch?v=Tk31iucWYdE
The Curious Cases of Rutherford and Fry, BBC: https://www.bbc.co.uk/programmes/b07dx75g/episodes/downloads
Hannah Fry: https://hannahfry.co.uk/

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
25/01/2022 • 39:14
Hannah explores the potential of language models, the questions they raise, and whether teaching a computer about language is enough to create artificial general intelligence (AGI). Beyond helping us communicate ideas, language plays a crucial role in memory, cooperation, and thinking, which is why AI researchers have long aimed to communicate with computers using natural language. Recently, there has been extraordinary progress using large language models (LLMs), which learn how to speak by processing huge amounts of data from the internet. The results can be very convincing, but they pose significant ethical challenges.

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Interviewees: DeepMind’s Geoffrey Irving, Chris Dyer, Angeliki Lazaridou, Lisa Anne Hendricks & Laura Weidinger

Credits:
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Thank you to everyone who made this season possible!
Further reading:
GPT-3 Powers the Next Generation of Apps, OpenAI: https://openai.com/blog/gpt-3-apps/
Joseph Weizenbaum’s original paper on the ELIZA program, Stanford: https://web.stanford.edu/class/linguist238/p36-weizenabaum.pdf
Never Mind the Computer (1983), about the ELIZA program, BBC: https://www.bbc.co.uk/programmes/p023kpf8
How Large Language Models Will Transform Science, Society, and AI, Stanford University: https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai
Challenges in Detoxifying Language Models, DeepMind: https://deepmind.com/research/publications/2021/Challenges-in-Detoxifying-Language-Models
Extending Machine Language Models toward Human-Level Language Understanding, DeepMind: https://deepmind.com/research/publications/2020/Extending-Machine-Language-Models-toward-Human-Level-Language-Understanding
Language modelling at scale, DeepMind: https://deepmind.com/blog/article/language-modelling-at-scale
Artificial general intelligence, Technology Review: https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/
A Definition of Machine Intelligence by Shane Legg, arXiv: https://arxiv.org/abs/0712.3329
Stuart Russell - Living With Artificial Intelligence, BBC: https://www.bbc.co.uk/programmes/m001216k/episodes/player

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
25/01/2022 • 38:11
The chart-topping podcast which uncovers the extraordinary ways artificial intelligence (AI) is transforming our world is back for a second season. Join mathematician and broadcaster Professor Hannah Fry behind the scenes of world-leading AI research lab DeepMind to get the inside story of how AI is being created – and how it can benefit our lives and the society we live in.

Recorded over six months and featuring over 30 original interviews, including DeepMind co-founders Demis Hassabis and Shane Legg, the podcast gives listeners exclusive access to the brilliant people building the technology of the future. Throughout nine original episodes, Hannah discovers how DeepMind is using AI to advance science in critical areas, like solving a 50-year-old grand challenge in biology and advancing nuclear fusion research.

Listeners hear stories of teaching robots to walk at home during lockdown, as well as using AI to forecast weather, help people regain their voices, and enhance game strategies with Liverpool Football Club. Hannah also takes an in-depth look at the challenges and potential of building artificial general intelligence (AGI) and explores what it takes to ensure AI is built to benefit society.

“I hope this series gives people a better understanding of AI and a feeling for just how exhilarating an endeavour it is.” – Demis Hassabis, CEO and Co-Founder of DeepMind

For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.

Credits:
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
10/01/2022 • 03:08
In this special extended episode, Hannah Fry meets Demis Hassabis, the CEO and co-founder of DeepMind. She digs into his former life as a chess player, games designer and neuroscientist, and explores how his love of chess helped him to get start-up funding, what drives him and his vision, and why AI keeps him up at night.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
Wired: Inside DeepMind's epic mission to solve science's trickiest problem
Quanta magazine: How Artificial Intelligence Is Changing Science
Demis Hassabis: A systems neuroscience approach to building AGI. Talk at the 2010 Singularity Summit
Demis Hassabis: The power of self-learning systems. Talk at MIT 2019
Demis Hassabis: Talk on Creativity and AI
Financial Times: The mind in the machine: Demis Hassabis on artificial intelligence (2017)
The Times: Interview with Demis Hassabis
The Economist Babbage podcast: DeepMind Games
Interview with Demis Hassabis from the book Game Changer, which also features an introduction from Demis

Interviewees: DeepMind CEO and co-founder, Demis Hassabis

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
17/09/2019 • 36:58
AI researchers around the world are trying to create a general-purpose learning system that can learn to solve a broad range of problems without being taught how. Koray Kavukcuoglu, DeepMind’s Director of Research, describes the journey to get there, and takes Hannah on a whistle-stop tour of DeepMind’s HQ and its research.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
OpenAI: An overview of neural networks and the progress that has been made in AI
Shane Legg, DeepMind co-founder: Measuring machine intelligence at the 2010 Singularity Summit
Shane Legg and Marcus Hutter: Paper on defining machine intelligence
Demis Hassabis: Talk on the history, frontiers and capabilities of AI
Robert Wiblin: Positively shaping the development of artificial intelligence
Asilomar AI Principles
Richard S. Sutton and Andrew G. Barto: Reinforcement Learning: An Introduction

Interviewees: Koray Kavukcuoglu, Director of Research; Trevor Back, Product Manager for DeepMind’s science research; research scientists Raia Hadsell and Murray Shanahan; and DeepMind CEO and co-founder, Demis Hassabis.

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
10/09/2019 • 26:09
While there is a lot of excitement about AI research, there are also concerns about the way it might be implemented, used and abused. In this episode, Hannah investigates the more human side of the technology, some ethical issues around how it is developed and used, and the efforts to create a future of AI that works for everyone.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
The Partnership on AI
ProPublica: investigation into machine bias in criminal sentencing
Science Museum – free exhibition: Driverless: who is in control (until Oct 2020)
Survival of the best fit: An interactive game that demonstrates some of the ways in which bias can be introduced into AI systems, in this case for hiring
Joy Buolamwini: AI, Ain’t I a Woman: A spoken word piece exploring AI bias, and systems not recognising prominent black women
Hannah Fry: Hello World - How to be Human in the Age of the Machine
DeepMind: Safety and Ethics
Future of Humanity Institute: AI Governance: A Research Agenda

Interviewees: Verity Harding, Co-Lead of DeepMind Ethics and Society; DeepMind’s COO Lila Ibrahim; and research scientists William Isaac and Silvia Chiappa.

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
03/09/2019 • 29:22
The ambition of much of AI research is to create systems that can help to solve problems in the real world. In this episode, Hannah meets the people building systems that could be used to save the sight of thousands, help us solve one of the most fundamental problems in biology, and reduce energy consumption in an effort to combat climate change. But while there is great potential, there are also important obstacles that will need to be tackled for AI to be used effectively, safely and fairly.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
Wired: Inside DeepMind's epic mission to solve science's trickiest problem
DeepMind blogs on the partnership with Moorfields NHS eye hospital and predicting eye disease, and Moorfields’ news announcement on its research with DeepMind
DeepMind blog: AlphaFold: Using AI for scientific discovery
DeepMind blogs on reducing Google’s energy bill for datacentre cooling and how this project has progressed
Research paper: Tackling Climate Change with Machine Learning
Quanta magazine: How Artificial Intelligence Is Changing Science
DeepMind blog: How evolutionary selection can train more capable self-driving cars

Other examples of the application of AI for real-world impact include:
Francis Crick Institute: machine learning models that can help predict heart disease
NASA: AUDREY machine learning system to better guide first responders through fires
University of Southern California: Protection Assistant for Wildlife Security using AI to help wildlife conservation

Interviewees: Pearse Keane, consultant ophthalmologist at Moorfields Eye Hospital; Sandy Nelson, Product Manager for DeepMind’s Science Program; and DeepMind Program Manager Sims Witherspoon.

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
27/08/2019 • 31:22
Forget what sci-fi has told you about superintelligent robots that are uncannily human-like; the reality is more prosaic. Inside DeepMind’s robotics laboratory, Hannah explores what researchers call ‘embodied AI’: robot arms that are learning tasks like picking up plastic bricks, which humans find comparatively easy. She discovers the cutting-edge challenges of bringing AI and robotics together, and of teaching robots from scratch how to perform tasks. She also explores some of the key questions about using AI safely in the real world.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
Blogs on AI safety and further resources from Victoria Krakovna
The Future of Life Institute: The risks and benefits of AI
The Wall Street Journal: Protecting Against AI’s Existential Threat
TED Talks: Max Tegmark - How to get empowered, not overpowered, by AI
Royal Society lecture series sponsored by DeepMind: You & AI
Nick Bostrom: Superintelligence: Paths, Dangers and Strategies (book)
OpenAI: Learning from Human Preferences
DeepMind blog: Learning from human preferences
DeepMind blog: Learning by playing - how robots can tidy up after themselves
DeepMind blog: AI safety

Interviewees: Software engineer Jackie Kay and research scientists Murray Shanahan, Victoria Krakovna, Raia Hadsell and Jan Leike.

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
20/08/2019 • 32:33
Video games have become a favourite tool for AI researchers to test the abilities of their systems. In this episode, Hannah sits down to play StarCraft II, a challenging video game that requires players to control the onscreen action with as many as 800 clicks a minute. She is guided by Oriol Vinyals, an ex-professional StarCraft player and research scientist at DeepMind, who explains how the program AlphaStar learnt to play the game and beat a top professional player. Elsewhere, she explores systems that are learning to cooperate in a digital version of the playground favourite ‘Capture the Flag’.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
The Economist: Why AI researchers like video games
DeepMind blogs: Capture the Flag and AlphaStar
Professional StarCraft II player MaNa gives his impressions of AlphaStar and DeepMind
OpenAI’s work on Dota 2
The New York Times: DeepMind can now beat us at multiplayer games, too
Royal Society: Machine Learning resources
DeepMind: The Inside Story of AlphaStar
Andrej Karpathy: Deep Reinforcement Learning: Pong from Pixels

Interviewees: Research scientists Max Jaderberg and Raia Hadsell; Lead Researchers David Silver and Oriol Vinyals; and Director of Research Koray Kavukcuoglu.

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
20/08/2019 • 26:52
In March 2016, more than 200 million people watched AlphaGo become the first computer program to defeat a professional human player at the game of Go, a milestone in AI research that was considered to be a decade ahead of its time. Since then, the team has continued to develop the system and recently unveiled AlphaZero: a program that has taught itself how to play chess, Go, and shogi. Hannah explores the inside story of both with Lead Researcher David Silver and finds out why games are a useful proving ground for AI researchers. She also meets chess Grandmaster Matthew Sadler and women’s international master Natasha Regan, who have written a book on AlphaZero and its unique gameplay.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
AlphaGo the documentary
The Surrounding Game: Documentary about the ancient game of Go
DeepMind website: AlphaGo
Garry Kasparov: Deep Thinking
AI: More than Human - Exhibition at the Barbican Centre, 2019 and online exhibit
DeepMind blog: AlphaZero: Shedding new light on chess, shogi, and Go
Matthew Sadler and Natasha Regan: Game Changer - a book about chess and AI
WIRED: What the AI behind AlphaGo can teach us about being human

Interviewees: DeepMind CEO Demis Hassabis; Matthew Sadler, chess Grandmaster; Lead Researcher David Silver; Matt Botvinick, Director of Neuroscience Research; and Natasha Regan, women’s international chess master.

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind

Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
20/08/2019 • 25:40