
London Futurists

Anticipating and managing exponential impact - hosts David Wood and Calum Chace.

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.

Tracks

Taming the Machine, with Nell Watson
Those who rush to leverage AI's power without adequate preparation face difficult blowback, scandals, and could provoke harsh regulatory measures. However, those who have a balanced, informed view on the risks and benefits of AI, and who, with care and knowledge, avoid either complacent optimism or defeatist pessimism, can harness AI's potential, and tap into an incredible variety of services of an ever-improving quality.

These are some words from the introduction of the new book "Taming the Machine: Ethically Harness the Power of AI", whose author, Nell Watson, joins us in this episode.

Nell's many roles include: Chair of IEEE's Transparency Experts Focus Group, Executive Consultant on philosophical matters for Apple, and President of the European Responsible Artificial Intelligence Office. She also leads several organizations such as EthicsNet.org, which aims to teach machines prosocial behaviours, and CulturalPeace.org, which crafts Geneva Conventions-style rules for cultural conflict.

Selected follow-ups:
Nell Watson's website
Taming the Machine - book website
BodiData (corporation)
Post Office Horizon scandal: Why hundreds were wrongly prosecuted - BBC News
Dutch scandal serves as a warning for Europe over risks of using algorithms - Politico
Robodebt: Illegal Australian welfare hunt drove people to despair - BBC News
What is the infected blood scandal and will victims get compensation? - BBC News
MIRI 2024 Mission and Strategy Update - from the Machine Intelligence Research Institute (MIRI)
British engineering giant Arup revealed as $25 million deepfake scam victim - CNN
Zersetzung psychological warfare technique - Wikipedia

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
45:07 6/20/24
AI Impacts Survey - The key implications, with Katja Grace
Our guest in this episode grew up in an abandoned town in Tasmania, and is now a researcher and blogger in Berkeley, California. After taking a degree in human ecology and science communication, Katja Grace co-founded AI Impacts, a research organisation trying to answer questions about the future of artificial intelligence.

Since 2016, Katja and her colleagues have published a series of surveys about what AI researchers think about progress on AI. The 2023 Expert Survey on Progress in AI was published this January, comprising responses from 2,778 participants. As far as we know, this is the biggest survey of its kind to date.

Among the highlights: the time respondents expect it will take to develop an AI with human-level performance dropped by between one and five decades since the 2022 survey. So ChatGPT has not gone unnoticed.

Selected follow-ups:
AI Impacts
World Spirit Sock Puppet - Katja's blog
Survey of 2,778 AI authors: six parts in pictures - from AI Impacts
OpenAI researcher who resigned over safety concerns joins Anthropic - article in The Verge about Jan Leike
MIRI 2024 Mission and Strategy Update - from the Machine Intelligence Research Institute (MIRI)
Future of Humanity Institute 2005-2024: Final Report - by Anders Sandberg (PDF)
Centre for the Governance of AI
Reasons for Persons - article by Katja about Derek Parfit and theories of personal identity
OpenAI Says It Has Started Training GPT-4 Successor - article in Forbes

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
33:21 6/13/24
Cryonics, cryocrastination, and the future: changing minds, with Max More
Our guest in this episode is Max More. Max is a philosopher, a futurist, and a transhumanist - a term which he coined in 1990, the same year that he legally changed his name from O'Connor to More.

One of the tenets of transhumanism is that technology will allow us to prevent and reverse the aging process, and in the meantime we can preserve our brains with a process known as cryonics. In 1995 Max was awarded a PhD for a thesis on the nature of death, and from 2010 to 2020, he was CEO of Alcor, the world's biggest cryonics organisation.

Max is firmly optimistic about our future prospects, and wary of any attempts to impede or regulate the development of technologies which can enhance or augment us.

Selected follow-ups:
Extropic Thoughts - Max More's writing on Substack
The Biostasis Standard - Max's writings on "the latest in the field of biostasis and cryonics"
Neophile - Wikipedia
The Time of the Ice Box - episode of 1970 BBC children's TV series Timeslip
Cryostasis Revival: The Recovery of Cryonics Patients through Nanomedicine - 2022 book by Robert Freitas
Researchers perform first successful transplant of functional cryopreserved rat kidney - news from the University of Minnesota
Large Mammal BPF Prize Winning Announcement - news from the Brain Preservation Foundation
The European Biostasis Foundation
Alcor Life Extension Foundation

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
48:52 6/5/24
Stem cells, lab-grown meat, and potential new medical treatments, with Mark Kotter
Our guest in this episode is Dr. Mark Kotter. Mark is a neurosurgeon, stem cell biologist, and founder or co-founder of three biotech start-up companies that have collectively raised hundreds of millions of pounds: bit.bio, clock.bio, and Meatable. In addition, Mark still conducts neurosurgeries on patients weekly at the University of Cambridge.

We talk to Mark about all his companies, but we start by discussing Meatable, one of the leading companies in the cultured meat sector. This is an area of technology which should have a far greater impact than most people are aware of, and it's an area we haven't covered before in the podcast.

Selected follow-ups:
Dr Mark Kotter at the University of Cambridge
Meatable
bit.bio
clock.bio
After 25 years of hype, embryonic stem cells are still waiting for their moment - article in MIT Technology Review
The Nobel Prize in Physiology or Medicine 2012
Moo's Law: An Investor's Guide to the New Agrarian Revolution - book by Jim Mellon
What is the climate impact of eating meat and dairy?
Guidance for businesses on cell-cultivated products and the authorisation process
Wild mammals make up only a few percent of the world's mammals - Our World In Data
BlueRock Therapeutics
Therapies under development at bit.bio
Stem Cell Gene Therapy Shows Promise in ALS Trial - from Cedars-Sinai Medical Center

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
34:33 5/27/24
The economic case for a second longevity revolution, with Andrew Scott
The public discussion in a number of countries around the world expresses worries about what is called an aging society. These countries anticipate a future with fewer younger people who are active members of the economy, and a growing number of older people who need to be supported by the people still in the workforce. It's an inversion of the usual demographic pyramid, with fewer at the bottom, and more at the top.

However, our guest in this episode recommends a different framing of the future - not as an aging society, but as a longevity society, or even an evergreen society. He is Andrew Scott, Professor of Economics at the London Business School. His other roles include being a Research Fellow at the Centre for Economic Policy Research, and a consulting scholar at Stanford University's Center on Longevity.

Andrew's latest book is entitled "The Longevity Imperative: Building a Better Society for Healthier, Longer Lives". Commendations for the book include this from the political economist Daron Acemoglu, "A must-read book with an important message and many lessons", and this from the historian Niall Ferguson, "Persuasive, uplifting and wise".

Selected follow-ups:
Personal website of Andrew Scott
Andrew Scott at the London Business School
The book The Longevity Imperative: How to Build a Healthier and More Productive Society to Support Our Longer Lives
Longevity, the 56 trillion dollar opportunity, with Andrew Scott - episode 40 in this series
Population Pyramids of the World from 1950 to 2100
Thomas Robert Malthus - Wikipedia
DALYs (Disability-adjusted life years) and QALYs (Quality-adjusted life years) - Wikipedia
VSL (Value of Statistical Life) - Wikipedia
The economic value of targeting aging - paper in Nature Aging, co-authored by Andrew Scott, Martin Ellison, and David Sinclair
A great-grandfather from Merseyside has become the world's oldest living man - BBC, 5th April 2024

Related quotations:
Aging is "...revealed and made manifest only by the most unnatural experiment of prolonging an animal's life by sheltering it from the hazards of its ordinary existence" - Peter Medawar, 1951
"To die of old age is a death rare, extraordinary, and singular, and, therefore, so much less natural than the others; 'tis the last and extremest sort of dying: and the more remote, the less to be hoped for" - Michel de Montaigne, 1580

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
41:04 5/16/24
Can AI be conscious? with Nicholas Humphrey
In this episode we return to the subject of whether AIs will become conscious, or, to use a word from the title of the latest book from our guest today, whether AIs will become sentient.

Our guest is Nicholas Humphrey, Emeritus Professor of Psychology at the London School of Economics, and Bye Fellow at Darwin College, Cambridge. His latest book is "Sentience: The Invention of Consciousness", and it explores the emergence and role of consciousness from a variety of perspectives.

The book draws together insights from the more than fifty years Nick has been studying the evolution of intelligence and consciousness. He was the first person to demonstrate the existence of "blindsight" after brain damage in monkeys, studied mountain gorillas with Dian Fossey in Rwanda, originated the theory of the "social function of intellect", and has investigated the evolutionary background of religion, art, healing, death-awareness, and suicide. His awards include the Martin Luther King Memorial Prize, the Pufendorf Medal, and the International Mind and Brain Prize.

The conversation starts with some reflections on the differences between the views of our guest and his long-time philosophical friend Daniel Dennett, who had died shortly before the recording took place.

Selected follow-ups:
The website of Nicholas Humphrey
The book Sentience: The Invention of Consciousness
How did consciousness evolve? - recording of talk at the Royal Institution
The book Consciousness Explained by Daniel Dennett
Penrose triangle (article contains "real impossible triangles")
Keith Frankish (philosopher of mind)
The psychonic theory of consciousness - a theory included in the 1929 edition of Encyclopaedia Britannica
Lawrence (Larry) Weiskrantz - the supervisor of Nicholas Humphrey
Blindsight patient 'TN'
The Tin Men by Michael Frayn
What's it like to be an AI: Anil Seth on London Futurists Podcast
Joe Simpson (mountaineer)
The New York Declaration on Animal Consciousness
Scientific Declaration on Insect Sentience and Welfare
Rupert Sheldrake
Alternative Natural Philosophy Association (ANPA)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
45:43 5/7/24
Progress with ending aging, with Aubrey de Grey
Our topic in this episode is progress with ending aging. Our guest is the person who literally wrote the book on that subject, namely "Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime". He is Aubrey de Grey, who describes himself in his Twitter biography as "spearheading the global crusade to defeat aging".

In pursuit of that objective, Aubrey co-founded the Methuselah Foundation in 2003, the SENS Research Foundation in 2009, and the LEV Foundation, that is the Longevity Escape Velocity Foundation, in 2022, where he serves as President and Chief Science Officer.

Full disclosure: David also has a role on the executive management team of the LEV Foundation, but for this recording he was wearing his hat as co-host of the London Futurists Podcast.

The conversation opens with this question: "When people are asked about ending aging, they often say the idea sounds nice, but they see no evidence for any actual progress toward ending aging in humans. They say that they've heard talk about that subject for years, or even decades, but wonder when all that talk is going to result in people actually living significantly longer. How do you respond?"

Selected follow-ups:
Aubrey de Grey on X (Twitter)
The book Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime
The Longevity Escape Velocity (LEV) Foundation
The SENS paradigm for ending aging, contrasted with the "Hallmarks of Aging" - a 2023 article in Rejuvenation Research
Progress reports from the current RMR project
The plan for RMR 2
The RAID (Rodent Aging Interventions Database) analysis that guided the design of RMR 1 and 2
Longevity Summit Dublin (LSD): 13-16 June 2024
Unblocking the Brain's Drains to Fight Alzheimer's - Doug Ethell of Leucadia Therapeutics at LSD 2023 (explains the possible role of the cribriform plate)
Targeting Telomeres to Clear Cancer - Vlad Vitoc of MAIA Biotechnology at LSD 2023
How to Run a Lifespan Study of 1,000 Mice - Danique Wortel of Ichor Life Sciences at LSD 2023
XPrize Healthspan
The Dublin Longevity Declaration ("DLD")

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
40:52 4/21/24
What’s it like to be an AI, with Anil Seth
As artificial intelligence models become increasingly powerful, they both raise - and might help to answer - some very important questions about one of the most intriguing, fascinating aspects of our lives, namely consciousness.

It is possible that in the coming years or decades, we will create conscious machines. If we do so without realising it, we might end up enslaving them, torturing them, and killing them over and over again. This is known as mind crime, and we must avoid it.

It is also possible that very powerful AI systems will enable us to understand what our consciousness is, how it arises, and even how to manage it - if we want to do that.

Our guest today is the ideal guide to help us explore the knotty issue of consciousness. Anil Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex. He is amongst the most cited scholars on the topics of neuroscience and cognitive science globally, and a regular contributor to newspapers and TV programmes. His most recent book was published in 2021, and is called "Being You - A New Science of Consciousness".

The first question sets the scene for the conversation that follows: "In your book, you conclude that consciousness may well only occur in living creatures. You say 'it is life, rather than information processing, that breathes the fire into the equations.' What made you conclude that?"

Selected follow-ups:
Anil Seth's website
Books by Anil Seth, including Being You
Consciousness in humans and other things - presentation by Anil Seth at The Royal Society, March 2024
Is consciousness more like chess or the weather? - an interview with Anil Seth
Autopoiesis - Wikipedia article about the concept introduced by Humberto Maturana and Francisco Varela
Akinetic mutism - Wikipedia
Cerebral organoid (brain organoid) - Wikipedia
AI Scientists: Safe and Useful AI? - by Yoshua Bengio, on AIs as oracles
Ex Machina (2014 film, written and directed by Alex Garland)
The Conscious Electromagnetic Information (Cemi) Field Theory by Johnjoe McFadden
The Electromagnetic Field Theory of Consciousness by Susan Pockett

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
44:20 4/13/24
Regulating Big Tech, with Adam Kovacevich
Our guest in this episode is Adam Kovacevich. Adam is the Founder and CEO of the Chamber of Progress, which describes itself as a center-left tech industry policy coalition that works to ensure that all citizens benefit from technological leaps, and that the tech industry operates responsibly and fairly.

Adam has had a front-row seat for more than 20 years in the tech industry's political maturation, and he advises companies on navigating the challenges of political regulation. For example, Adam spent 12 years at Google, where he led a 15-person policy strategy and external affairs team. In that role, he drove the company's U.S. public policy campaigns on topics such as privacy, security, antitrust, intellectual property, and taxation.

We had two reasons to want to talk with Adam. First, to understand the kerfuffle that has arisen from the lawsuit launched against Apple by the U.S. Department of Justice and sixteen state attorneys general. And second, to look ahead to possible future interactions between tech industry regulators and the industry itself, especially as concerns about artificial intelligence rise in the public mind.

Selected follow-ups:
Adam Kovacevich's website
The Chamber of Progress
Gartner Hype Cycle
"Justice Department Sues Apple for Monopolizing Smartphone Markets"
The Age of Surveillance Capitalism by Shoshana Zuboff
Epic Games v. Apple (Wikipedia)
"AirTags Are the Best Thing to Happen to Tile" (Wired)
Adobe Firefly
The EU AI Act

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
38:01 4/4/24
The case for brain preservation, with Kenneth Hayworth
In this episode, we delve into the fascinating topic of mind uploading. We suspect this idea is about to explode into public consciousness, because Nick Bostrom has a new book out shortly called "Deep Utopia", which addresses what happens if superintelligence arrives and everything goes well. It was Bostrom's previous book, "Superintelligence", that ignited the great robot freak-out of 2015.

Our guest is Dr Kenneth Hayworth, a Senior Scientist at the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia. Janelia is probably America's leading research institution in the field of connectomics - the precise mapping of the neurons in the human brain.

Kenneth is a co-inventor of a process for imaging neural circuits at the nanometre scale, and he has designed and built several automated machines to do it. He is currently researching ways to extend Focused Ion Beam Scanning Electron Microscopy imaging of brain tissue to encompass much larger volumes than are currently possible.

Along with John Smart, Kenneth co-founded the Brain Preservation Foundation in 2010, a non-profit organization with the goal of promoting research in the field of whole brain preservation.

During the conversation, Kenneth made a strong case for putting more focus on preserving human brains via a process known as aldehyde fixation, as a way of enabling people to be uploaded in due course into new bodies. He also issued a call for action by members of the global cryonics community.

Selected follow-ups:
Kenneth Hayworth
The Brain Preservation Foundation
An essay by Kenneth Hayworth: Killed by Bad Philosophy
The short story Psychological Counseling for First-time Teletransport Users (PDF)
21st Century Medicine
Janelia Research Campus

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
42:11 3/29/24
AGI alignment: the case for hope, with Lou de K
Our guest in this episode is Lou de K, Program Director at the Foresight Institute.

David recently saw Lou give a marvellous talk at the TransVision conference in Utrecht in the Netherlands, on the subject of "AGI Alignment: Challenges and Hope". Lou kindly agreed to join us to review some of the ideas in that talk and to explore their consequences.

Selected follow-ups:
Personal website of Lou de K (Lou de Kerhuelvez)
Foresight.org
TransVision Utrecht 2024
The AI Revolution: The Road to Superintelligence by Tim Urban on Wait But Why
AI Alignment: A Comprehensive Survey - 98-page PDF with authors from Peking University and other universities
Synthetic Sentience: Can Artificial Intelligence become conscious? - talk by Joscha Bach at CCC, December 2023
Pope Francis "warns of risks of AI for peace" (Vatican News)
Claude's Constitution by Anthropic
Roman Yampolskiy discusses multi-multi alignment (Future of Life podcast)
Shoggoth with Smiley Face on Know Your Meme
Shoggoth on AISafetyMemes on X/Twitter
Orthogonality Thesis on LessWrong
Quotes by the poet Lucille Clifton
Decentralized science (DeSci) on Ethereum.org
Listing of Foresight Institute fellows
The Network State by Balaji Srinivasan
The Network State vs. Coordi-Nations featuring the ideas of Primavera De Filippi
DeSci London event, Imperial College Business School, 23-24 March

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
34:42 3/22/24
The Political Singularity and a Worthy Successor, with Daniel Faggella
Calum and David recently attended the BGI24 event in Panama City, that is, the Beneficial General Intelligence summit and unconference. One of the speakers we particularly enjoyed listening to was Daniel Faggella, the Founder and Head of Research of Emerj.

Something that featured in his talk was a 3-by-3 matrix, which he calls the Intelligence Trajectory Political Matrix, or ITPM for short. As we'll be discussing in this episode, one of the dimensions of this matrix is the kind of end-goal future that people desire, as intelligent systems become ever more powerful. The other dimension is the kind of methods people want to use to bring about that desired future.

So, if anyone thinks there are only two options in play regarding the future of AI - for example "accelerationists" versus "doomers", to use two names that are often thrown around these days - they're actually missing a much wider set of options. And frankly, given the challenges posed by the fast development of AI systems that seem to be increasingly beyond our understanding and beyond our control, the more options we can consider, the better.

The topics that featured in this conversation included:
"The Political Singularity" - when the general public realize that one political question has become more important than all the others, namely should humanity be creating an AI with godlike powers, and if so, under what conditions
Criteria to judge whether a forthcoming superintelligent AI is a "worthy successor" to humanity

Selected follow-ups:
The website of Dan Faggella
The BGI24 conference, lead organiser Ben Goertzel of SingularityNET
The Intelligence Trajectory Political Matrix
The Political Singularity
A Worthy Successor - the purpose of AGI
Roko Mijic on Twitter/X
The novel Diaspora by Greg Egan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
42:27 3/15/24
The Longevity Singularity, with Daniel Ives
In the wide and complex subject of biological aging, one particular kind of biological aging has been receiving a great deal of attention in recent years. That's the field of epigenetic aging, where parts of the packaging or covering, as we might call it, of the DNA in all of our cells alters over time, changing which genes are turned on and turned off, with increasingly damaging consequences.

What's made this field take off is the discovery that this epigenetic aging can be reversed, via an increasing number of techniques. Moreover, there is some evidence that this reversal gives a new lease of life to the organism.

To discuss this topic and the opportunities arising, our guest in this episode is Daniel Ives, the CEO of Shift Bioscience. As you'll hear, Shift Bioscience is a company that is carrying out some very promising research into this field of epigenetic aging. Daniel has a PhD from the University of Cambridge, and co-founded Shift Bioscience in 2017.

The conversation highlighted a way of using AI transformer models and a graph neural network to dramatically speed up the exploration of which proteins can play the best role in reversing epigenetic aging. It also considered which other types of aging will likely need different sorts of treatments, beyond these proteins. Finally, conversation turned to a potential fast transformation of public attitudes toward the possibility and desirability of comprehensively treating aging - a transformation called "all hell breaks loose" by Daniel, and "the Longevity Singularity" by Calum.

Selected follow-ups:
Shift Bioscience
Aubrey de Grey's TED talk "A roadmap to end aging"
Epigenetic clocks (Wikipedia)
Shinya Yamanaka (Wikipedia)
scGPT - bioRxiv preprint by Bo Wang and colleagues

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
46:55 3/7/24
Where are all the Dyson spheres? with Paul Sutter
In this episode, we look further into the future than usual. We explore what humanity might get up to in a thousand years or more: surrounding whole stars with energy-harvesting panels, sending easily detectable messages across space which will last until the stars die out.

Our guide to these fascinating thought experiments is Paul M. Sutter, a NASA advisor and theoretical cosmologist at the Institute for Advanced Computational Science at Stony Brook University in New York, and a visiting professor at Barnard College, Columbia University, also in New York. He is an award-winning science communicator and TV host.

The conversation reviews arguments for why intelligent life forms might want to capture more energy than strikes a single planet, as well as some practical difficulties that would complicate such a task. It also considers how we might recognise evidence of megastructures created by alien civilisations, and finishes with a wider exploration about the role of science and science communication in human society.

Selected follow-ups:
Paul M. Sutter - website
"Would building a Dyson sphere be worth it? We ran the numbers" - Ars Technica
Forthcoming book: Rescuing Science: Restoring Trust in an Age of Doubt
"The Kardashev scale: Classifying alien civilizations" - Space.com
"Modified Newtonian dynamics" as a possible alternative to the theory of dark matter
The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory - 1999 book by Brian Greene
The Demon-Haunted World: Science as a Candle in the Dark - 1995 book by Carl Sagan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
39:14 2/21/24
Provably safe AGI, with Steve Omohundro
AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive?

Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity.

Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms.

Among many other roles which are too numerous to mention here, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.

Selected follow-ups:
Steve Omohundro: Innovative ideas for a better world
Metaculus forecast for the date of weak AGI
"The Basic AI Drives" (PDF, 2008)
TED Talk by Max Tegmark: How to Keep AI Under Control
Apple Secure Enclave
Meta Research: Teaching AI advanced mathematical reasoning
DeepMind AlphaGeometry
Microsoft Lean theorem prover
Terence Tao (Wikipedia)
NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
The team at MIRI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
42:59 2/13/24
Robots and the people who love them, with Eve Herold
In this episode, our subject is the rise of the robots - not the military kind of robots, or the automated manufacturing kind that increasingly fill factories, but social robots. These are robots that could take roles such as nannies, friends, therapists, caregivers, and lovers. They are the subject of the important new book Robots and the People Who Love Them, written by our guest today, Eve Herold.

Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She has written extensively about issues at the crossroads of science and society, including stem cell research and regenerative medicine, aging and longevity, medical implants, transhumanism, robotics and AI, and bioethical issues in leading-edge medicine - all of which are issues that Calum and David like to feature on this show.

Eve currently serves as Director of Policy Research and Education for the Healthspan Action Coalition. Her previous books include Stem Cell Wars and Beyond Human. She is the recipient of the 2019 Arlene Eisenberg Award from the American Society of Journalists and Authors.

Selected follow-ups:
Eve Herold: What lies ahead for the human race
Eve Herold on Macmillan Publishers
The book Robots and the People Who Love Them
Healthspan Action Coalition
Hanson Robotics
Sophia, Desi, and Grace
The AIBO robotic puppy

Some of the films discussed:
A.I. (2001)
Ex Machina (2014)
I, Robot (2004)
I'm Your Man (2021)
Robot & Frank (2012)
WALL-E (2008)
Metropolis (1927)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
36:42 2/6/24
Education and work - past, present, and future, with Riaz Shah
Our guest in this episode is Riaz Shah. Until recently, Riaz was a partner at EY, where he spent 27 years specialising in technology and innovation. Towards the end of his time at EY he became a Professor for Innovation & Leadership at Hult International Business School, where he leads sessions with senior executives of global companies.

In 2016, Riaz took a one-year sabbatical to open the One Degree Academy, a free school in a disadvantaged area of London. There's an excellent TEDx talk from 2020 about how that happened, and about how to prepare for the very uncertain future of work.

This discussion, which was recorded at the close of 2023, covers the past, present, and future of education, work, politics, nostalgia, and innovation.

Selected follow-ups:
Riaz Shah at EY
The TEDx talk Rise Above the Machines by Riaz Shah
One Degree Mentoring Charity
One Degree Academy
EY Tech MBA by Hult International Business School
Gallup survey: State of the Global Workplace, 2023
BCG report: How People Can Create—and Destroy—Value with Generative AI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
36:41 1/25/24
What is your p(doom)? with Darren McKee
In this episode, our subject is Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. That's a new book on a vitally important subject.

The book's front cover carries this endorsement from Professor Max Tegmark of MIT: "A captivating, balanced and remarkably up-to-date book on the most important issue of our time." There's also high praise from William MacAskill, Professor of Philosophy at the University of Oxford: "The most accessible and engaging introduction to the risks of AI that I've read."

Calum and David had lots of questions ready to put to the book's author, Darren McKee, who joined the recording from Ottawa in Canada.

Topics covered included Darren's estimate of when artificial superintelligence is 50% likely to exist, and his p(doom), that is, the likelihood that superintelligence will prove catastrophic for humanity. There are also Darren's recommendations on the principles and actions needed to reduce that likelihood.

Selected follow-ups:
Darren McKee's website
The book Uncontrollable
Darren's podcast The Reality Check
The Lazarus Heist on BBC Sounds
The Chair's Summary of the AI Safety Summit at Bletchley Park
The Statement on AI Risk by the Center for AI Safety

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
42:17 1/18/24
Climate Change: There’s good news and bad news, with Nick Mabey
Our guest in this episode is Nick Mabey, the co-founder and co-CEO of one of the world's most influential climate change think tanks, E3G, where the name stands for Third Generation Environmentalism. As well as his roles with E3G, Nick is founder and chair of London Climate Action Week, and he holds several independent appointments, including as a London Sustainable Development Commissioner.

Nick has previously worked in the UK Prime Minister's Strategy Unit, the UK Foreign Office, WWF-UK, London Business School, and the UK electricity industry. As an academic he was lead author of "Argument in the Greenhouse", one of the first books examining the economics of climate change.

He was awarded an OBE in the Queen's Jubilee honours list in 2022 for services to climate change and support to the UK COP 26 Presidency.

As the conversation makes clear, there is both good news and bad news regarding responses to climate change.

Selected follow-ups:
Nick Mabey's website
E3G
"Call for UK Government to 'get a grip' on climate change impacts"
The IPCC's 2023 synthesis report
Chatham House commentary on the IPCC report
"Why Climate Change Is a National Security Risk"
The UK's Development, Concepts and Doctrine Centre (DCDC)
Bjørn Lomborg
Matt Ridley
Tim Lenton
Jason Hickel
Mark Carney

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
44:44 1/11/24
Meet the electrome! with Sally Adee
Our subject in this episode is the idea that the body uses electricity in more ways than are presently fully understood. We consider ways in which electricity, applied with care, might at some point in the future help to improve the performance of the brain, to heal wounds, to stimulate the regeneration of limbs or organs, to turn the tide against cancer, and maybe even to reverse aspects of aging.

To guide us through these possibilities, who better than the science and technology journalist Sally Adee? She is the author of the book "We Are Electric: Inside the 200-Year Hunt for Our Body's Bioelectric Code, and What the Future Holds". That book gave David so many insights on his first reading that he went back to it a few months later and read it all the way through again.

Sally was a technology features and news editor at the New Scientist from 2010 to 2017, and her research into bioelectricity was featured in Yuval Noah Harari's book "Homo Deus".

Selected follow-ups:
Sally Adee's website
The book "We Are Electric"
Article: "An ALS patient set a record for communicating via a brain implant: 62 words per minute"
tDCS (Transcranial direct-current stimulation)
The conference "Anticipating 2025" (held in 2014)
Article: "Brain implants help people to recover after severe head injury"
Article on enhancing memory in older people
Bioelectricity cancer researcher Mustafa Djamgoz
Article on Tumour Treating Fields
Article on "Motile Living Biobots"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
36:41 1/5/24
Don't try to make AI safe; instead, make safe AI, with Stuart Russell
We are honoured to have as our guest in this episode Professor Stuart Russell. Stuart is professor of computer science at the University of California, Berkeley, and the traditional way to introduce him is to say that he literally wrote the book on AI. Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig, was first published in 1995, and the fourth edition came out in 2020.

Stuart has been urging us all to take seriously the dramatic implications of advanced AI for longer than perhaps any other prominent AI researcher. He also proposes practical solutions, as in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control.

In 2021 Stuart gave the Reith Lectures, and was awarded an OBE. But the greatest of his many accolades was surely in 2014, when a character with a background remarkably like his was played by Johnny Depp in the movie Transcendence.

The conversation covers a wide range of questions about future scenarios involving AI, and reflects on changes in the public conversation following the FLI's letter calling for a moratorium on more powerful AI systems, and following the global AI Safety Summit held at Bletchley Park in the UK at the beginning of November.

Selected follow-ups:
Stuart Russell's page at Berkeley
Center for Human-Compatible Artificial Intelligence (CHAI)
The 2021 Reith Lectures: Living With Artificial Intelligence
The book Human Compatible: Artificial Intelligence and the Problem of Control

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
49:04 12/27/23
Aligning AI, before it's too late, with Rebecca Gorman
Our guest in this episode is Rebecca Gorman, the co-founder and CEO of Aligned AI, a start-up in Oxford which describes itself rather nicely as working to get AI to do more of the things it should do and fewer of the things it shouldn't.

Rebecca built her first AI system 20 years ago and has been calling for responsible AI development since 2010. With her co-founder Stuart Armstrong, she has co-developed several advanced methods for AI alignment, and she has advised the EU, UN, OECD and the UK Parliament on the governance and regulation of AI.

The conversation highlights the tools faAIr, EquitAI, and ACE, developed by Aligned AI. It also covers the significance of recent performance by Aligned AI software in the CoinRun test environment, which demonstrates the important principle of "overcoming goal misgeneralisation".

Selected follow-ups:
buildaligned.ai
Article: "Using faAIr to measure gender bias in LLMs"
Article: "EquitAI: A gender bias mitigation tool for generative AI"
Article: "ACE for goal generalisation"
"CoinRun: Solving Goal Misgeneralisation" - a publication on arXiv
Aligned AI repositories on GitHub
"Specification gaming examples in AI" - article by Victoria Krakovna
Rebecca Gorman speaking at the Cambridge Union on "This House Believes Artificial Intelligence Is An Existential Threat" (YouTube)

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
34:30 12/9/23
Shazam! with Dhiraj Mukherjee
Our guest in this episode is Dhiraj Mukherjee, best known as the co-founder of Shazam. Calum and David both still remember the sense of amazement they felt when, way back in the dotcom boom, they used Shazam to identify a piece of music from its first couple of bars. It seemed like magic, and was tangible evidence of how fast technology was moving: it was creating services which seemed like science fiction.

Shazam was eventually bought by Apple in 2018 for a reported 400 million dollars. This gave Dhiraj the funds to pursue new interests. He is now a prolific investor and a keynote speaker on the subject of how companies both large and small can be more innovative.

In this conversation, Dhiraj highlights some lessons from his personal entrepreneurial journey, and reflects on ways in which the task of entrepreneurs is changing, in the UK and elsewhere. The conversation covers possible futures in fields such as climate action and the overcoming of unconscious biases.

Selected follow-ups:
https://dhirajmukherjee.com/
https://www.shazam.com/
https://dandelionenergy.com/
https://technation.io/
Entrepreneur First
https://fairbrics.co/
https://neoplants.com/
Al Gore's Generation Investment Management Fund
https://www.mevitae.com/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
32:49 11/27/23
The Politics of Transhumanism, with James Hughes
Our guest in this episode is James Hughes. James is a bioethicist and sociologist who serves as Associate Provost at the University of Massachusetts Boston. He is also the Executive Director of the IEET, that is the Institute for Ethics and Emerging Technologies, which he co-founded back in 2004.

The stated mission of the IEET seems to be more important than ever, in the fast-changing times of the mid-2020s. To quote a short extract from its website:

"The IEET promotes ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies. We believe that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed. We call this a 'technoprogressive' orientation.

"Focusing on emerging technologies that have the potential to positively transform social conditions and the quality of human lives – especially 'human enhancement technologies' – the IEET seeks to cultivate academic, professional, and popular understanding of their implications, both positive and negative, and to encourage responsible public policies for their safe and equitable use."

That mission fits well with what we like to discuss with guests on this show. In particular, this episode asks questions about a conference that has just finished in Boston, co-hosted by the IEET, with the headline title "Emerging Technologies and the Future of Work". The episode also covers the history and politics of transhumanism, as a backdrop to discussion of present and future issues.

Selected follow-ups:
https://ieet.org/
James Hughes on Wikipedia
https://medium.com/institute-for-ethics-and-emerging-technologies
Conference: Emerging Technologies and the Future of Work

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
42:55 11/13/23
How to make AI safe, according to the tech giants, with Rebecca Finlay, CEO of PAI
The Partnership on AI was launched back in September 2016, during an earlier flurry of interest in AI, as a forum for the tech giants to meet leaders from academia, the media, and what used to be called pressure groups and are now called civil society. By 2019 more than 100 such organisations had joined.

The founding tech giants were Amazon, Facebook, Google, DeepMind, Microsoft, and IBM. Apple joined a year later, and Baidu joined in 2018.

Our guest in this episode is Rebecca Finlay, who joined the PAI board in early 2020 and was appointed CEO in October 2021. Rebecca is a Canadian who started her career in banking, and then led marketing and policy development groups in a number of Canadian healthcare and scientific research organisations.

In the run-up to the Bletchley Park Global Summit on AI, the Partnership on AI has launched a set of guidelines to help the companies that are developing advanced AI systems and making them available to you and me. Rebecca will be addressing the delegates at Bletchley, and no doubt hoping that the summit will establish the PAI guidelines as the basis for global self-regulation of the AI industry.

Selected follow-ups:
https://partnershiponai.org/
https://partnershiponai.org/team/#rebecca-finlay-staff
https://partnershiponai.org/modeldeployment/
An open event at Wilton Hall, Bletchley, the afternoon before the Bletchley Park AI Safety Summit starts: https://lu.ma/n9qmn4h6

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
30:21 10/30/23
The shocking problem of superintelligence, with Connor Leahy
This is the second episode in which we discuss the upcoming Global AI Safety Summit taking place on the 1st and 2nd of November at Bletchley Park in England.

We are delighted to have as our guest in this episode one of the hundred or so people who will attend that summit – Connor Leahy, a German-American AI researcher and entrepreneur.

In 2020 he co-founded EleutherAI, a non-profit research institute which has helped develop a number of open source models, including Stable Diffusion. Two years later he co-founded Conjecture, which aims to scale AI alignment research. Conjecture is a for-profit company, but its focus is still very much on figuring out how to ensure that the arrival of superintelligence is beneficial to humanity, rather than disastrous.

Selected follow-ups:
https://www.conjecture.dev/
https://www.linkedin.com/in/connor-j-leahy/
https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme
https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html
An open event at Wilton Hall, Bletchley, the afternoon before the AI Safety Summit starts: https://www.meetup.com/london-futurists/events/296765860/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
43:20 10/25/23
Preparing for Bletchley Park: behind the scenes, with Ollie Buckley
The launch of GPT-4 on the 14th of March this year was shocking as well as exciting. ChatGPT had been released the previous November, and became the fastest-growing app ever. But GPT-4's capabilities were a level beyond, and it provoked remarkable comments from people who had previously said little about the future of AI. In May, Britain's Prime Minister Rishi Sunak described superintelligence as an existential risk to humanity. A year ago, it would have been inconceivable for the leader of a major country to say such a thing.

The following month, in June, Sunak announced that a global summit on AI safety would be held in November at the historically resonant venue of Bletchley Park, the stately home where during World War Two, Alan Turing and others cracked the German Enigma code, and probably shortened the war by many months.

Despite the fact that AI is increasingly humanity's most powerful technology, there is not yet an established forum for world leaders to discuss its longer-term impacts, including accelerating automation, extended longevity, and the awesome prospect of superintelligence. The world needs its leaders to engage in a clear-eyed, honest, and well-informed discussion of these things.

The summit is scheduled for the 1st and 2nd of November, and Matt Clifford, the CEO of the high-profile VC firm Entrepreneur First, has taken a sabbatical to help prepare it.

To help us all understand what the summit might achieve, the guest in this episode is Ollie Buckley. Ollie studied PPE at Oxford, and was later a policy fellow at Cambridge. After six years as a strategy consultant with Monitor, he spent a decade as a civil servant, developing digital technology policy in the Cabinet Office and elsewhere. Crucially, from 2018 to 2021 he was the founding Executive Director of the UK government's original AI governance advisory body, the Centre for Data Ethics & Innovation (CDEI), where he led some of the original policy development regarding the regulation of AI and data-driven technologies. Since then, he has been advising tech companies, civil society and international organisations on AI policy as a consultant.

Selected follow-ups:
https://www.linkedin.com/in/ollie-buckley-10064b/
https://www.publicaffairsnetworking.com/news/tech-policy-consultancy-boosts-data-and-ai-offer-with-senior-hire
https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme
https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html
An open event at Wilton Hall, Bletchley, the afternoon before the AI Safety Summit starts: https://www.meetup.com/london-futurists/events/296765860/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
34:12 10/18/23
The future of space-based solar power, with John Bucknell
"In the future, energy will be too cheap to meter." That used to be a common vision of the future: abundant, clean energy, if not exactly free, then much cheaper than today's energy. But a funny thing happened en route to that future of energy abundance. High energy costs are still with us in 2023, and are part of what's called the cost-of-living crisis. Moreover, although there's some adoption of green, non-polluting energy, there seems to be as much carbon-based energy used as ever.

Regular listeners to this show will know, however, that one of our themes is that forecasts of the future often go wrong, not so much in their content, but in their timing. New technology and the associated products and services can take longer than expected to mature, but once a transition does start, it can accelerate. And that's a possible scenario for the area of technology we discuss in this episode, namely space-based solar power.

Joining us to discuss the prospects for satellites in space gathering significant amounts of energy from the sun, and then beaming it wirelessly to receivers on the ground, is John Bucknell, the CEO of the marvellously named company Virtus Solis.

John has been with Virtus Solis, as CEO and Founder, since 2018. His career previously included leading positions at Chrysler, SpaceX, General Motors, and the 3D printing company Divergent.

Selected follow-ups:
https://virtussolis.space/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
35:40 10/11/23
Whatever happened to self-driving cars, with Timothy Lee
Self-driving cars have long been one of the most exciting potential outcomes of advanced artificial intelligence. Contrary to popular belief, humans are actually very good drivers, but even so, well over a million people die on the roads each year. Globally, for people between 12 and 24 years old, road accidents are the most common cause of death.

Google started its self-driving car project in January 2009, and spun out a separate company, Waymo, in 2016. Expectations were high. Many people shared hopes that within a few years, humans would no longer need to drive. Some of us also thought that the arrival of self-driving cars would be the signal to everyone else that AI was our most powerful technology, and would get people thinking about the technological singularity. They would, in other words, be the "canary in the coal mine".

The problem of self-driving turned out to be much harder, and insofar as most people think about self-driving cars at all today, they probably think of them as a technology that was over-hyped and failed. And it turned out that chatbots – in particular GPT-4 – would be the canary in the coal mine instead.

But as so often happens, the hype was not wrong – it was just the timing that was wrong. Waymo and Cruise (part of GM) now operate paid-for taxi services in San Francisco and Phoenix, and they are demonstrably safer than human drivers. Chinese companies are also pioneering the technology.

One man who knows much more about this than most is our guest today, Timothy Lee, a journalist who writes the newsletter "Understanding AI". He was previously a journalist at Ars Technica and the Washington Post, and he has a master's degree in Computer Science. In recent weeks, Timothy has published some carefully researched and insightful articles about the state of the art in self-driving cars.

Selected follow-ups:
https://www.UnderstandingAI.org/

Topics addressed in this episode include:
*) The two main market segments for self-driving cars
*) Constraints adopted by Waymo and Cruise which allowed them to make progress
*) Options for upgrading the hardware in a self-driven vehicle
*) Some local opposition to self-driving cars in San Francisco
*) A safety policy: when uncertain, stop, and phone home for advice
*) Support from the State of California - and from other US States
*) Comparing accident statistics: human drivers versus self-driving
*) Why self-driving cars don't require AGI (Artificial General Intelligence)
*) Reasons why self-driving cars cannot be remotely tele-operated
*) Prospects for self-driven freight transport running on highways
*) The company Nuro that delivers pizza and other items by self-driven robots
*) Another self-driving robot company: Starship ("your local community helpers")
*) The Israeli company Mobileye - acquired by Intel in 2017
*) Friction faced by Chinese self-driving companies in the US and elsewhere
*) Different possibilities for the speed at which self-driving solutions will scale up
*) Potential social implications of wider adoption of self-driving solutions
*) Consequences of fatal accidents
*) Dangerous behaviour from safety drivers
*) The special case of Tesla FSD (assisted "Full Self-Driving") and Elon Musk
*) The future of recreational driving
*) An invitation to European technologists

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
34:16 9/27/23
Generative AI, cybercrime, and scamability, with Stacey Edmonds
One of the short-term concerns raised by artificial intelligence is cybercrime. Cybercrime didn't start with AI, of course, but it is already being aggravated by AI, and will become more so.

We are delighted to have as our guest in this episode somebody who knows more about this than most people. After senior roles at the audit and consulting firm Deloitte and the headhunting firm Korn Ferry, Stacey Edmonds set up Lively, which helps client companies to foster the culture they want, and to inculcate the skills, attitudes, and behaviours that will enable them to succeed, and to be safe online.

Stacey's experience and expertise also encompass social science, youth work, education, Edtech, and the creative realm of video production. She is a juror at the New York Film Festival and the International Business Awards.

In this discussion, Stacey explains how cybercrime is on the increase, fuelled not least by generative AI. She discusses how people can reduce their 'scam-ability' and live safely in the digital world, and how organisations can foster and maintain trusted digital relationships with their customers.

Selected follow-ups:
https://www.linkedin.com/in/staceyedmonds/
https://futurecrimesbook.com/ (book by Marc Goodman)
https://cybersecurityventures.com/cybercrime-to-cost-the-world-8-trillion-annually-in-2023/
https://www.vox.com/technology/2023/9/15/23875113/mgm-hack-casino-vishing-cybersecurity-ransomware
https://www.trustcafe.io/

Topics addressed in this episode include:
*) Excitement and apprehension following the recent releases of generative AI platforms
*) The cyberattack on the MGM casino chain
*) Estimates of the amount of money stolen by cybercrime
*) The human trauma of victims of cybercrime
*) Four factors pushing cybercrime figures higher
*) Hacking "the human algorithm"
*) Phishing attacks with and without spelling mistakes
*) The ease of cloning voices
*) The digital wild west, where the sheriff has gone on holiday
*) People who are particularly vulnerable to digital scams
*) The human trafficking of men with IT skills
*) Economic drivers for both cybercrime and solutions to cybercrime
*) Comparing the threat from spam and the threat from deep fakes
*) Anticipating a surge of deep fakes during the 2024 election cycle
*) A possible resurgence of mainstream media
*) Positive examples: BBC Verify, Trust Café (by Jimmy Wales), the Reddit model of upvoting and downvoting, community notes on Twitter
*) Strengthening "netizen" skills in critical thinking
*) The forthcoming app (due to launch in November) "Dodgy or Not" - designed to help people build their "scam ability"
*) Cyber meets Tinder meets Duolingo meets Angry Birds
*) Scenarios for cybercrime 3-5 years in the future
*) Will a future UGI (Universal Generous Income) reduce the prevalence of cybercrime?

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
34:14 9/20/23
