
Cloud Security Podcast by Google

Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure. We’re going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject’s benefit or just for organizational benefit. We hope you’ll join us if you’re interested in where technology overlaps with process and bumps up against organizational design. We’re hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can’t keep as the world moves from on-premises computing to cloud computing.

Tracks

EP173 SAIF in Focus: 5 AI Security Risks and SAIF Mitigations
Guest: Shan Rao, Group Product Manager, Google Topics: What are the unique challenges when securing AI for cloud environments, compared to traditional IT systems? Your talk covers five risks; why did you pick these five? What are the five, and are these the worst? Some of the mitigations seem the same for all risks. What are the popular SAIF mitigations that cover more of the risks? Can we move quickly and securely with AI? How? What future trends and developments do you foresee in the field of securing AI for cloud environments, and how can organizations prepare for them? Do you think in 2-3 years AI security will be a separate domain or a part of … application security? Data security? Cloud security? Resources: Video (LinkedIn, YouTube) [live audio is not great in these] “A cybersecurity expert's guide to securing AI products with Google SAIF” presentation SAIF Site “To securely build AI on Google Cloud, follow these best practices” (paper) “Secure AI Framework (SAIF): A Conceptual Framework for Secure AI Systems” resources Corey Quinn on X (long story why this is here… listen to the episode)
33:16 5/20/24
EP172 RSA 2024: Separating AI Signal from Noise, SecOps Evolves, XDR Declines?
Guests: None Topics: What have we seen at RSA 2024? Which buzzwords are rising (AI! AI! AI!) and which ones are falling (hi XDR)? Is this really all about AI? Is this all marketing? Security platforms or focused tools, who is winning at RSA? Anything fun going on with SecOps? Is cloud security still largely about CSPM? Any interesting presentations spotted? Resources: EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (RSA 2024 episode 1 of 2) “From Assistant to Analyst: The Power of Gemini 1.5 Pro for Malware Analysis” blog “Decoupled SIEM: Brilliant or Stupid?” blog “Introducing Google Security Operations: Intel-driven, AI-powered SecOps” blog “Advancing the art of AI-driven security with Google Cloud” blog
27:20 5/13/24
EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side
Guest: Elie Bursztein, Google DeepMind Cybersecurity Research Lead, Google Topics: Given your experience, how afraid or nervous are you about the use of GenAI by criminals (PoisonGPT, WormGPT and such)? What can a top-tier state-sponsored threat actor do better with LLMs? Are there “extra scary” examples, real or hypothetical? Do we really have to care about this “dangerous capabilities” stuff (CBRN)? Really really? Why do you think that AI favors the defenders? Is this a long-term or a short-term view? What about vulnerability discovery? Some people are freaking out that LLMs will discover new zero-days; is this a real risk? Resources: “How Large Language Models Are Reshaping the Cybersecurity Landscape” RSA 2024 presentation by Elie (May 6 at 9:40AM) “Lessons Learned from Developing Secure AI Workflows” RSA 2024 presentation by Elie (May 8, 2:25PM) EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents EP40 2021: Phishing is Solved? EP135 AI and Security: The Good, the Bad, and the Magical EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It PyRIT LLM red-teaming tool Accelerating incident response using generative AI Threat Actors are Interested in Generative AI, but Use Remains Limited OpenAI’s Approach to Frontier Risk
27:03 5/6/24
EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC
Guest: Payal Chakravarty, Director of Product Management, Google SecOps, Google Cloud Topics: What are the different use cases for GenAI in security operations and how can organizations prioritize them for maximum impact? We’ve heard a lot of worries from people that GenAI will replace junior team members–how do you see GenAI enabling more people to be part of the security mission? What are the challenges and risks associated with using GenAI in security operations? We’ve been down the road of automation for SOCs before–UEBA and SOAR both claimed it–and AI looks a lot like those but with way more matrix math–what are we going to get right this time that we didn’t quite live up to last time(s) around? Imagine a SOC or a D&R team of 2029. What AI-based magic is routine at this time? What new things are done by AI? What do humans do? Resources: Live video (LinkedIn, YouTube) [live audio is not great in these] Practical use cases for AI in security operations, Cloud Next 2024 session by Payal EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps 15 must-attend security sessions at Next '24
27:48 4/29/24
EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps
Guests: no guests (just us!) Topics: What are some of the fun security-related launches from Next 2024 (sorry for our brief “marketing hat” moment!)? Any fun security vendors we spotted “in the clouds”? OK, what are our favorite sessions? Our own, right? Anything else we had time to go to? What are the new security ideas inspired by the event (you really want to listen to this part! Because “freatures”...) Any tricky questions at the end? Resources: Live video (LinkedIn, YouTube) [live audio is not great in these] 15 must-attend security sessions at Next '24 Cloud CISO Perspectives: 20 major security announcements from Next ‘24 EP137 Next 2023 Special: Conference Recap - AI, Cloud, Security, Magical Hallway Conversations (last year!) EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP90 Next Special - Google Cybersecurity Action Team: One Year Later! A cybersecurity expert's guide to securing AI products with Google SAIF Next 2024 session How AI can transform your approach to security Next 2024 session
27:36 4/22/24
EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It
Guests: Umesh Shankar, Distinguished Engineer, Chief Technologist for Google Cloud Security; Scott Coull, Head of Data Science Research, Google Cloud Security Topics: What does it mean to “teach AI security”? How did we make SecLM? And also: why did we make SecLM? What can a “security-trained LLM” do better vs a regular LLM? Does making it better at security make it worse at other things that we care about? What can a security team do with it today? What are the “starter use cases” for SecLM? What has been the feedback so far in terms of impact, both from practitioners and from team leaders? Are we seeing the limits of LLMs for our use cases? Is the “LLM is not magic” realization finally dawning? Resources: “How to tackle security tasks and workflows with generative AI” (Google Cloud Next 2024 session on SecLM) EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models Supercharging security with generative AI Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma? Considerations for Evaluating Large Language Models for Cybersecurity Tasks Introducing Google’s Secure AI Framework Deep Learning Security and Privacy Workshop Security Architectures for Generative AI Systems ACM Workshop on Artificial Intelligence and Security Conference on Applied Machine Learning in Information Security
33:18 4/15/24
EP167 Stolen Cards and Fake Accounts: Defending Google Cloud Against Abuse
Speaker: Maria Riaz, Cloud Counter-Abuse Engineering Lead, Google Cloud Topics: What is “counter abuse”? Is this the same as security? What does counter-abuse look like for GCP? What are the popular abuse types we face? Do people use stolen cards to get accounts and then violate the terms? How do we deal with this, generally? Beyond core technical skills, what are some of the relevant competencies for working in this space that would appeal to a diverse audience? You have worked in academia and industry. What similarities or differences have you observed? Resources / reading: Video EP165 Your Cloud Is Not a Pet - Decoding 'Shifting Left' for Cloud Security EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud “Art of War” by Sun Tzu “Dare to Lead” by Brené Brown "Multipliers" by Liz Wiseman
25:24 4/8/24
EP166 Workload Identity, Zero Trust and SPIFFE (Also Turtles!)
Guests: Evan Gilman, co-founder and CEO of Spirl; Eli Nesterov, co-founder and CTO of Spirl Topics: Today we have IAM, zero trust and security made easy. With that intro, could you give us the 30-second version of what a workload identity is and why people need them? What’s so spiffy about SPIFFE anyway? What’s different between this and micro-segmentation of your network–why is one better or worse? You call your book “Solving the Bottom Turtle”–could you tell us what that means? What are the challenges you’re seeing large organizations run into when adopting this approach at scale? Of all the things a CISO could prioritize, why should this one get added to the list? What makes this–which is so core to our internal security model–ripe for the outside world? How do people do it now, and what gets thrown away when you deploy SPIFFE? Are there alternatives? SPIFFE is interesting, yet can a startup really “solve for the bottom turtle”? (A short illustrative SPIFFE ID sketch follows this entry.) Resources: SPIFFE and Spirl “Solving the Bottom Turtle” book [PDF, free] “Surely You're Joking, Mr. Feynman!” book [also, one of Anton’s faves for years!] “Zero Trust Networks” book Workload Identity Federation in GCP
30:06 4/1/24
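For listeners new to SPIFFE: a workload identity is just a name, expressed as a SPIFFE ID of the form spiffe://<trust-domain>/<workload-path>, and delivered to the workload as a short-lived SVID instead of a long-lived secret. The Python sketch below is a minimal, illustrative parser for that ID format only; it is not the py-spiffe library, and the trust domain and path shown are invented for the example.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass(frozen=True)
class SpiffeId:
    """A parsed SPIFFE ID: spiffe://<trust-domain>/<workload path>."""
    trust_domain: str
    path: str

def parse_spiffe_id(uri: str) -> SpiffeId:
    """Parse and minimally validate a SPIFFE ID string."""
    parts = urlparse(uri)
    if parts.scheme != "spiffe":
        raise ValueError(f"not a SPIFFE ID (scheme={parts.scheme!r})")
    if not parts.netloc:
        raise ValueError("SPIFFE ID is missing a trust domain")
    return SpiffeId(trust_domain=parts.netloc, path=parts.path)

# Hypothetical example: a payments frontend in the 'prod.example.com' trust domain.
workload = parse_spiffe_id("spiffe://prod.example.com/ns/payments/sa/frontend")
print(workload.trust_domain, workload.path)
```

In a real deployment the workload would receive this identity (as an X.509 or JWT SVID) from the SPIFFE Workload API rather than building it itself; the point is simply that the identity names the workload, not its network location.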
EP165 Your Cloud Is Not a Pet - Decoding 'Shifting Left' for Cloud Security
Guest: Ahmad Robinson, Cloud Security Architect, Google Cloud Topics: You’ve done a Black Hat webinar where you discuss a Pets vs Cattle mentality when it comes to cloud operations. Can you explain this mentality and how it applies to security? What in your past led you to these insights? Tell us more about your background and your journey to Google. How did that background contribute to your team? One term that often comes up on the show and with our customers is 'shifting left.' Could you explain what 'shifting left' means in the context of cloud security? What’s hard about shift left, and where do orgs get stuck too far right? A lot of “cloud people” talk about IaC and PaC but the terms and the concepts are occasionally confusing to those new to cloud. Can you briefly explain Policy as Code and its security implications? Does PaC help or hurt security? (A tiny Policy-as-Code illustration follows this entry.) Resources: “No Pets Allowed - Mastering The Basics Of Cloud Infrastructure” webinar EP33 Cloud Migrations: Security Perspectives from The Field EP126 What is Policy as Code and How Can It Help You Secure Your Cloud Environment? EP138 Terraform for Security Teams: How to Use IaC to Secure the Cloud
24:34 3/25/24
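Since the episode asks what Policy as Code actually means: security policy becomes an executable check that runs against declared infrastructure before it ships, typically with tools like OPA/Rego or Terraform validation. The sketch below is only a conceptual Python illustration, not any product's API; the resource format and the rule are invented for the example.

```python
# Minimal illustration of "Policy as Code": policy is an executable check
# run against declared infrastructure before anything is deployed.
# The resource format below is invented for this example.

firewall_rules = [
    {"name": "allow-ssh-anywhere", "port": 22, "source": "0.0.0.0/0"},
    {"name": "allow-internal-db", "port": 5432, "source": "10.0.0.0/8"},
]

def violates_no_public_ssh(rule: dict) -> bool:
    """Policy: SSH must never be reachable from the whole internet."""
    return rule["port"] == 22 and rule["source"] == "0.0.0.0/0"

violations = [r["name"] for r in firewall_rules if violates_no_public_ssh(r)]
if violations:
    # In CI this check would fail the pipeline instead of printing.
    print("policy violations:", violations)
```

The same check expressed in Rego or as an organization policy constraint is what "shifting left" usually looks like in practice: the review happens in the pipeline, not after deployment.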
EP164 Quantum Computing: Understanding the (very serious) Threat and Post-Quantum Cryptography
Guest: Jennifer Fernick, Senior Staff Security Engineer and UTL, Google Topics: Since one of us (!) doesn't have a PhD in quantum mechanics, could you explain what a quantum computer is and how we know they are on a credible path towards being real threats to cryptography? How soon do we need to worry about this one? We’ve heard that quantum computers are more of a threat to asymmetric/public key crypto than symmetric crypto. First off, why? And second, what does this difference mean for defenders? Why (how) are we sure this is coming? Are we mitigating a threat that is perennially 10 years ahead and then vanishes due to some other broad technology change? What is a post-quantum algorithm anyway? If we’re baking new key exchange crypto into our systems, how confident are we that we are going to be resistant to both quantum and traditional cryptanalysis? Why does NIST think it's time to be doing the PQC thing now? Where is the rest of the industry on this evolution? How can a person tell the difference here between reality and snake oil? I think Anton and I both responded to your initial email with a heavy dose of skepticism, and probably more skepticism than it deserved, so you get the rare on-air apology from both of us! (A small hybrid key-exchange sketch follows this entry.) Resources: Securing tomorrow today: Why Google now protects its internal communications from quantum threats How Google is preparing for a post-quantum world NIST PQC standards PQ Crypto conferences “Quantum Computation & Quantum Information” by Nielsen & Chuang book “Quantum Computing Since Democritus” by Scott Aaronson book EP154 Mike Schiffman: from Blueboxing to LLMs via Network Security at Google
31:23 3/18/24
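One reason defenders care about post-quantum cryptography today is "harvest now, decrypt later": traffic recorded now could be decrypted once a large quantum computer exists. That is why real deployments (including the Google work linked above) run a hybrid key exchange, combining a classical exchange with a post-quantum KEM so that both would have to fail. The sketch below shows only the combining step, using the widely available third-party cryptography package; the post-quantum shared secret is faked with random bytes here because a real handshake would obtain it from an ML-KEM (Kyber) implementation.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: a normal X25519 key exchange.
client_priv = x25519.X25519PrivateKey.generate()
server_priv = x25519.X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum half: stand-in bytes. In a real hybrid handshake this would be
# the shared secret output by a PQ KEM such as ML-KEM (Kyber).
pq_secret = os.urandom(32)

# Hybrid derivation: the session key depends on BOTH secrets, so breaking
# either algorithm alone is not enough to recover it.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"example hybrid key exchange",
).derive(classical_secret + pq_secret)
print(session_key.hex())
```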
EP163 Cloud Security Megatrends: Myths, Realities, Contentious Debates and Of Course AI
Guest: Phil Venables, Vice President, Chief Information Security Officer (CISO) @ Google Cloud  Topics:  You had this epic 8 megatrends idea in 2021, where are we now with them? We now have 9 of them, what made you add this particular one (AI)? A lot of CISOs fear runaway AI. Hence good governance is key! What is your secret of success for AI governance?  What questions are CISOs asking you about AI? What questions about AI should they be asking that they are not asking? Which one of the megatrends is the most contentious based on your presenting them worldwide? Is cloud really making the world of IT simpler (megatrend #6)? Do most enterprise cloud users appreciate the software-defined nature of cloud (megatrend #5) or do they continue to fight it? Which megatrend is manifesting the most strongly in your experience? Resources: Megatrends drive cloud adoption—and improve security for all and infographic “Keynote | The Latest Cloud Security Megatrend: AI for Security” “Lessons from the future: Why shared fate shows us a better cloud roadmap” blog and shared fate page SAIF page “Spotlighting ‘shadow AI’: How to protect against risky AI practices” blog EP135 AI and Security: The Good, the Bad, and the Magical EP47 Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security Secure by Design by CISA  
25:54 3/11/24
EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler
Guest: Kat Traxler, Security Researcher, TrustOnCloud Topics: What is your reaction to “in the cloud you are one IAM mistake away from a breach”? Do you like it or do you hate it? A lot of people say “in the cloud, you must do IAM ‘right’”. What do you think that means? What is the first or the main idea that comes to your mind when you hear it? How have you seen the CSPs take different approaches to IAM? What does it mean for the cloud users? Why do people still screw up IAM in the cloud so badly after years of trying? Deeper, why do people still screw up resource hierarchy and resource management? Are the identity sins of cloud IAM users truly the sins of the creators? How did the "big 3" get it wrong and how does that continue to manifest today? Your best cloud IAM advice is “assign roles at the lowest resource-level possible” – please explain this one. Where is the magic? (A small blast-radius illustration follows this entry.) Resources: Video (LinkedIn, YouTube) Kat’s blog “Diving Deeply into IAM Policy Evaluation” blog “Complexity: a Guided Tour” book EP141 Cloud Security Coast to Coast: From 2015 to 2023, What's Changed and What's the Same? EP129 How CISO Cloud Dreams and Realities Collide
28:09 3/4/24
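Kat's "assign roles at the lowest resource level possible" advice is easiest to see as blast radius: the same role granted on a project reaches every resource in it, while granted on one bucket it reaches only that bucket. The Python sketch below is a toy model of that difference; the project layout, principals, and bindings are invented for illustration, and this is not the Cloud IAM API.

```python
# Toy model of the "lowest resource level" advice: the same role granted at
# project scope vs. on a single bucket. Everything here is illustrative.

project_buckets = ["billing-exports", "ml-training-data", "public-web-assets"]

bindings = [
    # Broad: a project-level grant reaches every bucket in the project.
    {"principal": "sa-reporting@example.iam", "role": "roles/storage.objectViewer",
     "scope": ("project", "my-project")},
    # Narrow: a resource-level grant reaches exactly one bucket.
    {"principal": "sa-website@example.iam", "role": "roles/storage.objectViewer",
     "scope": ("bucket", "public-web-assets")},
]

def reachable_buckets(binding: dict) -> list[str]:
    """Which buckets a stolen credential for this principal could read."""
    kind, name = binding["scope"]
    return project_buckets if kind == "project" else [name]

for b in bindings:
    print(b["principal"], "->", reachable_buckets(b))
# sa-reporting can read all three buckets; sa-website only one.
```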
EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud
Guest: Victoria Geronimo, Cloud Security Architect, Google Cloud Topics: You work with technical folks at the intersection of compliance, security, and cloud. So what do you do, and where do you find the biggest challenges in communicating across those boundaries? How does cloud make compliance easier? Does it ever make compliance harder? What is your best advice to organizations that approach cloud compliance as they did for the 1990s data centers and classic IT? What has been the most surprising compliance challenge you’ve helped teams debug in your time here? You also work on standards development – can you tell us about how you got into that and what’s been surprising in that for you? We often say on this show that an organization’s ability to threat model is only as good as their team’s perspectives are diverse: how has your background shaped your work here? Resources: Video (YouTube) EP14 Making Compliance Cloud-native EP25 Beyond Compliance: Cloud Security in Europe Fordham University Law and Technology site IAPP site
27:38 2/26/24
EP160 Don't Cloud Your Judgement: Security and Cloud Migration, Again!
Guest: Merritt Baer, Field CTO, Lacework, ex-AWS, ex-USG Topics: How can organizations ensure that their security posture is maintained or improved during a cloud migration? Is cloud migration a risk reduction move? What are some of the common security challenges that organizations face during a cloud migration? Are there different gotchas between the three public clouds? What advice would you give to those security leaders who insist on lift/shift or on lift/shift first? How should security and compliance teams approach their engineering and DevOps colleagues to make sure things are starting on the right foot? In your view, what is the essence of a cloud-native approach to security? How can organizations ensure that their security posture scales as their cloud usage grows? Resources: Video (LinkedIn, YouTube) EP69 Cloud Threats and How to Observe Them EP138 Terraform for Security Teams: How to Use IaC to Secure the Cloud EP67 Cyber Defense Matrix and Does Cloud Security Have to DIE to Win? 9 Megatrends drive cloud adoption—and improve security for all Darknet Diaries podcast
27:32 2/19/24
EP159 Workspace Security: Built for the Modern Threat. But How?
Guests: Emre Kanlikilicer, Senior Engineering Manager @ Google; Sophia Gu, Engineering Manager at Google Topics: Workspace makes the claim that, unlike other productivity suites available today, it’s architected for the modern threat landscape. That’s a big claim! What gives Google the ability to make this claim? Workspace environments would have many different types of data, some very sensitive. What are some of the common challenges with controlling access to data and protecting data in hybrid work? What are some of the common mistakes you see customers making with Workspace security? What are some of the ways context-aware access and DLP (now SDP) help with this? What are the cool future plans for DLP and CAA? Resources: Google Workspace blog & Workspace Update blog EP99 Google Workspace Security: from Threats to Zero Trust CISA Zero Trust Maturity Model 2.0
25:31 2/12/24
EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics
Guest: Jason Solomon, Security Engineer, Google Topics: Could you share a bit about when you get pulled into incidents and what your goals are when you are? How does that change in the cloud? How do you establish a chain of custody and prove it for law enforcement, if needed? What tooling do you rely on for cloud forensics and is that tooling available to "normal people"? How do we at Google know when it’s time to call for help, and how should our customers know that it’s time? Can I quote Ray Parker Jr. and ask: who you gonna call? What’s your advice to a security leader on how to “prepare for the inevitable” in this context? Cloud forensics - is it easier or harder than the 1990s classic forensics? Resources: EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster? EP103 Security Incident Response and Public Cloud - Exploring with Mandiant Google SRE Workbook (Ch 9) GRR, Cloud Logging, LibCloudForensics, Turbinia, Timesketch tools
21:33 2/5/24
EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud
Guest: Arie Zilberstein, CEO and Co-Founder at Gem Security Topics: How does Cloud Detection and Response (CDR) differ from traditional, on-premises detection and response? What are the key challenges of cloud detection and response? Often we lift and shift our teams to Cloud, and not always for bad reasons, so what’s your advice on how to teach the old dogs new tricks: “on-premise-trained” D&R teams and cloud D&R? What is this new CIRA thing that Gartner just cooked up? Should CIRA exist as a separate market or technology or is this just a slice of CDR or even SIEM perhaps? What do you tell people who say that “SIEM is their CDR”? What are the key roles and responsibilities of the CDR team? How is the cloud D&R process related to DevOps and cloud-style IT processes? Resources: Video version of this episode Cloud breaches databases EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster? EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP76 Powering Secure SaaS … But Not with CASB? Cloud Detection and Response? 9 Megatrends drive cloud adoption—and improve security for all “Emerging Tech: Security — Cloud Investigation and Response Automation (CIRA) Offers Transformation Opportunities” (Gartner access required) “Does the World Need Cloud Detection and Response (CDR)?” blog
25:27 1/29/24
EP156 Living Off the Land and Attacking Critical Infrastructure: Mandiant Incident Deep Dive
Guest: Sandra Joyce, VP at Mandiant Intelligence Topics: Could you give us a brief overview of what this power disruption incident was about? This incident involved both Living Off the Land and attacks on operational technology (OT). Could you explain to our audience what these mean and what the attacker did here? We also saw a wiper used to hide forensics – is that common these days? Did the attacker risk tipping their hand about upcoming physical attacks? If we’d seen this intrusion earlier, might we have understood the attacker’s next moves? How did your team establish robust attribution in this case, and how do they do it in general? How sure are we, really? Could you share how this came about and maybe some of the highlights in our relationship helping defend that country? Resources: Sandworm Disrupts Power in Ukraine Using a Novel Attack Against Operational Technology | Mandiant Andy Greenberg’s book Sandworm EP155 Cyber, Geopolitics, AI, Cloud - All in One Book?
25:12 1/22/24
EP155 Cyber, Geopolitics, AI, Cloud - All in One Book?
Guests: Derek Reveron, Professor and Chair of National Security at the US Naval War College; John Savage, An Wang Professor Emeritus of Computer Science at Brown University Topics: You wrote a book on cyber and war – how did this come about and what did you most enjoy learning from each other during the writing process? Is generative AI going to be a game changer in international relations and war, or is it just another tool? You also touch briefly on lethal autonomous weapons systems and ethics–that feels like the genie is right in the very neck of the bottle right now; is it too late? Aside from this book, and the awesome course you offered at Brown that sparked Tim’s interest in this field, how can we democratize this space better? How does the emergence and shift to Cloud impact security in the cyber age? What are your thoughts on the intersection of Cloud as a set of technologies and operating model and state security (like sovereignty)? Does Cloud make espionage harder or easier? Resources: “Security in the Cyber Age” book (and their other books) “Thinking, Fast and Slow” book “No Shortcuts: Why States Struggle to Develop a Military Cyber-Force” book “The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age“ book “Active Cyber Defense: Applying Air Defense to the Cyber Domain” EP141 Cloud Security Coast to Coast: From 2015 to 2023, What's Changed and What's the Same? EP145 Cloud Security: Shared Responsibility, Shared Fate, Shared Faith?
38:36 1/15/24
EP154 Mike Schiffman: from Blueboxing to LLMs via Network Security at Google
Guest: Mike Schiffman, Network Security “UTL” Topics: Given your impressive and interesting history, tell us a few things about yourself? What are the biggest challenges facing network security today based on your experience? You came to Google to work on Network Security challenges. What are some of the surprising ones you’ve uncovered here? What lessons from Google's approach to network security absolutely don’t apply to others? Which ones perhaps do? If you have to explain the difference between network security in the cloud and on-premise, what comes to mind first? How do we balance better encryption with better network security monitoring and detection? Speaking of challenges in cryptography, we’re all getting fired up about post-quantum and network security. Could you give us maybe the 5-minute teaser version of this, because we have an upcoming episode dedicated to this? I hear you have some interesting insight on LLMs, something to do with blueboxing or something. What is that about? Resources: Video EP113 Love it or Hate it, Network Security is Coming to the Cloud EP122 Firewalls in the Cloud: How to Implement Trust Boundaries for Access Control “A History of Fake Things on the Internet” by Walter J. Scheirer Why Google now protects its internal communications from quantum threats How Google is preparing for a post-quantum world NIST on PQC “Smashing The Stack For Fun And Profit” (yes, really)
35:41 1/8/24
EP153 Kevin Mandia on Cloud Breaches: New Threat Actors, Old Mistakes, and Lessons for All
Guest: Kevin Mandia, CEO at Mandiant, part of Google Cloud Topics: When you look back, what were the most surprising cloud breaches in 2023, and what can we learn from them? How were they different from the “old world” of on-prem breaches? For a long time it’s felt like incident response has been an on-prem specialization, and that adversaries are primarily focused on compromising on-prem infrastructure. Who are we seeing go after cloud environments? The same threat actors or not? Could you share a bit about the mistakes and risks that you saw organizations make that made their cloud breaches possible or made them worse? Conversely, what ended up being helpful to organizations in limiting the blast radius or making response easier? Tim’s mother worked in a network disaster recovery team for a long time–their motto was “preparing for the inevitable.” What advice do you have for helping security teams and IT teams get ready for cloud breaches? Especially for recent cloud entrants? Anton tells his “2000 IDS story” (need to listen for details!) and asks: what approaches for detecting threats actually detect threats today? Resources: EP148 Decoding SaaS Security: Demystifying Breaches, Vulnerabilities, and Vendor Responsibilities "Microsoft lost its keys, and the government got hacked" news article SEC Charges SolarWinds and Chief Information Security Officer with Fraud, Internal Control Failures (a must-read for every CISO!)
28:41 12/18/23
EP152 Trust, Security and Google's Annual Transparency Report
Guest: Michee Smith, Director, Product Management for Global Affairs Works, Google Topics: What is the Google Annual Transparency Report and how did we get started doing it? Surely the challenge of a transparency report is that there are things we can’t be transparent about – how do we balance this? What are those? Is it a safe question? What are Access Transparency Logs, and are they connected to the report – other than in Tim's mind and your career? Beyond building the annual transparency report, you also work on our central risk data platform. Every business has a problem managing risk–what’s special here? Do we have any Google magic here? Could you tell us about your path in Product Management here? You have been here eight years, and recently became Director. Do you have any advice for the ambitious Google PMs listening to the show? Resources: Google Annual Transparency report Access Transparency Logs “Digital Asset Valuation and Cyber Risk Measurement: Principles of Cybernomics“ book by Keyun Ruan “Trapped in a frame: Why leaders should avoid security framework traps” blog
26:03 12/11/23
EP151 Cyber Insurance in the Cloud Era: Balancing Protection, Data and Risks
Guest: Monica Shokrai, Head of Business Risk and Insurance for Google Cloud Topics: Could you give us the 30-second rundown of what cyber insurance is and isn't? Can you tie that to clouds? How does the cloud change it? Is it the case that now I don't need insurance for some of the "old school" cyber risks? What challenges are insurers facing with assessing cloud risks? On this show I struggle to find CISOs who "get" cloud – are there insurers and underwriters who get it? We recently heard about an insurer reducing coverage for incidents caused by old CVEs! What's your take on this? An effective incentive structure to push orgs towards patching operational excellence, or someone finding yet another way not to pay out? Is insurance the magic tool for improving security? Doesn't cyber insurance have a difficult reputation with clients? “Will they even pay?” “Will it be enough?” “Is this a cyberwar exception?” type stuff? How do we balance our motives between selling more cloud and providing effective risk underwriting data to insurers? How soon do you think we will have actuarial data from many clients re: real risks in the cloud? What about the fact that risks change all the time, unlike, say, many “non cyber” risks? Resources: Video (LinkedIn, YouTube) Google Cloud Risk Protection program “Cyber Insurance Policy” by Josephine Wolff InsureSec
26:06 12/4/23
EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
Guest: Dr Gary McGraw, founder of the Berryville Institute of Machine Learning Topics: Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems?  If not SBOM for data or “DBOM”, then what? Can data supply chain tools or just better data governance practices help? How would you threat model a system with ML in it or a new ML system you are building?  What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system? What are the key differences between securing the AI you built and AI you buy or subscribe to? Which security tools and frameworks will solve all of these problems for us?  Resources: EP135 AI and Security: The Good, the Bad, and the Magical Gary McGraw books “An Architectural Risk Analysis Of Machine Learning Systems: Toward More Secure Machine Learning“ paper “What to think about when you’re thinking about securing AI” Annotated ML Security bibliography   Tay bot story (2016) “Can you melt eggs?” “Microsoft AI researchers accidentally leak 38TB of company data” “Random number generator attack” “Google's AI Red Team: the ethical hackers making AI safer” Introducing Google’s Secure AI Framework
26:17 11/27/23
EP149 Canned Detections: From Educational Samples to Production-Ready Code
Guests: John Stoner, Principal Security Strategist, Google Cloud Security; Dave Herrald, Head of Adopt Engineering, Google Cloud Security Topics: In your experience, past and present, what would make clients trust vendor detection content? Regarding “canned”, default or “out-of-the-box” detections, how do we make them production quality and not merely educational samples to learn from? What is more important, seeing the detection or being able to change it, or both? If this is about seeing the detection code/content, what about ML and algorithms? What about the SOC analysts who don't read the code? What about “tuning” - is tuning detections a bad word now in 2023? Everybody is obsessed with “false positives” – what about the false negatives? How are we supposed to eliminate them if we don’t see detection logic? (A toy detection-as-code example follows this entry.) Resources: Video (LinkedIn, YouTube) GitHub rules for Chronicle DetectionEngineering.net by Zack Allen “On Trust and Transparency in Detection” blog “Detection as Code? No, Detection as COOKING!” blog EP64 Security Operations Center: The People Side and How to Do it Right EP108 How to Hunt the Cloud: Lessons and Experiences from Years of Threat Hunting EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil Why is Threat Detection Hard? Detection Engineering is Painful — and It Shouldn’t Be (Part 1, 2, 3, 4, 5)
28:37 11/20/23
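The "educational sample vs. production quality" distinction in this episode is easier to see with a toy: a production-grade detection ships with documented assumptions, tunable thresholds, and tests, not just a clever condition. The sketch below is generic detection-as-code in Python, not Chronicle/YARA-L syntax; the event fields and threshold are invented for the example.

```python
# A toy "detection as code" rule: flag a principal with many failed logins
# in a short window. Field names and threshold are illustrative only.

from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 10          # tunable, documented, testable
WINDOW = timedelta(minutes=5)

def detect_bruteforce(events: list[dict]) -> set[str]:
    """Return principals exceeding the failed-login threshold within WINDOW."""
    failures = defaultdict(list)
    for e in events:
        if e["type"] == "LOGIN" and e["outcome"] == "FAILURE":
            failures[e["principal"]].append(e["time"])
    flagged = set()
    for principal, times in failures.items():
        times.sort()
        for i, start in enumerate(times):
            if len([t for t in times[i:] if t - start <= WINDOW]) >= FAILED_LOGIN_THRESHOLD:
                flagged.add(principal)
                break
    return flagged

# The test ships with the detection: changing the rule without a failing test
# is how "canned" content quietly rots.
def test_detects_burst_of_failures():
    t0 = datetime(2023, 11, 20, 12, 0, 0)
    events = [{"type": "LOGIN", "outcome": "FAILURE", "principal": "alice",
               "time": t0 + timedelta(seconds=10 * i)} for i in range(12)]
    assert detect_bruteforce(events) == {"alice"}

test_detects_burst_of_failures()
```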
EP148 Decoding SaaS Security: Demystifying Breaches, Vulnerabilities, and Vendor Responsibilities
Guest: Adrian Sanabria, Director of Valence Threat Labs at Valence Security, ex-analyst Topics: When people talk about “cloud security” they often forget SaaS – what should be the structured approach to using SaaS securely or securing SaaS? What are the incidents telling us about the realistic threats to SaaS tools? Is the Microsoft 365 breach a SaaS breach, a cloud breach or something else? Do we really need CVEs for SaaS vulnerabilities? What are the least understood aspects of securing SaaS? What do you tell the organizations who assume that “the SaaS vendor takes care of all SaaS security”? Isn’t CASB the answer to all SaaS security issues? We also have SSPM now too? Do we really need more tools? Resources: Video (LinkedIn, YouTube) EP76 Powering Secure SaaS … But Not with CASB? Cloud Detection and Response? Valence 2023 State of SaaS Security report DHS Launches First-Ever Cyber Safety Review Board Enterprise Security Weekly podcast CloudVulnDb and another cloud vulnerability list Cyber Safety Review Board (CSRB) by CISA
29:44 11/12/23
EP147 Special: 2024 Google Cloud Security Forecast Report
Guest: Kelli Vanderlee, Senior Manager, Threat Analysis, Mandiant at Google Cloud Topics: Can you really forecast threats? Won’t the threat actors ultimately do whatever they want? How can clients use the forecast? Or, as Tim would say, what gets better once you read it? What is the threat forecast for cloud environments? It says “Cyber attacks targeting hybrid and multi-cloud environments will mature and become more impactful” – what does that mean? Of course AI makes an appearance as well: “LLMs and other gen AI tools will likely be developed and offered as a service to assist attackers with target compromises.” Do we really expect attacker-run LLM SaaS? What models will they use? Will it be good? There are a number of significant elections scheduled for 2024 – are there implications for cloud security? Based on the threat information, tell me about something that is going well; what will get better in 2024? Resources: 2024 Google Cloud Security Forecast Report EP112 Threat Horizons - How Google Does Threat Intelligence EP135 AI and Security: The Good, the Bad, and the Magical How to Stop a Ransomware Attack Sophisticated StripedFly Spy Platform Masqueraded for Years as Crypto Miner
22:51 11/8/23
EP146 AI Security: Solving the Problems of the AI Era: A VC's Insights
Guest: Wei Lien Dang, GP at Unusual Ventures Topics: We have a view at Google that AI for security and security for AI are largely separable disciplines. Do you feel the same way? Is this distinction a useful one for you? What are some of the security problems you're hearing from AI companies that are worth solving? AI is obviously hot, and as always security is chasing the hotness. Where are we seeing the focus of market attention for AI security? Does this feel like an area that's going to have real full products or just a series of features developed by early-stage companies that get acquired and rolled up into other orgs? What lessons can we draw on from previous platform shifts, e.g. cloud security, to inform how this market will evolve? Resources: “What to think about when you’re thinking about securing AI” blog / paper EP135 AI and Security: The Good, the Bad, and the Magical EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models Introducing Google’s Secure AI Framework OWASP Top 10 for Large Language Model Applications Unusual VC Startup Field Guide Demystifying LLMs and Threats by Caleb Sima
24:27 11/5/23
EP145 Cloud Security: Shared Responsibility, Shared Fate, Shared Faith?
Guest: Jay Thoden van Velzen, Strategic Advisor to the CSO, SAP Topics: What are the challenges with shared responsibility for cloud security? Can you explain "shared" vs "separated" responsibility? In your article, you mention “shared faith”; we have “shared fate”, but we had never heard of shared faith. What is this? Can you explain? What about the cloud models (SaaS, PaaS, IaaS) – how does the sharing model differ across them? While we’re at it, what is cloud, really? [yes, we really did ask this!] Resources: LinkedIn post and Blog EP132 Chaos Engineering for Security: How to Improve Software Resilience with Kelly Shortridge “Security Chaos Engineering” book Shared responsibility failures blog Shared fate at Google Cloud (also see blogs one and two) National Cyber Security strategy
20:36 10/29/23
EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models
Guest: Kathryn Shih, Group Product Manager, LLM Lead in Google Cloud Security Topics: Could you give our audience the quick version of what an LLM is and what it can and can't do? Is this “baby AGI” or is this a glorified “autocomplete”? Let’s talk about the different ways to tune the models, and when we think about tuning, what are the ways that attackers might influence or steal our data? Can you help the security leaders listening have the right vocabulary and concepts to reason about the risk of their information a) going into an LLM and b) getting regurgitated by one? How do I keep the output of a model safe, and what questions do I need to ask a vendor to understand if they’re a) talking nonsense or b) actually keeping their output safe? Are hallucinations inherent to LLMs and can they ever be fixed? So there are risks to data and new opportunities for attacks and hallucinations. How do we identify good opportunities in the area, given the risks? (A tiny retrieval-augmented generation sketch follows this entry.) Resources: Retrieval Augmented Generation (or go ask Bard about it) “New Paper: “Securing AI: Similar or Different?“” blog
29:04 10/23/23
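Since the resources point at Retrieval Augmented Generation as one way to reason about the data-risk questions above: in RAG, context is fetched at query time from documents the caller is already allowed to see, instead of fine-tuning sensitive data into the model weights. The sketch below is a deliberately tiny toy: retrieval is plain keyword overlap and generate() is a placeholder, since any real embedding model or LLM endpoint is outside the scope of show notes.

```python
# Toy Retrieval Augmented Generation: retrieve the most relevant snippets,
# then build a grounded prompt. Retrieval here is naive keyword overlap and
# generate() stands in for a real LLM call.

DOCS = [
    "Access Transparency logs record actions taken by provider personnel.",
    "VPC Service Controls restrict data exfiltration between services.",
    "Customer-managed encryption keys let customers control key rotation.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query and keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    # Placeholder: a real system would send this prompt to an LLM endpoint.
    return f"[model response grounded in prompt of {len(prompt)} chars]"

query = "Who can see actions taken by cloud provider staff?"
context = "\n".join(retrieve(query, DOCS))
prompt = f"Answer using ONLY the context below.\nContext:\n{context}\n\nQuestion: {query}"
print(generate(prompt))
```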
