Shifting Privacy Left features lively discussions on the need for organizations to embed privacy by design into the UX/UI, architecture, engineering/DevOps, and overall product development processes BEFORE code or products are ever shipped. Each Tuesday, we publish a new episode featuring interviews with privacy engineers, technologists, researchers, ethicists, innovators, market makers, and industry thought leaders. We dive deeply into this subject and unpack the exciting elements of emerging technologies and tech stacks that are driving privacy innovation; strategies and tactics that win trust; privacy pitfalls to avoid; privacy tech issues ripped from the headlines; and other juicy topics of interest.
In this episode, I'm joined by Amalia Barthel, founder of Designing Privacy, a consultancy that helps businesses integrate privacy into business operations, and Eric Lybeck, a seasoned independent privacy engineering consultant with over two decades of experience in cybersecurity and privacy. Eric recently served as Director of Privacy Engineering at Privacy Code. Today, we discuss: the importance of more training for privacy engineers on AI system enablement; why it's not enough for privacy professionals to focus solely on AI governance; and how their new hands-on course, the "Privacy Engineering in AI Systems Certificate program," can fill this need. Throughout our conversation, we explore the differences between AI system enablement and AI governance and why Amalia and Eric were inspired to develop this certification program. They share examples of what is covered in the course and outline the key takeaways and practical toolkits that enrollees will get, including case studies, frameworks, and weekly live sessions throughout.

Topics Covered:
- How AI system enablement differs from AI governance, and why we should focus on AI as part of privacy engineering
- Why Eric and Amalia designed an AI systems certificate course that bridges the gaps between privacy engineers and privacy attorneys
- The unique ideas and practices presented in this course and what attendees will take away
- Frameworks, cases, and mental models that Eric and Amalia will cover in their course
- How Eric & Amalia structured the Privacy Engineering in AI Systems Certificate program's coursework
- The importance of upskilling for privacy engineers and attorneys

Resources Mentioned:
- Enroll in the 'Privacy Engineering in AI Systems Certificate program' (Save $300 with promo code PODCAST300 - enter this into the Inquiry Form instead of directly purchasing the course)
- Read: 'The Privacy Engineer's Manifesto'
- Take the European Commission's free course, 'Understanding Law as Code'

Guest Info:
- Connect with Amalia on LinkedIn
- Connect with Eric on LinkedIn
- Learn about Designing Privacy
7/23/24 • 38:58
Today, I chat with Gianclaudio Malgieri, an expert in privacy, data protection, AI regulation, EU law, and human rights. Gianclaudio is an Associate Professor of Law at Leiden University, Co-director of the Brussels Privacy Hub, Associate Editor of the Computer Law & Security Review, and co-author of the paper "The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness." In our conversation, we explore this paper and why privacy-enhancing technologies (PETs) are essential but not enough on their own to address digital policy challenges.

Gianclaudio explains why PETs alone are insufficient solutions for data protection and discusses the obstacles to achieving fairness in data processing, including bias, discrimination, social injustice, and market power imbalances. We discuss data alteration techniques such as anonymization, pseudonymization, synthetic data, and differential privacy in relation to GDPR compliance. Plus, Gianclaudio highlights the issues of representation for minorities in differential privacy and stresses the importance of involving these groups in identifying bias and assessing AI technologies. We also touch on the need for ongoing research on PETs to address these challenges and share our perspectives on the future of this research.

Topics Covered:
- What inspired Gianclaudio to research fairness and PETs
- How PETs are about power and control
- The legal/GDPR and computer science perspectives on 'fairness'
- How fairness relates to discrimination, social injustices, and market power imbalances
- How data obfuscation techniques relate to AI / ML
- How well the use of anonymization, pseudonymization, and synthetic data techniques addresses data protection challenges under the GDPR
- How the use of differential privacy techniques may lead to unfairness
- Whether the use of encrypted data processing tools and federated and distributed analytics achieves fairness
- 3 main PET shortcomings and how to overcome them: 1) bias discovery; 2) harms to people belonging to protected groups and individuals' autonomy; and 3) market imbalances
- Areas that warrant more research and investigation

Resources Mentioned:
- Read: "The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness"

Guest Info:
- Connect with Gianclaudio on LinkedIn
- Learn more about the Brussels Privacy Hub
6/25/24 • 47:38
In this episode, I had the pleasure of talking with Avi Bar-Zeev, a true tech pioneer and the Founder and President of The XR Guild. With over three decades of experience, Avi has an impressive resume, including launching Disney's Aladdin VR ride, developing Second Life's 3D worlds, co-founding Keyhole (which became Google Earth), co-inventing Microsoft's HoloLens, and contributing to the Amazon Echo Frames. The XR Guild is a nonprofit organization that promotes ethics in extended reality (XR) through mentorship, networking, and educational resources.

Throughout our conversation, we dive into privacy concerns in augmented reality (AR), virtual reality (VR), and the metaverse, highlighting increased data misuse and manipulation risks as technology progresses. Avi shares his insights on how product and development teams can continue to be innovative while still upholding responsible, ethical standards with clear principles and guidelines to protect users' personal data. Plus, he explains the role of eye-tracking technology and why he advocates classifying its data as health data. We also discuss the challenges of anonymizing biometric data, informed consent, and the need for ethics training across the tech industry.

Topics Covered:
- The top privacy and misinformation issues that Avi has noticed when it comes to AR, VR, and metaverse data
- Why Avi advocates for classifying eye-tracking data as health data
- The dangers of unchecked AI manipulation and why we need to be more aware and in control of our online presence
- The ethical considerations for experimentation in highly regulated industries
- Whether it is possible to anonymize VR and AR data
- Ways that product and development teams can be innovative while maintaining ethics and avoiding harm
- AR risks vs. VR risks
- Advice and privacy principles to keep in mind for technologists who are building AR and VR systems
- Understanding The XR Guild

Resources Mentioned:
- Read: 'The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology'
- Read: 'Our Next Reality'

Guest Info:
- Connect with Avi on LinkedIn
- Check out The XR Guild
- Learn about Avi's Consulting Services
6/18/24 • 51:35
Today, I'm joined by Matt Gershoff, Co-founder and CEO of Conductrics, a software company specializing in A/B testing, multi-armed bandit techniques, and customer research and survey software. With a strong background in resource economics and artificial intelligence, Matt brings a unique perspective to the conversation, emphasizing simplicity and intentionality in decision-making and data collection.

In this episode, Matt dives into Conductrics' background; the role of A/B testing and experimentation in privacy; data collection at a specific, granular level; and the details of Conductrics' processes. He emphasizes the importance of intentionally collecting data with a clear purpose to avoid unnecessary data accumulation, and touches on the value of experimentation in conjunction with data minimization strategies. Matt also discusses his upcoming talk at the PEPR Conference and shares his hopes for what privacy engineers will learn from the event.

Topics Covered:
- Matt's background and how he started A/B testing and experimentation at Conductrics
- The major challenges that arise when companies run experiments, and how Conductrics works to solve them
- Breaking down A/B testing
- How being intentional about A/B testing and experimentation supports a high level of privacy
- The process of data collection, testing, and experimentation
- Collecting data while minimizing privacy risks
- The value of attending the USENIX Conference on Privacy Engineering Practice & Respect (PEPR24) and what to expect from Matt's talk

Guest Info:
- Connect with Matt on LinkedIn
- Learn more about Conductrics
- Read about George Box's quote, "All models are wrong"
- Learn about the PEPR Conference
6/4/24 • 45:22
In this episode, Marie Potel-Saville joins me to shed light on the widespread issue of dark patterns in design. With her background in law, Marie founded the 'FairPatterns' project with her award-winning privacy and innovation studio, Amurabi, to detect and fix large-scale dark patterns. Throughout our conversation, we discuss the different types of dark patterns, why it is crucial for businesses to prevent them from being coded into their websites and apps, and how designers can ensure that they are designing fair patterns in their projects.

Dark patterns are interfaces that deceive or manipulate users into unintended actions by exploiting cognitive biases inherent in decision-making processes. Marie explains how dark patterns are harmful to our economic and democratic models, their negative impact on individual agency, and the ways that FairPatterns provides countermeasures and safeguards against the exploitation of people's cognitive biases. She also shares tips for designers and developers for designing and architecting fair patterns.

Topics Covered:
- Why Marie shifted her career path from practicing law to deploying and lecturing on Legal UX design & combatting Dark Patterns at Amurabi
- The definition of 'Dark Patterns' and the difference between them and 'deceptive patterns'
- What motivated Marie to found FairPatterns.com and her science-based methodology to combat dark patterns
- The importance of decision-making governance
- Why execs should care about preventing dark patterns from being coded into their websites, apps, & interfaces
- How dark patterns exploit our cognitive biases to our detriment
- What global laws say about dark patterns
- How dark patterns create structural risks for our economies & democratic models
- How 'Fair Patterns' serve as countermeasures to Dark Patterns
- The 7 categories of Dark Patterns in UX design & associated countermeasures
- Advice for designers & developers to ensure that they design & architect Fair Patterns when building products & features
- How companies can boost sales & gain trust with Fair Patterns
- Resources to learn more about Dark Patterns & countermeasures

Guest Info:
- Connect with Marie on LinkedIn
- Learn more about Amurabi
- Check out FairPatterns.com

Resources Mentioned:
- Learn about the 7 Stages of Action Model
- Take FairPattern's course: Dark Patterns 101
- Read Deceptive Design Patterns
- Listen to FairPatterns' Fig
4/30/24 • 54:12
In this episode, I sat down with Aaron Weller, the Leader of HP's Privacy Engineering Center of Excellence (CoE), focused on providing technical solutions for privacy engineering across HP's global operations. Throughout our conversation, we discuss: what motivated HP's leadership to stand up a CoE for Privacy Engineering; Aaron's approach to staffing the CoE; how a CoE can shift privacy left in a large, matrixed organization like HP's; and how to leverage the CoE to proactively manage privacy risk.

Aaron emphasizes the importance of understanding an organization's strategy when creating a CoE and shares his methods for gathering data to inform the center's roadmap and team building. He also highlights the great impact that a Center of Excellence can offer and gives advice for implementing one in your organization. We touch on the main challenges in privacy engineering today and the value of designing user-friendly privacy experiences. In addition, Aaron provides his perspective on selecting the right combination of Privacy Enhancing Technologies (PETs) for anonymity, how to go about implementing PETs, and the role that AI governance plays in his work.

Topics Covered:
- Aaron's deep privacy and consulting background and how he ended up leading HP's Privacy Engineering Center of Excellence
- The definition of a "Center of Excellence" (CoE) and how a Privacy Engineering CoE can drive value for an organization and shift privacy left
- What motivates a company like HP to launch a CoE for Privacy Engineering, and what its reporting line should be
- Aaron's approach to creating a Privacy Engineering CoE roadmap; his strategy for staffing this CoE; and the skills & abilities that he sought
- How HP's Privacy Engineering CoE works with the business to advise on, and select, the right PETs for each business use case
- Why it's essential to know the privacy guarantees that your organization wants to assert before selecting the right PETs to get you there
- Lessons learned from setting up a Privacy Engineering CoE and how to get executive sponsorship
- The amount of time that privacy teams have had to work on AI issues over the past year, and advice on preventing burnout
- Aaron's hypothesis about the value of getting an early handle on governance over the adoption of innovative technologies
- The importance of being open to continuous learning in the field of privacy engineering

Guest Info:
- Connect with Aaron on LinkedIn
- Learn about HP's Privacy Engineering Center of Excellence
- Review the OWASP Machine Learning Security Top 10
- Review the OWASP Top 10 for LLM Applications
4/9/24 • 40:13
Today, I'm joined by Amaka Ibeji, Privacy Engineer at Cruise, where she designs and implements robust privacy programs and controls. In this episode, we discuss Amaka's passion for creating a culture of privacy and compliance within organizations and engineering teams. Amaka also hosts the PALS Parlor Podcast, where she speaks to business leaders and peers about privacy, AI governance, leadership, and security and explains technical concepts in a digestible way. The podcast aims to enable business leaders to do more with their data and provides a way for the community to share knowledge with one another.

In our conversation, we touch on her career trajectory from security engineer to privacy engineer and the intersection of cybersecurity, privacy engineering, and AI governance. We highlight the importance of early engagement with various technical teams to enable innovation while still achieving privacy compliance. Amaka also shares the privacy-enhancing technologies (PETs) that she is most excited about, and she recommends resources for those who want to learn more about strategic privacy engineering. Amaka emphasizes that privacy is a systemic, 'wicked problem' and offers her tips for understanding and approaching it.

Topics Covered:
- How Amaka's compliance-focused experience at Microsoft helped prepare her for her Privacy Engineering role at Cruise
- Where privacy overlaps with the development of AI
- Advice for shifting privacy left to make privacy stretch beyond a compliance exercise
- What works well and what doesn't when building a 'Culture of Privacy'
- Privacy by Design approaches that make privacy & innovation a win-win rather than a zero-sum game
- Privacy Engineering trends that Amaka sees, and the PETs about which she's most excited
- Amaka's Privacy Engineering resource recommendations, including: Hoepman's "Privacy Design Strategies" book; the LINDDUN Privacy Threat Modeling Framework; and the PLOT4ai Framework
- "The PALS Parlor Podcast," focused on Privacy Engineering, AI Governance, Leadership, & Security: why Amaka launched the podcast; her intended audience; and topics that she plans to cover this year
- The importance of collaboration, building a community of passionate privacy engineers, and addressing the systemic issue of privacy

Guest Info & Resources:
- Follow Amaka on LinkedIn
- Listen to The PALS Parlor Podcast
- Read Jaap-Henk Hoepman's "Privacy Design Strategies (The Little Blue Book)"
- Read Jason Cronk's "Strategic Privacy by Design, 2nd Edition"
- Check out The LINDDUN Privacy Threat Modeling Framework
- Check out The Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) Framework
4/2/24 • 43:24
In this week's episode, I am joined by Heidi Saas, a privacy lawyer with a reputation for advocating for products and services built with privacy by design and against the abuse of personal data. In our conversation, she dives into recent FTC enforcement actions, analyzing five FTC actions and some enforcement sweeps by Colorado & Connecticut.

Heidi shares her insights on the effect of the FTC enforcement actions and what privacy engineers need to know, emphasizing the need for data management practices to be transparent, accountable, and based on affirmative consent. We cover the role of privacy engineers in ensuring compliance with data privacy laws; why 'browsing data' is 'sensitive data'; the challenges companies face regarding data deletion; and the need for clear consent mechanisms, especially with the collection and use of location data. We also discuss the need to audit the privacy posture of products and services, which includes a requirement to document who made certain decisions, and how to prioritize risk analysis to proactively address risks to privacy.

Topics Covered:
- Heidi's journey into privacy law and advocacy for privacy by design and default
- How the FTC brings enforcement actions, the effect of their settlements, and why privacy engineers should pay closer attention
- Case 1: FTC v. InMarket Media - Heidi explains the implication of the decision: data that are linked to a mobile advertising identifier (MAID) or an individual's home are not considered de-identified
- Case 2: FTC v. X-Mode Social / Outlogic - Heidi explains the implications of the decision, focused on: affirmative express consent for location data collection; the definition of a 'data product assessment' and audit programs; and data retention & deletion requirements
- Case 3: FTC v. Avast - Heidi explains the implication of the decision: 'browsing data' is considered 'sensitive data'
- Case 4: The People (CA) v. DoorDash - Heidi explains the implications of the decision, based on CalOPPA: companies that share personal data with one another as part of a 'marketing cooperative' are, in fact, selling data
- Heidi discusses recent state enforcement sweeps for privacy, specifically in Colorado and Connecticut, and clarity around breach reporting timelines
- The need to prioritize independent third-party audits for privacy
- Case 5: FTC v. Kroger - Heidi explains why the FTC's blocking of Kroger's merger with Albertsons was based on antitrust and privacy harms, given the sheer amount of personal data that they process
- Tools and resources for keeping up with FTC cases and connecting with your privacy community

Guest Info:
- Follow Heidi on LinkedIn
- Read (book): 'Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State'
3/26/24 • 75:33
In this week's episode, I chat with Chris Zeunstrom, the Founder and CEO of Ruca and Yorba. Ruca is a global design cooperative and founder support network, while Yorba is a reverse CRM that aims to reduce your digital footprint and keep your personal information safe. Through his businesses, Chris focuses on solving common problems and creating innovative products. In our conversation, we talk about building a privacy-first company, the digital minimalist movement, and the future of decentralized identity and storage.

Chris shares his journey as a privacy-focused entrepreneur and his mission to prioritize privacy and decentralization in managing personal data. He also explains the digital minimalist movement and why its teachings reach beyond the industry. Chris touches on Yorba's collaboration with Consumer Reports to implement Permission Slip and create a Data Rights Protocol ecosystem that automates data deletion for consumers. Chris also emphasizes the benefits of decentralized identity and storage solutions in improving personal privacy and security. Finally, he gives you a sneak peek at what's next in store for Yorba.

Topics Covered:
- How Yorba was designed as a privacy-first consumer CRM platform; the problems that Yorba solves; and key product functionality & privacy features
- Why Chris decided to bring a consumer product to market for privacy rather than a B2B product
- Why Chris incorporated Yorba as a 'Public Benefit Corporation' (PBC) and sought B Corp status
- Exploring 'Digital Minimalism'
- How Yorba is working with Consumer Reports to advance the CR Data Rights Protocol, leveraging 'Permission Slip,' an authorized agent for consumers to submit data deletion requests
- The architectural design decisions behind Yorba's personal CRM system
- The benefits of using Matomo Analytics or Fathom Analytics for greater privacy vs. using Google Analytics
- The privacy benefits of deploying 'Decentralized Identity' & 'Decentralized Storage' architectures
- Chris' vision for the next stage of the Internet, and the future of Yorba

Guest Info:
- Follow/Connect with Chris on LinkedIn
- Check out Yorba's website

Resources Mentioned:
- Read: TechCrunch's review of Yorba
- Read: 'Digital Minimalism: Choosing a Focused Life in a Noisy World' by Cal Newport
- Subscribe to the Bullet Journal (AKA Bujo) on Digital Minimalism by Ryder Carroll
- Learn about Consumer Reports' Permission Slip Protocol
- Check out Matomo Analytics and Fathom for privacy
3/19/24 • 43:11
In this week's episode, I sat down with Jake Ottenwaelder, Principal Privacy Engineer at Integrative Privacy LLC. Throughout our conversation, we discuss Jake's holistic approach to privacy implementation that considers business, engineering, and personal objectives, as well as the role of anonymization, consent management, and DSAR processes for greater privacy.

Jake believes privacy implementation must account for the interconnectedness of privacy technologies and human interactions. He highlights what a successful implementation looks like and the negative consequences when done poorly. We also dive into the challenges of implementing privacy in fast-paced, engineering-driven organizations. We talk about the complexities of anonymizing data (a very high bar), and he offers valuable suggestions and strategies for achieving anonymity while making the necessary resources more accessible. Plus, Jake shares his advice for organizational leaders to see themselves as servant-leaders, leaving a positive legacy in the field of privacy.

Topics Covered:
- What inspired Jake's initial shift from security engineering to privacy engineering, with a focus on privacy implementation
- How Jake's previous role at Axon helped him shift his mindset to privacy
- Jake's holistic approach to implementing privacy
- The qualities of a successful implementation and the consequences of an unsuccessful implementation
- The challenges of implementing privacy in large organizations
- Common blockers to the deployment of anonymization
- Jake's perspective on using differential privacy techniques to achieve anonymity
- Common blockers to implementing consent management capabilities
- The importance of understanding data flow & lineage, and auditing data deletion
- Holistic approaches to implementing a streamlined and compliant DSAR process with minimal business disruption
- Why Jake believes it's important to maintain a servant-leader mindset in privacy

Guest Info:
- Connect with Jake on LinkedIn
- Integrative Privacy LLC
3/5/24 • 54:09
In this week's episode, I am joined by Steve Tout, Practice Lead at Integrated Solutions Group (ISG) and Host of The Nonconformist Innovation Podcast, to discuss the intersection of privacy and identity. Steve has 18+ years of experience in global Identity & Access Management (IAM) and is currently completing his MBA at Santa Clara University. Throughout our conversation, Steve shares his journey as a reformed technologist and advocate for 'Nonconformist Innovation' & 'Tipping Point Leadership.'

Steve's approach to identity involves breaking it down into 4 components: 1) philosophy, 2) politics, 3) economics, & 4) technology, highlighting their interconnectedness. We also discuss his work with Washington State and its efforts to modernize Consumer Identity & Access Management (IAM). We address concerns around AI, biometrics, & mobile driver's licenses. Plus, Steve offers his perspective on tipping point leadership and the challenges organizations face in achieving privacy change at scale.

Topics Covered:
- Steve's origin story: his accidental entry into identity & access management (IAM)
- Steve's perspective as a 'Nonconformist Innovator' and why he launched 'The Nonconformist Innovation Podcast'
- The intersection of privacy & identity
- How to address organizational resistance to change, especially with lean resources
- Benefits gained from 'Tipping Point Leadership'
- 4 common hurdles to tipping point leadership
- How to be a successful tipping point leader within a very bottom-up focused organization
- 'Consumer IAM' & the driving need for modernizing identity in Washington State
- How Steve has approached the challenges related to privacy, ethics, & equity
- Differences between the mobile driver's license (mDL) & verifiable credentials (VC) standards & technology
- How states are approaching the implementation of mDL in different ways, and the privacy benefits of 'selective disclosure'
- Steve's advice for privacy technologists to best position themselves and their orgs at the forefront of privacy and security innovation
- Steve's recommended books for learning more about tipping point leadership

Guest Info:
- Connect with Steve on LinkedIn
- Listen to The Nonconformist Innovation Podcast

Resources Mentioned:
- Steve's interview with Tom Kemp
- Tipping point leadership books on change management & organizational behavior
- 'Ethics in the Age of Disruptive Technologies: An Operational Roadmap'
2/27/24 • 54:55
This week, I chat with Jake Ward, the Co-Founder and CEO of Data Protocol, to discuss how the Data Protocol platform supports developers' accountability for privacy by giving developers the relevant information in the way that they want it. Throughout the episode, we cover the Privacy Engineering course offerings and certification program; how to improve communication with developers; and trends that Jake has seen across his customers after 2 years of offering these courses to engineers.

In our conversation, we dive into the topics covered in the Privacy Engineering Certification Program, led by instructor Nishant Bhajaria, and the impact that engineers can make in their organization after completing it. Jake shares why he's so passionate about empowering developers, enabling them to build safer products. We talk about the effects of privacy engineering on large tech companies and how to bridge the gap between developers and the support they need through collaboration and accountability. Plus, Jake reflects on his own career path as the Press Secretary for a U.S. Senator and the experiences that shaped his perspectives and brought him to where he is now.

Topics Covered:
- Jake's career journey and why he landed on supporting software developers
- How Jake built Data Protocol and its community
- What 'shifting privacy left' means to Jake
- Data Protocol's Privacy Engineering Courses, Labs, & Certification Program and what developers will take away
- The difference between Data Protocol's free Privacy Courses and paid Certification
- Feedback from customers & trends observed
- Whether tech companies have seen improvement in engineers' ability to embed privacy into the development of products & services after completing the Privacy Engineering courses and labs
- Other privacy-related courses available on Data Protocol, and privacy courses on the roadmap
- Ways to leverage communications to surmount current challenges
- How organizations can make their developers accountable for privacy, and the importance of aligning responsibility, accountability, & business processes
- How Debra would operationalize this accountability into an organization
- How you can use the PrivacyCode.ai privacy tech platform to enable the operationalization of privacy accountability for developers

Resources Mentioned:
- Check out Data Protocol's courses, based on topic
- Enroll in The Privacy Engineering Certification Program (courses are free)
- Check out S3E2: 'My Top 20 Privacy Engineering Resources for 2024'

Guest Info:
- Connect with Jake on LinkedIn
2/13/24 • 44:40
My guest this week is Jay Averitt, Senior Privacy Product Manager and Privacy Engineer at Microsoft, who transitioned his career from Technology Attorney to Privacy Counsel, and most recently to Privacy Engineer.

In this episode, we hear from Jay about: his professional path from a degree in Management Information Systems to Privacy Engineer; how Twitter and Microsoft each navigated their privacy setup, and how to determine privacy program maturity; several of his privacy engineering community projects; and tips on how to spread privacy awareness and stay active within the industry.

Topics Covered:
- Jay's unique professional journey from Attorney to Privacy Engineer
- Jay's big mindset shift from serving as Privacy Counsel to Privacy Engineer, from a day-to-day and internal perspective
- Why constant learning is essential in the field of privacy engineering, requiring us to keep up with ever-changing laws, standards, and technologies
- Jay's comparison of what it's like to work for Twitter vs. Microsoft when it comes to how each company focuses on privacy and data protection
- Two ways to determine Privacy Program Maturity, according to Jay
- How engineering-focused organizations can unify around a corporate privacy strategy, and how privacy pros can connect to people beyond their siloed teams
- Why building and maintaining relationships is the key for privacy engineers to be seen as enablers instead of blockers
- A detailed look at the 'Technical Privacy Review' process
- A peek into Privacy Quest's gamified privacy engineering platform and the events that Jay & Debra are leading as part of its DPD'24 Festival Village month-long puzzles and events
- Debra's & Jay's experiences at USENIX PEPR'23, why it provided so much value for them both, and why you should consider attending PEPR'24
- Ways to utilize online Slack communities, LinkedIn, and other tools to stay active in the privacy engineering world

Resources Mentioned:
- Review talks from the University of Illinois 'Privacy Everywhere Conference 2024'
- Join the Privacy Quest Village's 'Data Privacy Day '24 Festival' (through Feb 18th)
- Submit a proposal / register for the USENIX PEPR '24 Conference

Guest Info:
- Connect with Jay on LinkedIn
1/30/24 • 51:51
In honor of Data Privacy Week 2024, we're publishing a special episode. Instead of interviewing a guest, Debra shares her 'Top 20 Privacy Engineering Resources' and why. Check out her favorite free privacy engineering courses, books, podcasts, creative learning platforms, privacy threat modeling frameworks, conferences, government resources, and more.

DEBRA'S TOP 20 PRIVACY ENGINEERING RESOURCES (in no particular order)
1. Privado's free course: 'Technical Privacy Masterclass'
2. OpenMined's free course: 'Our Privacy Opportunity'
3. Data Protocol's Privacy Engineering Certification Program
4. The Privacy Quest Platform & Games; Bonus: 'The Hitchhiker's Guide to Privacy Engineering'
5. 'Data Privacy: A Runbook for Engineers' by Nishant Bhajaria
6. 'Privacy Engineering: A Data Flow and Ontological Approach' by Ian Oliver
7. 'Practical Data Privacy: Enhancing Privacy and Security in Data' by Katharine Jarmul
8. 'Strategic Privacy by Design, 2nd Edition' by R. Jason Cronk
9. 'The Privacy Engineer's Manifesto: Getting from Policy to Code to QA to Value' by Michelle Finneran Dennedy, Jonathan Fox, and Thomas R. Finneran
10. USENIX Conference on Privacy Engineering Practice and Respect (PEPR)
11. IEEE's International Workshop on Privacy Engineering (IWPE)
12. Institute of Operational Privacy Design (IOPD)
13. 'The Shifting Privacy Left Podcast,' produced and hosted by Debra J Farber and sponsored by Privado
14. Monitaur's 'The AI Fundamentalists Podcast,' hosted by Andrew Clark & Sid Mangalik
15. Skyflow's 'Partially Redacted Podcast' with Sean Falconer
16. The LINDDUN Privacy Threat Model Framework & LINDDUN GO Card Game
17. The Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) Framework & PLOT4ai Card Game
18. The IAPP Privacy Engineering Section
19. The NIST Privacy Engineering Program Collaboration Space
20. The EDPS Internet Privacy Engineering Network (IPEN)

Read "Top 20 Privacy Engineering Resources" on Privado's Blog.
1/23/24 • 54:13
My guest this week is Patricia Thaine, Co-founder and CEO of Private AI, where she leads a team of experts in developing cutting-edge solutions using AI to identify, reduce, and remove Personally Identifiable Information (PII) in 52 languages across text, audio, images, and documents.

In this episode, we hear from Patricia about: her transition from starting a Ph.D. to co-founding an AI company; how Private AI set out to solve fundamental privacy problems to provide control and understanding of data collection; misunderstandings about how best to leverage AI regarding privacy-preserving machine learning; Private AI's intention when designing their software, plus newly deployed features; and whether global AI regulations can help with current risks around privacy, rogue AI, and copyright.

Topics Covered:
- Patricia's professional journey from starting a Ph.D. in Acoustic Forensics to co-founding an AI company
- Why Private AI's mission is to solve privacy problems and create a platform for developers to modularly and flexibly integrate it anywhere you want in your software pipeline, including model ingress & egress
- How companies can avoid mishandling personal information when leveraging AI / machine learning, and Patricia's advice to companies to avoid mishandling personal information
- Why keeping track of ever-changing data collection and regulations makes it hard to find personal information
- Private AI's privacy-enabling architectural approach to finding personal data to prevent it from being used by or stored in an AI model
- The approach that Private AI took to design their software
- Private AI's extremely high matching rate, and how they aim for 99%+ accuracy
- Private AI's roadmap & R&D efforts
- Debra & Patricia discuss AI regulation and Patricia's insights from her article 'Thoughts on AI Regulation'
- A foreshadowing of AI's copyright risk problem and whether regulations or licenses can help
- ChatGPT's popularity, copyright, and the need for embedding privacy, security, and safety by design from the beginning (in the MVP)
- How to reach out to Patricia to connect, collaborate, or access a demo
- How thinking about the fundamentals gets you a good way on your way to ensuring privacy & security

Resources Mentioned:
- Read Yoshua Bengio's blog post, "How Rogue AIs May Arise"
- Read Microsoft's Digital Defense Report 2023
- Read Patricia's article, "Thoughts on AI Regulation"

Guest Info:
- Connect with Patricia on LinkedIn
1/2/24 • 36:54
My guest this week is Kevin Killens, CEO of AHvos, a technology service that provides AI solutions for data-heavy businesses using a proprietary technology called Contextually Responsive Intelligence (CRI), which can act upon a business's private data and produce results without storing that data.

In this episode, we delve into this technology and learn more from Kevin about: his transition from serving in the Navy to founding an AI-focused company; AHvos' architectural approach in support of data minimization and a reduced attack surface; AHvos' CRI technology and its ability to provide accurate answers based on private data sets; and how AHvos' Data Crucible product helps AI teams to identify and correct inaccurate dataset labels.

Topics Covered:
- Kevin's origin story, from serving in the Navy to founding AHvos
- How Kevin thinks about privacy and the architectural approach he took when building AHvos
- The challenges of processing personal data, 'security for privacy,' and the applicability of the GDPR when using AHvos
- Kevin explains the benefits of Contextually Responsive Intelligence (CRI), which abstracts out raw data to protect privacy; finds & creates relevant data in response to a query; and identifies & corrects inaccurate dataset labels
- How human-created algorithms and oversight influence AI parameters and model bias, and why transparency is so important
- How customer data is ingested into models via AHvos
- Why it is important to remove bias from Testing Data, not only Training Data, and how AHvos ensures accuracy
- How AHvos' Data Crucible identifies & corrects inaccurate dataset labels
- Kevin's advice for privacy engineers as they tackle AI challenges in their own organizations
- The impact of technical debt on companies and the importance of building slowly & correctly rather than racing to market with insecure and biased AI models
- The importance of baking security and privacy into your minimum viable product (MVP), even for products that are still in 'beta'

Guest Info:
- Connect with Kevin on LinkedIn
- Check out AHvos
- Check out Trinsic Technologies
12/26/23 • 43:20
My guest this week is Nabanita De, Software Engineer, Serial Entrepreneur, and Founder & CEO at Privacy License, where she's on a mission to transform the AI landscape. In this episode, we discuss Nabanita's transition from Engineering Manager at Remitly to startup founder; what she's learned from her experience at Antler's accelerator program; her first product to market, PrivacyGPT; and her work to educate Privacy Champions.

Topics Covered:
- Nabanita's origin story, from conducting AI research at Microsoft as an intern all the way to founding Privacy License
- How Privacy License supports enterprises entering the global market while protecting privacy as a human right
- A comparison between Nabanita's corporate role as Privacy Engineering Manager at Remitly and her entrepreneurial role as Founder-in-Residence at Antler
- How PrivacyGPT, a Chrome browser plugin, empowers people to use ChatGPT with added privacy protections, without compromising data privacy standards, by redacting sensitive and personal data before sending it to ChatGPT
- NLP techniques that Nabanita leveraged to build out PrivacyGPT, including 'regular expressions,' 'parts-of-speech tagging,' & 'named entity recognition' (a minimal sketch of this kind of redaction follows below)
- How PrivacyGPT can be used to protect privacy across nearly all languages, even where a user has no Internet connection
- How to use Product Hunt to gain visibility around a newly launched product, and whether it's easier to raise a financial round in the AI space right now
- Nabanita's advice for software engineers who might found a privacy or AI startup in the near future
- Why Nabanita created a Privacy Champions Program, and how it provides (non-)privacy folks with recommendations to prioritize privacy within their organizations
- How to sign up for PrivacyGPT's paid pilot app, connect with Nabanita to collaborate, or subscribe to "Nabanita's Moonshots Newsletter" on LinkedIn

Resources Mentioned:
- Check out Privacy License
- Learn more about PrivacyGPT
- Install the PrivacyGPT Chrome Extension
- Learn about Data Privacy Week 2024

Guest Info:
- Connect with Nabanita on LinkedIn
- Subscribe to Nabanita's Moonshots Newsletter
- Learn more about The Nabanita De Foundation
- Learn more about Covid Help for India
- Learn more about Project FiB
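For readers curious what this kind of pre-submission redaction looks like in practice, here is a minimal sketch in Python of the regular-expression layer described above. The patterns, labels, and redact helper are illustrative assumptions only; this is not PrivacyGPT's actual code, and a real redactor would also layer in the parts-of-speech tagging and named entity recognition that Nabanita describes.

```python
import re

# Illustrative patterns only. A production redactor would combine many
# more patterns with POS tagging and NER across languages.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with placeholder tags so the redacted text,
    not the original, is what gets sent to the LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Email jane.doe@example.com or call 415-555-0199 re: SSN 123-45-6789."
    print(redact(prompt))
    # -> Email [EMAIL] or call [PHONE] re: SSN [SSN].
```

The design point the episode makes is that this filtering can run entirely client-side (here, in a browser plugin), which is why it can work even without an Internet connection.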
12/19/23 • 41:29
My guests this week are Yusra Ahmad, CEO of Acuity Data, and Luke Beckley, Data Protection Officer and Privacy Governance Manager at Correla, who work with The RED (Real Estate Data) Foundation, a sector-wide alliance that enables the real estate sector to benefit from an increased use of data, while avoiding some of the risks that this presents, and to better serve society.

We discuss the current drivers for change within the real estate industry and the complexities of an industry that uses incredible amounts of data. You'll learn the types of data protection, privacy, and ethical challenges The RED Foundation seeks to solve, especially now with the advent of new technologies. Yusra and Luke discuss some of the ethical questions the real estate sector faces as it considers leveraging new technology. They come to the conversation from the knowledgeable perspectives of The RED Foundation's Chair of the Data Ethics Steering Group and Chair of the Engagement and Awareness Group, respectively.

Topics Covered:
- Introducing Luke Beckley (DPO, Privacy & Governance Manager at Correla) and Yusra Ahmad (CEO of Acuity Data), who are here to talk about their data ethics work at The RED Foundation
- How the scope, sophistication, & connectivity of data is increasing exponentially in the real estate industry
- Why ESG, workplace experience, & smart city development are drivers of data collection, and the need for data ethics reform within the real estate industry
- Discussion of the types of personal data real estate companies collect & use across stakeholders: owners, operators, occupiers, employees, residents, etc.
- Current approaches that retailers take to protect location data, when collected, and why it's important to simplify language, increase transparency, & make consumers aware of tracking in in-store WiFi privacy notices
- Overview of The RED Foundation & its mission: to ensure the real estate sector benefits from an increased use of data, avoids some of the risks that this presents, and is better placed to serve society
- Some ethical questions with which the real estate sector still needs to align, along with examples
- Why there's a need to educate the real estate industry on privacy-enhancing tech
- The need for privacy engineers and PETs in real estate, and why this will build trust with the different stakeholders
- Guidance for privacy engineers who want to work in the real estate sector
- Ways to collaborate with The RED Foundation to standardize data ethics practices across the real estate industry
- Why there's great opportunity to embed privacy into real estate, and why its current challenges are really obstacles, rather than blockers

Resources Mentioned:
- Check out The RED Foundation

Guest Info:
- Follow Yusra on LinkedIn
- Follow Luke on LinkedIn
12/5/23 • 64:40
This week, I welcome Jared Coseglia, Co-founder and CEO at TRU Staffing Partners, a contract staffing & executive placement search firm that represents talent across 3 core industry verticals: data privacy, eDiscovery, & cybersecurity. We discuss the current and future state of the contracting market for privacy engineering roles and the market drivers that affect hiring.

You'll learn about hiring trends and the allure of 'part-time impact,' 'part-time perpetual,' and 'secondee' contract work. Jared illustrates the challenges that hiring managers face with a 'do-it-yourself' staffing process, and he shares his predictions about the job market for privacy engineers over the next 2 years. Jared comes to the conversation with a lot of data that supports his predictions and sage advice for privacy engineering hiring managers and job seekers.

Topics Covered:
- How the privacy contracting market compares and contrasts with the full-time hiring market, and why we currently see a steep rise in privacy contracting
- Why full-time hiring for privacy engineers won't likely rebound until Q4 2024, and how hiring for privacy typically follows a 2-year cycle
- Why companies & employees benefit from fractional contracts, and the differences between contracting types: 'Part-Time - Impact,' 'Part-Time - Perpetual,' and 'Secondee'
- How hiring managers typically find privacy engineering candidates
- Why it's far more difficult to hire privacy engineers for contracts, and how a staffing partner like TRU can supercharge your hiring efforts and avoid the pitfalls of a 'do-it-yourself' approach
- How contract work benefits privacy engineers financially, while also providing them with project diversity
- How salaries are calculated for privacy engineers, and the driving forces behind pay discrepancies across privacy roles
- Jared's advice to 2024 job seekers, based on his market predictions, and why privacy contracting increases 'speed to hire' compared to hiring FTEs
- Why privacy engineers can earn more money by changing jobs in 2024 than they could by seeking raises in their current companies, and discussion of 2024 salary ranges across industry segments
- Jared's advice on how privacy engineers can best position themselves to contract hiring managers in 2024
- Recommended resources for privacy engineering employers and job seekers

Resources Mentioned:
- Read: "State of the Privacy Job Market Q3 2023"
- Subscribe to TRU Insights

Guest Info:
- Connect with Jared on LinkedIn
- Learn more about TRU Staffing Partners
- Engineering Managers: Check out TRU Staffing data privacy staffing solutions
- PE Candidates: Apply to
11/21/23 • 57:47
This week's guests are Mathew Mytka and Alja Isaković, Co-Founders of Tethix, a company that builds products to embed ethics into the fabric of your organization. We discuss Mat and Alja's core mission to bring ethical tech to the world, and Tethix's services that work with your Agile development processes. You'll learn about Tethix's solution to address 'The Intent to Action Gap,' and what Elemental Ethics can provide organizations beyond other ethics frameworks. We discuss ways to become a proactive Responsible Firekeeper, rather than remaining a reactive Firefighter, and how ETHOS, Tethix's suite of apps, can help organizations embody and embed ethics into everyday practice.

TOPICS COVERED:
- What inspired Mat & Alja to co-found Tethix, and the company's core mission
- What the 'Intent to Action Gap' is and how Tethix addresses it
- Overview of Tethix's Elemental Ethics framework, and how it empowers product development teams to close the 'Intent to Action Gap' and move orgs from a state of 'Agile Firefighting' to 'Responsible Firekeeping'
- Why Agile is an insufficient process for embedding ethics into software and product development, and how you can turn to Elemental Ethics and Responsible Firekeeping to embed 'Ethics-by-Design' into your Agile workflows
- The definition of 'Responsible Firekeeping' and its benefits, and how it transitions Agile teams from a reactive posture to a proactive one
- Why you should choose Elemental Ethics over conventional ethics frameworks
- Tethix's suite of apps called ETHOS (the Ethical Tension and Health Operating System), which helps teams embed ethics into their collaboration tech stack (e.g., JIRA, Slack, Figma, Zoom, etc.)
- How you can become a Responsible Firekeeper
- The level of effort required to implement Elemental Ethics & Responsible Firekeeping into product development, based on org size and level of maturity
- Alja's contribution to ResponsibleTech.Work, an open source Responsible Product Development Framework; the core elements of the Framework; and why we need it
- Where to learn more about Responsible Firekeeping

RESOURCES MENTIONED:
- Read: "Day in the Life of a Responsible Firekeeper"
- Review the ResponsibleTech.Work Framework
- Subscribe to the Pathfinders Newmoonsletter

GUEST INFO:
- Connect with Mat on LinkedIn
- Connect with Alja on LinkedIn
- Check out Tethix's Website
11/14/23 • 44:51
This week's guest is Isabel Barberá, Co-founder, AI Advisor, and Privacy Engineer at Rhite, a consulting firm specializing in responsible and trustworthy AI and privacy engineering, and creator of The Privacy Library Of Threats 4 Artificial Intelligence Framework and card game. In our conversation, we discuss: Isabel's work with privacy-by-design, privacy engineering, privacy threat modeling, and building trustworthy AI; and info about Rhite's forthcoming self-assessment open-source framework for AI maturity, SARAI®. As we wrap up the episode, Isabel shares details about PLOT4ai, her AI threat modeling framework and card game created based on a library of threats for artificial intelligence.

Topics Covered:
- How Isabel became interested in privacy engineering, data protection, privacy by design, threat modeling, and trustworthy AI
- How companies are thinking (or not) about incorporating privacy-by-design strategies & tactics and privacy engineering approaches within their orgs today
- What steps can be taken so companies start investing in privacy engineering approaches, and whether AI has become a driver for such approaches
- Background on Isabel's company, Rhite, and its mission to build responsible solutions for society and its individuals using a technical mindset
- What 'Responsible & Trustworthy AI' means to Isabel
- The 5 core values that make up the acronym R-H-I-T-E, and why they're important for designing and building products & services
- Isabel's advice for organizations as they approach AI risk assessments, analysis, & remediation
- The steps orgs can take in order to build responsible AI products & services
- What Isabel hopes to accomplish through Rhite's new framework, SARAI® (for AI maturity), an open source AI self-assessment tool and framework and an extension of the Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) Framework (i.e., a library of AI risks)
- What motivated Isabel to focus on threat modeling for privacy
- How PLOT4ai builds on LINDDUN (which focuses on software development) and extends threat modeling to the AI lifecycle stages: Design, Input, Modeling, & Output
- How Isabel's experience with the LINDDUN Go card game inspired her to develop a PLOT4ai card game to make threat modeling more accessible to teams
- Isabel's call for collaborators to contribute to the PLOT4ai open source database of AI threats as the community grows

Resources Mentioned:
- Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai)
- PLOT4ai's GitHub Threat Repository
- "Threat Modeling Generative AI Systems with PLOT4ai"
- Self-Assessment for Responsible AI (SARAI®)
- LINDDUN Privacy Threat Model Framework
- "S2E19: Privacy Threat Modeling - Mitigating Privacy Threats in Software with Kim Wuyts (KU Leuven)"
- "Data Privacy: a runbook for engineers"

Guest Info:
11/7/23 • 50:03
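For readers who want a feel for how a threat library like PLOT4ai can be put to work in a threat-modeling session, here is a minimal Python sketch of querying threats by AI lifecycle stage (Design, Input, Modeling, Output). The Threat fields and the two example entries are hypothetical stand-ins, not PLOT4ai's actual schema, which lives in its GitHub threat repository.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    title: str
    category: str   # e.g., "Non-compliance", "Identifiability" (hypothetical labels)
    stages: list    # AI lifecycle stages where the threat applies

# Two invented entries standing in for the real PLOT4ai library
LIBRARY = [
    Threat("Training data collected without a lawful basis",
           "Non-compliance", ["Design", "Input"]),
    Threat("Model output reveals whether a person was in the training set",
           "Identifiability", ["Modeling", "Output"]),
]

def threats_for_stage(stage):
    """Return the subset of the library relevant to one lifecycle stage."""
    return [t for t in LIBRARY if stage in t.stages]

for t in threats_for_stage("Input"):
    print(f"[{t.category}] {t.title}")
```

In a real session, a team would walk through the cards for the stage they are in and record which threats apply to their system.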
This week, I sat down with Vaibhav Antil ('Vee'), Co-founder & CEO at Privado, a privacy tech platform that leverages privacy code scanning & data mapping to bridge the privacy engineering gap. Vee shares his personal journey into privacy, where he started out in Product Management and saw the need for privacy automation in DevOps. We discuss obstacles created by the rapid pace of engineering teams and the lack of a shared vocabulary with Legal / GRC. You'll learn how code scanning enables privacy teams to move swiftly and avoid blocking engineering. We then discuss the future of privacy engineering, its growth trends, and the need for cross-team collaboration. We highlight the importance of making privacy-by-design programmatic and discuss ways to scale up privacy reviews without stifling product innovation. Topics Covered:How Vee moved from Product Manager to Co-Founding Privado, and why he focused on bringing Privacy Code Scanning to market.What it means to "Bridge the Privacy Engineering Gap" and 3 reasons why Vee believes the gap exists.How engineers can provide visibility into personal data collected and used by applications via Privacy Code Scans.Why engineering teams should 'shift privacy left' into DevOps.How a Privacy Code Scanner differs from traditional static code analysis tools in security.How Privado's Privacy Code Scanning & Data Mapping capabilities (for the SDLC) differ from personal data discovery, correlation, & data mapping tools (for the data lifecycle).How Privacy Code Scanning helps engineering teams comply with new laws like Washington State's 'My Health My Data Act.'A breakdown of Privado’s FREE "Technical Privacy Masterclass."Exciting features on Privado’s roadmap, which support its vision to be the platform for collaboration between privacy operations & engineering teams.Privacy engineering trends and Vee’s predictions for the next two years. Privado Resources Mentioned:Free Course: "Technical Privacy Masterclass" (led by Nishant Bhajaria)Guide: Introduction to Privacy Code ScanningGuide: Code Scanning Approach to Data MappingSlack: Privado's Privacy Engineering CommunityOpen Source Tool: Play Store Data Safety Report BuilderGuest Info:Connect with Vee on LinkedInCheck out Privado's websiteSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
10/31/23 • 56:06
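To make the idea of privacy code scanning concrete, here is a deliberately naive Python sketch that flags identifiers suggesting personal data in a source tree. A production scanner like Privado relies on data-flow analysis and much richer data-element classification; the patterns and the src/ directory below are illustrative assumptions only.

```python
import re
from pathlib import Path

# Illustrative identifier patterns only; real scanners classify data
# elements far more precisely and trace how the data actually flows.
PII_PATTERNS = {
    "email":       re.compile(r"\b(email|e_mail|email_address)\b", re.IGNORECASE),
    "phone":       re.compile(r"\b(phone|phone_number|mobile)\b", re.IGNORECASE),
    "geolocation": re.compile(r"\b(latitude|longitude|lat|lng)\b", re.IGNORECASE),
}

def scan_file(path):
    """Yield (line_number, data_category) hits for one source file."""
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for category, pattern in PII_PATTERNS.items():
            if pattern.search(line):
                yield lineno, category

# Walk a (hypothetical) src/ tree and print a crude data-element inventory
for src in Path("src").rglob("*.py"):
    for lineno, category in scan_file(src):
        print(f"{src}:{lineno}: possible '{category}' data element")
```

Even this crude inventory illustrates the episode's point: surfacing personal-data usage directly from code gives privacy and engineering teams a shared, current artifact to review.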
This week’s guest is Rebecca Balebako, Founder and Principal Consultant at Balebako Privacy Engineer, where she enables data-driven organizations to build the privacy features that their customers love. In our conversation, we discuss all things privacy red teaming, including: how to disambiguate adversarial privacy tests from other software development tests; the importance of privacy-by-infrastructure; why privacy maturity influences the benefits received from investing in privacy red teaming; and why any database that identifies vulnerable populations should consider adversarial privacy as a form of protection. We also discuss the 23andMe security incident that took place in October 2023 and affected over 1 mil Ashkenazi Jews (a genealogical ethnic group). Rebecca brings to light how Privacy Red Teaming and privacy threat modeling may have prevented this incident. As we wrap up the episode, Rebecca gives her advice to Engineering Managers looking to set up a Privacy Red Team and shares key resources. Topics Covered:How Rebecca switched from software development to a focus on privacy & adversarial privacy testingWhat motivated Debra to shift left from her legal training to privacy engineeringWhat 'adversarial privacy tests' are; why they're important; and how they differ from other software development testsDefining 'Privacy Red Teams' (a type of adversarial privacy test) & what differentiates them from 'Security Red Teams'Why Privacy Red Teams are best for orgs with mature privacy programsThe 3 steps for conducting a Privacy Red Team attackHow a Red Team differs from other privacy tests like conducting a vulnerability analysis or managing a bug bounty programHow 23andMe's recent data leak, affecting 1 mil Ashkenazi Jews, may have been avoided via Privacy Red Team testingHow BigTech companies are staffing up their Privacy Red TeamsFrugal ways for small and mid-sized organizations to approach adversarial privacy testingThe future of Privacy Red Teaming and whether we should upskill security engineers or train privacy engineers on adversarial testingAdvice for Engineering Managers who seek to set up a Privacy Red Team for the first timeRebecca's Red Teaming resources for the audienceResources Mentioned:Listen to: "S1E7: Privacy Engineers: The Next Generation" with Lorrie Cranor (CMU)Review Rebecca's Red Teaming Resources Guest Info:Connect with Rebecca on LinkedInVisit Balebako Privacy Engineer's websiteSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
10/24/23 • 48:58
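One simple adversarial check a privacy red team might run against a database that identifies vulnerable populations is a k-anonymity measurement over quasi-identifiers. Below is a minimal sketch with toy records, assuming a tabular export; real red-team exercises combine checks like this with attack simulation and linkage against auxiliary datasets.

```python
from collections import Counter

# Toy records: quasi-identifiers an adversary could link to other datasets
records = [
    {"zip": "94110", "birth_year": 1980, "ancestry_group": "A"},
    {"zip": "94110", "birth_year": 1980, "ancestry_group": "A"},
    {"zip": "10001", "birth_year": 1975, "ancestry_group": "B"},
]

def k_anonymity(rows, quasi_ids):
    """Smallest equivalence-class size over the chosen quasi-identifiers."""
    classes = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(classes.values())

k = k_anonymity(records, ["zip", "birth_year", "ancestry_group"])
if k < 2:
    print(f"k = {k}: at least one record is uniquely re-identifiable")
```

A low k over attributes like ancestry group is exactly the kind of finding that would flag a dataset of a genealogical ethnic group as needing adversarial privacy protections.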
This week’s guest is Steve Hickman, the founder of Epistimis, a privacy-first process design tooling startup that evaluates rules and enables fixing privacy issues before they ever take effect. In our conversation, we discuss: why the biggest impediment to protecting and respecting privacy within organizations is the lack of a common language; why we need a common Privacy Ontology in addition to a Privacy Taxonomy; Epistimis' ontological approach and how it leverages semantic modeling for privacy rule checking; and examples of how Epistimis Privacy Design Process tooling complements privacy tech solutions on the market rather than competing with them.Topics Covered:How Steve’s deep engineering background in aerospace, retail, telecom, and then a short stint at Meta led him to found Epistimis Why it's been hard for companies to get privacy right at scaleHow Epistimis leverages 'semantic modeling' for rule checking and how this helps to scale privacy as part of an ontological approachThe definition of a Privacy Ontology and Steve's belief that all should use one for common understanding at all levels of the businessAdvice for designers, architects, and developers when it comes to creating and implementing privacy ontology, taxonomies & semantic modelsHow to make a Privacy Ontology usableHow Epistimis' process design tooling works with discovery and mapping platforms like BigID & Secuvy.aiHow Epistimis' process design tooling works along with a platform like Privado.ai, which scans a company's product code, surfaces privacy risks in the code, and detects processing activities for creating dynamic data mapsHow Epistimis' process design tooling works with PrivacyCode, which has a library of privacy objects, agile privacy implementations (e.g., success criteria & sample code), and delivers metrics on how the privacy engineering process is goingSteve calls for collaborators who are interested in POCs and/or who can provide feedback on Epistimis' PbD process toolingSteve describes what's next on the Epistimis roadmap, including wargamingResources Mentioned:Read Dan Solove's article, "Data is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data"Guest Info:Connect with Steve on LinkedInReach out to Steve via EmailLearn more about EpistimisSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
10/10/23 • 51:35
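As a rough illustration of what ontology-backed privacy rule checking can look like, here is a small Python sketch: a toy 'is-a' hierarchy of data categories plus one rule that forbids using special-category data for advertising. The categories, rule, and check function are invented for illustration and say nothing about how Epistimis' semantic models actually work.

```python
# Toy 'is-a' hierarchy: data category -> parent category
IS_A = {
    "genetic_data": "health_data",
    "health_data": "special_category_data",
}

def is_subcategory(category, ancestor):
    """Walk up the hierarchy to test category membership."""
    while category is not None:
        if category == ancestor:
            return True
        category = IS_A.get(category)
    return False

# One invented rule: special-category data must not serve advertising
RULES = [("special_category_data", "advertising")]

def check(processing):
    for restricted_category, forbidden_purpose in RULES:
        if (is_subcategory(processing["data"], restricted_category)
                and processing["purpose"] == forbidden_purpose):
            return f"VIOLATION: {processing['data']} used for {forbidden_purpose}"
    return "OK"

print(check({"data": "genetic_data", "purpose": "advertising"}))  # VIOLATION
```

The payoff of the ontological approach is visible even at this scale: the rule is written once against a broad category, and new data types inherit it simply by declaring where they sit in the hierarchy.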
This week's guest is Shashank Tiwari, a seasoned engineer and product leader, and Co-founder & CEO of Uno.ai, a pathbreaking autonomous security company. He started with algorithmic systems on Wall Street and then transitioned to building Silicon Valley startups, including previous stints at Nutanix, Elementum, Medallia, & StackRox. In this conversation, we discuss ML/AI, large language models (LLMs), temporal knowledge graphs, causal discovery inference models, and the Generative AI design & architectural choices that affect privacy. Topics Covered:Shashank describes his origin story, how he became interested in security, privacy, & AI while working on Wall Street, & what motivated him to found UnoThe benefits of using "temporal knowledge graphs," and how knowledge graphs are used with LLMs to create a "causal discovery inference model" to prevent privacy problemsThe explosive growth of Generative AI, its impact on the privacy and confidentiality of sensitive and personal data, & why a rushed approach could result in mistakes and societal harm Architectural privacy and security considerations for: 1) leveraging Generative AI, and when to avoid certain mechanisms at all costs; 2) verifying, assuring, & testing against "trustful data" rather than "derived data;" and 3) thwarting common Generative AI attack vectorsShashank's predictions for Enterprise adoption of Generative AI over the next several yearsShashank's thoughts on how proposed and future AI-related legislation may affect the Generative AI market overall and Enterprise adoption more specificallyShashank's thoughts on the development of AI standards across tech stacksResources Mentioned:Check out episode S2E29: Synthetic Data in AI: Challenges, Techniques & Use Cases with Andrew Clark and Sid Mangalik (Monitaur.ai)Guest Info:Connect with Shashank on LinkedInLearn more about Uno.aiSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
10/3/23 • 60:19
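The episode's mention of temporal knowledge graphs is easier to picture with a sketch: facts carry validity intervals, so you can ask what an organization's data practices were at a given point in time. The triple format and example facts below are assumptions chosen for illustration; the episode does not describe Uno.ai's internals.

```python
from datetime import date

# Facts as (subject, predicate, object, valid_from, valid_to) tuples
facts = [
    ("service_A", "stores", "email_addresses", date(2021, 1, 1), date(2022, 6, 30)),
    ("service_A", "stores", "hashed_emails",   date(2022, 7, 1), date.max),
]

def facts_at(graph, when):
    """Return the triples that were valid on a given date."""
    return [(s, p, o) for s, p, o, start, end in graph if start <= when <= end]

print(facts_at(facts, date(2022, 1, 1)))  # raw email addresses still stored
print(facts_at(facts, date(2023, 1, 1)))  # only hashed emails remain
```

Time-scoped facts like these are one way a system could ground an LLM's answers about current versus historical data handling instead of letting the model guess.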
This week I welcome Dr. Andrew Clark, Co-founder & CTO of Monitaur, a trusted domain expert on machine learning, auditing, and assurance; and Sid Mangalik, Research Scientist at Monitaur and PhD student at Stony Brook University. I discovered Andrew and Sid's new podcast show, The AI Fundamentalists Podcast. I very much enjoyed their lively episode on Synthetic Data & AI, and am delighted to introduce them to my audience of privacy engineers. In our conversation, we explore why data scientists must stress test their model validations, especially for consequential systems that affect human safety and reliability. In fact, we have much to learn from the aerospace engineering field, which has been using ML/AI since the 1960s. We discuss the best and worst use cases for using synthetic data; problems with LLM-generated synthetic data; what can go wrong when your AI models lack diversity; how to build fair, performant systems; & synthetic data techniques for use with AI.Topics Covered:What inspired Andrew to found Monitaur and focus on AI governanceSid’s career path and his current PhD focus on NLPWhat motivated Andrew & Sid to launch their podcast, The AI FundamentalistsDefining 'synthetic data' & why academia takes a more rigorous approach to synthetic data than industryWhether the output of LLMs is synthetic data & the problem with training LLM base models with this dataThe best and worst 'synthetic data' use cases for ML/AIWhy the 'quality' of input data is so important when training AI models Thoughts on OpenAI's announcement that it will use LLM-generated synthetic data; and critique of OpenAI's approach, the AI hype machine, and the problems with 'growth hacking' corner-cuttingThe importance of diversity when training AI models; using 'multi-objective modeling' for building fair & performant systemsAndrew unpacks the "fairness through unawareness fallacy"How 'randomized data' differs from 'synthetic data'4 techniques for using synthetic data with ML/AI: 1) the Monte Carlo method; 2) Latin hypercube sampling; 3) Gaussian copulas; & 4) random walkingWhat excites Andrew & Sid about synthetic data and how it will be used with AI in the futureResources Mentioned:Check out Podchaser Listen to The AI Fundamentalists PodcastCheck out MonitaurGuest Info:Follow Andrew on LinkedInFollow Sid on LinkedInSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
9/26/23 • 54:32
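Two of the sampling techniques named in this episode, the Monte Carlo method and Latin hypercube sampling, are easy to compare directly. The sketch below, using scipy's qmc module, shows why stratified Latin hypercube samples cover each dimension more evenly than independent Monte Carlo draws at small sample sizes; the bin-count check is an illustrative diagnostic, not a standard metric.

```python
import numpy as np
from scipy.stats import qmc

n, dims, bins, seed = 100, 2, 10, 42

mc = np.random.default_rng(seed).random((n, dims))      # independent Monte Carlo draws
lhs = qmc.LatinHypercube(d=dims, seed=seed).random(n)   # stratified Latin hypercube

def emptiest_bin(samples):
    """Sample count in the least-covered interval of the first dimension."""
    counts, _ = np.histogram(samples[:, 0], bins=bins, range=(0.0, 1.0))
    return counts.min()

print("Monte Carlo emptiest bin:    ", emptiest_bin(mc))   # usually well below n/bins
print("Latin hypercube emptiest bin:", emptiest_bin(lhs))  # exactly n/bins by construction
```

That even coverage is why Latin hypercube sampling is popular for stress-testing model validations: small synthetic test sets still exercise the whole input space rather than clustering by chance.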
This week, I welcome Jutta Williams, Head of Privacy & Assurance at Reddit, Co-founder of Humane Intelligence and BiasBounty.ai, Privacy & Responsible AI Evangelist, and Startup Board Advisor. With a long history of accomplishments in privacy engineering, Jutta has a unique perspective on the growing field. In our conversation, we discuss her transition from security engineering to privacy engineering; how privacy cultures differ across social media companies where she's worked: Google, Facebook, Twitter, and now Reddit; the overlap of privacy engineering & responsible AI; how her non-profit, Humane Intelligence, supports AI model owners; her experience launching the largest Generative AI Red Teaming challenge ever at DEF CON; and how a curious, knowledge-enhancing approach to privacy will create engagement and allow for fun. Topics Covered:How Jutta’s unique transition from security engineering landed her in the privacy engineering space. A comparison of privacy cultures across Google, Facebook, Twitter (now 'X'), and Reddit based on her privacy engineering experiences there.Two open Privacy Engineering roles at Reddit, and Jutta's advice for those wanting to transition from security engineering to privacy engineering.Whether Privacy Pros will be responsible for owning new regulatory obligations under the EU's Digital Services Act (DSA) & the Digital Markets Act (DMA); and the role of the Privacy Engineer when overlapping with Responsible AI issuesHumane Intelligence, Jutta's 'side quest,' which she co-leads with Dr. Rumman Chowdhury and which supports AI model owners seeking 'Product Readiness Reviews' at scale.When, during the product development life cycle, companies should perform 'AI Readiness Reviews'How to de-bias at scale, or whether attempting to do so is 'chasing windmills'Who should be hunting for biases in an AI Bias Bounty challengeDEF CON 31's AI Village's 'Generative AI Red Teaming Challenge,' which was a bias bounty that she co-designed; lessons learned; and what Jutta & team have planned for DEF CON 32 next yearWhy it's so important for people to 'love their side quests'Resources Mentioned:DEF CON Generative Red Team ChallengeHumane IntelligenceBias Buccaneers ChallengeGuest Info:Connect with Jutta on LinkedInSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
9/19/23 • 54:57
Today, I welcome Victor Morel, PhD and Simone Fischer-Hübner, PhD to discuss their recent paper, "Automating Privacy Decisions – where to draw the line?" and their proposed classification scheme. We dive into the complexity of automating privacy decisions and emphasize the importance of maintaining both compliance and usability (e.g., via user control and informed consent). Simone is a Professor of Computer Science at Karlstad University with over 30 years of privacy & security research experience. Victor is a post-doc researcher at Chalmers University's Security & Privacy Lab, focusing on privacy, data protection, and technology ethics.Together, they share their privacy decision-making classification scheme and research across two dimensions: (1) the type of privacy decision: privacy permissions, privacy preference settings, consent to processing, or rejection of processing; and (2) the level of decision automation: manual, semi-automated, or fully-automated. Each type of privacy decision plays a critical role in users' ability to control the disclosure and processing of their personal data. They emphasize the significance of tailored recommendations to help users make informed decisions and discuss the potential of on-the-fly privacy decisions. We wrap up with organizations' approaches to achieving usable and transparent privacy across various technologies, including web, mobile, and IoT. Topics Covered:Why Simone & Victor focused their research on automating privacy decisions How GDPR & ePrivacy have shaped requirements for privacy automation toolsThe 'types' of privacy decisions & associated 'levels of automation': privacy permissions, privacy preference settings, consent to processing, & rejection of processingThe 'levels of automation' for each privacy decision type: manual, semi-automated & fully-automated; and the pros / cons of automating each privacy decision typePreferences & concerns regarding IoT Trigger Action PlatformsWhy the only privacy decisions that you should 'fully automate' are the rejection of processing: i.e., revoking consent or opting outBest practices for achieving informed controlAutomation challenges across web, mobile, & IoTMozilla's automated cookie banner management & why it's problematic (i.e., unlawful)Resources Mentioned:"Automating Privacy Decisions – where to draw the line?"CyberSecIT at Chalmers University of Technology"Tapping into Privacy: A Study of User Preferences and Concerns on Trigger-Action Platforms"Consent O Matic browser extensionSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
9/12/23 • 44:18
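The paper's two-dimensional classification scheme translates naturally into code. The sketch below encodes the four decision types and three automation levels from the episode, along with the authors' headline recommendation that only rejection of processing (revoking consent or opting out) be fully automated; the function name and API shape are my own invention.

```python
from enum import Enum

class DecisionType(Enum):
    PERMISSION = "privacy permission"
    PREFERENCE = "privacy preference setting"
    CONSENT = "consent to processing"
    REJECTION = "rejection of processing"

class AutomationLevel(Enum):
    MANUAL = "manual"
    SEMI_AUTOMATED = "semi-automated"
    FULLY_AUTOMATED = "fully-automated"

def acceptable(decision, level):
    """Per the paper's recommendation, only rejection of processing
    (revoking consent / opting out) should ever be fully automated."""
    if level is AutomationLevel.FULLY_AUTOMATED:
        return decision is DecisionType.REJECTION
    return True  # manual & semi-automated flows keep the user in the loop

print(acceptable(DecisionType.CONSENT, AutomationLevel.FULLY_AUTOMATED))    # False
print(acceptable(DecisionType.REJECTION, AutomationLevel.FULLY_AUTOMATED))  # True
```

The asymmetry mirrors the legal reasoning discussed in the episode: fully automating consent undermines its informed, freely given character, while automating a refusal only ever reduces data processing.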
This week, I welcome philosopher, author, & AI ethics expert, Reid Blackman, Ph.D., to discuss Ethical AI. Reid authored the book, "Ethical Machines," and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics.In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest to train data for ML/AI can lead to privacy violations, particularly for BigTech companies. We touch on many concepts in the AI space including: automated decision making vs. keeping "humans in the loop;" combating AI ethics fatigue; and advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers. We end by highlighting his HBR article - "Generative AI-xiety" - and discuss the 4 primary areas of ethical concern for LLMs: the hallucination problem; the deliberation problem; the sleazy salesperson problem; & the problem of shared responsibility. Topics Covered:What motivated Reid to write his book, "Ethical Machines"The key differences between 'active privacy' & 'passive privacy'Why engineering incentives to collect more data to train AI models, especially in big tech, pose challenges to data minimizationThe importance of aligning privacy agendas with business prioritiesWhy what companies infer about people can be a privacy violation; what engineers should know about 'input privacy' when training AI models; and, how that affects the output of inferred dataAutomated decision making: when it's necessary to have a 'human in the loop'Approaches for mitigating 'AI ethics fatigue'The need to back up a company's stated 'values' with actions; and why there should always be 3 - 7 guardrails put in place for each stated valueThe differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethicsReid's article, "Generative AI-xiety," & the 4 main risks related to generative AIReid's advice for technical staff building products & services that leverage LLMsResources Mentioned:Read the book, "Ethical Machines"Reid's podcast, Ethical MachinesGuest Info:Follow Reid on LinkedInSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
9/5/23 • 51:41
This week, we're chatting with Engin Bozdag, Senior Staff Privacy Architect at Uber, and Stefano Bennati, Privacy Engineer at HERE Technologies. Today, we explore their recent IWPE'23 talk, "Can Location Data Truly be Anonymized: a risk-based approach to location data anonymization" and discuss the technical & business challenges to obtain anonymization. We also discuss the role of Privacy Engineers, how to choose a career path, and the importance of embedding privacy into product development & DevPrivOps; collaborating with cross-functional teams; & staying up-to-date with emerging trends.Topics Covered:Common roadblocks privacy engineers face with anonymization techniques & how to overcome themHow to get budgets for anonymization tools; challenges with scaling & regulatory requirements & how to overcome themWhat it means to be a 'Privacy Engineer' today; good career paths; and necessary skill setsHow third-party data deletion tools can be integrated into a company's distributed architectureWhat Privacy Engineers should understand about vendor privacy requirements for LLMs before bringing them into their orgsThe need to monitor code changes in data or source code via code scanning; how HERE Technologies uses Privado to monitor the compliance of its products & data lineage; and how Privado detects new assets added to your inventory & any new API endpointsAdvice on how to deal with conflicts between engineering, legal & operations teams and hon how to get privacy issues fixed within an orgStrategies for addressing privacy issues within orgs, including collaboration, transparency, and continuous refinementResources Mentioned:IAPP Defining Privacy Engineering InfographicEU AI ActEthics Guidelines for Trustworthy AIPrivacy Engineering SuperheroesFTC Investigates OpenAI over Data Leak and ChatGPT’s InaccuracyGuest Info:Follow EnginFollow StefanoSend us a Text Message. Privado.aiPrivacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.Shifting Privacy Left MediaWhere privacy engineers gather, share, & learnDisclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.Copyright © 2022 - 2024 Principled LLC. All rights reserved.
8/29/23 • 50:14
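As a taste of the risk-based approach discussed in the talk above, here is a minimal Python sketch of grid-based location aggregation: GPS points are snapped to coarse cells, and cells with fewer than K distinct users are suppressed before publication. The cell size, threshold, and counting logic are illustrative assumptions, not the method Engin and Stefano presented.

```python
K = 5            # minimum distinct users per published cell (illustrative)
CELL_DEG = 0.01  # grid cell size in degrees, roughly 1 km at mid-latitudes

# (latitude, longitude, user_id) points; toy data
points = [(37.7749, -122.4194, "u1"), (37.7751, -122.4190, "u2")]

def cell(lat, lon):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

# Collect the distinct users seen in each cell, then suppress sparse cells
cell_users = {}
for lat, lon, user in points:
    cell_users.setdefault(cell(lat, lon), set()).add(user)

published = {c: len(users) for c, users in cell_users.items() if len(users) >= K}
print(published or "all cells suppressed: fewer than K distinct users each")
```

Even this toy version surfaces the core trade-off from the episode: coarser cells and higher thresholds lower re-identification risk but also destroy the utility that makes location data valuable in the first place.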