Welcome to The Cyber Business Podcast where we feature top founders and entrepreneurs and share their inspiring stories.
Guest Introduction

Alex Dalay is the CISO at IDB Bank, a New York-headquartered commercial, private banking, and broker-dealer institution with more than 70 years of history. As the security leader of a financial institution that sits squarely in the crosshairs of modern threat actors, Alex brings a perspective grounded in operational reality rather than theoretical frameworks. His approach to security leadership strips away the noise and returns consistently to the fundamentals: know what you have, know who has access to it, and build everything else from there.

Here's a Glimpse of What You'll Learn

- Why asset inventory and identity management are the two foundational elements every security program must get right before any advanced tool can be effective
- How AI has changed offensive security by enabling attackers to evaluate and pivot off responses in real time, a capability that previously required human judgment and gave defenders a meaningful edge
- Why the window between vulnerability disclosure and active exploitation has compressed to near real time and what that demands from security teams right now
- How contextual vulnerability scoring differs from out-of-the-box ratings and why a critical vulnerability in one environment may not be critical in yours
- Why social engineering and credential theft remain the most reliable attack paths and how AI-powered behavioral detection is changing the defender's ability to respond
- Why the race to AGI carries geopolitical stakes comparable to the nuclear arms race and what energy infrastructure has to do with who gets there first
- How Alex thinks about the ethical challenge of training AI to be good, not just intelligent, and why guardrails alone are not sufficient
- What Alex told his 10-year-old son when asked what jobs will look like by the time he graduates college

In This Episode

Alex opens with a perspective that cuts through the noise immediately: security does not need to be complicated, and the organizations that struggle most are usually the ones that skipped the basics in pursuit of advanced capabilities. Asset inventory and identity management are unglamorous, but they are the foundation everything else is built on. If you do not know what is in your environment and who has access to it, no tool, AI-powered or otherwise, will save you. That fundamentals-first philosophy shapes how he approaches the role of CISO at a financial institution that faces a significantly higher volume of attacks than most industries simply because money is involved.

The AI conversation takes a sharp turn toward the offensive side of the ledger. Alex identifies the most consequential change AI has made to the threat landscape as the ability to evaluate responses in real time during an attack. Historically, automated tools ran scripts and moved on when something failed, while human attackers could pivot off unexpected responses. Now AI can do both, at machine speed. That shift has compressed the window between vulnerability disclosure and active exploitation to near real time in many cases, fundamentally changing how urgently defenders must act. He also draws an important distinction that often gets lost in the noise: a critical vulnerability rating from a vendor like Microsoft assumes the worst-case configuration. Whether it is actually critical in your specific environment requires human, and increasingly AI-assisted, contextual analysis before you drop everything to patch it.

Alex closes with a wide-angle view of where AI is taking both the profession and society. He draws a comparison to the nuclear arms race, arguing that whichever nation cracks AGI first will hold a form of leverage that reshapes global power. He connects that to an underappreciated dependency: energy. Without the infrastructure to power the data centers that run AI at scale, the United States risks falling behind adversaries who face fewer environmental or political constraints on energy expansion. On the ethical side, he raises a point that goes beyond guardrails. We are racing to make AI intelligent without taking the time to teach it to be good, and the consequences of that gap may be the most important and least discussed challenge in the entire AI conversation.
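The contextual scoring Alex describes can be pictured with a minimal sketch. The adjustment factors and thresholds below are hypothetical illustrations, not his methodology or any vendor's actual formula:

```python
# Minimal sketch of contextual vulnerability scoring (illustrative only).
# The adjustment factors below are hypothetical assumptions, not a real
# scoring methodology.

def contextual_priority(base_severity: float,
                        internet_exposed: bool,
                        vulnerable_feature_enabled: bool,
                        compensating_control: bool) -> float:
    """Adjust a vendor's worst-case severity (0-10) for one environment."""
    score = base_severity
    if not internet_exposed:
        score *= 0.7   # no direct external attack path
    if not vulnerable_feature_enabled:
        score *= 0.3   # vulnerable component is not actually in use
    if compensating_control:
        score *= 0.8   # e.g. WAF rule or network segmentation in place
    return round(score, 1)

# A vendor-rated 9.8 "critical" on an internal host with the affected
# feature disabled may not warrant an emergency patch cycle:
print(contextual_priority(9.8, internet_exposed=False,
                          vulnerable_feature_enabled=False,
                          compensating_control=True))   # 1.6
```

The point of the sketch is the shape of the reasoning, not the numbers: the same CVE can land anywhere on the priority scale once environment-specific context is applied.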
4/14/26 • 33:00
Guest Introduction

Laurel Cipriani returns to The Cyber Business Podcast for a second conversation that goes deeper and broader than the first. As CIO at AffirmedRX, a transparent pharmacy benefits management company and public benefit corporation legally obligated to put patients ahead of profits, Laurel brings a background unlike almost any other CIO in the industry. She trained in psychology, became a registered nurse, spent years in health administration and clinical quality, and arrived in IT through a path that has given her a perspective on people, culture, and human-centered technology that is genuinely rare at the executive level. She is also an active member of the Digital Economist think tank in Washington DC and, as this episode is being recorded, is joining the World Technology Congress, a Switzerland-based international think tank.

Here's a Glimpse of What You'll Learn

- How Laurel is rolling out a role-based AI strategy at AffirmedRX where tool access, permissions, and accountability are all determined by what each person actually does
- Why she is considering hiring dedicated AI fact checkers and what that says about the current state of AI output reliability in high-stakes environments
- What the representation gap for women in IT leadership actually looks like from the inside and why culture fit may be more important than credentials in closing it
- How AI is currently reinforcing gender bias through scraped training data and what that means for the next generation of models
- Why Laurel believes AI could eventually help solve the root causes of gender inequality if developed and governed thoughtfully
- How the anonymity of the internet has amplified harmful behavior and why removing it may be more beneficial than most people are willing to admit
- What it means to lead a technology team with compassion as a core value and why that quality is becoming more important as AI takes over more execution work
- Why Laurel believes the most important question for this generation is not whether to use AI but how to use it without losing what makes us human

In This Episode

Laurel opens this return visit with an origin story that sets the tone for everything that follows. From aspiring grief therapist to floor nurse to health informaticist to CIO of a public benefit corporation, her path into technology was never linear and never conventional. What runs through all of it is a single thread: a desire to help people and a belief that technology is most powerful when it is built around human needs rather than the other way around. That philosophy is now embedded in how she is building the AI strategy at AffirmedRX, where every steward in the company will have a clearly defined set of tools, permissions, and accountability structures tied directly to their role. No one gets unfettered access. No output goes unreviewed. And no AI system will ever make a decision without a human signing off.

The conversation on women in IT leadership is honest and specific in ways that broader industry discussions rarely are. Laurel notes that virtually every person on her own team is male, not by design but by the reality of a candidate pipeline that still skews heavily toward men. Her response is not to lower the bar but to raise the profile of culture as the primary filter in hiring, something AffirmedRX does formally through a culture screening call before any other evaluation takes place. She makes the case that as AI raises the floor on individual capability, the differentiator between good teams and great ones will increasingly be how people work together, not what any individual can produce alone. That shift, she argues, naturally favors the holistic, relationship-oriented thinking that women have historically been undervalued for bringing to technical roles.

The deepest thread in this episode is the one that connects AI governance to human development in ways that go well beyond the enterprise. Laurel is conducting original research through the Digital Economist on how AI and internet anonymity are amplifying harmful behavior toward women, how gender bias baked into training data is being reinforced at scale in AI models, and what it would take to actually interrupt those cycles rather than just acknowledge them. Her conclusion is not pessimistic. She believes AI, if governed with the same intentionality she is applying at AffirmedRX, could become the most powerful tool ever built for identifying and dismantling the cultural patterns that have kept inequality in place for generations. Getting there requires the same thing everything else in this conversation requires: humans staying in charge, staying accountable, and refusing to let speed become an excuse for carelessness.
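The role-based model Laurel describes, where tool access is tied to role and every output has a named human reviewer, can be sketched in a few lines. The roles, tools, and reviewer rules below are hypothetical examples, not AffirmedRX's actual policy:

```python
# Minimal sketch of role-based AI tool governance (illustrative only).
# Roles, tools, and review rules are hypothetical assumptions.

ROLE_POLICY = {
    "analyst":    {"tools": {"summarizer", "sql_assistant"}, "review": "peer"},
    "pharmacist": {"tools": {"summarizer"},                  "review": "pharmacist"},
    "engineer":   {"tools": {"code_assistant"},              "review": "lead"},
}

def check_request(role: str, tool: str) -> dict:
    """Allow a tool only if the role's policy grants it; every allowed
    use is tagged with the human reviewer who must sign off."""
    policy = ROLE_POLICY.get(role)
    if policy is None or tool not in policy["tools"]:
        return {"allowed": False, "reviewer": None}
    return {"allowed": True, "reviewer": policy["review"]}

print(check_request("analyst", "sql_assistant"))     # allowed, peer review
print(check_request("pharmacist", "code_assistant")) # denied: not in role
```

The design choice worth noticing is that "allowed" and "reviewed by" travel together: there is no code path that grants access without also naming the accountable human.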
4/8/26 • 59:05
Guest Introduction

Greg McCord is a career security leader operating across two roles simultaneously. As CISO at Lightcast.io, a leading labor market analytics firm, he protects one of the most data-intensive organizations in the workforce intelligence space. As founder and CISO of McCord Keystone Advisory, launched in late 2025, he extends fractional CISO services to small and mid-sized businesses that need executive-level security leadership but cannot sustain a full-time hire. His background spans government, public sector, and private enterprise, and includes time as an Army interrogator at the SERE school for special forces, an experience that informs how he thinks about intelligence, data relevance, and the psychology of adversarial pressure.

Here's a Glimpse of What You'll Learn

- Why Greg argues every CSO must incorporate AI into their daily security lifecycle or risk being left behind by adversaries who already have
- Why adopting AI in a non-attributable way is the most important and underemphasized discipline in enterprise security right now
- How quantum computing threatens to make every encrypted breach dataset collected today readable in the future and what that means for your data strategy
- Why AI frameworks like AIUC-1 and CSA Maestro are becoming critical infrastructure for organizations trying to govern agents, prompts, and LLMs at scale
- How running LLMs locally on hardware rather than in the cloud changes the security calculus for SMBs and enterprises alike
- Why the cloud adoption analogy is the most useful mental model for thinking about where AI governance is headed
- How AI-powered penetration testing and continuous red teaming are changing how organizations find and prioritize vulnerabilities
- Why the right question is not whether to use AI but how to use it without losing positive control of your most sensitive data

In This Episode

Greg opens with a position that is both practical and urgent. Security leaders who choose not to adopt AI are not playing it safe. They are falling behind adversaries who are already deploying it against them. His counsel is specific: adopt AI, but do it in a non-attributable way. The moment confidential data is connected to an uncontrolled AI system, positive control of that data is gone and there is no reliable way to get it back. The traditional tools still matter. The telemetry and signal they provide remains valuable. But they need to be augmented with AI that can act faster, identify patterns earlier, and close the gap between detection and response before attackers achieve their objective inside your environment.

The quantum computing thread is where Greg brings one of the most forward-looking and underappreciated risks in the conversation. Governments and sophisticated threat actors are collecting encrypted breach data today with no current ability to decrypt it. Once quantum computing matures, that changes. Everything collected now becomes readable later. Greg draws on his Army interrogator background to frame it clearly: the goal is for your data to be irrelevant by the time anyone can crack it, but not all of it will be, and the organizations that are not thinking about this now will have no recourse when it arrives. That reality, combined with the convergence of quantum processing and AI training models, is what makes the current moment unlike anything the industry has faced before.

Greg closes with a perspective on frameworks and governance that is both honest about the pace problem and constructive about the path forward. By the time a framework is written and discussed, the technology it describes has already evolved. That is not an argument against frameworks. It is an argument for building continuous feedback loops between practitioners in the field and the people writing the standards. AIUC-1 and CSA Maestro represent serious efforts to govern AI agent behavior, prompt handling, and LLM risk in a structured way. The organizations that engage with those frameworks now, rather than waiting for mandates, will be the ones with the governance foundation in place when the next wave of threats arrives.
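Greg's "harvest now, decrypt later" point is often framed in the industry as a timing inequality (sometimes called Mosca's inequality): data is at risk if the time it must stay secret plus the time needed to migrate to quantum-resistant cryptography exceeds the time until a capable quantum computer exists. A minimal sketch, with all year values as hypothetical planning inputs rather than predictions:

```python
# Sketch of the "harvest now, decrypt later" timing risk. All year
# values are hypothetical planning inputs, not predictions.

def at_quantum_risk(secrecy_years: float,
                    migration_years: float,
                    years_to_quantum: float) -> bool:
    """Data harvested today is at risk if it must stay secret longer
    than the combined quantum horizon minus your migration runway."""
    return secrecy_years + migration_years > years_to_quantum

# Records that must stay confidential for 25 years, with a 5-year
# migration effort, are exposed under a 15-year quantum horizon:
print(at_quantum_risk(25, 5, 15))   # True
# Session data that is stale within a year is likely irrelevant
# by then, which is exactly the goal Greg describes:
print(at_quantum_risk(1, 5, 15))    # False
```

The calculation is trivial; the discipline is classifying data by how long it must remain secret, which is the step Greg argues most organizations have not started.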
4/6/26 • 37:33
Guest Introduction

Jason Lawrence is the Cybersecurity Director at Yancey Brothers, the oldest Caterpillar dealer in the United States and a company that has been in business since 1914. As the first person to hold this role at the organization, Jason is building the cybersecurity program from the ground up, reporting directly to the CIO. Before joining Yancey Brothers, Jason built a career spanning security operations, identity management, and strategic risk, and he also co-founded Security Reimagined, a firm focused on securing small businesses and communities across Georgia. His approach to cybersecurity is rooted in business outcome thinking, treating cyber defense not as a technology problem but as a revenue protection function.

Here's a Glimpse of What You'll Learn

- Why Jason separates AI into generative AI and machine learning and why that distinction matters more in cybersecurity than anywhere else
- How the OODA Loop framework from military strategy applies directly to cyber defense and why disrupting the attacker's decision cycle is the real objective
- Why non-human identities now outnumber human identities in enterprise environments and what that means for your security posture
- How agentic AI and RAG systems are introducing a new insider threat vector that most organizations are not yet accounting for
- Why AI-powered penetration testing and continuous threat exposure management are changing how organizations prioritize and remediate vulnerabilities
- Why Jason believes cybersecurity is a business problem first and a technology problem second
- How hardening the tools you use to manage your own infrastructure is the most overlooked security priority right now
- Why human imagination remains the one capability AI cannot replicate and why that matters for both attackers and defenders

In This Episode

Jason opens with a framework that reframes how most people think about AI in security. Rather than treating AI as a single category, he separates generative AI from machine learning and assigns each a distinct role. Generative AI helps analysts make sense of massive data volumes quickly, turning raw signals into actionable observations. Machine learning, the kind Darktrace has been applying for well over a decade, automates detection and response in ways that rule-based systems simply cannot match. The real objective, he argues, is not just prevention but disrupting the attacker's OODA loop before they achieve their goal inside your environment. Getting in is not the win for threat actors. What they do after getting in is what matters, and that is where speed of detection and response becomes everything.

The identity conversation is where Jason brings the most urgent and underappreciated insight of the episode. The perimeter is gone. Identities are the new perimeter. And for every human identity in an enterprise, there are now estimated to be up to 144 non-human identities, including devices, data systems, and increasingly, agentic AI and RAG systems that have been granted privileged access to an organization's most sensitive assets. The Stryker breach is the defining example: a compromised Intune instance handed the attacker complete control of the environment. Jason's prescription is direct. Harden the tools you use to manage your infrastructure, roll out MFA everywhere, adopt passkeys, and build a complete identity inventory that accounts for everything in your environment, not just the humans.

Jason closes with a perspective on cybersecurity's role in the business that every security leader should hear. If a user has to stop and think about whether an email is safe, that is a cybersecurity failure because it is pulling that person away from the work that generates revenue. His job, as he frames it, is to make sure the business can do business with as little friction as possible. The department of no has to become the department of know, finding the secure path forward rather than simply blocking the unsafe one. That philosophy, grounded in humble inquiry and genuine understanding of business processes, is what separates security functions that protect the organization from those that simply slow it down.
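The identity inventory Jason prescribes, one that covers service accounts, devices, and AI agents as first-class entries rather than humans only, can be sketched as a simple audit. The field names and sample records below are hypothetical:

```python
# Minimal sketch of an identity inventory that treats non-human
# identities (service accounts, devices, AI agents) as first-class
# entries. Sample records and field names are hypothetical.

INVENTORY = [
    {"name": "jsmith",       "human": True,  "mfa": True,  "owner": "jsmith"},
    {"name": "backup-svc",   "human": False, "mfa": False, "owner": "it-ops"},
    {"name": "rag-agent-01", "human": False, "mfa": False, "owner": None},
    {"name": "intune-admin", "human": False, "mfa": True,  "owner": "it-ops"},
]

def flag_risky_identities(inventory):
    """Flag identities with no accountable owner, plus non-human
    identities whose credentials lack MFA or an equivalent control."""
    return [i["name"] for i in inventory
            if i["owner"] is None or (not i["human"] and not i["mfa"])]

print(flag_risky_identities(INVENTORY))   # the unowned agent and the
                                          # unprotected service account
```

In an enterprise the records would come from directory and cloud APIs rather than a literal list, but the audit logic, owner plus credential protection for every identity, human or not, is the point of the exercise.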
4/1/26 • 37:49
Guest Introduction

Laurel Cipriani is the Chief Information Officer at AffirmedRX, a transparent pharmacy benefits management company built on a mission to make medications accessible and affordable for everyone. A clinician by training and a registered nurse originally, Laurel brings a rare combination of frontline healthcare experience, executive technology leadership, and global policy engagement to her role. She joined AffirmedRX in December 2025 and is currently building the company's IT department, data and analytics function, and AI strategy from the ground up at a company that has been operating for approximately four years. Beyond her work at AffirmedRX, Laurel is an active AI ethicist and member of the Digital Economist, a Washington DC-based think tank focused on the intersection of technology, ethics, and global policy. She has represented that organization at the World Economic Forum in Davos and participated on panels at New York Fashion Week through her involvement with the Fashion Fusion Technology Group, an organization working to apply technology to sustainable and circular fashion. Her perspective spans healthcare transparency, responsible AI adoption, data security, and the broader social and economic forces that technology either reinforces or disrupts.

Here's a Glimpse of What You'll Learn

- How AffirmedRX is differentiating itself from the big three pharmacy benefit managers through transparency, patient-centered care, and a model built around proactive patient advocacy
- Why Laurel and the AffirmedRX leadership team are taking a deliberately cautious, non-PHI approach to AI adoption while building toward broader patient care applications
- What it means to treat AI as an employee rather than a tool, and why that mindset shift determines whether AI actually delivers value inside an organization
- How quantum computing is changing the threat landscape for healthcare data and why quantum-proof security is already on the AffirmedRX roadmap
- What Laurel experienced at the World Economic Forum in Davos and why she believes you cannot make global change if you are not willing to push through the discomfort of being in the room
- How blockchain technology is being explored to bring ethical accountability and supply chain transparency to the fashion industry
- Why Klarna's aggressive AI agent rollout serves as a cautionary tale for any organization tempted to replace human judgment with automation before the technology is ready
- The connection between fast fashion, economic inequality, and the misaligned incentives that Laurel argues are at the root of many of today's most urgent systemic problems

In This Episode

Laurel opens with a clear-eyed description of what AffirmedRX is attempting to do in one of the most entrenched and resistant markets in American healthcare. The big three pharmacy benefit managers have decades of history, established relationships, and enormous switching costs working in their favor. AffirmedRX is betting that transparency, outcomes, and a genuinely patient-first model through its Patient Care Advocates will eventually make the choice obvious for employers. Laurel is direct about the challenge: even people who love the mission in writing hesitate to put their employees through the disruption of changing plans. The company's answer is to let results do the talking, including a white paper in progress at the time of recording detailing the outcomes they have already achieved.

The conversation around AI is where Laurel's dual identity as practitioner and ethicist comes through most clearly. AffirmedRX is using AI, but strictly for internal business process optimization and not yet for anything that touches protected health information. Every recommendation made by AI requires a human to sign off. Pharmacists are designing the models and reviewing the outputs. That discipline is not timidity. It is the product of a CIO who understands that in healthcare, the cost of getting AI wrong is not just financial. It is human. Laurel also introduces a goal she has set for the entire organization: every steward at AffirmedRX should be able to speak confidently about the responsible use of AI in their own role by the end of the year.

The Davos segment brings an unexpected and unusually candid thread to the conversation. Laurel describes arriving at the World Economic Forum with what she calls a naive impression that this was where the world's problems get solved, and encountering something far more complicated: billboards targeting attendees, luxury fashion as social currency, and a pervasive sense of conflict between the forum's stated ideals and its visible reality. She dealt with it by asking every stranger she met whether they felt the same discomfort. The answer was universally yes. Her conclusion: you cannot make global change if you are not willing to be in the room, even when the room makes you uncomfortable. That philosophy connects directly to the work she is doing at AffirmedRX, at the think tank, and in the fashion sustainability space.

The episode closes with a wide-ranging discussion about the relationship between technology, economic inequality, and systemic change. Laurel draws a line from fast fashion's hidden costs to the misaligned incentives that keep people economically disadvantaged, and frames AI as a potential equalizer if it develops in time and in the right direction. Her argument is not that technology will solve everything. It is that the people who care enough to show up, do the unglamorous work, and push for change from inside the system are the ones who have the best chance of actually moving it.
3/30/26 • 48:03
Guest Introduction

Sinan Al Taie is the Cybersecurity Manager at Master Electronics, a leading global authorized distributor of electronic components with more than half a century of history as a family-owned business headquartered in Phoenix, Arizona. His path into cybersecurity is one built from firsthand experience, having transitioned into the field after being hacked himself while working as a database engineer with the United Nations and USAID missions. That personal encounter with a breach sparked a pursuit of professional development through Northeastern Illinois University and hands-on penetration testing work before he joined Master Electronics as a cybersecurity analyst. He grew with the company into his current leadership role, gaining end-to-end exposure to building and evolving a full security posture from the ground up. Today Sinan operates at the intersection of threat intelligence, agentic AI defense strategy, and organizational security architecture, bringing both the practitioner's instinct and the strategist's perspective to one of the most rapidly shifting threat landscapes in recent memory.

Here's a Glimpse of What You'll Learn

- Why AI introduces two distinct and dangerous attack paths that security teams must plan for separately
- How agentic AI defense differs from simply adding another tool to your security stack
- Why attack timelines have compressed from nearly 200 minutes to as few as 77 seconds and what that means for human defenders
- The difference between machine learning applied correctly in security products versus LLMs bolted onto legacy tools
- Why social engineering remains the most persistent and difficult threat to eliminate regardless of how advanced your tools become
- How the concept of detection in depth complements the traditional defense in depth model
- Why subject matter experts will not be replaced by AI but will need to develop managerial and orchestration skills to stay competitive
- What responsible AI inclusion looks like for small and medium businesses that cannot deploy enterprise-level security budgets

In This Episode

Sinan brings a framework to the conversation that cuts through the noise surrounding AI in cybersecurity. He identifies two distinct attack paths organizations are now facing simultaneously: attacks on AI agents, where the autonomous nature of those agents amplifies the speed and scale of damage when something goes wrong, and attacks by agents, where threat actors use AI to generate polymorphic malware, automate entire ransomware kill chains, and launch phishing campaigns sophisticated enough that grammar errors are no longer a reliable tell. The compression of attack timelines from 197 minutes in earlier incidents down to 77 seconds in late 2025 makes clear that human defenders operating alone cannot keep pace.

His response to that reality is not to simply add more tools. Sinan introduces the concept of agentic cyber defense, deploying autonomous agents that can reason, investigate, and act alongside security teams in parallel with traditional infrastructure. These agents are not a replacement for the existing security posture but an additional intelligence layer capable of detecting the micro-processes and behavioral anomalies that traditional tools are not designed to catch. He pairs this with his own framework of detection in depth, a complement to the established defense in depth model, where each layer of the security stack carries its own detection and response capability rather than relying on perimeter defense to carry the full load.

Sinan is direct that there is no silver bullet and no environment where the human element can be fully removed. Social engineering remains the most reliable entry point for threat actors precisely because it bypasses technology entirely. His answer is clear-eyed inclusion: deploying AI with minimum permissions, rigorous review processes, and a clear understanding of what each tool can and cannot do. Even smaller organizations can harden their posture meaningfully by choosing endpoint and security tools that incorporate AI features without needing enterprise-scale budgets to do it.

He closes with a forward-looking take on the profession itself. AI will not take jobs, but people who know how to use AI will replace those who do not. The skill set shifting across security and IT is moving from hands-on execution toward orchestration, directing AI agents the way a manager directs a team, reviewing outputs, catching errors, and making judgment calls that autonomous systems are not yet equipped to handle. The human firewall still matters. What changes is where human attention is most valuable and how professionals need to position themselves to lead alongside the tools rather than behind them.
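Sinan's detection-in-depth idea, every layer carrying its own detection and response capability rather than leaning on the perimeter, lends itself to a simple audit sketch. The layer names and capability labels below are hypothetical, not his framework's actual terms:

```python
# Minimal sketch of a "detection in depth" audit: every layer of the
# stack should carry detection AND response capability, not just
# preventive controls. Layers and capability labels are hypothetical.

LAYERS = {
    "perimeter": {"prevent", "detect", "respond"},
    "endpoint":  {"prevent", "detect", "respond"},
    "identity":  {"prevent", "detect"},            # can see, cannot act
    "data":      {"prevent"},                      # prevention only
}

def detection_gaps(layers):
    """Return layers that cannot both detect and respond on their own."""
    return sorted(name for name, caps in layers.items()
                  if not {"detect", "respond"} <= caps)

print(detection_gaps(LAYERS))   # layers still relying on the perimeter
```

The audit makes the distinction from classic defense in depth concrete: stacking preventive layers is not enough if an attacker who slips past one of them moves through layers that can neither see nor stop the follow-on activity.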
3/25/26 • 53:28
Guest Introduction

Brian Younger is the Chief Information Officer at Liberty Dental Plan of Oklahoma, the largest privately held dental benefits administrator in the United States. With nearly 30 years of experience in IT, Younger has built a career that spans desktop support, network infrastructure, information security, ITSM operational excellence, and executive leadership. Before joining Liberty, he spent a decade working in Medicaid IT for the state of Oklahoma, giving him a deep understanding of regulated healthcare environments from both the public and private sector sides. At Liberty, which serves approximately 8 million members nationwide across Medicare, Medicaid, commercial, and exchange markets, Younger oversees a technology organization that must balance strict compliance requirements, including HITRUST, SOC 2 Type 2, SOC 1 Type 2, and HIPAA, with the need to adopt modern tools and AI-driven capabilities responsibly. His background spans enterprise service management, change management, information security, and IT governance, making him a practitioner who understands both the tactical and strategic dimensions of running IT in a high-stakes, member-focused organization.

Here's a Glimpse of What You'll Learn

- Why IT service management, rooted in the ITIL framework, is essential for reducing downtime and driving accountability across the organization
- How change management through a Change Advisory Board directly reduces outages and improves mean time to resolution
- What the CrowdStrike and SolarWinds incidents reveal about the real cost of poor QA and supply chain risk
- Why governing AI from the start is non-negotiable, especially in healthcare and regulated industries handling protected health information
- How machine learning-based tools like Darktrace differ from LLM-based security products and why that distinction matters
- Why social engineering remains the most reliable attack vector and how AI can serve as an additional detection layer
- How IT leaders can shift from being a department that says no to a function that co-creates value with the business
- Career advice for those entering IT, including why understanding your destination early shapes the certifications and path you should pursue

In This Episode

Brian Younger brings a grounded perspective on IT service management, opening with a clear case for why change management is not bureaucratic friction but a proven mechanism for limiting downtime. He points to real-world data showing that 80 percent of outages trace back to a bad change and draws a direct line between disciplined change processes and financial protection, illustrating how stopping even a handful of avoidable outages each year can translate into millions of dollars saved for an organization. The CrowdStrike incident serves as a vivid reference point for what happens when QA and change control break down at scale.

The conversation moves into AI governance with notable specificity. Younger explains how Liberty approaches AI adoption through a formal AI governing board that evaluates every new tool for compliance risk, data handling, and architectural integrity. He draws a sharp distinction between products that bolt an LLM onto existing services for market appeal and those that apply machine learning in a contained, purposeful way, citing Darktrace as an example of AI done right in the security context. He is direct about the risk of employees using tools like ChatGPT with sensitive data, noting that once information enters those platforms, ownership and use become unclear, a serious concern in a HITRUST, HIPAA-governed environment.

Younger and host Matthew Connor explore the tension between convenience and security, arriving at a framing that will resonate with anyone managing enterprise IT. Security will always prioritize protection while the rest of the business defaults to ease of use. The job of IT leadership is to find the balance that enables the business rather than obstructs it, offering governance as a feature rather than a gate. That philosophy runs through Younger's broader view of IT: a non-revenue-producing department that no one in the organization can operate without, and one that earns its seat by co-creating value rather than holding the line on hardware.

For those considering a career in IT, Younger offers advice that is both practical and forward-looking. He encourages early-career professionals to look past the help desk and identify their target specialty before choosing certifications, comparing the IT landscape to medicine, where a general practitioner and a specialist require fundamentally different training paths. He acknowledges the anxiety around AI displacing IT jobs but reframes it as an argument for staying curious, specializing deliberately, and understanding that the people who will thrive are the ones who know how to direct and govern the tools, not just use them.
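The outage math Younger alludes to is simple enough to sketch. All of the inputs below are hypothetical planning figures, not Liberty's actual numbers; the point is only the shape of the calculation:

```python
# Back-of-envelope sketch of the outage math behind disciplined change
# management. All inputs are hypothetical planning figures.

def annual_savings(outages_avoided: int,
                   avg_outage_hours: float,
                   cost_per_hour: float) -> float:
    """Cost avoided by preventing bad-change outages in a year."""
    return outages_avoided * avg_outage_hours * cost_per_hour

# If roughly 80% of outages trace to bad changes, a Change Advisory
# Board that stops five 4-hour outages a year, at an assumed $100k
# per downtime hour, avoids $2M annually:
print(f"${annual_savings(5, 4, 100_000):,.0f}")
```

Downtime cost per hour varies enormously by organization, which is why the argument is usually made with the organization's own incident history rather than industry averages.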
3/23/26 • 34:39
Guest Introduction
Mark Bojeun serves as Chief Information Officer at Seward County Community College in southwest Kansas. In addition to leading the institution's technology strategy, he is also the author of Awakening Leadership: The Journey to Conscious Influence, a book focused on leadership awareness, personal growth, and the development of stronger organizational cultures. His career blends higher education technology leadership with a deep interest in leadership psychology and human development. In this episode of The Cyber Business Podcast, Mark discusses how leadership awareness shapes technology teams, how community colleges are evolving through digital transformation, and why modern CIOs must balance technical strategy with personal influence. The conversation explores how leadership mindset, culture, and communication determine whether technology initiatives succeed or stall.

Here's a Glimpse of What You'll Learn
- How community colleges are evolving their technology infrastructure to support modern learning environments
- Why leadership awareness is a critical skill for CIOs and IT executives
- How personal development impacts technology leadership and decision making
- Why communication and influence are often more important than technical authority
- How higher education institutions balance innovation with limited resources
- Why strong leadership culture improves the success of IT initiatives
- The connection between conscious leadership and long-term organizational impact

In This Episode
Mark Bojeun explains how community colleges are experiencing rapid technological change as digital learning environments expand and student expectations continue to evolve. As CIO of Seward County Community College, he describes how smaller institutions must often innovate creatively while operating with limited resources. Technology leaders in higher education must balance modernization with financial realities while still delivering reliable systems for students, faculty, and staff.

Mark also highlights how leadership perspective directly shapes the success of technology initiatives. Many IT projects fail not because of technical issues but because of communication gaps, lack of alignment, or leadership blind spots. His work and writing focus on helping leaders develop stronger awareness of how their actions influence teams and organizational outcomes.

The conversation also explores Mark's book Awakening Leadership: The Journey to Conscious Influence. He explains that leadership development begins with understanding personal behavior patterns, communication styles, and how leaders affect the people around them. Technology leaders who develop this awareness often build stronger teams, encourage collaboration, and achieve more consistent results.

Mark's perspective highlights a growing shift in the CIO role. Modern technology leaders are no longer defined solely by infrastructure knowledge or system architecture. Instead, the most effective CIOs combine technical expertise with emotional intelligence, communication skills, and a clear leadership philosophy.

Sponsor for this episode...
This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.
3/12/26 • 49:03
Guest Introduction
Shannon Thomas serves as Chief Information Officer at Mitchell Hamline School of Law in Saint Paul, Minnesota, one of the largest independent law schools in the United States. In addition to leading IT strategy and execution, she is completing her dissertation focused on women in IT and operates a leadership-focused LLC. Her work centers on the intersection of technology, culture, leadership, and human behavior, with a particular emphasis on how bias, allyship, and organizational systems shape the future of the IT workforce.

Here's a Glimpse of What You'll Learn
- Why microaggressions still impact women in technology careers
- How mental load at home influences retention in demanding IT roles
- What allyship looks like in real workplace scenarios
- Why leadership should focus on managing people, not positions
- How unconscious bias subtly shapes workplace dynamics
- The connection between culture, media, and leadership expectations
- Why flexibility increases both productivity and loyalty
- How inclusive leadership strengthens retention and performance

In This Episode
Shannon Thomas explains how systemic and cultural factors continue to shape the experience of women in IT. She discusses how women are often dissuaded from entering technology early in their academic journeys and how microaggressions persist even at senior leadership levels. From vendors directing technical questions to male subordinates to assumptions about who makes final decisions, she provides concrete examples of how bias still manifests in everyday interactions.

The conversation explores the concept of mental load and how it disproportionately affects women in demanding technology roles. Shannon describes how cybersecurity and IT leadership positions rarely pause, while family responsibilities also remain constant. She argues that retention challenges are not simply about technical capability, but about how organizations structure flexibility, policy, and leadership expectations.

Allyship emerges as a central theme. Shannon emphasizes that real progress requires colleagues to redirect conversations, correct behavior, and actively support women in decision-making spaces. She explains that meaningful change does not always require confrontation, but it does require awareness and intentional redirection.

The discussion ultimately reframes the issue as a human leadership challenge rather than a gender-specific one. Shannon makes the case that organizations perform better when leaders treat employees as whole people. Flexibility, empathy, and accountability create stronger cultures, improve retention, and allow diverse talent to thrive in high-demand technical environments.
3/3/26 • 51:28
Guest Introduction
Vikas Sachdeva serves as Chief Information Officer at HealthDrive Corporation, a healthcare organization delivering care to patients in long-term care facilities across more than 20 states and over 4,000 facilities. With prior leadership roles spanning financial services, retail, AI-driven digital engineering, and healthcare, Vikas has built a career focused on digital transformation that drives measurable business outcomes. At HealthDrive, his role centers on enabling clinicians with the right technologies, embedding responsible AI practices, strengthening security posture, and aligning innovation directly with improved patient care and operational performance.

Here's a Glimpse of What You'll Learn
- How AI-powered ambient listening and clinical assistance tools are augmenting providers in long-term care settings
- Why responsible AI principles such as transparency, fairness, accountability, and human oversight are essential in healthcare
- How security and AI must evolve together to address protected health information risks
- Why AI should augment human workflows rather than replace employees
- How involving resistant stakeholders early turns them into champions of change
- Why transformation must start with business outcomes, not technology hype
- How data-driven proof points reduce fear around automation initiatives

In This Episode
Vikas Sachdeva explains how HealthDrive leverages innovation to improve care delivery for underserved populations in long-term care facilities. AI tools assist clinicians through ambient note capture, diagnosis support, and treatment guidance, allowing providers to focus more fully on patient interaction. He emphasizes that AI must remain augmentative rather than substitutive, particularly in healthcare where trust, ethics, and human accountability are foundational.

Security plays a parallel role in the transformation. Vikas outlines the importance of responsible AI, especially when working with protected health information. He discusses transparency, bias mitigation, reliability, and human oversight as non-negotiable guardrails when deploying AI systems. He also addresses the reality that adversaries are leveraging AI as well, making automation and proactive security measures essential to stay competitive.

A major theme of the discussion centers on change management. Vikas shares a practical example of introducing intelligent document processing to automate unstructured data conversion. Initial resistance focused on trust and error rates, but by involving stakeholders early and comparing AI performance to existing human error rates, confidence grew. Error rates dropped from 17 percent to 4 percent, demonstrating measurable improvement rather than theoretical promise.

Throughout the episode, Vikas reinforces a consistent philosophy. Innovation is not about chasing trends. It is about identifying business outcomes first, then selecting the right technology to support them. AI becomes powerful when aligned with mission, patient care, operational efficiency, and employee empowerment.
3/3/26 • 35:55
Guest Introduction
Steve Orrin serves as Chief Technology Officer and Senior Principal Engineer at Intel Federal, where he operates at the intersection of advanced computing, cybersecurity, and national security missions. In his role, Steve works closely with U.S. federal agencies and the Defense Industrial Base to translate mission requirements into hardware, firmware, and software capabilities that can operate at massive scale and under elevated security demands. He also feeds those real-world requirements back into Intel's product and research teams, helping shape future platforms that support government, critical infrastructure, and highly regulated industries. His background places him in a unique position to explain how technologies pioneered for government use often become the next standards adopted across the commercial sector.

Here's a Glimpse of What You'll Learn
- Why federal government requirements often predict future commercial security standards
- How AI and cybersecurity must be addressed across the full lifecycle
- Where AI delivers real value in security operations versus where expectations fall short
- What confidential computing solves and why data in use is the next security frontier
- How post-quantum cryptography timelines are being driven by government mandates
- Why hardware-based security controls matter for cloud, edge, and mission systems
- How memory-safe technologies can eliminate entire classes of cyber attacks

In This Episode
Steve explains his role at Intel Federal as a three-part function. He helps government agencies adopt the right technologies for their missions, translates those requirements back to Intel's internal product and engineering teams, and supports innovation where standard commercial solutions do not fully meet government needs. This two-way translation ensures that future platforms align with real-world mission and security demands.

The discussion moves into AI and cybersecurity, which Steve frames across three dimensions. Organizations must secure AI systems themselves, use AI responsibly to improve cybersecurity operations, and defend against adversaries that are also leveraging AI. He emphasizes that AI cannot be treated like traditional software. It requires governance, validation, and continuous monitoring across data sourcing, training, tuning, and deployment.

Steve outlines where AI is delivering tangible value today. Rather than detecting entirely new threats in isolation, AI excels at automating repetitive, high-volume security tasks. By reducing the operational burden of routine alerts, patching, and triage, AI allows security teams to focus their expertise on higher-impact risks and emerging threats.

A key segment of the conversation focuses on confidential computing. Steve explains how protecting data in use closes a long-standing security gap that encryption at rest and in transit cannot address. Through trusted execution environments, memory encryption, isolation, and attestation, organizations can protect sensitive workloads even from compromised operating systems or untrusted cloud environments. This capability is especially relevant for AI models, intellectual property, and mission-critical workloads deployed across cloud, edge, and disconnected environments.

The episode concludes with a forward-looking discussion on post-quantum cryptography and secure mission platforms. Steve explains why the threat is not limited to future quantum computers but extends to data being harvested and stored today for later decryption. Government-driven timelines are accelerating adoption, and commercial industries will benefit from following the same path as compliant products become broadly available.
2/4/26 • 42:20
Guest Introduction
Brett Talmadge served as Chief Information Officer at Nisqually Red Wind Casino during one of the most critical periods in the organization's history. Brought in following a ransomware incident that disrupted operations and exposed long-standing technology gaps, Brett was tasked with stabilizing systems, rebuilding trust, and creating a sustainable security and IT foundation. His background spans highly regulated and mission-critical environments, including financial services in New York City and work tied to federal defense operations. That experience shaped his disciplined approach to cybersecurity, operational resilience, and leadership communication.

Here's a Glimpse of What You'll Learn
- How ransomware incidents expose deeper organizational and governance issues
- Why paying a ransom creates long-term risk rather than resolution
- The importance of defining a clear IT end state before implementing tools
- How leadership misunderstanding of IT roles creates security blind spots
- Why cybersecurity is an ongoing process, not a finish line
- How AI-driven security tools reduce noise but still require human oversight
- Why communication with executives matters as much as technical controls

In This Episode
Brett walks through the reality of stepping into an organization that had recently paid a ransom and was still recovering from operational and cultural fallout. He explains how legacy systems, siloed ownership, and the absence of a long-term IT vision created an environment where a single phishing click could cripple the business. Rather than focusing on surface-level fixes, Brett prioritized rebuilding structure, visibility, and accountability across systems and teams.

The conversation highlights a recurring challenge faced by many IT leaders: executive teams often view cybersecurity as a state that can be achieved and checked off. Brett pushes back on that assumption, emphasizing that security is an ongoing process shaped by constant threat evolution, user behavior, and organizational entropy. Tools like Darktrace and Varonis provided meaningful visibility and alert quality, but only when paired with trained staff and leadership engagement.

A key theme throughout the episode is communication. Brett shares a pivotal moment when leadership questioned why IT staff needed desks, revealing a fundamental misunderstanding of modern IT roles. That moment underscored why many organizations struggle with security maturity. Without executive clarity on what IT actually does, even strong technical programs can be undervalued or dismantled prematurely.
1/29/26 • 35:48
Guest Introduction
Jess Vachon is a three-time CISO, the founder of Vigilant Violet LLC, and the host of the Voices of the Vigilant Podcast. With a career spanning manufacturing, defense, robotics, software, healthcare, and global financial services, Jess brings a uniquely broad perspective to cybersecurity leadership. Her journey reflects a deep commitment to building security programs that balance technical rigor with human-centered leadership. Across every role, Jess has focused on developing resilient teams, pragmatic security strategies, and leaders who understand both risk and responsibility.

Here's a Glimpse of What You'll Learn
- Why diverse industry experience strengthens security leadership
- How human-centered leadership improves security outcomes
- Where AI helps security teams and where it creates new risk
- Why doing the basics well still matters more than new tools
- How AI can reduce user friction while improving protection
- What reasonable security looks like in an era of nation-state threats
- Why investing in teams delivers better long-term defense

In This Episode
Jess Vachon explains how her path to becoming a CISO was shaped by working across multiple industries and building security programs from the ground up. She shares how creating a full security program at a defense manufacturer helped confirm that security leadership was where she could make the greatest impact. That experience also reinforced her belief that hard problems with visible outcomes are the most rewarding.

The conversation explores the role of AI in modern security, with Jess emphasizing that productivity gains should not come at the expense of people. She challenges the idea that AI should simply replace staff and instead argues for using it to increase effectiveness, retain institutional knowledge, and reduce unnecessary friction for employees. Her perspective reframes AI as a tool that supports humans rather than one that sidelines them.

Jess and Matthew also discuss why security tools must be purpose-built rather than bolted on with buzzwords. Using real-world examples, she explains how machine learning can quietly protect users by understanding behavior and stopping threats before employees even see them. This approach reduces blame, improves trust, and shifts security closer to being invisible but effective.

The episode closes with a powerful leadership discussion shaped by Jess's Marine Corps experience. She shares how military service taught her to lead under pressure, maintain perspective during crises, and focus on outcomes without losing sight of people. That mindset continues to inform how she views risk, response, and the responsibility of modern security leaders.
1/27/26 • 46:15
Guest Introduction
Kara Schlageter is a cybersecurity executive with a career that bridges human resources, technology, and security leadership. Formerly Deputy CISO at First Citizens Bank, she brings a rare perspective shaped by early consulting experience, large-scale transformation work at Bank of America, and deep exposure to identity and access management. Her path into cybersecurity began not with firewalls or endpoints, but with people, culture, and organizational change. Today, Kara is known for advocating a human-centered approach to cybersecurity that treats leadership, empathy, and ethics as core security controls.

Here's a Glimpse of What You'll Learn
- Why cybersecurity failures are driven more by people than by technology
- How an HR background can strengthen security leadership
- Why culture and empathy are critical security enablers
- How AI should complement human judgment rather than replace it
- The ethical risks of AI adoption without governance
- Why risk tolerance and values must guide technology decisions
- How leadership roles like the CISO are evolving beyond technical expertise

In This Episode
Kara Schlageter explains why cybersecurity must be demystified and understood as a human problem first. She challenges the common perception that security is primarily about tools, arguing instead that breaches happen because of human behavior, incentives, and culture. Her background in HR allows her to view cybersecurity through the lens of motivation, trust, and organizational design rather than purely technical controls.

She shares how her career evolved through consulting, identity and access management, and large-scale transformation at Bank of America. While helping organizations grow rapidly, Kara learned that hiring decisions, culture, and leadership alignment matter as much as technical skill. That experience shaped her belief that understanding people is a force multiplier in cybersecurity.

The conversation also explores AI and its growing role in both security and leadership. Kara emphasizes that AI is a powerful tool, but one that must be governed carefully. She stresses the importance of transparency, ethical use, and intentional guardrails, especially as organizations rush to adopt AI-driven capabilities without fully understanding long-term risk.

As the discussion turns toward leadership, Kara outlines how the CISO role is changing. Modern security leaders must communicate risk in business terms, define culture, and align technology decisions with organizational values. Technical expertise still matters, but it is no longer sufficient on its own. The future of cybersecurity leadership belongs to those who can balance innovation with humanity.
1/23/26 • 47:06
Guest Introduction
William O'Connell serves as the Information Security Officer at VHC Health, a hospital system based in Arlington, Virginia, just outside Washington, DC. With more than seven years at the organization, O'Connell was brought in to help jump-start and mature the healthcare system's cybersecurity program. His background spans network engineering, firewalls, VPNs, and early infrastructure security, giving him a practitioner's perspective on how security has evolved from perimeter defense to continuous risk management. Today, his work focuses on balancing patient care, operational access, and modern security controls in one of the most complex and regulated environments in IT.

Here's a Glimpse of What You'll Learn
- Why zero trust should be treated as an ongoing strategy rather than a finished project
- How hospital security mirrors physical access control in real-world healthcare settings
- Where AI adds value in cybersecurity and where it introduces new risks
- Why agentic AI still requires strong human oversight
- How CISOs should evaluate AI tools in regulated environments like healthcare
- The importance of governance and third-party risk assessment for AI adoption
- Why storytelling matters when communicating security metrics to executive leadership

In This Episode
William O'Connell explains that zero trust is often misunderstood as a project with an end date, when in reality it is a guiding security concept that requires continuous improvement. He uses a healthcare analogy to clarify the idea, explaining that hospitals must allow access to many people while still protecting highly sensitive areas. The same principle applies to digital environments, where access must be intentional, segmented, and constantly reviewed.

The conversation also explores the role of AI in modern security operations. O'Connell shares how healthcare organizations must carefully assess AI tools to ensure patient data is not exposed or reused in unintended ways. While AI can dramatically improve visibility and response time, he cautions against blindly attaching large language models to every system without understanding the risks, including prompt injection and unintended data exposure.

As the discussion turns to agentic AI, O'Connell highlights both the promise and the concern. Automation can reduce repetitive tasks and improve efficiency, but it also removes traditional learning paths for junior staff and introduces trust challenges when AI is given autonomy. He emphasizes the importance of maintaining a human in the loop and applying zero trust principles even to AI-driven systems.

The episode closes with practical leadership insight on reporting and communication. O'Connell stresses that security leaders must translate metrics into stories that resonate with executive teams. Data alone is not enough. Clear narratives tied to business outcomes are what drive understanding, alignment, and investment in cybersecurity initiatives.
1/21/26 • 40:56
Guest Introduction
Dan Meacham serves as Vice President of Cyber and Content Security at Legendary Entertainment, a global film and television production company behind some of the most recognizable franchises in modern media. In his role, Dan is responsible for securing not only traditional enterprise systems, but also the creative content, intellectual property, and complex supply chains that power large-scale movie and television production. His work spans cyber defense, digital forensics, vendor risk, and emerging AI-driven security models in an industry where collaboration extends far beyond corporate boundaries.

Here's a Glimpse of What You'll Learn
- Why securing a movie studio is fundamentally different from securing a traditional enterprise
- How content production relies on thousands of external collaborators and temporary environments
- The role of digital forensics and watermarking in protecting unreleased media
- How sophisticated attackers target individuals through social engineering and custom applications
- Why AI-driven analytics are essential for threat detection at massive scale
- How long-term log retention enables rapid decision making during incidents
- What shared learning intelligence could mean for the future of security operations

In This Episode
Dan Meacham explains how Legendary's business model reshapes cybersecurity strategy. Each film or television project operates like its own company, complete with a unique technology stack, vendor ecosystem, and lifecycle. Security must adapt quickly to environments that appear and disappear over months or years.

He walks through the realities of protecting creative content across the production pipeline. From dailies and post-production workflows to global distribution, large media files are constantly replicated, shared, and transformed. Watermarking, steganography, and forensic techniques play a critical role in tracing leaks back to their source.

The conversation highlights how attackers exploit human behavior rather than systems alone. Dan shares real-world examples where threat actors built targeted applications to extract photos from personal devices, demonstrating how deeply personal and contextual modern attacks have become.

Dan also outlines how AI and machine learning have long existed in both filmmaking and cybersecurity. Today's challenge is not adopting AI, but governing it across devices, platforms, and supply chains. He introduces the concept of shared learning intelligence as a way to aggregate insights from multiple AI systems without centralizing sensitive data.

The episode closes with a discussion on scale and speed. By retaining over a decade of security logs, Dan's team can quickly identify anomalous behavior and shut down access before damage spreads. AI accelerates analysis, but human accountability remains central to every decision.
1/19/26 • 58:07
Guest Introduction David Mashburn serves as Chief Information Security Officer at Embry-Riddle Aeronautical University, one of the world's leading institutions focused on aviation, aerospace, and applied engineering. With residential campuses in Florida and Arizona alongside a large global online population, Embry-Riddle operates in a highly complex technology and security environment. David oversees cybersecurity across academic, research, and administrative systems, balancing innovation, safety, and operational resilience. His background spans enterprise security, incident response, and leadership roles in both higher education and large scale commercial environments, giving him a pragmatic perspective on how security must enable the mission it protects. Here's a Glimpse of What You'll Learn: • Why higher education security resembles a large scale Zero Trust environment by design • How AI in cybersecurity is an evolution of long standing machine learning practices • The challenges of securing unmanaged student and faculty devices at scale • Why governance and guardrails matter more than outright restriction • How identity and behavior drive modern security decisions • Where AI can accelerate analysts without replacing human accountability • How leadership and coaching experience shapes effective security teams In This Episode David Mashburn explains how Embry-Riddle's aviation focused mission creates unique security requirements. With flight training, aerospace research, and global online education, systems must remain available and trusted at all times. Security exists to support learning and operations rather than slow them down. He shares why AI in cybersecurity should be viewed as a natural progression of existing analytics. From SIEM platforms to cloud security tools, machine learning has been embedded in security workflows for years. The current wave of AI expands scale and speed while introducing new governance considerations. 
The conversation dives deep into Zero Trust principles as a practical necessity. With thousands of unmanaged devices accessing university systems daily, security decisions rely on identity verification, behavior analysis, and continuous monitoring instead of network location. David also discusses the balance between automation and accountability. While AI can reduce analyst workload and surface insights faster, final decisions must remain human. Automation supports judgment but does not replace responsibility. The episode closes with David's career journey, from early exposure to technology through his family, to coaching athletics, to enterprise security leadership. He explains how coaching shaped his leadership philosophy and how those lessons translate directly into managing security teams under pressure.
1/16/26 • 51:51
Guest Introduction Chris McCay serves as Vice President for Corporate Infrastructure at Brailsford and Dunlavey, a national program management and development advisory firm supporting higher education institutions, municipalities, sports organizations, and K-12 districts. In his role, Chris oversees IT, corporate real estate, facilities operations, and internal administration. His career path into technology leadership was nontraditional, beginning as a music major before moving through hardware, networking, and business operations. Over nearly two decades at Brailsford and Dunlavey, Chris progressed from IT manager to director and ultimately into an executive role that reflects how infrastructure leadership now spans people, technology, and physical space. Here's a Glimpse of What You'll Learn: • How corporate infrastructure expanded beyond traditional IT after hybrid work became permanent • Why facilities, real estate, and technology now operate as one system • What it takes to transition from managing tasks to developing people • How AI should function as an ideation and productivity tool rather than a replacement • Why recognition and culture matter as much as compensation • How career growth often requires leaving and sometimes returning • Why startups may offer long term opportunity for early career technologists In This Episode Chris McCay explains how hybrid work reshaped corporate infrastructure by forcing technology and physical operations to function together. With teams distributed across offices, homes, and client sites, systems must work consistently regardless of location. This reality led to the convergence of IT, facilities, and real estate under a single leadership model. He shares his unconventional career journey, moving from music and creative interests into defense contracting, IT support, and eventually executive leadership. 
Chris reflects on how early exposure to customer service and technical fundamentals shaped his management style and helped him guide others through non linear career paths. Leadership development emerges as a central theme. Chris discusses the challenge of helping team members grow, even when growth may lead them outside the organization. He emphasizes the importance of honest conversations about career direction, compensation, and long term fulfillment. The conversation closes with a practical discussion on AI adoption. Chris explains how Brailsford and Dunlavey uses AI as a starting point for learning, analysis, and internal tools while maintaining human accountability. He reinforces that AI works best as a companion that enhances judgment rather than replacing it.
1/12/26 • 39:47
Guest Introduction Malcolm Blow serves as Chief Information Security Officer at Bowie State University, where he leads cybersecurity strategy for a complex higher education environment that includes students, faculty, research programs, and public sector obligations. With nearly three years running the cyber program at Bowie State, Malcolm is responsible for protecting institutional data while preserving the academic openness that defines university life. His background spans more than a decade in federal cybersecurity operations across defense, intelligence, and scientific agencies, experience that directly informs his pragmatic approach to risk, governance, and executive decision making. In addition to his university role, Malcolm is the founder of Quantiuum, where he advises organizations on translating technical risk into executive and board level understanding. Here's a Glimpse of What You'll Learn: • Why higher education security resembles managing a small city • How universities balance open access with cybersecurity controls • Where AI fits into modern security operations and governance • Why human oversight remains essential in regulated AI use cases • How privacy laws shape AI adoption in public institutions • What provable compliance looks like in higher education • How the CISO role is evolving into a business enabling function In This Episode Malcolm Blow outlines the unique cybersecurity challenges facing universities, where thousands of students connect multiple personal devices to campus networks every day. Unlike traditional enterprises, higher education must secure faculty and staff systems while simultaneously supporting student access, research freedom, and academic experimentation. Malcolm explains how isolating environments by use case allows institutions to manage risk without disrupting learning or innovation. 
The discussion moves into artificial intelligence and cybersecurity, where Malcolm emphasizes that AI is no longer optional for organizations trying to compete and defend themselves. He explains that technology is often the fastest lever to pull in environments constrained by limited budgets and staffing. At the same time, regulatory requirements around privacy and AI use demand careful implementation, particularly in public sector and educational settings. Malcolm shares real examples of where AI systems can create unintended consequences when human oversight is removed. From admissions decisions to security monitoring, he explains why having a human in the loop is often required to meet regulatory expectations and avoid reputational or legal harm. Compliance alone is not sufficient if systems are not designed with accountability and context. The conversation concludes with an in depth look at how the CISO role has changed. Malcolm describes the shift from security as a blocking function to security as a strategic partner. Today's CISO must translate cyber risk into business terms, guide executive decision making, and enable the organization to move faster while staying within defined risk tolerance. Note: The views stated by Malcolm are his own and do not reflect those of his current or previous employers.
1/8/26 • 32:35
Guest Introduction Simon Lara serves as Chief Information Officer at SPS Poolcare, one of the fastest growing service organizations in the United States. In just four years, SPS Poolcare has grown into the largest pool service provider in the country by applying enterprise grade technology, disciplined execution, and a strong focus on local customer relationships. Simon leads technology strategy across the organization, overseeing platform design, data, security, and operational systems that support rapid scale. His background is unconventional, blending deep technology leadership with formal training in sports coaching and sports psychology, a perspective that strongly shapes how he builds teams, culture, and performance inside the business. Here's a Glimpse of What You'll Learn: • How SPS Poolcare scaled nationally in four years using technology as a competitive advantage • Why enterprise systems matter in traditionally fragmented service industries • How AI is being used to generate revenue instead of only cutting costs • The role of data driven insights in operations, maintenance, and hiring • Why customer relationships stay local even as the company grows • How sports psychology directly influences IT leadership and team performance • Why team chemistry matters more than traditional IT service metrics In This Episode Simon Lara explains how SPS Poolcare approaches growth differently by treating technology as a core business engine rather than a support function. From supply chain to routing, reporting, and customer engagement, enterprise systems allow small, locally run pool businesses to operate with scale and consistency without losing personal relationships. He emphasizes that technology alone is not enough, and that trust, familiarity, and continuity at the local level remain critical to customer retention. The conversation shifts into how SPS uses AI across the organization, particularly in reporting and forecasting. 
Simon describes an internal platform called Clarity that allows teams to ask questions directly of their data, uncovering insights around pool maintenance, vehicle servicing, chemical usage, and hiring. Rather than focusing first on cost reduction, the company prioritizes AI initiatives that support revenue growth, faster decisions, and improved customer experience. Security and technology selection are framed as balance decisions. Simon outlines how SPS evaluates AI first tools alongside established platforms, weighing innovation against resilience and experience. The goal is not to chase trends, but to build a durable, flexible security and technology portfolio that supports long term growth. A defining portion of the discussion centers on leadership philosophy. Drawing from more than two decades of basketball coaching, Simon applies sports psychology principles to corporate IT leadership. He explains how team chemistry, shared goals, and mutual accountability outperform traditional service models built on tickets, SLAs, and customer service metrics. By aligning incentives and encouraging teams to care more about each other's success than individual outcomes, SPS fosters a culture designed to win, not simply operate.
1/5/26 • 48:24
Guest Introduction Olivia Phillips is the founder of Wolfbyte Technologies, an AI focused consulting firm that helps organizations understand where artificial intelligence truly fits within their existing technology and security foundations. In addition to leading Wolfbyte Technologies, Olivia serves as Vice President of the USA Chapter for the Global Council for Responsible AI, where she works alongside global stakeholders to promote structured, ethical, and secure AI adoption. With a background spanning cybersecurity, intelligence, and hands on operations, Olivia brings a practical and security minded perspective to conversations that are often dominated by hype. Her work consistently centers on preparedness, responsible implementation, and protecting people as technology accelerates. Here's a Glimpse of What You'll Learn: • Why AI should be layered onto a strong foundation rather than rushed into production • How self learning AI differs from large language models in security use cases • Why responsible AI requires structure, governance, and human oversight • How deepfakes and AI driven fraud are impacting real people today • Why separation of systems and access still matters in a highly automated world • How AI can support security teams without replacing human judgment • What aspiring professionals should understand about careers, certifications, and networking In This Episode Olivia Phillips explains why many organizations are approaching AI backwards by focusing on tools before understanding their own environments. She describes how Wolfbyte Technologies helps clients inventory assets, understand dependencies, and ensure foundations are stable before introducing AI. Without that groundwork, she warns that AI can amplify existing weaknesses rather than solve problems. The conversation dives deeply into AI and cybersecurity, particularly the difference between self learning machine learning systems and large language models. 
Olivia outlines why self learning systems are better suited for threat detection, while LLMs introduce risks such as hallucinations and prompt injection. She emphasizes that AI should reduce analyst workload, not create more busy work or new attack paths. As Vice President of the Global Council for Responsible AI USA Chapter, Olivia shares real world examples of AI misuse, including deepfakes targeting family members. She stresses that responsible AI means placing structure around how systems are built, accessed, and monitored. Throughout the episode, she reinforces that technology alone cannot solve trust issues and that verification, separation, and human awareness remain essential.
12/29/25 • 55:05
Guest Introduction Bryan Tomczyk serves as a Cybersecurity Engineer at GP Strategies Corporation, where he works closely with senior IT and infrastructure teams to secure systems across a large, global organization. GP Strategies operates primarily as a training and professional services company, supporting clients across multiple countries and industries. Bryan's role places him at the intersection of security engineering, vendor risk management, and user education, with a strong emphasis on enabling the business rather than obstructing it. His background reflects a long term evolution into cybersecurity, shaped by decades of security focused thinking before formally entering a cyber role. Here's a Glimpse of What You'll Learn: • Why cybersecurity must be embedded into every role, not isolated to IT teams • How security advocacy grows organically through education and experience • The real risks of AI adoption without proper guardrails • Why large language models are not a complete solution for security • How supply chain risk has become one of the biggest threats to organizations • What secure by design actually looks like in modern environments • Practical considerations for evaluating AI tools and SaaS vendors In This Episode Bryan Tomczyk explains why the idea that security is everyone's job only works when organizations invest in education and context. He describes how working directly with users, especially after incidents, creates awareness that policies alone cannot achieve. Security, in his view, must enable productivity while quietly reducing risk in the background. The conversation dives deep into AI and cybersecurity, with Bryan outlining why machine learning excels at correlating massive volumes of data but struggles when used without constraints. He cautions against treating large language models as universal solutions, noting their susceptibility to hallucination, prompt injection, and misuse. 
Instead, he advocates for narrowly scoped, self learning systems that are heavily restricted in access. Bryan also addresses the growing complexity of modern environments, from email security and MFA fatigue to operational technology and supply chain risk. He highlights why vendor reviews, SOC 2 reports, and infrastructure transparency are no longer optional. Throughout the discussion, he reinforces a consistent theme that security must evolve thoughtfully, balancing innovation with responsibility to protect users, data, and operations.
12/23/25 • 46:46
Guest Introduction Zach Lewis serves as both CIO and CISO at the University of Health Sciences and Pharmacy in St. Louis, bringing nearly a decade of experience across engineering, systems administration, help desk leadership, and executive IT leadership. He oversees technology operations and cybersecurity for one of the oldest pharmacy institutions in the United States, balancing academic continuity, research integrity, and institutional resilience. Zach is also the author of the upcoming book Locked Up: Cybersecurity Threat Mitigation, Lessons from a Real World LockBit Ransomware Response, which documents a firsthand ransomware incident and the leadership decisions required to navigate it. His perspective blends technical depth with lived experience under real pressure. Here's a Glimpse of What You'll Learn: • What actually happens inside an organization during a LockBit ransomware attack • Why incident response planning looks very different in practice than on paper • How leadership stress, decision making, and communication shape outcomes • Why recovery and resilience matter more than the illusion of prevention • How tabletop exercises help but still fail to predict real world chaos • What CISOs should expect emotionally, operationally, and politically during an incident • Why transparency and shared learning are still rare but critically needed • How post incident investments and tooling decisions should be evaluated In This Episode Zach Lewis walks through the ransomware incident that ultimately inspired his book. The attack began with system outages that initially looked like aging infrastructure failures during a period of delayed hardware refreshes caused by supply chain issues. After briefly restoring systems, the environment collapsed again, revealing a ransomware note at the hypervisor level. By that point, core files had been encrypted, leaving little opportunity for traditional endpoint or EDR controls to intervene. 
Zach explains the rapid shift from disaster recovery to full incident response. External forensics teams, negotiators, cyber insurance, legal counsel, and federal authorities were brought in while the university worked to remain operational. Thanks to a SaaS first strategy adopted prior to the incident, students and faculty were largely unaffected, even as backend systems were rebuilt. Full recovery and remediation took nearly two months, with teams working long hours under extreme pressure. A central theme of the conversation is the human side of ransomware. Zach describes the stress placed on leadership, the emotional toll on staff, and the importance of remaining calm when others are overwhelmed. He emphasizes that CISOs are not hired to prevent every incident, but to respond, recover, and lead through uncertainty. Clear communication with executives, boards, and end users became just as important as technical recovery. Zach also discusses why he chose to write Locked Up. Ransomware incidents are often hidden due to legal and reputational concerns, leaving practitioners without real guidance. By openly documenting what happened, including mistakes and lessons learned, Zach aims to provide a practical framework for others who will inevitably face similar events. He closes with advice on incident response planning, out of band communication, backup testing, password manager access, and the value of pre established relationships with the FBI and CISA.
12/15/25 • 48:31
Guest Introduction Andrew DeBratto, Chief Information Security Officer at Hunton Andrews Kurth LLP, leads cybersecurity strategy for one of the world's top 100 law firms. With more than 25 years in IT and two decades in the legal sector, Andrew combines operational discipline with forward-thinking innovation. His leadership at Hunton Andrews Kurth emphasizes cybersecurity as both a client obligation and a business enabler. Guiding a global IT team of more than 90 professionals, he champions "operational excellence" as the foundation for secure innovation. His practical insights reveal how large legal organizations can maintain stability while exploring emerging technologies like AI, automation, and micro-segmentation. Here's a Glimpse of What You'll Learn: • Why operational excellence is the foundation of every successful IT department • How Hunton Andrews Kurth builds trust through proactive cybersecurity practices • The role of ethical AI use in the legal industry • Why attitude and aptitude outweigh certifications in IT hiring • How the firm applies micro-segmentation and zero trust principles effectively • Why lawyers must remain human-in-the-loop when using AI tools • How innovation and practicality coexist in modern law firms In This Episode: Andrew DeBratto shares an inside look at how Hunton Andrews Kurth balances cybersecurity, innovation, and productivity across its global operations. He explains that "keeping the lights on" through operational excellence creates the foundation for innovation. When systems run smoothly and attorneys can focus on their clients, IT earns the credibility to explore transformative projects like AI integration and advanced endpoint protection. Andrew dives into the realities of cybersecurity in the legal sector, where firms are prime targets for sophisticated threat actors. Hunton Andrews Kurth conducts regular penetration tests and tabletop exercises not for compliance, but for genuine improvement. 
"Find the flaws," Andrew insists, emphasizing that vulnerability detection drives resilience. His team uses a best-of-breed approach, prioritizing specialized tools that deliver depth of security over one-size-fits-all platforms. The discussion also explores AI's growing influence on legal practice. Andrew acknowledges its potential but insists that every AI implementation at the firm is bound by responsible-use training. Attorneys must complete ethical certification before using any generative AI platform. "You are still responsible for your work," he reminds listeners, underscoring that human judgment must remain central even as technology accelerates productivity. Later in the conversation, Andrew highlights the firm's AI strategy, which blends internal development on Microsoft Azure OpenAI with external best-of-breed tools. Rather than chasing every new platform, the firm uses a "buffet approach," allowing experimentation without overspending. AI, he notes, is still in its exploratory phase, and meaningful productivity gains will come only when the right tools align with specific workflows. On leadership, Andrew emphasizes hiring for attitude and aptitude. Technical skills can be taught, but curiosity, collaboration, and integrity are essential. His philosophy has built a team that is both technically capable and deeply aligned with the firm's mission of trust, innovation, and client service.
12/8/25 • 44:21
Guest Introduction Rao Tadepalli is the CEO and Founder of DigiTran, a digital transformation and AI advisory firm specializing in insurance and financial services. Previously the CIO of Slide Insurance, Rao has spent decades guiding insurers through modernization, core system evolution, cloud adoption, and AI driven process redesign. Today he helps carriers, agents, and insurtechs move from legacy workflows to a forward looking operating model that blends automation, human expertise, and strong governance. His background gives him a rare perspective that combines deep technical knowledge, board level thinking, and a practical grasp of the challenges faced by regulated industries. Here's a Glimpse of What You'll Learn: • How AI accelerates claims processing for insurers while preserving the human in the loop for complex cases • Why AI is creating new job categories such as prompt engineering instead of simply eliminating roles • How DigiTran guides carriers through digital transformation and modernization of core systems • Why financial services require both safety mindset and compliance mindset at the leadership level • How AI powered security tools reshape detection and response in a high threat environment • Why layered security, policies, procedures, and end user training must work together • How leadership perception of IT needs to shift from cost center to value creation team • Why communication, visibility, and proactive reporting help CIOs gain influence across the business In This Episode Rao opens by explaining DigiTran's mission: helping insurance organizations evolve from legacy systems into modern, AI supported operating environments. He outlines why insurance is uniquely sensitive to modernization cycles given the regulatory landscape, the importance of claims accuracy, and the constant need for faster service for policyholders. Rao describes how AI shines in straightforward claims workflows, especially situations where outcomes are predictable and repeatable. 
At the same time, he emphasizes that high complexity claims still demand human involvement, empathy, and judgment. The conversation shifts to workforce evolution. Rao details how AI does not eliminate people, but pushes organizations to retrain and rethink skill development. He explains why prompt engineering is becoming a necessary capability for future professionals and shares how he created a promptathon that taught students how to approach prompts systematically. His lesson is simple and powerful: as technology changes, the workforce must adapt in ways that preserve value, not shrink it. Rao and Matthew then explore AI's growing influence on security. Rao highlights why traditional rule based approaches cannot keep up with sophisticated threat actors who use AI to enhance phishing, social engineering, and lateral movement. He explains why companies must deploy AI powered detection tools, implement strict procedures, and train end users repeatedly to close the weakest link. His examples include major cyber incidents impacting insurers and how downtime directly affects revenue and operational stability. Leadership is a key theme throughout the episode. Rao shares a story from his early career about how CEOs once viewed technology as simply the equipment department. This motivated him to change leadership perception and demonstrate IT's strategic value. His advice to CIOs and CISOs is clear: communicate wins, translate technical work into business outcomes, engage executives proactively, and shape organizational safety culture. Technology leaders must speak the language of the business and present themselves as contributors to revenue, efficiency, and protection. The episode concludes with Rao's forward looking vision for the future of programming and AI. He describes his concept of NTH Generation Programming, a shift toward natural language interfaces that eliminate the need for traditional coding structures. 
For Rao, this is not an evolution but a revolution that will transform how systems are built, maintained, and optimized across industries.
12/8/25 • 30:29
Guest Introduction: CJ Covell is the Chief Information Officer at Everlast Roofing, a family owned American manufacturer specializing in metal building components used in residential, commercial, industrial, and agricultural construction. Since its founding in 1996, Everlast Roofing has expanded across multiple states, producing metal roofing and siding that power everything from pole barns to modern residential builds. CJ grew up inside the company, often learning technology alongside its evolution, and eventually developed a leadership style that blends hands on understanding with strategic direction. Today, he oversees technology, systems, process improvement, and digital transformation across a fast growing manufacturing footprint. Here's a Glimpse of What You'll Learn: • How Everlast Roofing scaled from a small family business to a multi state manufacturer • Why CJ believes technology should serve as a force multiplier for human ability • How AI is transforming warehouse operations, logistics, and ERP workflows • Why understanding the user experience is the foundation of great system design • How Everlast used ChatGPT and Cursor to build a production ready warehouse system in weeks • Why communication tools like Zoom and good audio equipment are essential for trust and connection • How strong vendor relationships affect long term technology outcomes • Why future leaders must continually experiment with AI to avoid falling behind In This Episode: CJ Covell shares the origin story of Everlast Roofing and explains how a family business adopted technology from the earliest stages of the internet. Many longtime employees received their first email address through Everlast, which created a unique challenge as the company transitioned from simple office servers to modern systems requiring structured access control and disciplined IT strategy. 
CJ reflects on growing up inside the organization, helping solve computer issues as a child, and watching technology become a business critical function. A major theme of this episode is the acceleration of AI and its ability to amplify human capability. CJ describes Everlast's challenge of managing a massive coil warehouse with thousands of steel coils and new employees lacking historical knowledge. Instead of hiring outside consultants or purchasing a costly logistics system, CJ and his team used ChatGPT to generate system specifications, ask context building questions, and outline a custom warehouse solution. Within three weeks, his team built a working application using Cursor that now allows any employee with a phone to find coils, scan barcodes, update information, and perform tasks with confidence. What would have taken six months to a year with traditional consulting was completed internally with greater accuracy and far lower cost. CJ also discusses the importance of deep user empathy. He spent days performing warehouse tasks himself to understand friction points and workflow issues. By capturing every moment of friction and turning it into actionable design requirements, the team created a solution that improves decision making and eliminates guesswork. CJ emphasizes that most people do not make mistakes intentionally; they simply lack the right information at the right time. Technology becomes transformative when it removes barriers rather than creating new ones. The conversation shifts toward communication and the role technology plays in building connection. CJ explains why tools like Zoom outperform other platforms and how simple investments in lighting, microphones, and camera placement create human centered virtual interactions. He even uses a teleprompter setup so his eyes align directly with the viewer, creating natural eye contact and improving trust. 
CJ points out that companies often resist small investments in communication technology despite spending thousands on travel for a single meeting. He argues that communication quality is the modern equivalent of showing up well dressed and prepared for an in person conversation. CJ closes with a reflection on the future of AI and security. He notes that threat actors now use AI to mimic writing styles, create sophisticated phishing attacks, and exploit email weakness. As businesses rely heavily on email, AI driven threats force organizations to adopt AI powered defenses. Beyond security, CJ believes the rapid acceleration of AI means leaders must continually experiment, learn, and adapt. Falling behind even briefly could create a widening gap that becomes impossible to close.
12/3/25 • 43:20
Guest Introduction

Lemon Williams serves as the Chief Information Security Officer at Pine Gate Renewables, one of the nation's leading utility-scale solar power developers and operators. With a background spanning Y2K-era infrastructure, consulting, critical asset protection, and modern cybersecurity leadership, Lemon brings a rare blend of technical depth and operational awareness. He oversees both security and IT operations for a rapidly growing renewable energy organization that manages solar plants across 33 states. His experience navigating regulatory pressure, data concentration risks, operational resiliency, and AI-enabled security tools gives him a comprehensive perspective on what security looks like in the evolving energy sector.

Here's a Glimpse of What You'll Learn

- Why renewable energy companies face unique risks tied to data concentration and flat organizational structures
- How combining IT operations and security leads to a resiliency-focused model instead of a reactive cybersecurity model
- Why mid-sized companies must treat every user as part of the security function
- How AI-enabled tools can automate micro-level adjustments and strengthen security posture
- Why data sharing with third parties expands breach exposure even if your own system remains uncompromised
- How to build better relationships with users through education instead of enforcement
- Why role-based access control must evolve when employees wear multiple hats
- How the CISO role is shifting toward business partnership, internal consulting, and revenue protection

In This Episode

Lemon Williams explains why Pine Gate Renewables carries the same responsibilities as major utilities despite having a fraction of the staff. With a lean structure and flat teams, the company must carefully manage privilege, role combinations, and data concentration.
Lemon outlines how a single compromised account in a mid-sized organization can have wider consequences than in a highly compartmentalized enterprise, which creates the need for a more deliberate approach to access control.

A major theme of the conversation is the convergence of security and IT operations. Lemon shares how his teams merged into a single organization focused on resiliency rather than traditional cybersecurity boundaries. He explains that every role touching technology inevitably touches security, and that the organization functions better when analysts, sysadmins, and support staff think through the same lens. This shift allows Pine Gate Renewables to prevent issues earlier and support smooth operations even when incidents occur.

Lemon also dives deep into the challenges of data sharing across partners, vendors, legal teams, compliance groups, and internal departments. He describes how companies often underestimate how much sensitive information flows through routine work and why a third-party breach can expose years of shared data. His team spends significant time understanding how information moves, what truly needs to be shared, and how to reduce unnecessary exposure through redaction, alternative delivery channels, and better automation.

Education and partnership drive much of Lemon's security philosophy. Instead of playing the role that staff fear, he and his team focus on being approachable problem solvers who embed themselves with operational groups. By explaining concepts like multifactor authentication, encryption, and role-based controls in simple terms, they build trust and encourage employees to reach out early. This shift toward internal consulting has increased security's credibility and positioned the team as collaborators rather than blockers.

The second half of the episode explores AI-enabled security tools that can detect unusual behavior, adjust access in real time, and monitor user patterns. Lemon sees significant promise in these systems, especially in environments with limited staffing. Tools that make thousands of micro-adjustments per minute give teams more time for innovation, strategic planning, and measurable contributions such as reducing cyber insurance premiums. For Lemon, AI is not a threat but an accelerator that allows security teams to operate with greater precision and impact.
12/1/25 • 51:59
Guest Introduction

Mark Bentsen serves as the Chief Information Officer at CellGate Access Control Systems and is the Co-Founder of Secure IVAI, an artificial intelligence managed service provider. His career includes decades of experience in logistics, banking software, healthcare technology, and security engineering. Mark spent ten years at FedEx in technology roles before transitioning into software development, AI integration, and cybersecurity work across multiple industries. His combined background in physical security, AI adoption, and enterprise software gives him a unique perspective on how organizations can secure remote properties, implement AI safely, and prepare for the next generation of intelligent systems. Today, Mark leads technology strategy at CellGate while supporting clients through Secure IVAI as they adopt AI in a practical, scalable, and secure way.

Here's a Glimpse of What You'll Learn

- How CellGate provides full-stack access control using hardware, software, and cloud-managed systems
- Why cellular-to-cellular failover is one of the hardest engineering challenges in security devices
- How Secure IVAI helps small and medium businesses adopt AI safely and securely
- Why many businesses feel overwhelmed when choosing where to begin with AI
- How Mark uses frontier models like Claude to talk directly to years of operational data
- Why verifying AI outputs is essential for trust and long-term adoption
- How organizations can evaluate emerging AI products in a crowded market
- What the next phase of AI looks like as agentic systems accelerate

In This Episode

Mark Bentsen explains how CellGate solves one of the biggest problems in physical security: providing reliable access control in places where wired connections do not exist. CellGate devices operate in remote ranches, oil fields, and rural properties, relying entirely on cellular networks. Mark describes why switching between carriers is not as simple as choosing the strongest signal at any moment and why true cellular failover requires sophisticated engineering that most competitors have not mastered.

Mark also shares the origin of Secure IVAI, a managed service provider he co-founded with a longtime friend who served as a chief information security officer. Their goal was to help businesses adopt AI responsibly, building real-world solutions rather than theoretical prototypes. Mark explains how early reactions to AI ranged from skepticism to fear and why most companies struggled with one foundational question: where do we start? His work focuses on giving businesses a safe and structured entry point into AI adoption.

The conversation expands into how AI can be used today to query years of company data across tools like Fabric, Salesforce, and Jira. Mark describes how he asks natural language questions of millions of records and then verifies those results directly in the company's internal systems. He outlines how businesses can evaluate new AI products, why they should understand what a model was trained on, and how to test for reliability. He also explains why specialized models can outperform general-purpose tools when they are trained on narrow, domain-specific data.

Mark closes by discussing the future of agentic AI. True agents, he notes, are not simple workflow tools but systems capable of understanding goals, coordinating tasks, and making decisions with minimal oversight. With AI capabilities doubling roughly every seven months, Mark expects meaningful agentic systems to emerge within months, not years. He also emphasizes why professionals must develop horizontal awareness, stepping outside their own silo to drive business impact across the entire organization.
11/25/25 • 47:54
Guest Introduction

Glenn Rumfellow serves as the Chief Information Officer at Window World of Baton Rouge, part of the largest Window World operation in the United States. His career began with early exposure to programming on the TRS-80 and Apple II, followed by roles in mainframe programming, technical support, and extensive development work in Microsoft Access, SQL, and enterprise document imaging. Glenn joined Window World first as a consultant, then as CIO, and now leads the organization's technology strategy across four major markets. His work includes modernizing legacy systems, guiding cloud migrations, deploying AI-driven tools, and supporting operational efficiency in a business that completes tens of thousands of home installations each year.

Here's a Glimpse of What You'll Learn

- How Glenn transitioned from early BASIC and Pascal programming into enterprise technology leadership
- Why Window World is modernizing a long-standing Microsoft Access CRM and preparing for an Azure migration
- How data accuracy, reporting, and automation support a business completing tens of thousands of installations
- How AI-powered tools like Samsara and Reila support driver safety, coaching, and sales performance
- How Glenn built a natural language query interface using an LLM to help executives access data
- Why operational scale requires strong APIs, data structures, and continuous reporting discipline
- How Window World uses analytics to measure installers, sales reps, regions, and marketing sources

In This Episode

Glenn Rumfellow shares how he went from tinkering with early computers to leading technology for the largest Window World operation in the country. His background across mainframe systems, enterprise imaging platforms, and complex Access and SQL applications shaped his approach to designing reliable systems that scale with the business. He explains how a long-standing Access-based CRM supported the company for nearly two decades and outlines the ongoing transition to a modern web application backed by SQL and Azure services.

Glenn describes the level of data movement, automation, and reporting required when a company handles tens of thousands of installations each year. API integrations, structured reporting, and database mail have become essential to keeping the operation efficient and accountable.

Glenn also highlights how AI is already embedded in the business. The team uses Samsara for real-time driver safety alerts and video capture, and they recently adopted Reila to improve sales performance through coaching and analysis. In the IT department, AI tools assist with coding, documentation, and product research. Glenn even built a prototype LLM-powered query tool so executives can access operational data through natural language. He also shares how the team evaluates AI call agents and considers long-term opportunities for automation as the technology becomes more cost-effective.
11/24/25 • 39:44
Guest Introduction

Karly Burke serves as the Chief Information Officer at Zinkerz, a growing education technology company that has transformed from a simple mobile test prep platform into a full ecosystem for academic support, counseling, and intensive SAT and AP preparation. She entered the organization as a freelancer creating math content and gradually expanded her role through a combination of technical curiosity, instructional leadership, and a deep understanding of student performance data. Today she guides Zinkerz through major pivots in technology, student analytics, adaptive testing preparation, and program expansion while helping the company scale both its digital tools and its human-centered education model. Her background as a math educator, curriculum designer, and program architect gives her a unique viewpoint on how technology supports real learning and why personalization must remain central in online education.

Here's a Glimpse of What You'll Learn

- How Zinkerz transitioned from a fully automated SAT prep app to a hybrid education model centered on human instruction
- Why the combination of automation and personalization creates stronger outcomes for students
- How Zinkerz measures student performance and uses adaptive data to drive curriculum decisions
- What parents should understand about the return of SAT requirements across top universities
- The structure and philosophy behind Zinkerz counseling programs
- How Zinkerz summer camps deliver high-impact SAT score increases
- Why Karly's unique path from teacher to CIO shapes her leadership style
- How Zinkerz continues to innovate its platform to support educators and students worldwide

In This Episode

Karly Burke details how Zinkerz evolved from a mobile-only test prep platform into a multifaceted academic support system that blends technology with personalized instruction. She discusses the company's early attempt to automate SAT preparation entirely and why the team realized that students needed far more interaction with educators. This insight sparked the company's major shift toward online classes, counseling, and immersive summer programs.

She explains how Zinkerz gathers and analyzes student data to identify trends, pinpoint strengths and weaknesses, and deliver realistic adaptive testing that mirrors the current digital SAT experience. The conversation highlights the growing importance of tracking attendance, homework consistency, question-level analytics, and difficulty patterns to inform instruction in real time.

Karly also provides clarity on the national shift back toward SAT requirements. She outlines how many top universities, including Ivy League institutions, are reintroducing standardized test expectations and how families should approach exam planning. She breaks down the Zinkerz counseling model, which avoids a la carte programs in favor of full relational guidance built over several years.

The final section explores Karly's personal story. She shares her path from marketing to education, her eight-year teaching career, and the unexpected moment when a former student introduced her to Zinkerz. Her progression from freelance math question writer to CIO is presented with humility and authenticity. It is a clear example of how curiosity, initiative, and a willingness to solve problems create opportunities for advancement within a growing company.
11/18/25 • 30:15