Webinars
Secrets Management for AI Networks
16 January 2026 at 12:00pm ET | Virtual Event
Modern AI systems rely on complex, high-performance networks to move vast amounts of data, synchronize distributed training, and serve models at scale. These networks form the backbone of AI infrastructure, but they also introduce a rapidly expanding attack surface. With adversaries leveraging AI to exploit protocol weaknesses and implementation flaws, securing the network fabric has never been more critical.
This session explores Secrets Management for AI Networks, an often overlooked but essential discipline for safeguarding access, ensuring confidentiality, and maintaining the integrity of network devices and systems. We will unpack how secrets are used within AI infrastructure and the common pitfalls that lead to exposure. Attendees will gain insight into best practices for handling secrets securely across their lifecycle, and strategies to prevent, detect, and respond to breaches.
Guest Speaker:
Harsh Pathak is a Technical Program Manager at Meta, specializing in security for large-scale infrastructure systems. With experience spanning network security, platform resilience, and secrets management, he focuses on building secure foundations that enable AI and distributed computing at scale. Harsh combines a deep technical understanding with a practical approach to risk mitigation, helping bridge the gap between engineering and security strategy. He is passionate about advancing secure-by-design principles in next-generation networks.
This webinar is organized by the IEEE AI Hardware & Infrastructure Working Group.
Past Webinars
Access previous IEEE AI Coalition webinars on-demand below:
High-Performance Computing and Its Influence on Accelerating Low-Code and AI Transformation in Healthcare
26 November 2025 at 12:00pm ET | Virtual Event
High-Performance Computing (HPC) has evolved from government labs into a cornerstone of modern industries, powering breakthroughs in artificial intelligence, precision medicine, and large-scale data analytics. Its convergence with AI—often termed AI-HPC—has become particularly transformative in healthcare, where the demand for real-time insights, compliance, and scalability is accelerating the adoption of low-code platforms.
In healthcare IT, traditional transactional systems now face continuous streams of structured and unstructured data from electronic health records, claims, wearables, and research pipelines. By integrating HPC with low-code platforms, organizations can process these massive datasets in near real time, embed AI/ML models for predictive analytics and fraud detection, and even enable advanced simulations such as personalized treatment modeling—all through user-friendly, low-code interfaces.
This talk explores how HPC accelerates low-code healthcare transformation by enhancing data pipelines, enabling AI at scale, and improving patient experiences with faster, more proactive care delivery. It also addresses key challenges—scalability, sustainability, and complexity—and outlines solutions such as hybrid HPC strategies, energy-efficient architectures, and DevSecOps-driven compliance. The result is a new paradigm of human-centric supercomputing, where HPC’s immense computational power is democratized through intuitive, secure, and interoperable low-code platforms, empowering clinicians, researchers, and patients alike.
View On-Demand
Guest Speaker:
Harikrishnan Muthukrishnan, Principal IT Developer, Blue Cross Blue Shield of Florida
Over the past two decades, I’ve contributed to transformative IT initiatives across India, the UK, and the US, working with Fortune 500 clients to build, modernize, and optimize complex systems that drive business and operational excellence.
My journey began in Management Information Systems (MIS), where I developed a deep appreciation for the power of data-driven decision-making. From there, I expanded my expertise into Supply Chain Management, Retail Store Systems, HR People Systems, and Warehouse Management — gaining a holistic understanding of enterprise operations and the critical role of technology in driving efficiency and growth.
Over the last decade, I transitioned my focus to Healthcare IT, specializing in Pega PRPC architecture and administration, emphasizing system modernization, security, and performance optimization. My work in legacy system modernization, DevSecOps implementation, cloud migration, and AI/ML-driven automation has enabled healthcare organizations to streamline operations, strengthen security, and improve patient outcomes through more efficient, reliable, and scalable technology solutions.
To me, technology has always been about solving real-world problems—building resilient systems, fostering innovation, and empowering organizations to serve their communities more effectively. As the landscape evolves, I remain committed to exploring new frontiers in Healthcare IT, DevSecOps, and AI to drive meaningful, lasting change.
Beyond technical expertise, I am passionate about mentorship, collaboration, and community engagement. As a Senior Member of IEEE, a Forbes Technology Council Member, and an active contributor to BCS, ACM, and ADPList, I enjoy sharing knowledge and fostering professional growth.
I have had the opportunity to author industry articles, contribute to research papers, and speak at international conferences, discussing Healthcare IT modernization, DevSecOps best practices, and AI-driven enterprise solutions. My experience judging expert panels and participating in industry forums allows me to stay at the forefront of emerging trends and best practices.
My goal is to continue driving technology-driven healthcare solutions, leveraging secure, scalable, and efficient systems while mentoring the next generation of professionals in the field.
This webinar is organized by the IEEE AI Hardware & Infrastructure Working Group.
Everything Old Is New Again – FPCA (Field Programmable Compute Array):
Energy-efficient Tile-based at-memory AI Compute HW/Chip Architecture
15 October 2025 at 12:00pm ET | Virtual Event
View On-Demand
Guest Speaker:
Dr. Martin Snelgrove, CEO & Co-Founder, Hepzibah AI
Martin Snelgrove is a veteran semiconductor innovator, entrepreneur, and educator whose career spans both academia and industry. He is currently the CEO and Co-Founder of Hepzibah AI, a Toronto-based deeptech startup building next-generation tiled at-memory compute IP for ultra-efficient AI inference and light training. Hepzibah’s architecture rethinks how memory and compute interact at the silicon level — delivering performance-per-watt breakthroughs for edge and datacenter applications alike.
Martin previously served as co-founder, CEO, and CTO of Untether AI, where he helped pioneer custom AI accelerator architectures that pushed the boundaries of memory-centric compute. Earlier in his career, Martin co-founded multiple successful startups, including Kapik Integration (analog/mixed-signal design), Soma Networks, and Philsar (acquired by Conexant).
Before his entrepreneurial journey, Martin was a professor of Electrical and Computer Engineering at the University of Toronto and later held an NSERC Industrial Chair at Carleton University, where he worked closely with Ottawa’s thriving semiconductor ecosystem.
This webinar is organized by the IEEE AI Hardware & Infrastructure Working Group.
Efficient and Scalable AI Inference: Navigating the Challenges of Model Deployment at Scale
17 September 2025 at 12:00pm ET | Virtual Event
In machine learning, model deployment strategies are crucial for managing high-scale infrastructure, where the goal is to achieve efficient, scalable, and cost-effective inference. This session will cover the challenges involved in deploying models, both small and large, in heterogeneous environments where models use varying amounts of resources such as GPUs. The session will explore the complexities of orchestrating such a system, emphasizing that efficient GPU usage is a priority to prevent idling and wasted compute, while also serving inference requests quickly. The session will delve into suitable design approaches to navigate these complexities and aims to equip attendees with the knowledge to design their infrastructure for deploying and managing models effectively.
View On-Demand
Guest Speaker:
Bhala Ranganathan is a seasoned software engineer and technical leader, specializing in cloud services and distributed systems with a strong focus on data and AI infrastructure. He is currently a Principal Software Engineer and Tech Lead on the Azure OpenAI service runtime team, where he works on large-scale AI inferencing. Throughout his time at Microsoft, he has contributed to several impactful initiatives, including Azure Cosmos DB’s Multi-Master offering and core components of the Azure AI platform such as Feature Store and Model-as-a-Service. Beyond his technical accomplishments, he is a tech author who contributes articles on cloud services and infrastructure.
This webinar is organized by the IEEE AI Hardware & Infrastructure Working Group.