Naveh Grofi

Customer Success Engineer
NeuReality

 

Tina Bou-Saba

Founder
CXT Investments

 

Richard Henry

Partner
Sandbridge Capital

 

Sam Pritzker

Principal
TSG Consumer


Location: Room 207

Duration: 1 hour

Location: Room 206

Duration: 1 hour

 

Anna Doherty

Partner
G9 Ventures


The rapid evolution of high-performance computing (HPC) clusters has been instrumental in driving transformative advances in AI research and applications. These systems enable the processing of complex datasets and support groundbreaking innovation. However, as their adoption grows, so do the security challenges they face, particularly when handling sensitive data in multi-tenant environments where diverse users and workloads coexist. Organizations are increasingly turning to Confidential Computing to protect AI workloads, which underscores the need for robust HPC architectures with runtime attestation capabilities that ensure trust and integrity.

In this session, we present an advanced HPC cluster architecture designed to address these challenges, focusing on how runtime attestation of critical components – such as the kernel, Trusted Execution Environments (TEEs), and eBPF layers – can effectively fortify HPC clusters for AI applications operating across disjoint tenants. This architecture leverages cutting-edge security practices, enabling real-time verification and anomaly detection without compromising the performance essential to HPC systems.

Through use cases and examples, we will illustrate how runtime attestation integrates seamlessly into HPC environments, offering a scalable and efficient solution for securing AI workloads. Participants will leave this session equipped with a deeper understanding of how to leverage runtime attestation and Confidential Computing principles to build secure, reliable, and high-performing HPC clusters tailored for AI innovations.
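As a rough illustration of the appraisal step behind runtime attestation, the sketch below compares measurements reported by a node (kernel, TEE report, eBPF programs) against known-good references and rejects the node on any mismatch. The component names, digests, and report format are illustrative assumptions; a production verifier would consume signed, hardware-backed evidence (for example TEE quotes) rather than plain dictionaries.

```python
import hashlib
import hmac

# Known-good reference measurements (hypothetical digests, for illustration only).
REFERENCE = {
    "kernel_text": hashlib.sha256(b"trusted-kernel-image").hexdigest(),
    "tee_report": hashlib.sha256(b"trusted-enclave-build").hexdigest(),
    "ebpf_programs": hashlib.sha256(b"approved-ebpf-bundle").hexdigest(),
}

def appraise(report: dict) -> bool:
    """Accept a node only if every reported measurement matches its reference."""
    for component, expected in REFERENCE.items():
        measured = report.get(component)
        # compare_digest performs a constant-time comparison of the hex digests.
        if measured is None or not hmac.compare_digest(measured, expected):
            return False
    return True

# A node whose kernel measurement has drifted from the reference fails appraisal
# and can be fenced off before tenant workloads are scheduled onto it.
node_report = dict(REFERENCE, kernel_text=hashlib.sha256(b"tampered-kernel").hexdigest())
print(appraise(node_report))  # False
```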

Location: Room 201

Duration: 1 hour

Author:

Jason Rogers

CEO
Invary

Jason Rogers is the Chief Executive Officer of Invary, a cybersecurity company that ensures the security and confidentiality of critical systems by verifying their Runtime Integrity. Leveraging NSA-licensed technology, Invary detects hidden threats and reinforces confidence in an existing security posture. Previously, Jason served as the Vice President of Platform at Matterport, successfully launched a consumer-facing IoT platform for Lowe's, and developed numerous IoT and network security software products for Motorola.


Author:

Ayal Yogev

CEO & Co-founder
Anjuna


Dive into a hands-on workshop designed exclusively for AI developers. Learn to leverage the power of Google Cloud TPUs, the custom accelerators behind Google Gemini, for highly efficient LLM inference using vLLM. In this trial run for Google Developer Experts (GDEs), you'll build and deploy Gemma 3 27B on Trillium TPUs with vLLM and Google Kubernetes Engine (GKE). Explore advanced tooling like Dynamic Workload Scheduler (DWS) for TPU provisioning, Google Cloud Storage (GCS) for model checkpoints, and essential observability and monitoring solutions. Your live feedback will directly shape the future of this workshop, and we encourage you to share your experience with the vLLM/TPU integration on your social channels.
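As a taste of the workflow covered in the workshop, here is a minimal sketch of offline inference with vLLM's Python API. The checkpoint name, parallelism degree, and sampling settings are illustrative assumptions and do not reflect the exact GKE, DWS, or GCS configuration used in the session.

```python
# Minimal offline-inference sketch with vLLM's Python API. Assumes a vLLM
# build with TPU support is installed on the node (e.g. a GKE pod scheduled
# onto Trillium TPUs) and that the Gemma checkpoint is accessible.
from vllm import LLM, SamplingParams

prompts = [
    "Explain in one sentence what a TPU is.",
    "Why is batching important for LLM inference?",
]

# tensor_parallel_size is assumed to match the number of TPU chips visible
# to the pod; adjust it to your topology.
llm = LLM(model="google/gemma-3-27b-it", tensor_parallel_size=4)

sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

for output in llm.generate(prompts, sampling):
    print(output.prompt)
    print(output.outputs[0].text.strip())
```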

Location: Room 207

Duration: 1 hour

Author:

Niranjan Hira

Senior Product Manager
Google Cloud

As a Product Manager in our AI Infrastructure team, Hira looks out for how Google Cloud offerings can help customers and partners build more helpful AI experiences for users.  With over 30 years of experience building applications and products across multiple industries, he likes to hog the whiteboard and tell developer tales.
