
Microsoft and NVIDIA Join Forces to Safeguard AI Workloads in Azure


With the growing interest in artificial intelligence (AI), security professionals are increasingly focused on frameworks that foster innovation while ensuring comprehensive protection for sensitive data and models. Such frameworks aim to prevent data exfiltration, data poisoning, and other malicious actions emerging in the evolving AI landscape.

Challenges such as unintentional leaks of AI models trained on personally identifiable information (PII), users inadvertently providing sensitive data through generative AI prompts, and the use of AI for creating deepfakes or malicious exploits present significant hurdles for security teams. As executives work to create a robust security architecture suitable for the AI era, these scenarios underscore the necessity for enhanced preventive measures.

Creating an infrastructure that guarantees holistic protection for AI workloads is no small feat. Microsoft, in collaboration with NVIDIA, has developed a pioneering solution that integrates confidential computing into the Azure cloud platform, introducing Azure Confidential VMs powered by NVIDIA H100 Tensor Core GPUs.

The Confidential Computing Consortium, which works to accelerate confidential computing technologies and standards, defines the concept as protecting data in active use by performing computation within a hardware-based Trusted Execution Environment (TEE). This secure, isolated environment shields applications and data in use from memory attacks, including those that exploit vulnerabilities in the host operating system or hypervisor. That protection is particularly important given the sensitive data AI models often handle: organizations in regulated industries such as government, finance, and healthcare require strong assurances that their models and associated data remain inaccessible to unauthorized parties, including the cloud service provider itself.

According to Vikas Bhatia, Head of Product for Azure Confidential Computing at Microsoft, “The Azure confidential VMs with NVIDIA H100 GPUs deliver a complete, highly secure computing stack that extends from VMs to GPU architecture. This enables developers to create and deploy AI applications with the guarantee that their sensitive data, intellectual property, and AI models are safeguarded from end to end.”

Bhatia also noted, “This solution provides Azure customers with greater options and flexibility to run their workloads securely in the cloud, addressing critical privacy and regulatory concerns.”

Utilizing confidential computing with GPUs

Traditionally, encryption has protected data at rest and in transit. Azure Confidential VMs equipped with NVIDIA H100 GPUs add a third layer of protection, securing data and AI models while they are actively in use through a TEE that spans both the CPU and the GPU. Inside this environment, workloads are shielded from privileged attackers, including system administrators and others with physical access to the hardware, so all application code, models, and data remain continuously protected.
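The policy behind this third layer can be illustrated with a minimal, purely illustrative sketch: a key broker releases a workload's decryption key only to an environment whose attestation measurement matches an expected value. All names here are hypothetical, and in a real TEE (such as the AMD SEV-SNP hardware backing Azure confidential VMs) this check is enforced in hardware, not in Python.

```python
import hashlib
import hmac
import os

# Illustrative only: a toy key broker modeling "data protected while in use".
# The key for the workload is released only if the environment proves (via an
# attestation measurement) that it is running the expected trusted image.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-vm-image-v1").hexdigest()

def release_key(reported_measurement: str, master_key: bytes) -> bytes:
    """Release a derived workload key only to a trusted environment."""
    # Constant-time comparison, as is standard for secret-adjacent checks.
    if not hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        raise PermissionError("attestation failed: untrusted environment")
    # Derive a per-workload key from the master secret.
    return hmac.new(master_key, b"workload-key", hashlib.sha256).digest()

master = os.urandom(32)
trusted = hashlib.sha256(b"trusted-vm-image-v1").hexdigest()
key = release_key(trusted, master)
```

An untrusted environment reporting any other measurement never receives the key, so plaintext data exists only inside the attested boundary.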

The technology is currently available in a controlled preview and implements a two-phase attestation process. In the first phase, VM attestation, a guest attestation agent collects evidence and endorsements. This may include TCG logs capturing boot measurements, SEV-SNP reports with Trusted Platform Module (TPM) attestation keys, and other elements used to verify the integrity of the VM.
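A hedged sketch of what such a phase-one evidence bundle might look like (the field names and digest scheme are illustrative assumptions, not the actual Azure format):

```python
from dataclasses import dataclass
import hashlib

@dataclass
class VmEvidence:
    """Hypothetical bundle a guest attestation agent could collect."""
    tcg_log: bytes      # TCG event log of boot measurements
    snp_report: bytes   # AMD SEV-SNP attestation report
    tpm_ak_pub: bytes   # public part of the TPM attestation key
    nonce: bytes        # freshness challenge supplied by the verifier

    def digest(self) -> str:
        """One hash binding all evidence together, e.g. for logging."""
        h = hashlib.sha256()
        for part in (self.tcg_log, self.snp_report,
                     self.tpm_ak_pub, self.nonce):
            h.update(hashlib.sha256(part).digest())
        return h.hexdigest()

evidence = VmEvidence(b"tcg-log", b"snp-report", b"ak-pub", b"nonce-1")
```

Binding a verifier-supplied nonce into the digest is what prevents a stale evidence bundle from being replayed.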

The second phase is GPU attestation, in which a verifier runs either locally inside the confidential VM or remotely via NVIDIA Remote Attestation Services (NRAS). The verifier compares the GPU's reported firmware measurements against Reference Integrity Manifests (RIMs) published by NVIDIA, and also checks the revocation status of both the GPU certificate chain and the RIM signing certificate chain to confirm the integrity of the computing environment.
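The phase-two checks reduce to two questions: do the reported measurements match the golden values in the RIM, and is every certificate in the relevant chains unrevoked? A minimal sketch of that verification logic, with illustrative data structures (the real checks operate on NVIDIA's signed RIM bundles and X.509 chains):

```python
import hashlib

def verify_gpu(reported: dict, rim: dict,
               cert_fingerprints: list, revoked: set) -> bool:
    """Toy model of phase-two GPU attestation verification."""
    # 1. Every component in the Reference Integrity Manifest must be
    #    present in the report and match its golden measurement exactly.
    for component, golden in rim.items():
        if reported.get(component) != golden:
            return False
    # 2. No certificate in the GPU chain or the RIM signing chain
    #    may appear on the revocation list.
    return not any(fp in revoked for fp in cert_fingerprints)

rim = {"vbios": hashlib.sha256(b"vbios-1.2").hexdigest()}
good = {"vbios": hashlib.sha256(b"vbios-1.2").hexdigest()}
ok = verify_gpu(good, rim, ["gpu-cert", "rim-cert"], revoked=set())
```

Either a single mismatched measurement or a single revoked certificate is enough to fail attestation, which is why the agent tracks revocation status for both chains.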

Microsoft and NVIDIA are committed to enhancing this experience by rolling out more advanced CPU and GPU attestation capabilities in future updates. Their ultimate aim is to instill confidence in the security of data throughout the entire AI lifecycle, from protecting models and prompt information from unauthorized access during both inference and training phases to ensuring the safety of prompts and the resulting outputs.

For further details on the new GPU-enabled confidential VMs, see the source below.

Source
www.csoonline.com
