Infinite scale: The architecture behind the Azure AI superfactory

By Dustin Ward

Today, we are unveiling the next Fairwater site of Azure AI datacenters in Atlanta, Georgia. This purpose-built datacenter is connected to our first Fairwater site in Wisconsin, to prior generations of AI supercomputers, and to the broader Azure global datacenter footprint, creating the world’s first planet-scale AI superfactory. By packing computing power more densely than ever…

The new era of Azure Ultra Disk: Experience the next generation of mission-critical block storage

By Dustin Ward

Since its launch at Microsoft Ignite 2019, Azure Ultra Disk has powered some of the world’s most demanding applications and workloads: from real-time financial trading and electronic health records to high-performance gaming and AI/ML services. Ultra Disk was a breakthrough in cloud block storage innovation from the start, introducing independent configuration of capacity, IOPS, and…
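
As a rough illustration of that independent tuning, the sketch below provisions an Ultra Disk through the azure-mgmt-compute Python SDK and sets capacity, IOPS, and throughput as separate values. The subscription, resource group, region, and performance numbers are placeholder assumptions for the example, not guidance from the post.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    # Placeholder identifiers -- substitute your own subscription and names.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "example-rg"
    DISK_NAME = "example-ultra-disk"

    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

    # Capacity (GiB), IOPS, and throughput (MB/s) are configured independently.
    poller = compute.disks.begin_create_or_update(
        RESOURCE_GROUP,
        DISK_NAME,
        {
            "location": "eastus",
            "zones": ["1"],
            "sku": {"name": "UltraSSD_LRS"},
            "creation_data": {"create_option": "Empty"},
            "disk_size_gb": 1024,
            "disk_iops_read_write": 80000,
            "disk_m_bps_read_write": 1200,
        },
    )
    disk = poller.result()
    print(f"Provisioned {disk.name} with {disk.disk_iops_read_write} IOPS")

Because IOPS and throughput are separate dials from capacity, they can also be adjusted on an existing Ultra Disk rather than by resizing it, which is what makes the independent configuration useful for workloads whose performance needs change over time.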

Driving ROI with Azure AI Foundry and UiPath: Intelligent agents in real-world healthcare workflows

By Dustin Ward

Across industries, organizations are moving from experimentation with AI to operationalizing it within business-critical workflows. At Microsoft, we are partnering with UiPath—a preferred enterprise agentic automation platform on Azure—to empower customers with integrated solutions that combine automation and AI at scale. One example is Azure AI Foundry agents and UiPath agents (built on Azure AI…

Microsoft strengthens sovereign cloud capabilities with new services

By Dustin Ward

Across Europe and around the world, organizations today face a complex mix of regulatory mandates, heightened expectations for resilience, and relentless technological advancement. Sovereignty has become a core requirement for governments, public institutions, and enterprises seeking to harness the full power of the cloud while retaining control over their data and operations. In June 2025,…

Powering distributed AI/ML at scale with Azure and Anyscale

By Dustin Ward

The path from prototype to production for AI/ML workloads is rarely straightforward. As data pipelines expand and model complexity grows, teams can find themselves spending more time orchestrating distributed compute than building the intelligence that powers their products. Scaling from a laptop experiment to a production-grade workload still feels like reinventing the wheel. What if…
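
To make that orchestration burden concrete, here is a minimal sketch assuming the open-source Ray runtime that Anyscale builds on: a plain Python function becomes a remote task and is fanned out across whatever cluster ray.init() connects to, whether that is a laptop or an Azure-hosted deployment. The function and data are illustrative placeholders, not part of the post.

    import ray

    ray.init()  # connects to a local Ray instance or an existing cluster

    @ray.remote
    def preprocess(shard):
        # Placeholder per-shard transformation.
        return [record * 2 for record in shard]

    shards = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

    # Fan the shards out across the cluster, then gather the results.
    futures = [preprocess.remote(shard) for shard in shards]
    print(ray.get(futures))

The same code runs against a remote cluster by pointing ray.init() at the cluster’s address, which is the kind of laptop-to-production step the post is describing.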