Microsoft releases a one-stop AI development tool for its 60,000 Azure AI customers, enabling easy switching between large language models
Microsoft said Azure AI Foundry will be offered free of charge to attract more enterprise customers to its cloud services. The company also released new features for 365 Copilot that let users automate repetitive tasks
Author: Zhao Yuhe
Source: Hard AI
At its annual Ignite conference, held in Chicago on Tuesday, Microsoft Corporation released Azure AI Foundry, an AI tool that helps cloud customers build and deploy artificial intelligence applications. The company also unveiled two new chips.
Media reports indicate that Azure AI Foundry lets users switch more easily between the large language models that power AI applications. For example, customers using older OpenAI products can try a newer version, or move from OpenAI to AI tools from Mistral or Meta, Scott Guthrie, head of Microsoft's cloud computing division, said in a media interview. Beyond mixing and matching models, customers can also verify that their applications run smoothly and deliver a good return on investment.
Microsoft stated that it will offer this software for free to attract more enterprise customers to purchase its cloud services.
Azure AI, the cloud service that lets developers build and run applications drawing on 1,700 different AI models, currently has 60,000 customers. But the process remains cumbersome, and keeping pace with new models and updates is difficult. Media reports suggest customers do not want to rebuild applications every time new technology emerges, nor do they want to switch blindly without understanding which tasks a given model suits.
Guthrie said:
"Developers often find that each new model, even within the same family, may have better answers or performance in some areas but regress in others. If your business runs critical applications, you wouldn't want to switch and just hope it works."
Some features of Foundry come from the previous Azure AI Studio, while others are new additions, including tools to help companies deploy AI agents (semi-autonomous digital assistants) that can take actions on behalf of users.
Guthrie stated that making it easier for customers to switch between models will not undermine Microsoft's close partnership with OpenAI. He pointed out that it actually allows customers to more easily choose the OpenAI model that best fits each task. However, Microsoft is also well aware that providing choices is key to attracting and retaining customers.
"In many use cases, OpenAI models are currently absolutely the best in the industry. At the same time, under different use cases and needs, people may want to use different tools. The choice will become very important."
Analysts believe the move is aimed at growing Microsoft's generative AI revenue. Even as it tries to persuade customers to spend more on AI, the company has repeatedly warned investors that growth in its cloud business will slow because data center capacity cannot keep up with demand. Guthrie said these limitations are temporary and that Microsoft is committed to securing sufficient computing power in the future.
Last year, Microsoft announced its first self-developed cloud computing and AI chips at the Ignite conference; this year it released two new chips. One is a security microprocessor that protects content such as encryption and signing keys; the company said that starting next year, it will be installed in every new server in Microsoft's data centers. The other is a Data Processing Unit (DPU), similar to the networking chips produced by NVIDIA, which moves data to computing and AI chips more quickly, speeding up task processing. Microsoft and its competitors are pursuing ever more powerful cloud systems to train and run AI models.
Rani Borkar, Vice President of Chip Design and Development at Microsoft, stated that every layer of chips, servers, software, and other components must be continuously improved to achieve optimal performance:
"The scale of the models is becoming very large, and you need to make 1 plus 1 plus 1 plus 1 greater than 4 or 5. The Data Processing Unit is part of this, which can accelerate network and storage performance while reducing energy consumption."
In addition, Microsoft announced that its Maia AI chip will be used to run the Azure OpenAI service, operating alongside NVIDIA chips. On the 365 Copilot side, Microsoft has added a feature that automates repetitive tasks, while Copilot in Teams will be able to understand and answer questions about slides, web images, and other visual content. Starting next year, Copilot in PowerPoint will be able to translate presentations into 40 languages.
This article is from the WeChat Official Account "Hard AI".