Microsoft unveils Phi-2, the next of its smaller, more nimble genAI models


Microsoft recently announced a series of smaller, specialized artificial intelligence (AI) models named Phi, aimed at more targeted applications. The first, Phi-1, and its successor, Phi-1.5, are small language models (SLMs) with far fewer parameters than large language models (LLMs) such as GPT-3 and GPT-4. Phi-2, the latest release, has 2.7 billion parameters, and Microsoft claims it can match or surpass much larger models in efficiency.

These advancements reflect a broader industry shift toward more targeted, efficient AI solutions. Microsoft, in collaboration with OpenAI, is leveraging these SLMs to enhance applications such as its Copilot AI assistant. Because LLMs are costly and time-consuming to train, smaller models offer a more practical alternative for business-specific needs.

Experts predict that these streamlined models will eventually challenge the dominance of large-scale models by providing equally capable but more cost-effective and specialized solutions. Microsoft positions Phi-2 as a tool for researchers, emphasizing its suitability for a range of AI research and development tasks, including work on interpretability and safety. The push toward smaller, more efficient AI models is seen as a necessary evolution to make AI more accessible and sustainable for businesses of all sizes.
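
For readers who want a sense of how accessible a 2.7-billion-parameter model is in practice, the sketch below shows one way to load and prompt Phi-2 locally with the open-source Hugging Face Transformers library. It is a minimal example, not Microsoft's recommended setup: it assumes the model is published under the "microsoft/phi-2" ID on the Hugging Face Hub and that a single GPU (or a patient CPU) is available.

```python
# A minimal sketch: loading and prompting Phi-2 with Hugging Face Transformers.
# Assumption: the model is available on the Hugging Face Hub as "microsoft/phi-2".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    # Half precision keeps the ~2.7B parameters small enough for a consumer GPU.
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

prompt = "Explain in two sentences why a smaller language model can be cheaper to deploy."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The specific output matters less than the footprint: a model of this size can run on a single consumer-grade GPU, which is a large part of the cost argument behind the shift to smaller models described above.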