Microsoft has introduced a new multi-model AI capability called “Critique” within its Copilot Researcher tool, aiming to improve the accuracy and reliability of AI-generated outputs.
The Critique system separates content generation from evaluation, coordinating multiple AI models in a single workflow: one model, such as one from OpenAI, generates the initial response, while a second model, such as one from Anthropic, reviews and refines the output before it is delivered to users.

This approach allows Copilot to cross-verify information, reduce hallucinations, and enhance overall output quality, addressing a key limitation of single-model AI systems.
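The generate-then-review pattern described above can be sketched in a few lines. This is a minimal illustrative skeleton, not Microsoft's actual implementation: the function names are hypothetical, and the two model calls are replaced with local stand-ins rather than real OpenAI or Anthropic API requests.

```python
# Hypothetical sketch of a generate-then-critique pipeline.
# In a real system, generate_draft and critique_draft would each call a
# different frontier model; here they are simple stand-ins.

def generate_draft(prompt: str) -> str:
    # Stand-in for the generation model (e.g. an OpenAI model).
    return f"Draft answer to: {prompt}"

def critique_draft(prompt: str, draft: str) -> str:
    # Stand-in for an independent reviewer model (e.g. an Anthropic
    # model) that checks the draft before it reaches the user.
    issues: list[str] = []  # a real reviewer would flag unsupported claims
    if issues:
        return draft + " [revised after critique]"
    return draft + " [verified by reviewer model]"

def answer(prompt: str) -> str:
    # Keep generation and evaluation separate: one model writes,
    # a second model reviews and refines the output.
    draft = generate_draft(prompt)
    return critique_draft(prompt, draft)

print(answer("Why pair a generator model with a reviewer model?"))
```

The key design point is the separation of roles: because the reviewer is a different model from the generator, errors or hallucinations in the draft are less likely to pass unchallenged than in a single-model system.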
In addition to Critique, Microsoft has introduced a complementary feature called “Council,” which enables users to compare responses from multiple AI models side-by-side, offering diverse perspectives and improving decision-making.
The update is part of Microsoft’s broader push toward multi-model and agentic AI systems, in which different AI models collaborate like a team, handling planning, execution, and validation tasks in parallel.
By integrating multiple frontier models into a single workflow, Microsoft aims to transform Copilot from a simple assistant into a more reliable, research-grade AI tool for complex knowledge work.