Mistral comes out swinging for enterprise AI customers with new Le Chat Enterprise, Medium 3 model



French AI startup Mistral has raised boatloads of private funding but has yet to crack the top AI usage charts globally, especially when it comes to enterprise and developer adoption.

But that may change starting today: The company just unveiled Le Chat Enterprise, a unified AI assistant platform designed for enterprise-scale productivity and privacy, powered by its new Medium 3 model, which it says outperforms larger models at a fraction of the cost (here, "larger" refers to parameter count, the number of internal model settings; more parameters typically mean greater complexity and capability, but also more compute resources, such as GPUs, to run).

Le Chat Enterprise is a ChatGPT-like assistant and competitor built from the ground up for data protection, auditing, and cross-application support

Available on the web and via mobile apps, Le Chat Enterprise is a ChatGPT-style competitor built specifically for enterprises and their employees, who will likely be working across a suite of different applications and data sources.

It’s designed to consolidate AI functionality into a single, privacy-first environment that enables deep customization, cross-functional workflows, and rapid deployment.

Among its key features that will be of interest to business owners and technical decision makers are:

  • Enterprise search across private data sources (your company’s Google Drive, SharePoint, Gmail, and more, without exposing or releasing information to third parties)
  • Document libraries with auto-summary and citation capabilities
  • Custom connectors and agent builders for no-code task automation
  • Custom model integrations and memory-based personalization
  • Hybrid deployment options with support for public cloud, private VPCs, and on-prem hosting

Le Chat Enterprise supports seamless integration into existing tools and workflows. Companies can build AI agents tailored to their operations and maintain full sovereignty over deployment and data—without vendor lock-in.

The platform’s privacy architecture adheres to strict access controls and supports full audit logging, ensuring data governance for regulated industries. Enterprises also gain full control over the AI stack—from infrastructure and platform features to model-level customization and user interfaces.

And given the general suspicion among some Western companies and governments toward China and its growing library of powerful open source models, Mistral’s base in the European Union, where it must comply with strict data protection laws such as the General Data Protection Regulation (GDPR) and the EU AI Act, could make Le Chat Enterprise appealing to enterprises with stricter security and data storage policies (especially medium-to-large and legacy businesses).

Mistral is also rolling out improvements to its Le Chat Pro and Team plans, targeting individuals and small teams looking for productivity tools backed by its language models. All tiers benefit from the core capabilities introduced in Le Chat Enterprise.

Mistral Medium 3 outperforms GPT-4o and even Claude 3.7 Sonnet on key benchmarks and is available via API and on-prem

Mistral Medium 3 introduces a new performance tier in the company’s model lineup, positioned between lightweight and large-scale models. It is a proprietary model: unlike many previous Mistral releases, it is not available under an open source license and must be used through Mistral’s website and API, or through those of its partners.

Designed for enterprise use, the model delivers more than 90% of the benchmark performance of Claude 3.7 Sonnet at roughly one-eighth the cost: $0.40 per million input tokens and $2.00 per million output tokens, compared to Sonnet’s $3/$15 for input/output.
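
To put that pricing gap in concrete terms, here is a minimal sketch that estimates monthly spend for a hypothetical workload using the per-million-token rates cited above; the request volume and token counts are illustrative assumptions, not real usage data.

```python
# Rough cost comparison using the per-million-token rates cited above
# (Mistral Medium 3: $0.40 in / $2.00 out; Claude 3.7 Sonnet: $3 in / $15 out).
# The workload figures below are illustrative assumptions, not real usage data.

PRICES = {
    "mistral-medium-3": {"input": 0.40, "output": 2.00},   # USD per 1M tokens
    "claude-3.7-sonnet": {"input": 3.00, "output": 15.00},
}

def monthly_cost(model: str, requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimate monthly spend for `requests` calls averaging the given token counts."""
    p = PRICES[model]
    return requests * (in_tokens * p["input"] + out_tokens * p["output"]) / 1_000_000

# Example: 100,000 requests/month, ~1,500 input tokens and ~500 output tokens each.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 100_000, 1_500, 500):,.2f}/month")
```

With those assumed volumes, the sketch works out to roughly $160/month for Medium 3 versus $1,200/month for Sonnet, in line with the order-of-magnitude savings Mistral is claiming.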

Benchmarks show that Mistral Medium 3 is particularly strong in software development tasks. In coding tests like HumanEval and MultiPL-E, it matches or surpasses both Claude 3.7 Sonnet and OpenAI’s GPT-4o models.

According to third-party human evaluations, it outperforms Llama 4 Maverick in 82% of coding scenarios and exceeds Command-A in nearly 70% of cases.

The model also performs competitively across languages and modalities. Compared to Llama 4 Maverick, it has higher win rates in English (67%), French (71%), Spanish (73%), and Arabic (65%), and leads in multimodal performance with notable scores in tasks like DocVQA (0.953), AI2D (0.937), and ChartQA (0.826).

Mistral Medium 3 is optimized for enterprise integration. It supports hybrid and on-premises deployment, offers custom post-training, and connects easily to business systems. According to Mistral, it’s already being used in beta by organizations in sectors such as financial services, energy, and healthcare to power domain-specific workflows and customer-facing solutions.

Mistral Medium 3 is now accessible via Mistral’s La Plateforme API and Amazon SageMaker, with support coming soon to IBM watsonx, NVIDIA NIM, Azure AI Foundry, and Google Cloud Vertex AI.
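
For developers who want to try the model through La Plateforme, a minimal sketch along these lines should work. It calls Mistral’s chat completions endpoint directly; the model identifier "mistral-medium-latest" is an assumption, so check Mistral’s model listing for the exact name of Medium 3.

```python
# Minimal sketch: calling Mistral Medium 3 via La Plateforme's chat completions endpoint.
# The model identifier "mistral-medium-latest" is an assumption; confirm the exact
# name for Medium 3 in Mistral's model documentation.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-medium-latest",  # assumed identifier for Medium 3
        "messages": [
            {"role": "user", "content": "Summarize this quarter's sales report in three bullet points."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```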

Meanwhile, Le Chat Enterprise is available in the Google Cloud Marketplace, and will launch shortly on Azure AI and Amazon Bedrock.

For those ready to explore the assistant experience, Le Chat is available at chat.mistral.ai, as well as in the App Store and Google Play Store, with no credit card required to get started.

By combining a high-efficiency model with a customizable enterprise platform, Mistral AI is making a concerted push to lower the barriers to scalable, privacy-respecting AI adoption in the enterprise world.


