ServiceNow Accelerates Enterprise AI with Apriel Nemotron 15B
In a groundbreaking move for enterprise AI, ServiceNow has unveiled its latest model, Apriel Nemotron 15B, developed in collaboration with NVIDIA. This reasoning model is designed to enhance the capabilities of AI agents, allowing them to respond in real time, manage complex workflows, and scale across functions such as IT, HR, and customer service globally. The announcement was made at the Knowledge 2025 event, where NVIDIA CEO Jensen Huang joined ServiceNow chairman and CEO Bill McDermott for a keynote presentation.
What Makes Apriel Nemotron 15B Unique?
The Apriel Nemotron 15B model is compact, cost-efficient, and fine-tuned for action. Unlike some of the latest general-purpose large language models (LLMs), which boast over a trillion parameters, this model is engineered specifically for reasoning. It draws inferences, weighs goals, and navigates rules in real time, delivering faster responses and lower inference costs while still providing enterprise-grade intelligence. It is the result of extensive development using NVIDIA NeMo, the open NVIDIA Llama Nemotron Post-Training Dataset, and ServiceNow's domain-specific data, all trained on NVIDIA DGX Cloud hosted on Amazon Web Services (AWS).
Performance Optimization through Advanced Infrastructure
The post-training of Apriel Nemotron 15B leveraged the high-performance infrastructure of NVIDIA DGX Cloud on AWS, ensuring that the model is not only accurate but also optimized for speed, efficiency, and scalability. This optimization is crucial for powering AI agents capable of supporting thousands of concurrent enterprise workflows without compromising performance.
Innovative Closed Loop for Continuous Improvement
ServiceNow and NVIDIA have introduced a new data flywheel architecture that integrates ServiceNow’s Workflow Data Fabric with NVIDIA NeMo microservices, including NeMo Customizer and NeMo Evaluator. This architecture enables a closed-loop process that refines and enhances AI performance by utilizing workflow data to personalize responses and improve accuracy over time. Importantly, robust guardrails are in place to ensure that customers maintain control over how their data is used, keeping security and compliance at the forefront.
Real-World Applications and Impact
During the keynote demonstration, ServiceNow showcased the practical deployment of these agentic models in real enterprise scenarios, highlighting a collaboration with AstraZeneca. Here, AI agents will empower employees to resolve issues and make decisions more swiftly and accurately, ultimately reclaiming 90,000 hours for the workforce. Jon Sigler, Executive Vice President of Platform and AI at ServiceNow, emphasized that the Apriel Nemotron 15B model combines real-time enterprise data, workflow context, and advanced reasoning to drive tangible productivity, achieving what generic models cannot.
The Future of Intelligent AI Agents
The partnership between ServiceNow and NVIDIA represents a significant evolution in enterprise AI strategy, transitioning from static models to dynamic intelligent systems that can adapt and evolve. This shift not only enhances productivity and speeds up resolution times for businesses but also ensures that technology leaders have access to a model that meets today’s performance and cost demands while remaining scalable for future growth.
Availability of ServiceNow AI Agents
ServiceNow AI Agents powered by the Apriel Nemotron 15B model are set to launch following the Knowledge 2025 event. The model will support ServiceNow's Now LLM services and become a pivotal component of the company's agentic AI offerings. Businesses seeking advanced AI capabilities can expect the rollout of these solutions, which promise to reshape the landscape of enterprise AI.
To learn more about the launch and the collaborative efforts between NVIDIA and ServiceNow in revolutionizing enterprise AI, be sure to check out the insights shared at Knowledge 2025.

