Building applications with large language models (LLMs) involves much more than ensuring high-quality outputs. For many use cases, speed and cost-effectiveness are just as critical. In consumer applications and chat experiences, for instance, users expect rapid responses; any delay can significantly hurt engagement. In more intricate applications that involve tool use or agentic systems, speed and cost can become bottlenecks that limit the overall capability of the system: the time spent on sequential LLM requests adds up quickly for each user request, and so does the cost.
To address these challenges, Artificial Analysis (@ArtificialAnlys) has developed a comprehensive leaderboard that evaluates price, speed, and quality across more than 100 serverless LLM API endpoints, now available on Hugging Face.
Explore the leaderboard here!
The LLM Performance Leaderboard
The LLM Performance Leaderboard is designed to provide valuable metrics that assist AI engineers in making informed decisions about which LLMs—whether open-source or proprietary—and API providers to integrate into their AI-enabled applications.
When engineers are faced with selecting AI technologies, it is crucial to evaluate not just quality but also price and speed (both latency and throughput). The LLM Performance Leaderboard consolidates these three essential factors, enabling decision-making in a single, accessible platform.
Source: LLM Performance Leaderboard
Metric Coverage
The leaderboard reports on several key metrics:
- Quality: This is a simplified index designed for comparing model quality and accuracy, calculated using metrics like MMLU, MT-Bench, HumanEval scores, and Chatbot Arena rankings.
- Context Window: This refers to the maximum number of tokens an LLM can handle at any given time, encompassing both input and output tokens.
- Pricing: It reflects the costs charged by a provider for querying the model during inference. The leaderboard features both input/output per-token pricing and a blended pricing model, which assumes that the input is three times longer than the output.
- Throughput: This measures the speed at which an endpoint generates tokens during inference, expressed in tokens per second (TPS). The reported values include median, P5, P25, P75, and P95 over the previous 14 days.
- Latency: This measures the time from when a request is sent until the first token is received, referred to as Time to First Token (TTFT) and measured in seconds. Median, P5, P25, P75, and P95 values are provided for the last 14 days. (The short sketch after this list shows how TTFT, throughput, and the blended price can be measured or computed for a single request.)
For further definitions, please visit our full methodology page.
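To make these metrics concrete, here is a minimal sketch of how TTFT, output throughput, and a 3:1 blended price could be measured or computed for a single streaming request. It is not the leaderboard's benchmarking harness: the model name, prices, and the character-based token estimate are placeholder assumptions, and it targets an OpenAI-compatible API via the `openai` Python client.

```python
import time

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Summarise the trade-offs between model quality, price, and speed."
INPUT_PRICE_PER_M = 0.50   # placeholder: USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50  # placeholder: USD per 1M output tokens

start = time.perf_counter()
first_token_at = None
pieces = []

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content or ""
    if delta and first_token_at is None:
        first_token_at = time.perf_counter()  # first token arrived: this gives TTFT
    pieces.append(delta)
end = time.perf_counter()

ttft = first_token_at - start
# Rough throughput estimate: ~4 characters per token, since the stream itself
# does not return token counts; a proper tokenizer would be more accurate.
approx_output_tokens = len("".join(pieces)) / 4
tokens_per_second = approx_output_tokens / (end - first_token_at)

# Blended price per 1M tokens, assuming input is three times longer than output (3:1)
blended_per_m = (3 * INPUT_PRICE_PER_M + 1 * OUTPUT_PRICE_PER_M) / 4

print(f"TTFT: {ttft:.2f}s | throughput: {tokens_per_second:.0f} tokens/s | blended: ${blended_per_m:.2f} per 1M tokens")
```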
Test Workloads
The leaderboard allows users to explore performance across a variety of workloads, including:
- Variations in prompt length: approximately 100 tokens, 1,000 tokens, and 10,000 tokens.
- Handling parallel queries: testing with 1 query or 10 parallel queries (a rough sketch of such a test follows below).
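As an illustration of what a parallel-query test might look like (a hedged sketch under assumptions, not the leaderboard's actual harness), the snippet below fires 1 and then 10 concurrent requests against an OpenAI-compatible endpoint and compares wall-clock time; the model name, prompt, and token limit are placeholders.

```python
import asyncio
import time

from openai import AsyncOpenAI  # pip install openai

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


async def one_query(prompt: str) -> float:
    """Send a single chat completion and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    await client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=300,
    )
    return time.perf_counter() - start


async def run(parallel: int) -> None:
    prompt = "Write a 200-word summary of the history of the printing press."
    start = time.perf_counter()
    latencies = await asyncio.gather(*(one_query(prompt) for _ in range(parallel)))
    total = time.perf_counter() - start
    median = sorted(latencies)[len(latencies) // 2]
    print(f"{parallel:>2} parallel request(s): {total:.1f}s total, {median:.1f}s median per request")


asyncio.run(run(1))
asyncio.run(run(10))
```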
Methodology
Each API endpoint listed on the leaderboard is tested eight times per day, and the headline figures represent the median measurement over the last 14 days. Percentile breakdowns are available within the collapsed tabs for deeper insight. Quality metrics are currently collected per model and reflect results as reported by model creators, but we are working on sharing results from our independent quality evaluations for each endpoint. For more details, please refer to our full methodology page.
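For intuition on how those summary statistics come about, the toy snippet below computes the same percentile breakdown (median, P5, P25, P75, P95) from 14 days x 8 daily tests of synthetic throughput samples; the numbers are made up and are not leaderboard data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic tokens-per-second samples standing in for 14 days x 8 tests per day
tps_samples = rng.normal(loc=80, scale=12, size=14 * 8)

for label, q in [("P5", 5), ("P25", 25), ("Median", 50), ("P75", 75), ("P95", 95)]:
    print(f"{label:>6}: {np.percentile(tps_samples, q):.1f} tokens/s")
```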
Highlights (May 2024, see the leaderboard for the latest)
The landscape of language models has become increasingly complicated over the past year. Recent launches that have made significant impacts on the market include proprietary models like Anthropic’s Claude 3 series and open models such as Databricks’ DBRX and Cohere’s Command R Plus, among others.
- There is a staggering pricing spread of roughly 300x, more than two orders of magnitude, between models and providers, from Claude 3 Opus at the top of the range down to Llama 3 8B.
- The speed at which API providers are launching new models has increased dramatically. Within just 48 hours, seven providers had begun offering Llama 3 models, highlighting the demand for new, open-source models and the competitive nature of API providers.
- Key models across various quality segments include:
- High quality, typically higher price and slower: GPT-4 Turbo and Claude 3 Opus
- Moderate quality, price, and speed: Llama 3 70B, Mixtral 8x22B, Command R+, Gemini 1.5 Pro, and DBRX
- Lower quality, but with significantly faster speed and lower pricing options: Llama 3 8B, Claude 3 Haiku, and Mixtral 8x7B
Source: artificialanalysis.ai/models
Use Case Example: Speed and Price Can Be as Important as Quality
In certain scenarios, design patterns that make multiple requests to faster, cheaper models can not only reduce cost but also improve overall system quality compared with relying on a single request to a larger model.
For example, imagine a chatbot that needs to answer questions from recent news articles. One approach is to use a large, high-quality model like GPT-4 Turbo to search and process a handful of articles directly. Alternatively, a smaller, faster model like Llama 3 8B can read and extract highlights from many web pages in parallel, after which GPT-4 Turbo assesses and summarizes the most relevant findings. This second approach is typically cheaper and may even produce better results, precisely because a much larger volume of content can be analyzed.
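Below is a hedged sketch of that pattern, not a production implementation: a small, fast model extracts highlights from several articles in parallel, and a larger model then synthesizes them. The base_url, API key handling, and model identifiers are placeholder assumptions; Llama 3 8B would be reached through whichever OpenAI-compatible provider endpoint you choose.

```python
import asyncio

from openai import AsyncOpenAI  # pip install openai

# Placeholder clients: a provider hosting a small open model, and a provider for the larger model
small = AsyncOpenAI(base_url="https://api.example-provider.com/v1", api_key="PROVIDER_KEY")
large = AsyncOpenAI()  # e.g. OpenAI, for GPT-4 Turbo


async def extract_highlights(article: str) -> str:
    """Cheap, fast pass: pull the key points out of a single article."""
    resp = await small.chat.completions.create(
        model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model id; varies by provider
        messages=[{"role": "user", "content": f"List the 3 key points of this article:\n\n{article}"}],
        max_tokens=200,
    )
    return resp.choices[0].message.content


async def answer(question: str, articles: list[str]) -> str:
    # Fan out: extract highlights from all articles in parallel with the small model
    highlights = await asyncio.gather(*(extract_highlights(a) for a in articles))
    # Fan in: a single higher-quality call ranks and summarizes the extracted points
    resp = await large.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Question: {question}\n\nNotes extracted from articles:\n\n"
                + "\n\n".join(highlights)
                + "\n\nAnswer the question using only the most relevant notes."
            ),
        }],
    )
    return resp.choices[0].message.content


# Usage (articles would be the scraped text of recent news pages):
# print(asyncio.run(answer("What happened in AI regulation this week?", articles)))
```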
Get in Touch
Stay updated by following us on Twitter and LinkedIn. We welcome messages on either platform, as well as through our website or via email.