
GuideLLM: Evaluate LLM deployments for real-world inference

June 20, 2025
Jenny Yi, Mark Kurtz, Addie Stevens
Related topics:
Artificial intelligence
Related products:
Red Hat AI


    Key takeaways

    • GuideLLM is an open source toolkit for benchmarking LLM deployment performance by simulating real-world traffic and measuring key metrics like throughput and latency.
    • It supports crucial testing activities such as pre-deployment benchmarking, regression testing, and hardware evaluation to ensure LLMs meet SLOs.
    • GuideLLM offers a customizable architecture compatible with various models, backends, and hardware, allowing tailored testing with custom data and traffic patterns.
    • Help build GuideLLM by contributing guides, improving the documentation, sharing feedback, flagging bugs, and suggesting new ideas!

    As large language models (LLMs) become integral to real-world applications, from chatbots to retrieval-augmented generation (RAG) systems, ensuring LLMs perform reliably and efficiently under real-world conditions is critical for end-user experience and infrastructure cost control. 

    While users often know what interacting with a "good enough" chatbot or image generation AI tool "feels like," building and delivering these AI applications with the proper service level objectives (SLOs) is not trivial. Between hardware sizing, model sizing, dataset configuration, and workload traffic, there are many variables to consider when taking a model from proof of concept (POC) to production deployment and ensuring it's sufficient for your use case.

    We're excited to announce GuideLLM, an open source toolkit from Red Hat for evaluating LLM deployment performance. Whether you're running models locally or in a production cluster, GuideLLM makes it easy to simulate configurable workloads, evaluate how well your deployment handles real-world traffic, and gain actionable insights into throughput, latency, and resource needs. See Figure 1.

    Figure 1: GuideLLM toolkit overview.

    When should you use GuideLLM?

    • Pre-deployment benchmarking: Know your model's limits before going live.

      "With H200, should I use a Llama 3.1 8B or 70B Instruct to create a cost-efficient, serviceable customer service chatbot?"

    • Regression testing: Track performance deltas after code/model updates.

      "How much more traffic can Llama 3.1 8B Instruct FP8 handle over the baseline model?"

    • Hardware evaluation: Compare inference throughput across GPU/CPU instances.

      "With Llama 3.1 8B, what is the max RPS (requests per second) that this hardware can handle before my performance begins to degrade?"

    • Capacity planning: Predict infrastructure needs under projected user loads.

      "How many servers do I need to keep my service running within my SLOs and handle this much RPS when my application is under maximum load?"

    Why use GuideLLM?

    To support these use cases, GuideLLM is built with a modular architecture that can adapt to a wide range of models, backends, and hardware setups. Instead of fixed benchmarks, GuideLLM enables users to shape load tests around their unique production needs and SLOs.

    Modular architecture

    GuideLLM's modular design is built to be highly adaptable to a wide range of LLM deployments. Whether you're testing inference on a single GPU, across a cluster, or experimenting with different models and backends, its architecture lets you customize your benchmarks to match real-world scenarios. A breakdown of the key modular components powering GuideLLM follows.

    Flexible data sources and targets

    GuideLLM supports a flexible input system through DatasetCreator and RequestLoader. This allows users to benchmark using real datasets from Hugging Face or generate synthetic prompts that match specific token distributions. This makes replicating your production traffic, whether short chatbot queries or long-form summarization tasks, easy.
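
    To illustrate the synthetic side of this idea (this is not GuideLLM's actual generator, just a minimal sketch of producing prompts with a rough token budget):

    import random

    # Illustrative only: build synthetic prompts with an approximate token budget,
    # treating one whitespace-separated word as roughly one token.
    VOCAB = ["data", "model", "latency", "token", "server", "benchmark", "request"]

    def synthetic_prompt(target_tokens: int = 512) -> str:
        return " ".join(random.choice(VOCAB) for _ in range(target_tokens))

    print(synthetic_prompt(32))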

    On the backend side, GuideLLM uses a universal backend interface that currently supports OpenAI-style HTTP requests. It works out of the box with inference servers like vLLM, letting you benchmark across a wide range of hardware and deployment setups. Native Python interfaces for tighter engine integration are coming soon.
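
    For reference, the kind of OpenAI-style request GuideLLM issues looks like the following (a minimal sketch using the requests library against a local vLLM server; adjust the URL and model name for your own setup):

    import requests

    # Minimal OpenAI-compatible chat completion request against a local vLLM server.
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16",
            "messages": [{"role": "user", "content": "Summarize GuideLLM in one sentence."}],
            "max_tokens": 64,
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])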

    Sophisticated load generation

    To accurately simulate real-world traffic, GuideLLM uses a Scheduler and pluggable SchedulingStrategy components. These support a variety of load patterns, such as constant rate, concurrency-based, and Poisson distributions, letting you tailor each test to your actual usage scenario.

    Benchmarks are executed through a multi-process architecture that combines RequestWorkers and asyncio with thread pools to ensure timing accuracy and low overhead. Internal testing shows precise load timing (99.7% accuracy) and minimal performance interference (a measured concurrency of 9.98 against a target of 10), providing reliable, repeatable benchmarks.
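
    As an illustration of the idea behind these strategies (not GuideLLM's actual Scheduler code), here is a minimal asyncio sketch that generates Poisson-distributed request arrivals at a target rate:

    import asyncio
    import random
    import time

    async def send_request(i: int) -> None:
        # Stand-in for an OpenAI-style HTTP call to the inference server.
        await asyncio.sleep(0.05)
        print(f"request {i} finished at {time.monotonic():.2f}s")

    async def poisson_load(rate_per_sec: float, duration_sec: float) -> None:
        start = time.monotonic()
        tasks, i = [], 0
        while time.monotonic() - start < duration_sec:
            # Exponential inter-arrival times produce a Poisson request process.
            await asyncio.sleep(random.expovariate(rate_per_sec))
            tasks.append(asyncio.create_task(send_request(i)))
            i += 1
        await asyncio.gather(*tasks)

    asyncio.run(poisson_load(rate_per_sec=5.0, duration_sec=3.0))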

    Comprehensive benchmark capabilities

    GuideLLM's benchmark engine combines Benchmarker, Profile, and BenchmarkAggregator to run and summarize complete test suites, from basic constant-rate tests to full sweeps that explore your system's latency and throughput limits. It also supports profiling at different rates to identify performance degradation.

    Each test collects fine-grained metrics such as RPS, latency, and concurrency, with minimal overhead. The aggregated results let you easily determine which configurations meet your SLOs and where optimizations are needed.

    User-friendly outputs and reporting

    While benchmarks run, GuideLLM provides clear, real-time updates through BenchmarkerProgressDisplay, including live stats like requests per second, token throughput, and error rates. This helps you monitor progress and spot issues immediately.

    Once complete, results are output in multiple formats (JSON, YAML, and CSV), ready for analysis or import into other tools. Whether you're pasting numbers into a spreadsheet or generating a visual report, GuideLLM ensures consistent, structured results that align with your benchmarking goals.

    Customizable benchmarking methodology

    GuideLLM is built to simulate realistic workloads that reflect your actual use case, rather than relying on generic benchmarks. You can customize tests using:

    • Custom datasets: Choose from Hugging Face datasets or define synthetic prompts using input/output token counts to mimic production traffic.
    • Rate-type based traffic shaping: Select how traffic is applied to replicate the behavior of real users; examples include fixed RPS, fixed concurrency, and dynamic ramp-ups (see Configurations).

    These configurations determine which benchmarks are executed and how the system is stressed. Once a test is complete, GuideLLM captures detailed performance metrics to help you evaluate deployment readiness (a worked example follows the list):

    • Requests per second (RPS): The number of requests processed per second.
      • Use case: Indicates the throughput of the system and its ability to handle concurrent workloads.
    • Request latency: The time taken to process a single request, from start to finish.
      • Use case: A critical metric for evaluating the responsiveness of the system.
    • Time to first token (TTFT): The time taken to generate the first token of the output.
      • Use case: Indicates the initial response time of the model, which is crucial for user-facing applications.
    • Inter-token latency (ITL): The average time between generating consecutive tokens in the output, excluding the first token.
      • Use case: Helps assess the smoothness and speed of token generation.
    • Throughput (output tokens/second): The average number of output tokens generated per second across all requests.
      • Use case: Provides insights into the server's performance and efficiency in generating output tokens.
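
    To make these definitions concrete, here is a small illustrative calculation with hypothetical numbers (not GuideLLM output):

    # Hypothetical single request: 256 output tokens, timings in seconds.
    ttft = 0.200                # time to first token
    request_latency = 5.30      # end-to-end request latency
    output_tokens = 256

    # ITL averages the gaps between consecutive tokens, excluding the first token.
    itl = (request_latency - ttft) / (output_tokens - 1)   # ~0.020 s per token
    tokens_per_second = output_tokens / request_latency    # ~48.3 output tokens/s
    print(f"ITL: {itl * 1000:.1f} ms, throughput: {tokens_per_second:.1f} tokens/s")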

    Together, this end-to-end flow enables you to align model behavior with your infrastructure goals and service-level objectives, giving you clear insight into how your deployment will perform under real-world conditions.

    Getting started with GuideLLM

    Say you're building a chat application using Llama 3.1 8B (Quantized) and want to know: "What's the maximum RPS my 8xH200 setup can handle while still meeting acceptable latency targets?"

    GuideLLM helps you answer this by simulating real-world traffic and outputting key latency metrics that can be directly compared against your SLOs.

    Install GuideLLM

    Prerequisites:

    • OS: Linux or macOS
    • Python: 3.8 - 3.12
    • Latest vLLM

    GuideLLM is available on PyPI and is installed using pip:

    pip install guidellm

    Start a vLLM server

    GuideLLM requires an OpenAI-compatible server to run evaluations; vLLM is recommended for this purpose. After installing vLLM on the server with your target hardware, start a vLLM server with a quantized Llama 3.1 8B model by running the following command:

    vllm serve "RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16"

    Run a GuideLLM benchmark

    To run a GuideLLM benchmark, run the following command: 

    guidellm benchmark \
      --target "http://localhost:8000" \
      --rate-type sweep \
      --max-seconds 30 \
      --data "prompt_tokens=512,output_tokens=256"

    The target assumes vLLM is running on the same server, and by default, GuideLLM uses the first available model unless one is specified with the --model flag.

    GuideLLM supports multiple workload simulation modes, known as rate types (see full list). Each rate type determines which benchmarks are run. The example above uses sweep, which runs a series of benchmarks for 30 seconds each: first, a synchronous test that sends one request at a time (representing minimal traffic), then a throughput test where all requests are sent in parallel to identify the system's maximum RPS. Finally, it runs intermediate RPS levels to capture latency metrics across the full traffic spectrum.

    The --data argument specifies a synthetic dataset with 512 prompt tokens and 256 output tokens per request, a data profile we recommend for Chat (see data profiles). This can be customized to match your specific workload, or you can supply your own dataset from Hugging Face.

    Upon running this command, you should see output similar to Figure 2.

    Figure 2: Running the GuideLLM benchmark.

    Read the results

    After the evaluation completes, GuideLLM summarizes the results in three sections:

    • Benchmarks Metadata: Summary of the run configuration, including server, dataset, and workload profile.
    • Benchmarks Info: Overview of each benchmark, covering type, duration, request statuses, and token counts.
    • Benchmarks Stats: Key metrics per benchmark, including RPS, concurrency, latency, TTFT, ITL, and more.

    The sections will look similar to Figure 3.

    Figure 3: GuideLLM results summary.

    The full results, including all statistics and request data, are saved to a benchmarks.json file in the current directory. This file can be used for custom visualizations or loaded into Python using the guidellm.benchmark.GenerativeBenchmarksReport class. You can specify a different format (.csv, .yaml) using the --output flag.
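
    For quick custom analysis, here is a minimal sketch that loads the raw file with the standard library (the schema is defined by GuideLLM, so treat any key names beyond the top level as assumptions to verify against your own file, or use GenerativeBenchmarksReport for the supported API):

    import json
    from pathlib import Path

    # Load the raw GuideLLM results written to the working directory.
    report = json.loads(Path("benchmarks.json").read_text())

    # Inspect the top-level structure before building any analysis on it.
    print(list(report.keys()))

    # Illustrative only: if the file contains a "benchmarks" list, peek at one entry.
    for bench in report.get("benchmarks", [])[:1]:
        print(sorted(bench.keys()))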

    Understanding the results

    Now that we have the output from GuideLLM, we can compare the key latency metrics against your defined SLOs.

    In our chat application example, the goal is to keep time to first token (TTFT) low enough to ensure a responsive user experience while maximizing hardware efficiency. For this use case, we recommend a TTFT under 200 ms. You can further adjust your performance targets by adding or refining SLOs (see SLO recommendations for further guidance).

    From the Benchmark Stats table in Figure 3, we see that the deployment meets the SLO for 99% of requests at approximately 10.69 RPS. This tells us that a single server can reliably handle around 10 RPS while meeting latency targets.

    With this information, you can estimate the horizontal scaling required to support your target traffic while maintaining SLO compliance. If you're planning a distributed deployment using Red Hat OpenShift AI, for example, supporting 1,000 RPS would require approximately 100 servers with the same hardware configuration.
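
    As a quick sanity check of that scaling estimate (the per-server figure comes from the benchmark above; the 1,000 RPS target is a hypothetical peak load):

    import math

    per_server_rps = 10     # sustained RPS per server while meeting the TTFT SLO
    target_rps = 1_000      # hypothetical projected peak traffic
    print(math.ceil(target_rps / per_server_rps))   # -> 100 servers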

    These insights validate the deployment's performance and inform infrastructure planning, connecting model behavior directly to real-world cost and scalability decisions.

    Final thoughts

    Translating model performance into infrastructure planning is just one way GuideLLM supports deployment decisions. As LLMs power more real-world applications, a reproducible and customizable benchmarking tool is essential for ensuring readiness, selecting the right models, and validating changes like quantization or fine-tuning.

    By simulating realistic workloads, GuideLLM goes beyond measuring raw speed: it helps teams optimize cost, reliability, and user experience before going live.

    Try it out, file a feature request or issue, or contribute your first PR. We'd love your feedback as we build a more robust, community-driven ecosystem for LLM benchmarking.

    Feature roadmap

    Looking ahead, we envision GuideLLM evolving into the go-to tool for pre-production validation, expanding beyond performance testing into a broader framework for ensuring models are truly ready for enterprise-grade deployment (see full roadmap). Upcoming features include:

    • Multi-modal support: Extend GuideLLM beyond text-only evaluations to support multimodal workloads.
    • Visual report generation: Enhance the data analysis workflow by automating the generation of visual reports that graph key data relationships.
    • Accuracy evaluations: Integrate accuracy evaluation frameworks for comprehensive model comparisons.
    • Community-driven prioritization: Introduce a voting system on GitHub issues to surface the most requested features; please submit issues and consider contributing your first PR!

    These enhancements aim to make GuideLLM a more powerful, flexible, and community-focused tool for evaluating and scaling LLM deployments.

