Benchmarking User Experience at Scale: LoadGen Powers the OBUX Framework


LoadGen Sponsors OBUX

How LoadGen's No-Code Workload Engine Drives the Open Benchmark for User Experience

In a world where digital workspace performance is critical to employee productivity and satisfaction, traditional system benchmarks often fall short. They measure CPU load, disk latency, or frame rates — but fail to tell you how your users actually feel while using their applications. That’s why we at LoadGen are proud to be the official workload engine provider and founding sponsor of OBUX — the Open Benchmark for User Experience.

OBUX is a groundbreaking, open-source initiative that brings transparency, standardization, and reproducibility to UX benchmarking. And at the heart of this platform? A powerful simulation engine that replicates real-world user behavior: LoadGen.


The Vision Behind OBUX

OBUX is designed to provide a transparent, robust, and vendor-neutral benchmark for digital workspaces. The project aims to evaluate:

  • System performance (CPU, disk, memory, GPU)

  • User experience (responsiveness, satisfaction, frustration levels)

  • Environmental impact (e.g., CO₂ consumption)

These dimensions are combined into a composite score that reflects both the system’s technical capability and the user’s perceptual experience. This allows IT leaders, architects, and engineers to make informed decisions based not only on infrastructure metrics, but on human experience.


The Role of LoadGen in the OBUX Ecosystem

To generate accurate, reproducible benchmark results, you need realistic, consistent, and scalable user simulation. This is exactly what LoadGen delivers.


No Code, High Fidelity

Using LoadGen’s no-code workload engine, OBUX can simulate typical user activities such as:

  • Opening and interacting with productivity apps (Outlook, Word, Excel)

  • Navigating enterprise portals and intranet sites

  • Performing transactions in business-critical apps (CRM, EMR, ERP)

  • Launching virtual desktops or remote sessions

All of this is configured via LoadGen’s intuitive visual interface — no scripting required — ensuring that the benchmark is accessible to engineers, consultants, and QA teams alike.


Repeatable and Modular

Each workload is built from modular transactions — atomic units of user activity. These transactions are orchestrated to form user scenarios, which can be run across multiple virtual users and environments. Transactions can include:

  • Keystrokes and mouse actions

  • File I/O operations

  • UI navigation steps

  • Response time checkpoints

These are timed, logged, and scored to assess responsiveness, stability, and consistency.
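As a sketch of how such a timed, threshold-checked transaction might look in code (the `Transaction` class, field names, and threshold values here are illustrative assumptions, not LoadGen's actual API):

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    """One atomic unit of user activity, timed against an expected threshold."""
    name: str
    duration_ms: float   # measured response time
    threshold_ms: float  # expected performance ceiling

    @property
    def within_threshold(self) -> bool:
        return self.duration_ms <= self.threshold_ms

def jitter_ms(samples: list[float]) -> float:
    """Variation in duration across repeated runs of the same transaction."""
    return stdev(samples) if len(samples) > 1 else 0.0

# Three repeated runs of a hypothetical "open Outlook" transaction
runs = [Transaction("open_outlook", d, threshold_ms=2000)
        for d in (1450.0, 1510.0, 1480.0)]
print(all(t.within_threshold for t in runs))        # every run responsive
print(mean(t.duration_ms for t in runs))            # average duration
print(jitter_ms([t.duration_ms for t in runs]))     # run-to-run jitter
```

Repeating the same transaction across sessions is what makes the jitter and consistency measurements meaningful.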


Understanding the OBUX Scoring Model

The OBUX scoring system is designed to unify objective system metrics with subjective user perception.


Two Dimensions of Scoring:

  1. System Score (absolute):

    • Derived from hardware and OS-level performance metrics

    • Includes CPU load, memory utilization, disk latency, and system response times

  2. User Experience Score (relative):

    • Represents perceived UX, on a scale from 0.00 (unacceptable) to 0.94 (perfect)

    • Anchored to human-centric categories:

      • 0.00 = Absolutely unacceptable

      • 0.50 = Just fair

      • 0.70 = This is good

      • 0.94 = Perfect


Composite Formula Example:

Benchmark Score = (# sessions at fair or better performance) × (UX score + System score)

This formula ensures that both technical efficiency and subjective quality contribute to the final benchmark result.
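A minimal sketch of how that composite might be computed, assuming per-session (UX, system) score pairs and using the 0.50 "just fair" anchor as the qualifying cut-off. Averaging the qualifying sessions' scores is an illustrative choice; OBUX's exact aggregation may differ:

```python
def benchmark_score(sessions: list[tuple[float, float]],
                    fair_threshold: float = 0.50) -> float:
    """Composite score: count sessions at fair-or-better UX, then weight
    that count by the combined (UX + system) score of those sessions."""
    qualifying = [(ux, sys) for ux, sys in sessions if ux >= fair_threshold]
    if not qualifying:
        return 0.0
    avg_ux = sum(ux for ux, _ in qualifying) / len(qualifying)
    avg_sys = sum(s for _, s in qualifying) / len(qualifying)
    return len(qualifying) * (avg_ux + avg_sys)

# Three sessions as (UX score, system score); the third falls below "fair"
sessions = [(0.70, 0.80), (0.55, 0.60), (0.40, 0.90)]
print(benchmark_score(sessions))  # 2 qualifying sessions drive the result
```

Because sub-fair sessions are excluded from the count, a system that is fast but frustrating cannot score well on raw throughput alone.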


Transactions as the Foundation of Insight

Every LoadGen-generated benchmark run consists of multiple sessions, each made up of repeated transactions. These transactions form the core data points for OBUX metrics.

Each transaction is:

  • Timestamped and compared against expected performance thresholds

  • Measured for duration, jitter, and resource utilization

  • Linked to environmental impact (e.g., energy consumed → CO₂ equivalent)

These measurements feed into the scoring engine, allowing OBUX to correlate system strain, user perception, and environmental footprint in a single benchmark result.
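The energy-to-CO₂ link mentioned above is, at its core, a unit conversion. A minimal sketch, with the grid emission factor as a stated assumption (real factors vary by region and year):

```python
# Hypothetical grid emission factor (kg CO2e per kWh); varies by region.
GRID_KG_CO2_PER_KWH = 0.4

def co2_equivalent_g(energy_wh: float,
                     kg_per_kwh: float = GRID_KG_CO2_PER_KWH) -> float:
    """Convert energy measured for a transaction (Wh) into grams of CO2e."""
    return energy_wh / 1000 * kg_per_kwh * 1000  # Wh -> kWh, then kg -> g

# A transaction that consumed 25 Wh at the assumed factor
print(round(co2_equivalent_g(25.0), 3))  # grams of CO2 equivalent
```

Attaching this per-transaction figure to each timed measurement is what lets the scoring engine correlate system strain with environmental footprint.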


Transparency, Extensibility, and Open Collaboration

Because LoadGen is integrated into OBUX as an open and transparent workload generator, the entire test flow is reproducible and auditable:

  • Benchmark definitions are published openly

  • Metrics are stored in InfluxDB

  • Visualizations are created using Grafana

  • Community contributions are welcome via GitHub

The LoadGen engine supports plugin-based extensions, allowing contributors to add custom applications, edge cases, or vertical-specific workloads (e.g., healthcare EMRs, financial dashboards, or CAD software).
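Metrics destined for InfluxDB are ultimately written as line protocol points (measurement, tags, fields, timestamp). A minimal sketch of rendering one transaction measurement that way; the measurement name, tag keys, and field names are hypothetical:

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict,
                     timestamp_ns: int) -> str:
    """Render one metric point in InfluxDB line protocol:
    measurement,tag=v,... field=v,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    # Integers carry an 'i' suffix in line protocol; floats are written bare.
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

point = to_line_protocol(
    "transaction",
    tags={"app": "Outlook", "scenario": "knowledge_worker"},
    fields={"duration_ms": 1480, "within_threshold": 1},
    timestamp_ns=1700000000000000000)
print(point)
```

Stored this way, each transaction becomes an individually queryable data point, which is what the Grafana dashboards visualize.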


What’s Next?

With LoadGen powering workload simulation and OBUX providing the structure and scoring logic, the first public version of the benchmark is scheduled for release at the end of 2024 — with EUCworld and E2EVC as the target launch venues.

We are also exploring:

  • Free and discounted LoadGen licenses for OBUX contributors

  • Pre-built workload packs for popular use cases

  • Enhanced real-time feedback using AI/LLM-assisted UX interpretation


Conclusion

The collaboration between LoadGen and OBUX represents a powerful synergy: precision engineering meets open benchmarking. Together, we are enabling IT professionals to see beyond hardware metrics — and into the real-world user experience.


Interested in becoming part of this evolution?


 
 
 
