Task #3531

Task #3530: [ndnrtc-oi] Dashboard design

[ndnrtc-oi] Design QoE dashboard

Added by Peter Gusev about 8 years ago. Updated almost 8 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: -
Start date: 03/09/2016
Due date:
% Done: 100%
Estimated time:
Description

Need to design a dashboard that is easy to read and makes test results easy to assess.

The dashboard should include QoE metrics. Possible metrics to include are:

  • # of rebufferings
  • # of crashes
  • Latency estimation
  • Average buffer size
  • Time (in %) spent in each state (chase, adjust, fetch)
  • ndncon CPU and memory usage
  • NFD CPU and memory usage

Such a dashboard may be used to track results across multiple tests.
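
One of these metrics, the time spent in each state, is simple to derive from timestamped state transitions. Here is a minimal Python sketch; the event format is an assumption, not the actual ndnrtc log format:

    # Hedged sketch: compute "time (in %) spent in each state" from a list of
    # (timestamp_sec, state) transitions; the final entry marks the end of run.
    from collections import defaultdict

    def time_in_states(transitions):
        durations = defaultdict(float)
        for (t0, state), (t1, _) in zip(transitions, transitions[1:]):
            durations[state] += t1 - t0
        total = sum(durations.values()) or 1.0
        return {state: 100.0 * d / total for state, d in durations.items()}

    # 10-second run: 2.5 s chasing, 1.5 s adjusting, 6 s fetching
    print(time_in_states([(0.0, "chase"), (2.5, "adjust"), (4.0, "fetch"), (10.0, "fetch")]))
    # {'chase': 25.0, 'adjust': 15.0, 'fetch': 60.0}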

Actions #1

Updated by Peter Gusev about 8 years ago

  • Assignee set to Peter Gusev
Actions #2

Updated by Peter Gusev about 8 years ago

  • Parent task set to #3530
Actions #3

Updated by Peter Gusev almost 8 years ago

  • Status changed from New to Closed
  • % Done changed from 0 to 100

OI dashboards can be found here.

Grafana is used as the OI tool, and InfluxDB is used as the time-series data store.

Grafana is not designed to serve as an analysis tool for occasional test runs; it is designed as a visualization and dig-and-discover tool for continuous metrics gathered from remote services. Consequently, I had to come up with a usage pattern in order to use Grafana with NDN-RTC large-scale testing.

Key concepts:

  • individual tests are combined into test groups;
  • each test has a number and an ID (a date-time string);
  • the test group is defined by the tester who launches the test scripts;
    • this is typically a string containing a meaningful identifier for the tests, e.g. "version0.1.1-beta", "NFD-fixed-strategy-issue", etc.;
  • the test ID and test group are attached to each metric as tags (a sketch of this follows the list).
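
For concreteness, here is a minimal sketch of how a metric point could be written with such tags, assuming the influxdb Python client; the "buffer_size" measurement name and the tag/field values are hypothetical:

    # Hedged sketch: attach test_id / test_group tags to a metric point.
    # Measurement name, field value, and tag values are assumptions.
    from datetime import datetime, timezone
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="ndnrtc")

    point = {
        "measurement": "buffer_size",           # hypothetical metric name
        "tags": {
            "test_id": "2016-03-09-14-05-00",   # date-time string (test ID)
            "test_group": "version0.1.1-beta",  # tester-chosen group label
        },
        "time": datetime.now(timezone.utc).isoformat(),
        "fields": {"value": 150.0},             # e.g. buffer size in ms
    }
    client.write_points([point])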

Usage pattern:

The analyst searches for a certain test group using the "Test Sets Overview" dashboard:

  • first, the analyst must choose a time period that covers the time when the tests were run (otherwise, no data will be retrieved);
  • second, the analyst must choose a value for the "Test group" variable from the drop-down list (if the value is not there, reload the page);
  • if nothing is visualized on the graph, make sure the time interval is correct (it does not need to be exact, though);
  • once data appears on the graph, the analyst should see differently colored dots for each test run within the test set:
    • from here, the analyst should narrow down the time range on the graph to the test she wants to inspect;
  • once the test data is narrowed down (the graph itself does not mean anything; it merely visualizes the duration of the test), the analyst must use the hyperlink at the top of the graph to jump to the "Test Run Overview (NEW)" dashboard;
  • from the "Test Run Overview (NEW)" dashboard, the analyst can jump to other dashboards in the same way: via the hyperlinks next to the graphs' captions. A sketch of the queries behind this workflow follows the list.