5.
● Our code is validated using CI, and performance trends are monitored.
● Our output is verified on one general-purpose ARM platform and against two SoC vendor platforms, via a configurable switch that allows dedicated links between nodes under test.
● Using open source software, we run one realistic network application, a general-purpose benchmark, and five feature-specific test suites.
LNG outputs are verified by

8.
● Some of the SoC vendors' hardware has up to 16 x 10Gb links; generating this much traffic is non-trivial.
● Test equipment such as IXIA traffic generators is expensive.
● Test equipment needs to be remotely switched between the different hardware under test in an automated way (a sketch of one approach follows this slide).
● Scheduling test runs that take days and require specific equipment to be dedicated to the task.
LNG unique challenges
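Automating the equipment switching amounts to driving the lab switch from the test scheduler. Below is a minimal sketch, assuming a managed switch reachable over SSH with key-based login; the device names, port numbers and CLI syntax are placeholders, not our actual lab setup.

```python
import subprocess

# Hypothetical mapping of each device under test to the switch ports
# that must share a VLAN with the traffic generator.
DUT_PORTS = {
    "soc-vendor-a": ["Gi1/0/1", "Gi1/0/2"],
    "soc-vendor-b": ["Gi1/0/3", "Gi1/0/4"],
}

def connect_traffic_generator(dut, switch="lab-switch", vlan=100):
    """Reconfigure the lab switch so the traffic generator and the selected
    DUT sit on the same VLAN. Command syntax is illustrative only and
    depends on the switch vendor."""
    commands = []
    for port in DUT_PORTS[dut]:
        commands += [
            f"interface {port}",
            f"switchport access vlan {vlan}",
            "exit",
        ]
    # Send the configuration over SSH; assumes key-based login to the switch.
    subprocess.run(["ssh", switch], input="\n".join(commands).encode(), check=True)

if __name__ == "__main__":
    connect_traffic_generator("soc-vendor-a")
```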

9.
● Multiple nodes may be needed to test traffic
interoperability.
● It is not feasible to replicate the test environment at
every developer's desk.
● The applied RT patch, even when disabled, alters the execution paths.
● Some tests run for 24 hours or more.
LNG unique challenges

10.
Questions
○ LAVA is (isn't) working for us
■ Interactive shells in the LAVA environment would speed debugging, given that testing can only be performed with the test equipment in the lab.
■ Multinode testing, with the reservation and configuration of network switches, is required.
■ Long-term trends in performance data need to be analysed and compared for regression analysis, triggering alerts for deviations (see the sketch after this slide).
○ Further thoughts on Friday
○ https://lce-13.zerista.com/event/member/79674
LNG Q&A
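To illustrate the kind of trend analysis and alerting meant above, here is a minimal sketch that flags a run when it drops more than a fixed tolerance below the recent average; the metric values, window size and threshold are invented for the example.

```python
import statistics

def check_regression(samples, window=10, tolerance=0.05):
    """Return True if the newest sample is more than `tolerance` (5%) below
    the mean of the previous `window` samples. `samples` is a list of
    throughput-style numbers ordered oldest to newest."""
    if len(samples) <= window:
        return False  # not enough history to judge
    baseline = statistics.mean(samples[-window - 1:-1])
    return samples[-1] < baseline * (1 - tolerance)

# Hypothetical packets-per-second history for one test on one board.
history = [9.8e6, 9.9e6, 10.0e6, 9.95e6, 9.9e6, 10.1e6,
           10.0e6, 9.9e6, 10.05e6, 10.0e6, 9.2e6]
if check_regression(history):
    print("ALERT: latest run deviates from the recent trend")
```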

13.
● Kernel code is validated using CI in the
Linaro LAVA Lab, on various member
hardware devices and ARM fast models.
● Our kernel code is also validated in member
LAVA labs on both current and next gen
hardware.
● Our builds at present are sanity tested by the LTs, but most testing is done by piggybacking on QA or automated testing set up by the platform team.
Verification of LT outputs

14.
● Currently we run only basic compile/boot tests + default CI tests (LTP, powermgmt)
● This needs to change; we want/need to do more
● We need more SoC-level tests; having LTs aware of how to produce tests to run in LAVA will become more important (a sketch follows this slide)
LT and kernel tests
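To make "produce tests to run in LAVA" concrete, here is a minimal sketch of a SoC-level check reporting through the lava-test-case helper that lava-test-shell provides on the device; the check itself and the sysfs path are placeholders.

```python
import os
import subprocess

def report(name, ok):
    """Report one result to LAVA via the lava-test-case helper
    (falls back to a plain print when run outside lava-test-shell)."""
    result = "pass" if ok else "fail"
    try:
        subprocess.run(["lava-test-case", name, "--result", result], check=True)
    except FileNotFoundError:
        print(f"{name}: {result}")

# Placeholder SoC-level check: does the expected network interface exist?
report("eth0-present", os.path.exists("/sys/class/net/eth0"))
```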

16.
● Deployment Guide
○ what are the hardware requirements for a LAB
○ what are the infrastructure requirements for a LAB
○ hardware setup, software installation instructions
● Administrator's Guide
○ basically how Dave Piggot does his job
○ after initial setup, day to day ops and maintenance
Better Documentation

17.
● Test Developer's Guide
○ how to integrate tests to be run in lava-test-shell (lava glue); a sketch of a test definition follows this slide
○ recommendations on how best to write tests for lava-test-shell
● User's Guide for lava-test-shell
○ for developers to use lava-test-shell
○ section devoted to using lava-test-shell in the workflow of a kernel developer?
Better Documentation
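As a starting point for such a guide, a rough sketch of the YAML definition that lava-test-shell consumes, emitted from Python so the structure is explicit; the field names are from memory of the Lava-Test definition format and should be verified against the LAVA documentation, and the test name, steps and scripts are placeholders.

```python
import yaml  # requires PyYAML

# Rough shape of a lava-test-shell test definition; verify the field
# names against the current LAVA documentation before relying on them.
definition = {
    "metadata": {
        "name": "lng-soc-sanity",                      # placeholder name
        "format": "Lava-Test Test Definition 1.0",     # check exact string
        "description": "Example SoC-level sanity test",
    },
    "run": {
        "steps": [
            "./run-benchmark.sh",                      # placeholder scripts
            "lava-test-case throughput --shell ./check-throughput.sh",
        ],
    },
}

with open("lng-soc-sanity.yaml", "w") as f:
    yaml.safe_dump(definition, f, default_flow_style=False, sort_keys=False)
```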

18.
● Impossible to answer the question: What tests are
available in LAVA?
● http://lava-test.readthedocs.org/en/latest/index.html
○ not sufficient, not up to date
○ the problem isn't the LAVA team; Linaro needs an acceptance policy on what a test must have available before it is used in LAVA
● would like to see meta-data in test documentation that can be used in test reports
○ in a format that can be used in report generation (one possible shape is sketched after this slide)
Document the tests
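One possible shape for that meta-data, sketched as a Python data structure; the fields are purely illustrative of what a report generator might want, not an existing LAVA schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestMetadata:
    """Illustrative per-test metadata a report generator could consume;
    none of these fields correspond to an existing LAVA schema."""
    name: str
    description: str
    maintainer: str
    devices: list = field(default_factory=list)   # device types it runs on
    metrics: list = field(default_factory=list)   # measurements it reports
    units: dict = field(default_factory=dict)     # units per metric
    duration_hint: str = "unknown"                # e.g. "minutes", "24h+"

# Example entry with placeholder values.
ltp = TestMetadata(
    name="ltp",
    description="Linux Test Project syscall and stress tests",
    maintainer="qa@example.org",
    devices=["arndale", "panda"],
    metrics=["tests_run", "tests_failed"],
    units={"tests_run": "count", "tests_failed": "count"},
    duration_hint="hours",
)
```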

20.
● Web dashboard won't cut it
● need to separate analysis from display
○ rather, do the analysis first, then decide how to display it (a toy sketch follows this slide)
● why infrastructure?
○ think there should be a level of reuse for
components used to do analysis
○ think these should be separate from LAVA
○ think of this as more of a data-mining operation
Infrastructure for Analysis
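A toy sketch of the separation argued for above: the analysis step is a reusable function returning plain data, and rendering is a distinct step that can be swapped (text, HTML, email) without touching the analysis; the test names and numbers are invented.

```python
import statistics

def analyse(runs):
    """Reusable analysis step: takes raw per-run measurements
    (test name -> list of values) and returns plain data."""
    return {
        name: {"mean": statistics.mean(values),
               "stdev": statistics.pstdev(values),
               "runs": len(values)}
        for name, values in runs.items()
    }

def render_text(summary):
    """One of several possible display steps; an HTML or email
    renderer would consume the same summary unchanged."""
    for name, s in sorted(summary.items()):
        print(f"{name}: mean={s['mean']:.1f} stdev={s['stdev']:.1f} ({s['runs']} runs)")

# Invented example data.
render_text(analyse({"netperf-tcp-stream": [940.0, 936.5, 941.2],
                     "cyclictest-max-latency": [85.0, 90.0, 87.0]}))
```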

23.
example:
● test report comparing:
○ current member BSP kernel
○ current LT kernel based on mainline
● evidence of quality/stability of LT/mainline kernel
● could be used to convince product teams (a sketch of the comparison step follows this slide)
Infrastructure for Analysis
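A sketch of the comparison such a report could start from, assuming results for the two kernels are already available as test-name to pass/fail maps; the data and test names are invented.

```python
def compare(bsp_results, lt_results):
    """Compare two {test_name: 'pass'/'fail'} maps and summarise where the
    LT/mainline-based kernel differs from the member BSP kernel."""
    common = sorted(set(bsp_results) & set(lt_results))
    return {
        "common_tests": len(common),
        "bsp_pass_rate": sum(v == "pass" for v in bsp_results.values()) / len(bsp_results),
        "lt_pass_rate": sum(v == "pass" for v in lt_results.values()) / len(lt_results),
        "differences": [t for t in common if bsp_results[t] != lt_results[t]],
    }

# Invented results for illustration only.
print(compare({"ltp-syscalls": "pass", "netperf": "pass", "cyclictest": "fail"},
              {"ltp-syscalls": "pass", "netperf": "fail", "cyclictest": "pass"}))
```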

27.
This model is "good enough" for most
developers and maintainers, so...
Why should we use Jenkins/LAVA?
Linaro test/validation will have to be
● at least as easy to use (locally and remotely)
● produce more useful output/results
● faster
○ build time
○ diagnostic time
Current workflow: "good enough"

32.
Where is the line between Jenkins and LAVA?
● Jenkins == build, LAVA == test?
● when a LAVA test fails, how do I know...
○ was this a new/updated test?
○ was this a new/updated kernel?
○ if so, can I get to the Jenkins build? In less than 10 clicks? (one way to keep that link navigable is sketched after this slide)
Issues: Big picture
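One way to keep that link navigable is to carry the Jenkins build URL into the LAVA job when it is submitted. Jenkins does export BUILD_URL and BUILD_NUMBER to its jobs; how the metadata gets attached to the LAVA job depends on the LAVA version, so only the collection step is sketched here.

```python
import json
import os

def job_metadata():
    """Collect Jenkins-provided environment variables (BUILD_URL and
    BUILD_NUMBER are standard Jenkins job variables; GIT_COMMIT comes
    from the Git plugin) so they can be attached to the LAVA job and
    followed back from a failing test later."""
    return {
        "jenkins.build_url": os.environ.get("BUILD_URL", "unknown"),
        "jenkins.build_number": os.environ.get("BUILD_NUMBER", "unknown"),
        "kernel.git_commit": os.environ.get("GIT_COMMIT", "unknown"),
    }

# Would be merged into the LAVA job definition before submission,
# e.g. under its metadata section (exact field name depends on LAVA version).
print(json.dumps(job_metadata(), indent=2))
```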

34.
● Terminology learning curve
○ dispatcher, scheduler, dashboard
○ device, device-type
○ What is a bundle?
○ WTF is a bundle stream?
○ Documentation... not helpful (enough said)
● Navigation
○ click intensive
○ how to get from a log to the test results? or...
○ from a test back to the boot log?
○ what about build log (Jenkins?)
○ can I navigate from Jenkins log to the LAVA test?
Issues: LAVA usability