Linux Test Project - Wiki

The Linux Test Project (LTP) is an open source project whose goal is to deliver test suites to the open source community that validate the reliability, robustness, and stability of Linux. The LTP test suite is a collection of automated and semi-automated tests covering various aspects of the Linux operating system. The goal of LTP is to deliver a suite of automated testing tools for Linux and to publish the results of the tests we run. LTP invites both internal and external communities to contribute. For more detailed information about the Linux Test Project (LTP), visit http://ltp.sourceforge.net/ and http://sourceforge.net/projects/ltp/, or join #ltp on IRC at irc.freenode.org.

This wiki is mainly designed to gather feedback from LTP users about their experience with LTP. Anybody can update this page with suggestions for improving the project, and can also post their willingness to contribute. We, the LTP team, will make sure that your contributions become part of the LTP test suite with minimal delay. As with every project, we promise to keep evolving and we seek your suggestions in this regard. For any issues regarding LTP, kindly contact shubham@linux.vnet.ibm.com.

Latest News on LTP

LTP for the month of January 2012 has been released. You can pick up your copy from here.

LTP Changes during JANUARY 2009

Upcoming LTP works

Recently we have embarked on a major collaboration initiative to port a major chunk of the existing system call test cases from the Crackerjack project to the Linux Test Project. Leading this initiative is Masatake Yamato <yamato@redhat.com> from Red Hat, who has already ported some of the test cases to LTP. The following table depicts the status of this work, which we plan to finish by the end of 2009. We welcome other volunteers who can help with this porting effort; please add your name to the following list as a willing LTP contributor.

CRACKERJACK to LTP Porting status

62 of the syscall tests require direct porting and the creation of new directories in LTP. Test cases with a pink background require immediate attention and may be taken up with priority; however, you may proceed as per your comfort.

Once this is over, 29 test cases need to be ported for their 16-bit or 64-bit versions. This will not involve creating new directories in LTP; the tests will be placed in existing directories, either adding new source files or updating existing ones, similar to sendfile64 and fadvise64.

The remaining test cases, like shmget, shmat, msgsnd, shmctl & shmdt, can be investigated later, once (i) and (ii) above are done. We can then see whether Crackerjack tests some additional functionality for them; if so, we can include one or two tests for each. But that comes at the end.

Points to Follow for writing NEW SYSCALL tests (Masatake Yamato)

Here are general rules for writing test cases for newer system calls. If you find something wrong, please fix it.

(1) If glibc provides an interface, we SHOULD write a test case against it.

We can test kernel code through glibc.

(2) If we have more time, we will add a test case against the header files provided by the kernel, so we can test the system call even if glibc has not yet provided a header file for it.

(3) If there is no header file, neither sys/foo.h nor linux/foo.h, we SHOULD add a fallback that reports it, like:

testcases/kernel/syscalls/inotify/inotify01.c:

    int main(void)
    {
    #ifndef NR_inotify_init
        tst_resm(TWARN, "This test needs a kernel that has inotify syscall.");
        tst_resm(TWARN, "Inotify syscall can be found at kernel 2.6.13 or higher.");
    #endif
    #ifndef HAS_SYS_INOTIFY
        tst_resm(TBROK, "can't find header sys/inotify.h");
        return 1;
    #endif
        return 0;
    }

In the report, the test case should tell from which version the system call may be available.

I'm not sure whether I should check NR_inotify_init and HAS_SYS_INOTIFY, but for now I'd like to ignore these details.

One thing we have to decide now is which test status to use when the header file is not found. inotify01.c uses TBROK; signalfd uses TCONF, like:

    int main(int argc, char **argv)
    {
        tst_resm(TCONF, "System doesn't support execution of the test");
        return 0;
    }

Some Points to Follow during Porting (Masatake Yamato)

See the Makefile and *.h files of setgid in LTP when working on 16-bit related system calls. That may help you.

As Subrata wrote, see the Makefile of fadvise64 in LTP when working on 64-bit related system calls.

It seems that the test cases for foo16, foo and foo64 print their results in the same format, like: # ./posix_fadvise03

Some test cases in the crackerjack project don't have a copyright notice, so we cannot port them. I will have a chance to give a presentation in front of the crackerjack team in Beijing next month, and I'd like to explain this copyright issue, which makes porting impossible.

Some test cases in the crackerjack project use non-English comment text. Generally I'd like to port them as-is, but the non-English comments make that hard. I'd like to explain this in Beijing, too. We could drop the comments, but I'd prefer not to.

Some test cases in crackerjack use an LTP-API-compatible layer. These should be easy to port; I'll work on this kind of test case.

We already have this at testcases/kernel/syscalls/ipc/shmget, so there is no need to look into it right now. Maybe after the porting is over, we can see whether Crackerjack's shmget covers some additional functionality over LTP's shmget; if so, we may need to add one or two test cases. But this is not to be prioritized immediately.

We have a different form of this in the recently added Memory Hotplug test cases at testcases/kernel/hotplug/memory_hotplug/. But I would like to see this tested inside testcases/kernel/syscalls, as hotplug testing is presently optional. So please include/port this as well.

Kernel change | Status
SMP alternatives for i386 added, to patch instructions in the kernel on the fly | do
CONFIG_REGPARM enabled by default | do
1Gb process stack randomization added (used to be 8Mb) | do
make isoimage support added | do
memory hotadd without sparsemem added | do
lots of Cell processor updates | do
ext3 performance improvements | do
xfs tweaks | do
jfs mount options added | do
ext2 attributes added to jfs | do
jfs support for splice added | do
FUSE O_ASYNC and O_NONBLOCK support added | do
NFS I/O performance counters added | do
NFS client metrics added | do
RPC I/O stats added | do
relayfs support made generic | do
debugfs blob support added | do
sysfs attributes are now pollable | do
syscall audit records added to SELinux | do
RFC 4191 IPv6 support added | do
DCCP sysctls added | do
softmac wireless driver layer added | do
lots of new wireless drivers added (broadcom included) | do
PCI legacy proc support removed | do
IPMI driver model support added | do
new device ids and drivers for video added | do
big libata update with new devices and fixes | do
SCSI cache settings added to sysfs | do
braille device support for all input devices added | do
SNES mouse support added | do
unified USB touchscreen driver for all touchscreens | do
loads of new USB device support added | do
huge network driver updates | do
large sound driver updates | do
acpi dock support added | do
i2c support for new controllers added | do
LED class support added, along with a lot of different LED drivers | do
Secure Digital driver support added | do
Niagara multicore CPU processor support added | do

User Feedback


(1) Use the same coding style for every test; I saw many tests coded in different ways. We could use the Linux kernel coding style for the tests (this would make the LTP code easier to maintain).

(2) Work more on Linux kernel regression bugs. The community reports all regressions found in every kernel release; you can see the regression list at http://kernelnewbies.org/known_regressions, with more detailed material in the kernel bugzilla. It would be nice to have regression test cases, to be sure these bugs are no longer in the kernel.

(3) Work closely with the kernel community, writing tests as soon as new features land (the kernel community uses ABAT + LTP to test every kernel release; the results are at http://test.kernel.org/functional/index.html). For example, the newest kernel release, 2.6.22-rc1, has a new system call named eventfd (http://lwn.net/Articles/234123/rss), and it would be nice to have a test case for it.

(4) Result graphics, using tools such as gnuplot. It would be nice to have all results in an output format usable with gnuplot for producing graphs, so we could watch pass/fail trends over time across different kernels.

(5) Improve the LTP website.

The biggest pain of using LTP is the number of false positives it generates. There are so many that we need to keep a list of "known errors" in order to decide whether any real error messages have popped out. Fixing bugs is exactly where the focus should remain until it becomes the norm for LTP to declare "pass" for a run with no true failures. Here is the list of LTP errors that we generally ignore as false positives:

(1) fcntl17 attempt to signal child failed
(2) fsync took too long
(3) gettimeofday is going backwards
(4) gf18
(5) mlockall* (2944?)
(6) msg* call failed - errno=28 (note: after you get error 28, or "device out of space", you'll get tons and tons of msg* error messages that are safe to ignore. I think this tends to happen only when swap is less than 2*RAM, and perhaps only on RHEL)
(7) nanosleep remaining time doesn't match foo (17215?)
(8) pselect sleep time was incorrect
(9) syslog* failed to log msgs of all levels
(10) capset02 BROK Unexpected signal 15 received (matrix issue 1234 - if you see this, please mark as PASS, but attach the issue to help establish trends)
(11) getsid02 (25542)
(12) ioperm* (21070 and 21619)
(13) setpgid* expect EACCES got 1 (21134 mention, LTP SF bug 1114033)
(14) shmctl01 FAIL: # of attaches is incorrect
(15) shmget02 FAIL: call succeeded unexpectedly (21896)
(16) socket* (21065)
(17) syslogd no such command (21068 - different failure than the historical one)
(18) vmsplice01 1 FAIL: vmsplice() Failed, errno=38 (bz28685)

As I understand it, most of the tests generate two kinds of result files. Generally we look through the .log file and, on any failure, go through the .out file to find the cause, then rerun the individual test if required to see whether it passes. I haven't come across tests that report a failure in the log file while the end result shows all passed. In the early days (2000-01), LTP used to yield many bugs. Nowadays, finding fewer bugs through LTP could mean the kernel is getting stronger day by day, and hence regression defects are very hard to find. It is also understood that kernel developers test their patches for functionality to get an ack or nack. LTP is meant for both functional and regression tests; it may not track whether a developer's patch fixed a particular issue, but any regressions from that patch can be found through the introduction of a new bug. There is scope for improvement in this area: filter the logs and collect them in a precise format. As LTP maintainers, our job is to make sure each LTP release has the latest good patches from the community. We can certainly contribute to improving other areas of LTP (like the log files), but ultimately it is also the responsibility of kernel developers across the community to contribute new, effective test cases that yield more regression defects, to get the glory back. A code coverage analysis of LTP could help us find where test coverage is lacking.

(1) The networktests and ltpstress workloads use rsh, rcp, and rlogin. These out-of-date tools should be replaced with more current ones; a good replacement candidate would be ssh-based tools.

(2) I use ltprun to initiate the LTP base workload. This lets me get a summary of the test status from the system we use as a master by running ltp_check. There is no equivalent to ltprun or ltp_check for the other LTP workloads; it would be nice to have some tool to obtain a summary when running them.

(3) It would be nice to have tests for CIFS/Samba in addition to ftp and nfs.

(4) I would like to see long-running stress tests for the various network file systems in addition to those provided for ftp and nfs, and also a more diverse set of sample files for the test cases to operate on.

I found that some system calls, like bdflush, clock_getres, clock_gettime, epoll_create, getdents64, get_mempolicy, mmap2, pivot_root, remap_file_pages, rt_sigaction, sendfile64, sched_getaffinity, set_thread_area, set_tid_address, stat64, statfs64, tgkill, waitid, add_key... are not tested.

One of the nasty things about LTP is that I don't think we pick up any of the individual test failures and cascade them through to the final test-run status. That makes it harder to tell whether there's a regression, though I guess it's doable by hand. (Picked up from his comment on autotest@test.kernel.org)