
Abstract:

A method, apparatus and program product for using test results to improve
code quality are provided. An IDE or program operable with an IDE
retrieves automated test results for a code sequence. The IDE or separate
program detects the code sequence during source code development in an
IDE. The test results are then presented in the IDE during source code
development.

Claims:

1. A method for using test results to improve code quality, comprising the steps of: retrieving automated test results for a code sequence; detecting the code sequence during source code development in a software development environment; and presenting the test results.

2. The method of claim 1, wherein the test results are metrics and the
metrics are stored in a file.

3. The method of claim 1, wherein the test results are metrics and the
metrics are stored in a database.

4. The method of claim 1, wherein the test results are presented when a
cursor is hovered over a code sequence in the software development
environment.

5. The method of claim 4, wherein the test results are presented in a
dialog box.

6. The method of claim 1, wherein the code sequences are presented with
different appearances based upon the test results for each code sequence.

7. The method of claim 6, wherein the code sequences are presented in
different colors based upon the test results for each code sequence.

8. The method of claim 3, wherein code sequences are detected by comparing
a written code sequence to a reference code sequence in the metrics
database.

9. An apparatus for using test results to improve code quality, comprising: a processor, and a memory interconnected with the processor and having stored thereon a software development environment which retrieves automated test results for a code sequence and presents the automated test results during source code development.

10. The apparatus of claim 9, further comprising a graphical user
interface, wherein the automated test results are presented on the
graphical user interface during source code development.

11. The apparatus of claim 9, further comprising a file encoded on a
memory interconnected with the processor wherein the test results are
stored as metrics in the file.

12. The apparatus of claim 9, further comprising a database encoded on a
memory interconnected with the processor wherein the test results are
stored as metrics in the database.

13. The apparatus of claim 10, further comprising an input/output device
wherein the automated test results are presented when the input/output
device is used to select the code sequence.

14. A computer program product comprising a computer-readable medium having encoded thereon computer-executable program instructions for using test results to improve code quality, comprising: first program instructions for retrieving automated test results for a code sequence; second program instructions for detecting the code sequence during source code development in a software development environment; and third program instructions for presenting the test results.

15. The program product of claim 14, further comprising: fourth program instructions for storing the test results in a file with the code sequence, and wherein the second program instructions detect the code sequence by matching the code sequence in the source code to the code sequence in the file.

16. The program product of claim 14, further comprising: fourth program instructions for storing the test results in a database with the code sequence, and wherein the second program instructions detect the code sequence by matching the code sequence in the source code to the code sequence in the database.

17. The program product of claim 14, wherein the third program
instructions present the test results when a cursor is hovered over a
code sequence in the software development environment.

18. The program product of claim 17, wherein the test results are
presented in a dialog box.

19. The program product of claim 14, wherein the code sequences are
presented with different appearances based upon the test results for each
code sequence.

20. The program product of claim 19, wherein the code sequences are
presented in different colors based upon the test results for each code
sequence.

Description:

FIELD OF THE INVENTION

[0001]The invention relates to the field of computer code development and
more particularly to a method, apparatus and program product for feeding
test metrics into an Integrated Development Environment to aid software
developers to improve code quality.

BACKGROUND

[0002]Software developers are constantly looking for ways to improve code
quality. Code quality measures how well the code is designed and how well the code conforms to that design. Code quality encompasses a wide range of
qualities, such as usability, reliability, maintainability, scalability
and performance. Code performance means the time it takes to execute the
code under a given scenario (e.g. on given hardware, with a given number
of concurrent users running specified tasks). Code quality can be
measured through automated unit tests, which may be run at build time to
determine the quality of a unit of code. The output of these automated tests may be a set of HTML or XML pages detailing the passed/failed status of various tests and the performance of certain function calls (e.g., performance time under a given scenario). A developer can manually
reference these tests for use in improving future code quality. However,
this is currently a manual process. The improvement in the quality of
future code depends upon the developer expending the time to review the
automated test results and the developer's ability to effectively
interpret the test results and implement improvements.

[0003]Integrated development environments (IDEs) are known for aiding
software developers to create code. These IDEs perform functions such as
providing an icon or list of one or more potentially fitting source code
elements based on probability. However, determining the suitability of
suggested code elements relies upon the expertise of the developer.

SUMMARY

[0004]A method, apparatus and program product for using test results to
improve code quality are provided. An IDE or program operable with an IDE
retrieves automated test results for a code sequence. The IDE or separate
program detects the code sequence during source code development in an
IDE. The test results are then presented in the IDE during source code
development.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005]The features and advantages of the invention will be more clearly
understood from the following detailed description of the preferred
embodiments when read in connection with the accompanying drawing.
Included in the drawing are the following figures:

[0006]FIG. 1 is a block diagram of a computing device configured to feed
test metrics into an Integrated Development Environment according to an
exemplary embodiment of the present invention;

[0007]FIG. 2 is a flow diagram of a method for feeding test metrics into a
software development environment to aid software developers to improve
code quality according to an exemplary embodiment of the present
invention;

[0008]FIG. 3 is a flow diagram of a method for writing improved quality
code using metrics in an IDE according to an exemplary embodiment of the
present invention;

[0009]FIG. 4 is a visual representation of a view for presenting
performance metrics for code sequences in an IDE when a user hovers over
a method name according to an exemplary embodiment of the present
invention; and

[0010]FIG. 5 is a flow diagram for a method of presenting software metrics
in an IDE according to an exemplary embodiment of the present invention.

DETAILED DESCRIPTION

[0011]The present invention provides a method, apparatus and program
product for using test results to improve code quality.

[0012]In an exemplary embodiment of the present invention, a computing
apparatus 100 is provided for using automated software test results to
improve code quality. As shown in FIG. 1, the computing apparatus
comprises a central processing unit 110 interconnected with a memory 120
and a random access memory (RAM) 130 through a data bus 140. The CPU 110
may also be interconnected to a display 150 and various input/output
devices 160 such as a mouse, keypad, printer or the like through bus 140.
One or more networks 190, such as the Internet, an intranet, a local area
network (LAN), a wide area network (WAN) and the like may be
interconnected to the computing device 100 through the bus 140.

[0013]An Integrated Development Environment (IDE) 125 is stored in memory 120, which may be an internal or external hard drive, a disc drive, a USB
memory device, or any other memory device suitable for storing program
code in an accessible manner. The IDE comprises program code that creates
a user interface display useful for software developers during the
building of code.

[0014]The IDE 125 may be a specialized IDE embodying the advantages of the
present invention, or alternatively, a separate program may work with an
IDE to achieve the advantages of the invention. The IDE 125 (or a
supporting separate program) retrieves results from automated testing.
These automated tests, which are known in the art, can be run during the
automated source code development process or during testing of completed
units of code.

[0015]The automated test results may provide a range of quality metrics:
from pass/fail indicating whether or not the code sequence passed the
automated tests (this may include whether or not the code sequence met an
established performance criterion, whether the code sequence actually
delivered all the function it is meant to deliver, etc.); to quality
metrics such as how long the code sequence takes to execute under a given
scenario (performance); or warnings of poor code style, which might be
generated using, for example, a static code analysis tool. The automated
test results might additionally or alternatively provide metrics
measuring other code characteristics such as code complexity,
maintainability, and the like. These quality metrics may be in the form
of warnings as appropriate or as measurement data. The IDE 125 or a
program supporting the IDE collects metrics from the test results, and
displays those metrics during source code development to aid the code
developer to improve code quality.
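The metric categories just described can be gathered into a single record per tested code sequence. The following Java sketch is one possible shape for such a record; the class and field names are illustrative assumptions, not taken from the patent:

```java
// A minimal sketch of a per-code-sequence metrics record, covering the
// metrics named above: pass/fail status, execution time, and warnings.
// All names here are hypothetical.
class TestMetrics {
    final String codeSequence;   // the reference code text that was tested
    final boolean passed;        // whether the sequence passed automated tests
    final double avgExecutionMs; // execution time under the test scenario
    final int warningCount;      // e.g., style warnings from static analysis

    TestMetrics(String codeSequence, boolean passed,
                double avgExecutionMs, int warningCount) {
        this.codeSequence = codeSequence;
        this.passed = passed;
        this.avgExecutionMs = avgExecutionMs;
        this.warningCount = warningCount;
    }

    public static void main(String[] args) {
        TestMetrics m = new TestMetrics(
            "libraryService.getLibraryForContentPath(path)", true, 10.0, 0);
        System.out.println(m.codeSequence + ": avg " + m.avgExecutionMs + " ms");
    }
}
```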

[0016]An exemplary method for using test results to improve code quality
is shown in FIG. 2. In the illustrated method, the IDE 125 retrieves
automated test results (step 210). Typically, the software is built and
then deployed onto a system where it will run. The automated tests are
also built and deployed on the system, and run against the software. The
automated test framework records the pass/failure of each unit test, and
the time it took to execute the unit test, which is the performance of
the particular part of the software tested by the unit test. Unit tests often test one particular function (e.g., a Java method), so it can be said that the unit test is measuring the performance of that particular function. The test framework then publishes the results of the unit tests. It is these results that are retrieved or fed back into the IDE 125. The IDE
125 may retrieve these results, for example, by requesting the published
results. Alternatively a separate program may request the results and
feed them into the IDE. Then, when a developer attempts to use a
particular function (e.g., a Java method) for which automated test results
have been published, he/she can see the results of the unit test for that
function.
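A test framework of the kind described might record each unit test's pass/fail outcome and execution time roughly as sketched below. This is a hedged illustration of one possible recorder, not the framework the patent assumes, and all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a framework that runs a unit test body, records whether it
// passed (no exception thrown) and how long it took, and keeps the
// results for later publication to the IDE. Names are illustrative.
class UnitTestRecorder {
    static class Result {
        final String testName;
        final boolean passed;
        final long elapsedMs;
        Result(String testName, boolean passed, long elapsedMs) {
            this.testName = testName;
            this.passed = passed;
            this.elapsedMs = elapsedMs;
        }
    }

    final List<Result> published = new ArrayList<>();

    // Times the test body; a thrown error or exception counts as a failure.
    Result run(String testName, Runnable testBody) {
        long start = System.nanoTime();
        boolean passed = true;
        try {
            testBody.run();
        } catch (AssertionError | RuntimeException e) {
            passed = false;
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        Result r = new Result(testName, passed, elapsedMs);
        published.add(r);
        return r;
    }

    public static void main(String[] args) {
        UnitTestRecorder rec = new UnitTestRecorder();
        Result r = rec.run("sampleTest", () -> { /* test body */ });
        System.out.println(r.testName + " passed=" + r.passed
                + " in " + r.elapsedMs + " ms");
    }
}
```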

[0017]Metrics are then compiled from the test results (step 220). As
discussed above, the metrics in one exemplary embodiment include
pass/fail information, indicating whether or not a particular code
sequence passed a particular unit test. This information is valuable
during code development to avoid known problems, and thereby improve code
quality. In another exemplary embodiment, an automated unit test measures
performance of a code sequence, and the metric that is compiled is the
average execution time for the particular code sequence compared to some
benchmark, for example, to determine whether the code sequence is
relatively fast or slow. This information enables the code developer to
make informed decisions when choosing between alternative code sequences.
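The compilation step described above, averaging raw timings and comparing the average to a benchmark, can be sketched as follows; the method names and the idea of a single benchmark value are assumptions for illustration only:

```java
// Sketch of compiling a performance metric from raw unit-test timings:
// average the observed execution times, then classify the code sequence
// as relatively fast or slow against a benchmark. Names are illustrative.
class MetricCompiler {
    static double averageMs(long[] runsMs) {
        long total = 0;
        for (long t : runsMs) total += t;
        return (double) total / runsMs.length;
    }

    // A sequence is "slow" if its average exceeds the benchmark time.
    static boolean isSlow(long[] runsMs, double benchmarkMs) {
        return averageMs(runsMs) > benchmarkMs;
    }

    public static void main(String[] args) {
        long[] runs = {10, 20, 30};
        System.out.println("avg = " + averageMs(runs) + " ms, slow vs 15 ms? "
                + isSlow(runs, 15.0));
    }
}
```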

[0018]The metrics are stored in a memory 120 (step 230). Metrics may be
stored by populating a database, for instance. Each code segment tested
may be entered into the database with its corresponding test results.
Alternatively, the metrics for each tested code sequence may be stored to
a data file. The metrics may be stored within the IDE 125, separately within memory 120, in a separate memory in computing device 100, or in a separate memory accessible through network 190.

[0019]While the developer is building code in the IDE 125, the IDE detects
code sequences for which metrics are available (step 240). Code sequences
are detected by comparing or matching a written code sequence to a
reference code sequence in a metrics database. The matching step may
comprise an exact match, a best fit or other available matching routine.
Moreover, the matching may comprise a single line of code or larger
blocks of code or both.
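The detection step above can be sketched as a lookup of the written line against the reference sequences in the metrics store. Only exact matching (after whitespace normalization) is shown here; the best-fit matching the text mentions would replace the lookup. All names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of detecting a code sequence: normalize the written line and
// look it up against reference sequences registered with their metrics.
// Names are illustrative, not from the patent.
class SequenceDetector {
    final Map<String, String> metricsBySequence = new HashMap<>();

    void register(String referenceSequence, String metricsSummary) {
        metricsBySequence.put(normalize(referenceSequence), metricsSummary);
    }

    // Returns the stored metrics summary, or null if the line is unknown.
    String detect(String writtenLine) {
        return metricsBySequence.get(normalize(writtenLine));
    }

    // Crude normalization: ignore all whitespace differences.
    static String normalize(String code) {
        return code.replaceAll("\\s+", "");
    }

    public static void main(String[] args) {
        SequenceDetector det = new SequenceDetector();
        det.register(
            "Library library = libraryService.getLibraryForContentPath(path);",
            "average run time = 10 ms");
        System.out.println(det.detect(
            "Library library=libraryService.getLibraryForContentPath( path );"));
    }
}
```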

[0020]The IDE 125 presents metrics for the current code sequence while the
developer is operating in the IDE 125 (step 250). In an exemplary
embodiment, the IDE 125 presents metrics in a dialog box (420 in FIG. 4)
as will be described in more detail below. The dialog box may contain,
for example, performance characteristics such as processing time for a
function call. The dialog box may also contain test status, such as
whether or not the code sequence selected has been subjected to automated
testing, whether or not the code sequence has passed automated testing,
the percentage of a selected code sequence that has passed automated
testing, and other test related information.

[0021]In another exemplary embodiment, the IDE 125 presents metrics by
changing the appearance of the code sequence. For example, a code
sequence may appear in a different color in an editing window of the IDE
125 depending upon the test status of the particular code sequence. A
code sequence that has been tested and has passed automated testing, for
example, may appear in green. A code sequence that has not been tested,
or contains elements that have not been tested, may appear in yellow. A
code sequence that has failed automated testing or contains elements that
have failed automated testing may appear in red. Similarly, code
sequences that have passed automated testing may appear normally, while
code sequences that have not been tested or have failed testing may
appear in bold, italics, different fonts, etc. In an exemplary
embodiment, specific lines of code may be altered in appearance based on
test status.

[0022]Alternatively, code sequences may appear differently depending upon
performance in automated testing. For example, a function call that takes
less than 10 milliseconds (ms) to perform may appear in green, while a
function call that takes between 10 ms and 100 ms appears in yellow and a
function call that takes more than 100 ms appears in red. It should be
understood that these thresholds are exemplary and any thresholds, as
well as any number of thresholds may be used. Moreover, the thresholds
may be set and controlled by a developer or administrator. As another
example, two different code sequences for performing a similar function,
such as finding an object by `path` or by `id` may both undergo unit
testing and the relatively faster code sequence would appear in green and
the relatively slower code sequence would appear in red. Also, the
appearance variations may vary from the foregoing examples. For example,
instead of the actual code sequence appearing in various colors, a
background or border may change in appearance to indicate automated
testing results for a particular code sequence.
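The threshold scheme above, using the exemplary 10 ms / 100 ms values, can be sketched as a small classifier with developer-configurable thresholds; the class and method names are illustrative assumptions:

```java
// Sketch of mapping a function's measured run time to a display color
// using two configurable thresholds, per the exemplary 10 ms / 100 ms
// scheme described in the text. Names are illustrative.
class PerformanceColor {
    final double fastMs;
    final double slowMs;

    PerformanceColor(double fastMs, double slowMs) {
        this.fastMs = fastMs;
        this.slowMs = slowMs;
    }

    String colorFor(double elapsedMs) {
        if (elapsedMs < fastMs) return "green";   // under 10 ms in the example
        if (elapsedMs <= slowMs) return "yellow"; // between 10 ms and 100 ms
        return "red";                             // over 100 ms
    }

    public static void main(String[] args) {
        PerformanceColor pc = new PerformanceColor(10, 100);
        System.out.println("5 ms -> " + pc.colorFor(5));
    }
}
```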

[0023]Other performance attributes, such as warnings may also be presented
either in a dialog box or by changes in appearance of the code. Moreover,
the metrics which are presented may be selected, changed and controlled
by a code developer or administrator.

[0024]FIG. 3 shows a flow diagram of a method for writing improved quality
code using metrics in a software development environment according to an
exemplary embodiment of the present invention. A code developer builds
code within an integrated development environment (IDE) 125 which
provides various functions to aid the developer in the code development.
The IDE 125 provides a display at a user interface which includes a
window or graphical representation of code as it is written. In many
instances during code development a developer may know two or more ways
to perform a particular task. In the exemplary embodiment illustrated in
FIG. 3, the developer types two or more alternative code sequences in the
IDE 125 (step 310).

[0025]Next, the developer selects a first one of the alternative code
sequences (step 320). The developer may select one of the alternative
code sequences by hovering over the selected code sequence with a mouse
or other I/O device, for example. Alternatively, the developer may click
on the selected code sequence. It should be understood that any method
for selection known in the art is encompassed within the invention.

[0026]In response to the selection of one of the alternative code
sequences, the IDE 125 presents metrics from automated testing previously
performed on the selected code sequence. In an exemplary embodiment,
these metrics are presented in a dialog box, and are written in plain
English, for example, "average execution time=10 ms". Any combination or
variation of metrics may be presented for a selected code sequence. The
developer reads the metrics for the selected first code sequence (step
330).

[0027]The developer then selects a second code sequence (step 340). Again,
the developer may select the second code sequence by hovering over it
with an I/O device, such as a mouse, or by any other convenient selection
method. As with the first selected code sequence, the IDE 125 presents
metrics for the second selected code sequence in a dialog box. Then, the
developer reads the metrics for the second code sequence in the dialog
box (step 350).

[0028]Having read the metrics for each alternative code sequence, the
developer then chooses the code sequence with better metrics (step 360).
Thus, the developer can easily access metrics from automated testing for
alternative code sequences while building code and choose the code
sequence that has been demonstrated to provide higher code quality. This
enables the developer to select code sequences with faster processing
time, code sequences that have passed automated testing, code sequences
that have fewer warnings, etc.

[0029]The foregoing flow is an example flow of how the IDE might present
the test results for a code sequence, by hovering on typed code. Other
mechanisms may also be used for presenting test results in an IDE during
source code development. For example, some IDEs offer an `outline` view
of each code unit, which lists all of the functions available in that code
unit. A user (developer) could select each function in the outline view
to view the metrics for that function. Also, if an IDE offers an
`autocomplete` mechanism, which offers a similar outline list of
available functions, this view may be extended to show metrics for
available code.

[0030]FIG. 4 illustrates an example of the foregoing process. During
software development, a developer knows two methods to perform a similar
function. In the illustrated example, the developer has an object
(content) and wants to get the library that the content is in. The
developer knows two ways to find the library, both of which involve using a
`LibraryService`, which is a service that can be used for getting
libraries associated with objects, such as content. The developer can use
the path of the content, which is known to the developer. Alternatively,
the developer can use the id of the content, which is known to the
developer. Either can be used to find the associated library.

[0031]In the illustrated instance the developer can either find the
library by its path or by its id. One method, however, may perform
significantly better than the other. In this example, looking up the
library by path might involve a simple string manipulation and cache
lookup, which will require very little processing time. Meanwhile,
looking up the library by id might involve a query to a database, which will require much more processing time.

[0032]The developer types lines of code for each way to find the
associated library:

Library library=libraryService.getLibraryForContentId(id) (1)

Library library=libraryService.getLibraryForContentPath(path) (2)

[0033]These lines of code are displayed on a screen 400 of a graphical
user interface (e.g., display 150) in the IDE 125.

[0034]In an alternative embodiment, the developer does not actually type
two alternate lines of code. Instead, the developer may use a feature of
the IDE 125 to select a function. First, the developer types:

Library library=libraryService (3)

[0035]Then, the developer uses an `auto-complete` feature of the IDE 125
(such as ctrl+space in Eclipse®). The `auto-complete` feature offers
(presents) all of the functions available on the libraryService in a
list. The user will view the functions: `getLibraryForContentId( )` and
`getLibraryForContentPath( )` as potential functions on the list. The
developer might hover over each function in the list to view the
performance metrics for that function. Alternatively, the list of
functions might be presented with metrics for each function. Having
reviewed the performance metrics, the developer may select the fastest
function. The IDE 125 would then insert the text for the developer, saving
typing by the developer.

[0036]As shown in FIG. 4, the developer has selected code sequence (2) by
hovering on the line of code 410 to find the library using the content
path. In response to the selection of line of code 410, the IDE 125 opens a
dialog box 420. In the dialog box, the IDE presents the performance
metrics 421. In the illustrated example the performance metric is average
run time, and the average run time for the code sequence using the file
path is 10 ms.

[0037]FIG. 5 is a flow diagram for a method of presenting software metrics
in a software development environment according to an exemplary
embodiment of the present invention. The IDE 125 receives a code sequence
(step 510). This may be accomplished by a developer entering a code
sequence into a user interface, whereby the IDE 125 automatically checks
for test metrics for each line of code or other code sequence definition.
Alternatively this may be accomplished by selecting a code sequence.

[0038]The IDE 125 then checks for the entered or selected code sequence in
a metrics table (step 520) and retrieves the metrics for the specified
code sequence. Alternatively, the IDE may look in a folder or the like.
In another embodiment, the IDE 125 may run an automated test during
source code development in response to entering or selecting a particular
code sequence.

[0039]The IDE 125 then determines from the metrics (retrieved either from
a table or folder or from running an automated test) for the specified
code sequence whether or not the code sequence has failed unit testing (step 525). If the code sequence failed testing, then the code sequence is
displayed in a manner to indicate that this code sequence should be
avoided (step 530). In an exemplary embodiment the failed code sequence
is displayed in red type.

[0040]The IDE 125 then determines whether a performance threshold is
exceeded (step 535). In the illustrated embodiment, the performance
threshold is run time. However, the threshold may be for other
performance metrics, such as warnings, for example. Also, absence of
testing may be used as a performance threshold.

[0041]If the metric for the specified code sequence exceeds the threshold,
then the code sequence is displayed in a manner indicating that this code
sequence should be used with caution or use should be limited. For
example, a code sequence which should be used with caution or used
sparingly may be displayed in yellow type.

[0042]If the metric does not exceed the threshold, then the code sequence
is displayed in a manner that indicates that the particular code sequence
is good to use. For example, a code sequence that is good to use may be
displayed in green type.
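The FIG. 5 branches described in the last three paragraphs can be summarized as one decision function. The sketch below assumes a single run-time threshold and hypothetical names; it is an illustration of the flow, not the patent's implementation:

```java
// Sketch of the FIG. 5 decision flow: a failed sequence is flagged red,
// a passing but over-threshold sequence yellow, and everything else
// green. The parameter names and single threshold are assumptions.
class DisplayDecision {
    static String displayColor(boolean failedUnitTest, double runTimeMs,
                               double thresholdMs) {
        if (failedUnitTest) return "red";             // avoid this sequence
        if (runTimeMs > thresholdMs) return "yellow"; // use with caution
        return "green";                               // good to use
    }

    public static void main(String[] args) {
        System.out.println(displayColor(false, 10.0, 100.0));
    }
}
```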

[0043]It should be understood that while three different display colors
are described in the foregoing description, any number of different
display types (e.g., colors) may be used to indicate different test
results. Also, the metrics can be displayed in different combinations.
Moreover, the number, type and combinations of display appearances
indicating different metrics may be selected or set by a user/developer
or by an administrator.

[0044]The invention can take the form of an entirely hardware embodiment,
an entirely software embodiment or an embodiment containing both hardware
and software elements. In an exemplary embodiment, the invention is
implemented in software, which includes but is not limited to firmware,
resident software, microcode, etc.

[0045]Furthermore, the invention may take the form of a computer program
product accessible from a computer-usable or computer-readable medium
providing program code for use by or in connection with a computer or any
instruction execution system or device. For the purposes of this
description, a computer-usable or computer readable medium may be any
apparatus that can contain, store, communicate, propagate, or transport
the program for use by or in connection with the instruction execution
system, apparatus, or device.

[0046]The foregoing method may be realized by a program product comprising
a machine-readable media having a machine-executable program of
instructions, which when executed by a machine, such as a computer,
performs the steps of the method. This program product may be stored on
any of a variety of known machine-readable media, including but not
limited to compact discs, floppy discs, USB memory devices, and the like.
Moreover, the program product may be in the form of a machine-readable transmission such as Blu-ray, HTML, XML, or the like.

[0047]The medium can be an electronic, magnetic, optical, electromagnetic,
infrared, or semiconductor system (or apparatus or device) or a
propagation medium. Examples of a computer-readable medium include a
semiconductor or solid state memory, magnetic tape, a removable computer
diskette, a random access memory (RAM), a read-only memory (ROM), a rigid
magnetic disk, and an optical disk. Current examples of optical disks include
compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W)
and DVD.

[0048]The preceding description and accompanying drawing are intended to
be illustrative and not limiting of the invention. The scope of the
invention is intended to encompass equivalent variations and
configurations to the full extent of the following claims.