[ https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12869423#action_12869423
]
Chris Douglas commented on HADOOP-6332:
---------------------------------------
bq. all visible changes in the build system will be the same + a lot of stuff from src/test/aop/build/aop.xml
will have to be brought into the Common, HDFS, and MR builds anyway.
bq. we'll need to have a source code dependency on Hadoop's subprojects at framework development
time to make sure the aspects are binding correctly, etc
This is why I'm asking about packaging. Building (and supporting) artifacts for Herriot in
Common, HDFS, and MapReduce as part of their normal compile is sub-optimal. What is required
to compile the aspects? If source is not required, can the AOP code live in the Herriot project
and be compiled against the jars published by maven?
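On the compilation question: AspectJ's compiler supports binary weaving, i.e. weaving aspects into already-compiled class files, so in principle the aspect sources could live in the Herriot project and be woven into the jars published by maven, with no Hadoop source tree needed. A hypothetical Ant fragment sketching this (jar names, paths, and target names are illustrative, not the actual project layout):

```xml
<!-- Hypothetical sketch: binary-weave Herriot aspects into a published
     Hadoop jar using the AspectJ Ant tasks. All file names below are
     illustrative assumptions, not the real build layout. -->
<taskdef resource="org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties"
         classpath="lib/aspectjtools.jar"/>

<target name="weave-instrumented-jar">
  <iajc outjar="build/hadoop-core-instrumented.jar"
        source="1.6" target="1.6">
    <!-- Aspect sources live in the Herriot project itself -->
    <sourceroots>
      <pathelement location="src/aop"/>
    </sourceroots>
    <!-- Weave into the binary jar pulled from the maven repository -->
    <inpath>
      <pathelement location="lib/hadoop-core-0.22.0.jar"/>
    </inpath>
    <classpath>
      <pathelement location="lib/aspectjrt.jar"/>
    </classpath>
  </iajc>
</target>
```

If this works for the Herriot aspects, the per-project system-test artifacts in Common, HDFS, and MapReduce would be unnecessary: each instrumented jar could be produced downstream from the normal release artifacts.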
> Large-scale Automated Test Framework
> ------------------------------------
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
> Issue Type: New Feature
> Components: test
> Affects Versions: 0.21.0
> Reporter: Arun C Murthy
> Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 6332-phase2.patch,
6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch,
HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch,
HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch,
HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated test framework. This jira
is meant to be a master jira to track relevant work.
> ----
> The proposal is a junit-based, large-scale test framework that would run against _real_
clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, large-scale hadoop
clusters, e.g. utilities to deploy, start & stop clusters, bring down tasktrackers,
datanodes, entire racks of both, etc.
> # Enhanced controllability and inspectability of the various components in the system,
e.g. daemons such as the namenode and jobtracker should expose their data structures for query/manipulation.
Tests would be much more relevant if we could, for example, query for specific states of the
jobtracker, scheduler, etc. Clearly these apis should _not_ be part of production clusters
- hence the proposal is to use aspectj to weave these new apis into debug deployments.
> ----
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children
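As an illustration of the weaving approach proposed in item 2 of the description, an inspection api could be expressed as an AspectJ inter-type declaration that is only woven into debug deployments. This is a hypothetical sketch: the class name, field name, and method name below are illustrative assumptions, not the real Hadoop internals.

```aspectj
// Hypothetical sketch of a test-only inspection api woven onto a daemon.
// 'JobTracker' and its 'jobs' field are illustrative placeholders.
// A privileged aspect may read the daemon's private state.
privileged aspect JobTrackerInspection {
  // Inter-type declaration: adds a query method to JobTracker.
  // Production jars are built without this aspect, so the method
  // never exists outside debug deployments.
  public int JobTracker.getRunningJobCount() {
    return this.jobs.size();  // assumes a jobs map inside JobTracker
  }
}
```

System tests in src/test/system could then call such woven methods to assert on daemon state, while production builds remain untouched.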
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.