We sampled 92 wetlands from four basins in the United States to quantify observer repeatability in rapid wetland condition assessment using the Delaware Rapid Assessment Protocol (DERAP). In the Inland Bays basin of Delaware, 58 wetland sites were sampled by multiple observers with varying levels of experience (novice to expert) following a thorough training workshop. In the Nanticoke (Delaware/Maryland), Cuyahoga (Ohio), and John Day (Oregon) basins, 34 wetlands were sampled by two expert teams of observers with minimal protocol training. The variance in observer-to-observer scoring at each site was used to calculate pooled standard deviations (SD(pool)), coefficients of variation, and signal-to-noise ratios for each survey. The results showed that the experience level of the observer had little impact on the repeatability of the final rapid assessment score. Training, however, had a large impact on observer-to-observer repeatability. The SD(pool) in the Inland Bays survey with training (2.2 points out of a 0-30 score) was about half that observed in the other three basins where observers had minimal training (SD(pool) = 4.2 points). Based on the results from the survey with training, we would expect that two sites assessed by different, trained observers whose DERAP scores differ by more than 4 points are highly likely to differ in ecological condition, whereas scores that differ by 2 or fewer points fall within the variability attributable to observer differences.
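The pooled standard deviation described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: each site contributes a list of scores from its observers, and the within-site variances are pooled weighted by degrees of freedom (n_i - 1). The site scores shown are hypothetical, not data from the study.

```python
import math

def pooled_sd(site_scores):
    """Pooled SD across sites, each scored by multiple observers.

    Pools within-site variance weighted by degrees of freedom:
    SD(pool) = sqrt( sum_i (n_i - 1) * s_i^2 / sum_i (n_i - 1) ).
    """
    ss_total = 0.0   # accumulated sum of squared deviations, sum_i (n_i - 1) * s_i^2
    df_total = 0     # accumulated degrees of freedom, sum_i (n_i - 1)
    for scores in site_scores:
        n = len(scores)
        mean = sum(scores) / n
        ss_total += sum((x - mean) ** 2 for x in scores)
        df_total += n - 1
    return math.sqrt(ss_total / df_total)

# Hypothetical DERAP scores (0-30 scale) from several observers at three sites
sites = [[20, 22, 21], [15, 18, 16, 17], [25, 24]]
print(round(pooled_sd(sites), 2))
```

A coefficient of variation would follow as SD(pool) divided by the grand mean score, and a signal-to-noise ratio as the between-site variance over this pooled within-site (observer) variance.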