Those tables are a bit confusing, but they refer to the same thing: the first one gives only typical values, and the second one gives only specified values (no typical column).

The background reason for having two tables is that in production, and more particularly in field service tests, it is difficult to get enough test-port power to drive the receiver into compression, so the power level at which compression is specified depends more on the power that the test system can generate. For Table 19, as I remember, for traceability reasons the uncertainty of the measurement is on the order of 0.1 dB, so we look for something on the order of 0.05 dB compression to include guard banding and environmental drift issues (based on temperature-chamber results on a few units).

Table 18 shows the typical performance of the receiver, which is measured in a different way. We pad down the test receiver on port 2, drive the source to maximum power, and record the compression of S21 (which looks like expansion, since S21 is B/R and R gets smaller as it compresses); we assign that to the R channel. Then we remove the pad, run the same test, look at compression again (across frequency), remove the R-channel effect, and assign the remainder to the test port. In our products, the R channel is almost always padded extra to ensure less compression than the test channel. The value should be below 0.1 dB at the power levels shown. But these power levels are not readily achievable in the field, so this test is not done as part of verification, and that is why the performance is typical rather than specified.
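The two-step extraction above can be sketched as follows. This is a minimal illustration, not actual test-system code: the function and variable names and the single-frequency dB readings are all hypothetical, and the sign convention follows the text (R-channel compression shows up as apparent S21 expansion, test-channel compression as apparent loss).

```python
# Compression-extraction sketch; all values are in dB.
def apparent_change_db(s21_low_power_db, s21_high_power_db):
    """Apparent gain change between a low-power (linear) sweep and a
    high-power sweep at one frequency; positive means expansion."""
    return s21_high_power_db - s21_low_power_db

# Step 1: with the pad on port 2, the test (B) receiver stays linear,
# so any apparent expansion of S21 = B/R is attributed to R compressing.
r_channel = apparent_change_db(s21_low_power_db=-20.00,
                               s21_high_power_db=-19.96)

# Step 2: pad removed, same test; the measured change now contains both
# the R-channel expansion (+) and the test-port compression (-).
total = apparent_change_db(s21_low_power_db=0.00,
                           s21_high_power_db=-0.03)

# Remove the R-channel effect to isolate the test-port compression.
test_port = total - r_channel

print(f"R channel: {r_channel:+.2f} dB, test port: {test_port:+.2f} dB")
# e.g. prints: R channel: +0.04 dB, test port: -0.07 dB
```

With these made-up numbers the test port compresses about 0.07 dB, which would sit below the 0.1 dB figure mentioned above; in a real system this would of course be evaluated across the full frequency range, not at a single point.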

In actual performance, the compression is very small until you reach the ADC data-read limit, at which point a receiver-overload message appears. In my own work, and as a general recommendation, I keep the input below about +10 dBm for most measurements to avoid any compression effects.