In the above case, the p-Value is not less than the significance level (0.05), so the null hypothesis that the mean is 10 cannot be rejected. Also note that the 95% confidence interval includes the value 10. So it is reasonable to say the mean of ‘x’ is 10, especially since ‘x’ is assumed to be normally distributed. If a normal distribution cannot be assumed, use the Wilcoxon signed rank test shown in the next section.
Note: Use the conf.level argument to adjust the confidence level.
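
For instance, a minimal sketch (here ‘x’ stands for the numeric vector being tested):

t.test(x, mu = 10, conf.level = 0.99) # test H0: mean = 10 with a 99% confidence interval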

3. Wilcoxon Signed Rank Test: Testing the mean of a sample when normal distribution is not assumed

Why / When is it used?

The Wilcoxon signed rank test can be an alternative to the t-Test when the data sample is not assumed to follow a normal distribution. It is a non-parametric method used to test if an estimate differs from its true value.
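
A minimal sketch, using a small hypothetical sample:

x <- c(9.1, 10.2, 8.7, 11.5, 9.6, 10.8, 12.1, 8.4, 9.5, 10.3) # hypothetical data
wilcox.test(x, mu = 10, conf.int = TRUE) # H0: the location of x is 10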

With a p-Value < 0.05, we can safely reject the null hypothesis that there is no difference in mean.

What if we want to do a 1-to-1 comparison of means for values of x and y?

# Use paired = TRUE for 1-to-1 comparison of observations.
t.test(x, y, paired = TRUE) # when observations are paired, use the 'paired' argument
wilcox.test(x, y, paired = TRUE) # both x and y are assumed to have similar shapes
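
As a sketch, with hypothetical paired observations (e.g. measurements before and after a treatment):

x <- c(12.1, 10.5, 11.8, 13.2, 10.9, 12.6) # hypothetical 'before' values
y <- c(11.4, 10.1, 11.2, 12.5, 10.3, 12.0) # hypothetical 'after' values
t.test(x, y, paired = TRUE) # paired t-Test on the within-pair differences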

When can I conclude that the means are different?

Conventionally, if the p-Value is less than the significance level (typically 0.05), reject the null hypothesis that both means are equal.

5. Kolmogorov-Smirnov Test: Test if two samples have the same distribution

The Kolmogorov-Smirnov test is used to check whether two samples follow the same distribution.

If the p-Value < 0.05 (significance level), we reject the null hypothesis that they are drawn from the same distribution. In other words, p < 0.05 implies that x and y come from different distributions.
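
A minimal sketch, with hypothetical samples drawn from two different distributions:

set.seed(100)
x <- rnorm(50) # hypothetical sample from a normal distribution
y <- runif(50) # hypothetical sample from a uniform distribution
ks.test(x, y)  # H0: x and y are drawn from the same distribution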

6. Fisher’s F-Test: Test if two samples have the same variance

Fisher’s F test can be used to compare the variances of two samples.

var.test(x, y) # Do x and y have the same variance?

Alternatively, fligner.test() and bartlett.test() can be used for the same purpose.
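
Both also accept a list of numeric vectors; as a sketch (x and y as above):

bartlett.test(list(x, y)) # H0: variances are equal; assumes normality
fligner.test(list(x, y))  # rank-based; robust to departures from normality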

7. Chi Squared Test: Test the independence of two variables in a contingency table

The Chi-squared test can be used to test the independence of two categorical variables. Example: you may want to figure out if big budget films become box-office hits. We have two categorical variables (Budget of film, Success Status), each with two levels (Big/Low budget and Hit/Flop), which forms a 2 x 2 matrix. A minimal sketch with made-up counts is shown below.
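
# Hypothetical 2 x 2 contingency table: budget vs. box-office outcome
movies <- matrix(c(60, 25, 15, 40), nrow = 2,
                 dimnames = list(Budget = c("Big", "Low"),
                                 Status = c("Hit", "Flop")))
chisq.test(movies) # H0: Budget and Status are independent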

There are two ways to tell if they are independent: (1) by looking at the p-Value, (2) from the Chi-squared value.

p-Value: If the p-Value is less than 0.05, we reject the null hypothesis that x and y are independent. So for the example output above (p-Value = 2.954e-07), we reject the null hypothesis and conclude that x and y are not independent.

Chi-sq Value: For a 2 x 2 contingency table, which has 1 degree of freedom (d.o.f), if the calculated Chi-Squared statistic is greater than 3.841 (the critical value), we reject the null hypothesis that the variables are independent. To find the critical value for larger contingency tables, use qchisq(0.95, df), where df = (r - 1) * (c - 1), with r and c being the number of rows and columns in the table.
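
To verify the critical value, a quick sketch:

qchisq(0.95, df = 1) # 3.841459, critical value for a 2 x 2 table at the 0.05 level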

8. Correlation: Test the linear relationship of two variables

The cor.test() function tests whether the correlation between two variables is significant.

cor.test(x, y) # where x and y are numeric vectors
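
A minimal sketch with hypothetical numeric vectors:

x <- c(1, 2, 3, 4, 5, 6, 7) # hypothetical predictor
y <- c(2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9) # hypothetical response
cor.test(x, y) # Pearson correlation by default; H0: true correlation is 0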