1. I don't think there's a difference between the slide and the scribe. In any case, the confidence is a bound on the probability that the algorithm returns a hypothesis with a small error.

2. The confidence is that the algorithm will return a hypothesis with a small error.

I think what would help you is the following: being weak-PAC learnable is a property of the hypothesis class. In particular, when we restrict ourselves to distributions that are nonzero only on the training sample, we get that there exists an algorithm which, with high confidence, returns a hypothesis with small weighted empirical error, for any set of weights. The weak learner used in AdaBoost, which simply chooses the hypothesis with the smallest weighted empirical error, is obviously such an algorithm. Therefore, there exists a hypothesis in the set whose error is <= 1/2 - gamma, for some gamma > 0. (If there weren't, we couldn't achieve this with confidence > 0.)
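To make the weak learner concrete, here is a minimal sketch (with hypothetical names and a toy 1-D threshold hypothesis set, not from the original discussion) of the step described above: given a finite hypothesis set and a weight distribution over the training sample, pick the hypothesis with the smallest weighted empirical error.

```python
def weighted_error(h, sample, weights):
    """Weighted empirical error of hypothesis h on the weighted sample."""
    return sum(w for (x, y), w in zip(sample, weights) if h(x) != y)

def best_hypothesis(hypotheses, sample, weights):
    """The weak learner: return the hypothesis minimizing weighted error."""
    return min(hypotheses, key=lambda h: weighted_error(h, sample, weights))

# Toy example: 1-D labeled points with uniform weights, and a small set
# of threshold classifiers h_t(x) = sign(x - t).
sample = [(0.1, -1), (0.4, -1), (0.6, 1), (0.9, 1)]
weights = [0.25, 0.25, 0.25, 0.25]
hypotheses = [lambda x, t=t: 1 if x > t else -1 for t in (0.0, 0.5, 1.0)]

h = best_hypothesis(hypotheses, sample, weights)
print(weighted_error(h, sample, weights))  # the threshold at 0.5 gets error 0.0
```

Weak learnability guarantees that whatever the weights are, some hypothesis in the class achieves weighted error at most 1/2 - gamma, so this exhaustive minimizer always returns one.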