Suppose that $\gcd(a,b) = d$, then $d = ax + by$ for some $x, y \in \mathbb{Z}$. I claim that $\gcd(ma, mb) = md$. To see this, assume it were false. Then there exists some $c$ with $0 < c < md$ such that $c = (ma)x' + (mb)y'$, but since $c = m(ax' + by')$ we see that this implies $0 < ax' + by' < d$, which contradicts the minimality of $d$. Thus, we have that $\gcd(ma, mb) = md = m\cdot\gcd(a,b)$.
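
As a quick sanity check (not part of the original answer, just an illustration using Python's built-in `math.gcd`), the claimed identity $\gcd(ma, mb) = m\cdot\gcd(a,b)$ holds on randomly sampled positive integers:

```python
# Hypothetical sanity check: gcd(m*a, m*b) == m * gcd(a, b) for positive m.
from math import gcd
from random import randint

for _ in range(1000):
    a, b = randint(1, 10**6), randint(1, 10**6)
    m = randint(1, 10**3)
    # The identity being proved: the factor m "comes out" of the gcd.
    assert gcd(m * a, m * b) == m * gcd(a, b)

print("gcd(m*a, m*b) == m * gcd(a, b) held for every sampled triple")
```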

Definition of gcd by linear combination:
Let $a$ and $b$ be two integers (not both zero); then $\gcd(a,b)$ is the smallest positive integer which can be written as a linear combination of $a$ and $b$ over $\mathbb{Z}$, that is, there exist integers $x$ and $y$ such that $\gcd(a,b) = ax + by$.
This is why the equality holds.
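
The coefficients $x$ and $y$ in that definition are not unique, but a pair can always be computed. Here is a sketch (my own illustration, using the standard extended Euclidean algorithm, not something from the answers above) that returns $(g, x, y)$ with $ax + by = g = \gcd(a,b)$:

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    # Iterative extended Euclidean algorithm: track remainders and the
    # coefficients that express each remainder as a combination of a and b.
    old_r, r = a, b          # remainders
    old_x, x = 1, 0          # coefficients of a
    old_y, y = 0, 1          # coefficients of b
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(12, 18)
print(g, x, y)               # 6 -1 1, since 12*(-1) + 18*1 == 6
assert 12 * x + 18 * y == g
```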

Thanks, that makes sense.

But is there any "intuitive" way to understand the reason why we can factor the "m" out of the min?

What I said is very intuitive. It is just shrouded in formal language. Basically, the reason why we can factor the $m$ out is that it doesn't contribute to the minimum. So, multiplying by a positive constant will not change the minimum of a set...except for multiplying it by the constant.

Think about it like this. (this is really informal, and kind of incorrect)

We can think about the one set as

$$\{\,ax + by : x, y \in \mathbb{Z},\ ax + by > 0\,\}.$$

Now, if I multiply by $m$ I get

$$\{\,(ma)x + (mb)y : x, y \in \mathbb{Z},\ (ma)x + (mb)y > 0\,\},$$

so that the minimum of the second set is clearly going to just be $m$ times the minimum of the first set.
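
Written out as a single chain of equalities (my own summary of the argument above, assuming $m > 0$ so that multiplying by $m$ preserves positivity and order):

$$
\begin{aligned}
m\cdot\gcd(a,b) &= m\cdot\min\{\,ax+by : x,y\in\mathbb{Z},\ ax+by>0\,\}\\
&= \min\{\,m(ax+by) : x,y\in\mathbb{Z},\ ax+by>0\,\}\\
&= \min\{\,(ma)x+(mb)y : x,y\in\mathbb{Z},\ (ma)x+(mb)y>0\,\}\\
&= \gcd(ma,mb).
\end{aligned}
$$

The second equality is exactly the step of factoring $m$ out of the min; the third just regroups the products and restates which elements are positive.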