This converges for every $x \in \mathbb{R}$, so $e = \sum_{k=0}^{\infty} \frac{1}{k!}$ and $e^{-1} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!}$. Arguing by contradiction, assume $ae^2 + be + c = 0$ for integers $a$, $b$ and $c$, not all zero. That is the same as $ae + b + ce^{-1} = 0$.
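To make this reduction explicit: $e \neq 0$, so the assumed relation may be divided through by $e$,
\[
0 = \frac{ae^2 + be + c}{e} = ae + b + ce^{-1}.
\]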

Fix an integer $n > |a| + |c|$; then $|a|$ and $|c|$ are less than $n$, and $k! \mid n!$ for every $k \leq n$.
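Indeed, for $k \leq n$ the quotient is a product of consecutive integers (an empty product, equal to $1$, when $k = n$):
\[
\frac{n!}{k!} = (k+1)(k+2)\cdots n \in \mathbb{Z}.
\]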
Consider

\begin{align*}
0 &= n!\left(ae + b + ce^{-1}\right) \\
  &= a\,n!\sum_{k=0}^{\infty}\frac{1}{k!} + b\,n! + c\,n!\sum_{k=0}^{\infty}\frac{(-1)^k}{k!} \\
  &= b\,n! + \sum_{k=0}^{n}\left(a + c(-1)^k\right)\frac{n!}{k!} + \sum_{k=n+1}^{\infty}\left(a + c(-1)^k\right)\frac{n!}{k!}.
\end{align*}
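The last step combines the two absolutely convergent series termwise and splits the result at $k = n$:
\[
a\,n!\sum_{k=0}^{\infty}\frac{1}{k!} + c\,n!\sum_{k=0}^{\infty}\frac{(-1)^k}{k!}
= \sum_{k=0}^{\infty}\left(a + c(-1)^k\right)\frac{n!}{k!}.
\]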

Since $k! \mid n!$ for $k \leq n$, the first two terms are integers, so the third term must be an integer as well. However,

\begin{align*}
\left|\sum_{k=n+1}^{\infty}\left(a + c(-1)^k\right)\frac{n!}{k!}\right|
&\leq \left(|a| + |c|\right)\sum_{k=n+1}^{\infty}\frac{n!}{k!} \\
&= \left(|a| + |c|\right)\sum_{k=n+1}^{\infty}\frac{1}{(n+1)(n+2)\cdots k} \\
&\leq \left(|a| + |c|\right)\sum_{k=n+1}^{\infty}(n+1)^{n-k} \\
&= \left(|a| + |c|\right)\sum_{t=1}^{\infty}(n+1)^{-t} \\
&= \left(|a| + |c|\right)\frac{(n+1)^{-1}}{1 - (n+1)^{-1}} \\
&= \frac{|a| + |c|}{n}
\end{align*}

is less than $1$ by our assumption that $n > |a| + |c|$. Since $0$ is the only integer of absolute value less than $1$, the third term must vanish; dividing by $n!$, this means that $\sum_{k=n+1}^{\infty}\left(a + c(-1)^k\right)\frac{1}{k!} = 0$ for every sufficiently large $n$, which is not the case because