There is much confusion about whether the Embedding layer in Keras is like
word2vec and how word2vec can be used together with Keras. I hope that
the simple example above has made clear that the Embedding class does
indeed map discrete labels (i.e. words) into a continuous vector
space. It should be just as clear that this embedding does not in any
way take the semantic similarity of the words into account: the vectors
are simply initialized at random, and any structure they end up with has
to be learned during training. Check the source code if you want to see
it even more clearly.
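To make this concrete, here is a minimal sketch (using the tf.keras API; the vocabulary size, dimensions, and word indices are arbitrary) of what the Embedding layer actually does: it maps integer labels to rows of a randomly initialized weight matrix, so "similar" words start out no closer together than any other pair.

```python
import numpy as np
import tensorflow as tf

# An Embedding layer for a toy vocabulary of 10 words, each mapped
# to a 4-dimensional vector.
embedding = tf.keras.layers.Embedding(input_dim=10, output_dim=4)

# Suppose index 2 stands for "cat" and index 3 for "kitten". The layer
# happily maps both to vectors ...
word_ids = np.array([[2, 3]])
vectors = embedding(word_ids)  # shape (1, 2, 4)
print(vectors.numpy())

# ... but those vectors are just randomly initialized weights (uniform
# by default), so they encode no semantic similarity whatsoever.
print(embedding.get_weights()[0])  # the full (10, 4) embedding matrix
```

As for using word2vec together with Keras, one common pattern is to seed the Embedding layer with pretrained vectors. The sketch below loads them via gensim; the file path is a placeholder and the three-word vocabulary is purely illustrative:

```python
import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

# Load pretrained word2vec vectors (placeholder path).
kv = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

# Toy vocabulary: row i of the matrix holds the vector of word i.
vocab = ["cat", "kitten", "dog"]
matrix = np.stack([kv[w] for w in vocab])

# Initialize the Embedding layer from the pretrained matrix;
# trainable=False keeps the vectors fixed, True would fine-tune them.
embedding = tf.keras.layers.Embedding(
    input_dim=len(vocab),
    output_dim=kv.vector_size,
    embeddings_initializer=tf.keras.initializers.Constant(matrix),
    trainable=False,
)
```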