To do this I used https://github.com/openshift/release from commit 00731fe9d6a3e970aa1dc727041de471744d28b8. I used the cluster/test-deploy Makefile/instructions for deployment. The following link is a diff of my vars-origin.yaml from the original https://gist.github.com/chancez/1c0f28eb05d8f4ab4e66e9c261e3329a.
Besides that, I've run Ansible a few times to change some auth settings (adding GitHub auth), but my auth settings weren't working so I reverted those changes. So the only other notable thing I can think of is re-running Ansible a few times to make changes, and then again to undo them.

If Huamin is correct (and it looks like he is), then whenever "mpath_member" shows up in the events, check the multipathd log as well. On the affected machine I can see this:
Feb 27 21:13:55 ci-chancez-chargeback-openshift-build-image-instance multipathd[288]: sda: spurious uevent, path already in pathvec
Feb 27 21:13:55 ci-chancez-chargeback-openshift-build-image-instance multipathd[288]: 0Google_PersistentDisk_persistent-disk-0: failed in domap for addition of new path sda
Feb 27 21:13:55 ci-chancez-chargeback-openshift-build-image-instance multipathd[288]: uevent trigger error
It would also mean this is not something we can fix in OpenShift (see Huamin's comment #4) -- multipathd has to be configured to ignore the GCE PD disks. Disabling multipathd altogether on machines where it's not needed should work too.
Also note: to reproduce this, multipathd must be installed and running on the system (which AFAIK is not the case on Atomic Host).
I will try to create a pod with several disks in GCE and check their WWIDs -- if there is a collision, we have the cause.

I was wrong: the workarounds I thought would work don't seem to help. mount complains about mpath_member... This is the value udev sets in the ID_FS_TYPE attribute on the multipath "legs", and mount refuses to mount those (since it is the dm device that should be mounted instead). There might be a udev rule causing this attribute to be set for the GCE PD disks.
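To confirm that it is the udev property (and not on-disk data) that mount is acting on, the attribute can be inspected directly. A quick check, assuming the affected device is /dev/sda (adjust the device name as needed):

# udevadm info --query=property --name=/dev/sda | grep ID_FS_TYPE
# lsblk -o NAME,FSTYPE /dev/sda

If ID_FS_TYPE=mpath_member shows up in the udev properties, that matches this theory.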

Where did this multipath.conf come from? If you create a default multipath.conf file by running
# mpathconf --enable
without an already existing multipath.conf file, it automatically sets
find_multipaths yes
in the defaults section. This makes multipath only claim devices when it sees that they have multiple paths, or if it has previously claimed them.
If you add that find_multipaths line to /etc/multipath.conf, and run
# multipath -w /dev/sdc (or whatever devname the Google persistent disk has)
That should fix your problem. The real issue here is that a multipath.conf file with neither find_multipaths nor a manual blacklist will just claim all SCSI devices. Whoever (or whatever) created that multipath.conf file needs to do one or the other. Like I said, the default multipath setup uses find_multipaths.
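For reference, a minimal /etc/multipath.conf along these lines might look like the following. This is a sketch only: the vendor/product strings are guesses based on the "Google_PersistentDisk" name in the log above, and should be verified against the actual device before use.

defaults {
    find_multipaths yes
}
blacklist {
    device {
        vendor "Google"
        product "PersistentDisk"
    }
}

Either the find_multipaths line or the blacklist section alone should be enough; having both is belt and suspenders.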

Faced this issue on Azure. It doesn't matter whether `find_multipaths yes` is set in the config or not; multipath always claims Azure disks as multipath devices. So I need to blacklist them or disable multipath at the system level.
Neither workaround is acceptable, since they will not survive a single Ansible run...
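As a stopgap, a blacklist entry along these lines might work; note the vendor/product strings are assumptions about what Azure virtual disks report, so verify them on the node first (e.g. with lsscsi):

blacklist {
    device {
        vendor "Msft"
        product "Virtual Disk"
    }
}

This still has the persistence problem above unless whatever config management owns the node lays this file down itself.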

I have a customer who is facing this issue.
multipathd is installed on the customer's node, and the OpenShift playbook always enables this service even though they have disabled it. Is this related to this issue? Or should I file another bug for the playbook?