Revert the main part of commit af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")

That commit introduced reading the pci device's msi message data to see if a pirq was previously configured for the device's msi/msix, and re-use that pirq. At the time, that was the correct behavior. However, a later change to Qemu caused it to call into the Xen hypervisor to unmap all pirqs for a pci device, when the pci device disables its MSI/MSIX vectors; specifically the Qemu commit c976437c7dba9c7444fb41df45468968aaa326ad ("qemu-xen: free all the pirqs for msi/msix when driver unload")

Once Qemu added this pirq unmapping, it was no longer correct for the kernel to re-use the pirq number cached in the pci device msi message data. All Qemu releases since 2.1.0 contain the patch that unmaps the pirqs when the pci device disables its MSI/MSIX vectors.

This bug is causing failures to initialize multiple NVMe controllers under Xen, because the NVMe driver sets up a single MSIX vector for each controller (concurrently), and then after using that to talk to the controller for some configuration data, it disables the single MSIX vector and re-configures all the MSIX vectors it needs. So the MSIX setup code tries to re-use the cached pirq from the first vector for each controller, but the hypervisor has already given away that pirq to another controller, and its initialization fails.