I just finished migrating an Azure VM to a new region. I did this by exporting the VM's disks to a storage account in the new region, creating new disks from those images, and then creating a new VM from those disks in the new region.

Previously, the VM was able to reach itself by using its own public IP address. Now it cannot do so using the new public IP address assigned to it in the new region.

Example: In the old region, the VM had a public IP of 192.0.0.1. In the new region, it has 192.0.100.1. In both regions it sits on a VNet and has the private VNet IP address 10.0.1.1.

Previously, the VM could access itself by connecting to 192.0.0.1, or by connecting to the public DNS name associated with its public IP. After migration, the DNS record was updated to point to the new IP address, and I waited out the TTL period (1 hour) to ensure the old IP had expired from caches. Now, however, the VM cannot reach itself either via the DNS name or via its new public IP 192.0.100.1 directly.

I tried adding a rule to the NSG that allows all traffic from 10.0.1.1 to 192.0.100.1, but it made no difference.
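For reference, the rule I added looked roughly like this in Azure CLI form (the resource group and NSG names are placeholders, not the real ones):

```shell
# Sketch of the NSG rule I tried; resource group and NSG names are placeholders.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyVmNsg \
  --name AllowSelfViaPublicIp \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes 10.0.1.1 \
  --destination-address-prefixes 192.0.100.1 \
  --destination-port-ranges '*'
```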

The use case: a Web API and its database run on the same server. The Web API connects to the database using the database's DNS name, which points to the server's public IP address.
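For illustration, the connection string has roughly this shape (the host, database, and user names are placeholders):

```
Server=sql.example.com;Database=AppDb;User Id=apiuser;Password=...;
```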

The workaround I have in place for now is to add the DNS name of the SQL Server to the HOSTS file and point it at 10.0.1.1. That works, but it doesn't explain the change in behavior.
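The HOSTS entry looks like this ("sql.example.com" is a placeholder for the real DNS name):

```
# C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux
10.0.1.1    sql.example.com
```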

Question: Why can the VM no longer reach itself via its public IP, when it could before the migration? Is there any change I can make to the network configuration or firewalls to make this work again?