Environment:
- Azure VM: DVG-PAW-01 (Windows 11, Entra-joined), NIC dvg-paw-01195, IP 10.2.0.4
- VNet: vnet-paw / snet-paw (10.2.0.0/24), Resource Group rg-paw-prod, East US
- Meraki vMX: vmx-std, NIC vmx-stdWanInterface, IP 10.1.0.4, VNet 10.1.0.0/24
- Hub-spoke topology: PAW VNet peered to the vMX hub VNet; the vMX terminates AutoVPN to on-premises sites on 192.168.x.0/24
Scenario: The Azure VM needs to reach on-premises subnets (192.168.x.0/24) through the Meraki vMX acting as an NVA.
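For context, the spoke subnet carries a UDR steering 192.168.0.0/16 to the vMX (inferred from the Next Hop result below; a system route would not return VirtualAppliance). A sketch of how it was presumably created; the route-table name rt-paw and route name to-onprem are placeholders, not confirmed resource names:

```shell
# Assumed UDR on snet-paw steering on-prem prefixes to the vMX NVA.
# rt-paw / to-onprem are hypothetical names for illustration.
az network route-table create \
  --resource-group rg-paw-prod \
  --name rt-paw \
  --location eastus

az network route-table route create \
  --resource-group rg-paw-prod \
  --route-table-name rt-paw \
  --name to-onprem \
  --address-prefix 192.168.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.1.0.4

az network vnet subnet update \
  --resource-group rg-paw-prod \
  --vnet-name vnet-paw \
  --name snet-paw \
  --route-table rt-paw
```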
Diagnostics completed:
Network Watcher Next Hop from 10.2.0.4 to 192.168.1.1: Result = VirtualAppliance, 10.1.0.4. Azure is routing traffic to the vMX correctly.
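For reproducibility, this is the equivalent Azure CLI invocation (a sketch; assumes Network Watcher is enabled in East US):

```shell
# Query the effective next hop for traffic from the PAW VM to an on-prem host.
az network watcher show-next-hop \
  --resource-group rg-paw-prod \
  --vm DVG-PAW-01 \
  --source-ip 10.2.0.4 \
  --dest-ip 192.168.1.1
# Observed result (from the run above): nextHopType = VirtualAppliance,
# nextHopIpAddress = 10.1.0.4
```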
IP Flow Verify outbound TCP 10.2.0.4:50000 to 192.168.1.1:3389: Result = Access allowed via NSG rule Allow-On-Prem-Clinics-Out in nsg-paw. No security filtering.
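The same check via CLI, for anyone who wants to rerun it (a sketch of the flow tested above):

```shell
# Verify NSG evaluation for the outbound RDP flow from the PAW VM.
az network watcher test-ip-flow \
  --resource-group rg-paw-prod \
  --vm DVG-PAW-01 \
  --direction Outbound \
  --protocol TCP \
  --local 10.2.0.4:50000 \
  --remote 192.168.1.1:3389
# Observed result (from the run above): access = Allow,
# matched rule = Allow-On-Prem-Clinics-Out in nsg-paw
```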
Effective routes on the vMX NIC (vmx-stdWanInterface): 10.2.0.0/24 shows VNet peering, so a return path to the spoke exists. 192.168.0.0/16 shows next hop None (the default Azure system route), which should be irrelevant here because on-prem traffic leaves the vMX encapsulated inside the AutoVPN tunnel, not as raw 192.168.x.x packets. No UDR is associated with the vMX subnet, and there is no asymmetric routing.
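The effective routes were pulled from the vMX NIC like this (a sketch; the vMX's resource group is shown as a placeholder since it may differ from rg-paw-prod):

```shell
# Dump the route table actually programmed on the vMX's WAN NIC.
# <vmx-resource-group> is a placeholder for the vMX's actual resource group.
az network nic show-effective-route-table \
  --resource-group <vmx-resource-group> \
  --name vmx-stdWanInterface \
  --output table
```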
Result: Despite Azure's data path delivering packets to the vMX NIC, traffic to 192.168.x.x is silently dropped: no ICMP unreachable, no RST, and a packet capture at the on-premises destination confirms the packets never arrive. The vMX dashboard shows the 10.2.0.0/24 VNet in its routing table, but the data plane does not forward it.
Question: All Azure diagnostics confirm packets are reaching the vMX. Is there any additional Azure-side diagnostic that could further isolate this, or is this definitively a vMX data-plane issue? Has anyone seen a Meraki vMX in Azure fail to forward traffic from a peered VNet through AutoVPN even though the route appears in the Meraki dashboard?