This fix is interesting too.
$ sysctl -w net.core.rmem_max=16777216
$ sysctl -w net.core.rmem_default=16777216
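Note that sysctl -w changes are lost on reboot. To make them persistent, the same keys can go into /etc/sysctl.conf or a drop-in file (the file name below is just an example):

```shell
# /etc/sysctl.d/90-udpcast.conf  (example path)
net.core.rmem_max=16777216
net.core.rmem_default=16777216
```

After adding the file, `sysctl --system` (or `sysctl -p <file>`) reloads the settings without a reboot.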
The rmem* changes helped when I was testing directly with udp-sender/udp-receiver and the received data was redirected to /dev/null. The successful test deployment on 1 vCPU VM hosts (with additional writing to disk) that I mentioned in previous posts turned out to have run in half-duplex mode, which I had forgotten to switch back to full-duplex. Full-duplex on the Xen VM host is still problematic: a high number of re-xmits and aborted multicasts.
The Xen servers' internal switches are difficult to debug and I'm not sure exactly where the packets are being dropped, but some packets are being lost on the Xen servers' internal switches too. A Xen VM booted in debug mode was still dropping RX packets when full-duplex multicast was used. I'll try to consult our Xen Server support to check the settings on the Xen servers. From the few posts I have seen, Xen's Open vSwitch has very small queues by default, which could be a problem for UDPcast.
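For what it's worth, on the receiver VMs the RX drop counters can help narrow down whether drops happen at the guest NIC or only on the virtual switch. A minimal check (eth0 is a placeholder for the guest's interface name):

```shell
# per-interface RX drops ("dropped" column in the RX block)
$ ip -s link show eth0

# kernel-wide UDP stats; "receive buffer errors" indicates
# the socket buffer (rmem) overflowing on the receiver itself
$ netstat -su
```

If "receive buffer errors" keeps climbing while the interface counters stay flat, the drops are on the receiving host and larger rmem* values should help; if the interface or switch counters climb instead, the loss is upstream.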
On the physical host, before changing rmem*, there were dropped packets on the Extreme switch on the port connected to the PC, so my conclusion was that the problem was (mainly) on the receivers' side.
In my environment, I think I'll stick to half-duplex mode, which forces the receivers to sync more often so their buffers are not maxed out. I did not find any other udp-sender parameter that would control how many packets are sent without confirmation from the receivers.
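Half-duplex can be forced explicitly with udp-sender's --half-duplex flag rather than relying on autodetection. A minimal sketch, assuming the interface name, image path and bitrate are placeholders to adjust for your setup:

```shell
# sender: half-duplex, capped bitrate (eth0, image.img and 500m are examples)
$ udp-sender --interface eth0 --half-duplex --max-bitrate 500m --file image.img

# each receiver writes the stream to its target disk
$ udp-receiver --interface eth0 --file /dev/sda
```

Capping --max-bitrate on top of half-duplex is another way to keep the sender from outrunning slow receivers.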