Greetings all, ( Hi Monty and Jim )
I need to make communication with a multicast IP address use a specific network card. My routing table appears as follows:
Destination     Gateway          Genmask          Flags  MSS  Window  irtt  Iface
216.198.99.0    0.0.0.0          255.255.255.192  U      0    0       0     eth1
10.0.0.0        0.0.0.0          255.255.255.0    U      0    0       0     eth0
169.254.0.0     0.0.0.0          255.255.0.0      U      0    0       0     eth1
0.0.0.0         216.198.99.254   0.0.0.0          UG     0    0       0     eth1
I have 2 network cards: one with the address 216.198.99.247 and a second NIC with the address 10.0.0.2. The NIC with the 10. address is what I want to use for failover communication with a second server. The failover communication software uses a multicast address of 229.255.0.1. The default gateway is attached to the 216. network. Ping tests give the following results: pinging from the 10. NIC, I only receive replies from the other 10. address; from the 216. NIC, I receive replies from everything, but I cannot establish the failover on this NIC.
What I want to do is tell my computer (Red Hat Enterprise Linux V.3) to force communication with 229.255.0.1 across the 10.0.0.2 network card. If I can accomplish this, I believe my failover will work.
Thank you very much in advance for your assistance.
Note: I am not a Linux professional so please be very clear about suggestions.
Thanks Again,
Kelly McLaughlin Tech Support Mokan Dial Inc. Louisburg, Ks. 913-837-2219 Ext. 16
On Nov 13, 2007 2:14 PM, Kelly McLaughlin kelmac@mokancomm.net wrote:
Greetings all, ( Hi Monty and Jim )
I need to make communication with a multicast IP address use a specific network card.
My routing table appears as follows:
Destination     Gateway          Genmask          Flags  MSS  Window  irtt  Iface
216.198.99.0    0.0.0.0          255.255.255.192  U      0    0       0     eth1
10.0.0.0        0.0.0.0          255.255.255.0    U      0    0       0     eth0
169.254.0.0     0.0.0.0          255.255.0.0      U      0    0       0     eth1
0.0.0.0         216.198.99.254   0.0.0.0          UG     0    0       0     eth1
I have 2 network cards: one with the address 216.198.99.247 and a second NIC with the address 10.0.0.2. The NIC with the 10. address is what I want to use for failover communication with a second server. The failover communication software uses a multicast address of 229.255.0.1.
If that software can be reconfigured to use the actual unicast address of the other server, it would be better. Unicast IP is much cleaner. Your Cisco is clearly not in on the multicast games you're trying to play, and by your description you don't have a router between the servers on the 10. network, just a crossover cable.
That means you have to teach each of the two machines about the multicast addresses you want them to use.
If you want packets addressed to 229.255.0.1 to be sent via eth0, then try using this command:
/sbin/route add -host 229.255.0.1 dev eth0
Put it in /etc/rc.local so it will always be run at boot. This should allow other multicast packets to stay on eth1. What this doesn't say is whether the other server will be listening on that IP. For that you may need to use an alias IP. I've never set up multicast, so I don't know for sure what all that entails.
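In case it helps, here is roughly what that looks like typed in as root, assuming eth0 really is the card holding 10.0.0.2 (I haven't tested this on your box, so treat it as a sketch):

# Add the host route right away (run as root):
/sbin/route add -host 229.255.0.1 dev eth0

# Make it survive a reboot by appending the same command to rc.local:
echo '/sbin/route add -host 229.255.0.1 dev eth0' >> /etc/rc.local

# Double-check that the route is present:
/sbin/route -n | grep 229.255.0.1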
The default gateway is attached to the 216. network. Ping tests give the following results: pinging from the 10. NIC, I only receive replies from the other 10. address; from the 216. NIC, I receive replies from everything, but I cannot establish the failover on this NIC.
What I want to do is tell my computer (Red Hat Enterprise Linux V.3) to force communication with 229.255.0.1 across the 10.0.0.2 network card. If I can accomplish this, I believe my failover will work.
Hi,
On Tue, Nov 13, 2007 at 11:40:22PM -0600, Monty J. Harder wrote:
If you want packets addressed to 229.255.0.1 to be sent via eth0, then try using this command:
/sbin/route add -host 229.255.0.1 dev eth0
I'm not a multicast expert (or even multicast user) BUT my guess is that the proper command should be:
/sbin/route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
This is not to say that Monty's command wouldn't work but that this is the more general command for the 224.0.0.0/4 (224.0.0.0 - 239.255.255.255) multicast network.
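Whichever of the two commands you go with, it's worth looking at the routing table afterwards to confirm where the kernel will send that traffic (a plain sanity check, nothing multicast-specific):

# Add the general multicast route (run as root):
/sbin/route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0

# Confirm the 224.0.0.0 entry shows up against eth0:
netstat -rn | grep 224.0.0.0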
Put it in /etc/rc.local so it will always be run at boot.
Monty has spent too much time in the SCO world. On a RedHat system you should use /etc/rc.d/rc.local.
This should allow other multicast packets to stay on eth1.
I don't know why you would want other multicast packets to go to "The Internet" rather than your local 10. network but your 169.254.0.0 traffic, which is defined as "LINKLOCAL", is still not on your "local" network. But since you don't use it, who cares?
The other thing you need to verify is that the fourth line of the output of "ifconfig eth0" includes the word "MULTICAST". I'm sure it does; my CentOS 3.0 server has it by default.
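A quick way to check for the flag, and (my assumption, untested on RHEL 3) to switch it on if it happens to be missing:

# Look for the MULTICAST flag on eth0:
/sbin/ifconfig eth0 | grep -i MULTICAST

# If it isn't there, this should enable it (run as root):
/sbin/ifconfig eth0 multicast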
On Nov 14, 2007 3:06 AM, Uncle Jim jim@jimani.com wrote:
/sbin/route add -host 229.255.0.1 dev eth0
I'm not a multicast expert (or even multicast user) BUT my guess is that the proper command should be:
/sbin/route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
This is not to say that Monty's command wouldn't work but that this is the more general command for the 224.0.0.0/4 (224.0.0.0 - 239.255.255.255) multicast network.
But we explicitly do NOT want all multicast traffic to go to the other server. We ONLY want this one multicast IP to go there. Presumably the other server is set up the other way, and in doing this we've defined a multicast IP they can use to talk to each other, but there's nothing else on that subnet, so no other multicast will work there.
Put it in /etc/rc.local so it will always be run at boot.
Monty has spent too much time in the SCO world. On a RedHat system you should use /etc/rc.d/rc.local.
On my RHEL4 box, /etc/rc.local is a link to rc.d/rc.local, so it's all the same, and 5 characters less to (mis)type.
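You can see it for yourself:

# Shows the symlink, e.g. /etc/rc.local -> rc.d/rc.local
ls -l /etc/rc.local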
This should allow other multicast packets to stay on eth1.
I don't know why you would want other multicast packets to go to "The Internet" rather than your local 10. network
Because something else may need to use *routable* multicast (not 224/8, but any other multicast IPs), and sending ALL multicast traffic out eth0 where there is no routing to those other machines will definitely break that. Remember that the 10. network has exactly two servers and no routers. I'm trying to move the specific routing to the "non-default" interface, because I just don't know what else needs to use eth1 with multicast.
but your 169.254.0.0 traffic, which is defined as "LINKLOCAL", is still not on your "local" network. But since you don't use it, who cares?
169.254 is a separate UNICAST network that allows DHCP clients to function if they can't obtain a lease.
Your more general route should make any 224/8 multicast on eth1 fail. That might be a problem.
Uncle Jim wrote:
I'm not a multicast expert (or even multicast user) BUT my guess is that the proper command should be:
/sbin/route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
This is not to say that Monty's command wouldn't work but that this is the more general command for the 224.0.0.0/4 (224.0.0.0 - 239.255.255.255) multicast network.
You could also make your life easier by using the (common) iproute2 tools. With them you can use CIDR notation (a.k.a. "slash notation") for IP network addresses. I honestly haven't been using route and ifconfig for a couple of years now; the newer tools are just as common and more feature-rich.
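For example, the rough iproute2 equivalents of the route commands earlier in the thread would look something like this (following the earlier posts, where eth0 is the card with 10.0.0.2):

# Route just the one multicast group out eth0:
/sbin/ip route add 229.255.0.1/32 dev eth0

# ...or cover the whole multicast range in CIDR notation:
/sbin/ip route add 224.0.0.0/4 dev eth0

# Inspect routes and interface addresses:
/sbin/ip route show
/sbin/ip addr show eth0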
~Bradley