Anyone successfully used link aggregation to combine two ADSL lines for greater outbound bandwidth?
Because it's asymmetrical bandwidth, I'm fine for inbound speed, but I need to regularly move large files to FTP, and it's becoming an issue.
So, if I'm moving data from a single user (me) to a single point (an FTP site), does link aggregation double my bandwidth? I understand how it would work in multi-users-to-the-net environments, but can't quite get how (or if) it would work in this scenario.
I really don't want to pay the $1025/mo for a T1 line to the house, but it's my only other option.
Any help much appreciated.
Thanks, Greg
For a number of technical reasons, it is not possible to use two residential internet connections to "accelerate" the path between the same two computers. At least when using TCP, a persistent connection has to originate from a single IP address. Each of the DSL modems will be provisioned with a unique IP address, so any one connection can only originate from one of the two modems at a time, and can only use one modem's worth of bandwidth. With the cooperation of the ISP you could channel-bond, but that cooperation will cost you the same $1025/mo and require a special router at both ends of the connection.

The best you can hope for is to "balance" logical IP traffic over the two outbound interfaces, such that a multitude of LAN clients can split their bandwidth half and half between the two internet connections. That is itself very difficult to do and would require some iptables/route-table kung fu the likes of which I have never seen in any pre-existing firewall distro, and it still wouldn't help with the problem you describe.

There are some "network appliances" that offer "dual WAN", but read the fine print: they might just be for failover. You'd want load balancing.
Kclug mailing list Kclug@kclug.org http://kclug.org/mailman/listinfo/kclug
On Wednesday 07 May 2008 00:21:50 Billy Crook wrote:
For a number of technical reasons, it is not possible to use two residential internet connections to "accelerate" the path between the same two computers.
Gee, we used to use multiple modems on the same PC all the time, there are features in a number of connection management programs to allow this. Why it wouldn't work with DSL is not apparent.
Billy Crook wrote:
| For a number of technical reasons, it is not possible to use two
| residential internet connections to "accelerate" the path between the
| same two computers. At least when using TCP, a persistent connection
| has to originate from an IP address. Each of the dsl modems will be
| provisioning unique IP addresses. Thus one connection can only
| originate from one of the two dsl modems at a time and only use one
| modem's worth of bandwidth.
Um...sort-of.
The above is true for the *RETURN* traffic, which will be routed based on IP address.
The problem, however, is with *OUTBOUND* traffic, due to the asymmetrical nature of the DSL connection. It would be perfectly acceptable to send half of the outbound packets via ISP #1, as usual, and the other half of the packets via ISP #2. The trick is, the source IP needs to be the same for *ALL* of the packets.
As long as at least one of your ISPs isn't doing egress filtering for spoofed source IPs, your traffic will get through, and you'll have twice the upload bandwidth (assuming the system you're talking to on the other end can easily handle the out-of-order packet arrival that will likely result).
Setting this up will require some crafty playing with iptables (assuming you're masquerading your internal machines) and the kernel routing tables, but it should be quite possible. Check into the 'ip' command (iproute2) and the LARTC HOWTO to get started.
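For the curious, a minimal policy-routing sketch along those lines might look like the following. Every interface name, address, and gateway here is invented for illustration; a real setup will differ, and this balances per connection, not per packet, so a single FTP upload still rides one link.

```shell
# Hypothetical two-uplink setup: DSL line 1 on eth1, line 2 on eth2.
# Per-link routing tables so reply traffic leaves the interface whose
# address it carries:
ip route add default via 192.0.2.1 dev eth1 table 101
ip route add default via 198.51.100.1 dev eth2 table 102
ip rule add from 192.0.2.10 table 101
ip rule add from 198.51.100.10 table 102

# Multipath default route: new outbound connections are spread across
# both links (requires CONFIG_IP_ROUTE_MULTIPATH in the kernel):
ip route add default scope global \
    nexthop via 192.0.2.1 dev eth1 weight 1 \
    nexthop via 198.51.100.1 dev eth2 weight 1

# Masquerade LAN clients out whichever interface the route selects:
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE
```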
You may also find a pre-canned solution like shorewall easier to implement. Even if you don't go this way, the documentation might be helpful:
http://www.shorewall.net/MultiISP.html
NOTE: If you're willing to forgo the download bandwidth of your additional link, I believe you can use shorewall to combine the outbound bandwidth of multiple links by properly specifying the masquerade addresses used (i.e., use the same public IP for all outbound traffic).
I assume there's some reason you can't just get a cable modem, or alternate DSL plan with more upload traffic? That would generally be the easiest solution, and likely cheaper than paying for two separate links.
-- Charles Steinkuehler charles@steinkuehler.net
Because it's asymmetrical bandwidth, I'm fine for inbound speed, but I need to regularly move large files to FTP, and it's becoming an issue.
Do they really have to be generated on the home computer, or can you generate them on your website, which presumably has better speeds both ways?
Thanks,
Ron Geoffrion 913.488.7664
Is it possible to get two ADSL lines to work together to increase up-stream bandwidth w/o the help of your ISP(s)? Yes. Is it easy? No. Will it actually double your bandwidth? No. Are you better off finding another solution? Yes.
There are a variety of ways you could make this work, but none of them will do quite what you want. You could wrap your regular connections in a virtual interface that load-balances packets over the two physical interfaces, with a similar setup on the server that receives the packets. This will only get you a slight improvement in bandwidth, because of the overhead of wrapping the other connections, and your latency will get worse.
If you aren't tied to FTP in particular, you could hack together a BitTorrent setup where your home machine seeds your files on both public IPs, and your server then downloads different fragments of the file over both connections simultaneously. If this is an option, you could set up a private BitTorrent tracker on your server and bond together dozens of ADSL circuits if you wanted to. You'll piss your ISP off if they figure out what you're up to, though. You would probably want to write a shell script to build the .torrent, push it to the server (via rsync or some such), and then have the server initiate the BitTorrent download.
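As a rough illustration, a script like this could drive the whole cycle. The tool names (mktorrent, transmission-remote) and the hostnames are assumptions, not a tested recipe; substitute whatever tracker and client you actually run.

```shell
# Sketch: package the deliverable, build a .torrent against a private
# tracker, push only the small .torrent file to the server, and have
# the server start leeching while we seed locally on both DSL links.
FILE="pages-$(date +%Y%m%d).tar"
TRACKER="http://myserver.example.com:6969/announce"   # private tracker (placeholder)

mktorrent -a "$TRACKER" -o "$FILE.torrent" "$FILE"    # build torrent metadata
rsync "$FILE.torrent" myserver.example.com:incoming/  # tiny upload, not the payload
ssh myserver.example.com \
    "transmission-remote -a incoming/$FILE.torrent"   # server begins the download
# Meanwhile, seed $FILE locally from both public IPs so the server
# pulls different pieces down each connection simultaneously.
```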
Bradley, thanks (and thank you to everyone else who was helpful as well!) for this.
BitTorrent is an innovative solution -- I like it! However, I'm hampered by:
* Corporate clients and their corporate IT departments who will be sniffy about using it.
* Unsophisticated users who are comfy with FTP and don't want to learn new tools.
* Many different clients who need the bandwidth boost, so a point-to-point solution isn't a good fit.
Soooo... looks like I may be the only guy in Plattsburg with a T-1. (That'd be my guess, anyway -- it's a pretty tiny town.)
(Background for the folks who asked: We're doing some outsourcing work for newspapers, and the typical work deliverable is a bundle of EPS pages that weighs in at 70-300 MB. All of our client papers have extremely high bandwidth and can download the pages quickly... it's getting them uploaded to FTP on deadline that's taking more time than we'd like.)
Greg
What you really need to do is negotiate with your ISP and see if you can get a more symmetrical rate, or some sort of higher upload rate. Ultimately it's something they can tweak arbitrarily, but in practice they may be idiots and unable to come up with a solution for you. It all depends who you're dealing with.
Greg Brooks wrote:
- Unsophisticated users who are comfy with FTP and don't want to learn new tools.
If the "unsophisticated users" are the ones doing the downloading, then they can still use FTP to download. Your problem was with uploading. Once the file is on the server, it can be downloaded by any mechanism you set up. If your "unsophisticated users" are in your local shop, then you could set up a local FTP server that will automatically push files up to your off-site server when they are dropped in a specific location. If your clients are uploading the files to you, then I don't get why it's your concern how their connection is set up.
- Many different clients who need the bandwidth boost, so a point-to-point solution isn't a good fit.
Do you have many clients uploading? Or just many clients downloading, and you doing all of the uploading? If you are the only one uploading, then it isn't much of a problem.
There are many creative solutions you could set up. The question is if the time and effort to set it up and make it transparent to your users is worth saving a few hundred bucks a month.
On Wed, May 07, 2008 at 05:38:38PM -0500, Greg Brooks wrote:
(Background for the folks who asked: We're doing some outsourcing work for newspapers, and the typical work deliverable is a bundle of EPS pages that weighs in at 70-300 mb. All of our client papers have extremely high bandwidth and can download the pages quickly... it's getting them uploaded to FTP on deadline that's taking more time than we'd like.)
FTP being based on UDP is connectionless so no worries about TCP connections being trashed by confusion over IP addresses.
The BONDING section of the article pointed to (by Jonathan Hutchins, I think) specifically mentions that it works with DSL.
UDP is inherently unreliable so the receiving software must deal with packets arriving from multiple routers, intermediate hosts, with differing amounts of delay so FTP software reassembles out of order packets coming from different addresses constantly.
So bonding should help, but I would like to ask whether these pages could be generated as "one page" files, so that individual pages could be mirrored to a server with a high-speed connection while the next one is being assembled. That way you can combine all the pages, if needed, for the final push at deadline across the high-speed link, and spread the use of your limited upload speed over time that would otherwise go unused.
-- Ed Allen
It was not originally clear that Greg controlled both ends of the FTP connection. If that is the case, then you could use each DSL modem connection separately, and use some sort of VPN or tunneling software to create two separate (virtual) "trunks" between the two servers, then bond those trunks together like regular interfaces. OpenVPN would be a good cross-platform candidate for the VPN. Whichever bonding mode you choose, make sure it can handle link failure gracefully.
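A sketch of what that might look like on the client side, assuming tap-style tunnels and the Linux bonding driver. All keys, hosts, ports, and addresses are placeholders, and the server end needs the mirror image of this configuration.

```shell
# One OpenVPN point-to-point tunnel per DSL line, each bound to that
# line's local address so its packets leave the matching modem:
openvpn --daemon --dev tap0 --local 192.0.2.10 \
        --remote server.example.com 1194 --secret line1.key
openvpn --daemon --dev tap1 --local 198.51.100.10 \
        --remote server.example.com 1195 --secret line2.key

# Stripe packets round-robin across the two tunnels; bond0 (not the
# taps) carries the IP address used for the actual file transfers:
modprobe bonding mode=balance-rr miimon=100
ifconfig bond0 10.9.8.1 netmask 255.255.255.252 up
ifenslave bond0 tap0 tap1
```

Note that balance-rr will reorder packets across links of unequal speed, so TCP throughput over the bond tends to land well short of the sum of the two lines.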
On Fri, May 9, 2008 at 10:15 AM, Ed Allen era@jimani.com wrote:
FTP being based on UDP is connectionless so no worries about TCP connections being trashed by confusion over IP addresses.
Cough Cough http://www.faqs.org/rfcs/rfc959.html Cough...
The BONDING section of the article pointed to, by Johnathan Hutchins I think, specifically mentions that it works with DSL.
UDP is inherently unreliable so the receiving software must deal with packets arriving from multiple routers, intermediate hosts, with differing amounts of delay so FTP software reassembles out of order packets coming from different addresses constantly.
A TCP stack does that too. http://www.faqs.org/rfcs/rfc793.html (Page 4, Section: Reliability)
On Fri, May 09, 2008 at 10:49:30AM -0500, Billy Crook wrote:
On Fri, May 9, 2008 at 10:15 AM, Ed Allen era@jimani.com wrote:
FTP being based on UDP is connectionless so no worries about TCP connections being trashed by confusion over IP addresses.
Cough Cough http://www.faqs.org/rfcs/rfc959.html Cough...
Hardly worth wading through 150K because you wanted to point out that FTP has been switched to TCP.
Any other reason you think I should spend time reading that ?
A TCP stack does that too. http://www.faqs.org/rfcs/rfc793.html (Page 4, Section: Reliability)
TCP acks each packet. UDP is "send it and hope it gets there".
So TCP needs a channel back to the sending box for those ACKs while UDP just takes what it gets and the software above needs to check for completeness.
Even so I do not see a problem with sending alternate packets through two IP addresses for outgoing as the ACKs will choose one to reply through ignoring the other as a "more expensive" route.
Back for more eh?
On Fri, May 9, 2008 at 12:27 PM, Ed Allen era@jimani.com wrote:
Hardly worth wading through 150K because you wanted to point out that FTP has been switched to TCP.
"Has been switched?" Who flipped the switch? It's been a good 20 or more years since FTP ran over anything but TCP. You can grep that 150k and see no mention of UDP. Some browsers even afford one the newfangled luxury of a hotkey, say [Ctrl]+[F], where a user could have the computer search the current document for a string. However, a machine that's not yet been 'upgraded' to support TCP might not have that fancy stuff, so maybe there's a sequence of flashing lights and rotor switches that does the same.
Any other reason you think I should spend time reading that ?
Yes. You needed a refresher, or were thinking of TFTP, which is not FTP but its own protocol, typically used for link-local file transfers for hardware configuration or network booting. There is also an obscure protocol, UFTP, which is for broadcast and multicast.
so FTP software reassembles out of order packets coming from different addresses constantly.
A TCP stack does that too. http://www.faqs.org/rfcs/rfc793.html (Page 4, Section: Reliability)
TCP acks each packet. UDP is "send it and hope it gets there".
Uh huh. The TCP stack "reassembles out of order packets", like you said FTP did. FTP itself does not, because its packets always arrive in the right order thanks to the TCP stack.
Even so I do not see a problem with sending alternate packets through two IP addresses for outgoing as the ACKs will choose one to reply through ignoring the other as a "more expensive" route.
You will if you close Outlook and try it. The ACKs won't choose anything: the destination machine will send an ACK back to the IP that sent whatever it's acknowledging.
On Fri, May 9, 2008 at 10:15 AM, Ed Allen era@jimani.com wrote:
FTP being based on UDP is connectionless so no worries about TCP connections being trashed by confusion over IP addresses.
FTP is a TCP protocol.
On Fri, May 09, 2008 at 11:34:32AM -0500, Dave Hull wrote:
On Fri, May 9, 2008 at 10:15 AM, Ed Allen era@jimani.com wrote:
FTP being based on UDP is connectionless so no worries about TCP connections being trashed by confusion over IP addresses.
FTP is a TCP protocol.
It appears to have changed while I was using the superior SFTP. It *used to be* entirely UDP.
I suppose that since every machine now has TCP it was decided to be a better file transfer protocol.
Thanks for the concise correction.
-- Ed Allen
The history I see shows that FTP ran over NCP, and then changed to TCP in 1980. I didn't see anything about UDP in there (I could have missed it). Regardless, it has been running exclusively over TCP for 20+ years.
On Friday 09 May 2008, Bradley Hook wrote:
TFTP is UDP. But that's not FTP any more than SFTP is. :)
On Fri, May 9, 2008 at 8:17 PM, Luke -Jr luke@dashjr.org wrote:
That makes more sense than my guess that he was confusing FTP with NFS, where "switching from UDP to TCP" is the standard first step in debugging NFS problems when there is more than one router between client and server, and both variants are seen and supported.
I'd like to point out that SFTP isn't really "superior". It's much, much more secure, but it's not "superior" to FTP. It has commands that are similar to FTP, but it's not completely command-compatible, and there are a LOT of things that FTP can do that SFTP can not.
I recently had to move some Expect scripts over from FTP to SFTP, and it was a royal pain in the ass. In general I'm not very impressed with SFTP. It seems to be someone's weekend project, not a fully functional subsystem. If you need an example, just look at the output and try to suppress or redirect it in a useful manner, then try the same with FTP. SFTP could really use some community effort in improving it, especially considering that a lot of businesses (like Sprint) are using it heavily in-house.
Jeffrey.
Chiming in at this point only because the technical part of it has gotten way, way beyond what I can understand....
Do I have this right:
* I can't do what I want to do with DSL bonding/aggregation/etc to boost outbound bandwidth.
* There's a T1 (feh!) in my future.
Greg (irritated at his expense-filled future...)
Actually, what you need to do is talk with the local ISP's and see if you can arrange a better DSL link. DSL with T1 uplink speeds should cost you no more than 1/4 of a T1. Different technologies.
It's possible your local ISP is too stupid to take the money and run, but it's worth a try!
Alas, there is no -- zero -- competition out here (Plattsburg, MO, 35 miles north of KC) for high-speed access. There's no cable internet access, and the only DSL is through the telco, Centurytel.
The best DSL option they've got is 768k uplink, and I'm not close enough to the central office to get it.
Greg
Greg Brooks wrote:
| Chiming in at this point only because the technical part of it has
| gotten way, way beyond what I can understand....
|
| Do I have this right:
|
| * I can't do what I want to do with DSL bonding/aggregation/etc to
| boost outbound bandwidth.
|
| * There's a T1 (feh!) in my future.
Mostly. It's pretty well determined that setting up two DSL links to increase the performance of a single TCP session (your FTP file upload) is going to be more pain than it's worth (particularly if the discussion has gone way, way over your head, as you indicate!). :)
One option that hasn't been discussed a lot, however, is splitting your upload into lots of little pieces and re-assembling it on the far end.
There exist a lot of tools to do this, from stuff to let you post images to newsgroups to things like bit-torrent and other peer-peer protocols.
If you just want to move the file, and don't have to use FTP, you could use bit-torrent, seed it to your local machine (with your 'primary' IP address), then launch 2 client downloads: one on another local machine (with your 'secondary' IP address from the new DSL line) and one on the remote webserver. Bit-torrent will automagically split the file into tiny pieces and fill all available outbound bandwidth. This should easily scale to however many DSL links you wish to add, and could be fired off by a script or something to make it easy.
You also want to make sure your TCP stack is tweaked for high-latency connections. This isn't such a big deal with linux (although it can still help), but windows systems need some registry changes to work well with high-speed, high-latency internet connections:
http://www.speedguide.net/downloads.php
Tweaking your TCP settings can be critical to getting the maximum performance out of a *SINGLE* TCP session...with a bunch of TCP sessions things tend to balance themselves out. If you're already maxing out your uplink bandwidth (verified by monitoring transfer rates), you don't need to worry about this.
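On Linux, the rough equivalents of those registry tweaks are sysctl settings for the TCP window limits. The values below are illustrative only, not recommendations for any particular link:

```shell
# Raise the maximum socket buffer sizes, then the TCP-specific
# min/default/max (in bytes), so a single session can keep a
# high-latency pipe full:
sysctl -w net.core.wmem_max=4194304
sysctl -w net.core.rmem_max=4194304
sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"   # send:    min default max
sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"   # receive: min default max
```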
-- Charles Steinkuehler charles@steinkuehler.net
On Sat, May 10, 2008 at 6:54 AM, Charles Steinkuehler charles@steinkuehler.net wrote:
One option that hasn't been discussed a lot, however, is splitting your upload into lots of little pieces and re-assembling it on the far end.
You can also look into using the normal Unix split(1) utility and breaking the file into chunks, sending half up one link, half up the other, and then rejoining them on the far end using cat(1). It's ghetto, but it will work.
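The split/cat round trip is easy to sanity-check locally; here a scratch file stands in for a page bundle, and the uploads are only noted in comments:

```shell
# Demonstrate the split/cat approach end to end. A 300 KB scratch file
# stands in for a real page bundle; in actual use each piece would go
# up a different DSL link before reassembly on the server.
dd if=/dev/urandom of=bigfile bs=1k count=300 2>/dev/null
split -b 100k bigfile part.        # produces part.aa, part.ab, part.ac
# (scp part.aa over link 1, part.ab over link 2, ... then, far end:)
cat part.* > rejoined
cmp bigfile rejoined && echo "files match"
```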
On Sat, May 10, 2008 at 6:54 AM, Charles Steinkuehler charles@steinkuehler.net wrote:
| * I can't do what I want to do with DSL bonding/aggregation/etc to
| boost outbound bandwidth.
Presuming that two DSL links to the same premises would both be using the same route, compiling bonding into the kernel and creating a bonded interface in mode 0 should work.
This quote is from the kernel's bonding.txt:
balance-rr or 0
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
On Fri, May 09, 2008 at 06:41:56PM -0500, Jeffrey Watts wrote:
Why not just use SCP? That's what I encouraged the Sprint groups I worked with to use. SFTP is designed for interactive use, and SCP is designed for scripted use. Otherwise they both run over the SSH protocol, and indeed are part of the OpenSSH package.
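For example (host and paths are placeholders), the scripted case is a one-liner with scp, while sftp needs batch mode:

```shell
# scp: natural fit for scripts -- one command, one exit status:
scp pages.tar user@server.example.com:/var/ftp/incoming/

# sftp can be scripted via batch mode (-b), but it is clumsier and its
# output is harder to tame:
sftp -b - user@server.example.com <<'EOF'
cd /var/ftp/incoming
put pages.tar
EOF
```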
Thanks, -- Hal
On Fri, May 9, 2008 at 11:10 PM, Hal Duston hald@kc.rr.com wrote:
Why not just use SCP? That's what I encouraged the Sprint groups I worked with to use. SFTP is designed for interactive use, and SCP is designed for scripted use. Otherwise they both run over the SSH protocol, and indeed are part of the OpenSSH package.
I agree with you completely, and believe it or not, there are commercial applications that support SFTP yet do not support SCP. The particular application we're interacting with simply does not support SCP; it supports SFTP reasonably well. How that works internally I have no idea, but SCP is a no-go. This is proprietary SSH software running on a server owned by a service vendor that we have no control over.
On Sat, May 10, 2008 at 4:58 AM, Christofer C. Bell christofer.c.bell@gmail.com wrote:
In our case, Chris, our app uses a wrapper around common FTP commands and uses FTP to execute them. Some of those commands port easily to SFTP (put, get, etc.), but others, like 'dir', simply do not work. We could port them to use SSH instead, but since the current script uses FTP's 'dir' simply to save the listing to a local text file, we'd have to rewrite the script to do those functions differently.
Obviously that's not terribly hard to do, but it's stupid that we'd have to do it. The fact is that SFTP doesn't support many common functions of the FTP RFC standard, and for others it uses incompatible command syntax. This obviously isn't the "drop-in" replacement it should be. After all, SSH/SCP has many features enabling it to be a drop-in replacement for rsh/rcp; there's no reason it couldn't do the same for FTP.
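As a concrete sketch of that incompatibility (host and filenames are hypothetical): the classic ftp client can save a remote listing to a local file inside the session, while sftp can only print one, so a script has to capture stdout instead:

```shell
# Classic ftp: 'dir' can write the remote listing to a local file
# from within the interactive session:
#   ftp> dir . listing.txt

# sftp has no equivalent command; 'ls -l' only prints, so a script
# must capture the whole session's stdout and filter it afterwards
# ('-b -' reads batch commands from stdin):
echo "ls -l" | sftp -b - user@host.example.com > listing.txt
```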
Again, I'm not saying it's "bad". However, it's not "superior". It's clearly not a "great" implementation of FTP. It's a very SECURE implementation, and it's capable of doing the majority of FTP tasks. But for the rest, and for non-interactive use, it's deficient. I'm sure in time it will be better, but it's not "superior".
Jeffrey.
Because scp can't do directory listings, and programs that are set up to use FTP to do an entire interactive session can't use that single session to get the files they need.
If you're working with a system that's currently using FTP, you often can't just drop in scp. Time is money, and rewriting your application to use ssh/scp to do what you were previously using FTP for doesn't make much sense.
The reality is that SFTP is poorly done. The command incompatibility doesn't make sense in many ways, there are many features that could be easily implemented that would make it much more capable, and its terrible handling of output makes scripted use a real pain. I'm not saying it's useless, I'm saying that it's surprising that it's so mediocre considering how great the rest of the OpenSSH project is.
Perhaps Theo has something against SFTP? It would make sense, as he's such a huge douche that I could see him getting in the way.
Jeffrey.
On Tue, May 6, 2008 at 9:37 PM, Greg Brooks gregb@west-third.com wrote:
Anyone successfully used link aggregation to combine two ADSL lines for greater outbound bandwidth?
Wouldn't it be feasible to use a router machine between the home PC and the DSL lines? Use two outbound NICs and set the router to load balance between them for the third, internal NIC. I don't really see a technical reason why it's as complicated as people are making it out to be. By the very nature of TCP/IP, packets can take multiple paths. You just want to force half one way and half the other. That's not a big deal to do, IMHO.
Jon.
If you were a large corporation or university, you might have an ISP set up to handle multipath routing for end-users. Multipath can route to/from the same IP address over multiple paths. But that isn't the scenario here, because it isn't the same IP address on each connection.
In the setup described here, with 2 standard ADSL connections, you will have two different IP addresses. Many ISPs are turning on source address verification, which means you can't spoof the source address on one line to match that of the other. Keep in mind it only takes ONE hop in the entire route that has source address verification turned on to break your spoofing setup (this isn't as much of an issue if you are using the same ISP for both circuits, because they will both have all the same hops).
Even with separate IP addresses, you can still load balance between them. However, you can't load balance a single TCP session over two IP addresses (well, you can, but a lot of software will tear down the connection if you try it). You can encapsulate your TCP connections into a UDP VPN that is load balanced over the two circuits, but as I already described you will end up getting only minimal gains due to the latency and overhead of wrapping the packets. If he were uploading lots of small, individual files, he could be simultaneously uploading different files on each connection. However, he seems to be dealing with large individual files, so load balancing over two public IP addresses won't help in this case.
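The per-flow balancing described above can be sketched with iproute2 on a Linux router (the gateway addresses and interface names here are placeholders, not a working config):

```shell
# Two uplinks, one default route with two nexthops. Each flow hashes
# onto one nexthop, so many LAN clients split across both lines --
# but any single TCP session still gets only one line's upstream.
ip route replace default scope global \
    nexthop via 192.0.2.1   dev eth1 weight 1 \
    nexthop via 203.0.113.1 dev eth2 weight 1
```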
Maybe in a sane world. I, however, have Time Warner Cable as an ISP. Three of my IPs are in the 69.76.160.0/20 subnet; one is in the 65.30.26.0/24 subnet. Both (obviously) have different gateways, and different routes at least until they leave my ISP's network.
Interestingly this article just came up on Linux.com:
http://www.linux.com/feature/133849
Note the section on "bonding".
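For the curious, the "bonding" there is the Linux bonding driver, which aggregates NICs on the same link segment. A minimal 2008-era sketch (interface names and address are hypothetical) looks something like:

```shell
# Load the bonding driver in round-robin mode with link monitoring,
# bring up the bond interface, and enslave two physical NICs.
# Note this needs a cooperating device on the far end of both links --
# exactly what two independent DSL modems can't provide.
modprobe bonding mode=balance-rr miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```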
(Yes, I'm aware, this probably wouldn't work with two DSL lines.)
On Thursday 08 May 2008, Billy Crook wrote:
Since all 4 IPs share bandwidth, this exact circumstance is entirely unrelated to the problem at hand. However, since cable modems are all on a giant switch, you could easily get multiple modems (with their own paid-for connection, of course) and have the second one spoof the IPs given out on the first one.