Why UDP file transfers are up to 100x faster than TCP
Transferring large files over the internet can take ages. It gets even worse if your recipient is halfway across the globe. While many organizations transfer files through popular Transmission Control Protocol (TCP)-based methods like File Transfer Protocol (FTP), Secure File Transfer Protocol (SFTP) and Hypertext Transfer Protocol (HTTP), you can actually achieve shorter transfer times with the User Datagram Protocol (UDP). So, just how much faster can UDP be?
Comparison tests we conducted pitting FTP against a UDP-based protocol showed that the latter can be 100x faster. If this disparity blows you away, you might be interested in what we're about to share with you. In this post, we'll explain why UDP is faster than TCP, why TCP-based protocols are still more commonly used for data transfers and how you can achieve better results with a hybrid of the two.
If you prefer a more in-depth discussion on this topic, read our whitepaper: “How to Boost File Transfer Speeds 100x Without Increasing Your Bandwidth”. Otherwise, forge ahead!
Sample case studies for high-speed UDP file transfers
Many organizations seek fast file transfers. Consider the following sample case studies:
Media production company large video file transfer
A global media production company frequently transfers terabytes of large video files between editing and post-production teams in Los Angeles, London and Tokyo. Using TCP-based protocols like FTP led to significant delays, impacting project timelines.
By switching to a hybrid approach incorporating UDP, the company reduced its transfer times by more than 80%. This led to timely content delivery and more efficient collaboration among the company's geographically dispersed teams.
You can experience similar results. To eliminate delays while keeping data intact, JSCAPE by Redwood offers AFTP, a TCP-UDP hybrid architected to enable fast and reliable file transfers. Now, you can see AFTP in action when you get a free trial. Space is limited, so request your free trial below:
Research institution large data set transfer
A research institution needed to transfer terabytes of data generated from a particle accelerator experiment to a remote supercomputing center for analysis. Since they were on a tight schedule, the researchers wanted a solution that could reduce the transfer time significantly. By switching from a purely TCP-based protocol to one that incorporated UDP, they were able to reduce the transfer time from days to hours.
Why organizations keep using TCP to transfer files
Despite TCP’s clear disadvantage when it comes to speed, most popular file transfer tools still rely on TCP-based protocols like FTP, SFTP and HTTP. TCP’s popularity as a file transfer method is largely due to its reliability. TCP provides mechanisms that ensure individual packets reach their destination intact and in the right order.
To establish correct ordering, the protocol assigns sequence numbers to each TCP packet. Applications that transfer files via TCP then look up these sequence numbers in each TCP packet’s header to determine the correct order.
Although TCP packets can certainly get lost along the way, the protocol allows sending parties to detect packet loss and retransmit lost packets. So, how does this mechanism work? When you send a TCP packet to another party, the receiving party must respond with an acknowledgement (ACK).
By requiring an acknowledgement, TCP enables the sending host to determine whether the packet it sent arrived at its destination and whether it needs to re-transmit the packet. If no ACK is received, it means the transmitted packet might have been lost along the way, thereby requiring a re-transmission.
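This ACK-and-retransmit loop can be sketched in a few lines. The following is a toy stop-and-wait model, not real TCP (a real sender pipelines many packets per window); the channel, loss rate and chunk names are all invented for illustration:

```python
import random

rng = random.Random(7)
LOSS_RATE = 0.2   # chance a packet (or its ACK) goes missing


def unreliable_send(seq, payload, inbox):
    """Toy channel: drops traffic with probability LOSS_RATE; on success
    the receiver stores the packet and an ACK comes back."""
    if rng.random() < LOSS_RATE:
        return False              # no ACK received
    inbox[seq] = payload          # a retransmitted duplicate just overwrites
    return True                   # ACK received


def send_file(chunks, max_tries=10):
    """Stop-and-wait sketch of TCP's ACK/retransmit loop: each numbered
    chunk is resent until acknowledged, so nothing is lost or reordered."""
    inbox = {}
    for seq, chunk in enumerate(chunks):
        for _ in range(max_tries):
            if unreliable_send(seq, chunk, inbox):
                break             # ACK arrived; move to the next chunk
        else:
            raise TimeoutError(f"chunk {seq} lost {max_tries} times")
    # Sequence numbers let the receiver reassemble in the correct order
    return [inbox[seq] for seq in sorted(inbox)]


chunks = [f"chunk-{i}".encode() for i in range(8)]
received = send_file(chunks)
assert received == chunks    # delivered intact and in order
```

Notice the cost of this reliability: the sender keeps looping on every lost chunk, and each retry burns a full round trip.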
Thus, when you send a file through TCP, you can be sure your recipient will receive the file exactly the way you sent it. This reliability component is crucial when you’re sending business documents, spreadsheets, health records, financial records and other important files.
In addition to its reliability mechanisms, TCP also employs congestion control and flow control algorithms that are supposed to prevent network bottlenecks and flow-related issues. While these mechanisms and algorithms do work as intended, they ironically also cause delays in networks that suffer from high latency.
What makes TCP data transfers slow?
High latency refers to an undesirable network quality characterized by delays caused by properties of the network medium, internal processes in intermediary network devices and the distance between two communicating hosts. While you can sometimes address issues with the network medium or with the devices along the path, there's nothing you can do about the distance between two hosts.
For instance, if you're sending a file from, say, Tokyo to New York, you can't do anything to reduce the distance between those two cities. Or if you're sending files between a satellite in space and a ground station, you can't do anything to reduce the distance between those two points. Since the speed at which a signal can travel across any medium has an upper limit, longer distances will always take longer to traverse.
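You can put a hard floor under that delay with back-of-the-envelope arithmetic. The figures below are assumptions for illustration: a roughly 10,900 km great-circle path between Tokyo and New York, and light traveling through optical fiber at about two-thirds of its vacuum speed:

```python
# Minimum propagation delay between Tokyo and New York (illustrative figures)
distance_km = 10_900          # approximate great-circle distance
fiber_speed_km_s = 200_000    # light in fiber travels at roughly 2/3 c

one_way_ms = distance_km / fiber_speed_km_s * 1000
rtt_ms = 2 * one_way_ms
print(f"one-way: {one_way_ms:.1f} ms, minimum round trip: {rtt_ms:.1f} ms")
```

No amount of extra bandwidth lowers this floor: a round trip between those two cities can never take much less than about 110 ms, and real routes (which are longer than the great circle and pass through many devices) take more.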
Thus, hosts separated by long distances will always suffer from high latency, which, as stated earlier, translates to delays. Even with a high-bandwidth internet connection, your throughput will still suffer from the effects of high latency. Although latency exists regardless of whether you’re using TCP or UDP, the effect of latency is aggravated by properties only found in TCP.
For instance, not only do transmitted TCP packets have to traverse a longer distance in a wide area network (WAN) than in a local area network (LAN), but their corresponding ACK packets have to traverse that longer distance as well.
Since a TCP-based sender has to wait for certain ACK packets to arrive before sending out additional packets, longer distances can result in longer waiting times and, consequently, delay succeeding transmissions. But that’s not the only problem.
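The cost of waiting for ACKs can be quantified: a sender can have at most one window of unacknowledged data in flight, so throughput is capped at the window size divided by the round-trip time. The window size and RTT values below are illustrative assumptions, not measurements:

```python
# Throughput ceiling = window size / round-trip time.
# The same window that saturates a LAN starves a long-haul link.
window_bytes = 64 * 1024      # classic 64 KB window (no window scaling)

for label, rtt_s in [("LAN", 0.001), ("cross-country", 0.070), ("Tokyo-NY", 0.180)]:
    mbps = window_bytes * 8 / rtt_s / 1e6
    print(f"{label:>13}: RTT {rtt_s * 1000:>5.0f} ms -> max {mbps:8.1f} Mbit/s")
```

With these numbers, the identical sender goes from over 500 Mbit/s on a LAN to under 3 Mbit/s across the Pacific, purely because of latency. (TCP window scaling raises the ceiling, but the window-over-RTT limit itself never goes away.)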
As part of its congestion control mechanism, TCP transmissions always start slow. That is, at the outset, a TCP sender limits the rate at which it sends out data to avoid overwhelming the network. This rate is gradually increased based on the value of a variable known as the congestion window (cwnd).
Every time an ACK is received, that value and, in turn, the rate of transmission increase. So, if ACK packets are delayed, the rate increase is also adversely affected. None of these speed-impacting behaviors are present in a UDP file transfer.
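The slow-start ramp can be sketched with a simplified model in which the congestion window doubles every round trip up to a threshold, then grows by one segment per round. The initial window, threshold and segment count below are illustrative assumptions:

```python
def rtts_to_send(total_segments, initial_cwnd=10, ssthresh=64):
    """Rounds of RTT needed to send `total_segments` under a simplified
    slow start: cwnd doubles each round up to ssthresh, then grows by 1."""
    cwnd, sent, rounds = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd
        rounds += 1
        cwnd = min(cwnd * 2, ssthresh) if cwnd < ssthresh else cwnd + 1
    return rounds


# The transfer needs the same number of rounds everywhere, so total
# ramp-up time scales directly with the round-trip latency:
rounds = rtts_to_send(10_000)
for label, rtt_ms in [("LAN", 1), ("Tokyo-NY", 180)]:
    print(f"{label}: ~{rounds * rtt_ms} ms spent on round trips")
```

The point of the sketch: the number of round trips is fixed by the algorithm, so stretching each round trip from 1 ms to 180 ms stretches the whole ramp-up by the same factor.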
What is a UDP file transfer?
As the name suggests, a UDP-based file transfer is conducted over the UDP protocol. UDP is a connectionless protocol, meaning it doesn't require a connection to be established before UDP packets can be sent out. In contrast, TCP does require a connection, established through a three-way handshake, before TCP packets can be sent out. UDP has no inherent concept of ACKs at all.
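Python's socket API makes the difference concrete: a UDP sender simply fires datagrams at an address, with no handshake and no ACKs. A minimal local sketch (loopback address and a throwaway OS-assigned port, for illustration only):

```python
import socket

# Receiver: bind a datagram socket; the OS picks a free port
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2)            # don't block forever if the datagram is lost
addr = receiver.getsockname()

# Sender: no connect(), no handshake, no acknowledgement expected
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"chunk-1", addr)

data, _ = receiver.recvfrom(1024)
print(data)                       # loopback delivery is reliable in practice
sender.close()
receiver.close()
```

Over a real network, that `sendto` call may silently fail: nothing tells the sender whether the datagram arrived, which is exactly why it's fast and exactly why it's risky.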
Why UDP data transfers are faster
Unlike TCP, UDP doesn’t have any provisions for reliability, congestion control and flow control. For instance, a sending UDP host doesn’t wait for ACKs before sending additional data. Moreover, the host doesn’t pay attention to network conditions or the receiver’s capacity to receive data when sending out packets. As such, the throughput of UDP packets isn’t as impacted by high latency and packet loss as TCP packets are.
Of course, while the absence of reliability, congestion control and flow control capabilities results in faster throughput, it also makes you susceptible to incomplete and error-plagued data transfers. If you’re sending a business document, you wouldn’t want to lose any piece of information. Unfortunately, UDP can’t guarantee that won’t happen.
Get the best of both worlds with Accelerated File Transfer Protocol (AFTP)
The UDP component of JSCAPE's AFTP maintains fast throughput regardless of network conditions, while its TCP component handles tasks such as user authentication, file management and coordination of the file transfers. These combined qualities make it well suited to large file transfers over high-latency networks.
AFTP is a key feature of JSCAPE MFT by Redwood, a managed file transfer solution that supports a wide range of file transfer protocols, including FTP, SFTP, HTTP, AS2, OFTP and many others. JSCAPE MFT is equipped with an array of security and automation features, plus an API, making it capable of supporting any file transfer workflow.
Would you like to witness AFTP in action? Request a quick demo now.
Related Links
How to Boost File Transfer Speeds 100x Without Increasing Your Bandwidth