“I can’t believe I ate the whole thing!”

I’ve always heard stories about restaurants that throw challenges at their patrons, claiming, “If you eat the whole thing, you don’t have to pay for it.”

I’ve most recently heard about the “World Famous” 72-ounce (about 2 kilograms) steak dinner that one can get for free (if eaten in 1 hour).  This feast is offered by the Big Texan Steak Ranch in Amarillo, Texas.

To quote their web site, “Many have tried.  Many have failed.”

Just as a 72-ounce steak presents a significant challenge to the gastrointestinal system of the average human being, so too do large files present a challenge when one attempts to transfer them via legacy file transfer systems.

One of the most pervasive file transfer methods in use is good old FTP.  FTP, which stands for “file transfer protocol,” has its roots in specifications that go back as far as early 1971.  While some may argue that the FTP protocol itself has nothing inherently in it that prohibits the transfer of very large files, these large files often present a real challenge when transferred between FTP clients and servers.  Just search the Internet for “FTP 4GB”.  The problems frequently lie instead with design limitations in the software implementing the FTP protocol (often old clients or servers, written long ago), or with the platforms on which that software is running.
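
To see why that search turns up so many complaints, here is a rough Python sketch of one common culprit: older implementations that track file sizes in an unsigned 32-bit field.  The numbers are illustrative only, but they show how the counter wraps around at 4 GiB and the transfer suddenly looks truncated.

    # Illustrative only: many legacy FTP clients/servers stored sizes in 32-bit fields.
    file_size = 5 * 1024**3            # a 5 GiB file
    reported = file_size % 2**32       # what a wrapped 32-bit counter would report

    print(f"Actual size:   {file_size:,} bytes")
    print(f"32-bit report: {reported:,} bytes")  # about 1 GiB, so the file looks truncated
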
The SFTP protocol, an extension to the Secure Shell (SSH) protocol, provides more reliability while also adding security.  It does this in part by means of a message authentication code (MAC) that is computed for each SSH packet.  This provides data integrity, which helps ensure that the file is transferred with complete accuracy and isn’t corrupted (or intentionally altered) in transit.
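
For readers who want to try SFTP from a script, here is a minimal sketch using the third-party paramiko library.  The host name, credentials, and paths are placeholders for illustration, not a recommendation of any particular product.

    import paramiko

    # Placeholder host and credentials for illustration only.
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="user", password="secret")

    sftp = paramiko.SFTPClient.from_transport(transport)
    # Every SSH packet carrying this data is integrity-checked with a MAC.
    sftp.put("bigfile.bin", "/incoming/bigfile.bin")
    sftp.close()
    transport.close()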

As managed file transfer solutions have evolved, more proprietary protocols have been developed in the interest of driving additional reliability into large file transfers.  Some of these protocols implement key reliability capabilities:

  • Notification that a file transfer has been accurately completed
  • Detailed feedback on errors and their causes
  • Checkpoint control packets (sketched below)
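
A checkpoint control packet is, at heart, a small record that sender and receiver agree on.  Here is a rough Python sketch with hypothetical field names (real proprietary protocols define their own wire formats); it only illustrates the kind of information such a packet carries.

    from dataclasses import dataclass

    @dataclass
    class CheckpointPacket:
        transfer_id: str   # which transfer this checkpoint belongs to
        byte_offset: int   # how much of the file both sides agree has arrived
        checksum: str      # integrity check over the data up to that offset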

Building on these capabilities of the proprietary protocol, file transfer recovery features have been designed that really drive reliability into the file transfer process.  These include:

  • Automatic Retries – If the file transfer client knows that an error is one it can recover from, it can reattempt the file transfer after a pre-defined waiting period.
  • Checkpoint/Restart – Checkpoint control packets tell the sender and receiver how much of the file has been transferred so far.  In the event of a failed transfer, the file transfer can resume at the last checkpoint rather than restart at the beginning of the file (see the sketch after this list).
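
The two features work together.  Here is a rough Python sketch, assuming a hypothetical send_chunk callback supplied by the underlying protocol, of how a client might combine automatic retries with checkpoint/restart.

    import time

    CHECKPOINT_SIZE = 8 * 1024 * 1024   # record a checkpoint every 8 MB
    MAX_RETRIES = 5
    RETRY_WAIT = 30                     # seconds to wait before reattempting

    def send_with_recovery(send_chunk, file_path, checkpoint=0):
        """Send a file, resuming from the last checkpoint after recoverable errors."""
        for attempt in range(MAX_RETRIES):
            try:
                with open(file_path, "rb") as f:
                    f.seek(checkpoint)                 # restart at the last checkpoint
                    while True:
                        chunk = f.read(CHECKPOINT_SIZE)
                        if not chunk:
                            return True                # whole file sent
                        send_chunk(chunk)
                        checkpoint += len(chunk)       # receiver has acknowledged this much
            except OSError:
                time.sleep(RETRY_WAIT)                 # recoverable error: wait, then retry
        return False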

In addition to the reliability benefits gained through proprietary protocols, some managed file transfer solutions implement the concept of a work queue in which file transfers are lined up, waiting for their chance to execute.  Work queue recovery is the process by which those jobs are retained in the event of a system crash and loaded back into the queue upon system restart.
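
At its simplest, work queue recovery just means the queue lives somewhere more durable than memory.  A toy Python sketch, assuming a hypothetical work_queue.json journal file (real products typically use a database or transaction log), looks like this.

    import json, os

    QUEUE_FILE = "work_queue.json"   # hypothetical on-disk journal of pending transfers

    def save_queue(jobs):
        # Persist pending jobs so they survive a crash or restart.
        with open(QUEUE_FILE, "w") as f:
            json.dump(jobs, f)

    def load_queue():
        # On restart, reload whatever was still waiting when the system went down.
        if os.path.exists(QUEUE_FILE):
            with open(QUEUE_FILE) as f:
                return json.load(f)
        return []

    pending = load_queue()   # recovered jobs go back into the queue before new work is accepted
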
Maybe you only push out a 4GB file about as often as you would eat a 72-ounce steak.  In other cases, you may be experiencing reliability issues with much smaller files.  Regardless, the added reliability of a managed file transfer solution that supports both open and proprietary protocols will make a big difference when the file has to get there, complete and unaltered.
