In the last post we looked at key use cases for server-to-server file transfer, where existing solutions tend to fall down, and the evolving requirements that support IT’s need to both maintain a reliable data infrastructure and respond quickly to new business initiatives.
In this post, we’ll look briefly at the modern technology solutions that make it possible to meet those evolving requirements.
Underlying the solutions that support reliable and automated point-to-point transfers between servers is a core set of capabilities that are missing in older file transfer technologies. These are:
- Success Notification: The ability to know definitively that a file has successfully completed a transfer, and when.
- Failure alerts with details: Notification when a problem interrupts a transfer, including when it happened and what the root cause is likely to be.
- Declarative Automation: A configuration-driven method (versus complex scripting) for defining and executing file transfers and the associated steps of initiating a transfer, error recovery, and post-processing of the files after they’re transferred.
- Post-processing Actions: The ability to identify commands, scripts, and batches that need to be run from the context of the target system (or the source) following a successful transfer.
- Event-driven initiation: The ability to monitor directories and initiate pre-processing activities and file transfers on the basis of a file showing up in those directories.
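To make the "declarative automation" idea concrete, here is a minimal sketch of a configuration-driven transfer definition interpreted by a generic runner, rather than a hand-written transfer script. All field names (`source`, `target`, `on_failure`, `post_process`) are illustrative, not any vendor's actual schema.

```python
# Hypothetical declarative transfer definition: the "what" of the transfer
# (endpoints, recovery policy, post-processing) is data, not script logic.
TRANSFER_DEF = {
    "name": "nightly-orders",
    "source": {"host": "app01", "path": "/data/out/orders.csv"},
    "target": {"host": "wh01", "path": "/data/in/orders.csv"},
    "on_failure": {"retry_count": 3, "retry_wait_seconds": 300},
    "post_process": ["load_orders.sh"],
}

def validate(defn: dict) -> bool:
    """Check that a transfer definition names its required sections."""
    required = {"name", "source", "target"}
    missing = required - defn.keys()
    if missing:
        raise ValueError(f"transfer definition missing: {sorted(missing)}")
    return True
```

A generic engine reads a definition like this and handles initiation, error recovery, and post-processing itself; the same definition can be copied and edited as a template for the next transfer.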
Some of these capabilities are often achieved by enhanced (and yes, proprietary) protocols that perform extra steps at the beginning and end of a file transfer. These allow the transfer endpoints to negotiate the elements of the transfer up front (valid security credentials, target storage definition requirements, post-processing actions that will be run at the end of the transfer, etc.) and perform a handshake at the end of the transfer (Sender, “I sent XXXX bytes.” – Receiver, “Great because I received XXXX bytes.” or Receiver, “Uh-oh, I only received XXXW bytes, better try again.”).
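The end-of-transfer handshake can be sketched in a few lines. This is a simplified illustration, not any actual protocol: the post describes a byte-count comparison, and a digest check is added here purely to show how an integrity verification step fits in.

```python
import hashlib

def sender_summary(payload: bytes) -> dict:
    """What the sender reports at the end of the transfer:
    total bytes sent, plus a digest of the content."""
    return {
        "bytes": len(payload),
        "sha256": hashlib.sha256(payload).hexdigest(),
    }

def receiver_verify(received: bytes, summary: dict) -> bool:
    """Receiver-side handshake: confirm that what arrived matches
    the sender's report. A mismatch means the transfer is retried."""
    return (
        len(received) == summary["bytes"]
        and hashlib.sha256(received).hexdigest() == summary["sha256"]
    )
```

If `receiver_verify` returns `False` (the "Uh-oh, I only received XXXW bytes" case), the endpoints know the transfer must be reattempted rather than silently accepting a truncated file.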
These capabilities are the building blocks that combine to drive real value into mission-critical server-to-server file transfer scenarios. Advanced MFT solutions (like Attachmate FileXpress) hit the key areas:
- Native platform support – Windows, UNIX, Linux, IBM i, IBM z/OS
- Error alerting – Send an email or an SNMP trap as soon as a problem occurs
- Error details – Information on the source and location of the error and whether it’s recoverable or not (e.g., insufficient disk space is recoverable; failed user authentication is not)
- Automatic retries – Have the MFT software automatically reattempt a recoverable failed transfer after a prescribed waiting period.
- Recoverable work queues – Don’t lose file transfers in a system crash. After system recovery, automatically rebuild the list of file transfers that are queued up to execute.
- Initiation – The MFT software will start the file transfer process as soon as a user, script, or application drops a file into a monitored directory.
- After the Transfer – The MFT software on the endpoints will kick off the next step of the process after the transfer is finished. No scripting or kludgy flag-file techniques needed to determine that the transfer is finished.
- Reusability – Configuration-driven file transfer definitions that can be used as templates for future file transfers.
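The automatic-retry behavior above can be sketched simply: reattempt a recoverable failed transfer after a prescribed waiting period, and give up (and alert) only when retries are exhausted. The names here are illustrative, not a real MFT API.

```python
import time

class RecoverableError(Exception):
    """A failure worth retrying, e.g. a transient network drop."""

def transfer_with_retries(do_transfer, max_retries=3, wait_seconds=60):
    """Run do_transfer(), reattempting after a waiting period on
    recoverable failures. Unrecoverable errors (any other exception)
    propagate immediately, as would failed authentication."""
    for attempt in range(max_retries + 1):
        try:
            return do_transfer()
        except RecoverableError:
            if attempt == max_retries:
                raise  # retries exhausted: surface the failure for alerting
            time.sleep(wait_seconds)
```

Distinguishing recoverable from unrecoverable failures up front is what makes this safe: retrying a bad password accomplishes nothing, while retrying after a full disk is cleared often succeeds.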
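Event-driven initiation can likewise be sketched as a directory watcher. A real MFT agent would typically use OS file-system events rather than polling; this simplified pass just reports files that have appeared since the last check.

```python
import os

def poll_for_new_files(directory: str, seen: set) -> list:
    """One polling pass of a monitored-directory watcher: return files
    that have appeared since the previous pass, updating `seen` in place.
    Each returned file would trigger a transfer (and any pre-processing)."""
    current = set(os.listdir(directory))
    new_files = sorted(current - seen)
    seen.update(current)
    return new_files
```

In practice a watcher must also confirm that a file is fully written before initiating the transfer, e.g. by waiting for its size to stop changing; that check is omitted here for brevity.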
Putting it all together, with MFT solutions, server-to-server file transfers become more dependable, and automation becomes simpler and quicker to implement. As such, MFT is helping IT organizations maintain a reliable data infrastructure and respond quickly to new business initiatives.