[colug-432] file replication across data centers
Angelo McComis
angelo at mccomis.com
Tue Jan 31 23:04:53 EST 2017
I see a fair amount of this in various customer environments. I've also
been around BCP/Disaster Recovery, so I can speak to the choices, what they
do, what they don't do, and so on. If you have a specific question, you're
welcome to reach out. But in the way of general experience, here's what I
can share off the top of my head...
Multi-site replication comes in a few different varieties:
A) Storage-based (e.g. your SAN/NAS device is shipping block-by-block
changes to a matching storage system at the other site, which applies
those changes/writes to a remote mirror)
B) Host-based (e.g. your OS filesystem driver is taking IO writes for a
given filesystem and replicating them to another host, which applies those
writes to a remote filesystem; see the sketch after this list)
C) Application-based (e.g. a feature built into an application [not
unlike mysql] is pushing bits over the network to another copy somewhere
else, which receives those bits)
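
To make option B a bit more concrete, here's a minimal Python sketch of
the idea: apply each write locally, then forward the same bytes to a peer
that applies them to its own copy. The peer address and wire framing are
made up for illustration; a real host-based replicator does this inside
the filesystem/block driver, not in application code.

import socket
import struct

REPLICA_ADDR = ("replica.example.com", 9000)  # hypothetical far-side host

def replicate_write(local_path, offset, data):
    """Apply a write locally, then ship it to the remote replica (option B).

    Assumes the file already exists on both sides; a real replicator
    hooks the OS driver layer rather than working file-by-file.
    """
    # 1. Apply the write to the local filesystem.
    with open(local_path, "r+b") as f:
        f.seek(offset)
        f.write(data)

    # 2. Forward the same write to the far side: offset + length, then payload.
    with socket.create_connection(REPLICA_ADDR) as sock:
        sock.sendall(struct.pack("!QI", offset, len(data)) + data)

Note that nothing here waits for the far side to confirm the write, which
is exactly the synchronous-vs-asynchronous distinction discussed below.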
In cases A and B above, and sometimes (but not always) C, the remote side
is in a "read only" mode and not usable, since there's no mechanism to take
writes on the far side and get them back to the original site. In the case
of application-based replication, some products handle bi-directional
replication, so you're not stuck with a far site in read-only mode.
In the case of A above, you can ensure zero data loss if you have 1) a
short enough distance between the two sites, and 2) synchronous replication
enabled. In this mode, a write is written to the A side, replicated to the
B side, and confirmation of the write is sent back to A before the A side
considers the write complete and releases control back to the requesting
application. Because of the extra hops, synchronous replication is limited
by latency to roughly 20 miles of distance; the back-of-envelope numbers
below show why. If zero data loss is not an absolute requirement,
asynchronous replication is not distance limited, but as distance
increases, so does the lag between a write to the near side and that write
being committed to the far side.
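
A quick back-of-envelope calculation in Python shows why distance is the
limiting factor. It assumes a signal speed in fiber of roughly 124,000
miles per second (about two-thirds the speed of light in vacuum) and
ignores switch and protocol overhead, which add more in practice:

FIBER_SPEED = 124_000  # miles per second in fiber, rough approximation

def sync_write_penalty_ms(distance_miles):
    """Minimum added latency per write: one round trip to the far side."""
    return (2 * distance_miles / FIBER_SPEED) * 1000

for miles in (5, 20, 100, 500):
    print("%4d miles -> +%.2f ms per write" % (miles, sync_write_penalty_ms(miles)))

At 20 miles that's about a third of a millisecond added to every single
write, before any real-world overhead; stretch the link to a few hundred
miles and the penalty climbs into the multi-millisecond range, which most
write-heavy applications can't tolerate.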
On your specific mention of XtreemFS: it works like option B above,
plugging in at the filesystem driver layer of the OS, and it appears, from
my reading of their site, to operate asynchronously. GlusterFS is similar:
its geo-replication is asynchronous and can work across distance to create
global clusters.
The art of applying a replication strategy is first understanding the
business or technical requirement that must be met. What are the use
cases? If you're wondering which of the above I see most often, it's A and
C. Examples: frame-to-frame storage replication done at the volume level,
databases running log-shipping to remote sites for a DR copy (sketched
below), that sort of thing.
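
For flavor, here's a hypothetical Python sketch of the log-shipping
pattern: the near side tails its write-ahead log and ships any new records
to the far site, which replays them into a read-only DR copy. The log path
and the send transport are placeholders, not any particular database's API:

import time

def ship_log(log_path, send, poll_interval=1.0):
    """Tail a write-ahead log and ship new bytes to the DR site.

    `send` is any callable that delivers bytes to the remote replayer,
    e.g. over a TCP connection.
    """
    shipped = 0  # byte offset of the last record already shipped
    while True:
        with open(log_path, "rb") as f:
            f.seek(shipped)
            chunk = f.read()
        if chunk:
            send(chunk)  # the far side replays these records into its copy
            shipped += len(chunk)
        time.sleep(poll_interval)  # asynchronous: the DR copy lags the primary

Because shipping happens after the fact, the far side always lags a
little, which is the asynchronous trade-off described above.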
Angelo
On Tue, Jan 31, 2017 at 9:18 PM, Christopher Cavello <cavello.1 at osu.edu>
wrote:
> Is anyone on this list willing to share his experience with file
> replication across data centers? (glusterfs geo-replication, xtreemfs
> http://www.xtreemfs.org/, etc.)