Quantum Forum V

Quantum Forum for DXi V5000

Dear all,

is there a way to increase the TCP send/receive buffers on the V1000 (and potentially the 67/8xx)?

The background is that we have two sites, connected by a 600 Mbit/s L2 line, but we only get lousy transfer rates of 0.3 to 1 MByte/s. We have seen this behaviour before, and one of the most promising approaches was to tune the TCP buffers. I was wondering whether there is a way to set them on the appliances as well. If not, the whole replication concept will be rather pointless for us.

Some details on the L2 line: latency around 160 ms, stable; roughly 7,500 km distance as the crow flies.

Some hints on the buffers: http://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php
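
For what it's worth, here is the back-of-the-envelope arithmetic behind my suspicion, as a quick Python sketch. The 64 KB window is an assumed typical default, not a measured value; the other numbers are the ones from above:

# Why the observed 0.3-1 MB/s matches a window-limited TCP stream
# on a 160 ms path. The 64 KB window is an assumption.

LINK_MBIT = 600           # L2 line capacity, Mbit/s
RTT_S = 0.160             # round-trip latency, seconds
WINDOW_BYTES = 64 * 1024  # assumed effective TCP window

# A single TCP stream can never move more than one window per round trip.
throughput = WINDOW_BYTES / RTT_S
print(f"window-limited throughput: {throughput / 1e6:.2f} MB/s")
# -> 0.41 MB/s, right in the observed 0.3-1 MB/s range

# Buffer size needed to fill the pipe: the bandwidth-delay product.
bdp = (LINK_MBIT * 1e6 / 8) * RTT_S
print(f"bandwidth-delay product: {bdp / 1e6:.1f} MB")
# -> 12.0 MB of send/receive buffer per stream to saturate the link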

Another thing that puzzled me: I put 2.8 GB of data onto an NFS share, but the web interface shows me:

Original Data Size
46.92 TB
Actual Data Sent
629.62 MB

What I put there was the VMDK file from the V1000 download.

Why is it showing 46 TB? I can understand compression and dedup taking it down to around 600 MB, but not where the 46 TB original size comes from.
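
Just to put numbers on it, the reported figures imply a reduction ratio that makes no sense for 2.8 GB of written data (a quick sketch; the figures are the ones from the web interface above):

# Ratio implied by the web interface vs. what was actually written.

original = 46.92 * 1e12   # "Original Data Size" as reported, bytes
sent = 629.62 * 1e6       # "Actual Data Sent" as reported, bytes
written = 2.8 * 1e9       # what was actually copied to the NFS share

print(f"reported reduction ratio: {original / sent:,.0f}:1")  # ~74,521:1
print(f"vs. data written:         {written / sent:.1f}:1")    # ~4.4:1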

Thanks for any enlightenment.

Christian


Replies to This Discussion

Hi Christian! Is there only that one single share configured on the system? Have you sent any other data? The DXi combines the contents of all shares into a single blockpool internally.

With your replication, does the process ever complete? And is the Ethernet performance the same throughout the entire process? You can get a good look at that in the Advanced Reporting window. What I have observed in many cases is that the process does not always use all available bandwidth, because some parts of the replication only exchange metadata and block IDs rather than doing a bulk transfer; eventually it decides which blocks need to be transmitted and begins to use more bandwidth.

Hi DoubleDensity, 

so far it is a test setup and there is only one share configured. I did send other data, around 15 GB, and everything was replicated; the original data size has grown to 90 TB now. Anyway.

The thing with the data rates is that this was an initial replication to an empty target share, so the whole data set was definitely transferred, and I can see that it arrived completely on the other side.

Advanced Reporting shows a peak at the beginning, slowing down towards the end. Nothing higher than …

Once again: I know this behaviour; it is typical for high-latency networks and can be compensated for with bigger TCP buffers. Since the V1000 is basically a Linux system, how can I change this? If there is no way, the appliance will not be helpful for us at all.
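
In case shell access is possible at all (an assumption; I don't know whether Quantum exposes it on the V1000), this is a sketch of what I would check, using the standard Linux knobs:

# Compare the appliance's current TCP buffer limits against the ~12 MB
# bandwidth-delay product of this link. Assumes shell/root access to the
# box, which Quantum may not officially support on the V1000.

BDP_BYTES = int(600e6 / 8 * 0.160)  # 12 MB for 600 Mbit/s at 160 ms RTT

for name in ("tcp_rmem", "tcp_wmem"):
    with open(f"/proc/sys/net/ipv4/{name}") as f:
        lo, default, maximum = (int(v) for v in f.read().split())
    verdict = "ok" if maximum >= BDP_BYTES else "too small for this path"
    print(f"{name}: min={lo} default={default} max={maximum} -> {verdict}")

# If the max values are below the BDP, the stock Linux fix would be e.g.:
#   sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
#   sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
# Whether such changes survive a reboot or a firmware upgrade on the
# appliance is an open question for Quantum support.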

Christian

Out of curiosity, what type of link is it? And for comparison, what throughput would you expect to see?

It's a 600 Mbit/s L2 link, technically Ethernet. With optimizations we have seen around 10 to 15 MB/s; without optimizations, about the same as I see now with the Quantum.

It scales pretty well with multiple connections, so if there is an option for multiple streams in the Quantum, that would be helpful, too.
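
For illustration, the rough model behind that, with the same assumed 64 KB per-stream window as in my earlier sketch (the link rate is the only hard limit):

# Each TCP stream brings its own window, so aggregate throughput scales
# roughly linearly with the stream count until the link itself saturates.

RTT_S = 0.160
WINDOW_BYTES = 64 * 1024  # assumed per-stream window
LINK_BYTES_S = 600e6 / 8  # 75 MB/s line rate

for streams in (1, 4, 16, 64):
    aggregate = min(streams * WINDOW_BYTES / RTT_S, LINK_BYTES_S)
    print(f"{streams:3d} streams -> ~{aggregate / 1e6:5.1f} MB/s")
# -> 0.4, 1.6, 6.6, 26.2 MB/s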
