Quantum Forum V

Quantum Forum for DXi V5000

Hi,

I'm feeding my brand new DXi V1000 Zimbra backups and am very impressed by the Total Reduction Ratio I'm getting. Awesome!

I'm a bit concerned with the Inline Throughput though.

On the Status/Performance/Inline pane, the Average Throughput graph is very, very flat!

I get 5.30 MB/s on a scale graduated up to 200 MB/s.

Is this normal?

File transfers from the Zimbra server to the DXi mounted as an NFS share take ages to complete.

Is there anything I can do about it? It also seems to impact the Zimbra server's performance.

Thanks !


Replies to This Discussion

Is the DXi V1000 running on the same Datastore as the Zimbra server? You may see better performance by keeping the V1000 on a separate Datastore, since both the DXi and Zimbra may be highly active with disk I/O. Also, what is the network route like from the Zimbra server to the DXi; are there any hops in between the two? If you could connect them both to the same vSwitch/subnet, that would be ideal for throughput.

Are you using any NFS mount options from the Zimbra server? You can try adjusting the rsize/wsize settings to the values recommended for the DXi, like this:

mount -t nfs -o nolock,rsize=1048576,wsize=1048576 dxiv1000-brgp00:/Q/shares/ZIMBRA /mnt/BACKUPS
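
If you want that mount to persist across reboots, the matching /etc/fstab entry would look something like this (same share name and mount point as the example above, so adjust for your environment):

dxiv1000-brgp00:/Q/shares/ZIMBRA  /mnt/BACKUPS  nfs  nolock,rsize=1048576,wsize=1048576  0 0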

The V1000 has a built-in diagnostic, NetPerf. You can test different points in the network by using the DXi GUI as either a server or a client. Utilities > Analyzer is the path from the GUI.

 

Let me know if you need further guidance, but this is a good starting point.  Running a tracert from Zimbra to the V1000 helps illustrate Double's vSwitch/subnet argument.  If you do have both the NAS and the client in the same Datastore, look at the disk statistics to weigh Double's first suggestion as well.
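
Note that tracert is the Windows spelling; on the Zimbra (Linux) box the command is traceroute. For example, using the hostname from the mount command above:

traceroute dxiv1000-brgp00

If both VMs really are on the same vSwitch/subnet, you should see a single hop.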

 

Jon

Hello,

I didn't reply immediately because I needed a little time to follow your advice.

The DXi V1000 is now on a different Datastore than the Zimbra server, on the same subnet but not on the same host. The connection between the two VMs (hosts) is as short as it can be: it only goes through an HP ProCurve Gigabit switch.

Mount options are the ones you suggested, but performance is exactly the same :-(

Inline throughput < 5 MB/s.

In vSphere, I can see that the Received Network Rate on the DXi VM went up to 125 Mb/s (50,000 network packets) twice (two spikes) at the beginning of the transfer, but it settled down to a poor few Mb/s only a couple of minutes later, suggesting that it wasn't able to "ingest" the data at that pace.

Finally, I am very keen on using the Diag tool, but my understanding is that it can only monitor the connection between two DXis (replication), not between a Linux host and a DXi.

I have found netperf for Linux, though (www.netperf.org), and am willing to use it if you can guide me a little more.

Thanks again for your time and help !

Hi Jean,

You can use netperf between the DXi and any box, including various flavors of Linux or even Windows servers.

If you're dealing with a Linux box related to RHEL, such as CentOS or any of the other flavors that use RPMs, you should be able to run this command:

 

'rpm -Uhv http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/rpmforge-releas...

 

Then use yum to install netperf:

'yum install netperf'

Once you have netperf installed, the command syntax is something like:

'netperf -f MB -H (IPofDXi)'
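
If the default 10-second run looks too short or noisy, you can lengthen it; -l sets the test duration in seconds and -t the test type (TCP_STREAM is the default). The IP below is just a placeholder:

'netperf -f MB -H 192.168.0.50 -l 60 -t TCP_STREAM'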

 

Don't forget to turn on the netperf server from the DXi GUI.

 

You can reverse the traffic flow by running 'netserver' on the Linux box and pointing the DXi at the target IP.
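
If the listener isn't already running on the Linux side, you can start it manually; 12865 is netperf's default control port:

'netserver -p 12865'

Then point the DXi's Analyzer at that box's IP address.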

 

Hope that helps,

 

Jon

Hi Jon,

Thanks again.

I've now installed netperf on a couple of Linux servers (SLES for VMware, so I compiled it from source myself).
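
For anyone else on SLES who needs to do the same, the build was the usual routine; the version number below is just whichever tarball you download from www.netperf.org:

tar xzf netperf-2.x.y.tar.gz
cd netperf-2.x.y
./configure
make
sudo make install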

It works fine between them, but not from either of them to the DXi, nor the other way round: Network throughput 0 MB/sec.

Before you ask: yes, netserver is running on the target Linux machine when I perform the network analysis from the DXi.

Could the fact that we use (tagged) VLANs be the key to these issues?
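
For what it's worth, here is what I use to check the tagging on the Linux side (it requires the 8021q module to be loaded; interface names will of course differ on your systems):

cat /proc/net/vlan/config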

So strange ... and so frustrating not being able to use this appliance.

Thanks for your help anyway. It is very welcome.

I've not tested with tagged VLANs, but I wouldn't rule that out as a problem with netperf.  I agree with Mark below: upgrade to the newer code, there is a lot of good stuff in there.  Also, I've seen async NFS move data twice as fast.  Give it a shot!  Good luck.

Ah... VLAN Tagging. 

The new 2.2 version of DXi V1000 has more explicit support for VLAN tagging. 

So, try this again, after you upgrade and after you enable netperf on the DXi.

Thanks for trying DXi V1000. Glad it's working for you.

First things first: you need to go ahead and upgrade to DXi V1000 version 2.2.1, which we just released. Why? Primarily, because it's a mandatory upgrade that must be installed before April 30. Secondarily, because it's a better, more stable product.

Please follow the upgrade instructions at this URL: Upgrade Here (http://forms.quantum.com/Software/V1000upgrade/).

Second thing: NFS in Sync mode (our default) is the slowest-performing protocol on the DXi V1000.

I recommend that you consider switching NFS to Asynchronous mode. Please see the Command Line Interface guide for instructions on how to turn on Async mode on a per-share basis. A cautionary note: Async mode is faster but is, by definition, more at risk of not writing data to disk in the case of a power failure.
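
If you want a quick read on whether the change helps, without waiting for a full Zimbra backup, a timed write of a throwaway file to the share works; the path follows the mount example earlier in this thread:

dd if=/dev/zero of=/mnt/BACKUPS/ddtest bs=1M count=1024
rm /mnt/BACKUPS/ddtest

Keep in mind that zeros deduplicate almost perfectly on a DXi, so treat the resulting number as a best case rather than a realistic ingest rate.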

Also, I see that there are several good notes below with troubleshooting tips.
