Hi Gidi,
and thanks for your valuable answer.
Taking a look at the link you posted, I found this interesting section:
=====
Initial test with 64KB read operations (host measured latency = 103ms, controller lun latency == volume latency ~ 0.06ms):
fas01*> stats show -r -n 1 -i 5 volume:demo2:read_latency volume:demo2:read_ops lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops
volume:demo2:read_latency:61.42us
volume:demo2:read_ops:9/s
lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency:0.04ms
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops:9/s
Test with 128KB read operations (host measured latency = 205ms, controller LUN latency of 101ms includes 1 network round trip):
fas01*> stats show -r -n 1 -i 5 volume:demo2:read_latency volume:demo2:read_ops lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops
volume:demo2:read_latency:44.71us
volume:demo2:read_ops:9/s
lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency:101.58ms
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops:4/s
Test with 256KB read operations (host measured latency = 408ms, controller LUN latency of 302ms includes 3 network round trips):
fas01*> stats show -r -n 1 -i 5 volume:demo2:read_latency volume:demo2:read_ops lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops
volume:demo2:read_latency:47.02us
volume:demo2:read_ops:10/s
lun:/vol/demo2/lun2-BLH0G?BQgK/T:avg_read_latency:302.62ms
lun:/vol/demo2/lun2-BLH0G?BQgK/T:read_ops:2/s
=====
The first test looks OK, but I don't understand how one round trip (which I understand to be the host issuing two read operations to get two 64KB chunks) can add 100ms to the LUN latency. And reading 256KB (4 * 64KB) adds another 2 * 100ms on top of that. It must be a test setup thing or something, those latencies are crazy.
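
If I write down the arithmetic the post seems to imply (a rough sketch of my own, in Python, assuming the data is transferred to the host in 64KB chunks and that every chunk beyond the first costs one ~100ms network round trip that gets counted against the LUN), the numbers do line up, which is exactly what looks strange to me:

# My own back-of-the-envelope model of the numbers above. Assumptions, not
# facts from the post: per-exchange transfer size is 64KB, the network round
# trip in this test is roughly 100ms, and the LUN counter keeps ticking while
# the controller waits for each additional chunk to be fetched.

MAX_CHUNK_KB = 64     # assumed per-exchange transfer size
RTT_MS = 100.0        # assumed network round-trip time in this test
BACKEND_MS = 0.06     # volume/back-end latency, taken from the 64KB test

def expected_lun_latency_ms(io_size_kb):
    chunks = -(-io_size_kb // MAX_CHUNK_KB)   # ceiling division
    extra_round_trips = chunks - 1            # first chunk rides on the initial request
    return BACKEND_MS + extra_round_trips * RTT_MS

for size_kb, measured_ms in [(64, 0.04), (128, 101.58), (256, 302.62)]:
    print(f"{size_kb}KB read: model ~{expected_lun_latency_ms(size_kb):.2f} ms, "
          f"counter showed {measured_ms} ms")

So the model reproduces roughly 0.06, 100.06 and 300.06 ms for the three cases, which matches the quoted counters only if a single round trip on that network really takes about 100ms.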