Discussion:
ixg(4) performance
Emmanuel Dreyfus
2014-07-01 06:56:18 UTC
Permalink
Hello

I am experimenting with a 10 GbE link using ixg(4), but the result is really weak: the
two machines have a direct link through an SFP+ cable, and copying a file
over NFS I get a throughput of 1.8 Mb/s, which is less than 2% of the
link capacity. Any idea of where to look for improvement?

Here is dmesg output for the boards:
ixg0 at pci5 dev 0 function 0: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.3.10
ixg0: clearing prefetchable bit
ixg0: interrupting at ioapic0 pin 18
ixg0: PCI Express Bus: Speed 2.5Gb/s Width x8
ixg1 at pci5 dev 0 function 1: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.3.10
ixg1: clearing prefetchable bit
ixg1: interrupting at ioapic0 pin 19
ixg1: PCI Express Bus: Speed 2.5Gb/s Width x8
--
Emmanuel Dreyfus
***@netbsd.org
Stephan
2014-07-01 07:15:45 UTC
Permalink
Hi,

did you measure raw TCP and UDP throughput using iperf or netperf?


Regards,

Stephan
Post by Emmanuel Dreyfus
Hello
I am experimenting with a 10 GbE link using ixg(4), but the result is really weak: the
two machines have a direct link through an SFP+ cable, and copying a file
over NFS I get a throughput of 1.8 Mb/s, which is less than 2% of the
link capacity. Any idea of where to look for improvement?
ixg0 at pci5 dev 0 function 0: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.3.10
ixg0: clearing prefetchable bit
ixg0: interrupting at ioapic0 pin 18
ixg0: PCI Express Bus: Speed 2.5Gb/s Width x8
ixg1 at pci5 dev 0 function 1: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.3.10
ixg1: clearing prefetchable bit
ixg1: interrupting at ioapic0 pin 19
ixg1: PCI Express Bus: Speed 2.5Gb/s Width x8
--
Emmanuel Dreyfus
Emmanuel Dreyfus
2014-07-01 07:39:47 UTC
Permalink
Post by Stephan
did you measure raw TCP and UDP throughput using iperf or netperf?
No, this was a file copy over NFS.
--
Emmanuel Dreyfus
***@netbsd.org
Hisashi T Fujinaka
2014-07-01 16:24:40 UTC
Permalink
Post by Emmanuel Dreyfus
Post by Stephan
did you measure raw TCP and UDP throughput using iperf or netperf?
No, this was a file copy over NFS.
Step one, don't use NFS.
--
Hisashi T Fujinaka - ***@twofifty.com
BSEE(6/86) + BSChem(3/95) + BAEnglish(8/95) + MSCS(8/03) + $2.50 = latte
Emmanuel Dreyfus
2014-07-01 17:26:38 UTC
Permalink
Post by Hisashi T Fujinaka
Post by Emmanuel Dreyfus
No, this was a file copy over NFS.
Step one, don't use NFS.
What should I use instead?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
***@netbsd.org
Gary Duzan
2014-07-01 17:29:59 UTC
Permalink
In Message <1lo4dze.1ihmarucmavkvM%***@netbsd.org>,
***@netbsd.org (Emmanuel Dreyfus) wrote:

=>Hisashi T Fujinaka <***@twofifty.com> wrote:
=>
=>> > No, this was a file copy over NFS.
=>> Step one, don't use NFS.
=>
=>What should I use instead?

Regardless of alternatives to NFS, it seems to me that the first
thing to do is isolate the interface performance from the NFS
performance. If you can use ttcp or something similar to measure
the TCP/UDP performance over the same link, that should help
determine whether it is an NFS-specific problem or a more general
networking problem.

Gary Duzan
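
For illustration, a raw TCP test over the same link could look roughly like this (a sketch using the classic ttcp options; exact flags may vary between ttcp versions, and the 10.103.101.117 address is the one that appears later in the thread):

# on the receiving machine
ttcp -r -s
# on the sending machine: TCP, 2048 buffers of 64 KB
ttcp -t -s -l 65536 -n 2048 10.103.101.117
# add -u on both sides to repeat the test over UDP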
Hisashi T Fujinaka
2014-07-01 19:50:42 UTC
Permalink
Post by Gary Duzan
=>
=>> > No, this was a file copy over NFS.
=>> Step one, don't use NFS.
=>
=>What should I use instead?
Regardless of alternatives to NFS, it seems to me that the first
thing to do is isolate the interface performance from the NFS
performance. If you can use ttcp or something similar to measure
the TCP/UDP performance over the same link, that should help
determine whether it is an NFS-specific problem or a more general
networking problem.
That's what I should've said if I wanted to be helpful. On Linux I
suggest iperf (multiple threads) or multiple concurrent copies of
netperf. It takes more than one thread to saturate the link. Then, after
you have made sure that works, you can go on to seeing how slow NFS
really is.

On NetBSD I'm not sure.
--
Hisashi T Fujinaka - ***@twofifty.com
BSEE(6/86) + BSChem(3/95) + BAEnglish(8/95) + MSCS(8/03) + $2.50 = latte
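
As an illustration of the multi-stream approach, something along these lines should work with iperf 2; the -P flag runs several parallel client streams (the address is the one used later in the thread):

# server side
iperf -s
# client side: 4 parallel TCP streams for 30 seconds
iperf -c 10.103.101.117 -P 4 -t 30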
Thor Lancelot Simon
2014-07-07 03:44:58 UTC
Permalink
Post by Hisashi T Fujinaka
That's what I should've said if I wanted to be helpful. On Linux I
suggest iperf (multiple threads) or multiple concurrent copies of
netperf. It takes more than one thread to saturate the link. Then after
you made sure that works, then you can go on to seeing how slow NFS
really is.
On NetBSD I'm not sure.
As a first step, ensure your socket buffer sizes are adequate. The default,
and default maximum, socket buffer sizes in NetBSD are inappropriate for
10Gb unless you are using hundreds of TCP connections at once.

Thor
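
As a rough back-of-the-envelope check of why this matters: a single TCP connection can only keep about bandwidth x RTT of data in flight, so filling a 10 Gb/s pipe at even a 0.1 ms round-trip time needs 10^10 b/s x 0.0001 s = 10^6 bits, i.e. about 125 KB of socket buffer, and at the ~0.5 ms ping times reported later in this thread it needs roughly 625 KB, far above a 32 KB default.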
Emmanuel Dreyfus
2014-07-07 08:19:57 UTC
Permalink
Post by Thor Lancelot Simon
As a first step, ensure your socket buffer sizes are adequate. The default,
and default maximum, socket buffer sizes in NetBSD are inappropriate for
10Gb unless you are using hundreds of TCP conections at once.
What is an adequate value? I tried raising these without much improvement:
kern.sbmax=67108864
net.inet.tcp.sendbuf_max=1048576
net.inet.tcp.recvbuf_max=1048576
--
Emmanuel Dreyfus
***@netbsd.org
Thor Lancelot Simon
2014-07-07 11:56:37 UTC
Permalink
Post by Thor Lancelot Simon
As a first step, ensure your socket buffer sizes are adequate. The default,
and default maximum, socket buffer sizes in NetBSD are inappropriate for
10Gb unless you are using hundreds of TCP conections at once.
Try raising the default socket buffer size, not the maximum size for the
autotuning code -- starting from 32k it will take _forever_ to get to 1MB
by autotuning.

Thor
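
A minimal sketch of what that could look like, using the sysctl names that appear elsewhere in this thread (the 16 MB autotuning caps are example values, not recommendations):

# defaults applied to new sockets (what is being suggested here)
sysctl -w net.inet.tcp.sendspace=1048576
sysctl -w net.inet.tcp.recvspace=1048576
# upper bounds for the autotuning code (raising only these is not enough)
sysctl -w net.inet.tcp.sendbuf_max=16777216
sysctl -w net.inet.tcp.recvbuf_max=16777216
# kern.sbmax must be large enough to allow the buffers requested
sysctl -w kern.sbmax=67108864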
Emmanuel Dreyfus
2014-07-08 06:35:30 UTC
Permalink
Post by Thor Lancelot Simon
autotuning code -- starting from 32k it will take _forever_ to get to 1MB
by autotuning.
I raised net.inet.tcp.sendspace/net.inet.tcp.recvspace to 1048576
and kern.somaxkva to the same value as kern.sbmax (67108864). This
almost doubled throughput, as I now get 2.2 Gb/s.

Increasing net.inet.tcp.sendspace/net.inet.tcp.recvspace further
did not produce any improvement. But I suspect this is the reason:

ixg0: PCI Express Bus: Speed 2.5Gb/s Width x8

Is it 2.5 Gb/s maximum (in which case I have hit that limit), or 2.5 x 8 = 20 Gb/s,
in which case I still have room for improvement?
--
Emmanuel Dreyfus
***@netbsd.org
Martin Husemann
2014-07-08 08:44:59 UTC
Permalink
Post by Emmanuel Dreyfus
ixg0: PCI Express Bus: Speed 2.5Gb/s Width x8
Is it 2.5 Gb/s maximum (in which case I have hit that limit), or 2.5 x 8 = 20 Gb/s,
in which case I still have room for improvement?
That printout is confusing and does not match any PCIe variant I know of.
I can only guess that it means 2.5G transfers per second (not bits), which
would mean about 2 Gbit/s of bandwidth per lane, times 8 lanes, so roughly
2 GB/s per direction overall (PCIe 1.0).

Martin
Matt Thomas
2014-07-08 08:55:32 UTC
Permalink
Post by Emmanuel Dreyfus
ixg0: PCI Express Bus: Speed 2.5Gb/s Width x8
Is it 2.5 Gb/s maximum (in which case I hit that limit), or 2.5x8 = 20 Gb/s
in which case I still have room for improvment.
The latter.
Hisashi T Fujinaka
2014-07-08 13:29:30 UTC
Permalink
Post by Matt Thomas
Post by Emmanuel Dreyfus
ixg0: PCI Express Bus: Speed 2.5Gb/s Width x8
Is it 2.5 Gb/s maximum (in which case I hit that limit), or 2.5x8 = 20 Gb/s
in which case I still have room for improvment.
The latter.
2.5 x 8 GT/s (gigatransfers per second). Each packet typically has a payload
of 128 or 256 bytes on a very chatty bus, so in some cases you really want
the PCIe Gen 2 speed of 5 GT/s x 8 for an 82599 NIC.

Another thing is to increase the frame size. Small-packet performance
(64-byte packets) will never hit line rate, but I think you have other
problems before you even get close to that.
--
Hisashi T Fujinaka - ***@twofifty.com
BSEE(6/86) + BSChem(3/95) + BAEnglish(8/95) + MSCS(8/03) + $2.50 = latte
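
For reference, the arithmetic behind that: PCIe 1.x signals at 2.5 GT/s per lane with 8b/10b encoding, so each lane carries about 2 Gb/s of data and an x8 link gives roughly 16 Gb/s per direction before TLP headers and other protocol overhead. With 128 or 256 byte payloads that overhead is significant, which is why a Gen 2 x8 link (5 GT/s per lane, about 32 Gb/s) leaves far more headroom for a 10 Gb/s NIC.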
Emmanuel Dreyfus
2014-07-09 06:36:59 UTC
Permalink
Post by Emmanuel Dreyfus
kern.sbmax=67108864
net.inet.tcp.sendbuf_max=1048576
net.inet.tcp.recvbuf_max=1048576
I now run with this:
sysctl -w kern.sbmax=67108864 ;
sysctl -w kern.somaxkva=67108864 ;
sysctl -w net.inet.tcp.recvbuf_auto=0 ;
sysctl -w net.inet.tcp.sendbuf_auto=0 ;
sysctl -w net.inet.tcp.recvspace=1048576 ;
sysctl -w net.inet.tcp.sendspace=1048576 ;
ifconfig ixg1 mtu 9000 tso4 ip4csum-tx tcp4csum-tx udp4csum-tx ip4csum-rx

ifconfig does not want tcp4csum-rx and udp4csum-rx despite advertising
them as available.

netperf now says I get 2330 Mb/s. Any idea of what I can tune to get
a better result? Increasing the above values does not help.
--
Emmanuel Dreyfus
***@netbsd.org
David Young
2014-07-09 15:58:38 UTC
Permalink
Post by Emmanuel Dreyfus
ifconfig does not want tcp4csum-rx and udp4csum-rx despite advertising
them as available.
In the hardware, you cannot independently enable layer-4 Rx checksums
for TCP or UDP, IPv4 or IPv6. It's all or nothing. So that you cannot
set the flags to a state the hardware cannot match, SIOCSIFCAP returns
EINVAL when you make an unsupported selection.

Try 'ifconfig ixg0 tcp4csum-rx tcp6csum-rx udp4csum-rx udp6csum-rx';
that should work.

Dave
--
David Young
***@pobox.com Urbana, IL (217) 721-9981
Emmanuel Dreyfus
2014-07-02 03:26:07 UTC
Permalink
Post by Stephan
did you measure raw TCP and UDP throughput using iperf or netperf?
ttcp reports 1.39 Gb/s with TCP, which is still not impressive. Oddly,
it does not report anything for UDP (-u flag), even though netstat -i shows
data being transmitted.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
***@netbsd.org
Stephan
2014-07-02 05:10:30 UTC
Permalink
I would recommend using netperf for measuring TCP and UDP performance.
Besides that, it measures different block/segment sizes.
Post by Emmanuel Dreyfus
Post by Stephan
did you measure raw TCP and UDP throughput using iperf or netperf?
ttcp reports 1,39 Gb/s with TCP, which is still not impressing. Oddly,
it does not report anything for UDP (-u flag), while netstat -i shows
data transmitted.
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
Emmanuel Dreyfus
2014-07-02 06:09:06 UTC
Permalink
Post by Stephan
I would recommend using netperf for measuring TCP and UDP performance.
Besides that, it measures different block/segment sizes.
TCP STREAM
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 32768  32768   32768    10.00      1162.17

UDP STREAM
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

  9216    9216   10.01      366386      0      2699.60
 41600           10.01       78157              575.87
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
***@netbsd.org
Stephan
2014-07-02 08:34:07 UTC
Permalink
Sorry, I meant netio, not netperf - my bad. It should give you
something like this:

# netio -t 127.0.0.1

NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel

TCP connection established.
Packet size 1k bytes: 297270 KByte/s Tx, 276700 KByte/s Rx.
Packet size 2k bytes: 497285 KByte/s Tx, 24667 Byte/s Rx.
Packet size 4k bytes: 535448 KByte/s Tx, 495991 KByte/s Rx.
Packet size 8k bytes: 25511 Byte/s Tx, 472072 KByte/s Rx.
Packet size 16k bytes: 26267 Byte/s Tx, 511965 KByte/s Rx.
Packet size 32k bytes: 1234733 KByte/s Tx, 30356 Byte/s Rx.
Done.

(-u for UDP)
Post by Emmanuel Dreyfus
Post by Stephan
I would recommend using netperf for measuring TCP and UDP performance.
Besides that, it measures different block/segment sizes.
TCP STREAM
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec
 32768  32768   32768    10.00      1162.17
UDP STREAM
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
  9216    9216   10.01      366386      0      2699.60
 41600           10.01       78157              575.87
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
Emmanuel Dreyfus
2014-07-02 14:05:53 UTC
Permalink
Post by Stephan
Sorry, I meant netio
NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel

TCP connection established.
Packet size 1k bytes: 114938 KByte/s Tx, 114816 KByte/s Rx.
Packet size 2k bytes: 114924 KByte/s Tx, 114868 KByte/s Rx.
Packet size 4k bytes: 114871 KByte/s Tx, 114901 KByte/s Rx.
Packet size 8k bytes: 114877 KByte/s Tx, 114900 KByte/s Rx.
Packet size 16k bytes: 114882 KByte/s Tx, 114914 KByte/s Rx.
Packet size 32k bytes: 114881 KByte/s Tx, 114905 KByte/s Rx.
Done.
--
Emmanuel Dreyfus
***@netbsd.org
Emmanuel Dreyfus
2014-07-04 14:43:19 UTC
Permalink
Post by Stephan
TCP connection established.
Packet size 1k bytes: 114938 KByte/s Tx, 114816 KByte/s Rx.
Packet size 2k bytes: 114924 KByte/s Tx, 114868 KByte/s Rx.
Packet size 4k bytes: 114871 KByte/s Tx, 114901 KByte/s Rx.
Packet size 8k bytes: 114877 KByte/s Tx, 114900 KByte/s Rx.
Packet size 16k bytes: 114882 KByte/s Tx, 114914 KByte/s Rx.
Packet size 32k bytes: 114881 KByte/s Tx, 114905 KByte/s Rx.
netio reports awful performance. But netperf says:

***@saccharose# netperf -H 10.103.101.117
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.103.101.117 (10.103.101.117) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 32768  32768   32768    10.01      1152.76

This looks much better: the maximum bandwidth is about 1200 Mb/s, as I understand it.

How can this be explained?
--
Emmanuel Dreyfus
***@netbsd.org
Matthias Scheler
2014-07-06 15:39:30 UTC
Permalink
Post by Emmanuel Dreyfus
Post by Stephan
TCP connection established.
Packet size 1k bytes: 114938 KByte/s Tx, 114816 KByte/s Rx.
Packet size 2k bytes: 114924 KByte/s Tx, 114868 KByte/s Rx.
Packet size 4k bytes: 114871 KByte/s Tx, 114901 KByte/s Rx.
Packet size 8k bytes: 114877 KByte/s Tx, 114900 KByte/s Rx.
Packet size 16k bytes: 114882 KByte/s Tx, 114914 KByte/s Rx.
Packet size 32k bytes: 114881 KByte/s Tx, 114905 KByte/s Rx.
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.103.101.117 (10.103.101.117) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
32768 32768 32768 10.01 1152.76
This look much better: maximum bandwith is 1200 Mb/s as I understand.
How can this be explained?
Probably by various factors:
1.) Lack of SMP scalability in the network stack.
2.) No MSI-X support.
3.) No RSS support in the driver.

You will also struggle to saturate a 10 Gb/s link with a single TCP connection
in general.

Kind regards
--
Matthias Scheler https://zhadum.org.uk/
Fredrik Pettai
2014-07-06 20:51:11 UTC
Permalink
Post by Matthias Scheler
Post by Emmanuel Dreyfus
Post by Stephan
TCP connection established.
Packet size 1k bytes: 114938 KByte/s Tx, 114816 KByte/s Rx.
Packet size 2k bytes: 114924 KByte/s Tx, 114868 KByte/s Rx.
Packet size 4k bytes: 114871 KByte/s Tx, 114901 KByte/s Rx.
Packet size 8k bytes: 114877 KByte/s Tx, 114900 KByte/s Rx.
Packet size 16k bytes: 114882 KByte/s Tx, 114914 KByte/s Rx.
Packet size 32k bytes: 114881 KByte/s Tx, 114905 KByte/s Rx.
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.103.101.117 (10.103.101.117) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
32768 32768 32768 10.01 1152.76
This look much better: maximum bandwith is 1200 Mb/s as I understand.
How can this be explained?
1.) Lack of SMP scalability in the network stack.
2.) No MSI-X support.
3.) No RSS support in the driver.
You will also struggle to saturate a 10 Gb/s link with a single TCP connection
in general.
Actually, performance used to be OK, but somewhere after the NetBSD 2.x to 3.x
releases it went down.

http://bsd-beta.slashdot.org/story/04/05/03/2235255/netbsd-sets-internet2-land-speed-world-record

(I think the old Dell 2650s with the Intel 10 Gb/s cards are still somewhere in the basement of LTU…)

/P
Emmanuel Dreyfus
2014-07-17 04:19:27 UTC
Permalink
Post by Emmanuel Dreyfus
I am experimenting with a 10 GbE link using ixg(4), but the result is really weak: the
two machines have a direct link through an SFP+ cable, and copying a file
over NFS I get a throughput of 1.8 Mb/s, which is less than 2% of the
link capacity. Any idea of where to look for improvement?
I measured TCP performance with netperf and I was able to increase
throughput up to 2.2 Gb/s using the following settings:
ifconfig ixg0 mtu 9000 tso4 ip4csum tcp4csum-tx udp4csum-tx
sysctl:
kern.sbmax = 67108864
kern.somaxkva = 67108864
net.inet.tcp.rfc1323 = 1
net.inet.tcp.sendspace = 2097152
net.inet.tcp.recvspace = 2097152
net.inet.tcp.recvbuf_auto = 0
net.inet.tcp.sendbuf_auto = 0

tcp4csum-rx and udp4csum-rx produce errors despite being advertised in the
ifconfig capabilities line.

I cannot find any way to improve things further. But there is a rather high
latency that may be responsible for the problem:
PING 10.103.101.113 (10.103.101.113): 48 data bytes
64 bytes from 10.103.101.113: icmp_seq=0 ttl=255 time=0.582 ms
64 bytes from 10.103.101.113: icmp_seq=1 ttl=255 time=0.583 ms
64 bytes from 10.103.101.113: icmp_seq=2 ttl=255 time=0.568 ms
64 bytes from 10.103.101.113: icmp_seq=3 ttl=255 time=0.565 ms

This is twice what I get on regular gigabit ethernet. Any idea why
it is so high? >= 0.5 ms seems wrong on a 10GE direct link...
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
***@netbsd.org
Stephan
2014-07-17 05:28:31 UTC
Permalink
I have seen these ping latencies of 0.5 seconds on very different NetBSD
servers with completely different hardware. There might be a
regression somewhere. If so, I don't know whether it is specific to
ICMP or to networking in general.
Post by Emmanuel Dreyfus
Post by Emmanuel Dreyfus
I experiment 10 GE link with ixb(4), but the result is really weak: The
two machines have a direct link through a SFTP+ câble, and copying a file
over NFS I get a throughput of 1.8 Mb/s, which is less than 2% of the
link capacity. Any idea of where to look for imrovement?
I measured TCP performance with netperf and I was able to increase
ifconfig ixb0 mtu 9000 tso4 ip4csum tcp4csum-tx udp4csum-tx
kern.sbmax = 67108864
kern.somaxkva = 67108864
net.inet.tcp.rfc1323 = 1
net.inet.tcp.sendspace = 2097152
net.inet.tcp.recvspace = 2097152
net.inet.tcp.recvbuf_auto = 0
net.inet.tcp.sendbuf_auto = 0
tcp4csum-rx and udp4csum-rx produces errors despite being advertised by
ifconfig capabilities line.
I do not find any way to improve further. But there is a very high
PING 10.103.101.113 (10.103.101.113): 48 data bytes
64 bytes from 10.103.101.113: icmp_seq=0 ttl=255 time=0.582 ms
64 bytes from 10.103.101.113: icmp_seq=1 ttl=255 time=0.583 ms
64 bytes from 10.103.101.113: icmp_seq=2 ttl=255 time=0.568 ms
64 bytes from 10.103.101.113: icmp_seq=3 ttl=255 time=0.565 ms
This is twice what I get on gregular gigabit ethernet. Any idea of why
it is so high? >= 500 ms seems wrong on a 10GE direct link...
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
Emmanuel Dreyfus
2014-07-17 06:31:13 UTC
Permalink
Post by Stephan
I saw these ping latencys of 0,5 seconds on very different NetBSD
Servers with completely different hardware. There might be a
regression somewhere. I don´t know, if so, wheather it is specific to
ICMP or networking in general.
It must be specific to the driver and/or the hardware. I have bge interfaces on the same machine that have latencies between 0.100 and 0.250 ms.
--
Emmanuel Dreyfus
***@netbsd.org
Stephan
2014-07-17 09:00:00 UTC
Permalink
I have 2 boxes here at hand with wm interfaces and they reply in
between 0.5 and 0.8 secs. Likewise, when I ping another host (Solaris)
from these boxes it gives me a response time of 0.5 - 0.8 secs. When I
ping the same Solaris host from another source it tells me a time of
~0.1 secs.
Post by Emmanuel Dreyfus
Post by Stephan
I saw these ping latencys of 0,5 seconds on very different NetBSD
Servers with completely different hardware. There might be a
regression somewhere. I don´t know, if so, wheather it is specific to
ICMP or networking in general.
It must be specific to the driver and/or the hardware. I have bge interfaceon the same machine that have lattency between 0.100 and 0.250 ms.
--
Emmanuel Dreyfus
Stephan
2014-07-17 09:21:00 UTC
Permalink
I just found a box with a bge interface and I can confirm that it
replies in around 0.1 - 0.2 msecs.
Correction: msecs, not secs.
Post by Stephan
I have 2 boxes here at hand with wm interfaces and they reply in
between 0,5 and 0,8 secs. Likewise, when I ping another host (Solaris)
from these boxes it gives me a response time of 0,5 - 0,8 secs. When I
ping the same Solaris host from another source it tells me a time of
~0,1 secs.
Post by Emmanuel Dreyfus
Post by Stephan
I saw these ping latencys of 0,5 seconds on very different NetBSD
Servers with completely different hardware. There might be a
regression somewhere. I don´t know, if so, wheather it is specific to
ICMP or networking in general.
It must be specific to the driver and/or the hardware. I have bge interfaceon the same machine that have lattency between 0.100 and 0.250 ms.
--
Emmanuel Dreyfus
Stephan
2014-07-17 09:01:19 UTC
Permalink
Correction: msecs, not secs.
Post by Stephan
I have 2 boxes here at hand with wm interfaces and they reply in
between 0,5 and 0,8 secs. Likewise, when I ping another host (Solaris)
from these boxes it gives me a response time of 0,5 - 0,8 secs. When I
ping the same Solaris host from another source it tells me a time of
~0,1 secs.
Post by Emmanuel Dreyfus
Post by Stephan
I saw these ping latencys of 0,5 seconds on very different NetBSD
Servers with completely different hardware. There might be a
regression somewhere. I don´t know, if so, wheather it is specific to
ICMP or networking in general.
It must be specific to the driver and/or the hardware. I have bge interfaceon the same machine that have lattency between 0.100 and 0.250 ms.
--
Emmanuel Dreyfus