Tuning networking resources

Tuning LAN Manager Client Filesystem performance

If the output from the sar -u command (see ``Identifying disk I/O-bound systems'') shows that a LAN Manager client spends a significant proportion of time waiting for I/O to complete (%wio consistently greater than 15%), and this cannot be attributed to local disk activity (a disk is busy if sar -d consistently shows avque greater than 1 and %busy greater than 80%), then the LAN Manager Client Filesystem (LMCFS) may be causing an I/O bottleneck.
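As a quick check, the %wio figures from a sar -u run can be averaged with awk and compared against the 15% guideline above. The sketch below uses a captured sample in place of a live run such as sar -u 5 12; the column layout (time, %usr, %sys, %wio, %idle) is an assumption and may differ between releases:

```shell
# Average the %wio column over a short sampling run.
# A captured sample stands in for live "sar -u" output here;
# the assumed columns are: time %usr %sys %wio %idle.
sar_sample='12:00:05  10  8  22  60
12:00:10  12  9  18  61
12:00:15   9  7  25  59'

echo "$sar_sample" | awk '
    { wio += $4; n++ }
    END {
        avg = wio / n
        printf "average %%wio = %.1f\n", avg
        if (avg > 15)
            print "LMCFS may be an I/O bottleneck - investigate further"
    }'
```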

The kernel parameters that control the behavior of LMCFS are described in ``LAN Manager Client Filesystem parameters''. These parameters can only be adjusted using idtune(ADM) as described in ``Using idtune to reallocate kernel resources''.
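As a sketch of the procedure (the paths shown are the usual locations on OpenServer and may differ on your installation; the parameter value is only an example, not a recommendation):

```shell
# Raise an LMCFS kernel parameter with idtune, then relink the kernel.
# The value 2048 is illustrative only; choose a value based on the
# "lmc stats" figures discussed below.
/etc/conf/bin/idtune LMCFS_NUM_BUF 2048

# Relink the kernel and reboot for the new value to take effect.
cd /etc/conf/cf.d
./link_unix
```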

There are three areas where you can examine the performance of LAN Manager clients:

Examining possible network or server problems

The -v option to the vcview(LMC) command indicates possible network problems that may be affecting LMCFS performance:

   MaxXmt MaxRcv MaxMux TxCnt  TxErr RxCnt  RxErr Conns Retrans Reconns
   4096   4096   50     32131  5     34023  21    1     15      4

If the transmission error rate (100*TxErr/TxCnt) or reception error rate (100*RxErr/RxCnt) is high (greater than 10%), or either of these rates is increasing, the network or the server may be overloaded.

Similarly, if the number of retransmissions (Retrans) or reconnections (Reconns) is increasing, this may also indicate that the network or the server is overloaded.
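For reference, the error rates for the sample vcview -v output above work out as follows; both are far below the 10% guideline:

```shell
# Error rates computed from the sample vcview -v counters above.
TxCnt=32131; TxErr=5
RxCnt=34023; RxErr=21

awk -v txc="$TxCnt" -v txe="$TxErr" -v rxc="$RxCnt" -v rxe="$RxErr" 'BEGIN {
    printf "Tx error rate = %.2f%%\n", 100 * txe / txc
    printf "Rx error rate = %.2f%%\n", 100 * rxe / rxc
}'
```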

Examining the usage of server message blocks and lminodes

The command lmc stats (see lmc(LMC)) displays the usage of server message block (SMB) data buffers, request slots, and LAN Manager inodes (lminodes):

                   alloc    maxalloc  avail    fail    hiprifail
   SMB buffers:    49       102       1024     0       0         
   SMB req slots:  49       60        256      0       -
   SMB sync reads 840 (8.31% of total reads)

lminode alloc failures 0

If insufficient SMB data buffers or request slots are configured, processes will wait until more become available.

Increase the value of the kernel parameter LMCFS_NUM_BUF if the fail column displays a non-zero value for SMB data buffers.

Increase the value of the kernel parameter LMCFS_NUM_REQ if the fail column displays a non-zero value for SMB request slots.

Increase the value of the kernel parameter LMCFS_LMINUM if lminode alloc failures shows a non-zero value.
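These checks can be scripted. The sketch below parses captured lmc stats output; the field positions are assumptions based on the sample layout above, and the non-zero req-slot failure count is invented for illustration:

```shell
# Flag LMCFS resources whose "fail" counter is non-zero.  A captured
# sample stands in for live "lmc stats" output; here the req-slot line
# shows 3 failures, so LMCFS_NUM_REQ should be raised with idtune.
lmc_sample='SMB buffers:    49       102       1024     0       0
SMB req slots:  49       60        256      3       -'

echo "$lmc_sample" | awk '
    /SMB buffers/   && $6 > 0 { print "increase LMCFS_NUM_BUF" }
    /SMB req slots/ && $7 > 0 { print "increase LMCFS_NUM_REQ" }'
```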

If the proportion of synchronous reads shown by SMB sync reads is high, performance can suffer significantly. You can increase the size of the read-ahead buffer using the rawsize option modifier to mount. Read-ahead data is discarded if it is not used within the time set by the udttl option modifier to mount.

Examining the performance of each mounted filesystem

The command lmc mntstats (see lmc(LMC)) shows statistics for each mounted LAN Manager filesystem:

   NT/TMP mounted on /mnt, user-based, asynch
   rbsize rawsize wbsize awwsize timeout retrans udttl old r/a Broken oplocks
   8192   16384   8192   16384   300     5       50    0       0

old r/a shows the number of read-ahead blocks that were discarded because the client did not use the data quickly enough. If the value of old r/a is increasing, either increase the value of udttl or decrease the value of rawsize; which action to take depends on how long you are willing to let the data age.
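A simple way to watch for this is to compare old r/a across two lmc mntstats snapshots taken a few minutes apart; the counts below are invented for illustration:

```shell
# Two "old r/a" readings taken some minutes apart (example values).
old_ra_before=120
old_ra_after=180

if [ "$old_ra_after" -gt "$old_ra_before" ]; then
    echo "old r/a is rising: increase udttl or reduce rawsize"
else
    echo "old r/a is stable"
fi
```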

Broken oplocks shows the number of opportunistic locks that were relinquished. An increasing value indicates contention caused by several clients accessing the same files on the server.


© 2003 Caldera International, Inc. All rights reserved.
SCO OpenServer Release 5.0.7 -- 11 February 2003