Defining processor performance is a breeze compared to defining the performance of a complete computer. There are so many variables that it becomes almost impossible to point out a single winner. Different computers, for example, have different motherboards, onboard BIOS, memory types, cache types, video adapters, disk adapters, disk drives, and a plethora of other peripherals. So, how can you arbitrarily say that one computer outperforms another? The truth of the matter is that you can’t. What you can say is that one computer has a faster processor or faster video or whatnot. You can only compare the individual aspects of the computer. By testing how the various subcomponents of a system work together to accomplish a specific series of tasks, you can then say that the computer that finishes the tasks sooner is the overall winner and performs better than the other. This is the basis for the PC Magazine benchmark program WinBench.

Of course, WinBench is a benchmark program designed for end users to evaluate their desktop computers; it is not designed to evaluate the performance of a server. PC Magazine’s NetBench, on the other hand, is designed to benchmark a server by simulating actual client usage. But even this is only a rough estimate of a server’s capacity, and it doesn’t necessarily apply to what you are trying to accomplish. By this, I mean that you don’t really care what the benchmark programs have to say about your server. You’ve already purchased the server and just want it to do its job as efficiently as possible. And this is where the concept of performance begins to take shape.

Efficiency is a key concept in defining performance. If your server is processing data as efficiently as possible, it is also performing as well as it can. There is nothing you can do to improve the situation unless you change some characteristic of the server to improve its efficiency. I bet those last two statements have you thinking I’m crazy, but I’m not. You can improve the efficiency of a server by removing the bottleneck that is impairing its performance. A bottleneck is simply a choke point that limits your server’s ability to perform a given task.

If your server cannot send or receive network packets from your clients fast enough, your network adapter is probably the bottleneck. In that case, if you have a 10Base-T network card, you can replace it with a 100Base-TX network card and improve the server’s ability to transmit and receive data on the network. If your server cannot access data on the I/O subsystem fast enough to keep up with user demand, the I/O subsystem is the bottleneck. In this case, replacing your single disk drives with an array of disk drives can improve the server’s ability to access data and improve performance. Each time you remove a bottleneck, you improve your server’s efficiency. But each time you remove a bottleneck, you will expose another.

On a file server, the I/O subsystem is usually the bottleneck. However, if you improve the efficiency of the I/O subsystem, you’ll probably find that the network subsystem is now the bottleneck. Improve the network subsystem, and you’ll probably find that the processor is now the bottleneck. Add another processor or replace the current processor with a faster processor, and you’ll probably find that the I/O subsystem is, once again, the bottleneck. And it goes on and on and on—forever. Somewhere, you have to draw a line and say enough is enough. At that point, you have reached your maximum efficiency level.
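
To make the cycle concrete, here is a minimal sketch in Python. The subsystem names, utilization figures, and the find_bottleneck helper are all hypothetical; on a real server you would read these numbers from your monitoring tools rather than hard-coding them. The idea is simply that the most heavily loaded subsystem is your current bottleneck, and once you relieve it, another subsystem takes its place.

# Hypothetical illustration of the bottleneck cycle described above.
# The figures are invented for the example; gather real numbers from
# your server's monitoring tools.

# Percent of capacity each subsystem is using under the current load.
utilization = {
    "disk I/O": 95,    # saturated: the current bottleneck
    "network": 60,
    "processor": 40,
}

def find_bottleneck(stats):
    """Return the subsystem running closest to its capacity limit."""
    return max(stats, key=stats.get)

# First pass: the I/O subsystem is the choke point.
print("Current bottleneck:", find_bottleneck(utilization))

# Suppose we add a disk array and I/O utilization drops sharply.
utilization["disk I/O"] = 35

# Second pass: relieving one bottleneck exposes the next one.
print("New bottleneck:", find_bottleneck(utilization))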

So, where are we heading with this discussion? We’ve reached the bottom line in defining performance, and it can be summed up with the following three observations:

  Performance is relative to the task you want to accomplish.
  Server performance is relative to the efficiency of a specific subcomponent of your server.
  Improve the efficiency of one subcomponent, and you will expose the inefficiency of another subcomponent.

Defining performance is a difficult subject, but I hope I’ve made it a bit easier for you to understand. Measuring performance is much easier, and it is the subject of our next discussion.

