gcocco software Home!
gcocco software, inc
Benchmark: SPECweb®99

FAQ:
  1. What is SPECweb99 anyway?
  2. What does SPECweb99 measure?
  3. Why is SPECweb99 important?
  4. Why did gcocco software publish two SPECweb99 runs on the identical
    Sun hardware configuration?
  5. Why is gcocco software's measurement 77% better than IBM's
    own measurement?
  6. How important is the web server for a measurement?
  7. How important is caching to the SPECweb99 benchmark?

Machines Tested:

A.   IBM RS/6000 7044-170

  1. SPECweb99 Results generated with the Zeus Web Server
  2. Tricks/Tips/Cautions
  3. Net/Net - The Performance you should expect!

B.   Sun Microsystems Enterprise 420R

  1. SPECweb99 Results generated with the Zeus Web Server
  2. Non-Repeatability of SPECweb99 Results with Solaris 8
  3. Per Processor Performance Comparison
  4. SPECweb99 Results generated with the iPlanet Web Server
  5. Tricks/Tips/Cautions
  6. Net/Net - The Performance you should expect!



Questions and Answers:
  1. What is SPECweb99 anyway?

    SPECweb99 is a client-server benchmark whose object is to test web server and hardware performance. A set of client "drivers" simulates a mix of mostly static web requests to the system being measured. The "web request mix" being measured is probably more "static" than is typical of current web sites.


    Back to FAQ

  2. What does SPECweb99 measure?

    The key metric for SPECweb99 is simultaneous connections (conns): how many simultaneous conforming web connections (of the specific SPECweb99 "mix") are sustained over the length of the benchmark. The "synthetic web request mix" is about 70% static GET, 25% dynamic GET and 5% dynamic POST operations. File sizes retrieved from the web server also vary, from a trivial 100 bytes to 1 million bytes. The file space used for the "served documents" also grows as the size of the benchmark grows. Efforts are even made, through the use of special random number distributions, to simulate a localization of accesses to a subset of the directories. This "localization of access" is a natural effect of some things being more interesting than others! (A small sketch of the request mix appears at the end of this answer.)

    Components stressed for this benchmark are the network adapters (usually multiple copies of Gigabit Ethernet Adapters), CPU and Memory. Memory size should be large enough to cache the entire fileset being served or I/O becomes a bottleneck in the measurement. The CPU can be a bottleneck if the web server is ill-tuned or the CPUs are slow. Network adapters can quickly become a latency problem if too many requests are driven through a single adapter. Note the number of adapters that high-end measurements use.
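
    As a rough illustration of the request mix just described, here is a minimal Python sketch. The 70/25/5 weights and the file-size range come from the text above; the locality distribution is simplified to a uniform choice, so treat all constants as illustrative rather than as the official SPECweb99 distributions.

      import random

      # Request mix from the SPECweb99 description above (illustrative only).
      REQUEST_TYPES = ["static_get", "dynamic_get", "dynamic_post"]
      MIX_WEIGHTS = [0.70, 0.25, 0.05]

      # File-size ranges spanning the ~100 byte to ~1 MB span mentioned above.
      SIZE_CLASSES = [(100, 900), (1_000, 9_000), (10_000, 90_000), (100_000, 900_000)]

      def next_request():
          """Pick one simulated request (type, size) per the mix."""
          rtype = random.choices(REQUEST_TYPES, weights=MIX_WEIGHTS, k=1)[0]
          lo, hi = random.choice(SIZE_CLASSES)
          return rtype, random.randint(lo, hi)

      if __name__ == "__main__":
          sample = [next_request() for _ in range(10_000)]
          static = sum(1 for rtype, _ in sample if rtype == "static_get")
          print(f"static GET fraction: {static / len(sample):.2%}")  # ~70%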


    Back to FAQ

  3. Why is SPECweb99 important?

    If web serving is important to you and you are doing a fair amount of static web page serving, the techniques used for the SPECweb99 submissions should be helpful. Generally, surprisingly few parameters make a large difference in the final performance, and picking up some of these tips should help you.

    We would caution that some, if not all, vendors use caching technologies of various kinds to speed up their runs. If you are analyzing runs, be aware that caching (SWS from Microsoft, SNCA from Sun, etc.) boosts performance by about 30 to 50%. Also, the caching schemes don't always work correctly! Some caching techniques have been shown to misbehave, yet their results may still be represented in SPEC's online results.

    So the SPECweb99 benchmark is very useful, but as with any benchmark, you must be sure you understand what you are comparing.


    Back to FAQ

  4. Why did gcocco software publish two SPECweb99 runs on the identical Sun hardware configuration?

    See FAQ #3 for some background on comparing results. gcocco software publishes numbers on identical hardware with varying parameters to assess various tuning options. In this specific case, we see the impact of adding SNCA (Sun's Network Cache Architecture) on the performance of the E420R.


    Back to FAQ

  5. Why is gcocco software's measurement 77% better than IBM's own measurement?

    Up until this measurement, IBM was drunk and disorderly on its low-end UNIX measurements. Big Blue was focusing on Big Machines! They did well with this focus, since the Big Machines are performing well. However, after this "wake-up call", the low-end measurements have made "real" improvements. Good job, Bill!


    Back to FAQ

  6. How important is the web server for a measurement?

    In one word: "critical". It is even more critical on large multi-processor machines. The basic web server design, along with the caching architectures it supports, will "make or break" a top measurement.


    Back to FAQ

  7. How important is caching to the SPECweb99 benchmark?

    Caching, both within the web server and at the "network level", is key to successfully handling this benchmark. Large memory enables caching the entire fileset being "served" by the web server. A "smart" network layer (SWS, SNCA, etc.) allows immediate turnaround on static pages. Intelligent caching can be worth a 30-50% improvement in the final "connections" result! (A back-of-the-envelope model of this gain follows.)
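
    To see where a 30-50% gain can come from, here is a back-of-the-envelope Python model. It assumes, purely for illustration, that a network-level cache cuts the per-request cost of the static portion of the mix by some factor; the 70% static fraction comes from the mix in FAQ #2, while the speedup factors below are guesses, not measured values.

      def capacity_gain(static_fraction, speedup):
          """Amdahl-style capacity gain when only static requests get cheaper.

          static_fraction: share of requests the cache can serve (from the mix).
          speedup: assumed cost-reduction factor for a cached static hit.
          """
          new_cost = (1 - static_fraction) + static_fraction / speedup
          return 1 / new_cost - 1  # fractional capacity improvement

      # ~70% static GETs per the SPECweb99 mix; speedup factors are assumptions.
      for k in (1.5, 1.8, 2.0):
          print(f"{k}x cheaper static hits -> {capacity_gain(0.70, k):.0%} more conns")
      # Prints roughly 30%, 45%, 54% - in line with the 30-50% quoted above.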


    Back to FAQ

A. Machine Tested: IBM RS/6000 7044-170

System RS/6000 44P Model 170
CPU (1) - POWER3-II
Clock Speed 450 MHz
Memory 2 GB

All Results quoted or displayed in this section have been
reviewed and accepted
by the OSG Web99 Subcommittee of the
Standard Performance Evaluation Corporation (SPEC®).

A hyperlink is provided on each result to the
"SPEC full disclosure report", which is located on the
OSG Web99 web site.


Back to Topics


a. SPECweb99 Results generated with the Zeus Web Server

Run   Tuning    CPUs  MHz  Zeus     Test Date  Result  Report
ibm1  opt       1     400  3.1.8    Dec-1999   460     SPEC full disclosure
gsi2  high opt  1     450  3.3.8.4  Aug-2001   816     SPEC full disclosure

Table 1. Two runs on nearly identical hardware with vastly different results.

There are two major differences between the IBM-generated result of 460 and the gcocco software-generated result of 816. The clock speed was improved from 400 to 450 MHz (a 13% improvement), and a newer release of Zeus was used in the later (faster) measurement. With some additional I/O tuning, the total improvement was 77% over IBM's best result. Since the 816 measurement was done, Zeus has continued to fine-tune its web server, and one should see about another 10% improvement. We'll see if IBM takes the time to come back and refresh this measurement.


Back to Topics


b. Tricks/Tips/Cautions:
The best "trick" learned here is that caching is aided by turning the large "readonly" fileset used by the benchmark into a "readonly" file system. This effectively "removes" work from what needs to be done and therefore allows more "benchmark" work to be done. This "trick" is really a "technique" which can be used by folks in the real world on their web servers.


Back to Topics


c. Net/Net - The performance you should Expect!
Having worked for IBM in a past life doing measurements, we are not surprised that the IBM results on the SPEC site (as well as elsewhere) are typically conservative.

We have scrutinized all of the SPECweb measurements on the SPEC website and found that IBM's are probably the most conservative on the site. At least two IBM results were much lower than expected: one measurement where IBM published 460 and gcocco software published 816, and a second where IBM published 1359 on a 7044-270 (a 4-way system) that went up to 3497 in a re-measurement by IBM. Granted, the processor speed went from 375 MHz to 450 MHz, the memory doubled, and the version of Zeus changed, but the improvement was about 150% for a clock-speed improvement of 20%. Quite an improvement!

The net of this analysis is that IBM's numbers are "solid" as minimums. You don't have to worry that tricks are being played or that options IBM doesn't recommend are in use. As in the past, when IBM releases a measurement it is a real number, perhaps not always the "best" number, but a solid indicator of performance that you should easily be able to achieve.


Back to Topics

B. Machine Tested: Sun Microsystems Enterprise 420R

System Sun Enterprise 420R
CPU (4) - UltraSPARC II
Clock Speed 450 MHz
Memory 4 GB

All Results quoted or displayed in this section have been
measured according to rules defined
by the OSG Web99 Subcommittee of the
Standard Performance Evaluation Corporation (SPEC®).

A hyperlink is provided on each result to the
"SPEC full disclosure report".
Reviewed and accepted reports are located on the
OSG Web99 web site.


Back to Topics


a. SPECweb99 Results generated with the Zeus Web Server

Run   Tuning    Solaris 8  NCA  CPUs  Zeus     Accepted            Result  Report
sun1  high opt  4/01+      no   4     3.3.8.4  13-Nov-2001         1150    SPEC full disclosure
sun2  high opt  4/01+      yes  4     3.3.8.4  13-Nov-2001         1400    SPEC full disclosure
sun3  high opt  4/01+      yes  4     3.3.8.4  failed - integrity  1500    full disclosure
sun4  high opt  4/01+      yes  4     3.3.8.4  failed - integrity  1750    full disclosure
sun5  high opt  4/01+      yes  4     3.3.8.4  failed - both       1800    full disclosure

Table 2. Comparison of runs with and without Sun's Network Cache function activated.

The first two runs above are the same except for the use of SNCA (Sun's Network Cache). The improvement due to SNCA was about 22%, far less than what was expected. Further runs, which took the benchmark above 1400 simultaneous connections, showed a data integrity problem reported by the benchmark's runtime checking code. All runs from 1450 to 1750 (which otherwise pass the performance criteria of the benchmark) fail due to multiple-"POST" problems: the NCA occasionally passes multiple "POST" requests to the web server. This is a major integrity problem. Think of a "POST" as a withdrawal from your on-line bank account. How would you like it if, occasionally, multiple withdrawals were sent for one request?

At 1800 connections, the run fails both for multiple "POST"s and for performance. The net of this chart is that SNCA is not, in our opinion, production-ready. There are also operational difficulties when using SNCA, which caused hangs at reboot. Overall, you should "downgrade" Sun's SNCA-based SPECweb99 results by about [(1750-1150)/1150 = .52] 52% to get a non-SNCA number, which should be closer to what you are likely to achieve. (A sketch of a server-side defense against duplicate "POST"s follows.)
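
The duplicate-POST hazard is general, not SPECweb99-specific. A common application-level defense (not something the benchmark or SNCA provides) is to tag each POST with a client-generated request ID and have the server drop repeats. A minimal Python sketch, with all names hypothetical:

  # Minimal duplicate-POST guard keyed on a client-supplied request ID.
  # All names here are hypothetical; a real deployment would bound the
  # seen-ID set (e.g. with an expiring cache) and persist it across restarts.
  seen_request_ids = set()

  def handle_post(request_id, apply_withdrawal):
      if request_id in seen_request_ids:
          return "duplicate ignored"      # the repeated "withdrawal" is dropped
      seen_request_ids.add(request_id)
      apply_withdrawal()                  # perform the side effect exactly once
      return "ok"

  # Simulate the cache delivering the same POST twice:
  withdrawals = []
  print(handle_post("req-123", lambda: withdrawals.append(100)))  # ok
  print(handle_post("req-123", lambda: withdrawals.append(100)))  # duplicate ignored
  print("withdrawals applied:", len(withdrawals))                 # 1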

  1. Tuning: definitions.

    All runs for this section used all the tricks and techniques that Sun uses in its runs. This is characterized as "high opt", and is probably more highly tuned than you can expect in production. For details on what was done, click on the "SPEC full disclosure" link and look at the tuning sections of the disclosure. The runs above should vary only in the use of the Network Cache and in the number of connections attempted.

  2. Solaris 8: versions

    Sun Solaris version 8, release 04/01, plus selected patches was the base operating system used. The exact set of patches is defined in the "SPEC full disclosure" and must be applied for full performance to be achieved. The patches listed in the disclosure are the same ones used by Sun for a similar measurement (Sun Fire 280R), and are critical to achieving the level of performance shown here.

  3. NCA: Network Cache Architecture

    Sometimes referred to as SNCA (Sun's Network Cache Architecture), this mechanism inserts itself into the TCP/IP stack and attempts to cache and respond to repeated "static" requests with very low overhead. Because this caching mechanism sits low in the TCP/IP stack, it gains performance by not having the entire TCP/IP stack and web server get involved in answering the request. Dynamic requests, by their very nature, do not take advantage of this type of caching. (A simplified sketch of the mechanism appears after this list.)

  4. CPUs: Number of CPUs.

  5. Zeus: Version of Zeus Web Server tested.

    This is the same level of Zeus tested by Sun on their Sun Fire 280R (2136 conns) and Sun Netra 20 (2156 conns).

  6. Accepted: SPECweb99 Committee acceptance date

    This is the date of the SPECweb99 review meeting for this benchmark run. All results quoted or displayed in this section have been measured according to the rules of the OSG SPECweb Subcommittee of the Standard Performance Evaluation Corporation (SPEC®), the governing body for this benchmark. Measurements "sun1" and "sun2" have been reviewed and accepted by that subcommittee. Measurements "sun3" and "sun4" failed the data integrity component of the benchmark but performed within the performance criteria. Measurement "sun5" fails both the data integrity and the performance criteria, which makes 1750 a firm upper bound on the performance of this configuration (should the caching work properly).

  7. Result: simultaneous connections (conns)

    The only approved metric for comparing SPECweb99 results from vendor to vendor is simultaneous connections!
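
To make the NCA idea concrete, here is a heavily simplified user-space analogue in Python: a front end that answers repeated static GETs from an in-memory cache and hands everything else to the real web server. This illustrates the caching concept only; the real SNCA lives inside the Solaris kernel TCP/IP stack, and its interfaces are nothing like this.

  # Toy user-space analogue of a network-level static cache (illustration only).
  static_cache = {}

  def backend(method, path):
      """Stand-in for the full web server (hypothetical)."""
      return f"{method} {path} served by the web server"

  def front_end(method, path):
      # Dynamic requests (including all POSTs) are never cached.
      if method != "GET" or path.startswith("/cgi-bin/"):
          return backend(method, path)
      # Static GET: answer from the cache when possible, skipping the
      # full stack/web server path that a cache miss would take.
      if path not in static_cache:
          static_cache[path] = backend(method, path)  # first hit fills the cache
      return static_cache[path]

  print(front_end("GET", "/static/page1.html"))   # miss: goes to the web server
  print(front_end("GET", "/static/page1.html"))   # hit: served from the cache
  print(front_end("POST", "/cgi-bin/post"))       # dynamic: never cached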


Back to Topics


b. Non-Repeatability of Results with Solaris 8

Because we had data integrity problems with Solaris 8 04/01 plus SNCA, we decided to install the next release of Solaris (07/01), the same release used in Sun's Netra 20 measurements. We were hoping that the 11 performance fixes needed to run SPECweb99 measurements on Solaris 8 (04/01) would be integrated into the 07/01 release and that our performance testing would go smoothly.

Actually, things got much worse. Performance suffered dramatically: the previous base performance of 1150 connections dropped to 650, a decline of about 43%! Measurement "sun07a" below shows a passing run with the new release, Solaris 8 07/01. Run "sun07b" shows that the addition of 25 more simultaneous connections causes the run to fail. Further investigation showed that only 6 of the 11 fixes needed to run successfully on Solaris release 04/01 were actually shipped in this release. We were somewhat stunned at the degradation in performance, but were relieved to find a potential "cause".

Applying the 5 missing fixes one at a time and repeating the measurements showed no measurable change. Measurement "sun07c" is identical to "sun07a" except for the addition of the 5 missing fixes; the same relationship holds between "sun07d" and "sun07b". (A sketch of this fix-at-a-time procedure appears after Table 3.)

The missing fixes were:

  • 111293-04 libdevinfo
  • 108528-10 kernel update
  • 109472-07 TCP
  • 109234-06 NCA
  • 109279-15 ndd fix

The degradation in the base system is real. Given that degradation, we did not bother doing runs with SNCA turned on! What is very strange to us is that Sun claims this level of Solaris for its Sun Netra 20 (2156 conns) run and discloses no fixes. Our only conclusion is that either fixes weren't disclosed, or the measurements were done on a "pre-release" version of 07/01 and a serious performance degradation crept into what was actually released.

Run     Tuning    Solaris 8  NCA  CPUs  Zeus     Criteria  Goal  Result  Report
sun07a  high opt  7/01       no   4     3.3.8.4  pass      650   650     full disclosure
sun07b  high opt  7/01       no   4     3.3.8.4  fail      675   494     full disclosure
sun07c  high opt  7/01+      no   4     3.3.8.4  pass      650   650     full disclosure
sun07d  high opt  7/01+      no   4     3.3.8.4  fail      675   545     full disclosure

Table 3. Comparison of runs with new release of Solaris 8 and no other changes.
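
For completeness, here is a sketch of the fix-at-a-time procedure, assuming Solaris's real patchadd command plus a hypothetical run_benchmark() helper that returns the sustained connection count; everything except patchadd is made up for illustration.

  import subprocess

  # The five missing fixes listed above.
  PATCHES = ["111293-04", "108528-10", "109472-07", "109234-06", "109279-15"]

  def run_benchmark():
      """Hypothetical stand-in for a full SPECweb99 run; returns conns."""
      return 650  # placeholder: wire up the real benchmark driver here

  baseline = run_benchmark()
  for patch in PATCHES:
      # patchadd is the standard Solaris patch-installation command; the
      # spool directory below is a conventional but assumed location.
      subprocess.run(["patchadd", f"/var/spool/patch/{patch}"], check=True)
      result = run_benchmark()
      print(f"{patch}: {baseline} -> {result} conns")
      baseline = result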


Back to Topics


c. Per Processor Performance Comparison

A different way of looking at SPECweb99 performance is to examine the per-CPU performance of multi-CPU runs. This is an area where Sun claims "leadership", but it falls short of actually producing those "leadership" results. Let's look at the (very few) numbers Sun has chosen to publish, in the following table.

The performance per processor drops from 1068 to 728 (about 32%) when going from a 2-way to a 12-way system. It should be noted that part of the problem could be a failure of iPlanet to scale well. With Sun ever more reliant on 64- and 128-way systems to compete, users must watch the effects of scaling closely. (A small per-CPU computation follows the table.)

Run  Machine        UltraSPARC  MHz  CPUs  Server   Result  Result/CPU  Report
s1   Sun Fire 4800  III         750  12    iPlanet  8738    728         SPEC full disclosure
s2   Sun Fire 280R  III         750  2     Zeus     2136    1068        SPEC full disclosure
s3   Sun Fire 280R  III Cu      900  2     Zeus     2503    1252        SPEC full disclosure
s4   Sun Netra 20   III         750  2     Zeus     2156    1078        SPEC full disclosure

Table 4. Per Processor comparison of SPECweb99 results published by Sun.
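
The Result/CPU column is just Result divided by CPUs. A tiny Python check, using only the numbers from Table 4 above, also computes the scaling efficiency of the 12-way run relative to the 2-way Sun Fire 280R:

  # Numbers from Table 4 above.
  runs = {"s1": (8738, 12), "s2": (2136, 2), "s3": (2503, 2), "s4": (2156, 2)}

  for name, (conns, cpus) in runs.items():
      print(f"{name}: {conns / cpus:.0f} conns/CPU")   # s1 ~728, s2 1068, ...

  # Scaling efficiency of the 12-way run vs. the 2-way Sun Fire 280R:
  eff = (8738 / 12) / (2136 / 2)
  print(f"per-CPU efficiency at 12-way: {eff:.0%}")    # ~68%, i.e. a ~32% drop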


Back to Topics


d. SPECweb99 Results generated with the iPlanet Web Server

We had hoped to give a comparison of Zeus to iPlanet on equal hardware platforms. You will notice that Sun goes to great pains to release SPECweb99 runs that do not allow Zeus and iPlanet to be compared. We did a fair number of runs before noticing the very last line of the licensing agreement for the iPlanet code that we had purchased. That last term of the agreement is as follows:

4. Additional Restrictions: You may not publish or provide the results of any benchmark or comparison tests run on the Software to any third party without the prior written consent of Sun.

From: iPlanet/Web Server Enterprise Edition 6.0 License/Rev 2.0 19Oct01/KKP

We can tell you without hesitation (or without violating the licensing terms) that the above clause was put in for a very good reason! If we said more, we would be in violation of the agreement. You should do your own tests and make up your own mind. Fortunately, that is allowed by the terms of the agreement!


Back to Topics


e. Tricks/Tips/Cautions:

Our experience has shown that Sun has difficulty with the management and integration of fixes into its mainline operating system code. Other research has corroborated our results and shows that users must be aware of exactly what is needed to make their systems run "correctly" and/or "fast". You have been warned.


Back to Topics


f. Net/Net - The performance you should Expect!

Our recommendation is to run a custom benchmark if you are really concerned. With Sun, we hesitate to give a trend other than "your results will certainly be lower than what Sun predicts". In some cases, as happened to us, they may be significantly lower!


Back to Topics

gcocco home
webmaster at gcocco dot com
Copyright © 1998 - 2006 gcocco software, inc. All Rights Reserved.
Page generated on: Saturday 30 September 2006 at 12:13:21 PM
URL: http://www.gcocco.com