JS Benchmarks: Closing In

It’s time for an update on Firefox JS performance. This post will take a look at some benchmark scores, and then dive deeper into what those benchmarks actually measure.

Now that JaegerMonkey is available, we can revisit some familiar benchmarks. Contributors to the Mozilla JS engine have been making performance improvements throughout the Firefox 4 development cycle, and the progress has been pretty rapid. I ran a bunch of modern browsers on a Lenovo Thinkpad X201s running Windows 7. Here’s how Firefox has progressed on the SunSpider benchmark:

As you can see, Firefox is making rapid progress here. The chart below shows us relative to the competition. The gap here is much narrower than it used to be, and we have more improvements coming.

On Google’s V8 benchmark, Firefox’s score is improving even more dramatically.

Here’s our V8 score relative to other modern browsers:

So, that’s where things stand right now on a couple of benchmarks, but expect updates from us in the near future.

What are these tests measuring?

The answer to this question is difficult. Benchmarks like these are supposed to measure the execution speed of JavaScript, but they often end up measuring very unrealistic work loads. For example, the code below is the entirety of SunSpider’s bitwise-and.js test:

bitwiseAndValue = 4294967296;
for (var i = 0; i < 600000; i++)
    bitwiseAndValue = bitwiseAndValue & i;

I'm not sure measuring the speed of this loop is very useful, but Firefox is the world champion at running it.
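In fact, the loop barely measures the `&` operator at all. Because the bitwise operators convert their operands with ToInt32, the initial value 4294967296 (2^32) truncates to 0 on the very first iteration, and the loop computes `0 & i` from then on:

```javascript
// ToInt32(4294967296) is 0, because 2^32 wraps to 0 mod 2^32.
// After the first iteration the "work" is just 0 & i, forever.
let bitwiseAndValue = 4294967296;
for (let i = 0; i < 600000; i++)
    bitwiseAndValue = bitwiseAndValue & i;
// bitwiseAndValue is 0 after the loop
```

So a sufficiently clever engine could, in principle, constant-fold the whole test away.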

A bigger problem with many current benchmarks is that there's a temptation to cache results where possible. That can really pump up a score, but the likelihood of such a cache being useful in non-benchmark code is quite low. For instance, all engines now have a cache for eval strings (here's ours). This cache is occasionally useful in real world code: sites like digg.com and others were found to hit the cache. However, the cache results in a huge speedup on SunSpider, which overstates its importance.
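As an illustration only (this is not SpiderMonkey's actual implementation), an eval cache amounts to memoizing compilation on the exact source string, so repeated evals of the same text skip the parser:

```javascript
// Hypothetical sketch of an eval-string cache. Compiled code is keyed
// on the source text; new Function stands in for real compilation.
const evalCache = new Map();
function cachedEval(src) {
  let compiled = evalCache.get(src);
  if (compiled === undefined) {
    compiled = new Function("return (" + src + ");"); // compile once
    evalCache.set(src, compiled);
  }
  return compiled(); // subsequent calls with the same src skip parsing
}
```

A benchmark that evals the same string in a tight loop hits this cache on every iteration; real pages rarely re-eval identical strings that often.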

Yet another problem we've had is that many benchmarks don't actually do anything, so they are very prone to mistakes. Google's V8 benchmark includes a test called "splay" which performs some splay tree operations. However, since the test isn't drawn from an actual program, it has had several problems. First, we found that the test spent all of its time converting numbers to strings. Later, we found that the benchmark was inserting nodes into the splay tree in an unrealistic pattern. To their credit, the V8 team has been relatively quick to fix these issues as we discover them. However, the root cause of these problems is that the benchmark doesn't run a real program.

Mozilla is also guilty of writing bad tests. Our Dromaeo test suite is full of tiny loops and unrealistic workloads. The Dromaeo suite contained regex and string tests written in a way that made it easy to game some of them with a one-element cache. We had to fix that, but plenty of similar issues remain.

One last issue that can crop up is over-specialization for a specific test. While I was running the SunSpider tests above, I noticed that IE9 posted a score at least 10x faster than every other browser on SunSpider's math-cordic test. That would be an impressive result, but it doesn't seem to hold up in the presence of minor variations. I made two variations on the test: one with an extra "true;" statement (diff), and one with a "return;" statement (diff). You can run those two tests along with the original math-cordic.js file here.

All three tests should return approximately the same timing results, so a result like the one pictured above would indicate a problem of some sort.

We're excited about the speed improvements we've already made for Firefox 4, and even more excited about those yet to come. We hope that all of the improvements will speed up code that our users run. And we'll keep hammering on those benchmarks.

New to SpiderMonkey, in a couple different senses

Paul Biggar has joined Mozilla. He’s just written up a guide to SpiderMonkey for new contributors.

Please join me in welcoming Paul.

The Idea Guy

A blog post by Bob Sutton, Strategy Is For Amateurs, Logistics Are For Professionals, really helped me understand a situation in which I should communicate more clearly. Sometimes, the execution of an idea is so bad that you can’t really evaluate the idea itself. It doesn’t help to discuss the pros and cons of the idea.

Even more commonly, someone will shop around an idea and wonder why it received a cool reception. For the proponent, it’s easy to dismiss the reaction with labels like Fear Of Change, Developers Don’t Understand Users, UX People Don’t Understand Developers, Whatever, etc. In reality, it’s probably just that the proponent has done less than 1% of the work necessary, and the other people in the room can smell The Idea Guy. Needless to say, I know what it’s like to be on both sides of this one. In the future, I think I’ll be better at recognizing these conversations.

Mozilla’s New JavaScript Value Representation

Here at Mozilla, we have many monkeys.

One such effort, JaegerMonkey, is focused on revamping the baseline performance of our JS Engine. That effort is going really well. On the SunSpider benchmark, JaegerMonkey is starting to pull away from the Mozilla trunk’s JS Engine. Both are faster than the engine that ships in Firefox 3.6. JaegerMonkey is not a total rewrite, but it does change some fundamental parts of the engine. If you’re an extension author who uses the JSAPI directly, or you embed Mozilla’s JS Engine in other software, there are some changes you’ll need to know about. Our new representation of JavaScript Values (aka jsvals) is the first big change. It has just landed on mozilla-central. The patch was a ton of work, and most of the credit goes to Mozilla engineer Luke Wagner.

So, what is a jsval? It’s the C/C++ type that corresponds to a value in a JavaScript program. Here’s a snippet of JavaScript that assigns some values to four variables:

var foo = {dana: "zuul"};
var bar = "hi";
var baz = 37;
var qux = 3.1415;

Developers can use the JSAPI to manipulate these values from C and C++. Since JavaScript is dynamically typed, the types of those values can change at runtime. For example, the type of the value of ‘bar’ in the code above could change from a string to a number. C++ code sometimes needs to be able to tell which type a value has at a given moment, so there needs to be a clever way to pack that information into the jsval type.

Below, I’ll explain how the new value representation works on 32-bit systems, using information cribbed from a presentation by Mozilla engineer David Anderson. We adapted this layout from WebKit and LuaJIT, with some modifications. On 64-bit systems, our design is different from theirs. The basic idea is pretty old; it can be found in this 1993 survey paper.

The Old Way

The old jsval representation fit in a 32-bit value, using the 3 lowest bits to tag the value as a particular type. These bits were called type tags.


A jsval with an object value would look like this:

var foo = {dana: "zuul"} @ 0x86753090

C++ code can inspect the three tag bits at the end. By observing that all three are 0, it can determine that the value is an object, and should be interpreted as a pointer to a JSObject at the address 0x86753090.


Strings worked similarly:

This time, one of the tag bits is set to 1, so C++ code knows that this value is a pointer to a string. Once the type tag was determined, the implementation would mask off the tag bits to recover the true pointer.

var bar = "hi" @ 0x20506638

The masked value contains the correct pointer to a JS String at 0x20506638.


Integers were stored in the value itself:

The implementation examined only the least significant bit; if it was 1, the value was an integer, and a right shift recovered the actual integer value.

var baz = 37; // 0x25 in hex

The mechanics of this scheme mean that integers only have 31 bits of space to work with, rather than the usual 32. Floating point numbers don’t fit at all. For example, the float 3.1415 looks like this in memory: 400921cac083126f. Too many bits.
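The old integer tagging can be sketched in JavaScript itself (the real code was C macros, so this is just an illustration, with the function names invented here):

```javascript
// Sketch of the old 31-bit integer tagging, not SpiderMonkey's actual
// macros. The low bit is the "int" tag; the payload lives in the upper
// 31 bits, which is why only 31 bits of integer range were available.
function intToJsval(n) {
  return ((n << 1) | 1) | 0;   // shift left, set the int tag bit
}
function jsvalIsInt(v) {
  return (v & 1) === 1;        // check the low tag bit
}
function jsvalToInt(v) {
  return v >> 1;               // arithmetic shift recovers the signed value
}
```

For example, `intToJsval(37)` yields 75 (0x4B): the payload 0x25 shifted up one bit, with the tag bit set.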

When an integer got too big to fit in 31 bits, or a floating point number was encountered, the value would get converted to a double, and stored on the heap.

Once again, some bit masking was performed to determine the actual address of the memory:

var qux = 3.1415 @ 0xA0B0CCD0 => 400921cac083126f

We have a 32-bit value that contains a pointer to the real value of the float, which is 64 bits wide. This arrangement is bad for at least three reasons. Firstly, we have to allocate to create a number. Secondly, we have to clean up that number later (during GC). Thirdly, it hurts locality, because we have to fetch float values from arbitrary heap locations to do even simple calculations.

The New Way

We call them Fat Values. They’re 64 bits wide.

For Objects, Strings, and Integers, we use the first 32 bits as a type tag. The second 32 bits contain the payload.

var foo = {dana: "zuul"} @ 0x86753090

var bar = "hi" @ 0x20506638

var baz = 37
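A rough model of this tag-plus-payload layout can be built with typed arrays over a single 8-byte buffer (the tag constant here is invented for illustration; SpiderMonkey's real tag values differ):

```javascript
// Rough model of the 64-bit layout: high word is the type tag, low
// word is the payload. Assumes a little-endian machine, where
// words[1] is the high word of the double-sized slot.
const TAG_INT32 = 0xFFFF0001;  // made-up tag value for this sketch
const box = new ArrayBuffer(8);
const words = new Uint32Array(box);

function boxInt32(n) {
  words[0] = n >>> 0;   // payload: the full 32-bit integer
  words[1] = TAG_INT32; // tag: marks this value as an int32
}
function unboxInt32() {
  return words[0] | 0;  // recover the signed 32-bit value
}
```

Because the payload word is a full 32 bits, the entire int32 range fits, unlike the old 31-bit scheme.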

The payoff is that we can fit the full range of 32-bit integers in integer-tagged values, and floating point numbers fit right in the jsval:

var qux = 3.1415; // (400921cac083126f)

We can distinguish type tags from floating point numbers by using a quirk of IEEE-754 double precision numbers.

The first bit is the sign bit (purple), and the next eleven (yellow) are all exponent bits. If all of the exponent bits are 1s, then the number is a NaN, unless all of the remaining bits (the blue ones) are 0s. If all of the blue bits in this diagram were 0, the value would be either positive or negative infinity. We distinguish the values we’re using for type tags from other NaNs by marking the first 16 bits as 1s. In practice, all hardware and standard libraries produce a single canonical NaN value, so we’re free to use all of the other values for our own purposes. This technique is called NaN boxing.
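You can poke at these bit patterns from JavaScript itself. This sketch assumes a little-endian machine, where index 1 of the Uint32Array view holds the high word:

```javascript
// Inspect IEEE-754 bit patterns with Float64Array/Uint32Array views
// sharing one buffer (little-endian assumed: u32[1] is the high word).
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf);
const u32 = new Uint32Array(buf);

f64[0] = 3.1415;
// high word 0x400921ca, low word 0xc083126f => 400921cac083126f

f64[0] = NaN;
// For any NaN, the exponent bits (bits 62-52) are all 1s and the
// mantissa is nonzero; the tag values above live in this NaN space.
const exponentAllOnes = (u32[1] & 0x7ff00000) === 0x7ff00000;
```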

Changes for JSAPI users

Here is the short version, courtesy of Luke Wagner.

  • jsval is no longer word-sized
  • jsval can hold a full int32
  • doubles are stored in the jsval; JSVAL_TO_DOUBLE returns double
  • jsval and jsid no longer share the same representation
  • JSClass method signatures have been modified to take jsids for id
    arguments and to pass jsval arguments as const jsval*.

You can read up in more detail, and provide feedback, by checking out Luke’s mozilla.dev.tech.js-engine post on the matter.

4th Anniversary

I was taking planes, trains, and automobiles back from Whistler on July 10th. That day marked the start of my 5th year as a Mozilla employee.

In a 2006 email thread titled “signed offer”, I asked Schrep if there was anything I needed to do as a first task, other than wake up and start coding as a Mozilla employee. He replied “we need your help with FF2 b1.”


If you’re interested in working on interesting new advances to HTTP and beyond, please send mail to sayrer@mozilla.com or shaver@mozilla.com.

Offered Without Comment

David Recordon: "I’m happy with the term ‘HTML5’, just want someone to define what is actually meant by it at a given time."

Check out these HTML5 demos





Change I Believe In, Unfortunately

Jamie Zawinski: "No, the GMF isn’t that sexy: it’s when you wake up and realize that some multinational has fucked your economy and local environment so much that suddenly you live in Somalia and are trying to grow your own backyard Superfund Lemons."

With failure like that, who needs success?

Joe Hewitt: "If HTML/CSS/JS had succeeded, there would be no need for native apps. OTOH, HTTP and JSON succeeded because native apps still use them."

Ah yes, HTML, CSS, and JS are abject failures. They also happen to be the only technologies that have gotten Code-on-demand remotely right, while also being popular. Much as it pains me to say it, anyone arguing that the way of the future is some full-privilege C API running your webcam while using an accelerometer API just doesn’t understand the Web very well. It sounds like something you would hear from a delusional content company.

The Web will always come from the bottom, and destroy markets and margins. It will never lead the way in fancy APIs. In fact, the Web has things called plugins, where people can experiment with fancy APIs. The successful ones get assimilated.