30

I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.

I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)

Are there any examples of this sort of thing being done?

What would be the best ways to measure this - accounting for things like temporary slowdowns on busy CPUs?

Notes:

  • I'm not interested in browser/OS detection.
  • At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
  • Things that might be interesting to measure:
    • Base JavaScript
    • DOM manipulation
    • DOM/CSS rendering
  • I'd like to do this in a way that affects the page's render-speed as little as possible.

BTW: In order not to confuse/irritate users with inconsistent behavior - this would, of course, require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.

[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]

Már Örlygsson

7 Answers

13

Not to be a killjoy here, but this is not a feat that is currently possible in any meaningful way in my opinion.

There are several reasons for this, the main ones being:

  1. Any measurement, if it is to have meaning, has to test the maximum potential of the browser/CPU, which you cannot do while maintaining any kind of reasonable user experience.

  2. Even if you could, it would be a meaningless snapshot, since you have no idea what load the CPU is under from applications other than the browser while your test is running, or whether that situation will continue while the user is visiting your website.

  3. Even if you could do that, every browser has its own strengths and weaknesses, which means you'd have to test every DOM manipulation function to know how fast the browser would complete it. There is no "general" or "average" that makes sense here in my experience, and even if there were, the speed with which DOM manipulation commands execute depends on what is currently in the DOM, which changes as you manipulate it.

The best you can do is to either

  1. Let your users decide what they want, and enable them to easily change that decision if they regret it

    or better yet

  2. Choose to give them something that you can be reasonably sure the greater part of your target audience will be able to enjoy.

Slightly off topic, but following this train of thought: if your users are not tech leaders in their social circles (like most users here are, but most people in the world are not), don't give them too much choice, i.e. any choice that is not absolutely necessary - they don't want it, and they won't understand the technical consequences of their decision until it is too late.

Martin Jespersen
  • You're not a killjoy at all. But, let's look at this differently: how/when can we detect the edge-cases (very slow/fast browser) to "safely" make the choice for the users - i.e. when asking them would be bothersome or make us look stupid? – Már Örlygsson Jan 19 '11 at 13:13
  • So far I have not come up with a good solution to this problem myself, and I have tried to find one for the past 5 years. The way we do it here, which is crap, but the best I've got, is to test everything we do on a slow machine that runs IE7 (which is the slowest browser we support), and if it doesn't run smoothly it gets optimized. Then we use feature detection for the progressive enhancement - if the browser supports the feature we use it, but again, we test everything we do on slow machines in all browsers we support and we do extensive optimisation. – Martin Jespersen Jan 19 '11 at 13:19
  • I'm also considering low-powered mobile devices. Playing around with extreme progressive enhancement, etc. – Már Örlygsson Jan 19 '11 at 13:22
  • Sounds like a cool project, I wish I could help you more :) – Martin Jespersen Jan 19 '11 at 13:24
  • I'm not sure it will fly at all. It's mostly a thought experiment yet. – Már Örlygsson Jan 19 '11 at 13:34
  • One idea I have played around with, but which is academic since it will only work in HTML5-capable browsers, is to run some sort of benchmark in a new Worker(); alongside loading the page - this would be totally behind the scenes, but it would still run into the problems I already explained above. – Martin Jespersen Jan 19 '11 at 13:36
8

A different approach, one that does not need an explicit benchmark, would be to progressively enable features.

You could apply features in prioritized order, and after each one, drop the rest if a certain amount of time has passed.

By ensuring that the most expensive features come last, you would present the user with a selection of features roughly appropriate to how speedy the browser is.
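
A minimal sketch of that pattern in plain JavaScript (the feature functions and the 50 ms budget are placeholders for illustration, not anything from this answer):

// Hypothetical feature initialisers, listed in priority order (cheapest/most important first).
function enableBasicWidgets() { /* ... */ }
function enableAnimations()   { /* ... */ }
function enableParallax()     { /* ... */ }

var features = [enableBasicWidgets, enableAnimations, enableParallax];
var budget = 50;                    // total ms allowed for enhancements (arbitrary)
var start = Date.now();

for (var i = 0; i < features.length; i++) {
    features[i]();                  // apply the next feature
    if (Date.now() - start > budget) {
        break;                      // running slow - drop the remaining, more expensive features
    }
}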

Guðmundur H
2

You could try timing some basic operations - have a look at Steve Souders' Episodes and Yahoo's boomerang for good ways of timing things browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance / a rewarding user experience.
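
As a rough illustration of timing one such basic operation browser-side (the off-screen DOM workload and the 20 ms threshold are assumptions for this sketch, not something from Episodes or boomerang):

// Time a small, throw-away DOM manipulation and use the result as a crude signal.
var start = Date.now();
var fragment = document.createDocumentFragment();
for (var i = 0; i < 200; i++) {
    var el = document.createElement('div');
    el.appendChild(document.createTextNode('test ' + i));
    fragment.appendChild(el);
}
var elapsed = Date.now() - start;   // the fragment is never attached, so nothing is rendered
var eyeCandyOk = elapsed < 20;      // mapping elapsed time to "acceptable" is the hard part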

If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?

symcbean
2

Take a look at some of Google's (copyrighted!) benchmarks for V8:

I chose a couple of the simpler ones to give an idea of similar benchmarks you could create yourself to test feature sets. As long as you keep the run-time of your tests - from the time logged at the start to the time logged at completion - under 100 ms on the slowest systems (a budget these Google tests vastly exceed), you should get the information you need without hurting the user experience. While the Google benchmarks care about fine-grained differences between the faster systems, you don't. All you need to know is which systems take longer than XX ms to complete.

Things you could test are regular expression operations (similar to the above), string concatenation, page scrolling, anything that causes a browser repaint or reflow, etc.
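
A hedged sketch of such a test (the workload sizes and the 100 ms cut-off are guesses to be calibrated, not values taken from the V8 suite):

// Fixed, small workload: fast systems finish in a few ms, slow ones take noticeably longer.
function quickBenchmark() {
    var start = Date.now();
    var s = '';
    for (var i = 0; i < 2000; i++) {
        s += 'item' + i + ';';                 // string concatenation
    }
    s = s.replace(/([a-z]+)(\d+)/g, '$2-$1');  // regular expression work
    return Date.now() - start;                 // elapsed milliseconds
}

var elapsed = quickBenchmark();
var tooSlow = elapsed > 100;                   // all we need: did it take longer than XX ms?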

mVChr
1

You could run all the benchmarks you want using Web Workers and then, depending on the results, store your verdict on the machine's performance in a cookie. This only works in browsers with HTML5 support, of course.
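
A rough sketch of that approach, assuming a separate worker file (the file name benchmark-worker.js, the 200 ms budget, and the score threshold are all made up for illustration):

// ---- main page: run the benchmark off the main thread, remember the verdict in a cookie ----
if (window.Worker && !/perfClass=/.test(document.cookie)) {
    var worker = new Worker('benchmark-worker.js');        // hypothetical file name
    worker.onmessage = function (e) {
        var perfClass = e.data > 100000 ? 'fast' : 'slow'; // threshold is a guess, calibrate it
        document.cookie = 'perfClass=' + perfClass + '; max-age=604800; path=/';
        worker.terminate();
    };
    worker.postMessage('start');
}

// ---- benchmark-worker.js: do 200 ms of busy work and report how much got done ----
self.onmessage = function () {
    var start = Date.now();
    var count = 0;
    while (Date.now() - start < 200) {                     // runs off the UI thread
        count++;
    }
    self.postMessage(count);
};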

0

Some Ideas:

  • Putting a time-limit on the tests seems like an obvious choice.
  • Storing test results in a cookie also seems obvious.
  • Poor performance on a test could pause further scripts
    • and trigger display of a non-blocking prompt UI (like the save-password prompts common in modern web browsers)
    • that asks the user if they want to opt into further scripting effects - and store the answer in a cookie.
    • While the user hasn't answered the prompt, periodically repeat the tests and auto-accept the scripting prompt if consecutive tests finish faster than the first one.
  • On a sidenote - slow network speeds could also probably be tested
    • by timing the download of external resources (like the page's own CSS or JavaScript files)
    • and comparing that result with the JavaScript benchmark results - a rough sketch follows after this list.
    • This may be useful on sites relying on lots of XHR effects and/or heavy use of <img/>s.
  • It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render - and are thus likely to cause quite noticeable delays for all users.
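
A rough sketch of the network-timing sidenote above (the resource URL and both thresholds are placeholders, not measured values):

// Crude combined check: a tiny CPU benchmark plus the download time of one small resource.
var jsScore = 0;
var t0 = Date.now();
while (Date.now() - t0 < 50) { jsScore++; }       // 50 ms CPU benchmark

var img = new Image();
var netStart = Date.now();
img.onload = function () {
    var netTime = Date.now() - netStart;          // download time in ms
    var slowNetwork = netTime > 500;              // threshold is a guess
    var slowCpu = jsScore < 100000;               // so is this one
    // e.g. skip XHR-heavy effects when slowNetwork, skip animations when slowCpu
};
img.src = '/some-small-static-file.png?nocache=' + netStart;   // hypothetical small resource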
Már Örlygsson
0

I ran into a similar question and I solved it this way; in fact, it helped me make some decisions.

After rendering the page I do:

let now, finishTime, i = 0;
now = Date.now();               // returns the number of milliseconds since Jan 01 1970
finishTime = now + 200;         // we add 200 ms (1/5 of a second)
while (now < finishTime) {
    i++;
    now = Date.now();
}
console.log("I looped " + i + " times!!!");

After doing that, I tested it on several browsers with different OSes, and the i value gave me the following results:

Windows 10 - 8GB RAM:

  • approx. 1,500,000 on Chrome
  • approx. 301,327 on Internet Explorer
  • 141,280 (on Firefox in a virtual machine running Lubuntu, with 2 GB of RAM assigned)

macOS - 8GB RAM:

  • approx. 3,000,000 on Safari
  • approx. 1,500,000 on Firefox
  • 70,000 (on Firefox 41 in a virtual machine running Windows XP, with 2 GB of RAM assigned)

Windows 10 - 4GB RAM (this is an old computer I have):

  • approx. 500,000 on Google Chrome

I load a lot of divs as a list; they are loaded dynamically according to the user's input, and this helped me limit the number of elements I create based on the score each browser gave. BUT the JS is not everything! Even though the Lubuntu OS running in a virtual machine gave poor results, it loaded 20,000 div elements in less than 2 seconds and you could scroll through the list with no problem, while it took more than 12 seconds for IE and the performance was terrible!

So that could be a good approach, but when it comes to rendering, that's another story; still, this can definitely help you make some decisions.

Good luck, everyone!

Noé
  • You're basically benchmarking `Date.now()` and hoping that's representative of other kinds of performance. But as your own results show, it isn't. That makes sense; it probably depends strongly on the browser / OS's implementation of it, and for hardware only reflects clock speed and maybe instruction throughput. It doesn't measure branch prediction, data caches, memory size / latency / bandwidth, or anything that would be relevant to handling larger data structures. – Peter Cordes May 08 '21 at 08:28
  • @PeterCordes Well, in fact it does; you probably didn't notice the notes in the question - he basically said he wasn't interested in anything you listed. Do you really expect him to write a whole program to detect cache, memory size and the like? In 200 ms we can learn a lot; as the question says, it's about how fast the browser apparently is. Take the result given: if you get more than 1,000,000 you can perform all the tasks with no problem (just saying). It's a quick approach - don't expect absolute results; there are even functions like `navigator.deviceMemory`. – Noé May 08 '21 at 12:50
  • But you don't even know which browsers can handle that. Imagine creating a whole program that analyzes everything you mentioned, when you just wanted to know when to enable some JS and CSS features. I didn't say my 7-line snippet would solve all of that, but in my circumstances it worked for me. As I said, I enable more functionality for devices that score more than 1,000,000 - nothing the users would notice too much (I don't think a 4K PC would give a poor result with those 7 lines), but it is very important not to kill other devices! – Noé May 08 '21 at 12:53
  • My point was that other things you can do in JS will depend on those other facets of performance, e.g. for handling a large table that's searched/filtered locally. For your example of adding `div` elements to a page, it might be good to actually do the first 2k, and check how long that took, to decide if you should keep doing more. (Although as you point out, scroll / render performance can be separate). Still, if you can manage it, that's probably better than burning CPU time on a separate benchmark. – Peter Cordes May 08 '21 at 12:57
  • Also, if you're going to pick a random tiny loop to benchmark, including `Date.now()` every single iteration seems like a very poor choice. It might make a system call, or might end up using `rdtsc` in user-space, or just copying a coarse timestamp from somewhere, depending on the browser/OS, so it can have *huge* performance differences that aren't correlated with performance of much else you'd want to do in JS. Maybe a nested loop that checks Date.now() every 1k iterations could work, if you design it so it's hard for any JS engine to optimize away that inner loop entirely. – Peter Cordes May 08 '21 at 13:00
  • Anyway, I'm not trying to suggest that you write benchmarks to separately characterize cache size / latency, mem bandwidth, and CPU math speed. That would obviously take more client-side CPU time than you want to use up. Just that while this might be better than nothing, you have to realize it's very coarse and not representative of a lot of stuff. e.g. a 4GHz Pentium 4 might measure as fast on this as a 4GHz Skylake. – Peter Cordes May 08 '21 at 13:05
  • @PeterCordes Yeah, in those cases we can find better solutions. I liked what you said about nesting loops; those are really good ideas. I think every problem may come with a different solution - in the end we have to take the best of all the ideas we see and develop our sites according to our individual circumstances :) – Noé May 08 '21 at 13:34
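
For what it's worth, a minimal sketch of the nested-loop variant Peter Cordes describes above (the chunk size of 1,000 and the 200 ms budget are arbitrary choices):

// Check Date.now() only once per 1,000 iterations, so the timer call itself
// isn't the main thing being measured.
function nestedLoopBenchmark(budgetMs) {
    var start = Date.now();
    var chunks = 0;
    var sink = 0;                              // keeps the inner loop from being optimised away
    while (Date.now() - start < budgetMs) {
        for (var i = 0; i < 1000; i++) {
            sink = (sink + i * 7) % 1000003;   // cheap arithmetic the engine can't simply drop
        }
        chunks++;
    }
    return { chunks: chunks, sink: sink };
}

var result = nestedLoopBenchmark(200);         // 200 ms budget, as in the answer above
console.log('Completed ' + result.chunks + ' chunks of 1,000 iterations');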