8

Consider the code below, rendered as <GridBody Rows={rows} />, and imagine that rows.length is 2000 or more, with each row array holding about 8 columns in this example. I use a more expanded version of this code to render part of a table that has been bottlenecking my web application.

var GridBody = React.createClass({
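    // Renders a <tbody> with one <tr> per entry in props.Rows and one <td> per
    // column; each cell's HTML string is injected via dangerouslySetInnerHTML.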
    render: function () { 
        return <tbody>
            {this.props.Rows.map((row, rowKey) => {
                    return this.renderRow(row, rowKey);
            })}
        </tbody>;
    },
    renderRow: function (row, rowKey) {
        return <tr key={rowKey}>
            {row.map((col, colKey) => {
                return this.renderColumn(col, colKey);
            })}
        </tr>;
    },
    renderColumn: function (col, colKey) {
        return <td key={colKey} dangerouslySetInnerHTML={{ __html: col }}></td>;
    }
});

Now onto the actual problem. The computation (even my own code) turns out to be surprisingly fast, and even ReactJS's work with the virtual DOM shows no issues.

But then there are these two lifecycle methods in ReactJS.

componentWillUpdate, up to which point everything is still okay. And then there is componentDidUpdate, which also seems fine on Chrome.

The problem

But then there is IE11/Edge, which is about 4-6 SECONDS slower than any other browser, and with the F12 inspector open it seems to be up to 8 SECONDS slower than Chrome.

The steps I have done to try and fix this issue:

  • Strip unnecessary code.

  • Shave off a handful of milliseconds in computation time.

  • Split the grid into separate components so that the virtual DOM doesn't try to update the entire component at once.

  • Attempt to concatenate everything into a single string and let React set innerHTML only once (sketched below). This actually seems to hit a bug in IE, where a large string takes about 25-30 seconds on IE11, but only 30 ms on Chrome.
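
A rough sketch of that last attempt (not the exact code, just the same idea, assuming the same Rows prop as GridBody above):

var GridBodyAsHtmlString = React.createClass({
    render: function () {
        // Build the whole <tbody> contents as one string...
        var html = this.props.Rows.map(function (row) {
            return '<tr>' +
                row.map(function (col) { return '<td>' + col + '</td>'; }).join('') +
                '</tr>';
        }).join('');
        // ...so innerHTML is assigned once instead of once per cell.
        return <tbody dangerouslySetInnerHTML={{ __html: html }} />;
    }
});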

I have not found a proper solution yet. The steps above made things somewhat better in IE, but the problem persists: a "modern" or "recent" browser is still 3-4 seconds slower.

Even worse, this nearly freezes the entire browser and its rendering.

tl;dr: How can I improve overall performance in IE and, if possible, other browsers?

I apologize if my question is unclear, I am burned out on this matter.

edit: Specifically, DOM access is slow in IE because innerHTML gets set more than 10,000 times. Can this be prevented in ReactJS?

Perfection
  • If you can override the rendering, what you can do to drastically speed up performance is reuse rows. What I mean is: if you show 10 rows at a time, create 20 rows in the DOM (not the entire 2000+), then when the user scrolls, show the next set, clear the previous 10 of their data and put the next set of data in them (see the sketch below these comments). A reference to elaborate on the idea (although for a different library): https://elements.polymer-project.org/elements/iron-list — refer to the section on why it should be used. Hope this helps! – Joey Roosing May 17 '16 at 07:54
  • We were thinking of implementing a similar thing but the problem would be that we want users to be able to use CTRL+F to search in such large lists as it is a requirement. – Perfection May 18 '16 at 10:00
  • In that case, can you not just add a search field on top of or below the grid, or on the column, and then search through your dataset in JavaScript rather than traversing the DOM? If it's found in your dataset (in memory in JS), load that item and maybe the surrounding 10 items into the DOM. I do agree it will feel less seamless... maybe some animations could help solve this. – Joey Roosing May 19 '16 at 06:57
  • That would require me to load the entire dataset (which can go over 1 million rows) to achieve the same functionality. We are currently limiting it to 2000 and using pagination. A search function is already present in the application but doesn't really solve the performance issue with 2000 results. – Perfection May 19 '16 at 07:18
  • Something crazy; I don't know if you can override Ctrl+F search in JavaScript. If so, you can catch the key event, search through your 2000 records that you hold in memory (but not in the DOM!), right? Then you can still do the recycling of maybe 100-200 DOM elements, while keeping 2k records at a time in memory. I think a combination of my first and second answer + how you currently handle a search that is outside your loaded 2k records should be able to solve your problem. – Joey Roosing May 19 '16 at 07:31
  • I tried to concatenate an entire tbody as a string and then render that. To my surprise, in IE11, instead of needing 8-10 seconds to render everything separately, it needed 25-30 seconds to do a single innerHTML assignment (with a large string). I am also seeing that no one has actually fixed this issue without losing control.... – Perfection May 19 '16 at 13:25
  • IE and Chrome have differing JS interpreters. This sort of issue is bound to happen. – mferly May 23 '16 at 01:07
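
For reference, a minimal sketch of the row-recycling / windowed-rendering idea from the comments above, assuming a fixed row height and a scrollable container (ROW_HEIGHT, VISIBLE_ROWS and WindowedGridBody are made-up names for illustration):

var ROW_HEIGHT = 24;      // assumed fixed pixel height per row
var VISIBLE_ROWS = 20;    // how many rows to keep in the DOM at once

var WindowedGridBody = React.createClass({
    getInitialState: function () {
        return { firstRow: 0 };
    },
    handleScroll: function (e) {
        // Work out which slice of Rows should be in the DOM right now.
        this.setState({ firstRow: Math.floor(e.currentTarget.scrollTop / ROW_HEIGHT) });
    },
    render: function () {
        var rows = this.props.Rows;
        var start = this.state.firstRow;
        var visible = rows.slice(start, start + VISIBLE_ROWS);
        return <div style={{ height: 400, overflowY: 'auto' }} onScroll={this.handleScroll}>
            {/* The spacer keeps the scrollbar sized as if all rows were in the DOM. */}
            <div style={{ height: rows.length * ROW_HEIGHT }}>
                <table style={{ transform: 'translateY(' + (start * ROW_HEIGHT) + 'px)' }}>
                    <tbody>
                        {visible.map(function (row, i) {
                            return <tr key={start + i} style={{ height: ROW_HEIGHT }}>
                                {row.map(function (col, j) {
                                    return <td key={j}>{col}</td>;
                                })}
                            </tr>;
                        })}
                    </tbody>
                </table>
            </div>
        </div>;
    }
});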

3 Answers

8

Things to try to improve IE performance:

  • Check you are running in production mode (which removes things like prop validation) and make Webpack / Babel optimisations where applicable (see the sketch after this list)

  • Render the page server-side so IE has no issues (if you can support server-side rendering in your setup)

  • Make sure render isn't called a lot of times; tools like this are helpful: https://github.com/garbles/why-did-you-update

  • Any reason why you are using dangerouslySetInnerHTML? If you take it out, does it speed things up dramatically? Why not just automatically generate the rows and cols from an array of objects (or a multidimensional array passed in)? I'm pretty sure React will make fewer DOM interactions this way (it makes use of the VDOM). The <tr>s and <td>s will be virtual DOM nodes.

  • Use something like https://github.com/bvaughn/react-virtualized to efficiently render large lists
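
A minimal sketch of the first point, assuming a Webpack build (the exact setup will differ; the UglifyJS step is optional):

// webpack.config.js (excerpt) - setting NODE_ENV to "production" lets React
// strip prop validation and other development-only checks from the bundle.
var webpack = require('webpack');

module.exports = {
    // ...entry, output and loaders as in your existing config...
    plugins: [
        new webpack.DefinePlugin({
            'process.env.NODE_ENV': JSON.stringify('production')
        }),
        new webpack.optimize.UglifyJsPlugin()
    ]
};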

Marty
  • react-virtualized uses the same ideas as several implementations that we've made ourselves, and it also has the same performance issues that we're seeing in browsers like IE10-11. dangerouslySetInnerHTML is used to set some HTML that we're getting from our result set. We have tried it without this, and it didn't make any difference in our tests. Also, server-side rendering is already enabled, but it comes with the same issues when a re-render happens. – Perfection May 24 '16 at 08:36
  • I checked with a GitHub fork, but we've already thoroughly fought off most unwanted re-renders. That's not really the issue here, though. When the component has to re-render a new data set (i.e. a newly ordered data set is pushed in), the browser has a very hard time keeping up with finding the DOM nodes and setting innerHTML. In our code's current state the re-render only happens once. – Perfection May 24 '16 at 12:19
  • Have you tried removing `dangerouslySetInnerHTML` just to test the performance against IE? You said "We have tried it without this, and it didn't make any difference in our tests." Does this mean you removed it as a temporary measure, or did you replace it with `<tr>` and `<td>`'s? – Marty May 24 '16 at 13:01
  • Our tests are worst-case scenarios in our current code on Chrome, Firefox, Edge, IE11 + IE10 emulated. It means we removed dangerouslySetInnerHTML and placed {col} between the `<td>` tags. We measure the results in milliseconds as determined by the browser. – Perfection May 24 '16 at 13:09
  • Another thing you can double-check is whether it's running in production mode and whether you have made other optimisations, like removing prop validation with Babel plugins. – Marty May 24 '16 at 23:05
  • Hey, thanks Marty. In fact this helped us a lot. The difference between production and development mode was a literal performance gain of 40-50%. We didn't even think of that. This is a major, non-intrusive step in the right direction. You should add that to your answer so I can accept it. – Perfection May 25 '16 at 10:49
1

Shot in the dark: try not rendering, or not displaying, until everything is completely done.

  • make the table element display:none until it's done (sketched after this list)
  • render it offscreen
  • in a tiny DIV with hidden overflow
  • or even output to a giant HTML string and then insert that into the DOM upon completion.
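
For example, a rough sketch of the display:none idea, reusing the GridBody component from the question (the hidden state flag here is made up for illustration):

var HiddenUntilDoneGrid = React.createClass({
    getInitialState: function () {
        return { hidden: true };
    },
    componentWillReceiveProps: function () {
        // Hide before the expensive re-render starts.
        this.setState({ hidden: true });
    },
    componentDidMount: function () {
        this.setState({ hidden: false });
    },
    componentDidUpdate: function () {
        // React has finished its DOM mutations; reveal the table so the
        // browser only lays it out and paints it once.
        if (this.state.hidden) {
            this.setState({ hidden: false });
        }
    },
    render: function () {
        return <table style={{ display: this.state.hidden ? 'none' : '' }}>
            <GridBody Rows={this.props.Rows} />
        </table>;
    }
});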
Nate
  • Rendering it off-screen or setting the table element to display: none won't gain performance, because the DOM search and manipulation still has to happen. We've tried putting it in an iframe, to no avail. We already attempted to output a giant HTML string, and that is actually 3-8 times slower than 12000+ DOM manipulations. – Perfection May 25 '16 at 09:44
  • Actually, concatenating the HTML into a string and then doing a single DOM manipulation is much faster. For some reason IE is extremely slow with for() loops and arrays. Hence the upvote. – Perfection May 30 '16 at 12:43
0

In addition to @Marty's excellent points, run a dynaTrace session to pinpoint the problematic code. It should give you better insight into where the bottleneck is. Its results are often more useful than the IE developer tools.

Disclaimer - I am not linked with the dynaTrace team in any way.

hazardous
  • I'm not interested in registering for more spam, sorry. And this would only help me find the problem that I have already found, but provide no solution. – Perfection May 24 '16 at 12:22
  • Your call. It's unfortunate Telerik put an excellent tool behind a mail wall. – hazardous May 24 '16 at 12:26