
Possibly related to Array.from() vs spread syntax

This is just a theoretical question; in reality we wouldn't have such a huge JSON payload dumped into the client side.

Let's say we have an object like this:

{
  uuidA: { /* something */ },
  uuidB: { /* something */ },
  // ...100000 items
}

vs. an array like this:

[
  { id: uuidA, /* ...something */ },
  { id: uuidB, /* ...something */ },
  // ...100000 items
]
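
For lookups, the keyed object wins. Here is a minimal sketch of the difference (the sample data and ids are hypothetical):

// Keyed object: O(1), a single property access.
const byId = { uuidA: { name: 'a' }, uuidB: { name: 'b' } };
const fromObj = byId['uuidA'];

// Array: O(n), find() scans elements until the id matches.
const asArray = [{ id: 'uuidA', name: 'a' }, { id: 'uuidB', name: 'b' }];
const fromArr = asArray.find(item => item.id === 'uuidA');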

Since we are in React, or any environment that discourages mutation, we have to compute a new state/object from the previous state/object. Does that mean we gain no performance benefit from the uuid-keyed object?

Even though we can access an item in O(1), we still need to create a new object in O(n), which is the same cost as array.filter().

For example, when we want to remove an item from that huge array/object:

const newObj = { ...oldObj }; // we cannot modify oldObj directly, as the React docs state
delete newObj[uuidA]; // the spread above already copied all n keys, so the removal is O(n) overall
setState(newObj);
vs.

const newArray = oldArray.filter(item => item.id !== uuidA); // also O(n): filter visits every element
setState(newArray);

// the overall performance is the same, right?
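
As an aside, object rest destructuring is a common idiom for removing a key without mutating the original. It is still O(n), because the remaining keys get copied (a sketch, reusing the uuidA variable from above):

// Computed destructuring pulls the removed entry out; the rest
// spread copies the remaining n - 1 properties, so this is O(n).
const { [uuidA]: removed, ...rest } = oldObj;
setState(rest);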

Just to summarize my question: because we always need to compute a new state, does that mean that in React every update/filter/add on object state is always an O(n) operation?

Yunhai
  • Yes, if you use native arrays and objects. If your collections grow to a size where this is unbearable, you can however swap them out for something more efficient, e.g. the immutable.js library. – Bergi Jun 19 '23 at 21:22
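
To illustrate the comment's suggestion: immutable.js persistent collections use structural sharing, so producing a "new" collection does not copy all n entries. A minimal sketch (the keys and values are hypothetical):

import { Map } from 'immutable';

// Build a persistent map from a plain object.
const oldState = Map({ uuidA: { name: 'a' }, uuidB: { name: 'b' } });

// delete() returns a NEW Map and leaves oldState untouched.
// Thanks to structural sharing this is roughly O(log n), not O(n).
const newState = oldState.delete('uuidA');

console.log(oldState.has('uuidA')); // true  (old state is unchanged)
console.log(newState.has('uuidA')); // false

Because each update returns a new reference, this also works naturally with React's reference-equality checks on state.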

0 Answers