And here we go… While `items` is of type `Ref<string[]>`, the `item` here is just a `string`. It won’t even compile, because `v-model` tries to mount a handler for the `"update:modelValue"` event, which tries to assign the new value to `item`, which is basically a local variable in the template renderer. If we use just `:model-value="item"`, the template compiler will generate the code:
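Heavily simplified, the generated render function has roughly this shape (a sketch, not the verbatim compiler output, assuming the same `ItemEditor` child component that appears later in the article):

```ts
import { h, renderList } from "vue";

// heavily simplified sketch of the render code generated for the v-for above;
// the real output uses more helpers (openBlock, createElementBlock, …) but has this shape
renderList(items.value, (item, idx) =>
  // `item` is just a parameter of this callback: assigning to it would only
  // rebind the local variable, it would never reach the array
  h(ItemEditor, { "model-value": item })
);
```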
As you can see, `item` here is literally a local variable (an argument of a function), so assigning anything to it wouldn’t have any actual effect.
This isn’t a problem if the array is an array of objects and we are only mutating their fields: the objects keep their reactivity and Vue allows mutating them without issue. The problems start when a component used there emits a whole new object instead of mutating the original one, for example a picker for a category or for tags (ones that aren’t stored as plain text).
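A version of that binding could look roughly like this (a sketch, assuming a child component like the `ItemEditor` used later in the article):

```html
<div v-for="(item, idx) in items" :key="idx">
  <!-- equivalent to v-model="items[idx]" -->
  <ItemEditor
    :model-value="items[idx]"
    @update:model-value="items[idx] = $event"
  />
</div>
```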
Notice that we use the `idx`-th item of the `items` array both to set the `model-value` prop and in the `"update:modelValue"` handler. The `item` from the `v-for` is not used at all.
This is a simple solution that will work in most simple cases.
We can’t just replace `items` with `filteredList` in the `v-for` and keep `items[idx]` in the input tag, because the given `idx` won’t be the index of the item in the original array!
Again, if we only mutate the items’ fields, this wouldn’t be an issue: the filtered array still contains the same reactive objects.
Sure, we could check whether the element should be displayed right in the template:

```html
<template v-for="(item, idx) in items" :key="idx">
  <div v-if="search.includes(item)">
    <input v-model="items[idx]" />
  </div>
</template>
```

(Note that the `v-if` has to sit below the `v-for`, otherwise it couldn’t access `item`.)
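Mapping the visible items to their original indices first also works; a minimal sketch (with `visibleIndices` as an assumed name):

```html
<!-- loop over the precomputed original indices instead of the items themselves -->
<div v-for="idx in visibleIndices" :key="idx">
  <input v-model="items[idx]" />
</div>
```

```ts
import { computed } from "vue";

// `items` and `search` are the same refs as before
const visibleIndices = computed(() =>
  items.value
    .map((item, idx) => ({ item, idx }))
    .filter(({ item }) => search.value.includes(item))
    .map(({ idx }) => idx)
);
```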
This solution also lets you display the items in a different order than the original array: before mapping the items to indices, sort the array (copying it first with `[...items.value].sort` or by using the new `toSorted` method!).
Rather a clean solution, but if we wanted to use the filtered array in a child component, we would have to pass both the data array and the filtered list.
Anyway, we need to keep the original index here, so… we’d better construct a type for the wrapped array… but how?
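Based on the usage shown at the end of the article (`toArrayProxy({ array, filter, sort })`), the building blocks could look roughly like this (a sketch; names other than `array`, `filter`, `sort`, `indices`, `toArrayProxy` and `ArrayProxyItem` are assumptions):

```ts
import type { Ref } from "vue";

// options for the wrapper; `array`, `filter` and `sort` match the usage example
// near the end of the article, the interface name itself is an assumption
interface ArrayProxyOptions<T> {
  array: Ref<T[]>;
  filter?: (item: T) => boolean;
  sort?: (a: T, b: T) => number;
}

// the shape of a single wrapped item: a `value` that behaves like the array item,
// the original index, and a way to remove the item from the source array
type ArrayProxyItem<T> = {
  value: T;
  readonly idx: number;
  delete(): void;
};

function toArrayProxy<T>(opt: ArrayProxyOptions<T>) {
  // collect the original indices of the items that pass the filter, in sorted order;
  // reading `opt.array.value` here is what registers the reactive dependencies
  function indices(): number[] {
    let idxs = opt.array.value.map((_, i) => i);
    if (opt.filter) idxs = idxs.filter((i) => opt.filter!(opt.array.value[i]));
    if (opt.sort) idxs.sort((a, b) => opt.sort!(opt.array.value[a], opt.array.value[b]));
    return idxs;
  }

  // the wrapping array itself is built in the next step
}
```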
A hidden bonus here: the filter and sort functions can use reactive data, and since `indices` is used inside a computed, it will be recalculated whenever a dependency updates.
And then, we need to create the array that will wrap it:
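For instance (a sketch continuing inside `toArrayProxy`; `computed` comes from `"vue"`):

```ts
// still inside toArrayProxy: map every surviving original index to a wrapped item,
// but what exactly should each entry be?
const proxyArray = computed(() =>
  indices().map((idx) => {
    // ??? (discussed below)
  })
);
// …and toArrayProxy presumably hands proxyArray back to the caller
// (an assumption based on the usage example near the end)
```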
And yeah, what should we put there? Simply returning `{ value: opt.array.value[c], idx: c }` won’t provide reactivity for `value`. Here we have a few possible approaches. A computed’s value returns exactly what its getter returns, so here we return a raw array of objects. We want the `value` field (the array item’s value) to point at the original array, with both a getter and a setter.
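One tempting way to fill that placeholder is a writable computed per entry (a hedged sketch, presumably close to what the benchmark below calls the “computed” approach):

```ts
// sketch: each entry is a writable computed backed by the original array
const proxyArray = computed(() =>
  indices().map((idx) =>
    computed<T>({
      get: () => opt.array.value[idx],
      set: (newValue) => {
        opt.array.value[idx] = newValue;
      },
    })
  )
);
```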
But then we wouldn’t know what the real index in the original array is, if we ever need it.

Also, we’d have another reactive element here… Well, n reactive elements: each array item becomes a separate reactive object.
Take a look at what we actually need here… The getter should return `opt.array.value[idx]`, where `idx` comes from `indices`. The setter… depends on the approach (a sketch of the resulting wrapper follows this list):

- just set `opt.array.value[idx] = newValue`
- call a setter given in the options, or assign `opt.array.value = newArray`, where `newArray` can be produced by any method of replacing an item in an array (a few such methods are shown for the class version later).
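Following those requirements, each entry can instead be a plain object whose `value` accessor reads and writes the original array (a sketch of the closure-based variant, using the simple assignment strategy from the first bullet; the original code may differ in details):

```ts
// sketch: plain objects with getter/setter closures over `opt` and `idx`
const proxyArray = computed<ArrayProxyItem<T>[]>(() =>
  indices().map((idx) => ({
    idx,
    get value(): T {
      return opt.array.value[idx];
    },
    set value(newValue: T) {
      opt.array.value[idx] = newValue;
    },
    delete(): void {
      opt.array.value = opt.array.value.filter((_, i) => i !== idx);
    },
  }))
);
```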
Let’s see: `modelValue: item.value` actually calls the getter, which returns `opt.array.value[idx]`. That means we depend both on the given `opt.array` (the component will re-render when we change the array) and on the array item `opt.array.value[idx]`.

On the other side, `onUpdate:modelValue` makes a simple assignment to `.value`, which calls the setter, which does `opt.array.value[idx] = newValue`. Everything we need. Of course, as mentioned, we can change the way the assignment is done.

This setter will also trigger an update of the component displaying the item, because the array items themselves were already reactive.
Logically, `value` acts exactly like the array’s item. Underneath it’s done with a getter and a setter (and, as we’ll see later, a cached item), so the getter doesn’t have to hit the array each time we read the value. If the item in the array changes, a new `ArrayProxyItem` instance will be created anyway, thanks to the dependency created while mapping the items in `indices`. If `T` is an object, it remains reactive and is the same instance as the one in the array.
```html
<div v-for="(item, i) in proxyArray" :key="i">
  <ItemEditor v-model="item.value" @remove="item.delete()" />
</div>
```
JavaScript is a nice runtime environment, but sadly, it encourages developers to make the code slow… Sure, “no premature optimization”, but seriously… don’t set traps in places you will regret later! If you are building something low-level, keep it fast and optimized, so you don’t have to care that much about optimization in your high-level code (your components). Many web pages already use way too many resources, don’t join them!
I won’t even talk about how a classic for loop (or `for (const item of items)`) is much faster than `.forEach`. Look at the code above… What happens there? While it looks clean, underneath the shiny shell of JavaScript it’s a hell: 3 closures!
Here comes the hated Object-Oriented Programming… Classes! Okay, it’s not so “object-oriented”, because we are going to use just one single class, but still.
We had previously declared the `ArrayProxyItem<T>` type and… forgot about it. Now let’s change the type into an actual class:
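Something along these lines (a sketch; the article’s original class may differ in details):

```ts
import { computed, type Ref } from "vue";

// class-based replacement for the ArrayProxyItem type from before
class ArrayProxyItem<T> {
  constructor(
    private array: Ref<T[]>,
    public readonly idx: number,
  ) {}

  get value(): T {
    return this.array.value[this.idx];
  }

  set value(newValue: T) {
    // simple strategy for now; alternative update strategies are shown further below
    this.array.value[this.idx] = newValue;
  }

  delete(): void {
    this.array.value = this.array.value.filter((_, i) => i !== this.idx);
  }
}

// inside toArrayProxy, the mapping now just constructs instances
const proxyArray = computed(() =>
  indices().map((idx) => new ArrayProxyItem(opt.array, idx))
);
```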
A silly micro-benchmark (without Vue’s part) on 1000 array elements says that this method is over 20 times faster at creation. But micro-benchmarks are often just a curiosity with little significance.
Sure, it might get lost in all the work our JS app has to do, but… why waste time and memory when they can be saved? Especially since this is something that could be done without that layer of abstraction; we add it only to make the code easier to write.
Results for different approaches:
- raw print: 17 MiB heap, 25ms render (and total)
- computed: 48 MiB heap, 10.1ms creation, 20.0ms render, 30.1ms total
- closures: 36 MiB heap, 13.7ms creation, 23.6ms render, 37.3ms total
- classes: 24 MiB heap, 7.8ms creation, 24ms render, 31.8ms total
Well, the worst is the approach using closures; its creation takes nearly twice as long as with classes, and noticeably longer than with computeds. Computeds don’t provide the functionality we need, though, so that’s a different problem. There are no significant differences in rendering, and the benchmark wasn’t run on a clean system, so assume a large measurement error.
For heap usage, those are the peaks I found by sampling with setInterval every 1ms. I’ve measured it multiple times and the results were always similar: with classes the peak was never as high as with the others. Don’t trust these numbers too much, they aren’t rigorous; the overall observation is simply that the class version had the lowest memory footprint, while computeds and closures went higher. I also tried the profiler, which doesn’t catch the actual peaks either, but the observations were similar. Yes, these tests were done with the Vite dev server and without devtools open, but the differences are still noticeable. Also note that for the raw print case only one array was used, instead of 100.
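A peak sampler along these lines is enough to reproduce the observation (a rough sketch; `performance.memory` is a non-standard, Chromium-only API, and the article’s exact measuring code isn’t shown):

```ts
// rough sketch: sample the JS heap every millisecond and remember the peak
let peakHeap = 0;
setInterval(() => {
  const used = (performance as any).memory?.usedJSHeapSize ?? 0;
  if (used > peakHeap) {
    peakHeap = used;
    console.log(`new heap peak: ${(peakHeap / 1024 / 1024).toFixed(1)} MiB`);
  }
}, 1);
```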
Somehow rendering is fastest for computeds… It’s probably because the computed caches the item. Let’s do that in our class:
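Roughly like this (a sketch; only the members that change are shown):

```ts
import type { Ref } from "vue";

class ArrayProxyItem<T> {
  // cache the item at construction time, so reads don't index into the array again
  private _cache: T;

  constructor(
    private array: Ref<T[]>,
    public readonly idx: number,
  ) {
    this._cache = array.value[idx];
  }

  get value(): T {
    return this._cache;
  }

  set value(newValue: T) {
    this._cache = newValue;
    this.array.value[this.idx] = newValue;
  }

  // delete() stays the same as before
}
```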
And we’ve managed to shave off a few ms when `T` is an object type, but it gets worse for primitives. It’s hard to find the best solution. The `_cache` will still be reactive; if the item instance ever changes (whether it’s an object or a primitive), the `proxyArray` computed should create a new proxy for that item anyway.

There might be a much bigger performance penalty when mutating data through nested computeds, but that’s a deeper problem and much harder to measure.
The constructor copies the item (a reference to it, if it’s an object) into the local cache, so the original array doesn’t have to be touched anymore when we read the value. If you use `ArrayProxyItem`, the dependency is installed only on the `T` object (if it’s reactive! A primitive won’t be reactive here, but that’s not a problem), not on the array. If one of the item’s fields changes, everything keeps working, both through `ArrayProxyItem`’s `value` and directly on the source array. If the item itself changes, a new array of `ArrayProxyItem` instances will be created, so the cached reference isn’t a problem. And again, if the new array contains the same references, the components won’t have to re-render.
I know it sounds complicated but… Thanks to the reactivity in Vue it just works.
Here are a few possible strategies for the setter and for `delete`:

```ts
set value(newValue: T) {
  // pick one of the strategies below:

  // newest approach, fast* (Array.prototype.with, ES2023)
  this.array.value = this.array.value.with(this.idx, newValue);

  // older functional approach, slow
  this.array.value = this.array.value.map((item, i) => i == this.idx ? newValue : item);

  // copying, patching, applying; ugly but fastest
  const arr = [...this.array.value];
  arr[this.idx] = newValue;
  this.array.value = arr;
}
```

```ts
delete() {
  // pick one of the strategies below:

  // filter the item out; nice, but slower
  this.array.value = this.array.value.filter((_, idx) => this.idx != idx);

  // copy, remove with splice, apply; looking ugly, but fast
  const arr = [...this.array.value];
  arr.splice(this.idx, 1);
  this.array.value = arr;
}
```
The main difference is that, without replacing the array instance, a `watch` on the ref won’t react to the change. Watching arrays is another broad topic. The `proxyArray` computed will react to any change here, because it reads the items.
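To make that difference concrete, here is a small assumed illustration (a default, non-deep watcher on the ref):

```ts
import { ref, watch } from "vue";

const list = ref<string[]>(["a", "b", "c"]);

// a non-deep watcher only compares the ref's value by identity
watch(list, () => console.log("list replaced"));

list.value[0] = "x";                   // in-place mutation: the watcher does not fire
list.value = list.value.with(0, "x");  // new array instance: the watcher fires
```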
While I wouldn’t recommend implementing both in `ArrayProxyItem` at once, I would suggest implementing both strategies behind an interface and using whichever version you need. Maybe add an option to `toArrayProxy` to select the updating strategy?
```ts
const search = ref("");
const todos = ref<TodoItem[]>([]);

const todosList = toArrayProxy({
  array: todos,
  filter: (item) => {
    // don't show subtasks here
    if (item.parent != null) return false;
    // if the user typed something in search, filter the items
    if (search.value.length == 0) return true;
    return item.content.includes(search.value);
  },
  sort: (a, b) => {
    return a.priority - b.priority;
  },
});
```
That way we could use the new array to display, modify and remove items from the list.
If the list comes from a backend, it might be better to delete an item via a request and reload the whole list, but as an optimization we could send the delete request and just keep working without that item (this achieves the same thing as reloading the array, except that reloading would lose any other unsaved changes). If we don’t sort, the items remain in the same order, so we could use the index in the original array for features like “add an item directly after this one”.
We could still improve this by adding more functionality, like change listeners: callbacks triggered when an item is removed, moved, and so on.
We’ve managed to write a wrapper for an array, so we can use a completely reordered and filtered array in `v-for` and still use `v-model` to mutate the original array through it. As long as we are not rendering huge amounts of data, the performance cost is negligible. The solution still leaves plenty of room for improving performance and usability.
Using the class approach results in fewer dependencies to track than nested computeds, which add another reactive layer, so in the long term this approach should be more performant. Either way, it makes many array-handling cases a lot easier, especially if we simply want to use `v-model` on items directly.