
.NET UI Virtualization Best Practices - How to Get Offset

When developing items controls like ListView, TreeView, or DataGrid, the layout must be extremely fast at loading and updating because the control can potentially hold a large number of items. To achieve optimal performance, we use a technique called “UI Virtualization”, which recycles UI elements as the user scrolls through items.

But UI Virtualization can introduce new problems that you wouldn’t have otherwise, such as obtaining the offset position of an item deep within the list. In this blog, we explain how to overcome this in your own custom control and how to do it with C1LengthList. For more background on why we use .NET UI virtualization, check out my previous blog, Analyzing .NET WPF Datagrid Performance Benchmarks.

Ready to Try it Out? Download a 30-Day Free Trial of ComponentOne Today!

The Offset Problem with UI Virtualization

To better understand this problem, let's imagine we have a big list of numbers, and we want to know the offset (or position) of a specific index.

[Figure: a list of items and the offset of a specific index]

Since we are using UI virtualization, not all elements in the list are rendered at once, so we can’t simply read the offset from the UI; it has to be computed. When every element has a fixed size, this computation is trivial: multiplying the index of the item by the fixed length gives the offset. But when every element has a different size, or the size depends on the laid-out child, the position of an element depends on the sizes of all the previous elements, and summing all those previous lengths becomes a performance problem when rendering a large number of items.
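To make the two cases concrete, here is a minimal sketch (the helper names are ours, not part of any control’s API): with a fixed item length the offset is a single multiplication, while with variable lengths it degenerates into a loop over all previous items.

using System.Collections.Generic;

static class NaiveOffsets
{
    // Fixed item size: the offset is a single multiplication, O(1).
    public static double GetOffsetFixed(int index, double itemLength) => index * itemLength;

    // Variable item sizes: the offset is the sum of all previous lengths, O(n) per query.
    public static double GetOffsetVariable(int index, IReadOnlyList<double> lengths)
    {
        double offset = 0;
        for (int i = 0; i < index; i++)
            offset += lengths[i];
        return offset;
    }
}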

In addition to the performance issue, when the number of items is large (say, 1 billion items), storing the length of every item also creates a memory problem. In most cases, when binding to such a large number of items, it’s unnecessary to store these values since most items are never visited. Instead, an estimated size can be used until an item is actually visited.

The Offset Solution

Since summing every item before the one we want is expensive, we need certain portions to be summed already. To achieve this, we can use a tree structure of fixed-size arrays in which each parent node caches the aggregated length of its children.

[Figure: tree structure of fixed-size partitions with cached sums in the parent nodes]

The position of an item within the structure is computed mathematically from its index in the collection with the following code:

private int[] GetIndexArray(int index)
{
    // Decompose the collection index into one index per tree level,
    // most significant level first (like converting to base _partitionLength).
    var indexArray = new int[_depth];
    for (int i = _depth - 1; i >= 0; i--)
    {
        indexArray[i] = index % _partitionLength;
        index = index / _partitionLength;
    }
    return indexArray;
}

The returned array contains the index to follow at each level to reach the item.

For example, if the partition array size is 3 and the searched index is 8, the returned array is [2,2]: at the top level the lookup sums the cached values before index 2, and then, in the selected partition one level down, it sums the item lengths before index 2.

[Figure: finding the offset of index 8 in a tree with partitions of length 3]

In the figure above, that sums 15 + 14 + 1 + 6 = 36. Notice we’re summing 4 numbers instead of 8.

This sample uses arrays of length 3 for illustration purposes, but in real cases the arrays can be much bigger, so the performance gains are more noticeable.
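To show how the index array drives the offset computation, here is a minimal, self-contained sketch of the idea (a hypothetical LengthTree class, not the actual C1LengthList implementation): the leaf level holds the item lengths, every higher level caches the sum of each partition below it, and GetOffset adds at most one partition’s worth of cached values per level.

using System;

// Sketch of a partial-sum tree for offsets (not the actual C1LengthList internals).
public class LengthTree
{
    private readonly int _partitionLength;
    private readonly int _depth;
    private readonly double[][] _levels; // _levels[_depth - 1] holds the raw lengths

    public LengthTree(double[] lengths, int partitionLength)
    {
        _partitionLength = partitionLength;
        _depth = Math.Max(1, (int)Math.Ceiling(
            Math.Log(Math.Max(1, lengths.Length), partitionLength)));
        _levels = new double[_depth][];
        _levels[_depth - 1] = (double[])lengths.Clone();

        // Build the cached sums bottom-up: each entry of a level is the sum of
        // one partition (group of _partitionLength entries) of the level below.
        for (int level = _depth - 2; level >= 0; level--)
        {
            var below = _levels[level + 1];
            var current = new double[(below.Length + partitionLength - 1) / partitionLength];
            for (int i = 0; i < below.Length; i++)
                current[i / partitionLength] += below[i];
            _levels[level] = current;
        }
    }

    // Same decomposition as GetIndexArray in the post.
    private int[] GetIndexArray(int index)
    {
        var indexArray = new int[_depth];
        for (int i = _depth - 1; i >= 0; i--)
        {
            indexArray[i] = index % _partitionLength;
            index /= _partitionLength;
        }
        return indexArray;
    }

    // Offset = sum of the lengths of all items before `index`,
    // adding at most _partitionLength - 1 cached values per level.
    public double GetOffset(int index)
    {
        var indexArray = GetIndexArray(index);
        double offset = 0;
        int nodeStart = 0; // first entry of the current subtree at this level
        for (int level = 0; level < _depth; level++)
        {
            var entries = _levels[level];
            for (int j = 0; j < indexArray[level] && nodeStart + j < entries.Length; j++)
                offset += entries[nodeStart + j];
            nodeStart = (nodeStart + indexArray[level]) * _partitionLength;
        }
        return offset;
    }
}

With a partition length of 3 and the nine lengths from the figure, GetOffset(8) performs exactly the four additions described above.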

On the other hand, this data structure can also reduce memory usage by defining a default length and not storing the values that are equal to that default. For instance, suppose the default value is 5 and we have a list of 9 values. Initially, the top-level node stores the value 45 (9 items × a length of 5). When values different from 5 are set, only those values are saved in the structure. Let’s suppose the second item is set to 8.

[Figure: the structure after setting the second item’s length to 8, with the other items kept at the default of 5]

This way, we only allocate memory for the items that are different from the default.
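The post folds this default-length optimization into the tree itself; the sketch below illustrates just the storage principle with a hypothetical SparseLengths class that keeps a dictionary of overrides, so memory is only allocated for non-default lengths.

using System.Collections.Generic;

// Sketch of the default-length idea (not the C1LengthList internals):
// only items whose length differs from the default are stored.
class SparseLengths
{
    private readonly double _defaultLength;
    private readonly int _count;
    private readonly Dictionary<int, double> _overrides = new();

    public SparseLengths(int count, double defaultLength)
    {
        _count = count;
        _defaultLength = defaultLength;
    }

    public double this[int index]
    {
        get => _overrides.TryGetValue(index, out var length) ? length : _defaultLength;
        set
        {
            if (value == _defaultLength) _overrides.Remove(index);
            else _overrides[index] = value;
        }
    }

    // Total length: 9 items × 5 = 45 initially; setting the second item to 8
    // makes it 48, while only that single override is actually stored.
    public double Total
    {
        get
        {
            double total = _count * _defaultLength;
            foreach (var length in _overrides.Values)
                total += length - _defaultLength;
            return total;
        }
    }
}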

Performance Comparison

Within our ComponentOne libraries, we’ve included a list class, C1LengthList, that implements this technique.

To visualize the benefits of this structure, we built a benchmarking app that, on the one hand, sums the elements of a List to get the offset and, on the other hand, calls the GetOffset method of C1LengthList.

Size        List (time in µs)    C1LengthList (time in µs)
1000        50.1                 169.1
10000       406.8                157.8
100000      4103.8               157.1
1000000     48021.0              181.3

[Figure: chart of the benchmark results above]

For small lists, the plain sum can actually be faster, although both approaches are extremely fast at that scale. As the size of the list grows toward a million items, summing all the items becomes increasingly expensive, whereas C1LengthList maintains nearly constant performance.
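A rough Stopwatch-based harness along these lines reproduces the shape of this comparison, using the LengthTree sketch above as a stand-in for C1LengthList (the partition length, query count, and random lengths are arbitrary choices, and the absolute numbers will differ from the table):

using System;
using System.Diagnostics;
using System.Linq;

static class OffsetBenchmark
{
    const int Queries = 100; // arbitrary number of offset look-ups per measurement

    static void Main()
    {
        foreach (var size in new[] { 1_000, 10_000, 100_000, 1_000_000 })
        {
            var random = new Random(42);
            var lengths = Enumerable.Range(0, size).Select(_ => 20.0 + random.Next(30)).ToArray();
            var tree = new LengthTree(lengths, 256); // LengthTree sketch shown earlier
            var indexes = Enumerable.Range(0, Queries).Select(_ => random.Next(size)).ToArray();

            double MeasureMicroseconds(Action action)
            {
                var sw = Stopwatch.StartNew();
                action();
                sw.Stop();
                return sw.ElapsedTicks * 1_000_000.0 / Stopwatch.Frequency;
            }

            // Naive: sum every previous length for each queried index, O(n) per query.
            double naiveUs = MeasureMicroseconds(() =>
            {
                foreach (var index in indexes)
                {
                    double offset = 0;
                    for (int i = 0; i < index; i++) offset += lengths[i];
                }
            });

            // Tree: walk one branch per level, O(log n) per query.
            double treeUs = MeasureMicroseconds(() =>
            {
                foreach (var index in indexes)
                    tree.GetOffset(index);
            });

            Console.WriteLine($"{size,9}: naive {naiveUs,10:F1} µs, tree {treeUs,8:F1} µs");
        }
    }
}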

Conclusion

This hierarchical data structure lets us get the offset of an item in the list in O(log n), instead of the O(n) of the traditional approach of summing a list. The only major drawback is that accessing the length of a specific item is also O(log n) instead of O(1). In addition, this data structure drastically reduces the memory used by lists whose items are mostly equal to a default value.

Another scenario to consider is computing the scroll position of cells that use text wrapping. With text wrapping, the height of a cell is not known in advance: it depends on the glyph heights of the inner text and on the width of the viewport, and it affects the positions of all the following rows. In such a scenario, determining the precise scroll index is a challenge.

You can try C1LengthList as part of C1DataCollection and ComponentOne WPF Edition.

Ready to Try it Out? Download a 30-Day Free Trial of ComponentOne Today!
