
UWP FlexGrid Performance Review: Our Methodology and Results

We released our first UWP Beta in 2015. Since then, Microsoft has released several platform updates, and we have made many changes on our side to make our controls fast and stable. UWP FlexGrid performance has improved significantly since those first versions. Now I'd like to share the FlexGrid benchmark application that we use internally. It's similar to what we did for WPF, but it doesn't include any competitors.

UWP FlexGrid Performance Test Application

Our benchmark application lets you select and run a single test or run all tests one by one. You can choose how many times to run each test; the reported result is the average over those runs. We did this to reduce possible side effects from the OS or other applications. All test results shown here are averages over 10 runs. The application window is shown below:

Benchmark application window

Running a single test is handy if you want to profile a specific use case. Note: if you need comparable results across tests, don't change the window size while the tests are running. The actual viewport size affects the numbers, since a larger viewport takes more time to lay out and leaves less work to virtualization. After running some or all tests, you can save the results to an Excel file.

The main point of interest here is how to measure the time of complex operations that involve async UI updates. After several experiments, we found what we think is the most appropriate way to detect the exact moment when the UI has finished all updates. The full source code is attached, so you can give it a try. We'd be glad to hear any feedback if you think something can be improved.
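One common technique for this kind of measurement (a sketch; the attached benchmark source may use a different mechanism) is to await a low-priority dispatcher callback after triggering the operation, since low-priority work runs only after the pending layout and render work has been processed:

```csharp
// Sketch: wait until queued UI work has been processed.
using System.Threading.Tasks;
using Windows.UI.Core;

static class UiTiming
{
    public static Task WaitForUiIdleAsync(CoreDispatcher dispatcher)
    {
        // Low-priority callbacks run after pending layout/render work,
        // so awaiting one approximates "the UI has finished updating".
        var tcs = new TaskCompletionSource<bool>();
        _ = dispatcher.RunAsync(CoreDispatcherPriority.Low,
            () => tcs.SetResult(true));
        return tcs.Task;
    }
}
```

A test would then start a Stopwatch, apply the change (for example, set ItemsSource), await this helper, and stop the Stopwatch.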

Benchmarks

The UWP platform has no standard class similar to WPF's ListCollectionView. We have our own CollectionView for UWP that includes sorting, filtering, and grouping functionality. If FlexGrid's ItemsSource is set to a generic collection, FlexGrid creates a C1CollectionView to handle all data operations. In this benchmark we use C1CollectionView as the data source and fill it with business objects defined like this:


public class Customer : INotifyPropertyChanged, IEditableObject
{
    public int ID { get; set; }
    public string Name { get; }
    public string Country { get; }
    public int CountryID { get; set; }
    public bool Active { get; set; }
    public string First { get; set; }
    public string Last { get; set; }
    public DateTime Hired { get; set; }
    public double Weight { get; set; }
    public string Father { get; }
    public string Brother { get; }
    public string Cousin { get; }

    // The read-only properties are set in a constructor (omitted here).
    // Interface members, stubbed for brevity:
    public event PropertyChangedEventHandler PropertyChanged;
    public void BeginEdit() { }
    public void CancelEdit() { }
    public void EndEdit() { }
}

It gives us 12 columns of different types.

Every included test follows the same steps:

  1. Remove all UI created by the previous test, then call GC.Collect and GC.WaitForPendingFinalizers so that garbage collection doesn't affect the next test;
  2. Initialize the next test and a Stopwatch;
  3. Run the test the required number of times;
  4. Measure the total time and compute the average result;
  5. Log the results.
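The steps above can be sketched roughly like this (illustrative names; the attached benchmark's actual types differ):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class TestRunner
{
    public static async Task<TimeSpan> RunAsync(Func<Task> test, int runs)
    {
        // 1. Make sure pending garbage doesn't affect the measurement.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        // 2-3. Time the requested number of runs.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < runs; i++)
            await test();
        sw.Stop();

        // 4-5. Average the total time; the caller logs the result.
        return TimeSpan.FromTicks(sw.Elapsed.Ticks / runs);
    }
}
```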

Let's go over some implementation details of specific benchmarks.

Benchmark 1: Create control and load data

This benchmark creates a user control containing FlexGrid, inserts it into the visual tree, and fills it with data.

Benchmark 2: Re-load data into existing control

This benchmark sets the FlexGrid's ItemsSource to null to clear both data and auto-generated columns, and then sets ItemsSource to a new C1CollectionView instance.
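In code, the reload step looks roughly like this ('customers' stands for the underlying data list; the variable name and constructor usage are assumptions, not the benchmark's exact code):

```csharp
_grid.ItemsSource = null; // clears data and auto-generated columns
_grid.ItemsSource = new C1.Xaml.C1CollectionView(customers);
```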

Benchmark 3: Sort single column

We sort at the data source level, the same way FlexGrid does internally when the end user taps a column header:


public override async Task Sort(bool ascending)
{
    var cv = _grid.ItemsSource as C1.Xaml.C1CollectionView;
    using (cv.DeferRefresh())
    {
        cv.SortDescriptions.Clear();
        cv.SortDescriptions.Add(new C1.Xaml.SortDescription("ID",
            ascending ? C1.Xaml.ListSortDirection.Ascending : C1.Xaml.ListSortDirection.Descending));
    }
}

Note that it's important to use DeferRefresh here: it batches the Clear and Add into a single refresh, so the view isn't recalculated twice.

Benchmarks 4 and 5: Scroll on 100 rows; Scroll full grid

We thought it would be nice to mimic end-user interaction in this test, but we didn't find a good way to do it. So we decided to stick with bringing specific rows into view using the FlexGrid.ScrollIntoView method.
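A rough sketch of the scrolling loop, assuming a ScrollIntoView(row, column) overload (check the actual FlexGrid signature; _grid is the benchmark's FlexGrid instance):

```csharp
async Task ScrollThroughAsync(int rowCount, int step)
{
    for (int row = 0; row < rowCount; row += step)
    {
        _grid.ScrollIntoView(row, 0);
        // The real benchmark waits for the grid to finish rendering
        // each viewport before scrolling further.
        await Task.Yield();
    }
}
```

The 100-row test scrolls a fixed distance; the full-grid test walks from the first row to the last.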

Benchmark 6: Filter column

We filter using a filter predicate:


public async override Task Filter(bool isActive)  
{  
   var cv = _grid.ItemsSource as C1.Xaml.C1CollectionView;  
   if (cv != null)  
   {  
      if (isActive)  
         cv.Filter = IsActiveFilter;  
      else  
         cv.Filter = IsInactiveFilter;  
   }  
}  
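The two predicates are defined elsewhere in the benchmark. Plausible implementations might look like this (an assumption: the attached source may define them differently, and C1CollectionView.Filter is assumed to be a Predicate&lt;object&gt;, as with WPF's ListCollectionView):

```csharp
bool IsActiveFilter(object item) => ((Customer)item).Active;
bool IsInactiveFilter(object item) => !((Customer)item).Active;
```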

Benchmark 7: Group by single column

Again, C1CollectionView supports grouping via GroupDescriptions, so we use it:


public override async Task Group()  
{  
   var cv = _grid.ItemsSource as C1.Xaml.C1CollectionView;  
   using (cv.DeferRefresh())  
   {  
       cv.GroupDescriptions.Clear();  
       cv.GroupDescriptions.Add(new C1.Xaml.PropertyGroupDescription("CountryID"));  
   }  
}  

Again, it's important to use DeferRefresh.

Test Results and Progress Compared with 2016 v2

For the 2017 v1 release we worked on performance optimization and achieved significant improvements. Below are charts comparing the results with the 2016 v2 release.

Results for 1000 data rows

Results for 10 000 data rows

Results for 100 000 data rows

You can see that sorting and grouping degrade somewhat as data size grows. This is a limitation of the current C1CollectionView implementation: it uses a generic approach based on SortDescriptions, GroupDescriptions, and reflection. If sorting performance on big data sets is critical for your application, you can set the C1CollectionView.CustomSort property to your own IComparer implementation. Because it doesn't need reflection, it works much faster.
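A sketch of such a comparer, reading Customer.ID directly instead of going through reflection (assuming CustomSort accepts a non-generic IComparer, as the property description above suggests):

```csharp
using System.Collections;

class CustomerIdComparer : IComparer
{
    public int Compare(object x, object y) =>
        ((Customer)x).ID.CompareTo(((Customer)y).ID);
}

// Usage, assuming cv is the grid's C1CollectionView:
// cv.CustomSort = new CustomerIdComparer();
```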

Environment, Conditions and Limitations

The benchmarks were run on an HP ENVY-23 All-in-One desktop with the following configuration:


Intel i7 quad-core CPU @ 3.10 GHz  
8 GB RAM  
NVIDIA GeForce GT 630M display adapter, Full HD (1920 x 1080) resolution  
Windows 10 Pro 64-bit OS Version 10.0.14393 Build 14393  

All results were obtained from a release build compiled with the .NET Native tool chain. This is important because that is what your customers get from the Windows Store.

If you repeat the tests on your machine, always check the list of currently running processes. I noticed that every time I rebuild and run the UWP application, Windows starts a 'Microsoft Compatibility Telemetry' process, which eats a lot of CPU and badly affects application performance. You need to stop it before running tests, or you'll get incomparable results. Windows is not a real-time OS, so other running processes might also affect performance. Try to keep the same small set of running processes when you repeat tests.

MESCIUS inc.
