ibenge Uforo

Entry-7: Different Ways to Handle Collections in C#


This short article discusses three different strategies for handling collections in C#, where each strategy employs a different looping approach to achieve the same result.

The implementations include a foreach loop, a LINQ-based approach using the Select method, and a traditional for loop, each run over 1,000 iterations. Let's analyze each strategy's execution time, memory usage, and CPU usage to understand its performance characteristics.

Before we begin, here are some key terms to note.

Compile Time:
Compile time refers to the duration it takes for the compiler to process the source code and generate executable machine code. During compilation, the compiler checks the code for syntax errors, performs optimizations, and translates it into machine-readable instructions. A shorter compile time indicates that the code is processed quickly and is ready for execution, which is desirable for development efficiency.

Execution Time:
Execution time, also known as runtime, is the time taken by a program to complete its execution once it starts running. It measures how long it takes for the program to execute all its statements and complete its tasks. A lower execution time means that the program runs faster, which is beneficial for applications that require quick responses and high performance.

Memory Usage:
Memory usage represents the amount of memory (RAM) that a program consumes while it is running. It indicates the space required to store data, variables, and program instructions during execution. Lower memory usage is generally preferred, especially for resource-constrained environments or when dealing with large datasets, as it reduces the risk of memory-related issues like crashes or slowdowns.

CPU Time:
CPU time measures the total amount of time the central processing unit (CPU) spends executing a program's instructions. Unlike wall-clock execution time, it excludes time spent waiting on external resources like I/O operations. A lower CPU time indicates that the program is using the CPU efficiently and is not overly taxing the processor, leading to better overall system performance.
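
The timings below were captured with a profiler. For readers who want a rough home-grown measurement of execution time and memory, here is a minimal sketch using Stopwatch and GC.GetTotalMemory; the class and method names are illustrative, and this is not the tooling used to produce the figures in this article.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class MeasurementSketch
{
    static List<int> Double(List<int> numbers)
    {
        var doubled = new List<int>();
        foreach (int num in numbers)
        {
            doubled.Add(num * 2);
        }
        return doubled;
    }

    static void Main()
    {
        var numbers = Enumerable.Range(1, 1000).ToList();

        Double(numbers); // warm-up run so JIT compilation doesn't skew the timing

        long memoryBefore = GC.GetTotalMemory(forceFullCollection: true);
        var stopwatch = Stopwatch.StartNew();

        var result = Double(numbers);

        stopwatch.Stop();
        long memoryAfter = GC.GetTotalMemory(forceFullCollection: false);

        Console.WriteLine($"Execution time: {stopwatch.Elapsed.TotalMilliseconds:F3} ms");
        Console.WriteLine($"Approx. memory delta: {memoryAfter - memoryBefore} bytes");
        Console.WriteLine($"Result count: {result.Count}");
    }
}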

Strategies

  1. Foreach Doubling Strategy:

This strategy utilizes a foreach loop to iterate through the input list and double each element. The resulting doubled numbers are stored in a new list, which is then returned. This approach offers simplicity in syntax and is easy to read.

  • Compile Time: 0.156s
  • Execution Time: 0.016s
  • Memory Usage: 96 KB
  • CPU Time: 0.016s
public List<int> ForeachDoublingStrategy(List<int> numbers)
{
    // Walk the input with foreach, doubling each element into a new list.
    List<int> doubledNumbers = new List<int>();
    foreach (int num in numbers)
    {
        doubledNumbers.Add(num * 2);
    }
    return doubledNumbers;
}

  2. LINQ Doubling Strategy:

In this strategy, we leverage LINQ's Select method, which provides a concise way to perform the doubling operation on each element of the input list. Select returns a lazily evaluated IEnumerable<int>, which is then materialized into a List<int> by the ToList() call.

  • Compile Time: 0.172s
  • Execution Time: 0.016s
  • Memory Usage: 152 KB
  • CPU Time: 0.016s
public List<int> LinqDoublingStrategy(List<int> numbers)
{
    // Project each element to its double, then materialize the lazy query into a list.
    return numbers.Select(num => num * 2).ToList();
}

  3. For Loop Doubling Strategy:

This strategy employs a traditional for loop to iterate through the input list and double each element. The doubled numbers are added to a new list, which is returned after the loop completes.

  • Compile Time: 0.172s
  • Execution Time: 0.016s
  • Memory Usage: 104 KB
  • CPU Time: 0.031s
public List<int> ForLoopDoublingStrategy(List<int> numbers)
{
    // Index through the input with a classic for loop, doubling each element.
    List<int> doubledNumbers = new List<int>();
    for (int i = 0; i < numbers.Count; i++)
    {
        doubledNumbers.Add(numbers[i] * 2);
    }
    return doubledNumbers;
}


Each of the strategies above was run against the same sample input, and each effectively doubles the elements in the input list of integers.
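
To make that equivalence concrete, here is a small usage sketch. It assumes the three methods above are in scope (e.g. defined in the same class); the sample values are illustrative.

var input = new List<int> { 1, 2, 3, 4, 5 };

List<int> a = ForeachDoublingStrategy(input);
List<int> b = LinqDoublingStrategy(input);
List<int> c = ForLoopDoublingStrategy(input);

Console.WriteLine(string.Join(", ", a));                     // 2, 4, 6, 8, 10
Console.WriteLine(a.SequenceEqual(b) && b.SequenceEqual(c)); // True (requires using System.Linq)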

The Foreach and For Loop strategies offer similar performance, with Foreach consuming slightly less memory. On the other hand, the LINQ-based Select approach uses a bit more memory but remains concise and easy to maintain.

To decide on the best strategy, consider the specific requirements of the application. If memory usage is a significant concern, especially for large datasets, the Foreach or For Loop strategy might be more suitable: both avoid the extra enumerator and delegate allocations that the LINQ pipeline introduces.
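
One related tweak, not benchmarked in this article: when the output size is known up front, pre-sizing the list avoids the internal array growth (and copying) that List<int> performs as items are added. A minimal sketch of the idea:

public List<int> PreSizedForeachDoublingStrategy(List<int> numbers)
{
    // Passing the final count to the constructor allocates the backing array once,
    // so the list never has to resize while items are added.
    List<int> doubledNumbers = new List<int>(numbers.Count);
    foreach (int num in numbers)
    {
        doubledNumbers.Add(num * 2);
    }
    return doubledNumbers;
}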

On the other hand, if code readability, simplicity, and ease of maintenance are higher priorities, the LINQ-based approach could be the better choice. The Select method offers an elegant way to perform the doubling operation, making the code more expressive and easier to manage.

Ultimately, it is essential to balance code readability, simplicity, and memory efficiency based on the specific needs of the application. For critical scenarios with memory constraints, profiling different strategies with representative data can help determine the most appropriate choice. Keep in mind that different situations may warrant different strategies, so it is crucial to assess the trade-offs and select the best approach accordingly.

Profiler used -


Top comments (3)

Anthony Fung

Thanks for sharing.

As a general rule, I'd suggest choosing the approach most maintainable for the team. The differences in performance between all three strategies don't appear that great, and the results may depend on things like background processes running at the time and perhaps caching/branch prediction.

Depending on the use case, if we are using large data sets, I'd probably recommend using Select but not calling .ToList() on it. LINQ queries are evaluated lazily, i.e. the next item is only computed when required (there's a rough sketch after this list). The consequences of that are:

  • We don't need to compute the whole list at once, which might cause a noticeable performance dip depending on the data set size
  • We don't need to store the entire data set in memory.
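
A rough sketch of the idea (the method name and consumer are just for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

// Returns a lazy sequence: nothing is computed until the caller iterates,
// and only one doubled value needs to exist at a time.
static IEnumerable<int> LazyDoublingStrategy(IEnumerable<int> numbers) =>
    numbers.Select(num => num * 2);

var numbers = Enumerable.Range(1, 1_000_000);

// Each item is transformed on demand as the loop pulls it; the full
// doubled collection is never materialized in memory.
foreach (int doubled in LazyDoublingStrategy(numbers))
{
    Console.WriteLine(doubled);
}
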
ibenge Uforo

I like the take on this. While sticking with what works for the team can offer stability and consistency, it may not always be the most efficient or optimal choice.

Balancing the team's preferences with the overall goals of the business is a valuable alternative. It involves assessing the impact of different strategies on performance, scalability, and long-term sustainability. For instance, if an algorithm currently operates in O(n^2) time complexity, and a new approach reduces it to half the time (O(n)), it would be prudent to opt for the latter. In the long run, the improved performance will prevent potential performance bottlenecks and enhance the application's responsiveness and efficiency.

Hence this bit

For critical scenarios with memory constraints, profiling different strategies with representative data can help determine the most appropriate choice

With regards to

I'd probably recommend using Select but not calling .ToList() on it.

What do you recommend?

Anthony Fung • Edited

For instance, if an algorithm currently operates in O(n^2) time complexity, and a new approach reduces it to half the time (O(n)), it would be prudent to opt for the latter.

Sorry - that's not half: the first is quadratic; the second is linear. With smaller numbers it may not matter so much (e.g. 4 vs 2), but with larger numbers it's much more significant (e.g. 100 vs 10).

If it's a choice between those two, I'd almost always recommend the linear option. If it's a choice between O(n) and O(2n), I'd say weigh up the pros and cons. Depending on the use case, it may not matter, and (again depending on the algorithm) the cost of maintainability may outweigh performance. For example, with smaller data sets in a CRUD API (and finite time), it may be worth spending more of that time making sure everything's async, as blocking calls are more likely to bring the API down in a highly concurrent environment.

What do you recommend?

Depending on the use case, I'd recommend just using it as an IEnumerable - transform each item as you need it and send it on its way. But if you need random access or multiple passes, IEnumerable will almost certainly make things worse; it may be worth transforming it to a List, a HashSet, or a Dictionary. It's not really possible to make a recommendation without context though.
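
A quick sketch of those options (sample values are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

var numbers = Enumerable.Range(1, 10).ToList();

// Streaming: fine for a single forward pass; items are computed on demand.
IEnumerable<int> doubled = numbers.Select(n => n * 2);

// Materialized: pay the memory cost once if you need random access or multiple passes.
List<int> list = doubled.ToList();                                   // indexable
HashSet<int> set = doubled.ToHashSet();                              // fast membership checks
Dictionary<int, int> map = numbers.ToDictionary(n => n, n => n * 2); // keyed lookup

Console.WriteLine($"{list[3]}, {set.Contains(8)}, {map[5]}");        // 8, True, 10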