
Understanding the Whys, Whats, and Whens of ValueTask


Stephen Toub - MSFT

November 7th, 2018

The .NET Framework 4 saw the introduction of the  System.Threading.Tasks  namespace, and with it the  Task  class. This type and the derived  Task<TResult>  have long since become a staple of .NET programming, key aspects of the asynchronous programming model introduced with C# 5 and its  async  /  await  keywords. In this post, I’ll cover the newer  ValueTask / ValueTask<TResult>  types, which were introduced to help improve asynchronous performance in common use cases where decreased allocation overhead is important.

Task  serves multiple purposes, but at its core it’s a “promise”, an object that represents the eventual completion of some operation. You initiate an operation and get back a  Task  for it, and that  Task  will complete when the operation completes, which may happen synchronously as part of initiating the operation (e.g. accessing some data that was already buffered), asynchronously but complete by the time you get back the  Task  (e.g. accessing some data that wasn’t yet buffered but that was very fast to access), or asynchronously and complete after you’re already holding the  Task  (e.g. accessing some data from across a network). Since operations might complete asynchronously, you either need to block waiting for the results (which often defeats the purpose of the operation having been asynchronous to begin with) or you need to supply a callback that’ll be invoked when the operation completes. In .NET 4, providing such a callback was achieved via  ContinueWith  methods on the  Task , which explicitly exposed the callback model by accepting a delegate to invoke when the  Task  completed:
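The code sample here did not survive extraction; a sketch of the kind of `ContinueWith`-based consumption the text describes, with `SomeOperationAsync`, `Handle`, and the other names being hypothetical placeholders:

```csharp
// .NET 4-era callback model: explicitly attach a continuation delegate
// that is invoked when the Task completes, however it completes.
SomeOperationAsync().ContinueWith(task =>
{
    switch (task.Status)
    {
        case TaskStatus.RanToCompletion:
            Handle(task.Result);
            break;
        case TaskStatus.Faulted:
            HandleError(task.Exception.InnerException);
            break;
        case TaskStatus.Canceled:
            HandleCancellation();
            break;
    }
});
```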

But with the .NET Framework 4.5 and C# 5,  Task s could simply be  await ed, making it easy to consume the results of an asynchronous operation, and with the generated code being able to optimize all of the aforementioned cases, correctly handling things regardless of whether the operation completes synchronously, completes asynchronously quickly, or completes asynchronously after already (implicitly) providing a callback:
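The original snippet is missing; a sketch of the equivalent `await`-based consumption (`SomeOperationAsync` and `UseResult` are hypothetical):

```csharp
// await correctly handles all three cases: already completed,
// completed asynchronously but quickly, and completing later
// via an implicitly supplied callback.
TResult result = await SomeOperationAsync();
UseResult(result);
```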

Task  as a class is very flexible and has resulting benefits. For example, you can await it multiple times, by any number of consumers concurrently. You can store one into a dictionary for any number of subsequent consumers to await in the future, which allows it to be used as a cache for asynchronous results. You can block waiting for one to complete should the scenario require that. And you can write and consume a large variety of operations over tasks (sometimes referred to as “combinators”), such as a “when any” operation that asynchronously waits for the first to complete.

However, that flexibility is not needed for the most common case: simply invoking an asynchronous operation and awaiting its resulting task:
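The accompanying snippet was lost; the common pattern the text refers to looks like this (names hypothetical):

```csharp
// Invoke and await exactly once; the task is never stored, never
// awaited again, never blocked on, and never passed to combinators.
int b = await stream.ReadNextByteAsync();
```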

In such usage, we don’t need to be able to await the task multiple times. We don’t need to be able to handle concurrent awaits. We don’t need to be able to handle synchronous blocking. We don’t need to write combinators. We simply need to be able to await the resulting promise of the asynchronous operation. This is, after all, how we write synchronous code (e.g.  TResult result = SomeOperation(); ), and it naturally translates to the world of  async  /  await .

Further,  Task  does have a potential downside, in particular for scenarios where instances are created  a lot  and where high-throughput and performance is a key concern:  Task  is a class. As a class, that means that any operation which needs to create one needs to allocate an object, and the more objects that are allocated, the more work the garbage collector (GC) needs to do, and the more resources we spend on it that could be spent doing other things.

The runtime and core libraries mitigate this in many situations. For example, if you write a method like the following:
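The example method did not survive extraction; a sketch consistent with the description that follows (a `void`-equivalent async method that usually completes synchronously; `_buffer`, `_bufferedCount`, and `FlushAsync` are hypothetical):

```csharp
public async Task WriteAsync(byte value)
{
    if (_bufferedCount == _buffer.Length)
    {
        // Only this uncommon path completes asynchronously.
        await FlushAsync();
    }
    _buffer[_bufferedCount++] = value;
}
```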

in the common case there will be space available in the buffer and the operation will complete synchronously. When it does, there’s nothing special about the  Task  that needs to be returned, since there’s no return value: this is the  Task -based equivalent of a  void -returning synchronous method. Thus, the runtime can simply cache a single non-generic  Task  and use that over and over again as the result task for any  async Task  method that completes synchronously (that cached singleton is exposed via `Task.CompletedTask`). Or for example, if you write:
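The second example method is also missing; a sketch matching the `Task<bool>` description below (names hypothetical):

```csharp
public async Task<bool> MoveNextAsync()
{
    if (_bufferedCount == 0)
    {
        // Only when no data is buffered might this complete asynchronously.
        await FillBufferAsync();
    }
    return _bufferedCount > 0;
}
```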

in the common case, we expect there to be some data buffered, in which case this method simply checks  _bufferedCount , sees that it’s larger than  0 , and returns  true ; only if there’s currently no buffered data does it need to perform an operation that might complete asynchronously. And since there are only two possible  Boolean  results ( true  and  false ), there are only two possible  Task<bool>  objects needed to represent all possible result values, and so the runtime is able to cache two such objects and simply return a cached  Task<bool>  with a  Result  of  true , avoiding the need to allocate. Only if the operation completes asynchronously does the method then need to allocate a new  Task<bool> , because it needs to hand back the object to the caller before it knows what the result of the operation will be, and needs to have a unique object into which it can store the result when the operation does complete.

The runtime maintains a small such cache for other types as well, but it’s not feasible to cache everything. For example, a method like:
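The example is missing from the extracted text; a sketch of an `Int32`-returning method of the kind described (names hypothetical):

```csharp
public async Task<int> ReadNextByteAsync()
{
    if (_bufferedCount == 0)
    {
        await FillBufferAsync();
        if (_bufferedCount == 0)
        {
            return -1; // end of data
        }
    }
    _bufferedCount--;
    return _buffer[_position++];
}
```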

will also frequently complete synchronously. But unlike the  Boolean  case, this method returns an  Int32  value, which has ~4 billion possible results, and caching a  Task<int>  for all such cases would consume potentially hundreds of gigabytes of memory. The runtime does maintain a small cache for  Task<int> , but only for a few small result values, so for example if this completes synchronously (there’s data in the buffer) with a value like 4, it’ll end up using a cached task, but if it completes synchronously with a value like 42, it’ll end up allocating a new  Task<int> , akin to calling  Task.FromResult(42) .

Many library implementations attempt to mitigate this further by maintaining their own cache as well. For example, the  MemoryStream.ReadAsync  overload introduced in the .NET Framework 4.5 always completes synchronously, since it’s just reading data from memory.  ReadAsync  returns a  Task<int> , where the  Int32  result represents the number of bytes read.  ReadAsync  is often used in a loop, often with the number of bytes requested the same on each call, and often with  ReadAsync  able to fully fulfill that request. Thus, it’s common for repeated calls to  ReadAsync  to return a  Task<int>  synchronously with the same result as it did on the previous call. As such,  MemoryStream  maintains a cache of a single task, the last one it returned successfully. Then on a subsequent call, if the new result matches that of its cached  Task<int> , it just returns the cached one again; otherwise, it uses  Task.FromResult  to create a new one, stores that as its new cached task, and returns it.

Even so, there are many cases where operations complete synchronously and are forced to allocate a  Task<TResult>  to hand back.

ValueTask&lt;TResult&gt; and synchronous completion

All of this motivated the introduction of a new type in .NET Core 2.0 and made available for previous .NET releases via a  System.Threading.Tasks.Extensions  NuGet package:  ValueTask<TResult> .

ValueTask<TResult>  was introduced in .NET Core 2.0 as a struct capable of wrapping either a  TResult  or a  Task<TResult> . This means it can be returned from an async method, and if that method completes synchronously and successfully, nothing need be allocated: we can simply initialize this  ValueTask<TResult>  struct with the  TResult  and return that. Only if the method completes asynchronously does a  Task<TResult>  need to be allocated, with the  ValueTask<TResult>  created to wrap that instance (to minimize the size of  ValueTask<TResult>  and to optimize for the success path, an async method that faults with an unhandled exception will also allocate a  Task<TResult> , so that the  ValueTask<TResult>  can simply wrap that  Task<TResult>  rather than always having to carry around an additional field to store an  Exception ).

With that, a method like  MemoryStream.ReadAsync  that instead returns a  ValueTask<int>  need not be concerned with caching, and can instead be written with code like:
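The snippet here was lost; a sketch of what such a caching-free, `ValueTask<int>`-returning `ReadAsync` could look like for a stream whose reads always complete synchronously:

```csharp
public override ValueTask<int> ReadAsync(
    Memory<byte> buffer, CancellationToken cancellationToken = default)
{
    try
    {
        // Completes synchronously: wrap the result directly,
        // with no Task<int> allocation.
        int bytesRead = Read(buffer.Span);
        return new ValueTask<int>(bytesRead);
    }
    catch (Exception e)
    {
        // Failure path: wrap a faulted Task<int> instead.
        return new ValueTask<int>(Task.FromException<int>(e));
    }
}
```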

ValueTask&lt;TResult&gt; and asynchronous completion

Being able to write an async method that can complete synchronously without incurring an additional allocation for the result type is a big win. This is why  ValueTask<TResult>  was added to .NET Core 2.0, and why new methods that are expected to be used on hot paths are now defined to return  ValueTask<TResult>  instead of  Task<TResult> . For example, when we added a new  ReadAsync  overload to  Stream  in .NET Core 2.1 in order to be able to pass in a  Memory<byte>  instead of a  byte[] , we made the return type of that method be  ValueTask<int> . That way, Streams (which very often have a  ReadAsync  method that completes synchronously, as in the earlier  MemoryStream  example) can now be used with significantly less allocation.

However, when working on very high-throughput services, we still care about avoiding as much allocation as possible, and that means thinking about reducing and removing allocations associated with asynchronous completion paths as well.

With the  await  model, for any operation that completes asynchronously we need to be able to hand back an object that represents the eventual completion of the operation: the caller needs to be able to hand off a callback that’ll be invoked when the operation completes, and that requires having a unique object on the heap that can serve as the conduit for this specific operation. It doesn’t, however, imply anything about whether that object can be reused once an operation completes. If the object can be reused, then an API can maintain a cache of one or more such objects, and reuse them for serialized operations, meaning it can’t use the same object for multiple in-flight async operations, but it can reuse an object for non-concurrent accesses.

In .NET Core 2.1,  ValueTask<TResult>  was augmented to support such pooling and reuse. Rather than just being able to wrap a  TResult  or a  Task<TResult> , a new interface was introduced,  IValueTaskSource<TResult> , and  ValueTask<TResult>  was augmented to be able to wrap that as well.  IValueTaskSource<TResult>  provides the core support necessary to represent an asynchronous operation to  ValueTask<TResult>  in a similar manner to how  Task<TResult>  does:
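The interface definition did not survive extraction; its shape, as defined in `System.Threading.Tasks.Sources`, is:

```csharp
public interface IValueTaskSource<out TResult>
{
    // Reports whether the operation is pending, succeeded, faulted, or canceled.
    ValueTaskSourceStatus GetStatus(short token);

    // Hooks up the continuation to invoke when the operation completes.
    void OnCompleted(Action<object> continuation, object state, short token,
                     ValueTaskSourceOnCompletedFlags flags);

    // Retrieves the result, or propagates any exception that occurred.
    TResult GetResult(short token);
}
```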

GetStatus  is used to satisfy properties like  ValueTask<TResult>.IsCompleted , returning an indication of whether the async operation is still pending or whether it’s completed and how (success or not).  OnCompleted  is used by the  ValueTask<TResult> ‘s awaiter to hook up the callback necessary to continue execution from an  await  when the operation completes. And  GetResult  is used to retrieve the result of the operation, such that after the operation completes, the awaiter can either get the  TResult  or propagate any exception that may have occurred.

Most developers should never have a need to see this interface: methods simply hand back a  ValueTask<TResult>  that may have been constructed to wrap an instance of this interface, and the consumer is none-the-wiser. The interface is primarily there so that developers of performance-focused APIs are able to avoid allocation.

There are several such APIs in .NET Core 2.1. The most notable are  Socket.ReceiveAsync  and  Socket.SendAsync , with new overloads added in 2.1, e.g.
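The signature shown here was lost in extraction; the `Memory<byte>`-based overload added in .NET Core 2.1 is:

```csharp
public ValueTask<int> ReceiveAsync(
    Memory<byte> buffer, SocketFlags socketFlags,
    CancellationToken cancellationToken = default);
```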

This overload returns a  ValueTask<int> . If the operation completes synchronously, it can simply construct a  ValueTask<int>  with the appropriate result, e.g.
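The snippet is missing; for the synchronous-completion path it amounts to something like:

```csharp
// Data was already available: wrap the result value directly,
// with no allocation. (bytesTransferred is a placeholder for the
// result of the synchronously completed receive.)
return new ValueTask<int>(bytesTransferred);
```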

If it completes asynchronously, it can use a pooled object that implements this interface:
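That snippet is also missing; a sketch of the asynchronous-completion path, where `pooledSource` stands in for the implementation's cached `IValueTaskSource<int>` object:

```csharp
// Operation is pending: hand back a ValueTask<int> wrapping a pooled
// IValueTaskSource<int> rather than allocating a new Task<int>.
return new ValueTask<int>(pooledSource, pooledSource.Token);
```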

The  Socket  implementation maintains one such pooled object for receives and one for sends, such that as long as no more than one of each is outstanding at a time, these overloads will end up being allocation-free, even if they complete operations asynchronously. That’s then further surfaced through  NetworkStream . For example, in .NET Core 2.1,  Stream  exposes:
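The `Stream` signature referenced here was lost in extraction; the method added in .NET Core 2.1 is:

```csharp
public virtual ValueTask<int> ReadAsync(
    Memory<byte> buffer, CancellationToken cancellationToken = default);
```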

which  NetworkStream  overrides.  NetworkStream.ReadAsync  just delegates to  Socket.ReceiveAsync , so the wins from  Socket  translate to  NetworkStream , and  NetworkStream.ReadAsync  effectively becomes allocation-free as well.

Non-generic ValueTask

When  ValueTask<TResult>  was introduced in .NET Core 2.0, it was purely about optimizing for the synchronous completion case, in order to avoid having to allocate a  Task<TResult>  to store the  TResult  already available. That also meant that a non-generic  ValueTask  wasn’t necessary: for the synchronous completion case, the  Task.CompletedTask  singleton could just be returned from a  Task -returning method, and was returned implicitly by the runtime for  async Task  methods.

With the advent of enabling even asynchronous completions to be allocation-free, however, a non-generic  ValueTask  becomes relevant again. Thus, in .NET Core 2.1 we also introduced the non-generic  ValueTask  and  IValueTaskSource . These provide direct counterparts to the generic versions, usable in similar ways, just with a void result.

Implementing IValueTaskSource / IValueTaskSource&lt;TResult&gt;

Most developers should never need to implement these interfaces. They’re also not particularly easy to implement. If you decide you need to, there are several implementations internal to .NET Core 2.1 that can serve as a reference, e.g.

  • AwaitableSocketAsyncEventArgs
  • AsyncOperation<TResult>
  • DefaultPipeReader

To make this easier for developers that do want to do it, in .NET Core 3.0 we plan to introduce all of this logic encapsulated into a  ManualResetValueTaskSourceCore<TResult>  type, a struct that can be encapsulated into another object that implements  IValueTaskSource<TResult>  and/or  IValueTaskSource , with that wrapper type simply delegating to the struct for the bulk of its implementation. You can learn more about this in the associated issue in the dotnet/corefx repo at  https://github.com/dotnet/corefx/issues/32664 .

Valid consumption patterns for ValueTasks

From a surface area perspective,  ValueTask  and  ValueTask<TResult>  are much more limited than  Task  and  Task<TResult> . That’s ok, even desirable, as the primary method for consumption is meant to simply be  await ing them.

However, because  ValueTask  and  ValueTask<TResult>  may wrap reusable objects, there are actually significant constraints on their consumption when compared with  Task  and  Task<TResult> , should someone veer off the desired path of just  await ing them. In general, the following operations should  never  be performed on a  ValueTask  /  ValueTask<TResult> :

  • Awaiting a  ValueTask  /  ValueTask<TResult>  multiple times.  The underlying object may have been recycled already and be in use by another operation. In contrast, a  Task  /  Task<TResult>  will never transition from a complete to incomplete state, so you can await it as many times as you need to, and will always get the same answer every time.
  • Awaiting a  ValueTask  /  ValueTask<TResult>  concurrently.  The underlying object expects to work with only a single callback from a single consumer at a time, and attempting to await it concurrently could easily introduce race conditions and subtle program errors. It’s also just a more specific case of the above bad operation: “awaiting a  ValueTask  /  ValueTask<TResult>  multiple times.” In contrast,  Task  /  Task<TResult>  do support any number of concurrent awaits.
  • Using  .GetAwaiter().GetResult()  when the operation hasn’t yet completed.  The  IValueTaskSource  /  IValueTaskSource<TResult>  implementation need not support blocking until the operation completes, and likely doesn’t, so such an operation is inherently a race condition and is unlikely to behave the way the caller intends. In contrast,  Task  /  Task<TResult>  do enable this, blocking the caller until the task completes.

If you have a  ValueTask  or a  ValueTask<TResult>  and you need to do one of these things, you should use  .AsTask()  to get a  Task  /  Task<TResult>  and then operate on that resulting task object. After that point, you should never interact with that  ValueTask  /  ValueTask<TResult>  again.

The short rule is this:  with a  ValueTask  or a  ValueTask<TResult> , you should either  await  it directly (optionally with  .ConfigureAwait(false) ) or call  AsTask()  on it directly, and then never use it again, e.g.
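The example block here did not survive extraction; a sketch of valid and invalid consumption patterns, with `SomeValueTaskReturningMethodAsync` as a hypothetical `ValueTask<int>`-returning method:

```csharp
// GOOD: await it, exactly once.
int result = await SomeValueTaskReturningMethodAsync();

// GOOD: await with ConfigureAwait(false).
int result2 = await SomeValueTaskReturningMethodAsync().ConfigureAwait(false);

// GOOD: convert to a Task<int> and use that from then on.
Task<int> t = SomeValueTaskReturningMethodAsync().AsTask();

// BAD: awaits the same ValueTask multiple times.
ValueTask<int> vt1 = SomeValueTaskReturningMethodAsync();
int r1 = await vt1;
int r2 = await vt1;

// BAD: awaits the same ValueTask concurrently.
ValueTask<int> vt2 = SomeValueTaskReturningMethodAsync();
_ = Task.Run(async () => await vt2);
_ = Task.Run(async () => await vt2);

// BAD: blocks for a result before the operation is known to be done.
ValueTask<int> vt3 = SomeValueTaskReturningMethodAsync();
int r3 = vt3.GetAwaiter().GetResult();
```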

There is one additional advanced pattern that some developers may choose to use, hopefully only after measuring carefully and finding it provides meaningful benefit. Specifically,  ValueTask  /  ValueTask<TResult>  do expose some properties that speak to the current state of the operation, for example the  IsCompleted  property returning  false  if the operation hasn’t yet completed, and returning  true  if it has (meaning it’s no longer running and may have completed successfully or otherwise), and the  IsCompletedSuccessfully  property returning  true  if and only if it’s completed and completed successfully (meaning attempting to await it or access its result will not result in an exception being thrown). For very hot paths where a developer wants to, for example, avoid some additional costs only necessary on the asynchronous path, these properties can be checked prior to performing one of the operations that essentially invalidates the  ValueTask  /  ValueTask<TResult> , e.g.  await ,  .AsTask() . For example, in the  SocketsHttpHandler  implementation in .NET Core 2.1, the code issues a read on a connection, which returns a  ValueTask<int> . If that operation completed synchronously, then we don’t need to worry about being able to cancel the operation. But if it completes asynchronously, then while it’s running we want to hook up cancellation such that a cancellation request will tear down the connection. As this is a very hot code path, and as profiling showed it to make a small difference, the code is structured essentially as follows:
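The code for this pattern is missing from the extracted text; a sketch of its essential structure, with `_connection.ReadAsync` and `RegisterCancellation` as hypothetical stand-ins for the actual `SocketsHttpHandler` internals:

```csharp
int bytesRead;
ValueTask<int> readTask = _connection.ReadAsync(buffer);
if (readTask.IsCompletedSuccessfully)
{
    // Completed synchronously: skip the cost of hooking up cancellation.
    bytesRead = readTask.Result;
}
else
{
    // Completing asynchronously: register cancellation for the duration
    // of the await, then consume the ValueTask exactly once.
    using (RegisterCancellation())
    {
        bytesRead = await readTask;
    }
}
```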

This pattern is acceptable, because the  ValueTask<int>  isn’t used again after either  .Result  is accessed or it’s awaited.

Should every new asynchronous API return ValueTask / ValueTask<TResult>?

In short, no: the default choice is still  Task  / Task<TResult> .

As highlighted above,  Task  and  Task<TResult>  are easier to use correctly than are  ValueTask  and  ValueTask<TResult> , and so unless the performance implications outweigh the usability implications,  Task  /  Task<TResult>  are still preferred. There are also some minor costs associated with returning a  ValueTask<TResult>  instead of a  Task<TResult> , e.g. in microbenchmarks it’s a bit faster to  await  a  Task<TResult>  than it is to  await  a  ValueTask<TResult> , so if you can use cached tasks (e.g. your API returns  Task  or  Task<bool> ), you might be better off performance-wise sticking with  Task  and  Task<bool> .  ValueTask  /  ValueTask<TResult>  are also multiple words in size, and so when these are  await ed and a field for them is stored in a calling async method’s state machine, they’ll take up a little more space in that state machine object.

However,  ValueTask  /  ValueTask<TResult>  are great choices when a) you expect consumers of your API to only  await  them directly, b) allocation-related overhead is important to avoid for your API, and c) either you expect synchronous completion to be a very common case, or you’re able to effectively pool objects for use with asynchronous completion. When adding abstract, virtual, or interface methods, you also need to consider whether these situations will exist for overrides/implementations of that method.

What’s Next for ValueTask and ValueTask<TResult>?

For the core .NET libraries, we’ll continue to see new  Task  /  Task<TResult> -returning APIs added, but we’ll also see new  ValueTask / ValueTask<TResult> -returning APIs added where appropriate. One key example of the latter is for the new  IAsyncEnumerator<T>  support planned for .NET Core 3.0.  IEnumerator<T>  exposes a  bool -returning  MoveNext  method, and the asynchronous  IAsyncEnumerator<T>  counterpart exposes a  MoveNextAsync method. When we initially started designing this feature, we thought of  MoveNextAsync  as returning a  Task<bool> , which could be made very efficient via cached tasks for the common case of  MoveNextAsync  completing synchronously. However, given how wide-reaching we expect async enumerables to be, and given that they’re based on interfaces that could end up with many different implementations (some of which may care deeply about performance and allocations), and given that the vast, vast majority of consumption will be through  await foreach  language support, we switched to having  MoveNextAsync  return a  ValueTask<bool> . This allows for the synchronous completion case to be fast but also for optimized implementations to use reusable objects to make the asynchronous completion case low-allocation as well. In fact, the C# compiler takes advantage of this when implementing async iterators to make async iterators as allocation-free as possible.


Stephen Toub - MSFT Partner Software Engineer, .NET


23 comments

Comments are closed.


I think there is a bug with the code snippets you put in the article! I see HTML tags like <span>


Can you point to where exactly?


Thanks for a good article. There is one thing I don’t understand. Inside our services we have a cache layer, so we use ValueTask so it doesn’t create threads when we return from the cache. The question is: when using Task.WhenAll we have to use the .AsTask extension, but will this create a thread? Is the code below best practice?

var routeTask = RouteService.GetByPathAsync(path).AsTask();
var routePropertiesTask = RouteService.GetPropertyBag(path).GetAllValuesAsync().AsTask();
var businessProfileTask = BusinessProfileService.GetByPathAsync(path).AsTask();
await Task.WhenAll(routeTask, routePropertiesTask, businessProfileTask);
var route = await routeTask;
var routeProperties = await routePropertiesTask;
var businessProfile = await businessProfileTask;

Using ValueTask vs Task doesn’t impact whether an additional thread is used; just the act of returning Task doesn’t cause an additional thread to be used.  The only difference in this regard is whether there’s an allocation.  Using .AsTask() doesn’t cause an additional thread to be created.


Using Task or ValueTask when you return from cache can’t implicitly create a thread. A good resource is available at https://blog.stephencleary.com/2013/11/there-is-no-thread.html


I read this article once a month. Lol. Thanks for this. For clarity, why does ValueTask<bool> have lower allocation than a cached Task<bool> for IAsyncEnumerable async scenarios? Is it due to data locality, or because the Task’s state machine will be allocated while the ValueTask implementation can reuse a pooled object?

There’s no allocation difference between a cached Task<bool> and a ValueTask<bool> if MoveNext completes synchronously.  But if it completes asynchronously, if it returned a Task<bool>, we’d need to allocate a Task<bool>.  With ValueTask<bool>, we can create one that wraps the enumerator itself, which implements IValueTaskSource<bool>, which means regardless of whether MoveNext completes synchronously or asynchronously, there’s no allocation for its return.  That means that for the entire lifetime of the async enumerable, there’s just the one allocation of overhead incurred for the whole enumeration: the enumerable object, which is reused as the enumerator, which is also reused as the IValueTaskSource<bool> backing the MoveNext ValueTask<bool>, which is also reused as the IValueTaskSource backing the ValueTask returned from DisposeAsync.


Sorry, but I find this an extremely complicated addition to an already overly complicated and disruptive API. Not only do we now have to account for Task, but now we have ValueTask and have to know the difference between the two. What happens in the case of simply eliding the ValueTask? With Task, it is designed to simply return the Task without the use of async/await. This appears to be gone now with ValueTask, and you are forced to await without simply returning a Task — or if you do, you are forced to allocate. Plus now we have IAsyncDisposable and IAsyncEnumerable, what’s next, ObjectAsync? ALL THE ASYNC AND ASYNC ISN’T EVEN A WORD!!! We can do better: https://developercommunity.visualstudio.com/idea/583945/improve-asynchronous-programming-model.html

> have to know the difference between the two

Not really.  If you’re writing an API, just keep using Task: if you need even more performance and care to optimize further, you can then consider employing ValueTask.  If you’re consuming an API, just await it: it makes no difference whether you’re awaiting a Task or a ValueTask.

> What happens in the case of simply eliding the ValueTask?  In Task, it is designed to simply return the Task without the use of the async/await.  This appears to be gone now with ValueTask

I’m not understanding.  What’s gone?  If you have a ValueTask and you want to return it, just return it.

> or if you do you are forced to allocate

Forced to allocate what?

I appreciate your engagement here, @Stephen. It is much respected and welcomed.

> Forced to allocate what?

Forced to allocate a new Task/object/reference.

> If you’re consuming an API, just await it

What if we do not want to await it, was the point I was making. There are a lot of gotchas to awaiting, as outlined in the many comments of evidence in my Vote (which got 131 votes in UserVoice, BTW — meaning that I am not simply speaking for myself here). There is a collective assumption of awaiting, but what is overlooked, or perhaps forgotten, is that the system is, of course, also natively designed to not await a Task — that is, to elide await/async and reduce the hidden magic machinery that is produced by the compiler. That is one of many points of confusion now. Consider: do you want asynchronous or synchronous? If asynchronous, do you want to elide async/await or not? Further, do you want Task or ValueTask? Lots of decisions and required knowledge here, and the ask is to consider reducing the complexity in a future version of .NET.

> if you need even more performance and care to optimize further, you can then consider employing ValueTask

Who wants to make slow software? I know you state that we should keep using Task — and I want to believe you! However, all the new APIs and new code being released in the wild now are running counter to this by using ValueTask. I hope you can understand the confusion here. In short, the exception being raised here (pardon the pun) is that we now have an incredibly fragmented ecosystem between synchronous and asynchronous APIs. The asynchronous APIs are now further being fragmented with ValueTask. This fragmentation is a clear sign that the APIs as initially designed did not accurately model the problem space from the outset. This is, of course, completely understandable as it is an incredibly challenging problem to solve.

The request is that perhaps going forward for a future version of .NET, we can somehow reconcile all of these identified friction points to create a much more congruent, cohesive API that improves the developer experience, reduces the decisions/confusion, and (perhaps most importantly) returns elegance/aesthetics to our API design surfaces (and resulting ecosystem). Thank you in advance for any consideration and further dialogue.

> Forced to allocate a new Task/object/reference.

I do not understand.  You wrote “This appears to be gone now with ValueTask and you are forced to await without simply returning a Task — or if you do you are forced to allocate.”  You can absolutely just return a ValueTask.  It’s no different than Task in that regard.

> What if we do not want to await it, was the point I was making.

The 99.9% use case for all of these APIs is to await it.  If you’re in the minority case where you don’t want to and you want to do something more complicated with it, then call .AsTask() on it, and now you’ve got your Task that is more flexible.  This is no different in my mind than an API returning an `IEnumerable<T>`; if you just want to iterate through it, you can do so, but if you want to do more complicated things, enumerating it multiple times and being guaranteed to get the same results every time, caching it for later use, etc., you can call ToArray or ToList to get a more functional copy.

> that is, to elide await/async and reduce the hidden magic machinery that is produced by the compiler

All of that exists for both Task and ValueTask.  I don’t understand the point you’re trying to make.  When you await either a Task or a ValueTask, if they’ve already completed, the code will just continue executing synchronously, no callbacks, no yielding out of the MoveNext state machine method.

>  Consider: do you want asynchronous or synchronous?

Yes, that is a fundamental decision you have to make when designing an API.  That was true long before async/await and Task, and it continues to be true.

> If asynchronous, do you want to elide async/await or not?

I don’t understand this, nor how it has any bearing on Task vs ValueTask.

> do you want to Task or ValueTask?

I shared my rubric.

> Who wants to make slow software?

There are tons of possible “optimizations” one can make in their code that have absolutely zero observable impact, and each of those “optimizations” often entails more complicated code that takes up more space, is more difficult to debug, is more difficult to maintain, and so on.  The observable difference between Task and ValueTask from a performance perspective in the majority case is non-existent.  It’s only for when you’re developing something that’s going to be used on a critical hot path over and over and over again, where an extra allocation matters.

> However, all the new APIs and new code being released in the wild now are running counter to this by using ValueTask.

a) That isn’t true; there are new APIs being shipped in .NET Core that return Task instead of ValueTask.

b) The core libraries in .NET are special.  The functionality is all library code that may be used in a wide variety of situations, including cases where the functionality is in fact on very hot paths.  It’s much more likely that code in the core libraries benefit from ValueTask than does code outside of the core libraries. Further, many of the places you see ValueTask being exposed (e.g. IAsyncDisposable.DisposeAsync, Stream.ReadAsync) are interfaces / abstract / virtual methods that are meant to be overridden by 3rd parties, such that we can’t predict whether the situations will necessitate the utmost in performance, and in these cases, we’ve opted to enable that extra inch of perf in exchange for the usability drawbacks, which for these methods are generally minimal exactly because of how we expect them to be used (e.g. it would be very strange to see the result of DisposeAsync stored into a collection for use by many other consumers later on… it’s just not the use case).

I fundamentally disagree with a variety of your conclusions.  We may need to agree to disagree.  Thanks.

> The 99.9% use case for all of these APIs is to await it.

Yet it is also designed to not await. Hence, fragmentation and confusion. All the guidance does use async/await, but there are many documented pitfalls and explanations that occur when doing so. I personally like to elide async/await as I find it simpler, and it avoids all of the generated compiler magic that seems to cause so much grief.

> I don’t understand the point you’re trying to make. When you await either a Task or a ValueTask…

The point is that the system is naturally designed to not await by default, and you can, in fact, elide these keywords. async/await are not required and have been designed as such from the outset.

> That was true long before async/await and Task, and it continues to be true.

Agreed. async/await was a step in the right direction as it did improve over the previous model. The ask here is to further simplify async/await, as it has its own set of pitfalls and areas where it can improve, not to mention its increasing fragmentation of classes and functionality, each requiring their own set of rules and knowledge for successful implementation.

> I shared my rubric.

And indeed you did. Another point in consideration: the sheer amount of explanation around this space. While incredibly detailed and informative, to me it’s a sign that some additional work can be done to further simplify the overall design. That is really all the point being made here.

> That isn’t true; there are new APIs being shipped in .NET Core that return Task instead of ValueTask.

Example of this, please? And do you happen to know the ratio between new APIs with ValueTask vs Task? All the new ones that I have seen are using ValueTask, which led to my confusion here.

> The core libraries in .NET are special.

Right, and developers use them as a reference point for building their own code and making their own decisions. If all the new (or at least a sizable majority of the new) APIs are pointing in one direction, while at the same time the recommendation is to continue using the established/historical direction, confusion will ensue — and has.

> I fundamentally disagree with a variety of your conclusions.

It sounds like you aren’t even understanding half of my concerns, so I cannot fault you. The ask here is for further consideration in future .NET versions to improve the asynchronous workflow. Although, I am guessing at this point something competitive will need to arise from Apple/Google to get your attention. In any case, I do appreciate you taking the time to address developers and their concerns here on your posts. I have always been a big fan of you and your writings and will continue to be.

FWIW: https://developercommunity.visualstudio.com/comments/603640/view.html


When writing a method that returns ValueTask<T>, is it more efficient to make it non-async and manually wrap a synchronous result in a new ValueTask<T> or will the compiler automatically optimize it in the synchronous case?

The async method builder implementation handles the case where the method completes synchronously, and returns a `new ValueTask<T>(T)` instead of creating a `Task<T>` and returning `new ValueTask<T>(task)`.
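A minimal sketch of what that answer describes; the cache field and the `Task.Delay` are illustrative stand-ins for real buffered/asynchronous work:

```csharp
using System.Threading.Tasks;

class CachedValue
{
    private int? _cached;

    // When _cached already has a value, the await is never reached: the async
    // method builder sees the synchronous completion and hands back a
    // ValueTask<int> wrapping the value directly, with no Task<int> allocated.
    public async ValueTask<int> GetValueAsync()
    {
        if (_cached.HasValue)
            return _cached.Value;   // synchronous path: no Task allocation

        await Task.Delay(10);       // stand-in for real asynchronous work
        _cached = 42;
        return _cached.Value;
    }
}
```

So writing the method as `async` does not cost you the allocation on the synchronous path; the builder already does the `new ValueTask<T>(T)` optimization for you.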


Thanks for the post.  We just had a very lively discussion on my team as a result of this post.  Love it.

Glad it was useful!


You should be writing a book – you explain brilliantly!


Stephen quick question for you.

My application sends a lot of data across the wire (TCP). What I like to do is start the sending process and await the task later. I gather from your post that it would be “safe” to do this with a ValueTask as long as I don’t await the same ValueTask twice?

For example, is the below code an acceptable use of ValueTask.


Yes, that should be fine to do.
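A sketch of that pattern, using a `Stream` write (which returns a `ValueTask`) to stand in for the TCP send from the question; the key point is that the stored `ValueTask` is awaited exactly once:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

static class Sender
{
    // Start the write, overlap other work, then await the ValueTask exactly once.
    public static async Task SendWithOverlapAsync(Stream destination, byte[] payload)
    {
        ValueTask pending = destination.WriteAsync(payload); // may complete sync or async

        // ...do other useful work here while the write is (possibly) in flight...

        await pending; // fine — but never await (or .AsTask()) the same ValueTask twice
    }
}
```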


This is great article, so clear and comprehensive. Thank You very much.


Great article, thanks 🙂


# How YOU can make your .NET programs more responsive using Tasks and async/await in .NET Core, C# and VS Code

, happy to take your suggestions on topics or improvements /Chris


When we run synchronous code we block the main Thread from doing anything else than what it's currently doing. This makes your software and user experience slower than it needs to be.

TLDR; we have the concept of Threads in .NET/.NET Core and they are an excellent way to schedule work to be carried out in parallel. They can be cumbersome to use directly, though. There is, however, a library called TPL, the Task Parallel Library, that lives on top of the Thread model and makes it really easy to schedule and manage work.

# References

It gives you a good intro to Tasks.

This talks about Control flows, how to ensure that the code happens in the right order

This is more of an overview of Task-based programming

This talks about how to run Tasks in order, one after another.

This teaches you how to cancel and listen for cancellation messages for your tasks

This shows how to do Tasks in Serverless programming and specifically Durable Functions

This takes you all the way from synchronous code to gradually convert it to asynchronous code.

So we mentioned TPL as a library. What do we need to know? TPL is such a central and important concept that it lives in the core APIs. It's part of the System.Threading and System.Threading.Tasks namespaces. It does a lot for us like:

  • Partitioning of the work
  • Scheduling of threads on the ThreadPool
  • Cancellation support
  • State management

and other low-level details.

There are some basic concepts that we need to understand.

  • Status, this can tell us if the Task is currently working on something, is done, errored out, or was canceled
  • IsCanceled, if canceled this would be set to true
  • IsFaulted, if something went wrong, like an exception, this would be set to true
  • IsCompleted, once the Task has finished its operation this would be set to true
  • Async/Await. The await keyword means that we wait for the asynchronous operation to end, and at the end of the operation we are given the result, e.g. var fileContent = await GetFileAsync(). Any method that uses await needs to have the async keyword as part of the method header.
  • Blocking/Non-blocking. When we use Tasks we are not blocking, and other Threads can carry out work. There are exceptions though: when we use the method Wait() on a Task we are forcing the code to run synchronously. We will show that in our demo in the next section.

A lot of things - like opening up large files, carrying out a Web Request, or searching through your computer - can be done in parallel. This means you can return to the user much faster with a result, and your app will be perceived as faster and more responsive. Web Development already heavily uses the concept of Tasks, which is a central concept in TPL. Learning how to use TPL can really make your applications more responsive. My hope is that this article leaves you feeling more empowered to use TPL and Tasks.

In our Demo we will demonstrate the following:

  • Authoring methods, how to author methods using async/await and how to return different types
  • Control flow, we will show how to wait for all as well as specific Tasks
  • Blocking code, we will show how the usage of Result as well as Wait() affects your code

# Scaffold a project

Let's start by creating a solution like so:

This should create a solution file.

Next, we will create a console project like so:

and now add it to the solution like so:

Ok, we are ready to start coding. Open up an IDE, I'm gonna go with VS Code.

# Authoring methods

Let's open up the file Program.cs and add the following method inside of the class Program :

There are some interesting things that go on above:

  • Return type, Task<int>. This tells us that it will be a Task that once resolved will return something of type int.
  • Task.FromResult(), this creates a Task given a value. We give it the calculation to perform, e.g. a + b.
  • Async/Await, the await keyword inside a method lets us wait for the result to arrive back to us. Any method containing an await then needs the async keyword in its header to keep the compiler happy.

It's easy to think that the method above doesn't need to be asynchronous, but imagine instead that this is a calculation that takes time; then it makes more sense.
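A plausible shape for the method the bullets above describe (a sketch matching the description, not the article's exact listing):

```csharp
using System.Threading.Tasks;

static class Calculator
{
    // Task.FromResult wraps an already-computed value in a completed Task<int>,
    // so callers can await it like any other asynchronous operation.
    public static Task<int> Sum(int a, int b) => Task.FromResult(a + b);
}
```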

# Control flow

There's more to Tasks than just marking them async . We can wait for all or some of the Tasks to finish before carrying on with our code. We have some constructs that help us control this flow:

  • Task.WaitAll(), this one takes a list of Tasks. What you are essentially saying is that all tasks need to finish before we can carry on; it's blocking, which you can see by it returning void. A typical use case is waiting for all Web Requests to finish because we want to return a result that stitches all their data together
  • Task.WaitAny(), we give it a list of Tasks here as well, but the meaning is different. We say that as long as any one of the Tasks has finished, we are good. This is usually a race for data towards an endpoint or a search for a file/file content on disk. We don't care which finished first, as long as we get a response. This is also blocking while we wait for one of the Tasks to finish
  • Task.WhenAll(), this gives you a Task back that you can interact with. When all of the tasks have finished, it will resolve.
  • Task.WhenAny(), this gives you a Task back that you can interact with. When one of the Tasks has finished, it will resolve.
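A short sketch contrasting the blocking and non-blocking variants from the list above:

```csharp
using System.Threading.Tasks;

static class ControlFlowDemo
{
    public static async Task<int> Run()
    {
        Task<int> first = Task.Run(() => 1);
        Task<int> second = Task.Run(() => 2);

        // Blocking alternative: Task.WaitAll(first, second) would park the
        // current thread here until both tasks finish.

        // Non-blocking: await a combined task instead; the thread is released
        // while the work completes.
        int[] results = await Task.WhenAll(first, second);
        return results[0] + results[1];
    }
}
```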

Let's create a demo of a Control flow. We will fake carrying out time-consuming work by adding an additional method to our class, like so:

Demo - Control flow

Now we can add some control flow code in our Main() method like so:

Our full code in Program.cs should now look like this:

Let's compile:

and run it:

We should get the following response:

Even though the calculation from calling Sum() took a few milliseconds, we don't get any response until 2 seconds later, when DoSomething() has finished.

If we shift our code now from WaitAll to WhenAll we would get very different behavior. The code would have kept going and reported this instead:

So the lesson here is that if we want the code to wait at a specific point, using the Wait... methods is a good idea, but if you want to start up a lot of asynchronous work and keep going, use the When... methods.

We can still make the code behave correctly with WhenAll but we would need to investigate the status like so:

DEMO - Wait any

To test this one out we create three new methods that mock opening up files. Each of the three methods has a delay built in that differs:

Let's update our Main() method with some code as well:

As you can see above, we are waiting for one of the three tasks to finish, with this construct:

Given what we know of the methods being called, ReadFile3() should finish first, after 2 seconds, but let's test that by running our program:

We can see above that Task3 is completed and the other tasks haven't completed yet.
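A sketch of the scenario, with the delays scaled down to milliseconds; ReadFile stands in for the article's three mock file-reading methods:

```csharp
using System.Threading.Tasks;

static class WhenAnyDemo
{
    static async Task<string> ReadFile(string name, int milliseconds)
    {
        await Task.Delay(milliseconds);   // simulate file I/O of varying length
        return name;
    }

    public static async Task<string> FirstFinished()
    {
        var tasks = new[]
        {
            ReadFile("file1", 500),
            ReadFile("file2", 300),
            ReadFile("file3", 200),
        };

        // Completes as soon as the fastest task does; the others keep running.
        Task<string> first = await Task.WhenAny(tasks);
        return await first;
    }
}
```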

# Using Async APIs

Ok, we now understand more about async and are able to leverage that on existing APIs. Let's look at reading the content of a file. Normally you would create a method like so:

The above would block though and you wouldn't be able to do much else while this finishes. Imagine this is a really large file then it would be really noticeable. If we rewrite the method to use an async version we would instead get code looking like this:

This doesn't block and everyone is happy.
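The asynchronous version might look something like this (ReadContentAsync is an illustrative name):

```csharp
using System.IO;
using System.Threading.Tasks;

static class FileReader
{
    // Non-blocking: the calling thread is free to do other work while the
    // operating system performs the read.
    public static Task<string> ReadContentAsync(string path) =>
        File.ReadAllTextAsync(path);
}
```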

### Blocking code

One of the tricky parts of using TPL is knowing what calls block. You are all happy that your code is now asynchronous but suddenly you end up blocking anyway. So what shall we look out for? Well, we touched upon this subject already:

  • WaitAll and WaitAny block; the rule of thumb is that the blocking methods return void and use the word Wait... . Sometimes you do want to wait, though, so learn to be intentional about blocking vs non-blocking
  • task.Result, this also blocks and waits for the result to be available
  • Wait(), this method on a Task will block and make you wait there until the code has finished, for example Task.Delay(2000).Wait()
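A small sketch collecting those blocking calls next to the non-blocking alternative:

```csharp
using System.Threading.Tasks;

static class BlockingDemo
{
    public static async Task<int> Run()
    {
        Task<int> work = Task.Run(() => 21 * 2);

        // Blocking variants — each one parks the calling thread:
        //   int blocked = work.Result;
        //   work.Wait();
        //   Task.Delay(2000).Wait();

        // Non-blocking: the thread is released while the task runs.
        return await work;
    }
}
```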

# Full code

This is the full code I was playing around with if you want to explore for yourself:

In summary, we learned about the concept of Tasks and their anatomy. Additionally, we learned about Control Flows, and we also discussed blocking/non-blocking code. There is more to learn though, like how to cancel Tasks. I'm gonna save that one for a separate article. I will add a link to Cancellation in the References section of this article.

Hands-On RESTful Web Services with ASP.NET Core 3 by Samuele Resca


Use Task.FromResult over Task.Run

If you have previously worked with .NET Core or .NET Framework, you have probably dealt with both Task.FromResult and Task.Run . Both can be used to return Task<T> . The main difference between them is in their input parameters. Take a look at the following Task.Run snippet:

The Task.Run method will queue the execution as a work item in the thread pool. The work item will immediately complete with the pre-computed value. As a result, we have wasted a thread-pool thread. Furthermore, note that Task.Run was originally intended for client-side .NET applications: ASP.NET Core is not optimized for Task.Run ...
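A sketch of the contrast (the value 42 is an arbitrary precomputed result):

```csharp
using System.Threading.Tasks;

static class Answers
{
    // Wasteful: queues a thread-pool work item just to produce a known value.
    public static Task<int> GetViaRun() => Task.Run(() => 42);

    // Preferred: wraps the precomputed value in an already-completed task,
    // touching no thread-pool thread at all.
    public static Task<int> GetViaFromResult() => Task.FromResult(42);
}
```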




Long-Running Tasks in a Monolith ASP.NET Core Application

Posted by Code Maze | Updated Nov 17, 2022


Often we come across scenarios where invoking a Web API endpoint triggers long-running tasks. In this article, we’re going to explore the different ways we can implement such a long-running task in an ASP.NET Core Web API.

Let’s dive into it.

Long-Running Tasks Use Case

Let’s take an example from the E-Commerce domain. We normally find a checkout functionality in a shopping cart. This is potentially a long-running task. As the user clicks on the checkout button in UI, the system triggers several steps:

  • Stock availability check for the cart items
  • Tax calculation based on customer details
  • Payment processing through a third party payment gateway
  • Receipt generation and final email communication

This business process flow may fail in the stock check and payment processing steps. The system handles such failures and communicates the same to the end-user through email.

Overview of the Monolithic Application

We have prepared the sample monolith application that we are going to use in this article. We strongly suggest opening the source code while reading this article since it will help you to understand the entire implementation.


The Web API controller has a /checkout endpoint. The UI client application invokes this endpoint:

The action method sends an instance of the CheckoutRequest class to an implementation of the ICheckoutCoordinator.

Now, let’s see the registration code for ICheckoutCoordinator with the DI container:

We register four versions of the CheckoutCoordinator with the DI container. The different versions of the CheckoutCoordinator contain the different approaches to implementing a long-running task. Only one version is active at any given time.

The CheckoutCoordinatorV1 class implements the ICheckoutCoordinator interface. This implementation executes the checkout tasks one after the other. After all the processing is complete, the end-user receives the final response:

This approach is blocking in nature as it makes the user wait for a long period to get a response till all the processing is complete.

Please note that for this demo the StockValidator, TaxCalculator, PaymentProcessor, and ReceiptGenerator classes inside the Services folder do not contain any business logic. They only simulate a process flow that may take a few seconds (long-running tasks):

Here, the first Task.Delay simulates customer lookup from a database/service. The customer address can be used for tax calculation. Similarly, the second delay simulates complex tax calculations for all line items for the customer. If you want, you can inspect all the other classes in the mentioned folder.

Let’s send the request to the /checkout endpoint which indicates the checkout of two cart items:

The Web API endpoint returns a successful response after a certain period:

As a result, we can see that the endpoint took more than 6s to complete the request. So, this is not an intuitive design from the user experience perspective.

Now, there are different solutions to this problem. Let’s look into the first approach.

Process Long-Running Tasks using Blocking Collection and TPL

The CheckoutCoordinatorV2 class uses the BlockingCollection<T> class for long-running task processing:

The BlockingCollection<T> encapsulates producer-consumer collections like ConcurrentQueue, ConcurrentBag, etc. It provides blocking and bounding capabilities.

The CheckoutCoordinatorV2 class constructor invokes the CreateCheckoutQueue() method to initialize the BlockingCollection instance. Internally, this BlockingCollection uses a ConcurrentQueue instance. It also offloads the queue item processing in a separate long-running task.

Furthermore, the Task.Factory.StartNew() method from TPL offloads the processing to a separate Task . The TaskCreationOptions.LongRunning parameter hints to the task scheduler that an additional thread might be required for the task so that it does not block the forward progress of other threads or work items on the local thread-pool queue.
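A simplified, self-contained sketch of this producer-consumer setup, with string items standing in for the article's QueueItem class:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class CheckoutQueueSketch
{
    private readonly BlockingCollection<string> _queue =
        new BlockingCollection<string>(new ConcurrentQueue<string>());

    public readonly List<string> Processed = new List<string>();
    public Task Consumer { get; }

    public CheckoutQueueSketch()
    {
        // LongRunning hints to the scheduler that this consumer deserves its
        // own thread rather than a pooled one.
        Consumer = Task.Factory.StartNew(Consume, TaskCreationOptions.LongRunning);
    }

    public void Enqueue(string item) => _queue.Add(item);

    public void CompleteAdding() => _queue.CompleteAdding();

    private void Consume()
    {
        // Blocks until items arrive, removes them as it enumerates, and exits
        // once CompleteAdding() has been called and the collection is drained.
        foreach (var item in _queue.GetConsumingEnumerable())
            Processed.Add(item);
    }
}
```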

Now, let’s implement the ProcessCheckoutAsync() method from the ICheckoutCoordinator interface:

Here, we create an instance of the CheckoutResponse class with an order status of InProgress, a relevant message, and a new order id.

Before returning the response to the end-user, we create an instance of the QueueItem model class with the generated order id and other request properties. Then, the blocking collection enqueues the instance. This class instance acts as intermediate storage for order id and request data. This ensures that there is no data loss when the long-running background task processes the queue items in the blocking collection.

Additionally, let’s inspect CheckoutResponse and the QueueItem classes:

Asynchronous Processing in a Background Task

Now, let’s inspect the logic that we offload to the background task:

We use the BlockingCollection<T>.GetConsumingEnumerable() in the ProcessAsync() method to remove items until adding is completed and the collection is empty. This is called a mutating or consuming enumeration because, unlike a typical foreach loop, this enumerator modifies the source collection by removing items.

We can also see that we call the ProcessEachQueueItemAsync() method inside the ProcessAsync() method:

In the ProcessEachQueueItemAsync() method the checkout process continues one after the other as per the business rules. But now, this processing completely happens in a non-blocking way without making the end-user wait for a response till the processing completes. On completion of processing, the ReceiptGenerator sends an email to the end-user with all the relevant details.

The controller action can also return an HTTP status code Accepted(202) instead of OK(200) with an endpoint where the result may be available at the end of processing. The client application then may choose to call this endpoint till a response is available.

Process Long-Running Task using Reactive Extensions

The CheckoutCoordinatorV3 class uses a Subject instance to implement the observer pattern , and we need to register the required services in the Program class:

Here, we register a singleton instance of the ReplaySubject<T>  with the DI container as the concrete implementation for both the IObserver<T> and IObservable<T> interfaces. This ensures that the same instance of the ReplaySubject<T> is injected as both the observer and observable components on the resolution of dependency.
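The registration might look roughly like this (a sketch, assuming the System.Reactive package, the article's QueueItem model, and the WebApplicationBuilder named builder):

```csharp
using System;
using System.Reactive.Subjects; // from the System.Reactive NuGet package

// The same ReplaySubject instance is exposed under both interfaces, so the
// coordinator (producer) and the background worker (consumer) share it.
var subject = new ReplaySubject<QueueItem>();
builder.Services.AddSingleton<IObserver<QueueItem>>(subject);
builder.Services.AddSingleton<IObservable<QueueItem>>(subject);
```

Registering the one instance twice is what guarantees that writes from the coordinator are observed by the worker.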

Now, let’s inspect the CheckoutCoordinatorV3 class:

We inject IObserver<QueueItem> through the constructor with the ReplaySubject<QueueItem> as an implementation. The CheckoutCoordinatorV3 class adds new data to the observable stream on getting a new request from the controller.

We also need an observer that will react to the arrival of new data in the data stream. The ObserverBackgroundWorker which is implemented as a .NET Core BackgroundService acts as the observer in this case:

The ExecuteAsync() method from the BackgroundService subscribes to the ReplaySubject<QueueItem> instance. The DI container injects the ReplaySubject<QueueItem> instance as IObservable<QueueItem> .

The ProcessItemAsync() gets executed as a callback when new data is added to the data stream from the CheckoutCoordinatorV3 class. This method continues the checkout processes one after the other as per the business rule in a non-blocking way similar to the ProcessEachQueueItemAsync() method from the previous section.

Note that we register the ObserverBackgroundWorker class in the DI container as the implementation of the IHostedService interface:

builder.Services.AddHostedService<ObserverBackgroundWorker>();

Process Long-Running Task using System.Threading.Channel

This section uses a System.Threading.Channel to implement a producer-consumer pattern. The CheckoutCoordinatorV4 class acts as the producer here:

Similar to the previous two sections, here also, the ProcessCheckoutAsync() method returns a response with in-progress status to the end-user. But, before sending the response, we invoke the AddQueueItemAsync() method from the CheckoutProcessingChannel class.

Now, let’s see how we implement the channel communication in the CheckoutProcessingChannel class:

In the constructor, we create a bounded channel of the QueueItem type. The channel allows multiple producers to support multiple concurrent checkout requests to the Web API endpoint. The bounded channel limits the capacity of the channel such that producers will have to wait if the channel is full. The AddQueueItemAsync() method adds the item to the channel as long as there is a capacity to do so. Otherwise, it will asynchronously wait for space to be available.
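A self-contained sketch of such a wrapper, with string items standing in for the article's QueueItem class:

```csharp
using System.Collections.Generic;
using System.Threading.Channels;
using System.Threading.Tasks;

class CheckoutChannelSketch
{
    private readonly Channel<string> _channel;

    public CheckoutChannelSketch(int capacity = 100)
    {
        // Bounded: when the channel is full, writers wait asynchronously
        // instead of letting the queue grow without limit.
        _channel = Channel.CreateBounded<string>(new BoundedChannelOptions(capacity)
        {
            SingleWriter = false, // many concurrent checkout requests
            SingleReader = true   // one background consumer
        });
    }

    public ValueTask AddQueueItemAsync(string item) =>
        _channel.Writer.WriteAsync(item);

    public IAsyncEnumerable<string> ReadAllAsync() =>
        _channel.Reader.ReadAllAsync();

    public void Complete() => _channel.Writer.Complete();
}
```

The consumer side then simply does await foreach over ReadAllAsync(), waking up whenever a new item lands in the channel.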

The CheckoutProcessingChannel class is a singleton wrapper to a System.Threading.Channel instance that we use for communication between the producer and consumer:

builder.Services.AddSingleton<ICheckoutProcessingChannel, CheckoutProcessingChannel>();

A channel is a synchronization concept that supports passing data concurrently between producers and consumers. One of many producers can write data into the channel. Similarly, one or more consumers can read the same data from the channel.

Now, let’s inspect the ExecuteAsync() method from the ChannelBackgroundWorker class:

We read the data from the channel in the ExecuteAsync() method. We use a new feature introduced in C# 8.0 called async streams.

So, the async streams feature allows awaiting an asynchronous foreach loop. The ReadAllAsync() method in the CheckoutProcessingChannel wrapper class exposes an IAsyncEnumerable , which supports the await foreach syntax.

This allows awaiting asynchronously for the availability of new data from the channel. So, every time a new QueueItem instance is available in the channel, the AsyncEnumerator will trigger the foreach loop to run. Hence, the ProcessItemAsync() is executed every time a new queue item is available in the channel. This method implementation is the same as the last two previous sections.

Here also, we register the ChannelBackgroundWorker class in the DI container as the implementation of the IHostedService interface:

builder.Services.AddHostedService<ChannelBackgroundWorker>();

The Updated User Experience

Now, let’s send the same request as earlier to the /checkout endpoint which indicates the checkout of two cart items.

The Web API endpoint returns a successful response:

Here, we can see that the endpoint takes much less time to return a response to the end-user. The processing itself is not faster; rather, the user gets an immediate acknowledgment while the rest of the checkout processing happens at its own pace in the background. This is evident from the console log entries:

This article covers the different ways to process long-running tasks in an ASP.NET Core Monolithic application. However, in the next article, we will see the processing of a long-running task in a microservices architecture using a messaging platform like RabbitMQ.

guest

Good article! How could I implement a controller action to check if there is an answer and know what its status is? Replies are sent by email but are not stored anywhere.


I would like to have the approach where an endpoint return an HTTP status code Accepted(202) instead of OK(200) with another endpoint where the result will be available at the end of processing.

sanzoghenzo

thanks for this great article!

Just a note, the CheckoutProcessingChannel class has the wrong interface, it should be ICheckoutProcessingChannel instead of ICheckoutCoordinator

Marinko Spasojević

Thank you very much for that suggestion. Yeah, it was a typo. I updated it now. If you can’t see the change just do CTRL+F5 to clear the cache.

Thanks! Also, if I may, in the last two options you mention a ProcessItemAsync, method, but actually show only the ProcessEachQueueItemAsync method… I assume they have to have the same name

Well, to be honest, I am not sure about this one. I’ve just checked the article (quick keyword search) and in every place that we mention ProcessItemAsync or ProcessEachQueueItemAsync, we use the appropriate methods. Again, I just run over the article with a keyword search, but I think it looks good. Anyway, once I found a bit more time, I will read the article again and see what is going on.

what I meant is that there’s no implementation of ProcessItemAsync anywhere in the article, and it should be the same as ProcessProcessEachQueueItemAsync anyway, so you have to either rename the method to ProcessItemAsync or call ProcessProcessEachQueueItemAsync in the last two examples

Thanks again for this very useful article!

Yes, I understand now. Yeah, we didn’t show it in the article but we have provided the source code for easier navigation. And yes, implementation is the same.

Jon P Smith

Very useful, but do your background / observable approaches work on web apps that have multiple instances of the application running in parallel (e.g. Azure scale out)? I’m looking for something like this, but it has to work with multiple instances running.

Leszek

Could IHostedService be used here?

MarinkoSpasojevic

We already use it. As you can see in the article, we register two workers for the implementation of that interface.

Alexander Batishchev

Thanks for the writeup! It’s a good food for thoughts. What would be great is not just outline the options but also compare them and come with some pros and cons, conclusions which one is preferred and why.

Please use a prefix with such as _ or access fields with this to clearly distinguish between fields and variables. Right now it’s impossible to tell. For example, checkoutQueue.

Let me correct myself. You already do; checkoutQueue is a one-off. So please use a consistent code style 🙂

Yeah, you are correct. We always use _ in front of our private variables – this was accidentally missed. We will fix it, thank you for the suggestion.



DEV Community


Janki Mehta

Posted on Sep 12, 2023

Creating Custom Health Checks in .NET Core

Health checks are critical in ASP.NET Core applications for monitoring the availability and status of various dependencies and infrastructure. .NET Core provides a flexible health checking system that we can leverage to create customized health checks tailored to our specific needs.

In this post, we’ll learn how to create custom health check implementations in ASP.NET Core to check for application-specific conditions, building robust health monitoring on the inbuilt .NET Core interfaces and middleware.

Health checks are used to report the health status of different parts of an application by running diagnostic tests. Some examples are:

  • Checking database connectivity
  • Validating external service reachability
  • Verifying available disk space
  • Testing a circuit breaker’s state

In ASP.NET Core, we can register health check implementations in the dependency injection (DI) container. The health check middleware provided by ASP.NET Core will then execute these checks and expose endpoints to read their status.

The health check system is extensible so we can create custom checks for our own criteria. The results are exposed over HTTP at the /health endpoint which can be consumed by monitoring tools and load balancers. Unhealthy results can trigger alerts or graceful shutdowns.

Creating a Basic Health Check

To create a health check, we need to implement the IHealthCheck interface which contains a single CheckHealthAsync method:

The CheckHealthAsync method runs the health test logic and returns a HealthCheckResult indicating a healthy or unhealthy status.

We can include details in the result like an exception, custom data, message etc. The health check middleware will capture this result.
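A minimal implementation sketch (the memory probe is an arbitrary stand-in for a real diagnostic):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

public class ExampleHealthCheck : IHealthCheck
{
    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        // Stand-in diagnostic; a real check would probe a dependency here.
        bool healthy = GC.GetTotalMemory(forceFullCollection: false) < 1_000_000_000;

        return Task.FromResult(healthy
            ? HealthCheckResult.Healthy("Memory usage is within bounds.")
            : HealthCheckResult.Unhealthy("Memory usage is too high."));
    }
}
```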

Registering Health Checks

To enable health checks, we need to register our check classes in DI and enable the health check middleware in the pipeline.

Here is an example Startup configuration:

We register the check with a tag that identifies it. The health check middleware is added to the pipeline.

This exposes our health check at the /health/example_check endpoint. The overall health status is available at /health.

Checking Database Connectivity

A common health check scenario is to validate that the application can connect to external databases and services it depends on.

Here is an example database health check:
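A hedged sketch, assuming SQL Server via Microsoft.Data.SqlClient and a connection string named "Default" in configuration — both are assumptions, not prescriptions:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Diagnostics.HealthChecks;

public class DatabaseHealthCheck : IHealthCheck
{
    private readonly string _connectionString;

    public DatabaseHealthCheck(IConfiguration configuration)
    {
        // "Default" is an assumed connection string name.
        _connectionString = configuration.GetConnectionString("Default");
    }

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        try
        {
            // Opening a connection is enough to prove reachability.
            using var connection = new SqlConnection(_connectionString);
            await connection.OpenAsync(cancellationToken);
            return HealthCheckResult.Healthy("Database is reachable.");
        }
        catch (Exception ex)
        {
            // Attach the exception so it surfaces in the health report.
            return HealthCheckResult.Unhealthy("Database connection failed.", ex);
        }
    }
}
```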

This code tries to establish a connection to the database with the connection string loaded from configuration. A successful connection returns a healthy result, while a failure returns an unhealthy result with the exception.

We can now check for database outages from the health endpoints.

Creating Composite Health Checks

The health check system also supports creating composite health checks that group together multiple checks.

This can be used to aggregate related checks like all database checks into a single result.

Here is an example composite health check class:
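One way such a composite could look — the StorageHealthCheck name and the aggregation rule (any unhealthy child makes the whole check unhealthy) are illustrative, not a framework-provided type:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Illustrative composite: runs child checks and aggregates their results.
public class StorageHealthCheck : IHealthCheck
{
    private readonly IEnumerable<IHealthCheck> _childChecks;

    public StorageHealthCheck(IEnumerable<IHealthCheck> childChecks)
    {
        _childChecks = childChecks;
    }

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        var results = new List<HealthCheckResult>();
        foreach (var check in _childChecks)
        {
            results.Add(await check.CheckHealthAsync(context, cancellationToken));
        }

        // Healthy only if every child check is healthy.
        return results.All(r => r.Status == HealthStatus.Healthy)
            ? HealthCheckResult.Healthy("All storage checks passed.")
            : HealthCheckResult.Unhealthy("One or more storage checks failed.");
    }
}
```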

Composite checks execute their child checks and aggregate the results. We can use this to have overall health results like “StorageHealth” composed from individual database checks.

Response Caching

Since health checks may be invoked frequently, we should consider caching their results to avoid hammering the dependencies being probed.

The framework does not cache check results out of the box, but we can get the same effect by wrapping a check in a caching decorator.
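One way to cache results is a decorator around a check (the health check middleware itself does not cache results); this CachedHealthCheck wrapper is entirely illustrative, not a built-in API:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Illustrative decorator, not a framework type. Not thread-safe; kept
// minimal for clarity.
public class CachedHealthCheck : IHealthCheck
{
    private readonly IHealthCheck _inner;
    private readonly TimeSpan _cacheDuration;
    private HealthCheckResult _lastResult;
    private DateTime _lastChecked = DateTime.MinValue;

    public CachedHealthCheck(IHealthCheck inner, TimeSpan cacheDuration)
    {
        _inner = inner;
        _cacheDuration = cacheDuration;
    }

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        // Serve the cached result while it is still fresh.
        if (DateTime.UtcNow - _lastChecked < _cacheDuration)
            return _lastResult;

        _lastResult = await _inner.CheckHealthAsync(context, cancellationToken);
        _lastChecked = DateTime.UtcNow;
        return _lastResult;
    }
}
```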

Final Words

Health checks are an invaluable tool for monitoring critical system dependencies and infrastructure in production environments. ASP.NET Core provides an extensible health checking framework that allows us to create customized implementations to validate application-specific conditions. By leveraging the inbuilt interfaces and middleware, we can incorporate robust health monitoring in our .NET applications and gain greater visibility into their operational status. The ability to cache results and compose aggregated checks further enhances the utility of the health checking system. Overall, implementing custom health checks is a best practice for building resilient and observable ASP.NET Core services.


12 January 2018


HTTP Response Headers in ASP.NET Core

ASP.NET Core has the flexibility to add HTTP response headers anywhere in the middleware pipeline. Dino Esposito explains what you need to know to handle the headers in ASP.NET Core.

By design, HTTP headers are additional and optional pieces of information in the form of name/value pairs that travel between the client and the server with the request and/or the response. HTTP headers belong in the initial part of the message—the header indeed. Adding headers to a request is slightly different from adding headers to a response. The reason lies in the way that the raw text of HTTP requests and responses is actually written by client components. In this article, I’ll focus on the ASP.NET Core middleware and the support it provides to flexibly add HTTP response headers at nearly any point of the request processing.

An Executive Summary of the ASP.NET Core Pipeline

Every request sent to an ASP.NET Core application runs through the pipeline of configured middleware before it is processed to generate a response. In this context, the term ‘middleware’ refers to a software component that exposes a well-defined contract and is assembled with similar components to form a chain—the ASP.NET Core request pipeline. Each middleware can be programmed to perform some work in two distinct steps: before and after the request gets processed. The figure below shows the overall diagram.

[Figure: diagram of the ASP.NET Core middleware pipeline]

As you can see, over the entire lifecycle a request flows through any registered middleware components twice, before and after the response is generated. More precisely, a middleware is only called once by the runtime but, depending on its internal implementation, it can execute code twice in the lifecycle of the same request. When the request enters the processing phase, middleware components are invoked in the order they have been registered in code. Each middleware has its first chance to execute code and, if it decides to yield to the next in the chain, it will later have a chance to inspect the response generated. The terminating middleware component runs at the end of the chain and sets the inversion point of the flow. Middleware components that are waiting for control to return—those that yielded to the next at some point—are invoked in reverse order on the way back to the caller. A middleware component can be expressed as an anonymous or named method, and it can even be wrapped up in a tailor-made class. Here’s the delegate that expresses the behavior expected from a middleware component.
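In ASP.NET Core, that contract is captured by the RequestDelegate type:

```csharp
// The contract a middleware component fulfills: process the HTTP context
// and return a Task that completes when its work is done.
public delegate Task RequestDelegate(HttpContext context);
```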

Any middleware components can inspect the current status of the ongoing HTTP request and can even alter it. On the way back, the middleware can also inspect the response and perform changes. The way in which changes to the response are coded is critical and discussing that is the primary purpose of this article.

The delegate above summarizes what a middleware component is expected to do—process the HTTP context and perform some task. However, the ASP.NET Core runtime environment also passes each middleware a reference to the next middleware configured in the pipeline. Here’s the code for a sample middleware expressing its body via an anonymous method.
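A sketch of such a middleware, registered inline with app.Use inside the Configure method; the bodies of the two passes are placeholders:

```csharp
app.Use(async (context, next) =>
{
    // First pass: runs before the rest of the pipeline processes the request.
    // ... inspect or alter the incoming request here ...

    await next();

    // Second pass: runs after the downstream components have produced the response.
    // ... inspect the response here ...
});
```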

You configure the ASP.NET Core pipeline adding code to the Configure method of the application’s startup class.

Invoking the next middleware is optional, but it’s key to be aware of the consequences of not calling it. Failing to call the next delegate will simply terminate the request pipeline abruptly, and no subsequent middleware components will be invoked for that request. For a middleware component, not passing control to the next one is acceptable as long as the component is able to fully generate the response. Any middleware component is written as a single chunk of code split in two parts, before and after the instruction that yields to the next middleware.

A middleware component can do whatever the runtime conditions allow to do. As mentioned, it can inspect the current HTTP request, including HTTP headers and cookies, and can alter the state of the request. At the same time, it can start writing to the response output stream. Every middleware component is independent in what it does but must be able to play well with all other middleware components.

The terminating middleware is the piece of code that is ultimately responsible for generating some output for the request. If no preceding middleware takes the responsibility of generating the output and short-circuiting the request, the terminating middleware—the method app.Run —is invoked. When you turn on the MVC application model, it’s the UseMvc middleware that stops the requests and generates the output. The app.Run method, if defined, is only invoked if the URL is not recognized by MVC.

Adding HTTP Headers

Inspecting the current HTTP request is not particularly problematic, but updating the response can be. The response is made of three parts. The first line indicates the status code (for example, 200) followed by a description of the status, for example OK.

The second part is the list of HTTP response headers. The actual list depends on the web server and the application. For example, for ASP.NET Core it usually contains at least headers such as Content-Type, Server, and Date.

If you inspect the HTTP response while debugging an ASP.NET (Core) application, you can also find the X-SourceFiles header. That header is only generated for localhost requests and serves debugging purposes of Visual Studio and IIS Express. Finally, the response has a blank line and then the actual content. Each segment of the response has its own dedicated API. In ASP.NET Core, you set the status code via the StatusCode property on the Response object and add HTTP headers via the Headers collection. In addition, you access the actual content through a dedicated stream property named Body .

When it comes to writing the overall response output, a fixed order exists between writing the headers and writing the body. All HTTP headers must be written before a single byte of the body is written to the output stream; an exception will be thrown otherwise. This rather natural and established order of steps poses an issue for developers writing middleware components in ASP.NET Core.

HTTP Header-related Callbacks

The ASP.NET Core request pipeline is composed in the Configure method of the application’s startup class. The Configure method is invoked only once at the start of the application, but the built pipeline is kept in memory and every request runs through the linked components. Only the developer of the ASP.NET Core application knows the exact sequence of components. The middleware code just receives a reference to the next component, with no view of who came first and who’s coming next. This is no big deal if all the middleware components are under the responsibility of the team writing the application. But what if you’re a middleware author? In this case, your general-purpose component can be chained to a variety of other components in a variety of different orders. If your component needs to append an HTTP header or inspect the body, it needs to play by the rules and behave like a good citizen to its unknown neighbors.

In ASP.NET Core, the Response object exposes the OnStarting method that gets automatically invoked just before the first byte is written to the response body. The method accepts a callback function with the specific purpose of performing whatever task must be performed before the body is written. Here’s one possible way to attach a callback function to the OnStarting method.
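A sketch of such a registration inside a middleware component; the header name and value are illustrative:

```csharp
app.Use(async (context, next) =>
{
    context.Response.OnStarting(() =>
    {
        // Runs just before the first byte of the body is written,
        // so adding a header here is still legal.
        context.Response.Headers["Site"] = "sample-value";
        return Task.CompletedTask;
    });

    await next();
});
```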

The callback can be expressed in two forms:
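These are the two overloads exposed by the Response object:

```csharp
// Form 1: a callback with no input parameters.
public void OnStarting(Func<Task> callback);

// Form 2: a callback that also receives an external state object.
public void OnStarting(Func<object, Task> callback, object state);
```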

In the former case, the callback takes no input parameters and returns a Task object. In the latter, it also accepts an object representing some external state to process within the callback and still returns a Task object. The code snippet above implements the former case—no input parameters—and just registers a callback that will add a custom header at the last useful minute.

An OnStarting callback should always be registered in the first pass of a middleware component. Consider that the terminating middleware—whatever form it may take—will very likely be writing some content to the output stream. Therefore, registering an OnStarting callback in the second pass is definitely too late and will cause an HTTP 500 error.

Not strictly related to HTTP headers but still worth noting is that the Response object also features an OnCompleted callback that mostly exists for logging purposes and fires the provided function once the output has been successfully consumed by the client.

Inspecting the Body of the Response

Sometimes, especially if you’re writing low-level tools like loggers or monitoring APIs, you need to inspect the actual content being returned with the response before deciding which header to add or, more likely, which content to assign to it. In a similar scenario, an interesting technique you want to use is temporarily replacing the original Body stream—the one being monitored for adding headers—with an in-memory stream. Any middleware components, including the terminating middleware, will transparently write to the in-memory stream. Next, in the second pass of the middleware the content accumulated in memory will be read and the content to append as a header will be computed.

Let’s see how to add a sample HTTP header that replicates the content being sent in the body. The code below shows a full implementation of the Configure method of a startup class.
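A sketch of such a Configure method, reconstructed from the description that follows; the header values and the response text are illustrative:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public void Configure(IApplicationBuilder app)
{
    // First middleware: registers a callback that adds a static header.
    app.Use(async (context, next) =>
    {
        context.Response.OnStarting(() =>
        {
            context.Response.Headers["Site"] = "sample-site";
            return Task.CompletedTask;
        });
        await next();
    });

    // Second middleware: swaps the Body stream for a MemoryStream so the
    // content written downstream can be inspected on the way back.
    app.Use(async (context, next) =>
    {
        var originalBody = context.Response.Body;
        using var buffer = new MemoryStream();
        context.Response.Body = buffer;

        await next();

        // Read what the rest of the pipeline wrote.
        buffer.Seek(0, SeekOrigin.Begin);
        var text = await new StreamReader(buffer).ReadToEndAsync();
        context.Response.Headers["Body"] = text;

        // Copy the buffered content to the real output stream and restore it.
        buffer.Seek(0, SeekOrigin.Begin);
        await buffer.CopyToAsync(originalBody);
        context.Response.Body = originalBody;
    });

    // Terminating middleware: writes the actual response content.
    app.Run(async context =>
    {
        await context.Response.WriteAsync("Hello, middleware!");
    });
}
```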

The first middleware just registers a callback to add a static content HTTP header and yields to the next component. The second middleware replaces the Body stream with an aptly created in-memory stream and yields to the subsequent middleware. The flow proceeds until the termination of the pipeline is reached. When this happens, the terminating middleware writes a string to the output stream of the response object. This content, however, ends up saved to the underlying in-memory stream. On the second pass, the middleware that inserted the memory stream regains control and reads the current content of the stream into a local variable. Next, it adds a new HTTP header named Body and sets it to the string saved in the local variable. Finally, it resets things to their natural form, copying any content of the in-memory stream to the original stream and attaching it back to the Body property.

If you use a few breakpoints, you’ll see that just as the original stream is assigned back to the Body property, control jumps back up to the OnStarting callback to add the sample Site header. The figure below shows the final output as reported by Fiddler.

[Figure: final response headers, including Body and Site, as reported by Fiddler]

Note that the first header you see is Body; verify that its content is exactly the same as the content of the response body. The Site header is added second. This also makes sense because of the order of middleware components in the pipeline.

Adding custom headers to specific requests is not a very common task but, at the same time, it is a task that every developer needs to be familiar with for the obvious reason that, well, sooner or later everyone will face it. The different architecture of the request pipeline in classic ASP.NET and ASP.NET Core requires a significantly different approach even for a very basic task like appending a custom header. At the same time, the extended stream composition mechanism, already introduced in classic ASP.NET, makes it possible to build a buffering system on top of the output stream and thus to write headers based on the content. You can find the full source of this example at http://bit.ly/2mgBfAO .


Dino Esposito

Dino Esposito has authored more than 20 books and 1,000 articles in his 25-year career. Author of “The Sabbatical Break,” a theatrical-style show, Esposito is busy writing software for a greener world as the digital strategist at BaxEnergy. Follow him on Twitter: @despos .



Task<TResult>.Result Property


Gets the result value of this Task<TResult> .

Property Value

The result value of this Task<TResult> , which is of the same type as the task's type parameter.

Exceptions

AggregateException, thrown in either of these cases:

  • The task was canceled. The InnerExceptions collection contains a TaskCanceledException object.
  • An exception was thrown during the execution of the task. The InnerExceptions collection contains information about the exception or exceptions.

The following example is a command-line utility that calculates the number of bytes in the files in each directory whose name is passed as a command-line argument. If the directory contains files, it executes a lambda expression that instantiates a FileStream object for each file in the directory and retrieves the value of its FileStream.Length property. If a directory contains no files, it simply calls the FromResult method to create a task whose Task<TResult>.Result property is zero (0). When the tasks finish, the total number of bytes in all of a directory's files is available from the Result property.
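The pattern can be sketched as follows; this condensed version sums file lengths via FileInfo rather than FileStream for brevity:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        foreach (var dir in args)
        {
            var files = Directory.GetFiles(dir);

            // An empty directory needs no asynchronous work: wrap the
            // precomputed total (0 bytes) in a completed task via FromResult.
            Task<long> task = files.Length == 0
                ? Task.FromResult(0L)
                : Task.Run(() => files.Sum(f => new FileInfo(f).Length));

            // Reading Result blocks until the task has finished.
            Console.WriteLine($"{dir}: {task.Result:N0} bytes");
        }
    }
}
```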

Accessing the property's get accessor blocks the calling thread until the asynchronous operation is complete; it is equivalent to calling the Wait method.

Once the result of an operation is available, it is stored and is returned immediately on subsequent calls to the Result property. Note that, if an exception occurred during the operation of the task, or if the task has been cancelled, the Result property does not return a value. Instead, attempting to access the property value throws an AggregateException exception.

  • Task Parallel Library (TPL)
  • Task-based Asynchronous Programming
  • How to: Return a Value from a Task


Khalid Abuhakmeh

Multi-tenancy with ASP.NET Core and FinBuckle.Multitenant

Multi-tenancy is a complex topic with a generally understood definition, yet the devil is in the details. From a high level, Multi-tenancy is the idea that a single codebase can support many users in what they perceive as unique-to-them silos. Users have their tenants, which can provide isolation from others. Isolation can be logical or physical, specifically around dependencies such as data storage, authentication and authorization, and third-party services. For developers, multi-tenancy also makes the programming model more straightforward since most business logic can have a contextual baseline codified into the application’s infrastructure.

While the multi-tenancy approach is popular, it can be tricky to implement, especially within the ASP.NET Core pipeline, which heavily depends on dependency injection. In this post, we’ll see how to use the FinBuckle.Multitenant package to gain a competitive advantage when developing multi-tenant applications.

What is FinBuckle.Multitenant?

FinBuckle.Multitenant is an open-source .NET library designed to codify many best practices around multi-tenancy, taking into account many of the standard building blocks found in the .NET community. These building blocks include ASP.NET Core, dependency injection, identity management, and more. The package focuses on being “lightweight” and a drop-in dependency for your .NET (Core) solutions, providing a mechanism to support data isolation, tenancy resolution, and tenant-specific behaviors. How does the library do all that?

FinBuckle.Multitenant has three components users should understand before starting: Tenants, Strategies, and Stores.

A Tenant is a logical concept specifying a boundary for a set of users. Within a tenant, you may have unique data storage, identity management, or any other aspect of the application. If your application is an apartment complex, each tenant would be an apartment.

Strategies help your application determine which tenant is currently in context. The library provides multiple strategies, including a URL base path strategy, a claim strategy, a session strategy, a TLD host strategy, a header strategy, and many more. Additionally, strategies can be combined to create a combination unique to your use case. You may also create custom strategies depending on your application’s unique scenario. Staying with the analogy of an apartment complex, a strategy for determining your apartment might be a key, facial recognition, NFC taps, or a friendly doorman recognizing you.

The final essential element of the library is a Store. Stores provide a record of all potential tenants that exist within your overall application. These stores are a data storage mechanism backed by a database, in-memory collections, HTTP endpoints, configuration files, or a distributed cache. Which works best depends on your particular use case and the number of potential tenants. In the final analogy, a store is the building’s rental office, which has the contracts for each apartment.

All three parts are integral to how FinBuckle.MultiTenant works, but let’s see it used in an ASP.NET Core sample.

Getting Started with Multi-tenancy

Starting with an ASP.NET Core web application, we’ll first need to install the FinBuckle.Multitenant.AspNetCore package.
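Assuming the dotnet CLI, the install is a single command (the package id on NuGet is Finbuckle.MultiTenant.AspNetCore):

```shell
dotnet add package Finbuckle.MultiTenant.AspNetCore
```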

Once installed, we’ll need to configure all elements described in the previous section: Tenants, Strategies, and Stores. Let’s take a look at our Program.cs file and how we hook the library into the ASP.NET Core infrastructure of the application.
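A sketch of that hook-up using the minimal hosting model; the Tenants helper class and its member names are assumptions carried through the rest of this sample:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddMultiTenant<TenantInfo>()
    .WithRouteStrategy("tenant")                        // route value "tenant" picks the tenant
    .WithDelegateStrategy(Tenants.QueryStringStrategy)  // custom fallback (assumed helper)
    .WithInMemoryStore(Tenants.ConfigureStore);         // hard-coded tenants (assumed helper)

builder.Services.AddRazorPages();

var app = builder.Build();

app.UseMultiTenant();   // resolves the current tenant for each request
app.MapRazorPages();
app.Run();
```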

The library has both a services registration and a middleware component. Here you can see us adding multi-tenancy, with the TenantInfo class as our tenant definition. You can implement your own ITenantInfo instances, but the library provides a simple TenantInfo type definition as an easy way to get started.

You may have also noticed our two strategies of RouteStrategy and DelegateStrategy . The RouteStrategy is an included strategy that uses the endpoint’s route values to determine the tenant. In this sample, the route value’s key is “tenant”. We’ll see later how the DelegateStrategy is implemented in our Tenants static class, but it’s a custom method that takes an HttpContext instance.

Finally, we are using an InMemoryStore for this demo, with all the tenants hard-coded into our application. Let’s see what these references lead to.

The most complex part of the Tenants implementation is our QueryStringStrategy , which provides a default tenant fallback when an ASP.NET Core request does not specify the tenant.
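A sketch of what that Tenants class could look like; the member names, the "tenant" query-string key, and the hard-coded tenant values are assumptions based on the description:

```csharp
using Finbuckle.MultiTenant;
using Finbuckle.MultiTenant.Stores;
using Microsoft.AspNetCore.Http;

// Illustrative helper class; names and tenant values are assumptions.
public static class Tenants
{
    // Custom DelegateStrategy: read ?tenant=... from the query string and
    // fall back to the "default" tenant when no tenant is specified.
    public static Task<string?> QueryStringStrategy(object context)
    {
        if (context is not HttpContext httpContext)
            return Task.FromResult<string?>(null);

        var tenant = httpContext.Request.Query["tenant"].ToString();
        return Task.FromResult<string?>(
            string.IsNullOrEmpty(tenant) ? "default" : tenant);
    }

    // In-memory store seeded with the demo tenants.
    public static void ConfigureStore(InMemoryStoreOptions<TenantInfo> options)
    {
        options.Tenants.Add(new TenantInfo
        {
            Id = "default", Identifier = "default", Name = "Default Tenant"
        });
        options.Tenants.Add(new TenantInfo
        {
            Id = "other", Identifier = "other", Name = "Other Tenant"
        });
    }
}
```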

Cool! Now that it’s all setup, where do we use the tenant information?

Well, an instance of TenantInfo should always be in the services collection of your .NET application. That means you can ask your application to resolve the TenantInfo as a dependency of any of your .NET services. This includes database classes, services, razor views, and more. In this case, we’ll inject our TenantInfo into a Razor View.
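A minimal Razor Page sketch; the route template and markup are illustrative:

```cshtml
@page "/{tenant?}"
@inject TenantInfo TenantInfo

<h1>Hello from @TenantInfo.Name</h1>
<p>Current tenant identifier: @TenantInfo.Identifier</p>
```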

Note that the route of this Razor page has a tenant route value, matching our RouteStrategy from before. The value is also optional, allowing our custom QueryStringStrategy to set the default tenant.

Running the page, you can now experiment with going to / , /other , and /?tenant=other , all of which should switch between the hard-coded tenants. The value matched is the Identifier on your TenantInfo instances, so be sure to set the Identifier appropriately.

And that’s it! Wow, how easy was that? Adding multi-tenancy to a .NET application has never been easier.

FinBuckle.Multitenant is a refreshingly complete solution built for the modern sensibilities of the newest .NET programming model. It has well-thought solutions for what becomes a quickly complex problem. The authors at FinBuckle have done a great job thinking about the different aspects of an application that might need tenancy information and providing mechanisms to retrieve the tenant in most conceivable situations. Whether you’re working with ASP.NET Core, distributed services, or authentication, you can retrieve the tenant information when and where you need it.

If you use FinBuckle.Multitenant or are thinking about using it, be sure to go to FinBuckle’s GitHub sponsors page and show your support . Just a few dollars can make a difference in making projects like these sustainable.


About Khalid Abuhakmeh

Khalid is a developer advocate at JetBrains focusing on .NET technologies and tooling.


IMAGES

  1. Asp.Net Core and Task FromResult,CompletedTask

    task fromresult net core

  2. [Solved] Using Task.FromResult v/s await in C#

    task fromresult net core

  3. Asynchronous Programming with Async and Await in ASP.NET Core

    task fromresult net core

  4. ASP.NET Core Middleware

    task fromresult net core

  5. ASP.NET Core

    task fromresult net core

  6. ASP.NET Core: What Is It and Top 5 Advantages of .NET Core

    task fromresult net core

VIDEO

  1. e-task earning I online income 2024 I লাইক ফলো করে ইনকাম করুন I

  2. HOW TO: Permission Authorization in ASP.NET Core

  3. Getting the data from the confirmation modal in Asp.Net Core part 159

  4. 7.Implementing Delete method in ASP NET Core 5.0 Web API in Darija

  5. C# CRUD Rest API using .NET 7, ASP.NET, Entity Framework, Postgres, Docker, Docker Compose

  6. LA Net A Multi Task Deep Network for the Segmentation of the Left Atrium

COMMENTS

  1. Task.FromResult<TResult>(TResult) Method (System.Threading.Tasks

    Definition Namespace: System. Threading. Tasks Assembly: System.Runtime.dll Creates a Task<TResult> that's completed successfully with the specified result. C# public static System.Threading.Tasks.Task<TResult> FromResult<TResult> (TResult result); Type Parameters TResult The type of the result returned by the task. Parameters result TResult

  2. What is the use for Task.FromResult<TResult> in C#

    What is the use for Task.FromResult<TResult> in C# Ask Question Asked 10 years, 3 months ago Modified 29 days ago Viewed 144k times 257 In C# and TPL ( Task Parallel Library ), the Task class represents an ongoing work that produces a value of type T. I'd like to know what is the need for the Task.FromResult method ?

  3. Using Task.FromResult v/s await in C#

    What do you expect from Task.FromResult ( value.Result )? Getiing the Result from the value task (which is bad because of the implicit Wait) and wrapping that value with a task, just to await that to get the result - Sir Rufo Jun 6, 2018 at 18:10 Add a comment 2 Answers Sorted by: 67

  4. Task.CompletedTask, Task.FromResult and Return in C#

    Using Task.CompletedTask, Task.FromResult and Return in C# Async Methods Posted by Code Maze | Sep 2, 2023 | 0 Want to build great APIs? Or become even better at it? Check our Ultimate ASP.NET Core Web API program and learn how to create a full production-ready ASP.NET Core API using only the latest .NET technologies.

  5. Consuming the Task-based Asynchronous Pattern

    When you use the Task-based Asynchronous Pattern (TAP) to work with asynchronous operations, you can use callbacks to achieve waiting without blocking. For tasks, this is achieved through methods such as Task.ContinueWith. Language-based asynchronous support hides callbacks by allowing asynchronous operations to be awaited within normal control ...

  6. Breaking change: Task.FromResult may return singleton

    12/02/2021 1 contributor Feedback In this article Old behavior New behavior Version introduced Type of breaking change Show 3 more Task.FromResult<TResult> (TResult) may now return a cached Task<TResult> instance rather than always creating a new instance. Old behavior

  7. Understanding the Whys, Whats, and Whens of ValueTask

    November 7th, 2018 23 8 The .NET Framework 4 saw the introduction of the System.Threading.Tasks namespace, and with it the Task class. This type and the derived Task<TResult> have long since become a staple of .NET programming, key aspects of the asynchronous programming model introduced with C# 5 and its async / await keywords.

  8. How YOU can make your .NET programs more responsive using Tasks and

    There are some interesting things that go on above: Return type, Task<int>.This tells us that it will be a Task that once resolved will return something of type int.; Task.FromResult(), This creates a Task given a value.We give it the calculation to perform, e.g a+b.; Async/Await, We can see how we use the async keyword inside of the method to wait for the result to arrive back to us.

  9. Create pre-computed Task objects

    The FromResult method returns a finished Task<TResult> object that holds the provided value as its Result property. This method is useful when you perform an asynchronous operation that returns a Task<TResult> object, and the result of that Task<TResult> object is already computed. Example The following example downloads strings from the web.

  10. Hands-On RESTful Web Services with ASP.NET Core 3

    If you have previously worked with .NET Core or .NET Framework, you have probably dealt with both Task.FromResult and Task.Run. Both can be used to return Task<T>. The main difference between them is in their input parameters. Take a look at the following Task.Run snippet: public Task < int > AddAsync (int a, int b) { return Task. Run (() => a ...

  11. How to Execute Multiple Tasks Asynchronously in C#

    Since the tasks for fetching employee details, salary and rating are independent of each other, it is easy to execute them in parallel to improve the overall performance of the workflow: public async Task<EmployeeProfile> ExecuteInParallel(Guid id) {. var employeeDetailsTask = _employeeApiFacade.GetEmployeeDetails(id);

  12. Long-Running Tasks in a Monolith ASP.NET Core Application

    In this article, we're going to explore the different ways we can implement such a long-running task in an ASP.NET Core Web API. To download the source code for this article, you can visit our GitHub repository. Let's dive into it. Long-Running Tasks Use Case Let's take an example from the E-Commerce domain.

  13. Creating Custom Health Checks in .NET Core

    Core provides a flexible health checking system that we can leverage at ASP.NET Development Services to create customized health checks tailored to our specific needs. In this post, we'll learn how ASP.NET Developers can help create custom health check implementations in ASP.NET Core to check for application-specific conditions.

  14. How to: Return a Value from a Task

    using System; using System.Linq; using System.Threading.Tasks; class Program { static void Main() { // Return a value type with a lambda expression Task<int> task1 = Task<int>.Factory.StartNew ( () => 1); int i = task1.Result; // Return a named reference type with a multi-line statement lambda.

  15. Authentication and authorization in ASP.NET Core SignalR

    To require authentication, apply the Microsoft.AspNetCore.Authorization.AuthorizeAttribute attribute to the hub (see the "Restrict a hub to only authorized users" sample in the dotnet/AspNetCore.Docs repository, Hubs/ChatHub.cs). The constr...
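The pattern the snippet refers to can be sketched as follows (a minimal illustration, not the documentation's exact sample; the hub and method names are assumptions): placing `[Authorize]` on the hub class restricts every hub method to authenticated users.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;

// Only authenticated users can connect to this hub or invoke its methods.
[Authorize]
public class ChatHub : Hub
{
    public async Task SendMessage(string user, string message) =>
        await Clients.All.SendAsync("ReceiveMessage", user, message);
}
```

The attribute can also be applied to individual hub methods, or constructed with policy or role requirements, to authorize more selectively.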

  16. HTTP Response Headers in ASP.NET Core

    Dino Esposito explains what you need to know to handle the headers in ASP.NET Core. By design, HTTP headers are additional and optional pieces of information in the form of name/value pairs that travel between the client and the server with the request and/or the response. HTTP headers belong in the initial part of the message, which is indeed called the header.
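One way to add a custom response header in ASP.NET Core (a minimal middleware sketch for a Program.cs-style app; the header name and value are placeholders): registering an `OnStarting` callback ensures the header is set before the response body begins, after which headers can no longer be modified.

```csharp
// Inline middleware registered on the WebApplication ('app'):
app.Use(async (context, next) =>
{
    context.Response.OnStarting(() =>
    {
        // Runs just before the response starts; headers are still mutable.
        context.Response.Headers["X-Example-Header"] = "MyApp";
        return Task.CompletedTask;
    });
    await next();
});
```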

  17. Task<TResult>.Result Property (System.Threading.Tasks)

    Definition. Namespace: System.Threading.Tasks; Assembly: System.Runtime.dll. Gets the result value of this Task<TResult>. C#: public TResult Result { get; } Property value: TResult, the result value of this Task<TResult>, which is of the same type as the task's type parameter. Exceptions: AggregateException, if the task was canceled.
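A small illustration of the property (the `ResultDemo` helper is made up for this sketch): reading Result synchronously blocks the calling thread until the task has finished, which is why `await` is usually preferred in async code.

```csharp
using System.Threading.Tasks;

static class ResultDemo
{
    public static int GetAnswer()
    {
        Task<int> task = Task.Run(() => 21 * 2);

        // Blocks until the task completes, then returns its value.
        // If the task faulted or was canceled, this would instead
        // throw an AggregateException wrapping the cause.
        return task.Result;
    }
}
```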

  18. what to return when return type is Task<IactionResult> in .netcore

    I'm really confused about what I should normally return to a queue (message broker) in .NET Core. I have a class: public Task<IActionResult> GetMerchantPlatform(int merchantID) { try { var mrchantInfo = dbContext.MerchantPlatforms.Where(s => s.Id == merchantID); return Task.FromResult(mrchantInfo); } catch (Exception ex) { throw ex; } }
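One way the questioner's method could be written so the types line up (a sketch under assumptions: the method and property names follow the question, `dbContext` is presumed to be an EF Core context, and `ToListAsync` comes from Microsoft.EntityFrameworkCore): wrap the data in an IActionResult via `Ok(...)` and let `async`/`await` produce the `Task<IActionResult>`, instead of calling Task.FromResult on the query itself.

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[HttpGet("{merchantID}")]
public async Task<IActionResult> GetMerchantPlatform(int merchantID)
{
    // Materialize the query asynchronously rather than returning
    // the IQueryable wrapped in a task.
    var merchantInfo = await dbContext.MerchantPlatforms
        .Where(s => s.Id == merchantID)
        .ToListAsync();

    return Ok(merchantInfo);
}
```

The original `try { ... } catch (Exception ex) { throw ex; }` is also dropped: rethrowing with `throw ex;` resets the stack trace, and a bare rethrow adds nothing here.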

  19. Multi-tenancy with ASP.NET Core and FinBuckle.Multitenant

    FinBuckle.Multitenant is an open-source .NET library designed to codify many best practices around multi-tenancy, taking into account many of the standard building blocks found in the .NET community. These building blocks include ASP.NET Core, dependency injection, identity management, and more. The package focuses on being "lightweight ...

  20. How can Ok() be both Task<IActionResult> and IActionResult?

    The async keyword causes the compiler to take care of this automatically: async methods implicitly "wrap" the return value in a Task. Compare async Task<int> GetNumber() { return 42; } with Task<int> GetNumber() { return Task.FromResult(42); }