European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core Hosting :: How to Use IOptions for ASP.NET Core 2 Configuration

clock February 7, 2019 11:48 by author Scott

Almost every project will have some settings that need to be configured and changed depending on the environment, or secrets that you don't want to hard code into your repository. The classic examples are connection strings and passwords, which in ASP.NET 4 were often stored in the <applicationSettings> section of web.config.

In ASP.NET Core this model of configuration has been significantly extended and enhanced. Application settings can be stored in multiple places - environment variables, appsettings.json, user secrets etc - and easily accessed through the same interface in your application. Further to this, the new configuration system in ASP.NET allows (actually, enforces) strongly typed settings using the IOptions<> pattern.

While working on an RC2 project the other day, I was trying to use this facility to bind a custom Configuration class, but for the life of me I couldn't get it to bind my properties. Partly that was down to the documentation being somewhat out of date since the launch of RC2, and partly down to the way binding works using reflection. In this post I'm going to demonstrate the power of the IOptions<> pattern, and describe a few of the problems I ran into and how to solve them.

Strongly typed configuration

 

In ASP.NET Core, there is now no default AppSettings["MySettingKey"] way to get settings. Instead, the recommended approach is to create a strongly typed configuration class with a structure that matches a section in your configuration file (or wherever your configuration is being loaded from):

public class MySettings
{
    public string StringSetting { get; set; }
    public int IntSetting { get; set; }
}

This would map to the "MySettings" section of the appsettings.json below.

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "MySettings": {
    "StringSetting": "My Value",
    "IntSetting": 23
  }
}

Binding the configuration to your classes

 

In order to ensure your appsettings.json file is bound to the MySettings class, you need to do 2 things.

1. Setup the ConfigurationBuilder to load your file

2. Bind your settings class to a configuration section

When you create a new ASP.NET Core application from the default templates, the ConfigurationBuilder is already configured in Startup.cs to load settings from environment variables, appsettings.json, and in development environments, from user secrets:

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    if (env.IsDevelopment())
    {
        builder.AddUserSecrets();
    }

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();
}

If you need to load your configuration from another source then this is the place to do it, but for most common situations this setup should suffice. There are a number of additional configuration providers that can be used to bind other sources, such as xml files for example.
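
For instance, the XML provider can be chained on in the same way. This is a minimal sketch, assuming the Microsoft.Extensions.Configuration.Xml package is referenced and an appsettings.xml file (an invented name here) exists:

var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    // Hypothetical file name - any XML file with a matching structure works
    .AddXmlFile("appsettings.xml", optional: true);

Configuration = builder.Build();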

In order to bind a settings class to your configuration you need to configure this in the ConfigureServices method of Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<MySettings>(options => Configuration.GetSection("MySettings").Bind(options));
}

Note: The syntax for model binding has changed from RC1 to RC2 and was one of the issues I was battling with. The previous method, using services.Configure<MySettings>(Configuration.GetSection("MySettings")), is no longer available.

You may also need to add the configuration binder package to the dependencies section of your project.json:

"dependencies": {
  ...
  "Microsoft.Extensions.Configuration.Binder": "1.0.0-rc2-final"
  ...
}

Using your configuration class

 

When you need to access the values of MySettings you just need to inject an instance of an IOptions<> class into the constructor of your consuming class, and let dependency injection handle the rest:

public class HomeController : Controller
{
    private MySettings _settings;
    public HomeController(IOptions<MySettings> settings)
    {
        _settings = settings.Value;
        // _settings.StringSetting == "My Value";
    }
}

The IOptions<> service exposes a Value property which contains your configured MySettings class.

It's important to note that there doesn't appear to be a way to access the raw IConfigurationRoot through dependency injection, so the strongly typed route is the only way to get to your settings.

You can expose the IConfigurationRoot directly to the DI container using services.AddSingleton(Configuration). (Thanks Saša Ćetković for pointing that out!)
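
A minimal sketch of that registration and usage, assuming the Configuration property is typed as IConfigurationRoot as in the default template (the ValuesController name is just for illustration):

public void ConfigureServices(IServiceCollection services)
{
    // Registers the configuration root itself so it can be resolved directly
    services.AddSingleton(Configuration);
    services.Configure<MySettings>(options => Configuration.GetSection("MySettings").Bind(options));
}

public class ValuesController : Controller
{
    private readonly IConfigurationRoot _config;

    public ValuesController(IConfigurationRoot config)
    {
        _config = config;
        // _config["MySettings:StringSetting"] == "My Value"
    }
}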

Complex configuration classes

 

The example shown above is all very nice, but what if you have a very complex configuration, nested types, collections, the whole 9 yards?

public class MySettings
{
    public string StringSetting { get; set; }
    public int IntSetting { get; set; }
    public Dictionary<string, InnerClass> Dict { get; set; }
    public List<string> ListOfValues { get; set; }
    public MyEnum AnEnum { get; set; }
}

public class InnerClass
{
    public string Name { get; set; }
    public bool IsEnabled { get; set; } = true;
}

public enum MyEnum
{
    None = 0,
    Lots = 1
}

Amazingly, we can bind that to the following using the same Configure<MySettings> call, and it all just works:

{
  "MySettings": {
    "StringSetting": "My Value",
    "IntSetting": 23,
    "AnEnum": "Lots",
    "ListOfValues": ["Value1", "Value2"],
    "Dict": {
      "FirstKey": {
        "Name": "First Class",
           "IsEnabled":  false
      },
      "SecondKey": {
        "Name": "Second Class"
      }
    }
  }
}

When values aren't provided, they get their default values (e.g. MySettings.Dict["SecondKey"].IsEnabled == true). Dictionaries, lists and enums are all bound correctly. That is, until they aren't...

Models that won't bind

 

So after I'd beaten the RC2 syntax change in to submission, I thought I was home and dry, but I still couldn't get my configuration class to bind correctly. Getting frustrated, I decided to dive in to the source code for the binder and see what's going on (woo, open source!).

It was there I found a number of interesting cases where a model's properties won't be bound even if there are appropriate configuration values. Most of them are fairly obvious, but could feasibly sting you if you're not aware of them. I am only going to go into scenarios that do not throw exceptions, as these seem like the hardest ones to figure out.

Properties must have a public Get method

 

The properties of your configuration class must have a getter, which is public and must not be an indexer, so none of these properties would bind:

private string _noGetter;
private string[] _arr;

public string NoGetter { set { _noGetter = value; } }
public string NonPublicGetter { private get; set; }
public string this[int i]
{
    get { return _arr[i]; }
    set { _arr[i] = value; }
}

Properties must have a public Set method...

 

Similarly, properties must have a public setter, so again, none of these would bind:

public string NoSetter { get; }
public string NonPublicSetter { get; private set; }

...Except when they don't have to

 

The public setter is actually only required if the value being bound is null. If it's a simple type like a string or an int, then the setter is required as there's no way to change the value otherwise. You can create readonly properties with default values, but they just won't be bound. For properties which are complex types, you don't need a setter, as long as the property has a (non-null) value at binding time:

public MyInnerClass ComplexProperty { get; } = new MyInnerClass();
public List<string> ListValues { get; } = new List<string>();
public Dictionary<string, string> DictionaryValue1 { get; } = new Dictionary<string,string>();
private Dictionary<string, string> _dict = new Dictionary<string,string>();
public Dictionary<string, string> DictionaryValue2 { get { return _dict; } }

The sub properties of the MyInnerClass object returned by ComplexProperty would be bound, values would be added to the collection in ListValues, and KeyValuePairs would be added to the dictionaries.

Dictionaries must have string keys

 

This is one of the gotchas that got me! While integers are obviously perfectly valid dictionary keys in general, they are not allowed here thanks to this snippet in ConfigurationBinder.BindDictionary:

var typeInfo = dictionaryType.GetTypeInfo();

// IDictionary<K,V> is guaranteed to have exactly two parameters
var keyType = typeInfo.GenericTypeArguments[0];
var valueType = typeInfo.GenericTypeArguments[1];

if (keyType != typeof(string))
{
    // We only support string keys
    return;
}
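
For example, a property like the following (the name is just for illustration) is silently skipped, with no exception to point you at the problem:

// Not bound: the key type is int, and the binder only supports string keys
public Dictionary<int, InnerClass> NumberedItems { get; set; }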

Don't expose IDictionary

 

This is another one that got me accidentally. While coding to interfaces is nice, the model binder uses reflection and Activator.CreateInstance(type) to create the classes to be bound. If your properties are interfaces or abstract then the binder will throw when trying to create them.

If you are exposing your properties as a readonly getter, however, then the binder does not need to create the property and you might think the configuration class would bind correctly. And that is true in almost all cases. Unfortunately, while the binder can bind any property whose type derives from IDictionary<,>, it will not bind an IDictionary<,> property directly. This leaves you with the following situation:

public interface IMyDictionary<TKey, TValue> : IDictionary<TKey, TValue> { }

public class MyDictionary<TKey, TValue>
    : Dictionary<TKey, TValue>, IMyDictionary<TKey, TValue>
{
}

public class MySettings
{
  public IDictionary<string, string> WontBind { get; } = new Dictionary<string, string>();
  public IMyDictionary<string, string> WillBind { get; } = new MyDictionary<string, string>();
}

Our wrapper type IMyDictionary, which is really just an IDictionary, will be bound, whereas the directly exposed IDictionary will not. This doesn't feel right to me and I've raised an issue with the team.

Make properties implementing ICollection<> also expose an Add method

 

Types deriving from ICollection<> are automatically bound in the same way as dictionaries. However, the binder never adds items through the ICollection<> interface itself; it may seem strange, but the interface is only used to decide whether a property is a candidate for binding.

If a property exposes a type that implements ICollection<> (and is not an ICollection<> itself, as for IDictionary above, though that makes sense in this case), then it is a candidate for binding. In order to add an item to the collection, reflection is used to invoke an Add method on the type:

var addMethod = typeInfo.GetDeclaredMethod("Add");
addMethod.Invoke(collection, new[] { item });

If an Add method does not exist on the exposed type (e.g. it could be a ReadOnlyCollection<>), then the property will not be bound; no error is thrown, you just get an empty collection. This one feels a little nasty to me, but I guess the common use case is that you will be exposing List<> and IList<> etc. It feels like they should be looking for IList<> if that is what they need, though!
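
To make the ReadOnlyCollection<> case concrete, a property like the following (from System.Collections.ObjectModel) ends up bound to an empty collection rather than throwing:

// Implements ICollection<string>, but the exposed type declares no public Add
// method, so the binder silently leaves it empty
public ReadOnlyCollection<string> Values { get; } =
    new ReadOnlyCollection<string>(new List<string>());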

Summary

 

The strongly typed configuration is a great addition to ASP.NET Core, providing a clean way to apply the Interface Segregation Principle to your configuration. Currently it seems more convoluted to retrieve your settings than in ASP.NET 4, but I wouldn't be surprised if they add some convenience methods for quickly accessing values in a forthcoming release.

It's important to consider the gotchas described if you're having trouble binding values (and you're not getting an exception thrown). Pay particular attention to your collections, as that's where my issues arose.



European ASP.NET Core Hosting :: ASP.NET Core 2.0 MVC Filters

clock January 28, 2019 09:58 by author Scott

The following is a tutorial on how to run code before and after the MVC request pipeline in ASP.NET Core.

Solution

In an empty project, update the Startup class to add services and middleware for MVC:

        public void ConfigureServices
            (IServiceCollection services)
        {
            services.AddMvc();
        } 

        public void Configure(
            IApplicationBuilder app,
            IHostingEnvironment env)
        {
            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }

Add the class to implement filter:

    public class ParseParameterActionFilter : Attribute, IActionFilter
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            object param;
            if (context.ActionArguments.TryGetValue("param", out param))
                context.ActionArguments["param"] = param.ToString().ToUpper();
            else
                context.ActionArguments.Add("param", "I come from action filter");
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        {
        }
    }

In the Home controller add an action method that uses Action filter:

        [ParseParameterActionFilter]
        public IActionResult ParseParameter(string param)
        {
            return Content($"Hello ParseParameter. Parameter: {param}");
        }

Browse to /Home/ParseParameter, you’ll see:

 

Discussion

Filters run after an action method has been selected for execution. MVC provides built-in filters for things like authorisation and caching. Custom filters are very useful for encapsulating reusable code that you want to run before or after action methods.

Filters can short-circuit the result, i.e. stop the code in your action from running and return a result to the client directly. They can also have services injected into them via the service container, which makes them very flexible.

Filter Interfaces

Creating a custom filter requires implementing an interface for the type of filter you require. There are two flavours of interfaces for most filter types, synchronous and asynchronous:

    public class HelloActionFilter : IActionFilter
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            // runs before action method
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        {
            // runs after action method
        }
    } 

    public class HelloAsyncActionFilter : IAsyncActionFilter
    {
        public async Task OnActionExecutionAsync(
            ActionExecutingContext context,
            ActionExecutionDelegate next)
        {
            // runs before action method
            await next();
            // runs after action method
        }
    }

You can short-circuit the filter pipeline by setting the Result property (of type IActionResult) on the context parameter (for async filters, don't call the next delegate):

    public class SkipActionFilter : Attribute, IActionFilter
    {
        public void OnActionExecuting(ActionExecutingContext context)
        {
            context.Result = new ContentResult
            {
                Content = "I'll skip the action execution"
            };
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        { }
    } 

    [SkipActionFilter]
    public IActionResult SkipAction()
    {
       return Content("Hello SkipAction");
    }

For Result filters you can short-circuit by setting the Cancel property on the context parameter and writing a response:

        public void OnResultExecuting(ResultExecutingContext context)
        {
            context.Cancel = true;
            context.HttpContext.Response.WriteAsync("I'll skip the result execution");
        } 

        [SkipResultFilter]
        public IActionResult SkipResult()
        {
            return Content("Hello SkipResult");
        }

Filter Attributes

MVC provides abstract base classes that you can inherit from to create custom filters. These abstract classes inherit from the Attribute class and can therefore be used to decorate controllers and action methods (a minimal example follows the list):

  • ActionFilterAttribute
  • ResultFilterAttribute
  • ExceptionFilterAttribute
  • ServiceFilterAttribute
  • TypeFilterAttribute
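
A minimal sketch of the attribute approach (the LogActionAttribute name is hypothetical); because it derives from ActionFilterAttribute it can be applied directly as [LogAction] on a controller or an action:

    public class LogActionAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext context)
        {
            // runs before the action method
        }

        public override void OnActionExecuted(ActionExecutedContext context)
        {
            // runs after the action method
        }
    }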

Filter Types

There are various types of filters that run at different stages of the filter pipeline. A figure from the official documentation illustrates the sequence:

 

 

Authorization

 

 

These are the first filters to run and they short-circuit requests from unauthorised users. They only have one method (unlike most other filters, which have Executing and Executed methods). Normally you won't write your own authorization filters; the built-in filter calls into the framework's authorisation mechanism.

Resource

They run before model binding and can be used for changing how it behaves. Also they run after the result has been generated and can be used for caching etc.

Action

They run before and after the action method, and are thus useful for manipulating action parameters or the result. The context supplied to these filters lets you manipulate the action parameters, controller and result.

Exception

They can be used to handle unhandled exceptions before they're written to the response. Exception-handling middleware works for most scenarios; however, this filter can be used if you want to handle errors differently based on the invoked action.

Result

They run before and after the execution of an action method's result, if the action was successful. They can be used to manipulate the formatting of the result.

Filter Scope

Filters can be added at different levels of scope: Action, Controller and Global. Attributes are used for action- and controller-level scope. For globally scoped filters, you need to add them to the filter collection of MvcOptions when configuring services in Startup:

            services.AddMvc(options =>
            {
                             // by instance
                options.Filters.Add(new AddDeveloperResultFilter("Tahir Naushad")); 

                // by type
                options.Filters.Add(typeof(GreetDeveloperResultFilter));
            });

Filters are executed in a sequence:

1. The Executing methods are called first for Global > Controller > Action filters.

2. Then Executed methods are called for Action > Controller > Global filters.

Filter Dependency Injection

In order to use filters that require dependencies injected at runtime, you need to add them by type. You can add them globally (as illustrated above); however, if you want to apply them to an action or controller (as attributes), then you have two options:

ServiceFilterAttribute

This attribute retrieves the filter from the service container. To use it:

Create a filter that uses dependency injection:

    public class GreetingServiceFilter : IActionFilter
    {
        private readonly IGreetingService greetingService; 

        public GreetingServiceFilter(IGreetingService greetingService)
        {
            this.greetingService = greetingService;
        } 
        public void OnActionExecuting(ActionExecutingContext context)
        {
            context.ActionArguments["param"] =
                this.greetingService.Greet("James Bond");
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        { }
    }

Add filter to service container:

services.AddScoped<GreetingServiceFilter>();
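
The IGreetingService dependency itself isn't shown here; a minimal assumed version and its registration might look like this:

    public interface IGreetingService
    {
        string Greet(string name);
    }

    public class GreetingService : IGreetingService
    {
        public string Greet(string name) => $"Hello, {name}!";
    }

    // In Startup.ConfigureServices, alongside the filter registration
    services.AddScoped<IGreetingService, GreetingService>();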

Apply it using ServiceFilterAttribute:

[ServiceFilter(typeof(GreetingServiceFilter))]
public IActionResult GreetService(string param)

TypeFilterAttribute

This attributes doesn’t need registering the filter in service container and initiates the type using ObjectFactory delegate. To use it:

Create a filter that uses dependency injection:

    public class GreetingTypeFilter : IActionFilter
    {
        private readonly IGreetingService greetingService; 

        public GreetingTypeFilter(IGreetingService greetingService)
        {
            this.greetingService = greetingService;
        } 

        public void OnActionExecuting(ActionExecutingContext context)
        {
            context.ActionArguments["param"] = this.greetingService.Greet("Dr. No");
        } 

        public void OnActionExecuted(ActionExecutedContext context)
        { }
    }

Apply it using TypeFilterAttribute:

[TypeFilter(typeof(GreetingTypeFilter))]
public IActionResult GreetType1(string param)

You could also inherit from TypeFilterAttribute and then use it without TypeFilter:

public class GreetingTypeFilterWrapper : TypeFilterAttribute
{
    public GreetingTypeFilterWrapper() : base(typeof(GreetingTypeFilter))
    { }
}


[GreetingTypeFilterWrapper]
public IActionResult GreetType2(string param)
 



European ASP.NET Hosting :: How to Add Custom 404 and Error Pages in ASP.NET

clock January 23, 2019 10:26 by author Scott

I post this to remind myself how we got this working for both ASP.NET and static files, both for remote and local requests on IIS 7 and IIS 7.5.

<httpErrors> over <customErrors>

<customErrors> in web.config is a construct for specifying custom error pages for requests handled by ASP.NET. In other words, static files such as HTML files or directory (“friendly”) URLs are not handled.

<httpErrors> configures error pages in IIS itself, outside the web application. This handles all requests, whether they’re in fact handled by ASP.NET or IIS natively.

We ignore customErrors altogether and only use httpErrors.

Displaying a static HTML file

This is useful for error codes such as 500 where the ASP.NET web application in itself may suffer problems:

<httpErrors errorMode="Custom" defaultResponseMode="File">
    <clear />
    <error statusCode="500" path="Static\html\error.html"/>
</httpErrors>

Displaying an ASP.NET page

This displays an ASP.NET page when a 404 error occurs, without rewriting the URL (the visitor will still see the requested URL in the address bar):

<httpErrors errorMode="Custom" existingResponse="Replace">
  <remove statusCode="404" subStatusCode="-1" />
  <error statusCode="404" path="/Errors/404.aspx" responseMode="ExecuteURL"/>
</httpErrors>

Note that we skip the <clear /> element and simply remove the standard 404 handling (in order to avoid an exception caused by duplicate elements for the 404 status code).

Redirecting to another URL

ExecuteURL can only be used to execute an ASP.NET file within the same application. If we want to redirect to another application, or possibly an entirely different external URL, we use the Redirect response mode with an absolute URL:

<httpErrors errorMode="Custom" existingResponse="Replace">
  <clear />
  <error statusCode="404" path="http://www.bing.com" responseMode="Redirect"/>
</httpErrors>

Make sure HTTP errors is enabled in IIS

For this to work you have to make sure the HTTP Errors feature is installed for IIS, otherwise you’ll just get an empty 404 response:



European ASP.NET Core Hosting :: How to Use Dapper Asynchronously in ASP.NET Core 2.1

clock November 12, 2018 08:13 by author Scott

In this post, we're going to create a very simple ASP.NET Core 2.1 application which uses Dapper to access data. There's already a sample project worked up over on GitHub, and you might want to use that to follow along here.

Step 1: Get the NuGet Package

First things first, we need to grab the NuGet package for Dapper. In Visual Studio, you can do this by right-clicking on your project file and selecting Manage NuGet Packages and then search for the Dapper package, like so:

With that installed, let's try creating a repository.

Step 2: Creating an Employee Class and Repository

For this demo, I am not going to go over how to create a database or show a demo database with sample data; I don't have one available and it's a pain to make one. So let's assume we have a table Employee with columns for FirstName, LastName, ID, and DateOfBirth. We can make a corresponding C# class for this table, like so:

public class Employee
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}

Now we need a sample repository. Let's call it EmployeeRepository and give it an interface so we can use ASP.NET Core's Dependency Injection setup.

Here's the interface:

public interface IEmployeeRepository
{
    Task<Employee> GetByID(int id);
    Task<List<Employee>> GetByDateOfBirth(DateTime dateOfBirth);
}

Now we can work up a skeleton implementation of this repository. Here's what we're starting with:

public class EmployeeRepository : IEmployeeRepository
{
    public async Task<Employee> GetByID(int id)
    {
        // Implemented in Step 5
        throw new NotImplementedException();
    }

    public async Task<List<Employee>> GetByDateOfBirth(DateTime dateOfBirth)
    {
        // Implemented in Step 6
        throw new NotImplementedException();
    }
}
We also need to update our project's Startup file to include our new repository in the services layer:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddTransient<IEmployeeRepository, EmployeeRepository>();

        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        //...
    }
}

Next, we need to enable this repository to use Dapper. Before we can do that, however, we need it to be aware of what connection string we are using.

Step 3: Injecting IConfiguration

Very often in ASP.NET Core projects, Connection Strings are defined in the appSettings.json file:

{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "ConnectionStrings": {
    "MyConnectionString": "MyConnectionString"
  }
}

The problem is: how do we pass that connection string to the repository so it can create a SqlConnection object for Dapper to use?

ASP.NET Core introduces a new IConfiguration object which can be injected into other classes. That injected instance exposes a method called GetConnectionString which we can use to obtain our connection string from the appSettings.json file. So, let's inject IConfiguration like so:

public class EmployeeRepository : IEmployeeRepository
{
    private readonly IConfiguration _config;

    public EmployeeRepository(IConfiguration config)
    {
        _config = config;
    }

    //Remainder of file is unchanged
}

Step 4: Creating a SqlConnection

With the injected IConfiguration now available to our repository, we can create a Dapper-enabled SqlConnection object that all of our repository's methods can utilize.

public class EmployeeRepository : IEmployeeRepository
{
    private readonly IConfiguration _config;

    public EmployeeRepository(IConfiguration config)
    {
        _config = config;
    }

    public IDbConnection Connection
    {
        get
        {
            return new SqlConnection(_config.GetConnectionString("MyConnectionString"));
        }
    }

    //Remainder of file is unchanged
}

Step 5: Employee by ID

Let's first create a method to return employees by their ID.

To start, let's remember that the way Dapper works is by processing raw SQL and then mapping it to an object. Because our table columns and object properties share the same names, we don't need to do any special mapping here.
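
If the column names didn't match the property names, one simple option is to alias the columns in the query so Dapper's name-based mapping still works. A purely hypothetical sketch (the snake_case column names below are invented for illustration):

// Hypothetical column names, aliased to match the Employee properties
string sQuery = @"SELECT employee_id   AS ID,
                         first_name    AS FirstName,
                         last_name     AS LastName,
                         date_of_birth AS DateOfBirth
                  FROM Employee
                  WHERE employee_id = @ID";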

Here's the implementation of our GetByID method:

public class EmployeeRepository : IEmployeeRepository
{
    //...

    public async Task<Employee> GetByID(int id)
    {
        using (IDbConnection conn = Connection)
        {
            string sQuery = "SELECT ID, FirstName, LastName, DateOfBirth FROM Employee WHERE ID = @ID";
            conn.Open();
            var result = await conn.QueryAsync<Employee>(sQuery, new { ID = id });
            return result.FirstOrDefault();
        }
    }
}

Step 6: Employees by Date of Birth

We also need to get all employees born on a particular date. Since we are now returning a collection of employees rather than a single one, the implementation changes very slightly.

public class EmployeeRepository : IEmployeeRepository
{
    //...

    public async Task<List<Employee>> GetByDateOfBirth(DateTime dateOfBirth)
    {
        using (IDbConnection conn = Connection)
        {
            string sQuery = "SELECT ID, FirstName, LastName, DateOfBirth FROM Employee WHERE DateOfBirth = @DateOfBirth";
            conn.Open();
            var result = await conn.QueryAsync<Employee>(sQuery, new { DateOfBirth = dateOfBirth });
            return result.ToList();
        }
    }
}

Step 7: Implement the Controller

The final step is creating a controller to which our EmployeeRepository can be injected. Here it is:

[Route("api/[controller]")]
[ApiController]
public class EmployeeController : ControllerBase
{
    private readonly IEmployeeRepository _employeeRepo;

    public EmployeeController(IEmployeeRepository employeeRepo)
    {
        _employeeRepo = employeeRepo;
    }

    [HttpGet]
    [Route("{id}")]
    public async Task<ActionResult<Employee>> GetByID(int id)
    {
        return await _employeeRepo.GetByID(id);
    }

    [HttpGet]
    [Route("dob/{dateOfBirth}")]
    public async Task<ActionResult<List<Employee>>> GetByDateOfBirth(DateTime dateOfBirth)
    {
        return await _employeeRepo.GetByDateOfBirth(dateOfBirth);
    }
}

Summary

That's it! We've implemented Dapper into our ASP.NET Core 2.1 application!



European ASP.NET Core Hosting :: How to Measure and Report the Response Time of ASP.NET Core

clock November 8, 2018 09:57 by author Scott

Performance is a buzzword for APIs. One of the most important and measurable parameters of the API performance is the response time. In this article, we will understand how to add code to measure the response time of an API and then return the response time data to the end client.

What is the need for this?

So, let's take a moment to think why we would ever need such a feature to measure the Response time of an API. Following are some of the points that have been the inspiration for writing code to Capture response time.

  • You need to define the SLA (Service Level Agreement) for your API with your clients. The clients need to understand how much time the API takes to respond. The response time data over time can help us decide on an SLA for our API.
  • Management is interested in reports as to how fast or slow the application is. You need to have data to corroborate your claims. It is worth it to have reports on the performance of the application and to share it with Stakeholders.
  • The client needs to have the information of the Response time of the API so that they can track how much time is spent on the client and the Server.

You might also have encountered similar requests in your project and it is worthwhile looking at an approach to capture the response time for the API.

Where to add the code?

Let's explore a couple of approaches to capture the response time of our API focusing mostly on capturing the time spent in our API. Our objective is to calculate the time elapsed in milliseconds from the time the request is received by the Asp.net core runtime to the time the response is processed and sent back from the Server.

What factors are we ignoring?

It's important to understand that this discussion doesn't include time spent on the network, in IIS, or in application pool startup. If the application pool wasn't up and running, then the first request can skew the overall response time of the API. There is an Application Initialization module which we can make use of, but that is out of scope for this article.

First Attempt

One very naive approach to capturing the response time of an API would be to add code at the start and end of every API method and then measure the delta to calculate the response time, as shown below:

// GET api/values/5   
[HttpGet]  
public IActionResult Get() {  
    // Start the watch   
    var watch = new Stopwatch();  
    watch.Start();  
    // Your actual Business logic   
    // End the watch  
    watch.Stop();  
    var responseTimeForCompleteRequest = watch.ElapsedMilliseconds;  
} 

This code should be able to calculate the time spent in an operation. But this doesn't seem to be the right approach for the following reasons.

If an API has a lot of operations, then we need to add this code in multiple places, which is not good for maintainability.

This code measures the time spent in the method only; it doesn't measure the time spent on other activities like middleware, filters, controller selection, action method selection, model binding, etc.

Second Attempt

Let's try to improve the above code by centralizing it in one place so that it is easier to maintain. We need to execute the response time calculation code before and after a method is executed. If you have worked with earlier versions of ASP.NET Web API, you would be familiar with the concept of filters. Filters allow you to run code before or after specific stages in the request processing pipeline.

We will implement a filter for calculating the response time as shown below. We will use OnActionExecuting to start the timer and then stop the timer in OnActionExecuted, thus calculating the response time of the API.

public class ResponseTimeActionFilter: IActionFilter { 
    private const string ResponseTimeKey = "ResponseTimeKey"; 
    public void OnActionExecuting(ActionExecutingContext context) { 
        // Start the timer  
        context.HttpContext.Items[ResponseTimeKey] = Stopwatch.StartNew(); 
    } 
    public void OnActionExecuted(ActionExecutedContext context) { 
        Stopwatch stopwatch = (Stopwatch) context.HttpContext.Items[ResponseTimeKey]; 
        // Calculate the time elapsed  
        var timeElapsed = stopwatch.Elapsed; 
    } 
}  

This code is not a reliable technique for calculating the response time either, as it doesn't account for the time spent executing middleware, controller selection, action method selection, model binding, etc. The filter pipeline runs after MVC selects the action to execute, so it effectively doesn't instrument the time spent in the rest of the ASP.NET Core pipeline.

Third Attempt

We will use the Asp.net Core Middleware to Calculate the Response time of the API.

So, what is Middleware?

Basically, middleware are software components which handle the request/response. Middleware is assembled into an application pipeline and handles the incoming request. Each component does the following:

  • Chooses whether to pass the request to the next component in the pipeline. 
  • Can perform work before and after the next component in the pipeline is invoked.

If you have worked with HTTPModules or HTTPHandlers in ASP.NET, then you can think of Middleware as a replacement in ASP.NET Core. Some of the examples of middleware are -

  • MVC Middleware
  • Authentication
  • Static File Serving
  • Caching
  • CORS

 

 

We want to add code to start the timer once the request enters the ASP.NET Core pipeline and stop the timer once the response has been processed by the pipeline. Custom middleware at the start of the request pipeline seems to be the best approach: it gets access to the request as early as possible and retains access until the last step in the pipeline has executed.

We will build a ResponseTime middleware which we will add as the first middleware in the request pipeline, so that we can start the timer as soon as the request enters the ASP.NET Core pipeline.

What to do with the Response time data?

Once we capture the response time data we can process data in the following ways.

  1. Add the Response time data to a Reporting database or an analytics solution.
  2. Write the Response time data to a log file.
  3. Pass the response time data to a message queue which can further be processed by another application for reporting and analytics.
  4. Send the Response time information to the client applications consuming our Rest API using the Response headers.
  5. There may be other useful ways of using the response time data. Please leave a comment and tell me how you process the response time data in your application.

Let's write the code

We will write the code considering the following points.

  • Calculating the response time data for the API
  • Reporting the data back to client applications by passing the data in the Response headers.

Full code snippet for the ResponseTimeMiddleware is shown below.

public class ResponseTimeMiddleware {
    // Name of the Response Header, Custom Headers starts with "X-"
    private const string RESPONSE_HEADER_RESPONSE_TIME = "X-Response-Time-ms";

    // Handle to the next Middleware in the pipeline
    private readonly RequestDelegate _next;

    public ResponseTimeMiddleware(RequestDelegate next) {
        _next = next;
    }

    public Task InvokeAsync(HttpContext context) {
        // Start the Timer using Stopwatch
        var watch = new Stopwatch();
        watch.Start();

        context.Response.OnStarting(() => {
            // Stop the timer information and calculate the time
            watch.Stop();
            var responseTimeForCompleteRequest = watch.ElapsedMilliseconds;

            // Add the Response time information in the Response headers.
            context.Response.Headers[RESPONSE_HEADER_RESPONSE_TIME] = responseTimeForCompleteRequest.ToString();

            return Task.CompletedTask;
        });

        // Call the next delegate/middleware in the pipeline
        return this._next(context);
    }
}

Explanation of the code

The interesting part happens in the InvokeAsync method. We use the Stopwatch class to start timing once the request enters the first middleware in the pipeline, and stop it once the request has been processed and the response is ready to be sent back to the client. The OnStarting method lets us register a delegate to be invoked just before the response headers are sent to the client.

Lastly, we add the response time information in a custom header. We use X-Response-Time-ms as the response header; by convention, custom headers start with "X-".
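
The registration step isn't shown above; assuming the standard UseMiddleware extension, the middleware would be added at (or near) the start of the pipeline in Startup.Configure, for example:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Register first so the timer covers as much of the pipeline as possible
    app.UseMiddleware<ResponseTimeMiddleware>();

    app.UseMvc();
}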

Conclusion

In this article, we understood how to leverage ASP.NET Core middleware to manage cross-cutting concerns like measuring the response time of APIs. There are various other useful use cases for middleware which can help you reuse code and improve the maintainability of the application.




European ASP.NET Core Hosting - HostForLIFE.eu :: File logging on ASP.NET Core

clock March 8, 2017 10:13 by author Scott

ASP.NET Core introduces a new framework-level logging system. Although it is feature-rich, it is not complex to use, and it provides decent abstractions that fit well with the architecture of most web applications. This blog post shows how to set up and use Serilog file logging using framework-level dependency injection.

Configuring logging

Logging is configured in the Configure() method of the Startup class, as shown below. ASP.NET Core comes with console and debug loggers; for other logging targets, like the file system or log servers, third-party loggers must be used. This blog post uses the Serilog file logger.

"dependencies": {
 
// ...
  "Serilog.Extensions.Logging.File": "1.0.0"
},

The project now has a reference to the Serilog file logger. Let's introduce it to the ASP.NET Core logging system. AddFile(string path) is the extension method that adds the Serilog file logger to the logger factory's loggers collection. Notice that there can be multiple loggers active at the same time.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();
    loggerFactory.AddFile("Logs/ts-{Date}.txt");
 
    // ...
}

Serilog will write log files to the Logs folder of the web application, with file names like ts-20170108.txt.

Injecting logger factory

Loggers themselves are not injected into other classes. Instead, it's possible to inject the logger factory and let it create a new logger. If that sounds weird to you, just check the internal loggers collection of the logger factory to see that other classes that need a logger also have their own instances. The code below shows how to get a logger into a controller through framework-level dependency injection.

public class DummyController : Controller
{
    private ILogger _logger;
 
    public DummyController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger(typeof(DummyController));
    }
 
    // ...
}

Why do we have to inject the logger factory and not a single instance of ILogger? The reason is simple – the application may use multiple loggers, as shown above. That is a fact we don't want to know about in the parts of the application where logging is done; it's an external detail that is not related to the code that uses logging.

Logging

Logging is done using extension methods for ILogger interface. All classic methods one can expect are there:

  • LogDebug()
  • LogInformation()
  • LogWarning()
  • LogError()
  • LogCritical()

Now let’s write something to log.

public class DummyController : Controller
{
    private ILogger _logger;
 
    public DummyController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger(typeof(DummyController));
    }
 
    public void Index()
    {
        _logger.LogInformation("Hello from dummy controller!");
    }
}

Making a request to the Dummy controller results in a log message being added to the debug window and the log file. The following image shows the log message in the output window.

And here is the same log message in the log file.



European ASP.NET Core Hosting - HostForLIFE.eu :: How to Find and Use ASP.NET Core Session

clock February 24, 2017 06:41 by author Scott

I'm building a tutorial (hopefully soon to be a post) and in that tutorial I needed to use Session for some quick-and-dirty data storage. Unfortunately when I tried to use Session in my default project, it was nowhere to be found, and I was sent down a small rabbit hole trying to find it. This post will walk through a reminder of what Session is, where to find it in ASP.NET Core 1.0, an overview of the new extension methods available, and building our own custom extension method. Let's get started!

What is Session?

If you're just starting to develop in ASP.NET, you may not have encountered Session before. Session is a serialized collection of objects that are related to the current user's session. The values are usually stored on the local server memory, but there are alternate architectures where the values can be stored in a SQL database or other distributed storage solutions, especially when your servers are part of a server farm.

You can store any data you like in Session, however any data you store will only be available to the current user as long as the session is active. This means that if that user logs out, the Session data is lost; if you need to keep this data you have to find another way to store it.

Finding the Session

ASP.NET Core 1.0 has been written from the ground up to be a modular, choose-what-you-need framework. What this means is that you must explicitly include any packages you want to use in your project.

This allows us developers to maintain tight control over what functionality our ASP.NET Core projects actually need, and exclude anything that is not necessary.

In our case, Session is considered to be one of these "additional" packages. In order to include that package we need to add a reference to Microsoft.AspNet.Session in the project.json file. If we wanted to use memory as our caching backend, we would also include Microsoft.Extensions.Caching.Memory.

Once we've got the package included in our project, we need to make it available to the Services layer by modifying the ConfigureServices() method in the Startup file, like so:

public void ConfigureServices(IServiceCollection services) 
{
    ...
    services.AddMemoryCache();
    services.AddSession(options =>
    {
        options.IdleTimeout = TimeSpan.FromMinutes(60);
        options.CookieName = ".MyCoreApp";
    });
    ...
}

With all of these steps completed, you can now use Session in your projects just like in any other ASP.NET application. If you wanted to use a different cache backend (rather than memory) you could grab a different NuGet package like Redis or SqlServer. Don't forget to check NuGet if you can't find the functionality you need; it is probably there and you just need to download it.

How to Use Session

ASP.NET Core 1.0 has introduced some new extension methods that we can use for accessing and storing Session values. The odd thing is that these extensions are not in Microsoft.AspNet.Session; rather, they are in Microsoft.AspNet.Http, and so we will need to add that package.

Once we've got that package included, we can start using the extension methods:

[HttpGet]
public IActionResult Index() 
{
    var userID = Context.Session.GetInt("UserID");
    var userName = Context.Session.GetString("UserName");
    return View();
}

[HttpGet]
public IActionResult Default() 
{
    Context.Session.SetInt("UserID", 5);
    Context.Session.SetString("UserName", "John Smith");
    return View();
}

The new extension methods are:

  • Get: Returns a byte array for the specified Session object.
  • GetInt: Returns an integer value for the specified Session object.
  • GetString: Returns a string value for the specified Session object.
  • Set: Sets a byte array for the specified Session object.
  • SetInt: Sets an integer value for the specified Session object.
  • SetString: Sets a string value for the specified Session object.

Why do only these extensions exist, and not GetDouble, GetDateTime, etc? I'm really not sure. If I had to guess I'd say it is to ensure that the values are serializable, but don't quote me on that. If anybody knows the real reason, I'd love to hear it!

Creating Extension Methods

I'm not completely satisfied with these extensions; they don't have enough functionality for my tastes, and so I'm gonna build some more. Specifically, I want to build extensions that will store a DateTime in session and retrieve it.

Here's the method signatures for these extensions:

public static DateTime? GetDateTime(this ISessionCollection collection, string key) 
{

}

public static void SetDateTime(this ISessionCollection collection, string key, DateTime value) 
{

}

The ISessionCollection interface is exactly what it sounds like: a collection of items stored in Session.

Let's tackle the SetDateTime() method first. DateTimes are weird because they are not inherently serializable, but they can be converted to a serializable type: long. So, we must convert the given DateTime value to a long before it can be stored.

public static void SetDateTime(this ISessionCollection collection, string key, DateTime value) 
{
    collection.Set(key, BitConverter.GetBytes(value.Ticks));
}

The BitConverter class allows us to convert byte arrays into other types easily.

Now we can tackle the GetDateTime() method. There are two things we need to keep in mind when building this extension. First, it is entirely possible that there will be no value in Session for the specified key; if this happens, we should return null. Second, we are storing the DateTime as a long, and therefore we need to convert it back into a DateTime type; luckily the DateTime constructor makes this really easy. The final code for the method looks like this:

public static DateTime? GetDateTime(this ISessionCollection collection, string key) 
{
    var data = collection.Get(key);
    if(data == null)
    {
        return null;
    }

    long dateInt = BitConverter.ToInt64(data, 0);
    return new DateTime(dateInt);
}

Now we can use these extensions in addition to the ones already defined.
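
A quick usage sketch (the action name is arbitrary), following the same pattern as the built-in extensions:

[HttpGet]
public IActionResult Visit() 
{
    Context.Session.SetDateTime("LastVisit", DateTime.UtcNow);
    DateTime? lastVisit = Context.Session.GetDateTime("LastVisit");
    return View();
}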

Now we've seen Session in action, including what package to use from NuGet, what extension methods are available, and even how to build our own extension method. Let me know if this helped you out in the comments!

Happy Coding!



European ASP.NET Core Hosting - HostForLIFE.eu :: Customising model-binding conventions in ASP.NET Core

clock February 21, 2017 08:04 by author Scott

A pattern I use when building Web APIs is to create commands to represent an API operation and models to represent resources or results. We share these "common" objects with our .NET client so we can be sure we're using the same parameter names/types.

Here's an excerpt from Fabrik's API for creating a project:

public HttpResponseMessage Post(int siteId, AddProjectCommand command)
{
    var project = new CMS.Domain.Project(
        session.GetSiteId(siteId),
        command.Title,
        command.Slug,
        command.Summary,
        command.ContentType,
        command.Content,
        command.Template,
        command.Tags,
        command.Published,
        command.Private);

    session.Store(project);

    var model = CreateProjectModel(project);
    var link = Url.Link(RouteNames.DefaultRoute, new { controller = "projects", siteId = siteId, id = project.Id.ToIntId() });

    return Created(model, new Uri(link));
}

We also use commands for GET operations that have multiple parameters such as search endpoints. So instead of:

public IActionResult GetProjects(string searchTerm = null, int page = 1, int pageSize = 10)
{

}

We have a GetProjectsCommand:

public class GetProjectsCommand
{
    public string SearchTerm { get; set; }
    [MinValue(1, ErrorMessage = "Page must be greater than 0.")]
    public int Page { get; set; } = 1;
    public int PageSize { get; set; } = 20;
}

This provides a single place to encapsulate our default values and validation rules, keeping our controllers nice and lean.
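
The MinValue attribute used above is a custom validation attribute that isn't shown in this post; a minimal sketch of it (building on System.ComponentModel.DataAnnotations) might look like this:

public class MinValueAttribute : ValidationAttribute
{
    private readonly int _minValue;

    public MinValueAttribute(int minValue)
    {
        _minValue = minValue;
    }

    public override bool IsValid(object value)
    {
        // Valid only when the value is an int greater than or equal to the minimum
        return value is int && (int)value >= _minValue;
    }
}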

Model-binding in ASP.NET Core MVC

To bind complex types to query strings in ASP.NET Web API we had to change the parameter binding rules. This is because the default was to bind complex types from the HTTP Request body.

When implementing the above pattern in ASP.NET Core I was pleasantly surprised to see that the following worked out of the box:

// GET: api/values
[HttpGet]
public IEnumerable<string> Get(GetValuesCommand command)
{

}

I thought that perhaps the framework detected that this was a HTTP GET request and therefore bound the parameter values from the query string instead.

Actually this is not the case - in ASP.NET Core, complex types are not bound from the request body by default. Instead you have to opt-in to body-based binding with the FromBodyAttribute:

// POST api/values
[HttpPost]
public void Post([FromBody]AddValueCommand command)
{
}

This seems an odd default given that (in my experience) binding complex types from the request body is far more common.

In any case, we can customise the default model-binding behaviour by providing a convention:

public class CommandParameterBindingConvention : IActionModelConvention
{
    public void Apply(ActionModel action)
    {
        if (action == null)
        {
            throw new ArgumentNullException(nameof(action));
        }

        foreach (var parameter in action.Parameters)
        {
            if (typeof(ICommand).IsAssignableFrom((parameter.ParameterInfo.ParameterType)))
            {
                parameter.BindingInfo = parameter.BindingInfo ?? new BindingInfo();
                parameter.BindingInfo.BindingSource = BindingSource.Body;
            }
        }
    }
}

Which is registered like so:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        options.Conventions.Add(new CommandParameterBindingConvention());
    });
}

This convention checks to see if the parameter type implements ICommand (a marker interface I created) and if so, instructs the framework to bind the values for this parameter from the request body.
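
The marker interface itself carries no members; it exists only so the convention has something to check for:

public interface ICommand { }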

All I have to do then is update my command with this interface:

public class AddValueCommand : ICommand
{
    public string Value { get; set; }
}

Then I can drop the unnecessary [FromBody] attribute:

// POST api/values
[HttpPost]
public void Post(AddValueCommand command)
{
}

 



European ASP.NET Core Hosting - HostForLIFE.eu :: How to Setup Webpack in ASP.NET Core

clock February 10, 2017 11:11 by author Scott

Webpack is a great tool for bundling the client side assets in your web application. In this post we'll briefly discuss why you should create bundles and see how Webpack can automate that for us in ASP.NET Core.

Why should I bundle

When building web applications, regardless of the server side framework, you'll need to get your client side resources over to the browser. You may have dozens of JavaScript and CSS files in your project but having to reference each of them individually in your HTML markup is just not ideal for production deployments.

Each browser only allows so many concurrent requests per hostname. BrowserScope has some data on this that you can view on their site. If your web application makes more than the allowed number of simultaneous requests then the additional requests will end up being queued. This leads to longer load times and a not so smooth experience for your users; especially on mobile devices.

It would be much better to group resources into bundles so that the browser would have fewer files to download and thus fewer requests to make. This will help in keeping bandwidth usage low and even with battery life on your users' devices.

What is Webpack?

Webpack is a module bundler for the static assets in your web application. Essentially, you point Webpack at the main entry point(s) of your code then it will determine the dependencies, run transformations and create bundles that you can provide to your browser. What's even better is that in addition to JavaScript Webpack can also handle CSS, LESS, TypeScript, CoffeeScript, images, web fonts and more.

Setting up

We're going to start by setting up a new ASP.NET Core project using the dotnet cli tooling. If you don't have the tooling installed, you can find setup files and instructions here. If you're not a Windows user, no need to worry - the tooling works on Windows, OSX and Linux.

Let's get started by opening a terminal and creating an empty directory. Now, we'll generate a new ASP.NET Core project by running the following command:

dotnet new -t web 

Currently, the generated project includes .bowerrc and bower.json files. You can delete these since we'll be using NPM to install packages. If you don't have NodeJS installed on your system, make sure you do so before continuing.

The next thing we'll do is create a folder called Scripts in the root of your project. You'll find out why later on. Also add an empty webpack.config.js to the root of your project. As you might have guessed, this is the file we'll use to configure Webpack. Your project layout should look something like this.

Configuring Webpack

Before we start configuring Webpack, let's check out where our assets actually are. ASP.NET Core places all the files destined for the browser inside of the wwwroot folder by default. If you take a peep inside that folder, you'll see sub folders for your JavaScript, CSS and image files. Note, the names of these sub folders aren't important. Feel free to rename them if you wish.

Personally, I prefer to reserve the wwwroot folder for the bundles that I want to provide to the browser. What we'll do is use the Scripts directory that was created earlier for the working files that will get included in the bundles.

Let's add two pretty trivial JavaScript files to our Scripts folder.

//other.js
function func() {
    alert('loaded!');
}
module.exports = func;

//main.js
var other = require('./other');

other();

Our scripts have been written using the CommonJS module syntax. The main.js file imports other.js and calls the exported function. Ok, simple enough. Let's take a look at webpack.config.js.

var path = require('path');

module.exports = {
    entry: {
        main: './Scripts/main'
    },
    output: {
       publicPath: "/js/",
       path: path.join(__dirname, '/wwwroot/js/'),
       filename: 'main.build.js'
    }
};

The webpack.config.js file is a CommonJS module that we'll use to set up Webpack. The sample above shows a fairly bare bones Webpack configuration. Inside of entry, we define a main module and point it to the location of the main.js file, since that's where our app starts. The name of the bundle can be changed to something else if you like; there's no requirement for it to be called main. Inside of output, we let Webpack know what to name the bundle file and where to place it. The publicPath property configures the relative URL that the browser will use to reference our bundles.
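If you later add more entry points, Webpack can name each bundle after its entry by using the [name] placeholder in the output filename. A quick sketch, assuming a second, hypothetical Scripts/admin.js entry (not part of this walkthrough):

// webpack.config.js - sketch with a second, hypothetical entry point
var path = require('path');

module.exports = {
    entry: {
        main: './Scripts/main',
        admin: './Scripts/admin'    // assumed extra entry point
    },
    output: {
        publicPath: '/js/',
        path: path.join(__dirname, '/wwwroot/js/'),
        filename: '[name].build.js' // produces main.build.js and admin.build.js
    }
};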

Alright, that's good for now. Before we generate our bundle, make sure you have Webpack installed globally on your machine. Type the following command in your terminal. You'll only have to do this once.

npm i -g webpack 

Now we're ready to create our bundle. Make sure your terminal path is at the root of your project directory, then run

webpack 

When the command completes, you should find the generated main.build.js bundle inside the wwwroot/js folder.

In your code editor, open Views/_Layout.cshtml. Near the bottom of the file, add a script reference to our bundle.

<script src="~/js/main.build.js"></script>

The generated ASP.NET Core template adds a few script tags wrapped in environment tag helpers. Go ahead and remove these for now.

<environment names="Development">
        <script src="~/lib/jquery/dist/jquery.js"></script>
        <script src="~/lib/bootstrap/dist/js/bootstrap.js"></script>
        <script src="~/js/site.js" asp-append-version="true"></script>
</environment>

Finally, we can run our application and see if our bundle works. Execute the following commands in the command terminal.

dotnet restore
dotnet run

Navigate to http://localhost:5000 in your browser. If everything works as expected, you should see an alert with loaded! in the browser window.

Tying the builds together

We can reduce the number of commands we have to type by leveraging the build events in project.json. Update the scripts section to include the precompile event. To see the other available events, head over to the .NET Core Tools docs.

"scripts": {
    "precompile": ["webpack"],
  },

Now running dotnet run or dotnet build will also run Webpack to generate the bundle.

Conclusion

In this post, we got a short introduction to setting up Webpack in ASP.NET Core. Webpack often gets labeled as overly complex and difficult to setup. Hopefully, this post showed you how easy it is to get started and made you a little more curious about what else it can do.



European ASP.NET Core Hosting - HostForLIFE.eu :: Cookie Authentication and Policy Based Authorization in ASP.NET Core

clock February 6, 2017 10:23 by author Scott

This is the first part of the series of articles I'll be covering about ASP.NET Core Security. We're going to start off with cookie based authentication and build our way up to configuring policy based authorization.

As part of ASP.NET Core security, there is a new, richer policy based authorization model that we can use to authorize against the claims in the user's possession.

Let's build an example MVC application to demonstrate the concepts. In our scenario, we'll require users to be authenticated and to have a Read claim in order to view the home page of our application.

I am using Visual Studio 2015 Pro Edition w/Update 3 (you should also be able to use the free community edition).

1. Create a new ASP.NET Core Web Application


2. Select the Empty Template


3. We need to add the required NuGet packages to configure authorization, cookie authentication, and the MVC middleware. Bring up the project.json file and add the following under the dependencies section.

"Microsoft.AspNetCore.Authorization": "1.0.0"
"Microsoft.AspNetCore.Authentication.Cookies": "1.0.0"
"Microsoft.AspNetCore.Mvc": "1.0.0"


4. Once you save the project.json file, notice that Visual Studio installs the missing NuGet packages automatically. Next, bring up Startup.cs, where we'll configure the middleware we just included in our project.

5. In the Configure method, add the following authentication middleware configuration;

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "Cookies",
    LoginPath = new PathString("/Account/Login"),
    AccessDeniedPath = new PathString("/Home/Forbidden"),
    AutomaticAuthenticate = true,
    AutomaticChallenge = true
});

Here we're using cookie authentication, defining our LoginPath, where users will be redirected to authenticate, and AccessDeniedPath, where they'll be sent when they're not authorized. The AutomaticAuthenticate flag indicates that the middleware should run on every request and attempt to validate and reconstruct any serialized principal it created. The AutomaticChallenge flag indicates that the middleware should redirect the browser to the LoginPath or the AccessDeniedPath when authentication or authorization fails (there are various other configuration options, but this is the bare minimum we need for this example).

Next, we'll configure the requirements for the ReadPolicy. The policy will demand that the user be authenticated and have the Read claim in order to access the required resource(s). Depending on your authorization logic, you can set up your policy to require additional claims.

public void ConfigureServices(IServiceCollection services)
{
    // The UseMvc call in Configure requires the MVC services to be registered here.
    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy("ReadPolicy", policyBuilder =>
        {
            policyBuilder.RequireAuthenticatedUser()
                .RequireAssertion(context => context.User.HasClaim("Read", "true"))
                .Build();
        });
    });
}
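Since this particular policy only checks for a single claim value, the same requirement can also be expressed more concisely with RequireClaim instead of RequireAssertion; a minimal sketch of the equivalent registration:

services.AddAuthorization(options =>
{
    // Equivalent to the RequireAssertion version above: the user must be
    // authenticated and hold a "Read" claim with the value "true".
    options.AddPolicy("ReadPolicy", policyBuilder =>
        policyBuilder.RequireAuthenticatedUser()
                     .RequireClaim("Read", "true"));
});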

6. Finally we need to add the mvc middleware configuration.

app.UseMvc(builder =>
    {
        builder.MapRoute("default", "{controller=Home}/{action=index}/{id?}");
    });

Let's add a couple of controllers so that we can test the login and the policy we've created. Create an AccountController for user login and a HomeController where we'll apply the ReadPolicy.

7. In AccountController.cs, add the following action to display the login page;

[HttpGet]
public IActionResult Login(string returnUrl) 
{
    ViewData["ReturnUrl"] = returnUrl;
    return View();
}

8. Add a simple Login.cshtml view under the Views/Account folder (create the folder structure if it doesn't exist) where the user can log in to the application.

<form asp-controller="Account" asp-action="Login" method="post"
      asp-route-returnUrl="@ViewData["ReturnUrl"]">
    <div>
        <label>Username</label>
        <input type="text" name="username" />
    </div>
    <div>
        <label>Password</label>
        <input type="password" name="password" />
    </div>
    <div><button>Login</button></div>
</form> 

In the POST login action, we have a simple verification: the username and the password must match in order to authenticate the user (obviously you wouldn't do this in a real production application, but for our demo purposes this is fine). If they match, we then create a set of claims, the claims identity, and the claims principal that represents the authenticated user. Then we sign in the user (meaning we issue a cookie to the user that contains the set of claims we've created) and redirect back to the resource that was originally requested.

[HttpPost]
public async Task<IActionResult> Login(string username, string password, string returnUrl) 
{
    if (username == password)
    {
        var claims = new List<Claim>
        {
            new Claim("Read", "true"),
            new Claim(ClaimTypes.Name, "ayayalar"),
            new Claim(ClaimTypes.Sid, "12345")
        };

        var claimsIdentity = new ClaimsIdentity(claims, "password");
        var claimsPrincipal = new ClaimsPrincipal(claimsIdentity);

        // Issue the authentication cookie for the "Cookies" scheme.
        await HttpContext.Authentication.SignInAsync("Cookies", claimsPrincipal);

        if (Url.IsLocalUrl(returnUrl))
        {
            return Redirect(returnUrl);
        }

        return Redirect("~/");
    }

    return View();
}
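While we won't need it for this demo, you'll typically also want a way to sign the user out of the "Cookies" scheme. A minimal sketch (the action name and redirect target here are just assumptions, not part of the original walkthrough):

[HttpPost]
public async Task<IActionResult> Logout()
{
    // Remove the authentication cookie that was issued at login for the "Cookies" scheme.
    await HttpContext.Authentication.SignOutAsync("Cookies");
    return Redirect("~/");
}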

9. Add the following action to the HomeController.cs;

[Authorize(Policy = "ReadPolicy")]
public IActionResult Index() 
{
    return View();
}

Note that we passed the ReadPolicy to the authorization attribute. The user must be authenticated and have a Read claim to have access. Otherwise, they'll be forwarded to the Forbidden page as we specified in the authentication middleware configuration.

The Index.cshtml view for the home page (which can be as simple as one line of code) goes under the Views/Home folder;

<h1>Access Granted</h1>
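One more thing: the /Home/Forbidden path we configured as the AccessDeniedPath isn't created by the empty template, so add a matching action and view so unauthorized users have somewhere to land. A minimal sketch (the markup is just a placeholder):

[HttpGet]
public IActionResult Forbidden()
{
    // Shown when the cookie middleware redirects an unauthorized user here.
    return View();
}

And a one-line Forbidden.cshtml view under the Views/Home folder:

<h1>Access Denied</h1>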

We should be able to test our changes at this point. Once you run the application, you'll be redirected to the login page since you're not authenticated (notice the return url in the query string is set automatically by the framework). Upon successfully submitting your credentials, you will be authorized and redirected to the home page.

For testing purposes, try removing the Read claim we've added in the Login action, then rebuild your solution and restart the application. Even though the user can log in successfully, authorization will be denied and the user will be redirected to the Forbidden page.



