European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core Hosting :: Creating A .NET Application From The CLI

clock September 13, 2021 08:37 by author Peter

In today’s article, we will take a look at creating a .NET application from the command line interface (CLI). In the past, I have always created applications using Visual Studio. Today, we will create a simple console application using only the CLI and notepad. Finally, we will open this application in Visual Studio to verify all is compatible. So, let us begin.

The first step is to open the Developer Command Prompt for Visual Studio, or simply a regular command prompt. I prefer the "Run as Administrator" option.

First, we create a folder for our application and change into it. There, we run "dotnet new console" to create the console application.

Next, we will create the solution using “dotnet new sln”.

We now want to modify the "Program.cs" file. For this, we will use Notepad. Run the "notepad Program.cs" command.

Modify the code as below:

var myName = Console.ReadLine();
Console.WriteLine($"Hello from {myName}");


We are now ready to build and run the code. Run “dotnet build” to build the code.

Finally, we run the code using “dotnet run”. Here we enter a string, and it is displayed to us with the Prefix “Hello from”.
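For reference, older SDK versions scaffold "dotnet new console" with a classic Main method rather than top-level statements; the equivalent edit in that template looks like this (the namespace name follows the project folder, so it is an assumption and may differ on your machine):

```csharp
using System;

namespace MyConsoleApp // default namespace comes from the folder name; assumed here
{
    class Program
    {
        static void Main(string[] args)
        {
            // Read a line from standard input and echo it back with a prefix
            var myName = Console.ReadLine();
            Console.WriteLine($"Hello from {myName}");
        }
    }
}
```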

Testing the application in Visual Studio 2022 community preview edition

Let us verify that the solution and project we just created works with Visual Studio. We will open the same solution using Visual Studio 2022 community preview edition.


When I ran it for the first time, I got the below error.


To fix this, I simply opened the project file, made a small change (adding a space and then removing it), and saved it. After that, everything worked fine.

HostForLIFE.eu ASP.NET Core Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with a 99.99% uptime guarantee of reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

European ASP.NET Core Hosting :: How to Block IP Address In ASP.NET Core Web API?

clock September 7, 2021 07:23 by author Peter

There are several articles and blogs on the subject that describe how to block or limit an IP address from making repeated requests in order to reduce server load.

So, why do we need another blog or article to add to that list? What makes this article stand out from others?
    It uses Session State, available out of the box in ASP.NET Core 3.1, rather than keeping the IP details in a database.
    It performs all checks outside of the action's logic, so if a request exceeds the predetermined request count, the user is denied access before the API's actions run.
    By using session state backed by a distributed memory cache, the check stays extremely fast.

This post will go through a few key aspects that are commonly used while building APIs in ASP.NET Core. APIs are interfaces that connect a system to third-party systems and allow different programs to communicate with one another. These days, APIs are also used within single applications to manage frontend and backend communication. Above all, security is one of the most important characteristics to consider when creating APIs, since no one knows who will use them or how they will be used.

This article will cover how to avoid a brute force attack and how to manage numerous requests from the client side. We'll also examine several key aspects of the ASP.Net Core Web API, such as Session State and built-in attributes. This article will show beginners and intermediates how to protect APIs from malicious third-party activity and reduce server load from repeated queries.

Step 1: Create a project

To begin, build a new project, or add this functionality to an existing project that handles web API requests. Create the project with a recent version of the .NET framework; at the time of writing, ASP.NET Core 3.1 is the latest stable version.

Step 2: Add and configure session services
After successfully creating the project, the first step is to add session services in the Startup.cs file so that we can use session state in our app. Our goal is to detect client requests without relying on the database, so that our program does not have to query the database on every request. Session state storage keeps all client request details in distributed memory, which makes lookups very fast, and lets us block a user when there are too many requests within a given time period. The services we need to register in the startup file are listed below.
services.AddDistributedMemoryCache();
services.AddSession();

Both of these tell the system to keep session items in a distributed memory cache. We can also tune the session with a variety of settings, such as timeout, HttpOnly, and so on.
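For example, those settings can be passed when registering the session services; the values below are illustrative assumptions, not requirements:

```csharp
// Sketch: session registration with tuned options (values are illustrative)
services.AddDistributedMemoryCache();
services.AddSession(options => {
    options.IdleTimeout = TimeSpan.FromMinutes(10); // expire idle sessions after 10 minutes
    options.Cookie.HttpOnly = true;                 // hide the session cookie from client-side script
    options.Cookie.IsEssential = true;              // keep the session working even when consent is not given
});
```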

Now, in the Startup.cs file, add SessionMiddleware to the Configure section.
app.UseSession();


UseSession is a middleware component that enables the system to save items in session state.

Step 3: Add class to handle IP details.
Create a class that will handle and store request details. I've created a class called "IPDetailModel." This includes basic information about the incoming request.
public class IPDetailModel {
    public string IPAddress { get; set; }
    public DateTime Time { get; set; }
    public int Count { get; set; }
}

Step 4: Create a custom attribute class.
Now, in the project, create a class to hold the attribute logic. I named my class "TraceIPAttribute". After creating the class, derive it from ActionFilterAttribute and override the "OnActionExecuting" method. The logic for handling an incoming request is shown below; it determines whether the request should proceed to the API's action or be rejected.

public class TraceIPAttribute : ActionFilterAttribute {
    private const int RequestLimit = 1;                            // allowed requests per window
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(1);

    public override void OnActionExecuting(ActionExecutingContext context) {
        var remoteIp = context.HttpContext.Connection.RemoteIpAddress.ToString();
        var stored = context.HttpContext.Session.GetString(remoteIp);
        var record = stored == null
            ? null
            : JsonConvert.DeserializeObject<IPDetailModel>(stored);

        if (record == null || DateTime.Now.Subtract(record.Time) >= Window) {
            // First request from this IP, or the window has elapsed: start a fresh window
            record = new IPDetailModel { IPAddress = remoteIp, Count = 1, Time = DateTime.Now };
            context.HttpContext.Session.SetString(remoteIp, JsonConvert.SerializeObject(record));
        } else if (record.Count >= RequestLimit) {
            // Too many requests inside the window: short-circuit before the action runs
            context.Result = new JsonResult("Permission denied!");
        } else {
            record.Count++;
            context.HttpContext.Session.SetString(remoteIp, JsonConvert.SerializeObject(record));
        }
    }
}


So, as you can see from the code snippet, I've created a one-minute time slot with a single request count. As a result, if two or more requests from the same IP address are received within a minute, logic will stop a user from accessing the API's operations. The same IP address can access the APIs after one minute has passed.

The code stores the incoming request's details in an IPDetailModel, serializing it into session state and deserializing it to read it back. We use the Newtonsoft.Json package to serialize and deserialize the data.

After creating the custom attribute class, register it in the ConfigureServices section of the Startup.cs file. This makes it resolvable by the ServiceFilter attribute on the controller.
public void ConfigureServices(IServiceCollection services) {
    services.AddControllers();
    services.AddDistributedMemoryCache();
    services.AddSession();
    services.AddScoped<TraceIPAttribute>();
}

Step 5: Add the custom attribute to the controller.
On the controller side, add the TraceIPAttribute via ServiceFilter. It will evaluate each request before the controller's actions are invoked.
[ApiController]
[Route("[controller]")]
[ServiceFilter(typeof(TraceIPAttribute))]
public class WeatherForecastController : ControllerBase {
    // All actions and APIs will be placed here.
}

I've added the TraceIPAttribute at the controller level so that it applies to all of the actions under it. However, if we want it only for specific actions, we can assign it to those actions instead.
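A sketch of the per-action form (the action body here is a placeholder, not part of the original sample):

```csharp
[HttpGet]
[ServiceFilter(typeof(TraceIPAttribute))] // rate-limit only this action
public IActionResult Get()
{
    return Ok("Weather data");
}
```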

Understanding the flow of the process
When a request arrives, the TraceIPAttribute logic runs first. If this is the first request from that IP, IPDetailModel records the IP address, time, and request count, and the request is allowed through to the controller's logic, which returns a response. If the same IP sends further requests within the configured time window, the filter short-circuits the pipeline and returns a response before the controller's actions are ever reached.

APIs are now the most common means of exchanging data between systems and applications, and security attacks such as brute force and man-in-the-middle attacks are on the rise. The approach in this article helps make your application more resilient against brute force attempts. It also reduces server load: a flood of repeated requests can push the server's load high enough to bring the application down, and rejecting those requests early keeps that load in check.


European ASP.NET Core Hosting :: Caching Mechanism In ASP.NET Core

clock September 6, 2021 07:08 by author Peter

Caching is the process of storing frequently used data so that it can be served much faster for future requests. We copy the most frequently used data into temporary storage so that later calls from clients can be answered quickly. As a simple example: User-1 requests some data and the server takes 12-15 seconds to fetch it. While fetching, we copy the result into temporary storage in parallel. Now, when User-2 requests the same data, we serve it from the cache, and the response takes only 1-2 seconds because it was already stored.

There are two important terms used with cache, cache hit and cache miss. A cache hit occurs when data can be found in a cache and a cache miss occurs when data can't be found in the cache.

Caching significantly improves the performance of an application by reducing the work needed to generate content. It is important to design an application so that it never depends directly on the cache. The application should only cache data that doesn't change frequently, and should use cached data only when it is available.

ASP.NET Core has many caching features. But among them the two main types are,

    In-memory caching
    Distributed Caching

In-memory caching
An in-memory cache is stored in the memory of a single server hosting the application. Basically, the data is cached within the application. This is the easiest way to drastically improve application performance.

The main advantage of in-memory caching is that it is much quicker than distributed caching because it avoids communicating over a network, which makes it well suited to small-scale applications. The main disadvantage is maintaining cache consistency across instances when the application is deployed to the cloud.
Implementing In-memory Caching with ASP.NET Core

First create an ASP.NET Core web API application.

Now inside the Startup.cs file just add the following line. This will add a non-distributed in-memory caching implementation to our application.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache();
    //Rest of the code
}


Now let's create a new controller "EmployeeController". And in this controller we will implement our cache.
[Route("api/[controller]")]
[ApiController]
public class EmployeeController : ControllerBase
{
    private readonly IMemoryCache _memoryCache;
    private readonly ApplicationContext _context;
    public EmployeeController(IMemoryCache memoryCache, ApplicationContext context)
    {
        _memoryCache = memoryCache;
        _context = context;
    }

    [HttpGet]
    public async Task<IActionResult> GetAllEmployee()
    {
        var cacheKey = "employeeList";
        //checks if cache entries exists
        if(!_memoryCache.TryGetValue(cacheKey, out List<Employee> employeeList))
        {
            //calling the server
            employeeList = await _context.Employees.ToListAsync();

            //setting up cache options
            var cacheExpiryOptions = new MemoryCacheEntryOptions
            {
                AbsoluteExpiration = DateTime.Now.AddSeconds(50),
                Priority = CacheItemPriority.High,
                SlidingExpiration = TimeSpan.FromSeconds(20)
            };
            //setting cache entries
            _memoryCache.Set(cacheKey, employeeList, cacheExpiryOptions);
        }
        return Ok(employeeList);
    }
}


This is a pretty simple implementation. We check whether a cached value is available for the specific cache key. If it exists, the data is served from the cache; if not, we call our service and save the result in the cache.

Explanation
Line 9: Injecting IMemoryCache into the constructor.

Line 16: Creating a cache key. The data will be saved as a key-value pair.

Line 18: Checking whether a cached value is available for the specific key.

Line 24: Setting the cache. MemoryCacheEntryOptions is used to define crucial properties of the cache entry. Some of the properties are:

1. Priority - Defines the priority for keeping the cache entry in the cache when memory is under pressure. The default value is Normal.

2. Sliding Expiration - A timespan after which the cache entry expires if it is not accessed. Since we set the sliding expiration to 20 seconds, the entry expires if no client requests it within 20 seconds of the last access.

3. Absolute Expiration - The hard expiration time of the cache entry, independent of the sliding expiration. In our code we set the absolute expiration to 50 seconds, so the entry is guaranteed to expire 50 seconds after it is set, no matter how often it is accessed.
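A practical corollary: when the underlying data changes, the stale entry should be evicted so the next read repopulates the cache. A minimal sketch for the EmployeeController above (the POST action and its shape are assumptions, not part of the original sample):

```csharp
[HttpPost]
public async Task<IActionResult> AddEmployee(Employee employee)
{
    _context.Employees.Add(employee);
    await _context.SaveChangesAsync();

    // Evict the cached list so the next GET rebuilds it from the database
    _memoryCache.Remove("employeeList");
    return Ok(employee);
}
```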

Now let's observe the performance boost of our application after implementing the In-memory caching.

For this, run the application and send a GET request to the web API using Postman. The first time we send a request to our API, it takes about 2061ms.

The first call fetches the data directly from the database, and in parallel we store the data in the cache.

Now if we request the same endpoint for the same data, it takes only 20ms.

This is a pretty amazing improvement. In my case the dataset is small; with a large dataset, the improvement in our service would be even more dramatic.

Distributed Caching

Distributed cache is a cache that can be shared by one or more applications and it is maintained as an external service that is accessible to all servers. So distributed cache is external to the application.

The main advantage of distributed caching is that the data stays consistent across multiple servers. Because the cache server is external to the application, the failure of any single application instance does not affect the cache.

Here we will try to implement Distributed Caching with Redis.

Redis is an open-source (BSD-licensed), in-memory data structure store used as a database, cache, and message broker. It is a very fast key-value store, and can serve as a NoSQL database as well, which makes it a great option for implementing a highly available cache.
Setting up Redis in Docker

Step 1
Pull docker Redis image from docker hub.
docker pull redis

Step 2
Run the Redis image, mapping the container's Redis port (6379) to a local port.
docker run --name myrediscache -p 5003:6379 -d redis

Step 3
Start the container.
docker start myrediscache

As our Redis is set up now let's go for the implementation of Distributed caching with ASP.NET Core Application.
Implementation of Distributed Cache(Redis) with ASP.NET Core

Create an ASP.NET Core Web API project and install the "Microsoft.Extensions.Caching.StackExchangeRedis" package using the NuGet Package Manager.

As we have already added our required package, now register the services in Startup.cs file.
public void ConfigureServices(IServiceCollection services) {
    //Rest of the code
    services.AddStackExchangeRedisCache(options => {
        options.Configuration = Configuration.GetConnectionString("Redis");
        options.InstanceName = "localRedis_";
    });
}

Here, we set "options.InstanceName"; it simply acts as a prefix to key names on the Redis server. For example, if we store a cache entry named employeelist, it will appear on the Redis server as localRedis_employeelist.

And we will provide the configuration-related settings for the Redis in appsettings.json.
{
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "Redis": "localhost:5003",
    "DefaultConnection": "Data Source=.;Initial Catalog=BuildingDataDB;Integrated Security=True"
  }
}

Create a helper class, "DistributedCacheExtensions", with extension methods that get and set values from and to the Redis cache.

public static class DistributedCacheExtensions {
    public static async Task SetRecordAsync<T>(this IDistributedCache cache, string recordId, T data, TimeSpan? absoluteExpireTime = null, TimeSpan? slidingExpirationTime = null) {
        var options = new DistributedCacheEntryOptions {
            AbsoluteExpirationRelativeToNow = absoluteExpireTime ?? TimeSpan.FromSeconds(60),
            SlidingExpiration = slidingExpirationTime
        };
        var jsonData = JsonSerializer.Serialize(data);
        await cache.SetStringAsync(recordId, jsonData, options);
    }

    public static async Task<T> GetRecordAsync<T>(this IDistributedCache cache, string recordId) {
        var jsonData = await cache.GetStringAsync(recordId);
        if (jsonData is null) {
            return default(T);
        }
        return JsonSerializer.Deserialize<T>(jsonData);
    }
}

This code is pretty self-explanatory. The "SetRecordAsync" method saves data to the Redis cache; the entry is configured with AbsoluteExpirationRelativeToNow and SlidingExpiration, terms we already discussed in the In-memory caching section.

And "GetRecordAsync" returns the cached value for the given record key.

Now we will create a controller named "StudentController",

[ApiController]
[Route("api/[controller]")]
public class StudentController : ControllerBase {
    private readonly ApplicationContext _context;
    private readonly IDistributedCache _cache;

    public StudentController(ApplicationContext context, IDistributedCache cache) {
        _context = context;
        _cache = cache;
    }

    [HttpGet]
    public async Task<ActionResult<List<Student>>> Get() {
        var cacheKey = "GET_ALL_STUDENTS";
        var data = await _cache.GetRecordAsync<List<Student>>(cacheKey);
        if (data is null) {
            // Simulate a slow data source before hitting the database
            await Task.Delay(10000);
            data = await _context.Student.ToListAsync();
            await _cache.SetRecordAsync(cacheKey, data);
        }
        return data;
    }
}

Explanation
We inject IDistributedCache through the constructor and create a cache key inside the Get action.

The action first tries to get the data from the Redis cache server. If cached data is found, it is served to the client directly.

If the data is not cached, it is fetched from the database (or another service). To simulate a slow data source, we intentionally add a delay before the fetch.

So it's pretty simple.

Now let's run the application.

The first time we call the Get method of the StudentController, it takes about 12 seconds to load the data. On the first run the data is not found in the cache, so the application goes to the database, fetches the data, and in parallel saves it to the Redis cache server.

On the second run, it fetches the data within 28 milliseconds, because this time the application finds the data already cached in the Redis server and serves the user from the cache.

This is a huge optimization, giving our application blazing fast responses.

So with this, our introduction to caching ends. I tried to keep the examples as simple as possible and to cover the basic concepts. Hope you find it helpful.

Happy coding!


European ASP.NET Core Hosting :: How To Configure Swagger UI In ASP.NET Core Web API?

clock September 3, 2021 07:29 by author Peter

Swagger is an open-source tool that is used to interact directly with the API through the Swagger UI. In this article, we will see how we can add Swagger in ASP.Net Core application and generate documentation for our web API.

Let's get started.
First, create a sample ASP.NET Core web application.


Give a meaningful name to the application, here I have given the name as SwaggerDemoApplication.


Once the project is created let's add a new controller. I have added HomeController. Once the Controller is created add the following NuGet packages into the application,
Swashbuckle.AspNetCore
Swashbuckle.AspNetCore.Swagger
Swashbuckle.AspNetCore.SwaggerUI


Create a new folder, Model, and add an Employee class.
public class Employee {
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailId { get; set; }
}


Now add the below action method to HomeController.
[HttpGet] // an explicit HTTP method is needed for Swagger to document the action
public List<Employee> GetEmployee() {
    return new List<Employee>() {
        new Employee() {
            Id = 1,
            FirstName = "Yogesh",
            LastName = "Vedpathak",
            EmailId = "[email protected]"
        },
        new Employee() {
            Id = 2,
            FirstName = "Amit",
            LastName = "Kanse",
            EmailId = "[email protected]"
        }
    };
}


Configuring the Swagger Middleware
Now it's time to configure services inside the Startup.cs class. Open Startup.cs and add the below code to the ConfigureServices method.
public void ConfigureServices(IServiceCollection services) {
    // Register the Swagger generator, defining one or more Swagger documents
    services.AddSwaggerGen(c => {
        c.SwaggerDoc("v1", new OpenApiInfo {
            Title = "SwaggerDemoApplication", Version = "v1"
        });
    });
    services.AddControllers();
}
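Optionally, Swagger can also surface XML documentation comments (///) written on actions and models. The sketch below assumes <GenerateDocumentationFile>true</GenerateDocumentationFile> has been added to the .csproj so the XML file exists in the build output:

```csharp
// Requires: using System; using System.IO; using System.Reflection;
services.AddSwaggerGen(c => {
    c.SwaggerDoc("v1", new OpenApiInfo {
        Title = "SwaggerDemoApplication", Version = "v1"
    });
    // Pick up the XML documentation file generated at build time
    var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    c.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFile));
});
```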


Add the below code inside the Configure method. Here we enable the middleware for serving the generated Swagger JSON and the Swagger UI.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env) {
    // Enable middleware to serve the generated Swagger JSON endpoint.
    app.UseSwagger();
    // Enable middleware to serve the Swagger UI.
    app.UseSwaggerUI(c => {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "SwaggerDemoApplication V1");
    });
}


Now it's time to check the result. Run the application and navigate to https://localhost:44338/swagger/Index.html.

Now click on the Get action method, then click the Execute button.



European ASP.NET Core Hosting :: Exploring OData Protocol With ASP.NET Core

clock August 31, 2021 07:16 by author Peter

In this article, we will try to have a general introduction to OData Protocol, and later on, we will try to implement OData with ASP.NET Core Web API to supercharge it.

Let's get started,
Open Data Protocol (OData) is an open protocol that allows the creation and consumption of queryable and interoperable REST APIs in a standard way. It defines a set of best practices for building and consuming RESTful services, so we don't have to worry about the various approaches to defining request and response headers, status codes, HTTP methods, and URL conventions, and can focus on the business logic instead. It enhances an API's capabilities by advocating a standard way of implementing APIs with SQL-like querying capabilities, which is why OData is often described as "SQL for the Web". OData essentially adds a layer on top of the API, treating the endpoint itself as a resource and adding transformation capabilities (selection, sorting, filtering, paging) via the URL.

Advantages of using OData

The main advantage of OData is the support for generic queries against the service data. It essentially replaces the classic "Get<EntityName>By<Criteria>" web services (like GetAllEmployees, GetEmployeeByID, GetEmployeeByDepartment). For example, suppose we have a method "GetAllEmployees()" on our server that returns the list of all employees, and from the client side we call "https://localhost:<port>/odata/Employee" and show the list of employees. Now a new requirement asks for the details of a specific employee. In traditional development we would create another server-side method, "GetEmployeeByID(int id)", that returns the details of that employee. With OData we don't need this new method: we can simply reuse "GetAllEmployees()" and filter the existing endpoint, as in "https://localhost:<port>/odata/Employee?$filter=EmployeeId eq 1", which returns the details of the employee with ID 1. So when a client needs filtered data, or a new filter is required, it is very likely that we don't have to change the server at all; the query is simply composed on the client side. Along with filtering, we can also do operations such as paging.

Another notable advantage of OData is the ability to provide metadata about the service interface. A metadata document is a static resource that describes the data model and types of a particular OData service. A client can use the metadata to understand how to query entities and navigate between them.

Requesting & Querying Data in OData
Now that we have introductory-level knowledge about OData, let's explore how requests work. For this we will use the demo OData service "https://services.odata.org/V4/TripPinService".

Get Entity Collection
OData services support requests for data via HTTP Get requests. So from the demo OData service if we want to get a collection of all People entity we can query "https://services.odata.org/V4/TripPinService/People".

$filter
$filter query option allows the client to filter a collection based on an expression specified in the URL.
For example,

https://services.odata.org/V4/TripPinService/People?$filter=FirstName eq 'Scott'

This query returns the list of People whose FirstName is "Scott".

$orderby
"$orderby" query option allows the client to request the resource in either ascending order using 'asc' or descending order by 'desc'.

Example: https://services.odata.org/V4/TripPinService/People('scottketchum')/Trips?$orderby=EndsAt desc

This query returns the Trips of an individual person, ordered by the 'EndsAt' property in descending order.

$top
"$top" requests that only the given number of items from the start of the queried collection be included in the result.
Example:

https://services.odata.org/V4/TripPinService/People?$top=2

This query returns the first two people of the people entity.

$skip
"$skip" query option requests that the given number of items in the queried collection be skipped and excluded from the result.

Example: https://services.odata.org/V4/TripPinService/Airports?$skip=14

This query returns the resources after skipping the first 14.
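Combining $skip and $top gives simple paging, since OData applies $skip before $top. For example, page 2 of the People collection with a page size of 8 would be (an illustrative composition of the two options):

```
https://services.odata.org/V4/TripPinService/People?$skip=8&$top=8
```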


$count
"$count" query option allows the client to request a count of the matching resources included with the resources in the response.

Ex:
https://services.odata.org/V4/TripPinService/Airports?$count=true

This example query requests to return the total number of items in the collection.

$expand
"$expand" query option specifies related resources to be included with the returned resource collection.

Example:
https://services.odata.org/V4/TripPinService/People?$expand=Friends,Trips

This example query returns People along with each person's Friends and Trips navigation properties.


$select
"$select" query option allows the client to request a limited set of properties for each entity.

Ex.
https://services.odata.org/V4/TripPinService/People?$select=FirstName,LastName

Here the client is requesting only the FirstName and LastName properties of each person.


Lambda Operators
OData defines two lambda operators, any and all, that evaluate a Boolean expression on a collection.

Ex.

https://services.odata.org/V4/TripPinService/People?$filter=Emails/any(s:endswith(s, 'contoso.com'))

This query returns the People who have at least one email address ending with 'contoso.com'.


Implementing OData with ASP.NET Core
First, we will create a new ASP.NET Core API Project and install "Microsoft.AspNetCore.OData" using the package manager console.
Install-Package Microsoft.AspNetCore.OData -Version 7.5.5

Now we need to add the entity class "Student".
public class Student {
    public int StudentID { get; set; }
    public string FName { get; set; }
    public string SName { get; set; }
    public string IDNumber { get; set; }
    public string EmailAddress { get; set; }
    public DateTime DateCreated { get; set; }
}

Then we create a DbContext class called AppDbContext. Since the context class and the database connection setup are standard for any .NET application, I'm skipping the detailed description.

public class AppDbContext : DbContext {
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) {}
    public AppDbContext() {}
    public DbSet<Student> Student { get; set; }
}

Now we need to modify the Startup.cs file to support OData. In the ConfigureServices() method we disable endpoint routing. Inside the Configure() method we add a routeBuilder to the UseMvc call, enable the OData query options (Select, Filter, etc.), and map the OData route with "GetEdmModel()", a helper method that builds the required IEdmModel object for us.
public void ConfigureServices(IServiceCollection services) {
    services.AddControllers(mvcOptions => mvcOptions.EnableEndpointRouting = false);
    services.AddOData();
    //rest of the code
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env) {
    //Rest of the code
    app.UseMvc(routeBuilder => {
        routeBuilder.Select().Filter().Expand().OrderBy().Count().MaxTop(100);
        routeBuilder.MapODataServiceRoute("odata", "odata", GetEdmModel());
    });
}
private IEdmModel GetEdmModel() {
    var edmBuilder = new ODataConventionModelBuilder();
    edmBuilder.EntitySet<Student>("Student");
    return edmBuilder.GetEdmModel();
}


Now, to add the OData service, we will create a controller "StudentController" that inherits from ODataController.
[ODataRoutePrefix("Student")]
public class StudentController : ODataController {
    private readonly AppDbContext db;
    public StudentController(AppDbContext db) {
        this.db = db;
    }
    [HttpGet]
    [EnableQuery]
    public IQueryable<Student> Get() {
        return db.Student.AsQueryable();
    }
    [EnableQuery]
    [ODataRoute("({id})")]
    public Student Get([FromODataUri] int id) {
        return db.Student.Find(id);
    }
}

Explanation

The [ODataRoutePrefix("Student")] attribute on the controller sets the route prefix, so the sample route will be "https://localhost/odata/Student".

The [EnableQuery] attribute enables OData querying for the underlying action.

Now if we build and run the application and navigate to "https://localhost:<port>/odata/Student" we will get a list of Students.

Consume OData Feed with C# Client
Simple.OData.Client is a library that supports all OData protocol versions. It is used to consume an OData feed from C# and supports different .NET versions.

Let's create a .NET Core console application and install the Simple.OData.Client library using this command in the Package Manager Console.
Install-Package Simple.OData.Client -Version 5.18.2

Now create a new folder "Models" and add a Student class inside it. Here we create typed classes for each table whose data we want to consume. Since we are going to consume student data, we build the Student class according to the metadata exposed by our service.
public class Student
{
    public int StudentID { get; set; }
    public string FName { get; set; }
    public string SName { get; set; }
    public string IDNumber { get; set; }
    public string EmailAddress { get; set; }
    public DateTime DateCreated { get; set; }
}


Now we will make a get request using the ODataClient and print the response in the console.
using System;
using System.Threading.Tasks;
using Simple.OData.Client;

class Program {
    static async Task Main(string[] args) {
        await SimpleQuery();
        Console.ReadKey();
    }
    static async Task SimpleQuery() {
        var settings = new ODataClientSettings(new Uri("https://localhost:44340/odata/"));
        var client = new ODataClient(settings);
        try {
            var response = await client.For<Student>()
                .Filter(x => x.StudentID > 1)
                .OrderBy(s => s.SName)
                .Select(p => new { p.StudentID, p.SName, p.FName })
                .FindEntriesAsync();
            foreach (var res in response) {
                Console.WriteLine("Student ID: " + res.StudentID + ", Student Name: " + res.SName + ", Father's Name: " + res.FName);
            }
        } catch (Exception e) {
            Console.WriteLine("Simple Query " + e);
        }
    }
}


Finally, our project structure will look like this,


Now if we run our Client project (make sure the API service is running) we will get the response.


 



European ASP.NET Core Hosting :: The New PriorityQueue Collection In .NET 6

clock August 30, 2021 07:01 by author Peter

In today’s article, we will take a look at the new PriorityQueue collection introduced with .NET 6. This is an advanced version of the existing queue collection. It allows us to add the priority of an item when the item is added to the queue. We will be using the Visual Studio 2022 preview version for this article. So, let us begin.
Creating our console application in Visual Studio 2022

Let us create a console application in Visual Studio 2022. I am using the community preview edition.

 

 

 

We now see the below,


Add the below code in the “Program.cs” file.
using System;
using System.Collections.Generic;
// Old Queue implementation
Console.WriteLine("Old Queue implementation");
var numbers = new Queue<string>();
numbers.Enqueue("one");
numbers.Enqueue("two");
numbers.Enqueue("three");
numbers.Enqueue("four");
var total = numbers.Count;
for (var i = 0; i < total; i++) {
    var number = numbers.Dequeue();
    Console.WriteLine(number);
}
// New .NET 6 PriorityQueue
Console.WriteLine("New .NET 6 PriorityQueue implementation");
var newNumbers = new PriorityQueue<string, int>();
newNumbers.Enqueue("one", 3);
newNumbers.Enqueue("two", 4);
newNumbers.Enqueue("three", 2);
newNumbers.Enqueue("four", 1);
var newTotal = newNumbers.Count;
for (var i = 0; i < newTotal; i++) {
    var number = newNumbers.Dequeue();
    Console.WriteLine(number);
}
Console.ReadKey();


Looking at the above code we see that we first create a simple queue collection and enqueue four strings to it. When we dequeue the elements of the queue collection, we get them back in the same order in which they were added. This is the default working of the queue.

However, in the second part of the code, we are using the new PriorityQueue collection in which we can specify the order in which we would want to dequeue the elements. Hence, we are giving the members of the queue a priority.
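Besides Dequeue, the .NET 6 PriorityQueue also offers TryDequeue, which hands back both the element and the priority it was stored with. A minimal sketch (task names are illustrative):

```csharp
using System;
using System.Collections.Generic;

// Requires .NET 6 or later
var queue = new PriorityQueue<string, int>();
queue.Enqueue("low-priority task", 3);
queue.Enqueue("urgent task", 1);
queue.Enqueue("normal task", 2);

// The element with the lowest priority value comes out first
queue.TryDequeue(out var item, out var priority);
Console.WriteLine($"{item} (priority {priority})"); // urgent task (priority 1)
```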

When we run the code, we will see the below,


In this article, we took a look at the new PriorityQueue collection introduced with .NET 6. This can be very useful for creating a collection in which certain members need to be dequeued and processed before others irrespective of when they were added to the queue. Happy coding!


 



European ASP.NET Core Hosting :: Authorization Attribute In ASP.NET Core Web API

clock August 25, 2021 07:07 by author Peter

Authorization is a basic requirement when an application is used by multiple and multilevel users. While developing APIs, it is also important to provide basic authorization to handle security. Here we will see how to implement an authorization attribute in ASP.NET Core Web API. This post covers the basics of developing authorization attributes for both intermediate and experienced users.

Step 1 - Create Authorization Attribute Class
Create a class for handling the logic of the authorization process; here I have named the class "AuthAttribute". Derive AuthAttribute from TypeFilterAttribute and create a constructor for the class. The constructor can contain parameters as per requirements; here I have given it two parameters, actionName and roleType.

public class AuthAttribute: TypeFilterAttribute {
    public AuthAttribute(string actionName, string roleType): base(typeof(AuthorizeAction)) {
        Arguments = new object[] {
            actionName,
            roleType
        };
    }
}


Step 2 - Create a class to handle the logic for an Authorization
Now, create a class for handling the logic for the authorization process.
public class AuthorizeAction: IAuthorizationFilter {
    private readonly string _actionName;
    private readonly string _roleType;
    public AuthorizeAction(string actionName, string roleType) {
        _actionName = actionName;
        _roleType = roleType;
    }
    public void OnAuthorization(AuthorizationFilterContext context) {
        string _roleType = context.HttpContext.Request?.Headers["role"].ToString();
        switch (_actionName) {
            case "Index":
                if (!_roleType.Contains("admin")) context.Result = new JsonResult("Permission denied!");
                break;
        }
    }
}


The authorization filter can get the role name in different ways. Here the role name is passed in the header, so the HttpContext gives us the role name from the header details. The action name, however, is fetched from the attribute parameter bound on the action in the controller.

Step 3 - Assign Authorization Attribute to Action

Now, assign the authorization attribute to actions as per requirements. The attribute takes parameters for the action name and role name, so declaring the authorization attribute on an action requires actionName and roleType as arguments.

[HttpGet]
[AuthAttribute("Index", "Admin")]
[Route("sample")]
public async Task<IActionResult> Index()
{
    return Ok("Action perform successfully!");
}


Here, I have created a simple action under the controller and assigned an authorization attribute to the action. Now, I have assigned the action name as "Index" and role type as "Admin". We can manage action names and role names based on application requirements.

Step 4 - API call from the postman
Call API using postman with role name in the header.

Step 5 - Logic behind the process
When the API is called, the request is first routed through the authorization process. The authorization filter receives the action name and role type as parameters from the action; the roles passed in the parameter are the ones eligible to access the action. The filter then compares the requesting role with the eligible roles: if they match, the user is allowed to access the action; otherwise the user cannot enter the action logic.
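Stripped of the ASP.NET Core plumbing, the check inside OnAuthorization boils down to a comparison like the following (the helper name is illustrative):

```csharp
using System;

// Mirrors the switch in AuthorizeAction: only the "Index" action is restricted,
// and it requires the requesting role (taken from the header) to contain "admin".
static bool IsAuthorized(string actionName, string requestRole)
{
    switch (actionName)
    {
        case "Index":
            return requestRole != null && requestRole.Contains("admin");
        default:
            return true;
    }
}

Console.WriteLine(IsAuthorized("Index", "admin")); // True
Console.WriteLine(IsAuthorized("Index", "user"));  // False
```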

It is difficult to manage authorization inside each action's logic while creating an ASP.NET Core Web API: if the logical criteria are not airtight, an unauthorized person might gain access to the process logic. With this approach, a user who is prohibited from a specific process can never enter the action's logic at all, so even if an action has loopholes in its own checks, the technique keeps it safe from unauthorized access.




European ASP.NET Core Hosting :: Purpose and Use of Delegate

clock August 24, 2021 06:39 by author Peter

Why use delegate in C#?
Delegate is one of the most misunderstood concepts among C# developers, yet delegates are widely used inside the .NET Framework itself. Let's go further and break down that delegate question that every interviewer asks.
 
My Experience with Delegates
In all my years of experience I have used delegates several times, and I have noticed that whoever takes over the code from me after development struggles to grasp that particular delegate logic.
 
If used wisely they can save a lot of time and lines of code, but if used inappropriately they will confuse everyone in the future.
 
Purpose
It helps achieve the following,
    Encapsulation / Abstraction
    Security
    Callback
    Re-usability

Most common definitions of Delegate:
“Delegate is a keyword in .NET that is used as a function pointer” or
“Delegate is used for callbacks only”
 

Well, nothing is wrong with these definitions, but they don't tell you the whole picture.
 
Characteristics
Delegate has a few characteristics:
    Type safe
    Takes method in assignment
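Both characteristics show up in a tiny sketch (the method name is illustrative): a delegate variable only accepts a method whose signature matches its declaration, and assignment stores the method itself.

```csharp
using System;

static int Double(int x) => x * 2;

// Func<int, int> is type safe: it only accepts methods taking an int and returning an int
Func<int, int> op = Double;   // a method is taken in assignment
Console.WriteLine(op(21));    // 42
```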

Let’s start by an example,
 
First let’s create our Product model
 
We are going to use this model
using System;
namespace Models
{
    public class Products
    {
        public string ProductName { get; set; }
        public int ProductId { get; set; }

    }
}


Let’s create an interface
General practice to create interface
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace BusinessLayer
{
    public interface ICustomer<T>
    {
        void Process(T products);
    }
}

Next, implement this interface in a class
using Models;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace BusinessLayer
{
    public class FrequentCustomer : ICustomer<Products>
    {
        public void Process(Products product)
        {
            Console.WriteLine($"Product Count : 1");
            Console.WriteLine("--Product Details--");
            Console.WriteLine($"Name : {product.ProductName} Product Id : {product.ProductId}");
        }
    }
}   

Process is the method that will be called via an anonymous method later
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ServiceCallManager
{
    public static class ServiceCaller
    {
        public static void Invoke<TService>(Action<TService> action)
        {
            Type typ = typeof(TService);
            TService instance = (TService)Activator.CreateInstance(typ);

            action(instance);
        }
    }
}


Invoke is the method which takes an Action<TService> argument; in .NET the Action delegate's return type is void.

See the summary, it says “encapsulates.”
 
Now we all know delegate takes method in assignment (Encapsulates method)
 
Method can be in any form, let’s say anonymous methods.
 
What is an anonymous method?
“A method without a name!”
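For example, both of the following assign a method without a name to a delegate; the first uses the older delegate keyword, the second the equivalent lambda syntax:

```csharp
using System;

// Anonymous method using the delegate keyword
Func<int, int> square = delegate (int x) { return x * x; };

// The same idea written as a lambda expression
Func<int, int> cube = x => x * x * x;

Console.WriteLine(square(4)); // 16
Console.WriteLine(cube(3));   // 27
```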
 
Now let’s look into our next class.
class Program
{
    static void Main(string[] args)
    {
        IList<Products> products = new List<Products>();

        Products product1 = new Products();
        product1.ProductName = "Rice";
        product1.ProductId = 1;
        products.Add(product1);

        product1 = new Products();
        product1.ProductName = "Bread";
        product1.ProductId = 2;
        products.Add(product1);

        product1 = new Products();
        product1.ProductName = "Pasta";
        product1.ProductId = 3;
        products.Add(product1);

        ServiceCaller.Invoke<FrequentCustomer>(x => x.Process(product1));

        Console.WriteLine();
        Console.ReadKey();
    }
}
ServiceCaller.Invoke<FrequentCustomer>(x => x.Process(product1));


Remember that the Invoke method accepts a delegate as its argument and a delegate encapsulates methods. So here, inside the Invoke call, I am passing an anonymous method, which gets invoked when action() is called; that executes the anonymous method and calls the Process method of the FrequentCustomer class.

Let’s go step by step,
    The breakpoint comes to ServiceCaller.Invoke<FrequentCustomer>(x => x.Process(product1));
    It goes inside the Invoke method of the static ServiceCaller class
    Via reflection we create an object of type TService (you can ignore this if you are not familiar with reflection)
    At the last line of the method, action is called with the parameter instance, i.e. the object of type TService
    After action(instance) is called, the breakpoint returns to ServiceCaller.Invoke<FrequentCustomer>(x => x.Process(product1)); and starts executing only the x => x.Process(product1) part
    You will notice that x is of type FrequentCustomer


    So this part of the execution calls the Process method of FrequentCustomer, passing the local parameter into it.

Now the benefit of the delegate here is that I am able to use a local variable (product1) inside the method, a variable which may not be available in another class due to security reasons.

Delegates help me implement Encapsulation, Security, Callbacks (obviously), Re-usability and, if used wisely, Polymorphism as well.


 

 




European ASP.NET Core Hosting :: How to Implement Read/Write Operations Using CQRS And Dapper In ASP.NET Core?

clock August 23, 2021 07:47 by author Peter

In this article, we will look into the implementation of read queries and write commands using CQRS and the Dapper ORM in the ASP.NET Core 5.0 template. CQRS is a popular architecture pattern because it addresses a common problem with most enterprise applications. Separating write behavior from read behavior, which is the essence of the CQRS architectural pattern, provides stability and scalability to enterprise applications while also improving overall performance.

In scenarios where you have complex business logic, CQRS may simplify understanding of the domain by dividing the problem into command and query parts. In situations where your UI is based on workflows and utilizes the Interface pattern, it is easier to identify users' intents and translate them into domain events.

Setup the Project

    Open Visual Studio and select "Create a new project" and click the "Next" button.
    Add the "project name" and "solution name", also choose the path to save the project in that location, click on "Next".
    Now choose the target framework ".NET 5.0", which is available once we install the SDK; we also get a checkbox option to configure OpenAPI support by default.

Tables Schema
Create a Database in the SQL Server to execute the below schema under that database to create respective tables. If the database already exists then we can directly execute the schema without creating the database. Here we have created two tables Order & Product to work with CRUD Operations.
USE [OrderDb]
GO
/****** Object:  Table [dbo].[Orders]    Script Date: 17-08-2021 10:54:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Orders](
    [OrderId] [int] IDENTITY(1,1) NOT NULL,
    [OrderDetails] [nvarchar](max) NULL,
    [IsActive] [bit] NOT NULL,
    [OrderedDate] [datetime2](7) NOT NULL,
 CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED
(
    [OrderId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object:  Table [dbo].[Products]    Script Date: 17-08-2021 10:54:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Products](
    [ProductId] [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](max) NULL,
    [Price] [real] NOT NULL,
    [isDisCountApplied] [bit] NOT NULL,
 CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
(
    [ProductId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Adding the Domain Models
Let's create the table models inside the project. To keep concerns separated we will follow the Onion Architecture pattern, in which all models are maintained in the Domain Layer. Create a class library named DomainLayer and add the class files below inside it.

Order.cs
using System;

namespace DomainLayer
{
   public class Order
    {
       public int OrderId { get; set;}
       public string OrderDetails { get; set; }
       public bool IsActive { get; set; }
       public DateTime OrderedDate { get; set; }
    }
}

Product.cs
namespace DomainLayer
{
    public class Product
    {
       public int ProductId { get; set; }
       public string Name { get; set; }
       public float Price { get; set; }
       public bool isDisCountApplied  { get; set; }
    }
}


Domain Layer

Commands & Queries
Commands are task-based operations that perform the write operations against the database. We need one more project (class library) to maintain all these read/write operations, so that we have a layered architecture with fewer dependencies. Create a class library named ApplicationLayer to hold the commands and queries based on the requirements.
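Conceptually, the command/query split can be sketched in a few lines, with an in-memory list standing in for the database and delegates standing in for the MediatR handlers built below (names are illustrative):

```csharp
using System;
using System.Collections.Generic;

// In-memory "table" standing in for the Orders table
var orders = new List<string>();

// Write side: a command handler mutates state and reports rows affected
Func<string, int> createOrder = details => { orders.Add(details); return 1; };

// Read side: a query handler only reads, never mutates
Func<IReadOnlyList<string>> getAllOrders = () => orders;

int affected = createOrder("2 x Coffee");
Console.WriteLine($"{affected} row(s) affected, {getAllOrders().Count} order(s) stored");
```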

Create folders named Commands and Queries and add the classes inside them based on the tables and operations. Below is the image of the Application Layer.

Adding the Required Packages in Application Layer
As mentioned before, the Application Layer will contain the CQRS commands and queries specific to this application.

First, add a reference to the Domain project.

Then install the required package through the Package Manager Console.

Install-Package MediatR.Extensions.Microsoft.DependencyInjection

Adding the Dependency Injection
This is another pattern I have seen in many large solutions. Suppose you have around 100 interfaces and 100 implementations: do you add all 100 registration lines to Startup.cs to register them in the container? That would be unmanageable from a maintainability perspective. To keep things clean, we create a static DependencyInjection class for each layer of the solution and add only the corresponding required services to the corresponding class.

This way we decentralize the registration code and keep our Startup class neat and clean. Here is an extension method over IServiceCollection.
using MediatR;
using Microsoft.Extensions.DependencyInjection;
using System.Reflection;

namespace ApplicationLayer
{
public static class DependencyInjection
{
    #region Services Injection
    public static void AddApplication(this IServiceCollection services)
    {
        services.AddMediatR(Assembly.GetExecutingAssembly());
    }
    #endregion

}
}


Basically, these classes cover our CRUD operations, implemented with raw SQL queries through Dapper.

CreateOrUpdateOrderCommand.cs
using Dapper;
using MediatR;
using Microsoft.Extensions.Configuration;
using System.ComponentModel.DataAnnotations;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

namespace ApplicationLayer.Commands.Orders
{
public class CreateOrUpdateOrderCommand : IRequest<int>
{
    public int OrderId { get; set; }
    [Required]
    public string OrderDetails { get; set; }
    public class CreateOrUpdateOrderCommandHandler : IRequestHandler<CreateOrUpdateOrderCommand, int>
    {
        private readonly IConfiguration configuration;
        public CreateOrUpdateOrderCommandHandler(IConfiguration configuration)
        {
            this.configuration = configuration;
        }
        public async Task<int> Handle(CreateOrUpdateOrderCommand command, CancellationToken cancellationToken)
        {
            if (command.OrderId > 0)
            {
                var sql = "Update Orders set OrderDetails = @OrderDetails Where OrderId = @OrderId";
                using (var connection = new SqlConnection(configuration.GetConnectionString("DefaultConnection")))
                {
                    connection.Open();
                    var result = await connection.ExecuteAsync(sql, command);
                    return result;
                }
            }
            else
            {
                var sql = "Insert into Orders (OrderDetails) VALUES (@OrderDetails)";
                using (var connection = new SqlConnection(configuration.GetConnectionString("DefaultConnection")))
                {
                    connection.Open();
                    var result = await connection.ExecuteAsync(sql, new { command.OrderDetails });
                    return result;
                }
            }
        }
    }
}
}


DeleteProductByIdCommand.cs
using Dapper;
using MediatR;
using Microsoft.Extensions.Configuration;
using System.ComponentModel.DataAnnotations;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

namespace ApplicationLayer.Commands.Products
{
public class DeleteProductByIdCommand : IRequest<int>
{
    [Required]
    public int ProductId { get; set; }
    public class DeleteProductByIdCommandHandler : IRequestHandler<DeleteProductByIdCommand, int>
    {
        private readonly IConfiguration _configuration;
        public DeleteProductByIdCommandHandler(IConfiguration configuration)
        {
            _configuration = configuration;
        }
        public async Task<int> Handle(DeleteProductByIdCommand command, CancellationToken cancellationToken)
        {
            var sql = "DELETE FROM Products WHERE ProductId = @ProductId";
            using (var connection = new SqlConnection(_configuration.GetConnectionString("DefaultConnection")))
            {
                connection.Open();
                var result = await connection.ExecuteAsync(sql, new { command.ProductId });
                return result;
            }
        }
    }
}
}


GetAllOrdersQuery.cs
using Dapper;
using DomainLayer;
using MediatR;
using Microsoft.Extensions.Configuration;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

namespace ApplicationLayer.Queries.Orders
{
public class GetAllOrdersQuery : IRequest<IList<Order>>
{
    public class GetAllOrderQueryHandler : IRequestHandler<GetAllOrdersQuery, IList<Order>>
    {
        private readonly IConfiguration _configuration;
        public GetAllOrderQueryHandler(IConfiguration configuration)
        {
            _configuration = configuration;
        }
        public async Task<IList<Order>> Handle(GetAllOrdersQuery query, CancellationToken cancellationToken)
        {
            var sql = "Select * from Orders";
            using (var connection = new SqlConnection(_configuration.GetConnectionString("DefaultConnection")))
            {
                connection.Open();
                var result = await connection.QueryAsync<Order>(sql);
                return result.ToList();
            }
        }
    }
}
}


GetAllProductsQuery.cs
using Dapper;
using DomainLayer;
using MediatR;
using Microsoft.Extensions.Configuration;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

namespace ApplicationLayer.Queries.Products
{
public class GetAllProductsQuery : IRequest<IList<Product>>
{
    public class GetAllOrderQueryHandler : IRequestHandler<GetAllProductsQuery, IList<Product>>
    {
        private readonly IConfiguration _configuration;
        public GetAllOrderQueryHandler(IConfiguration configuration)
        {
            _configuration = configuration;
        }
        public async Task<IList<Product>> Handle(GetAllProductsQuery query, CancellationToken cancellationToken)
        {
            var sql = "Select * from Products";
            using (var connection = new SqlConnection(_configuration.GetConnectionString("DefaultConnection")))
            {
                connection.Open();
                var result = await connection.QueryAsync<Product>(sql);
                return result.ToList();
            }
        }
    }
}
}


Firstly, add a connection string to the appsettings.json found in the WebApi Project.

appsettings.json
"ConnectionStrings": {
"DefaultConnection": "Server=**********;Database=OrderDb;Trusted_Connection=True;"
}

Then, in the ConfigureServices method of the WebApi's Startup class, just add the following line. You can now see the benefit of this kind of approach.
#region Dependency Injection
services.AddApplication();
#endregion

Adding the MediatR Handler and Controllers

This is the last step of setting up Onion Architecture in ASP.NET Core: wiring a controller up to the Application Layer.

Create a base API controller. This will be an empty API controller that exposes a MediatR object. What is the point of this base controller? It simply reduces the lines of code: when we add another controller, we won't have to redefine the MediatR object; we just use the base API controller as the base class. I will show this in the implementation.

Add a new empty API controller in the Controllers folder and name it BaseController.

BaseController.cs
using MediatR;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

namespace OnionArchitecture_CQRS_Dapper.Controllers
{
[Route("api/[controller]")]
[ApiController]
public abstract class BaseController : ControllerBase
{
    #region Property
    private IMediator _mediator;
    #endregion
    protected IMediator Mediator => _mediator ??= HttpContext.RequestServices.GetService<IMediator>();
}
}


OrderController.cs
using ApplicationLayer.Commands.Orders;
using ApplicationLayer.Queries.Orders;
using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

namespace OnionArchitecture_CQRS_Dapper.Controllers
{
public class OrderController : BaseController
{
    /// <summary>
    /// Save newly added order to database
    /// </summary>
    /// <param name="command"></param>
    /// <returns></returns>
    [HttpPost(nameof(SaveOrderData))]
    public async Task<IActionResult> SaveOrderData(CreateOrUpdateOrderCommand command) => Ok(await Mediator.Send(command));
    /// <summary>
    /// Fetch all data from the Orders table.
    /// </summary>
    /// <returns></returns>
    [HttpGet]
    public async Task<IActionResult> GetAllOrders() => Ok(await Mediator.Send(new GetAllOrdersQuery()));
}
}

ProductController.cs
using ApplicationLayer.Commands.Products;
using ApplicationLayer.Queries.Products;
using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

namespace OnionArchitecture_CQRS_Dapper.Controllers
{
    public class ProductController : BaseController
    {
        /// <summary>
        /// Delete a Product from the Products table
        /// </summary>
        /// <param name="command"></param>
        /// <returns></returns>
        [HttpDelete(nameof(DeleteProduct))]
        public async Task<IActionResult> DeleteProduct(DeleteProductByIdCommand command) => Ok(await Mediator.Send(command));

        /// <summary>
        /// Fetch all Product data from the database
        /// </summary>
        /// <returns></returns>
        [HttpGet]
        public async Task<IActionResult> GetAllProducts() => Ok(await Mediator.Send(new GetAllProductsQuery()));
    }
}


Testing

Run the application and open up Swagger. We will do a simple test to ensure that our solution works.


 

HostForLIFE.eu ASP.NET Core Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 Recommended Windows and ASP.NET hosting provider in Europe, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team constantly monitors the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.




European ASP.NET Core Hosting :: Data Source Controls In ASP.NET

clock August 18, 2021 09:06 by author Peter

SqlDataSource control is a data source control provided in ASP.NET to connect to database providers such as SQL, OLEDB, ODBC, and Oracle. This control only establishes a connection with the data source; it does not display data on the web page. The data is displayed by binding the SqlDataSource control to a data-bound control such as a GridView or DataList. These data-bound controls, in turn, display the data on the web page. The SqlDataSource control also supports editing, updating, deleting, and sorting of the records retrieved from the database. This support to manipulate the data in the data source can be implemented in data-bound controls without writing any code. In other words, a SqlDataSource control allows you to access databases without creating a Connection, Command, or a Data Reader object.
Adding SqlDataSource Control

A SqlDataSource control is added to the webform using markup or by dragging it from the Toolbox and dropping it on the web form. The code below shows the markup to create a SqlDataSource control.
<asp:SqlDataSource ID="sqldsSuppliers" runat="server">
</asp:SqlDataSource>


The markup in this code creates a SqlDataSource control named sqldsSuppliers. This control will be used to connect to the NWSuppliers table in the Northwind database.
Configuring SqlDataSource Control

The basic configuration of the SqlDataSource control involves setting two attributes, namely ConnectionString and SelectCommand. The ConnectionString attribute specifies the connection string to connect to a particular database. The SelectCommand attribute specifies the SELECT statement to retrieve the records from a database table. The code below uses these attributes to connect to the Northwind database on the SQL Server instance, SQLEXPRESS, installed on the server 10.2.1.51.

<asp:SqlDataSource ID="sqldsSuppliers" runat="server" ConnectionString="Data Source=10.2.1.51\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=True" SelectCommand="select * from NWSuppliers;">
</asp:SqlDataSource>


The SqlDataSource control, sqldsSuppliers, will retrieve all the records from the NWSuppliers table.

The ConnectionString and SelectCommand attributes are available only in the Source view of the web form. In Design view, the SelectCommand attribute is available as the SelectQuery property.

Binding to a Data Bound Control
After the SqlDataSource control is configured to retrieve data from the database, it is bound to a data-bound control. The code below binds a SqlDataSource control named sqldsSuppliers to a DataList control named dlstSuppliers using the DataSourceID property.

<asp:DataList ID="dlstSuppliers" runat="server" BackColor="LightGoldenrodYellow" BorderColor="Tan" BorderWidth="1px" CellPadding="2" DataSourceID="sqldsSuppliers" Font-Names="Verdana" Font-Size="Smaller" ForeColor="Black">
</asp:DataList>

The same SqlDataSource control can be bound to another data-bound control such as the GridView control by setting the DataSourceID property to sqldsSuppliers. Code shows the ItemTemplate code to display the data in a DataList control named dlstSuppliers.
<ItemTemplate>
  <strong>Supplier ID:</strong>
  <asp:Label ID="lblSupplierID" runat="server" Text='<%#Eval("SupplierID")%>' Width="45px" Font-Size="X-Small">
  </asp:Label> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <strong>Company Name:</strong>
  <asp:Label ID="lblCompanyName" runat="server" Text='<%#Eval("CompanyName")%>' Width="197px" Font-Size="X-Small">
  </asp:Label>
  <br />
  <br />
  <strong>Contact Name:</strong>
  <asp:Label ID="lblContactName" runat="server" Text='<%#Eval("ContactName")%>' Width="127px" Font-Size="X-Small" Font-Bold="False">
  </asp:Label>
  <br />
  <br />
  <strong>Phone:</strong>
  <asp:Label ID="lblPhone" runat="server" Text='<%#Eval("Phone")%>' Width="111px" Font-Size="XX-Small" Font-Bold="False">
  </asp:Label>
  <br />
  <br />
</ItemTemplate>
<AlternatingItemStyle BackColor="PaleGoldenrod" />

The ItemTemplate in the source code comprises four items. It displays four textual labels, namely Supplier ID, Company Name, Contact Name, and Phone. The ItemTemplate also has four corresponding Label controls, namely lblSupplierID, lblCompanyName, lblContactName, and lblPhone. These Label controls are bound to the SupplierID, CompanyName, ContactName, and Phone columns, respectively, of the NWSuppliers table.
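As noted above, the same SqlDataSource control can also be bound to a GridView control by setting its DataSourceID property. A minimal markup sketch is shown below; the GridView ID is an assumed name, not from this article:

```
<asp:GridView ID="gvSuppliers" runat="server" DataSourceID="sqldsSuppliers" AutoGenerateColumns="true">
</asp:GridView>
```

With AutoGenerateColumns enabled, the GridView renders a column for every field returned by the SelectCommand, so no template markup is required.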
Commands in SqlDataSource Control

The SqlDataSource control uses the SelectCommand property to retrieve the records from a database table. Similarly, the SqlDataSource control provides DeleteCommand, UpdateCommand, and InsertCommand properties to delete, update, and insert records in a database table.
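For example, an UpdateCommand can be configured in the same declarative style, with its parameters described by an <UpdateParameters> element. The sketch below is an illustration based on the NWSuppliers columns used in this article, not markup from the original source:

```
<asp:SqlDataSource ID="sqldsSuppliers" runat="server"
    ConnectionString="Data Source=10.2.1.51\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=True"
    SelectCommand="select * from NWSuppliers;"
    UpdateCommand="update NWSuppliers set CompanyName=@CompanyName where SupplierID=@SupplierID">
    <UpdateParameters>
        <asp:Parameter Type="String" Name="CompanyName" />
        <asp:Parameter Type="Int32" Name="SupplierID" />
    </UpdateParameters>
</asp:SqlDataSource>
```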

DeleteCommand Property
The DeleteCommand property is used to specify the DELETE statement to delete a record. DeleteCommand is available as an attribute of the SqlDataSource tag in Source view. In Design view, the DELETE statement is specified as the value of the DeleteQuery property. The code below shows how to set the DeleteCommand attribute of the SqlDataSource tag.
<asp:SqlDataSource ID="sqldsSuppliers" runat="server" ConnectionString="Data Source=10.2.1.51\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=True" SelectCommand="select * from NWSuppliers;" DeleteCommand="delete from NWSuppliers where SupplierID=@SupplierID">
    <DeleteParameters>
        <asp:Parameter Type="Int32" Name="SupplierID" />
    </DeleteParameters>
</asp:SqlDataSource>


The DeleteCommand in the code contains a placeholder, @SupplierID, for the SupplierID value. The details of this placeholder are specified by the <DeleteParameters> element. This data source control can now be used with a data-bound control such as a DataList control.

If the SqlDataSource control is used with a GridView control, you can provide delete functionality by setting two properties: the DataSourceID property to sqldsSuppliers and the DataKeyNames property to SupplierID.

The steps to implement delete functionality in the DataList control, dlstSuppliers, are listed below:

In the Properties window of the dlstSuppliers control, set the DataKeyField property to the primary key column of the database table, NWSuppliers. In this case, you set it to SupplierID. The DataKeyField attribute specifies the name of the primary key column, which is used to delete or update a record in the database table.

In the ItemTemplate of the dlstSuppliers control, add a LinkButton Web Server control with the CommandName property set to delete and the Text property set to Delete. This will render the DataList control with a Delete link for each item.

Add a DeleteCommand event handler to the dlstSuppliers control, as shown in the code below:

protected void dlstSuppliers_DeleteCommand(object source, DataListCommandEventArgs e)
{
    // Retrieve the SupplierID of the item whose Delete link was clicked
    int id = (int)dlstSuppliers.DataKeys[e.Item.ItemIndex];
    sqldsSuppliers.DeleteParameters["SupplierID"].DefaultValue = id.ToString();
    sqldsSuppliers.Delete();
}
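Putting the steps above together, the DataList markup might look like the sketch below. Only the delete-related attributes and the LinkButton are shown; the styling attributes and labels from the earlier markup are omitted, and the handler name is an assumed wiring of the event handler above:

```
<asp:DataList ID="dlstSuppliers" runat="server" DataSourceID="sqldsSuppliers"
    DataKeyField="SupplierID" OnDeleteCommand="dlstSuppliers_DeleteCommand">
  <ItemTemplate>
    <!-- supplier labels go here, as shown earlier -->
    <asp:LinkButton ID="lbtnDelete" runat="server" CommandName="delete" Text="Delete" />
  </ItemTemplate>
</asp:DataList>
```

Because the LinkButton's CommandName is delete, clicking it raises the DataList's DeleteCommand event, which invokes the handler shown above.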


The DataKeys collection of dlstSuppliers contains the key values of all the items listed in the control. Therefore, the SupplierID value of the clicked item is retrieved by indexing the DataKeys collection with the index of that item; the ItemIndex property returns this index. The retrieved SupplierID value is then assigned to the delete parameter of the SqlDataSource control, sqldsSuppliers. Finally, the Delete() method is invoked.

The SqlDataSource control is used to connect to databases such as Access, SQL Server, and Oracle. The SqlDataSource and XmlDataSource controls are associated with a data-bound control using the DataSourceID attribute. The SelectCommand attribute of the SqlDataSource tag specifies a SELECT statement, while the DeleteCommand attribute specifies a DELETE statement. Parameters to the DELETE statement in a DeleteCommand are described using the <DeleteParameters> element.


 




