European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

ASP.NET Core 3.1.5 Hosting - HostForLIFE.eu :: Dynamic Service Registration In ASP.NET Core Dependency Injection Container

clock October 16, 2020 09:45 by author Peter

In ASP.NET Core, whenever we inject a service as a dependency, we must register this service with the ASP.NET Core Dependency Injection container. However, registering services one by one is not only tedious and time-consuming, but it is also error-prone. So here, we will discuss how we can register all the services at once dynamically. To register all of the services dynamically, we will use the TanvirArjel.Extensions.Microsoft.DependencyInjection library. This is a small but extremely useful library that enables you to register all your services into the ASP.NET Core Dependency Injection container at once without exposing the service implementation.

First, install the latest version of TanvirArjel.Extensions.Microsoft.DependencyInjection NuGet package into your project as follows,
    Install-Package TanvirArjel.Extensions.Microsoft.DependencyInjection  

Using Marker Interface
Now let your services inherit any of the ITransientService, IScopedService, and ISingletonService marker interfaces as follows,
    using TanvirArjel.Extensions.Microsoft.DependencyInjection;

    // Inherit the `IScopedService` interface if you want to register `IEmployeeService` as a scoped service.    
    public interface IEmployeeService : IScopedService     
    {    
        Task CreateEmployeeAsync(Employee employee);    
    }    
        
    internal class EmployeeService : IEmployeeService    
    {    
       public async Task CreateEmployeeAsync(Employee employee)    
       {    
           // Implementation here    
       }    
    }    

ITransientService, IScopedService, and ISingletonService are available in the TanvirArjel.Extensions.Microsoft.DependencyInjection namespace.
 
Using Attribute
Now mark your services with any of the ScopedServiceAttribute, TransientServiceAttribute, and SingletonServiceAttribute attributes as follows,
    using TanvirArjel.Extensions.Microsoft.DependencyInjection;

    // Mark with `ScopedServiceAttribute` if you want to register `IEmployeeService` as a scoped service.  
    [ScopedService]  
    public interface IEmployeeService  
    {  
        Task CreateEmployeeAsync(Employee employee);  
    }  
          
    internal class EmployeeService : IEmployeeService   
    {  
        public async Task CreateEmployeeAsync(Employee employee)  
        {  
           // Implementation here  
        }  
    }  


ScopedServiceAttribute, TransientServiceAttribute, and SingletonServiceAttribute are available in TanvirArjel.Extensions.Microsoft.DependencyInjection namespace.
 
Now in your ConfigureServices method of the Startup class,
    public void ConfigureServices(IServiceCollection services)    
    {    
       services.AddServicesOfType<IScopedService>();   
       services.AddServicesWithAttributeOfType<ScopedServiceAttribute>();    
    }    


AddServicesOfType<T> is available in TanvirArjel.Extensions.Microsoft.DependencyInjection namespace.
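
Once registered this way, the service is resolved like any other dependency. Below is a minimal consumption sketch; the controller name and route are hypothetical and not part of the library:
    public class EmployeesController : ControllerBase  
    {  
        private readonly IEmployeeService _employeeService;  

        // IEmployeeService is resolved from the container even though it was never registered manually.  
        public EmployeesController(IEmployeeService employeeService)  
        {  
            _employeeService = employeeService;  
        }  

        [HttpPost]  
        public async Task<IActionResult> Post(Employee employee)  
        {  
            await _employeeService.CreateEmployeeAsync(employee);  
            return Ok();  
        }  
    }  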
 
Moreover, if you want only specific assemblies to be scanned during type scanning,
    public static void ConfigureServices(IServiceCollection services)  
    {  
        // Only assemblies whose names start with "TanvirArjel.Web" or "TanvirArjel.Application" will be scanned.  
        string[] assembliesToBeScanned = new string[] { "TanvirArjel.Web", "TanvirArjel.Application" };  
        services.AddServicesOfType<IScopedService>(assembliesToBeScanned);  
        services.AddServicesWithAttributeOfType<ScopedServiceAttribute>(assembliesToBeScanned);  
    }  


That's it! The job is done! It is as simple as above to dynamically register all your services into the ASP.NET Core Dependency Injection container at once. If you have any issues, you can submit them to the GitHub repository of this library. You will be helped as soon as possible.



ASP.NET Core 3.1.5 Hosting - HostForLIFE.eu :: Validating Data Model Using Fluent Validation in ASP.NET Core WebApi

clock October 8, 2020 08:45 by author Peter

Validating user input is a basic function in a web application. For production systems, developers usually spend a lot of time writing code for this task. If we use FluentValidation when building an ASP.NET Core Web API, input validation becomes much easier. FluentValidation is a very popular .NET library for building strongly-typed validation rules.

Configuring the project
Step 1: Download FluentValidation

We can use NuGet to download the latest FluentValidation library:
PM> Install-Package FluentValidation.AspNetCore

Step 2: Add the Fluent Validation service
We need to add the FluentValidation service in the Startup.cs file:
public void ConfigureServices(IServiceCollection services)
{
  // mvc + validating
  services.AddMvc()
  .SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
  .AddFluentValidation();
}


Adding a Validator
FluentValidation provides a variety of built-in validators. In the following examples, we use two of them:
    The NotNull validator
    The NotEmpty validator

Step 1: Add a data model that needs to be validated

Now let's add a User class.
public class User
{
  public string Gender { get; set; }
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public string SIN { get; set; }
}


Step 2: Add a validator class
To create a validator class with FluentValidation, the validator class needs to inherit from the abstract class AbstractValidator<T>:
public class UserValidator : AbstractValidator<User>
{
  public UserValidator()
  {
   //Add rules here
  }
}


Step 3: Add validation rules
In this example, we need to verify that FirstName, LastName, and SIN are neither null nor empty. We also need to verify that the SIN (Social Insurance Number) is valid.
public static class Utilities
{
  public static bool IsValidSIN(int sin)
  {
   if (sin < 0 || sin > 999999998) return false;

   int checksum = 0;
   for (int i = 4; i != 0; i--)
   {
     checksum += sin % 10;
     sin /= 10;

     int addend = 2 * (sin % 10);
     
     if (addend >= 10) addend -= 9;
     
     checksum += addend;
     sin /= 10;
   }
     
   return (checksum + sin) % 10 == 0;
  }
}


Now, in the UserValidator class, add the validation rules:
public class UserValidator : AbstractValidator<User>
{
  public UserValidator()
  {
   RuleFor(x => x.FirstName)
   .NotEmpty()
   .WithMessage("FirstName is mandatory.");

   RuleFor(x => x.LastName)
   .NotEmpty()
   .WithMessage("LastName is mandatory.");

   RuleFor(x => x.SIN)
   .NotEmpty()
   .WithMessage("SIN is mandatory.")
   .Must((o, list, context) =>
   {
     if (null != o.SIN)
     {
      context.MessageFormatter.AppendArgument("SIN", o.SIN);
      return Utilities.IsValidSIN(int.Parse(o.SIN));
     }
     return true;
   })
   .WithMessage("SIN ({SIN}) is not valid.");
  }
}


Step 4: Inject the validation services
public void ConfigureServices(IServiceCollection services)
{
  // Add validator
  services.AddSingleton<IValidator<User>, UserValidator>();
  // mvc + validating
  services
    .AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
    .AddFluentValidation();
}


Step 5: Manage your validation errors in Startup.cs
In ASP.NET Core 2.1 and above, you can override the default ModelState handling behavior (ApiBehaviorOptions).
public void ConfigureServices(IServiceCollection services)
{
  // Validators
  services.AddSingleton<IValidator<User>, UserValidator>();
  // mvc + validating
  services
    .AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
    .AddFluentValidation();

  // override modelstate
  services.Configure<ApiBehaviorOptions>(options =>
  {
    options.InvalidModelStateResponseFactory = (context) =>
    {
     var errors = context.ModelState
       .Values
       .SelectMany(x => x.Errors
             .Select(p => p.ErrorMessage))
       .ToList();
      
     var result = new
     {
       Code = "00009",
       Message = "Validation errors",
       Errors = errors
     };
      
     return new BadRequestObjectResult(result);
    };
  });
}

When data model validation fails, the program executes this code.

In this example, I set up how errors are displayed to the client. The returned result simply includes an error code, an error message, and a list of errors.

Let’s take a look at the final results.
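
For example, posting a User with an empty FirstName might produce a response along these lines (an illustrative sketch; the exact error text depends on the rules defined above):
{
  "Code": "00009",
  "Message": "Validation errors",
  "Errors": [
    "FirstName is mandatory."
  ]
}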

Using the Validator
The validator is very easy to use here.

You just need to create an action and put the data model that needs to be validated into the action parameters.

Since the validation service has been added to the configuration, FluentValidation will validate your data model automatically when this action is requested!

Step 1: Create an action using the data model to be validated
[Route("api/[controller]")]
[ApiController]
public class DemoValidationController : ControllerBase
{
  [HttpPost]
  public IActionResult Post(User user)
  {
   return NoContent();
  }
}



ASP.NET Core 3.1.5 Hosting - HostForLIFE.eu :: Web Protection Library (WPL)

clock September 15, 2020 09:15 by author Peter

Web applications have always been threatened by a series of attacks. Thankfully, IT security organizations have worked tirelessly to secure web application development by coming up with ways to mitigate malicious attacks. One of these developments is the Microsoft Web Protection Library, a tool that can be used to protect ASP.NET web applications and Windows applications from malicious attacks.

In this article, we are going to learn about Microsoft Web Protection Library. We will first look at threats surrounding web applications and then delve into the protection measures that WPL introduces.
 
What is the Microsoft Web Protection Library (WPL)?
The WPL is a set of .NET assemblies put together for protection against the most common attack vectors. WPL comprises Anti-XSS, a set of encoding functions for user input covering JavaScript, XML, CSS, HTML, and HTML attributes. WPL also has a Security Runtime Engine, which works as a shield protecting web applications from the common attack vectors.
 
The Anti-XSS Library
A cross-site scripting (XSS) attack is a very common attack that involves malicious user input (e.g. in the form of scripts) submitted by attackers through poorly validated form fields on web applications. Anti-XSS provides a class that can be used to encode all user input on forms in MVC, web pages, and web forms applications. It uses a white-list approach: it checks the expected input from users and, if the input is not recognized, it classifies that input as potentially harmful. It comprises encoders for:
    HTML
    HTML Attributes
    CSS
    XML
    JavaScript

Anti-XSS Examples
ASPX
<td><asp:Label id='lblIDNO' runat='server'></asp:Label></td>
 
ASPX.CS
lblIDNO.Text = Request["IDNO"];
 
The snippet above shows the usual, unsafe way of rendering, whereas Anti-XSS provides a safe alternative using HTML encoding.
 
ASPX
<td><asp:Label id='lblIDNO' runat='server'></asp:Label></td>
 
ASPX.CS
lblIDNO.Text = Microsoft.Security.Application.Encoder.HtmlEncode(Request["IDNO"]);
 
In the above code, the dynamic IDNO value is encoded using the Anti-XSS HTML encoder before it is put into the HTML context. The same could be done using a shortcut ()
 
The code below shows an example of JavaScript encoding:
<a onclick='<%# string.Format("isDelete({0})", Microsoft.Security.Application.Encoder.JavaScriptEncode(Item.Address)) %>'>Delete</a>
 
Scripts should also be encoded just in case an attacker uses a malicious script that might end up executing unwanted commands at the server-side.
 
Dynamic data including URLs should be encoded before they are written in href because they may contain malicious input or untrusted URL and end up exfiltrating data to attacker sites.
 
The following code shows an example of URL encoding using WPL:
<a href=<%# Microsoft.Security.Application.Encoder.UrlEncode(Item.Url) %>>Customer Details</a>
 
It is very important that developers understand the various attack vectors used by attackers, which can be identified through threat modeling at design time. Protection can be applied during development or retrofitted to existing applications: developers need to review code that produces output for users, determine whether that output depends on any untrusted input parameters, understand the context in which the untrusted input is used to produce output, and lastly encode the output properly. WPL uses the whitelist approach, and when it cannot determine whether input is trusted, it assumes that it is not and rejects the input as untrusted. Most potential dangers are found in form fields, query strings, and cookie contents.
 
In order to use Anti-XSS encoders after installation of WPL, you need to make use of the following directive:
using Microsoft.Security.Application;
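
With that directive in place, the encoder calls from the earlier snippets can drop the full namespace prefix. A minimal sketch reusing the field names above (the ReturnUrl parameter is a hypothetical example):
using Microsoft.Security.Application;

// Encode untrusted values before writing them into HTML, JavaScript or URL contexts.
lblIDNO.Text = Encoder.HtmlEncode(Request["IDNO"]);
string safeJsArgument = Encoder.JavaScriptEncode(Request["IDNO"]);
string safeUrl = Encoder.UrlEncode(Request["ReturnUrl"]);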
 
WPL Architecture
The following is a diagram that shows the architectural pattern of the WPL.

The impact that can be caused by malicious attacks on businesses and individuals is so great that it is very important that developers and analysts try to find all possible vulnerabilities and not overlook certain aspects of the application. WPL is an effective tool for protecting individuals as well as organizations from such devastating web attacks.

 



ASP.NET Core 3.1.5 Hosting - HostForLIFE.eu :: Merge Different File Formatted Documents Into A Single PDF

clock September 7, 2020 08:55 by author Peter

Why merge different documents?
There are a lot of common yet crucial reasons to merge documents. Let's understand the need with some use-cases.
 
Real estate
When you buy or lease a property, you have to go through a lot of documentation (e.g. mortgage, loan application, agreements, various expense recordings). Such documentation is mostly recorded in multiple file formats (e.g. PDF, Word, Excel, Presentation). Wouldn't it be super if you could compile all the documents into a single understandable format such as PDF?
 
Archived documents
Most of the time we have a lot of electronic documents saved in various formats. They all have similar content and need to be combined. For example, an Excel file with charts, or a Word file with some formatted text. These could be combined into a single PDF. Eventually, you can share this resultant PDF with colleagues or print it without any issue.
 
Merge documents to PDF
 
Let's see how we merge DOC, PPT, XLS and PDF files into a single PDF.
    using (Merger merger = new Merger(@"c:\document1.pdf"))  
    {  
        merger.Join(@"c:\document2.doc");  
        merger.Join(@"c:\document3.ppt");  
        merger.Join(@"c:\document4.xls");  
        merger.Save(@"c:\merged.pdf");  
    }  


Download the DLL and add it as a reference in your .NET project (existing or new).

 



ASP.NET Core 3.1.5 Hosting - HostForLIFE.eu :: How To Call Web API In Another Project From C#?

clock August 26, 2020 09:28 by author Peter

This article explains how to call a web API from another project using C# instead of making an Ajax call. I'm creating a web API in MVC in Project 1 and want to call this API from another project (MVC, ASP.NET, .NET Core, etc.) without making any Ajax requests.
So let's see how to make an API call from C#.

Here I am creating an API in MVC in Project 1 for getting a state list.
public class StateController : ApiController 
{ 
    [HttpGet] 
    [Route("api/State/StateList")] 
    public List<StateDto> StateList() 
    { 
        List<StateDto> StateList = new List<StateDto>(); 
        SqlConnection sqlConnection = new SqlConnection(); 

        string connectionString = ConfigurationManager.ConnectionStrings["Connection"].ConnectionString; 
        SqlCommand sqlCommand = new SqlCommand(); 
        sqlConnection.ConnectionString = connectionString; 
        sqlCommand.CommandType = CommandType.Text; 
        sqlCommand.CommandText = "Select * From lststate where deletedbyid is null"; 
        sqlCommand.Connection = sqlConnection; 
        sqlConnection.Open(); 
        DataTable dataTable = new DataTable(); 
        dataTable.Load(sqlCommand.ExecuteReader()); 
        sqlConnection.Close(); 

        if (dataTable != null) 
        { 
            foreach (DataRow row in dataTable.Rows) 
            { 
                StateList.Add(new StateDto 
                { 
                    Id = (int)row["id"], 
                    StateCode = row["Statecode"].ToString(), 
                    StateName = row["StateName"].ToString(), 
                    CompanyId = (int)row["Companyid"], 
                    CreatedDate = (DateTime)row["CreatedDate"] 
                }); 
            } 
        } 

        // Return the (possibly empty) list so that every code path returns a value. 
        return StateList; 
    } 
} 


Project 2 where we want to call this API.
public List<StateDto> StateIndex() 
  { 
      var responseString = ApiCall.GetApi("http://localhost:58087/api/State/StateList"); 
      var rootobject = new JavaScriptSerializer().Deserialize<List<StateDto>>(responseString); 
      return rootobject; 
  } 


ApiCall.cs class 
using System; 
using System.Collections.Generic; 
using System.IO; 
using System.Linq; 
using System.Net; 
using System.Text; 
using System.Threading.Tasks; 
 
namespace MaheApi.Dto 
{ 
    public static class ApiCall 
    { 
        public static string GetApi(string ApiUrl) 
        { 
 
            var responseString = ""; 
            var request = (HttpWebRequest)WebRequest.Create(ApiUrl); 
            request.Method = "GET"; 
            request.ContentType = "application/json"; 
 
            using (var response1 = request.GetResponse()) 
            { 
                using (var reader = new StreamReader(response1.GetResponseStream())) 
                { 
                    responseString = reader.ReadToEnd(); 
                } 
            } 
            return responseString; 
 
        } 
 
        public static string PostApi(string ApiUrl, string postData = "") 
        { 
 
            var request = (HttpWebRequest)WebRequest.Create(ApiUrl); 
            var data = Encoding.ASCII.GetBytes(postData); 
            request.Method = "POST"; 
            request.ContentType = "application/x-www-form-urlencoded"; 
            request.ContentLength = data.Length; 
            using (var stream = request.GetRequestStream()) 
            { 
                stream.Write(data, 0, data.Length); 
            } 
            var response = (HttpWebResponse)request.GetResponse(); 
            var responseString = new StreamReader(response.GetResponseStream()).ReadToEnd(); 
            return responseString; 
        } 
  } 
}
 

Here we get the data in the StateIndex method.
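
The PostApi helper can be used in the same way. Below is a minimal sketch with a hypothetical endpoint; note that PostApi sends the body as application/x-www-form-urlencoded, so the payload is form-encoded (the URL and field names are illustrative only):
public string CreateState() 
{ 
    // Form-encoded payload matching the content type PostApi sets. 
    string postData = "StateCode=NY&StateName=New%20York"; 
    return ApiCall.PostApi("http://localhost:58087/api/State/CreateState", postData); 
} 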



ASP.NET Core 3.1.5 Hosting - HostForLIFE.eu :: How To Write In Hindi (Or Another Font) In ASP.NET Core?

clock August 18, 2020 13:27 by author Peter

In this blog I am explaining how to read and write in the Hindi Language (or we can use any language as per our requirement).
 
Here I will explain how to write Hindi in an ASP.NET Core text box using the Devlys_010 font. So please follow the steps below.
 
Step 1
Download the Devlys_010 font in any format like .ttf, .woff, etc. You can download it from here
It comes as a zip file, so extract it into a folder.
 
Step 2
Open your ASP.NET Core project and create a new folder under css. Give the folder a name like Fonts.
 
Under the Fonts folder, paste the downloaded font which we have already extracted. Now create one CSS file and name it font.css.

Here you can see I have added a screenshot. I have pasted a ttf format font and created a font.css under a newly-created folder, Fonts.
 
Step 3
Now open font.css in your editor. We add the font to our project using @font-face.
 
So write the css code in your font.css like below:
    @font-face { 
        font-family: 'Devlys_010'; 
        src: local('Devlys_010'),url('./Devlys_010.ttf') format('truetype'); 
    } 

Step 4

Now create a new CSS class in font.css below the @font-face rule, and set its font-family to the family we declared with @font-face.
    .hFont { 
        font-family: 'Devlys_010' !important; 
    } 

Now you can see all the CSS code in font.css:
    @font-face { 
        font-family: 'Devlys_010'; 
        src: local('Devlys_010'),url('./Devlys_010.ttf') format('truetype'); 
    } 
     
    .hFont { 
        font-family: 'Devlys_010' !important; 
    }
 

Here I have added .hFont class -- you can change this name.
 
Step 5
Now go to your .cshtml page where you want to use the Hindi font. If you have used an input of type text, just add the hFont class as shown below.
 
And add the CSS reference in the header so our styles are loaded:
    <link href="~/css/fonts/font.css" rel="stylesheet" /> 

Now add the CSS class for writing in the Hindi font. In the code below I have added the hFont class to the input:
    <input asp-for="@Model.AdminMaster.AdminName" class="form-control hFont" id="txtAdminName" /> 

    OR
    <input type = "text" class="form-control hFont" id="txtAdminName" /> 

Note

You can use any other font as well; just add the font in CSS and use it. You can also use a font for any other language, such as Gujarati, Marathi, or Urdu.

 



ASP.NET Core 3.1.5 Hosting - HostForLIFE.eu :: HTTP Requests Using IHttpClientFactory

clock August 11, 2020 13:14 by author Peter

The very first time Microsoft launched HttpClient in .NET Framework 4.5, it became the most popular way to make Web API HTTP requests, such as Get, Put, Post, and Delete, from your .NET server-side code. However, it has some serious issues: disposing of an HttpClient object doesn't release the underlying socket as soon as it is closed, too many open instances hurt performance, and a private or shared HttpClient instance doesn't respect DNS changes.

When Microsoft released .NET Core 2.1, it introduced HttpClientFactory, which solves all these problems.

Basically, it provides a single, central place for configuring and consuming HTTP clients in your application. IHttpClientFactory offers the following benefits:

  Names and configures HttpClient instances.
  Builds an outgoing request middleware pipeline to manage cross-cutting concerns around HTTP requests.
  Integrates with Polly for transient fault handling (a sketch follows after the named-client example below).
  Avoids common DNS problems by managing HttpClient lifetimes.

There are the following ways to use IHttpClientFactory.
  Direct HttpClientFactory
  Named Clients
  Typed Clients

We will see an example one by one for all 3 types...

Direct HttpClientFactory

In .NET Core, we have the Startup.cs class, and inside this class we have the ConfigureServices method. This is the method where we register built-in and custom services into the dependency injection pipeline.

So for HttpClientFactory we need to register HttpClient like below:
services.AddHttpClient(); 

Now the question is how to use this in our API controller.

So here is the example of Direct HttpClientFactory use in controller:
  public class HttpClientFactoryController: Controller { 
      private readonly IHttpClientFactory _httpClientFactory; 
      public HttpClientFactoryController(IHttpClientFactory httpClientFactory) { 
              _httpClientFactory = httpClientFactory; 
          } 
          [HttpGet] 
      public async Task < ActionResult > Get() { 
          var client = _httpClientFactory.CreateClient(); 
          client.BaseAddress = new Uri("http://api.google.com"); 
          string result = await client.GetStringAsync("/"); 
          return Ok(result); 
      } 
  }


Here, IHttpClientFactory is injected as a dependency, and we directly call _httpClientFactory.CreateClient().

This approach works well when we need to make a quick request from a single place in the code.

Named Clients
Above, I explained how to register HttpClient in the ConfigureServices method of the Startup.cs class; the same registration applies to named clients as well. Named clients are useful when we need to make multiple requests from multiple locations.

We can also do some more configuration while registering, like this:
  services.AddHttpClient("g", c => 
  { 
     c.BaseAddress = new Uri("https://api.google.com/"); 
     c.DefaultRequestHeaders.Add("Accept", "application/json"); 
  }); 


In this configuration, we pass two arguments: a client name and an Action<HttpClient> delegate that configures the client.
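
This registration point is also where the Polly integration mentioned in the benefits list can be attached. The following is a minimal sketch, assuming the Microsoft.Extensions.Http.Polly package is installed; the retry count and back-off values are illustrative:
  services.AddHttpClient("g", c => 
  { 
     c.BaseAddress = new Uri("https://api.google.com/"); 
     c.DefaultRequestHeaders.Add("Accept", "application/json"); 
  }) 
  // Retry transient HTTP failures (5xx, 408 and network errors) up to 3 times with exponential back-off. 
  .AddTransientHttpErrorPolicy(policy => 
      policy.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)))); 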

We can use the named client in the API controller in this way:
  public class NamedClientsController: Controller { 
      private readonly IHttpClientFactory _httpClientFactory; 
      public NamedClientsController(IHttpClientFactory httpClientFactory) { 
              _httpClientFactory = httpClientFactory; 
          } 
          [HttpGet] 
      public async Task < ActionResult > Get() { 
          var client = _httpClientFactory.CreateClient("g"); 
          string result = await client.GetStringAsync("/"); 
          return Ok(result); 
      } 
  }


Note
"g" indicates names client that I use in during registration and also call from the API action method

Typed Clients

A typed client provides the same capabilities as named clients but without the need to use strings as keys in configuration. Because of this, it also provides IntelliSense and compiler help when consuming clients. It provides a single location to configure and interact with a particular HttpClient.

It works with dependency injection and can be injected where required in the application.

A typed client accepts an HttpClient parameter in its constructor,

Here is an example where I have defined a custom class for HttpClient:
  public class TypedCustomClient { 
      public HttpClient Client { 
          get; 
          set; 
      } 
      public TypedCustomClient(HttpClient httpClient) { 
          httpClient.BaseAddress = new Uri("https://api.google.com/"); 
          httpClient.DefaultRequestHeaders.Add("Accept", "application/json"); 
          httpClient.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample"); 
          Client = httpClient; 
      } 
  }

Now we can register this as a typed client in the ConfigureServices method of the Startup.cs class.

  services.AddHttpClient<TypedCustomClient>(); 

Now let's see how we can use it in an API controller:
  public class TypedClientController: Controller { 
      private readonly TypedCustomClient _typedCustomClient; 
      public TypedClientController(TypedCustomClient typedCustomClient) { 
              _typedCustomClient = typedCustomClient; 
          } 
          [HttpGet] 
      public async Task < ActionResult > Get() { 
          string result = await _typedCustomClient.Client.GetStringAsync("/"); 
          return Ok(result); 
      } 
  }


Now we have seen all three ways to use it.

But here is an even better way to use a typed client: through an interface.

We will create an interface and encapsulate all the logic behind it, which also helps when writing unit test cases.

Here is an example:
  public interface ICustomClient 
  { 
    Task<string> GetData(); 
  } 


Now implement this interface in the custom class:
  public class TypedCustomClient: ICustomClient { 
      public HttpClient Client { 
          get; 
          set; 
      } 
      public TypedCustomClient(HttpClient httpClient) { 
          httpClient.BaseAddress = new Uri("https://api.google.com/"); 
          httpClient.DefaultRequestHeaders.Add("Accept", "application/json"); 
          httpClient.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample"); 
          Client = httpClient; 
      } 

      // Implement the interface member so the class satisfies ICustomClient. 
      public async Task<string> GetData() { 
          return await Client.GetStringAsync("/"); 
      } 
  }


Register this in startup class:
  services.AddHttpClient<ICustomClient, TypedCustomClient>(); 

  public class CustomController: Controller { 
      private readonly ICustomClient _iCustomClient; 
      public CustomController(ICustomClient iCustomClient) { 
              _iCustomClient = iCustomClient; 
          } 
          [HttpGet] 
      public async Task < ActionResult > Get() { 
          string result = await _iCustomClient.GetData(); 
          return Ok(result); 
      } 
  }


Here we are using the interface, so it is straightforward to mock it in unit tests.
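
For example, with a mocking library such as Moq and a test framework such as xUnit (both are assumptions, any equivalents work), the controller can be unit tested without making real HTTP calls:
  // A minimal unit-test sketch; package names and test data are illustrative. 
  [Fact] 
  public async Task Get_ReturnsMockedData() 
  { 
      var mockClient = new Mock<ICustomClient>(); 
      mockClient.Setup(m => m.GetData()).ReturnsAsync("{ \"status\": \"ok\" }"); 

      var controller = new CustomController(mockClient.Object); 

      var result = await controller.Get() as OkObjectResult; 
      Assert.Equal("{ \"status\": \"ok\" }", result.Value); 
  }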

HostForLIFE.eu ASP.NET Core 3.1.5 Hosting
European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is #1 Recommended Windows and ASP.NET hosting in European Continent. With 99.99% Uptime Guaranteed of Relibility, Stability and Performace. HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solution including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as Service for companies of all size.



 



ASP.NET Core 3.1.5 Hosting - HostForLIFE.eu :: Consume OData Feed With C# Client Application

clock August 4, 2020 13:04 by author Peter

OData stands for Open Data Protocol, an OASIS standard initiated by Microsoft in 2007. It defines best practices for building and consuming queryable REST APIs.

The difference between OData and a REST API is that OData is a specific protocol, whereas REST is an architectural style and design pattern. Using REST we can request and get data using HTTP calls; OData is a technology that uses REST to build and consume data.

I expect readers of this article to have some knowledge about OData queries. But, to keep it simple, this protocol gives the client the power to query the data in the database using the query string of a REST API request. It also helps keep the data more secure by not exposing any database-related information and by limiting what data is exposed to the outside world.

In general, we build the OData-enabled web service in any popular programming language, and the service takes care of defining the URLs, verbs, requests, and expected responses. At the client end, to consume these OData REST APIs, we either need metadata that describes the request and response types so we can build concrete classes, or we need to create a service proxy class.

This article is about how clients can consume an existing OData REST API using C#. So, let's start.

Simple.OData.Client is a library that supports all OData protocol versions, can be installed from NuGet, and supports the latest versions of both .NET Framework and .NET Core.

Initializing Client Instance
To communicate with an OData REST API, the Simple.OData.Client library has a predefined class called ODataClient. This class accepts the service URL along with some optional settings for seamless communication with the OData service.
var client = new ODataClient("https://services.odata.org/sferp/"); 

If you want to log all the requests and responses from this client object to the Console, we can have additional optional settings as below.
var client = new ODataClient(new ODataClientSettings("https://services.odata.org/sferp/") 
{ 
    OnTrace = (x, y) => Console.WriteLine(string.Format(x, y)) 
}); 


Building the Typed Classes
This library doesn't generate typed classes for the responses (tables/views) from the given service the way Entity Framework does. To build the typed DTO classes, we need to fetch the metadata from the configured OData web service by appending $metadata to the end of the base URL as follows.
https://services.odata.org/sferp/$metadata

The metadata is displayed in XML format; we need to identify the elements for each table and their columns with data types, and then create classes accordingly for each table.
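
For example, a hand-written DTO corresponding to a hypothetical Articles entity set might look like the sketch below. The property names, types, and the Rating and Comment classes are assumptions for illustration; they must match what the $metadata document actually declares.
public class Article 
{ 
    public int Id { get; set; } 
    public string Title { get; set; } 

    // Navigation properties used later with Expand(). 
    public List<Rating> Ratings { get; set; } 
    public List<Comment> Comments { get; set; } 
} 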

Retrieving Data Collection
Having initialized the communication with the service, we now need to fetch some data. For example, the following statement will fetch all the data in the Articles table using the OData protocol.
var articles = await client      
                    .For<Article>() 
                    .FindEntriesAsync(); 

Here, the client is an object of the ODataClient class.

There is a problem with the above statement. If this Article table has millions of records, your HTTP call will block or return a timeout exception and you cannot achieve your requirement. To avoid such situations, we need to use annotations defined in this library.

Annotations will help to minimize the load on the network by limiting the records in a single fetch. Along with records it also sends the link to fetch the next set of records so that this can be fetched until all records get fetched from the Articles table.
var annotations = new ODataFeedAnnotations(); 
var article = (await client 
    .For<Article>() 
    .FindEntriesAsync(annotations)) 
    .ToList(); 
while (annotations.NextPageLink != null) 
{ 
    article.AddRange(await client 
        .For<Article>() 
        .FindEntriesAsync(annotations.NextPageLink, annotations) 
    ); 
} 


In the above code, the first call will fetch a set of records (8 by default, or the service can decide based on network conditions to avoid timeout exceptions) along with the NextPageLink property. This property holds the URL of the OData web service call that fetches the next page of records.

Include Associated Data Collections

So far, we have fetched the table directly, but we may also need to fetch related records (an "include" in Entity Framework terms) as part of the response. To achieve this, the library provides an Expand() method to declare the related tables, so that the OData API associates them and maps them onto the response object when sending the response.
var articles = await client 
    .For<Article>() 
    .Expand(x => new { x.Ratings, x.Comments }) 
    .FindEntryAsync(); 


In the above example, the system fetches all article information along with the ratings and comments data as part of each article record, joining them accordingly at the web service end.

Authentication
So far, we have seen how to configure and consume data from an existing OData service API. In the real world, to keep data secure, the API should authenticate the request as well as the requesting client. To do so, we need to generate a token based on the credentials shared by the API service.

The following code prepares the ODataClient object, generating the token from the given credentials.
private ODataClient GetODataClient() 
        { 
            try 
            { 
                string url = _config.GetSection("APIDetails:URL").Value; 
                String username = _config.GetSection("APIDetails:UserName").Value; 
                String password = _config.GetSection("APIDetails:Password").Value; 
 
                String accessToken = System.Convert.ToBase64String(System.Text.Encoding.GetEncoding("ISO-8859-1").GetBytes(username + ":" + password)); 
 
                var oDataClientSettings = new ODataClientSettings(new Uri(url)); 
                oDataClientSettings.BeforeRequest += delegate (HttpRequestMessage message) 
                { 
                    message.Headers.Add("Authorization", "Basic " + accessToken); 
                }; 
 
                var client = new ODataClient(oDataClientSettings); 
 
                Simple.OData.Client.V4Adapter.Reference(); 
 
                return client; 
            } 
            catch(Exception ex) 
            { 
                LogError("", $"Failed to connect API Services : {ex.Message}"); 
            } 
 
            return null; 
        } 


In the above code, we get the URL, username, and password from the appsettings.json file and then create a Basic authentication token by Base64-encoding the username and password. Once the token is generated, we send it in the Authorization header using the ODataClientSettings object and create the ODataClient object.
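
For reference, the configuration keys read above would correspond to an appsettings.json fragment along these lines (the values shown are placeholders):
{
  "APIDetails": {
    "URL": "https://services.odata.org/sferp/",
    "UserName": "your-username",
    "Password": "your-password"
  }
}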

Once this ODataClient object is created, we can request the data from OData web services as discussed above.

I hope this article helped you to understand how C# based client applications can be created and used to consume existing OData API services.



ASP.NET Core 3.1.4 Hosting - HostForLIFE.eu :: How To Serialize Nonstandard Enum Values?

clock July 21, 2020 13:50 by author Peter

.NET client libraries that integrate with third-party APIs occasionally need to compromise on how enum values are represented in model classes. For example, an API that requires values to be expressed in all uppercase letters forces the creation of an enum similar to:

public enum YesNoMaybeEnum  
 {  
     YES,  
     NO,  
     MAYBE   
 }  


While this will compile, it violates .NET naming conventions. In other cases, the third party may use names that include invalid characters like dashes or periods. For example, Amazon's Alexa messages include a list of potential request types with a period in the names. These values cannot be expressed as enumeration names. While this could be addressed by changing the data type of the serialized property from an enumeration to a string, the property values would no longer be constrained and any suggestions from IntelliSense would be lost.

This article demonstrates how to eat your cake and have it, too. Using attributes and reflection, values can be serialized to and deserialized from JSON.

Serialization with EnumDescription
Let's say we need to serialize values that include periods. Creating an enum like the following generates compile time errors,

public enum RequestTypeEnum  
{  
    LaunchRequest,  
    IntentRequest,  
    SessionEndedRequest,  
    CanFulfillIntentRequest,  
    AlexaSkillEvent.SkillPermissionAccepted,  
    AlexaSkillEvent.SkillAccountLinked,  
    AlexaSkillEvent.SkillEnabled,  
    AlexaSkillEvent.SkillDisabled,  
    AlexaSkillEvent.SkillPermissionChanged,  
    Messaging.MessageReceived  
}  

The EnumMember attribute defines the value to serialize when dealing with data contracts. Samples on Stack Overflow that show enumeration serialization tend to use the Description attribute. Either attribute can be used, or you can create your own. Because the EnumMember attribute is more tightly bound to data contract serialization, while the Description attribute is intended for design-time and runtime environments, the serialization approach in this article opts for EnumMember. After applying the EnumMember and DataContract attributes, the enum now looks like:

[DataContract(Name = "RequestType")]    
public enum RequestTypeEnum    
{    
    [EnumMember(Value = "LaunchRequest")]    
    LaunchRequest,    
    [EnumMember(Value = "IntentRequest")]    
    IntentRequest,    
    [EnumMember(Value = "SessionEndedRequest")]    
    SessionEndedRequest,    
    [EnumMember(Value = "CanFulfillIntentRequest")]    
    CanFulfillIntentRequest,    
    [EnumMember(Value = "AlexaSkillEvent.SkillPermissionAccepted")]    
    SkillPermissionAccepted,    
    [EnumMember(Value = "AlexaSkillEvent.SkillAccountLinked")]    
    SkillAccountLinked,    
    [EnumMember(Value = "AlexaSkillEvent.SkillEnabled")]    
    SkillEnabled,    
    [EnumMember(Value = "AlexaSkillEvent.SkillDisabled")]    
    SkillDisabled,    
    [EnumMember(Value = "AlexaSkillEvent.SkillPermissionChanged")]    
    SkillPermissionChanged,    
    [EnumMember(Value = "Messaging.MessageReceived")]    
    MessageReceived    
}    

The EnumMember attribute is also applied to enum members without periods. Otherwise, the DataContractSerializer would serialize the numeric representation of the enumeration value. Now we can define a data contract with:
[DataContract]  
public class SamplePoco  
{  
    [DataMember]  
    public RequestTypeEnum RequestType { get; set; }  
}
 

And serialize it to XML with,
SamplePoco enumPoco = new SamplePoco();  
enumPoco.RequestType = RequestTypeEnum.SkillDisabled;  
DataContractSerializer serializer = new DataContractSerializer(typeof(SamplePoco));  
 
var output = new StringBuilder();  
using (var xmlWriter = XmlWriter.Create(output))  
{  
    serializer.WriteObject(xmlWriter, enumPoco);  
    xmlWriter.Close();  
}  
string xmlOut = output.ToString();   


This generates the following XML,
<?xml version="1.0" encoding="utf-16"?><SamplePoco xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/EnumSerializationSample"><RequestType>AlexaSkillEvent.SkillDisabled</RequestType>  
</SamplePoco>  


DataContract serialization is sorted out, but doesn't yet address JSON serialization.

JSON Serialization

If you need to work with any REST API endpoints, then you'll need to support JSON. Newtonsoft's Json.NET has its own serialization strategy, so the EnumMember attribute needs to be leveraged through a custom JsonConverter to integrate with it. Before taking that step, the enumeration value must be read from the attribute.

This method accepts an enum value and returns the string value in the EnumMember attribute.
private string GetDescriptionFromEnumValue(Enum value)  
        {  
 
#if NETSTANDARD2_0  
            EnumMemberAttribute attribute = value.GetType()  
                .GetField(value.ToString())  
                .GetCustomAttributes(typeof(EnumMemberAttribute), false)  
                .SingleOrDefault() as EnumMemberAttribute;  
 
            return attribute == null ? value.ToString() : attribute.Value;  
#endif  
 
#if NETSTANDARD1_6 || NETSTANDARD1_3 || NET45 || NET47  
 
            EnumMemberAttribute attribute = value.GetType()  
                .GetRuntimeField(value.ToString())  
                .GetCustomAttributes(typeof(EnumMemberAttribute), false)  
                .SingleOrDefault() as EnumMemberAttribute;  
 
            return attribute == null ? value.ToString() : attribute.Value;  
            
#endif  
              throw new NotImplementedException("Unsupported version of .NET in use");  
        }  


There's a subtle difference between the .NET Standard 2.0 implementation and the others. In .NET Standard 1.6 and prior versions, use the GetRuntimeField method to get a field from a type. In .NET Standard 2.0, use the GetField method to return the field of a type. The compile-time constants and checks in GetDescriptionFromEnumValue abstract away that complexity.

Coming the other way, a method needs to take a string and convert it to the associated enumeration.
public T GetEnumValue(string enumMemberText)   
{  
 
    T retVal = default(T);  
 
    if (Enum.TryParse<T>(enumMemberText, out retVal))  
        return retVal;  
 
      var enumVals = Enum.GetValues(typeof(T)).Cast<T>();  
 
    Dictionary<string, T> enumMemberNameMappings = new Dictionary<string, T>();  
 
    foreach (T enumVal in enumVals)  
    {  
        string enumMember = GetDescriptionFromEnumValue(enumVal);  
        enumMemberNameMappings.Add(enumMember, enumVal);  
    }  
 
    if (enumMemberNameMappings.ContainsKey(enumMemberText))  
    {  
        retVal = enumMemberNameMappings[enumMemberText];  
    }  
    else  
        throw new SerializationException($"Could not resolve value {enumMemberText} in enum {typeof(T).FullName}");  
 
    return retVal;  
}  

The values expressed in the EnumMember attributes are loaded into a dictionary. The value of the attribute serves as the key and the associated enum is the value. The dictionary keys are compared to the string value passed to the parameter and, if a matching EnumMember value is found, the related enum is returned; so "AlexaSkillEvent.SkillEnabled" returns RequestTypeEnum.SkillEnabled.
Using this method in a JSON Converter, the WriteJson method looks like the following,
public class JsonEnumConverter<T> : JsonConverter where T : struct, Enum, IComparable, IConvertible, IFormattable
{  
    // CanConvert is abstract on JsonConverter, so the class must override it to compile.  
    public override bool CanConvert(Type objectType)  
    {  
        return objectType == typeof(T) || Nullable.GetUnderlyingType(objectType) == typeof(T);  
    }  

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)  
    {  
        if (value != null)  
        {  
            Enum sourceEnum = value as Enum;  
 
            if (sourceEnum != null)    
            {  

                string enumText = GetDescriptionFromEnumValue(sourceEnum);  
                writer.WriteValue(enumText);  
            }  
        }  
    }  


Please note that an enum constraint is applied to the generic type class declaration. This wasn't possible until C# version 7.3. If you cannot upgrade to use C# version 7.3, just remove this constraint.

The corresponding ReadJson method is,
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)  
{  
      object val = reader.Value;  
      if (val != null)  
    {  
        var enumString = (string)reader.Value;  
          return GetEnumValue(enumString);  
    }  
      return null;  
}  


Now the class definition needs to apply the JsonConverter class to the RequestType property,
[DataContract]  
public class SamplePoco  
{  
    [DataMember]  
    [JsonConverter(typeof(JsonEnumConverter<RequestTypeEnum>))]  
    public RequestTypeEnum RequestType { get; set; }  
}  

Finally, the SamplePoco class is serialized to JSON,
SamplePoco enumPoco = new SamplePoco();  
enumPoco.RequestType = RequestTypeEnum.SkillEnabled;  
string samplePocoText = JsonConvert.SerializeObject(enumPoco);  

This generates the following JSON,
{  
"RequestType":"AlexaSkillEvent.SkillEnabled"  
}  

And deserializing the JSON yields the RequestTypeEnum.SkillEnabled value on the sample class.

string jsonText = "{\"RequestType\":\"AlexaSkillEvent.SkillEnabled\"}";  
SamplePoco sample = JsonConvert.DeserializeObject<SamplePoco>(jsonText); 





ASP.NET Core Hosting - HostForLIFE.eu :: How .NET Support Multiple Languages?

clock July 6, 2020 13:38 by author Peter

An application is said to be multilingual if it can be developed using many different languages. With .NET, all of the languages, including Visual Basic .NET, C#, and J#, compile to a common Intermediate Language (IL). This makes all the languages interoperable. IL is similar to Java bytecode: a low-level language with a simple syntax that can be very quickly translated into native machine code.

CLR
.NET Framework is multilingual because of the CLR. The CLR is the key component of the .NET Framework. Code running under the control of the CLR is often termed managed code.

The main task of the CLR is to convert compiled code into native code.
.NET Framework has one or more compilers; for example, VB .NET, C#, C++, JScript, or any third-party compiler such as COBOL. Any one of these compilers will convert your source code into Microsoft Intermediate Language (MSIL). The main reason .NET is multilingual is that code compiled to IL is interoperable with code that has been compiled to IL from another language.

It simply means that you can create pages in different languages (like C#, VB .NET, J# etc.) and once all of these pages are compiled they all can be used in a single application. Let us understand this point clearly with an example.

Let us consider a situation where a customer needs an application to be ready in 20 days. For completing the application in 20 days we want 30 developers who all know the specific language but we have 15 developers who know C# and 15 developers who know VB .NET. In this situation, if we don’t use .NET then we need to hire 15 more developers of C# or VB .NET which is a difficult and costly solution. Now, if we use .NET then we can use C# and VB .NET language in the same application. This is possible because once C# code is compiled from IL it becomes interoperable with VB .NET code which is compiled from IL.

Then the JIT (Just-In-Time) compiler of the CLR converts this MSIL code into native code using metadata, and the native code is executed by the OS.

CLR stands for Common Language Runtime. The Common Language Runtime provides other services like memory management, thread management, remoting, and security, and it relies on CTS and CLS for language interoperability.

The CLR is a layer between the operating system and the .NET languages, and it uses CTS and CLS to keep code interoperable.

CTS
CTS stands for Common Type System. CTS defines the rules that the Common Language Runtime follows when declaring, using, and managing types. CTS deals with data types: .NET supports many languages, and every language has its own data types; one language cannot understand the data types of another language.

For example: when we are creating an application in C#, we have int, and when we are creating an application in VB .NET, we have Integer. Here CTS comes into play: after compilation, both int and Integer map to the Int32 structure.

CLS
CLS stands for Common Language Specification.
CLS is a subset of CTS, and it declares all the rules and restrictions that all languages under the .NET Framework must follow. A language which follows these rules is known as CLS-compliant.
For example, we can use multiple inheritance in C++, but when we use the same code in C# it creates a problem because C# does not support multiple inheritance. Therefore, CLS restricts multiple inheritance for all languages.

Another rule is that you cannot have members whose names differ only by case. In C#, add() and Add() are different because the language is case-sensitive, but a problem arises when we consume this code from VB .NET, because VB .NET is not case-sensitive and considers add() and Add() the same.
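
As a small illustrative sketch (the class and method names are made up), marking an assembly as CLS-compliant makes the C# compiler warn about exactly this situation:
using System;

[assembly: CLSCompliant(true)]

public class Calculator
{
    // These two public members differ only by case, so the compiler
    // reports a CLS-compliance warning (CS3005) for the second one.
    public void add(int x) { }
    public void Add(int x) { }
}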





About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

