European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core Hosting :: Data and Business Layer, the New Way

clock March 21, 2022 10:05 by author Peter

Existing Patterns
Every enterprise application is backed by a persistent data store, typically a relational database. Object-oriented programming (OOP), on the other hand, is the mainstream of enterprise application development. Currently there are three patterns for developing business logic:

  • Transaction Script and Domain Model: The business logic is placed in in-memory code and the database is used pretty much as a storage mechanism.
  • Logic in SQL: The business logic is placed in SQL queries such as stored procedures.

Each pattern has its own pros and cons; basically it's a tradeoff between programmability and performance. Most people go the in-memory code way for better programmability, which requires an Object-Relational Mapping tool (ORM, O/RM, O/R mapping tool), such as Entity Framework. Great efforts have been made to reconcile the two, but it remains "The Vietnam of Computer Science", due to misconceptions about SQL and OOP.
 
The Misconceptions
SQL is Obsolete
The origins of SQL take us back to the 1970s. Since then, the IT world has changed and projects have become much more complicated, but SQL stays - more or less - the same. It works, but it's not elegant for today's modern application development. Most ORM implementations, like Entity Framework, try to encapsulate the code needed to manipulate the data so that you don't use SQL anymore. Unfortunately, this is wrongheaded and ends up as a Leaky Abstraction.
 
As coined by Joel Spolsky, the Law of Leaky Abstractions states:
  All non-trivial abstractions, to some degree, are leaky.
 
Apparently, the RDBMS and SQL, being fundamentals of your application, are far from trivial. You can't expect to abstract them away - you have to live with them. This is why most ORM implementations provide native SQL execution.
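As an illustration, Entity Framework Core exposes exactly such an escape hatch via FromSqlRaw. The following is a minimal sketch, not from the original article; the Blog entity and the use of the Sqlite provider (Microsoft.EntityFrameworkCore.Sqlite package) are my assumptions for demonstration:

```csharp
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Blog { public int Id { get; set; } public string Title { get; set; } }

public class AppDbContext : DbContext
{
    public DbSet<Blog> Blogs => Set<Blog>();
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlite("Data Source=:memory:");
}

public class Program
{
    public static void Main()
    {
        using var db = new AppDbContext();
        db.Database.OpenConnection();   // keep the in-memory database alive
        db.Database.EnsureCreated();
        db.Blogs.Add(new Blog { Title = "Leaky Abstractions" });
        db.SaveChanges();

        // The "leak" in action: executing native SQL through the ORM itself.
        var blogs = db.Blogs.FromSqlRaw("SELECT * FROM Blogs").ToList();
        Console.WriteLine(blogs.Count);
    }
}
```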
 
OOP/POCO Obsession

OOP, on the other hand, is modern and the mainstream of application development. It's so widely adopted that many developers subconsciously believe OOP can solve all problems. Moreover, many framework authors hold the belief that any framework that does not support POCO is not a good framework.
 
In fact, like any technology, OOP has its limitations too. The biggest one, IMO, is this: OOP is limited to the local process; it's not serialization/deserialization friendly. Each and every object is accessed via its reference (the address pointer), and the reference, together with the type metadata and compiled byte code (which further reference type descriptors, vtables, etc.), is private to the local process. This is easy to overlook. By nature, any serialized data is value type, which means:

  • To serialize/deserialize an object, a converter for the reference is needed, either implicitly or explicitly. An ORM can be considered the converter between objects and relational data.
  • As the object complexity grows, the complexity of the converter grows respectively. In particular, the type metadata and compiled byte code (the behavior of the object, or the logic) are difficult or maybe impossible to convert - in the end, you need virtually the whole type runtime. That's why so many applications start with Domain-Driven Design but end up with an Anemic Domain Model.
  • On the other hand, the relational data model is very complex by nature, compared to other data formats such as JSON. This adds another layer of complexity to the converter. An ORM, being the converter between objects and relational data, will sooner or later hit the wall.

That's the real problem of object-relational impedance mismatch, if you want to map between arbitrary objects (POCO) and relational data. Unfortunately, almost all ORM implementations follow this path, and none of them can escape it.
 
The New Way
When you're using a relational database, implementing your business logic in SQL/stored procedures is the shortest path and therefore gives the best performance. The cons lie in the code maintainability of SQL. On the other hand, implementing your business logic as in-memory code has many advantages in terms of code maintainability, but may have performance issues in some cases and, most importantly, ends up with the object-relational impedance mismatch described above. How can we get the best of both?
 
RDO.Data, an open source framework for handling data, is the answer to this question. You can write your business logic both ways - stored-procedure alike or as in-memory code - using C#/VB.Net, independent of your physical database. To achieve this, relational schema and data are implemented as a comprehensive yet simple object model:


The following data objects are provided with rich set of properties, methods and events:

  • Model/Model<T>: Defines the metadata of the model, and declarative business logic such as data constraints, automatically calculated fields and validation, which can be consumed by both the database and local in-memory code.
  • DataSet<T>: Stores hierarchical data locally and acts as the domain model of your business logic. It can be conveniently exchanged with the relational database in set-based operations (CRUD), or with external systems via JSON.
  • Db: Defines the database session, which contains:
    • DbTable<T>: Permanent database tables for data storage;
    • Instance methods of the Db class to implement procedural business logic, using DataSet<T> objects as input/output. The business logic can be simple CRUD operations, or complex operations such as an MRP calculation:
      • You can use DbQuery<T> objects to encapsulate data as reusable views, and/or temporary DbTable<T> objects to store intermediate results, to write stored-procedure alike, set-based (CRUD) business logic.
      • On the other hand, DataSet<T> objects, in addition to being used as input/output of your procedural business logic, can also be used to write in-memory code that implements your business logic locally.
      • Since these objects are database agnostic, you can easily port your business logic to different relational databases.
  • DbMock<T>: Easily mocks the database into an isolated, known state for testing.

The following is an example of a business layer implementation that deals with sales orders in the AdventureWorksLT sample. Please note the example contains just CRUD operations for simplicity; RDO.Data is capable of doing much more than this.
    public async Task<DataSet<SalesOrderInfo>> GetSalesOrderInfoAsync(_Int32 salesOrderID, CancellationToken ct = default(CancellationToken))  

    {  
        var result = CreateQuery((DbQueryBuilder builder, SalesOrderInfo _) =>  
        {  
            builder.From(SalesOrderHeader, out var o)  
                .LeftJoin(Customer, o.FK_Customer, out var c)  
                .LeftJoin(Address, o.FK_ShipToAddress, out var shipTo)  
                .LeftJoin(Address, o.FK_BillToAddress, out var billTo)  
                .AutoSelect()  
                .AutoSelect(c, _.Customer)  
                .AutoSelect(shipTo, _.ShipToAddress)  
                .AutoSelect(billTo, _.BillToAddress)  
                .Where(o.SalesOrderID == salesOrderID);  
        });  
      
        await result.CreateChildAsync(_ => _.SalesOrderDetails, (DbQueryBuilder builder, SalesOrderInfoDetail _) =>  
        {  
            builder.From(SalesOrderDetail, out var d)  
                .LeftJoin(Product, d.FK_Product, out var p)  
                .AutoSelect()  
                .AutoSelect(p, _.Product)  
                .OrderBy(d.SalesOrderDetailID);  
        }, ct);  
      
        return await result.ToDataSetAsync(ct);  
    }  
      
    public async Task<int?> CreateSalesOrderAsync(DataSet<SalesOrderInfo> salesOrders, CancellationToken ct)  
    {  
        await EnsureConnectionOpenAsync(ct);  
        using (var transaction = BeginTransaction())  
        {  
            salesOrders._.ResetRowIdentifiers();  
            await SalesOrderHeader.InsertAsync(salesOrders, true, ct);  
            var salesOrderDetails = salesOrders.GetChild(_ => _.SalesOrderDetails);  
            salesOrderDetails._.ResetRowIdentifiers();  
            await SalesOrderDetail.InsertAsync(salesOrderDetails, ct);  
      
            await transaction.CommitAsync(ct);  
            return salesOrders.Count > 0 ? salesOrders._.SalesOrderID[0] : null;  
        }  
    }  
      
    public async Task UpdateSalesOrderAsync(DataSet<SalesOrderInfo> salesOrders, CancellationToken ct)  
    {  
        await EnsureConnectionOpenAsync(ct);  
        using (var transaction = BeginTransaction())  
        {  
            salesOrders._.ResetRowIdentifiers();  
            await SalesOrderHeader.UpdateAsync(salesOrders, ct);  
            await SalesOrderDetail.DeleteAsync(salesOrders, (s, _) => s.Match(_.FK_SalesOrderHeader), ct);  
            var salesOrderDetails = salesOrders.GetChild(_ => _.SalesOrderDetails);  
            salesOrderDetails._.ResetRowIdentifiers();  
            await SalesOrderDetail.InsertAsync(salesOrderDetails, ct);  
      
            await transaction.CommitAsync(ct);  
        }  
    }  
      
    public Task<int> DeleteSalesOrderAsync(DataSet<SalesOrderHeader.Key> dataSet, CancellationToken ct)  
    {  
        return SalesOrderHeader.DeleteAsync(dataSet, (s, _) => s.Match(_), ct);  
    } 

The above code can be found in the downloadable source code, which is a fully featured WPF application using the well-known AdventureWorksLT sample database.

RDO.Data Features

  • Comprehensive hierarchical data support.
  • Rich declarative business logic support: constraints, automatically calculated fields, validations, etc., for both server side and client side.
  • Comprehensive inter-table join/lookup support.
  • Reusable view via DbQuery<T> objects.
  • Intermediate result store via temporary DbTable<T> objects.
  • Comprehensive JSON support, with better performance because no reflection is required.
  • Fully customizable data types and user-defined functions.
  • Built-in logging for database operations.
  • Extensive support for testing.
  • Rich design time tools support.
  • And much more...

Pros

  • Unified programming model for all scenarios. You have full control of your data and business layer, no magic black box.
  • Your data and business layer is best balanced for both programmability and performance. A rich set of data objects is provided - no more object-relational impedance mismatch.
  • Data and business layer testing is a first class citizen which can be performed easily - your application can be much more robust and adaptive to change.
  • Easy to use. The APIs are clean and intuitive, with rich design time tools support.
  • Rich in features yet lightweight. The runtime DevZest.Data.dll is less than 500KB in size, whereas DevZest.Data.SqlServer is only 108KB, without any 3rd party dependency.
  • The rich metadata can be conveniently consumed by other layers of your application, such as the presentation layer.

Cons

  • It's new. Although the APIs are designed to be clean and intuitive, you or your team still need some time to get familiar with the framework. In particular, your domain model objects are split into two parts: the Model/Model<T> objects and the DataSet<T> objects. It's not complex, but you or your team may need some time to get used to it.
  • To best utilize RDO.Data, your team should be comfortable with SQL, at least to an intermediate level. This is one of those situations where you have to take into account the makeup of your team - people do affect architectural decisions.
  • Although the data objects are lightweight, there is some overhead compared to POCO objects, especially in the simplest scenarios. In terms of performance, it may get close to, but cannot beat, a native stored procedure.

HostForLIFE.eu ASP.NET Core Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 Recommended Windows and ASP.NET hosting in the European Continent, with 99.99% Uptime Guaranteed for Reliability, Stability and Performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



European ASP.NET SignalR Hosting - HostForLIFE :: How To Get List Of Connected Clients In SignalR?

clock March 18, 2022 07:33 by author Peter

I have not found any direct way to get the list of connected clients, so we need to write our own logic inside the methods provided by the SignalR library.

There is a Hub class provided by the SignalR library.
 
In this class, we have 2 methods.

  • OnConnectedAsync()
  • OnDisconnectedAsync(Exception exception)

So, the OnConnectedAsync() method will add a user and OnDisconnectedAsync will remove a user, because when any client gets connected, the OnConnectedAsync method gets called.

In the same way, when any client gets disconnected, then the OnDisconnectedAsync method is called.
 
So, let us see it by example.
 
Step 1
Here, I am going to define a class SignalRHub and inherit from the Hub class, which provides the virtual methods, and use Context.ConnectionId: a unique id generated by the SignalR HubCallerContext class.
    public class SignalRHub : Hub
    {
        public override Task OnConnectedAsync()
        {
            ConnectedUser.Ids.Add(Context.ConnectionId);
            return base.OnConnectedAsync();
        }
        public override Task OnDisconnectedAsync(Exception exception)
        {
            ConnectedUser.Ids.Remove(Context.ConnectionId);
            return base.OnDisconnectedAsync(exception);
        }
    }


Step 2
In this step, we need to define our class ConnectedUser with a static list Ids that is used to add/remove connection ids when any client gets connected or disconnected.
 
Let us see this with an example.
    public static class ConnectedUser
    {
        public static List<string> Ids = new List<string>();
    }

Now, you can get the count of currently connected clients using ConnectedUser.Ids.Count.
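To see the bookkeeping in isolation, outside a running hub, the add/remove logic can be exercised directly; the connection id strings below are made up for illustration (a real hub receives them from Context.ConnectionId):

```csharp
using System;
using System.Collections.Generic;

public static class ConnectedUser
{
    public static List<string> Ids = new List<string>();
}

public class Program
{
    public static void Main()
    {
        // Simulate what OnConnectedAsync/OnDisconnectedAsync do with connection ids.
        ConnectedUser.Ids.Add("conn-1");   // client 1 connects
        ConnectedUser.Ids.Add("conn-2");   // client 2 connects
        ConnectedUser.Ids.Remove("conn-1"); // client 1 disconnects

        Console.WriteLine(ConnectedUser.Ids.Count); // 1
    }
}
```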
 
Note
As you see, here I am using a static class that will work fine when you have only one server, but when you work with multiple servers, it will not work as expected. In that case, you could use a cache server such as Redis cache or SQL cache.



European ASP.NET Core Hosting :: Serialization

clock March 14, 2022 07:52 by author Peter

A - Introduction
Serialization is the process of converting an object into a stream of bytes to store the object or transmit it to memory, a database, or a file. Its main purpose is to save the state of an object in order to be able to recreate it when needed. The reverse process is called deserialization.

This is the structure of this article,

    A - Introduction
    B - How Serialization Works
    C - Uses for Serialization
    D - .NET Serialization Features
    E - Samples of the Serialization
        Binary and Soap Formatter
        JSON Serialization

B - How Serialization Works
The object is serialized to a stream that carries the data. The stream may also have information about the object's type, such as its version, culture, and assembly name. From that stream, the object can be stored in a database, a file, or memory.

C - Uses for Serialization
Serialization allows the developer to save the state of an object and re-create it as needed, providing storage of objects as well as data exchange. Through serialization, a developer can perform actions such as:

  • Sending the object to a remote application by using a web service
  • Passing an object from one domain to another
  • Passing an object through a firewall as a JSON or XML string
  • Maintaining security or user-specific information across applications

D - .NET Serialization Features

.NET features the following serialization technologies:
  • Binary serialization preserves type fidelity, which is useful for preserving the state of an object between different invocations of an application. For example, you can share an object between different applications by serializing it to the Clipboard. You can serialize an object to a stream, to a disk, to memory, over the network, and so forth. Remoting uses serialization to pass objects "by value" from one computer or application domain to another.
  • XML and SOAP serialization serializes only public properties and fields and does not preserve type fidelity. This is useful when you want to provide or consume data without restricting the application that uses the data. Because XML is an open standard, it is an attractive choice for sharing data across the Web. SOAP is likewise an open standard, which makes it an attractive choice.
  • JSON serialization serializes only public properties and does not preserve type fidelity. JSON is an open standard that is an attractive choice for sharing data across the web.

Note
For binary or XML serialization, you need:

  • The object to be serialized
  • A stream to contain the serialized object
  • A formatter instance

Binary serialization can be dangerous: the BinaryFormatter.Deserialize method is never safe when used with untrusted input.


E - Samples of the Serialization
Binary and Soap Formatter
The following example demonstrates serialization of an object that is marked with the SerializableAttribute attribute. To use the BinaryFormatter instead of the SoapFormatter, uncomment the appropriate lines.
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Soap;
//using System.Runtime.Serialization.Formatters.Binary;

public class Test {
public static void Main()  {

  // Creates a new TestSimpleObject object.
  TestSimpleObject obj = new TestSimpleObject();

  Console.WriteLine("Before serialization the object contains: ");
  obj.Print();

  // Opens a file and serializes the object into it in SOAP format.
  Stream stream = File.Open("data.xml", FileMode.Create);
  SoapFormatter formatter = new SoapFormatter();

  //BinaryFormatter formatter = new BinaryFormatter();

  formatter.Serialize(stream, obj);
  stream.Close();

  // Empties obj.
  obj = null;

  // Opens file "data.xml" and deserializes the object from it.
  stream = File.Open("data.xml", FileMode.Open);
  formatter = new SoapFormatter();

  //formatter = new BinaryFormatter();

  obj = (TestSimpleObject)formatter.Deserialize(stream);
  stream.Close();

  Console.WriteLine("");
  Console.WriteLine("After deserialization the object contains: ");
  obj.Print();
}
}

// A test object that needs to be serialized.
[Serializable()]
public class TestSimpleObject  {

public int member1;
public string member2;
public string member3;
public double member4;

// A field that is not serialized.
[NonSerialized()] public string member5;

public TestSimpleObject() {

    member1 = 11;
    member2 = "hello";
    member3 = "hello";
    member4 = 3.14159265;
    member5 = "hello world!";
}

public void Print() {

    Console.WriteLine("member1 = '{0}'", member1);
    Console.WriteLine("member2 = '{0}'", member2);
    Console.WriteLine("member3 = '{0}'", member3);
    Console.WriteLine("member4 = '{0}'", member4);
    Console.WriteLine("member5 = '{0}'", member5);
}
}


The code is from Microsoft at SerializableAttribute Class (System).

Run the code
1, If we comment out the [Serializable()] attribute on TestSimpleObject, we will get a SerializationException.

2, If you use SoapFormatter, you will get XML output.

3, If you use BinaryFormatter, you will get binary output.

JSON Serialization:
// How to serialize and deserialize (marshal and unmarshal) JSON in .NET
// https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-how-to?pivots=dotnet-6-0

using System;
using System.IO;
using System.Text.Json;

namespace SerializeToFile
{
    // A test object that needs to be serialized.
    [Serializable()]
    public class WeatherForecast
    {
        public DateTimeOffset Date { get; set; }
        public int TemperatureCelsius { get; set; }
        //public string? Summary { get; set; }
        public string Summary { get; set; }
    }

    public class Program
    {
        public static void Main()
        {
            var weatherForecast = new WeatherForecast
            {
                Date = DateTime.Parse("2019-08-01"),
                TemperatureCelsius = 25,
                Summary = "Hot"
            };

            string fileName = "WeatherForecast.json";
            string jsonString = JsonSerializer.Serialize(weatherForecast);
            File.WriteAllText(fileName, jsonString);

            Console.WriteLine(File.ReadAllText(fileName));
            Console.ReadLine();
        }
    }
}
// output:
//{"Date":"2019-08-01T00:00:00-07:00","TemperatureCelsius":25,"Summary":"Hot"}


The code is from Microsoft at How to serialize and deserialize JSON using C# - .NET.

Run the code,
4, You will get the JSON output shown in the output comment at the end of the code.
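To complete the round trip, the JSON can be read back with JsonSerializer.Deserialize<T>. A minimal sketch; the JSON string below is the output produced by the serialization sample above:

```csharp
using System;
using System.Text.Json;

public class WeatherForecast
{
    public DateTimeOffset Date { get; set; }
    public int TemperatureCelsius { get; set; }
    public string Summary { get; set; }
}

public class Program
{
    public static void Main()
    {
        // The JSON produced by the serialization sample above.
        string jsonString =
            "{\"Date\":\"2019-08-01T00:00:00-07:00\",\"TemperatureCelsius\":25,\"Summary\":\"Hot\"}";

        WeatherForecast forecast = JsonSerializer.Deserialize<WeatherForecast>(jsonString);

        Console.WriteLine(forecast.Date);               // the original date round-trips
        Console.WriteLine(forecast.TemperatureCelsius); // 25
        Console.WriteLine(forecast.Summary);            // Hot
    }
}
```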



European ASP.NET Core Hosting :: Differences Between Scoped, Transient, And Singleton Service

clock March 11, 2022 05:46 by author Peter

In this article, we will see the difference between AddScoped, AddTransient, and AddSingleton in .NET Core.
Why we need them

  • It defines the lifetime of object creation or a registration in the .net core with the help of Dependency Injection.
  • The DI Container has to decide whether to return a new object of the service or consume an existing instance.
  • The lifetime of the Service depends on how we instantiate the dependency.
  • We define the lifetime when we register the service.

Three types of lifetime and registration options

  • Scoped
  • Transient
  • Singleton

Scoped

  • With this lifetime, we get a new instance with every HTTP request.
  • The same instance is provided for the entire scope of that request; e.g., if we have a couple of parameters in the controller, both objects contain the same instance across the request.
  • This is a better option when you want to maintain state within a request.

services.AddScoped<IAuthService,AuthService>();

Transient

  • A new service instance is created for each object in the HTTP request.
  • This is a good approach for multithreading scenarios because the objects are independent of one another.
  • Since an instance is created every time one is requested, transient services use more memory and resources and can have a negative impact on performance.
  • Use it for lightweight services with little or no state.

services.AddTransient<ICronJobService,CronJobService>();

Singleton

  • Only one service instance is created throughout the application's lifetime.
  • The same instance is reused in the future, wherever the service is required.
  • Since it's a single instance living for the whole application, memory leaks in these services will build up over time.
  • It is also memory efficient, as instances are created once and reused everywhere.

services.AddSingleton<ILoggingService, LoggingService>();

When to use which Service
Singleton approach => We can use this for logging services, feature flags (to turn modules on and off during deployment), and email services.
Scoped approach => This is a better option when you want to maintain state within a request.
Transient approach => Use this approach for lightweight services with little or no state.
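The three lifetimes can be observed directly in a console program. This is a sketch, assuming the Microsoft.Extensions.DependencyInjection package; the interface and class names are illustrative, and each instance carries a Guid so we can compare identities:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface ITransientOp { Guid Id { get; } }
public interface IScopedOp { Guid Id { get; } }
public interface ISingletonOp { Guid Id { get; } }
public class Operation : ITransientOp, IScopedOp, ISingletonOp
{
    public Guid Id { get; } = Guid.NewGuid();
}

public class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddTransient<ITransientOp, Operation>();
        services.AddScoped<IScopedOp, Operation>();
        services.AddSingleton<ISingletonOp, Operation>();
        using var provider = services.BuildServiceProvider();

        // Each scope stands in for one HTTP request.
        using var scope1 = provider.CreateScope();
        using var scope2 = provider.CreateScope();
        var s1 = scope1.ServiceProvider;
        var s2 = scope2.ServiceProvider;

        // Transient: a new instance on every resolution, even within one scope.
        Console.WriteLine(s1.GetRequiredService<ITransientOp>().Id ==
                          s1.GetRequiredService<ITransientOp>().Id); // False

        // Scoped: same instance within a scope, different across scopes.
        Console.WriteLine(s1.GetRequiredService<IScopedOp>().Id ==
                          s1.GetRequiredService<IScopedOp>().Id);    // True
        Console.WriteLine(s1.GetRequiredService<IScopedOp>().Id ==
                          s2.GetRequiredService<IScopedOp>().Id);    // False

        // Singleton: same instance everywhere.
        Console.WriteLine(s1.GetRequiredService<ISingletonOp>().Id ==
                          s2.GetRequiredService<ISingletonOp>().Id); // True
    }
}
```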




European ASP.NET Core Hosting :: Performance Comparison Using Three Different Methods for Joining Lists

clock March 8, 2022 06:46 by author Peter

Combining Lists comparing AddRange, Arrays and Concat
I had to combine two large lists, and was curious about the performance of various methods. The three methods that came to mind were:

  • Using AddRange.
  • Using an array to copy both lists into, then converting the array back to the final list.
  • Using Linq's Concat extension method.

I was looking for the straight union of the two lists. No sorting, merging or filtering.

To compare the methods I had in mind, I decided to combine two huge lists and measure the time it took. That's not perfect, since it's a rare use case, but performance tests are necessarily contrived in one dimension or another. Milliseconds and up are easier to compare than ticks, so I use a dataset that gets me into that range.

When it comes to measuring performance, I am a bit paranoid about compiler and run-time optimizations, and a collection with all zeroes strikes me as a good target to optimize for. So I add some guff to the collections before combining them.
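A sketch of how the input lists might be seeded with that "guff"; the element count and random seed are my assumptions, since the article only says the lists are huge and non-zero:

```csharp
using System;
using System.Collections.Generic;

public static class ListBuilder
{
    // Builds a list filled with pseudo-random, non-zero values so the
    // runtime has no all-zero pattern to optimize for. The fixed seed
    // keeps runs repeatable.
    public static List<object> Build(int count, int seed)
    {
        var rand = new Random(seed);
        var list = new List<object>(count);
        for (int i = 0; i < count; i++)
            list.Add(rand.Next(1, int.MaxValue)); // boxed non-zero ints
        return list;
    }

    public static void Main()
    {
        var l1 = Build(1_000_000, 1);
        var l2 = Build(1_000_000, 2);
        Console.WriteLine(l1.Count + l2.Count); // 2000000
    }
}
```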

I use the StopWatch class as time-taker. I don't like repeated start/stop calls, as it's easy to inadvertently get some code in between start and stop that was not part of the code to be measured. I isolated the StopWatch calls using an interface like this:
interface ITimedWorker
{
    string Label { get; }
    bool Check();
    void Work();
}


Then I can make a single measure with a method like this:
static void RunTimedOperation(ITimedWorker op)
{
    Stopwatch sw = new Stopwatch();

    Log("Starting {0}", op.Label);
    GC.Collect();
    sw.Start();
    op.Work();
    sw.Stop();
    if (!op.Check())
        Log("{0} failed its check", op.Label);
    Log("{0} finished in:\n\t{1:D2}:{2:D2}:{3:D2}:{4:D4}", op.Label, sw.Elapsed.Hours, sw.Elapsed.Minutes, sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
}

To get around garbage collection interference, I tried various ways to disable or discourage it during the actual combine operation I wanted to measure, but they did not work for the data sizes I was using. I found that inducing a collection just before the operation was enough to avoid triggering collections where I didn't want them.


The test program is written to run either a single test or all three. When running all three tests, to alleviate the issues with garbage collection, the code picks pseudo-randomly which method to run first. It is an attempt to be fair to each of the three methods, since the garbage collector will be more likely to run in subsequent operations compared to the first one completed.

For the combining logic, I created a common base class for the three methods:
abstract class ListJoinerBase
{
    protected List<object> m_l1, m_l2, m_lCombined;
    private long m_goalCount;

    protected ListJoinerBase(List<object> l1, List<object> l2)
    {
        m_l1 = l1; m_l2 = l2;
        m_goalCount = m_l1.Count + m_l2.Count;
    }

    public abstract void Work();
    public bool Check()
    {
        Debug.Assert(m_lCombined.Count == m_goalCount);
        return m_lCombined.Count == m_goalCount;
    }
}


I checked correctness with the debugger. To minimize the chance of new errors getting introduced as code is added, I use a simple check method, which just verifies that the combined list is of the expected size.

Then remaining work was to add labels and implement the Work() method for the three specializations:
class ListJoinerArray : ListJoinerBase, ITimedWorker
{
    [...]
    public override void Work()
    {
        object[] combined = new object[m_l1.Count + m_l2.Count];

        m_l1.CopyTo(combined);
        m_l2.CopyTo(combined, m_l1.Count);
        m_lCombined = new List<object>(combined);
    }
}
class ListJoinerAddRange : ListJoinerBase, ITimedWorker
{
    [...]
    public override void Work()
    {
        m_lCombined = new List<object>();
        m_lCombined.AddRange(m_l1);
        m_lCombined.AddRange(m_l2);
    }
}
class ListJoinerConcat : ListJoinerBase, ITimedWorker
{
    [...]
    public override void Work()
    {
        m_lCombined = m_l1.Concat(m_l2).ToList();
    }
}

Results
The Array and AddRange methods were close, with an edge to Array copying. The Concat method was somewhat behind. The first two methods came in at 18-24 milliseconds for the data size I was using. The Concat method took 144-146 milliseconds. Based on this, I decided to use the array copying method. Before deciding yourself, I encourage you to download the program, play with the methods and add any others you want to compare with, and come to your own conclusions.

Appendix: Sample output from program
[usrdir]\source\repos\ListCombineDemo\results>..\bin\Release\ListCombineDemo.exe /all
Starting Join lists Using Concatenation
Join lists Using Concatenation finished in:
        00:00:00:0147
Starting Join lists Using Arrays
Join lists Using Arrays finished in:
        00:00:00:0018
Starting Join lists Using AddRange
Join lists Using AddRange finished in:
        00:00:00:0024

[usrdir]\source\repos\ListCombineDemo\results>..\bin\Release\ListCombineDemo.exe /all
Starting Join lists Using AddRange
Join lists Using AddRange finished in:
        00:00:00:0024
Starting Join lists Using Concatenation
Join lists Using Concatenation finished in:
        00:00:00:0140
Starting Join lists Using Arrays
Join lists Using Arrays finished in:
        00:00:00:0019

[usrdir]\source\repos\ListCombineDemo\results>..\bin\Release\ListCombineDemo.exe /random
Starting Join lists Using AddRange
Join lists Using AddRange finished in:
        00:00:00:0023

[usrdir]\source\repos\ListCombineDemo\results>..\bin\Release\ListCombineDemo.exe /concat
Starting Join lists Using Concatenation
Join lists Using Concatenation finished in:
        00:00:00:0145

[usrdir]\source\repos\ListCombineDemo\results>..\bin\Release\ListCombineDemo.exe /arrays
Starting Join lists Using Arrays
Join lists Using Arrays finished in:
        00:00:00:0020

Appendix: Some raw results in seconds

Method Seconds
Arrays 0.0021
Arrays 0.0019
Arrays 0.0019
Arrays 0.0018
Arrays 0.0019
Arrays 0.0019
Arrays 0.0019
Arrays 0.0018
Arrays 0.002
Arrays 0.0019
Arrays 0.0018
Concatenation 0.0145
Concatenation 0.0145
Concatenation 0.0144
Concatenation 0.0146
Concatenation 0.0144
Concatenation 0.0145
Concatenation 0.0144
Concatenation 0.0145
Concatenation 0.0145
Concatenation 0.0147
Concatenation 0.0144
AddRange 0.0022
AddRange 0.0021
AddRange 0.0021
AddRange 0.0022
AddRange 0.0022
AddRange 0.0022
AddRange 0.0027
AddRange 0.0025
AddRange 0.0024
AddRange 0.0025
AddRange 0.0023






European ASP.NET Core Hosting :: Using API Key Authentication To Secure ASP.NET Core Web API

clock March 7, 2022 07:34 by author Peter

API key authentication keeps a secure line between the API and its clients; however, if you wish to have user authentication, go with token-based authentication, aka OAuth 2.0. In this article, you will learn how to implement API Key Authentication to secure an ASP.NET Core Web API by creating a middleware.
API Key Authentication

Step 1

Open Visual Studio and create or open an ASP.NET Core Web API project; in my case, I'm creating a new project with .NET 6.

Creating a new project

Select a template as shown in the below figure 

 

Step 2
Run the application and you will get the swagger UI to access the WeatherForecast API.

Step 3
Create a middleware class named ApiKeyMiddleware to validate the API key:


public class ApiKeyMiddleware {
    private readonly RequestDelegate _next;
    private const string APIKEY = "XApiKey";
    public ApiKeyMiddleware(RequestDelegate next) {
        _next = next;
    }
    public async Task InvokeAsync(HttpContext context) {
        if (!context.Request.Headers.TryGetValue(APIKEY, out var extractedApiKey)) {
            context.Response.StatusCode = 401;
            await context.Response.WriteAsync("Api Key was not provided ");
            return;
        }
        var appSettings = context.RequestServices.GetRequiredService<IConfiguration>();
        var apiKey = appSettings.GetValue<string>(APIKEY);
        if (!apiKey.Equals(extractedApiKey)) {
            context.Response.StatusCode = 401;
            await context.Response.WriteAsync("Unauthorized client");
            return;
        }
        await _next(context);
    }
}

The middleware checks for the API key in the request header, extracts its value, and compares it with the key defined in configuration.

The InvokeAsync method contains the middleware's main logic: it searches the HttpContext request headers collection for the API key header and validates its value.
if (!context.Request.Headers.TryGetValue(APIKEY, out var extractedApiKey)) {
    context.Response.StatusCode = 401;
    await context.Response.WriteAsync("Api Key was not provided");
    return;
}


If the request contains no APIKEY header, the middleware returns 401 with the message "Api Key was not provided".
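The validation the middleware performs can be exercised in isolation. The following self-contained sketch mirrors that logic using a plain dictionary in place of the request header collection (the ApiKeyCheckDemo class and Validate helper are illustrative, not part of the article's project):

```csharp
using System;
using System.Collections.Generic;

class ApiKeyCheckDemo
{
    const string ApiKeyHeader = "XApiKey";

    // Returns the status code the middleware would produce for the given headers.
    static int Validate(IDictionary<string, string> headers, string configuredKey)
    {
        if (!headers.TryGetValue(ApiKeyHeader, out var extracted))
            return 401;          // "Api Key was not provided"
        if (!string.Equals(configuredKey, extracted, StringComparison.Ordinal))
            return 401;          // "Unauthorized client"
        return 200;              // request continues down the pipeline
    }

    static void Main()
    {
        var key = "pgH7QzFHJx4w46fI~5Uzi4RvtTwlEXp";
        Console.WriteLine(Validate(new Dictionary<string, string>(), key));
        Console.WriteLine(Validate(new Dictionary<string, string> { [ApiKeyHeader] = "wrong" }, key));
        Console.WriteLine(Validate(new Dictionary<string, string> { [ApiKeyHeader] = key }, key));
    }
}
```

The three calls print 401, 401, and 200, matching the missing-key, wrong-key, and valid-key paths of the middleware.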

Step 4
Open the Program.cs file and register the middleware:
app.UseMiddleware<ApiKeyMiddleware>();

Step 5
Open the appsettings.json file and add an API key:
"XApiKey": "pgH7QzFHJx4w46fI~5Uzi4RvtTwlEXp"

Step 6
Run the application and test the API using Postman. Without passing the API key in the header, you will get the "Api Key was not provided" message in the payload, as shown in the figure below.


Passing wrong API Key

Providing correct API Key






European ASP.NET Core Hosting :: Building And Launching An ASP.NET Core App From Google Cloud Shell

clock March 4, 2022 07:36 by author Peter

ASP.NET Core is a new open-source and cross-platform framework for building modern cloud-based and internet-connected applications using the C# programming language. Google Cloud Shell is a browser-based command-line tool to access the resources provided by Google Cloud. Cloud Shell makes it really easy to manage your Cloud Platform Console projects and resources without having to install the Cloud SDK and other tools on your system.

In this article, I will demonstrate how to build and launch an ASP.NET Core App from the Google Cloud Shell.

Prerequisites
Familiarity with basic Linux commands and text editors such as vim or nano
A Google Cloud account (you can get a free account from this link)

Get Started!
Log in to the Google Cloud account you created.

After a successful login, you will see the welcome page.

Cloud Shell is a virtual machine loaded with development tools. It offers 5 GB of persistent storage for your home directory, runs on Google Cloud, and provides command-line access to your Google Cloud resources.

On the Cloud Console, in the top right toolbar, click the Activate Cloud Shell button.

When a tab prompts for authorization, press the Continue button.



It takes a few moments to provision and connect to the environment. Once connected, you are already authenticated, and the project is set to your PROJECT_ID, like this:


gcloud is the command-line tool for Google Cloud. It's pre-installed on Cloud Shell and supports tab completion.
We can see the active account name with this command: gcloud auth list.

We can see the project ID using this command: gcloud config list project

Creating an ASP.NET Core App in Cloud Shell
Create a global.json file to pin the .NET Core SDK version. Type nano global.json.
This creates the JSON file and opens it for editing. Paste the following lines to define the version:
{
    "sdk": {
        "version": "3.1.401"
    }
}


Press Ctrl+X to exit, Y to save the file, then Enter to confirm the filename.
The dotnet command-line tool is already installed in Cloud Shell.
Verify by checking the version: dotnet --version



Use the following command to disable telemetry from our new app: export DOTNET_CLI_TELEMETRY_OPTOUT=1
Create a structure of an ASP.NET Core web app using the following dotnet command: dotnet new razor -o HelloWorldAspNetCore

Building and Launching an ASP.NET Core App from Google Cloud Shell

The above command creates a project and then restores all its dependencies.

Build the ASP.NET Core App
Find the default project name created using the ls command: ls

The default project name is "HelloWorldAspNetCore". Navigate to our project folder: cd HelloWorldAspNetCore

We can see all our project files inside the folder:

Enter the following command to run the app: dotnet run --urls=http://localhost:8080

To check that the app is running, click on the web preview button on the top right in Cloud Shell and select Preview on port 8080.


It will open a new tab with the preview URL and load the site successfully.


In this article, I have shown the practical use of Google Cloud Shell and ASP.NET basics, then created a simple ASP.NET Core app using Cloud Shell and launched it on Google Cloud without once leaving the browser. You can also use different versions of .NET on the platform by tweaking the version in the global.json file.


 



European ASP.NET Core Hosting :: Getting Started With .NET 6.0 Console Application

clock February 23, 2022 08:02 by author Peter

As we know, .NET 6 is the latest version of .NET and is generally available with many improvements in performance, security, code quality, and developer experience, and it provides long-term support. It is recommended for beginners who want to start their .NET learning journey using the latest framework and the latest Visual Studio IDE. Additionally, sooner or later we need to upgrade our existing solutions to the latest framework.

This article describes how to get started with .NET 6 and what is new in it compared to .NET 5. We will create a console application in both .NET 6 and .NET 5 and compare the differences. Additionally, the article shows how to add more classes to a .NET 6 console application and call them from Program.cs.

Let’s move on.
Create Console App in .NET 6

Step 1
Open Visual Studio 2022 and click Create a new project.

Step 2
Select Console App and click Next.

Step 3
Give the project name and location of the project.

Step 4
Select framework: .NET 6.0 (Long-term support).

This creates the console app which looks like below.

Default Program.cs file is listed below.

// See https://aka.ms/new-console-template for more information
Console.WriteLine("Hello, World!");

The project.csproj file is given below.

Now, let's compile and run the program. Click Start Without Debugging or press Ctrl+F5 as illustrated below.

You can see the FirstConsoleApp.exe is created in the folder location of the project under the directory: “E:\SampleProject\FirstConsoleApp\bin\Debug\net6.0” as shown below.

We can run this .exe file which displays the “Hello, World!” message.
Create Project in .NET 5

Let's create a console application in Visual Studio using .NET 5. Follow the same steps as before, but choose the .NET 5 framework in Step 4.

Below is Program.cs of console app using .NET 5
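The screenshot is not reproduced here; for reference, the default .NET 5 console template generates a Program.cs that looks like this:

```csharp
using System;

namespace FirstConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // The .NET 5 template spells out the namespace, class, and Main method explicitly.
            Console.WriteLine("Hello, World!");
        }
    }
}
```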

Comparing .NET6 and .NET 5

We can see here that Program.cs in .NET 5 contains:

    a using System directive
    a namespace
    the class keyword
    the Main method

If we compare the two Program.cs files, none of these appear in the .NET 6 version: the .NET 6 console template uses top-level statements, and this is the visible difference between .NET 5 and .NET 6.
Extending .NET 6 Console Application

Add New Class
Right-click on the project, go to Add --> Class, and click Add.

We have added a class with the name: Class1.

Now, we will create a public void method named Sum in Class1.

public void Sum() {
    int a = 5;
    int b = 6;
    int Sum = a + b;
    Console.WriteLine("Sum : {0}", Sum);
}

Complete code of Class1.cs

namespace FirstConsoleApp {

    internal class Class1 {
        public void Sum() {
            int a = 5;
            int b = 6;
            int Sum = a + b;
            Console.WriteLine("Sum : {0}", Sum);
        }
    }
}

You might have noticed that this class contains:

  • the namespace
  • an internal class declaration

No explicit using statements are needed, because implicit usings are enabled by default. Program.cs, by contrast, contains none of this scaffolding, and you can write logic directly.

Call Sum() Method

In Program.cs, we will call the Sum method of Class1. To call a method from the class, we need to add its namespace: using FirstConsoleApp;

using FirstConsoleApp; //need to call method from Class1
Class1 class1 = new Class1();
class1.Sum();

Now, if we run the app, we will get Sum : 11 in the console.

Complete code of Program.cs

// See https://aka.ms/new-console-template for more information
using FirstConsoleApp;
Console.WriteLine("Hello, World!");
Class1 class1 = new Class1();
class1.Sum();

When we run the console application, the output looks like below.

Hence, in this article, we created our first console application in .NET 6, created one in .NET 5, and compared the differences between them. We also added a new class to the .NET 6 console application and called its method from Program.cs. I hope this helps you understand the differences between .NET 5 and .NET 6 and start your journey with the latest .NET framework.




European ASP.NET Core Hosting :: Easily Do Web Scraping In .NET Core 6.0

clock February 21, 2022 07:49 by author Peter

Web scraping is a programmatic technique for extracting large amounts of information from websites. Most of this information is unstructured HTML that is then converted into structured data in a spreadsheet or a database so that it can be used in different applications. There are many different ways to perform web scraping, including online services, dedicated APIs, or writing your own code from scratch. Many websites let you access their data in a structured format; that is the best option, but when a site does not expose large amounts of data in a structured way, web scraping is the practical alternative.

Python is currently the most popular language for web scraping and has various libraries available for it. At the same time, we can also use .NET for web scraping: some third-party libraries allow us to scrape data from various sites.

HtmlAgilityPack is a common library used in .NET for web scraping, and a .NET Core-compatible version has been added recently.

We will use the C# Corner site itself for web scraping. C# Corner provides RSS feeds for each author, from which we can get information such as article/blog links, published date, title, feed type, and author name. We will use the HtmlAgilityPack library to crawl each article/blog post and extract the required information, then save it to a SQL Server database so that the data can be used later, for example for article statistics. We will use Entity Framework with the code-first approach to connect to the SQL Server database.

Create ASP.NET Core Web API using Visual Studio 2022
We can use Visual Studio 2022 to create an ASP.NET Core Web API with .NET 6.0.


We have chosen the ASP.NET Core Web API template from Visual Studio and given a valid name to the project.


We can choose the .NET 6.0 framework. We have also chosen the default Open API support. This will create a swagger documentation for our project.  

We must install the libraries below using the NuGet package manager:
HtmlAgilityPack
Microsoft.EntityFrameworkCore.SqlServer
Microsoft.EntityFrameworkCore.Tools


We can add database connection string and parallel task counts inside the appsettings.  

appsettings.json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "ConnStr": "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=AnalyticsDB;Integrated Security=True;ApplicationIntent=ReadWrite;MultiSubnetFailover=False"
  },
  "ParallelTasksCount": 20
}


The connection string will be used by Entity Framework to connect to the SQL database, and ParallelTasksCount will be used by the web-scraping parallel foreach code.

We can create a Feed class inside a Models folder. This class will be used to get required information from C# Corner RSS feeds.

Feed.cs
namespace Analyitcs.NET6._0.Models
{
public class Feed
{
    public string Link { get; set; }
    public string Title { get; set; }
    public string FeedType { get; set; }
    public string Author { get; set; }
    public string Content { get; set; }
    public DateTime PubDate { get; set; }

    public Feed()
    {
        Link = "";
        Title = "";
        FeedType = "";
        Author = "";
        Content = "";
        PubDate = DateTime.Today;
    }
}
}


We can create an ArticleMatrix class inside the Models folder. This class holds the information gathered for each article/blog after web scraping.

ArticleMatrix.cs
using System.ComponentModel.DataAnnotations.Schema;

namespace Analyitcs.NET6._0.Models
{
public class ArticleMatrix
{
    public int Id { get; set; }
    public string? AuthorId { get; set; }
    public string? Author { get; set; }
    public string? Link { get; set; }
    public string? Title { get; set; }
    public string? Type { get; set; }
    public string? Category { get; set; }
    public string? Views { get; set; }
    [Column(TypeName = "decimal(18,4)")]
    public decimal ViewsCount { get; set; }
    public int Likes { get; set; }
    public DateTime PubDate { get; set; }
}
}


We can create our DB context class for Entity framework.  

MyDbContext.cs
using Microsoft.EntityFrameworkCore;

namespace Analyitcs.NET6._0.Models
{
public class MyDbContext : DbContext
{
    public MyDbContext(DbContextOptions<MyDbContext> options)
        : base(options)
    {
    }
    public DbSet<ArticleMatrix>? ArticleMatrices { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);
    }
}
}


We will use this MyDbContext class later for saving data to the database.  

We can create our API controller AnalyticsController and add web scraping code inside it.

AnalyticsController.cs
using Analyitcs.NET6._0.Models;
using HtmlAgilityPack;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using System.Globalization;
using System.Net;
using System.Xml.Linq;

namespace Analyitcs.NET6._0.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class AnalyticsController : ControllerBase
{
    readonly CultureInfo culture = new("en-US");
    private readonly MyDbContext _dbContext;
    private readonly IConfiguration _configuration;
    public AnalyticsController(MyDbContext context, IConfiguration configuration)
    {
        _dbContext = context;
        _configuration = configuration;
    }

    [HttpPost]
    [Route("CreatePosts/{authorId}")]
    public async Task<ActionResult> CreatePosts(string authorId)
    {
        try
        {
            XDocument doc = XDocument.Load("https://www.c-sharpcorner.com/members/" + authorId + "/rss");
            if (doc == null)
            {
                return BadRequest("Invalid Author Id");
            }
            var entries = from item in doc.Root.Descendants().First(i => i.Name.LocalName == "channel").Elements().Where(i => i.Name.LocalName == "item")
                          select new Feed
                          {
                              Content = item.Elements().First(i => i.Name.LocalName == "description").Value,
                              Link = (item.Elements().First(i => i.Name.LocalName == "link").Value).StartsWith("/") ? "https://www.c-sharpcorner.com" + item.Elements().First(i => i.Name.LocalName == "link").Value : item.Elements().First(i => i.Name.LocalName == "link").Value,
                              PubDate = Convert.ToDateTime(item.Elements().First(i => i.Name.LocalName == "pubDate").Value, culture),
                              Title = item.Elements().First(i => i.Name.LocalName == "title").Value,
                              FeedType = (item.Elements().First(i => i.Name.LocalName == "link").Value).ToLowerInvariant().Contains("blog") ? "Blog" : (item.Elements().First(i => i.Name.LocalName == "link").Value).ToLowerInvariant().Contains("news") ? "News" : "Article",
                              Author = item.Elements().First(i => i.Name.LocalName == "author").Value
                          };

            List<Feed> feeds = entries.OrderByDescending(o => o.PubDate).ToList();
            // ConcurrentBag is safe for the concurrent Add calls inside Parallel.ForEach (List<T> is not).
            System.Collections.Concurrent.ConcurrentBag<ArticleMatrix> articleMatrices = new();
            _ = int.TryParse(_configuration["ParallelTasksCount"], out int parallelTasksCount);

            Parallel.ForEach(feeds, new ParallelOptions { MaxDegreeOfParallelism = parallelTasksCount }, feed =>
            {
                // Keep the URL local to each iteration; sharing one variable across threads would race.
                var urlAddress = feed.Link;

                var httpClient = new HttpClient
                {
                    BaseAddress = new Uri(urlAddress)
                };
                var result = httpClient.GetAsync("").Result;

                string strData = "";

                if (result.StatusCode == HttpStatusCode.OK)
                {
                    strData = result.Content.ReadAsStringAsync().Result;

                    HtmlDocument htmlDocument = new();
                    htmlDocument.LoadHtml(strData);

                    ArticleMatrix articleMatrix = new()
                    {
                        AuthorId = authorId,
                        Author = feed.Author,
                        Type = feed.FeedType,
                        Link = feed.Link,
                        Title = feed.Title,
                        PubDate = feed.PubDate
                    };

                    string category = "Uncategorized";
                    if (htmlDocument.GetElementbyId("ImgCategory") != null)
                    {
                        category = htmlDocument.GetElementbyId("ImgCategory").GetAttributeValue("title", "");
                    }

                    articleMatrix.Category = category;

                    var view = htmlDocument.DocumentNode.SelectSingleNode("//span[@id='ViewCounts']");
                    if (view != null)
                    {
                        articleMatrix.Views = view.InnerText;

                        if (articleMatrix.Views.Contains('m'))
                        {
                            articleMatrix.ViewsCount = decimal.Parse(articleMatrix.Views[0..^1]) * 1000000;
                        }
                        else if (articleMatrix.Views.Contains('k'))
                        {
                            articleMatrix.ViewsCount = decimal.Parse(articleMatrix.Views[0..^1]) * 1000;
                        }
                        else
                        {
                            _ = decimal.TryParse(articleMatrix.Views, out decimal viewCount);
                            articleMatrix.ViewsCount = viewCount;
                        }
                    }
                    else
                    {
                        articleMatrix.ViewsCount = 0;
                    }
                    var like = htmlDocument.DocumentNode.SelectSingleNode("//span[@id='LabelLikeCount']");
                    if (like != null)
                    {
                        _ = int.TryParse(like.InnerText, out int likes);
                        articleMatrix.Likes = likes;
                    }

                    articleMatrices.Add(articleMatrix);
                }

            });

            _dbContext.ArticleMatrices.RemoveRange(_dbContext.ArticleMatrices.Where(x => x.AuthorId == authorId));

            foreach (ArticleMatrix articleMatrix in articleMatrices)
            {
                await _dbContext.ArticleMatrices.AddAsync(articleMatrix);
            }

            await _dbContext.SaveChangesAsync();
            return Ok(articleMatrices);
        }
        catch
        {
            return BadRequest("Invalid Author Id / Unhandled error. Please try again.");
        }
    }

}
}


We have created a “CreatePosts” method inside the API controller. We pass a C# Corner author id to this method and fetch all of the author's post details from the RSS feed.
XDocument doc = XDocument.Load("https://www.c-sharpcorner.com/members/" + authorId + "/rss");
            if (doc == null)
            {
                return BadRequest("Invalid Author Id");
            }
            var entries = from item in doc.Root.Descendants().First(i => i.Name.LocalName == "channel").Elements().Where(i => i.Name.LocalName == "item")
                          select new Feed
                          {
                              Content = item.Elements().First(i => i.Name.LocalName == "description").Value,
                              Link = (item.Elements().First(i => i.Name.LocalName == "link").Value).StartsWith("/") ? "https://www.c-sharpcorner.com" + item.Elements().First(i => i.Name.LocalName == "link").Value : item.Elements().First(i => i.Name.LocalName == "link").Value,
                              PubDate = Convert.ToDateTime(item.Elements().First(i => i.Name.LocalName == "pubDate").Value, culture),
                              Title = item.Elements().First(i => i.Name.LocalName == "title").Value,
                              FeedType = (item.Elements().First(i => i.Name.LocalName == "link").Value).ToLowerInvariant().Contains("blog") ? "Blog" : (item.Elements().First(i => i.Name.LocalName == "link").Value).ToLowerInvariant().Contains("news") ? "News" : "Article",
                              Author = item.Elements().First(i => i.Name.LocalName == "author").Value
                          };

            List<Feed> feeds = entries.OrderByDescending(o => o.PubDate).ToList();


After that, we use a parallel foreach statement to loop over all the article/blog entries and scrape the data from each post.

We get the article/blog category with the code below.
string category = "Uncategorized";
                    if (htmlDocument.GetElementbyId("ImgCategory") != null)
                    {
                        category = htmlDocument.GetElementbyId("ImgCategory").GetAttributeValue("title", "");
                    }

                    articleMatrix.Category = category;

We will get article / blog views from the code below.
var view = htmlDocument.DocumentNode.SelectSingleNode("//span[@id='ViewCounts']");
                    if (view != null)
                    {
                        articleMatrix.Views = view.InnerText;

                        if (articleMatrix.Views.Contains('m'))
                        {
                            articleMatrix.ViewsCount = decimal.Parse(articleMatrix.Views[0..^1]) * 1000000;
                        }
                        else if (articleMatrix.Views.Contains('k'))
                        {
                            articleMatrix.ViewsCount = decimal.Parse(articleMatrix.Views[0..^1]) * 1000;
                        }
                        else
                        {
                            _ = decimal.TryParse(articleMatrix.Views, out decimal viewCount);
                            articleMatrix.ViewsCount = viewCount;
                        }
                    }
                    else
                    {
                        articleMatrix.ViewsCount = 0;
                    }
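The "1.2m"/"3.4k" conversion above can be exercised in isolation. The following self-contained sketch mirrors that logic (the ViewsCountDemo class and ToCount helper are illustrative, not part of the controller):

```csharp
using System;
using System.Globalization;

class ViewsCountDemo
{
    // Mirrors the controller's conversion: "1.2m" -> 1200000, "3.4k" -> 3400, plain numbers pass through.
    static decimal ToCount(string views)
    {
        if (views.Contains('m'))
            return decimal.Parse(views[0..^1], CultureInfo.InvariantCulture) * 1000000;
        if (views.Contains('k'))
            return decimal.Parse(views[0..^1], CultureInfo.InvariantCulture) * 1000;
        _ = decimal.TryParse(views, NumberStyles.Any, CultureInfo.InvariantCulture, out decimal count);
        return count;
    }

    static void Main()
    {
        Console.WriteLine(ToCount("1.2m").ToString(CultureInfo.InvariantCulture));
        Console.WriteLine(ToCount("3.4k").ToString(CultureInfo.InvariantCulture));
        Console.WriteLine(ToCount("512").ToString(CultureInfo.InvariantCulture));
    }
}
```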


We get the article/blog likes from the code below.
var like = htmlDocument.DocumentNode.SelectSingleNode("//span[@id='LabelLikeCount']");
                    if (like != null)
                    {
                        _ = int.TryParse(like.InnerText, out int likes);
                        articleMatrix.Likes = likes;
                    }


After gathering all this information for each article/blog in the parallel foreach, we save everything to the database using the code below.
_dbContext.ArticleMatrices.RemoveRange(_dbContext.ArticleMatrices.Where(x => x.AuthorId == authorId));

           foreach (ArticleMatrix articleMatrix in articleMatrices)
           {
               await _dbContext.ArticleMatrices.AddAsync(articleMatrix);
           }

           await _dbContext.SaveChangesAsync();


We must update the Program.cs file so that the Entity Framework connection is established.
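The article's Program.cs screenshot is not reproduced here; a minimal sketch of the required change, assuming the default .NET 6 Web API template, registers MyDbContext against the "ConnStr" connection string:

```csharp
using Analyitcs.NET6._0.Models;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// Register the EF Core DbContext using the "ConnStr" connection string from appsettings.json.
builder.Services.AddDbContext<MyDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("ConnStr")));

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseAuthorization();
app.MapControllers();
app.Run();
```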

We can create the SQL database and table using migration commands.

We can use the command below to create migration scripts in Package Manager Console.  
PM > add-migration InititalScript


The above command creates a new migration script, which we use to create our database and table.

We can use the command below to update the database.
PM> update-database

If you look at SQL Server Object Explorer, you can see that our new database and table have been created.


We can run our application and use swagger to execute the CreatePosts method.


We must use a valid C# Corner author id. You can easily find your author id in your profile link.
In my profile link above, sarath-lal7 is the author id.

We can use the above user id in the swagger and get all the article / blog details.

You can see the author's post details returned in Swagger. Please note that C# Corner currently returns a maximum of 100 posts.

If you look at the database, you can see that 100 records are available in the database as well.

If you enter the same author id again, the earlier data for that author is removed from the database and fresh data is inserted; hence, at most 100 records per author are kept in the database at any time. We can use this data to analyze post details in any client-side application, such as Angular or React.

I have already created a client application using Angular 8 and hosted it on Azure. You may try this Live App. I will create an Angular 13 version of this application soon.  

In this post, we have seen how to scrape data from websites in a .NET 6.0 application using the HtmlAgilityPack library. We scraped all the post information for a particular author on the C# Corner site using the author id. C# Corner provides RSS feeds for each author, and we get a maximum of 100 posts per user.


 

 



European ASP.NET Core Hosting :: How To Send One Time Password On Registered Mobile Number Using C#?

clock February 17, 2022 07:32 by author Peter

In this article, you will learn how to send a one-time password (OTP) to a registered mobile number using C# and ASP.NET.

Step 1
First, create a web page in Visual Studio and design it. The design is given below:
<asp:Panel ID="pnl1" runat="server">
  <table>
    <tr>
      <td>Enter Your Mobile Number:</td>
      <td>
        <asp:TextBox ID="txtmobileNo" runat="server"></asp:TextBox>
      </td>
    </tr>
    <tr>
      <td></td>
      <td>
        <asp:Button ID="btnsendOtp" runat="server" Text="Send OTP" OnClick="btnsendOtp_Click" />
      </td>
    </tr>
  </table>
</asp:Panel>
<asp:Panel ID="pnl2" runat="server" Visible="false">
  <table>
    <tr>
      <td>Enter Your OTP:</td>
      <td>
        <asp:TextBox ID="txtverifyMobileNO" runat="server"></asp:TextBox>
      </td>
    </tr>
    <tr>
      <td></td>
      <td>
        <asp:Button ID="btnverify" runat="server" Text="Verify" OnClick="btnverify_Click" />
      </td>
    </tr>
  </table>
</asp:Panel>


Step 2
Add below namespace in .cs page,
using System.Data.SqlClient;
using System.Data;
using System.Net;
using System.Web.ClientServices;
using System.Collections.Specialized;
using System.Configuration;


Step 3
To send an OTP you need an API key. Register your details with the SMS provider to get an API key and 10 free SMS messages.

Step 4
After registering successfully, go to the settings option and click on API Key for API key generation.

Step 5
Click Create API Key; there is no need to enter an IP address or notes, just save it.

Step 6
Write the below code in the btnsendOtp_Click handler.
protected void btnsendOtp_Click(object sender, EventArgs e) {
    pnl1.Visible = false;
    pnl2.Visible = true;
    Random random = new Random();
    // Random.Next's upper bound is exclusive, so this yields a 4-digit OTP (1000-9999).
    int value = random.Next(1000, 10000);
    string destinationaddress = "+91" + txtmobileNo.Text;
    string message = "Your OTP is " + value + "(Send by R.R.Research and development founder is Ramesh Chandra)";
    string message1 = HttpUtility.UrlEncode(message);
    using(var wb = new WebClient()) {
        byte[] response = wb.UploadValues("https://api.textlocal.in/send/", new NameValueCollection() {
            {
                "apikey",
                "here is enter your API Key"
            }, {
                "numbers",
                destinationaddress
            }, {
                "message",
                message1
            }, {
                "sender",
                "TXTLCL"
            }
        });
        string result = System.Text.Encoding.UTF8.GetString(response);
        Session["OTP"] = value;
    }
}


Step 7
Verify your OTP. Write the below code for the Verify button.

protected void btnverify_Click(object sender, EventArgs e) {
    // Guard against a missing session value before comparing the entered OTP.
    if (Session["OTP"] != null && txtverifyMobileNO.Text == Session["OTP"].ToString()) {
        pnl2.Visible = false;
        ScriptManager.RegisterStartupScript(this, typeof(string), "Message", "confirm('Your mobile number has been verified successfully.');", true);
    } else {
        ScriptManager.RegisterStartupScript(this, typeof(string), "Message", "confirm('Your OTP is not correct. Please enter the correct OTP.');", true);
        pnl2.Visible = true;
    }
}
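The send/verify flow above can be sketched end to end without a web server. This self-contained illustration (the OtpFlowDemo class, session dictionary, and helper names are hypothetical stand-ins for the ASP.NET Session) mirrors the session-based comparison:

```csharp
using System;
using System.Collections.Generic;

class OtpFlowDemo
{
    // Stand-in for the ASP.NET Session: the generated OTP is stored here, then compared on verify.
    static readonly Dictionary<string, string> session = new();

    static string SendOtp()
    {
        var random = new Random();
        int value = random.Next(1000, 10000); // exclusive upper bound: always 4 digits
        session["OTP"] = value.ToString();
        return session["OTP"];
    }

    static bool Verify(string entered) =>
        session.TryGetValue("OTP", out var otp) && otp == entered;

    static void Main()
    {
        string otp = SendOtp();
        Console.WriteLine(Verify(otp));     // the correct OTP verifies
        Console.WriteLine(Verify("wrong")); // a non-matching entry is rejected
    }
}
```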


Step 8
Now run the project.

Step 9
After entering the mobile number and clicking Send OTP, the first panel is hidden and the second panel is shown.

Step 10
Enter the OTP and click the Verify button; you will get the message below.
"Your mobile number has been verified successfully."


 

 

 



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We offer the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting, and SQL 2019 Hosting.

