European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core 9.0 Hosting - HostForLIFE :: Examining the GetItems() Method in .NET 8 to Handle Randomness

clock July 3, 2024 07:08 by author Peter

The Random class's GetItems() method is one of the powerful new features introduced in .NET 8. It makes working with randomness simpler, more efficient, and more intuitive. This post covers what the GetItems() method does, how to use it, and the improvements it can bring to your .NET projects.

Table of Contents

  1. Introduction to the GetItems() Method
  2. Basic Usage
  3. Practical Applications
  4. Comparing Traditional Methods with GetItems()
  5. Best Practices
  6. Conclusion

Introduction to the GetItems() Method
The Random class in .NET 8 includes a new method called GetItems(). It lets you pick a specified number of items at random from a collection, which is helpful whenever you need to add some unpredictability to your application, draw random samples, or build randomized data.

Basic Usage

Using the GetItems() method is straightforward. In .NET 8 it is an instance method on the Random class with the following overloads:

public T[] GetItems<T>(ReadOnlySpan<T> choices, int length);
public T[] GetItems<T>(T[] choices, int length);
public void GetItems<T>(ReadOnlySpan<T> choices, Span<T> destination);

  • choices: The collection from which items are selected.
  • length / destination: How many random items to produce (for the third overload, the length of the destination span).

Each item is chosen independently from choices, so the same element may appear more than once (selection is with replacement).

Here’s a simple example to illustrate its usage.
Random random = new Random();
int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
int[] randomNumbers = random.GetItems(numbers, 3);
foreach (var number in randomNumbers)
{
    Console.WriteLine(number);
}


In this example, GetItems() selects three random numbers from the numbers array; because selection is with replacement, the same value can occur more than once.

Practical Applications

Random Sampling in Surveys
Suppose you're conducting a survey and need to randomly pick participants from a list. The GetItems() method makes this easy (keep in mind that picks are made with replacement, so the same participant can come up twice):
string[] participants = { "Alice", "Bob", "Charlie", "David", "Eve" };
string[] selectedParticipants = random.GetItems(participants, 2);
Console.WriteLine("Selected Participants:");
foreach (var participant in selectedParticipants)
{
    Console.WriteLine(participant);
}


Random Drawing and Shuffling of Cards
In game development, drawing random cards and shuffling a deck are common requirements. GetItems() draws cards with replacement, so the same card can come up more than once:
string[] deck = { "2H", "3H", "4H", ..., "KS", "AS" };
string[] randomDraw = random.GetItems(deck, 5);
Console.WriteLine("Random Draw:");
foreach (var card in randomDraw)
{
    Console.WriteLine(card);
}
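If what you actually need is a true shuffle (every card exactly once, in random order), .NET 8 also adds Random.Shuffle(), which rearranges an array or span in place. A minimal sketch, using an abbreviated deck purely for illustration:

string[] deck = { "2H", "3H", "4H", "5H", "KS", "AS" }; // abbreviated deck for illustration
Random random = new Random();
random.Shuffle(deck); // in-place shuffle of the array
Console.WriteLine("Shuffled Deck:");
foreach (var card in deck)
{
    Console.WriteLine(card);
}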


Comparing Traditional Methods with GetItems()
Before GetItems(), achieving similar functionality required more verbose and less readable code. Here’s how you might have done it traditionally:
Random random = new Random();
int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
List<int> selectedNumbers = new List<int>();
HashSet<int> usedIndices = new HashSet<int>();
while (selectedNumbers.Count < 3)
{
    int index = random.Next(numbers.Length);
    if (usedIndices.Add(index))
    {
        selectedNumbers.Add(numbers[index]);
    }
}
foreach (var number in selectedNumbers)
{
    Console.WriteLine(number);
}


Using GetItems(), the same task is much simpler. Note one difference: the loop above picks distinct items, while GetItems() selects with replacement, so duplicates are possible; if you need distinct picks, shuffle a copy of the collection and take the first three.
int[] randomNumbers = random.GetItems(numbers, 3);
foreach (var number in randomNumbers)
{
    Console.WriteLine(number);
}


Best Practices

  • Validate Parameters: GetItems() throws an ArgumentException if the choices collection is empty, so make sure there is at least one item to choose from.
  • Seed Control: For reproducible results, initialize the Random class with a fixed seed (see the sketch after this list).
  • Performance Considerations: For very large collections, be mindful of performance implications when using GetItems() frequently.
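A minimal sketch of seeding for reproducibility (the seed value 42 is arbitrary):

int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

Random seeded = new Random(42);          // same seed => same sequence of picks
int[] firstRun = seeded.GetItems(numbers, 3);

Random seededAgain = new Random(42);
int[] secondRun = seededAgain.GetItems(numbers, 3);

// firstRun and secondRun contain the same values in the same order.
Console.WriteLine(string.Join(", ", firstRun));
Console.WriteLine(string.Join(", ", secondRun));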

Conclusion
The GetItems() method in .NET 8 is a welcome addition for developers who frequently work with random data selections. By providing a concise and efficient way to select random items from a collection, it simplifies code and enhances readability. Whether you’re developing games, conducting surveys, or implementing any feature requiring randomness, GetItems() is a tool that can significantly streamline your development process.

HostForLIFE ASP.NET Core 9.0 Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with 99.99% uptime guaranteed for reliability, stability, and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Comprehending .NET Session Management

clock June 26, 2024 07:05 by author Peter

A crucial part of web programming is keeping state across multiple requests. Because HTTP is stateless, developers must put mechanisms in place to store user data. Sessions are useful in this situation. This post explains what sessions are, how they work in .NET, and offers real-world examples of how to use them.

What is a session?

A session is a form of server-side data storage that allows data to be maintained between requests made by the same user. Web applications depend on sessions to store state such as user preferences, shopping cart contents, and authentication status. Each session is assigned a distinct session ID, which is sent to the client and returned with every subsequent request.

How Sessions Work in .NET

  • Session Initialization: When a user accesses a web application for the first time, a new session is created, and a unique session ID is generated. This ID is stored in a cookie on the client side.
  • Data Storage: The session object is used to store data on the server side, tied to the session ID.
  • Subsequent Requests: The client sends the session ID back to the server with each request. The server retrieves the session data using this ID.
  • Session Termination: Sessions can be terminated explicitly by the application, or they can expire after a period of inactivity.

Enabling and Using Sessions in ASP.NET Core
To use sessions in an ASP.NET Core application, you need to configure the session middleware. Here’s a step-by-step guide:
Step 1. Configure session state
Middleware for managing session state is included in the framework. To enable the session middleware, Program.cs must contain:

  • Any of the IDistributedCache memory caches. The IDistributedCache implementation is used as a backing store for the session. For more information, see Distributed Caching in ASP.NET Core.
  • A call to AddSession
  • A call to UseSession

The following code shows how to set up the in-memory session provider with a default in-memory implementation of IDistributedCache:
builder.Services.AddDistributedMemoryCache();
builder.Services.AddSession(options =>
{
    options.IdleTimeout = TimeSpan.FromSeconds(10);
    options.Cookie.HttpOnly = true;
    options.Cookie.IsEssential = true;
});
app.UseSession();


The preceding code sets a short timeout to simplify testing.
The order of middleware is important. Call UseSession after UseRouting and before MapRazorPages and MapDefaultControllerRoute. See Middleware Ordering.
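A minimal Program.cs sketch of that ordering, assuming a Razor Pages app:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddDistributedMemoryCache();
builder.Services.AddSession();

var app = builder.Build();

app.UseRouting();
app.UseSession();      // after UseRouting, before endpoint mapping
app.MapRazorPages();

app.Run();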

HttpContext.Session is available after the session state is configured.

HttpContext.Session can't be accessed before UseSession has been called.

A new session with a new session cookie can't be created after the app has begun writing to the response stream. The exception is recorded in the web server log and not displayed in the browser.

Step 2. Set and Get Session Data
The following example shows how to set and get an integer and a string:
public class IndexModel : PageModel
{
    public const string SessionKeyName = "_Name";
    public const string SessionKeyAge = "_Age";
    private readonly ILogger<IndexModel> _logger;
    public IndexModel(ILogger<IndexModel> logger)
    {
        _logger = logger;
    }
    public void OnGet()
    {
        if (string.IsNullOrEmpty(HttpContext.Session.GetString(SessionKeyName)))
        {
            HttpContext.Session.SetString(SessionKeyName, "The Doctor");
            HttpContext.Session.SetInt32(SessionKeyAge, 73);
        }
        var name = HttpContext.Session.GetString(SessionKeyName);
        var age = HttpContext.Session.GetInt32(SessionKeyAge).ToString();
        _logger.LogInformation("Session Name: {Name}", name);
        _logger.LogInformation("Session Age: {Age}", age);
    }
}


The following example retrieves the session value for the IndexModel.SessionKeyName key (_Name in the sample app) in a Razor Pages page:
@page
@using Microsoft.AspNetCore.Http
@model IndexModel
...
Name: @HttpContext.Session.GetString(IndexModel.SessionKeyName)


Serializing object data
All session data must be serialized to enable a distributed cache scenario, even when using the in-memory cache. String and integer serializers are provided by the extension methods of ISession. Complex types must be serialized by the user using another mechanism, such as JSON.

Use the following sample code to serialize objects:
using System.Text.Json;

public static class SessionExtensions
{
    public static void Set<T>(this ISession session, string key, T value)
    {
        session.SetString(key, JsonSerializer.Serialize(value));
    }

    public static T? Get<T>(this ISession session, string key)
    {
        var value = session.GetString(key);
        return value == null ? default : JsonSerializer.Deserialize<T>(value);
    }
}
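A quick usage sketch of these extensions (assuming the usual Razor Pages usings; CartItem is a hypothetical type used only for illustration):

using Microsoft.AspNetCore.Mvc.RazorPages;

public record CartItem(string Name, int Quantity);

public class CartModel : PageModel
{
    public void OnGet()
    {
        // Store a complex object as JSON in the session.
        HttpContext.Session.Set("_Cart", new CartItem("Keyboard", 2));

        // Read it back; Get<T> returns default (null) if the key is missing.
        var cart = HttpContext.Session.Get<CartItem>("_Cart");
    }
}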


Benefits of Using Sessions

  • State Management: Sessions help maintain the state across multiple requests, which is essential for features like user authentication and shopping carts.
  • Security: Data stored in sessions is kept on the server, reducing the risk of client-side manipulation.
  • Convenience: Sessions simplify the development of stateful web applications by providing an easy way to store and retrieve user-specific data.

Conclusion
Sessions are one of .NET's most useful features for handling state in web applications. They offer a practical and safe way to store user-specific information across requests. Used correctly, sessions help you maintain a smooth, consistent experience throughout a user's visit to your web application. Whether you choose in-memory storage for simplicity or a distributed cache for scalability, sessions are an essential tool in any web developer's toolbox for building robust, stateful web applications.

HostForLIFE ASP.NET Core 9.0 Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with 99.99% uptime guaranteed for reliability, stability, and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.

 



European ASP.NET Core 9.0 Hosting - HostForLIFE :: How Can I Use My Current Tools to Migrate to .NET 8?

clock June 20, 2024 08:00 by author Peter

As is common with Microsoft technology, the target framework varies according to the context in which the program will run, since .NET covers numerous environments such as desktop, web, and mobile apps. The application therefore needs to be planned and built for its environment, and that is where difficulties arise when upgrading.

Step 1: Before moving to a higher version, verify the application's current target framework. Open the solution properties and check the value as shown below.

Step 2: Right-click the solution to access the Upgrade menu option.


Step 3: The Upgrade Assistant window appears and asks how you want to upgrade the project.


Step 4: You'll be prompted to choose the target framework in this window.

Step 5: At this point, you can choose which components to upgrade or migrate to the new target framework.

Step 6: Once all components are selected, the tool verifies each one and either applies the updated code automatically or advises you to make the necessary adjustments. The upgrade then runs; it takes some time to finish and produce the report.

Step 7: The report is generated and displayed in the Upgrade Assistant window. Consider the example below; the output will vary if the project is a larger enterprise application.

Step 8: The Visual Studio output window allows the developer to view the current status.

After the .NET Upgrade Assistant has finished, we must follow its recommendations and update the code base. The report in Step 7 indicates that the net8.0-windows target framework does not support MVVMLight, so such packages have to be replaced. In this case, CommunityToolkit.Mvvm (available on NuGet) is a suitable alternative to MVVMLight, and the codebase must be modified accordingly.
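As a rough illustration of that kind of change (not part of the original walkthrough), a view model that previously derived from MVVMLight's ViewModelBase can be rewritten with CommunityToolkit.Mvvm's source generators, assuming the CommunityToolkit.Mvvm NuGet package has been added:

using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;

// Before (MVVMLight): class MainViewModel : ViewModelBase with RaisePropertyChanged calls.
// After (CommunityToolkit.Mvvm): the source generators produce the property and command plumbing.
public partial class MainViewModel : ObservableObject
{
    [ObservableProperty]
    private string title = "Upgraded to .NET 8";   // generates a Title property with change notification

    [RelayCommand]
    private void Refresh()
    {
        // refresh logic here
    }
}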

HostForLIFE ASP.NET Core 9.0 Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with 99.99% uptime guaranteed for reliability, stability, and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.



European ASP.NET Core 9.0 Hosting - HostForLIFE :: The .NET Core Null Object Design Pattern

clock June 10, 2024 09:13 by author Peter

The Null Object Pattern is a behavioral design pattern that provides an object to stand in for a missing dependency behind an interface. It offers a safe default behavior in situations where a null reference would otherwise cause a NullReferenceException. In this post, we'll take a close look at the Null Object Pattern in C# and work through an example.

What is the Null Object Design Pattern?
The Null Object pattern is a design strategy that makes it easier to work with dependencies that may be undefined. Instead of null references, it uses instances of a concrete class that implements a known interface. An abstract class (or interface) describes the operations to be performed; concrete classes extend it, and a null object class provides a do-nothing implementation that can be used anywhere we would otherwise have to check for null.

Components of the Null Object Design Pattern
Client

The client is the code that depends on an object that implements a Dependency interface or extends an abstract DependencyBase class. The client uses this object to complete a task and should not need to know which kind of object it is working with, so it can handle real and null objects in the same manner.

DependencyBase or Abstract Dependency
An abstract class or interface called DependencyBase specifies the methods that must be implemented by all concrete dependencies, including the null object. It defines the contract that every dependency has to follow.

Dependency or Real Dependency

This is a functional dependency class that the client can use. The client does not need to know whether it is interacting with a real or a null dependency.

NullObject or Null Dependency
This is the null object class that the client can use as a dependency. It implements every member specified by the DependencyBase abstraction, but each member intentionally does nothing. A NullObject represents a missing or non-existent dependency; the client can safely call its methods without errors or null checks.

Example
Abstract Dependency

Here is the code for the ICar.cs file.
public interface ICar
{
    void Drive();

    void Stop();
}

Real Dependency
Here is the code for the SedanCar.cs file.
public class SedanCar : ICar
{
    public void Drive()
    {
        Console.WriteLine("Drive the sedan car.");
    }

    public void Stop()
    {
        Console.WriteLine("Stop the sedan car.");
    }
}

Null Object Dependency
Here is the code for the NullCar.cs file.
public class NullCar : ICar
{
    public void Drive()
    {
        // Intentionally does nothing: a null car has no drive behavior.
    }

    public void Stop()
    {
        // Intentionally does nothing: a null car has no stop behavior.
    }
}

Client

Here is the code for the CarService.cs file.
public class CarService(ICar car)
{
    private readonly ICar _car = car;

    public void Run()
    {
        Console.WriteLine($"Start run method. {nameof(ICar)}: {_car}");
        _car.Drive();
        _car.Stop();

        Console.WriteLine($"Complete run method. {nameof(ICar)}: {_car}");
        Console.WriteLine();
    }
}


Program
Here is the code for the Program.cs file.
var sedanCar = new SedanCar();
var carService = new CarService(sedanCar);

carService.Run();

var nullCar = new NullCar();
carService = new CarService(nullCar);

carService.Run();

Output
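The console output should look roughly like the following (the exact text after "ICar:" is the runtime type name, which depends on your namespace):

Start run method. ICar: SedanCar
Drive the sedan car.
Stop the sedan car.
Complete run method. ICar: SedanCar

Start run method. ICar: NullCar
Complete run method. ICar: NullCar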

When to apply the Null Object Design Pattern?
When you want to provide a default or no-op implementation of an object's functionality to avoid null checks and handle null references gracefully, you can use the Null Object Design Pattern. The following situations call for the application of the Null Object Design Pattern.

  • Default Behavior: This is the behavior you want to give an object in the event that its real implementation is unavailable or inappropriate.
  • Avoid Null Checks: When you want to provide a null object implementation that may be used safely in place of a null reference, you can avoid having to do explicit null checks in your code.
  • Consistent Interface: In situations when you need to give customers access to an interface that stays the same whether they are working with real or null objects.
  • Simplifying Client Code: When you wish to spare client code from handling null references by letting it treat null objects the same way as real objects (see the sketch after this list).
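A minimal sketch of what that buys the client, reusing the ICar types from above; GetCarOrNull and GetCarOrNullObject are hypothetical lookups used only for illustration:

// Hypothetical lookups used only for illustration.
static ICar? GetCarOrNull() => null;
static ICar GetCarOrNullObject() => new NullCar();

// Without the pattern: every call site must guard against null.
ICar? maybeCar = GetCarOrNull();
if (maybeCar != null)
{
    maybeCar.Drive();
    maybeCar.Stop();
}

// With the pattern: the lookup returns a NullCar instead of null,
// so the client calls it unconditionally.
ICar car = GetCarOrNullObject();
car.Drive();
car.Stop();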

When should the Null Object Design Pattern not be used?
The Null Object Design Pattern might not be appropriate in the following situations.

  • Sophisticated Behavior: The Null Object Design Pattern is meant to provide simple default behavior; therefore, it might not be acceptable when the null object needs to implement sophisticated behavior or store state.
  • Performance Considerations: It could be preferable to handle null references directly in the code if generating and utilizing null objects significantly increases overhead or complexity in the system.
  • Confusion with Real Objects: Explicit null checks may be preferable in order to improve the readability and clarity of the code if there is a chance that null objects and real objects could be confused in the system.

Summary
The Null Object Pattern provides a solid way to handle the absence of an object. By offering a default behavior and removing the need for null checks, it lowers the risk of runtime errors and simplifies client code. Whether you are building a new system or refactoring an old one, the Null Object Pattern can be a useful tool for improving stability and maintainability.

We learned the new technique and evolved together.

HostForLIFE ASP.NET Core 9.0 Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with 99.99% uptime guaranteed for reliability, stability, and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.


 



European ASP.NET Core 9.0 Hosting - HostForLIFE :: The One Behind Concurrency in C#

clock June 6, 2024 09:31 by author Peter

Applications developed today rarely use the System.Threading.Thread class directly; if you see it in one of your projects, you are probably maintaining legacy code. Whether learning the Thread API is worthwhile is a different subject altogether. The Thread API was introduced in .NET 1.0, but it was never very user-friendly. Why is it preferable not to use it?

  • The Thread API is the low-level API that gives you direct control over everything. Although that sounds appealing, more control over something you don't fully understand quickly becomes frustrating.
  • Mastering the Thread API is not easy; to use it well you have to dig into the intricacies of the operating system. More importantly, it forces you to think in terms of threads rather than jobs. Developers should focus on writing concurrent programs, not on creating threads; a thread is a physical implementation detail.
  • Creating a thread is costly.
  • From the standpoint of understandability and maintainability, the Thread API adds complexity.

Microsoft's ThreadPool class, introduced in .NET 2.0, is an improvement over the Thread API. Its main advancement is isolating us from the notion of a thread: you no longer have to think about threads; as a developer you think about the tasks that need to be parallelized.

ThreadPool is essentially a managed wrapper around a set of reusable threads.

Why is it a superior choice?

  • With ThreadPool you are no longer "the owner" of threads: no manual thread creation, no intricate lifetime management, fewer deadlocks and thread-management issues. More often than not, the main problem was you, not the thread. :)
  • ThreadPool decides when and why to create threads. Put simply, suppose you have five methods to run in parallel: with the traditional Thread API you would create five threads, but ThreadPool might finish the same work with only two or three reused threads.
internal class Program
{
    static void Main(string[] args)
    {
        new Thread(ComplexTask).Start();
        Thread.Sleep(1000);
        new Thread(ComplexTask).Start();
        Thread.Sleep(1000);
        new Thread(ComplexTask).Start();
        Thread.Sleep(1000);
        new Thread(ComplexTask).Start();
        Thread.Sleep(1000);
        Console.ReadLine();
    }

    static void ComplexTask()
    {
        Console.WriteLine($"Running complex task in Thread={Environment.CurrentManagedThreadId}");
        //Thread.Sleep(40);
        Console.WriteLine($"Finishing complex task in Thread={Environment.CurrentManagedThreadId}");
    }
}


Here is the result

Every operation gets its own new thread, which is not an optimized way of using threads.

ThreadPool isolates us from thread management and provides a simple API for multithreading. Let's modify our code to see the value of using ThreadPool instead of Thread directly.
internal class Program
{
    static void Main(string[] args)
    {
        ThreadPool.QueueUserWorkItem((x) => ComplexTask());
        Thread.Sleep(1000);
        ThreadPool.QueueUserWorkItem((x) => ComplexTask());
        Thread.Sleep(1000);
        ThreadPool.QueueUserWorkItem((x) => ComplexTask());
        Thread.Sleep(1000);
        ThreadPool.QueueUserWorkItem((x) => ComplexTask());
        Thread.Sleep(1000);

        Console.ReadLine();
    }
    static void ComplexTask()
    {
        Console.WriteLine($"Running complex task in Thread={Environment.CurrentManagedThreadId}");
        //Thread.Sleep(40);
        Console.WriteLine($"Finishing complex task in Thread={Environment.CurrentManagedThreadId}");
    }
}

As you have probably noticed, ThreadPool optimizes thread usage: if an existing thread is free, ThreadPool reuses it rather than starting a new one. Hold on a minute... can we really execute the same thread twice? Let's give it a shot.

You can't run the same Thread instance twice, but ThreadPool can reuse its threads, so a new thread does not have to be created every time you have a block of code to run in multithreaded mode. In classic .NET, the maximum number of worker threads depends on the runtime version and your CPU architecture.

In my case, using the latest .NET (.NET 8) on an x64 Intel processor, I have 32,767 worker threads and 1,000 completion port threads available.

ThreadPool usually starts with one thread in the pool and automatically adds threads depending on the workload. You can check the available threads in ThreadPool:
internal class Program
{
    static void Main(string[] args)
    {
        ThreadCalculator();
        for (int i = 0; i < 10; i++)
        {
            ThreadPool.QueueUserWorkItem((x) => ComplexTask());
        }
        Thread.Sleep(15);
        ThreadCalculator();
        Console.ReadLine();
    }
    static void ThreadCalculator()
    {
        ThreadPool.GetAvailableThreads(out int workerThreads, out int completionPortThreads);
        Console.WriteLine($"worker threads = {workerThreads}, " +
            $"and completion Port Threads = {completionPortThreads}");
    }
    static void ComplexTask()
    {
        Console.WriteLine($"Running complex task in Thread={Environment.CurrentManagedThreadId}");
        //Thread.Sleep(40);
        Console.WriteLine($"Finishing complex task in Thread={Environment.CurrentManagedThreadId}");
    }
}

The most used API on ThreadPool is, of course, QueueUserWorkItem, which also has a generic version.
There is another important concept, completion port threads, that we need to cover as well.

ThreadPool.QueueUserWorkItem is a method in C# that allows you to schedule tasks for execution in a thread pool. Here's a breakdown of its functionality:

Purpose

  • Schedules tasks to run asynchronously without creating and managing individual threads.
  • Leverages a pool of pre-created threads, improving performance and resource management.

How it works

  • You provide a delegate (such as Action or WaitCallback) that represents the work to be done; optionally, you can pass an object containing data for the delegate.
  • ThreadPool.QueueUserWorkItem adds the delegate and data (if provided) to the thread pool's queue.
  • When a thread from the pool becomes available, it picks up the first item from the queue and executes the delegate with the provided data.

Benefits, or Why Use It

  • Performance Boost: Reusing threads avoids the overhead of creating and destroying them for each task.
  • Resource Optimization: Maintains a controlled number of threads, preventing system overload.
  • Simplified Concurrency Management: The thread pool handles scheduling and ensures tasks are executed efficiently.

Additional notes

  • The thread pool can dynamically adjust the number of threads based on workload.
  • Tasks are queued if all threads are busy, ensuring none are dropped.
  • ThreadPool.QueueUserWorkItem has overloads, including a generic version for type safety and an option to influence thread selection (a sketch follows).
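A minimal sketch of the generic overload, which passes typed state to the callback and lets you hint at queue locality via preferLocal (this overload is available from .NET Core 3.0 onward):

using System;
using System.Threading;

class Report
{
    public int Id;
}

class GenericQueueExample
{
    static void Main()
    {
        var report = new Report { Id = 42 };

        // Typed state avoids boxing and closure allocations;
        // preferLocal: false queues to the global queue rather than the local one.
        ThreadPool.QueueUserWorkItem(static (Report r) =>
        {
            Console.WriteLine($"Generating report {r.Id} on thread {Environment.CurrentManagedThreadId}");
        }, report, preferLocal: false);

        Console.ReadLine();
    }
}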

Ok, but what about completion port Threads?
The concept of completionPortThreads within the ThreadPool class in C# might not be directly exposed as you might expect.
The ThreadPool class internally manages two types of threads.

  • Worker Threads: These handle general-purpose tasks submitted through QueueUserWorkItem.
  • I/O Completion Port Threads (Completion Port Threads): These are specialized threads optimized for processing asynchronous I/O operations.

Purpose of completion port threads
Asynchronous I/O operations typically involve waiting for network requests, file access, or other external events. Completion port threads efficiently wait for these events using a system mechanism called I/O Completion Ports (IOCP).

When an I/O operation completes, the corresponding completion port thread is notified, and it can then dequeue and process the completed task.

What is the value?

  • Improved performance for asynchronous I/O bound tasks.
  • Completion port threads avoid busy waiting, reducing CPU usage while waiting for I/O events.
  • Dedicated threads for I/O operations prevent worker threads from being blocked, improving overall responsiveness.

Important Notes
You don't directly control completion port threads; the ThreadPool manages their number dynamically based on the system workload. Methods such as ThreadPool.GetMinThreads, ThreadPool.SetMinThreads, and ThreadPool.GetAvailableThreads let you inspect and influence the minimum numbers of worker and completion port threads, but you cannot create or manage individual completion port threads yourself (see the sketch below).
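A minimal sketch of reading and adjusting those minimums (the values shown are illustrative; raising minimums can hurt performance, so treat this as a diagnostic aid rather than a recommendation):

using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        ThreadPool.GetMinThreads(out int minWorker, out int minIo);
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);

        Console.WriteLine($"Min: worker={minWorker}, completion port={minIo}");
        Console.WriteLine($"Max: worker={maxWorker}, completion port={maxIo}");

        // Request a higher minimum for both thread types; returns false if the request is rejected.
        bool applied = ThreadPool.SetMinThreads(minWorker + 2, minIo);
        Console.WriteLine($"SetMinThreads applied: {applied}");
    }
}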

Who Uses the ThreadPool in C#?
The ThreadPool is a fundamental mechanism in C# for efficiently managing threads. It provides a pool of worker threads that can be reused by various parts of your application, including:

  • Task Parallel Library (TPL): When you create Task or Task<TResult> objects, the TPL typically schedules them to run on ThreadPool threads by default. This enables parallel execution of tasks without the need to explicitly manage threads (a short sketch follows this list).
  • Asynchronous Programming: Asynchronous operations like those using the async and await keywords often rely on the ThreadPool to execute the actual work in the background while the main thread remains responsive.
  • Parallel Programming: Libraries like the Parallel For loop (Parallel.For) and PLINQ (Parallel LINQ) often leverage the ThreadPool to distribute work items across multiple threads.
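For instance, a minimal sketch showing that Task.Run and Parallel.For both execute their work on ThreadPool threads:

using System;
using System.Threading;
using System.Threading.Tasks;

class PoolUsers
{
    static async Task Main()
    {
        await Task.Run(() =>
            Console.WriteLine($"Task.Run on pool thread: {Thread.CurrentThread.IsThreadPoolThread}"));

        Parallel.For(0, 3, i =>
            Console.WriteLine($"Parallel.For iteration {i} on pool thread: {Thread.CurrentThread.IsThreadPoolThread}"));
    }
}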

Benefits of Using the ThreadPool

  • Reduced Thread Creation Overhead: Creating threads can be expensive. The ThreadPool eliminates this overhead by creating a pool of threads upfront and reusing them as needed.
  • Improved Performance: By efficiently managing threads, the ThreadPool can enhance the responsiveness and throughput of your application, especially when dealing with concurrent tasks.
  • Simplified Thread Management: The ThreadPool handles thread creation, destruction, and idle thread management, freeing you from these complexities.

When to consider alternatives

  • Long-Running Operations: If your work items are long-running (e.g., several seconds or more), dedicated threads or TaskFactory.StartNew with TaskCreationOptions.LongRunning might be more suitable, to avoid saturating the ThreadPool and affecting overall application performance.
  • Specialized Thread Requirements: If your tasks require specific thread priority or affinity, you might need to create dedicated threads with the desired settings.

Conclusion
The ThreadPool is a valuable tool for concurrent programming in C#. It offers a convenient and efficient way to manage threads, especially for short-lived, CPU-bound tasks. By understanding how the ThreadPool works and its appropriate use cases, you can create well-structured and performant C# applications.

HostForLIFE ASP.NET Core 9.0 Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with 99.99% uptime guaranteed for reliability, stability, and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.

 



European ASP.NET Core 9.0 Hosting - HostForLIFE :: Understanding SOLID Principles in .NET Core

clock June 3, 2024 08:19 by author Peter

The five SOLID design principles in object-oriented programming are intended to improve the readability, flexibility, and maintainability of software systems. In this article we'll look at each SOLID principle in detail, with .NET Core examples.

1. Single Responsibility Principle (SRP)
According to the Single Responsibility Principle, a class should have only one responsibility, and therefore only one reason to change.

Example
Consider a User class that both saves user data to a database and sends emails to users. Because the class has multiple responsibilities, it violates the SRP.

Bad Example
public class User
{
    public void Save()
    {
        // Saving user to the database
    }

    public void SendEmail()
    {
        // Sending email to the user
    }
}


Good Example
public class User
{
    public void Save()
    {
        // Saving user to the database
    }
}

public class EmailService
{
    public void SendEmail(User user)
    {
        // Sending email to the user
    }
}


2. Open/Closed Principle (OCP)
The Open/Closed Principle states that software entities should be open for extension but closed for modification. This means that classes should be designed in a way that allows new functionality to be added without changing existing code.

Example
Consider an AreaCalculator class that calculates the area of shapes. Initially, it only supports rectangles. To adhere to the OCP, we can refactor the code so that new shapes can be added without modifying the existing AreaCalculator class.

Bad Example
public class Rectangle
{
    public double Width { get; set; }
    public double Height { get; set; }
}

public class AreaCalculator
{
    public double CalculateArea(Rectangle[] shapes)
    {
        double area = 0;

        foreach (var shape in shapes)
        {
            area += shape.Width * shape.Height;
        }

        return area;
    }
}


Good Example

public abstract class Shape
{
    public abstract double Area();
}

public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }

    public override double Area()
    {
        return Width * Height;
    }
}

public class AreaCalculator
{
    public double CalculateArea(Shape[] shapes)
    {
        double area = 0;

        foreach (var shape in shapes)
        {
            area += shape.Area();
        }

        return area;
    }
}
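To illustrate the 'open for extension' part, a new shape can now be added without touching AreaCalculator. A minimal sketch of such an extension:

public class Circle : Shape
{
    public double Radius { get; set; }

    public override double Area()
    {
        return Math.PI * Radius * Radius;
    }
}

AreaCalculator itself stays unchanged; a mixed array such as new Shape[] { new Rectangle { Width = 2, Height = 3 }, new Circle { Radius = 1 } } can be passed to CalculateArea as-is.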


3. Liskov Substitution Principle (LSP)
The Liskov Substitution Principle states that objects of a superclass should be replaceable with objects of its subclasses without affecting the correctness of the program.

Example
Consider a Rectangle class and a Square class where a Square inherits from Rectangle. Violating the LSP would mean that substituting a Square object for a Rectangle object could lead to unexpected behavior.

Bad Example
public class Rectangle
{
    public virtual double Width { get; set; }
    public virtual double Height { get; set; }
}

public class Square : Rectangle
{
    private double _side;

    public override double Width
    {
        get => _side;
        set
        {
            _side = value;
            Height = value;
        }
    }

    public override double Height
    {
        get => _side;
        set
        {
            _side = value;
            Width = value;
        }
    }
}

Good Example
public abstract class Shape
{
    public abstract double Area();
}

public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }

    public override double Area()
    {
        return Width * Height;
    }
}

public class Square : Shape
{
    public double Side { get; set; }

    public override double Area()
    {
        return Side * Side;
    }
}


4. Interface Segregation Principle (ISP)

The Interface Segregation Principle states that clients should not be forced to depend on interfaces they don’t use. It emphasizes breaking interfaces into smaller, more specific ones.

Example
Consider an IWorker interface that contains both Work() and TakeBreak() methods. This forces all implementing classes to implement both methods, even if they don’t need them.

Bad Example
public interface IWorker
{
    void Work();
    void TakeBreak();
}

public class Programmer : IWorker
{
    public void Work()
    {
        // Programming tasks
    }

    public void TakeBreak()
    {
        // Taking a break
    }
}

Good Example
public interface IWorker
{
    void Work();
}

public interface IBreakable
{
    void TakeBreak();
}

public class Programmer : IWorker, IBreakable
{
    public void Work()
    {
        // Programming tasks
    }

    public void TakeBreak()
    {
        // Taking a break
    }
}


5. Dependency Inversion Principle (DIP)
The Dependency Inversion Principle states that high-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend on details. Details should depend on abstractions.

Example

Consider a UserManager class that directly depends on a Logger class. This creates a tight coupling between the two classes, making it difficult to change the logging implementation.

Bad Example
public class Logger
{
    public void Log(string message)
    {
        // Logging implementation
    }
}

public class UserManager
{
    private Logger _logger;

    public UserManager()
    {
        _logger = new Logger();
    }
}


Good Example
public interface ILogger
{
    void Log(string message);
}

public class Logger : ILogger
{
    public void Log(string message)
    {
        // Logging implementation
    }
}

public class UserManager
{
    private ILogger _logger;

    public UserManager(ILogger logger)
    {
        _logger = logger;
    }
}
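In an ASP.NET Core application, this abstraction is typically wired up through the built-in dependency injection container. A minimal registration sketch (note that ILogger here is the interface defined above, not Microsoft.Extensions.Logging.ILogger, and builder is the usual WebApplicationBuilder):

// Program.cs
builder.Services.AddSingleton<ILogger, Logger>();
builder.Services.AddScoped<UserManager>();

// The container now supplies ILogger to UserManager's constructor, so swapping
// the logging implementation only requires changing this registration.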


Conclusion
By understanding and applying the SOLID principles in your .NET Core applications, you can produce software architectures that are more flexible, scalable, and maintainable. Following these guidelines makes it easier to write clear, modular, and testable code, which ultimately improves software quality and developer productivity.

HostForLIFE ASP.NET Core 8.0.4 Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with 99.99% uptime guaranteed for reliability, stability, and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.



European ASP.NET Core 8.0.1 Hosting - HostForLIFE :: Integrating FullCalendar in a .NET Core App with JavaScript

clock May 28, 2024 08:06 by author Peter

In modern web development, an interactive and user-friendly interface is essential to a good user experience, and integrating dynamic calendars for planning, scheduling, and event management is a typical requirement. This is where integrating FullCalendar into .NET Core applications with JavaScript comes in: FullCalendar is a feature-rich JavaScript calendar library, while .NET Core provides a solid backend architecture for building scalable web applications.

This tutorial walks you through incorporating FullCalendar into your .NET Core application, from basic setup to customization. You will learn how to display events, build a responsive and interactive calendar, and adjust the calendar's appearance with JavaScript. Whether you are building an event management system or a scheduling tool, this article gives you the knowledge to integrate FullCalendar with .NET Core and add a polished, functional calendar to your web application.

Step 1. Create the calendar view
Create a View for the Calendar
Create a new Razor view named Calendar.cshtml in the Views/Home directory and include the FullCalendar setup.

Reference the FullCalendar JavaScript and CSS files in your layout, and make sure the page contains the container element targeted by the script in Step 2 (for example, <div id="calendar"></div>). Then add the following markup for the task details panel.
<div class="col-lg-5 ">
    <div class="w-100 m-t-6 task-calendar">
        <div class="card mb-0">
            <div class="card-header">
                <div>
                    <h5>Task Calendar</h5>
                    <span class="d-inline-block calender-date"></span>
                </div>
                <div>

                    <!-- Status badges will be dynamically appended here -->
                </div>
            </div>


            <div class="card-body p-3">
                <div class="table-scrol">
                    <table class="table table-bordered mb-0 tblTaskDetails">
                        <thead>
                            <tr>
                                <th>Sl#</th>
                                <th>Activity</th>
                                <th>Region</th>
                                <th>Mines</th>
                            </tr>
                        </thead>
                        <tbody>
                        </tbody>
                    </table>
                </div>
            </div>
        </div>
    </div>
</div>


Step 2. Write the Javascript code
<script type="text/javascript">
document.addEventListener('DOMContentLoaded', function () {
    var calendarEl = document.getElementById('calendar');

    var calendar = new FullCalendar.Calendar(calendarEl, {
        initialView: 'dayGridMonth',
        headerToolbar: {
            left: 'prev,next today',
            center: 'title',
            right: 'dayGridMonth,timeGridWeek,timeGridDay'
        },
        events: function (fetchInfo, successCallback, failureCallback) {
            $.ajax({
                url:"@Url.Action("FetchCalenderData", "User")",
                method: 'GET',
                dataType: 'json',
                success: function (response) {
                    if (response.status === "Successful") {
                        // Aggregate events by date and status
                        var aggregatedEvents = {};
                        response.data.forEach(function (item) {
                            var date = item.actionDate;
                            var status = item.actionStatus;

                            if (!aggregatedEvents[date]) {
                                aggregatedEvents[date] = {};
                            }
                            if (!aggregatedEvents[date][status]) {
                                aggregatedEvents[date][status] = 0;
                            }
                            aggregatedEvents[date][status]++;
                        });

                        // Transform aggregated data into FullCalendar event format
                        var events = [];
                        for (var date in aggregatedEvents) {
                            for (var status in aggregatedEvents[date]) {
                                events.push({
                                    title: status + ' (' + aggregatedEvents[date][status] + ')',
                                    start: date,
                                    extendedProps: {
                                        status: status,
                                        statusCount: aggregatedEvents[date][status]
                                    }
                                });
                            }
                        }

                        successCallback(events);
                    } else {
                        console.log('Error: Data status is not successful');
                        failureCallback();
                    }
                },
                error: function (jqXHR, textStatus, errorThrown) {
                    console.log('Error: ' + textStatus);
                    failureCallback();
                }
            });
        },
        eventClick: function (info) {
            var status = info.event.extendedProps.status;
            var clickedDate = info.event.start;

            /*Date Formating For Send To DB for Filter*/
            const CurrentDateForDB = new Date(clickedDate);
            const year = CurrentDateForDB.getFullYear();
            const month = String(CurrentDateForDB.getMonth() + 1).padStart(2, '0');
            const day = String(CurrentDateForDB.getDate()).padStart(2, '0');
            const ClickedformattedDate = `${year}-${month}-${day}`;
            /*END*/

            var options = { day: 'numeric', month: 'short', year: 'numeric' };
            var formattedDate = clickedDate.toLocaleDateString('en-US', options);
            // Make an AJAX call with the status name
            $.ajax({
              url: "@Url.Action("FetchActivityDtlsByStatus", "User")",
                method: 'GET',
                data: { ActivityStatus: status, ActivityDate: ClickedformattedDate },
                success: function (response) {
                    console.log(response);

                    if (response.status === "Successful") {
                        var tbody = $('.tblTaskDetails tbody');
                        tbody.empty(); // Clear existing rows

                        response.data.forEach(function (item, index) {
                            var row = '<tr>' +
                                '<td>' + (index + 1) + '</td>' +
                                '<td>' + item.activiTyName + '</td>' +
                                '<td>' + item.regionName + '</td>' +
                                '<td>' + item.minesName + '</td>' +
                                '</tr>';
                            tbody.append(row);
                        });


                        if (clickedDate) {
                            $('.calender-date').text(formattedDate);
                        } else {
                            $('.calender-date').text('No Date Clicked');
                        }

                        // Clear existing status badges
                        $('.card-header div:nth-child(2)').empty();

                        // Display all unique statuses
                        var statusColors = {
                            'Approved': 'text-bg-success',
                            'Open': 'text-bg-primary',
                            'InProgress': 'text-bg-warning'
                        };

                        var uniqueStatuses = [...new Set(response.data.map(item => item.activityStatus))];
                        uniqueStatuses.forEach(function (status) {
                            var badgeClass = statusColors[status] || 'text-bg-secondary'; // Default color if status is not mapped
                            var statusBadge = '<h5>Status</h5><span class="badge ' + badgeClass + ' calender-status">' + status + '</span>';
                            $('.card-header div:nth-child(2)').append(statusBadge);
                        });

                    } else {
                        alert('Error: Data status is not successful');
                    }
                },
                error: function (jqXHR, textStatus, errorThrown) {
                    console.log('Error: ' + textStatus);
                }
            });


        }
    });

    calendar.render();
});
</script>

Step 3. Write the Controller code
#region------------------------Calendar View-------------------------------------
/// <summary>
/// CalenderView Page
/// </summary>
/// <returns></returns>
[HttpGet]
public IActionResult CalenderView()
{
    return View();
}

/// <summary>
/// </summary>
/// <returns></returns>
[HttpGet]
public async Task<IActionResult> FetchCalenderData()
{
    FetchCalendarData fcdaTa = new FetchCalendarData();
    try
    {
        using (var httpClient = new HttpClient(_clientHandler))
        {
            var response = await httpClient.PostAsJsonAsync(_options._apiRootURL + "User/GetCalendarData", fcdaTa);
            if (response.StatusCode.ToString() != "ServiceUnavailable")
            {
                string apiResponse = await response.Content.ReadAsStringAsync();
               var  ResultData = JsonConvert.DeserializeObject<FetchCalendarDataInfo>(apiResponse);
                return Json(ResultData);
            }
        }
    }
    catch (Exception)
    {
        // Rethrow without losing the original stack trace.
        throw;
    }
    return Json("");
}


/// <summary>
/// </summary>
/// <returns></returns>
[HttpGet]
public async Task<IActionResult> FetchActivityDtlsByStatus(FetchActivityDetailsData fcdaTa)
{
    try
    {
        using (var httpClient = new HttpClient(_clientHandler))
        {
            var response = await httpClient.PostAsJsonAsync(_options._apiRootURL + "User/GetActivityDtlsByStatus", fcdaTa);
            if (response.StatusCode.ToString() != "ServiceUnavailable")
            {
                string apiResponse = await response.Content.ReadAsStringAsync();
                var ResultData = JsonConvert.DeserializeObject<FetchActivityDetailsDataInfo>(apiResponse);
                return Json(ResultData);
            }
        }
    }
    catch (Exception)
    {
        // Rethrow without losing the original stack trace.
        throw;
    }
    return Json("");
}
#endregion-----------------------------------------------------------------------
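The controller above references several model types (FetchCalendarData, FetchCalendarDataInfo, FetchActivityDetailsData, FetchActivityDetailsDataInfo) that are not shown in the article. A rough sketch of what they might look like, inferred from the JSON responses below and the AJAX parameters; these are assumptions, not the original classes:

// Hypothetical models inferred from the sample JSON and AJAX calls.
public class FetchCalendarData
{
    // Filter properties expected by the backend API, if any.
}

public class CalendarItem
{
    public string ActionDate { get; set; }
    public string ActionStatus { get; set; }
}

public class FetchCalendarDataInfo
{
    public string Status { get; set; }
    public List<CalendarItem> Data { get; set; }
}

public class FetchActivityDetailsData
{
    public string ActivityStatus { get; set; }
    public string ActivityDate { get; set; }
}

public class ActivityDetail
{
    public string ActivityStatus { get; set; }
    public string ActiviTyName { get; set; }
    public string RegionName { get; set; }
    public string MinesName { get; set; }
    public string ActivityDate { get; set; }
}

public class FetchActivityDetailsDataInfo
{
    public string Status { get; set; }
    public List<ActivityDetail> Data { get; set; }
}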


Step 4. Page load JSON response
{
    "status": "Successful",
    "data": [
        {
            "actionDate": "2024-05-23",
            "actionStatus": "Open"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "Open"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "InProgress"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "Approved"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "Open"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "InProgress"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "Approved"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "Open"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "InProgress"
        },
        {
            "actionDate": "2024-05-23",
            "actionStatus": "Approved"
        },
        {
            "actionDate": "2024-05-24",
            "actionStatus": "InProgress"
        },
        {
            "actionDate": "2024-05-27",
            "actionStatus": "InProgress"
        },
        {
            "actionDate": "2024-05-27",
            "actionStatus": "Open"
        },
        {
            "actionDate": "2024-05-27",
            "actionStatus": "InProgress"
        },
        {
            "actionDate": "2024-05-27",
            "actionStatus": "Approved"
        }
    ]
}

Step 5. Button click JSON response

{
    "status": "Successful",
    "data": [
        {
            "activityStatus": "Open",
            "activiTyName": "Promotion Activities",
            "regionName": "Manchester",
            "minesName": "
Manchester",
            "activityDate": "2024-05-23"
        },
        {
            "activityStatus": "Open",
            "activiTyName": "Promotion Activities",
            "regionName": "Leeds",
            "minesName": "
Leeds",
            "activityDate": "2024-05-23"
        },
        {
            "activityStatus": "Open",
            "activiTyName": "Content Creation",
            "regionName": "",
            "minesName": "",
            "activityDate": "2024-05-23"
        },
        {
            "activityStatus": "Open",
            "activiTyName": "Promotion Activities",
            "regionName": "London",
            "minesName": "
London",
            "activityDate": "2024-05-23"
        }
    ]
}


Conclusion
Using JavaScript to integrate FullCalendar into a .NET Core application is a powerful way to create dynamic and interactive calendar interfaces. By following this guide you have installed FullCalendar in your .NET Core project, set up the required dependencies, created a view to display the calendar, and implemented controller actions that serve events. With FullCalendar integrated, you can now build scheduling tools, event management systems, or any other application that needs calendar capability, and you can tailor the calendar's look, behavior, and event handling to your own needs.

As you continue to develop your .NET Core application, you can further enhance the integration with FullCalendar by exploring additional plugins, implementing features such as drag-and-drop event creation, integrating with external data sources, or incorporating advanced scheduling functionalities.

Overall, integrating FullCalendar with .NET Core empowers you to create seamless and intuitive user experiences, enhancing the functionality and usability of your web applications.

HostForLIFE ASP.NET Core 8.0.4 Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting in the European continent, with 99.99% uptime guaranteed for reliability, stability, and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.

 



European ASP.NET Core 8.0.1 Hosting - HostForLIFE :: Effectively Parsing Solutions in .NET 8 with DTE and Microsoft.Build

clock May 21, 2024 08:44 by author Peter

In the context of C#, a solution parser is a program or library that can parse and understand data from Visual Studio solution (.sln) files and their project (.csproj) files. These parsers let developers inspect, modify, and analyze a solution's structure and components, including its projects, dependencies, references, and build settings.

Benefits of a Solution Parser in C#

  • Automated Analysis and Reporting: Solution parsers enable the automated extraction of information regarding the structure, dependencies, and configurations of a solution. This feature proves to be valuable in generating reports on code quality, dependencies, and build status.
  • Dependency Management: Through the parsing of solution and project files, tools can analyze and visualize the dependencies between projects. This capability aids in identifying potential issues like circular dependencies or outdated packages.
  • Build Automation: Integrating solution parsers into build automation scripts allows for dynamic modification of solution and project configurations. This proves to be beneficial in continuous integration (CI) pipelines where different build environments may require different settings.
  • Refactoring Support: Developers can utilize solution parsers to automate refactoring tasks, such as renaming projects, updating namespaces, or restructuring solution folders.
  • Custom Tooling: Solution parsers can be utilized to build custom development tools that cater to specific needs, such as custom linting rules, project templates, or automated code generation.


Popular Parsing Solutions for C#

  • Microsoft Build (MSBuild): Visual Studio and .NET use MSBuild as their build platform. It provides APIs to load, query, and edit project files, and solution files can be parsed through these APIs as well. Microsoft.Build.Evaluation and Microsoft.Build.Locator are the most commonly used namespaces for this kind of work.
  • EnvDTE: The Development Tools Environment (DTE) is an automation model that lets developers interact with the Visual Studio environment. Through it, Visual Studio extensions and automation scripts can work with solution and project elements.
  • NuGet Package Manager: The NuGet Package Manager offers tools such as NuGet.Protocol and NuGet.Packaging, which can be employed to manage dependencies specified in project files. When combined with other libraries, these tools can form an integral part of a comprehensive solution parsing and manipulation toolkit.


Implementation
Step 1. Xaml View
<Window x:Class="SolutionParserExample.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:SolutionParserExample"
        mc:Ignorable="d"
        Title="MainWindow" Height="450" Width="800">

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*"/>
            <RowDefinition Height="*"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>

        <GroupBox Header="WithDTE Result[First Approach]" Grid.Row="0">
            <DataGrid x:Name="DataridWithDTE"/>
        </GroupBox>

        <GroupBox Header="WithoutOutDTE Result[Second Approach]" Grid.Row="1">
            <DataGrid x:Name="DataridWithoutDTE"/>
        </GroupBox>

        <GroupBox Header="WithoutOutDTE Result[Third Approach]" Grid.Row="2">
            <DataGrid x:Name="DataridThirdApproach"/>
        </GroupBox>
    </Grid>

</Window>

Code-behind (CS file)
using EnvDTE80;
using Microsoft.Build.Construction;
using System.Collections.ObjectModel;
using System.IO;
using System.Reflection;
using Window = System.Windows.Window;

namespace SolutionParserExample
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        Type? s_SolutionParserFromSolutionFile;
        PropertyInfo? s_SolutionParserReader;
        MethodInfo? s_SolutionParserSolution;
        PropertyInfo? s_SolutionParserProjects;
        public List<ProjectSolution> ProjectsList { get; set; }
        ProjectInSolution[] arrayOfProjects = null;

        // Change the path below to point to your solution file
        string solutionFilePath = @"C:\Users\sanjay\Desktop\MediaElementSampleProject (3)\MediaElementSampleProject\MediaElementSampleProject.sln";
        private ObservableCollection<ProjectDetail> _ProjectsListWithoutDTE;

        public ObservableCollection<ProjectDetail> ProjectsListWithoutDTE
        {
            get { return _ProjectsListWithoutDTE; }
            set { _ProjectsListWithoutDTE = value; }
        }

        private ObservableCollection<ProjectDetail> _ProjectsListWithDTE;

        public ObservableCollection<ProjectDetail> ProjectsListWithDTE
        {
            get { return _ProjectsListWithDTE; }
            set { _ProjectsListWithDTE = value; }
        }

        private ObservableCollection<ProjectDetail> _ProjectsListThirdApproachExample;

        public ObservableCollection<ProjectDetail> ProjectsListThirdApproachExample
        {
            get { return _ProjectsListThirdApproachExample; }
            set { _ProjectsListThirdApproachExample = value; }
        }

        public MainWindow()
        {
            InitializeComponent();
            GetProjectDetailsWithDTE(); // First Approach
            GetProjectDetailsWithoutDTE(); // Second Approach
            GetProjectInformationByThirdApproach(); // Third Approach
            DataridWithoutDTE.ItemsSource = ProjectsListWithoutDTE;
            DataridWithDTE.ItemsSource = ProjectsListWithDTE;
            DataridThirdApproach.ItemsSource = ProjectsListThirdApproachExample;
        }

        private void GetProjectInformationByThirdApproach()
        {
            // Load the solution file
            SolutionFile solutionFile = SolutionFile.Parse(solutionFilePath);
            ProjectsListThirdApproachExample = new ObservableCollection<ProjectDetail>();

            // Iterate through each project in the solution
            foreach (var project in solutionFile.ProjectsInOrder)
            {
                string projectName = project.ProjectName;
                string projectRelativePath = project.RelativePath;
                string projectFullPath = System.IO.Path.Combine(System.IO.Path.GetDirectoryName(solutionFilePath), projectRelativePath);
                string projectGuid = project.ProjectGuid;
                string projectType = project.ProjectType.ToString();

                ProjectsListThirdApproachExample.Add(new ProjectDetail
                {
                    ProjectName = projectName,
                    ProjectGuid = projectGuid,
                    ProjectType = projectType,
                    RelativePath = projectRelativePath,
                });
            }
        }

        private string GetRelativePath(string basePath, string fullPath)
        {
            Uri baseUri = new Uri(basePath + Path.DirectorySeparatorChar);
            Uri fullUri = new Uri(fullPath);
            Uri relativeUri = baseUri.MakeRelativeUri(fullUri);
            string relativePath = Uri.UnescapeDataString(relativeUri.ToString());

            return relativePath.Replace('/', Path.DirectorySeparatorChar);
        }

        private void GetProjectDetailsWithDTE()
        {
            string solutionDirectory = Path.GetDirectoryName(solutionFilePath);
            var dte = GetDTE();
            dte.Solution.Open(solutionFilePath);
            ProjectsListWithDTE = new ObservableCollection<ProjectDetail>();

            foreach (EnvDTE.Project project in dte.Solution.Projects)
            {
                ProjectsListWithDTE.Add(new ProjectDetail
                {
                    ProjectName = project.Name,
                    RelativePath = GetRelativePath(solutionDirectory, project.FullName),
                });
            }
            dte.Solution.Close(false);
        }

        private DTE2 GetDTE()
        {
            Type dteType = Type.GetTypeFromProgID("VisualStudio.DTE.17.0", true);
            object obj = Activator.CreateInstance(dteType, true);
            return (DTE2)obj;
        }

        private void GetProjectDetailsWithoutDTE()
        {
            s_SolutionParserFromSolutionFile = Type.GetType("Microsoft.Build.Construction.SolutionFile, Microsoft.Build, Version=15.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", false, true);

            if (s_SolutionParserFromSolutionFile != null)
            {
                s_SolutionParserReader = s_SolutionParserFromSolutionFile.GetProperty("SolutionReader", BindingFlags.NonPublic | BindingFlags.Instance);
                s_SolutionParserProjects = s_SolutionParserFromSolutionFile.GetProperty("ProjectsInOrder", BindingFlags.Public | BindingFlags.Instance);
                s_SolutionParserSolution = s_SolutionParserFromSolutionFile.GetMethod("ParseSolution", BindingFlags.NonPublic | BindingFlags.Instance);
            }

            if (s_SolutionParserFromSolutionFile != null)
            {
                var solutionResultParser = s_SolutionParserFromSolutionFile.GetConstructors(BindingFlags.Instance | BindingFlags.NonPublic).First().Invoke(null);

                using (var streamReader = new StreamReader(solutionFilePath))
                {
                    s_SolutionParserReader.SetValue(solutionResultParser, streamReader, null);
                    s_SolutionParserSolution.Invoke(solutionResultParser, null);
                }

                var projects = new List<ProjectSolution>();
                var projectsobj = s_SolutionParserProjects.GetValue(solutionResultParser, null) as IReadOnlyList<ProjectInSolution>;

                if (projectsobj != null)
                {
                    arrayOfProjects = projectsobj.ToArray();
                }

                for (int i = 0; i < arrayOfProjects.Length; i++)
                {
                    projects.Add(new ProjectSolution(arrayOfProjects.GetValue(i)));
                }

                this.ProjectsList = projects;
                ProjectsListWithoutDTE = new ObservableCollection<ProjectDetail>();

                foreach (var item in ProjectsList)
                {
                    ProjectsListWithoutDTE.Add(new ProjectDetail
                    {
                        ProjectName = item.ProjectName,
                        ProjectGuid = item.ProjectGuid,
                        ProjectType = item.ProjectType,
                        RelativePath = item.RelativePath,
                    });
                }
            }
        }
    }

    public class ProjectSolution
    {
        static readonly PropertyInfo? s_ProjectName;
        static readonly PropertyInfo? s_RelativePath;
        static readonly PropertyInfo? s_ProjectGuid;
        static readonly PropertyInfo? s_ProjectType;

        static ProjectSolution()
        {
            Type? ProjectInSolution = Type.GetType("Microsoft.Build.Construction.ProjectInSolution, Microsoft.Build, Version=15.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", false, true);

            if (ProjectInSolution != null)
            {
                s_ProjectName = ProjectInSolution.GetProperty("ProjectName", BindingFlags.Public | BindingFlags.Instance);
                s_RelativePath = ProjectInSolution.GetProperty("RelativePath", BindingFlags.Public | BindingFlags.Instance);
                s_ProjectGuid = ProjectInSolution.GetProperty("ProjectGuid", BindingFlags.Public | BindingFlags.Instance);
                s_ProjectType = ProjectInSolution.GetProperty("ProjectType", BindingFlags.Public | BindingFlags.Instance);
            }
        }

        public string ProjectName { get; private set; }
        public string RelativePath { get; private set; }
        public string ProjectGuid { get; private set; }
        public string ProjectType { get; private set; }

        public ProjectSolution(object solutionProject)
        {
            this.ProjectName = (s_ProjectName == null ? "" : s_ProjectName.GetValue(solutionProject, null) as string);
            this.RelativePath = (s_RelativePath == null ? "" : s_RelativePath.GetValue(solutionProject, null) as string);
            this.ProjectGuid = (s_ProjectGuid == null ? "" : s_ProjectGuid.GetValue(solutionProject, null) as string);
            this.ProjectType = (s_ProjectType == null ? "" : s_ProjectType.GetValue(solutionProject, null).ToString());
        }
    }
}


Step 2. Result View
Run the application; the three DataGrids display the project details (name, relative path, and, where available, GUID and type) returned by each approach.


 



European ASP.NET Core 8.0.1 Hosting - HostForLIFE :: Recognizing OLTP versus OLAP Techniques

clock May 16, 2024 09:04 by author Peter

Two key methodologies stand out in the field of data management and analytics: online transaction processing (OLTP) and online analytical processing (OLAP). Each addresses a different facet of how data is processed, stored, and analyzed, and each serves a different purpose. Understanding the distinctions between OLTP and OLAP is essential for building efficient data management systems and making sound decisions across a range of industries, from e-commerce to banking.

1. History and Evolution
OLTP

OLTP traces its origins back to the 1960s with the emergence of early database management systems (DBMS). It primarily focuses on managing transaction-oriented tasks, such as recording sales, processing orders, and updating inventory levels in real time. OLTP systems are designed for high concurrency and rapid response times, ensuring efficient handling of numerous short online transactions.

OLAP
On the other hand, OLAP gained prominence in the late 1980s and early 1990s as organizations recognized the need for advanced analytics and decision support systems. OLAP systems are optimized for complex queries, ad-hoc reporting, and multidimensional analysis. They provide a consolidated view of data from various sources, enabling users to gain insights through interactive analysis and reporting.

2. Purpose and Need

OLTP: The primary goal of OLTP systems is to support the day-to-day transactional operations of an organization. These transactions are typically characterized by short response times and high throughput. For example, when a customer places an order on an e-commerce website, the OLTP system ensures that the order is processed promptly, inventory is updated, and the transaction is recorded in the database.
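
To make that concrete, below is a minimal sketch of such a short, atomic transaction using EF Core. OltpDbContext, Order, OrderItem, and Product are hypothetical types invented for the sketch, not part of this article's sample code.
// Minimal sketch of a short OLTP transaction (requires Microsoft.EntityFrameworkCore
// and a relational provider; all entity and property names here are hypothetical).
public async Task PlaceOrderAsync(Order order)
{
    using var dbContext = new OltpDbContext();
    await using var transaction = await dbContext.Database.BeginTransactionAsync();

    // Record the order.
    dbContext.Orders.Add(order);

    // Update inventory for each ordered item.
    foreach (var item in order.OrderItems)
    {
        var product = await dbContext.Products.FindAsync(item.ProductId);
        if (product is not null)
        {
            product.StockQuantity -= item.Quantity;
        }
    }

    // Persist and commit both changes as a single unit of work.
    await dbContext.SaveChangesAsync();
    await transaction.CommitAsync();
}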

OLAP: In contrast, OLAP systems are designed to facilitate decision-making by providing a comprehensive view of historical and aggregated data. They enable users to analyze trends, identify patterns, and make informed strategic decisions. For instance, a retail company might use OLAP to analyze sales data across different regions, product categories, and time periods to optimize inventory management and marketing strategies.

3. Evolution to Address Modern Challenges

As technology evolves and data volumes continue to grow exponentially, both OLTP and OLAP systems have undergone significant transformations to address modern challenges:

  • Scalability: With the advent of cloud computing and distributed databases, OLTP and OLAP systems have become more scalable and resilient. They can handle massive volumes of data and support high levels of concurrency, ensuring optimal performance even under heavy workloads.
  • Real-time Analytics: The demand for real-time analytics has led to the integration of OLTP and OLAP functionalities in hybrid transactional/analytical processing (HTAP) systems. These systems combine the benefits of both OLTP and OLAP, allowing organizations to perform analytics on live transactional data without the need for separate data warehouses.
  • In-memory Computing: In-memory computing has emerged as a game-changer for both OLTP and OLAP systems, enabling faster data processing and analysis by storing data in memory rather than on disk. This approach significantly reduces latency and enhances overall system performance, making it ideal for time-sensitive applications and interactive analytics.

Demonstration in C#
Below is a simplified C# code snippet demonstrating the difference between OLTP and OLAP queries using a hypothetical e-commerce scenario:
// OLTP Query: Retrieve the orders (and their items) for a specific customer
public List<Order> GetOrderDetails(int customerId)
{
    using (var dbContext = new OLTPDbContext())
    {
        // A customer may have several orders, so return all of them
        return dbContext.Orders
            .Include(o => o.OrderItems)
            .Where(o => o.CustomerId == customerId)
            .ToList();
    }
}

// OLAP Query: Analyze sales data by product category
public Dictionary<string, int> GetSalesByCategory()
{
    using (var dbContext = new OLAPDbContext())
    {
        return dbContext.OrderItems
            .GroupBy(oi => oi.Product.Category)
            .ToDictionary(g => g.Key, g => g.Sum(oi => oi.Quantity));
    }
}


In this example, the OLTP query retrieves order details for a specific customer in real time, while the OLAP query analyzes sales data by product category for strategic decision-making.

OLTP and OLAP practices play complementary roles in modern data management and analytics. By understanding their differences and capabilities, organizations can design robust systems that meet their transactional and analytical needs effectively.




European ASP.NET Core 8.0.1 Hosting - HostForLIFE :: Putting Role-Based and Policy-Based Authorization into Practice with .NET Core

clock May 7, 2024 09:03 by author Peter

The focus of this article is on implementing policy-based authorization with .NET Core. If you want to understand the fundamentals of authorization techniques, please refer to the linked article. Let's break down the essential elements:

Configuring Authentication

  • Initially, configure your application's authentication. By doing this, users' identity and authentication are guaranteed.
  • To handle user logins and token issuance, you will usually employ a third-party authentication provider (such as Microsoft Identity Platform).

Authorization Policies

  • Next, define authorization policies. These policies determine who can access specific parts of your application.
  • Policies can be simple (e.g., “authenticated users only”) or more complex (based on claims, roles, or custom requirements).

Default Policy

  • Create a default policy that applies to all endpoints unless overridden.
  • For example, you might require users to be authenticated and have specific claims (like a preferred username).

Custom Policies

  • Add custom policies for specific scenarios. These allow fine-grained control over access.
  • For instance, you can create policies based on permissions (e.g., “create/edit user” or “view users”).

Permission Requirements

  • Define permission requirements (e.g., PermissionAuthorizationRequirement). These represent specific actions or features.
  • For each requirement, check if the user has the necessary permissions (based on their roles or other criteria).

Role-Based Authorization

  • Optionally, incorporate role-based authorization.
  • Roles group users with similar access levels (e.g., “admin,” “user,” etc.). You can assign roles to users.

Authorization Handlers

  • Implement custom authorization handlers (e.g., AuthorizationHandler).
  • These handlers evaluate whether a user meets the requirements (e.g., has the right permissions or roles).

Controller Actions

  • In your controller actions, apply authorization rules.
  • Use [Authorize] attributes with either policies or roles.

Middleware Result Handling

  • Customize how authorization results are handled (e.g., 401 Unauthorized or 403 Forbidden responses).
  • You can create an AuthorizationMiddlewareResultHandler to manage this behavior.

Let's go ahead with the actual coding for your Web API:

Program.cs: Use Azure AD authentication as it is, and decide on your permission keys for authorization.
Each AddPolicy call registers a single named policy.
//In Program.cs file, add the below code

var builder = WebApplication.CreateBuilder(args);

// Authentication using Microsoft Identity Platform
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"));

// Default policy-based authorization
builder.Services.AddAuthorization(options =>
{
    options.DefaultPolicy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .RequireClaim("preferred_username")
        .RequireScope("user_impersonation")
        .Build();

    options.AddPolicy("Permission1", policy =>
        policy.Requirements.Add(new PermissionAuthorizationRequirement("Permission1")));

    options.AddPolicy("Permission2", policy =>
        policy.Requirements.Add(new PermissionAuthorizationRequirement("Permission2")));
});

// Custom authorization handler
builder.Services.AddScoped<IAuthorizationHandler, AuthorizationHandler>();

// Middleware result handler for response errors (401 or 403)
builder.Services.AddScoped<IAuthorizationMiddlewareResultHandler, AuthorizationMiddlewareResultHandler>();

// Other services and configurations...


PermissionAuthorizationRequirement.cs: Just copy and paste the requirement.
//Create a new PermissionAuthorizationRequirement.cs file: a custom requirement for permission-based authorization
    public class PermissionAuthorizationRequirement : IAuthorizationRequirement
    {
        public PermissionAuthorizationRequirement(string allowedPermission)
        {
            AllowedPermission = allowedPermission;
        }

        public string AllowedPermission { get; }
    }


AuthorizationHandler.cs: Just copy and paste the code. Make sure the IAppManager implementation actually queries the database for the user's roles and permissions.
// Custom authorization handler to check user permissions
    public class AuthorizationHandler : AuthorizationHandler<PermissionAuthorizationRequirement>
    {
        // Business layer service for user-related operations
        private readonly IAppManager _appManager;

        public AuthorizationHandler(IAppManager appManager)
        {
            _appManager = appManager;
        }

        protected override async Task HandleRequirementAsync(AuthorizationHandlerContext context, PermissionAuthorizationRequirement requirement)
        {
            // Find the preferred_username claim
            var preferredUsernameClaim = context.User.Claims.FirstOrDefault(c => c.Type == "preferred_username");

            if (preferredUsernameClaim is null)
            {
                // User is not authenticated
                context.Fail(new AuthorizationFailureReason(this, "UnAuthenticated"));
                return;
            }

            // Call the business layer method to check if the user exists
            var user = await _appManager.GetUserRolesAndPermissions(preferredUsernameClaim.Value);

            if (user is null || !user.IsActive)
            {
                // User does not exist or is inactive
                context.Fail(new AuthorizationFailureReason(this, "UnAuthenticated"));
                return;
            }

            // Select the list of permissions that the user is assigned
            // Here you will fetch the Permission1 and Permission2
            var userPermissions = user.UserPermissions?.Select(k => k.PermissionKey);

            // Get the current permission key from the controller's action method
            string allowedPermission = requirement.AllowedPermission;


            // Check if the current request carries this permission
            if (userPermissions is not null && userPermissions.Contains(allowedPermission))
            {
                // Permission granted
                context.Succeed(requirement);
                return;
            }

            // Permission denied
            context.Fail();
        }
    }


AuthorizationMiddlewareResultHandler.cs: Just copy and paste. No changes are required.

// AuthorizationMiddlewareResultHandler to decide response code (401 or success)
public class AuthorizationMiddlewareResultHandler : IAuthorizationMiddlewareResultHandler
{
    private readonly ILogger<AuthorizationMiddlewareResultHandler> _logger;
    private readonly Microsoft.AspNetCore.Authorization.Policy.AuthorizationMiddlewareResultHandler _defaultHandler = new();

    public AuthorizationMiddlewareResultHandler(ILogger<AuthorizationMiddlewareResultHandler> logger)
    {
        _logger = logger;
    }

    public async Task HandleAsync(
        RequestDelegate next,
        HttpContext context,
        AuthorizationPolicy policy,
        PolicyAuthorizationResult authorizeResult)
    {
        var authorizationFailureReason = authorizeResult.AuthorizationFailure?.FailureReasons.FirstOrDefault();
        var message = authorizationFailureReason?.Message;

        if (string.Equals(message, "UnAuthenticated", StringComparison.CurrentCultureIgnoreCase))
        {
            // Set response status code to 401 (Unauthorized)
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            _logger.LogInformation("401 failed authentication");
            return;
        }

        // If not unauthorized, continue with default handler
        await _defaultHandler.HandleAsync(next, context, policy, authorizeResult);
    }
}

Controller.cs: Finally, in the dashboard controller, add the [Authorize] attribute with the appropriate policy. Here you will use Permission1 and Permission2.
// Dashboard controller
[Authorize]
[Route("api/[controller]")]
[ApiController]
public class DashboardController : ControllerBase
{
    [HttpGet]
    [Route("get-dashboardDetails")]
    [Authorize(Policy = "Permission1")]
    public async Task<IActionResult> GetAllDashboardDetailsAsync()
    {
        // Your logic for fetching all dashboard details goes here
        await Task.CompletedTask; // placeholder for the real asynchronous work
        return Ok();
    }

    [HttpGet]
    [Route("create-product")]
    [Authorize(Policy = "Permission2")]
    //[Authorize(Policy = Permissions.CreateProduct)] // the policy name can also come from a constant or enum, as sketched below
    public async Task<IActionResult> CreateProductAsync([FromBody] Product product)
    {
        // Your logic for creating a new product goes here
        await Task.CompletedTask; // placeholder for the real asynchronous work
        return Ok(product);
    }
}
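
As hinted by the commented-out attribute above, you can avoid scattering raw strings such as "Permission1" by centralizing the permission keys and registering the policies in a loop. The sketch below assumes that convention; the Permissions class and its member names are hypothetical, not part of the article's sample.
// Hypothetical central list of permission keys that double as policy names.
public static class Permissions
{
    public const string ViewDashboard = "Permission1";
    public const string CreateProduct = "Permission2";

    public static readonly string[] All = { ViewDashboard, CreateProduct };
}

// In Program.cs, register one policy per permission key instead of listing each by hand.
builder.Services.AddAuthorization(options =>
{
    foreach (var permission in Permissions.All)
    {
        options.AddPolicy(permission, policy =>
            policy.Requirements.Add(new PermissionAuthorizationRequirement(permission)));
    }
});

// An action then references the constant instead of a raw string:
// [Authorize(Policy = Permissions.CreateProduct)]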

Now, if you want to combine role-based authorization with the above in your Web API code, follow the steps below (or skip this section). The steps are almost the same as for policy-based authorization, with a small difference that you will notice in the code.

1. Register the RoleAuthorizationHandler: In your Program.cs file, just add the following line to register the RoleAuthorizationHandler
builder.Services.AddScoped<IAuthorizationHandler, RoleAuthorizationHandler>();

2. RoleAuthorizationHandler: Create one more handler. The RoleAuthorizationHandler.cs below checks whether the user has the required roles.
public class RoleAuthorizationHandler : AuthorizationHandler<RolesAuthorizationRequirement>
{
    private readonly IAppManager _appManager;

    public RoleAuthorizationHandler(IAppManager appManager)
    {
        _appManager = appManager;
    }

    protected override async Task HandleRequirementAsync(AuthorizationHandlerContext context, RolesAuthorizationRequirement requirement)
    {
        // Find the preferred_username claim
        Claim claim = context.User.Claims.FirstOrDefault(c => c.Type == "preferred_username");

        if (claim is not null)
        {
            // Get user details
            var userRoles = await _appManager.GetUserRolesAndPermissions(claim.Value);

            // Check if the user's roles match the allowed roles
            var roles = requirement.AllowedRoles;
            if (userRoles.Any(x => roles.Contains(x)))
            {
                context.Succeed(requirement); // User has the required roles
            }
            else
            {
                context.Fail(); // User does not have the required roles
                return;
            }
        }
        await Task.CompletedTask;
    }
}


3. Usage in DashboardController: In your DashboardController, you can use both roles and policies for authorization. For example:
[HttpGet]
[Route("get-dashboardDetails")]
[Authorize(Roles = "Super_Admin, Role_Administrator")] // Can have multiple roles. You can choose either Roles or Policy or both
[Authorize(Policy = "Permission1")] // Can have only one policy per action to accept.
public async Task GetAllDashboardDetailsAsync()
{
    // Your logic for fetching all user details
    return await GetAllDashboardDetailsAsync();
}


That's all. Your Web API will work like magic.
Stay tuned for more learning.




