European ASP.NET 4.5 Hosting BLOG

BLOG about ASP.NET 4, ASP.NET 4.5 Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European ASP.NET Core Hosting - HostForLIFE :: Understanding GUID, UUID, and ULID in C#

clock June 2, 2025 09:11 by author Peter

Ensuring the uniqueness of identifiers across systems, sessions, or records is a crucial challenge when creating distributed systems and databases. Popular methods for creating unique IDs include ULID (Universally Unique Lexicographically Sortable Identifier), UUID (Universally Unique Identifier), and GUID (Globally Unique Identifier).

In this article, you'll discover:

  • What GUID, UUID, and ULID are,
  • Their differences and use cases,
  • How to implement them in C#, and
  • A performance-oriented analysis of their suitability for specific applications.

What are GUID, UUID, and ULID?
1. GUID (Globally Unique Identifier)
A GUID is a 128-bit unique identifier widely used in Microsoft-based systems. While GUID is the term Microsoft uses, it is essentially a UUID (as specified in RFC 4122). GUIDs provide practical uniqueness across distributed systems by relying on either randomness or timestamp-based generation strategies.

Format: GUIDs are typically represented as a 36-character hexadecimal string with hyphens.
XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Example: f47ac10b-58cc-4372-a567-0e02b2c3d479

Key Properties

  • Uniqueness: Globally unique across systems.
  • Randomness: Commonly generated with UUID v4 (random generation).

Primary Use Cases

  • Unique Primary Keys in distributed databases.
  • Globally unique API keys, session tokens, and resource identifiers.
  • Identification of components/resources in Microsoft technologies like COM (.NET Class IDs).

2. UUID (Universally Unique Identifier)
A UUID is an international standard for generating unique 128-bit identifiers, defined under RFC 4122. Conceptually, GUID and UUID are nearly identical, though UUID strictly follows the RFC and is widely used across non-Microsoft frameworks.

Format: UUID adopts the same format as GUID.
XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Example: 550e8400-e29b-41d4-a716-446655440000

Key Properties

  • Cross-platform Compatibility: Supported by programming languages like Python, Java, Go, and Node.js.
  • Determinism: Using UUID v3/v5 allows deterministic generation for the same inputs.

Primary Use Cases

  • Unique IDs in distributed services and databases.
  • Assigning identifiers to resources in cross-platform systems.
  • APIs and microservices that need universal identifiers.

3. ULID (Universally Unique Lexicographically Sortable Identifier)
A ULID is a 128-bit identifier designed to overcome the shortcomings of GUID/UUID in sorting scenarios. ULIDs embed timestamp information into the identifier, making them lexicographically sortable (i.e., they can be sorted naturally based on their textual representations).

Format: Base32-encoded string without special characters.

01GZHT44KMWWT5V2Q4RQ6P8VWT

First 48 bits: Millisecond timestamp (ensures natural ordering).
Last 80 bits: Random entropy.

Key Properties

  • Ordered: Unlike GUID/UUID, ULIDs are naturally sortable because the timestamp is embedded upfront.
  • Readable: Fewer characters than UUID (Base32 encoding instead of Base16/Hex).

Primary Use Cases

  • Log and Event Tracking: Create lexicographically ordered event logs.
  • High-Frequency Inserts: Reduce database index fragmentation compared to GUIDs/UUIDs.
  • Human-readable, unique IDs.

Comparing GUID, UUID, and ULID: Features and Suitability

| Aspect | GUID | UUID | ULID |
| Bit Size | 128 bits | 128 bits | 128 bits |
| Encoding | Hexadecimal | Hexadecimal | Base32 |
| Sorting | Not sortable | Not sortable | Lexicographically sortable |
| Contains Timestamp? | Optional | Optional | Yes |
| Write Performance | High index fragmentation | High index fragmentation | Low fragmentation (sequential IDs) |
| Primary Use | Microsoft systems | Cross-platform, APIs, databases | Logging, time-ordered systems |

Use Cases for GUID, UUID, and ULID
When to Use GUID

  • Microsoft Ecosystems: COM components, .NET assemblies, and Azure Services utilize GUIDs extensively.
  • Distributed Databases: Ensures unique keys even when records are written independently across systems.
  • Session Tracking: Use a GUID to assign globally unique session IDs.

When to Use UUID

  • Cross-Platform Compatibility: Works across distributed applications and languages like Python, Java, .NET.
  • APIs and Microservices: Generate identifiers for resources shared across multiple systems.
  • Randomized Unique IDs: UUID v4 is ideal for cases requiring uniqueness without predictable patterns.

When to Use ULID

  • Logging Systems: Generate sortable, unique IDs to track events or logs while maintaining a time correlation.

Performance Analysis for Databases

Database Indexing

  • GUID/UUID: IDs generated randomly (e.g., v4) lead to non-sequential inserts in clustered indexes, resulting in index fragmentation.
  • ULID: ULID's sequential nature (timestamp) ensures that inserts are ordered naturally, reducing index fragmentation.

Example Performance Metrics (MySQL/PostgreSQL)

| Metric | GUID/UUID | ULID |
| Insert Speed | Slower (random inserts) | Faster (sequential inserts) |
| Index Fragmentation | High | Low |
| Query Performance | Moderate | Better |
| Storage | 16 bytes per ID | 16 bytes per ID |

Implementing GUID, UUID, and ULID in C#

1. GUID in C#
The System.Guid class provides built-in support for creating GUIDs.
using System;

class Program
{
    static void Main()
    {
        Guid guid = Guid.NewGuid();
        Console.WriteLine($"Generated GUID: {guid}");
    }
}


2. UUID in C#
In .NET (both .NET Core and .NET Framework), System.Guid already acts as a UUID generator (version 4 by default).
using System;

class Program
{
    static void Main()
    {
        // Generate a UUID (same as GUID)
        Guid uuid = Guid.NewGuid();
        Console.WriteLine($"Generated UUID (v4): {uuid}");
    }
}
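The deterministic v3/v5 variants mentioned earlier have no built-in helper on System.Guid, so if you need them you have to compose one yourself. Below is a minimal, hedged sketch of a version 5 (SHA-1, name-based) generator following RFC 4122; the namespace GUID and name in the usage comment are purely illustrative.

using System;
using System.Security.Cryptography;
using System.Text;

static class UuidV5
{
    public static Guid Create(Guid namespaceId, string name)
    {
        // RFC 4122: hash the namespace UUID (in network byte order) followed by the name.
        byte[] nsBytes = namespaceId.ToByteArray();
        SwapByteOrder(nsBytes); // Guid stores its first three fields little-endian
        byte[] nameBytes = Encoding.UTF8.GetBytes(name);

        byte[] hash;
        using (var sha1 = SHA1.Create())
        {
            sha1.TransformBlock(nsBytes, 0, nsBytes.Length, null, 0);
            sha1.TransformFinalBlock(nameBytes, 0, nameBytes.Length);
            hash = sha1.Hash!;
        }

        byte[] result = new byte[16];
        Array.Copy(hash, result, 16);
        result[6] = (byte)((result[6] & 0x0F) | 0x50); // set version to 5
        result[8] = (byte)((result[8] & 0x3F) | 0x80); // set RFC 4122 variant

        SwapByteOrder(result); // back to Guid's internal byte layout
        return new Guid(result);
    }

    private static void SwapByteOrder(byte[] g)
    {
        void Swap(int a, int b) { (g[a], g[b]) = (g[b], g[a]); }
        Swap(0, 3); Swap(1, 2); // Data1
        Swap(4, 5);             // Data2
        Swap(6, 7);             // Data3
    }
}

// Usage: the same namespace + name always yields the same UUID.
// var id = UuidV5.Create(Guid.Parse("6ba7b810-9dad-11d1-80b4-00c04fd430c8"), "example.com");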


3. ULID in C#
The .NET ecosystem does not natively support ULIDs, but libraries such as Ulid (available on NuGet) can be used.
Steps
Install the Ulid package from NuGet
dotnet add package Ulid

Generate ULID
using System;

class Program
{
    static void Main()
    {
        // Generate a ULID
        var ulid = Ulid.NewUlid();
        Console.WriteLine($"Generated ULID: {ulid}");
    }
}
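To see the ordering property in action, the snippet below (written as top-level statements, building on the Ulid package installed above) creates two ULIDs a few milliseconds apart; the later one always sorts after the earlier one. The version of the Ulid package I tried also exposes the embedded timestamp through a Time property, but check the library's API for the version you install.

var first = Ulid.NewUlid();
System.Threading.Thread.Sleep(5);   // ensure the second ULID gets a later millisecond timestamp
var second = Ulid.NewUlid();

// Lexicographic (string) order matches creation order.
Console.WriteLine(string.CompareOrdinal(first.ToString(), second.ToString()) < 0); // True

// The creation time is recoverable from the first 48 bits.
Console.WriteLine(first.Time);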

Best Practices
GUID/UUID

  • Avoid using GUIDs/UUIDs as a primary key in clustered indexes unless uniqueness is more critical than performance.
  • Where possible, consider sequential surrogate keys (for example, integer identity columns) for better index performance.

ULID

  • Use ULIDs in time-sensitive applications or logs.
  • Ideal for high-concurrency, high-volume systems like IoT devices or analytics workloads.

Conclusion
GUID, UUID, and ULID each offer distinct advantages based on the application's requirements. If you're developing in the Microsoft ecosystem or need universal uniqueness, GUIDs/UUIDs are excellent. If your application requires ordering, low index fragmentation, and better performance for high-insert workloads, ULID is a superior option.
By understanding the strengths and limitations of each, you can optimize your systems for unique identifier performance, scalability, and order requirements!

HostForLIFE ASP.NET Core Hosting

European Best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 Recommended Windows and ASP.NET hosting in the European Continent, with a 99.99% uptime guarantee of reliability, stability and performance. The HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



European ASP.NET Core Hosting - HostForLIFE :: An Example of a .NET core MAUI with a SQLite Database Login Page

clock May 26, 2025 09:11 by author Peter

We'll talk about creating a basic SQLite database application with a login page, using .NET MAUI (Multi-platform App UI). The database logic is implemented in a SQLite helper class. First, we'll use Visual Studio 2022 to build a .NET MAUI App project, which produces a single home page by default.

Following that, some basic login and registration XAML pages will be created. The SQLiteHelper class for database access will then be created. Install the Microsoft.Data.Sqlite package from NuGet.

using Microsoft.Data.Sqlite;

public class SQLiteHelper
{
    private string dbPath;
    public SQLiteHelper()
    {
        dbPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "users.db");
        InitializeDatabase();
    }
    private void InitializeDatabase()
    {
        using var connection = new SqliteConnection($"Data Source={dbPath}");
        connection.Open();
        var command = connection.CreateCommand();
        command.CommandText =
        @"
            CREATE TABLE IF NOT EXISTS Users (
                Id INTEGER PRIMARY KEY AUTOINCREMENT,
                Username TEXT NOT NULL,
                Password TEXT NOT NULL
            );
        ";
        command.ExecuteNonQuery();
    }

    // Used by LoginPage below: checks whether a matching user row exists.
    public bool ValidateUser(string username, string password)
    {
        using var connection = new SqliteConnection($"Data Source={dbPath}");
        connection.Open();
        var command = connection.CreateCommand();
        command.CommandText = "SELECT COUNT(*) FROM Users WHERE Username = $username AND Password = $password";
        command.Parameters.AddWithValue("$username", username);
        command.Parameters.AddWithValue("$password", password);
        return Convert.ToInt32(command.ExecuteScalar()) > 0;
    }
}


The login page example is given below.
public partial class LoginPage : ContentPage
{
    private SQLiteHelper dbHelper = new();
    public LoginPage()
    {
        InitializeComponent();
    }
    private async void OnLoginClicked(object sender, EventArgs e)
    {
        if (dbHelper.ValidateUser(UsernameEntry.Text, PasswordEntry.Text))
            await DisplayAlert("Success", "Login successful!", "OK");
        else
            await DisplayAlert("Error", "Invalid credentials.", "OK");
    }
    private async void OnGoToRegisterClicked(object sender, EventArgs e)
    {
        await Navigation.PushAsync(new RegisterPage());
    }
}
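The registration page referenced above is not shown in the original post. A minimal sketch might look like the following, with a hypothetical RegisterUser helper added to SQLiteHelper and the assumption that RegisterPage.xaml defines UsernameEntry and PasswordEntry controls.

// Hypothetical helper to add to SQLiteHelper (inserts a new row into Users):
public void RegisterUser(string username, string password)
{
    using var connection = new SqliteConnection($"Data Source={dbPath}");
    connection.Open();
    var command = connection.CreateCommand();
    command.CommandText = "INSERT INTO Users (Username, Password) VALUES ($username, $password)";
    command.Parameters.AddWithValue("$username", username);
    command.Parameters.AddWithValue("$password", password);
    command.ExecuteNonQuery();
}

// Minimal RegisterPage code-behind:
public partial class RegisterPage : ContentPage
{
    private SQLiteHelper dbHelper = new();

    public RegisterPage()
    {
        InitializeComponent();
    }

    private async void OnRegisterClicked(object sender, EventArgs e)
    {
        dbHelper.RegisterUser(UsernameEntry.Text, PasswordEntry.Text);
        await DisplayAlert("Success", "User registered!", "OK");
        await Navigation.PopAsync();
    }
}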

In App.xaml.cs, set the login page as the main page (the launch view), as shown in the example below.
namespace MauiApp1
{
    public partial class App : Application
    {
        public App()
        {
            InitializeComponent();

            //MainPage = new AppShell();
            MainPage = new NavigationPage(new LoginPage());
        }
    }
}






European ASP.NET Core Hosting - HostForLIFE :: Using C# 13 to Create a "Pooled" Dependency Injection Lifetime

clock May 22, 2025 09:14 by author Peter

A key component of contemporary ASP.NET Core application architecture is dependency injection (DI). It supports the notion of Inversion of Control (IoC), which facilitates better testability, simpler code maintenance, and a clear division of responsibilities. ASP.NET Core's lightweight, extensible container supports three common service lifetimes:

  • Transient: An instance of the service is created every time it is requested. This is suitable for lightweight, stateless applications.
  • Scoped: A single instance is created per HTTP request or per scope. This is ideal for services that maintain state throughout a single operation.
  • Singleton: A single instance is created and shared for the application's lifetime. This is suited to stateless services, services that are expensive to instantiate, or services that manage shared state safely across threads.

These durations are adequate for the majority of situations, but you might require a hybrid behavior—a service that does not survive for the full application lifetime but avoids frequent allocations like a singleton. This is especially important in high-throughput systems where strict control over garbage collection pressure, memory allocation, and performance is required.

To meet this requirement, .NET offers object pooling through the ObjectPool and ObjectPoolProvider APIs in the Microsoft.Extensions.ObjectPool namespace. By keeping a pool of pre-allocated objects, object pooling enables you to reuse object instances effectively, lowering memory load and enhancing performance for frequently used resources.

In this article, we’ll go beyond the default service lifetimes and demonstrate how to implement a custom "Pooled" lifetime registration. We’ll explore how to integrate ObjectPool<T> with the ASP.NET Core DI system, taking advantage of new features in C# 13 and .NET 9 to build a robust, performant, and reusable solution. You'll learn how to:

  • Utilize object pooling to define a custom service lifetime.
  • Cleanly register and configure pooled services.
  • Follow best practices for pooling in multithreaded environments.
  • Avoid common pitfalls such as thread-safety issues and misuse of pooled objects.
By the end of this article, you will be able to create performant services with hybrid lifetime behavior that bridge the gap between transient and singleton design patterns.

When to Use Object Pooling?

When creating and destroying objects frequently would result in excessive resource consumption and memory pressure, object pooling is an effective performance optimisation technique. We maintain a pool of reusable objects and serve them on demand instead of instantiating new objects repeatedly.

When the following conditions are met, use object pooling:
  • Object creation is expensive: A pool can significantly reduce CPU and memory overhead if an object involves non-trivial setup (e.g., allocating large arrays, loading configuration, or initialising resources).
  • Objects are short-lived and used frequently: Pooling can prevent constant allocation and garbage collection cycles when a particular type is repeatedly required during the application's lifespan, but only for short bursts (e.g., during each request, batch operation, or parsing cycle).
  • Objects that are thread-safe or can be reset easily: The objects should be stateless, thread-safe, or able to be safely reset to a clean state before reuse in order to ensure consistency and prevent unpredictable behavior.
Common Real-World Examples
Object pooling is highly effective in the following use cases:
  • StringBuilder instances: Creating new StringBuilder instances each time can be wasteful, especially in tight loops and logging.
  • Memory buffers (e.g., byte[] arrays): Network I/O, file I/O, and serialization rely heavily on buffers; reusing them reduces GC pressure and maintains throughput.
  • Parsers or serializers: When handling data streams or messages repeatedly, objects such as JsonSerializer, XmlReader, or custom parsers can benefit from pooling.
Best Practices
  • Reset before reuse: When returning objects to the pool, ensure they are reset to a known state before reuse. This prevents data leaks.
  • Avoid pooling complex dependencies: Objects with deep dependency trees or significant shared states should not be pooled unless explicitly designed to be so.
  • Benchmark before adopting: To validate an object pool's benefits, measure performance before and after introducing one (a small sketch follows below).
In the right context, especially for high-throughput, memory-sensitive applications, object pooling can yield significant performance gains with minimal trade-offs.
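To follow the benchmarking advice above, a minimal BenchmarkDotNet sketch might look like the following. It compares allocating a fresh StringBuilder on every call against renting one from an ObjectPool<StringBuilder> created by the built-in CreateStringBuilderPool() helper; the class name and workload are illustrative, and you would run it with BenchmarkRunner.

using System.Text;
using BenchmarkDotNet.Attributes;
using Microsoft.Extensions.ObjectPool;

[MemoryDiagnoser]
public class PoolingBenchmark
{
    // The built-in StringBuilder policy clears the builder when it is returned.
    private readonly ObjectPool<StringBuilder> _pool =
        new DefaultObjectPoolProvider().CreateStringBuilderPool();

    [Benchmark(Baseline = true)]
    public string NewEveryTime() =>
        new StringBuilder().Append("hello").Append(' ').Append("world").ToString();

    [Benchmark]
    public string Pooled()
    {
        var sb = _pool.Get();
        try
        {
            return sb.Append("hello").Append(' ').Append("world").ToString();
        }
        finally
        {
            _pool.Return(sb);
        }
    }
}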

Step-by-Step: Implementing Object Pooling in ASP.NET Core with DI
The use of object pooling is a proven strategy for reducing memory allocations and garbage collection overhead in .NET applications. Let's walk through how to set up and use object pooling in a clean, reusable manner using Microsoft.Extensions.ObjectPool.

Step 1: Install the Required NuGet Package

It is necessary to install Microsoft's official object pooling library before you can get started:
dotnet add package Microsoft.Extensions.ObjectPool

or if using NuGet Package Manager Console
NuGet\Install-Package Microsoft.Extensions.ObjectPool

In .NET applications, this package provides the foundational interfaces and default implementations for managing object pools.

Step 2: Define the Pooled Service
Suppose we have a service that performs intensive in-memory string operations using StringBuilder, which is expensive to allocate repeatedly.
using System.Text;

namespace PooledDI.Core.Services;
public class StringProcessor
{
    private readonly StringBuilder _builder = new();

    public string Process(string input)
    {
        _builder.Clear();
        _builder.Append(input.ToUpperInvariant());
        return _builder.ToString();
    }
}


Why Pool It?
When the StringBuilder object is used repeatedly at scale, internal buffers are allocated, which can be expensive. Pooling StringProcessor reduces these allocations and improves performance.

Step 3: Create a Custom PooledObjectPolicy<T>
An object pool must know how to create and reset objects. This is achieved through a custom PooledObjectPolicy<T> implementation:
using Microsoft.Extensions.ObjectPool;
using PooledDI.Core.Services;

namespace PooledDI.Core.Policies;

public class StringProcessorPolicy : PooledObjectPolicy<StringProcessor>
{
    public override StringProcessor Create() => new StringProcessor();

    public override bool Return(StringProcessor obj)
    {
        // StringProcessor clears its internal StringBuilder at the start of Process,
        // so no extra reset is required here; returning true keeps the object in the pool.
        return true;
    }
}


Best Practice
Ensure sensitive or inconsistent data is cleaned or reset before returning an object to the pool.

Step 4: Register the Pooled Service with Dependency Injection
Although ASP.NET Core's built-in DI container doesn't provide a "pooled" lifetime, we can achieve the same effect by injecting an ObjectPool<T>.

In order to encapsulate the logic of registration, create an extension method as follows:
using Microsoft.Extensions.ObjectPool;

namespace PooledDI.Api.Services;

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddPooled<TService, TPolicy>(this IServiceCollection services)
        where TService : class
        where TPolicy : PooledObjectPolicy<TService>, new()
    {
        services.AddSingleton<ObjectPoolProvider, DefaultObjectPoolProvider>();
        services.AddSingleton<ObjectPool<TService>>(sp =>
        {
            var provider = sp.GetRequiredService<ObjectPoolProvider>();
            return provider.Create(new TPolicy());
        });

        return services;
    }
}


Register your pooled service in Program.cs  as follows:
builder.Services.AddPooled<StringProcessor, StringProcessorPolicy>();

Best Practice

Create a singleton pool to ensure consistent reuse across requests, while consumers can be scoped or transient.

Step 5: Consume the Pooled Service
Inject the pool into the services that need it, then call Get() to fetch an object and Return() to put it back in the pool.
using Microsoft.Extensions.ObjectPool;

namespace PooledDI.Core.Services;

public class ProcessingService
{
    private readonly ObjectPool<StringProcessor> _pool;

    public ProcessingService(ObjectPool<StringProcessor> pool)
    {
        _pool = pool;
    }

    public string Execute(string input)
    {
        var processor = _pool.Get();

        try
        {
            return processor.Process(input);
        }
        finally
        {
            _pool.Return(processor);
        }
    }
}


Last but not least, register the service that consumes the data:
builder.Services.AddScoped<ProcessingService>();

Best Practice
When an exception occurs, always use a try-finally block to ensure the object is returned to the pool.

With just a few steps, you've implemented efficient object pooling.

In ASP.NET Core, Microsoft.Extensions.ObjectPool provides:
  • Reduced allocations for heavy or frequently used services.
  • A clean, extensible integration pattern with the DI container.
  • Improved throughput for performance-sensitive applications.
Best Practices for Using Object Pooling in .NET
When implemented thoughtfully and correctly, object pooling can yield significant performance improvements. To ensure your implementation is safe, efficient, and maintainable, follow these best practices:

Avoid Mutable Shared State
Why it Matters
If the object's internal state is not reset, it can lead to data leaks, race conditions, and unpredictable behavior.

Best Practice
If the object maintains state (such as a StringBuilder or buffer), clear or reset it explicitly before returning it to the pool.
using System.Text;

namespace PooledDI.Core.Services;

public sealed class ZiggyProcessor
{
    private readonly StringBuilder _sb = new();

    public void Append(string value) => _sb.Append(value);

    public string Finish()
    {
        var result = _sb.ToString();
        _sb.Clear();
        return result;
    }

    internal void Reset() => _sb.Clear();
}


Use Policies to Reset Objects Cleanly
Why it Matters
Pooled object policies give you the ability to control how objects are cleaned up before reuse.

Best Practice
Finish() should implement reset logic to ensure the object is returned to a safe, known state.
public string Finish()
{
    var result = _sb.ToString();
    _sb.Clear();
    return result;
}


Pool Only Performance-Critical or Expensive Objects
Why it Matters
Using pooling for cheap, lightweight objects may actually hurt performance due to its complexity and overhead.

Best Practice
Limit object pooling to high-cost objects that are instantiated frequently, such as large buffers, parsers, serializers, or reusable builders. Don't pool trivial data types or objects that are rarely used.

Ensure Thread Safety

Why it Matters
A pooled object that isn't thread-safe can cause race conditions or data corruption if multiple threads access it concurrently.

Best Practice
  • In most cases, pooled objects should be used in single-threaded, isolated scopes (e.g., for HTTP requests).
  • Ensure that shared objects are thread-safe or use locking mechanisms carefully if they must be shared across threads.
Use DefaultObjectPoolProvider
Why it Matters
Due to its efficient internal data structures, the DefaultObjectPoolProvider is optimized for high-throughput scenarios.

Best Practice
Use the DefaultObjectPoolProvider unless you have a very specific requirement for a custom implementation. It provides excellent performance for typical workloads out of the box.
builder.Services.AddSingleton<ObjectPoolProvider, DefaultObjectPoolProvider>();

Bonus Tips
  • Benchmark: Verify the performance gains of your application before and after introducing object pooling.
  • Monitor: Rethink the pooling strategy if pooled objects are rarely reused or leak memory.
  • Profile: Use performance profilers like dotTrace or PerfView to understand hotspots in object allocation.

In ASP.NET Core applications, you can safely integrate object pooling to optimize resource utilization, reduce garbage collection pressure, and improve throughput by adhering to these best practices.

Advanced Technique: Pooling Services That Implement Interfaces
In real-world applications, services are often registered by interface to provide abstraction, testability, and flexibility. But how can object pooling be integrated into this model?

You will learn how to wrap pooled objects behind an interface, enabling clean dependency injection while still benefiting from reuse and memory efficiency.

Step 1: Define a Service Interface

Defining an interface for your service contract is the first step:
namespace PooledDI.Core.Interfaces;
public interface IStringProcessor
{
    string Process(string input);
}

The interface allows you to inject the service rather than a concrete class, which is ideal for unit testing and clean coding.

Step 2: Create a Wrapper That Uses Object Pooling

A class implementing the desired interface needs to wrap the pool access logic since object pooling manages a concrete type (e.g., StringProcessor).
using Microsoft.Extensions.ObjectPool;
using PooledDI.Core.Interfaces;

namespace PooledDI.Core.Services;

public class PooledStringProcessor : IStringProcessor
{
    private readonly ObjectPool<StringProcessor> _pool;

    public PooledStringProcessor(ObjectPool<StringProcessor> pool)
    {
        _pool = pool;
    }

    public string Process(string input)
    {
        var processor = _pool.Get();
        try
        {
            return processor.Process(input);
        }
        finally
        {
            _pool.Return(processor);
        }
    }
}

Best Practice
If an exception occurs, always wrap pooled object access in a try-finally block to ensure it is returned to the pool.

Step 3: Register the Interface Wrapper with DI
The wrapper implementation should be registered as the concrete type for your interface:
builder.Services.AddScoped<IStringProcessor, PooledStringProcessor>();

The pool itself should also be registered using the utility method you used earlier or manually:
builder.Services.AddPooled<StringProcessor, StringProcessorPolicy>();


Why This Matters
  • Testability: Your classes depend on IStringProcessor instead of the pooled implementation, making them easier to test.
  • Encapsulation: The pooling logic is hidden behind the wrapper, so consumers remain unaware of it.
  • Reuse with Safety: A wrapper ensures that pooled objects are properly managed throughout their lifecycle.
Optional: Factory-Based Approach for Complex Cases
For most scenarios, the wrapper approach shown above is sufficient. If you need to manage more than one type of pooled object or introduce lazy resolution, you can inject a factory or delegate for the interface instead, as sketched below.
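A hedged sketch of that factory-based registration is shown below; it assumes the pool and wrapper types from the earlier steps (and their namespaces imported in Program.cs) and simply hands consumers a Func<IStringProcessor> they can invoke on demand.

builder.Services.AddSingleton<Func<IStringProcessor>>(sp =>
{
    var pool = sp.GetRequiredService<ObjectPool<StringProcessor>>();
    // Each call hands back a lightweight wrapper over the shared pool.
    return () => new PooledStringProcessor(pool);
});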

For scalable enterprise .NET applications, wrapping pooled objects behind interfaces maintains clean architecture, testability, and performance.

Summary

With the introduction of modern language features in C# 13 and the continued evolution of .NET 9, developing memory-efficient, high-performance applications has never been simpler. The built-in Dependency Injection (DI) container in ASP.NET Core does not natively support a "pooled" object lifetime, but the Microsoft.Extensions.ObjectPool package fills that void.

You can benefit from object pooling by integrating it into your service architecture in the following ways:
  • Reduce memory allocations for expensive or frequently used objects.
  • Improve throughput and responsiveness for performance-critical workloads.
  • Maintain control over the object lifecycle, ensuring safe reusability through PooledObjectPolicy<T>.
This technique excels in a number of scenarios, including:
  • Manipulating strings with StringBuilder
  • In-memory data transformation
  • Custom parsers and serializers
  • Computational helpers or reusable buffers
In conjunction with solid architectural patterns (such as DI and interface-based design), object pooling can become a powerful optimisation tool.

Final Thoughts
  • Consider pooling for objects that are expensive, short-lived, and reusable.
  • Reset and cleanup logic should always be implemented properly.
  • For clean, testable code, wrap pooled services behind interfaces.
  • Reuse objects efficiently and thread-safely with DefaultObjectPoolProvider.
With these principles, you can build highly efficient .NET applications that scale gracefully under load without sacrificing code clarity or maintainability. I have uploaded the code to my GitHub Repository. If you have found this article useful please click the like button.




European ASP.NET Core Hosting - HostForLIFE :: How to Fix v3 API Downtime Problems with NuGet Package Manager?

clock May 14, 2025 07:57 by author Peter

If you're working with NuGet in Visual Studio and encountering issues due to the NuGet v3 API being temporarily unavailable, you're not alone. Many developers have experienced downtime with the NuGet v3 API, leading to errors and hindering package management workflows. Fortunately, there is a quick workaround to resolve these issues.

Recognizing the Issue
Installing, updating, and managing third-party libraries and tools in projects is made simple for developers by NuGet, a well-liked package manager for .NET. One of the main resources for managing and retrieving these packages is the NuGet v3 API. However, developers may encounter challenges when trying to restore or manage packages because of sporadic outages or connectivity problems with the NuGet v3 API.

A timeout or an inability to retrieve the required resources from the v3 API are common ways that the issue appears. When you're in the thick of development and require access to particular packages, this becomes really challenging.

Steps to Switch to NuGet v2 API

  • Open Visual Studio
    • Launch Visual Studio, the IDE you are using for your .NET projects.
  • Navigate to NuGet Package Manager Settings
    • Go to Tools in the top menu.
    • Select NuGet Package Manager.
    • Choose Package Manager Settings.
  • Change the Package Source URL
    • In the settings window, go to Package Sources.
    • You'll see the default NuGet source listed as https://api.nuget.org/v3/index.json.
    • Change this URL to https://www.nuget.org/api/v2/ to switch to the v2 API.
  • Save and Close
    • After updating the URL, click OK to save your settings.
  • Rebuild Your Project
    • Clean your project and rebuild it. This will allow NuGet to start using the v2 API to restore and manage packages.

Once these steps are completed, NuGet will automatically use the v2 API, bypassing the downtime issues caused by the v3 API.
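If you prefer the command line over the Visual Studio dialog, the same source change can be made with the dotnet CLI; the source name below is just an example.

dotnet nuget add source https://www.nuget.org/api/v2/ --name "nuget.org (v2)"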

Why Switch to NuGet v2 API?

The v2 API is older but still very reliable for managing packages. It allows for smoother transitions in cases of downtime, ensuring that your workflow remains uninterrupted. By using the v2 API, you can avoid the issues caused by API unavailability and continue your development efforts.

Additional Tips

  • Clear the NuGet Cache: If you face persistent issues even after switching the source, clearing the NuGet cache might help. This ensures that NuGet doesn’t use any outdated or corrupted cached data.
  • To clear the cache, go to:
    • Tools -> NuGet Package Manager -> Package Manager Settings -> Clear All NuGet Cache(s)
  • Check NuGet Status: Keep an eye on the official NuGet status page to see when the v3 API is back online. The NuGet team regularly updates the page with the status of their API services.
  • Revert Back to v3 Once Restored: Once the v3 API is back up and running, you can switch the URL back to the default v3 URL to take advantage of its enhanced features, such as better performance and newer functionalities.

Conclusion
Package management for your project may come to a complete stop if the NuGet v3 API goes down. However, you can carry on with your development without any disruptions if you swiftly transition to the v2 API as a temporary fix. This straightforward procedure guarantees that your workflow is unaffected while you wait for the v3 API to reactivate. To reduce interruptions to your development process, always maintain your NuGet settings current and monitor the state of the NuGet services.




European ASP.NET Core Hosting - HostForLIFE :: Constructing a Secure SQL Injection Test Form with C# and ASP.NET

clock May 5, 2025 08:50 by author Peter

This blog post will discuss how to use C# and ASP.NET Web Forms to create a secure SQL injection testing website. This project is perfect for implementing input sanitization, error handling, and parameterized queries as well as learning about fundamental SQL injection protection measures.

Technologies Used

  • ASP.NET Web Forms (ASPX)
  • C# (Code-Behind)
  • SQL Server
  • ADO.NET with SqlHelper (Application Block)
  • Bootstrap 5 (Frontend UI)

Goal of This Application

  • Provide a login form that is intentionally structured to test SQL injection patterns.
  • Detect and block malicious inputs from both query string and form fields.
  • Log all suspicious activity.
  • Redirect users to a custom error page when SQL injection attempts are detected.

1. ASPX Page Code (Frontend Form)
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="SqlInjection.aspx.cs" Inherits="TaskPractices.SqlInjection" %>

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>SQL Injection Test</title>
    <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" />

    <style>
        body {
            background-color: #f8f9fa;
            padding: 50px;
        }

        .container {
            max-width: 500px;
            margin: auto;
        }

        .warning {
            color: red;
            font-size: 0.9rem;
        }
    </style>
</head>
<body>
    <div class="container border p-4 bg-white rounded shadow">
        <h3 class="text-center mb-4">SQL Injection Test Form</h3>
        <form runat="server">
            <div class="mb-3">
                <label for="username" class="form-label">Username</label>
                <input type="text" class="form-control" id="username" name="username" placeholder="Enter username" runat="server" />
            </div>

            <div class="mb-3">
                <label for="password" class="form-label">Password</label>
                <input type="password" class="form-control" id="password" name="password" placeholder="Enter password" runat="server" />
            </div>

            <asp:Button class="btn btn-primary w-100" Text="Login" ID="loginbtn" OnClick="loginbtn_Click" runat="server" />

            <p class="warning mt-3">
                Warning: This form is for testing purposes only. Ensure the backend uses parameterized queries.
            </p>
        </form>
    </div>
</body>
</html>

2. Code-Behind: SQL Injection Detection and Login Logic

using AppBlock;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using TaskPractices.Allkindoflog;

namespace TaskPractices
{
    public partial class SqlInjection : System.Web.UI.Page
    {
        public string[] BlackList = new string[]
        {
            "@@", "/*", "*/", "function", "truncate ", "alter", "begin", "create", "cursor",
            "delete ", "exec", "<script>", "</script>", "script", "execute", "fetch", "insert ",
            "kill", "drop", "sysobjects", "syscolumns", "update ", "document.cookie", "'", ":",
            "--", "%", "=", " or ", ">", "<", "exec(", " del", "chr", "asc", "update "
        };

        public string[] chars = new string[]
        {
            "@@", "/*", "*/", "function", "truncate ", "alter", "begin", "create", "cursor",
            "delete ", "exec", "<script>", "</script>", "script", "execute", "fetch", "insert ",
            "kill", "drop ", "sysobjects", "syscolumns", "update ", "document.cookie", "'", ":",
            " or ", ">", "<", "exec(", " del", "chr", "asc", "update "
        };

        public string strURLRewrited = "";
        string sqlcon = ConfigurationManager.ConnectionStrings["Sqlconnection"].ToString();

        protected void Page_Load(object sender, EventArgs e)
        {
            strURLRewrited = Request.RawUrl;
            sqlInjection1();
            RemoveSpecialChars(strURLRewrited);
        }

        private bool CheckStringForSQL(string pStr)
        {
            if (string.IsNullOrEmpty(pStr))
                return false;

            string lstr = pStr.ToLower();

            // Use a substring match rather than whole-string equality so that
            // blacklisted tokens embedded in longer input are also detected.
            foreach (string item in BlackList)
            {
                if (lstr.Contains(item.ToLower()))
                    return true;
            }

            return false;
        }

        public void sqlInjection1()
        {
            try
            {
                string ErrorPage = "/ErrorPage/ErrorPage-Demo1.aspx";

                // Form data check
                for (int i = 0; i < Request.Form.Count; i++)
                {
                    if (CheckStringForSQL(Request.Form[i]))
                    {
                        Log.errorlog(Request.Form[i], strURLRewrited);
                        Response.Redirect(ErrorPage);
                    }
                }

                // Query string check
                for (int i = 0; i < Request.QueryString.Count; i++)
                {
                    if (CheckStringForSQL(Request.QueryString[i]))
                    {
                        Log.errorlog(Request.QueryString[i], strURLRewrited);
                        Response.Redirect(ErrorPage);
                    }
                }
            }
            catch (Exception ex)
            {
                Response.Write(ex.Message);
            }
        }

        public void RemoveSpecialChars(string str)
        {
            foreach (string c in chars)
            {
                if (str.Contains(c))
                {
                    Log.errorlog(str, strURLRewrited);
                    Response.Redirect("/ErrorPage/ErrorPage-Demo1.aspx");
                }
            }
        }

        protected void loginbtn_Click(object sender, EventArgs e)
        {
            DataSet ds = new DataSet();

            try
            {
                SqlParameter[] param = {
                    new SqlParameter("@PAN", username.Value.Trim()),
                    new SqlParameter("@EMAIL", password.Value.Trim())
                };

                string sql = "SELECT * FROM ClientData (NOLOCK) WHERE PAN=@PAN AND EMAIL=@EMAIL";
                ds = SqlHelper.ExecuteDataset(sqlcon, CommandType.Text, sql, param);

                if (ds != null && ds.Tables[0].Rows.Count > 0)
                {
                    HttpContext.Current.Session["ClientCode"] = ds.Tables[0].Rows[0]["ClientId"].ToString().Trim();
                    HttpContext.Current.Session["ClientFname"] = ds.Tables[0].Rows[0]["Name"].ToString().Trim();
                    HttpContext.Current.Session["Pan"] = ds.Tables[0].Rows[0]["PAN"].ToString().Trim();
                    HttpContext.Current.Session["Email"] = ds.Tables[0].Rows[0]["EMAIL"].ToString().Trim();

                    ScriptManager.RegisterStartupScript(this, typeof(string), "Message", "alert('Login successfully');", true);
                }
                else
                {
                    ScriptManager.RegisterStartupScript(this, typeof(string), "Message", "alert('User Not Exists !');", true);
                }
            }
            catch (Exception ex)
            {
                ScriptManager.RegisterStartupScript(this, typeof(string), "Message", $"alert('{ex.Message}');", true);
            }
        }
    }
}

Key Takeaways

  • Always use parameterized queries instead of string concatenation to prevent SQL injection.
  • Implement input sanitization and validation on both the server and client sides.
  • Maintain a blacklist of harmful SQL keywords to filter user input.
  • Redirect to custom error pages and log malicious attempts for analysis.

Improvements You Can Add

  • Use a whitelist approach for known safe characters (see the sketch after this list).
  • Integrate logging with tools like ELMAH or Serilog.
  • Use Stored Procedures instead of inline queries for extra safety.
  • Replace hard-coded blacklists with centralized config-based filters.
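As an illustration of the whitelist idea from the first improvement above, here is a small hypothetical helper (the class name and the allowed character set are assumptions, not part of the original project) that accepts only characters you explicitly permit:

using System.Text.RegularExpressions;

public static class InputValidator
{
    // Allow letters, digits and a few safe punctuation characters; reject everything else.
    private static readonly Regex Allowed = new Regex(@"^[a-zA-Z0-9@._\- ]+$", RegexOptions.Compiled);

    public static bool IsSafe(string input)
    {
        return !string.IsNullOrWhiteSpace(input) && Allowed.IsMatch(input);
    }
}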

Conclusion
This project helps demonstrate SQL injection defense in a hands-on way using ASP.NET. It's a great way to test and validate your security practices while building safe and user-friendly forms.





European ASP.NET Core Hosting - HostForLIFE :: The Greatest Option for Reactive Systems in C# and .NET 9 is Akka.NET

clock April 28, 2025 08:46 by author Peter

In a world where software needs to be message-driven, elastic, resilient, and responsive, traditional thread-based and monolithic designs frequently break under complexity or stress. Reactive systems excel in this situation, and Akka.NET makes it not only feasible but also pleasurable to design them in C# with .NET 9. Originally created for the JVM and later ported to .NET, Akka.NET is an open-source toolkit built on the actor paradigm. It makes it simple to create distributed, concurrent, and fault-tolerant systems.

This post will explain why Akka.NET is the greatest option for developing reactive systems in.NET 9 and provide a step-by-step demonstration of a practical example.

Why Choose Akka.net for Reactive Systems?
Akka.NET is an open-source .NET framework that implements the actor model — a proven approach to building concurrent systems. Each actor is an independent unit that processes messages asynchronously and maintains its own internal state.

Core Features of Akka.NET

  • Asynchronous by default: Actors process messages concurrently without locking.
  • Resilient architecture: Supervision strategies handle failures gracefully (see the sketch after this list).
  • Scalable: Easily scale horizontally using Akka.Cluster and Cluster.Sharding.
  • Decoupled components: Built on message passing, not shared state.
  • Built-in support for persistence, routing, and remote messaging.
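To make the resilience point concrete, here is a minimal sketch (not part of the walkthrough below) of how a parent actor could override its supervision strategy; the retry count, time window, and exception type are illustrative.

using System;
using Akka.Actor;

public class BankSupervisor : ReceiveActor
{
    // Restart a failing child up to 3 times within 10 seconds for arithmetic errors,
    // and stop it for any other exception.
    protected override SupervisorStrategy SupervisorStrategy()
    {
        return new OneForOneStrategy(
            3,
            TimeSpan.FromSeconds(10),
            ex => ex is ArithmeticException ? Directive.Restart : Directive.Stop);
    }
}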

Akka.NET allows you to build applications that are not only scalable and efficient but also fault-tolerant by design.

Step-by-Step: Building a Reactive System with Akka.NET
Let’s create a simple reactive banking system where:

  • Users can deposit, withdraw, and check their balance.
  • Each operation is handled as a message by an actor.

Step 1. Create a New Console App
dotnet new console -n AkkaReactiveBank
cd AkkaReactiveBank


Step 2. Add Akka.NET NuGet Package
dotnet add package Akka

Step 3. Define Actor Messages
Create a new file BankMessages.cs.
public record Deposit(decimal Amount);
public record Withdraw(decimal Amount);
public record CheckBalance;


These represent the commands sent to the actor.

Step 4. Create the Bank Actor
Create a new file BankActor.cs.
using Akka.Actor;
using System;

public class BankActor : ReceiveActor
{
    private decimal _balance;

    public BankActor()
    {
        Receive<Deposit>(msg => HandleDeposit(msg));
        Receive<Withdraw>(msg => HandleWithdraw(msg));
        Receive<CheckBalance>(_ => HandleCheckBalance());
    }

    private void HandleDeposit(Deposit msg)
    {
        _balance += msg.Amount;
        Console.WriteLine($"[Deposit] Amount: {msg.Amount}, New Balance: {_balance}");
    }

    private void HandleWithdraw(Withdraw msg)
    {
        if (_balance >= msg.Amount)
        {
            _balance -= msg.Amount;
            Console.WriteLine($"[Withdraw] Amount: {msg.Amount}, Remaining Balance: {_balance}");
        }
        else
        {
            Console.WriteLine("[Withdraw] Insufficient funds.");
        }
    }

    private void HandleCheckBalance()
    {
        Console.WriteLine($"[Balance] Current Balance: {_balance}");
    }
}


Step 5. Set Up the Actor System in the Program.cs
Replace the content of the Program.cs.
using Akka.Actor;
class Program
{
    static void Main(string[] args)
    {
        using var system = ActorSystem.Create("BankSystem");

        var bankActor = system.ActorOf<BankActor>("Bank");

        bankActor.Tell(new Deposit(1000));
        bankActor.Tell(new Withdraw(200));
        bankActor.Tell(new CheckBalance());

        Console.ReadLine();
    }
}

Step 6. Run the Application
dotnet run

Output
[Deposit] Amount: 1000, New Balance: 1000
[Withdraw] Amount: 200, Remaining Balance: 800
[Balance] Current Balance: 800


You’ve just created a reactive banking system with Akka.NET in .NET 9.




European ASP.NET Core Hosting - HostForLIFE :: Apps for .NET 9 Run Faster Than Before

clock April 21, 2025 08:49 by author Peter

Microsoft continues to push the limits of what can be done with .NET with every new release, and .NET 9 is no exception. From quicker APIs, faster serialization, and more effective JIT compilation to better memory management, this most recent version offers observable performance gains across the board. This post will examine the main performance improvements in .NET 9, contrast actual code execution on .NET 8 and .NET 9, and demonstrate how upgrading to .NET 9 can significantly improve your apps with little modification.

What's New in .NET 9 Performance?

Some standout performance improvements in .NET 9:

  • Improved JIT Compiler (RyuJIT)
  • Reduced GC (Garbage Collection) Pauses
  • Faster System.Text.Json Serialization/Deserialization
  • Enhanced HTTP/3 and Kestrel Web Server
  • More Efficient Task and Thread Management

Real Programming Comparison: .NET 8 vs .NET 9
Let's take a real example that most applications deal with — JSON serialization and HTTP response via Minimal APIs.

Example. Minimal API returning JSON data
Code (Same for .NET 8 and .NET 9)
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/data", () =>
{
    var data = Enumerable.Range(1, 1000)
        .Select(x => new { Id = x, Name = $"Item {x}" });
    return Results.Ok(data);
});

app.Run();


We’ll test this endpoint using a load test tool (e.g., wrk or Apache Benchmark) to compare .NET 8 and .NET 9.
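For reference, the load could be generated with either tool roughly as follows; the URL and port are placeholders for wherever the app happens to be listening.

ab -n 100000 -c 10 http://localhost:5000/data
wrk -t10 -c10 -d30s http://localhost:5000/data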

Benchmark Result

Environment

  • OS: Windows 11 / Ubuntu 22.04
  • Load Test: 100,000 requests / 10 concurrent threads

| Metric | .NET 8 | .NET 9 | Improvement |
| Requests/sec | 29,200 | 34,500 | +18% |
| Avg Response Time | 12.4 ms | 9.8 ms | -21% |
| Memory Allocations | 2.5 MB | 1.8 MB | -28% |
| CPU Usage (under load) | High | Reduced | |

Another Example: String Parsing Function
Let’s compare a simple string parsing function using BenchmarkDotNet.

Code
public class TextProcessor
{
    public static List<string> ExtractWords(string sentence)
    {
        return sentence
            .Split([' ', ',', '.', '!'], StringSplitOptions.RemoveEmptyEntries)
            .Where(word => word.Length > 3)
            .ToList();
    }
}


Benchmark Test
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
public class BenchmarkTest
{
    private readonly string sentence = "The quick brown fox jumps over the lazy dog. Testing .NET performance!";

    [Benchmark]
    public List<string> RunTest() => TextProcessor.ExtractWords(sentence);
}

Benchmark Result

| Runtime | Mean Time | Allocated Memory |
| .NET 8 | 5.10 µs | 1.65 KB |
| .NET 9 | 4.01 µs | 1.32 KB |

Result: .NET 9 is ~21% faster and uses ~20% less memory.

System.Text.Json Serialization Comparison

Code

using System.Text.Json;

var person = new { Name = "Peter", Age = 30 };
string json = JsonSerializer.Serialize(person);

| Framework | Serialization Time | Memory Allocation |
| .NET 8 | 2.4 µs | 560 B |
| .NET 9 | 1.9 µs | 424 B |

.NET 9 improves System.Text.Json serialization speed and lowers memory usage for large object graphs.

Conclusion
.NET 9 is a performance beast. With smarter memory handling, improved JIT, and optimizations to HTTP servers, JSON serialization, and general computation — your apps will run faster and leaner by default. No major code changes are needed — just upgrade and reap the benefits. Whether you're building APIs, desktop apps, or microservices, .NET 9 is built to scale faster, respond quicker, and consume fewer resources. Now is the time to upgrade and experience the performance leap.

Upgrade to .NET 9 and unleash the true speed of your apps!




European ASP.NET Core Hosting - HostForLIFE :: Discover how to integrate Firebase with .NET

clock April 14, 2025 07:19 by author Peter

Firebase provides a free real-time database that we can connect to from .NET. In essence, we are able to perform CRUD tasks: Insert, Update, Get, and Delete. Below I've broken out how to create a project in the Firebase console. The first step is to open Firebase and log into your account. The second step is to open the Firebase console Dashboard, click Create Project, and then enter the name of your project.

 

Creating a real-time database is the third step. Select test mode and press the "Enable" button. Your database has now been created, and you can use this configuration for CRUD operations. You will need a Firebase account first; the Firebase console walkthrough above will assist you in creating a project.

The following steps take you through the Firebase integration with .NET. First, create a .NET project and then install the FireSharp NuGet package, version 2.0.4.

Now, you will need credentials to perform the CRUD operation.

To connect with your real-time database, copy the base path from the console app.

Then we need the auth secret key, which you can fetch from Project Settings > Service Accounts > Database Secrets.


Lastly, let’s write code in .NET.
using FireSharp.Config;
using FireSharp.Interfaces;

// Firebase configuration
IFirebaseConfig ifc = new FirebaseConfig()
{
    AuthSecret = "**********x8Ed6HVU0YXlXW-L75ho4ps",
    BasePath = "https://we****.firebaseio.com/"
};

// Initialize Firebase client
IFirebaseClient client = new FireSharp.FirebaseClient(ifc);

// Create a user object
User user = new User()
{
    Id = 1,
    FirstName = "Test 1",
    LastName = "Test 2"
};

// Insert data
var setResponse = client.Set("User/" + user.Id, user);

// Delete data
var deleteResponse = client.Delete("User/" + user.Id);

// Update data
var updateResponse = client.Update("User/" + user.Id, user);

// Retrieve data
var getResponse = client.Get("User/" + user.Id);

// Simple POCO used above
public class User
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}


To explore more classes, please visit the official doc of Firebase Admin .NET SDK: firebase.google.com/docs/reference/admin/dotnet
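FireSharp also exposes async counterparts of these calls; a brief sketch (to be called from an async method, and assuming the same client and user objects as above) looks like this:

// Insert and read back asynchronously
var setResponse = await client.SetAsync("User/" + user.Id, user);
var getResponse = await client.GetAsync("User/" + user.Id);
User storedUser = getResponse.ResultAs<User>();
Console.WriteLine(storedUser.FirstName);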



European ASP.NET Core Hosting - HostForLIFE :: Creating Custom Components in Blazor

clock April 11, 2025 09:33 by author Peter

Microsoft created the open-source Blazor web framework, which enables programmers to create interactive online apps with C# and .NET. Blazor builds modular and reusable user interface components using a component-based architecture. Building intricate and reusable web applications requires the use of custom components. In this article, we will demonstrate, with examples, how to develop custom components in Blazor.

Prerequisites
Before we begin, ensure you have the following set up on your development environment:

  • Visual Studio 2022.
  • Basic knowledge of C# and HTML.

Understanding Components in Blazor
Components in Blazor are similar to user controls in other web frameworks. They are self-contained pieces of code that contain both markup and logic. Components can be composed and nested to create complex UI elements. In Blazor, components can be created using Razor syntax or using C# code. There are two types of components in Blazor:

  • Razor Components: These are defined using Razor syntax (.razor files) and allow for a mix of HTML and C# code.
  • Code-Behind Components: These are defined using C# classes and are more suitable for more complex logic or when you want to separate the UI and C# code (a brief sketch follows below).
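For comparison, a code-behind component keeps the C# in a separate partial class file next to the markup. A minimal hypothetical sketch (file, class, and property names are illustrative): GreetingCard.razor would contain only the markup, e.g. <h3>@Message</h3>, while GreetingCard.razor.cs holds the logic:

using Microsoft.AspNetCore.Components;

namespace YourAppName.Components;

public partial class GreetingCard
{
    // Parameter supplied by the parent, e.g. <GreetingCard Message="Hello from code-behind!" />
    [Parameter]
    public string Message { get; set; } = "Hello!";
}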

In this article, we'll focus on creating custom Razor components.

Step 1. Create a New Blazor Project
Let's start by creating a new Blazor project. Open Visual Studio and follow these steps:

  • Click on "Create a new project."
  • In the "Create a new project" dialog, search for "Blazor WebAssembly App," select the "Blazor WebAssembly App" template and click on "Next".
  • Choose a name and location for your project, and click "Next".
  • Choose the ".NET 7.0" option from the framework and click "Create" to generate the project.

Step 2. Add a Custom Component
In this example, we'll create a simple custom component that displays a welcome message with the ability to customize the name.

  • Right-click on the "Pages" folder in your Blazor project, and select "Add" > "New Folder." Name the folder "Components."
  • Right-click on the newly created "Components" folder, and select "Add" > "New Item."
  • In the "Add New Item" dialog, search for "Razor Component" and select the "Razor Component" template.
  • Name the component "WelcomeMessage.razor" and click "Add."

Step 3. Define the Custom Component
Now, let's define the content of our custom component. Open the "WelcomeMessage.razor" file and replace its content with the following code.
@code {
    [Parameter] public string Name { get; set; } = "Guest";
}
<h3>Welcome, @Name!</h3>

In this code, we have a simple Razor component with a parameter named "Name." The parameter represents the name of the user to display in the welcome message. We've set a default value of "Guest" in case the name is not provided.

Step 4. Using the Custom Component

Now that we have our custom component defined let's use it in one of our existing Blazor pages. Open the "Index.razor" file located in the "Pages" folder and add the following line at the top of the file to import the "WelcomeMessage" component.
@page "/"

@using YourAppName.Components


Next, add the following code within the existing <div> tag in the "Index.razor" file:
<WelcomeMessage Name="Peter" />

This line of code will render the "WelcomeMessage" component with the name "Peter".

Step 5. Build and Run the Application

With the custom component in place, we can now build and run the application to see it in action. Press Ctrl + F5 or click the "Start Debugging" button in Visual Studio to build and run the application.
Once the application loads in your browser, you should see the welcome message, "Welcome, Peter!" If you don't see the name, check if you've correctly implemented the custom component.

How to Create Reusable Components?
One of the main benefits of using custom components in Blazor is the ability to create reusable UI elements. To create a reusable component, you can define it in a separate file and import it into other components as needed. Here's an example of a reusable component that displays a button.

Create a new component named as SubmitButton and add the below code.
<button class="@ButtonClass" @onclick="OnClick">@ButtonText</button>

@code {
    [Parameter]
    public string ButtonText { get; set; } = "Button";

    [Parameter]
    public string ButtonClass { get; set; } = "btn-primary";

    [Parameter]
    public EventCallback<MouseEventArgs> OnClick { get; set; }
}


This component takes three parameters: the button text, the button class, and a callback that is triggered when the button is clicked. The default values for the button text and class are set in the component, but they can be overridden when the component is used.

To use this component in your application, you can add the following code to a Razor page (make sure the page can see the "Components" namespace, for example via the @using directive added earlier).
<SubmitButton ButtonText="Click Me" ButtonClass="btn-success" OnClick="HandleClick" />
@code {
    private async Task HandleClick(MouseEventArgs args)
    {
        // Handle the button click event
    }
}


This will render a button with the text "Click Me" and the class "btn-success". When the button is clicked, the HandleClick method will be called.
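As a small, self-contained sketch of what the handler might do, the page below counts clicks and re-renders the total (the Count field and the paragraph are assumptions added for this example):
<SubmitButton ButtonText="Click Me" ButtonClass="btn-success" OnClick="HandleClick" />
<p>Button clicked @Count times.</p>

@code {
    private int Count;

    private Task HandleClick(MouseEventArgs args)
    {
        // Increment the counter; Blazor re-renders after the event handler completes
        Count++;
        return Task.CompletedTask;
    }
}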

Conclusion

Custom components are a powerful Blazor feature that lets developers create reusable, modular UI elements. By building on them, you can assemble complex web applications more efficiently and with greater flexibility. In this article, we walked through creating custom components in Blazor with practical examples, and we hope it helps you apply them in your own projects.

HostForLIFE ASP.NET Core Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting provider in Europe, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team constantly monitors the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.



European ASP.NET Core Hosting - HostForLIFE :: ASP.NET Core Advanced APIs: Versioning, EF Core, and Middleware

clock April 7, 2025 10:15 by author Peter

My preferred framework for creating scalable web services is ASP.NET Core Web API, and I can't wait to tell you about it. In this tutorial, I'll walk you through how I built an Employee Management API using Entity Framework Core, Dependency Injection, API versioning, and a little extra code to log requests and responses.

This won't be your typical beginner's project; we're looking at more advanced concepts for building APIs that can grow with your application. Let's get started.

Step 1. Setting Up the Project
First, I fired up Visual Studio and created a new ASP.NET Core Web API project. Here’s how I did it.

  • I opened Visual Studio and clicked Create a new project.
  • I chose ASP.NET Core Web API and clicked Next.
  • I named the project EmployeeManagementAPI (you can name it whatever you like) and clicked Next.
  • I selected .NET 7.0 and clicked Create.

Once Visual Studio had set up the basic project structure, I was ready to roll. Next, it was time to integrate Entity Framework Core so I could store and manage employee data in a database.

Step 2. Integrating Entity Framework Core

I needed to hook up a database to store employee records. For this, I went with Entity Framework Core because it’s super flexible and easy to work with.

Installing EF Core

First things first, I installed the required packages via the Package Manager Console.
Install-Package Microsoft.EntityFrameworkCore.SqlServer
Install-Package Microsoft.EntityFrameworkCore.Tools


With that out of the way, I moved on to creating a DbContext to represent the database. I created a folder called Data and added a new class called EmployeeContext. Here’s the code I put in.
using Microsoft.EntityFrameworkCore;
using EmployeeManagementAPI.Models;

namespace EmployeeManagementAPI.Data
{
    public class EmployeeContext : DbContext
    {
        public EmployeeContext(DbContextOptions<EmployeeContext> options)
            : base(options)
        {
        }

        public DbSet<Employee> Employees { get; set; }
    }
}


Next, I needed an Employee model. In the Models folder, I added the Employee.cs class.
namespace EmployeeManagementAPI.Models
{
    public class Employee
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Department { get; set; }
        public decimal Salary { get; set; }
    }
}
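If you want a few rows to work with right away, you could optionally seed sample data by overriding OnModelCreating in EmployeeContext (an optional sketch; the sample values are made up):
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Seed a couple of sample employees so the API returns data immediately after the first migration
    modelBuilder.Entity<Employee>().HasData(
        new Employee { Id = 1, FirstName = "Ava", LastName = "Smith", Department = "IT", Salary = 55000m },
        new Employee { Id = 2, FirstName = "Liam", LastName = "Jones", Department = "HR", Salary = 48000m }
    );
}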


Configuring the Database Connection
With the DbContext and model in place, I needed to configure the connection string. I added the connection string in appsettings.json like this.

"ConnectionStrings": {
  "EmployeeConnection": "Server=(localdb)\\mssqllocaldb;Database=EmployeeManagementDB;Trusted_Connection=True;"
}

Then, in Program.cs, I added the following lines to register EmployeeContext with Dependency Injection (the Microsoft.EntityFrameworkCore using directive is needed for the UseSqlServer extension method).

using Microsoft.EntityFrameworkCore;

builder.Services.AddDbContext<EmployeeContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("EmployeeConnection")));


Running Migrations
Finally, I created the database using EF Core migrations by running the following commands in the Package Manager Console.

  • Add-Migration InitialCreate
  • Update-Database

This created the Employees table in the database. With the database ready, it was time to move on to the service layer.
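If you prefer the command line over the Package Manager Console, the equivalent dotnet CLI commands would look roughly like this (this assumes the dotnet-ef tool is installed; you may also need the Microsoft.EntityFrameworkCore.Design package):
dotnet tool install --global dotnet-ef
dotnet ef migrations add InitialCreate
dotnet ef database update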

Step 3. Building the Service Layer
Rather than dumping all the logic into the controller, I created a service layer to handle employee operations. This helps keep the code cleaner and easier to maintain.
Creating the Service Interface and Implementation

In the Services folder, I added an interface, IEmployeeService, and its implementation, EmployeeService. Here's what I came up with.

First, the interface.

public interface IEmployeeService
{
    Task<IEnumerable<Employee>> GetAllEmployeesAsync();
    Task<Employee> GetEmployeeByIdAsync(int id);
    Task AddEmployeeAsync(Employee employee);
    Task UpdateEmployeeAsync(Employee employee);
    Task DeleteEmployeeAsync(int id);
}


Then, I implemented the interface in EmployeeService.cs (the Microsoft.EntityFrameworkCore using directive is needed for ToListAsync and EntityState; adjust the other usings to match your project's namespaces).

using Microsoft.EntityFrameworkCore;
using EmployeeManagementAPI.Data;
using EmployeeManagementAPI.Models;

public class EmployeeService : IEmployeeService
{
    private readonly EmployeeContext _context;

    public EmployeeService(EmployeeContext context)
    {
        _context = context;
    }

    public async Task<IEnumerable<Employee>> GetAllEmployeesAsync()
    {
        return await _context.Employees.ToListAsync();
    }

    public async Task<Employee> GetEmployeeByIdAsync(int id)
    {
        return await _context.Employees.FindAsync(id);
    }

    public async Task AddEmployeeAsync(Employee employee)
    {
        _context.Employees.Add(employee);
        await _context.SaveChangesAsync();
    }

    public async Task UpdateEmployeeAsync(Employee employee)
    {
        _context.Entry(employee).State = EntityState.Modified;
        await _context.SaveChangesAsync();
    }

    public async Task DeleteEmployeeAsync(int id)
    {
        var employee = await _context.Employees.FindAsync(id);
        if (employee != null)
        {
            _context.Employees.Remove(employee);
            await _context.SaveChangesAsync();
        }
    }
}


Now, I needed to register this service in Program.cs so it could be injected into the controllers.

builder.Services.AddScoped<IEmployeeService, EmployeeService>();


Step 4. Building the Employee Controller
With the service layer ready, I moved on to the controller. In the Controllers folder, I created EmployeesController.cs.

[Route("api/[controller]")]
[ApiController]
public class EmployeesController : ControllerBase
{
    private readonly IEmployeeService _employeeService;

    public EmployeesController(IEmployeeService employeeService)
    {
        _employeeService = employeeService;
    }

    [HttpGet]
    public async Task<IActionResult> GetAllEmployees()
    {
        var employees = await _employeeService.GetAllEmployeesAsync();
        return Ok(employees);
    }

    [HttpGet("{id}")]
    public async Task<IActionResult> GetEmployeeById(int id)
    {
        var employee = await _employeeService.GetEmployeeByIdAsync(id);
        if (employee == null)
        {
            return NotFound();
        }
        return Ok(employee);
    }

    [HttpPost]
    public async Task<IActionResult> AddEmployee([FromBody] Employee employee)
    {
        await _employeeService.AddEmployeeAsync(employee);
        return CreatedAtAction(nameof(GetEmployeeById), new { id = employee.Id }, employee);
    }

    [HttpPut("{id}")]
    public async Task<IActionResult> UpdateEmployee(int id, [FromBody] Employee employee)
    {
        if (id != employee.Id)
        {
            return BadRequest();
        }
        await _employeeService.UpdateEmployeeAsync(employee);
        return NoContent();
    }

    [HttpDelete("{id}")]
    public async Task<IActionResult> DeleteEmployee(int id)
    {
        await _employeeService.DeleteEmployeeAsync(id);
        return NoContent();
    }
}


This controller was straightforward and tied everything together. I now had a fully functional API for managing employees.
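To smoke-test the endpoints, you could call them from a small console client. Below is a rough sketch for a separate console project; the base address and port are assumptions, so check your API project's launchSettings.json for the real URL:
using System.Net.Http.Json;

// Hypothetical base address; replace the port with the one your API actually uses
using var client = new HttpClient { BaseAddress = new Uri("https://localhost:7001/") };

// Create an employee (the anonymous object serializes to the same JSON shape as the Employee model)
var response = await client.PostAsJsonAsync("api/employees",
    new { FirstName = "Ava", LastName = "Smith", Department = "IT", Salary = 55000m });
Console.WriteLine($"POST returned {(int)response.StatusCode}");

// Read all employees back into a simple local record
var employees = await client.GetFromJsonAsync<List<EmployeeDto>>("api/employees");
Console.WriteLine($"GET returned {employees?.Count ?? 0} employee(s)");

record EmployeeDto(int Id, string FirstName, string LastName, string Department, decimal Salary);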

Step 5. Adding API Versioning
As the API grew, I knew I’d need to implement API versioning to ensure backward compatibility. I installed the versioning package.
Install-Package Microsoft.AspNetCore.Mvc.Versioning


Next, I configured versioning in Program.cs.
builder.Services.AddApiVersioning(options =>
{
    options.AssumeDefaultVersionWhenUnspecified = true;
    options.DefaultApiVersion = new ApiVersion(1, 0);
    options.ReportApiVersions = true;
});
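By default this package reads the version from the api-version query string. Since my routes also carry the version in the URL, the registration above can be extended to accept the version from several places at once (a hedged sketch using reader types from the same versioning package):
// Requires: using Microsoft.AspNetCore.Mvc.Versioning;
builder.Services.AddApiVersioning(options =>
{
    options.AssumeDefaultVersionWhenUnspecified = true;
    options.DefaultApiVersion = new ApiVersion(1, 0);
    options.ReportApiVersions = true;

    // Accept the version from the URL segment, a custom header, or the query string
    options.ApiVersionReader = ApiVersionReader.Combine(
        new UrlSegmentApiVersionReader(),
        new HeaderApiVersionReader("X-Api-Version"),
        new QueryStringApiVersionReader("api-version"));
});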


Now, I could version my controllers like this.

[ApiVersion("1.0")]
[Route("api/v{version:apiVersion}/[controller]")]
[ApiController]
public class EmployeesV1Controller : ControllerBase
{
    // Version 1.0 controller code
}


Step 6. Custom Middleware for Logging
One thing I always like to do is log requests and responses, especially when working with APIs. So, I wrote some custom middleware to log incoming requests and outgoing responses.

Here’s what my middleware looked like.

public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestLoggingMiddleware> _logger;

    public RequestLoggingMiddleware(RequestDelegate next, ILogger<RequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        _logger.LogInformation($"Incoming request: {context.Request.Method} {context.Request.Path}");
        await _next(context);
        _logger.LogInformation($"Outgoing response: {context.Response.StatusCode}");
    }
}


Then, I registered this middleware in Program.cs.
app.UseMiddleware<RequestLoggingMiddleware>();

Now, every request and response was being logged, which made debugging much easier.
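Putting all of these registrations together, my Program.cs ended up looking roughly like the sketch below (condensed; adjust the using directives to wherever you placed the context, service, and middleware classes, and add Swagger or other services as needed):
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using EmployeeManagementAPI.Data;
using EmployeeManagementAPI.Services;

var builder = WebApplication.CreateBuilder(args);

// Database, service layer, controllers, and API versioning
builder.Services.AddDbContext<EmployeeContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("EmployeeConnection")));
builder.Services.AddScoped<IEmployeeService, EmployeeService>();
builder.Services.AddControllers();
builder.Services.AddApiVersioning(options =>
{
    options.AssumeDefaultVersionWhenUnspecified = true;
    options.DefaultApiVersion = new ApiVersion(1, 0);
    options.ReportApiVersions = true;
});

var app = builder.Build();

// Register the logging middleware early so every request and response is captured
app.UseMiddleware<RequestLoggingMiddleware>();

app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();

app.Run();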

Conclusion

And there you have it—an advanced Employee Management API built with ASP.NET Core Web API. We covered a lot of ground, from integrating Entity Framework Core to creating a solid service layer, and even added some extra touches like API versioning and custom middleware.

This is the kind of architecture that scales well and keeps things organized. If you’ve made it this far, your API is in great shape for future growth.

Next Steps

  • Consider adding authentication and authorization to secure the API (I recommend using JWT).
  • Look into caching to improve performance, especially for frequently accessed data.
  • Write unit tests for your services and controllers to ensure your API behaves as expected.

Happy coding!

HostForLIFE ASP.NET Core Hosting

European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is the #1 recommended Windows and ASP.NET hosting provider in Europe, with a 99.99% uptime guarantee for reliability, stability and performance. The HostForLIFE.eu security team constantly monitors the entire network for unusual behaviour. We deliver hosting solutions including shared hosting, cloud hosting, reseller hosting, dedicated servers, and IT as a Service for companies of all sizes.


