NuGet Package Manager fails with 401 error

If you are connecting to a private NuGet feed, you may find that you can’t connect when you first set up the connection or after updating your password. The error console will report that it was unable to load the service index and that the status was 401. In my experience, this is often because credentials haven’t been set, or they have changed and the feed is still using the old ones. The fix that I’ve found to work is the update option of the dotnet nuget command.

Update a NuGet feed

Updating a NuGet source is fairly straightforward. The only way I have been able to do so is via the command line; if there is a way to do it through Visual Studio, I’d be interested to hear about it. Before using the command line you’ll need to install the .NET SDK if you haven’t done so already. If you are already compiling code on the machine then it is installed and you are ready to go.

To update an existing NuGet feed, let’s call it InternalCompanyFeed, you run the update source command. In the example below we update InternalCompanyFeed, setting the username to bnolan and the password to $3cuR3PaS$w0rd.

dotnet nuget update source InternalCompanyFeed --username bnolan --password $3cuR3PaS$w0rd

Depending on the authentication type, you may also need to specify which type(s) to use via the --valid-authentication-types option. The standard options are basic, negotiate, kerberos, ntlm, and digest. I haven’t needed it yet, but I would be interested to hear what setup has required someone to set the option.
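For example, if a feed only accepted basic authentication, the command would look something like this (the feed name and credentials are the same made-up values as above):

dotnet nuget update source InternalCompanyFeed --username bnolan --password $3cuR3PaS$w0rd --valid-authentication-types basic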

Using Serilog sub-loggers in .NET Core/5

If you aren’t familiar with Serilog, it is a powerful, full-featured logging library that integrates seamlessly with .NET Core/5. Some of the features it provides are flat and structured logging, numerous plugins (called sinks) for writing data to various destinations, and an implementation that can be easily extended.

One of the concepts that Serilog brings to the logging world is the ability to create sub-loggers. A standard logging setup will have logs written out to a file, the console, and/or a database. But if you want to write to multiple instances of the same sink, then sub-loggers are what you’ll want to look into. Sub-loggers enable the instantiation of more than one instance of a sink with a custom configuration applied to each one. Some reasons you might want to do this:

  • Outputting multiple formats of logs to the file system for digestion by different monitoring applications.
  • Recording log output from specific classes to their own files.
  • Generating multiple log files based on specific filters.

In this post I’ll show an example where we use appsettings.json to configure the logging library. Within the SpaHost project two log files will be created: one will contain all log entries while the other will hold only log entries with a level of Error or above. Within a second project, CrossDomainApi, we’ll have another scenario defined where one log file will contain all log entries while a second log file will contain only the log entries that originate from the Api.TestController namespace.

In order to get this configuration working a few NuGet packages need to be added.
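Which packages you need depends on the sinks, enrichers, and filters you use. For the configuration shown in this post, my expectation (an assumption, not a definitive list) is something along these lines:

dotnet add package Serilog.AspNetCore
dotnet add package Serilog.Settings.Configuration
dotnet add package Serilog.Sinks.Console
dotnet add package Serilog.Sinks.File
dotnet add package Serilog.Enrichers.Environment
dotnet add package Serilog.Enrichers.Thread
dotnet add package Serilog.Filters.Expressions

Serilog.AspNetCore provides the UseSerilog() extension used below, the enricher packages supply WithMachineName and WithThreadId, and Serilog.Filters.Expressions (since superseded by Serilog.Expressions) enables the Contains(...) filter expression used in the configuration file.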

Within the startup code of the ASP.NET Core/5 application a small bit of standard code is required. Within the Program.cs:CreateWebHostBuilder() method we need to add .UseSerilog() to the call chain. This will set Serilog as the logger for the web app. In this implementation we also make calls to set the configuration, use IIS settings, and capture startup errors. Depending on your implementation you may not need those calls.

public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
   var configuration = GetConfiguration();
   return WebHost.CreateDefaultBuilder(args)
      .UseSerilog()
      .UseConfiguration(configuration)
      .UseIIS()
      .CaptureStartupErrors(true)
      .UseStartup<Startup>();
}

The rest of the core logging configuration is defined in the appsettings.Development.json file. If you would rather define the logging settings in code you can do so, but I personally like defining the setup in configuration files so we can customize logging based on the environment where the application is executing.
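For reference, a rough code-based equivalent of the file-based setup shown next could look like the sketch below (the paths and levels are illustrative, not pulled from the sample project):

using Serilog;
using Serilog.Filters;

Log.Logger = new LoggerConfiguration()
   .MinimumLevel.Information()
   .Enrich.FromLogContext()
   // Sub-logger 1: every event goes to the "all" file
   .WriteTo.Logger(lc => lc
      .WriteTo.File("C:/Logs/CrossDomainApi/all-.log", rollingInterval: RollingInterval.Day))
   // Sub-logger 2: only events from the Api.TestController source context
   .WriteTo.Logger(lc => lc
      .Filter.ByIncludingOnly(Matching.FromSource("Api.TestController"))
      .WriteTo.File("C:/Logs/CrossDomainApi/api-.log", rollingInterval: RollingInterval.Day))
   .CreateLogger();

Note that Matching.FromSource matches source contexts that begin with the given name, which is slightly different from the Contains(...) expression used in the JSON configuration below.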

Within this file there is a section named "Serilog" that defines how the logger will handle log events, what gets written, and where those events are written. The sample below is from the CrossDomainApi’s configuration file. As mentioned earlier, this configuration writes logs to two files: one with all log events and another with only log events from a specific namespace.

{
  "Serilog": {
    "Using": [ "Serilog.Sinks.Console", "Serilog.Sinks.File" ],
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "Console",
        "Args": {
          "outputTemplate": "{Timestamp:HH:mm:ss.fff zzz}|{Level}|{ThreadId}|{SourceContext}|{Message:lj}|{Exception}{NewLine}"
        }
      },
      {
        "Name": "Logger",
        "Args": {
          "configureLogger": {
            "WriteTo": [
              {
                "Name": "File",
                "Args": {
                  "rollingInterval": "Day",
                  "path": "C:/Logs/CrossDomainApi/all-.log",
                  "outputTemplate": "{Timestamp:HH:mm:ss.fff zzz}|{Level}|{ThreadId}|{SourceContext}|{Message:lj}|{Exception}{NewLine}"
                }
              }
            ]
          }
        }
      },
      {
        "Name": "Logger",
        "Args": {
          "configureLogger": {
            "Filter": [
              {
                "Name": "ByIncludingOnly",
                "Args": {
                  "expression": "Contains(SourceContext, 'Api.TestController')"
                }
              }
            ],
            "WriteTo": [
              {
                "Name": "File",
                "Args": {
                  "rollingInterval": "Day",
                  "path": "C:/Logs/CrossDomainApi/api-.log",
                  "outputTemplate": "{Timestamp:HH:mm:ss.fff zzz}|{Level}|{ThreadId}|{SourceContext}|{Message:lj}|{Exception}{NewLine}"
                }
              }
            ]
          }
        }
      }
    ],
    "Enrich": [ "FromLogContext", "WithMachineName", "WithThreadId" ],
    "Properties": {
      "Application": "CrossDomainApi"
    }
  }
}

What makes this configuration use sub-loggers are the child "WriteTo" definitions nested within the top-level "WriteTo" array. In this case we have two sub-loggers, both of which write logs via the "File" sink. It is in the last "Logger" entry that we apply a filter within the "configureLogger" object definition. This filter checks whether the SourceContext value contains the string Api.TestController. All log events that match this check will be sent to that sub-logger for processing.

With this configuration in place you can add an ILogger<classname> parameter to your controllers for writing log entries. The Dependency Injection system provided by .NET will automatically pass in the instance of ILogger that was defined during startup. Additionally, any logging output by referenced projects or libraries will also use the configured Serilog instance.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace Api
{
    [Route("test")]
    public class TestController : ControllerBase
    {
        private readonly ILogger<TestController> _logger;

        public TestController(ILogger<TestController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IActionResult Get()
        {
            _logger.LogInformation("Request received");
            return new JsonResult("OK");
        }
    }
}

To see the working solution take a look at the BackendForFrontend project. This project also demonstrates using an external Identity Server for authentication, which I wrote about in How to use Backend for Frontend to simplify authentication in an Angular SPA.

Mind Your Spaces in an IIS web.config Definition

This last week we spent the better part of a day chasing down a space: a single white-space character that was breaking an Internet Information Services (IIS) deployment for a website. Debugging the issue would have been straightforward except that all of the errors were pointing us in the wrong direction. To start, let me provide details on our deployment.

The website was a .NET 5 Web API hosted in IIS. The project had been modified so that it would generate a DLL and not an EXE when compiled. This was done by adding <UseAppHost>false</UseAppHost> to a <PropertyGroup> in the Web API’s .csproj. By default the UseAppHost value is true, which causes the build to generate a framework-dependent executable. For our situation the default configuration wasn’t desired. Additionally, since we were no longer generating an EXE that IIS could use to launch the application, we also needed to update the web.config.
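For context, the relevant portion of the .csproj ended up looking something like this (the target framework shown is an assumption):

<PropertyGroup>
  <TargetFramework>net5.0</TargetFramework>
  <UseAppHost>false</UseAppHost>
</PropertyGroup>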

Within the web.config the launch definition is contained in the <aspNetCore> element. Initially the element was defined with the processPath attribute pointing to the framework-built EXE of the project. Now that we weren’t generating this executable, we needed to define how IIS would launch the site. To do this we updated the processPath attribute to point to the dotnet.exe binary installed with the .NET Core Hosting Bundle. We also added the arguments attribute, passing in the exec command and the path to the project’s main DLL. In the end the <aspNetCore> element looked like this.

Broken web.config

<aspNetCore processPath="dotnet.exe"
            arguments=" exec .\WebApi.dll"
            stdoutLogEnabled="false"
            stdoutLogFile=".\logs\stdout"
            hostingModel="inprocess">
  <environmentVariables>
    <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Development" />
  </environmentVariables>
</aspNetCore>

With these updates in place we deployed the app to our development environment and found that the site was displaying a 500.31 error.

HTTP Error 500.31 – Failed to load ASP.NET Core runtime
Common solutions to this issue:
The specified version of Microsoft.NetCore.App or Microsoft.AspNetCore.App was not found.
Troubleshooting steps:

Check the system event log for error messages
Enable logging the application process' stdout messages
Attach a debugger to the application process and inspect

For more information visit: https://go.microsoft.com/fwlink/?LinkID=2028526

The guidance from the Microsoft page tells us to look at the Application Event logs for further details. In examining the events we see three separate error events reported against IIS.

  1. Unable to locate application dependencies. Ensure that the versions of Microsoft.NetCore.App and Microsoft.AspNetCore.App targeted by the application are installed.
  2. Could not find ‘aspnetcorev2_inprocess.dll’. Exception message:
  3. Failed to start application ‘/LM/W3SVC/1/ROOT/WebApi’, ErrorCode ‘0x8000ffff’.

We verified the installation of the .NET Hosting Bundle (and re-installed it), checked that all of the files needed for the site were being deployed, ran a check for all dependencies, and even ran the dotnet.exe command and arguments from the command line. Everything checked out and looked good. We performed the standard exhaustive search on StackOverflow but didn’t have any success with the suggested fixes.

Finally, since the application worked locally from Visual Studio, we started walking through the startup process. Since no logs were generated during startup we took a look at our web.config and transformation files. What we found was that in the arguments definition of the <aspNetCore> element there was an extra space between the opening quote and the exec command.

<aspNetCore processPath="dotnet.exe" arguments=" exec .\WebApi.dll" …/>

It seemed improbable that an extra space would cause such havoc on the IIS deployment, but it was in fact the problem that broke the website. We removed the extra space and deployed the fixed configuration file. To our surprise, all of the errors went away.

Fixed web.config

<aspNetCore processPath="dotnet.exe"
            arguments="exec .\WebApi.dll"
            stdoutLogEnabled="false"
            stdoutLogFile=".\logs\stdout"
            hostingModel="inprocess">
  <environmentVariables>
    <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Development" />
  </environmentVariables>
</aspNetCore>

So pay attention to the spaces in your web.config deployment and transformation files. The error details provided by IIS and the event logs certainly did not indicate the underlying issue in this instance, but through a process of elimination we were able to track down the cause of the issue and fix it.

How to use Backend for Frontend to simplify authentication in an Angular SPA #aspnetcore #identityserver4 #angular

For longer than I care to admit I’ve been trying to find an easy-to-implement authentication system for my Angular SPA. I’ve looked at rolling my own, utilizing Identity Server and the oidc-client.js library, Auth0, as well as probably a half dozen other options. After experimenting, reading blogs and articles on the subject, and checking out some sample implementations, I’ve settled on a solution geared towards Single Page Applications (SPAs).

What I’ve picked is known as a Backend for Frontend (BFF) architecture that uses Identity Server to manage authentication. The structure of the solution moves a lot of the authentication logic out of the SPA and into the backend .NET API. This means that instead of the SPA receiving a JSON Web Token (JWT) after authentication, and then managing the refresh and expiration of said token, the backend API takes ownership of managing the authenticated session and provides the SPA with a way to associate itself with that session. Beyond these reasons, there are also security reasons for picking this type of implementation over a more client-heavy one. Take a look at the post by Dominick Baier on his blog Least Privilege where he goes over many of these reasons; in fact, my example implementation is based on his example from that post.

What I’ve changed from Dominick’s example is that I’ve added an Angular SPA project with a .NET BFF to manage the authenticated sessions. Additionally, I swapped out ProxyKit as the reverse proxy in favor of Microsoft’s Reverse Proxy. ProxyKit’s development has ceased and the owner recommends migrating over to Microsoft’s implementation. While Reverse Proxy is still in preview, hopefully we will have a 1.0 release in the next few months.

The code for my example project can be found on GitHub. Admittedly there is a lot going on in this project. If you are running the solution in Visual Studio you will want to kick off the SpaHost and CrossDomainApi projects at the same time. The SpaHost will run on its own, but one of the pages pulls down data from the CrossDomainApi when the user is authenticated, so it is good to have them both running.

As it is currently implemented, the project uses the publicly available demo Identity Server. Both the SpaHost and CrossDomainApi point to this Identity Server. The CrossDomainApi checks for a Bearer authentication header containing a JWT with an audience of api. If this isn’t present then the API request from a client will be denied.

The SpaHost hosts the Angular SPA and a reverse proxy that directs some requests to external services, in this case the CrossDomainApi. The configuration for utilizing Identity Server and defining the Reverse Proxy is all performed in Startup.cs. The Reverse Proxy itself reads its configuration from appsettings.Development.json. However, in order to include the Bearer token in requests to proxied endpoints, code is added to the endpoint mapping which pulls the JWT out of the context and adds it to the Authorization header.

endpoints.MapReverseProxy(proxyPipeline =>
{
   // The proxied controllers need the bearer token
   proxyPipeline.Use(async (context, next) =>
   {
      // If we are authenticated then we should be able to get the access token
      // from the context associated with this session
      var token = await context.GetTokenAsync("access_token");
      context.Request.Headers.Add("Authorization", $"Bearer {token}");

      await next().ConfigureAwait(false);
   });
});
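For completeness, the proxy routes in appsettings.Development.json look roughly like the sketch below. The route name, path, and destination address are placeholders, and the exact JSON schema has shifted between preview releases of the Reverse Proxy, so check the documentation for the version you are using.

"ReverseProxy": {
  "Routes": {
    "crossDomainApi": {
      "ClusterId": "crossDomainApiCluster",
      "Match": { "Path": "/api/{**catch-all}" }
    }
  },
  "Clusters": {
    "crossDomainApiCluster": {
      "Destinations": {
        "default": { "Address": "https://localhost:5002/" }
      }
    }
  }
}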

The majority of the remaining code in the class deals with setting up the identity provider. I highly recommend reading the documentation for Identity Server 4 if you are unsure what is being defined. Besides the standard OIDC configuration, there are also lines of code which set up the token management and storage.

Managing access tokens takes a lot of work. You need to handle refreshes, sliding windows, revocation, storage, etc. Luckily the developers of Identity Server created a .NET library, IdentityModel.AspNetCore, which handles all of this for you. Documentation on it can be found on the Identity Server 4 documentation site.

// We want to enable the automatic management of tokens, auto refresh, in-memory storage
services.AddAccessTokenManagement();

The other piece of the puzzle is where the tokens will be stored. For development purposes, keeping the tokens in memory is a suitable solution. For that we can use AddDistributedMemoryCache(); you can learn more about it and other options on the documentation site. This will keep the tokens in memory for as long as the site is running.

// Enable the in-memory storage of tokens. In production or a multi-hosting environment
// you will want to use a SQL Server or Redis-like cache so tokens aren't lost during a
// reboot or deployment.
services.AddDistributedMemoryCache();

Within the Angular SPA the authentication piece is very limited. We don’t handle any of the actual authentication process. Instead we redirect the user to a controller action that requires the user to be authorized via the [Authorize] attribute and returns a view. By loading this view the user will be automatically directed to the Identity Server configured in the Startup.cs class if they don’t have an active session. To see the controller logic take a look at the AccountsController.cs class. Within it you will see an action named Index(string redirect); the value of the redirect parameter determines where the user will be sent after they have been authenticated.
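A minimal sketch of such a controller is shown below. It is simplified from what is in the repository; the namespace and the redirect handling here are assumptions for illustration.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace SpaHost.Controllers
{
    [Authorize]
    public class AccountsController : Controller
    {
        // Requesting this action without an active session triggers the OIDC
        // challenge configured in Startup.cs. Once the user has authenticated,
        // send them back to the SPA route they originally asked for.
        public IActionResult Index(string redirect)
        {
            return LocalRedirect(string.IsNullOrWhiteSpace(redirect) ? "~/" : redirect);
        }
    }
}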

Once the user is authenticated they will have full access to the SPA site. A demonstration of how to restrict user access via authorization guards, and how to force a user to authenticate when they access a publicly available page, is available in the SpaHost project. Review the fetch-authenticated-data, guarded-root, auth-guard, and app.module files for details.

Please take a look and let me know if you have any questions or suggestions. I hope this post and the BackendForFrontend GitHub project are able to help you on your project.

TL;DR: Go to my BackendForFrontend project on GitHub to see an example solution which uses Microsoft’s Reverse Proxy, Identity Server 4, and Angular to simplify identity configuration in a SPA.

Generating Entity Framework classes for .NET Core projects in a Database First scenario

Microsoft provides two command-line tools for situations when there is a need to generate Entity Framework classes for a .NET Core project. Within the Package Manager Console of Visual Studio you can use Scaffold-DbContext, and if you are using the .NET Core CLI then the dotnet ef command is available. Details on the capabilities (migration, scaffold, update, drop, and more) as well as instructions on what needs to be installed can be found in Entity Framework Core tools reference – .NET Core CLI and Entity Framework Core tools reference – Package Manager Console in Visual Studio.

When using the dotnet ef command in the .NET Core CLI you will add the scaffold argument, which allows you to generate classes for an entire database, specific tables, or schemas within a database. If you are running it against a specific set of tables then you provide each table with the --table flag preceding the table name.

In this example the command will generate EF classes in order to interact with two common Identity tables, AspNetRoleClaims and AspNetRoles, within the dbo schema.

dotnet ef dbcontext scaffold "Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=IdentityServer;" Microsoft.EntityFrameworkCore.SqlServer --table dbo.AspNetRoleClaims --table dbo.AspNetRoles

Having the tool generate classes for a schema is just as simple. Instead of using --table, use --schema and provide the schema name.
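For example, scaffolding everything under the dbo schema of the same database would look something like this:

dotnet ef dbcontext scaffold "Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=IdentityServer;" Microsoft.EntityFrameworkCore.SqlServer --schema dbo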

If there comes a need to regenerate tables or add more, then you will need to use the --force flag. This allows the process to overwrite any files that already exist in the project. Make sure you include all tables or schemas that you want generated. Even if a table hasn’t changed since the earlier scaffold, you will still need to list it when regenerating the classes.
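Regenerating the two tables from the earlier example would then look like this:

dotnet ef dbcontext scaffold "Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=IdentityServer;" Microsoft.EntityFrameworkCore.SqlServer --table dbo.AspNetRoleClaims --table dbo.AspNetRoles --force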

If you prefer to use the Package Manager Console, then the same command from above can be executed with only small changes. Instead of a flag for each table, you pass all of the tables to the -Tables parameter as a comma-separated list.

Scaffold-DbContext -Provider Microsoft.EntityFrameworkCore.SqlServer -Connection "Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=IdentityServer;" -Tables dbo.AspNetRoleClaims, dbo.AspNetRoles

This post went over only a small portion of the capabilities that these tools provide. If you are designing a system with a Code First approach then you’ll become very familiar with these tools as you generate initialization and migration scripts. If you are unfamiliar with EF migrations then take a look at Managing Database Schemas to guide you through each step.

Edited 26 April 2021 Fix Scaffold-DbContext syntax error.

Easy #Angular #Authentication and #Authorization setup using #DOTNETCORE

For the last few months I’ve been struggling to find an authentication and authorization setup that felt right for one of my projects. My requirements were basic: have a system that I could use to limit access to my API endpoints and front-end components based on the roles of a given user. The back-end was to be written in C# using .NET Core 3.1 and the front-end in TypeScript and Angular.io. The system would also be self-contained, i.e. no external login providers.

Initially I used the default template from Visual Studio for generating an ASP.NET Core Web API project that uses a SPA framework for the front-end and IdentityServer 4 (IS4) to handle the authentication and authorization. It is a nice setup which makes it easy to tie in outside providers (e.g. Google, Microsoft, Facebook) so users can sign up using their login from another site. If you aren’t familiar with IS4 then reading the documentation and going through the various examples is a must. The drawbacks I saw were that the complexity of the system seemed greater than what my project needed, and that the authentication process required either a popup window, navigating away from the client site, or adding custom security headers to allow the login page to appear in an iframe.

So I went back to searching for other ways to handle authentication and authorization with .NET Core and reading up on the core concepts. Honestly, reading the IS4 documentation was also a great way to learn. After a few weeks I found a great write-up by Ankit Sharma titled Policy-Based Authorization In Angular Using JWT. Not only is the post well written, but if you download the code from the GitHub repository it actually works!

The author goes through the process of creating a new ASP.NET Core 3.1 Web API project and a separate client app using Angular.io 8.3. The client receives a JSON Web Token (JWT) that includes some basic information about the user, including the roles associated with the account. With this data the client can enable routes and features with basic checks and route guards.

The implementation is well thought out, which makes it easier to conceptualize how you could add new features. Some features that one might want to add include auto-refreshing the token while the user is on the site, moving the credentials and auth data storage from in-memory to a database, adding password changes, logging out all active sessions for a user, and registering new users.

Granted, the author didn’t go over these, but they did provide a solid foundation to start experimenting from. If you do need all of these options then maybe revisiting IdentityServer 4 is a good idea, since it provides a ready-built framework for building a full-fledged identity management system. But for a simpler setup, the write-up by Ankit Sharma is a great starting point.