Easy #Angular #Authentication and #Authorization setup using #DOTNETCORE

For the last few months I’ve been struggling to find an authentication and authorization setup that felt right for one of my projects. My requirements were basic: a system I could use to limit access to my API endpoints and front-end components based on the roles of a given user. The back-end was to be written in C# using .NET Core 3.1 and the front-end in TypeScript and Angular.io. The system would also be self-contained, i.e. no external login providers.

Initially I used the default Visual Studio template for generating an ASP.NET Core Web API project that uses a SPA framework for the front-end and IdentityServer 4 (IS4) to handle the authentication and authorization. It is a nice setup that makes it easy to tie in outside providers (e.g. Google, Microsoft, Facebook) so users can sign up using their login from another site. If you aren’t familiar with IS4 then reading the documentation and going through the various examples is a must. The drawbacks I saw were that the complexity of the system was greater than what my project needed, and that the authentication process required either a popup window, navigating away from the client site, or adding custom security headers to allow the login page to appear in an iframe.

So I went back to searching for other ways to handle authentication and authorization with .NET Core and reading up on the core concepts. Honestly, reading the IS4 documentation was also a great way to learn. After a few weeks I found a great write-up by Ankit Sharma titled Policy-Based Authorization In Angular Using JWT. Not only is the post well written, but if you download the code from their GitHub repository it actually works!

The author goes through the process of creating a new ASP.NET Core 3.1 Web API project and a separate client app using Angular.io 8.3. The client receives a JSON Web Token (JWT) that includes some basic information about the user, including the roles associated with the account. With this data the client can enable routes and features with basic checks and route guards.
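As an illustration of the route-guard side, here is a minimal sketch of a role-checking guard. This is not the code from the write-up’s repository; the AuthService and its getUserRoles() method are assumptions standing in for wherever your app decodes the role claims out of the JWT.

import { Injectable } from '@angular/core';
import { ActivatedRouteSnapshot, CanActivate, Router } from '@angular/router';
import { AuthService } from './auth.service'; // hypothetical service that decodes the JWT

@Injectable({ providedIn: 'root' })
export class RoleGuard implements CanActivate {
    constructor(private authService: AuthService, private router: Router) { }

    canActivate(route: ActivatedRouteSnapshot): boolean {
        // The roles allowed for this route are supplied through the route's data property
        const allowedRoles = route.data['roles'] as string[];
        // The role claims previously decoded from the JWT
        const userRoles = this.authService.getUserRoles();
        if (userRoles.some(role => allowedRoles.includes(role))) {
            return true;
        }
        this.router.navigate(['/forbidden']);
        return false;
    }
}

A route would then opt in with something like data: { roles: ['Admin'] } in its route definition and RoleGuard in its canActivate array.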

The implementation is well thought out, which makes it easier to conceptualize how you could add new features. Some features one might want to add are auto-refreshing the token while the user is on the site, moving the credentials and auth data storage from in-memory to a database, adding password changes, logging out all active sessions for a user, and registering new users.

Yes, the author didn’t go over these, but they did provide a solid foundation to start experimenting from. Now if you do need all of these options then maybe revisiting IdentityServer 4 is a good idea, since it provides a ready-built framework for a full-fledged identity management system. But for a simpler setup, the write-up by Ankit Sharma is a great starting point.

Adding a #D3.js line chart to an #Angular.io project

D3.js (Data-Driven Documents) is a powerful JavaScript library. At a basic level you can transform data into interactive charts and graphics that are easy for your visitors to digest. If you aren’t familiar with the library then take a look at their site for examples. I’ve also added a few links to other sites that showcase its capabilities.

While D3 provides documentation on the API and tutorials, they also recommend looking at examples from other developers to see how it all works. If you are looking for an example showing how to build a basic line chart then you can use this post or look at their Line Chart example for a pure JavaScript implementation.

The example I’ve built for this post displays three different types of weather information for Philadelphia, Pennsylvania: Temperature, Dew Point, and Visibility. The example also has a button the user can click to change the displayed weather information. The data for the chart is stored in a JSON file in the assets directory of the project, but it could just as easily have been an HTTP request to an API. To see the code for the example take a look at my GitHub repository angular-d3-line-chart-demo. If you’d like to see the chart in action then head over to Stackblitz.com for a running example. The rest of the post will go over the implementation as a technical discussion with references back to the code in the repository.

Adding D3.js to the Angular project requires adding the D3 core package and its associated types package to the project. You may also need to add these other packages: d3-array, d3-axis, d3-scale, and d3-shape. Keep them in mind if you run into errors mentioning those packages. While I didn’t need them in my personal project, which is on Angular.io 8, I did need them when building this example on Stackblitz.com, which is on Angular.io 9.

npm install --save d3
npm install --save-dev @types/d3
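If you do hit those missing-package errors, installing the pieces directly should resolve them (the package names below assume the standard D3 module split):

npm install --save d3-array d3-axis d3-scale d3-shape
npm install --save-dev @types/d3-array @types/d3-axis @types/d3-scale @types/d3-shape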

In order to encapsulate the chart-related elements I created a custom component called line-chart.component. This component holds the chart, labels, and controls to show the current type of data being displayed, as well as a button to change the data. The encapsulated component can then be reused on the same page to show different data while providing the same user experience.

ng g component LineChart

When the component first initializes we need to configure the element that will contain the graph. When working with D3.js charts that element is an SVG. Within the buildSVG() function you’ll see all of the initial configuration performed on this SVG; a sketch of what that setup can look like follows the list below.

  • Grab the SVG element, a simple task since it is owned by the component and there is only one SVG element
  • Set the height and width of the SVG based on margin calculations
  • Add text to serve as a watermark in the chart
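As a minimal sketch of that setup (not the exact code from the repository; the margin values, sizes, watermark text, and page-level d3.select are assumptions for illustration):

import * as d3 from 'd3';

// Margin convention values are assumptions; the real component calculates its own
const margin = { top: 20, right: 30, bottom: 30, left: 40 };
const width = 600 - margin.left - margin.right;
const height = 400 - margin.top - margin.bottom;

function buildSVG(): d3.Selection<SVGGElement, unknown, HTMLElement, any> {
    // Grab the component's lone SVG element and size it with the margins applied
    const svg = d3.select<SVGSVGElement, unknown>('svg')
        .attr('width', width + margin.left + margin.right)
        .attr('height', height + margin.top + margin.bottom)
        .append('g')
        .attr('transform', `translate(${margin.left}, ${margin.top})`);

    // Add text to serve as a watermark behind the chart lines
    svg.append('text')
        .attr('x', width / 2)
        .attr('y', height / 2)
        .attr('text-anchor', 'middle')
        .attr('opacity', 0.2)
        .text('watermark');

    return svg;
}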

Because the chart data is dynamic, the X and Y axes aren’t configured during the initial setup of the SVG. Instead, once the data is retrieved we transform it into a format that is easily digested by the chart, and the maximum/minimum values for each axis can be extracted.

The starting point for the data selection, transformation, and update of the graph is the updateGraphData() function. This function is called during component initialization as well as when the user clicks the Change Chart Data button. From here the following functions are executed to refresh the chart with the latest information; a sketch of the two axis helpers follows the list.

  • clearChartData() – finds all path elements, the lines on the chart, and removes them from the SVG element.
  • buildChartData() – transforms the original JSON data into a simplified object containing a value and date field for consumption by the chart logic.
  • configureYaxis() – the Y-axis represents numerical data, so we can use d3Scale.scaleLinear(). Because we don’t want any of the data resting on the X-axis line, we reduce the minimum of the Y-axis domain by 1. D3.js provides a function that retrieves the minimum and maximum values in an array, d3Array.extent().
  • configureXaxis() – the X-axis represents a time series, so we generate a scale based on the date property of the data using d3Scale.scaleTime().
  • drawLineAndPath() – uses the d3Shape.line() function to configure which parts of the data are associated with the X and Y-axis. It also defines the look of the line, i.e. color and thickness.
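Here is a rough sketch of those two axis helpers. It is an approximation rather than the repository code, and the ChartDataItem shape is assumed from the buildChartData() description above.

import * as d3Array from 'd3-array';
import * as d3Scale from 'd3-scale';

interface ChartDataItem {
    date: Date;
    value: number;
}

function configureYaxis(data: ChartDataItem[], height: number) {
    // extent() returns the [min, max] pair of the values in a single pass
    const [min, max] = d3Array.extent(data, d => d.value) as [number, number];
    // Lower the domain minimum by 1 so no data point rests on the X-axis line
    return d3Scale.scaleLinear()
        .domain([min - 1, max])
        .range([height, 0]); // SVG y grows downward, so the range is inverted
}

function configureXaxis(data: ChartDataItem[], width: number) {
    // A time scale maps the date property onto the horizontal pixel range
    const [first, last] = d3Array.extent(data, d => d.date) as [Date, Date];
    return d3Scale.scaleTime()
        .domain([first, last])
        .range([0, width]);
}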

With all of these functions combined we are able to initialize, configure, and draw the line chart as part of an Angular.io component. Other features that could be added are line tooltips, transitions between chart redraws, and showing two sets of data at one time. As I learn more about building charts with D3.js I will add new posts describing how to add those features.

As mentioned earlier, take a look at the GitHub repository for all of the code. I’m going to continue updating and cleaning up the project as I learn more about D3.js line charts. Because of that I didn’t include the repository’s code in this post; I didn’t want to leave readers with one version discussed in the blog post while a newer version exists in the repo. If you have any questions or run into problems leave me a comment and I’ll see if I can help out.

Creating an #Azure #SQLSERVER database login account

A few weeks ago I posted the steps to create a login account in SQL Server which utilizes SQL Server Authentication. While the steps work perfectly for local instances of SQL Server, they don’t work when you are operating in Azure SQL Server. I realized this quickly when I decided to use the steps at work with no success. Luckily the differences aren’t major, and this post will go over them, so without further delay, here is what you need to do.

The big difference between creating an account on an Azure-based SQL Server database versus a local server is which database you run the steps under. When creating the login you’ll run under the master database. Then when creating the user you’ll need to run it once under master and then again under the database the account should be associated with.

-- Execute from master database on the server
CREATE LOGIN [DbLoginUser] WITH PASSWORD = 'Password1'
GO
CREATE USER [DbLoginUser] FOR LOGIN [DbLoginUser] 
GO

The above script creates the login, which you should be able to test out immediately after running it. You won’t have access to any of the databases on the server, but you should be able to log into the server itself. The next script will associate the login with the particular database and place it in the db_owner role.

-- Open a new query window in the database you want to create the user
CREATE USER [DbLoginUser] FOR LOGIN [DbLoginUser] WITH DEFAULT_SCHEMA = [dbo]
GO
ALTER ROLE db_owner ADD MEMBER [DbLoginUser]
GO

With the steps completed you should be able to log into the server with the new DbLoginUser account. Additionally, you have full control over the database the user was created under.

The shallow and deep end of copying #JavaScript #arrays

In JavaScript, arrays are passed around by reference. What this means is that if you assign a variable so that it equals another variable which is defined as an array, then a pointer to the original array is assigned to the new variable. You will now have two variables pointing to the exact same array object. This is unlike primitive types in JavaScript, where assigning a value to a variable copies the value into the new variable, which gets its own memory.

Below is an example showing the same reference being used in two different variables. It also shows what happens to the array when a new value is added to one of the variables pointing to the array.

<script>
   // Create the default array that will hold our standard names
   var defaultNames = ['Jack', 'Jill', 'James'];
   // Write all of the default names
   console.log('Default Names: ' + defaultNames.join());
   
   // Create another variable that will be initialized with default names 
   // and will hold more names
   var moreNames = defaultNames;

   moreNames.splice(0, 0, 'Jerry');
   // Write all of the more names
   console.log('More Names: ' + moreNames.join());
   // Write all of the default names
   console.log('Default Names: ' + defaultNames.join());
</script>

The result of this script is the following output:

Default Names: Jack,Jill,James
More Names: Jerry,Jack,Jill,James
Default Names: Jerry,Jack,Jill,James

Instead of the variable defaultNames keeping its original assignment of three names, it now includes the name Jerry, because the reference to the array was assigned to the moreNames variable. Since both variables point to the same array this is expected behavior. What we really want is to have the values of defaultNames copied into the moreNames variable so that we can manipulate one of the arrays without changing the other. In this post I’ll go over a few operations native to JavaScript that we can use. The first is the slice() method.

The slice() method goes through the array and creates a shallow copy of each element. This means that if the elements are objects, not primitive types like numbers or strings, then only the references to those objects are copied. The result is a new array object, which allows you to add and remove elements from each array without worrying about impacting the other one.
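To see what “shallow” means in practice, here is a quick illustration (the data is made up for the example):

// Each element is an object, so slice() copies references, not the objects themselves
var people = [{ name: 'Jack' }, { name: 'Jill' }];
var copy = people.slice();
copy[0].name = 'Jerry';
// Both arrays see the change because they share the same underlying object
console.log(people[0].name); // Jerry

For arrays of primitives, like the name strings in our running example, this sharing never comes into play and a shallow copy behaves like a full copy.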

So instead of doing the assignment:

var moreNames = defaultNames;

We instead use the slice() method:

var moreNames = defaultNames.slice();

This will result in the following output:

Default Names: Jack,Jill,James
More Names: Jerry,Jack,Jill,James
Default Names: Jack,Jill,James

Now the defaultNames array is not impacted when we add the name Jerry to the moreNames array.

Something new in ES6 is the spread operator [...] which can be used to perform a shallow copy like the slice() method.

var moreNames = [...defaultNames];

The result will be the same as the output when we used the slice() method. One of the neat features I like about the spread operator is the ability to easily join two arrays.

var moreNames = [...defaultNames, ...['Gina', 'Greta', 'George']];
// Now moreNames contains ['Jack', 'Jill', 'James', 'Gina', 'Greta', 'George']

This results in moreNames consisting of all the values from defaultNames plus the values in the second array. Again, all values are shallow copied into a new array object before it is assigned to the variable.

As mentioned earlier, both of those methods only perform a shallow copy of the array. That means if your array contains references to objects, then modifying an object through one array will also modify it in the other array. In some cases this is desired, but for the cases where it isn’t, a quick way to perform a deep copy is to utilize the JSON object. We can use the stringify() method to convert the array into a JSON string and then the parse() method to convert it back into brand new JavaScript objects. (Keep in mind this only works for JSON-serializable values; things like functions and Dates won’t survive the round trip intact.)

<script>
   var defaultNames = [
     {first: 'Amelia', last: 'Smith'}, 
     {first: 'Christopher', last: 'Conner'}
   ];
   var modifiedNames = JSON.parse(JSON.stringify(defaultNames));
   modifiedNames[0].first = 'Jennifer';
   console.log('Default Names: ' + JSON.stringify(defaultNames));
   console.log('Modified Names: ' + JSON.stringify(modifiedNames));
</script>

Thanks to the array object being converted into a string and then back into its native objects the two variables have completely different references for all objects in the array. This results in the following output:

Default Names: [{"first":"Amelia","last":"Smith"},{"first":"Christopher","last":"Conner"}]
Modified Names: [{"first":"Jennifer","last":"Smith"},{"first":"Christopher","last":"Conner"}]

In this case the first name in the modifiedNames output changed from Amelia to Jennifer. Had we used the slice() or [...] operations to make the copy, both arrays would have had their first element change to Jennifer. With one line, possibly two if you split out the function calls, you are able to create a true deep copy of the array. This simplicity makes the JSON object a great option for most use cases.

There is always the option, and sometimes it is warranted, of rolling your own code to create copies of arrays, but is it really needed? If the language already provides you with the tools to complete the operation then use them. This will keep your code maintainable, the method calls will most likely be better documented, and there will be a much lower risk of bugs in the native operation compared to your own.

Basic steps to create a new #SQLSERVER database account

Once or twice a year I need to create an account for a SQL Server database that utilizes database authentication.  And every time I have to do this I end up spending too much time searching for the correct steps.  So this post is to help keep me from forgetting what to do for the rest of my career.

In the example below, the database we want to create an account for is [TestDatabase].  The login that will be used for authentication is DbLoginAcct.  And the user that will grant the login access to [TestDatabase] and the [dbo] schema is DbUserAcct.

USE [TestDatabase]
CREATE LOGIN [DbLoginAcct] WITH PASSWORD=N'test', DEFAULT_DATABASE=[TestDatabase]
GO
CREATE USER [DbUserAcct] FOR LOGIN [DbLoginAcct]
GO
ALTER USER [DbUserAcct] WITH DEFAULT_SCHEMA=[dbo]
GO
-- We want this user to have full control of the database so make it a db_owner
ALTER ROLE [db_owner] ADD MEMBER [DbUserAcct]
GO

After running the script we can log into the database by using the DbLoginAcct credentials.  If this is the first account on the server using database authentication you may run into issues when authenticating.  To fix these issues take a look at  How to Fix Microsoft SQL Server Login Failed Error 18456?

Finally, it is worth noting the difference between the LOGIN and USER objects created as part of the script.  As it relates to the server, the LOGIN is used for authentication and the USER is used for authorization.  In other words, the LOGIN gets you into the server while the USER gets you access to the specific database.

Update:  This is for a non-Azure SQL Server Instance.

Removing #MVC Form #ModelState Errors when the form data model can’t be altered #csharp

When working on .NET MVC projects you may run into a situation where you need to override the state reported by the ModelState object.  In case you aren’t familiar with it, in MVC controllers the ModelState object provides details on whether the data in a form sent to the controller passes all model-defined attribute validation checks.  A call to ModelState.IsValid returns a boolean to indicate the state: true if all checks passed, false if one or more errors are present.  Further details on why the model is not valid can be retrieved from the ModelState.Errors collection.  In some cases it may be necessary to remove some validation errors from the ModelState before calling ModelState.IsValid.

I recently had to do this when a complex form required a major overhaul.  Ideally we would have built new form data models, but constraints on the project didn’t permit the investment.  Instead a function was created to remove validation errors for the few form elements causing validation issues.

The function wasn’t complex.  For our needs we wanted to remove all errors against any element whose name contained a given string.  This was because our form contained input elements with names that created a natural hierarchy and grouping, like Car.Door.Front.Passenger.Color and Car.Body.Windshield.Size.  To do this we check each ModelState key to see if it contains the element name of interest.  If it does, the ModelState.Values entry at the same index as the found key has its Errors collection cleared.

public void RemoveModelStateErrors(string elementName) {
    for (var idx = 0; idx < ModelState.Keys.Count; idx++) {
        // Match any key containing the element name that has errors to clear
        if (ModelState.Keys.ElementAt(idx).Contains(elementName)
            && ModelState.Values.ElementAt(idx).Errors.Count > 0) {
            ModelState.Values.ElementAt(idx).Errors.Clear();
        }
    }
}

The function could be simplified if you always know the exact element name, or it could be refactored to handle a list of element names, as in the sketch below.  Either way, once the function is called any errors related to the elementName are removed and the ModelState.IsValid call will be updated accordingly.  If the only errors were related to the elements that were just cleared then it will return true for the now-valid model.
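A hypothetical list-handling variation, building on the function above, could be as simple as:

public void RemoveModelStateErrors(IEnumerable<string> elementNames) {
    // Reuse the single-name version for each element name in the list
    foreach (var elementName in elementNames) {
        RemoveModelStateErrors(elementName);
    }
}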

Setting up #TypeScript and Runtime Compilation in #VS2019 for #ASPNETCore

Last week I started a new web project to test out ASP.NET Core and Visual Studio 2019.  With all of the changes between the IDE, project configuration, and MVC boilerplate code, I figured waiting until work and life slowed down a bit would be ideal.  With the end of the year near, the slowdown has finally happened.

The boilerplate code setup hasn’t been difficult and, as expected, is well documented.  At this point in the project I’ve set up all references to data access classes outside of the main project to utilize dependency injection.  .NET Core makes this extremely simple, and I recommend all developers who are hesitant to use DI take a look at Microsoft’s documentation.

Two areas stumped me for a bit that were relatively simple in Visual Studio 2017.  The first was getting TypeScript to compile and build the mappings properly for debugging.  Prior to .NET Core, a tsconfig.json file in the project’s Scripts directory was all that was needed.  However, ASP.NET MVC projects in .NET Core have a different setup.  The guidance is to create a scripts directory outside of the wwwroot directory in the project to hold the TypeScript files.  In order for the JavaScript code generated from those files to be used in the deployed site, you then need to copy the JavaScript, TypeScript, and map files over to your JavaScript directory within wwwroot.  The steps to do this are pretty basic, and I encourage you to read the article on TypeScriptLang.org as it has more details.  What follows are the two files I had to play with to get the mappings working.

The first file to add to your project is tsconfig.json.  You can place the file at the root of your project or in the main folder holding your TypeScript files.  For the purposes of this post I’ll be putting the file in the root of my project.

tsconfig.json

{
  "compileOnSave": true,
  "compilerOptions": {
    "noImplicitAny": false,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true, // Generate the source Map file connecting JS to TS
    "target": "es5",
    "typeRoots": [ // Point to NPM for typings
      "node_modules/@types"
    ]
  },
  "exclude": [
    "node_modules",  // Folder to ignore
    "wwwroot" // Folder to ignore
  ],
  "include": [
    "scripts" // The folder to look for TypeScript files
  ]
}

Once this configuration file is in place any build launched will fire off the TypeScript compiler and generate the JavaScript and Map files.

The next piece of the puzzle is getting the generated files into the correct directory in the wwwroot folder of the project.  To accomplish this another file needs to be added to the project in order to use gulp.  The gulpfile.js has a simple job: delete the old JavaScript directory contents and copy the TypeScript, JavaScript, and map files over to the wwwroot directory.  (The tasks can be run manually or bound to build events through Visual Studio’s Task Runner Explorer.)

gulpfile.js

var gulp = require('gulp');
var del = require('del');

var paths = {
    scripts: ['scripts/app/**/*.js', 'scripts/app/**/*.ts', 'scripts/app/**/*.map'],
};

gulp.task('clean', function () {
    return del(['wwwroot/js/**/*.js', 'wwwroot/js/**/*.map', 'wwwroot/js/**/*.ts']);
});

gulp.task('default', function () {
    // Return the stream so gulp knows when the copy has finished
    return gulp.src(paths.scripts).pipe(gulp.dest('wwwroot/js'));
});

Once those files were in place, the next battle was getting my project to rebuild and run any updates made while debugging the site.  I originally thought the problem resided with the IDE configuration.  It turns out this wasn’t the case, and what was needed were changes at the project level and in the code.

The steps are all spelled out by Microsoft in Razor file compilation in ASP.NET Core.  To help with the code piece of the update take a look below.  In the Startup.cs file the constructor and ConfigureServices sections of code will need to be updated in a similar fashion.  (The AddRazorRuntimeCompilation() call comes from the Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation NuGet package.)

Startup.cs

private IWebHostEnvironment _env { get; set; }

public IConfiguration Configuration { get; }

public Startup(IConfiguration configuration, IWebHostEnvironment env)
{
    Configuration = configuration;
    _env = env;
}

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    IMvcBuilder builder = services.AddRazorPages();
#if DEBUG
    if (_env.IsDevelopment())
    {
        builder.AddRazorRuntimeCompilation();
    }
#endif
  ...
}

Once these changes were finally in place my project was back in a productive development state.  Visual Studio 2019 does have TypeScript compilation settings in the project Properties that could be used instead of the tsconfig.json file.  And it might be possible to skip the gulp task by keeping your TypeScript files within the wwwroot/js folder.  I’ve shied away from that setup for the time being since multiple articles have emphasized keeping the TypeScript code outside of wwwroot, and using the tsconfig.json file makes the project more friendly to automated builds.


Entity Framework returning partial objects?

There have been a few times when using Entity Framework that a LINQ query seems to return a partial object.  In my own experience the problem appears when querying for data that was just updated or inserted into the database.  When retrieving the new entries, the first query returns a partial object, or in some cases the old data, while a second query executed shortly afterwards returns the full object.

The issue we’re running into is that the DbContext hasn’t refreshed its own model of the data.  Foreign-key values might show correctly, but the object that should be referenced will show up as null.  If you were to query the database directly you wouldn’t have the same issue.  All foreign keys and the associated data are set immediately after the UPDATE or INSERT command completes.  To handle these situations in Entity Framework there are a few options.

One option is to create a new instance of DbContext each time you query for the data.  The new instance will have no recollection of the previous state of the database and will only retrieve the latest data.
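A sketch of that first option (the context type and options variable here are placeholders, not from any particular project):

// Build a throwaway context so no previously tracked entities can be served from its cache
using (var freshContext = new ShopContext(contextOptions)) {
    var color = freshContext.Colors.FirstOrDefault(c => c.ColorId == 44);
}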

Another option is to use the .AsNoTracking() method in your query.  This is the method I’ve used most, as it allows me to specify which queries should grab the latest data out of the database with the least amount of overhead.  The syntax is clean and doesn’t require special configuration.

var result = (from color in _context.Colors.AsNoTracking()
              where color.ColorId == 44
              select color).FirstOrDefault();

The above query is a basic example of retrieving the latest instance of the Color object based on the data within the database.  Any foreign keys linking the Color table to other tables will also be retrieved fresh from the database.

When working on a new project or a database that is relatively small you might not see this problem right away.  As the data becomes more complex or the database gets larger, the problem will most likely crop up.  To keep from tracking down a tricky bug, pick a way to solve the problem early and apply it consistently.

NuGet and .NET Standard

I just pushed out my first production-ready library, RandomSelection, to NuGet.  I’ve been meaning to publish it for over a year but never got around to it.  I honestly thought the process was going to be more involved, but in the end it was pretty simple.  The library was originally written in .NET 4.X, but before I published it I wanted to move it over to .NET Standard.  The motivation was to enable its use across a wider range of projects.  To make the change I followed three steps.

First, created a new project in Visual Studio 2019 targeting the .NET Standard Library project template.

Second, copied over the classes and project structure from the original project over to the new one.

Third, updated my tests since a library I was using had made a change that impacted my code.  Also made small updates to the way I was throwing exceptions so that the message received by the caller would be clearer.

That was it.  Then to build the NuGet package I used the VS 2019 project properties view to set the parameter values used by NuGet.  For a detailed set of instructions take a look at Microsoft’s Create and publish a package using Visual Studio.  No need to create your own nuspec file or sign it.  Give it a shot.  It was amazingly easy.
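If you’d rather use the command line than the IDE, the same pack-and-publish flow looks roughly like this (the package file name and API key are placeholders):

dotnet pack --configuration Release
dotnet nuget push bin/Release/RandomSelection.1.0.0.nupkg --api-key <your-api-key> --source https://api.nuget.org/v3/index.json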

Regex String Replace Rescue

I ran into an issue recently where I needed to fix two fields in a SQL statement.  It wasn’t a big deal except there were over 8,000 SQL statements requiring the change.  I would have loved to do a simple string replace in Notepad++, but each value was unique.  The job that needed to be done was to change a value from a string with single quotes around it to a decimal number.  Simple enough: a value like '12.34' needs to become 12.34 for the SQL statement to be valid.

Anytime I’m working with random text files I tend to work in Notepad++.  I like the user experience it presents, it is light-weight, and it is fast.  Within the application you can perform a standard search and replace or you can use regular expressions to search the file.  I knew my search would have to involve regular expressions, but what I wasn’t sure about was whether I could replace the text with a cleaned version of itself.  To my surprise, and due to my regex ignorance, this was 100% possible.  By taking advantage of a feature called capture groups a user can specify a search pattern as well as the specific part of the pattern to keep.  Then, for the replace value in the search and replace window, you set the replacement value to \1 which represents the first, and in my case only, capture group.

[Image: Notepad++ regular expression string replace]

What this all came down to was that the regular expression '(\d\d\.\d\d)' created a search term that looked for a two digit number with a decimal and two more digits after the decimal, surrounded by single quotes.  The digits and decimal are saved as capture group one, so replacing with \1 turns a value like '12.34' into 12.34.  Now I know there were probably much better ways to create a regex search term but this worked so I went with it.  Thank you to StackOverflow for getting me squared away with this problem.