Using multiple hosting environments on the same machine in ASP.NET Core

CheapASPNETHostingReview.com | Best and cheap ASP.NET Core hosting. This short post is about how to set the hosting environment in ASP.NET Core.

However, if this is a capability you think you will need, you can use a similar approach to the one I use in that post to set the environment using command line arguments.

This approach involves building a new IConfiguration object, and passing that in to the WebHostBuilder on application startup. This lets you load configuration from any source, just as you would in your normal startup method, and pass that configuration to the WebHostBuilder using UseConfiguration. The WebHostBuilder will look for a key named "Environment" in this configuration, and use that as the environment.

For example, if you use the following configuration:
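
A sketch of what that bootstrap could look like in an ASP.NET Core 1.x Program.Main (the exact builder chain is illustrative):

public static void Main(string[] args)
{
    // Build configuration from the command line and hand it to the WebHostBuilder;
    // the host looks for an "environment" key in this configuration.
    var config = new ConfigurationBuilder()
        .AddCommandLine(args)
        .Build();

    var host = new WebHostBuilder()
        .UseConfiguration(config)
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}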

You can pass any setting value with this setup, including the "environment" value:
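
For example (a sketch; the argument ends up as the "environment" key that the WebHostBuilder looks for):

dotnet run --environment "Staging"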

This is fine if you can use command line arguments like this, but what if you want to use environment variables? Again, the problem is that they’re shared between all apps on a machine.

However, you can use a similar approach, coupled with the UseEnvironment extension method, to set a different environment for each app on the machine. This will override the ASPNETCORE_ENVIRONMENT value, if it exists, with the value you provide for this application alone. No other applications on the machine will be affected.
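
A sketch of that approach, assuming a hypothetical MYCOOLPROJECT_ prefix for the app-specific variable:

var config = new ConfigurationBuilder()
    .AddEnvironmentVariables(prefix: "MYCOOLPROJECT_")
    .Build();

var host = new WebHostBuilder()
    .UseKestrel()
    // Fall back to Production when the app-specific variable isn't set.
    .UseEnvironment(config["environment"] ?? "Production")
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();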

To test this out, I added the MYCOOLPROJECT_ENVIRONMENT key with a value of Staging to the launchSettings.json file VS uses when running the app:
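
Trimmed to the relevant part, the profile would look something like this (the profile name is hypothetical):

{
  "profiles": {
    "MyCoolProject": {
      "commandName": "Project",
      "environmentVariables": {
        "MYCOOLPROJECT_ENVIRONMENT": "Staging"
      }
    }
  }
}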

Running the app using F5 shows that we have correctly picked up the Staging value using our custom environment variable:

With this approach you can effectively have a per-app environment variable that you can use to configure the environment for an app individually.

Summary

On shared hosting, you may be in a situation where you want to use a different IHostingEnvironment for multiple apps on the same machine. You can achieve this with the approach outlined in this post: building an IConfiguration object and passing a value to the WebHostBuilder.UseEnvironment extension method.

How To Use Route Constraints In ASP.NET Core

CheapASPNETHostingReview.com | Best and cheap ASP.NET Core hosting. Route constraints can be a handy way to distinguish between similar route names, and in some cases, pre-filter out "junk" requests from actually hitting your actions and taking up resources. A route constraint can be as simple as enforcing that an ID you expect in a URL is an integer, or as complicated as regex matching on strings.

An important thing to remember is that route constraints are not a way to "validate" input. Any server-side validation you wish to perform should still happen regardless of any route constraints that are set up. Importantly, know that if a route constraint is not met then a 404 is returned, rather than the 400 Bad Request you would typically expect from a validation failure.

Type Constraints

Type constraints are a simple way to ensure that a parameter can be cast to a certain value type. Consider the following code:
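
The original snippet isn't shown, but it would be an action along these lines (controller and route names are illustrative):

[Route("api/[controller]")]
public class ValuesController : Controller
{
    // No constraint: the {id} segment matches any string.
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        return Ok(id);
    }
}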

At first glance you might assume that if you called "/api/controller/abc" the route would not match – that would make sense since the id parameter is an integer. But in fact what happens is that the route is matched and the id is bound as 0. This is where route constraints come in. Consider the following:
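
The same action with an :int constraint on the route parameter (a sketch):

[HttpGet("{id:int}")]
public IActionResult Get(int id)
{
    return Ok(id);
}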

Now if the id in the URL is not able to be cast to an integer, the route is not matched.

You can use this type of constraint with int, float, decimal, double, long, guid, bool and datetime.

Size Constraints

There are two types of "size" constraints you can use in routes. The first applies to strings and lets you set a minimum length, a maximum length, or even a length range.
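
For example, a hypothetical action requiring at least 3 characters in the segment:

[HttpGet("{name:minlength(3)}")]
public IActionResult GetByName(string name)
{
    return Ok(name);
}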

This sets a minimum length for the string value. You can also use maxlength to limit the length.

Alternatively, you can set how many characters a string can be within a range using the length property.
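
A sketch using the length constraint with a range (4 to 16 characters, purely for illustration):

[HttpGet("{name:length(4,16)}")]
public IActionResult GetByName(string name)
{
    return Ok(name);
}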

While that’s great for string variables, for integers you can use the min/max/range constraints in a similar fashion.
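
For example, a hypothetical range constraint on an integer id:

[HttpGet("{id:range(1,100)}")]
public IActionResult GetById(int id)
{
    return Ok(id);
}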

Regex Constraints

Regex constraints are a great way to limit a string input. By now most developers know exactly what regex is, so there isn't much point doing a deep dive on how to format your regex; just throw it in as a constraint and away it goes.
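
A sketch with an illustrative pattern (lowercase letters only):

[HttpGet("{code:regex(^[a-z]+$)}")]
public IActionResult GetByCode(string code)
{
    return Ok(code);
}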

It is worth noting that, for whatever reason, the .NET Core team added another handy "quick" way of matching alpha characters only, instead of using regex. Here you can just use the "alpha" constraint.
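
For example (a sketch):

[HttpGet("{name:alpha}")]
public IActionResult GetAlpha(string name)
{
    return Ok(name);
}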

Developing a Webservice in DotNetNuke

CheapASPNETHostingReview.com | Best and cheap DotNetNuke hosting. I have recently been assigned to build a DotNetNuke web service to allow a Windows application (or any sort of .NET client, for that matter) to manage DotNetNuke user accounts (create, change roles, delete, retrieve email addresses, and so forth).

Since I had a tough time locating an accurate code sample or documentation that really applies to DotNetNuke 7.3, and to accessing it without being previously logged in to DotNetNuke, it was difficult to build anything at all. I eventually found out how to do it properly, so I thought I would put my efforts to some use and write a blog post explaining how to get it done step by step.

That said, let's begin with the fundamentals and just create a publicly available web service that permits anybody to ping the service and get a pong back. For that we are going to use the new DotNetNuke 7 Services Framework, which makes it fairly simple once you know how to use it.

In order to create a web service that will work within DotNetNuke 7, you will need to fire up Visual Studio and create a Class Library project (C# or VB, but all examples listed here will be in C#).

That done, we will then reference some required DotNetNuke 7 libraries (using the Add Reference dialog box); here's the list:

Then we also need to reference the System.Web assembly from the .NET tab of the same dialog box.

Finally, we need to set the output path of the project to the DotNetNuke bin directory, and we are ready to code.

Here is the code, the explanations follow:
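
The original listing isn't reproduced here, but based on the explanations that follow it would be roughly this (namespace and route names are illustrative):

using System.Net;
using System.Net.Http;
using System.Web.Http;
using DotNetNuke.Web.Api;

namespace MyService.Services
{
    public class PingController : DnnApiController
    {
        // Publicly reachable GET action that just answers "Pong!".
        [AllowAnonymous]
        [HttpGet]
        public HttpResponseMessage PublicPing()
        {
            return Request.CreateResponse(HttpStatusCode.OK, "Pong!");
        }
    }

    // Maps the service to a URL pattern under /DesktopModules/MyService/API/.
    public class RouteMapper : IServiceRouteMapper
    {
        public void RegisterRoutes(IMapRoute mapRouteManager)
        {
            mapRouteManager.MapHttpRoute("MyService", "default", "{controller}/{action}", new[] { "MyService.Services" });
        }
    }
}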

  1. We simply start with some using statements for our needs, as shown above.
  2. We create a namespace for our service, and whatever name we use here will be part of the URL. I used MyService just for this example, but use any name that makes sense for your service.
  3. Now we create a public class for our controller. You can create several controllers if you want to; a controller is just a group of related actions that make sense to group together. In my real project I have a PingController for testing purposes, a UsersController for any actions that relate to user accounts, and so forth. Just use a name that makes sense, because it will also show up in the URL. Two things to be careful about here:
    • The name of your controller must end with the word Controller, but only what comes before it will show in the URL; so for PingController, only Ping will appear in the URL route.
    • It must inherit DnnApiController so it uses the DotNetNuke Services Framework.
  4. Then we create the actual action, in our case PublicPing. It's just a simple method which returns an HttpResponseMessage and may carry a handful of attributes. By default the new Services Framework responds only to host users, and you must explicitly grant other access rights if necessary; in this case [AllowAnonymous] makes this method (or action, if you prefer) available to anyone without credentials. The next attribute, [HttpGet], makes this action respond to the HTTP GET verb, which is usually used when requesting some data from the web server.
  5. Finally, inside that action, you add whatever code your action needs; in this case we just return the string "Pong!". Just remember that you should return an HttpResponseMessage rather than a string, int or other object.

OK, so our controller and action are done; now we just need to map that to an actual URL, and that is exactly what the final portion of the earlier code does. In essence this code tells DotNetNuke to map a specific URL pattern to the methods defined in your class. You can use that code as is, simply replacing MyService with whatever your service name is.

Testing:

That is all there is to it; your service is ready! To test it, first compile it, then just navigate to http://yourdomain/DesktopModules/MyService/API/Ping/PublicPing and you should see "Pong!" in your browser as a response.

Passing parameters

OK, so the basic code above is working, but it doesn't do anything useful. Let's add something more useful by creating an action that will give us the email address for a specific user id.

Again, here’s the code and the explanations will follow (place the code inside the same namespace as the previous one):
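
The original listing isn't shown; a sketch consistent with the explanations below would be:

using System.Net;
using System.Net.Http;
using System.Web.Http;
using DotNetNuke.Entities.Users;
using DotNetNuke.Web.Api;

public class UsersController : DnnApiController
{
    // Only host users can call this action.
    [RequireHost]
    [HttpGet]
    public HttpResponseMessage GetEmail(int userid)
    {
        // PortalSettings comes for free because we inherit DnnApiController.
        UserInfo user = UserController.GetUserById(PortalSettings.PortalId, userid);
        if (user == null)
        {
            return Request.CreateResponse(HttpStatusCode.NotFound, "User not found");
        }
        return Request.CreateResponse(HttpStatusCode.OK, user.Email);
    }
}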

First we create a UsersController class which will hold all actions related to user accounts. It isn't strictly required (you can have several actions within the same controller), but because this action is not in any way related to our PingController, let's create a new, more descriptive one.

We then create a GetEmail action (method) which accepts a userid parameter. The [RequireHost] attribute here makes it accessible only to host users; we will see other authentication options later.

The code inside the method itself is pretty much self-explanatory. The only interesting thing to note here is that because our class inherits DnnApiController, we already have a PortalSettings object available. That is the big benefit of using the DotNetNuke Services Framework. You also get a ModuleInfo object to represent your module (if there is one with the same name as your service, which is not essential, as in this scenario), a PortalSettings object that represents the portal at the domain name used to access the service (portal alias), and finally a UserInfo object representing the user that accessed the web service.

Testing:

If we now navigate to http://yourdomain/DesktopModules/MyService/API/Users/GetEmail?userid=2 you should receive the email address back from the server, unless of course that userid does not exist; make sure you test with a userid that actually exists for that portal. If you were not previously logged in with a host account, you will be asked for credentials.

Limiting access to particular roles

Alright, that works; however, you need to give host credentials to anyone needing to use your web service. To avoid that, you can replace [RequireHost] with [DnnAuthorize(StaticRoles="Administrators")], which limits access to administrators. That is better, but you still have to give them an admin account. So the easy way to grant only limited access is to create a new role in DotNetNuke just for your web service and substitute Administrators with that specific role name in the authentication attribute.

Using HttpPost (reply to a comment down below)

To answer Massod's comment below, it's nearly exactly the same thing; however, you have to create an object to contain the posted information.

Let's make a simple ping that uses POST; first we need to create an object which will contain the posted info, such as:
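
A minimal sketch of such a class (the property name is illustrative):

public class PingData
{
    public string Message { get; set; }
}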

Then we create the service method something like this:
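
A sketch of the POST action, reusing the hypothetical PingData class above:

[AllowAnonymous]
[HttpPost]
public HttpResponseMessage PostPing(PingData data)
{
    // Echo the posted message back so we can see the round trip while testing.
    return Request.CreateResponse(HttpStatusCode.OK, "Pong! You posted: " + data.Message);
}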

Note that normally a POST would only return OK and no message; I am just doing this so we can test here.

Now since this is a POST verb, we can't test it by only using URL parameters; we need to make an HTML file with a form to test it out. It would be something like this:
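
A bare-bones sketch (the field name must match the property on the posted object, and the URL assumes the hypothetical names used above):

<!DOCTYPE html>
<html>
<body>
  <form action="http://yourdomain/DesktopModules/MyService/API/Ping/PostPing" method="post">
    <input type="text" name="Message" value="Hello" />
    <input type="submit" value="Send" />
  </form>
</body>
</html>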

The crucial thing to note here is that you cannot just create your POST method taking a string, even when that is all you require; you do have to create an object which will receive your parameters.

Also don't forget that this is only for testing; you usually do not want to make this publicly accessible. You would normally use another attribute than [AllowAnonymous], such as [DnnModuleAuthorize(AccessLevel = SecurityAccessLevel.View)] and [ValidateAntiForgeryToken], unless you truly want it to be public.

.NET Core And SQL Server In Linux Docker Containers

CheapASPNETHostingReview.com | Best and cheap ASP.NET core hosting. Throughout the years, we have been using ASP.NET and SQL Server mainly on Windows. Now the times have changed! You can now develop the same ASP.NET (with more optimized runtime and libraries) apps with the same SQL Server Database Engine on Linux and this is what I want to show you here.

To make things more interesting, I will take Docker, a leading container technology platform, into account for this demo. We have an Azure Linux VM (Ubuntu 16.04) where Docker is installed. We will spin up an ASP.NET Core container and a SQL Server container in a separate Docker user-defined network, accessing the application container from the VM’s public IP address and SQL Server container from SSMS on our local Windows Machine.


Getting Started

I have a Linux Ubuntu 16.04 VM in Azure with Docker installed. I will SSH into the machine and generate a default ASP.NET Core 1.0 LTS project with SQL Server Entity Framework Core provider installed. You can do so manually either using dotnet CLI or Yeoman Generators.

The default ASP.NET Core project uses Identity for authentication\authorization. Identity, in turn, depends upon an EFCore provider which is in our case SQL Server.

Now, we do the following 3 important things here.

  1. Pull down the official Microsoft SQL Server image from Docker Hub
  2. Change the Connection String of the application such that it points to the SQL Server Docker container
  3. Dockerize the application

Pull down the official SQL Server Docker image from the terminal as,

docker pull microsoft/mssql-server-linux:latest

Next, change the connection string of the ASP.NET Core application in the appsettings.json file using Vim or Nano editor at the root as,
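
The file isn't reproduced here, but the relevant part of appsettings.json would look something like this (database name and password are illustrative; the server name must match the SQL Server container name used later):

"ConnectionStrings": {
  "DefaultConnection": "Server=sqlinux;Database=aspnetcoreapp;User Id=SA;Password=Br0ckLesnar!;Trusted_Connection=false;MultipleActiveResultSets=true"
}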

Notice the Server name in the connection string. It should not be the localhost if you want to run the application inside the Docker container as in our case. The Server name must match with the SQL Server custom container name when we run it. This is how Services are discovered by the Docker Engine.

Also, make sure that "Trusted_Connection" is set to false, since it forces integrated security, which is not supported on Linux.

Now finally, create the Dockerfile to build the Docker image of our application at the root as,

touch Dockerfile

With the contents,
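
The original Dockerfile isn't shown; a minimal sketch, assuming the microsoft/dotnet SDK base image and that the app should listen on port 5000 to match the port mapping used later:

FROM microsoft/dotnet:1.1-sdk
ENV ASPNETCORE_URLS http://+:5000
WORKDIR /app
COPY . .
RUN dotnet restore
EXPOSE 5000
ENTRYPOINT ["dotnet", "run"]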

And build the image with any name (aspnetcoreapp in our case) by typing in the terminal:

docker build -t aspnetcoreapp .

It will start restoring the NuGet packages, set up the environment and build the Docker image for the application.

We now have the SQL Server and ASP.NET Core Docker images. The next thing we need to do is to configure the Azure VM’s Network Security Group (NSG) to open port 80 to allow HTTP traffic from the Internet and the port 1433 to allow our local SQL Server Management Studio to connect to the SQL Server container running inside the Linux VM.

My Linux VM is provisioned using the ARM model, which is why we need to configure the NSG that was created with the VM. If you used the ASM model, configure the VM's endpoints instead.

To do this, we add inbound security rules for ports 80 and 1433. So go to the NSG blade => Inbound security rules => Add inbound security rule, type any suitable names, and open ports 80 and 1433.


This is all we have to do. Now we have 2 Docker images and have configured the NSG of the NIC attached to the VM.

Spinning up the containers

Spin up the SQL Server Container

To spin up the SQL Server container, type in the terminal as,

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Br0ckLesnar!' -p 1433:1433 -d --name sqlinux --network=isolated_network microsoft/mssql-server-linux

Notice the name of the container. As said earlier, this must match the server name given in the connection string of the web app settings. Also notice that we must place these running containers inside a separate Docker network; if we don't specify a network, they will run inside the default network, where automatic service discovery by container name does not work, so we create a separate user-defined network.
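
A single command creates it; the name just has to match the --network value passed when running the containers:

docker network create isolated_network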

And, use this network for your containers.

Spin up the ASP.NET Core Container

To run the ASP.NET Core container, simply type:

docker run -p 80:5000 -d --name webapp --network=isolated_network aspnetcoreapp

We now have the application and database containers running and they are interacting with each other. To test this, browse to the public IP address or DNS name of your Linux VM on your local machine’s browser and you will see that the application is up and running.


Go to the Register page and try to register a user.


And you can see that the user is successfully created in SQL Server running inside the Docker Container.


Connecting Windows SSMS with Docker SQL Server

Now we will use our local SQL Server tool, SQL Server Management Studio (SSMS), to connect to the SQL Server instance running inside the Docker container inside the Azure VM. Remember, we opened port 1433 in the NSG attached to the NIC of the VM. So open SSMS and type the IP address of the VM with the port (in the format ip_address,port) in the Server name field, use the SQL Server Authentication option, type the user SA, and type the password we used when we spun up the SQL Server container.


We see that the server is connected. Now run a SQL query against one of the tables created by Identity in the database and you will see that the record has been successfully added and is displayed in SSMS.


Conclusion

We saw how to connect an application container and a database container using the service discovery feature of the Docker Engine. I did not mount any volume to the SQL Server container, nor did I use any Docker volume plugin, so the data is not persisted outside the container. Using the same technique for a production use case is not recommended. The idea was to provide a step-by-step guide to building a simple 3-tier application using Docker containers.

How to Secure your ASP.NET Core MVC and Web API app using Google

CheapASPNETHostingReview.com | Best and cheap ASP.NET Core MVC hosting. Now it’s time to tackle a common scenario – securing your .NET Core Web app (even when accessed via Angular).

To keep things simple for this example, we’re going to require our users to log in as soon as they enter our app.

We can use ASP.NET Core to redirect the user to a login page as soon as they hit our default controller action (/home/index).

That way, they can’t even load our Angular app until they’re logged in.

Once they’ve logged in (via Google), Angular will load as normal and any requests from the Angular app to our Web API will work.

The end result

Let’s start by looking at the end result.

When we’ve made the changes to our app, any users attempting to access it will be redirected to this amazing login page.

Log-in-with-Google-1

We’re not going to win any prizes for design here but it will get us up and running. When your user clicks the Log in with Google link, they’ll be redirected to Google for authentication.

Google-sign-in-page

Once they’ve confirmed their email and password, they’ll be redirected back to your application, along with tokens confirming they have been authenticated.

ASP.NET Core will then accept those tokens as proof of identity and check for them on every request to a secure part of your app.

Sample App

To save spinning up yet another sample app, I’m going to use my Angular 2 Weather Station for this.

However, any ASP.NET Core MVC app will suffice.

If you don’t already have one and want to spin up a new app to play along, this should work…
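
The original command isn't shown; with the .NET Core SDK installed, something like this should scaffold and run a suitable MVC app:

dotnet new mvc
dotnet restore
dotnet run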

Google – for all your authorization needs

As we covered in our look at the big picture, you need an Authorization server. The Auth server takes care of requesting user credentials, confirming they are who they claim to be, then redirecting them back to your application with an access token.

To save ourselves the hassle of creating our own login system for now, we’re going to use Google as our Authorization Server (using OAuth2 and OpenId Connect).

What in the world is OAuth 2 and OpenId Connect?

OK, I’ll level with you.

When I started putting together this article, I fully intended to use OAuth 2 by itself.

If you’re not familiar with it, OAuth 2 is a means by which you can request an authorization key for your app via a third party e.g. Google.

The thing is though, it was never really designed for true user authentication.

If you think back to our house analogy from the big picture: OAuth 2 will give users a key to your house, but once they have a key, there's no longer any guarantee that they are who they claim to be. They could have given that key to anyone, who can now do what they like in your house!

Also, there are no strict rules on how OAuth2 should be implemented. The big providers like Google and Facebook started encouraging sites to use it for pseudo Authentication, hence “Login with Google” buttons appearing everywhere. But OAuth2 by itself is pretty weak for Authentication and there have been a number of significant holes found in it over the last few years.

This is where OpenId Connect comes in. This sits on top of OAuth 2 and effectively turns it into the secure authentication framework you really want it to be.

Using OpenId Connect, you can be much more sure that the person holding the key to your web app is the person they claim to be.

The good news is, setting up OpenId Connect in ASP.NET Core is pretty straightforward and definitely worth it for the extra security it provides.

Set up your app in Google

The first step is to head on over to Google to set up the OAuth 2.0 side of things.

You’ll need to generate credentials for your app (to use when talking to Google) and set up redirect URLs so your users are redirected back to your app when login succeeds or fails.

You can follow the guide on Setting up OAuth 2.0 over at Google’s official support site.

Go ahead, do that now, then you can follow along with the next steps.

Note, as part of the set up, you will need to provide an Authorized redirect URI.

Assuming your ASP.NET app is using the default port, you will typically want to add http://localhost:5000/signin-oidc for testing on your local machine.

But watch out, if you’re using Visual Studio 2017, your app might run via IISExpress using a different port, you’ll need to use the correct URL either way.

Redirect-URI-1

Securing the ASP.NET Core app

Now it’s time to look at our ASP.NET MVC app.

You can easily restrict any part of your app using the [Authorize] attribute.

Given we want to block users before they even get to the Angular app, we’ll go ahead and lock down our Home controller’s Index Action.

Modify HomeController.cs as follows.
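
A sketch of what that looks like; the [Authorize] attribute is the only important part:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    // Any request for /home/index now requires an authenticated user.
    [Authorize]
    public IActionResult Index()
    {
        return View();
    }
}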

When a user accesses our angular app, they start here. With this attribute in place, our application is now effectively restricted to logged in users.

It’s a bit brute force, but this is the simplest approach we can take whilst we get our heads around how all of this works, before we get in to more complicated scenarios like letting users into part of our SPA before requiring them to log in.

All well and good, but if we stop here we’ve literally prevented anyone from getting into our app.

No-auth-set-up

To remedy that, we need to tell ASP.NET Core how we want users to be authenticated, along with some important details like where to send them when they’re not.

Authentication with Cookies

To keep things simple, we’ll use Cookie Authentication here. If anyone tries to access a restricted resource and doesn’t have a legitimate ASP.NET security cookie, they will be redirected to our super login page.

Start off by bringing in the Microsoft Cookies Nuget package.

With that installed, you can easily configure Cookies Authentication in Startup.cs.
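
A minimal sketch for ASP.NET Core 1.x (the package is Microsoft.AspNetCore.Authentication.Cookies); this goes in Startup.Configure, before UseMvc. The challenge itself is issued by the OpenId Connect middleware we configure later:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = CookieAuthenticationDefaults.AuthenticationScheme,
    // Validate the auth cookie on every request and populate HttpContext.User.
    AutomaticAuthenticate = true,
    ExpireTimeSpan = TimeSpan.FromMinutes(60)
});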

This sets things up so that any unauthorized users attempting to access a restricted part of our app, will be required to log in via OpenId Connect.

Create a login page

Before we go any further, it would be a good idea to create our login page, complete with button to log in via Google.

Add an AccountController.cs file to the controllers folder and define a simple Login action.
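
A sketch of the controller and action:

using Microsoft.AspNetCore.Mvc;

public class AccountController : Controller
{
    // Serves the login page with the "Log in with Google" link.
    public IActionResult Login()
    {
        return View();
    }
}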

Create a Login.cshtml view in Views/Account.

If you're not using Bootstrap, feel free to skip the divs; the key part is the link to sign in with Google (via OpenId Connect).
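
Something along these lines (markup is illustrative):

<div class="row">
    <div class="col-md-4">
        <h2>Log in</h2>
        <a class="btn btn-primary" href="/account/external">Log in with Google</a>
    </div>
</div>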

Now if you’re paying attention you’ll have noticed we’re linking to /account/external but that doesn’t exist (yet).

Back to the AccountController, add an External action.
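
A sketch of that action; the Challenge result hands the request off to the OpenId Connect middleware configured below (it needs the Microsoft.AspNetCore.Authentication.OpenIdConnect and Microsoft.AspNetCore.Http.Authentication namespaces):

public IActionResult External()
{
    // Trigger the OpenId Connect challenge, which redirects the user to Google.
    return Challenge(new AuthenticationProperties { RedirectUri = "/" },
        OpenIdConnectDefaults.AuthenticationScheme);
}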

With this in place, .NET Core is almost ready to challenge your user’s credentials via Google.

Configure OpenId Connect

Finally, you just need to bring in Microsoft’s OpenId Connect NuGet package.

Then head on back over to Startup.cs and modify the Configure method.
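
A hedged sketch for ASP.NET Core 1.x (the package is Microsoft.AspNetCore.Authentication.OpenIdConnect), placed after the cookie middleware and before UseMvc; the ClientId and ClientSecret values are placeholders for the ones Google gave you:

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    AuthenticationScheme = OpenIdConnectDefaults.AuthenticationScheme,
    SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme,
    AutomaticChallenge = true,
    Authority = "https://accounts.google.com",
    ClientId = "your-client-id.apps.googleusercontent.com",
    ClientSecret = "your-client-secret",
    SaveTokens = true,
    Events = new OpenIdConnectEvents
    {
        OnRedirectToIdentityProvider = context =>
        {
            // Send interactive users to our own login page first; only the explicit
            // /account/external challenge should continue straight on to Google.
            if (!context.Request.Path.StartsWithSegments("/account/external"))
            {
                context.Response.Redirect("/account/login");
                context.HandleResponse();
            }
            return Task.FromResult(0);
        }
    }
});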

You can get hold of your ClientId and ClientSecret from Google (assuming you’ve followed the instructions to set up Google OAuth2)

Important: Don’t go including your Id and Secret “naked” in the code like this other than for testing. In reality you’ll want to protect this sensitive information. One option is to use App Secrets during development.

As you can see, OpenId Connect is actually pretty simple to set up. You need to point it at an authority server (Google in this case).

The OnRedirectToIdentityProvider event handler is there to make sure users are redirected to our login page when they try to access a restricted part of the app. The Request.Path check simply makes sure we don’t accidentally block our app as it attempts to complete the sign-in process via Google.

Give it a spin

All that’s left is to test it out.

When you access your app you’ll be redirected to the login page.

From there, clicking on the Login link will send you off to Google where you can log in with your Google account.

Once you’ve done that, you’ll be sent back to your app where you’ll have been granted access.

Lock down those APIs

So far we haven’t locked down our API controllers. That means anyone (logged in or not) can still go directly to our APIs.

Thankfully, now we’ve tackled the OpenId Connect plumbing, it’s trivial to add the [Authorize] attribute to any of our API controllers, locking them down for anyone but authorized users (who have an auth cookie).

The last step

Phew, you made it this far.

Now your users can log in via Google (using OpenIdConnect).

Once Google’s confirmed their identity (by asking them to log in), they’re redirected back to your app, complete with access and identity tokens.

ASP.NET Core’s Cookie Middleware then kicks in to serialize your user’s principal (information about their identity) into an encrypted cookie. Thereafter, any requests to your app validate the cookie, recreate the principal and assign it to the User property on HttpContext.

One last thing, you might want to give your users a way to sign out of your app. Just add the following to your AccountController.
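
A sketch of a sign-out action (it needs the Microsoft.AspNetCore.Authentication.Cookies namespace for the scheme name):

public async Task<IActionResult> SignOut()
{
    // Remove the auth cookie, then send the user back to the (protected) home page.
    await HttpContext.Authentication.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
    return Redirect("/");
}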

Any request to /account/signout will now sign them out, requiring them to log in again to access your app.

How To Use Response Caching in ASP.NET Core 1.1

CheapASPNETHostingReview.com | Best and cheap ASP.NET Core 1.1 hosting. With ASP.NET Core 1.1, many new features were introduced. One of them was enabling gZip compression, and today we will take a look at another new feature: the Response Caching Middleware. This middleware allows you to implement response caching. Response caching adds cache-related headers to responses. These headers specify how you want client, proxy and middleware to cache responses. It can drastically improve the performance of your web application. In this post, let's see how to implement response caching in an ASP.NET Core application.

Response Caching in ASP.Net Core 1.1


To use this middleware, make sure you have ASP.NET Core 1.1 installed. You can download and install the .NET Core 1.1 SDK.

Let's create an ASP.NET Core application. Open project.json and include the following NuGet package:
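
A sketch of the relevant project.json entry (version illustrative):

"dependencies": {
  "Microsoft.AspNetCore.ResponseCaching": "1.1.0"
}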

Once the package is restored, we need to configure it. So open Startup.cs and add the highlighted line of code to the ConfigureServices method:
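
A minimal sketch; the AddResponseCaching call is the relevant line:

public void ConfigureServices(IServiceCollection services)
{
    // Registers the response caching services with the DI container.
    services.AddResponseCaching();
    services.AddMvc();
}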

And now let's add this middleware to the HTTP pipeline, so add the highlighted line to the Configure method of Startup.cs:
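
Again a sketch; the UseResponseCaching call is the relevant line:

public void Configure(IApplicationBuilder app)
{
    // Add the caching middleware before MVC so cached responses can be served early.
    app.UseResponseCaching();
    app.UseStaticFiles();
    app.UseMvcWithDefaultRoute();
}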

We are done with all the configuration. To use it, you need to add the ResponseCache attribute to the controller's action method. So open HomeController.cs and add the ResponseCache attribute to the Contact method, setting the duration to 20 seconds. For the demo, I modified the Contact method to add the date and time, so we can see response caching in action.
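
A sketch of the modified action:

[ResponseCache(Duration = 20)]
public IActionResult Contact()
{
    // Including the current time makes it easy to see whether a cached response was served.
    ViewData["Message"] = "Your contact page. " + DateTime.Now.ToString();
    return View();
}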

This attribute will set the Cache-Control header and set max-age to 20 seconds. The Cache-Control HTTP/1.1 general-header field is used to specify directives for caching mechanisms in both requests and responses. Use this header to define your caching policies with the variety of directives it provides. In our case, the following header will be set:
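
Cache-Control: public,max-age=20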

Here the cache location is public and expiration is set to 20 seconds. Read this article to know more about HTTP Caching.

Now let's run the application to see it in action. When you visit the contact page, you should see the current date and time of your system. As the cache duration is set to 20 seconds, the response will be cached for 20 seconds. You can verify this by visiting other pages of the application and then coming back to the Contact page.

Response-Caching-in-ASP.NET-Core

During a browser session, when browsing multiple pages within the website or using the back and forward buttons to visit pages, content will be served from the local browser cache (if not expired). But when the page is refreshed via F5, the request will go to the server and the page content will be refreshed. You can verify this by refreshing the contact page using F5. So when you hit F5, the response caching expiration value plays no role in serving the content. You should see a 200 response for the contact request.

Static content (like images, CSS, JS), when refreshed, will result in a 304 Not Modified response if nothing has changed for the requested content. This is due to the ETag and Last-Modified values appended to the response header. See the image below (screenshot taken in Firefox).

Response-Caching-in-ASP.NET-Core-ETag

Firefox gives a 304 whereas Chrome gives a 200 response for static files. Strange behavior from Chrome.

When a resource is requested from the site, the browser sends the ETag and Last-Modified values in the request header as If-None-Match and If-Modified-Since. The server compares these headers' values against the values present on the server. If the values are the same, the server doesn't send the content again; instead, it sends a 304 - Not Modified response, which tells the browser to use the previously cached content.

Other options with ResponseCache attribute

Along with duration, following options can also be configured with ResponseCache attribute.

  • Location: Gets or sets the location where the data from a particular URL must be cached. You can assign Any, Client or None as cache location.
  • NoStore: Gets or sets the value which determines whether the data should be stored or not. When set to true, it sets “Cache-control” header to “no-store”. Ignores the “Location” parameter for values other than “None”. Ignores the “duration” parameter.
  • VaryByHeader: Gets or sets the value for the Vary response header.

Thank you for reading. Keep visiting this blog and share this in your network. Please put your thoughts and feedback in the comments section.

How To Use Node Services In ASP.NET Core

CheapASPNETHostingReview.com | Best and cheap ASP.NET Core hosting. This post is about running JavaScript code on the server. A huge number of useful, high-quality web-related open source packages are in the form of Node Package Manager (NPM) modules. NPM is the largest repository of open-source software packages in the world, and the Microsoft.AspNetCore.NodeServices package means that you can use any of them in your ASP.NET Core application.



To use Node Services, first you need to include a reference to the Microsoft.AspNetCore.NodeServices package in your project file. You can do this using the dotnet add package Microsoft.AspNetCore.NodeServices command.

Then you need to register Node Services with the dependency injection container. You can do this in your ConfigureServices() method.
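
A sketch of that registration:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    // Makes INodeServices available for injection.
    services.AddNodeServices();
}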

Now you're able to get an instance of INodeServices in your application. INodeServices is the API through which .NET code can make calls into JavaScript that runs in a Node environment. You can use the FromServices attribute to get the instance of INodeServices in your action method. Here is the Add method implementation in MVC:
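
The original listing isn't shown; a sketch of such an action (the numbers and ViewData key are illustrative):

[HttpGet]
public async Task<IActionResult> Add([FromServices] INodeServices nodeServices)
{
    // Invoke the function exported by AddModule.js with two arguments and read back an int.
    var result = await nodeServices.InvokeAsync<int>("./AddModule.js", 5, 10);
    ViewData["Result"] = result;
    return View();
}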

And here is the code of the AddModule.js file:
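
A sketch of the module; NodeServices passes a Node-style callback first, followed by the arguments from InvokeAsync:

module.exports = function (callback, first, second) {
    var result = first + second;
    // First parameter is the error (null here), second is the value returned to .NET.
    callback(null, result);
};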

You need to specify the type of the result in your InvokeAsync method; in this example I am using int. NodeServices allows ASP.NET Core developers to make use of the entire NPM ecosystem, which gives rise to a huge range of possibilities. You can find the full source code on GitHub.

Happy Programming.

How To Create Help Desk Web Application using ASP.NET Core

CheapASPNETHostingReview.com | Best and cheap ASP.NET Core hosting. Suppose you work for a small to midsize company that employs 50-100 workers. The Help Desk — a subsidiary of the Information Services Division — is in charge of trouble tickets regarding general PC issues such as email, viruses, network issues, etc. Initially, the Help Desk team stored this information in Excel spreadsheets, but as the company has grown, managing these spreadsheets has become tedious and time consuming.


The Help Desk has asked you to devise a more efficient solution that could be developed internally, saving the company money. As you start to think about it, the following requirements are apparent: fields for the submitter’s first and last name, as well as their email address. You’ll also need combo boxes for indicating ticket severity (low, medium, high), department, status (new, open, resolved), employee working on the issue, as well as an area for comments. Of all the solutions available, creating an internal help desk Web application with ASP.NET is relatively simple.

In the following article, we'll see how to implement these features in an ASP.NET help desk Web application using a database-driven approach.

Creating the JavaScript File

Because creating the JavaScript file is the easiest of the work left, we'll do this next. From the Solution Explorer, follow these steps:

Creating the Help Desk Class

Now that we have our data coming in, we need to be able to record a help desk ticket submission. We need to create an event handler in a class to handle it. Let’s first create a help desk class by doing the following:

  • Right click the project solution.
  • Choose Add > New Item.
  • In the Add New Item window, select Class.cs.
  • In the name text field, type "HelpDesk" and then click Add.

Double click HelpDesk.cs from the Solution Explorer, which will show the empty class as shown below:

We need to import three libraries as shown below:
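
Those three imports are:

using System.Data;
using System.Configuration;
using System.Data.SqlClient;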

The first library (System.Data) allows us to work with stored procedures in ADO.NET, the second (System.Configuration) allows us to reference a connection key from the configuration file, and the last (System.Data.SqlClient) allows us to connect to SQL Server.

How To Configure ASP.NET Core 2.0 with Razor Pages Part 1

CheapASPNETHostingReview.com | Best and cheap ASP.NET Core 2.0 hosting. At Build 2017, there were a lot of new features announced for ASP.NET Core 2.0, .NET Core 2.0 and .NET Standard 2.0.

Today, we're going to look at a few of the changes, specifically the new configuration model and Razor Pages.

Configuration

A lot of the changes that the ASP.NET Core team have brought to ASP.NET Core 2.0 are all about taking the basic application setup and making it as automatic, and quick and easy to change as possible. The first and easiest way that they have done this is by creating the AspNetCore.All package.

AspNetCore.All Package

In previous versions of ASP.NET Core, when we created an application and wanted to add in functionality, we had to search on NuGet or use the Package Manager to find the NuGet packages for the functionality that we wanted.

This led to a csproj which looks like this:
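
The original file isn't reproduced here; a representative ASP.NET Core 1.x csproj (package versions illustrative) looks something like:

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.3" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.2" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.2" />
  </ItemGroup>
</Project>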

Re-targeting this project as a netcoreapp2.0 (.NET Core 2.0) application with ASP.NET Core 2.0 libraries, we get the following csproj file:
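
Which would be roughly this (again a sketch):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
  </ItemGroup>
</Project>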

The AspNetCore.All package is a meta package which pulls down all of the relevant (Anti Forgery, Auth, Entity Framework Core, MVC, Static files, etc.) packages to our application when package restore happens.

Because we no longer have to track down each of these individual packages, our job is made easier. Also, when the packages within the AspNetCore.All package are updated, the updated versions will be included in the AspNetCore.All meta package.

The AspNetCore.All package is included in .NET Core 2.0’s Runtime Store, and is compiled to native code, rather than IL. This means that all of the libraries included in the AspNetCore.All package are pre-compiled as native binaries for the Operating Systems that .NET Core 2.0 supports.

Boot Time Improvements

Dan and Scott were able to show that ASP.NET Core 2.0 applications can cold boot in less than a second, versus up to 7 seconds for ASP.NET Core 1.0 applications.

The ASP.NET Core team have achieved this by shipping the AspNetCore.All package in native code for each platform, and by enabling view pre-compilation. By pre-compiling the views, they no longer have to be compiled at start up.

View pre-compilation is a trick that has been around in .NET Framework for a while, but it isn’t a default build action.

New Program Setup

This leads me nicely onto the new program setup model.

In ASP.NET Core 1.0 the program.cs file contained a single method for configuring and running the server, and there was a lot of manual configuration required, as in the following code block:
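
A sketch of the familiar 1.0-era Program class:

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}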

To enable server features, you had to know what those features were called, or rely on IntelliSense to find the right methods.

But in ASP.NET Core 2.0, a lot of the configuration is taken care of for us. So much so that the following code snippet is the default program.cs for an ASP.NET Core 2.0 application:
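
A sketch of the 2.0 default, where CreateDefaultBuilder wires up Kestrel, IIS integration, configuration and logging for us:

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}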

From the off, you can see how much simpler the new program.cs file is. The new program.cs goes hand in hand with the new startup.cs.

First, a refresher on what the ASP.NET Core 1.0 startup.cs looked like:
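
A sketch of the 1.0-style Startup, where we build the configuration ourselves:

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        // Explicitly list every configuration source.
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();

        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }
}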

Configuration is handled by us developers and we have to explicitly list all configuration files and enable logging.

Compare this to the new startup.cs:
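
And a sketch of the 2.0 version, where the configuration arrives ready-made through the constructor:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }
}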

There’s a lot that’s changed here, so let’s look at the changes in turn.

The Constructor and DI

Taking a look at the constructor, we can see that the configuration is Dependency Injected in for us.

This is because all of the explicit configuration that we had to do in ASP.NET Core 1.0 is done automatically for us. ASP.NET Core 2.0 will look for the relevant json files (appsettings.json and its environment-specific variants), deserialise them for us, and inject the values into the IConfiguration object.

The ConfigureServices method is pretty much the same, but the Configure method has been greatly simplified:

In the ASP.NET Core 1.0 Configure method, we had to inject the ILoggerFactory in order to enable logging:
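
The 1.0 template's version looked roughly like this:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // Logging had to be wired up by hand from the "Logging" configuration section.
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    app.UseStaticFiles();
    app.UseMvcWithDefaultRoute();
}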

However, the ASP.NET Core 2.0 Configure method doesn’t have the ILoggerFactory injected in:
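
In 2.0, the ILoggerFactory parameter is simply gone (a sketch):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseStaticFiles();
    app.UseMvcWithDefaultRoute();
}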

This is because the contents of the appsettings.json are parsed and added into the IConfiguration object which is injected in at the constructor level of the class:

If we take a look at the appsettings.json, we can see that the logging is set up for us there:
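
In the 2.0 preview template it looks roughly like this:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  }
}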

And looking at the highlighted lines, we’ll see that logging is set up so that we’ll only get warnings. This can be proven by running the application from the terminal. Doing so, and navigating around in the application, you won’t receive any messages in the terminal other than warnings:

However, if we edit the appsettings.json file to match the following:
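
For example, switching the default level down to Information:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Information"
    }
  }
}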

Then re-run the application and click around, and we'll see the familiar log messages again:

It's entirely up to the developer and their needs as to which level of logging they require. I prefer information logging when I'm developing and switch to warnings once I've published, but your requirements may be different.

Razor Pages

The other big new thing in ASP.NET Core 2.0 is the concept of Razor Pages. Razor Pages are enabled by default, as they are a feature of MVC, thus the following line in the startup.cs enables them:
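
That line is the standard MVC registration in ConfigureServices:

public void ConfigureServices(IServiceCollection services)
{
    // Razor Pages ship as part of MVC, so this is all that's needed to enable them.
    services.AddMvc();
}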

Razor Pages cover the situations when creating a full-blown Controller, View and Model for a single page or a small number of pages seems a little overkill. Take for instance a simple homepage with no controller required; presumably something which could be handled by a static page, but which should have a simple model.

An example of this can be seen in the ASP.NET Core Web App (Razor Pages) template, which is installed as part of the .NET Core 2.0 preview1:

Taking a look at the directory structure for this new template, we can see that the new Razor Pages are located within the Pages directory.

ASP.NET-Core-2.0-Razor-Pages-directory-structure

Routing

Before we take a look at the contents of one of the Razor Pages, it is worth covering how routing for Razor Pages works. The request URL for a Razor Page is mapped to its path within the Pages directory – the Pages directory being the default location which the runtime checks for any Razor Pages that could match the requested URL.

The following table shows a few examples of how the location of Razor Pages maps to requests:

Razor-Pages-Request-Mapping-Example
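
The table itself is an image and isn't reproduced here, but the convention it illustrates is roughly:

Pages/Index.cshtml responds to / and /Index
Pages/Contact.cshtml responds to /Contact
Pages/Store/Contact.cshtml responds to /Store/Contact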

How To Use Microsoft Enterprise Library in ASP.NET

CheapASPNETHostingReview.com | Best and cheap ASP.NET hosting. In this tutorial we will show you how to use Microsoft Enterprise Library, a collection of reusable software components used for logging, validation, data access, exception handling, etc.

Here I am describing how to use Microsoft Enterprise Library for data access.

Step 1: First download the project from http://entlib.codeplex.com/ URL.
Step 2: Now extract the project to get the required DLLs.

Then add references in the Bin directory by right-clicking on Bin -> Add Reference -> and giving the path of these 4 DLLs.

Step 3: Modify the web.config for the connection string.
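
A sketch of the connectionStrings entry (server, database and credentials are placeholders; the name matches the one used in the code below):

<connectionStrings>
  <add name="Fewlines4bijuConnection"
       connectionString="Data Source=YOUR_SERVER;Initial Catalog=YOUR_DATABASE;User ID=YOUR_USER;Password=YOUR_PASSWORD"
       providerName="System.Data.SqlClient" />
</connectionStrings>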

Give the connection string as above, where Data Source is your data source name, Initial Catalog is your database name, and User ID and Password are as in your SQL Server.

Step 4:

Now it is time to write the code.
Write the below 2 lines in the using block.
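
The original two lines aren't shown; for the snippets below you would typically need:

using Microsoft.Practices.EnterpriseLibrary.Data;
using System.Data.Common;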

Here I am writing some examples of how to work with it:
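
A sketch (Enterprise Library 5-style) of calling a stored procedure that returns a DataSet:

Database db = DatabaseFactory.CreateDatabase("Fewlines4bijuConnection"); // connection name from web.config
DbCommand cmd = db.GetStoredProcCommand("Topics_Return");                // a simple SELECT wrapped in a proc
DataSet ds = db.ExecuteDataSet(cmd);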

The above code is a sample that will return a dataset. Here Fewlines4bijuConnection is the connection name and Topics_Return is the stored procedure name that is nothing but a Select statement.
But if the stored procedure takes parameters then the code will look like this:
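
A sketch consistent with the description below (parameter values are sample data):

Database db = DatabaseFactory.CreateDatabase("ASPHostPortalConnection");
DbCommand cmd = db.GetStoredProcCommand("Topics_Save");
db.AddInParameter(cmd, "@Subject", DbType.AnsiString, "Here is the subject");
db.AddInParameter(cmd, "@Description", DbType.AnsiString, "Here is the Description");
db.AddInParameter(cmd, "@PostedBy", DbType.Int32, 4);
db.AddOutParameter(cmd, "@Status", DbType.AnsiString, 255);
db.ExecuteNonQuery(cmd);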

As the code above explains, ASPHostPortalConnection is the connection name and Topics_Save is the stored procedure name, which takes 3 input parameters (Subject, Description, PostedBy) and 1 output parameter (Status).

You may take the values from textboxes (I am providing sample values here, like "Here is the subject" and "Here is the Description"), or you may take the UserID from session (I am giving 4 here). The output parameter will give you a string as defined, and the code to get the value is:
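
string status = Convert.ToString(db.GetParameterValue(cmd, "@Status")); // read the output parameter after executing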

You can pass an input parameter as below:
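
db.AddInParameter(cmd, "@Subject", DbType.AnsiString, "Here is the subject");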

DbType.AnsiString is used since Subject is of string type; you can select different values, like AnsiString or DateTime, from the enum as the parameter type.

The above code applies if you are using stored procedures. Below is an example that shows how to use inline SQL statements:
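
A sketch with a hypothetical Topics table:

Database db = DatabaseFactory.CreateDatabase("Fewlines4bijuConnection");
DbCommand cmd = db.GetSqlStringCommand("SELECT Subject, Description FROM Topics WHERE PostedBy = @PostedBy");
db.AddInParameter(cmd, "@PostedBy", DbType.Int32, 4);
DataSet ds = db.ExecuteDataSet(cmd);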

Happy coding!