AKS Series – Using Azure Dev Spaces with Visual Studio Kubernetes Tooling

Azure Kubernetes Service (AKS) brings a world-class managed Kubernetes service to the cloud. Customers can leverage the power of the Kubernetes platform without having to worry about managing the control plane, which lets them embark on their containerization journey with confidence. In this blog post, we will see how Visual Studio makes it easy to work with AKS using Azure Dev Spaces.

Let’s get started.

Step 1 – Install the Kubernetes tools for Visual Studio 2017. This can be found here – https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vs-tools-for-kubernetes

Step 2 – Open Visual Studio 2017 and Create a new project.

Step 3 – Choose the Container Application for Kubernetes template, name the project devspacedemo, and click OK.

Step 4 – Choose the Web Application (MVC) template.

Step 5 – You will get an option to create a public endpoint for the application in the Kubernetes cluster.

Step 6 – As part of the scaffolding, Visual Studio creates a Dockerfile and an azds.yaml file. The Dockerfile is used to build the container image, and azds.yaml describes the Helm chart and the configuration needed to install it.
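
For reference, a minimal azds.yaml generated by the tooling looks roughly like the following sketch (the chart path and image repository are illustrative and will vary with your project name):

```yaml
kind: helm-release
apiVersion: 1.1
build:
  context: .
  dockerfile: Dockerfile
install:
  chart: charts/devspacedemo
  set:
    replicaCount: 1
    image:
      repository: devspacedemo
      tag: $(tag)
```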

Step 7 – Click on Azure Dev Spaces in Visual Studio to run the application and deploy it to the AKS cluster. Create a new AKS cluster if one does not exist yet and click OK.

Step 8 – Enable Dev Spaces on the AKS cluster if it is not already enabled.

Step 9 – Visual Studio deploys the application to AKS. Once the deployment finishes, Visual Studio opens the web application's endpoint in the browser. In this blog post, we saw how to use the Kubernetes tools in Visual Studio to deploy an ASP.NET Core application inside an AKS Dev Space.

AKS Series – Use Azure Storage Options as Persistent Volumes in AKS

Azure Kubernetes Service brings a world-class managed Kubernetes service to the cloud. Customers can leverage the power of the Kubernetes platform without having to worry about managing the control plane, which lets them embark on their containerization journey with confidence. In this blog post, we will explore the storage orchestration capabilities in AKS. One of the best practices with containers is not to persist data inside the container for the long term, because containers are ephemeral: they can be removed and rebuilt very often and may require storage that persists across pods and beyond the application lifecycle. We will learn how to create persistent volumes in AKS backed by Azure Files.

What is a Persistent Volume? – A persistent volume (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod. We will see how to dynamically create persistent volumes on Azure Files through the Kubernetes API server. To start, we need to create a storage class to define the tier of storage – Premium or Standard.

Step 1 – Create a Storage Class using the following yaml.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: aksshare1

What is a Persistent Volume Claim? – A PersistentVolumeClaim requests either disk or file storage of a particular storage class, access mode, and size.

Step 2 – Create the Persistent Volume Claim dynamically using the following yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
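
Assuming the storage class and claim above are saved as azure-file-sc.yaml and azure-file-pvc.yaml (filenames are illustrative), they can be applied and verified with kubectl:

```
kubectl apply -f azure-file-sc.yaml
kubectl apply -f azure-file-pvc.yaml

# The claim should report a status of Bound once the Azure Files share is provisioned
kubectl get pvc azurefile
```
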
Step 3 – Let’s use the Persistent Volume Claim from an NGINX pod using the following yaml.
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/mnt/azure"
        name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: azurefile

When we apply the pod definition, the persistent volume claim is used to request the storage, and the volumeMount specifies where the container reads and writes data. In this example we mount the volume at /mnt/azure. This blog showcased how to dynamically create persistent volume claims for Kubernetes pods using Azure Files.


Docker Blog Series Part 7 – Deploy Azure Web App on Containers

Container services are encapsulated, individually deployable components that run as isolated instances on the same kernel to take advantage of virtualization that an operating system provides. Thus, each application and its runtime, dependencies, and system libraries run inside a container with full, private access to the container’s own isolated view of operating system constructs.

Azure has various services to orchestrate containers, such as Azure Service Fabric, Azure Container Service, Azure Kubernetes Service, and ACI. In this blog post, we will see how to deploy containers to Azure Web Apps.

Step 1: Create an ASP.NET Core Web Application using Visual Studio.

Step 2: Select the Web Application template. DO NOT check "Enable Docker Support".

Step 3: Once the application is loaded in Visual Studio add a Dockerfile to the root of the project.

Step 4. Open the Dockerfile in Visual Studio, add the following instructions, and save the file.

FROM microsoft/aspnetcore:latest
WORKDIR dockerdemo
ADD ./WebApp .
EXPOSE 8086
ENV ASPNETCORE_URLS http://0.0.0.0:8086
ENTRYPOINT ["dotnet", "bin/Debug/netcoreapp2.0/publish/WebApp.dll"]

Step 5. Open PowerShell in admin mode and browse to the project folder.

Step 6. Execute the docker build command to build an image from the Dockerfile. Include a tag name in the build command as shown below.

docker build . -t monuacr.azurecr.io/webapp:1

Step 7. Execute the docker run command to create a container from the image previously built, as shown below.

docker run monuacr.azurecr.io/webapp:1

Step 8. Inspect the IP address on which the container is running by executing the docker network inspect nat command.

Step 9. Open the web browser and navigate to that IP address; you will see the ASP.NET Core application running.

Step 10. Publish the image to Azure Container Registry.

docker push monuacr.azurecr.io/webapp:1

Azure Web App on Containers

Now we will deploy the container image we pushed to the Azure Container Registry to an Azure Web App as a container deployment. We can easily deploy and run containerized web apps that scale with business needs. Currently, Azure Web Apps on containers supports Linux containers.

Step 11. Go to the Azure Portal and Navigate to the Container registry we pushed the image to.

Step 12. Click on Repositories.

Step 13. Now Click on the image we pushed to the Repository.

Step 14. Now click on the Deploy to Web App link, which lets us deploy the image to an Azure Web App.

Step 15. Fill in the required values for the web app and click on Create.

Step 16. The Azure Web App is now up and running using the Docker image on a Linux container. Navigate to the Web App in the portal.

As you saw, we can deploy Azure Web Apps on containers and get all the PaaS benefits of the Azure Web App platform along with the portability that containers provide.

For more information, check out this article.

https://azure.microsoft.com/en-us/services/app-service/containers/

Docker Blog Series Part 6 – How to use Service Fabric Reverse Proxy for container services

Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. Service Fabric addresses the significant challenges of developing and managing cloud-native applications. It is also an orchestrator of services across a cluster of machines, and Microsoft continues to invest heavily in its container orchestration and management capabilities. In this blog post, we will see how to use the Service Fabric reverse proxy for container services.

Container services are encapsulated, individually deployable components that run as isolated instances on the same kernel to take advantage of the virtualization that an operating system provides. Thus, each application and its runtime, dependencies, and system libraries run inside a container with full, private access to the container’s own isolated view of operating system constructs. Microservices running inside containers in Service Fabric run on a subset of nodes, and Service Fabric orchestration is responsible for service discovery, resolution, and routing. As a result, the endpoints of services running inside containers can change dynamically.

If the container services are deployed in Azure Service Fabric, we can use the Azure Load Balancer to reach them from outside. To make these services and endpoints accessible to external clients, we configure the Load Balancer to forward traffic to each port that a service uses. This approach works, but it requires every service to be configured individually on the Load Balancer.

Another approach is to use the reverse proxy. Instead of configuring the port of each individual service in the Load Balancer, we configure only the port of the reverse proxy. The reverse proxy uses the following URI format for service discovery.

http(s)://<Cluster FQDN | internal IP>:Port/<MyApp>/<MyService>

For example, with the default reverse proxy port of 19081, a service MyService in application MyApp would be reachable at http://mycluster.region.cloudapp.azure.com:19081/MyApp/MyService (cluster name is illustrative).


One of the challenges of the reverse proxy approach is that once the reverse proxy port is opened on the Load Balancer, every service in the cluster that exposes an HTTP endpoint becomes addressable from outside the cluster. For this reason, it is recommended to use the SF reverse proxy only for internal services.

If you have a mix of internal and external microservices, however, you have to expose the reverse proxy port on the Load Balancer to reach the external services, which implicitly exposes the internal services as well. A workaround for now is to use a port configuration in the Docker Compose file that explicitly drops the HTTP prefix from the ports: container services with this configuration will not be exposed through the Load Balancer.

You can check out my previous blog post to see how to do a Docker Compose deployment for Service Fabric.

As you saw above, you can leverage the Azure Service Fabric reverse proxy to expose different types of services. To read more about it, check out the following links.

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reverse-proxy-diagnostics

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy-configure-secure-communication

Docker Blog Series Part 5 – Understanding new container management features in Service Fabric

Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. Service Fabric addresses the significant challenges of developing and managing cloud-native applications. It is also an orchestrator of services across a cluster of machines, and Microsoft continues to invest heavily in its container orchestration and management capabilities. In this blog post, we will check out some of the newer features related to container orchestration – volume mounting and Docker Compose support.

Containers are encapsulated, individually deployable components that run as isolated instances on the same kernel to take advantage of virtualization that an operating system provides. Thus, each application and its runtime, dependencies, and system libraries run inside a container with full, private access to the container’s own isolated view of operating system constructs.

Volumes

In the world of containers, volumes are a great way to persist data created and used by Docker containers. Let’s take a look at how to use volumes in Service Fabric.

Step 1 – Create a new Service Fabric Application using VS2017.

Step 2 – Use the Container template and provide the Docker container image previously built. You can check out my earlier blog posts to learn about creating Docker images and applications.

Once the project is created, Visual Studio generates the manifest files needed for publishing. Before we publish, let’s configure the volumes in the application manifest.

Step 3 – Open ApplicationManifest.xml and add a volume policy under Policies inside ServiceManifestImport.
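
As a sketch, the volume policy added in this step looks roughly like the following (the package name and the source and destination paths are illustrative, and the exact attribute set depends on your Service Fabric SDK version):

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="GuestContainerPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- Mount a host folder into the container; paths are illustrative -->
      <Volume Source="c:\hostdata" Destination="c:\appdata" IsReadOnly="false" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>
```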

Step 4 – Now you are ready to publish the application and use the volumes.

You can also use volume drivers like Azure File Share with Service Fabric Volumes. You can read more on it here – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-containers-volume-logging-drivers

Docker Compose Support

Docker uses the docker-compose.yml file for defining multi-container applications. To make it easy for customers familiar with Docker to orchestrate existing container applications on Azure Service Fabric, natively in the platform, Service Fabric can accept version 3 and later of docker-compose.yml files.

As a prerequisite, we will use the container image from earlier to create a docker-compose.yml file.

Step 1 – Create a docker-compose.yml file for the service.
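
As a sketch, a compose file along these lines (the image name and ports are illustrative) defines a single 'customer' service:

```yaml
version: '3'

services:
  customer:
    image: myregistry.azurecr.io/customer:1.0
    ports:
      - "80:80"
```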

When deployed, the compose file creates a Service Fabric service called ‘customer’ using the image provided.

Step 2 – Connect to the Service Fabric cluster in Azure using the following PowerShell command.

Connect-ServiceFabricCluster -ConnectionEndpoint '_YOURSFCLUSTER_:19000'

Step 3 – Execute the Service Fabric Docker Compose deployment command.

New-ServiceFabricComposeDeployment -DeploymentName app -Compose 'docker-compose-servicefabric.yml' -RegistryUserName 'xyz' -RegistryPassword 'xyz'

Once the command is executed, a Service Fabric application will be created in Azure Service Fabric using the Docker Compose feature. As you saw in this blog post, we can leverage these newer Service Fabric features for container orchestration. Happy containerizing!

References

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-overview

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-containers-overview

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-docker-compose

Docker Blog Series Part 4 – Managing Secrets inside Kubernetes Cluster in Azure Container Service

One of the common tasks in application development is managing configuration, and some of that configuration can be sensitive information. Secret management is a built-in feature of Kubernetes, and there are many ways to retrieve secrets from the secret store. In this blog post, I will show you how to consume a sample secret through environment variables in a Web API. We will do this in two steps: the first is to create the Kubernetes secret, and the second is to consume it from the Web API. This blog assumes a successful deployment of a Kubernetes cluster in ACS. If you have not done this already, please follow the link below to deploy one. https://docs.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-walkthrough

Let’s get started.

Step 1. Create a secrets.yaml file defining the secret.
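
As a sketch, a Secret manifest consistent with the ‘mysecret’ entry shown in Step 3 might look like this (the key names and values are illustrative; data values must be base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  # echo -n 'admin' | base64
  username: YWRtaW4=
  # echo -n 's3cret!' | base64
  password: czNjcmV0IQ==
```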

Step 2. Run the following kubectl command to create the secret inside the Kubernetes cluster.

kubectl.exe create -f "c:\secrets.yaml"

Step 3. Verify that the secret was created in the Kubernetes cluster by running the following command.

kubectl.exe get secrets

NAME                  TYPE                                  DATA   AGE
default-token-fp4cp   kubernetes.io/service-account-token   3      9h
mysecret              Opaque                                2      2h

You can optionally check this in the Kubernetes UI/Dashboard as well.

Now that we have the secret created, we will create an ASP.NET Core Web API project to consume it as an environment variable.

Step 4. Open Visual Studio 2017 and create an ASP.NET Core Web API project.

Step 5. Make the following change in the default ValuesController. We will add code to read an environment variable called SECRET_USERNAME, whose value will come from the secret inside the Kubernetes cluster.

public string Envar { get; set; }

// GET api/values
[HttpGet]
public IEnumerable<string> Get()
{
    // GetEnvironmentVariable returns null when the variable is not set
    if (Environment.GetEnvironmentVariable("SECRET_USERNAME") != null)
        Envar = Environment.GetEnvironmentVariable("SECRET_USERNAME");
    else
        Envar = "Environment Variable could not be read from Kubernetes Secrets";
    return new string[] { "Here is the secret username", " " + Envar };
}

Step 6. Add a Dockerfile to the project, build the image, and push the application to Docker Hub or an equivalent Docker repository. If you need help with Dockerizing the app, read my earlier blog posts.
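
If you need a starting point, a Dockerfile in the same style as the other posts in this series could look like the following sketch (the project name SecretsWebApi and the publish path are illustrative):

```dockerfile
FROM microsoft/aspnetcore:latest
WORKDIR /app
# Copy the published output of the Web API project; path is illustrative
COPY ./bin/Release/netcoreapp2.0/publish .
EXPOSE 80
ENV ASPNETCORE_URLS http://0.0.0.0:80
ENTRYPOINT ["dotnet", "SecretsWebApi.dll"]
```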

Step 7. Now let’s deploy the application to the Kubernetes cluster so it can retrieve the secret value. We will use the kubectl create command with Service.yaml, the YAML file that describes the Kubernetes deployment. To understand more about Kubernetes deployments, read the documentation here – https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-intro/

kubectl.exe create -f Service.yaml

The important part to understand here is how we declare the environment variable and map its value from the secret store using secretKeyRef.
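
As a sketch, the relevant part of the deployment’s pod spec maps the environment variable to the secret like this (the container name and image are illustrative):

```yaml
spec:
  containers:
    - name: secrets-demo
      image: yourdockerid/secretsdemo:1
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
```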

Step 8. Once the application is deployed to the Kubernetes cluster, the ValuesController should be able to read the secret’s username value from the secret store through the SECRET_USERNAME environment variable.

In this blog post, we saw one of the ways to manage secrets in Kubernetes. Stay tuned for more!

Docker Blog Series Part 3 – Deploy IIS-based Applications to Service Fabric using Docker Containers

One of the value propositions of using containers with Service Fabric is that you can now deploy IIS-based applications to the SF cluster. In this blog post, we will see how to leverage Docker containers to deploy IIS apps to Service Fabric. I will skip image creation and pushing to Docker Hub here; please see my earlier blog posts to learn more about creating and pushing images.

For this blog, I will use an IIS image already pushed to Docker Hub. The application image uses microsoft/iis as its base image.

Let’s get started.

Step 1. Open Visual Studio 2017 and create a Service Fabric Application.

Step 2. Now choose the Guest Container feature, provide a valid image name, and click OK. The image name is the one we published to Docker Hub in the previous exercise.

Step 3. Once the application is created and loaded, add the endpoint information for the container to ServiceManifest.xml.
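
As a sketch, the endpoint declaration in ServiceManifest.xml takes roughly this shape (the endpoint name and port are illustrative):

```xml
<Resources>
  <Endpoints>
    <!-- Endpoint the container will be reachable on; name and port are illustrative -->
    <Endpoint Name="IISGuestEndpoint" Protocol="http" UriScheme="http" Port="80" />
  </Endpoints>
</Resources>
```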

Step 4. Now add a section to ApplicationManifest.xml with the Docker Hub credentials and the port binding for the endpoint.
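
As a sketch, the policies section in ApplicationManifest.xml looks roughly like this (the account name, password, and endpoint name are illustrative; prefer encrypted credentials in real deployments):

```xml
<Policies>
  <ContainerHostPolicies CodePackageRef="Code">
    <!-- Docker Hub credentials for pulling the image -->
    <RepositoryCredentials AccountName="yourdockerid" Password="yourpassword" PasswordEncrypted="false" />
    <!-- Map the container port to the endpoint declared in ServiceManifest.xml -->
    <PortBinding ContainerPort="80" EndpointRef="IISGuestEndpoint" />
  </ContainerHostPolicies>
</Policies>
```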

Step 5. We are now ready to publish the application to a Service Fabric cluster. Right-click on the application and publish it to Azure Service Fabric. Make sure that when you create the Service Fabric cluster, you pick the Windows Server 2016 with Containers option.

Step 6. First let’s see our application deployed using Service Fabric explorer.

Step 7. Now, let’s browse to the application on Service Fabric. You should see your IIS application running.

As you saw in this blog post, we can use Service Fabric as the container orchestrator for both new and legacy applications. More is coming in the next blog on container orchestration capabilities like DNS, scaling, and labels. Stay tuned!

Docker Blog Series Part 2 – Build & Deploy ASP.NET Core based Docker Container on Service Fabric

Azure Service Fabric, in addition to offering the Service Fabric programming model, can orchestrate container-based services across a cluster of machines. Service Fabric can deploy services as container images. In this blog post, we will see how to use Service Fabric as an orchestrator for Windows-based Docker images. We will build an ASP.NET Core web application, Dockerize it, and publish the image to Docker Hub; finally, we will deploy the image to a Service Fabric cluster.

Step 1: Create an ASP.NET Core Web Application using Visual Studio.

Step 2: Select the Web Application template. DO NOT check "Enable Docker Support".

Step 3: Once the application is loaded in Visual Studio add a Dockerfile to the root of the project.

Step 4. Open the Dockerfile in Visual Studio, add the following instructions, and save the file.

FROM microsoft/dotnet:1.1-sdk-nanoserver
WORKDIR dockerdemo
ADD ./DemoCoreApp .
RUN dotnet restore DemoCoreApp.csproj
RUN dotnet build DemoCoreApp.csproj
EXPOSE 80
ENV ASPNETCORE_URLS http://0.0.0.0:80
ENTRYPOINT ["dotnet", "bin/Debug/netcoreapp1.1/DemoCoreApp.dll"]

Step 5. Open PowerShell in admin mode and browse to the project folder.

Step 6. Execute the docker build command to build an image from the Dockerfile, and include a tag name in the build command.

Step 7. Execute the docker run command to create a container from the image previously built.

Step 8. Inspect the IP address on which the container is running by executing the docker network inspect nat command.

Step 9. Open the web browser and navigate to that IP address; you will see the ASP.NET Core application running.

Step 10. Publish the image to Docker Hub so that it can be consumed from Service Fabric. Before pushing, log in to Docker Hub using the docker login command.

Now we will use the image from Docker Hub to deploy to a Service Fabric cluster.

Let’s begin building the Service Fabric Orchestration piece.

Step 11. Open Visual Studio 2017 and create a Service Fabric Application.

Step 12. Now choose the Guest Container feature, provide a valid image name, and click OK. The image name is the one we published to Docker Hub in the previous exercise.

Step 13. Once the application is created and loaded, add the endpoint information for the container to ServiceManifest.xml.

Step 14. Now add a section to ApplicationManifest.xml with the Docker Hub credentials and the port binding for the endpoint.

Step 15. We are now ready to publish the application to a Service Fabric cluster. Right-click on the application and publish it to Azure Service Fabric. Make sure that when you create the Service Fabric cluster, you pick the Windows Server 2016 with Containers option.

Step 16. First let’s see our application deployed using Service Fabric explorer.

Step 17. Now, let’s browse to the application on Service Fabric.

As you saw in this blog post, we can use Service Fabric for container orchestration. More is coming in the next blog on container orchestration capabilities like DNS, scaling, and labels. Stay tuned!

Docker Blog Series Part 1 – Building an ASP.NET Core Application using Docker for Windows

Applications and user expectations have continued to evolve over the last few years. We have seen trends like microservices and containerization become mainstream, and enterprises are beginning to realize the cost savings and other benefits of these technologies. It is therefore becoming important to build applications and services with this mindset. In this blog post, I will explain some of the foundational aspects of containerization and showcase the ‘Docker for Windows’ Visual Studio tooling. Along the way, we will see how to build an ASP.NET Core application with the Docker support tooling.

Before talking about containers, let’s take a moment to understand microservices. The microservices style of architecture is about decomposing a monolithic application into smaller services, which enables versioning and deploying those services independently. Building microservices-based applications helps accelerate delivery within agile engineering teams.

Now let’s talk about containers. Containers provide a mechanism to run software reliably when it is moved from one computing environment to another. Simply put, a container is a portable unit that packages an application and its underlying dependencies together (the container image). By containerizing the application and its dependencies, differences in the OS and underlying infrastructure are abstracted away. Containers isolate applications from each other on a shared OS, and we can get additional isolation using Hyper-V containers. Microservices-based services are a perfect fit for containerized applications.

Why should we be interested in containers? Let’s look at some of the value proposition around it.

– Containers enable ‘write once, run anywhere’ apps.

– They enable microservices style architecture.

– It is also great for dev/test of apps and services.

– From an operations standpoint, containers are great for portability and enable higher compute density. We can also scale containers up and down in response to business needs.

Docker is a platform for running these containerized applications, and it is becoming the industry standard for implementing containers. There are different ways to host containers with Docker:

  1. Docker for Windows, which uses Hyper-V to run the MobyLinux VM or Windows containers
  2. Docker for Mac
  3. Boot2Docker and VirtualBox for older scenarios

In this post, I want to give a quick overview on how to run Docker applications locally using ‘Docker for Windows’ using Visual Studio Tooling for an ASP.NET Core application.

Prerequisites:

– Install Docker Tooling for Visual Studio 2015.

– ASP.NET Core

– Install Docker for Windows

Step 1:

Create an ASP.NET Core Web Application using Visual Studio 2015.

Step 2:

Right-click on the web application we just created and select Add – Docker Support. Upon adding Docker support, the Visual Studio tooling automatically creates the various Docker files for you to use.

Step 3:

Review various Docker Files

a – docker-compose.yml. Docker Compose is a tool for defining and running multi-container applications; you can define more than one service in the compose file.

version: '2'

services:
  webapplication2:
    image: username/webapplication2
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8088:8088"

b – Dockerfile. This file contains the instructions used to build the Docker image.

FROM microsoft/dotnet:1.0.0-core

# Set the working directory
WORKDIR /app

# Configure the listening port to 8088
ENV ASPNETCORE_URLS http://*:8088
EXPOSE 8088

# Copy the app
COPY . /app

# Start the app
ENTRYPOINT dotnet WebApplication2.dll

Step 4:

All you need now is to run the application from Visual Studio, and your ASP.NET Core application is up and running in a Docker container. You can take a look at the running container using the "docker ps -a" command.

In the next follow up blog post, we will explore How to deploy applications on Docker on Windows using Windows Server 2016 TP5.

Web Test Plug-in for Authentication

Web and load testing are an essential part of application lifecycle management. Visual Studio provides a great template for web and load testing, which simulates application traffic with web tests and high load with load tests. This works very well for web applications, Web APIs, and WCF services. Web tests work at the HTTP layer, essentially simulating requests and responses to and from the server; they don’t execute any JavaScript. Sometimes the application being load tested requires you to provide authentication information. With a passive mode of authentication, your application either redirects to an identity provider or you manage it using forms authentication. For APIs, where you are expected to provide the authentication information in the header, we can achieve that with web tests as well.

In this blog post, I will explain how easily you can plug authentication information into a web test. I will use a JWT token as the example of authentication information passed in the header of the request.

1 – Create a Visual Studio 2013/2015 Web Performance and Load Testing project.

2 – Once the web test project is created, record the web test traffic for your application. Since the application needs an authentication token in the header, let’s see how to pass that information into the request pipeline of a web test.

3 – Create a web test plug-in. The web test plug-in is an extensibility model for web tests that lets us override various web test events. In this example I will override the PreWebTest method. Let’s start by creating a class called JWTTokenWebTestPlugin that inherits from the WebTestPlugin class.

4 – Next, we override the PreWebTest method and set a context parameter to the JWT token returned from the GetAdToken function. The values for the various parameters needed by the function are provided from the UI when configuring the plug-in.
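
As a sketch, the plug-in could look like the following, assuming a hypothetical GetAdToken helper that acquires the JWT from your identity provider (the property names and context parameter name are illustrative):

```csharp
using Microsoft.VisualStudio.TestTools.WebTesting;

public class JWTTokenWebTestPlugin : WebTestPlugin
{
    // Values supplied from the plug-in properties UI when it is attached to a web test
    public string Tenant { get; set; }
    public string ClientId { get; set; }
    public string Resource { get; set; }

    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        // Store the token in a context parameter; requests can then send it in an
        // Authorization header via {{JwtToken}}
        e.WebTest.Context["JwtToken"] = GetAdToken(Tenant, ClientId, Resource);
    }

    private string GetAdToken(string tenant, string clientId, string resource)
    {
        // Hypothetical: acquire a token from Azure AD (e.g., with ADAL/MSAL)
        // and return the raw JWT string.
        return "<token>";
    }
}
```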

5 – Right-click on the web test and click ‘Add Web Test Plug-in’. In the list of plug-ins you will see JWTTokenWebTestPlugin. Configure the values for the plug-in and click OK.

6 – You are now set up to use a web test plug-in with JWT tokens. The tokens will be sent to the Web API in the request header.