Windows Server Containers – HyperV Containers – Nano Server …
If you are new to Docker and Containers, please read this document, which describes what Containers are and what Docker is.
If you want more info, there are a lot of Channel9 videos on Containers as well
If you have problems with Docker (not NAV related), the Windows Containers Docker forum is the place to ask questions (read the readme first):
In order to run a NAV container, you need a computer with Docker installed; this will become your Docker host. Docker runs on Windows Server 2016 (or later) or Windows 10 Pro.
When using Windows 10, containers always use Hyper-V isolation with Windows Server Core. When using Windows Server 2016, you can choose between Hyper-V isolation and process isolation. Read more about this here.
I will describe 3 ways to get started with Containers. If you have a computer running Windows Server 2016 or Windows 10 – you can use this computer. If not, you can deploy a Windows Server 2016 with Containers on Azure, which will give you everything to get started.
In the Azure Gallery, you will find an image with Windows Server 2016 and Docker installed and pre-configured. You can deploy this image by clicking this link.
Note: do not select Standard_D1 (it is simply not powerful enough); use at least Standard_D2 or Standard_D3.
In this VM, you can now run all the docker commands, described in this document.
Follow these steps to install Docker on a machine with Windows Server 2016.
Follow these steps to install Docker on Windows 10.
NavContainerHelper is a PowerShell module from the PowerShell Gallery; you can read more about it here.
The module contains a number of PowerShell functions, which help you run and interact with NAV containers.
On your Docker host, start PowerShell ISE and run:
install-module navcontainerhelper -force
get-command -Module navcontainerhelper
to list all functions available in the module. Use
Write-NavContainerHelperWelcomeText
in order to list the functions in the module grouped into areas.
Start PowerShell ISE and run this command:
New-NavContainer -accept_eula -containerName "test" -auth NavUserPassword -imageName "microsoft/dynamics-nav"
to run your first NAV container using NavUserPassword authentication. PowerShell will pop up a dialog and require you to enter a username and a password to use for the container.
New-NavContainer will remove existing containers with the same name before starting a new container. The container will be started as a process and the output of the function will be displayed in the PowerShell output window.
PS C:\Users\freddyk> New-NavContainer -accept_eula -containerName "test" -auth NavUserPassword -imageName "microsoft/dynamics-nav"
Creating Nav container test
Using image microsoft/dynamics-nav
NAV Version: 11.0.20783.0-w1
Generic Tag: 0.0.5.3
Creating container test from image microsoft/dynamics-nav
Waiting for container test to be ready
Initializing...
Starting Container
Hostname is test
PublicDnsName is test
Using NavUserPassword Authentication
Starting Local SQL Server
Starting Internet Information Server
Creating Self Signed Certificate
Self Signed Certificate Thumbprint 0A4F70380C95876A708018EA6883CA3A1F7FF72D
Modifying NAV Service Tier Config File with Instance Specific Settings
Starting NAV Service Tier
Creating DotNetCore NAV Web Server Instance
Creating http download site
Creating Windows user admin
Setting SA Password and enabling SA
Creating admin as SQL User and add to sysadmin
Creating NAV user
Container IP Address: 172.19.144.74
Container Hostname : test
Container Dns Name : test
Web Client : http://test/NAV/
Dev. Server : http://test
Dev. ServerInstance : NAV
Files: http://test:8080/al-0.12.17720.vsix
Initialization took 59 seconds
Ready for connections!
Reading CustomSettings.config from test
Creating Desktop Shortcuts for test
NAV container test successfully created
New-NavContainer uses the docker command to run the container, building up the needed docker run parameters dynamically based on the parameters you specify to New-NavContainer.
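To make that dynamic assembly concrete, here is a simplified sketch in Python (purely illustrative; the real New-NavContainer is a PowerShell function and considers many more parameters) of how a helper might turn a parameter table into a docker run argument list:

```python
def build_docker_run_args(container_name, env_params, image_name):
    """Assemble a docker run command line from a container name, a dict of
    environment parameters and an image name (simplified illustration)."""
    args = ["docker", "run", "--name", container_name,
            "--hostname", container_name, "--detach"]
    for key, value in env_params.items():
        # Each parameter becomes an environment variable inside the container.
        args += ["--env", f"{key}={value}"]
    args.append(image_name)
    return args

cmd = build_docker_run_args(
    "test",
    {"auth": "NavUserPassword", "username": "admin", "accept_eula": "Y"},
    "microsoft/dynamics-nav",
)
print(" ".join(cmd))
```

The point of the sketch is only the shape of the translation: fixed docker-level options first, one --env pair per container parameter, and the image name last.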
You can also use docker run to run a container yourself:
docker run -e ACCEPT_EULA=Y microsoft/dynamics-nav
Note: if you are running Windows 10, you will have to add --memory 4G as an extra parameter to the docker run command above (and to all docker run commands in this document).
This will start a container with a random name, a random password, using SSL with a self-signed certificate and admin as the username. The prompt in which you run the command will be attached to the container and the output will be displayed. You can add a number of extra parameters to specify password, database connection, license file, configuration settings, and more.
The docker run command created by the New-NavContainer call above will be something like:
docker run --name test `
           --hostname test `
           --env auth=NavUserPassword `
           --env username="admin" `
           --env ExitOnError=N `
           --env locale=en-US `
           --env licenseFile="" `
           --env databaseServer="" `
           --env databaseInstance="" `
           --volume "C:\ProgramData\NavContainerHelper:C:\ProgramData\NavContainerHelper" `
           --volume "C:\ProgramData\NavContainerHelper\Extensions\test\my:C:\Run\my" `
           --restart unless-stopped `
           --env useSSL=N `
           --env securePassword=<encryptedpassword> `
           --env passwordKeyFile="c:\run\my\aes.key" `
           --env removePasswordKeyFile=Y `
           --env accept_eula=Y `
           --detach `
           microsoft/dynamics-nav
Note: if you are running Windows 10, New-NavContainer automatically adds --memory 4G to the docker run command.
All parameters starting with --env tell docker to set an environment variable in the container. This is the way parameters are transferred to the NAV container: the --env parameters are used by the PowerShell scripts inside the container, while the remaining parameters are used by the docker run command itself.
The --name parameter specifies the name of the container and --hostname specifies the hostname of the container.
The --volume parameters share folders from the docker host with the container, and --detach means that the container process is detached from the process that started it.
The NAV container image supports a number of parameters, some of which are used in the output above. All of them can be omitted; the NAV container image has a default behavior for each.
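Inside the container, the startup scripts read these values as ordinary environment variables and fall back to a default for each parameter that was omitted. A hypothetical Python equivalent of that pattern (the real scripts are PowerShell, and the default values below are illustrative, not the image's actual defaults):

```python
import os

def read_container_settings(environ=os.environ):
    """Read container parameters from environment variables, applying a
    default when a parameter was omitted (sketch with illustrative defaults)."""
    return {
        "auth": environ.get("auth", "Windows"),
        "username": environ.get("username", "admin"),
        "useSSL": environ.get("useSSL", "Y"),
        "locale": environ.get("locale", "en-US"),
    }

# With no variables set, every setting falls back to its default.
settings = read_container_settings(environ={})
print(settings)
```

Passing, say, auth=NavUserPassword via --env would override just that one setting, which is why every parameter in the docker run command is optional.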
As you might have noticed, the New-NavContainer transfers the password to the container as an encrypted string and the key to decrypt the password is shared in a file and deleted afterwards. This allows you to use Windows Authentication with your domain credentials in a secure way.
Containers are a way to wrap up an application into its own isolated box. For the application in its container, it has no knowledge of any other applications or processes that exist outside of its box. Everything the application depends on to run successfully also lives inside this container. Wherever the box may move, the application will always be satisfied because it is bundled up with everything it needs to run.
Imagine a kitchen. We package up all the appliances and furniture, the pots and pans, the dish soap and hand towels. This is our container.
We can now take this container and drop it into whatever host apartment we want, and it will be the same kitchen. All we must do is connect electricity and water to it, and then we’re clear to start cooking (because we have all the appliances we need!)
In much the same way, containers are like this kitchen. There can be different kinds of rooms as well as many of the same kinds of rooms. What matters is that the containers come packaged up with everything they need.
Watch a short overview here: Windows-based containers: Modern app development with enterprise-grade control.
A container is an isolated, resource-controlled, and portable runtime environment which runs on a host machine or virtual machine. An application or process which runs in a container is packaged with all its required dependencies and configuration files; it is given the illusion that there are no other processes running outside of its container.
The container's host provisions a set of resources for the container, and the container will use only these resources. As far as the container knows, no resources exist beyond those it has been given, so it cannot touch resources that may have been provisioned for a neighboring container.
The following key concepts will be helpful as you begin creating and working with Windows Containers.
Container Host: A physical or virtual computer system configured with the Windows Container feature. The container host runs one or more Windows Containers.
Container Image: As modifications are made to a container's file system or registry (such as with software installation), they are captured in a sandbox. In many cases you may want to capture this state so that new containers can be created that inherit these changes. That is what an image is: once the container has stopped, you can either discard the sandbox or convert it into a new container image. For example, imagine that you have deployed a container from the Windows Server Core OS image and then installed MySQL into it. Creating a new image from this container gives you a deployable version of the container. The image only contains the changes made (MySQL), and works as a layer on top of the container OS image.
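The layering described above can be modeled as a stack of read-only layers with a writable sandbox on top. Here is a toy Python model of that idea (purely illustrative; this is not how Docker stores layers on disk):

```python
class Container:
    """Toy model of a container: read-only image layers plus a writable sandbox."""

    def __init__(self, image_layers):
        self.image_layers = image_layers  # immutable, shared between containers
        self.sandbox = {}                 # all writes land here

    def write(self, path, content):
        self.sandbox[path] = content

    def read(self, path):
        # Look in the sandbox first, then in image layers from top to bottom.
        if path in self.sandbox:
            return self.sandbox[path]
        for layer in reversed(self.image_layers):
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def commit(self):
        """Turn the sandbox into a new image layer (like creating a new image)."""
        return self.image_layers + [dict(self.sandbox)]

base_os = {"/windows/system32": "os files"}   # the container OS image
c = Container([base_os])
c.write("/mysql/bin", "mysql files")          # "install" software into the sandbox
mysql_image = c.commit()                      # a new, deployable image
c2 = Container(mysql_image)                   # new container inherits both layers
print(c2.read("/mysql/bin"))
```

Note that the base OS layer is never modified: the new image simply stacks the MySQL changes on top of it, which is why many containers and images can share the same underlying layers.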
Sandbox: Once a container has been started, all write actions such as file system modifications, registry modifications or software installations are captured in this ‘sandbox’ layer.
Container OS Image: Containers are deployed from images. The container OS image is the first layer in potentially many image layers that make up a container. This image provides the operating system environment. A Container OS Image is immutable. That is, it cannot be modified.
Container Repository: Each time a container image is created, the container image and its dependencies are stored in a local repository. These images can be reused many times on the container host. The container images can also be stored in a public or private registry, such as DockerHub, so that they can be used across many different container hosts.
For someone familiar with virtual machines, containers may appear to be incredibly similar. A container runs an operating system, has a file system and can be accessed over a network just as if it was a physical or virtual computer system. However, the technology and concepts behind containers are vastly different from virtual machines.
Mark Russinovich, Microsoft Azure guru, has a great blog post which details the differences.
Windows Containers include two different container types, or runtimes.
Windows Server Containers – provide application isolation through process and namespace isolation technology. A Windows Server Container shares a kernel with the container host and all containers running on the host. These containers do not provide a hostile security boundary and should not be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
Hyper-V Isolation – expands on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with other containers on the same host. These containers are designed for hostile multitenant hosting with the same security assurances as a virtual machine. Since these containers do not share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions). For example, all Windows containers on Windows 10 use Hyper-V isolation and utilize the Windows Server kernel version and configuration.
Running a container on Windows with or without Hyper-V Isolation is a runtime decision. You may elect to create the container with Hyper-V isolation initially and later at runtime choose to run it instead as a Windows Server container.
As you read about containers, you’ll inevitably hear about Docker. Docker is the vessel by which container images are packaged and delivered. This automated process produces images (effectively templates) which may then be run anywhere—on premises, in the cloud, or on a personal machine—as a container.
Just like any other container, a Windows Server Container can be managed with Docker.
From a developer’s desktop, to a testing machine, to a set of production machines, a Docker image can be created that will deploy identically across any environment in seconds. This story has created a massive and growing ecosystem of applications packaged in Docker containers with DockerHub, the public containerized-application registry that Docker maintains, currently publishing more than 180,000 applications in the public community repository.
When you containerize an app, only the app and the components needed to run the app are combined into an “image”. Containers are then created from this image as you need them. You can also use an image as a baseline to create another image, making image creation even faster. Multiple containers can share the same image, which means containers start very quickly and use fewer resources. For example, you can use containers to spin up light-weight and portable app components – or ‘micro-services’ – for distributed apps and quickly scale each service separately.
Because containers have everything they need to run your application, they are very portable and can run on any machine that is running Windows Server 2016. You can create and test containers locally, then deploy that same container image to your company's private cloud, public cloud, or service provider. The natural agility of containers supports modern app development patterns in large-scale, virtualized cloud environments.
With containers, developers can build an app in any language. These apps are completely portable and can run anywhere – laptop, desktop, server, private cloud, public cloud or service provider – without any code changes.
Containers help developers build and ship higher-quality applications, faster.
IT Professionals can use containers to provide standardized environments for their development, QA, and production teams. They no longer have to worry about complex installation and configuration steps. By using containers, systems administrators abstract away differences in OS installations and underlying infrastructure.
Containers help admins create an infrastructure that is simpler to update and maintain.
Because of their small size and application orientation, containers are well suited for agile delivery environments and microservice-based architectures. When you use containers and microservices, however, you can easily have hundreds or thousands of components in your environment. You may be able to manually manage a few dozen virtual machines or physical servers, but there is no way you can manage a production-scale container environment without automation. The task of automating and managing a large number of containers and how they interact is known as orchestration.
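To make one orchestration task concrete, here is a deliberately naive Python sketch of scheduling: spreading containers across the least-loaded hosts. Real orchestrators such as Kubernetes or Service Fabric consider far more (resource requests, affinity, health, failures); this is only an illustration of the kind of decision that must be automated at scale.

```python
def schedule(containers, hosts):
    """Assign each container to the host currently running the fewest
    containers (naive spread strategy; a sketch, not a real scheduler)."""
    placement = {host: [] for host in hosts}
    for container in containers:
        # Pick the least-loaded host at this moment.
        target = min(placement, key=lambda h: len(placement[h]))
        placement[target].append(container)
    return placement

placement = schedule([f"web-{i}" for i in range(6)], ["host-a", "host-b", "host-c"])
print({host: len(cs) for host, cs in placement.items()})
```

With six containers and three hosts, the spread strategy places two containers on each host; doing this by hand for thousands of containers, while also reacting to host failures, is exactly what orchestrators automate.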
The standard definition of orchestration includes tasks such as scheduling, affinity/anti-affinity, health monitoring, failover, scaling, networking, service discovery, and coordinated application upgrades.
Azure offers two container orchestrators: Azure Container Service (AKS) and Service Fabric.
Azure Container Service (AKS) makes it simple to create, configure, and manage a cluster of virtual machines that are preconfigured to run containerized applications. This enables you to use your existing skills, or draw upon a large and growing body of community expertise, to deploy and manage container-based applications on Microsoft Azure. By using AKS, you can take advantage of the enterprise-grade features of Azure, while still maintaining application portability through Kubernetes and the Docker image format.
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. Service Fabric addresses the significant challenges in developing and managing cloud native applications. Developers and administrators can avoid complex infrastructure problems and focus on implementing mission-critical, demanding workloads that are scalable, reliable, and manageable. Service Fabric represents the next-generation platform for building and managing these enterprise-class, tier-1, cloud-scale applications running in containers.
Ready to begin leveraging the awesome power of containers? Hit the jumps below to get hands-on with deploying your very first container:
For users on Windows Server, go here – Windows Server Quick Start Introduction
For users on Windows 10, go here – Windows 10 Quick Start Introduction