Jump-start Kubernetes and Istio with Docker Desktop on Windows 10

Here we will set up a single-node Kubernetes cluster on a Windows 10 PC (in my case, a Surface 5 with 16GB RAM). If you are new to Docker, feel free to check out Jump-start with docker.
We are going to set up:

  • A single-node Kubernetes cluster
  • Kubernetes dashboard
  • Helm
  • Istio (service mesh, including Kiali)
  • Deployment samples

1. Enable Kubernetes in Docker Desktop

Docker Desktop (or Docker for Windows) is a nice environment for developers on Windows. The community stable version of Docker Desktop is good enough for this jump-start; just make sure the version you install includes Kubernetes 1.14.x or higher. (I am using Docker Desktop Community 2.1.0.3.)

Once installed, you can enable Kubernetes in Settings (see detailed info here).

Then you can verify it by running “kubectl version“ in PowerShell (or a Command Prompt window).

In my case, I got an error while connecting to [::1]:8080:

PS C:\> kubectl version
#Output:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

This is because the environment variable “KUBECONFIG“ is missing. Set it to the config file under your user directory, such as “C:\Users\YOUR_USER_NAME\.kube\config“.

After adding the variable and restarting PowerShell, it should work.
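If you are unsure where kubectl expects this file, the convention is fixed: a .kube/config file under your home directory. A quick sketch in Python (the helper name is mine; the path rule is the standard kubectl default):

```python
import os

def default_kubeconfig_path():
    """Return the conventional kubeconfig location for the current user.

    kubectl looks for ~/.kube/config by default; setting KUBECONFIG simply
    makes that location explicit for tools that do not apply the default.
    """
    return os.path.join(os.path.expanduser("~"), ".kube", "config")

path = default_kubeconfig_path()
print(path)                  # e.g. C:\Users\YOUR_USER_NAME\.kube\config on Windows
print(os.path.isfile(path))  # should be True once Docker Desktop has written the file
```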

PS C:\> Get-Item -Path Env:KUBECONFIG
#Output:
Name       Value
----       -----
KUBECONFIG C:\Users\lufeng\.kube\config

PS C:\> kubectl version
#Output:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

PS C:\> kubectl get namespaces
#Output:
NAME              STATUS   AGE
default           Active   18h
docker            Active   18h
kube-node-lease   Active   18h
kube-public       Active   18h
kube-system       Active   18h

2. Installing Kubernetes Dashboard

It is always nice to have a GUI for a complicated system such as Kubernetes, so let's install the dashboard: https://github.com/kubernetes/dashboard.

2.1 Dashboard deployment

PS C:\> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
#Output:
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

2.2 Accessing the dashboard

First, we need to start the proxy so you can access the dashboard from localhost:

PS C:\> kubectl proxy
#Output:
Starting to serve on 127.0.0.1:8001

Once the proxy is up and running, visit the dashboard URL: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
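The dashboard URL above is an instance of the kubectl proxy pattern /api/v1/namespaces/&lt;namespace&gt;/services/&lt;scheme&gt;:&lt;service&gt;:&lt;port&gt;/proxy/. A small sketch that assembles such URLs (the helper name is my own):

```python
def service_proxy_url(namespace, service, scheme="https", port="",
                      host="http://localhost:8001"):
    """Build the kubectl-proxy URL for a service.

    The path format is /api/v1/namespaces/<ns>/services/<scheme>:<name>:<port>/proxy/;
    an empty port selects the service's default port.
    """
    return f"{host}/api/v1/namespaces/{namespace}/services/{scheme}:{service}:{port}/proxy/"

print(service_proxy_url("kube-system", "kubernetes-dashboard"))
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```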

Normally you will see the login view (https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md#login-view).

You can find more info about access control in the dashboard's GitHub docs, but here we will take a simpler route (this is for demo purposes only; do not use the same setup in a production environment).

2.2.1 Get token

Get the default token name

PS C:\> kubectl get secrets
#Output:
NAME                  TYPE                                  DATA   AGE
default-token-n92hz   kubernetes.io/service-account-token   3      18h

Then get the token

PS C:\> kubectl describe secrets default-token-n92hz
#Output:
Name: default-token-n92hz
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: c56ad00e-e5e5-11e9-91a0-00155d3a9005
Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImt3NlcnZpY2UtYWNjb......CIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjfv4TPDVZoOrLWHZecEw-8XBQ
PS C:\>

Use the token in the login form, then you are in.
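The token itself is a JWT, so you can inspect its claims locally before logging in, e.g. to confirm which service account and namespace it belongs to. A sketch (decode only; this does not verify the signature, and the sample token below is made up):

```python
import base64
import json

def jwt_claims(token):
    """Decode the (unverified) payload of a JWT such as a service-account token."""
    payload = token.split(".")[1]
    # JWTs use base64url without padding; add the padding back before decoding
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# A made-up token with the same header.payload.signature shape as a real one:
sample = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(
        {"kubernetes.io/serviceaccount/namespace": "default"}).encode()).decode().rstrip("="),
    "signature",
])
print(jwt_claims(sample)["kubernetes.io/serviceaccount/namespace"])  # default
```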

3. Installing Helm on Windows

Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources. You can read more at https://helm.sh/.
According to the installation guide, we are going to:

  1. Install scoop
  2. Install helm via scoop
    PS C:\> scoop install helm
  3. Make sure the environment variable “HELM_HOME“ is configured, e.g. “C:\Users\USERNAME\.kube”. It must be a valid directory in your file system.
    PS C:\> Get-Item -Path Env:HELM_HOME
    #Output:
    Name      Value
    ----      -----
    HELM_HOME C:\Users\lufeng\.kube
  4. Initialize Helm and install Tiller
    Once you have Helm ready, you can initialize the local CLI and also install Tiller into your Kubernetes cluster in one step:
    #Check current kubernetes cluster context
    PS P:\> kubectl config current-context
    #Output:
    docker-desktop

    #Init helm
    PS C:\> helm init --history-max 200
    #Output:
    $HELM_HOME has been configured at C:\Users\lufeng\.kube.
    Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

    #Verify that Tiller is up and running (the last row)
    PS C:\> kubectl get pods --namespace kube-system
    #Output:
    NAME                                     READY   STATUS    RESTARTS   AGE
    coredns-fb8b8dccf-b5lq5                  1/1     Running   0          19h
    coredns-fb8b8dccf-t5kdf                  1/1     Running   0          19h
    etcd-docker-desktop                      1/1     Running   0          19h
    kube-apiserver-docker-desktop            1/1     Running   0          19h
    kube-controller-manager-docker-desktop   1/1     Running   0          19h
    kube-proxy-bj2x4                         1/1     Running   0          19h
    kube-scheduler-docker-desktop            1/1     Running   0          19h
    kubernetes-dashboard-5f7b999d65-vqdq6    1/1     Running   0          19h
    tiller-deploy-5454fb964d-8tp5t           1/1     Running   0          76s
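If you want to script this last check, the kubectl output is just a whitespace-separated table. A hedged sketch (the function name and sample output are mine):

```python
def pod_is_running(kubectl_output, name_prefix):
    """Return True if a pod whose name starts with name_prefix reports STATUS Running."""
    for line in kubectl_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 3 and fields[0].startswith(name_prefix):
            return fields[2] == "Running"
    return False

sample = """NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-b5lq5 1/1 Running 0 19h
tiller-deploy-5454fb964d-8tp5t 1/1 Running 0 76s"""
print(pod_is_running(sample, "tiller-deploy"))  # True
```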

4. Installing Istio

Istio is a microservice mesh management framework that provides traffic management, policy enforcement, and telemetry collection.
We are going to:

  • Install Istio (and addons such as Kiali) via Helm (doc)
  • Access the Kiali dashboard (doc)
  • Install the Bookinfo demo (doc)

4.1 Install Istio via Helm

Simply follow the steps in https://istio.io/docs/setup/install/helm/, and remember to configure Docker Desktop as mentioned there. Unzip the downloaded package into “C:\Istio“, as we might want to update some files there.

PS C:\> helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.3.1/charts/
#Output:
"istio.io" has been added to your repositories

#Use Helm’s Tiller pod to manage the Istio release (option 2), as we installed Tiller in the previous step.
PS C:\> cd istio

#1. Make sure you have a service account with the cluster-admin role defined for Tiller. If not already defined, create one using the following command:
PS C:\istio> kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
#Output:
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

#2. Config Tiller on your cluster with the service account:
PS C:\istio> helm init --upgrade --service-account tiller
#Output:
$HELM_HOME has been configured at C:\Users\lufeng\.kube.
Tiller (the Helm server-side component) has been upgraded to the current version.

#3. Install the istio-init chart to bootstrap all the Istio’s CRDs:
PS C:\istio> helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
#Output:
NAME: istio-init
LAST DEPLOYED: Fri Oct 4 11:36:15 2019
NAMESPACE: istio-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME AGE
istio-init-istio-system 0s

==> v1/ClusterRoleBinding
NAME AGE
istio-init-admin-role-binding-istio-system 0s

==> v1/ConfigMap
NAME DATA AGE
istio-crd-10 1 0s
istio-crd-11 1 0s
istio-crd-12 1 0s

==> v1/Job
NAME COMPLETIONS DURATION AGE
istio-init-crd-10-1.3.1 0/1 0s
istio-init-crd-11-1.3.1 0/1 0s 0s
istio-init-crd-12-1.3.1 0/1 0s 0s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
istio-init-crd-11-1.3.1-qz4fh 0/1 ContainerCreating 0 0s
istio-init-crd-12-1.3.1-6rk5w 0/1 ContainerCreating 0 0s

==> v1/ServiceAccount
NAME SECRETS AGE
istio-init-service-account 1 0s

Then select a configuration profile. We go with “demo“ as it includes some nice addons such as Kiali.

#Installation
PS C:\istio> helm install install/kubernetes/helm/istio --name istio --namespace istio-system --values install/kubernetes/helm/istio/values-istio-demo.yaml

#Verify
PS C:\istio> kubectl get pods -n istio-system
#Output:
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-6fc987bd95-zj4kn                  1/1     Running     0          98s
istio-citadel-55646d8965-wvflc            1/1     Running     0          97s
istio-egressgateway-7bdb7bf7b5-ck4k6      1/1     Running     0          98s
istio-galley-56bf6b7497-c9szw             1/1     Running     0          98s
istio-ingressgateway-64dbd4b954-64gj8     1/1     Running     0          98s
istio-init-crd-10-1.3.1-tvnr4             0/1     Completed   0          4h1m
istio-init-crd-11-1.3.1-qz4fh             0/1     Completed   0          4h1m
istio-init-crd-12-1.3.1-6rk5w             0/1     Completed   0          4h1m
istio-pilot-5d4c86d576-crn2k              2/2     Running     0          97s
istio-policy-759d4988df-c7tnb             2/2     Running     1          97s
istio-sidecar-injector-5d6ff6d758-8tlrx   1/1     Running     0          97s
istio-telemetry-7c88764b9c-245mk          2/2     Running     1          97s
istio-tracing-669fd4b9f8-gmlh9            1/1     Running     0          97s
kiali-94f8cbd99-zwz8z                     1/1     Running     0          98s
prometheus-776fdf7479-jwnvh               1/1     Running     0          97s

You can also verify these pods via the dashboard.

4.2 Accessing Kiali dashboard

As we installed the demo configuration profile of Istio, Kiali was also installed. Kiali is an observability console for Istio with service mesh configuration capabilities. (Read more at https://istio.io/docs/tasks/telemetry/kiali/.)

To open the Kiali UI, run:

PS C:\istio> kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001
#Output:
Forwarding from 127.0.0.1:20001 -> 20001
Forwarding from [::1]:20001 -> 20001

Then go to http://localhost:20001 to visit the Kiali UI.

Again, it asks for a login. As Kiali was installed as part of the demo configuration profile, you can use the default username “admin“ and password “admin“ to log in.

4.3 Install bookinfo demo

Now, let's deploy a demo application composed of four separate microservices. The detailed doc can be found at https://istio.io/docs/examples/bookinfo/.

  1. Start the application services

    #1. Set automatic sidecar injection
    PS C:\istio> kubectl label namespace default istio-injection=enabled

    #2. Deployment
    PS C:\istio> kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
    #Output:
    service/details created
    serviceaccount/bookinfo-details created
    deployment.apps/details-v1 created
    service/ratings created
    serviceaccount/bookinfo-ratings created
    deployment.apps/ratings-v1 created
    service/reviews created
    serviceaccount/bookinfo-reviews created
    deployment.apps/reviews-v1 created
    deployment.apps/reviews-v2 created
    deployment.apps/reviews-v3 created
    service/productpage created
    serviceaccount/bookinfo-productpage created
    deployment.apps/productpage-v1 created

    #3. Verify services and pods
    PS C:\istio> kubectl get services
    #Output:
    NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    details       ClusterIP   10.110.165.24   <none>        9080/TCP   33s
    kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    6h50m
    productpage   ClusterIP   10.97.123.119   <none>        9080/TCP   32s
    ratings       ClusterIP   10.111.216.40   <none>        9080/TCP   33s
    reviews       ClusterIP   10.109.244.28   <none>        9080/TCP   33s

    PS C:\istio> kubectl get pods
    #Output:
    NAME                             READY   STATUS    RESTARTS   AGE
    details-v1-c5b5f496d-sgr6w       2/2     Running   0          85s
    productpage-v1-c7765c886-6cpr9   2/2     Running   0          83s
    ratings-v1-f745cf57b-87m7q       2/2     Running   0          85s
    reviews-v1-75b979578c-vmzn2      2/2     Running   0          84s
    reviews-v2-597bf96c8f-plml7      2/2     Running   0          85s
    reviews-v3-54c6c64795-x67ss      2/2     Running   0          84s

    #4. Verify by calling the application
    PS C:\istio> kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | select-string -pattern "<title>"
    #Output:
    <title>Simple Bookstore App</title>
  2. Establish a gateway for the bookinfo app

    #1. Apply gateway
    PS C:\istio> kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
    #Output:
    gateway.networking.istio.io/bookinfo-gateway created
    virtualservice.networking.istio.io/bookinfo created

    #2. Verify the gateway
    PS C:\istio> kubectl get gateway
    #Output:
    NAME               AGE
    bookinfo-gateway   38s
  3. Confirm the app is accessible from outside the cluster
    Go to http://localhost/productpage and verify that you can open the page. Refresh the page several times to generate some telemetry.

  4. Kiali visualization
    Assuming the 20001 port-forward is still running, you can visualize the service relationships in Kiali at http://localhost:20001/
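The curl | Select-String verification in step 1 simply looks for the page's &lt;title&gt;. The same check can be scripted; a minimal regex-based sketch (good enough for this known page, not for arbitrary HTML):

```python
import re

def page_title(html):
    """Extract the contents of the first <title> element, or None if absent."""
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1) if match else None

# The same check the curl pipeline performs, applied to a sample response body:
sample = "<html><head><title>Simple Bookstore App</title></head><body>...</body></html>"
print(page_title(sample))  # Simple Bookstore App
```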

5. Deployment examples

Let’s deploy a single-container application (Grafana) to the cluster, as described at https://grafana.com/docs/installation/docker/

1. Docker version

docker run -d -p 3000:3000 grafana/grafana

2. Kubernetes kubectl command version

# 1. Deployment
PS C:\> kubectl run grafana-test --generator=run-pod/v1 --image=grafana/grafana --port=3000
#Output:
pod/grafana-test created

# 2. Check the name of the grafana pod. Note that it sits in the "default" namespace
PS C:\> kubectl -n default get pod
#Output:
NAME                                  READY   STATUS    RESTARTS   AGE
details-v1-c5b5f496d-sgr6w            2/2     Running   0          29h
grafana-test                          2/2     Running   0          97s
kubernetes-bootcamp-b94cb9bff-vsprh   2/2     Running   0          3h6m
productpage-v1-c7765c886-6cpr9        2/2     Running   0          29h
ratings-v1-f745cf57b-87m7q            2/2     Running   0          29h
reviews-v1-75b979578c-vmzn2           2/2     Running   0          29h
reviews-v2-597bf96c8f-plml7           2/2     Running   0          29h
reviews-v3-54c6c64795-x67ss           2/2     Running   0          29h

# 3. Enable port forwarding.
# If your pod name contains a random suffix, you can use a selector instead:
# "kubectl -n default port-forward $(kubectl -n default get pod -l run=grafana-test -o jsonpath='{.items[0].metadata.name}') 3000:3000"
PS C:\> kubectl -n default port-forward grafana-test 3000:3000
#Output:
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

3. Kubernetes YAML deployment version
It is recommended to use a YAML file for defining a deployment. See the doc at https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Create a deployment file grafana-deployment.yaml as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-yaml-deployment
  labels:
    app: grafana-yaml
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana-yaml
  template:
    metadata:
      labels:
        app: grafana-yaml
    spec:
      containers:
      - name: grafana-yaml
        image: grafana/grafana
        ports:
        - containerPort: 3000
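Since kubectl apply also accepts JSON, the same manifest can be built programmatically. A sketch mirroring the YAML above (the builder function is my own):

```python
import json

def grafana_deployment(name="grafana-yaml-deployment", app="grafana-yaml", replicas=1):
    """Build the same Deployment as the YAML above, as a JSON-serializable dict."""
    labels = {"app": app}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # the selector must match the pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{
                    "name": app,
                    "image": "grafana/grafana",
                    "ports": [{"containerPort": 3000}],
                }]},
            },
        },
    }

# Write this out as JSON and apply it with: kubectl apply -f grafana-deployment.json
print(json.dumps(grafana_deployment(), indent=2))
```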

Then apply the YAML file and run the following:
#1. Deployment
PS C:\> kubectl apply -f .\grafana-deployment.yaml
#Output:
deployment.apps/grafana-yaml-deployment created

#2. Verify
PS C:\> kubectl get deployments
#Output:
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
details-v1                1/1     1            1           29h
grafana-yaml-deployment   1/1     1            1           40s
kubernetes-bootcamp       1/1     1            1           3h27m
productpage-v1            1/1     1            1           29h
ratings-v1                1/1     1            1           29h
reviews-v1                1/1     1            1           29h
reviews-v2                1/1     1            1           29h
reviews-v3                1/1     1            1           29h

#3. Enable port forwarding, using the selector app=grafana-yaml
PS C:\> kubectl -n default port-forward $(kubectl -n default get pod -l app=grafana-yaml -o jsonpath='{.items[0].metadata.name}') 3000:3000

#4. Expose the service via nodeport
PS C:\> kubectl expose deployment grafana-yaml-deployment --type=NodePort --port=3000
#Output:
service/grafana-yaml-deployment exposed

#5. Get the external ip and port
PS C:\> kubectl get services
#Output:
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
details                   ClusterIP   10.110.165.24   <none>        9080/TCP         3d8h
grafana-yaml-deployment   NodePort    10.98.52.86     <none>        3000:30857/TCP   9s
kubernetes                ClusterIP   10.96.0.1       <none>        443/TCP          3d15h
productpage               ClusterIP   10.97.123.119   <none>        9080/TCP         3d8h
ratings                   ClusterIP   10.111.216.40   <none>        9080/TCP         3d8h
reviews                   ClusterIP   10.109.244.28   <none>        9080/TCP         3d8h

PS C:\> kubectl describe service grafana-yaml-deployment
Name:                     grafana-yaml-deployment
Namespace:                default
Labels:                   app=grafana-yaml
Annotations:              <none>
Selector:                 app=grafana-yaml
Type:                     NodePort
IP:                       10.98.52.86
LoadBalancer Ingress:     localhost
Port:                     <unset>  3000/TCP
TargetPort:               3000/TCP
NodePort:                 <unset>  30857/TCP
Endpoints:                10.1.0.208:3000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Then you can access the Grafana pod via http://localhost:30857
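The node port is the second number in the PORT(S) column (3000:30857/TCP maps service port 3000 to node port 30857). A sketch that extracts it from such a value (the helper name is mine):

```python
def node_port(ports_field):
    """Extract the node port from a PORT(S) value like '3000:30857/TCP'."""
    port_part = ports_field.split("/")[0]   # e.g. '3000:30857'
    pieces = port_part.split(":")
    # ClusterIP services show a single port and have no node port
    return int(pieces[1]) if len(pieces) == 2 else None

print(node_port("3000:30857/TCP"))  # 30857
print(node_port("443/TCP"))         # None
```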

6. Summary

Now you should have a Kubernetes environment up and running, with Istio and Kiali enabled. It can serve as your sandbox for developing and testing applications on Kubernetes. With Istio and Kiali, you can also experiment with service mesh features. Everything runs locally in “one box”, so you do not need to worry about any cloud running costs.

Have fun.
