
Introduction

For the orchestration of the Yona containers in a cluster, we use Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications. The Kubernetes resources are managed with Helm, a tool for managing Kubernetes charts; charts are packages of pre-configured Kubernetes resources. This page describes how to set up this environment and how to run Yona with it.

Setup

Windows

Warning: This procedure requires Windows 10 Pro. It does not work on Windows 10 Home, as Hyper-V is not available in that edition. It might also work on Windows 8 Pro, but that has not been tested.

To run Yona on a developer laptop, we use Minikube, a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop, aimed at users who want to try out Kubernetes or develop with it day-to-day. The procedure below uses Hyper-V as the VM provider; since Hyper-V is a standard Windows component, that is one less component to install. See this Microsoft blog for background information and to learn how to enable Hyper-V.

These are the steps to set it up:

  1. Install the stable version of Docker, as described here
  2. Download the minikube-windows-amd64.exe file, rename it to minikube.exe and add it to your path
  3. Determine the latest version of kubectl:

    >curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt
  4. Download the latest version of kubectl and add it to your path:

    >curl -LO https://storage.googleapis.com/kubernetes-release/release/<latest version>/bin/windows/amd64/kubectl.exe
  5. Open Hyper-V manager and create a virtual switch named Minikube. The procedure is given in the Microsoft blog mentioned above, but we use a different name for the virtual switch.
  6. Open a command prompt with administrator privileges
  7. Start the local Kubernetes cluster:

    >minikube start --vm-driver="hyperv" --hyperv-virtual-switch=Minikube --cpus 4 --memory 7192 --disk-size 10G --v=7 --alsologtostderr

    Minikube will create a Hyper-V instance with the above configuration and deploy the Kubernetes master node to it. It will set up your kubectl config (%HOMEDRIVE%%HOMEPATH%/.kube/config) to point to the K8S API in the virtual environment.
    To verify that the node is available:

    >kubectl get nodes
    NAME       STATUS    AGE       VERSION
    minikube   Ready     1d        v1.6.0

    You can see a quick list of all the running components on a base K8S install:

    >kubectl get all --all-namespaces  -a
    NAMESPACE     NAME                                READY     STATUS    RESTARTS   AGE
    kube-system   po/kube-addon-manager-minikube      1/1       Running   1          20h
    kube-system   po/kube-dns-268032401-g0kdt         3/3       Running   96         18h
    kube-system   po/kubernetes-dashboard-q4m57       1/1       Running   1          20h
    
    NAMESPACE     NAME                      DESIRED   CURRENT   READY     AGE
    kube-system   rc/kubernetes-dashboard   1         1         1         20h
    
    NAMESPACE     NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
    default       svc/kubernetes             10.0.0.1     <none>        443/TCP         20h
    kube-system   svc/kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   20h
    kube-system   svc/kubernetes-dashboard   10.0.0.164   <nodes>       80:30000/TCP    20h
    
    NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    kube-system   deploy/kube-dns        1         1         1            1           20h
    
    NAMESPACE     NAME                          DESIRED   CURRENT   READY     AGE
    kube-system   rs/kube-dns-268032401         1         1         1         20h
    kube-system   rs/tiller-deploy-1491950541   1         1         1         20h
  8. Download Helm and add it to your path
  9. Set the HELM_HOME environment variable:

    >set HELM_HOME=%HOMEDRIVE%%HOMEPATH%\.helm
  10. Load the Tiller component of Helm into the Kubernetes cluster:

    >helm init
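As an aside, steps 3 and 4 of the setup can be combined into a small script. A minimal sketch, assuming a POSIX shell such as Git Bash; the version is pinned here for illustration rather than fetched live:

```shell
# Build the kubectl download URL from the stable version marker.
# In practice, fetch the version live instead of pinning it:
#   STABLE=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
STABLE="v1.6.0"
URL="https://storage.googleapis.com/kubernetes-release/release/${STABLE}/bin/windows/amd64/kubectl.exe"
echo "$URL"
# curl -LO "$URL"   # then download and add kubectl.exe to your path
```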

Now you are ready to deploy the Yona server.

Deploy Yona server

This section describes how to deploy Yona server. After covering the preparation steps, two options are described: Deploying Yona from a checkout and deploying an existing build.

Preparation steps

  1. Go to the k8s folder in the Yona check-out
  2. Add the Yona charts repository for the dependencies:

    >helm repo add yona https://yonadev.github.io/helm-charts
  3. Create the Yona namespace:

    >kubectl create -f .\01_namespace.yaml
  4. Create a persistent storage volume:

    >kubectl create -f .\02_storage.yaml

    This creates a volume of the defined size in the virtual machine, to be used by the persistent volume claims for the MariaDB and LDAP data.
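For orientation, manifests of this kind typically look like the sketch below. This is a hypothetical illustration only; the actual definitions live in 01_namespace.yaml and 02_storage.yaml in the k8s folder, and the volume name, size, and path shown here are assumptions:

```yaml
# Hypothetical sketch; the files in the k8s folder are authoritative.
apiVersion: v1
kind: Namespace
metadata:
  name: yona
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: yona-storage        # illustrative name
spec:
  capacity:
    storage: 10Gi           # the "defined size" mentioned above
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/yona        # illustrative directory inside the Minikube VM
```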


Deploy from checkout

Follow these steps to deploy Yona from a Git check-out:

 

  1. Go to the helm subfolder
  2. Update the dependencies

    >helm dependency update yona

    This downloads the MariaDB and LDAP charts if they have not been downloaded already.

  3. Now deploy/upgrade Yona:

    >helm upgrade --install --namespace yona yona ./yona

Deploy existing build

Follow these steps to deploy a build created by Jenkins:

  1. Update the repositories

    >helm repo update
  2. Two possibilities exist:
    1. Deploy latest build

      >helm upgrade --install --namespace yona yona yona/yona
    2. Deploy a specific build

      >helm upgrade --install --namespace yona --version 1.2.549 yona yona/yona

Verify the deployment

Monitor stand-up

To see the pods being deployed:

>kubectl get pods -n yona -a

Initially, you will see various Liquibase containers failing because some components (MariaDB, LDAP, etc.) are not ready right away.

Eventually a Liquibase run will complete, after which the various components will come up.

You can also use the -w option to watch changes to the pod status:

>kubectl get pods -n yona -w

Once the liquibase-update pod shows status 'Completed', the deployment is good to go.

To view the logs of any container, use the following command with the full pod name from the get pods output above:

>kubectl logs -n yona 527-develop-liquibase-update-5p637
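The 'Completed' check can also be scripted. A minimal sketch that extracts the STATUS column of the liquibase-update pod; it runs here against captured sample output (the second pod name is illustrative) instead of calling kubectl live:

```shell
# Sample output as produced by: kubectl get pods -n yona -a
pods='NAME                                 READY     STATUS      RESTARTS   AGE
527-develop-liquibase-update-5p637   0/1       Completed   0          5m
527-develop-admin-1234567890-abcde   1/1       Running     0          5m'

# Pick the STATUS field (third column) of the liquibase-update pod.
status=$(printf '%s\n' "$pods" | awk '/liquibase-update/ {print $3}')
echo "$status"   # "Completed" means the schema update finished
```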

Accessing the cluster services

Once everything is up, you probably want to test it.

Get a list of the exposed services with the following:

>minikube service list
|-------------|----------------------|-----------------------------|
|  NAMESPACE  |         NAME         |             URL             |
|-------------|----------------------|-----------------------------|
| default     | kubernetes           | No node port                |
| kube-system | kube-dns             | No node port                |
| kube-system | kubernetes-dashboard | http://192.168.178.73:30000 |
| kube-system | tiller-deploy        | No node port                |
| yona        | admin                | http://192.168.178.73:31001 |
| yona        | admin-actuator       | http://192.168.178.73:31011 |
| yona        | analysis             | http://192.168.178.73:31002 |
| yona        | analysis-actuator    | http://192.168.178.73:31012 |
| yona        | app                  | http://192.168.178.73:31003 |
| yona        | app-actuator         | http://192.168.178.73:31013 |
| yona        | batch                | http://192.168.178.73:31004 |
| yona        | batch-actuator       | http://192.168.178.73:31014 |
| yona        | ldap                 | http://192.168.178.73:31389 |
| yona        | yona-mariadb         | No node port                |
|-------------|----------------------|-----------------------------|
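To use a service URL in scripts, the table above can be parsed rather than copied by hand. A sketch assuming the table layout shown above, run here against two captured sample rows instead of live `minikube service list` output:

```shell
# Sample rows as printed by: minikube service list
services='| yona        | app                  | http://192.168.178.73:31003 |
| yona        | app-actuator         | http://192.168.178.73:31013 |'

# Split on '|' and pick the URL column of the row whose NAME is exactly "app".
app_url=$(printf '%s\n' "$services" | awk -F'|' '$3 ~ /^ app +$/ {gsub(/ /, "", $4); print $4}')
echo "$app_url"
```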

Now you can check whether the app service is up and running, with the activity categories deployed:

>curl http://192.168.178.73:31003/activityCategories/
{
  "_embedded" : {
    "yona:activityCategories" : [ {
:
:

To verify which build is running:

>curl http://192.168.178.73:31013/info/
{
  "build" : {
    "version" : "0.0.8-SNAPSHOT",
    "buildNumber" : "549",
    "artifact" : "appservice",
    "name" : "appservice",
    "group" : "yonadev",
    "time" : "2017-06-17T09:50:54.000+0000"
  }
}
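For scripted checks, the build number can be extracted from the /info response. jq would be the natural tool, but the sed sketch below avoids an extra dependency; it runs against a captured sample in the JSON style shown above rather than a live curl call:

```shell
# Sample /info response (abbreviated from the output above).
info='{
  "build" : {
    "version" : "0.0.8-SNAPSHOT",
    "buildNumber" : "549"
  }
}'

# Pull the value of the buildNumber field.
build=$(printf '%s\n' "$info" | sed -n 's/.*"buildNumber" : "\([^"]*\)".*/\1/p')
echo "$build"
```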

To fetch the current configuration properties:  

>curl http://192.168.178.73:31013/configprops/
{
  "spring.jpa-org.springframework.boot.autoconfigure.orm.jpa.JpaProperties" : {
    "prefix" : "spring.jpa",
    "properties" : {
:
:

 

Shutdown for redeploy

The following commands delete the Helm release and clean up various resources that are not automatically removed when a release is deleted. Note that the commands are chained with ;, so run them in PowerShell or another shell that supports that separator (cmd.exe uses & instead).

>helm delete yona --purge ; kubectl -n yona delete secret -l release=yona ; kubectl -n yona delete job -l release=yona ; kubectl -n yona delete cm -l release=yona ; kubectl -n yona delete pvc -l release=yona ; echo "All cleanly purged"