Recently I played with the Spring/SpringBoot/SpringCloud stack in a toy project: https://github.com/gonwan/spring-cloud-demo. I'll just paste the README.md here; any pull request is welcome:
Switch from Postgres to MySQL, and from Kafka to RabbitMQ.
Easier local debugging by switching off service discovery and remote config file lookup.
Kubernetes support.
Swagger Integration.
Spring Boot Admin Integration.
The project includes:
[eureka-server]: Service for service discovery. Registered services are shown on its web frontend, running at port 8761.
[config-server]: Service for config file management. Config files can be accessed via http://${config-server}:8888/${appname}/${profile}, where ${appname} is spring.application.name and ${profile} is something like dev, prd or default (see the example after this list).
[zipkin-server]: Service to aggregate distributed tracing data, working with spring-cloud-sleuth. It runs at port 9411. All cross-service requests and message bus deliveries are traced by default.
[zuul-server]: Gateway service to route requests, running at port 5555.
[authentication-service]: OAuth2-enabled authentication service, running at port 8901. Redis is used as the token cache. JWT support is also included. Spring Cloud Security 2.0 saves a lot of effort when building this kind of service.
[organization-service]: Application service holding organization information, running at 8085. It also acts as an OAuth2 client to authentication-service for authorization.
[license-service]: Application service holding license information, running at 8080. It also acts as an OAuth2 client to authentication-service for authorization.
[config]: Config files hosted to be accessed by config-server.
[docker]: Docker compose support.
[kubernetes]: Kubernetes support.
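For instance, with the config server running locally, the dev profile config of the license service can be fetched as below (the application name is illustrative; it must match spring.application.name):
# curl http://localhost:8888/license-service/dev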
NOTE: The new OAuth2 support in Spring is actively being developed, and its functions are being merged into core Spring Security 5. As a result, the current implementation is likely to change. See:
Every response contains a correlation ID to help diagnose possible failures across service calls. Run with curl -v to get it:
# curl -v ...
...
< sc-correlation-id: 3265b50156556c05
...
Search for it in Zipkin to get the full trace info, including latencies, if you are interested.
The license service caches organization info in Redis, prefixed with organizations:. So you may want to clear these keys to get a complete trace of the cross-service invocation.
All OAuth2 tokens are cached in Redis, prefixed with oauth2:. There is also JWT token support. Comment/uncomment the @Configuration annotation in the AuthorizationServerConfiguration and JwtAuthorizationServerConfiguration classes to switch it on/off.
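As a quick sanity check, a token can be requested with the password grant. A sketch (the client and user credentials are illustrative; /oauth/token is the Spring Security OAuth2 default endpoint):
# curl -u clientid:clientsecret -d "grant_type=password&username=user&password=password" http://localhost:8901/oauth/token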
Swagger Integration
The organization service and license service have Swagger integration. Access via /swagger-ui.html.
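The raw API description should also be available at the standard springfox endpoint (assuming springfox is the integration used), e.g. for the organization service:
# curl http://localhost:8085/v2/api-docs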
Spring Boot Admin Integration
Spring Boot Admin is integrated into the eureka server. Access via: http://${eureka-server}:8761/admin.
It is painful to deploy a Kubernetes cluster in mainland China. The installation requires access to Google servers, which is not easy for everyone. Fortunately, there are mirrors and alternative ways. I'll use Docker v1.13 and Kubernetes v1.11 in this article.
Run the init command with the version specified, so that access to Google servers is avoided. The command also advises you to turn off firewalld, swap and selinux, and to enable some kernel parameters:
# systemctl stop firewalld
# systemctl disable firewalld
# swapoff -a
# setenforce 0
To make it persistent, open /etc/sysconfig/selinux and change enforcing to permissive.
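A one-liner sketch of the same edit (verify the file name on your distribution; some use /etc/selinux/config):
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/sysconfig/selinux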
Create /etc/sysctl.d/k8s.conf with content:
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
Then apply them:
# sysctl --system
Remember to comment out swap volumes from /etc/fstab.
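For example, with sed (a sketch; check the matched lines before running it):
# sed -i '/ swap / s/^/#/' /etc/fstab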
2.3 Pull Kubernetes images
Pull the Kubernetes images from the docker/docker-cn mirror maintained by anjia0532. These are the minimal images required for a Kubernetes master installation.
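For example, to pull the apiserver image from the mirror (repeat for each of the images listed below):
# docker pull registry.docker-cn.com/anjia0532/google-containers.kube-apiserver-amd64:v1.11.1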
These version numbers come from the output of the kubeadm init command when it cannot access Google servers. The images should be retagged as gcr.io ones before the next steps, or the kubeadm command line will not find them:
# docker tag registry.docker-cn.com/anjia0532/google-containers.kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
# docker tag registry.docker-cn.com/anjia0532/google-containers.kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
# docker tag registry.docker-cn.com/anjia0532/google-containers.kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
# docker tag registry.docker-cn.com/anjia0532/google-containers.kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
# docker tag registry.docker-cn.com/anjia0532/google-containers.pause:3.1 k8s.gcr.io/pause:3.1
# docker tag registry.docker-cn.com/anjia0532/google-containers.etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
# docker tag registry.docker-cn.com/anjia0532/google-containers.coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
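With the images retagged, the init command can be run with the version pinned, so no access to Google servers is needed. A minimal sketch (extra flags such as --pod-network-cidr depend on the network plugin you choose):
# kubeadm init --kubernetes-version=v1.11.1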
Now, you can access the dashboard at: http://<master-ip>:31023/.
You can grant full admin privileges to the Dashboard's Service Account in a development environment for convenience:
# cat dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
# kubectl create -f dashboard-admin.yaml
5. Troubleshooting
In my office environment, errors occurred and the coredns pods were always in CrashLoopBackOff status.
I googled a lot, read answers on Stack Overflow and GitHub, and reset iptables/docker/kubernetes, but still failed to solve it. There ARE unresolved issues like #60315. So I tried to switch to the flannel network plugin instead of weave. First, Kubernetes and weave need to be reset:
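A minimal reset sketch (assuming the weave CLI script is installed; kubeadm reset wipes the local cluster state):
# kubeadm reset
# weave reset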
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
Updated June 3, 2019: flannel seems to have a tight dependency on the kubernetes version. When deploying kubernetes 1.14, a specific git version of the flannel manifest should be used, according to the official document:
Updated Jan 11, 2022: Just deployed a new cluster with docker 20.10.12 & kubernetes 1.23.1.
1. kubeadm now defaults to systemd, instead of cgroupfs, as the container runtime cgroup driver. In the docker case, edit /etc/docker/daemon.json and restart the docker service:
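The snippet below follows the kubeadm documentation; it sets the docker cgroup driver to systemd and restarts docker:
# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl restart docker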