Installation

  • winsw/winsw

Download the latest release and the sample configuration file from the Releases page.

Copy both the executable and the configuration file into a directory of your choice, and rename them so that their base names match.
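For example, a minimal PowerShell sketch, assuming the downloaded assets are named WinSW-x64.exe and sample-minimal.xml (actual file names vary by release) and E:\Frp is the chosen directory:

# Copy the wrapper and the sample config so that both share the same base name
Copy-Item .\WinSW-x64.exe E:\Frp\frpc-service.exe
Copy-Item .\sample-minimal.xml E:\Frp\frpc-service.xml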

Configuration

The following uses frpc as an example.

<!-- frpc-service.xml -->
<service>
  <id>frpc</id>
  <name>Frpc Service</name>
  <description>A fast reverse proxy to help you expose a local server behind a NAT or firewall to the internet.</description>
  <executable>E:\Frp\frpc.exe</executable>
  <onfailure action="restart" delay="5 sec" />
  <resetfailure>1 day</resetfailure>
  <arguments>-c E:\Frp\frpc.ini</arguments>
  <workingdirectory>E:\Frp</workingdirectory>
  <priority>AboveNormal</priority>
  <stoptimeout>15 sec</stoptimeout>
  <stopparentprocessfirst>false</stopparentprocessfirst>
  <startmode>Automatic</startmode>
  <waithint>15 sec</waithint>
  <sleeptime>1 sec</sleeptime>
  <logpath>E:\Frp\logs</logpath>
  <log mode="roll-by-time">
    <pattern>yyyyMMdd</pattern>
  </log>
</service>

For an explanation of each option, refer to the sample configuration file you downloaded.

Run

# First run in test mode to confirm the configuration works
./frpc-service.exe testwait

# Install the service
./frpc-service.exe install

# Start the service
./frpc-service.exe start

Reference

A wrapper binary that can be used to host executables as Windows services

Usage: winsw [/redirect file] <command> [<args>]
       Missing arguments trigger the service mode

Available commands:
  install     install the service to Windows Service Controller
  uninstall   uninstall the service
  start       start the service (must be installed before)
  stop        stop the service
  stopwait    stop the service and wait until it's actually stopped
  restart     restart the service
  restart!    self-restart (can be called from child processes)
  status      check the current status of the service
  test        check if the service can be started and then stopped
  testwait    starts the service and waits until a key is pressed then stops the service
  version     print the version info
  help        print the help info (aliases: -h,--help,-?,/?)

Extra options:
  /redirect   redirect the wrapper's STDOUT and STDERR to the specified file

WinSW 2.9.0.0
More info: https://github.com/kohsuke/winsw
Bug tracker: https://github.com/kohsuke/winsw/issues

Install Docker and pull the required images

  • gitea/gitea

  • drone/drone

  • drone/drone-runner-docker

docker pull gitea/gitea
docker pull drone/drone
docker pull drone/drone-runner-docker

Install Gitea

docker run -d \
  --name gitea \
  --restart always \
  -p 53022:22 \
  -p 53080:3000 \
  -m 400m \
  -v /opt/docker/gitea/data:/data \
  gitea/gitea:1.11.5

After finishing the Gitea setup wizard, open https://git.52xckl.cn/user/settings/applications.

Set the application name to drone and the redirect URL to https://drone.52xckl.cn/login, then click Create Application to obtain the Client ID and Client Secret.

Create drone and drone-runner

  • ${CLIENT_ID} -> the Client ID
  • ${CLIENT_SECRET} -> the Client Secret
  • ${RPC_SECRET} -> the secret shared between drone and drone-runner; any random string will do (see the sketch after this list)
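A quick way to generate such a secret, assuming openssl is available:

# Print 16 random bytes as a hex string and use it as ${RPC_SECRET}
openssl rand -hex 16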
docker run -d \
  --name drone \
  --restart always \
  -m 200m \
  -p 54080:80 \
  -e DRONE_GITEA_SERVER=https://git.52xckl.cn \
  -e DRONE_GITEA_CLIENT_ID=${CLIENT_ID} \
  -e DRONE_GITEA_CLIENT_SECRET=${CLIENT_SECRET} \
  -e DRONE_RPC_SECRET=${RPC_SECRET} \
  -e DRONE_SERVER_HOST=drone.52xckl.cn \
  -e DRONE_SERVER_PROTO=https \
  -v /opt/docker/drone/data:/var/lib/drone \
  drone/drone:1.7.0
docker run -d \
  --name drone-runner \
  --link drone:drone \
  --restart always \
  -m 200m \
  -e DRONE_RUNNER_NAME=runner-001 \
  -e DRONE_RPC_PROTO=http \
  -e DRONE_RPC_HOST=drone \
  -e DRONE_RPC_SECRET=${RPC_SECRET} \
  -v /var/run/docker.sock:/var/run/docker.sock \
  drone/drone-runner-docker:1.3.0

Finally, if docker logs -f --tail 10 drone-runner shows successfully pinged the remote server, the runner is connected.

Test

First open https://drone.52xckl.cn/, complete the OAuth2 authorization, and activate the project.

In the activated project, create a file named .drone.yml:

kind: pipeline
name: test
type: docker
steps:
  - name: test
    image: golang:1.14-alpine
    commands:
      - CGO_ENABLED=0 GO111MODULE=on go test -count=1 -cover -v ./...
      - CGO_ENABLED=0 GO111MODULE=on go run .

This pipeline is for a Go project; consult the official documentation when writing this file for other kinds of projects.

References

  • gitea

  • drone

Install the dependencies

apt install -y uml-utilities bridge-utils

Add a bridge

brctl addbr br0

Bring the bridge up

ip link set br0 up

Create and bring up the virtual NIC (tap device)

tunctl -t tap0  # create the tap device (tunctl is provided by uml-utilities)
ip link set tap0 up

Attach the virtual NIC to the bridge

brctl addif br0 tap0
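To confirm the attachment, list the bridge and its interfaces:

# tap0 should show up in the interfaces column of br0
brctl show br0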

Assign an IP address to the bridge

ifconfig br0 172.24.16.10 up
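The equivalent iproute2 form, as a sketch assuming a /24 netmask, would be:

# Assign the address with ip instead of the legacy ifconfig
ip addr add 172.24.16.10/24 dev br0
ip link set br0 up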

Teardown

brctl delif br0 tap0

tunctl -d tap0

brctl delbr br0

brctl

Usage: brctl [commands]
commands:
  addbr         <bridge>                add bridge
  delbr         <bridge>                delete bridge
  addif         <bridge> <device>       add interface to bridge
  delif         <bridge> <device>       delete interface from bridge
  hairpin       <bridge> <port> {on|off}  turn hairpin on/off
  setageing     <bridge> <time>         set ageing time
  setbridgeprio <bridge> <prio>         set bridge priority
  setfd         <bridge> <time>         set bridge forward delay
  sethello      <bridge> <time>         set hello time
  setmaxage     <bridge> <time>         set max message age
  setpathcost   <bridge> <port> <cost>  set path cost
  setportprio   <bridge> <port> <prio>  set port priority
  show          [ <bridge> ]            show a list of bridges
  showmacs      <bridge>                show a list of mac addrs
  showstp       <bridge>                show bridge stp info
  stp           <bridge> {on|off}       turn stp on/off

After installing k8s and Calico as described in the previous post, use Istio to manage the microservices.

Download Istio

  • https://istio.io/docs/setup/getting-started/

  • Istio 1.5.4

curl -L https://istio.io/downloadIstio | sh -
# Or pick a version from `https://github.com/istio/istio/releases/latest`, download it, and extract it with `tar zxf istio-*.tar.gz`
cd istio-*
# Add the bin directory to the `PATH` environment variable
export PATH=$PWD/bin:$PATH

Installation

  • https://istio.io/docs/setup/install/istioctl/

Install with the default configuration

istioctl manifest apply

Verify the installation

istioctl manifest generate > istio.yaml
istioctl verify-install -f istio.yaml

Deploy

Gitea is used here as the deployment image for simplicity; for your own microservices, just adjust the routes accordingly.

Create the test namespace

kubectl create namespace test

Create a ConfigMap

# config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: test
data:
  DB_TYPE: sqlite3
kubectl apply -f config.yaml

Deployment

# gitea.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: test
spec:
  type: ClusterIP
  ports:
    - name: http-web
      port: 80
      targetPort: 3000
  selector:
    app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: test
          image: gitea/gitea:latest
          ports:
            - name: port
              containerPort: 3000
          envFrom:
            - configMapRef:
                name: config
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
  namespace: test
spec:
  hosts:
    - "*"
  gateways:
    - istio-system/istio-ingressgateway
  http:
    - match:
        - uri:
            prefix: "/"
      route:
        - destination:
            host: web
            port:
              number: 80
kubectl apply -f gitea.yaml
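Note that the VirtualService above binds to a Gateway named istio-ingressgateway in the istio-system namespace. If your installation did not create such a Gateway resource (the default profile may not), a minimal sketch of one, assuming the standard istio: ingressgateway selector label, looks like this:

# Create a Gateway that exposes plain HTTP on port 80 of the ingress gateway
kubectl apply -n istio-system -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-ingressgateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
EOF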

Check

Change the istio-ingressgateway service type from LoadBalancer to NodePort to expose the port.

kubectl edit service istio-ingressgateway -n istio-system
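If you prefer a non-interactive change, the same type switch can be done with a patch (a sketch):

# Switch the service type to NodePort without opening an editor
kubectl patch service istio-ingressgateway -n istio-system -p '{"spec":{"type":"NodePort"}}'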
kubectl get service -n istio-system
istio-ingressgateway   NodePort    10.96.203.60    <none>        80:31893/TCP   25m

Finally, visit http://${ip}:31893/ and you should see the Gitea page.

Preparation

Install Docker

See the previous post on installing Docker.

Set the hostname

hostnamectl set-hostname k8s-master

Edit /etc/hosts

192.168.140.28 api.k8s.local k8s-master

Disable swap

swapoff -a

Also comment out the swap entry in /etc/fstab so it stays disabled after a reboot.
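A sketch of doing that with GNU sed (review /etc/fstab afterwards):

# Comment out every fstab entry that has a swap field
sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab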

Add kernel parameters

Edit /etc/sysctl.conf:

fs.file-max = 1000000

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
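The net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so load it and then apply the settings:

# Load the bridge netfilter module and keep it loaded after reboots
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Apply the sysctl settings
sysctl -p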

Add the yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Alibaba Cloud public mirror

sed -i 's|packages.cloud.google.com|mirrors.aliyun.com/kubernetes|' /etc/yum.repos.d/kubernetes.repo

Alibaba Cloud internal mirror

sed -i 's|https://packages.cloud.google.com|http://mirrors.cloud.aliyuncs.com/kubernetes|' /etc/yum.repos.d/kubernetes.repo

Install

yum install -y kubeadm kubelet kubectl

systemctl enable kubelet
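Since kubeadm init below pins --kubernetes-version=1.18.2, you may prefer installing matching package versions rather than whatever is newest; a sketch (exact version strings depend on the repo):

# Pin kubeadm/kubelet/kubectl to the control-plane version used below
yum install -y kubeadm-1.18.2 kubelet-1.18.2 kubectl-1.18.2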

Initialize the master node

kubeadm init \
  --kubernetes-version=1.18.2 \
  --apiserver-advertise-address=192.168.140.28 \
  --apiserver-bind-port 6443 \
  --pod-network-cidr=10.244.0.0/16 \
  --image-repository registry.aliyuncs.com/google_containers

When Your Kubernetes control-plane has initialized successfully! appears, the installation succeeded, and the output that follows it contains the related setup commands.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The last command in the output is needed later when joining k8s node machines:

kubeadm join 192.168.140.28:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx

If you forgot to save it, you can regenerate it with:

kubeadm token create --print-join-command

At this point you can inspect the nodes and pods with:

kubectl get node

kubectl get pod -A
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   8m56s   v1.18.2

The status is NotReady because the network add-on has not been installed yet.

Add a node

kubeadm join 192.168.140.28:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx

Install the Calico network add-on

wget --unlink -qO calico.yaml https://docs.projectcalico.org/v3.14/manifests/calico.yaml
# 10.244.0.0/16 must match the --pod-network-cidr value passed to kubeadm init above
sed -i -e "s|192.168.0.0/16|10.244.0.0/16|g" calico.yaml
kubectl apply -f calico.yaml

After this step the node status changes to Ready.

Install the Dashboard

wget --unlink -qO dashboard.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl apply -f dashboard.yaml

This is normally not reachable from outside the cluster. In a test environment, change the kubernetes-dashboard service type from ClusterIP to NodePort to expose the port.

kubectl edit service kubernetes-dashboard -n kubernetes-dashboard
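Alternatively, a non-interactive patch does the same (a sketch):

# Switch the dashboard service type to NodePort
kubectl patch service kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'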

After the change, check the services:

kubernetes-dashboard   kubernetes-dashboard        NodePort    10.106.39.19    <none>        443:31570/TCP            36m

Visiting https://${ip}:31570 then asks for a token, so create an administrator user as follows.

# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
kubectl apply -f admin-user.yaml

Finally, list the secrets, find the admin-user entry, and copy its token to log in as administrator.

kubectl describe secrets -n kubernetes-dashboard
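To jump straight to the admin-user token (a sketch; on this Kubernetes version the ServiceAccount token secret is created automatically):

# Describe only the secret that belongs to the admin-user ServiceAccount
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')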

Miscellaneous

Allow the master to schedule workloads (single-node deployment)

kubectl taint nodes --all node-role.kubernetes.io/master-

Command completion

kubectl completion bash > /root/.kube/completion.bash.inc

If you use k as an alias for kubectl, edit the generated file and change its ending to:

if [[ $(type -t compopt) = "builtin" ]]; then
    complete -o default -F __start_kubectl kubectl
    complete -o default -F __start_kubectl k
else
    complete -o default -o nospace -F __start_kubectl kubectl
    complete -o default -o nospace -F __start_kubectl k
fi

Finally, add the following to .bash_profile to enable completion:

# complete
source /usr/share/bash-completion/bash_completion
source /root/.kube/completion.bash.inc

# alias
alias k='kubectl'