Download

For installation, see the official documentation.

Note that the version used in the documentation is the Pro edition, which is not the same as the open-source edition downloaded from GitHub Releases.

The version used below is v2.19.0 Pro.

Installation

Configure a kubectl context on the local machine and switch to the target cluster with kubectl config use-context dev, or run the following command directly on the server, to install the Traffic Manager:

telepresence helm install

For installation parameters, see ArtifactHub.
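
If the chart needs customizing, values can apparently be passed Helm-style; the --values flag below is assumed from the Helm CLI convention, so verify it with telepresence helm install --help first:

# values.yaml is a hypothetical file holding chart overrides from ArtifactHub
telepresence helm install --values values.yaml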

After installation, check the status with kubectl get pod -n ambassador:

NAME                                                READY   STATUS    RESTARTS   AGE
traffic-manager-ambassador-agent-56654cffd7-8qqdh   1/1     Running   0          46s
traffic-manager-5b77fb5-62lr4                       1/1     Running   0          46s

Connect to the Cluster

telepresence connect --namespace biz # specify the namespace

Launching Telepresence User Daemon
Launching Telepresence Root Daemon
...
Connected to context dev, namespace biz (https://10.x.x.x:6443)

List the workloads available for interception:

telepresence list

bill              : ready to intercept (traffic-agent not yet installed)
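
At this point telepresence status is also handy for confirming that both local daemons are running and connected:

telepresence status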

Intercept and Forward Traffic

Inspect the YAML of the target service:

kubectl get svc -n biz bill -o yaml

apiVersion: v1
kind: Service
metadata:
  name: bill
  namespace: biz
spec:
  clusterIP: 10.43.57.126
  clusterIPs:
  - 10.43.57.126
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 10000
  - name: grpc
    port: 81
    protocol: TCP
    targetPort: 10000
  selector:
    app: bill
  sessionAffinity: None
  type: ClusterIP

The bill service listens on port 10000, so specify 10000:10000 (<local port>:<remote port>) when intercepting:

telepresence intercept bill -p 10000:10000

Using Deployment bill
   Intercept name         : bill
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 127.0.0.1:10000
   Service Port Identifier: port
   Volume Mount Error     : sshfs is not installed on your local machine
   Intercepting           : all TCP requests

Describing the pod shows that Telepresence has injected a tel-agent-init init container and a traffic-agent sidecar:

kubectl describe pod -n biz bill-779b9c6bf7-mdmv6

Init Containers:
  tel-agent-init:
    Container ID:  containerd://a8c5d0eff4714d77b8299ebe698dfe7907f521524f8ca267e9a1c24958e3ed9d
    Image:         docker.io/datawire/ambassador-telepresence-agent:1.14.5
    Image ID:      docker.io/datawire/ambassador-telepresence-agent@sha256:3f6f3076b1eca26c460ef166993c3d9e7527fcc2a3d74709e01869a39cfebd91
    Port:          <none>
    Host Port:     <none>
    Args:
      agent-init
    State:         Terminated
      Reason:      Completed
      Exit Code:   0
      Started:     Tue, 19 Mar 2024 10:02:21 +0800
      Finished:    Tue, 19 Mar 2024 10:02:21 +0800
    Ready:         True

Containers:
  bill:
    Container ID:  containerd://e43df533e5f3834fcb9d7f5b62bef63b6a01375a3d94cee3f2a5ef8c83592966
    Image:         ...
    Image ID:      ...
    Port:          10000/TCP
    Host Port:     0/TCP
    State:         Running
    Ready:         True

  traffic-agent:
    Container ID:  containerd://356cdc8b7f423871907778085f43804a5e6c19c98884b0342e88b073d4926c8a
    Image:         docker.io/datawire/ambassador-telepresence-agent:1.14.5
    Image ID:      docker.io/datawire/ambassador-telepresence-agent@sha256:3f6f3076b1eca26c460ef166993c3d9e7527fcc2a3d74709e01869a39cfebd91
    Port:          9900/TCP
    Host Port:     0/TCP
    Args:
      agent
    State:         Running
      Started:     Tue, 19 Mar 2024 10:02:22 +0800
    Ready:         True

Then access the remote cluster address https://10.x.x.x:8888/bill/_health; the traffic is now forwarded to local port 10000.
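
For that to work something must be listening on local port 10000. If the real service is not running locally, a throwaway listener (a stand-in, not the actual bill service) is enough for a smoke test:

python3 -m http.server 10000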

Restore

telepresence leave bill # stop `bill` intercept

telepresence quit -s # stop all local telepresence daemons
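
To also remove the Traffic Manager from the cluster:

telepresence helm uninstall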

Ref

  • https://www.getambassador.io/docs/telepresence

To self-host a Tailscale Derper node with client verification enabled, the Tailscale client must also be installed on the Derper node.

Docker Compose

version: "3"
services:
  tailscale:
    image: tailscale/tailscale
    container_name: tailscale
    privileged: true
    restart: always
    volumes:
      - "./tailscale/data:/var/lib/tailscale"
      - "./tailscale/tmp:/tmp"
      - "/dev/net/tun:/dev/net/tun"
    cap_add:
      - net_admin
      - sys_module
    environment:
      TS_AUTHKEY: "obtain from https://login.tailscale.com/admin/settings/keys"
      TS_STATE_DIR: "/var/lib/tailscale"
      TS_USERSPACE: "false"
  derper:
    image: starudream/derper
    container_name: derper
    restart: always
    command: /tailscale/derper -a :80 -verify-clients
    depends_on:
      - tailscale
    ports:
      - "3478:3478/udp"
    volumes:
      - "./tailscale/tmp:/var/run/tailscale"

See here for the Derper image.

As of the current version, 1.60.1, tailscaled.sock (/var/run/tailscale/tailscaled.sock) is just a symlink pointing to /tmp/tailscaled.sock, which is why the compose file shares the tailscale container's /tmp with derper's /var/run/tailscale.
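
A quick check that the socket actually exists on the shared mount:

docker exec tailscale ls -l /tmp/tailscaled.sock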

Nginx

Instead of using Derper's built-in SSL certificate, put nginx in front as a reverse proxy. Note that proxy_set_header Upgrade $http_upgrade; is required to enable WebSocket.

server {
    # ...
    location / {
        proxy_pass http://derper:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
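
$connection_upgrade is not a built-in nginx variable; it is conventionally defined with a map block in the http context:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}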

Access Controls

Finally, edit the policy file at https://login.tailscale.com/admin/acls/file and add the derpMap configuration.

OmitDefaultRegions drops the official Derper nodes; enabling it is recommended for self-hosted setups to protect privacy.

The example below defines two regions, one for the private network and one for the public network; the Tailscale client automatically chooses between them based on latency.

For the detailed fields inside Nodes, see DERPNode.

{
  // ... acls ssh
  "derpMap": {
    "OmitDefaultRegions": true,
    "Regions": {
      "900": {
        "RegionID": 900,
        "RegionCode": "private",
        "Nodes": [
          {
            "Name": "private-aliyun",
            "RegionID": 900,
            "HostName": "derper.52xckl.cn",
            "IPv4": "<private IP, e.g. 172.17.0.1>",
            "STUNPort": 3478,
            "DERPPort": 443
          }
        ]
      },
      "901": {
        "RegionID": 901,
        "RegionCode": "public",
        "Nodes": [
          {
            "Name": "public-aliyun",
            "RegionID": 901,
            "HostName": "derper.52xckl.cn",
            "IPv4": "<public IP>",
            "STUNPort": 3478,
            "DERPPort": 443
          }
        ]
      }
    }
  }
}

Test

Run netcheck from inside the tailscale container; the self-hosted regions should show up with their latencies:

docker exec -it tailscale tailscale netcheck

Report:
  * UDP: true
  * IPv4: yes, 172.19.0.1:34143
  * IPv6: no, unavailable in OS
  * MappingVariesByDestIP: true
  * HairPinning: false
  * PortMapping:
  * Nearest DERP:
  * DERP latency:
    - private: 200µs ()
    - public: 3.1ms ()

Ping another node; the via DERP(...) suffix shows which relay region is in use:

docker exec -it tailscale tailscale ping <node name>

pong from <node name> (<node ip>) via DERP(public) in 11ms
pong from <node name> (<node ip>) via DERP(public) in 11ms
pong from <node name> (<node ip>) via DERP(public) in 11ms

Ref

  • https://tailscale.com/kb/1118/custom-derp-servers

Eliminate the unnecessary 30s timeout wait

When docker:dind runs as a GitLab CI service with TLS enabled, the stock image exposes both 2375 and 2376 but only 2376 ever listens, and the Runner's service health check waits on every exposed port, which produces the 30-second delay. Rebuilding the image from scratch discards the inherited image config, so only the TLS port is exposed:

# take the stock dind image as the copy source
FROM docker:20-dind as upstream

# FROM scratch drops all inherited image config, including EXPOSE 2375
FROM scratch

COPY --from=upstream / /

VOLUME /var/lib/docker

# expose only the TLS port
EXPOSE 2376

ENV DOCKER_TLS_CERTDIR=/certs

ENTRYPOINT ["dockerd-entrypoint.sh"]

CMD []
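
Build and push the image to your own registry (registry.example.com and the tag are placeholders), then reference it as the dind service image in .gitlab-ci.yml:

docker build -t registry.example.com/dind:20 .
docker push registry.example.com/dind:20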

Ref

  • https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29130#note_1028331564
  • https://github.com/docker-library/docker/blob/0d1c2100d12da2e7e458cdff18d741f625ce27d6/20.10/dind/Dockerfile

common

A small curl wrapper, saved as curl.sh, used by all the Elasticsearch snippets below:

#!/usr/bin/env bash

# usage: ./curl.sh [METHOD] PATH [extra curl args...]
method=$1
if [ -z "$method" ]; then
  method="GET"
fi

# always ask Elasticsearch for pretty-printed output
path=$2
if [[ $path == *\?* ]]; then
  path="$path&pretty"
else
  path="$path?pretty"
fi

# "${@:3}" forwards only the extra arguments (e.g. -d '{...}') to curl
curl -v -u "elastic:p1ssw0rd" -X "$method" -H "Content-Type: application/json" "${@:3}" "http://localhost:9200/$path"
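
For example, to check cluster health:

./curl.sh GET _cluster/health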

index setting

./curl.sh PUT _index_template/logstash -d '{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "90-days-default",
          "rollover_alias": "90days"
        },
        "refresh_interval": "5s",
        "number_of_replicas": "0"
      }
    }
  },
  "index_patterns": [
    "logstash-*"
  ]
}'
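
The template can be verified afterwards with:

./curl.sh GET _index_template/logstash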

index health is yellow

./curl.sh PUT _settings -d '{
  "index.number_of_replicas": 0
}'
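
To list the indices that are still yellow:

./curl.sh GET "_cat/indices?health=yellow&v=true"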

index has exceeded [1000000] - maximum allowed to be analyzed for highlighting

./curl.sh PUT _settings -d '{
  "index.highlight.max_analyzed_offset": 100000000
}'

this action would add [2] shards, but this cluster currently has [1000]/[1000]

./curl.sh PUT _cluster/settings -d '{
  "persistent": {
    "cluster.max_shards_per_node": 1000000
  }
}'

Can’t store an async search response larger than [10485760] bytes.

./curl.sh PUT _cluster/settings -d '{
  "persistent": {
    "search.max_async_search_response_size": "50mb"
  }
}'

  • min: minimum value
  • max: maximum value
  • default: default limit
  • defaultRequest: default request

If a deployment does not declare resources requests/limits, the LimitRange's defaultRequest and default values are applied; otherwise the custom values are used, but limits must not exceed max and requests must not be below min.
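
For reference, a minimal LimitRange sketch wiring these four fields together (the name, namespace, and values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
  namespace: biz
spec:
  limits:
  - type: Container
    min:              # lower bound for a container's requests
      cpu: 50m
      memory: 64Mi
    max:              # upper bound for a container's limits
      cpu: "2"
      memory: 2Gi
    default:          # limits applied when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:   # requests applied when a container sets none
      cpu: 100m
      memory: 128Mi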