Fixing the unnecessary 30-second timeout wait

Rebuild the docker:20-dind image (see the gitlab-runner issue under Ref) so the dind CI service no longer sits through the 30-second wait:

# Flatten the upstream docker:20-dind image into a single layer and
# redeclare only the image metadata we actually need.
FROM docker:20-dind as upstream

FROM scratch
COPY --from=upstream / /

VOLUME /var/lib/docker
EXPOSE 2376
ENV DOCKER_TLS_CERTDIR=/certs
ENTRYPOINT ["dockerd-entrypoint.sh"]
CMD []
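A minimal sketch of publishing the rebuilt image; the registry path and tag are placeholders, not from the original setup. The pushed image can then replace docker:20-dind in the services: section of .gitlab-ci.yml.

# Build and push the flattened dind image (registry/tag are placeholders).
docker build -t registry.example.com/library/dind:20 .
docker push registry.example.com/library/dind:20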

Ref

  • https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29130#note_1028331564
  • https://github.com/docker-library/docker/blob/0d1c2100d12da2e7e458cdff18d741f625ce27d6/20.10/dind/Dockerfile

common

#!/usr/bin/env bash
# Thin wrapper around curl for the local Elasticsearch instance.
# Usage: ./curl.sh <METHOD> <PATH> [extra curl args...]
# Always asks Elasticsearch for pretty-printed JSON.

method=$1
if [ -z "$method" ]; then
  method="GET"
fi
path=$2
if [[ $path == *\?* ]]; then
  path="$path&pretty"
else
  path="$path?pretty"
fi

# "${@:3}" forwards any extra arguments (e.g. -d '<json>') to curl
# without re-passing the path as a second URL.
curl -v -u "elastic:p1ssw0rd" -X "$method" -H "Content-Type: application/json" "${@:3}" "http://localhost:9200/$path"
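A couple of example invocations of the wrapper (endpoints are standard Elasticsearch APIs; index names would be your own):

# ?pretty is appended automatically by the wrapper.
./curl.sh GET _cluster/health
./curl.sh GET '_cat/indices?v'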

index setting

./curl.sh PUT _index_template/logstash -d '{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "90-days-default",
          "rollover_alias": "90days"
        },
        "refresh_interval": "5s",
        "number_of_replicas": "0"
      }
    }
  },
  "index_patterns": [
    "logstash-*"
  ]
}'
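To confirm the template was stored, it can be read back with the same wrapper:

./curl.sh GET _index_template/logstash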

index health is yellow

./curl.sh PUT _settings -d '{
  "index.number_of_replicas": 0
}'
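With zero replicas the yellow indices should turn green; a quick check using the wrapper above:

./curl.sh GET _cluster/health
./curl.sh GET '_cat/indices?health=yellow&v'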

index has exceeded [1000000] - maximum allowed to be analyzed for highlighting

./curl.sh PUT _settings -d '{
  "index.highlight.max_analyzed_offset": 100000000
}'
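The new value can be verified by filtering the index settings by name:

./curl.sh GET '_all/_settings/index.highlight.max_analyzed_offset'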

this action would add [2] shards, but this cluster currently has [1000]/[1000]

./curl.sh PUT _cluster/settings -d '{
  "persistent": {
    "cluster.max_shards_per_node": 1000000
  }
}'
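Before raising the limit it is worth checking how many shards each node actually holds; _cat/allocation includes a per-node shard count:

./curl.sh GET '_cat/allocation?v'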

Can’t store an async search response larger than [10485760] bytes.

./curl.sh PUT _cluster/settings -d '{
  "persistent": {
    "search.max_async_search_response_size": "50mb"
  }
}'
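Persistent cluster settings applied this way can be reviewed afterwards:

./curl.sh GET _cluster/settings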

  • min: minimum allowed value
  • max: maximum allowed value
  • default: default limit applied when none is set
  • defaultRequest: default request applied when none is set

If a Deployment does not set resources requests/limits, the defaultRequest/default values are applied; otherwise the values it declares are used, but limits cannot exceed max and requests cannot be lower than min.
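These four fields correspond to a container-level item in a Kubernetes LimitRange; a minimal sketch, with a made-up namespace, name, and numbers:

# Hypothetical LimitRange; all names and quantities are illustrative only.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limits
  namespace: example
spec:
  limits:
    - type: Container
      min:
        cpu: 50m
        memory: 64Mi
      max:
        cpu: "2"
        memory: 2Gi
      default:          # used as the container limits when none are declared
        cpu: 500m
        memory: 512Mi
      defaultRequest:   # used as the container requests when none are declared
        cpu: 100m
        memory: 128Mi
EOF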

SELECT h.schema_name,
       h.table_name,
       h.id AS table_id,
       h.associated_table_prefix,
       row_estimate.row_estimate
FROM _timescaledb_catalog.hypertable h
CROSS JOIN LATERAL (
    SELECT SUM(cl.reltuples) AS row_estimate
    FROM _timescaledb_catalog.chunk c
    JOIN pg_class cl ON cl.relname = c.table_name
    WHERE c.hypertable_id = h.id
    GROUP BY h.schema_name, h.table_name
) row_estimate
ORDER BY schema_name, row_estimate DESC, table_name;
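The query estimates per-hypertable row counts from pg_class.reltuples via TimescaleDB's catalog tables, so it is fast but approximate (accuracy depends on how recently the chunks were analyzed). One way to run it, with a placeholder connection string and file name:

psql 'postgres://postgres@localhost:5432/tsdb' -f hypertable_row_estimates.sql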