Docker images for Elasticsearch are available from the Elastic Docker registry. A list of all published Docker images and tags is available at www.docker.elastic.co. The source code is in GitHub.
This package contains both free and subscription features. Start a 30-day trial to try out all of the features.
If you just want to test Elasticsearch in local development, refer to Run Elasticsearch locally. Please note that this setup is not suitable for production environments.
Run Elasticsearch in Docker
Use Docker commands to start a single-node Elasticsearch cluster for development or testing. You can then run additional Docker commands to add nodes to the test cluster or run Kibana.
This setup doesn't run multiple Elasticsearch nodes or Kibana by default. To create a multi-node cluster with Kibana, use Docker Compose instead. See Start a multi-node cluster with Docker Compose.
Hardened Docker images
You can also use the hardened Wolfi image for additional security. Using Wolfi images requires Docker version 20.10.10 or higher.
To use the Wolfi image, append -wolfi to the image tag in the Docker command.
For example:
docker pull docker.elastic.co/elasticsearch/elasticsearch-wolfi:8.17.3
Start a single-node cluster
1. Install Docker. Visit Get Docker to install Docker for your environment.
If using Docker Desktop, make sure to allocate at least 4GB of memory. You can adjust memory usage in Docker Desktop by going to Settings > Resources.
2. Create a new docker network.
docker network create elastic
3. Pull the Elasticsearch Docker image.
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.17.3
4. Optional: Install Cosign for your environment. Then use Cosign to verify the Elasticsearch image's signature.
wget https://artifacts.elastic.co/cosign.pub
cosign verify --key cosign.pub docker.elastic.co/elasticsearch/elasticsearch:8.17.3
The cosign command prints the check results and the signature payload in JSON format:
Verification for docker.elastic.co/elasticsearch/elasticsearch:8.17.3 --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- Existence of the claims in the transparency log was verified offline
- The signatures were verified against the specified public key
5. Start an Elasticsearch container.
docker run --name es01 --net elastic -p 9200:9200 -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.17.3
Use the -m flag to set a memory limit for the container. This removes the need to manually set the JVM size.
Machine learning features such as semantic search with ELSER require a larger container with more than 1GB of memory. If you intend to use the machine learning capabilities, then start the container with this command:
docker run --name es01 --net elastic -p 9200:9200 -it -m 6GB -e "xpack.ml.use_auto_machine_memory_percent=true" docker.elastic.co/elasticsearch/elasticsearch:8.17.3
The command prints the elastic user password and an enrollment token for Kibana.
6. Copy the generated elastic password and enrollment token. These credentials are only shown when you start Elasticsearch for the first time. You can regenerate the credentials using the following commands.
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
We recommend storing the elastic password as an environment variable in your shell. Example:
export ELASTIC_PASSWORD="your_password"
7. Copy the http_ca.crt SSL certificate from the container to your local machine.
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
8. Make a REST API call to Elasticsearch to ensure the Elasticsearch container is running.
curl --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
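If the container is up, Elasticsearch answers with a short JSON description of the node and cluster. The exact values depend on your installation; an abridged, illustrative response looks roughly like this:
{
  "name" : "...",
  "cluster_name" : "docker-cluster",
  "version" : {
    "number" : "8.17.3",
    ...
  },
  "tagline" : "You Know, for Search"
}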
Add more nodes
1. Use an existing node to generate an enrollment token for the new node.
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
The enrollment token is valid for 30 minutes.
2. Start a new Elasticsearch container. Include the enrollment token as an environment variable.
docker run -e ENROLLMENT_TOKEN="<token>" --name es02 --net elastic -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.17.3
3. Call the cat nodes API to verify the node was added to the cluster.
curl --cacert http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200/_cat/nodes
Run Kibana
1. Pull the Kibana Docker image.
docker pull docker.elastic.co/kibana/kibana:8.17.3
2. Optional: Verify the Kibana image's signature.
wget https://artifacts.elastic.co/cosign.pub
cosign verify --key cosign.pub docker.elastic.co/kibana/kibana:8.17.3
3. Start a Kibana container.
docker run --name kib01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.17.3
4. When Kibana starts, it outputs a unique generated link to the terminal. To access Kibana, open this link in a web browser.
5. In your browser, enter the enrollment token that was generated when you started Elasticsearch.
To regenerate the token, run:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
6. Log in to Kibana as the elastic user with the password that was generated when you started Elasticsearch.
To regenerate the password, run:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
Remove containers
To remove the containers and their network, run:
# Remove the Elastic network
docker network rm elastic
# Remove Elasticsearch containers
docker rm es01
docker rm es02
# Remove the Kibana container
docker rm kib01
Next steps
You now have a test Elasticsearch environment set up. Before you start serious development or go into production with Elasticsearch, review the requirements and recommendations to apply when running Elasticsearch in Docker in production.
Start a multi-node cluster with Docker Compose
Use Docker Compose to start a three-node Elasticsearch cluster with Kibana. Docker Compose lets you start multiple containers with a single command.
Configure and start the cluster
1. Install Docker Compose. Visit the Docker Compose docs to install Docker Compose for your environment.
If you're using Docker Desktop, Docker Compose is installed automatically. Make sure to allocate at least 4GB of memory to Docker Desktop. You can adjust memory usage in Docker Desktop by going to Settings > Resources.
2. Create or navigate to an empty directory for the project.
3. Download and save the following files in the project directory:
- .env
- docker-compose.yml
4. In the .env file, specify a password for the ELASTIC_PASSWORD and KIBANA_PASSWORD variables.
The passwords must be alphanumeric and can't contain special characters, such as ! or @. The bash script included in the docker-compose.yml file only works with alphanumeric characters. Example:
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=changeme
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=changeme
...
5. In the .env file, set STACK_VERSION to the current Elastic Stack version.
...
# Version of Elastic products
STACK_VERSION=8.17.3
...
6. By default, the Docker Compose configuration exposes port 9200 on all network interfaces.
To avoid exposing port 9200 to external hosts, set ES_PORT to 127.0.0.1:9200 in the .env file. This ensures Elasticsearch is only accessible from the host machine.
...
# Port to expose Elasticsearch HTTP API to the host
#ES_PORT=9200
ES_PORT=127.0.0.1:9200
...
7. To start the cluster, run the following command from the project directory.
docker-compose up -d
8. After the cluster has started, open http://localhost:5601 in a web browser to access Kibana.
9. Log in to Kibana as the elastic user using the ELASTIC_PASSWORD you set earlier.
Stop and remove the cluster
To stop the cluster, run docker-compose down. The data in the Docker volumes is preserved and loaded when you restart the cluster with docker-compose up.
docker-compose down
To delete the network, containers, and volumes when you stop the cluster, specify the -v option:
docker-compose down -v
Next steps
You now have a test Elasticsearch environment set up. Before you start serious development or go into production with Elasticsearch, review the requirements and recommendations to apply when running Elasticsearch in Docker in production.
Using the Docker images in production
The following requirements and recommendations apply when running Elasticsearch in Docker in production.
Set vm.max_map_count to at least 262144
The vm.max_map_count kernel setting must be set to at least 262144 for production use.
How you set vm.max_map_count depends on your platform.
Linux
To view the current value for the vm.max_map_count setting, run:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a live system, run:
sysctl -w vm.max_map_count=262144
To permanently change the value for the vm.max_map_count setting, update the value in /etc/sysctl.conf.
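For example, one common way to persist the setting (as root) is to append the line to /etc/sysctl.conf and reload it; this is a sketch and assumes the value is not already present in the file:
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p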
macOS with Docker for Mac
The vm.max_map_count setting must be set within the xhyve virtual machine:
1. From the command line, run:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
2. Press enter and use sysctl to configure vm.max_map_count:
sysctl -w vm.max_map_count=262144
3. To exit the screen session, type Ctrl a d.
Windows and macOS with Docker Desktop
The vm.max_map_count setting must be set via docker-machine:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
Windows with Docker Desktop WSL 2 backend
The vm.max_map_count setting must be set in the "docker-desktop" WSL instance before the Elasticsearch container will properly start. There are several ways to do this, depending on your version of Windows and your version of WSL.
If you are on Windows 10 before version 22H2, or if you are on Windows 10 version 22H2 using the built-in version of WSL, you must either manually set it every time you restart Docker before starting your Elasticsearch container, or (if you do not wish to do so on every restart) you must globally set every WSL2 instance to have the vm.max_map_count changed. This is because these versions of WSL do not properly process the /etc/sysctl.conf file.
To manually set it every time you reboot, you must run the following commands in a command prompt or PowerShell window every time you restart Docker:
wsl -d docker-desktop -u root
sysctl -w vm.max_map_count=262144
If you are on these versions of WSL and you do not want to have to run those commands every time you restart Docker, you can globally change every WSL distribution with this setting by modifying your %USERPROFILE%\.wslconfig as follows:
[wsl2]
kernelCommandLine = "sysctl.vm.max_map_count=262144"
This will cause all WSL2 VMs to have that setting assigned when they start.
If you are on Windows 11, or Windows 10 version 22H2 and have installed the Microsoft Store version of WSL, you can modify the /etc/sysctl.conf within the "docker-desktop" WSL distribution, perhaps with commands like this:
wsl -d docker-desktop -u root
vi /etc/sysctl.conf
and appending a line which reads:
vm.max_map_count = 262144
Configuration files must be readable by the elasticsearch user
By default, Elasticsearch runs inside the container as user elasticsearch using uid:gid 1000:0.
One exception is Openshift, which runs containers using an arbitrarily assigned user ID. Openshift presents persistent volumes with the gid set to 0, which works without any adjustments.
If you are bind-mounting a local directory or file, it must be readable by the elasticsearch user. In addition, this user must have write access to the config, data and log dirs (Elasticsearch needs write access to the config directory so that it can generate a keystore). A good strategy is to grant group access to gid 0 for the local directory.
For example, to prepare a local directory for storing data through a bind-mount:
mkdir esdatadir
chmod g+rwx esdatadir
chgrp 0 esdatadir
You can also run an Elasticsearch container using both a custom UID and GID. You must ensure that file permissions will not prevent Elasticsearch from executing. You can use one of two options:
- Bind-mount the config, data and logs directories. If you intend to install plugins and prefer not to create a custom Docker image, you must also bind-mount the plugins directory.
- Pass the --group-add 0 command line option to docker run. This ensures that the user under which Elasticsearch is running is also a member of the root (GID 0) group inside the container (see the sketch after this list).
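For example, a minimal sketch of the second option, assuming a host account with UID/GID 1002:1002 and a local data directory prepared as shown above (the UID, GID and path are illustrative):
docker run --name es01 --net elastic -p 9200:9200 -it -m 1GB \
  --user 1002:1002 --group-add 0 \
  -v full_path_to/esdatadir:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:8.17.3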
Increase ulimits for nofile and nproc
Increased ulimits for nofile and nproc must be available for the Elasticsearch containers. Verify the init system for the Docker daemon sets them to acceptable values.
To check the Docker daemon defaults for ulimits, run:
docker run --rm docker.elastic.co/elasticsearch/elasticsearch:8.17.3 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
If needed, adjust them in the Daemon or override them per container. For example, when using docker run, set:
--ulimit nofile=65535:65535
Disable swapping
Swapping needs to be disabled for performance and node stability. For information about ways to do this, see Disable swapping.
If you opt for the bootstrap.memory_lock: true approach, you also need to define the memlock: true ulimit in the Docker Daemon, or explicitly set for the container as shown in the sample compose file. When using docker run, you can specify:
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
Randomize published ports
The image exposes TCP ports 9200 and 9300. For production clusters, randomizing the published ports with --publish-all is recommended, unless you are pinning one container per host.
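For example, a quick sketch of letting Docker choose random host ports and then inspecting the assignments; the container name and network are carried over from the earlier examples:
docker run --name es01 --net elastic --publish-all -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.17.3
# In another terminal, show which host ports were assigned to 9200 and 9300
docker port es01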
Manually set the heap size
By default, Elasticsearch automatically sizes JVM heap based on a node's roles and the total memory available to the node's container. We recommend this default sizing for most production environments. If needed, you can override default sizing by manually setting JVM heap size.
To manually set the heap size in production, bind mount a JVM options file under /usr/share/elasticsearch/config/jvm.options.d that includes your desired heap size settings.
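For example, a minimal sketch: create a local options file (the file name heap.options and the 4GB heap value are illustrative) with the following contents:
-Xms4g
-Xmx4g
You could then mount it with a docker run option such as:
-v full_path_to/heap.options:/usr/share/elasticsearch/config/jvm.options.d/heap.options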
For testing, you can also manually set the heap size using the ES_JAVA_OPTS environment variable. For example, to use 1GB, use the following command.
docker run -e ES_JAVA_OPTS="-Xms1g -Xmx1g" -e ENROLLMENT_TOKEN="<token>" --name es01 -p 9200:9200 --net elastic -it docker.elastic.co/elasticsearch/elasticsearch:8.17.3
The ES_JAVA_OPTS variable overrides all other JVM options. We do not recommend using ES_JAVA_OPTS in production.
Pin deployments to a specific image version
Pin your deployments to a specific version of the Elasticsearch Docker image. For example docker.elastic.co/elasticsearch/elasticsearch:8.17.3.
Always bind data volumes
You should use a volume bound on /usr/share/elasticsearch/data for the following reasons:
- The data of your Elasticsearch node won't be lost if the container is killed
- Elasticsearch is I/O sensitive and the Docker storage driver is not ideal for fast I/O
- It allows the use of advanced Docker volume plugins
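For example, a minimal sketch using a named Docker volume mounted at the data path (the volume name is an assumption):
docker volume create esdata01
docker run --name es01 --net elastic -p 9200:9200 -it -m 1GB \
  -v esdata01:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:8.17.3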
Avoid using loop-lvm mode
If you are using the devicemapper storage driver, do not use the default loop-lvm mode. Configure docker-engine to use direct-lvm.
Centralize your logs
Consider centralizing your logs by using a different logging driver. Also note that the default json-file logging driver is not ideally suited for production use.
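For example, one hedged option is to switch an individual container to another driver at run time; journald is used here only as an illustration and assumes the host runs systemd:
docker run --log-driver=journald --name es01 --net elastic -p 9200:9200 -it -m 1GB docker.elastic.co/elasticsearch/elasticsearch:8.17.3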
Configuring Elasticsearch with Docker
When you run in Docker, the Elasticsearch configuration files are loaded from /usr/share/elasticsearch/config/.
To use custom configuration files, you bind-mount the files over the configuration files in the image.
You can set individual Elasticsearch configuration parameters using Docker environment variables. The sample compose file and the single-node example use this method. You can use the setting name directly as the environment variable name. If you cannot do this, for example because your orchestration platform forbids periods in environment variable names, then you can use an alternative style by converting the setting name as follows.
- Change the setting name to uppercase
- Prefix it with ES_SETTING_
- Escape any underscores (_) by duplicating them
- Convert all periods (.) to underscores (_)
For example, -e bootstrap.memory_lock=true becomes -e ES_SETTING_BOOTSTRAP_MEMORY_LOCK=true.
You can use the contents of a file to set the value of the ELASTIC_PASSWORD or KEYSTORE_PASSWORD environment variables, by suffixing the environment variable name with _FILE. This is useful for passing secrets such as passwords to Elasticsearch without specifying them directly.
For example, to set the Elasticsearch bootstrap password from a file, you can bind mount the file and set the ELASTIC_PASSWORD_FILE environment variable to the mount location. If you mount the password file to /run/secrets/bootstrapPassword.txt, specify:
-e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt
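For completeness, the matching bind mount in the same docker run command might look like this (the host path is an assumption):
-v full_path_to/bootstrapPassword.txt:/run/secrets/bootstrapPassword.txt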
You can override the default command for the image to pass Elasticsearch configuration parameters as command line options. For example:
docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclustername
While bind-mounting your configuration files is usually the preferred method in production, you can also create a custom Docker image that contains your configuration.
Mounting Elasticsearch configuration files
Create custom config files and bind-mount them over the corresponding files in the Docker image. For example, to bind-mount custom_elasticsearch.yml with docker run, specify:
-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
If you bind-mount a custom elasticsearch.yml file, ensure it includes the network.host: 0.0.0.0 setting. This setting ensures the node is reachable for HTTP and transport traffic, provided its ports are exposed. The Docker image's built-in elasticsearch.yml file includes this setting by default.
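For example, a minimal custom_elasticsearch.yml might contain little more than this (the cluster name is illustrative):
cluster.name: my-docker-cluster
network.host: 0.0.0.0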
The container runs Elasticsearch as user elasticsearch using uid:gid 1000:0. Bind mounted host directories and files must be accessible by this user, and the data and log directories must be writable by this user.
Create an encrypted Elasticsearch keystore
By default, Elasticsearch will auto-generate a keystore file for secure settings. This file is obfuscated but not encrypted.
To encrypt your secure settings with a password and have them persist outside the container, use a docker run command to manually create the keystore instead. The command must:
- Bind-mount the config directory. The command will create an elasticsearch.keystore file in this directory. To avoid errors, do not directly bind-mount the elasticsearch.keystore file.
- Use the elasticsearch-keystore tool with the create -p option. You'll be prompted to enter a password for the keystore.
For example:
docker run -it --rm \
-v full_path_to/config:/usr/share/elasticsearch/config \
docker.elastic.co/elasticsearch/elasticsearch:8.17.3 \
bin/elasticsearch-keystore create -p
You can also use a docker run command to add or update secure settings in the keystore. You'll be prompted to enter the setting values. If the keystore is encrypted, you'll also be prompted to enter the keystore password.
docker run -it --rm \
-v full_path_to/config:/usr/share/elasticsearch/config \
docker.elastic.co/elasticsearch/elasticsearch:8.17.3 \
bin/elasticsearch-keystore \
add my.secure.setting \
my.other.secure.setting
If you've already created the keystore and don't need to update it, you can bind-mount the elasticsearch.keystore file directly. You can use the KEYSTORE_PASSWORD environment variable to provide the keystore password to the container at startup. For example, a docker run command might have the following options:
-v full_path_to/config/elasticsearch.keystore:/usr/share/elasticsearch/config/elasticsearch.keystore
-e KEYSTORE_PASSWORD=mypassword
Using custom Docker images
In some environments, it might make more sense to prepare a custom image that contains your configuration. A Dockerfile to achieve this might be as simple as:
FROM docker.elastic.co/elasticsearch/elasticsearch:8.17.3
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
You could then build and run the image with:
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
Some plugins require additional security permissions. You must explicitly accept them either by:
- Attaching a tty when you run the Docker image and allowing the permissions when prompted.
- Inspecting the security permissions and accepting them (if appropriate) by adding the --batch flag to the plugin install command (see the sketch below).
See Plugin management for more information.
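For example, a sketch of the second approach baked into a custom image; analysis-icu is used purely as an illustrative plugin:
FROM docker.elastic.co/elasticsearch/elasticsearch:8.17.3
RUN bin/elasticsearch-plugin install --batch analysis-icu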
Troubleshoot Docker errors for Elasticsearch
Here's how to resolve common errors when running Elasticsearch with Docker.
elasticsearch.keystore is a directory
Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.io.IOException: Is a directory: SimpleFSIndexInput(path="/usr/share/elasticsearch/config/elasticsearch.keystore") Likely root cause: java.io.IOException: Is a directory
A keystore-related docker run command attempted to directly bind-mount an elasticsearch.keystore file that doesn't exist. If you use the -v or --volume flag to mount a file that doesn't exist, Docker instead creates a directory with the same name.
To resolve this error:
1. Delete the elasticsearch.keystore directory in the config directory.
2. Update the -v or --volume flag to point to the config directory path rather than the keystore file's path. For an example, see Create an encrypted Elasticsearch keystore.
3. Retry the command.
elasticsearch.keystore: Device or resource busy
Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy
A docker run command attempted to update the keystore while directly bind-mounting the elasticsearch.keystore file. To update the keystore, the container requires access to other files in the config directory, such as keystore.tmp.
To resolve this error:
1. Update the -v or --volume flag to point to the config directory path rather than the keystore file's path. For an example, see Create an encrypted Elasticsearch keystore.
2. Retry the command.