Installing Various Environments with Docker

Containers are managed with Portainer.

Install Redis#

First, download the redis.conf file from the official website and edit it#

Modify the redis.conf configuration file; the main settings are as follows:

 bind 127.0.0.1 # Comment out this line to allow external access to Redis
 daemonize no # Keep this as no so Redis runs in the foreground; setting it to yes makes the container exit right after starting
 requirepass your_password # Set a password for Redis
 appendonly yes # Enable Redis AOF persistence, default is no
 tcp-keepalive 300 # Prevents the "remote host forcibly closed an existing connection" error, default is 300

Create a directory on the host that maps into Docker, i.e. the local storage location#

Create a local storage location for Redis.

The location is up to you. Since some of my Docker configuration files are stored under /mydata, I create a redis directory under /mydata to keep things easy to manage later.
mkdir -p /mydata/redis
mkdir -p /mydata/redis/data
Copy the configuration file to the newly created directory.
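For example, assuming the edited redis.conf is in the current directory:

cp redis.conf /mydata/redis/redis.conf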

Grant file permissions#

chmod 777 redis.conf

Start Redis#

docker run -p 6379:6379 --name redis \
 -v /mydata/redis/redis.conf:/etc/redis/redis.conf \
 -v /mydata/redis/data:/data \
 -d redis redis-server /etc/redis/redis.conf --appendonly yes

Parameter explanation:

-p 6379:6379: Map port 6379 inside the container to port 6379 on the host
-v /mydata/redis/redis.conf:/etc/redis/redis.conf: Mount the edited redis.conf file into the container at this location
-v /mydata/redis/data:/data: Mount the data Redis persists onto the host so it can be backed up
redis-server /etc/redis/redis.conf: The key piece; it makes Redis start with this redis.conf instead of starting with no configuration at all
--appendonly yes: Start Redis with AOF persistence enabled
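To confirm everything works, you can open redis-cli inside the container; ping should return PONG after authenticating with the password set via requirepass:

docker exec -it redis redis-cli
auth your_password
ping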

Install Elasticsearch 7.9.3#

I chose to install Kibana locally instead (I don't want it consuming server resources).
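If you do want to run Kibana in Docker on your own machine, a minimal sketch looks like this, assuming the version matches Elasticsearch and server_ip stands in for the Elasticsearch host:

docker run -d --name kibana -p 5601:5601 -e ELASTICSEARCH_HOSTS=http://server_ip:9200 kibana:7.9.3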

  1. Pull the image
docker pull elasticsearch:7.9.3
  2. Create the required folders and files
mkdir -p /mydata/elasticsearch/config
mkdir -p /mydata/elasticsearch/data
mkdir -p /mydata/elasticsearch/plugins
echo "http.host: 0.0.0.0" >> /mydata/elasticsearch/config/elasticsearch.yml
  3. Set folder permissions
chmod -R 777 /mydata/elasticsearch/
  4. Create and start the Elasticsearch container
docker run --name elasticsearch -p 9200:9200 \
 -p 9300:9300 \
 -e "discovery.type=single-node" \
 -e ES_JAVA_OPTS="-Xms64m -Xmx128m" \
 -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
 -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
 -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
 -d elasticsearch:7.9.3
  5. Set the container to start automatically
docker update elasticsearch --restart=always
  6. Install the IK Chinese word segmentation plugin
cd /mydata/elasticsearch/plugins/
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.9.3/elasticsearch-analysis-ik-7.9.3.zip
mkdir ik
unzip -d ik/ elasticsearch-analysis-ik-7.9.3.zip 
docker restart elasticsearch
  7. Open the port in the firewall (a quick sanity check follows this list)
firewall-cmd --zone=public --add-port=9200/tcp --permanent
systemctl restart firewalld.service
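Once Elasticsearch is up and the port is open, you can check both the node and the IK plugin from the server; these calls assume the defaults used above (port 9200, no authentication):

curl http://localhost:9200
curl -H "Content-Type: application/json" -X POST "http://localhost:9200/_analyze?pretty" -d '{"analyzer":"ik_smart","text":"中华人民共和国"}'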

Install Kafka and Zookeeper#

Reference: a guide on Juejin.
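Kafka needs a running Zookeeper to register with (the KAFKA_ZOOKEEPER_CONNECT variable below points at it on port 2181). A minimal sketch of starting one, assuming the companion wurstmeister/zookeeper image:

docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper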
Then install Kafka:

docker run -d --name kafka -p 9092:9092 \
 -e KAFKA_BROKER_ID=0 \
 -e KAFKA_ZOOKEEPER_CONNECT=server_ip:2181 \
 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://server_ip:9092 \
 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
 -e KAFKA_HEAP_OPTS="-Xmx256M -Xms128M" \
 -t wurstmeister/kafka
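To confirm the broker is reachable, you can create and list a topic with the scripts bundled in the image; this is a rough check that assumes kafka-topics.sh is on the container's PATH (it is in the wurstmeister image) and reuses the server_ip placeholder from above:

docker exec -it kafka kafka-topics.sh --create --topic test --partitions 1 --replication-factor 1 --zookeeper server_ip:2181
docker exec -it kafka kafka-topics.sh --list --zookeeper server_ip:2181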