# Check container resource usage
docker stats [container name/ID]
# Inspect container details
docker inspect [container name/ID]
# View the processes running inside a container
docker top [container name/ID]
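For a scriptable one-shot snapshot, `docker stats` supports `--no-stream` and a Go-template `--format`; `docker inspect` can likewise extract a single field. A short sketch (the container name `my-container` is a placeholder):

```shell
# One-shot, machine-friendly snapshot of all running containers:
# --no-stream prints once instead of refreshing continuously.
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

# Pull a single field from docker inspect (here: the container's main PID).
docker inspect --format '{{.State.Pid}}' my-container
```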
cAdvisor, a container monitoring tool developed by Google:
docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  gcr.io/cadvisor/cadvisor:latest
Visit http://localhost:8080 to view the monitoring data.
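Besides the web UI, cAdvisor exposes a REST API and Prometheus-format metrics, which are handy for quick checks from the command line:

```shell
# JSON stats for all containers via the v1.3 REST API.
curl -s http://localhost:8080/api/v1.3/containers/ | head

# Prometheus-format metrics -- this is the endpoint a Prometheus
# scrape job would consume.
curl -s http://localhost:8080/metrics | head
```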
Configure Docker to expose metrics:
# Edit or create /etc/docker/daemon.json
{
  "metrics-addr": "0.0.0.0:9323",
  "experimental": true
}
Restart Docker: systemctl restart docker
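After the restart you can confirm the daemon metrics endpoint is live before wiring up Prometheus:

```shell
# The Docker daemon now serves Prometheus-format metrics on port 9323.
curl -s http://localhost:9323/metrics | head
```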
Deploy Prometheus:
docker run -d -p 9090:9090 \
  -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus
Example prometheus.yml configuration:
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'docker'
    static_configs:
      - targets: ['host.docker.internal:9323']
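Before (re)starting Prometheus, the config can be validated with promtool, which ships inside the prom/prometheus image (a sketch; the host path is a placeholder):

```shell
# Syntax-check prometheus.yml without starting the server.
docker run --rm \
  -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint promtool \
  prom/prometheus check config /etc/prometheus/prometheus.yml
```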
Deploy Grafana:
docker run -d -p 3000:3000 grafana/grafana
After logging in, add Prometheus as a data source and import a Docker dashboard.
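The data source can also be created through the Grafana HTTP API rather than the UI. A sketch, assuming the default admin:admin credentials and the Prometheus container reachable at host.docker.internal:9090:

```shell
# Register Prometheus as a Grafana data source via the HTTP API.
curl -s -X POST http://admin:admin@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://host.docker.internal:9090","access":"proxy"}'
```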
# View container logs
docker logs [container name/ID]
# Follow logs in real time
docker logs -f [container name/ID]
# Show the last N lines
docker logs --tail=100 [container name/ID]
# Show logs with timestamps
docker logs -t [container name/ID]
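These flags combine freely, and `--since` narrows the window further:

```shell
# Follow the last 50 lines written in the past 10 minutes, with timestamps.
docker logs -f -t --tail=50 --since=10m [container name/ID]
```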
Create docker-compose.yml:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:7.9.2
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5000:5000"
    depends_on:
      - elasticsearch
Create logstash.conf:
input {
  tcp {
    port => 5000
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
Start the services:
docker-compose up -d
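Once the stack is up, a quick smoke test verifies that Elasticsearch is healthy and that Logstash's TCP input accepts events (assuming `nc` is available on the host):

```shell
# Elasticsearch cluster status -- expect "green" or "yellow" for single-node.
curl -s http://localhost:9200/_cluster/health

# Push a test JSON event into the Logstash TCP input defined above.
echo '{"message":"hello from docker logging test"}' | nc localhost 5000
```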
Configure the Docker logging driver to ship container logs to Logstash:
docker run --log-driver=syslog \
  --log-opt syslog-address=tcp://localhost:5000 \
  --log-opt tag="myapp" \
  your-application-image
# Run Fluentd
docker run -d -p 24224:24224 -p 24224:24224/udp -v /path/to/conf:/fluentd/etc fluent/fluentd
# Configure a container to use the Fluentd logging driver
docker run --log-driver=fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag="docker.{{.Name}}" your-image
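A minimal `fluent.conf` for the mounted `/path/to/conf` directory might look like the following sketch. It prints received events to stdout so the pipeline can be verified; in production you would swap the match block for an Elasticsearch or file output:

```text
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match docker.**>
  @type stdout
</match>
```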
Log management: cap per-container log size and rotation with the json-file driver:
--log-driver=json-file --log-opt max-size=10m --log-opt max-file=3
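Applied to a container at run time, the rotation options look like this; they can also be set globally as the daemon default in /etc/docker/daemon.json via the `log-driver` and `log-opts` keys:

```shell
# Keep at most 3 rotated log files of 10 MB each for this container.
docker run -d \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  your-application-image
```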
Monitoring optimization, security, and performance should also be weighed when choosing a setup. The solutions above can be combined or simplified according to the needs and resources of your actual environment.