Apache in Docker: How do I “access.log”? – Problems with loading a website are often blamed on the Internet connection, but even the most perfectly set-up network cannot help if there is no service replying at your destination. One of the most popular HTTP servers used for this task is Apache2. Much of Apache’s popularity can be attributed to its easy installation and use, but nevertheless it is possible to run into problems with even the easiest software. If you’ve encountered an issue loading your web page, follow the simple troubleshooting methods outlined in this guide to get your web server back up and working again. Below are some tips for managing your Apache2 server when you run into problems involving apache-2.2, log files, and Docker.
I’m just getting started with Docker and right now I’m trying to figure out how to set up my first dockerized Apache 2 / PHP environment. Up to now I have been using full Linux VMs, where log files were written to /var/log/apache2 and “logrotate” rotated to a new file each day.
Logfiles were mainly used for immediate error detection (i.e. log on to the server and use less to open the current access.log and error.log files) and for fail2ban.
If I’m correct, that is not practicable in a Docker environment – mainly because you usually cannot log in to containers to have a look at the logs. Also, logs will be lost if the container is removed.
So: What is the most common method to work with/emulate/replace access.log/error.log in that situation? What are common solutions for both production and development environments?
My ideas so far include using an NFS share (slow, and it may cause filename collisions if you’re not careful) and Logstash (not sure whether it is worth the effort and practicable for smaller sites or even dev environments), but I’m sure smart people have come up with better solutions?
Not sure if it makes a difference, but currently I’m basing my Docker image on php:5.6-apache.
You can still use the
docker exec -it <your container name> /bin/bash
command to get into your container and do your regular job.
Or you can replace
/bin/bash with another command, or with a
.sh script, to execute it directly.
To get a file out of the container, use
docker cp <container name>:/path/to/file /your/local/path/
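As a concrete sketch (the container name “web” and the log path are assumptions based on the stock Debian Apache layout, not from the original answer):

```
# copy the current access log out of a hypothetical container named "web"
docker cp web:/var/log/apache2/access.log ./access.log
```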
And for your daily jobs, you can schedule those commands with
cron. I highly recommend creating aliases for your frequent docker commands, so that you can use docker happily with a few keystrokes.
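For example, a few aliases like these (the alias names are my own invention, purely illustrative) cover the common cases:

```shell
# add to ~/.bashrc or ~/.bash_aliases
alias dex='docker exec -it'   # drop into a container: dex <container> /bin/bash
alias dlog='docker logs -f'   # follow a container's logs: dlog <container>
alias dcp='docker cp'         # copy files: dcp <container>:<path> <local path>
```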
The docker logs <container name/id> command is for viewing the logs of a running image. It shows whatever the container’s main process has written to stdout and stderr.
How about writing access and error log to stderr and stdout?
RUN ln -sf /dev/stdout /var/log/apache2/access.log
RUN ln -sf /dev/stderr /var/log/apache2/error.log
Centralized logging with ELK would allow for more proactive monitoring though. But you already thought of that one yourself.
In the apache configuration file you can add:
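The original snippet appears to be missing; a common way to do this (a sketch, using stock Apache directives rather than the answer’s exact lines) is to point the log directives at the standard streams so the output lands in docker logs:

```apache
ErrorLog /dev/stderr
CustomLog /dev/stdout combined
```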
and to see the logs use the command below:
docker logs container_id
Maybe this feature did not exist when the question was asked, but with docker run’s -v argument you can mount a directory on the host onto a directory in the container.
docker run -v [host_dir]:[container_dir]
This way the log (or other) files will survive when the container is deleted, and you can access the files as if Apache were installed on the host rather than in a container.
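A concrete invocation might look like this (the host path and container name are made up for illustration; the image matches the one the question mentions):

```
# mount a host directory over Apache's log directory so logs outlive the container
docker run -d --name web \
  -v /srv/apache-logs:/var/log/apache2 \
  php:5.6-apache
```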
Alternatively, you could push modified log files to a central location. The Elastic (ELK/Kibana) stack uses Filebeat to achieve this, but it is possible to run Filebeat independently if you do not care for the rest of the stack.
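A minimal Filebeat configuration sketch (the log paths and Elasticsearch host are assumptions, and the exact keys vary between Filebeat versions) could look like:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/apache2/*.log

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```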