A server stack is the collection of software that forms the operational infrastructure on a given machine. In a computing context, a stack is an ordered pile, and a server stack is one type of solution stack – an ordered selection of software that makes it possible to complete a particular task. Finding the best way to kill zombie and D-state processes in Linux is one such server-stack problem that needed a solution. Below are some tips for managing your Linux server when you run into problems with processes stuck in the zombie or D state.
What is the best way to kill zombie processes and D-state processes with a single command?
Actually, reboot. There’s no real way to easily get rid of a zombie, but there’s really no reason to, because a zombie isn’t taking up resources on the computer; it’s an orphaned entry in the process table. Init is supposed to collect it, but something went wrong in the process. http://en.wikipedia.org/wiki/Zombie_process
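For reference, here is a quick way to see any zombies currently sitting in the process table; this is a sketch assuming the procps `ps` found on most Linux distributions:

```shell
# Zombies show up with a STAT beginning with "Z"; the COMMAND
# column often marks them as <defunct> as well.
ps -eo pid,ppid,stat,comm | awk 'NR == 1 || $3 ~ /^Z/'
```

On a healthy system this prints only the header line.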
Perhaps you’re asking because there’s a worse problem… are you getting a boatload of zombies roaming your process table? That usually means a bug in the program or a problem with its configuration. You shouldn’t have a huge number of zombies on the system. One or two I don’t worry about. If you have fifty of them from Apache or some other daemon, you probably have a problem. But that’s not directly related to your question…
You can’t kill a zombie – it’s already dead.
If the parent process (the zombie’s PPID) still exists, then terminating it can often clean up the spawned zombies.
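To make the parent-termination approach concrete, here is a self-contained sketch; python3 is assumed to be available and is only used to manufacture a throwaway zombie (a child that exits while its parent never calls wait()):

```shell
# Create a demonstration zombie: the python3 parent forks a child
# that exits immediately, then sleeps without ever calling wait().
python3 -c 'import os, time
if os.fork() == 0:
    os._exit(0)      # child exits; parent never reaps it
time.sleep(60)' &
PARENT=$!
sleep 1

# The dead child is now a zombie (state Z) whose PPID is $PARENT.
ps -eo pid,ppid,stat,comm | awk -v p="$PARENT" '$2 == p && $3 ~ /^Z/'

# Terminating the parent lets init (PID 1) adopt and reap the zombie.
kill "$PARENT"
```

Once the parent is gone, init reparents the zombie and reaps it almost immediately.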
You shouldn’t be killing processes in uninterruptible sleep – usually this means they’re I/O bound, but IIRC it can also occur during a blocking read from e.g. a network socket.
Errors in the underlying filesystem or disks can also leave processes blocked on I/O. In that case, try to “umount -f” the filesystem they depend upon – this will abort whatever outstanding I/O requests are open.
Most answers focus on zombies; this one targets processes stuck in “uninterruptible sleep”, which show up in a process list as state D.
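To spot these, one option is the following sketch (again assuming procps `ps`; the wchan column shows which kernel function the process is blocked in, which hints at what it is waiting for):

```shell
# List processes in uninterruptible sleep: STAT begins with "D".
# WCHAN is the kernel symbol the process is currently blocked in.
ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /^D/'
```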
In my experience, these are frequently blocked on I/O in some way. If you’ve been reading files from a mounted network disk and the remote host has restarted or network connectivity has failed in some way, then the copy process can be stuck waiting indefinitely.
The only fixes are to just kill the process, or to run umount -f /mnt/source to unmount the disk; the process will then see that the disk has gone and will fail cleanly. Obviously the I/O action will be incomplete too, most likely leaving a partial file copy.
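As a guarded sketch of that fix: /mnt/source is the hypothetical mount point from the text, `mountpoint` is from util-linux, and the lazy-detach fallback (`umount -l`) is my addition for the common case where a hung NFS server makes even a forced unmount stall:

```shell
MNT=/mnt/source   # hypothetical stuck mount point from the text

# Only act if it really is a mount point, so this is safe to re-run.
if mountpoint -q "$MNT"; then
    # Force the unmount, aborting outstanding I/O (mainly effective
    # for NFS); fall back to a lazy detach, which removes it from the
    # namespace now and cleans up once it is no longer busy.
    umount -f "$MNT" || umount -l "$MNT"
fi
```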
An rsync process can be in this state while running over SSH – it is literally blocked waiting for the network to catch up. The point is that state D is not necessarily bad, so killing all such processes may not be an ideal solution.