Over the past few days I had to debug some quite old (but up-to-date “enterprise”) software. It ships with multiple bash and sh scripts that together start a Java service, all nicely wrapped into a SysV init script, which in turn is wrapped in a systemd unit. It sucks…
A systemctl start $unit / systemctl stop $unit works most of the time; a systemctl restart $unit never works. The unit forks multiple bash/sh scripts, which at the end fork the Java process. systemd isn’t able to keep track of the correct PID of the main process; it thinks the main process is one of the sh scripts. The whole startup only works because of RemainAfterExit=yes. A nasty workaround would be to set PIDFile=. The software even creates one, but it isn’t owned by root, and systemd refuses to read pidfiles that aren’t owned by root.
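For illustration, that PIDFile= workaround would be a drop-in roughly like the following (the unit name and pidfile path here are made up, and it still fails as long as the vendor scripts create the file as a non-root user):

# /etc/systemd/system/$unit.service.d/pidfile.conf
[Service]
Type=forking
# point systemd at the pidfile the vendor scripts write;
# systemd refuses it while the file is owned by a non-root user
PIDFile=/run/vendor-app/service.pid

followed by a systemctl daemon-reload.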
All in all this sucks hard. The scripts come from the vendor, so I cannot easily change them. One way to debug the lost PID is to increase the systemd-internal log level:
systemd-analyze set-log-level debug
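The changed level sticks until reboot, so it’s worth resetting it once done (the compiled-in default is usually info; newer systemd versions also offer get-log-level to check the current value):

systemd-analyze get-log-level
systemd-analyze set-log-level info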
Afterwards the journal of the restarted unit contains more information. In my case systemd complained that it received a SIGCHLD from a forked process, which sometimes(?) triggers systemd to immediately stop the unit again. Now it’s time for further debugging!
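As a starting point, the relevant debug messages can be watched live during a restart, or grepped out of the current boot afterwards:

journalctl -fu $unit
# in a second terminal:
systemctl restart $unit

journalctl -b -u $unit | grep -i sigchld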