December 28, 2020, 9:32am
systemd provides a powerful tool,
systemd-run, which can constrain how many system resources an application started by it may use. It can limit (non-exhaustive list) use of CPU, memory and disk. However, this powerful tool is too complicated to use on the command line. A wrapper is required to make it easily accessible. Use case: running resource-intensive tasks (such as backups or compression) without the load generated by those tasks making the system (almost) unusable.
limit-low design goals:

Do one thing and do it well. The one thing is to be a wrapper that limits system resources for wrapped applications. Support being run:
- in a shell or script
- in pipes (stdout, stderr, stdin)
- as an interactive shell with input/output
- for graphical (GUI) applications

Being as non-intrusive as possible. For example, not adding extraneous output to stdout such as
“Running scope as unit: run-r0d607a8f35dc4dea909b830f9d922b99.scope”.
limit-low is a wrapper around
systemd-run parameters to run an
application with limited system resources.
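As an illustration, a minimal wrapper of this kind might look as follows. This is a sketch, not the actual limit-low script: `CPUQuota`, `MemoryMax` and `IOWeight` are real systemd-run properties, but the values are arbitrary examples, and the `LIMIT_LOW_DRY_RUN` variable is made up here for demonstration.

```shell
#!/bin/sh
## Sketch of a limit-low style wrapper: run "$@" in a transient systemd
## scope with resource caps. The property values below are arbitrary.
run_limited() {
   ## LIMIT_LOW_DRY_RUN=1 prints the command instead of executing it.
   ${LIMIT_LOW_DRY_RUN:+echo} systemd-run \
      --scope \
      --quiet \
      --property=CPUQuota=20% \
      --property=MemoryMax=500M \
      --property=IOWeight=10 \
      -- "$@"
}

## dry-run example; drop LIMIT_LOW_DRY_RUN=1 to actually start the task:
LIMIT_LOW_DRY_RUN=1 run_limited stress --cpu 8 --timeout 10s
```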
systemd-run itself is using Linux control groups (cgroups).
systemd upstream feature request:
05:50PM - 14 Jun 20 UTC
**Is your feature request related to a problem? Please describe.**
… ng non-time critical things such as server backups, compression, or log analysis, it happens that these tools/scripts eat most (roughly 90-100%) of CPU or disk IO. This needlessly takes away system resources from more important tasks.
**Describe the solution you'd like**
An easy to use command line tool `limiter` (or so) which starts an application with little system resources.
**Describe alternatives you've considered**
There's `nice` and `renice` to lower the priority of a process, `cpulimit` to limit CPU to, let's say, a 30% maximum, `taskset` to limit to 1 core, and `ionice`. Each of these tools has a different syntax. Specifically, `cpulimit` seems harder to master; its syntax isn't trivial. Writing this for multiple tasks (on a server) would be a lot of work.
`nice` alone does not solve it. If I run for example `nice -n19 stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s` on my desktop system, it helps, but it is still less responsive until that process finishes.
Would be useful for tasks (such as backups) that require a lot of CPU / IO where it does not matter if these finish in 5 seconds, 5 minutes or 30 minutes. More important is not to take away CPU shares from more important processes.
For that purpose I wrote a script.
## usage examples:
## limiter stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
## sudo limiter stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
if [ "$(id -u)" != "0" ]; then
## reasons for using some parameters:
## --scope - allows starting GUI applications
## useful to remove for debugging:
## --quiet \
## useful to add for debugging:
## --unit=testtest \
## useful for debugging (root):
## sudo systemctl show testtest.scope
## sudo systemctl show testtest.scope | grep CPU
## sudo systemctl status testtest.scope
## useful for debugging (user):
## systemctl --user show testtest.scope
## systemctl --user show testtest.scope | grep CPU
systemctl --user status testtest.scope
--property=IOReadIOPSMax="/dev/disk/ 1K" \
The problem with that script is that I don't know what I missed, what's redundant, and so on. Hopefully it would be useful enough to have such a tool shipped with systemd.
Starting graphical (GUI) applications such as Tor Browser requires installation of the package
dbus-user-session. The (non-)security impact of this is being discussed here:
dbus - user vs system session.
sudo apt update
sudo apt install dbus-user-session
Depending on the outcome of that discussion,
dbus-user-session will be installed by default in Kicksecure and Whonix, or perhaps
systemd-run can be run with different parameters not requiring
--scope. Using --pty/--pipe instead of --scope as an alternative works, but systemd tells:
“--pty/--pipe is not compatible in timer or …”
Which would break
limit-low design goals.
December 28, 2020, 12:45pm
sudo apt update
sudo apt install stress
Does not work as user yet.
limit-low stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
Only works as root.
sudo limit-low stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
The difference can easily be seen, for example, in …
December 28, 2020, 2:18pm
Perhaps it would be better to implement this with plain cgroups, i.e. without
systemd-run. That could avoid a lot of issues such as dbus, environment variables, and whatnot.
December 31, 2020, 2:10pm
cgroups are not easy to use for this purpose either. Here is what I got.
#sudo apt install cgroup-tools
testcommand="stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s"
testcommand="dd if=/dev/zero of=/tmp/xxx bs=10M count=1"
sudo cgdelete cpu,memory,blkio:limit-low &>/dev/null || true
## TODO: user:user
sudo cgcreate -t user:user -a user:user -g cpu,memory,blkio:limit-low
## cpu.max default is 100000
#sudo cgset -r cpu.max=1 limit-low
#sudo cgset -r cpu.shares=1 limit-low
sudo cgset -r cpu.cfs_quota_us=10000 limit-low
sudo cgset -r cpu.cfs_period_us=50000 limit-low
sudo cgset -r memory.limit_in_bytes=100m limit-low
## To limit I/O to 1MB/s, as done previously, we write into the io.max file:
## $ echo "8:0 wbps=1048576" > io.max
#sudo cgset -r io.max="8:0 wbps=1048576" limit-low
## Specify a bandwidth rate on particular device for root group. The format for policy is “<major>:<minor> <bytes_per_second>”:
## echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device
## Above will put a limit of 1MB/second on reads happening for root group on device having major/minor number 8:16.
## <major>:<minor> <bytes_per_second>
## How to find out <major>:<minor>?
## TODO: Qubes ls -la /dev/xvda
# sudo cgset -r blkio.throttle.read_bps_device="202:0 1048576" limit-low
# sudo cgset -r blkio.throttle.write_bps_device="202:0 1048576" limit-low
## optional: additionally lower IO and CPU priority (uncomment to use):
#ionice --class 3 \
#nice -n 19 \
cgexec -g cpu,memory,blkio:limit-low \
   $testcommand
sudo cgdelete cpu,memory,blkio:limit-low
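For reference, the cpu.cfs_quota_us=10000 / cpu.cfs_period_us=50000 pair above amounts to a 20% cap of one CPU core. A tiny helper can derive the quota from a percentage (a sketch; the function name is made up):

```shell
## Convert a per-core CPU percentage into a CFS quota in microseconds,
## for a fixed 50000 us period (matching cpu.cfs_period_us above).
period_us=50000

cpu_quota_us() {
   ## $1 = allowed percentage of one core, e.g. 20
   echo $(( period_us * $1 / 100 ))
}

cpu_quota_us 20   ## prints 10000, the cpu.cfs_quota_us value used above
```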
December 31, 2020, 3:00pm
With cgroups, too, one cannot easily say “use a maximum hard limit of xx % of available system resources (CPU, RAM, IO) for this application”.
Maximum RAM can be a hard limit such as xxxx MB.
The syntax for throttling IO of a (virtual) device (hard drive) is
<major>:<minor> <bytes_per_second>. See
https://lqhl.me/blog/2015/09/09/network-disk-bandwidth-limiting-on-linux/. This is also a hardcoded value. Which value to choose? One would have to benchmark the disk first before deciding on a reasonable value. Also, finding out
<major>:<minor> is non-trivial from a script.
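For what it's worth, the <major>:<minor> pair can be looked up like this (device names are examples; adjust for the actual system):

```shell
## lsblk prints MAJ:MIN in decimal for every block device:
command -v lsblk >/dev/null && lsblk -o NAME,MAJ:MIN

## stat can print the device numbers (in hex) of a single device node;
## /dev/null is used here only because it exists everywhere (char device 1:3):
stat -c '%t:%T' /dev/null
```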
It seems cgroups resource moderation is more useful to say “within this cgroup this application can use xx % of CPU”. This might be useful when implementing this for the whole system.
Maybe just combining ionice and
nice would be a good enough hack. I.e.
ionice --class 3 nice -n 19 application-name
But that wouldn't limit the maximum RAM the application can use. And once RAM is exhausted, the system will probably freeze or get so slow that one would lose patience and reboot.
It would be best if Debian had this implemented for the whole system. For example, guaranteeing that X always has sufficient system resources and cannot be starved by a broken application.
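One existing building block for a whole-system approach is systemd's slice units, which already partition services and user sessions. A hypothetical drop-in could look like this (the directive names are real systemd.resource-control settings; the path and values are made up):

```ini
# /etc/systemd/system/background.slice.d/limits.conf (hypothetical drop-in)
[Slice]
# relative CPU weight compared to other slices (default 100)
CPUWeight=20
# hard memory cap for everything running in this slice
MemoryMax=2G
# relative IO weight (default 100)
IOWeight=20
```

Anything started in such a slice would then compete at a disadvantage against the rest of the system, without per-command tuning.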