Use cpulimit to free up your CPU


The recommended tool for managing system resources on Linux systems is cgroups. While very powerful in terms of what sorts of limits can be tuned (CPU, memory, disk I/O, network, etc.), configuring cgroups is non-trivial. The nice command has been available since 1973, but it only adjusts the scheduling priority among processes that are competing for time on a processor; it will not limit the percentage of CPU cycles that a process can consume per unit of time. The cpulimit command provides the best of both worlds: it limits the percentage of CPU cycles that a process can consume per unit of time, and it is relatively easy to invoke.

The cpulimit command is mainly useful for long-running and CPU-intensive processes. Compiling software and converting videos are common examples of long-running processes that can max out a computer’s CPU. Limiting the CPU usage of such processes will free up processor time for use by other tasks that may be running on the computer. Limiting CPU-intensive processes will also reduce the power consumption, heat output, and possibly the fan noise of the system. The trade-off for limiting a process’s CPU usage is that it will require more time to run to completion.

Install cpulimit

The cpulimit command is available in the default Fedora Linux repositories. Run the following command to install cpulimit on a Fedora Linux system.

$ sudo dnf install cpulimit

View the documentation for cpulimit

The cpulimit package does not come with a man page. Use the following command to view cpulimit’s built-in documentation. The output is provided below. But you may want to run the command on your own system in case the options have changed since this article was written.

$ cpulimit --help
Usage: cpulimit [OPTIONS…] TARGET
      -l, --limit=N percentage of cpu allowed from 0 to 800 (required)
      -v, --verbose show control statistics
      -z, --lazy exit if there is no target process, or if it dies
      -i, --include-children limit also the children processes
      -h, --help display this help and exit
   TARGET must be exactly one of these:
      -p, --pid=N pid of the process (implies -z)
      -e, --exe=FILE name of the executable program file or path name
      COMMAND [ARGS] run this command and limit it (implies -z)

A demonstration

To demonstrate using the cpulimit command, a contrived, computationally intensive Python script is provided below. It computes the value of the 42nd Fibonacci number. The script is run first with no limit and then with a limit of 50%. In both cases it is run as a child process of the time command to show the total time that was required to compute the answer.

$ /bin/time -f '(computed in %e seconds)' /bin/python -c 'f = lambda n: n if n<2 else f(n-1)+f(n-2); print(f(42), end=" ")'
267914296 (computed in 51.80 seconds)
$ /bin/cpulimit -i -l 50 /bin/time -f '(computed in %e seconds)' /bin/python -c 'f = lambda n: n if n<2 else f(n-1)+f(n-2); print(f(42), end=" ")'
267914296 (computed in 127.38 seconds)
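For readability, the logic of the one-liner can be unpacked into a standalone script. This is just a sketch for illustration; the article itself only uses the one-liner above.

```python
def fib(n: int) -> int:
    # Naive doubly-recursive Fibonacci: exponential time, which is what
    # keeps one CPU core busy for a long while at larger n. The article
    # uses fib(42) = 267914296, which takes a minute or more to compute.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20))  # prints 6765; small arguments finish almost instantly
```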

You might hear the CPU fan on your PC rev up when running the first version of the command. But you should not when running the second version. The first version of the command is not CPU limited but it should not cause your PC to become bogged down. It is written in such a way that it can only use at most one CPU. Most modern PCs have multiple CPUs and can simultaneously run other tasks without difficulty when one of the CPUs is 100% busy. To verify that the first command is maxing out one of your processors, run the top command in a separate terminal window and press the 1 key. Press the Q key to quit the top command.

Setting a limit above 100% is only meaningful on a program that is capable of task parallelism. For such programs, each increment of 100% represents full utilization of a CPU (200% = 2 CPUs, 300% = 3 CPUs, etc.).
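As an illustrative sketch (not from the article), here is a small task-parallel workload. Run under something like cpulimit -i -l 200 (the filename and use of multiprocessing are hypothetical), the four workers together would be held to two CPUs' worth of time.

```python
from multiprocessing import Pool

def burn(n: int) -> int:
    # Busy work: a sum of squares, purely to consume CPU cycles.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Four worker processes burning CPU in parallel. Launched as
    # "cpulimit -i -l 200 python3 <this file>", the whole process
    # group would be capped at 200% (two CPUs' worth of time).
    with Pool(4) as pool:
        results = pool.map(burn, [200_000] * 4)
    print(len(results))  # prints 4
```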

Notice that the -i option has been passed to the cpulimit command in the above example. This is necessary because the command to be limited is not a direct child process of the cpulimit command. Rather it is a child process of the time command which in turn is a child process of the cpulimit command. Without the -i option, cpulimit would only limit the time command.

Final notes

If you want to limit a graphical application that you start from a desktop icon, copy the application’s .desktop file (often located under the /usr/share/applications directory) to your ~/.local/share/applications directory and modify the Exec line accordingly. Then run the following command to apply the changes.

$ update-desktop-database ~/.local/share/applications
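As a minimal sketch of such a modification (gedit's launcher is used as a hypothetical example, and the file is written inline here rather than copied from /usr/share/applications), the edit to the Exec line might look like this:

```shell
# Hypothetical example entry: a copy of gedit's launcher. In practice,
# copy the application's real .desktop file from /usr/share/applications.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/org.gnome.gedit.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Text Editor (CPU limited)
Exec=gedit %U
EOF
# Prepend cpulimit so the application always starts under a 50% cap.
sed -i 's|^Exec=|Exec=cpulimit -i -l 50 |' \
    ~/.local/share/applications/org.gnome.gedit.desktop
```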
Comments


  1. Thomas Mittelstaedt

    Thanks for the ‘just-in-time’ hint. Will try that on the Android emulator virtual machine process qemu, which ran ‘wild’ yesterday.

  2. Ben

    The version of CPU Limit you linked to in the article hasn’t been maintained in years. Most distributions/OSes (Debian, Ubuntu, openSUSE, FreeBSD) use the LimitCPU fork these days. LimitCPU does the same thing, has the same syntax, and features a manual page. Plus it’s still maintained and occasionally improved to provide more features/compiler tweaks. Its latest release was about a month ago.

  3. For those who may be interested, here is a one-liner written in Perl that computes the 42nd Fibonacci number. It isn’t as useful for testing cpulimit though 😛.

    perl -e 'use feature "signatures"; sub f($n) { $f[$n] //= ($n<2) ? $n : f($n-1) + f($n-2) }; print f(42), "\n"'
  4. Ben

    Thank you for this, very useful for my workloads (running long computations while keeping system interactive). I was wondering if you could comment on how this interacts with taskset[1] (cpu pinning).
    From my understanding of your article, cpulimit 100% would be equivalent to pinning to 1 CPU, but I'm wondering whether that is 1 core (topology) or the equivalent (for the scheduler) of 1 CPU core (assuming HT is disabled).
    I tend(ed) to use taskset because, while limiting overall CPU use, it ensures the running process still benefits from the local caches bound to the selected cores, but it would be interesting to see if they can be combined.


    • Ben

      CPUlimit (or its modern fork LimitCPU) limits the target process(es) to an overall CPU resource limit.

      Basically if you run "cpulimit -l 50 program-name" it means program-name won't go above 50% of your CPU resources, as monitored by a system monitor like top.

      It doesn’t limit a process to a specific CPU, just an overall consumption percentage of available resources.

    • jama

      I do use 'taskset' every once in a while, and you can confirm the CPU topology of your system with 'lstopo', which produces a graphical image of your system.

      Once you have identified the topology, you can assign the wanted CPU pairs for your process, like 'taskset --cpu-list 0,1,2,3' or 'taskset --cpu-list 0-3', etc.

      To verify, you may use 'htop' instead of 'top', as 'htop' displays the individual CPU core load as well.

  5. Another command for this sort of thing, which will become more relevant as more chiplets are employed, is numactl.

    numactl can limit processes and their children to a single node and direct which node they use.

    But thanks for showing me another way.

  6. jama

    Thank you again for your excellent article, but in my opinion this command, like 'nice', is not as useful nowadays, since the number of CPU cores and the amount of memory have increased over the decades since 1973. As for compiling, it is now possible to use make's '-j <#cores>' flag to perform the compilation in parallel by distributing it among several CPUs, making it faster, instead of using only one. In addition, the OOM killer terminates the processes it considers 'unnecessary' in order to keep the system stable.

    This is my opinion. I would like processes to finish as fast as possible, instead of having a 'nice' queue of processes waiting to finish.

    The drawback might be that even if you 'nice' or otherwise 'optimize' a process, especially one with constant disk activity, like a compilation, the disk activity will not be affected much, and each disk access (I/O) is taken away from other processes, making them slower as they wait for their share.

    However, for some large-scale system administrators with thousands of simultaneous users, this kind of 'optimization' might be necessary.

  7. Cecille

    Thanks for the article.
    Bash does not seem to get limited:

    fibonacci_42 () {
    perl -e 'use feature "signatures"; sub f($n) { $f[$n] //= ($n<2) ? $n : f($n-1) + f($n-2) }; print f(42), "\n"' 2>/dev/null
    }
    export -f fibonacci_42

    $ /bin/bash -E -c "fibonacci_42"

    $ cpulimit -i -l 1 /bin/bash -E -c "fibonacci_42"

    • It is because the Perl one-liner I provided in my comment does not require 100% CPU usage for any significant length of time. I think if you use the Python script it should work.

    • Jesse

      cpulimit maintainer here. In the example in your comment I wouldn’t expect limiting to work as I’m pretty sure the process doing the calculation is Perl, not bash. Bash is calling Perl in the function, but would probably just be waiting for Perl to return the answer if I’m reading your example correctly.

      To use cpulimit (or LimitCPU) in this instance you’d want to do one of two things. Either run “cpulimit -l 10 perl -e ….” or “cpulimit -m -l 10 bash …”. The -m flag monitors child processes and limits them as well as the parent.

  8. newton

    Our Fedora turned out wonderful with our well-crafted Plasma setup, super fast and great for browsing. Congratulations!

  9. Mark

    Personally I find cpulimit only useful on machines with many cores. I have experienced issues on single-core machines (yes, some VMs are set up that way) where using cpulimit to limit a process to 50% does work on the process, but cpulimit itself will use all the remaining CPU; too much context switching in the OS between those two processes, I guess.

    However, on multi-core machines some common utilities like 'ffmpeg' will happily span cores and drive them all to 100% busy.
    Apart from the noise of spinning fans, utilities like lm_sensors or the extremely useful 'bpytop' can show the core temperatures sitting at a point where they are probably considering shutting themselves down.
    Using cpulimit with the flag to include child processes resolves that. As noted in an earlier reply, things will take longer; an ffmpeg run that would take 30 minutes running hot can take over two hours when throttled. But in my opinion that is better than leaving the cores running at what they report as a critical temperature for 30 minutes when you know in advance that is what will happen.

    I had not looked at taskset, so thanks for the comment on that; if it is available on all OSes I may look into it further.

    cpulimit is a nice tool to know about, as it is available everywhere: it is in most Red Hat and Debian based package repositories, and where it is not I have never had problems building it from the GitHub source.
    For those like me who do a lot of CPU-intensive scheduled batch work it is worth knowing about and installing, even if just for one annoying program you don't care about that slows down your machine every day (thinking of you, packagekit).

    Gregory, thanks for the 'final notes' section on how to set it up for a desktop application. I wasn't aware of the 'update-desktop-database' command, and it has probably solved a few unrelated issues :-).

    • Hi Mark!

      FYI to everyone else — Mark is the one who originally suggested this article. Thanks for notifying us about this command. I didn't use the ffmpeg example from your original proposal because it is not in the default Fedora repositories and we've been told in the past not to promote it on the Magazine because of its non-FOSS codecs. Hope that you don't mind that I took your proposal and wrote up the article. It had been sitting in our "approved specifications" category for over a year and we've been a little low on contributions lately.

      The reason that I included that bit about how to use it with GUI applications at the end is because someone had asked that exact question a while ago on Fedora’s user support forum:

      I pointed them at the cpulimit command you had mentioned and they said that it worked to fix their problem. So your proposal has been very useful to lots of people. Thanks! And thanks also to Jesse who is maintaining the software of course. 🙂

  10. jama


    If you’re concerned about

    …’Some common utilities like ‘ffmpeg’ will happily span cores and drive them all to 100% busy.
    Apart from the noise of spinning fans, utilities like lm_sensors or the extremely usefull ‘bpytop’ can show the core temperatures sitting at a point where they are probably considering shutting themselves down.’…,

    one option could be limiting the cpu frequency itself.

    The available cpu frequencies can be checked with:

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
    3600000 2800000 2200000

    Here we can see that the slowest supported speed is 2.2 GHz.

    By forcing this smallest value to each core, the temperatures will not get high, keeping the fans silent as well:

    for i in {0..15} ; do echo 2200000 > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq ; done

    And the CPU frequencies will not exceed 2.2 GHz.

    To revert the setting, just replace the 2200000 with 3600000.

  11. Thanks, that’s such a useful feature for background processes!


The opinions expressed on this website are those of each author, not of the author's employer or of Red Hat. Fedora Magazine aspires to publish all content under a Creative Commons license but may not be able to do so in all cases. You are responsible for ensuring that you have the necessary permission to reuse any work on this site. The Fedora logo is a trademark of Red Hat, Inc. Terms and Conditions
