Argument list too long

Revision as of 19:22, 10 April 2022 by Helpful (talk | contribs)

The direct cause is usually a glob like * somewhere in your shell command.

The actual reason is a little lower level.

Most shells expand globs before they execute a command, so e.g. cp * backup/ might expand to a long list of files, possibly with very long filenames.

Either way, this can produce a very large argument string to be handed to the exec() call.
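To see that the shell, not the command, does this expansion, a minimal sketch using a throwaway directory (the names here are made up):

```shell
# Create a scratch directory with a few files.
mkdir -p globdemo
touch globdemo/one globdemo/two globdemo/three

# The shell expands the glob BEFORE the command runs: printf receives
# three separate arguments, exactly as if you had typed each name out.
printf '%s\n' globdemo/*
```

The command itself never sees the *; it only sees however many arguments the expansion produced.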

When that argument list is too long for the chunk of kernel memory reserved for running commands (historically MAX_ARG_PAGES, usually something like 128KB, which covers the command line plus the environment and some bookkeeping), you get this error. That size was hard-coded in the kernel; since Linux 2.6.23 the limit is instead derived from the stack size limit.
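You can inspect the limit at runtime with getconf, and deliberately trip it; the 3 MB size below is an assumption chosen to exceed the limit on typical systems:

```shell
# ARG_MAX is the kernel's combined budget for argv plus the environment.
getconf ARG_MAX

# A single oversized argument reliably triggers the error: execve() fails
# with E2BIG, which the shell reports as "Argument list too long".
big=$(head -c 3000000 /dev/zero | tr '\0' 'x')
/bin/true "$big" 2>/dev/null || echo "exec failed, as expected"
```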

You can argue it's a design flaw, or that it's a sensible guard against a self-DoS.

Short version is that it's a fact of shell life.

There are various workable solutions:

  • if you meant 'everything in a directory', then you can often specify the directory plus a flag that enables recursion
  • if you're being selective, then find may be useful, and it allows doing things streaming-style, e.g.
find . -name '*.txt' -print0 | xargs -0 echo (See also find and xargs)
  • recompiling the kernel with a larger MAX_ARG_PAGES - of course, you don't know how much you'll need, and this memory is permanently inaccessible for anything else, so just throwing something huge at it is not ideal
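The first option in practice (directory names here are placeholders): instead of expanding every file name onto the command line, let the tool walk the directory itself.

```shell
# Set up a source directory with some files.
mkdir -p src backup
touch src/a.txt src/b.txt

# cp src/* backup/ would pass every file name through exec();
# cp -r passes just two arguments and recurses internally, so the
# number of files no longer matters.
cp -r src backup/
```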


  • note that most of these split the set of files into smaller sets, and execute something for each of those sets. In some cases this significantly alters what the overall command does.
You may want to think about that, and read up on xargs and its --replace (-I) option.
  • for filename in `ls`; do echo $filename; done is not a solution, nor is it at all safe against special characters in filenames.
ls | while IFS= read -r filename; do echo "$filename"; done (specifically for bourne-type shells) works better, but I find it harder to remember exactly why, so I use find+xargs.
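A sketch of the find+xargs approach, including the batching behaviour mentioned above (the file names here are made up, chosen to include a space):

```shell
# Files with awkward names show why NUL termination matters.
mkdir -p fxdemo
touch fxdemo/'a file.txt' fxdemo/plain.txt

# -print0 / -0 keep names intact through spaces and newlines:
find fxdemo -name '*.txt' -print0 | xargs -0 ls -l

# xargs normally batches as many names per invocation as fit; -n forces
# small batches, showing how one logical command becomes several runs:
find fxdemo -name '*.txt' -print0 | xargs -0 -n 1 echo got:

# -I (GNU: --replace) substitutes each name into a command template,
# one invocation per file:
find fxdemo -name '*.txt' -print0 | xargs -0 -I{} echo "found: {}"
```

Because -n and -I change how many times the command runs, they are exactly the kind of set-splitting that can alter a command's overall effect - fine for echo or rm, but worth thinking through for commands like mv or tar.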