No more disk space: How can I find what is taking up the space?
I've run into a problem on one of my servers running 16.04: there is no disk space left.
I have no idea what is taking up the space. Is there a command to list the current directory sizes, so I can traverse and end up in the directory taking up all the space?
disk-usage
Check the disk usage analyser
– Pranal Narayan
May 4 '17 at 15:24
@PranalNarayan No GUI as it's on my server I'm afraid :(
– Karl Morrison
May 4 '17 at 15:26
Darn you, now I went looking, found this bugs.launchpad.net/ubuntu/+source/baobab/+bug/942255 and wish it was a thing.
– Sam
May 4 '17 at 21:52
1
wrt "no GUI, is a server": you could install the GUI app (assuming you are happy with it and the support libraries being on a server) and use is on your local screen via X11-tunnelled-through-SSH with something likeexport DISPLAY=:0.0; ssh -Y <user>@<server> filelight
(replacefilelight
with your preferred tool). Of course with absolutely no space left, if you don't already have the tool installed you'll need to use something else anyway!
– David Spillett
May 5 '17 at 10:15
@DavidSpillett As stated, there is no space left on the server. So I can't install anything.
– Karl Morrison
May 6 '17 at 9:09
11 Answers
Answer (score 68, accepted)
As always in Linux, there's more than one way to get the job done. However, if you need to do it from CLI, this is my preferred method:
I start by running this as root or with sudo:
du -cha --max-depth=1 / | grep -E "M|G"
The grep limits the returned lines to those with values in the megabyte or gigabyte range. If your disks are big enough, you could add |T to the pattern as well to include terabyte amounts. You may get some errors on /proc, /sys, and/or /dev since they are not real files on disk, but it should still provide valid output for the rest of the directories in root. After you find the biggest ones, you can run the command inside that directory to narrow your way down to the culprit. So for example, if /var was the biggest, you could do this next:
du -cha --max-depth=1 /var | grep -E "M|G"
That should lead you to the problem children!
Additional Considerations
While the above command will certainly do the trick, I had some constructive criticism in the comments below that pointed out some things you could also include.
- The grep I provided could result in the occasional "K" value being returned if the name of the directory or file has a capital G or M in it. If you absolutely don't want any of the K-valued entries showing up, you'd want to up your regex game with something more creative and complex, e.g. grep -E "^[0-9.]*[MG]"
- If you know which drive is the issue and it has other drives mounted on top of it that you don't want to waste time including in your search, you can add the -x flag to your du command. Man page description of that flag:
  -x, --one-file-system
  skip directories on different file systems
- You can sort the output of the du command so that the highest value is at the bottom: just append | sort -h to the end of the command (all three refinements are combined in the sketch below).
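Putting all three refinements together, a minimal sketch of the combined pipeline (the starting point / is just an example; point it at whatever filesystem df says is full):

# -x: stay on one filesystem; -h: human-readable sizes; -a: include files too;
# sort -h understands the M/G suffixes, so the biggest entries end up last
sudo du -xha --max-depth=1 / | sort -h | tail -20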
This is exactly what I do.
– Lightness Races in Orbit
May 4 '17 at 22:58
Your grep returns any folders with the letters M or G in their names too; a creative regex should hit numbers with an optional dot, then M|G, maybe "^[0-9]*[.]*[0-9]*[MG]"
– Xen2050
May 5 '17 at 6:24
If you know it's one drive that's the issue, you can use the -x option to make du stay on that one drive (provided on the command line). You can also pipe through sort -h to correctly sort the megabyte/gigabyte human-readable values. I would usually leave off the --max-depth option and just search the entire drive this way, sorting appropriately to get the biggest things at the bottom.
– Muzer
May 5 '17 at 12:58
@alexis My experience is that I sometimes end up with other rubbish mounted below the mountpoint in which I'm interested (especially if that is /), and using -x gives me a guarantee I won't be miscounting things. If your / is full and you have a separately-mounted /home or whatever, using -x is pretty much a necessity to get rid of the irrelevant stuff. So I find it's just easier to use it all the time, just in case.
– Muzer
May 5 '17 at 13:22
If you have the sort you don't need the grep.
– OrangeDog
May 8 '17 at 10:21
Answer (score 59)
You can use ncdu for this. It works very well.
sudo apt install ncdu
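Once installed, a typical invocation looks like this (a sketch; -x keeps ncdu from crossing into other mounted filesystems):

# interactively browse disk usage of the root filesystem
sudo ncdu -x /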
I'm kicking myself as I actually normally use this program, however since there is no space left I can't install it haha
– Karl Morrison
May 4 '17 at 15:29
@KarlMorrison I see several possible solutions: mount it over sshfs on another computer and run ncdu there (assuming you already have an ssh server on it) - or do the reverse: install ncdu on another server, mount that with sshfs and run ncdu from the mount (assuming you already have sshfs on the server) - or, if ncdu were a single script, you could just curl http://path/to/ncdu | sh and it would run from stdin without touching the disk, but that'll require some luck. There's probably a way to make a ram-disk too
– hanshenrik
May 4 '17 at 20:09
@KarlMorrison or you can boot a live image of Linux and install it in there.
– Mark Yisri
May 10 '17 at 10:31
Answer (score 16)
I use this command
sudo du -aBM -d 1 . | sort -nr | head -20
Occasionally, I need to run it from the / directory, as I've placed something in an odd location.
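For reference, the same command pointed at the root directory (the flags: -a includes files as well as directories, -BM reports sizes in 1 MB blocks, -d 1 limits the depth; sort -nr puts the largest entries first):

sudo du -aBM -d 1 / | sort -nr | head -20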
Giving you a +1 for it working! However TopHat's solution actually read my drive quicker!
– Karl Morrison
May 4 '17 at 15:47
I often find it more useful to do this without the -d 1 switch (and usually with less instead of head -20), so that I get a complete recursively enumerated list of all files and directories sorted by the space they consume. That way, if I see a directory taking up a lot of space, I can just scroll down to see if most of the space is actually taken up by some specific file or subdirectory in it. It's a good way to find some unneeded files and directories to delete to free some space: just scroll down until you see something you're sure you don't want to keep, delete it and repeat.
– Ilmari Karonen
May 5 '17 at 19:16
@KarlMorrison it doesn't read it quicker; it's just that sort waits for its input to be complete before beginning output.
– muru
May 29 '17 at 5:34
@muru Ah alright. I however get information quicker so that I can begin traversing quicker if that's a better term!
– Karl Morrison
May 29 '17 at 8:19
Answer (score 11)
There are already many good answers about ways to find directories taking up most of the space. If you have reason to believe that a few large files are the main problem, rather than many small ones, you could use something like find / -size +10M.
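A slightly fuller sketch of that approach (the 100M threshold is arbitrary; -xdev keeps find on one filesystem):

# list regular files over 100 MB with their sizes, discarding permission errors
sudo find / -xdev -type f -size +100M -exec ls -lh {} + 2>/dev/null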
Answer (score 10)
I don't know Ubuntu and can't check my answer, but I post it here based on my experience as a unix admin a long time ago.
1. Find out which filesystem is running out of space
df -h
will list all filesystems, their sizes and their free space. You only waste time if you investigate filesystems that have enough space. Assume that the full filesystem is /myfilesystem. Check the df output for filesystems mounted on subdirectories of /myfilesystem; if there are any, the following steps must be adapted to that situation.
2. Find out how much space is used by the files of this filesystem
du -sh /myfilesystem
The -x option may be used to guarantee that only files that are members of this filesystem are taken into account. Some Unix variants (e.g. Solaris) do not know the -x option for du; then you have to use some workaround to find the du of your filesystem.
3. Now check whether the du of the visible files is approximately the used space displayed by df. If so, you can start to find the large files/directories of the /myfilesystem filesystem to clean up.
4. To find the largest subdirectories of a directory /.../dir, use
du -sk /.../dir/* | sort -n
The -k option forces du to output the size in kilobytes without any unit. This may be the default on some systems; then you can omit this option. The largest files/subdirectories will be shown at the bottom of the output.
5. If you have found a large file/directory that you don't need anymore, you can remove it in an appropriate way. Don't bother with the small directories at the top of the output; deleting them won't solve your problem. If you still don't have enough space, you can repeat step 4 in the largest subdirectories, which are displayed at the bottom of the list.
But what happens if the du output is not approximately the used space displayed by df?
If the du output is larger, then you have missed a subdirectory where another filesystem is mounted. If the du output is much smaller, then some files are not shown in any directory that du inspects. There can be different reasons for this phenomenon:
Some processes are using a file that was already deleted. These files were therefore removed from the directory and du can't see them, but for the filesystem their blocks are still in use until the processes close the files. You can try to find the relevant processes (e.g. with lsof, as sketched below) and force them to close those files (e.g. by stopping the application or by killing the processes). Or you simply reboot your machine.
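A quick way to list such deleted-but-still-open files (a sketch; +L1 selects open files with a link count below one, i.e. already unlinked):

sudo lsof +L1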
There are files in directories that aren't visible anymore because another filesystem is mounted on one of their parent directories. So if you have a file /myfilesystem/subdir/bigfile and then mount another filesystem on /myfilesystem/subdir, you cannot see this file anymore and
du -shx /myfilesystem
will report a value that does not include the size of /myfilesystem/subdir/bigfile. The only way to find out whether such files exist is to unmount /myfilesystem/subdir and check with
ls -la /myfilesystem/subdir
whether it contains files.
There may be special types of filesystems that use/reserve space on a disk that is not visible to the ls command. You need special tools to display this.
Besides this systematic way using the du command, there are some others you can use. For example, you can use the find command to locate files that are larger than some value you supply, that were newly created, or that have a special name (e.g. *.log, core, *.trc). But you should always do a df as described in step 1 so that you work on the right filesystem.
On a busy server you cannot always unmount things. But you can bind mount the upper directory to a temporary location and it will not include the other mounts and will allow access to the hidden files.
– Zan Lynx
May 7 '17 at 18:25
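A sketch of that bind-mount trick (the mount point name is just an example):

# expose the root filesystem without whatever is mounted on top of it
sudo mkdir -p /mnt/rootonly
sudo mount --bind / /mnt/rootonly
sudo du -shx /mnt/rootonly/*    # now counts files shadowed by mount points
sudo umount /mnt/rootonly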
Before systemd I often had mount failures result in filling the / mount with trash. Writing a backup to /mnt/backup without the USB drive connected for example. Now I make sure those job units have mount requirements.
– Zan Lynx
May 7 '17 at 18:30
@ZanLynx Thank you, I never heard of bind mounts before
– miracle173
May 8 '17 at 11:01
@ZanLynx: Not just on busy servers. Imagine that you have /tmp on a separate file system (e.g. a tmpfs) and something created files in /tmp before it became a mount point for a different file system. Now these files are sitting in the root file system, shadowed by a mount point, and you can't access them without a reboot to recovery mode (which doesn't process /etc/fstab) or, like you suggest, a bind-mount.
– David Foerster
Jun 3 '17 at 16:58
Answer (score 7)
In case you are also interested in not using a command, here's an app: Filelight
It lets you quickly visualize what's using disk space in any folder.
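If the machine does have a desktop, or X forwarding as discussed in the comments below, a minimal sketch (filelight is packaged in Ubuntu, though it pulls in sizeable KDE dependencies):

sudo apt install filelight
filelight /    # visualize usage starting from the root directory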
It's a server I SSH into, no GUI.
– Karl Morrison
May 6 '17 at 9:10
@KarlMorrison I think there are ways to run GUI programs over ssh, but that's an idea for later when you've got space to install packages
– Xen2050
May 6 '17 at 23:54
@David Oh yeah, I'm trying to get out of that. It used to be necessary on another platform that I used. I'll fix that comment.
– Mark Yisri
Jun 5 '17 at 11:29
@Karl yes, it's easy if X is already installed on the client: ssh -X <your host> and then run your program from the command line
– Mark Yisri
Jun 5 '17 at 11:30
@MarkYisri the point is that you need to install the program and its dependencies. And the case of Filelight requires at least KDElibs and Qt, which are not really small. See e.g. this page for filelight Ubuntu package, note how many dependencies it has.
– Ruslan
Jul 4 '17 at 15:10
Answer (score 5)
Try sudo apt-get autoremove to remove automatically installed packages that are no longer needed, if you haven't done so already.
Already did that before :( But good idea for others!
– Karl Morrison
May 6 '17 at 9:10
Answer (score 3)
I often use this one
du -sh /*/
Then if I find some big folders I'll switch to one of them and do further investigation
cd big_dir
du -sh */
If needed you can also make it sort automatically with
du -s /*/ | sort -n
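A human-readable variant of the same idea (sort -h understands the unit suffixes that du -h prints):

du -sh /*/ | sort -h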
Answer (score 2)
Not really an answer - but an addendum.
You're hard out of space and can't install ncdu from @erman's answer.
Some suggestions
sudo apt clean
to delete package files you have already downloaded (the apt cache). SAFE
sudo rm -f /var/log/*gz
purge log files older than a week or two - will not delete newer/current logs. MOSTLY SAFE
sudo lsof | grep deleted
list all open files, but filter down to the ones which have been deleted from disk. FAIRLY SAFE
sudo rm /tmp/*
delete some temp files - if something's using them you could upset a process. NOT REALLY THAT SAFE
That lsof one may return lines like this:
server456 ~ $ lsof | grep deleted
init 1 root 9r REG 253,0 10406312 3104 /var/lib/sss/mc/initgroups (deleted)
salt-mini 4532 root 0r REG 253,0 17 393614 /tmp/sh-thd-1492991421 (deleted)
Can't do much about the init line, but the second line suggests salt-minion has a file open which was deleted; the disk blocks will be returned once all the file handles are closed by a service restart.
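To actually release those blocks, restart the service that holds the deleted file open; on a systemd-based system like 16.04 that would look something like this (the unit name here comes from the example output above):

sudo systemctl restart salt-minion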
Other common suspects here would include syslog / rsyslog / syslog-ng, squid, apache, or any process your server runs which is "heavy".
Answer (score 2)
I find the output of tools like Filelight particularly valuable, but, as in your case, servers normally have no GUI installed; the du command, however, is always available.
What I normally do is:
- write the du output to a file (du / > du_output.txt);
- copy the file to my machine;
- use DuFS to "mount" the du output in a temporary directory; DuFS uses FUSE to create a virtual filesystem (= no files are actually created, it's all fake) according to the du output;
- run Filelight or another GUI tool on this temporary directory (a sketch of the whole workflow follows below).
Disclaimer: I wrote DuFS - exactly because I often have to find out what hogs disk space on headless machines.
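A sketch of the whole workflow (the dufs invocation is an assumption - check the project's README for the exact syntax):

du / > du_output.txt                # on the server
scp server:du_output.txt .          # on your local machine
mkdir /tmp/du_view
dufs du_output.txt /tmp/du_view     # hypothetical invocation: mount the du output via FUSE
filelight /tmp/du_view              # browse the virtual tree with a GUI tool
fusermount -u /tmp/du_view          # unmount when done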
You could just sort -n du_output.txt
– Zan Lynx
May 7 '17 at 18:33
I find the graphical display of the used space way more intuitive.
– Matteo Italia
May 7 '17 at 18:50
Answer (score -1)
Similar to @TopHat's, but it filters out entries that merely contain M, G, or T in the name. I don't believe it will miss a size in the first column, and it won't match the filename unless you name files creatively.
du -chad 1 . | grep -E '[0-9]M[[:blank:]]|[0-9]G[[:blank:]]|[0-9]T[[:blank:]]'
Command-line switch reference, since I didn't know what the c or a did: -c prints a grand total, -h uses human-readable sizes, -a includes files as well as directories, and -d 1 limits the depth to one level.
11 Answers
11
active
oldest
votes
11 Answers
11
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
68
down vote
accepted
As always in Linux, there's more than one way to get the job done. However, if you need to do it from CLI, this is my preferred method:
I start by running this as root or with sudo:
du -cha --max-depth=1 / | grep -E "M|G"
The grep is to limit the returning lines to those which return with values in the Megabyte or Gigabyte range. If your disks are big enough, you could add |T
as well to include Terabyte amounts. You may get some errors on /proc
, /sys
, and/or /dev
since they are not real files on disk. However, it should still provide valid output for the rest of the directories in root. After you find the biggest ones you can then run the command inside of that directory in order to narrow your way down the culprit. So for example, if /var
was the biggest you could do it like this next:
du -cha --max-depth=1 /var | grep -E "M|G"
That should lead you to the problem children!
Additional Considerations
While the above command will certainly do the trick, I had some constructive criticism in the comments below that pointed out some things you could also include.
- The
grep
I provided could result in the occasional "K" value being returned if the name of the directory or file has a capital G or M. If you absolutely don't want any of the K valued directories showing up you'd want to up your regex game to be more creative and complex. e.g.grep -E "^[0-9.]*[MG]"
If you know which drive is the issue and it has other mounted drives on top of it that you don't want to waste time including in your search, you could add the
-x
flag to yourdu
command. Man page description of that flag:
-x, --one-file-system
skip directories on different file systems
You can sort the output of the
du
command so that the highest value is at the bottom. Just append this to the end of the command:| sort -h
This is exactly what I do.
– Lightness Races in Orbit
May 4 '17 at 22:58
5
Your grep returns any folders with the letters M or G in their names too, a creative regex should hit numbers with an optional dot+M|G, maybe"^[0-9]*[.]*[0-9]*[MG]"
– Xen2050
May 5 '17 at 6:24
4
If you know it's one drive that's the issue, you can use the-x
option to makedu
stay on that one drive (provided on the command-line). You can also pipe throughsort -h
to correctly sort the megabyte/gigabyte human-readable values. I would usually leave off the--max-depth
option and just search the entire drive this way, sorting appropriately to get the biggest things at the bottom.
– Muzer
May 5 '17 at 12:58
1
@alexis My experience is that I sometimes end up with other rubbish mounted below the mountpoint in which I'm interested (especially if that is/
), and using-x
gives me a guarantee I won't be miscounting things. If your/
is full and you have a separately-mounted/home
or whatever, using-x
is pretty much a necessity to get rid of the irrelevant stuff. So I find it's just easier to use it all the time, just in case.
– Muzer
May 5 '17 at 13:22
1
If you have the sort you don't need the grep.
– OrangeDog
May 8 '17 at 10:21
|
show 8 more comments
up vote
68
down vote
accepted
As always in Linux, there's more than one way to get the job done. However, if you need to do it from CLI, this is my preferred method:
I start by running this as root or with sudo:
du -cha --max-depth=1 / | grep -E "M|G"
The grep is to limit the returning lines to those which return with values in the Megabyte or Gigabyte range. If your disks are big enough, you could add |T
as well to include Terabyte amounts. You may get some errors on /proc
, /sys
, and/or /dev
since they are not real files on disk. However, it should still provide valid output for the rest of the directories in root. After you find the biggest ones you can then run the command inside of that directory in order to narrow your way down the culprit. So for example, if /var
was the biggest you could do it like this next:
du -cha --max-depth=1 /var | grep -E "M|G"
That should lead you to the problem children!
Additional Considerations
While the above command will certainly do the trick, I had some constructive criticism in the comments below that pointed out some things you could also include.
- The
grep
I provided could result in the occasional "K" value being returned if the name of the directory or file has a capital G or M. If you absolutely don't want any of the K valued directories showing up you'd want to up your regex game to be more creative and complex. e.g.grep -E "^[0-9.]*[MG]"
If you know which drive is the issue and it has other mounted drives on top of it that you don't want to waste time including in your search, you could add the
-x
flag to yourdu
command. Man page description of that flag:
-x, --one-file-system
skip directories on different file systems
You can sort the output of the
du
command so that the highest value is at the bottom. Just append this to the end of the command:| sort -h
This is exactly what I do.
– Lightness Races in Orbit
May 4 '17 at 22:58
5
Your grep returns any folders with the letters M or G in their names too, a creative regex should hit numbers with an optional dot+M|G, maybe"^[0-9]*[.]*[0-9]*[MG]"
– Xen2050
May 5 '17 at 6:24
4
If you know it's one drive that's the issue, you can use the-x
option to makedu
stay on that one drive (provided on the command-line). You can also pipe throughsort -h
to correctly sort the megabyte/gigabyte human-readable values. I would usually leave off the--max-depth
option and just search the entire drive this way, sorting appropriately to get the biggest things at the bottom.
– Muzer
May 5 '17 at 12:58
1
@alexis My experience is that I sometimes end up with other rubbish mounted below the mountpoint in which I'm interested (especially if that is/
), and using-x
gives me a guarantee I won't be miscounting things. If your/
is full and you have a separately-mounted/home
or whatever, using-x
is pretty much a necessity to get rid of the irrelevant stuff. So I find it's just easier to use it all the time, just in case.
– Muzer
May 5 '17 at 13:22
1
If you have the sort you don't need the grep.
– OrangeDog
May 8 '17 at 10:21
|
show 8 more comments
up vote
68
down vote
accepted
up vote
68
down vote
accepted
As always in Linux, there's more than one way to get the job done. However, if you need to do it from CLI, this is my preferred method:
I start by running this as root or with sudo:
du -cha --max-depth=1 / | grep -E "M|G"
The grep is to limit the returning lines to those which return with values in the Megabyte or Gigabyte range. If your disks are big enough, you could add |T
as well to include Terabyte amounts. You may get some errors on /proc
, /sys
, and/or /dev
since they are not real files on disk. However, it should still provide valid output for the rest of the directories in root. After you find the biggest ones you can then run the command inside of that directory in order to narrow your way down the culprit. So for example, if /var
was the biggest you could do it like this next:
du -cha --max-depth=1 /var | grep -E "M|G"
That should lead you to the problem children!
Additional Considerations
While the above command will certainly do the trick, I had some constructive criticism in the comments below that pointed out some things you could also include.
- The
grep
I provided could result in the occasional "K" value being returned if the name of the directory or file has a capital G or M. If you absolutely don't want any of the K valued directories showing up you'd want to up your regex game to be more creative and complex. e.g.grep -E "^[0-9.]*[MG]"
If you know which drive is the issue and it has other mounted drives on top of it that you don't want to waste time including in your search, you could add the
-x
flag to yourdu
command. Man page description of that flag:
-x, --one-file-system
skip directories on different file systems
You can sort the output of the
du
command so that the highest value is at the bottom. Just append this to the end of the command:| sort -h
As always in Linux, there's more than one way to get the job done. However, if you need to do it from CLI, this is my preferred method:
I start by running this as root or with sudo:
du -cha --max-depth=1 / | grep -E "M|G"
The grep is to limit the returning lines to those which return with values in the Megabyte or Gigabyte range. If your disks are big enough, you could add |T
as well to include Terabyte amounts. You may get some errors on /proc
, /sys
, and/or /dev
since they are not real files on disk. However, it should still provide valid output for the rest of the directories in root. After you find the biggest ones you can then run the command inside of that directory in order to narrow your way down the culprit. So for example, if /var
was the biggest you could do it like this next:
du -cha --max-depth=1 /var | grep -E "M|G"
That should lead you to the problem children!
Additional Considerations
While the above command will certainly do the trick, I had some constructive criticism in the comments below that pointed out some things you could also include.
- The
grep
I provided could result in the occasional "K" value being returned if the name of the directory or file has a capital G or M. If you absolutely don't want any of the K valued directories showing up you'd want to up your regex game to be more creative and complex. e.g.grep -E "^[0-9.]*[MG]"
If you know which drive is the issue and it has other mounted drives on top of it that you don't want to waste time including in your search, you could add the
-x
flag to yourdu
command. Man page description of that flag:
-x, --one-file-system
skip directories on different file systems
You can sort the output of the
du
command so that the highest value is at the bottom. Just append this to the end of the command:| sort -h
edited May 6 '17 at 16:21
grg
1176
1176
answered May 4 '17 at 15:36
TopHat
1,521610
1,521610
This is exactly what I do.
– Lightness Races in Orbit
May 4 '17 at 22:58
5
Your grep returns any folders with the letters M or G in their names too, a creative regex should hit numbers with an optional dot+M|G, maybe"^[0-9]*[.]*[0-9]*[MG]"
– Xen2050
May 5 '17 at 6:24
4
If you know it's one drive that's the issue, you can use the-x
option to makedu
stay on that one drive (provided on the command-line). You can also pipe throughsort -h
to correctly sort the megabyte/gigabyte human-readable values. I would usually leave off the--max-depth
option and just search the entire drive this way, sorting appropriately to get the biggest things at the bottom.
– Muzer
May 5 '17 at 12:58
1
@alexis My experience is that I sometimes end up with other rubbish mounted below the mountpoint in which I'm interested (especially if that is/
), and using-x
gives me a guarantee I won't be miscounting things. If your/
is full and you have a separately-mounted/home
or whatever, using-x
is pretty much a necessity to get rid of the irrelevant stuff. So I find it's just easier to use it all the time, just in case.
– Muzer
May 5 '17 at 13:22
1
If you have the sort you don't need the grep.
– OrangeDog
May 8 '17 at 10:21
|
show 8 more comments
This is exactly what I do.
– Lightness Races in Orbit
May 4 '17 at 22:58
5
Your grep returns any folders with the letters M or G in their names too, a creative regex should hit numbers with an optional dot+M|G, maybe"^[0-9]*[.]*[0-9]*[MG]"
– Xen2050
May 5 '17 at 6:24
4
If you know it's one drive that's the issue, you can use the-x
option to makedu
stay on that one drive (provided on the command-line). You can also pipe throughsort -h
to correctly sort the megabyte/gigabyte human-readable values. I would usually leave off the--max-depth
option and just search the entire drive this way, sorting appropriately to get the biggest things at the bottom.
– Muzer
May 5 '17 at 12:58
1
@alexis My experience is that I sometimes end up with other rubbish mounted below the mountpoint in which I'm interested (especially if that is/
), and using-x
gives me a guarantee I won't be miscounting things. If your/
is full and you have a separately-mounted/home
or whatever, using-x
is pretty much a necessity to get rid of the irrelevant stuff. So I find it's just easier to use it all the time, just in case.
– Muzer
May 5 '17 at 13:22
1
If you have the sort you don't need the grep.
– OrangeDog
May 8 '17 at 10:21
This is exactly what I do.
– Lightness Races in Orbit
May 4 '17 at 22:58
This is exactly what I do.
– Lightness Races in Orbit
May 4 '17 at 22:58
5
5
Your grep returns any folders with the letters M or G in their names too, a creative regex should hit numbers with an optional dot+M|G, maybe
"^[0-9]*[.]*[0-9]*[MG]"
– Xen2050
May 5 '17 at 6:24
Your grep returns any folders with the letters M or G in their names too, a creative regex should hit numbers with an optional dot+M|G, maybe
"^[0-9]*[.]*[0-9]*[MG]"
– Xen2050
May 5 '17 at 6:24
4
4
If you know it's one drive that's the issue, you can use the
-x
option to make du
stay on that one drive (provided on the command-line). You can also pipe through sort -h
to correctly sort the megabyte/gigabyte human-readable values. I would usually leave off the --max-depth
option and just search the entire drive this way, sorting appropriately to get the biggest things at the bottom.– Muzer
May 5 '17 at 12:58
If you know it's one drive that's the issue, you can use the
-x
option to make du
stay on that one drive (provided on the command-line). You can also pipe through sort -h
to correctly sort the megabyte/gigabyte human-readable values. I would usually leave off the --max-depth
option and just search the entire drive this way, sorting appropriately to get the biggest things at the bottom.– Muzer
May 5 '17 at 12:58
1
1
@alexis My experience is that I sometimes end up with other rubbish mounted below the mountpoint in which I'm interested (especially if that is
/
), and using -x
gives me a guarantee I won't be miscounting things. If your /
is full and you have a separately-mounted /home
or whatever, using -x
is pretty much a necessity to get rid of the irrelevant stuff. So I find it's just easier to use it all the time, just in case.– Muzer
May 5 '17 at 13:22
@alexis My experience is that I sometimes end up with other rubbish mounted below the mountpoint in which I'm interested (especially if that is
/
), and using -x
gives me a guarantee I won't be miscounting things. If your /
is full and you have a separately-mounted /home
or whatever, using -x
is pretty much a necessity to get rid of the irrelevant stuff. So I find it's just easier to use it all the time, just in case.– Muzer
May 5 '17 at 13:22
1
1
If you have the sort you don't need the grep.
– OrangeDog
May 8 '17 at 10:21
If you have the sort you don't need the grep.
– OrangeDog
May 8 '17 at 10:21
|
show 8 more comments
up vote
59
down vote
You can use ncdu
for this. It works very well.
sudo apt install ncdu
30
I'm kicking myself as I actually normally use this program, however since there is no space left I can't install it haha
– Karl Morrison
May 4 '17 at 15:29
@KarlMorrison i see several possible solutions, just mount it over sshfs on another computer and run ncdu there (assuming you already have an ssh server on it..) - or if you don't have an ssh server on it, you can do the reverse, install ncdu on another server and mount that with sshfs and run ncdu from the mount (assuming you already have sshfs on the server) - or if you don't have either, ... if ncdu is a single script, you can justcurl http://path/to/ncdu | sh
, and it will run in an in-memory IO stdin cache, but that'll require some luck. there's probably a way to make a ram-disk too
– hanshenrik
May 4 '17 at 20:09
@KarlMorrison or you can boot a live image of Linux and install it in there.
– Mark Yisri
May 10 '17 at 10:31
add a comment |
up vote
59
down vote
You can use ncdu
for this. It works very well.
sudo apt install ncdu
30
I'm kicking myself as I actually normally use this program, however since there is no space left I can't install it haha
– Karl Morrison
May 4 '17 at 15:29
@KarlMorrison i see several possible solutions, just mount it over sshfs on another computer and run ncdu there (assuming you already have an ssh server on it..) - or if you don't have an ssh server on it, you can do the reverse, install ncdu on another server and mount that with sshfs and run ncdu from the mount (assuming you already have sshfs on the server) - or if you don't have either, ... if ncdu is a single script, you can justcurl http://path/to/ncdu | sh
, and it will run in an in-memory IO stdin cache, but that'll require some luck. there's probably a way to make a ram-disk too
– hanshenrik
May 4 '17 at 20:09
@KarlMorrison or you can boot a live image of Linux and install it in there.
– Mark Yisri
May 10 '17 at 10:31
add a comment |
up vote
59
down vote
up vote
59
down vote
You can use ncdu
for this. It works very well.
sudo apt install ncdu
You can use ncdu
for this. It works very well.
sudo apt install ncdu
answered May 4 '17 at 15:28
Duncan
1,3091915
1,3091915
30
I'm kicking myself as I actually normally use this program, however since there is no space left I can't install it haha
– Karl Morrison
May 4 '17 at 15:29
@KarlMorrison i see several possible solutions, just mount it over sshfs on another computer and run ncdu there (assuming you already have an ssh server on it..) - or if you don't have an ssh server on it, you can do the reverse, install ncdu on another server and mount that with sshfs and run ncdu from the mount (assuming you already have sshfs on the server) - or if you don't have either, ... if ncdu is a single script, you can justcurl http://path/to/ncdu | sh
, and it will run in an in-memory IO stdin cache, but that'll require some luck. there's probably a way to make a ram-disk too
– hanshenrik
May 4 '17 at 20:09
@KarlMorrison or you can boot a live image of Linux and install it in there.
– Mark Yisri
May 10 '17 at 10:31
add a comment |
30
I'm kicking myself as I actually normally use this program, however since there is no space left I can't install it haha
– Karl Morrison
May 4 '17 at 15:29
@KarlMorrison i see several possible solutions, just mount it over sshfs on another computer and run ncdu there (assuming you already have an ssh server on it..) - or if you don't have an ssh server on it, you can do the reverse, install ncdu on another server and mount that with sshfs and run ncdu from the mount (assuming you already have sshfs on the server) - or if you don't have either, ... if ncdu is a single script, you can justcurl http://path/to/ncdu | sh
, and it will run in an in-memory IO stdin cache, but that'll require some luck. there's probably a way to make a ram-disk too
– hanshenrik
May 4 '17 at 20:09
@KarlMorrison or you can boot a live image of Linux and install it in there.
– Mark Yisri
May 10 '17 at 10:31
30
30
I'm kicking myself as I actually normally use this program, however since there is no space left I can't install it haha
– Karl Morrison
May 4 '17 at 15:29
I'm kicking myself as I actually normally use this program, however since there is no space left I can't install it haha
– Karl Morrison
May 4 '17 at 15:29
@KarlMorrison i see several possible solutions, just mount it over sshfs on another computer and run ncdu there (assuming you already have an ssh server on it..) - or if you don't have an ssh server on it, you can do the reverse, install ncdu on another server and mount that with sshfs and run ncdu from the mount (assuming you already have sshfs on the server) - or if you don't have either, ... if ncdu is a single script, you can just
curl http://path/to/ncdu | sh
, and it will run in an in-memory IO stdin cache, but that'll require some luck. there's probably a way to make a ram-disk too– hanshenrik
May 4 '17 at 20:09
@KarlMorrison i see several possible solutions, just mount it over sshfs on another computer and run ncdu there (assuming you already have an ssh server on it..) - or if you don't have an ssh server on it, you can do the reverse, install ncdu on another server and mount that with sshfs and run ncdu from the mount (assuming you already have sshfs on the server) - or if you don't have either, ... if ncdu is a single script, you can just
curl http://path/to/ncdu | sh
, and it will run in an in-memory IO stdin cache, but that'll require some luck. there's probably a way to make a ram-disk too– hanshenrik
May 4 '17 at 20:09
@KarlMorrison or you can boot a live image of Linux and install it in there.
– Mark Yisri
May 10 '17 at 10:31
@KarlMorrison or you can boot a live image of Linux and install it in there.
– Mark Yisri
May 10 '17 at 10:31
add a comment |
up vote
16
down vote
I use this command
sudo du -aBM -d 1 . | sort -nr | head -20
Occasionally, I need to run it from the /
directory, as I've placed something in an odd location.
Giving you a +1 for it working! However TopHats solution actually read my drive quicker!
– Karl Morrison
May 4 '17 at 15:47
I often find it more useful to do this without the-d 1
switch (and usually withless
instead ofhead -20
), so that I get a complete recursively enumerated list of all files and directories sorted by the space they consume. That way, if I see a directory taking up a lot of space, I can just scroll down to see if most of the space is actually taken up by some specific file or subdirectory in it. It's a good way to find some unneeded files and directories to delete to free some space: just scroll down until you see something you're sure you don't want to keep, delete it and repeat.
– Ilmari Karonen
May 5 '17 at 19:16
@KarlMorrison it doesn't read it quicker, it's just thatsort
waits for the output to be completed before beginning output.
– muru
May 29 '17 at 5:34
@muru Ah alright. I however get information quicker so that I can begin traversing quicker if that's a better term!
– Karl Morrison
May 29 '17 at 8:19
add a comment |
up vote
16
down vote
I use this command
sudo du -aBM -d 1 . | sort -nr | head -20
Occasionally, I need to run it from the /
directory, as I've placed something in an odd location.
Giving you a +1 for it working! However TopHats solution actually read my drive quicker!
– Karl Morrison
May 4 '17 at 15:47
I often find it more useful to do this without the-d 1
switch (and usually withless
instead ofhead -20
), so that I get a complete recursively enumerated list of all files and directories sorted by the space they consume. That way, if I see a directory taking up a lot of space, I can just scroll down to see if most of the space is actually taken up by some specific file or subdirectory in it. It's a good way to find some unneeded files and directories to delete to free some space: just scroll down until you see something you're sure you don't want to keep, delete it and repeat.
– Ilmari Karonen
May 5 '17 at 19:16
@KarlMorrison it doesn't read it quicker, it's just thatsort
waits for the output to be completed before beginning output.
– muru
May 29 '17 at 5:34
@muru Ah alright. I however get information quicker so that I can begin traversing quicker if that's a better term!
– Karl Morrison
May 29 '17 at 8:19
add a comment |
up vote
16
down vote
up vote
16
down vote
I use this command
sudo du -aBM -d 1 . | sort -nr | head -20
Occasionally, I need to run it from the /
directory, as I've placed something in an odd location.
I use this command
sudo du -aBM -d 1 . | sort -nr | head -20
Occasionally, I need to run it from the /
directory, as I've placed something in an odd location.
answered May 4 '17 at 15:25
Charles Green
12.9k73556
12.9k73556
Giving you a +1 for it working! However TopHats solution actually read my drive quicker!
– Karl Morrison
May 4 '17 at 15:47
I often find it more useful to do this without the-d 1
switch (and usually withless
instead ofhead -20
), so that I get a complete recursively enumerated list of all files and directories sorted by the space they consume. That way, if I see a directory taking up a lot of space, I can just scroll down to see if most of the space is actually taken up by some specific file or subdirectory in it. It's a good way to find some unneeded files and directories to delete to free some space: just scroll down until you see something you're sure you don't want to keep, delete it and repeat.
– Ilmari Karonen
May 5 '17 at 19:16
@KarlMorrison it doesn't read it quicker, it's just thatsort
waits for the output to be completed before beginning output.
– muru
May 29 '17 at 5:34
@muru Ah alright. I however get information quicker so that I can begin traversing quicker if that's a better term!
– Karl Morrison
May 29 '17 at 8:19
add a comment |
Giving you a +1 for it working! However TopHats solution actually read my drive quicker!
– Karl Morrison
May 4 '17 at 15:47
I often find it more useful to do this without the-d 1
switch (and usually withless
instead ofhead -20
), so that I get a complete recursively enumerated list of all files and directories sorted by the space they consume. That way, if I see a directory taking up a lot of space, I can just scroll down to see if most of the space is actually taken up by some specific file or subdirectory in it. It's a good way to find some unneeded files and directories to delete to free some space: just scroll down until you see something you're sure you don't want to keep, delete it and repeat.
– Ilmari Karonen
May 5 '17 at 19:16
@KarlMorrison it doesn't read it quicker, it's just thatsort
waits for the output to be completed before beginning output.
– muru
May 29 '17 at 5:34
@muru Ah alright. I however get information quicker so that I can begin traversing quicker if that's a better term!
– Karl Morrison
May 29 '17 at 8:19
Giving you a +1 for it working! However TopHats solution actually read my drive quicker!
– Karl Morrison
May 4 '17 at 15:47
Giving you a +1 for it working! However TopHats solution actually read my drive quicker!
– Karl Morrison
May 4 '17 at 15:47
I often find it more useful to do this without the
-d 1
switch (and usually with less
instead of head -20
), so that I get a complete recursively enumerated list of all files and directories sorted by the space they consume. That way, if I see a directory taking up a lot of space, I can just scroll down to see if most of the space is actually taken up by some specific file or subdirectory in it. It's a good way to find some unneeded files and directories to delete to free some space: just scroll down until you see something you're sure you don't want to keep, delete it and repeat.– Ilmari Karonen
May 5 '17 at 19:16
I often find it more useful to do this without the
-d 1
switch (and usually with less
instead of head -20
), so that I get a complete recursively enumerated list of all files and directories sorted by the space they consume. That way, if I see a directory taking up a lot of space, I can just scroll down to see if most of the space is actually taken up by some specific file or subdirectory in it. It's a good way to find some unneeded files and directories to delete to free some space: just scroll down until you see something you're sure you don't want to keep, delete it and repeat.– Ilmari Karonen
May 5 '17 at 19:16
@KarlMorrison it doesn't read it quicker, it's just that
sort
waits for the output to be completed before beginning output.– muru
May 29 '17 at 5:34
@KarlMorrison it doesn't read it quicker, it's just that
sort
waits for the output to be completed before beginning output.– muru
May 29 '17 at 5:34
@muru Ah alright. I however get information quicker so that I can begin traversing quicker if that's a better term!
– Karl Morrison
May 29 '17 at 8:19
@muru Ah alright. I however get information quicker so that I can begin traversing quicker if that's a better term!
– Karl Morrison
May 29 '17 at 8:19
add a comment |
up vote
11
down vote
There are already many good answers about ways to find directories taking most of the space. If you have reason to believe that few large files are the main problem, rather than many small ones, you could use something like find / -size +10M
.
add a comment |
up vote
11
down vote
There are already many good answers about ways to find directories taking most of the space. If you have reason to believe that few large files are the main problem, rather than many small ones, you could use something like find / -size +10M
.
add a comment |
up vote
11
down vote
up vote
11
down vote
There are already many good answers about ways to find directories taking most of the space. If you have reason to believe that few large files are the main problem, rather than many small ones, you could use something like find / -size +10M
.
There are already many good answers about ways to find directories taking most of the space. If you have reason to believe that few large files are the main problem, rather than many small ones, you could use something like find / -size +10M
.
answered May 4 '17 at 20:21
Luca Citi
21113
21113
add a comment |
add a comment |
up vote
10
down vote
I don't know Ubuntu and can't check my answer but post here my answer based on my experience as unix admin long time ago.
Find out which filesystem runs out of space
df -h
will list all filesystem, their size and their free space. You only waste time if you investigate filesystems that have enough space. Assume that the full filesystem is /myfilesystem. check the df output if there are filesystems mounted on subdirs of /myfilesystems. If so, the following speps must be adapted to this situation.
Find out how much space is used by the files of this filesystem
du -sh /myfilesystem
The -x option may be used to guarantee that only the files that are member of this filesystems are taken into account. Some Unix variants (e.g. Solaris) do not know the -x option for du. Then you have to use some workarounds to find the du of your filesystem.
Now check if the du of the visible files is approximately the size of the used space displayed by df. If so, you can start to find the large files/directories of the /myfilesystem filesystem to clean up.
to find the largest subdirectories of a directory /.../dir use
du -sk /.../dir/*|sort -n
the -k option forces du to output the sie in kilobyte without any unit. This may be the default on some systems. Then you can omit this option. The largest files/subdirectories will be shown at the bottom of the output.
If you have found a large file/directory that you don't need anymore you can remove it in an appropriate way. Don't bother about the small directories on the top of the output. It won't solve your problem if you delete them. If you still haven't enough space than you can repeat step 4 in the larges subdirectories which are displayed at the bottom of the list.
But what happened if the du output is not approximately the available space displayed by df?
If the du output is larger then you have missed a subdirectory where another filesystem is mounted. If the du output is much smaller, then som files are not shown in any directory tha du inspects. There can be different reasons for his phenomena.
some processes are using a file that was already deleted. Therefore this files were removed from the directory and du can't see them. But for the filesystem their blocks are still in use until the proceses close the files. You can try to find out the relevant processes (e.g. with lsof) and force them to close this files (e.g by stopping the application or by killing the processes). Or you simply reboot your machine.
there are files in directories that aren't visible anymore because on one of their parent directories another filesystem is mounted. So if you have a file /myfilesysem/subdir/bigfile and now mount another filesystem on /myfilesystem/subdir then you cannot see this file anymore and
du -shx /myfilesystem
will report a value that does not contain the size of /myfilesystem/subdir/bigfile. The only way to find out if such files exist is to unmount /myfilesystem/subir and check with
ls -la /myfilesystem/subdir
if it contains files.
There may be special types of filesystems that use/reserve space on a disk that is not visible to the ls command. You need special tools to display this.
Besides this systematic way using the du command there are some other you can use. So you can use the find command to find files that are larger then some value you supply, you can search for files that larger than some value you supply or that were newly created or have a special name (e.g. *.log, core, *.trc). But you always should do a df as described in 1 so that you work on the right filesystem
On a busy server you cannot always unmount things. But you can bind mount the upper directory to a temporary location and it will not include the other mounts and will allow access to the hidden files.
– Zan Lynx
May 7 '17 at 18:25
Before systemd I often had mount failures result in filling the / mount with trash. Writing a backup to /mnt/backup without the USB drive connected for example. Now I make sure those job units have mount requirements.
– Zan Lynx
May 7 '17 at 18:30
@ZanLynx Thank you, I never heard of bind mounts before
– miracle173
May 8 '17 at 11:01
@ZanLynx: Not just on busy servers. Imagine that you have/tmp
on a separate file system (e. g. a tmpfs) and something created files in/tmp
before it became a mount point to a different file system. Now these files are sitting in the root file system, shadowed by a mount point and you can't access them without a reboot to recovery mode (which doesn't process/etc/fstab
) or, like you suggest, a bind-mount.
– David Foerster
Jun 3 '17 at 16:58
add a comment |
up vote
10
down vote
I don't know Ubuntu and can't check my answer but post here my answer based on my experience as unix admin long time ago.
Find out which filesystem runs out of space
df -h
will list all filesystem, their size and their free space. You only waste time if you investigate filesystems that have enough space. Assume that the full filesystem is /myfilesystem. check the df output if there are filesystems mounted on subdirs of /myfilesystems. If so, the following speps must be adapted to this situation.
Find out how much space is used by the files of this filesystem
du -sh /myfilesystem
The -x option may be used to guarantee that only the files that are member of this filesystems are taken into account. Some Unix variants (e.g. Solaris) do not know the -x option for du. Then you have to use some workarounds to find the du of your filesystem.
Now check if the du of the visible files is approximately the size of the used space displayed by df. If so, you can start to find the large files/directories of the /myfilesystem filesystem to clean up.
to find the largest subdirectories of a directory /.../dir use
du -sk /.../dir/*|sort -n
the -k option forces du to output the sie in kilobyte without any unit. This may be the default on some systems. Then you can omit this option. The largest files/subdirectories will be shown at the bottom of the output.
If you have found a large file/directory that you don't need anymore you can remove it in an appropriate way. Don't bother about the small directories on the top of the output. It won't solve your problem if you delete them. If you still haven't enough space than you can repeat step 4 in the larges subdirectories which are displayed at the bottom of the list.
But what happened if the du output is not approximately the available space displayed by df?
If the du output is larger then you have missed a subdirectory where another filesystem is mounted. If the du output is much smaller, then som files are not shown in any directory tha du inspects. There can be different reasons for his phenomena.
some processes are using a file that was already deleted. Therefore this files were removed from the directory and du can't see them. But for the filesystem their blocks are still in use until the proceses close the files. You can try to find out the relevant processes (e.g. with lsof) and force them to close this files (e.g by stopping the application or by killing the processes). Or you simply reboot your machine.
There are files in directories that aren't visible anymore because another filesystem is mounted on one of their parent directories. If you have a file /myfilesystem/subdir/bigfile and then mount another filesystem on /myfilesystem/subdir, you cannot see this file anymore, and
du -shx /myfilesystem
will report a value that does not contain the size of /myfilesystem/subdir/bigfile. One way to find out whether such files exist is to unmount /myfilesystem/subdir and check with
ls -la /myfilesystem/subdir
whether it contains files.
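As Zan Lynx notes in the comments below, on a busy server you can avoid unmounting: a plain bind mount (unlike a recursive rbind) does not carry submounts along, so the shadowed files become visible under the new path. A sketch, where /mnt/inspect is an arbitrary empty directory chosen for this example:
sudo mkdir -p /mnt/inspect
sudo mount --bind /myfilesystem /mnt/inspect
ls -la /mnt/inspect/subdir
sudo umount /mnt/inspect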
There may be special types of filesystems that use or reserve space on a disk in a way that is not visible to the ls command. You need special tools to display this.
Besides this systematic way using the du command, there are others you can use. For example, you can use the find command to search for files that are larger than some value you supply, or that were newly created, or that have a special name (e.g. *.log, core, *.trc). But you should always start with df as described in step 1, so that you work on the right filesystem.
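A sketch of the find approach, assuming GNU findutils; -xdev stays on one filesystem, and the 100M threshold is an arbitrary example value:
sudo find /myfilesystem -xdev -type f -size +100M -exec ls -lh {} +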
answered May 5 '17 at 7:12
miracle173
2005
On a busy server you cannot always unmount things. But you can bind mount the upper directory to a temporary location and it will not include the other mounts and will allow access to the hidden files.
– Zan Lynx
May 7 '17 at 18:25
Before systemd I often had mount failures result in filling the / mount with trash. Writing a backup to /mnt/backup without the USB drive connected for example. Now I make sure those job units have mount requirements.
– Zan Lynx
May 7 '17 at 18:30
@ZanLynx Thank you, I never heard of bind mounts before
– miracle173
May 8 '17 at 11:01
@ZanLynx: Not just on busy servers. Imagine that you have /tmp on a separate file system (e.g. a tmpfs) and something created files in /tmp before it became a mount point to a different file system. Now these files are sitting in the root file system, shadowed by a mount point, and you can't access them without a reboot to recovery mode (which doesn't process /etc/fstab) or, like you suggest, a bind-mount.
– David Foerster
Jun 3 '17 at 16:58
add a comment |
up vote
7
down vote
In case you are also interested in not using a command, here's an app: Filelight
It lets you quickly visualize what's using disk space in any folder.
answered May 5 '17 at 21:32
Gabriel
1,40842445
It's a server I SSH into, no GUI.
– Karl Morrison
May 6 '17 at 9:10
@KarlMorrison I think there are ways to run GUI programs over ssh, but that's an idea for later when you've got space to install packages
– Xen2050
May 6 '17 at 23:54
@David Oh yeah, I'm trying to get out of that. It used to be necessary on another platform that I used. I'll fix that comment.
– Mark Yisri
Jun 5 '17 at 11:29
@Karl yes, it's easy if X is already installed on the client: ssh -X <your host> and then run your program from the command line
– Mark Yisri
Jun 5 '17 at 11:30
@MarkYisri the point is that you need to install the program and its dependencies. And the case of Filelight requires at least KDElibs and Qt, which are not really small. See e.g. this page for filelight Ubuntu package, note how many dependencies it has.
– Ruslan
Jul 4 '17 at 15:10
add a comment |
up vote
5
down vote
Try sudo apt-get autoremove
to remove packages that are no longer needed, if you haven't done so already.
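If leftover configuration files also take up space, apt-get's standard --purge flag removes those as well; shown here only as a supplement to the answer above:
sudo apt-get autoremove --purge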
edited May 5 '17 at 13:39
Charles Green
12.9k73556
answered May 5 '17 at 12:36
Donald Shahini
929
1
Already did that before :( But good idea for others!
– Karl Morrison
May 6 '17 at 9:10
add a comment |
up vote
3
down vote
I often use this one:
du -sh /*/
Then if I find some big folders I'll switch into one of them and investigate further:
cd big_dir
du -sh */
If needed, you can also make it sort automatically with
du -s /*/ | sort -n
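One caveat, sketched under the assumption of a GNU userland: the /*/ glob matches only directories, so a single large file sitting directly in / would be missed; this variant includes files and stays on the root filesystem:
sudo du -sxk /* | sort -n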
answered May 5 '17 at 3:05
phuclv
328224
add a comment |
up vote
2
down vote
Not really an answer - but an addendum.
You're completely out of space and can't install ncdu from @erman's answer.
Some suggestions:
sudo apt clean
to delete package files you have already downloaded. SAFE
sudo rm -f /var/log/*gz
purge rotated log files, which are typically at least a week or two old - this will not delete newer/current logs. MOSTLY SAFE
sudo lsof | grep deleted
list all open files, but filter down to the ones which have been deleted from disk. FAIRLY SAFE
sudo rm /tmp/*
delete some temp files - if something's using them you could upset a process. NOT REALLY THAT SAFE
That lsof command may return lines like this:
server456 ~ $ lsof | grep deleted
init          1  root  9r  REG  253,0  10406312    3104  /var/lib/sss/mc/initgroups (deleted)
salt-mini  4532  root  0r  REG  253,0        17  393614  /tmp/sh-thd-1492991421 (deleted)
Can't do much for the init line, but the second line suggests salt-minion has a deleted file open, and the disk blocks will be returned once all the file handles are closed by a service restart.
Other common suspects here would include syslog / rsyslog / syslog-ng, squid, apache, or any process your server runs which is "heavy".
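For the salt-minion example above, restarting the service is usually enough to release the deleted file's blocks; a sketch assuming the standard systemd unit name on Ubuntu 16.04:
sudo systemctl restart salt-minion
df -h # the freed blocks should now show up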
edited May 5 '17 at 15:22
phuclv
328224
answered May 5 '17 at 5:11
Criggie
1394
add a comment |
up vote
2
down vote
I find the output of tools like Filelight particularly valuable but, as in your case, servers normally have no GUI installed; the du command, however, is always available.
What I normally do is:
- write the du output to a file (du / > du_output.txt);
- copy the file to my machine;
- use DuFS to "mount" the du output in a temporary directory; DuFS uses FUSE to create a virtual filesystem (= no files are actually created, it's all fake) according to the du output;
- run Filelight or another GUI tool on this temporary directory.
Disclaimer: I wrote dufs - exactly because I often have to find out what hogs disk space on headless machines.
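The first two steps in command form; -k gives unit-free kilobyte numbers, and user@server is a placeholder. The DuFS invocation itself is deliberately omitted, since its exact flags should be taken from its README rather than guessed here:
sudo du -xk / > du_output.txt # on the server
scp user@server:du_output.txt . # on your own machine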
answered May 6 '17 at 16:57
Matteo Italia
145118
You could just sort -n du_output.txt
– Zan Lynx
May 7 '17 at 18:33
I find the graphical display of the used space way more intuitive.
– Matteo Italia
May 7 '17 at 18:50
add a comment |
up vote
-1
down vote
Similar to @TopHat's answer, but it filters the output for sizes with an M, G, or T suffix. I don't believe it will miss a size in the first column, and it won't match on a filename unless you name your files creatively.
du -chad 1 . | grep -E '[0-9]M[[:blank:]]|[0-9]G[[:blank:]]|[0-9]T[[:blank:]]'
Command line switches explained here since I didn't know what the c or a did.
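With GNU du you can get a similar effect without grep, filtering on actual size instead of pattern-matching the text; the 1M cutoff is an arbitrary example:
du -cha -d 1 -t 1M . | sort -h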
answered May 5 '17 at 1:16
user685769
1
add a comment |
protected by Thomas Ward♦ May 7 '17 at 17:47
Thank you for your interest in this question.
Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).