
Fixing the symlink race problem

By Jake Edge
December 14, 2011

Symbolic link race conditions have existed for decades, they have been well understood for most of that time, and developers have been given clear guidelines on how to avoid them. Yet they still persist, with new vulnerabilities discovered regularly. There is also a known way to avoid most of the problems by changing the kernel—something that has been done for many years in grsecurity and Openwall—but it has never made its way into the mainline. While kernel hackers are understandably unenthusiastic about working around buggy user-space programs in the kernel, this particular problem is severe enough that it probably makes sense to do so. It would seem that we are seeing some movement on that front.

The basic problem is a time-of-check-to-time-of-use (TOCTTOU) flaw. Buggy applications will look for the existence and/or characteristics of temporary files before opening them. An attacker can exploit the flaw by changing the file (often by making a symlink) in between the check and the open(). If the program with the flaw has elevated privileges (e.g. setuid), and the attacker replaces the file with a symlink to a system file, serious problems can result.
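As a minimal illustration of that pattern (the file name and surrounding logic are invented for the example, not taken from any real program):

#include <fcntl.h>
#include <unistd.h>

void vulnerable(void)
{
    const char *path = "/tmp/example.tmp";     /* hypothetical temporary file */

    if (access(path, F_OK) != 0) {             /* 1: check that the file is absent */
        /* ...an attacker can create a symlink at path right here... */
        int fd = open(path, O_CREAT | O_WRONLY, 0600);   /* 2: use */
        if (fd >= 0) {
            write(fd, "data\n", 5);            /* may now write through the attacker's symlink */
            close(fd);
        }
    }
}

In application code the fix is to skip the separate check and open with O_CREAT | O_EXCL (or use mkstemp(3)), so that the existence check and the creation happen atomically in the kernel.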

The bug generally happens in shared, world-writable directories that have the "sticky" bit set (like /tmp). The sticky bit on a directory is set to prevent users from deleting other users' files. So, the fix restricts the ability to follow symlinks in sticky directories. In particular, a symlink is only followed if it is owned by the follower or if the directory and symlink have the same owner. That solves much of the symlink race problem without breaking any known applications.
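Expressed as a user-space sketch of the rule just described (this illustrates the policy, not the kernel patch itself; the helper name is made up):

#include <stdbool.h>
#include <sys/stat.h>
#include <unistd.h>

/* dir: stat() of the containing directory; sym: lstat() of the symlink */
static bool symlink_follow_ok(const struct stat *dir, const struct stat *sym)
{
    /* The restriction only applies in world-writable, sticky directories. */
    if (!(dir->st_mode & S_ISVTX) || !(dir->st_mode & S_IWOTH))
        return true;

    /* Follow only if the caller owns the symlink, or the symlink and
       the directory have the same owner. */
    return sym->st_uid == getuid() || sym->st_uid == dir->st_uid;
}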

We looked at patches to restrict the following of symlinks in sticky directories in June 2010. Since that time, there has been a two-pronged approach, championed by Kees Cook, to try to get the code into the mainline. The first is the Yama LSM, which is meant to collect up extensions to the Linux discretionary access control (DAC) model. But it runs afoul of the usual problem for specialized LSMs: the inability to stack LSMs.

Cook and others would clearly prefer to see the symlink changes go into the core VFS code, rather than via an LSM, but there has been a push by some to keep it out of the core. There was discussion of Yama and its symlink protections at the Linux Security Summit LSM roundtable, where the plan to push Yama as a DAC enhancement LSM was hatched. That may well be a way forward, but Cook has also posted a patch set that would put the symlink restrictions into fs/namei.c.

The latter patch attracted some interesting comments that would seem to indicate that Ingo Molnar and Linus Torvalds, at least, see value in closing the hole. None of the VFS developers have weighed in on this iteration, but Cook notes that the patch reflects feedback from Al Viro, which could be seen as a sign that he's not completely opposed. Molnar was particularly unhappy that the hole still exists:

Ugh - and people continue to get exploited from a preventable, fixable and already fixed VFS design flaw.

Molnar also had some questions about the implementation, including whether the PROTECTED_STICKY_SYMLINKS kernel configuration parameter should default to 'yes', but was overall very interested in seeing the patch move forward. Torvalds had a somewhat different take ("Ugh. I really dislike the implementation."), but suggested a different mechanism to try to solve the underlying problem by using the permission bits on the symlink. His argument is that Cook's approach is not very "polite" because it is hidden away, so it turns into:

some kind of hacky run-time random behavior depending on some invisible config option that people aren't even aware of.

As Cook points out, though, Torvalds's approach has its own set of "weird hidden behaviors". Torvalds admittedly had not thought his proposal through completely, but it does show an interest in seeing the problem solved. From Cook's perspective, the changes he is proposing simply change the behavior of sticky directories with respect to symlinks, whereas Torvalds's would have wider-ranging effects on symlink creation. Either might do the job, but Cook's solution does have an advantage in that the proposed changes have been shaken out for years in grsecurity and Openwall, as well as in Ubuntu more recently.

Given that several high-profile kernel hackers seem to be in favor of fixing the problem—Ted Ts'o was also favorably disposed to a fix back in 2010—the winds may have shifted in favor of the core VFS approach. If Viro and the other VFS developers aren't completely unhappy with it, we could see it in 3.4 or so.

If that were to happen, there is another related patch that would presumably also be pushed for mainline inclusion: hard link restrictions. That, like the symlink change, currently lives in Yama, though the case can be made that it should also be done in the core VFS code. That patch would disallow the creation of hard links to files that are inaccessible (neither readable nor writable) to the user making the link. It also disallows hard links to setuid and setgid files. That would close some further holes in the symlink race vulnerability, as well as fix some other application vulnerabilities.
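The rule amounts to something like the following user-space sketch (again, an illustration of the policy described above rather than the actual Yama code; the function name is invented):

#include <stdbool.h>
#include <sys/stat.h>
#include <unistd.h>

static bool hardlink_ok(const char *target)
{
    struct stat st;

    if (lstat(target, &st) != 0)
        return false;

    /* No hard links to setuid or setgid files. */
    if (st.st_mode & (S_ISUID | S_ISGID))
        return false;

    /* The caller must own the file, or otherwise be able to both
       read and write it. */
    if (st.st_uid == getuid())
        return true;
    return access(target, R_OK | W_OK) == 0;
}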

Should both the symlink and hard link restrictions make their way into the VFS core, that would only leave the ptrace() restrictions in Yama. Those restrictions allow administrators to disallow a process from using ptrace() on anything other than its descendants (unless it has the CAP_SYS_PTRACE capability). Currently, any process can trace any other running under the same UID, so a compromise in one running program could lead to disclosing credentials and other sensitive information from another running program. There may also be other DAC enhancements that Cook or others are interested in adding to Yama in the future.
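In outline, the check looks something like this sketch, which uses a mock process structure rather than the kernel's task_struct, so it only shows the walk up the process tree described above:

#include <stdbool.h>
#include <stddef.h>

struct proc {
    struct proc *parent;            /* NULL at the top of the tree */
    bool has_cap_sys_ptrace;
};

static bool ptrace_allowed(const struct proc *tracer, const struct proc *tracee)
{
    if (tracer->has_cap_sys_ptrace)
        return true;                /* CAP_SYS_PTRACE bypasses the restriction */

    /* Otherwise the tracer must be an ancestor of the tracee. */
    for (const struct proc *p = tracee; p != NULL; p = p->parent)
        if (p == tracer)
            return true;
    return false;
}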

One way or another, the problem is severe enough that there should, at least, be a way for distributors or administrators to thwart these common vulnerabilities. Whether the fix lives in VFS or an LSM, providing an option to turn off a whole class of application flaws—which can often lead to system compromise—seems worth doing. Hopefully we are seeing movement in that direction.



'Drug cocktail' to fix /tmp bugs

Posted Dec 15, 2011 12:57 UTC (Thu) by epa (subscriber, #39769) [Link]

If the fix makes it into the mainline, this is great news and long overdue. By knocking out 25% of the vulnerabilities that crowd LWN's security section each week, it might even free up brainpower to look at other common bugs which might have an easier fix than "re-educate the whole world to do it correctly". (For example, how much stuff would break if signed integer overflows in C code aborted the application, unless the programmer gives a hint to the compiler to do otherwise?)
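For reference, behavior close to this can already be requested from GCC with -ftrapv, which makes signed overflow abort at run time; a minimal sketch (the file name is invented):

/* build: gcc -O0 -ftrapv overflow-demo.c */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int x = INT_MAX;
    int y = x + 1;          /* signed overflow: aborts under -ftrapv */
    printf("%d\n", y);      /* not reached when the trap fires */
    return 0;
}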

Another way to make /tmp more secure is for each user to have their own /tmp directory in their home directory. I wonder if that will be the next fix to be deployed. (Getting rid of symlink attacks is great, but if an application has insecure /tmp handling then it is still possible to DoS it by creating a file in /tmp at the wrong time.)

'Drug cocktail' to fix /tmp bugs

Posted Dec 15, 2011 15:39 UTC (Thu) by nix (subscriber, #2304) [Link]

You can do 'every user has his own /tmp' with pam_namespace now. If you're lucky this might not even break GDM et al (thanks to the X11 stuff kept in /tmp), but I wouldn't be surprised if it did break it.

'Drug cocktail' to fix /tmp bugs

Posted Dec 15, 2011 23:51 UTC (Thu) by PaXTeam (subscriber, #24616) [Link]

> For example, how much stuff would break if signed integer overflows in C code aborted the application[...]

the application? you wouldn't get that far if you enabled such a feature for the kernel itself ;). just try to compile it with clang's -fcatch-undefined-behavior and watch the fireworks...

'Drug cocktail' to fix /tmp bugs

Posted Dec 16, 2011 13:33 UTC (Fri) by nix (subscriber, #2304) [Link]

Hah. Forget the kernel: bugs are still being found in GCC itself where the compiler either contains examples of signed overflow, or introduces signed overflows during optimization. (The most recent example I'm aware of is PR51247, fixed just last month.)

The signed overflow thing, like the aliasing thing, is a problem that will never go away: it will just slowly retreat into more and more obscure software, while common software that relies on it will just gain -fwrapv in appropriate places because fixing it is too pervasive (just as such software has already gained -fno-strict-overflow).

'Drug cocktail' to fix /tmp bugs

Posted Dec 17, 2011 0:35 UTC (Sat) by wahern (subscriber, #37304) [Link]

You're probably right--it will never go away. The easiest way to fix signed overflow is to not use signed integers. Yet people continue to use (int) and (long) reflexively. Java doesn't even have unsigned integers, just for purity's sake. The C++ crowd still debates whether size_t is better than a signed integer. Never mind that in real life signed arithmetic is rare: SLoC-for-SLoC, the vast majority of arithmetic is mundane management of data for which negative numbers are unnecessary and awkward, and where unsigned overflow is usually entirely and reliably benign. It can even act as a negative-feedback effect--by using modulo arithmetic you thwart someone trying to overflow your buffers, because it produces the opposite result. Finally, corruption isn't much of an issue because garbage in is garbage out; no software can fix that.

Throwing exceptions on signed overflow will probably increase vulnerabilities. I don't think preventing a small number of privilege escalation attacks is worth the cost in dramatically increasing DoS attacks.

The symlink issue is a little disconcerting. It'd probably take less time grepping through the entire Debian source archive for "/tmp" and "TMPDIR", blacklisting stupid apps, and replacing bad code with mkstemp(3) or tmpfile(3), than debating how to hack the kernel to paper over idiocy.
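For comparison, the mkstemp(3) pattern that would replace such code looks roughly like this sketch (the template name and error handling are illustrative only, not from any real program):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int make_tempfile(void)
{
    char path[] = "/tmp/example-XXXXXX";   /* hypothetical template */
    int fd = mkstemp(path);                /* created and opened atomically */

    if (fd < 0) {
        perror("mkstemp");
        return -1;
    }
    unlink(path);   /* optional: the file vanishes once fd is closed */
    return fd;      /* caller uses fd only, never the path */
}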

Come to think of it, there is an operating system which takes exactly this approach--fixing classes of vulnerabilities by fixing the code. But the name escapes me at the moment ;)

'Drug cocktail' to fix /tmp bugs

Posted Dec 17, 2011 1:01 UTC (Sat) by kees (subscriber, #27264) [Link]

The problem is needing to continually scan all the software because /tmp junk _keeps_ getting added. :(

And for "how to hack the kernel", there's not really much of a debate the way I see it: this has been solved for over a decade already. :P

'Drug cocktail' to fix /tmp bugs

Posted Dec 17, 2011 2:54 UTC (Sat) by nybble41 (subscriber, #55106) [Link]

> by using modulo arithmetic you thwart someone trying to overflow your buffers by producing the opposite result

I doubt that. The difference between signed and unsigned shows up mainly in comparisons. The result of an expression cast to unsigned (e.g. pointer arithmetic) is generally the same whether you use unsigned modulo arithmetic or two's complement. For example, without overflow detection,

char *buffer;
uint32_t x = 4294967295; // 2**32 - 1
buffer[x];

has exactly the same effect on 32-bit platforms as

char *buffer;
int32_t x = -1;
buffer[x];

The first version does at least have the marginal advantage that you only need to check the upper boundary, provided the lower boundary is zero.

'Drug cocktail' to fix /tmp bugs

Posted Dec 18, 2011 17:12 UTC (Sun) by epa (subscriber, #39769) [Link]

Part of the problem is C's insane rules for silent conversion between signed and unsigned, so if your code uses unsigned int but a library uses signed ints, you must be very careful.
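A classic instance of those conversion rules (the variable names are made up):

#include <stdio.h>

int main(void)
{
    int n = -1;
    unsigned int limit = 1000;

    /* n is converted to unsigned for the comparison, becoming a huge
       value, so the "in range" branch is never taken. */
    if (n < limit)
        printf("in range\n");
    else
        printf("out of range: -1 compares greater than 1000\n");
    return 0;
}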

Throwing exceptions (or just aborting) on signed overflow would increase DoS vulnerabilities in the short term. The reason to suggest it is that it would convert subtle and perhaps unnoticed bugs into much more obvious bugs; also that 'fail safe' in the context of software usually means stop execution rather than doing something weird and continuing.

I don't agree that fixing all of Debian is easier than fixing the kernel. Too much new software is being written all the time with the same old vulnerabilities again and again (the LWN security section is witness to that). The choice is either to fix the kernel or to genetically engineer a new super-race of programmers who are immune to mistakes, oversights, and the belief that because a program works when tested it will keep working under pathological conditions such as an attacker deliberately racing against it.

As for OpenBSD, they sensibly take a belt-and-braces approach to most security issues. For example, if your programs do not have vulnerabilities then PID randomization is not necessary, but they do it anyway. By the same token, fixing every program in Debian is certainly a good idea, but it should come in addition to a defensive measure in the kernel, not as a substitute for it.

'Drug cocktail' to fix /tmp bugs

Posted Dec 19, 2011 13:54 UTC (Mon) by incase (subscriber, #37115) [Link]

@epa:
Actually, fixing all of Debian (I take that as a synonym for "fixing all the software you can find") still makes sense even if the Linux kernel "fixes" this issue: there are heaps of other Unix systems that might be affected by the same insecure temporary file handling, and there are plenty of systems running older kernels with (sometimes manually compiled) newer applications.
So in either case, I think the kernel should take appropriate measures to mitigate this attack vector, while applications should be fixed to use more secure access patterns (both on Linux and on other potentially affected systems).

'Drug cocktail' to fix /tmp bugs

Posted Dec 20, 2011 10:10 UTC (Tue) by epa (subscriber, #39769) [Link]

I thoroughly agree: fix the kernel *and* fix the applications. That's what I intended to say in the earlier post.

But even if for some reason you can't fix the applications, fix the kernel anyway!

Fixing the symlink race problem

Posted Dec 22, 2011 22:02 UTC (Thu) by slashdot (guest, #22014) [Link]

Much better solution: per-user /tmp

Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds