Pondering the X client vulnerabilities
By Jonathan Corbet
May 27, 2013
Certain projects are known for disclosing a large number of vulnerabilities
at once; such behavior is especially common in company-owned projects where
fixes are released in batches. Even those projects, though, rarely turn up with 30
new CVE numbers in a single day. But, on May 23, the X.org project
did exactly that when it disclosed a large
number of security vulnerabilities in various X client libraries — some of
which could be more than two decades old.
The vulnerabilities
The X Window System has a classic client/server architecture, with the X
server providing display and input services for a range of client
applications. The two sides communicate via a well-defined (if much extended)
protocol that, in theory, provides for network-transparent operation. In
any protocol implementation, developers must take into account the
possibility that one of the participants is compromised or overtly
hostile. In short, that is what did not happen in the X client
libraries.
In particular, the client libraries contained many assumptions about the
trustworthiness of the data coming from the X server. Keymap indexes were
not checked to verify that they fell in the range of known keys. Very
large buffer size values from the server could
cause integer overflows on the client side; that, in turn, could lead
to the allocation of undersized buffers that could subsequently be
overflowed. File-processing code could be forced
into unbounded recursion by hostile input. And so on. The bottom line
is that an attacker who controls an X server has a long list of possible
ways to compromise the clients connected to that server.
Despite the seemingly scary nature of most of these vulnerabilities, the impact on
most users should be minimal. Most of the time, the user is in control of
the server, which is usually running in a privileged mode. Any remote
attacker who can compromise such a server will not need to be concerned
with client library exploits; the game will have already been lost. The
biggest threat, arguably, is attacks against setuid programs by a local
user. If the user can control the server (perhaps by using one of the
virtual X server applications), it may be possible to subvert a privileged
program, enabling privilege escalation on the local machine. For this
reason, applying the updates makes sense in many situations, but it may not
be a matter of immediate urgency.
Many of these vulnerabilities have been around for a long time; the
advisory states that "X.Org believes all prior versions of these
libraries contain these flaws, dating back to their introduction."
That introduction, for the bulk of the libraries involved, was in the
1990s. That is a long time for some (in retrospect) fairly obvious errors
to go undetected in code that is this widely used.
Some thoughts
One can certainly make excuses for the developers who implemented those
libraries 20 years or so ago. The net was not so hostile — or so pervasive
— and it hadn't yet occurred to developers that their code might have to
interact with overtly hostile peers. A lot of code written in those days
has needed refurbishing since.
It is a bit more interesting to ponder why that refurbishing took so long
to happen
in this case. X has long inspired fears of security issues, after all.
But, traditionally, those fears have been centered around the server, since
that is where the privilege lies. If you operate under the assumption that
the server is the line of defense, there is little reason to be concerned
about the prospect of the server attacking its clients. It undoubtedly
seemed better to focus on reinforcing the server itself.
Even so, one might think that somebody would have gotten around to looking
at the X library code before Ilja van Sprundel took on the task in 2013.
After all, if vulnerable code exists, somebody, somewhere will figure out a
way to exploit it, and attackers have no qualms about looking for problems
in ancient code. The X libraries are widely used and, for better or worse,
they do often get linked into privileged programs that, arguably, should
not be mixing interface and privilege in this way. It seems fairly likely
that at least some of these vulnerabilities have been known to attackers
for some time.
Speaking of review
As Al Viro has pointed out, the security
updates caused some problems of their own due to bugs that would have been
caught in a proper review process. Given the age and impact of the
vulnerabilities, it arguably would have been better to skip the embargo
process and post the fixes publicly before shipping them. After all, as Al
notes, unreviewed "security" fixes could be a way to slip new
vulnerabilities into a system.
In the free software community, we tend to take pride in our review
processes which, we hope, keep bugs out of our code and vulnerabilities out
of our system. In this case, though, it is now clear that some of our most
widely used library code has not seen a serious review pass for a long
time. Recent kernel vulnerabilities, too, have shown that our code is not
as well reviewed as we might like to think. Often, it seems, the
black hats are scrutinizing our code more closely than our developers and
maintainers are.
Fixing this will not be easy. Deep code review has always been in short
supply in our community, and for easily understandable reasons: the work is
tedious, painstaking, and often unrewarding. Developers with the skill to
perform this kind of review tend to be happier when they are writing code
of their own. Getting these developers to volunteer more of their time for
code review is always going to be an uphill battle.
The various companies working in this area could help the situation by
paying for more security review work. There are some signs that more of
this is happening than in the past, but this, too, has tended to be a hard
sell. Most companies sponsor development work to help ensure that their
own needs are adequately met by the project(s) in question. General
security work does not add features or enable more hardware; the rewards
from doing this work may seem nebulous at best. So most companies,
especially if they do not feel threatened by the current level of security
in our code, feel that security work is something they can leave to others.
So we will probably continue to muddle along with code that contains a
variety of vulnerabilities, both old and new. Most of the time, it works
well enough — at least, as far as we know. And on that cheery note, your
editor has to run; there's a whole set of new security updates to apply.
I would, however, be very suspicious of any SUID program that required the use of »network support and threading primitives«, whether it is based on Qt or anything else. (The X server comes to mind but that's another story.) If nothing else it should be possible to structure one's X client program such that any code that needs special privileges is put into a minimal separate executable that can then be SUID – and nowadays there are various other approaches one could take that may make SUID completely unnecessary in this context.
There's probably nothing wrong with using the non-GUI parts of Qt to implement a threaded networking daemon, if one doesn't mind the Qt haters' jumping all over one. But such a daemon would not run SUID root; it would be started as root initially (from something like SysV init or systemd) and then drop its root privileges ASAP.
It was actually Cyberax who claimed that »It's actually EXTREMELY common to have networked programs to be SUIDed«. This is apparently so common that so far he hasn't managed to come up with one single example.
Programs that need to open ports below 1024 for listening are not usually X clients. They are servers/daemons that are commonly run as root to begin with (and drop their privileges as soon as they can), rather than SUID-root programs. This is a completely different ball game.
Yeah, obviously they will, approximately forever. But even long before anyone was calling it obsolete and focusing on other work, it still didn't have the kinds of contributors needed to proactively find things like this. It's too enormous a surface of too terrible code to ever grow a proper community around it. LibreOffice managed thanks to its huge visibility and cross-platformness: X will never, ever, attain that critical mass.
The only way it managed before is because a few of us all took it in turns trying to be the hero and single-handedly do everything which needed doing, which obviously doesn't last long before you horrifically burn out. But it did a really good job of masking the enormous shortage of manpower we've always faced.
You say it ('just') as if it's possible to fix with literally no trade-offs whatsoever.
Safer-than-C pointers have stricter semantics. You can't just casually toss them into, and fish them back out of, the semantics-free abyss that is void *. You can't use safe_pointer_to_foo as if it were safe_pointer_to_array_of_foo; if you want a mutable handle to an entry in an array_of_foo, you create an iterator_on_array_of_foo, which consists of a safe_pointer_to_array_of_foo and an index.
And the principle of defensive design indicates that since language and library safety features exist to protect the user from the effects of the programmer's idiocy, the syntax that is quick and easy to use should be the one that has all the safeties enabled, and if the compiler can't sufficiently intelligently optimize away redundant safeties then there should be an unsafe syntax carrying a significant dose of syntactic salt. Unfortunately, because people tend to resist even those changes from which they would get real benefits, getting such a change of outlook past the standards committee is very hard.