LWN.net Weekly Edition for May 9, 2013
By Jonathan Corbet
May 8, 2013
The relative quiet recently experienced on the Fedora development mailing
list felt a bit like
the calm before the storm. And, indeed, the storm duly arrived in
the form of a heated discussion over the proper behavior of the password
forms in the Anaconda installer. When the dust had settled, the status
quo ante prevailed. But some interesting governance questions were
raised in between.
The problem
After the Fedora 19 alpha release, the
installer developers decided to stop masking the passwords supplied by the
user during the installation process. Passwords only remained visible for
as long as the keyboard focus remained on the relevant input box; as soon
as the focus moved on, the password would be masked. But that still left
the password visible for an arbitrary period
of time to anybody who chose to look.
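The pattern itself is easy to implement. As a minimal sketch of the
behavior (not Anaconda's actual code), assuming GTK 3 through PyGObject:
the entry is masked by default, revealed while it holds keyboard focus,
and masked again as soon as focus leaves.

    import gi
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk

    def on_focus_in(entry, event):
        entry.set_visibility(True)    # reveal while the user is typing
        return False

    def on_focus_out(entry, event):
        entry.set_visibility(False)   # mask once focus moves elsewhere
        return False

    win = Gtk.Window(title="Password demo")
    entry = Gtk.Entry(visibility=False)   # masked until focused
    entry.connect("focus-in-event", on_focus_in)
    entry.connect("focus-out-event", on_focus_out)
    win.add(entry)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()

Note that, as the bug's closer argued, the masking is largely cosmetic in
any case: it defends against a glance at the screen, not at the keyboard.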
Some users, suffice to say, were unamused. A bug report
was filed — and promptly closed:
This is working exactly as it is intended. The passwords are only
visible while they are focused. As soon as you unfocus a field,
the password is hidden. This is a pattern that is becoming more
and more widely used, and it makes sense. Hiding the password as
you type doesn't actually do anything for security, as anyone
watching your monitor could just watch your keyboard instead.
The above-mentioned unamused users were, strangely enough, equally
unamused by this response. The bug was reopened, and a massive mailing list discussion was launched.
The arguments covered whether it ever made sense to show passwords in the
clear, whether passwords still make sense at all, and more. The new
behavior had few defenders, a situation not helped by the near absence of the
Anaconda developers from the discussion. On May 6, the change was reverted
with no explanation. At that point, naturally, the discussion wound down.
Heard in the debate
While there was a lot of talk about the security issues associated with the
clear display of passwords, there was also a lot of meta-discussion on how
issues like this should be handled. Some participants, Matthew Garrett in particular, took issue with the
tactic of reopening the bug entry:
I'm saying that if a bug report has been closed due to the change
being a deliberate design decision, reopening the bug isn't going
to change the fact that it was a deliberate design decision. The
appropriate place to discuss deliberate design decisions is a forum
where said decisions are made, ie not Bugzilla.
Anybody who spends any significant time reading bug tracker entries knows
that they often host spirited debates about the merits of specific
changes. These entries can serve as a focal point for a discussion of a
particular issue that the relevant developers will have a relatively hard
time ignoring. On the other hand, bug trackers can be painful places to hold
any kind of detailed discussion. They tend to have all of the
disadvantages of forum sites, without the polished interfaces and enhanced
searchability that such sites provide. A desire to keep such discussions
on the mailing lists — and to avoid using the bug tracker to bug the
relevant developers — is understandable.
A few people tried to bring up matters of policy, claiming, understandably,
that policies are a big part of what distinguishes a distribution from a
random collection of software. Fedora does not appear to have an explicit
policy on how password forms should behave. Perhaps it should: it would be
better if such forms behaved consistently throughout the distribution
rather than surprising users with occasional novelties.
Eric Sandeen suggested that Anaconda's
unique role in the distribution might subject it to more scrutiny
than other packages, asking "How much of the Anaconda team's job is
it to set Fedora OS policies and behaviors, vs. to implement them?"
He pointed out that a decision by the Anaconda developers to change the
default filesystem used with new installs would probably not be seen as an
Anaconda-only matter, for example.
It was suggested that the issue could be forwarded to the Fedora
Engineering Steering Committee (FESCo) in an attempt to force the change to
be reverted. FESCo member Kevin Fenzi expressed the low level of enthusiasm most
participants seemed to feel for that approach:
If they do decide to keep the change, you could escalate it to
FESCo. However, (speaking only for myself here) I would be VERY
reluctant to override maintainers on their packages on something
that is a design decision/judgement call. Where would we draw the
line?
Fedora, arguably, does not defend the independence of its package
maintainers as fiercely as, say, Debian does. But a developer's package
still tends to be their castle; even those who were opposed to this
particular change were, for the most part, unwilling to try to revert it by
force. As a community, we remain disinclined toward dictating instructions
to our developers.
The discussion clearly covered a number of ways in which this particular
technical decision should be debated and decided. The one thing it did not
do, unfortunately, was to provide any indication of how the decision was made
to revert the password change. So a discussion that might have led to a
better understanding of how security-related decisions should be made
within Fedora ended up, instead, with the possibility that the developer
involved simply backed out the change rather than face the mob. That gives
the Fedora community no guidance on either policy or the best way to handle
technical disagreements; as such, it represents a missed opportunity.
Passwords, arguably, should be invisible, but policy decisions usually
belong in plain sight.
By Nathan Willis
May 8, 2013
The Participatory Culture
Foundation (PCF) released version
6.0 of its GPL-licensed desktop video application Miro in April. There are performance
improvements and important new functionality, as one might expect.
But Miro's development over the years also serves as a barometer for the
state of online multimedia content—and the trend these days
seems to be pulling away from RSS and other open standards.
For the benefit of the unfamiliar, Miro is a cross-platform GUI
application designed to consolidate online audio and video sources
into a single player that automatically handles subscriptions and
other sources of recurring content. In the early days, it was named
Democracy Player, and the emphasis was on managing RSS and Atom feeds
for video sources, akin to a podcasting client for video. PCF's
larger mission centers around promoting decentralized media creation
and distribution; its other projects
include Miro Community, a
web platform for building video-sharing
sites, the Amara
framework for user-contributed video subtitles, and
Miro Video Converter,
a video conversion application (whose functionality is also available
in Miro itself).
Over the years, support for additional means of content delivery
has been added to Miro, including BitTorrent, web-based video sites
(i.e., those that embed video content but do not produce a parsable
feed), and dedicated services like Hulu. The application's interface
resembles a desktop music player: content sources are listed in a
sidebar, with a collapsible tree grouping sources by type. Feed
subscriptions are listed under "Podcasts," and Miro indicates the
availability of new material with a numeric indicator next to the feed
title. Each feed has separate settings regarding whether new items are
automatically downloaded, how long they are retained, and so on.
Links to web video sites and digital music stores are each grouped
in their own category, but Miro does not check either of these service
types for new content automatically. The application can also manage
a local music collection
much like any other audio player. Oddly enough, although the music
management feature allows users to browse by album and artist, the
video management feature does not sport similar library-like options
for managing a collection of local video files. Instead, it presents a
sorted list and highlights entries that have not yet been
watched—the emphasis is on watching what is new and moving on.
Feed me, see more
Miro is written primarily in Python, and the Linux version uses
GStreamer for media playback and FFmpeg for format conversion.
Although the earliest versions of the application focused on video
feed management, an embedded browser engine has been available in Miro
since the 1.0 release, allowing users to search for and bookmark video
sites that do not produce a proper feed. Initially the engine was
Gecko on all platforms, but starting with the 3.0 series in 2010, Miro
has used WebKit on Linux.
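The playback side of such a stack can be quite compact. Here is an
illustrative sketch, not Miro's actual code, of Python-driven GStreamer
playback; it assumes the GStreamer 1.0 PyGObject bindings, and the media
URL is made up:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    # playbin assembles the demuxer/decoder pipeline automatically
    player = Gst.ElementFactory.make("playbin", "player")
    player.set_property("uri", "https://example.com/episode.ogv")
    player.set_state(Gst.State.PLAYING)

    # Block until end-of-stream or an error, then clean up
    bus = player.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    player.set_state(Gst.State.NULL)

Because playbin chooses its elements on its own, an application can stay
relatively thin on the playback side and concentrate on feed handling.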
By far the most noteworthy change in the 6.0 release is that it
finally restores working Flash support on Linux (which had been broken
since 2009), without which many of the web-based video services
were unusable. However, Flash support is not an all-or-nothing
proposition, and its value to the user depends on a number of
variables.
The Linux version of 6.0 comes with a handful of well-known web
video services (YouTube, Hulu, and PBS) pre-listed in the
application's sidebar. Users can add their own by entering the URL,
and the sidebar entries are essentially just browser
bookmarks—nothing prevents one from loading up the sidebar with
non-video sites (music services like Spotify, popular destinations
like LWN.net, etc.). Flash videos do indeed play, although playback
depends on having a Flash plugin installed. The official
plugin, of course, is a proprietary binary blob, which not every Linux
user will be thrilled with. The grim reality, however, is that
avoiding the proprietary Flash plugin remains a process fraught with
difficulty.
There are alternatives, albeit imperfect ones. If one visits the
YouTube HTML5 trial page,
for instance, one can switch on delivery of the HTML5
<video> element instead of Flash, and enjoy YouTube content in
WebM or H.264 format. Getting to the trial page within Miro
is not simple, unfortunately: one can only navigate to the URL
directly by adding it as a sidebar entry (and the only other option is
logging in to a registered YouTube account and changing the account
settings). At present, however, YouTube remains one of the few video
services that offers an HTML5 option. Ironically, it is also one of
the few video sites supported by the other option: the open source
Flash plugins Gnash and Lightspark.
In practice, visiting most Flash-driven video sites in Miro is
little different than visiting them in Firefox or another browser.
Since fewer and fewer services seem interested in distributing video
over RSS or Atom, preferring instead to draw the user into their own
little captive portals, offering a better-than-browser experience is a
challenge that Miro will have to rise to meet. The application does
do its best to parse content that does not come in standard feed
form—for example, one can add a YouTube "channel" or search
results page to the Podcasts sidebar, and Miro will attempt to pick
out its video content and download it, but the results are unreliable.
PCF also takes a firm stance against what is perceived as illegal
activity: it will not allow the user to download videos embedded in a
web page, nor is there any way to add a browser extension which adds
such functionality.
Other than the long-awaited Flash fix on Linux, the 6.0 release of
Miro is primarily a performance upgrade. Whether or not there are
real-world speed gains can be hard to gauge, particularly on Linux
where previous releases only handled feed-delivered video (where
the speed of one's Internet connection is the biggest factor), but in
my testing 6.0 is certainly no slower than its predecessors, and it is
much less crash-prone. The side features, such as video conversion
and portable device synchronization, have received updates as well, so
they support newer phones and media codecs. Miro 6.0 also restores
compatibility with the MythTV DVR platform; MythTV includes a tool
called MiroBridge
that treats Miro downloads as a virtual channel on a MythTV backend.
MiroBridge broke on Linux during the Miro 4.x cycle, and PCF only implemented
the fix for 6.0, skipping the 5.0 series altogether.
Although Miro does offer a smooth playback experience, it has
its drawbacks, starting with the embedded browser. Much as there is
no simple way to download a video from a web site in Miro's browser,
there is no way to install an ad-blocking extension or tweak other
settings. Thus, users for whom annoying web ads are usually a distant
memory may be shocked by the sensory assault of giant, animated
commercials plastered in every corner. There also appears to be no
way to add music download stores that are not built-in by default
(perhaps because PCF uses referral codes from store purchases as a
means of fundraising).
Miro also grabs the first media file it finds from each podcast
episode, which is not always desirable. Sometimes the user might
prefer one of the "fallback" options, either due to the format (e.g.,
Theora instead of H.264) or due to file size (not everything
must be seen in HD).
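For comparison, here is a hypothetical sketch of a more selective
approach using the feedparser library; the feed URL and the MIME-type
preference list are invented for the example:

    import feedparser

    PREFERRED = ("video/ogg", "video/webm")   # e.g. prefer Theora/WebM

    feed = feedparser.parse("https://example.com/videocast.rss")
    for entry in feed.entries:
        enclosures = entry.get("enclosures", [])
        if not enclosures:
            continue
        # Take the first enclosure in a preferred format, falling
        # back to the feed's first offering otherwise
        chosen = next((e for e in enclosures if e.get("type") in PREFERRED),
                      enclosures[0])
        print(entry.title, "->", chosen.get("href"))

Miro's behavior, as described above, amounts to always taking
enclosures[0].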
The $64,000 question: who cares?
Taken on its merits alone, Miro is a capable application. The
integrated MiroGuide service run by PCF is an excellent way to find
watchable and listenable content online amidst all the garbage, and
Miro itself greatly simplifies managing and watching video (or audio)
podcasts. Its player works well, offering full-screen functionality
and a unified interface even for disparate sources. It even boasts
simple format conversion and can synchronize portable devices.
Watching video and listening to podcasts in Miro is a noticeably
better experience than accessing the same material in the
browser—just like reading blog content aggregated into a feed
reader is noticeably better (for most people) than visiting the same
blogs in the browser.
Nevertheless, Miro remains a comparatively little-known
application, even among those who passionately mourn the impending
demise of Google Reader. Perhaps the amount of content each reader
consumes is part of the difference; I have 623 blog subscriptions at
the moment, which dwarfs by several orders of magnitude the number of
video and audio podcasts I have time for. On the other hand, video
fanatics have plenty of other choices available, from set-top boxes
like the Roku to open source media centers like MythTV or XBMC (which
typically rely on third-party developers to maintain screen-scraper code
that extracts video from web sites, and that periodically breaks).
Miro may simply be stuck in the middle: better than a browser (which
gets the casual viewer by default), but less convenient than XBMC
(which captures the obsessive viewer).
Miro's biggest obstacle, however, is something entirely beyond its
control: the trend of commercial video sites to move away from RSS
feed delivery towards self-contained, browser-only video. The trend is
not technical in its motivations; businesses are rejecting
feed-delivered video for financial reasons. But it still puts Miro in
a difficult position. On a technical level, Miro for Linux may soon
face yet another obstacle, since Adobe has decided to discontinue the
NSAPI version of the Flash plugin for Linux. Perhaps switching to
Chromium's embedded browser engine would help, perhaps Gnash and
Lightspark will see renewed interest, or perhaps Mozilla's Shumway will make progress. But whatever
happens, eventually a major shift will have to be made in order to
retain Flash video playback on Linux. We are, of course, told that
Flash is dying, but its longevity is legendary enough by now that few
are willing to hold their breath and wait.
Miro may occupy a small niche, caught between the existing web
browser experience on one hand and the media center experience on the
other, but a small niche does not make a project unimportant. Miro's
adeptness with video feed management is still unique; in comparison,
the RSS feed support in XBMC and MythTV is awkward and second-class.
The closest parallel is probably to audio podcasting clients like
gPodder. To many users, the
podcasting support in generic desktop audio players like Rhythmbox is
good enough. But it isn't great, and gPodder still attracts
plenty of users who prefer the first-class feed support. Miro's early
releases, too, proved that if you can do one thing and do it well,
users will appreciate you. The restoration of Flash video support in
6.0 muddies the water a bit, but it does not detract from Miro's core
strength.
By Jake Edge
May 8, 2013
Parallel computing is at least part of the future of
computing. When even fairly low-end phones have multiple cores, it is
clear where things are headed. In the future, hardware with even more
parallelism will be available, but parallel programming is not keeping
up. Part of the problem is the lack of inexpensive, highly parallel
hardware—which is exactly what Andreas Olofsson and his team
at Adapteva have set out to address.
In his keynote at the 2013 Linux
Foundation Collaboration Summit (LFCS), Olofsson described the
Parallella—a
$99 supercomputer that Adapteva ran a Kickstarter project to design and
build. He began by noting that his keynote followed one from Samsung,
the world's largest electronics company, while he was representing
"perhaps the smallest" electronics company. Olofsson is not only the
CEO, he is also one-third of
the engineering team, he said with a chuckle. Despite being so small, the
company was able to build a 64-core 28nm chip that runs 100 GigaFLOPS at 2
watts.
Adapteva created that chip two years ago and went around the world trying
to sell it, but that effort was met with "crickets", he said. The world
just wasn't ready yet, so that's where the Kickstarter idea came from.
Adapteva is turning into a systems company because "people want computers,
they don't just want chips".
Energy efficiency has not kept up with Moore's law, which creates an
"efficiency gap" that impacts both battery powered and plugged-in systems,
he said.
The gap has only started to widen in the last few years, which is why most
people don't care—yet.
But it is a
problem that is getting worse and worse and will be quite severe by 2020 or
2025. We have to start thinking differently, he said.
The architecture that everyone wants is an infinitely fast single-threaded
CPU with infinite memory and infinite bandwidth to that memory, which is,
of course, impossible to have. We keep trying new things when we hit
various limitations, scaling up the processor frequency, issuing multiple
instructions per cycle, adding SIMD (single instruction,
multiple data), and eventually adding more cores. "Now what?", he asked.
We are "running out of tricks". We have seen this play out in servers and
desktops, and are seeing it play out in mobile "as we speak".
When he started the company in 2008,
there were a number of chip-design trends that he believed would shape the
future of computing: things like power consumption, memory bottlenecks,
thermal density, yield issues, software complexity, and Amdahl's law. But
we have a very different example of computing that is close to hand: our
brains.
Whenever you look at the brain, you realize that what we have been
designing is so primitive in comparison to what nature has already
accomplished, he said. It is massively parallel (billions of neurons),
energy efficient (30 watts),
heterogeneous (different parts of the brain handle different functions),
and robust (losing a small part generally does not shut the whole thing
down).
The "practical vision" for today is heterogeneous, Olofsson said. We have
systems-on-chip (SoCs) that can combine different kinds of functional
units, such as "big" CPUs, GPUs, and FPGAs. But, we won't see 1000 ARM
cores on a single SoC, he said. What Adapteva has done is to add hundreds,
or even thousands, of small RISC CPUs into an SoC with ARM or x86 cores as
the main processor.
Programming
The current state of parallel programming is not up to the task of writing
the code that will be needed for massively parallel systems of the future.
The challenge is to make parallel programming as productive as Java or
Python is today, Olofsson said. We should get to the point where GitHub
has thousands of projects that are using parallel programming techniques
that run on parallel hardware.
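As a baseline for what "productive" means today, consider how little
ceremony an embarrassingly parallel task needs in Python's standard
library; scaling this style of code across hundreds of small cores is
the harder problem Olofsson describes:

    from multiprocessing import Pool

    def work(n):
        # stand-in for a real per-item computation
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool() as pool:              # one worker per CPU core
            results = pool.map(work, range(10000, 10016))
        print(results)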
To get to this world of parallel ubiquity,
the challenges are "immense". The computer ecosystem needs to be rebuilt.
Billions of lines of code need to be rewritten, and millions of programmers
need to be re-educated. In addition, the computer education curriculum
needs to change so that people learn parallel programming in the first year
of college—or even high school.
In his mind, there is no question that the future of computing is parallel;
"how else are you going to scale?". Where else can you get the next
million-times speedup, he asked. But there is a question of "when?" and
it is "going to hurt before we get there". There is no reason not to start
now, though, which is where some of the ideas behind Parallella come from.
Inspired by the Raspberry Pi, Parallella is a $99 credit-card-sized
parallel computer. Adapteva is trying to create a market for its chip, but
it is also "trying to do good at the same time". The only way to build up
a good-sized community around a device like Parallella is to make it open,
"which is hard for a hardware guy", he said. But that makes the platform
accessible to interested hackers.
Parallella is cheap and easy to use, Olofsson said. He wishes Adapteva
could sell a million of them (like Raspberry Pi), but thinks that's
unlikely. If it sold 10,000 and people took them and did innovative things
using Parallella, that would be a big success.
The Kickstarter campaign sought to raise $750,000 in 30 days, and
succeeded. He and his team have spent the last six months trying to deliver
what had been promised. The goal was a 5-watt computer that could do
100 GFLOPS, which was achieved. There are many different applications for
such a device, including software-defined radio, ray tracing, image
processing, robotics, cryptography, gaming, and more.
The system has a dual-core Cortex-A9 processor that runs Linux. It has
most of the peripherals you would expect, but it also has a built-in FPGA.
That device can be configured to "do anything" and is meant to augment the
main processor. In addition, there is Adapteva's Epiphany
co-processor, which brings many small RISC cores to the system.
At the time of the talk, it had only been five days since Adapteva had
received the boards. Both 16-core
and 64-core Epiphany versions were delivered (Olofsson is holding one
of each in the photo above). Each consumes around 5 watts and he is
"excited to get them in the hands of the right people". By the second day,
the team could run "hello world" on the main CPU. Earlier in the
day of the talk (day 5), he heard from another member of the team that it
could talk to the Epiphany and run longer programs on the A9. Six months
ago, he didn't know if you could actually build this type of system in
credit card form factor with a $99 price point, but it can be done.
Now Adapteva needs to ship 6300 systems to people who donated to the
Kickstarter, which is no easy task. There are some serious logistics to be
worked out, because "we want people to be happy" with their boards.
Adapteva also wants to give away free kits to universities. Building a
sustainable distribution model with fewer than five people in the company
will be challenging. It is running a little late, he said, but will be
shipping all of the boards in the (northern hemisphere) summer.
Olofsson concluded with Adapteva's plans after all the shipping and
handling: next up is "massive parallelism" with 1024-core Epiphany
co-processors. It will be interesting to see what folks will do with
that system when it becomes available.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: IBM's homomorphic encryption library; New vulnerabilities in kernel, mesa, phpmyadmin, xen, ...
- Kernel: What's coming in 3.10, part 2; Full tickless; LLVMLinux.
- Distributions: Defining the Fedora user base; Debian, openSUSE Edu Li-f-e, ...
- Development: Glibc; Adobe's CFF rasterizer; Geary crowdfunding; spreading the word about your code; ...
- Announcements: OSI Board Changes, VP8 Patent Cross-license Agreement, Flock, ...