The Signpost
Single-page Edition
WP:POST/1
26 November 2016

News and notes
Arbitration Committee elections commence

In the media
Roundup of news related to U.S. presidential election and more

Blog
The top fifteen winning photos from Wiki Loves Earth

Gallery
Around the world with Wiki Loves Monuments 2016

Featured content
Featured mix

Special report
Taking stock of the Good Article backlog

Op-ed
Fundraising data should be more transparent

Traffic report
President-elect Trump

 

2016-11-26

Arbitration Committee elections underway

By Tony1
    There will be no counting of paper ballots in this Wikipedia election.

    The annual elections for the English Wikipedia's Arbitration Committee opened Monday 21 November and will last for two weeks until 23:59 UTC Sunday 4 December. ArbCom is the peak body for imposing binding solutions to the site's editor-conduct disputes, and is itself governed by the arbitration policy. Arbitration is generally the last avenue of dispute resolution, and over the years the administrators' noticeboard has tended to shoulder more of the work that might previously have ended up with the committee.

    The election follows a self-nomination period from 6 to 15 November that yielded 11 candidates for the seven vacant positions, which will run two-year terms (1 January 2017 – 31 December 2018). Candidate statements range from a pithy three words to the maximum 400 words permitted. Q&As for each candidate have revealed one or two interesting snippets.

    Of the six current arbitrators whose terms are about to finish, three are standing at this election for another term, and three are not contesting their seats:

    Eight editors not currently arbitrators are standing, of whom three have previously served on the committee, four have stood unsuccessfully in previous years, and two are new to the process:

    Updated with slight corrections to previous two paragraphs per comment. -Ed.

    The terms of eight of the 15 arbitrators do not expire until the end of 2017, and they will not be involved in the election:

    The Signpost spoke briefly with veteran arbitrator Casliber—now halfway through his third term since 2008—about the challenges and trends of work on the Committee. He says he's generally pleased that each year brings "a quorum of suitable candidates" for election. Arbitrators' workload has become more moderate (in the early days he remembers "getting up and facing 60 or 70 emails a day before breakfast"). However, some of the work has become more complex and ethically challenging as Wikipedia's credibility and authority as a source of information have grown; he says that view-pushing and paid editing—often hidden and sometimes on an industrial scale—have complicated aspects of arbitrators' forensic work and decision-making. Interestingly, Cas has observed a consistent pattern over the years in which new arbitrators soon become more sensitive to the need for a delicate balance between community openness and the protection of individuals' privacy. Like a number of current and previous arbitrators, he would be happy to see the Wikimedia Foundation play an enhanced role in dealing with some of the most difficult individual-related cases that the Committee encounters—but he feels that this is unlikely to happen in the short term.

    The election is being managed by Guy Macon, Mike V, and Mdann52. Candidates are competing through a formula that has been used for many years: the number of supports divided by the sum of supports and opposes for each candidate. There is a neutral option, although voters who wish to strategically give maximum advantage to their supports should avoid neutral votes. Through this formula, the minimum score (support per (support + oppose)) required for election is 0.5 (not itself a percentage, since it does not incorporate undervotes); if not enough candidates achieve this score, fewer than seven seats will be filled.
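
    For readers who want to see the arithmetic spelled out, here is a minimal sketch of that scoring rule; the vote totals are invented for illustration only, and the real tallies come from the scrutineered results.

```python
# Minimal sketch of the ArbCom election scoring rule described above.
# The vote counts here are invented for illustration only.
candidates = {
    "Candidate A": {"support": 320, "oppose": 150},  # neutral votes are not counted
    "Candidate B": {"support": 210, "oppose": 260},
}

def score(votes):
    """Supports divided by the sum of supports and opposes."""
    return votes["support"] / (votes["support"] + votes["oppose"])

# A score of at least 0.5 is required for election; seats are filled
# from the highest-scoring qualifying candidates downward.
qualifying = sorted(
    (name for name in candidates if score(candidates[name]) >= 0.5),
    key=lambda name: score(candidates[name]),
    reverse=True,
)
print(qualifying)
```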

    An official guide to candidates provides basic information about the candidates' positions. Private voting guides listed in the election template are by: Elonka, Biblioworm, SSTflyer, RegentsPark, Guerillero, Collect, Reyk, Carrite, QEDK, BU Rob13, and Tryptofish.

    Immediately following the voting period, a small team of stewards whose main wikis are not the English Wikipedia will check for duplicate, missing, and ineligible votes, and compile a tally of the results. At the time of publication, the link to the instructions for scrutineers provided on the election page is dead. The announcement of the successful candidates is usually posted on the election page within a week after the end of voting.




    2016-11-26

    Roundup of news related to U.S. presidential election and more

By Pete Forsyth and Milowent

    Beyond the Wikipedia bubble, Donald Trump’s “shocking upset” in the U.S. presidential election prompted many news outlets to examine the widespread inaccuracy of expert predictions about the race, and to explore several themes of interest to Wikimedians.

    The day before the election, Creative Commons published a guide to freely licensed and public domain election resources and related media.

    The News Literacy Project, an organization focused on media literacy in U.S. secondary and pre-secondary education, observed in an election day missive that "a bitterly divided nation seemed incapable of agreeing on facts — let alone solutions — for the country’s myriad challenges", and that "it is more vital than ever that the next generation be taught how to discern credible, verified information from raw information, spin, misinformation and propaganda."

    Glenn Greenwald

    There has been much discussion about the role of "fake news websites" and their distribution through social media sites like Facebook and promotion via online advertising platforms. As calls mounted for the social media titan to evaluate news stories, journalist Glenn Greenwald noted that "People are (rightly) skeptical of the state censoring 'bad' viewpoints but (dangerously) eager for unaccountable tech billionaires to do it."

    In “Facebook Doesn’t Need One Editor, It Needs 1,000 of Them”, Mathew Ingram of Fortune advised Facebook to look to Wikipedia for a solution. Ingram cited Wikimedia adviser Craig Newmark’s June 2016 blog post about Wikipedia’s role in journalism. The Harvard Business School paper (discussed in our previous edition’s In the media section, and noted below) might have offered an additional dimension to Ingram’s analysis.

    A Wall Street Journal story, "Most students don't know when news is fake, Stanford study finds", pointed to media literacy as a key skill-set in countering fake news.

    Melissa Zimdars, a communications professor at Merrimack College, published (under a free license) a list of questionable websites, annotated with suggestions for how to evaluate their contents. The list itself was widely shared, and was covered by a number of news outlets. Zimdars then penned an op-ed for the Washington Post, noting, “with some concern, that the same techniques that get people to click on fake or overhyped stories are also being used to get people to read about my own list.” She said: “I’m not convinced that a majority of people who shared my list actually read my list, much as I’m not convinced that many people who share or comment on news articles posted to Facebook have actually read those articles”, and concluded that “while we think about fake news, we need to start thinking about how to make our actual news better, too.”

    The American Civil Liberties Union, which advocates for individual rights, was highly critical of Trump on election day, and highlighted threats it felt he might pose to the freedom of speech provisions of the First Amendment to the United States Constitution, among others.

    The Electronic Frontier Foundation (EFF), which advocates for digital rights, wrote that "the results of the U.S. presidential election have put the tech industry in a risky position", urging technology companies to address several issues before Trump's inauguration in January 2017. Issues raised include permitting pseudonymous access, curtailing behavioral analysis, keeping minimal logs of user behavior, and encrypting data. The Wikimedia Foundation, and standard Wikipedia practices, already perform better than most tech companies on all these issues; in the EFF's 2015 "Who Has Your Back" report, which evaluates tech companies on their data and privacy practices, Wikimedia earned a perfect five out of five stars. A related EFF post highlighted relevant grassroots efforts, while another urged President Obama to "boost transparency [and] accountability" in his final days in office. PF


    Wikipedia may be better at dealing with arguments than the internet at large.

    • Murder evidence: At the opening of the trial of Thomas Mair for the killing of British Member of Parliament Jo Cox, media reported that Mair had viewed the Wikipedia pages of Cox, the far-right publication Occidental Observer, and Ian Gow, the last MP to be murdered (in 1990).
  • More murder: A&E's new documentary series The Killing Season examines unsolved murder cases. Its first episode noted evidence from a Wikipedia edit history, in which an unidentified editor changed the phrase "Gilgo Beach Killer" to the name of a person; the IP address in the edit history belonged to the Suffolk County Police. Personnel from the show followed up on the named person by visiting his house, calling him, recording him, and playing his voice for someone who had presumably been called by the serial killer. The episode first aired on November 12, 2016; the Wikipedia segment begins about 3 minutes in.
  • And even more murder: The second episode explained in more depth (beginning at 39:00) how producers used Wikipedia editing history, and presented screenshots reflecting the edit in question.
  • Fact or fiction: The Wikipedia: Fact or Fiction video series on Loudwire, where artists discuss the accuracy of the information listed on their own Wikipedia biographies, celebrated its 100th episode.
  • Model Internet citizens: Wikipedia researchers Shane Greenstein and Feng Zhu reported in the Harvard Business Review on their recent Wikipedia research. (The Signpost reported on the Washington Post's coverage of the study in our Nov. 4 edition.) The study explores how contributors with different political viewpoints interact, and suggests that we have a "remarkable record" of dealing with differing opinions "without it descending into hate speech and loutish behavior." Compared to the rest of the internet, at least?
  • Wikipedia Records: Reports note that experimental musician Dedekind Cut released the B-side to his latest offering, Successor, on Wikipedia.
  • Wikimedia is officially "a thing": Open education advocate Lorna Campbell blogged about the addition of Wikimedia to the University of Edinburgh’s "23 Things for Digital Knowledge" list, which "aims to expose you to a range of digital tools for your personal and professional development as a researcher, academic, student, or professional."


  • Do you want to contribute to "In the media" by writing a story or even just an "in brief" item? Edit next week's edition in the Newsroom or contact the editor.




    2016-11-26

    The top fifteen winning photos from Wiki Loves Earth

By Jeff Elder

    The following content has been republished from the Wikimedia Blog. Any views expressed in this piece are not necessarily shared by the Signpost; responses and critical commentary are invited in the comments. For more information on this partnership, see our content guidelines.

    First place: The limestone of Stopića Cave, Serbia. Photo by Cedomir Zarkovic.

    The tangy butterscotch glow of a limestone cave in Serbia soaking up the slanting sun. The emerald green of a lush German forest and a pond covered with weeds and trees. The stark white and blue of a Ukrainian snowscape 2,028 meters above sea level, where snowy peaks meet cloud and sky.

    The colors of nature burst from the 15 finalists of Wiki Loves Earth, the photo contest now in its third year of crowdsourcing gorgeous landscapes from more than 13,600 participants. The top 15 photos this year come from Serbia, Bulgaria, Nepal, Estonia, Ukraine, Spain, Austria, Brazil, Germany, and Thailand.

    National judging in 26 regions sorted through 115,000 photos and sent the best to international judges from Ghana, Germany, South Africa, Kosovo, France, India, Estonia, Indonesia, and Bulgaria.

    This year, the contest expanded to include a collaboration with the United Nations Educational, Scientific and Cultural Organization, better known by its acronym UNESCO. Contestants were invited to upload photos in a separate category for UNESCO biosphere reserves in 120 different countries.

    You can see more about Wiki Loves Earth on its website, and this year's jury report on Commons.

    Second place: Pobiti Kamani, Bulgaria, the only desert in Eastern Europe. Photo by Diego Delso.
    Third place: Tangye, Mustang, Nepal. Photo by Patricia Sauer.
    Fourth place: Ahja River, Estonia. Photo by Külli Kolina.
    Fifth place: Biały Słoń, the former Polish Astronomical and Meteorological Observatory, now located in Ukraine. Photo by Khoroshkov.
    Sixth place: Northern Inland Fuerteventura, Spain. Photo by Tamara Kulikova.
    Seventh place: Vääna River, Estonia. Photo by Kristoffer Vaikla.
    Eighth place: Part of the Schladming Tauern. Photo by Jörg Braukmann.
    Ninth place: Tokivske waterfall in Ukraine. Photo by Ryzhkov Sergey.
    Tenth place: Lençóis Maranhenses National Park, Brazil. Photo by Joao lara mesquita.
    Eleventh place: the same location as #5, but taken on a cold winter day. Photo by Taras Dut.
    Twelfth place: Wettenberger Ried, a protected forest in Germany. Photo by Andreas Weith.
    Thirteenth place: Khao Sam Roi Yot National Park in Thailand. Photo by Kosin Sukhum.
    Fourteenth place: Terra Ronca State Park, Brazil. Photo by Rafael Rodrigues Camargo.
    Fifteenth place: Tara in Serbia. Photo by Vladimir Mijailović.





    2016-11-26

    Around the world with Wiki Loves Monuments 2016

By Pine

    The first Wiki Loves Monuments competition was held in 2010 in the Netherlands as a pilot project. The next year it spread to other countries in Europe. According to Guinness World Records, the 2011 edition of Wiki Loves Monuments broke the world record for the largest photography competition. In 2012, the competition extended beyond Europe, with 33 participating countries, more than 350,000 photos, and more than 15,000 participants. The 2016 edition of WLM was supported by UNESCO; contestants from 44 countries submitted 277,899 photos.

    A sampling of the national winners appears below. The coordinating jury expects to announce the worldwide winners in mid-December 2016.




    2016-11-26

    Featured mix

By Armbrust
    The World Before the Flood, an 1828 oil painting by William Etty, depicts a scene from John Milton's Paradise Lost in which, among a series of visions of the future shown to Adam, he sees the world immediately before the Great Flood. The painting illustrates the stages of courtship as described by Milton: a group of men select wives from a group of dancing women, drag their chosen women from the group, and settle down to married life. Behind the courting group an oncoming storm looms, a symbol of the destruction which the dancers and lovers are about to bring upon themselves.

    This Signpost "Featured content" report covers material promoted from 30 October to 12 November.
    Text may be adapted from the respective articles and lists; see their page histories for attribution.

    1907 illustration of Eurasian rock pipits by Henrik Grönvold
    Taylor Swift performing during The 1989 World Tour in Detroit, Michigan, in May 2015.
    Tina Maze is one of the two most decorated post-independence Slovenian Olympians, with four medals.

    Eight featured articles were promoted.

    Two featured lists were promoted.

    Nine featured pictures were promoted.




    2016-11-26

    Taking stock of the Good Article backlog

By Wugapodes

    The GA Trophy awarded at the end of a Good Article Cup

    Wugapodes is a two-time GA Cup participant and WikiCup finalist. Their academic work focuses on the linguistic impacts of group behavior.

    Before an English Wikipedia article can achieve good article status (the entry grade among the higher-quality article rankings), it must undergo review by an uninvolved editor. However, the number of articles nominated for review at any given time has outstripped the number of available reviewers for almost as long as the good article nominations process has existed, and the resulting backlog of unreviewed articles has been a perennial concern. Nevertheless, the backlog at Good Article Nominations (GAN) reached its lowest point in two years on 2 July 2016. The cause was the third annual Good Article Cup (GA Cup), which ended on 30 June 2016; the 2016–2017 GA Cup, its fourth iteration, began on 1 November and is ongoing. The GA Cup is the GA WikiProject's most successful backlog reduction initiative to date, but there is a problem that plagues this and all other backlog elimination drives: editor fatigue.

    The backlog at GAN has been growing ever since the process was created, with fluctuations and trends along the way. If the GA Cup, or any elimination drive, is to be successful, it must at some point begin to treat the cause, not simply the symptom. While the GA Cup has done a remarkable job of reducing the backlog, for long-term success the cause of the backlog needs to be understood. The cause appears to be editor fatigue, with boom-and-bust reviewing periods in which the core group of reviewers tries to reduce the backlog and then tires out, causing the backlog to rebound. This is the chief benefit of the GA Cup: its format helps counteract the cycle of fatigue with a long-term motivational structure.

    The GA Cup is a multi-round competition modeled on the older and broader-purpose WikiCup (which has run annually since 2007 and concluded this year on 31 October). Members of the GA WikiProject created the GA Cup as a way to encourage editors to review nominations and reduce the backlog through good-natured competition. Participants are awarded points for reviewing good article nominations, with more points being awarded the longer a nomination has languished in the queue. Each GA Cup sees a significant reduction in the number of nominations awaiting review. On this metric alone the GA Cup is a success; but counting raw articles awaiting review only gives insight into what happens while the GA Cup is running, ignoring the origin of the backlog and masking ways in which the GA Cup can be further improved.

    The GA Cup's predecessors, backlog elimination drives, lasted only a month, while the GA Cup lasts four. The time commitment alone can be a source of fatigue, but the mismatch between the time taken to review and the ease of nomination can also lead to an unmanageable workload: a good article review nominally takes 7 days, so if the rate of closing reviews is less than the rate at which nominations are added, the backlog will not only increase, but the number of reviews being handled by a given reviewer will balloon, causing them to burn out by the end of the competition. The well-known post-cup backlog spikes demonstrate the often temporary nature of GA Cup gains.

    With proper information and planning, the GA Cup can begin to treat the cause of the backlog rather than the symptom and succeed in sustaining backlog reductions after its conclusion.


    A history of the Good Article project

    Good articles can be identified by a green plus symbol. The plus-minus motif was not the first suggested; other ideas included a thumbs up, check mark, or ribbon.

    The Good Article project was created on 11 October 2005 "to identify good content that is not likely to become featured". The criteria were similar to those we have now:



    At first, the project was largely a list of articles individual editors believed to be good: any editor could add an article to the list, and any other editor could remove it. This received significant pushback, with core templates {{GA}} and {{DelistedGA}} receiving nominations for deletion on 2 December 2005 as "label creep" and a suggestion that the then-guideline should be deleted as well. They were kept, but, after discussions, the GA process received a slight tweak: while editors could still freely add articles they did not write as GAs, those wishing to self-nominate their work were referred to a newly created good article nomination page.

    While the first version of the Good Article page told editors to nominate all potential Good Articles at Wikipedia:Good article candidates (now Good Article Nominations), that requirement was removed 10 hours later. The current process was not adopted until a few months later. In March 2006 another suggestion was made:


    The next day the GA page was updated to reflect this new assessment process, and the nominations procedure was extended to all nominations, not just self-nominations.

    From there on, the nomination page continued to grow. The first concerns over the backlog were raised in late 2006 and early 2007, when the nomination queue hovered around 140 unreviewed nominations. In May, the first backlog elimination drive was held, lasting three weeks. The drive saw a reduction in the backlog from 168 to just 77 articles. This did not last, however, with the backlog jumping back up to 124 a week later. The next backlog drive was held from 10 July to 14 August, with 406 reviews completed—but a net backlog reduction of just 50, leaving 73 articles still needing review. Another drive planned for September was canceled due to perceived editor fatigue. Backlog elimination drives have been held at irregular intervals ever since, with the most recent during August 2016. These drives were "moderately successful", to quote a 2015 Signpost op-ed by Figureskatingfan:



    With a looming backlog of more than 450 unreviewed articles by August 2014, a new solution was sought: the GA Cup. Figureskatingfan, who co-founded the cup with Dom497, writes of its creation:

    I was in Washington, D.C., at the Wikipedia Workshop Facilitator Training in late August 2014. While I was there, I was communicating through Messenger with another editor, Dom497. We were discussing a long-standing challenge for WikiProject Good Articles—the traditionally long queue at GAN. Dom was a long-time member of the GA WikiProject. This impressive young man created several projects to encourage the reviewing of GAs, most of which I supported and participated in, but they all failed. I shared this dilemma with some of my fellow participants at the training, and in the course of the discussion, it occurred to me: Why not follow the example of the wildly successful and popular WikiCup, and create a tournament-based competition encouraging the review of GAs, but on a smaller scale, at least to start?

    I was literally on the way to the airport on my way home, discussing the logistics of setting up such a competition with Dom. By the time I got home, we had set up a preliminary scoring system and Dom had created the pages necessary. We brought up our idea at the WikiProject, and most expressed their enthusiastic support. We recruited two more judges, and conducted our first competition beginning in October 2014.


    — Figureskatingfan


    A history of the backlog

    The GAN backlog, 10 May 2007 to 25 June 2016.

    Over the last nine years, the GAN backlog has grown by about three nominations per month on average—the solid blue line above. Backlog levels are almost never stable: large trends cause the backlog to fluctuate above and below this long-term average, and those trends in turn have their own fluctuations, with local peaks and valleys along an otherwise upward or downward trend. What causes these fluctuations? For the three declines after 2014, the answer is relatively simple: the GA Cup. But what about the earlier declines?
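
    As a rough illustration of how such a long-term trend can be estimated, the snippet below fits a simple linear regression to a daily backlog series; the data here are synthetic stand-ins, not the actual GA Bot figures.

```python
# Sketch: estimating the long-term GAN backlog trend with a linear fit.
# The series below is synthetic; the real counts come from the GA Bot's logs.
import numpy as np

days = np.arange(9 * 365)  # roughly nine years of daily samples
rng = np.random.default_rng(42)
backlog = 150 + 0.1 * days + 40 * np.sin(days / 90) + rng.normal(0, 15, days.size)

slope_per_day, intercept = np.polyfit(days, backlog, 1)
print(f"Estimated trend: about {slope_per_day * 30:.1f} nominations per month")
```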

    The most obvious hypothesis is that the drops coincide with the backlog elimination drives, but this is not sufficient. While most backlog drives coincide with steep drops in the backlog, those that do are clustered in the early years of GAN, before it was as popular as it is now: it is easier to make a significant dent in the backlog when only a couple of nominations are coming in per day than when ten or more are. Indeed, the last three backlog drives had a marginal impact, if any. More obviously, not all drops in the backlog stem from backlog elimination drives. Take, for instance, the reduction in the backlog in mid-2008—a reduction of 100 nominations without any backlog drive taking place. Similar reductions occurred thrice in 2013. In fact, the opposite effect has also been seen: the two most recent backlog drives occurred during natural backlog reductions, and didn't accelerate things by much. If elimination drives are not, taken together, the sole cause at play, there must be some more fundamental cause that accounts for all the reductions seen.

    A better explanation comes from the field of finance: the idea of support and resistance in stock prices. For a stock, there is a price that is hard to rise above—a line of resistance—and a price that is hard to fall below—a line of support. These phenomena are caused by the behavior of investors: when a stock price rises above a certain point, investors sell, causing the price to fall; conversely, when the price falls to a certain point, investors buy, causing the price to rise.

    Does this apply to good article reviews as well? By analogy, imagine GA reviewers as investors and the backlog as a stock price. When the backlog rises to a certain point, GA reviewers collectively think the backlog is too large and so begin reviewing at a higher pace to lower it—a line of resistance. When the backlog falls to a certain point, reviewers slow down their pace or get burned out, causing the backlog to grow—a line of support. This makes intuitive sense. The impetus behind most backlog elimination drives is a group of reviewers thinking the backlog has grown too large. The backlog elimination drives then are just a more organized example of reviewers picking up their pace.

    If this hypothesis is correct, then backlog reduction initiatives should be held during the low tide, to encourage weary reviewers, rather than during the high, when they are more likely to review nominations anyway, initiatives notwithstanding. But how can we tell where these lines of support exist and when the backlog is likely to bounce back? Economists and investors have found the moving average to be a useful tool for describing lines of support and resistance in stock prices, so perhaps it can be useful here. In the graph above, the dashed red line represents a 90-day simple moving average. It seems to capture the lines of support and resistance for the backlog well: most local peaks tend to bounce off of it, while major trend changes pass through it.
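
    A 90-day simple moving average of the kind plotted in the graph can be computed in a few lines; the backlog series below is again a synthetic placeholder for the real daily counts.

```python
# Sketch: a 90-day simple moving average as a proxy for support/resistance levels.
import numpy as np

def simple_moving_average(series, window=90):
    """Mean of each consecutive `window`-day slice of the series."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

rng = np.random.default_rng(7)
backlog = 300 + np.cumsum(rng.normal(0, 3, 1000))  # placeholder daily backlog counts
sma90 = simple_moving_average(backlog)

# Days where the raw series crosses its 90-day average hint at trend changes;
# repeated bounces off the average suggest a line of support or resistance.
aligned = backlog[89:]  # align the raw series with the valid SMA region
crossings = np.where(np.diff(np.sign(aligned - sma90)) != 0)[0]
print(f"{crossings.size} crossings of the 90-day average")
```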

    An example of the utility of this theory can be seen in early 2009. The backlog began to fall naturally in January, but was about to hit a line of support that might have allowed the earlier upward trend to resume. However, a backlog drive took place in February, causing an even steeper decline in the backlog and pushing it past that line of support. Unfortunately, the full impact of this cannot be assessed, as the data for April to November 2009 were never recorded by the GA Bot.


    The impact of the GA Cup

    The backlog over the last three years.

    After almost a year without backlog drives in 2013, followed by two rather unsuccessful ones, the GA Cup was started. Over the past two years, three GA Cups have been run, all with robust participation and significant reductions in outstanding nominations. But is the Cup succeeding? To answer that question I looked at the daily rates of new nominations, closed nominations, nominations passed, and nominations failed during each of the GA Cups, and compared them to the rates before and after the first GA Cup.

    The reduction in the backlog is obvious: each cup correlates with a steep drop in the number of nominations, the most effective being the third GA Cup, which concluded on June 30 this year. The most recent GA Cup reduced the backlog by about two nominations per day, completing 92 more nominations than the first GA Cup despite being significantly shorter. The third GA Cup was lauded as a success.

    Yet in late April, the backlog reduction began to stagnate. The number of nominations added remained relatively stable over this period, but it coincided with a drop in the number of nominations being completed. In early May the backlog began to rise, crossing over the line of resistance in the process, before shrinking again towards the end of May, with a distinct downward trend by June.

    Backlog during the third GA Cup with a 15-day simple moving average

    Ultimately, the best way to conceptualize the GA review backlog is, to borrow another concept from finance, as a mismatch between the "supply" of reviewers and the "demand" for reviews. The number of nominations—the demand—is relatively consistent, at about 10 nominations per day. There is a mild decrease in the rate of nominations—the daily rate falls by about one nomination every two years—but, all in all, it is relatively stable.

    Measuring supply is more difficult. The change in the backlog is equal to the number of nominations added minus the number of reviews opened, so if the average demand is 10 nominations per day and the average supply of reviews is 0, the backlog would grow by 10 nominations each day; if the supply were 5, it would grow by 5. In other words, the average number of nominations minus the average number of reviews equals the average change in the backlog. Since the average change in the backlog (the linear regression) and the average number of nominations are both known, the average supply can easily be calculated: it turns out to be about six per day. Combined with the aforementioned demand of ten, that implies a net increase in the backlog of about four nominations each day. And since this analysis includes the GA Cup periods, the backlog is actually increasing at an even higher rate whenever a Cup isn't active!
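
    In code, that bookkeeping is just a rearrangement of change = demand minus supply; the sketch below uses made-up daily figures chosen to mirror the numbers quoted above, not the real GAN data.

```python
# Sketch of the supply/demand arithmetic described above, with invented numbers.
daily_nominations = [10, 11, 9, 10, 12, 10, 8]        # "demand": new nominations per day
daily_backlog = [400, 404, 409, 412, 416, 421, 425]   # backlog count at the end of each day

avg_demand = sum(daily_nominations) / len(daily_nominations)
avg_change = (daily_backlog[-1] - daily_backlog[0]) / (len(daily_backlog) - 1)

# change = nominations added - reviews opened, so supply = demand - change.
avg_supply = avg_demand - avg_change
print(f"demand ~ {avg_demand:.1f}/day, change ~ {avg_change:.1f}/day, supply ~ {avg_supply:.1f}/day")
```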

    Backlog from the end of the second GA Cup to the end of the third GA Cup. The blue line indicates when the third GA Cup was announced and the green line when it began.

    The number of open reviews does not inspire much confidence either. Open reviews drop dramatically after each GA Cup, likely due to participant burnout. Interestingly, the number of open reviews also drops before each GA Cup, causing a counterproductive uptick in the backlog. In fact, the drop just before this year's cup coincided with the announcement of the competition's start date a month prior; the announcement came at a time when the number of reviews was increasing and the backlog was naturally starting to decline.

    All told, these are not fatal flaws, as the GA Cup is succeeding despite them in other ways. Most obviously, the backlog has been decreasing during cups, and review quality doesn't seem to decline either. Comparing the five months before with the four months during the first GA Cup, there is no significant difference between the pass rates before and during the GA Cup (t(504.97) = −1.788, p = 0.07). In fact, the pass rate may have decreased slightly, from 85% beforehand to 82% during the cup; because the p-value is close to significance, the idea that GA Cup reviewers are more stringent may be worth examining further.
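
    The comparison above reads like a two-sample t-test on per-nomination pass/fail outcomes (the fractional degrees of freedom suggest a Welch-style test). Here is a sketch of that kind of check on simulated outcomes rather than the actual review data.

```python
# Sketch: comparing pass rates before and during a GA Cup with Welch's t-test.
# The 0/1 outcomes are simulated; the article's figures come from real reviews.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.binomial(1, 0.85, size=300)   # assumed ~85% pass rate before the cup
during = rng.binomial(1, 0.82, size=260)   # assumed ~82% pass rate during the cup

t_stat, p_value = stats.ttest_ind(during, before, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would mean no significant difference at that threshold.
```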

    This is not to say that there is no other way to examine review quality. Reasonable minds can disagree on how well this metric describes the quality of reviews, and concerns about review quality have been raised a number of times, but it is a reasonable starting point for this analysis. We now know that the GA Cup does not lead to "drive-by" passes, and that any problems with unfit articles passing or fit articles failing occur at about the same rate as normal. Hopefully, then, any solutions can be more general, improving the quality of all reviews, rather than being specific to the GA Cup.

    Conclusions

    The GA Cups have been effective at encouraging editors to complete GA reviews. Their effect on the cause of the backlog, on the other hand, is less clear. Long-lasting backlog reductions require a nuanced approach: recruiting more reviewers, finding the correct timing, and giving proper encouragement. The GA Cup is arguably already successful at encouragement, but that does not mean the other aspects cannot be improved as well.

    The GA Cup has so far been held at times when reviewers were already increasing their efforts to reduce the backlog, and the announcement of the third GA Cup, for instance, caused those efforts to stagnate. By allowing these natural reductions to take place, and then holding the GA Cup when editors get burnt out, we can leverage the Cup's morale boost to help reduce the backlog even further.

    Furthermore, while there was no good way to analyze how well the GA Cup recruits new reviewers, anecdotally it seems to do so. Bringing in new reviewers when the regulars are getting burnt out would reduce the backlog rebound in the short term, and may lead to an increase in the number of regular reviewers in the long term.


    The organizers of the GA Cup understand that what is most needed is more reviews and more reviewers, and the GA Cup has done an admirable job of recruiting both. The third GA Cup has been the most successful so far, and hopefully the next cup will surpass it in all metrics.




    2016-11-26

    Fundraising data should be more transparent

By Lodewijk Gelauff
    If you live in an English-speaking country, you may see these banners soon!

    Fundraising season is coming up for the Wikimedia Foundation! If you live in an English-speaking country, you will probably be asked to donate the price of a rather expensive cup of coffee to keep our servers running. Fundraising has been successful for many years, making use of the goodwill and appreciation of Wikipedia's readership. And that’s a good thing.

    At the same time, a greater effort could (and should) be made by the fundraising department to support volunteers throughout the movement, by improving communication and sharing more country-level data and information. This could help to avoid conflicts between the Foundation and volunteers, and instead could facilitate them in their public-facing and outreach activities.

    It's generally accepted that Wikipedia stands or falls through the involvement of its volunteers. Volunteers write articles, improve them, categorize them, make them look good, correct spelling mistakes and improve grammar, and do all of the editing that goes into creating an encyclopedia. Similarly, volunteers make up the bulk of the ecosystem that supports the Wikimedia movement as a whole.

    This volunteer capacity is a great opportunity in many ways. With a movement of 80,000 volunteers, we can tap into local expertise through the affiliated organizations and editing communities. Until a few years ago, the fundraising efforts made effective use of this expertise.[1] Nowadays, volunteer involvement seems to be limited to translating banner messages and description pages, if that.

    This is a pity, because I strongly believe that fundraising could more effectively benefit from volunteer involvement: volunteers could help by coming up with alternatives for this cup-of-coffee metaphor that may work much better in their own country, could point out effective payment methods, or identify missing information on the fundraising pages.[2] They could improve the cultural connection of the fundraising messaging.

    But this is not all. For volunteers across the Wikimedia ecosystem to operate optimally, they need tools and information. In this piece, I focus specifically on two ways in which the organization of fundraising could be improved, to facilitate volunteers throughout the movement better.

    Timing

    Apart from the occasional announcement, we don't know for a fact when, and in which countries, the Foundation plans to show banners asking for donations. Apparently it is a challenge for the Foundation to communicate the fundraising schedule well ahead of time, let alone to coordinate it with the main (outreach) activities of editing communities, user groups, thematic organizations, and chapters. However, both fundraising and outreach activities make use of the same resource: the CentralNotice (the banner you see at the top of each page). This lack of communication and coordination makes scheduling clashes unavoidable.

    The solution seems obvious: communicate and coordinate schedules to reduce overlap as much as possible. There has been some initial alignment this year around Wiki Loves Monuments, after a major clash last year in Italy, where fundraising was scheduled at the same time as the main activity of the local chapter. The Foundation did reach out this year to a number of major chapters a few months before the fundraising effort in their country. Some improvement is ongoing, but a scalable and much more timely approach is needed and would benefit both fundraising and outreach activities. Let's run an annual inquiry among all affiliated organizations to identify optimal and problematic periods for fundraising activity in their country, and schedule the year together in advance. With relatively little effort, we can avoid painful last-minute discussions and collisions.

    Sharing country-level statistics

    The WMF fundraising department has only released continent-level statistics since 2012.

    While the recently published Fundraising Report for the year ending June 2016 (previous Signpost coverage) was very useful in sharing high-level trends and decisions and in explaining some of the WMF's research results, this seems a good moment to take a step back and look at how to inform and involve the community more actively.

    A higher standard of transparency is required to enable volunteers to work effectively to support fundraising and to execute other activities. One type of data that volunteers have repeatedly requested is country-level statistics on donations. While the Foundation did publish statistics broken down by country until 2012, it has not done so since: volunteers have to be satisfied with continent-level statistics. The argument made by the Foundation is vaguely defined: “There are a few different reasons why the team may not be able to publish data from a country, including privacy and security and other legal reasons”.[3]

    Whatever these legal reasons may be, I believe they need to be balanced against the benefits of releasing country-level data and/or statistics; this is not just a theoretical discussion for the sake of transparency.

    Wider benefits

    This kind of data could help volunteers to support the fundraising team in their countries. Local volunteers can combine an understanding of trends and the available data with a better understanding of local situations and changes, and so explain the data better. But for that local expertise to be applied, they need to understand the fundraising efforts in their own country. Country-level data could also help volunteers in their other activities for the Wikimedia movement. They could use it in their media and outreach strategy, and to provide context to journalists trying to understand how people in their country contribute to Wikipedia; this is a recurring question in interviews and from new contributors. It is plainly embarrassing for volunteers who advocate for transparent and openly licensed information flows to admit they don't know even roughly how much their movement collects in contributions from their own country. When applying for external funding for their activities, or when advocating to governments on Wikimedia's behalf on values we all share (for example, promoting improved legislation around copyright and access to information), they could use this data to demonstrate active local support and appreciation for Wikipedia and Wikimedia, and to show that the wide appreciation of readers goes beyond just words.

    If the data were detailed enough, especially outside the main fundraising banner season, it could potentially even help affiliates to demonstrate and understand how their activities impact fundraising success, and to learn from it and focus their outreach around it.

    Let's make optimal use of the expertise that our range of volunteers has to offer in our movement for fundraising optimization, and provide our volunteer base with the tools to help our mission in the best way possible! I hope the fundraising and legal departments will work together to see how we can take these improvements, implement them, and help volunteers do what they’re best at.

    Notes

    1. ^ For example, from 2010 through 2012 there was an active (closed) mailing list coordinating the fundraising efforts with volunteers, and a number of chapters had an active role in fundraising within their country, choosing effective language in collaboration with local fundraising experts, hosting locally relevant payment methods, and handling questions from donors.
    2. ^ It should be noted that the Fundraising department did ask for banner suggestions.
    3. ^ Stephen LaPorte (Legal department, Wikimedia Foundation) responded in 2015 and just now, and Seddon (Fundraising, Wikimedia Foundation) responded last month.



    2016-11-26

    President-elect Trump

By Serendipodous and Milowent

    Week of October 30 – November 5, 2016: Asleep at the wheel

    Despite facing what could very well be the most important election since the civil rights era, Americans seem to want to think about anything but politics. Obviously the 2016 election is on people's minds, but not as much as macabre holidays, improbable wins by oft-ridiculed baseball teams, comic book sorcerers and, most tellingly of all perhaps, a melodrama about royalty. Given the responsibility they're about to take on, it's not surprising that democracy isn't a priority for readers at the moment. Still, get in gear guys. It's not like we're not all watching you or anything.

    For the full top-25 lists (and archives back to January 2013), see WP:TOP25. See this section for an explanation of any exclusions. For a list of the most-edited articles every week, see WP:MOSTEDITED.

    As prepared by Serendipodous, for the week of October 30 to November 5, 2016, the ten most popular articles on Wikipedia, as determined from the WP:5000 report were:

    1. Day of the Dead (B-Class; 1,889,902 views)
    Mexico's carnival of the cadaverous, the living dream of any kid who ever wished Halloween could last three days, is the beneficiary of Wikipedia's incurable interest in holidays not routinely celebrated in the US. It's the same reason Boxing Day always charts higher than Christmas on this list. Despite the list covering both holidays' dates, and despite Halloween being boosted by that greatest of Wikipedia flypapers, an interactive Google Doodle, the Day of the Dead's grim fandango still beat the latter's monster mash. It would only just lose even if we added Halloween's numbers from last week to that holiday's total.
    2. Halloween (B-Class; 1,558,776 views)
    Whatever happened to the Transylvania Twist?
    3. Doctor Strange (film) (C-Class; 1,077,855 views)
    Marvel Studios continue their roll. Their attempt to bring their unashamedly psychedelic superhero into the earthier realms of the Marvel Cinematic Universe has apparently paid off, with a 90% RT rating and an $84 million opening, no doubt aided by the international star power of a certain Benedict Cumberbatch.
    4. Chicago Cubs (C-Class; 1,030,619 views)
    The American baseball team had not won a World Series since 1908, but managed it this year, beating the Cleveland Indians 8–7. Turns out Back to the Future II was only off by a year.
    5. Huma Abedin (C-Class; 1,021,942 views)
    Views of this top adviser to Hillary Clinton started to rise on October 28 and remained high for most of the week. This is probably related to Clinton-related emails allegedly being found on the laptop of Abedin's estranged husband, Anthony Weiner; a subject of much sound and fury, but ultimately signifying nothing.
    6. Ae Dil Hai Mushkil (Start-Class; 994,767 views)
    This Indian romantic film, whose cast includes Aishwarya Rai, had its debut on October 28 (Diwali weekend).
    7. Donald Trump (C-Class; 949,709 views)
    For someone in imminent danger of becoming the next President of the United States, you'd think his numbers would be higher. But they're not significantly up from last week, and they're significantly DOWN from two weeks ago. Is this a sign? I don't know.
    8. Curse of the Billy Goat (Start-Class; 949,092 views)
    Legend has it that in 1945 the owner of the Billy Goat Tavern was asked to leave Wrigley Field because his pet goat's smell was bothering fans, and proclaimed that "Them Cubs, they ain't gonna win no more." And that's why the Cubs didn't win a World Series until this week. The moral of the story, children, is that people will make up any piece of boondoggle to rationalise a bad situation.
    9. Meghan Markle (Start-Class; 864,425 views)
    The fact that this American mixed-race actress may be dating the fifth in line to the British throne has raised some fairly awkward questions in the British press, like whether the situation would be the same if she'd dated Prince William. Keep in mind this is the same Royal Family that nearly collapsed because the heir to the throne wanted to marry an American divorcee. Personally, I think the whole lot's an outdated anachronism anyway, so I couldn't care less.
    10. Elizabeth II (Featured Article; 822,254 views)
    The longest-reigning British monarch in history is bound to draw attention whenever the British Royal Family becomes a topic of interest, but this week she gets an additional boost from her portrayal in The Crown, a $100 million melodrama about her early years in which she is played by Claire Foy.

    Week of November 6–12, 2016: President-elect Trump

    See also our Special Traffic Report: The U.S. Presidential Election, analyzing election-related traffic from June 2015 to November 2016.
    I'll be taking a permanent spot on the charts, thank you.

    In the early morning of November 9, news reports announced that Donald Trump (#1) had won election as the 45th President of the United States, in one of the most oddball political victories of all time. And of course, he leads the chart this week with 12.3 million views, compared to only 2.64 million for his opponent, Hillary Clinton (#6). Trump's numbers are the second-highest seen since we started the Top 25 in 2013 (the record was set in April 2016 when Prince died).

    Clearly this is a momentous event in United States politics, at least in the Age of Wikipedia. In comparison, when Barack Obama was first elected in November 2008, his article received only 4.99 million views in the week of the election, compared to 1.08 million for his opponent, John McCain. (Although mobile viewcounts were not captured then, mobile views were not a large portion of traffic in 2008.) This 5-to-1 view ratio is similar to the Trump–Clinton ratio we see in this week's report. See also User:Andrew Gray/Election statistics for an in-depth analysis of 2008 statistics done shortly after that election. In 2012 (when mobile viewcounts were a larger portion of traffic than in 2008 but still not captured by stats.grok.se), Obama beat Mitt Romney in election-week views by 2.04 million to 1.78 million.

    Views in the week before and the week of the election, for 2008, 2012, and 2016.

    Nine of the top 10 slots this week are election-related, with only Queen Elizabeth II (#8) breaking the run, thanks to the great success of The Crown television series. The Crown also propelled other British royal figures into the Top 25 with impressive view numbers. In all, nineteen of the Top 25 articles are election-related, a new record for articles related to a single topic in one week.

    The most notable death, which would probably have been #1 in any other week, was that of cult songwriter Leonard Cohen (#13). This week's chart is also astounding because every article in the Top 25 exceeded one million views; we have never come close to that level of traffic before among the top-viewed articles, as usually only a few of the top articles in a given week reach that level. And for the first time since we began this report in January 2013, Deaths in 2016 was knocked out of the Top 25, placing at #34. So we've provided an extended list for #26–35 at the bottom of the chart, many of which are also election-related.

    Please note that this report refrains from making any strong editorial comments about Donald Trump; no conclusions should be drawn from that decision. The press in the United States and around the world is reporting heavily on the meaning and effect of Trump's election. Just don't get your news and commentary from fake news sites posted to Facebook.

    Also, please see our SPECIAL REPORT on the U.S. presidential election, tracking the popularity of Donald Trump's and Hillary Clinton's articles over the whole campaign cycle, from June 2015 to November 2016. As detailed there, attention and enthusiasm for Donald Trump far exceeded those for Clinton across the board. Perhaps this was an overlooked indicator of Trump's chances of success.

    For the full top-25 lists (and archives back to January 2013), see WP:TOP25. See this section for an explanation of any exclusions. For a list of the most-edited articles every week, see WP:MOSTEDITED.

    As prepared by Milowent, for the week of November 6 to 12, 2016, the ten most popular articles on Wikipedia, as determined from the WP:5000 report were:

    1. Donald Trump (C-Class; 12,331,880 views)
    Trump won the November 8 election to become President-elect of the United States, and his article got the second-most views ever for this chart; 6.1 million of those views came on November 9. As our daily data from the WP:5000 is based on UTC hours, no doubt views climbed in the early hours of November 9 as it became clear that Trump could, and then would, win the election.
    2. United States presidential election, 2016 (B-Class; 5,414,267 views)
    Views peaked at 2.36 million on November 9.
    3. Electoral College (United States) (B-Class; 4,496,355 views)
    In the United States, the president is not elected by the popular vote, which Hillary Clinton won, but by the "electoral college", which consists of 538 votes spread across the 50 states and the District of Columbia; the winner of the popular vote in each state (with the exception of two states that distribute electors by congressional district) receives all of that state's electoral votes. This is the fifth time that the winner of the popular vote has lost the election, the last being in 2000. When the counts are final, it is clear that the popular-vote gap between Clinton and Trump will be the largest ever in such a situation. Trump threaded the needle by winning Rust Belt states such as Pennsylvania, Ohio, and Michigan while losing the popular vote by large margins in populous states like California and New York.
    4. Melania Trump (C-Class; 4,198,183 views)
    Mrs. Trump will be the first foreign-born First Lady of the United States since Louisa Adams in the 1820s. Louisa was born in Britain to an American father and a British mother, so Melania will be the first non-native speaker of English to hold the title, which is a bit bizarre considering Trump's rhetoric on immigration. Though her English is accented, she does speak six languages, which is very uncommon for Americans.
    5. United States presidential election, 2012 (B-Class; 2,854,744 views)
    No doubt this article was popular as readers tried to figure out how Obama won so handily over Mitt Romney in 2012, and what changed. One thing that changed is that Donald Trump did not run a campaign that resembled those of prior Republican candidates.
    6. Hillary Clinton (Featured Article; 2,644,676 views)
    Throughout the campaign, Clinton's article was less popular than Trump's; see our SPECIAL REPORT. Often we ascribed this to Trump's tendency to say outrageous things and dominate media coverage, but maybe it was also evidence of more enthusiasm among Americans for Trump than for Clinton.
    7. Ivanka Trump (Start-Class; 2,163,529 views)
    No doubt the most liked Trump outside core Trump fandom. Her views regularly exceeded those of her siblings; in the report for the July 2016 week of the Republican National Convention, Ivanka placed #4, ahead of her three adult siblings. (Trump's youngest child, Barron Trump, is only 10 years old and should not yet have his own article here, if the precedent set for Malia and Sasha Obama is applied.)
    8. Elizabeth II (Featured Article; 2,053,702 views)
    The longest-reigning British monarch in history is bound to draw attention whenever the British Royal Family becomes a topic of interest. For the second consecutive week she gets an additional boost from her portrayal in The Crown, a $100 million melodrama about her early years in which she is played by Claire Foy.
    9. Barack Obama (Featured Article; 2,014,336 views)
    The outgoing president campaigned hard for Hillary Clinton (#6) in the closing weeks of the campaign. Now he has to turn over power to the person who championed the awful lie of birtherism. There really is no way to sugarcoat this.
    10. List of Presidents of the United States (B-Class; 1,868,016 views)
    Trump will be the first U.S. president not to have held a previous governmental office or military command.





    If articles have been updated, you may need to refresh the single-page edition.

    It's your Signpost. You can help us.

    Archives

    Newsroom

    Subscribe

    Suggestions

