Organization:
Archive Team

Formed in 2009, the Archive Team (not to be confused with the archive.org Archive-It team) is a rogue archivist collective dedicated to saving copies of rapidly dying or deleted websites for the sake of history and digital heritage. The group is composed entirely of volunteers and interested parties, and has expanded into a large number of related projects for saving online and digital history.
History is littered with hundreds of conflicts over the future of a community, group, location or business that were "resolved" when one of the parties stepped ahead and destroyed what was there. With the original point of contention destroyed, the debates would fall by the wayside. Archive Team believes that by duplicating condemned data, the conversation and debate can continue, and the richness and insight of the materials can be preserved. Our projects have ranged in size from a single volunteer downloading the data of a small-but-critical site to over 100 volunteers stepping forward to acquire terabytes of user-created data to save for future generations.
The main site for Archive Team is at archiveteam.org and contains up-to-date information on various projects, manifestos, plans and walkthroughs.
This collection contains the output of many Archive Team projects, both ongoing and completed. Thanks to the Internet Archive's generous provision of disk space, multi-terabyte datasets can be made available and put to use by the Wayback Machine, providing a path back to lost websites and work.
Our collection has grown to the point of having sub-collections for the types of data we acquire. If you are seeking to browse the contents of these collections, the Wayback Machine is the best first stop. Otherwise, you are free to dig into the stacks to see what you may find.
To use ArchiveBot, drop by #archivebot on EFNet. To interact with ArchiveBot, you issue commands by typing them into the channel; note that you will need channel operator permissions in order to issue archiving jobs. The dashboard shows the sites currently being downloaded.
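For example, a crawl is typically queued with the !archive command (an illustrative example; check the ArchiveBot documentation on the Archive Team wiki for the current command set):

    !archive http://example.com/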
The Wayback Machine - http://web.archive.org/web/20240511150114/https://www.drupal.org/node/2495145
Comments
Comment #1
Comment #2
As far as I can tell this is reflected XSS that requires no interaction or permissions to trigger.
Feels more like a critical.
Comment #3
Alright, changed things around a bit. The pubsubhubbub standard has changed quite a bit. verify_token is gone. This patch makes the logic much easier to understand.
Uses drupal_random_key(40) for the HMAC secret.
check_plain()s the challenge, which should solve the problem. From what I can tell, there aren't any restrictions on what the verify token can be, but this is the only way to fix the problem.
I have never looked at this code, ugh.
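A minimal sketch of the approach described in #3, using the Drupal 7 API; the function name, variable names and request handling are illustrative assumptions, not the actual Feeds patch:

    <?php
    // Hypothetical subscription setup: generate the shared secret the hub
    // will later use to sign content pushes (X-Hub-Signature).
    $subscription_secret = drupal_random_key(40);

    // Hypothetical verification callback. PHP turns "hub.challenge" into
    // $_GET['hub_challenge']; check_plain() HTML-escapes it, so request
    // input is no longer reflected back as markup (the reflected XSS
    // noted in #2).
    function example_pubsubhubbub_verify() {
      if (isset($_GET['hub_challenge'])) {
        print check_plain($_GET['hub_challenge']);
      }
      drupal_exit();
    }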
Comment #4
The last submitted patch, 3: feeds-pub-xss-2495145-3.patch, failed testing.
Comment #5
Bumping to dev, since the bug that broke the test is already fixed.
Working on a test to verify the fix. Once it's done, I'll make a special release with just this patch in it.
Comment #6
twistor queued 3: feeds-pub-xss-2495145-3.patch for re-testing.
Comment #7
Comment #8
The last submitted patch, 7: feeds-pub-xss-2495145-7-should-fail.patch, failed testing.
Comment #9
No direct print_r of $_GET anymore; looks good from a visual review.
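For context, this is the kind of pattern #9 is ruling out (an illustrative sketch, not the original Feeds code): dumping the raw query parameters into the response reflects attacker-controlled markup, whereas escaping the output first does not.

    <?php
    // Vulnerable: reflects attacker-controlled query parameters verbatim,
    // so a <script> payload in the URL executes in the victim's browser.
    print_r($_GET);

    // Safe: HTML-escape anything derived from the request before output.
    print check_plain(print_r($_GET, TRUE));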
Comment #10
Comment #11
Comment #12
Automatically closed - issue fixed for 2 weeks with no activity.