COLLECTED BY
Formed in 2009, the Archive Team (not to be confused with the archive.org Archive-It Team) is a rogue archivist collective dedicated to saving copies of rapidly dying or deleted websites for the sake of history and digital heritage. The group is composed entirely of volunteers and interested parties, and has expanded into a wide range of related projects for saving online and digital history.
History is littered with hundreds of conflicts over the future of a community, group, location or business that were "resolved" when one of the parties stepped ahead and destroyed what was there. With the original point of contention destroyed, the debates would fall by the wayside. Archive Team believes that by duplicating condemned data, the conversation and debate can continue, along with the richness and insight gained by keeping the materials. Our projects have ranged in size from a single volunteer downloading the data of a small-but-critical site, to over 100 volunteers stepping forward to acquire terabytes of user-created data to save for future generations.
The main site for Archive Team is at archiveteam.org and contains up-to-date information on various projects, manifestos, plans and walkthroughs.
This collection contains the output of many Archive Team projects, both ongoing and completed. Thanks to the Internet Archive's generous provision of disk space, multi-terabyte datasets can be made available and put to use by the Wayback Machine, providing a path back to lost websites and work.
Our collection has grown to the point of having sub-collections for the types of data we acquire. If you are seeking to browse the contents of these collections, the Wayback Machine is the best first stop. Otherwise, you are free to dig into the stacks to see what you may find.
The Archive Team Panic Downloads are full pulldowns of currently extant websites, meant to serve as emergency backups for needed sites that are in danger of closing, or which will be missed dearly if suddenly lost due to hard drive crashes or server failures.
Collection: ArchiveBot: The Archive Team Crowdsourced Crawler
To use ArchiveBot, drop by #archivebot on EFNet. To interact with ArchiveBot, you issue commands by typing them into the channel. Note that you will need channel operator permissions in order to issue archiving jobs. The dashboard shows the sites currently being downloaded.
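For example, a typical job submission looks like this (the exact command set and options are documented on the Archive Team wiki):

!archive http://example.com/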
A dashboard for the ArchiveBot process is running at http://www.archivebot.com.
ArchiveBot's source code can be found at https://github.com/ArchiveTeam/ArchiveBot.

This also sets the content-length for GET requests (and other requests that don't have a body). I think this should also check request_body_permitted? before setting a body. See this for an example:

require 'net/http'

# Dump the raw request and response to $stderr so the headers are visible.
h = Net::HTTP.new 'localhost', 8000
h.set_debug_output $stderr
h.get '/'
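As a quick illustration of that check (my own sketch, not part of the patch): request_body_permitted? is false for GET and true for POST, so it can gate whether a default empty body, and hence a Content-Length header, gets set at all:

require 'net/http'

# request_body_permitted? distinguishes methods that may carry a body
# (POST, PUT) from those that don't (GET, HEAD).
p Net::HTTP::Get.new('/').request_body_permitted?   # => false
p Net::HTTP::Post.new('/').request_body_permitted?  # => true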
When you're done, you should create a ticket on bugs.ruby-lang.org that references this pull request. You don't need to submit the patch.
Sorry, the holidays got in the way. Yes I am, but I'm having trouble writing a proper test. After chatting with drbrain, I thought that instead of looking at the HTTP response code I would check the headers in the request, but I can't seem to figure out how to do that. Any pointers anyone could give me?
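One way to get at the request headers in a test (my own sketch, assuming a throwaway WEBrick server; this is not the patch's actual test) is to mount a handler that records what the client sent:

require 'net/http'
require 'webrick'

# Start WEBrick on an ephemeral port and capture the headers of the
# incoming request so the test can assert on them directly.
captured = nil
server = WEBrick::HTTPServer.new(Port: 0, Logger: WEBrick::Log.new(IO::NULL), AccessLog: [])
server.mount_proc('/') do |req, res|
  captured = req.header.dup   # downcased header names => arrays of values
  res.body = 'ok'
end
thread = Thread.new { server.start }

# A bodiless POST exercises set_body_internal with no body given.
Net::HTTP.new('localhost', server.config[:Port]).request(Net::HTTP::Post.new('/'))
server.shutdown
thread.join

# With the patch applied, the empty POST should still announce its length.
p captured['content-length']   # expect ["0"]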
gregors added commit cb4df80: add check for request body permitted

We don't want GET requests to have a content-length header.
@@ -75,6 +75,9 @@ def body_stream=(input)
   def set_body_internal(str)   #:nodoc: internal use only
     raise ArgumentError, "both of body argument and HTTPRequest#body set" if str and (@body or @body_stream)
     self.body = str if str
+    if @body.nil? && @body_stream.nil? && @body_data.nil? && request_body_permitted?
+      self.body = ''
+    end
   end
   #
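To see the effect on the wire (my own quick check in the style of the example above, against a placeholder server on localhost:8000): a bodiless POST should now show Content-Length: 0 in the dumped request, while a GET should show no Content-Length header at all.

require 'net/http'

h = Net::HTTP.new 'localhost', 8000
h.set_debug_output $stderr           # echoes the raw request, headers included

h.request Net::HTTP::Post.new('/')   # bodiless POST: patched code sends "Content-Length: 0"
h.get '/'                            # GET: no Content-Length header expected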
It is bad form not to set the Content-Length header on empty POST requests; omitting it will most likely end with a 411 (Length Required) response from the server. POST requests can legitimately be empty, e.g. the initial request during a challenge-response authentication scenario.

I've also included a test. It checks that WEBrick does not return the 411 response code. This is a resubmitted pull request; I initially developed against the 1.9.3 branch in pull request #200. Sorry about that, still trying to figure out the workflow around here.
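For reference, a minimal shape such a test could take (my own sketch using test/unit and a throwaway WEBrick server; the PR's actual test may differ): issue a bodiless POST and assert that the server does not answer 411.

require 'net/http'
require 'webrick'
require 'test/unit'

class TestEmptyPost < Test::Unit::TestCase
  def test_empty_post_is_not_rejected_with_411
    server = WEBrick::HTTPServer.new(Port: 0, Logger: WEBrick::Log.new(IO::NULL), AccessLog: [])
    server.mount_proc('/') do |req, res|
      req.body          # forces WEBrick to read the body; raises 411 if Content-Length is missing
      res.body = 'ok'
    end
    thread = Thread.new { server.start }
    res = Net::HTTP.new('localhost', server.config[:Port]).request(Net::HTTP::Post.new('/'))
    assert_not_equal '411', res.code
  ensure
    server.shutdown
    thread.join
  end
end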