Manages application of security headers with many safe defaults
The main branch represents the 6.x line. See the upgrading to 4.x, upgrading to 5.x, or upgrading to 6.x docs for instructions on how to upgrade. Bug fixes should go in the 5.x branch for now.
The gem will automatically apply several security-related headers, including Content-Security-Policy, Strict-Transport-Security, X-Frame-Options, X-Content-Type-Options, X-Download-Options, X-Permitted-Cross-Domain-Policies, X-XSS-Protection, and Referrer-Policy.
It can also mark all HTTP cookies with the Secure, HttpOnly, and SameSite attributes. This is on by default but can be turned off by setting config.cookies = SecureHeaders::OPT_OUT.
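If you do not want the gem to touch cookies at all, the opt-out looks like this (a minimal sketch using the OPT_OUT sentinel described above):

SecureHeaders::Configuration.default do |config|
  # Leave cookie flagging entirely to the application.
  config.cookies = SecureHeaders::OPT_OUT
end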
secure_headers is a library with a global config, per-request overrides, and a Rack middleware that enables you to customize your application's settings.
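For a plain Rack application (outside Rails), the middleware can be mounted in config.ru; a minimal sketch, assuming a bare Rack app:

# config.ru
require "secure_headers"

# Use the built-in default configuration (described below).
SecureHeaders::Configuration.default

use SecureHeaders::Middleware
run lambda { |env| [200, { "Content-Type" => "text/plain" }, ["ok"]] }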
If you do not supply a default configuration, exceptions will be raised. If you would like to use a default configuration (which is fairly locked down), just call SecureHeaders::Configuration.default without any arguments or block.
All nil values will fall back to their default values. SecureHeaders::OPT_OUT will disable the header entirely.
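For example, a minimal configuration that accepts the defaults and opts a single header out (every value left unset keeps its default):

SecureHeaders::Configuration.default do |config|
  # Unset values fall back to their defaults; OPT_OUT removes the header entirely.
  config.x_download_options = SecureHeaders::OPT_OUT
end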
Word of caution: the following is not a default configuration per se; it is a sample configuration. You should read more about these headers and determine what is appropriate for your requirements.
SecureHeaders::Configuration.default do |config|
config.cookies = {
secure: true, # mark all cookies as "Secure"
httponly: true, # mark all cookies as "HttpOnly"
samesite: {
lax: true # mark all cookies as SameSite=lax
}
}
# Add "; preload" and submit the site to hstspreload.org for best protection.
config.hsts = "max-age=#{1.week.to_i}"
config.x_frame_options = "DENY"
config.x_content_type_options = "nosniff"
config.x_xss_protection = "1; mode=block"
config.x_download_options = "noopen"
config.x_permitted_cross_domain_policies = "none"
config.referrer_policy = %w(origin-when-cross-origin strict-origin-when-cross-origin)
config.csp = {
# "meta" values. these will shape the header, but the values are not included in the header.
preserve_schemes: true, # default: false. Schemes are removed from host sources to save bytes and discourage mixed content.
disable_nonce_backwards_compatibility: true, # default: false. If false, `unsafe-inline` will be added automatically when using nonces. If true, it won't. See #403 for why you'd want this.
# directive values: these values will directly translate into source directives
default_src: %w('none'),
base_uri: %w('self'),
block_all_mixed_content: true, # see http://www.w3.org/TR/mixed-content/
child_src: %w('self'), # if child-src isn't supported, the value for frame-src will be set.
connect_src: %w(wss:),
font_src: %w('self' data:),
form_action: %w('self' github.com),
frame_ancestors: %w('none'),
img_src: %w(mycdn.com data:),
manifest_src: %w('self'),
media_src: %w(utoob.com),
object_src: %w('self'),
sandbox: true, # true and [] will set a maximally restrictive setting
plugin_types: %w(application/x-shockwave-flash),
script_src: %w('self'),
style_src: %w('unsafe-inline'),
worker_src: %w('self'),
upgrade_insecure_requests: true, # see https://www.w3.org/TR/upgrade-insecure-requests/
report_uri: %w(https://report-uri.io/example-csp)
}
# This is available only from 3.5.0; use the `report_only: true` setting for 3.4.1 and below.
config.csp_report_only = config.csp.merge({
img_src: %w(somewhereelse.com),
report_uri: %w(https://report-uri.io/example-csp-report-only)
})
end

All headers except for PublicKeyPins and ClearSiteData have a default value. The default set of headers is:
Content-Security-Policy: default-src 'self' https:; font-src 'self' https: data:; img-src 'self' https: data:; object-src 'none'; script-src https:; style-src 'self' https: 'unsafe-inline'
Strict-Transport-Security: max-age=631138519
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Frame-Options: sameorigin
X-Permitted-Cross-Domain-Policies: none
X-Xss-Protection: 1; mode=block
Which headers you decide to use for API responses is entirely a personal choice. Things like X-Frame-Options seem to have no place in an API response and would just be wasted bytes. While this is true, browsers can do funky things with non-HTML responses. At a minimum, we suggest CSP:
SecureHeaders::Configuration.override(:api) do |config|
config.csp = { default_src: %w('none') }
config.hsts = SecureHeaders::OPT_OUT
config.x_frame_options = SecureHeaders::OPT_OUT
config.x_content_type_options = SecureHeaders::OPT_OUT
config.x_xss_protection = SecureHeaders::OPT_OUT
config.x_permitted_cross_domain_policies = SecureHeaders::OPT_OUT
end

However, I would consider these headers anyway, depending on your load and bandwidth requirements.
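To apply the :api override to a particular response, the named override can be activated per request; a sketch assuming the Rails integration and its use_secure_headers_override helper:

class ApiController < ApplicationController
  def index
    # Use the :api override defined above for this response only.
    use_secure_headers_override(:api)
  end
end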
This project originated within the Security team at Twitter. An archived fork from the point of transition is here: https://github.com/twitter-archive/secure_headers.
Contributors include:
If you've made a contribution and see your name missing from the list, make a PR and add it!