The Wayback Machine - http://web.archive.org/web/20200916230332/https://github.com/github/fetch
A window.fetch JavaScript polyfill.
github.github.io/fetch/
MIT License
Latest commit: 3.4.1 (5e3aa10, Sep 7, 2020) by JakeChampion

    window.fetch polyfill

    The fetch() function is a Promise-based mechanism for programmatically making web requests in the browser. This project is a polyfill that implements a subset of the standard Fetch specification, enough to make fetch a viable replacement for most uses of XMLHttpRequest in traditional web applications.


    Installation

    npm install whatwg-fetch --save
    

    As an alternative to using npm, you can obtain fetch.umd.js from the Releases section. The UMD distribution is compatible with AMD and CommonJS module loaders, as well as loading directly into a page via <script> tag.

    You will also need a Promise polyfill for older browsers. We recommend taylorhakes/promise-polyfill for its small size and Promises/A+ compatibility.
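When using a bundler, one way to ensure the Promise polyfill is in place before this one runs is simply to import it first. This is a minimal sketch; the `promise-polyfill/src/polyfill` module path is an assumption based on the npm package published from taylorhakes/promise-polyfill:

```javascript
// Load order matters: the Promise polyfill must run before
// whatwg-fetch, since fetch() returns Promise objects.
import 'promise-polyfill/src/polyfill'
import 'whatwg-fetch'
```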

    Usage

    For a more comprehensive reference of the APIs that this polyfill supports, refer to https://github.github.io/fetch/.

    Importing

    Importing will automatically polyfill window.fetch and related APIs:

    import 'whatwg-fetch'
    
    window.fetch(...)

    If for some reason you need to access the polyfill implementation, it is available via exports:

    import {fetch as fetchPolyfill} from 'whatwg-fetch'
    
    window.fetch(...)   // use native browser version
    fetchPolyfill(...)  // use polyfill implementation

    This approach can be used to, for example, use abort functionality in browsers that implement a native but outdated version of fetch that doesn't support aborting.

    For use with webpack, add this package in the entry configuration option before your application entry point:

    entry: ['whatwg-fetch', ...]

    HTML

    fetch('/users.html')
      .then(function(response) {
        return response.text()
      }).then(function(body) {
        document.body.innerHTML = body
      })

    JSON

    fetch('/users.json')
      .then(function(response) {
        return response.json()
      }).then(function(json) {
        console.log('parsed json', json)
      }).catch(function(ex) {
        console.log('parsing failed', ex)
      })

    Response metadata

    fetch('/users.json').then(function(response) {
      console.log(response.headers.get('Content-Type'))
      console.log(response.headers.get('Date'))
      console.log(response.status)
      console.log(response.statusText)
    })

    Post form

    var form = document.querySelector('form')
    
    fetch('/users', {
      method: 'POST',
      body: new FormData(form)
    })

    Post JSON

    fetch('/users', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        name: 'Hubot',
        login: 'hubot',
      })
    })

    File upload

    var input = document.querySelector('input[type="file"]')
    
    var data = new FormData()
    data.append('file', input.files[0])
    data.append('user', 'hubot')
    
    fetch('/avatars', {
      method: 'POST',
      body: data
    })

    Caveats

    Handling HTTP error statuses

    To have the fetch Promise reject on HTTP error statuses (i.e. on any non-2xx status), define a custom response handler:

    function checkStatus(response) {
      if (response.status >= 200 && response.status < 300) {
        return response
      } else {
        var error = new Error(response.statusText)
        error.response = response
        throw error
      }
    }
    
    function parseJSON(response) {
      return response.json()
    }
    
    fetch('/users')
      .then(checkStatus)
      .then(parseJSON)
      .then(function(data) {
        console.log('request succeeded with JSON response', data)
      }).catch(function(error) {
        console.log('request failed', error)
      })

    Sending cookies

    For CORS requests, use credentials: 'include' to allow sending credentials to other domains:

    fetch('https://example.com:1234/users', {
      credentials: 'include'
    })

    The default value for credentials is "same-origin".

    The default for credentials wasn't always the same, though. Some older browser versions implemented an earlier draft of the fetch specification where the default was "omit".

    If you target these browsers, it's advisable to always specify credentials: 'same-origin' explicitly with all fetch requests instead of relying on the default:

    fetch('/users', {
      credentials: 'same-origin'
    })

    Note: due to limitations of XMLHttpRequest, using credentials: 'omit' is not respected for same domains in browsers where this polyfill is active. Cookies will always be sent to same domains in older browsers.

    Receiving cookies

    As with XMLHttpRequest, the Set-Cookie response header returned from the server is a forbidden header name and therefore can't be programmatically read with response.headers.get(). Instead, it's the browser's responsibility to handle new cookies being set (if applicable to the current URL). Unless they are HTTP-only, new cookies will be available through document.cookie.

    Redirect modes

    The Fetch specification defines these values for the redirect option: "follow" (the default), "error", and "manual".

    Due to limitations of XMLHttpRequest, only the "follow" mode is available in browsers where this polyfill is active.
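For illustration, the redirect mode is passed in the request init and can be inspected on a constructed Request object (the URL below is a placeholder):

```javascript
// Sketch: selecting a redirect mode. In browsers where this
// polyfill is active, only the default "follow" behaves as specified.
var req = new Request('https://example.com/old-path', { redirect: 'error' })
console.log(req.redirect)
```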

    Obtaining the Response URL

    Due to limitations of XMLHttpRequest, the response.url value might not be reliable after HTTP redirects on older browsers.

    The solution is to configure the server to set the response HTTP header X-Request-URL to the current URL after any redirect that might have happened. It should be safe to set it unconditionally.

    # Ruby on Rails controller example
    response.headers['X-Request-URL'] = request.url

    This server workaround is necessary if you need reliable response.url in Firefox < 32, Chrome < 37, Safari, or IE.

    Aborting requests

    This polyfill supports the abortable fetch API. However, aborting a fetch requires use of two additional DOM APIs: AbortController and AbortSignal. Typically, browsers that do not support fetch will also not support AbortController or AbortSignal. Consequently, you will need to include an additional polyfill for these APIs to abort fetches:

    import 'yet-another-abortcontroller-polyfill'
    import {fetch} from 'whatwg-fetch'
    
    // use native browser implementation if it supports aborting
    const abortableFetch = ('signal' in new Request('')) ? window.fetch : fetch
    
    const controller = new AbortController()
    
    abortableFetch('/avatars', {
      signal: controller.signal
    }).catch(function(ex) {
      if (ex.name === 'AbortError') {
        console.log('request aborted')
      }
    })
    
    // some time later...
    controller.abort()

    Browser Support

    Note: modern browsers such as Chrome, Firefox, Microsoft Edge, and Safari contain native implementations of window.fetch, therefore the code from this polyfill doesn't have any effect on those browsers. If you believe you've encountered an error with how window.fetch is implemented in any of these browsers, you should file an issue with that browser vendor instead of this project.
