If you need assistance with the buildbots, you can either contact the python-buildbots@python.org mailing list, where all buildbot worker owners are subscribed, or contact the release manager of the branch you have issues with.
bedevere-bot on GitHub will put a message on your merged Pull Request
if building your commit on a stable buildbot worker fails. Take care to
evaluate the failure, even if it looks unrelated at first glance.
Not all failures will generate a notification since not all builds are executed
after each commit. In particular, reference leak builds take several hours to
complete so they are done periodically. This is why it’s important for you to
be able to check the results yourself, too.
From the command line, you can use the bbreport.py client, which you can get from
https://code.google.com/archive/p/bbreport. Installing it is trivial: just add
the directory containing bbreport.py to your system path so that
you can run it from any filesystem location. For example, if you want
to display the latest build results on the development (“main”) branch,
type:
bbreport.py -q 3.x
The buildbot “console” interface at https://buildbot.python.org/all/ works best on a wide, high-resolution monitor. Clicking on the colored circles opens a new page containing whatever information about that particular build is of interest to you. You can also access builder information by clicking on the builder status bubbles in the top line.
If you like IRC, having an IRC client open to the #python-dev-notifs channel on
irc.libera.chat is useful. Any time a builder changes state (last build
passed and this one didn’t, or vice versa), a message is posted to the channel.
Keeping an eye on the channel after pushing a changeset is a simple way to get
notified that there is something you should look into.
Some buildbots are much faster than others. Over time, you will learn which
ones produce the quickest results after a build, and which ones take the
longest time.
Also, when several changesets are pushed in quick succession in the same
branch, it often happens that a single build is scheduled for all these
changesets.
To reproduce a flags-dependent failure, run the test suite with the same flags as the failing buildbot, for example:
./python.exe -Wd -E -bb ./Lib/test/regrtest.py -uall -rwW
Note: running Lib/test/regrtest.py is exactly equivalent to running -m test.
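For instance, the equivalent invocation through the -m test entry point (same interpreter flags, same test-runner options) would be:
./python.exe -Wd -E -bb -m test -uall -rwW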
The buildbots run the test suite in random order (the -r option to the test
runner) to maximize the probability that potential interferences between
library modules are exercised; the downside is that it can make for seemingly
sporadic failures.
The --randseed option makes it easy to reproduce the exact randomization
used in a given build. Again, open the stdio link for the failing test
run, and check the beginning of the test output proper.
Let’s assume, for the sake of example, that the output starts with:
./python -Wd -E -bb Lib/test/regrtest.py -uall -rwW
== CPython 3.3a0 (default:22ae2b002865, Mar 30 2011, 13:58:40) [GCC 4.4.5]
== Linux-2.6.36-gentoo-r5-x86_64-AMD_Athlon-tm-_64_X2_Dual_Core_Processor_4400+-with-gentoo-1.12.14 little-endian
== /home/buildbot/buildarea/3.x.ochtman-gentoo-amd64/build/build/test_python_29628
Testing with flags: sys.flags(debug=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=1, verbose=0, bytes_warning=2, quiet=0)
Using random seed 2613169
[ 1/353] test_augassign
[ 2/353] test_functools
You can reproduce the exact same order using:
./python -Wd -E -bb -m test -uall -rwW --randseed 2613169
It will run the following sequence (trimmed for brevity):
[ 1/353] test_augassign
[ 2/353] test_functools
[ 3/353] test_bool
[ 4/353] test_contains
[ 5/353] test_compileall
[ 6/353] test_unicode
If this is enough to reproduce the failure on your setup, you can then bisect
the test sequence to look for the specific interference causing the failure.
Copy and paste the test sequence in a text file, then use the --fromfile
(or -f) option of the test runner to run the exact sequence recorded in that
text file:
./python -Wd -E -bb -m test -uall -rwW --fromfile mytestsequence.txt
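For example, mytestsequence.txt could simply list the test names from the trimmed sequence above, one per line:
test_augassign
test_functools
test_bool
test_contains
test_compileall
test_unicode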
In the example sequence above, if test_unicode had failed, you would
first test the following sequence:
[ 1/353] test_augassign
[ 2/353] test_functools
[ 3/353] test_bool
[ 6/353] test_unicode
And, if it succeeds, the following one instead (which, hopefully, shall fail):
[ 4/353] test_contains
[ 5/353] test_compileall
[ 6/353] test_unicode
Then, recursively, narrow down the search until you get a single pair of tests
which triggers the failure. It is very rare that such an interference involves
more than two tests. If this is the case, we can only wish you good luck!
Note: You cannot use the -j option (for parallel testing) when diagnosing
ordering-dependent failures. Using -j isolates each test in a pristine
subprocess and, therefore, prevents you from reproducing any interference
between tests.
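One way to perform a bisection step by hand (the file names here are only illustrative) is to copy half of the preceding tests into a candidate file, keep the suspected victim at the end, and re-run it with --fromfile:
head -n 3 mytestsequence.txt > candidate.txt   # first half of the tests that ran before the failure
echo test_unicode >> candidate.txt             # always keep the failing test last
./python -Wd -E -bb -m test -uall -rwW --fromfile candidate.txt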
Some failures are simply transient. Typical offenders are tests that use the
network, such as test_poplib, test_urllibnet, etc.: their failures can stem
from adverse network conditions, or imperfect thread synchronization in the
test code, which often has to run a server in a separate thread.
Other offenders are tests dealing with delicate issues such as inter-thread or
inter-process synchronization, or Unix signals: test_multiprocessing,
test_threading, test_subprocess, test_threadsignals.
When you think a failure might be transient, it is recommended you confirm by
waiting for the next build. Still, even if the failure does turn out to be
sporadic and unpredictable, the issue should be reported on the bug tracker; even
better if it can be diagnosed and suppressed by fixing the test’s
implementation, or by making its parameters - such as a timeout - more robust.
The custom builders track the buildbot-custom short-lived branch of the
python/cpython repository, which is only accessible to core developers.
To start a build on the custom builders, push the commit you want to test to
the buildbot-custom branch:
$ git push upstream <local_branch_name>:buildbot-custom
You may run into conflicts if another developer is currently using the custom
builders or forgot to delete the branch when they finished. In that case, make
sure the other developer is finished and either delete the branch or force-push
(add the -f option) over it.
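For example:
$ git push -f upstream <local_branch_name>:buildbot-custom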
When you have gotten the results of your tests, delete the branch:
$ git push upstream :buildbot-custom  # or use the GitHub UI
If you are interested in the results of a specific test file only, we recommend
you change (temporarily, of course) the contents of the buildbottest clause in
Makefile.pre.in; or, for Windows builders, the Tools/buildbot/test.bat script.
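As a purely illustrative sketch (the real buildbottest recipe is longer and differs between branches and platforms), the temporary edit might amount to replacing the test-runner invocation inside that Makefile target with something like:
$(TESTRUNNER) test_unicode   # hypothetical: run only the test file you care about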
See also: Running a buildbot worker