lambda_function.py) and then call it from a test program - test.py
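At its simplest, test.py just imports the module and calls the handler directly. A minimal sketch (the event here is a placeholder; getting realistic event data is covered below):

import lambda_function

# A stand-in event; see below for capturing a real one from AWS
event = {'key': 'value'}

# Lambda normally supplies a context object; None will do for a first smoke test
print(lambda_function.lambda_handler(event, None))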
However, there are a few housekeeping issues to settle first in order to provide a genuine simulated AWS LF environment:
● Making sure you're on Python 2.7 (or whatever AWS LF supports)
● Reproducing any environment vars, like KMS-encrypted API keys
● Logging compatibility (with AWS logging root)
● Passing in the 'event' and 'context' objects
● Using the correct AWS IAM role
import sys

LAMBDA_RUNTIME = '2.7'
if LAMBDA_RUNTIME not in sys.version.split(' ')[0]:
    logger.error('Wrong version of Python')
    exit()

And then don't forget to install the Boto3 library:

conda install boto3
import os
import boto3
from base64 import b64decode

ENCRYPTED = os.environ['APIKEY']
# Decrypt code should run once, with the variables stored outside of the
# function handler, so that they are decrypted only once per container
DECRYPTED = boto3.client('kms').decrypt(CiphertextBlob=b64decode(ENCRYPTED))['Plaintext']

Such vars need to be set locally for your test. This should be done via a standard OS call from Python:
# Don't forget to set any keys used by Lambda function on AWS (e.g. KMS keys etc.)
APIKEY = 'AQECAHi+cZAiuTwzWIe727iJYVmf0wb0pvlfHqoD...rlH4='
os.environ['APIKEY'] = APIKEY
import logging

# This test is for when using the local testing harness
if 'LAMBDA_TEST' in os.environ:
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)
else:
    logger = logging.getLogger()

This relies on setting the environment variable LAMBDA_TEST from the test harness. Of course, on the AWS servers this will not be set, and so the logger will default to the root logging.getLogger().
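For example, the test harness can set this flag before importing the function under test. A minimal sketch (the value itself doesn't matter, only the variable's presence):

# In test.py: mark that we are running under the local harness
import os
os.environ['LAMBDA_TEST'] = '1'

# Import only after the flag is set, so any module-level logging setup sees it
import lambda_function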
~/.aws/config and make it point to the profile that you wish to use:
[profile <profile_name>]
role_arn = arn:aws:iam::123456789012:role/somerole
source_profile = development
In this case, I have set the role_arn (which you can get from the IAM console) and pointed it to an AWS credentials profile 'development' (set in ~/.aws/credentials). I just need to do two things now:
1. Make sure I set the AWS default profile to the one I want:
export AWS_DEFAULT_PROFILE=development
2. Make sure that the profile credentials have permission to invoke sts:AssumeRole on the role_arn by adding the following policy to the default profile (AWS IAM user) for these tests.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::123456789012:role/somerole"
  }]
}
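If you prefer to attach that inline policy from the command line rather than the console, something along these lines should work (a sketch; the user name, policy name and file name here are placeholders):

aws iam put-user-policy --user-name my-test-user --policy-name allow-assume-somerole --policy-document file://assume-role-policy.json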
You should check you're using the right IAM settings:
aws configure
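If you want to confirm from Python which identity your local tests will actually run under, a quick check with boto3's STS client can help (a sketch; with no arguments the session follows the normal credential chain, including the profile environment variables):

import boto3

# Report the account and ARN that the current credentials resolve to
session = boto3.Session()
print(session.client('sts').get_caller_identity()['Arn'])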
lambda-event.json and then it's ready to pass into your handler. However, you might be wondering how to get that data.
There are two ways. The best way, which is an implicit assumption in setting up this test, is to first deploy your LF on AWS and connect it to the trigger condition, such as an object creation in S3. Then trigger the condition and use a basic lambda handler to pass that data out to the logs.
import json

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

Then go to the logs for your LF and look for 'Received event:' to find the dump from the event object. Then cut and paste this into the lambda-event.json file. Then you're ready to go.
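Back in the local harness, the saved event can be loaded and is then ready to pass to the handler, for example:

import json

# Load the event captured from the real trigger on AWS
with open('lambda-event.json') as f:
    event = json.load(f)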
Alternatively, if you haven't yet set up the trigger, then you can use the various test templates already loaded into the AWS LF environment - select "Configure test event" from the Actions menu.
All being well, you should be ready to go and test your LF:
python simulate.py test
Happy testing!

FUNCTION_NAME = 'napkin-glyph-s3-svg-trigger'
FUNCTION_LIBS = ['requests']

Don't forget that you must have installed the libs locally to your current folder:

pip install libname -t .

Have fun!
tests.py and run them till they pass.
python tests.py
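As a starting point, tests.py could look something like this (a minimal sketch; the assertion is a placeholder for checks specific to whatever your function actually returns):

import json
import os
import unittest

# Flag the local harness before importing the function under test
os.environ['LAMBDA_TEST'] = '1'

import lambda_function


class TestLambdaHandler(unittest.TestCase):

    def test_handler(self):
        # Reuse the event captured earlier from the real trigger
        with open('lambda-event.json') as f:
            event = json.load(f)
        result = lambda_function.lambda_handler(event, None)
        # Placeholder assertion - replace with checks on your function's output
        self.assertIsNotNone(result)


if __name__ == '__main__':
    unittest.main()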