pg-sql2

Usage

const sql = require("pg-sql2");
// or import sql from 'pg-sql2';

const tableName = "user";
const fields = ["name", "age", "height"];

// sql.join is used to join fragments with a common separator, NOT to join tables!
const sqlFields = sql.join(
  // sql.identifier safely escapes arguments and joins them with dots
  fields.map(fieldName => sql.identifier(tableName, fieldName)),
  ", "
);

// sql.value will store the value and instead add a placeholder to the SQL
// statement, to ensure that no SQL injection can occur.
const sqlConditions = sql.query`created_at > NOW() - interval '3 years' and age > ${sql.value(
  22
)}`;

// This could be a full query, but we're going to embed it in another query safely
const innerQuery = sql.query`select ${sqlFields} from ${sql.identifier(
  tableName
)} where ${sqlConditions}`;

// Symbols are automatically assigned unique identifiers
const sqlAlias = sql.identifier(Symbol());

const query = sql.query`
  with ${sqlAlias} as (${innerQuery})
  select
    (select json_agg(row_to_json(${sqlAlias})) from ${sqlAlias}) as all_data,
    (select max(age) from ${sqlAlias}) as max_age
`;

// sql.compile compiles the query into an SQL statement and a list of values
const { text, values } = sql.compile(query);

console.log(text);
/* ->
with __local_0__ as (
  select "user"."name", "user"."age", "user"."height"
  from "user"
  where created_at > NOW() - interval '3 years' and age > $1
)
select
  (select json_agg(row_to_json(__local_0__)) from __local_0__) as all_data,
  (select max(age) from __local_0__) as max_age
*/

console.log(values); // [22]

// Then to run the query using `pg` module, do something like:
// const { rows } = await pg.query(text, values);
sql.query`...`

Builds part of (or the whole of) an SQL query, safely interpolating any embedded sql.* expressions. If a non-sql.* expression is passed in, e.g.:

sql.query`select ${1}`;

then an error will be thrown.
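By contrast, wrapping the value in sql.value (or any other sql.* helper) produces a valid fragment; a minimal sketch:

const sql = require("pg-sql2");

// Throws: plain JavaScript values may not be embedded directly
// sql.query`select ${1}`;

// Works: wrapping the value makes it compile to a placeholder
const fragment = sql.query`select ${sql.value(1)}`;
const { text, values } = sql.compile(fragment);
// text: "select $1", values: [1]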
sql.identifier(ident, ...)

Represents a safely escaped SQL identifier; if multiple arguments are passed, each is escaped and then they are joined with dots (e.g. "schema"."table"."column").
sql.value(val)

Represents an SQL value: the value is stored, and a placeholder (e.g. $1) is written to the compiled SQL statement in its place, ensuring that no SQL injection can occur.
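A minimal sketch of the resulting placeholder and the collected value (table and column are illustrative):

const sql = require("pg-sql2");

const { text, values } = sql.compile(
  sql.query`select * from ${sql.identifier("user")} where age > ${sql.value(22)}`
);
// text:   select * from "user" where age > $1
// values: [22]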
sql.literal(val)

As sql.value, but in the case of very simple values it may write them directly to the SQL statement (correctly escaped) rather than using a placeholder. Should only be used with data that is not sensitive and is trusted (not user-provided data), e.g. for the key arguments to json_build_object(key, val, key, val, ...) which you have produced.
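A minimal sketch of how this differs from sql.value; the exact inlining rules are an implementation detail, so the compiled text shown here is illustrative:

const sql = require("pg-sql2");

const { text, values } = sql.compile(
  sql.query`select json_build_object(${sql.literal("id")}, id) from ${sql.identifier(
    "user"
  )}`
);
// text:   select json_build_object('id', id) from "user"
// values: []  (no placeholder was needed for the trusted key)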
sql.join(arrayOfFragments, delimiter)

Joins an array of sql.* fragments using the given delimiter (which is treated as a raw SQL string), e.g.:
const arrayOfSqlFields = ["a", "b", "c", "d"].map(n => sql.identifier(n));
sql.query`select ${sql.join(arrayOfSqlFields, ", ")}`;
// -> select "a", "b", "c", "d"

const arrayOfSqlConditions = [
  sql.query`a = 1`,
  sql.query`b = 2`,
  sql.query`c = 3`
];
sql.query`where (${sql.join(arrayOfSqlConditions, ") and (")})`;
// -> where (a = 1) and (b = 2) and (c = 3)

const fragments = [
  { alias: "name", sqlFragment: sql.identifier("user", "name") },
  { alias: "age", sqlFragment: sql.identifier("user", "age") }
];
sql.query`
  json_build_object(
    ${sql.join(
      fragments.map(
        ({ sqlFragment, alias }) =>
          sql.query`${sql.literal(alias)}, ${sqlFragment}`
      ),
      ",\n"
    )}
  )`;

const arrayOfSqlInnerJoins = [
  sql.query`inner join bar on (bar.foo_id = foo.id)`,
  sql.query`inner join baz on (baz.bar_id = bar.id)`
];
sql.query`select * from foo ${sql.join(arrayOfSqlInnerJoins, " ")}`;
// -> select * from foo inner join bar on (bar.foo_id = foo.id) inner join baz on (baz.bar_id = bar.id)
sql.raw(val)

Interprets the input string as raw SQL with no escaping whatsoever: DO NOT USE THIS WITH UNTRUSTED INPUT! It exists only for cases where you have full control over the text being inserted.
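A hedged sketch of a legitimate use, where the raw fragment is hard-coded in our own source (the sortDirection variable here is a hypothetical example, never user input):

const sql = require("pg-sql2");

// Safe only because sortDirection is hard-coded by us, not user-provided
const sortDirection = "desc";
sql.query`select * from ${sql.identifier("user")} order by age ${sql.raw(sortDirection)}`;
// compiles to: select * from "user" order by age desc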
sql.compile(query)

Compiles the query into an SQL statement (text) and a list of values, ready to be executed, e.g. via the pg module:
const query = sql.query`...`;
const { text, values } = sql.compile(query);
// const { rows } = await pg.query(text, values);
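For completeness, a runnable sketch of executing the compiled statement with the pg module (this assumes a reachable PostgreSQL database and an existing "user" table, both of which are assumptions for illustration):

const sql = require("pg-sql2");
const { Pool } = require("pg");

const pool = new Pool(); // reads connection settings from the environment

async function main() {
  const query = sql.query`select * from ${sql.identifier(
    "user"
  )} where age > ${sql.value(22)}`;
  const { text, values } = sql.compile(query);
  const { rows } = await pool.query(text, values);
  console.log(rows);
  await pool.end();
}

main().catch(console.error);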
History

This module is a replacement for pg-sql, combining the additional work that was done to it in postgraphql and offering the following enhancements:
● Better development experience for people not using Flow/TypeScript (throws errors a lot earlier, allowing you to catch issues at the source)
● Slightly more helpful error messages
● Uses a symbol-key on the query nodes to protect against an object accidentally being inserted verbatim and being treated as valid. Because every Symbol is unique, an attacker would need control of the code to get a reference to the Symbol in order to set it on an object (it cannot be serialised/deserialised via JSON or any other medium), and if the attacker has control of the code then you've already lost. (See the sketch after this list.)
● Adds sql.literal, which is similar to sql.value but when used with simple values can write the value directly to the SQL statement. USE WITH CAUTION. The purpose of this is that if you are using trusted values (e.g. for the keys to json_build_object(...)), then debugging your SQL becomes a lot easier because fewer placeholders are used.
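To illustrate the symbol-key protection described above, a hedged sketch (the shape of the forged object and the exact error are illustrative; only the fact that an error is thrown is documented behaviour):

const sql = require("pg-sql2");

// A plain object that merely imitates a query node is rejected, because it
// cannot carry the module's private Symbol key:
const forged = { text: `1 = 1; drop table "user";` };
sql.query`select * from ${sql.identifier("user")} where ${forged}`;
// -> throws an error: only sql.* expressions may be embedded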