GRAKN.AI - King James Bible Project
This project aims to test the functionality of GRAKN.AI:
- Relations between books, chapters, verses and words
- Relations between topics and verses
- Cross references (verses that refer to, or speak of, other verses)
- Relations between people and places
- Relations between people
- Inferences
- CSV imports
- JSON imports
- Templates
- etc.
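As a sketch of what these relations could enable, a query like the one below would fetch the cross references of a given verse. The `cross-reference` relation and the `verse-id` resource are hypothetical names, and the Graql syntax is approximate for the GRAKN.AI version used here:

```graql
# Hypothetical query: find all verses cross-referenced from one verse
match
$v isa verse, has verse-id "John-3-16";
($v, $ref) isa cross-reference;
select $ref;
```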
Then:
- Use of inference rules (Already working)
- Implementation of a basic recommendation engine
- Why not generate inference rules with ML?
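For illustration, an inference rule in the GRAKN.AI style of this era might look like the sketch below. The `mention` and `belongs` relations and all role names are hypothetical, and the lhs/rhs syntax is approximate for the version in use:

```graql
# Hypothetical rule: if a person is mentioned in a verse, infer that the
# person is also mentioned in the chapter the verse belongs to.
insert
$verse-to-chapter-mention isa inference-rule,
lhs {
  (mentioned: $p, mentioning: $v) isa mention;
  (chapter-role: $c, verse-role: $v) isa belongs;
},
rhs {
  (mentioned: $p, mentioning: $c) isa mention;
};
```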
Ontology migration
The bible.gql file is the original ontology loaded into the graph. I wanted to experiment with and improve the ontology so that it would look like bible-latest.gql, but that version of the file removes plays <role> from the person entity and causes out-of-memory exceptions. I then tried bible-updated.gql, which keeps the plays <role> declarations, but that also produced out-of-memory exceptions.
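For context, the two variants differ roughly as follows. The entity, role, and resource names here are illustrative, and the schema syntax is approximate for this GRAKN.AI version:

```graql
# bible-updated.gql style: person keeps its role declarations
person sub entity
  plays-role mentioned
  has-resource name;

# bible-latest.gql style: the plays-role lines are removed entirely;
# both variants triggered out-of-memory exceptions on load.
person sub entity
  has-resource name;
```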
Data parse
This step prepares some data for import. To regenerate the files, run:
>>> npm install
>>> node indexImport steps
This will be optimized and an import script will be written. Some of these steps could be folded into the templates.
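As an illustration of what such a template could look like, the sketch below maps CSV columns onto a book entity using the angle-bracket column references of GRAKN's migration templates. The column names and the `book-id` resource are hypothetical:

```graql
# Hypothetical migration template: <id> and <name> refer to CSV columns
insert
$b isa book,
  has book-id <id>,
  has name <name>;
```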
Data import
graql.sh -f ./ontology/bible.gql
graql.sh
>>> insert $x isa bible, has name "King James Version", has short-name "KJV", has language "English", has language-code "en-EN", has publication-year 1611;
>>> commit
migration.sh csv -t ./templates/books.gql -i ./CSV/Books.csv -k grakn
migration.sh json -t ./templates/chapters.gql -i ./JSON/chapters.json -k grakn
>>> match $b isa book, has book-id $bid; $c isa chapter, has book-id $bid; insert (book-role: $b, chapter-role: $c) isa belongs;
>>> commit
migration.sh csv -t ./templates/verses.gql -i ./CSV/Verses.csv -k grakn
migration.sh csv -t ./templates/places.gql -i ./CSV/Places.csv -k grakn
migration.sh csv -t ./templates/people.gql -i ./CSV/People.csv -k grakn
migration.sh csv -t ./templates/words.gql -i ./CSV/MainIndex.csv -k grakn
migration.sh csv -t ./templates/topics.gql -i ./CSV/Topics.csv -k grakn
migration.sh csv -t ./templates/topic-relations.gql -i ./CSV/TopicIndex.csv -k grakn
migration.sh csv -t ./templates/place-alias.gql -i ./CSV/PlaceAliases.csv -k grakn
migration.sh csv -t ./templates/people-alias.gql -i ./CSV/PeopleAliases.csv -k grakn
migration.sh csv -t ./templates/book-alias.gql -i ./CSV/BookAliases.csv -k grakn
migration.sh csv -t ./templates/strongs.gql -i ./CSV/Strongs.csv -k grakn
migration.sh csv -t ./templates/strongs-rel.gql -i ./CSV/StrongsIndex.csv -k grakn
migration.sh csv -t ./templates/groups.gql -i ./CSV/groups.csv -k grakn
migration.sh csv -t ./templates/people-to-groups.gql -i ./CSV/PeopleGroups.csv -k grakn
migration.sh json -t ./templates/verse-rel.gql -i ./JSON/verse-relations.json -k grakn
migration.sh csv -t ./templates/people-relations.gql -i ./CSV/PeopleRelationships.csv -k grakn
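Once the imports finish, a quick sanity check from the Graql shell could count what was loaded. The `aggregate count` form and the relation/role names below are assumptions about this GRAKN version, not verified commands:

```graql
>>> match $v isa verse; aggregate count;
>>> match $b isa book, has name "Genesis"; (book-role: $b, chapter-role: $c) isa belongs; aggregate count;
```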

