Building a modern functional compiler from first principles. (http://dev.stephendiehl.com/fun/)
Klipse is a JavaScript plugin for embedding interactive code snippets in tech blogs.
End-to-end Automatic Speech Recognition for Mandarin and English in TensorFlow
(IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"
Multi-class confusion matrix library in Python
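The core idea behind a multi-class confusion matrix library is simple to sketch. A rough illustration (not the library's actual API — the function and label ordering here are hypothetical):

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    """Count (actual, predicted) pairs into a label-by-label matrix."""
    counts = Counter(zip(actual, predicted))
    # Rows index the actual class, columns the predicted class.
    return [[counts[(a, p)] for p in labels] for a in labels]

actual    = ["cat", "dog", "cat", "bird"]
predicted = ["cat", "cat", "cat", "bird"]
matrix = confusion_matrix(actual, predicted, labels=["bird", "cat", "dog"])
```

Per-class precision, recall, and the many derived statistics such libraries report are all computed from this one table.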
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
Short and sweet LISP editing
XAI - An eXplainability toolbox for machine learning
FuzzBench - Fuzzer benchmarking as a service.
Most tests in Avalanche are integration tests that use expensive datasets (e.g. MNIST) with large networks and many iterations. Most tests do not need this, so there is room for significant speedup.
Python implementation of the IOU Tracker
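The IOU (intersection-over-union) score that this tracker is built around can be sketched in a few lines. A minimal version, assuming axis-aligned boxes in (x1, y1, x2, y2) form (not the repository's actual code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero when boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

An IOU tracker links a detection in frame t to the detection in frame t+1 with the highest IOU above some threshold, with no appearance model at all.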
TCExam is a CBA (Computer-Based Assessment) system (e-exam, CBT - Computer Based Testing) for universities, schools and companies, that enables educators and trainers to author, schedule, deliver, and report on surveys, quizzes, tests and exams.
A General Toolbox for Identifying Object Detection Errors
Expression evaluation in golang
SemanticKITTI API for visualizing dataset, processing data, and evaluating results.
Case Recommender: A Flexible and Extensible Python Framework for Recommender Systems
Visual Object Tracking (VOT) challenge evaluation toolkit
High-fidelity performance metrics for generative models in PyTorch
C# Eval Expression | Evaluate, Compile, and Execute C# code and expression at runtime.
A collection of datasets that pair questions with SQL queries.
An extensive evaluation and comparison of 28 state-of-the-art superpixel algorithms on 5 datasets.
A Simple Math and Pseudo C# Expression Evaluator in One C# File. Can also execute small C# like scripts
Simple Safe Sandboxed Extensible Expression Evaluator for Python
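The trick behind a safe expression evaluator is to parse the expression into an AST and walk only a whitelist of node types, instead of handing the string to eval(). A minimal sketch of that idea (the function name and operator whitelist here are illustrative, not the library's API):

```python
import ast
import operator

# Whitelisted binary operators; any other node type is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr):
    """Evaluate plain numeric arithmetic without exposing eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))
```

Function calls, attribute access, and names never match a whitelisted node, so `__import__('os')` and similar escapes are rejected rather than executed.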
ERRor ANnotation Toolkit: Automatically extract and classify grammatical errors in parallel original and corrected sentences.
Description
Currently, when a challenge link from EvalAI is shared, users see a generic view of the EvalAI homepage. We want the details specific to a challenge to be shown when a link is shared. Here's how it looks currently:
Expected behavior:
T