scikit-learn 0.24.dev0
●3.3.1. The scoring parameter: defining model evaluation rules
●3.3.2. Classification metrics
●3.3.3. Multilabel ranking metrics
●3.3.4. Regression metrics
●3.3.5. Clustering metrics
●3.3.6. Dummy estimators
●3.4. Model persistence
●3.4.1. Persistence example
●3.4.2. Security & maintainability limitations
●3.5. Validation curves: plotting scores to evaluate models
●3.5.1. Validation curve
●3.5.2. Learning curve
●4. Inspection
●4.1. Partial dependence plots
●4.1.1. Mathematical Definition
●4.1.2. Computation methods
●4.2. Permutation feature importance
●4.2.1. Outline of the permutation importance algorithm
●4.2.2. Relation to impurity-based importance in trees
●4.2.3. Misleading values on strongly correlated features
●5. Visualizations
●5.1. Available Plotting Utilities
●5.1.1. Functions
●5.1.2. Display Objects
●6. Dataset transformations
●6.1. Pipelines and composite estimators
●6.1.1. Pipeline: chaining estimators
●6.1.2. Transforming target in regression
●6.1.3. FeatureUnion: composite feature spaces
●6.1.4. ColumnTransformer for heterogeneous data
●6.1.5. Visualizing Composite Estimators
●6.2. Feature extraction
●6.2.1. Loading features from dicts
●6.2.2. Feature hashing
●6.2.3. Text feature extraction
●6.2.4. Image feature extraction
●6.3. Preprocessing data
●6.3.1. Standardization, or mean removal and variance scaling
●6.3.2. Non-linear transformation
●6.3.3. Normalization
●6.3.4. Encoding categorical features
●6.3.5. Discretization
●6.3.6. Imputation of missing values
●6.3.7. Generating polynomial features
●6.3.8. Custom transformers
●6.4. Imputation of missing values
●6.4.1. Univariate vs. Multivariate Imputation
●6.4.2. Univariate feature imputation
●6.4.3. Multivariate feature imputation
●6.4.4. References
●6.4.5. Nearest neighbors imputation
●6.4.6. Marking imputed values
●6.5. Unsupervised dimensionality reduction
●6.5.1. PCA: principal component analysis
●6.5.2. Random projections
●6.5.3. Feature agglomeration
●6.6. Random Projection
●6.6.1. The Johnson-Lindenstrauss lemma
●6.6.2. Gaussian random projection
●6.6.3. Sparse random projection
●6.7. Kernel Approximation
●6.7.1. Nystroem Method for Kernel Approximation
●6.7.2. Radial Basis Function Kernel
●6.7.3. Additive Chi Squared Kernel
●6.7.4. Skewed Chi Squared Kernel
●6.7.5. Mathematical Details
●6.8. Pairwise metrics, Affinities and Kernels
●6.8.1. Cosine similarity
●6.8.2. Linear kernel
●6.8.3. Polynomial kernel
●6.8.4. Sigmoid kernel
●6.8.5. RBF kernel
●6.8.6. Laplacian kernel
●6.8.7. Chi-squared kernel
●6.9. Transforming the prediction target (y)
●6.9.1. Label binarization
●6.9.2. Label encoding
●7. Dataset loading utilities
●7.1. General dataset API
●7.2. Toy datasets
●7.2.1. Boston house prices dataset
●7.2.2. Iris plants dataset
●7.2.3. Diabetes dataset
●7.2.4. Optical recognition of handwritten digits dataset
●7.2.5. Linnerrud dataset
●7.2.6. Wine recognition dataset
●7.2.7. Breast cancer wisconsin (diagnostic) dataset
●7.3. Real world datasets
●7.3.1. The Olivetti faces dataset
●7.3.2. The 20 newsgroups text dataset
●7.3.3. The Labeled Faces in the Wild face recognition dataset
●7.3.4. Forest covertypes
●7.3.5. RCV1 dataset
●7.3.6. Kddcup 99 dataset
●7.3.7. California Housing dataset
●7.4. Generated datasets
●7.4.1. Generators for classification and clustering
●7.4.2. Generators for regression
●7.4.3. Generators for manifold learning
●7.4.4. Generators for decomposition
●7.5. Loading other datasets
●7.5.1. Sample images
●7.5.2. Datasets in svmlight / libsvm format
●7.5.3. Downloading datasets from the openml.org repository
●7.5.4. Loading from external datasets
●8. Computing with scikit-learn
●8.1. Strategies to scale computationally: bigger data
●8.1.1. Scaling with instances using out-of-core learning
●8.2. Computational Performance
●8.2.1. Prediction Latency
●8.2.2. Prediction Throughput
●8.2.3. Tips and Tricks
●8.3. Parallelism, resource management, and configuration
●8.3.1. Parallelism
●8.3.2. Configuration switches
© scikit-learn developers (BSD License).