Feature Selection
WHAT IS IT?
Feature selection is an important problem in predictive modeling: many practical applications involve very large data sets, and training a predictive model on all of the features can be infeasible. One way to address this is to apply feature selection and build the predictive model using only the selected features. Algorithms developed for this task are typically classified into three categories: filter methods, wrapper methods and intrinsic (embedded) methods.
HOW IS IT USED?
The goal of feature selection is to find the combination of features that contributes the most information when the features are modelled together.
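Searching for such a combination is what wrapper methods do. As a minimal sketch (not Shapelets code; the toy data and the `score` function are hypothetical stand-ins for a real model-evaluation metric), greedy forward selection repeatedly adds whichever feature most improves the score of the current subset:

```python
def forward_selection(features, score, k):
    """Greedy wrapper method: repeatedly add the feature that
    most improves the score of the current subset, up to k features."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the model
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical data: each row is a tuple of three feature values.
rows = [("a", 0, "p"), ("a", 1, "p"), ("b", 0, "q"), ("b", 1, "q")]

def score(cols):
    # Toy stand-in for model accuracy: fraction of rows that the
    # chosen columns can uniquely identify.
    return len({tuple(r[c] for c in cols) for r in rows}) / len(rows)

print(forward_selection([0, 1, 2], score, k=2))  # → [0, 1]
```

Columns 0 and 1 are chosen because together they identify every row, whereas column 2 duplicates the information already in column 0 and adds nothing once column 0 is in the subset.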
A simple statistical filter method uses the chi-square test of independence: each feature is scored against the target, and only features whose statistic falls in the rejection region (i.e. those significantly associated with the target) are kept, so the size of the selected subset is constrained by the size of the rejection region.
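A minimal sketch of this filter, assuming categorical features and a categorical target (the data, feature names and critical value are illustrative, not part of any library; 3.84 is the chi-square critical value for 1 degree of freedom at significance level 0.05):

```python
from collections import Counter

def chi_square_statistic(feature, target):
    """Chi-square test of independence between one categorical
    feature and a categorical target."""
    n = len(feature)
    f_counts = Counter(feature)
    t_counts = Counter(target)
    joint = Counter(zip(feature, target))
    stat = 0.0
    for f in f_counts:
        for t in t_counts:
            expected = f_counts[f] * t_counts[t] / n
            observed = joint[(f, t)]
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical data: feature A tracks the target, feature B is noise.
target = [0, 0, 0, 0, 1, 1, 1, 1]
feat_a = ["x", "x", "x", "x", "y", "y", "y", "y"]  # informative
feat_b = ["x", "y", "x", "y", "x", "y", "x", "y"]  # uninformative

scores = {"A": chi_square_statistic(feat_a, target),
          "B": chi_square_statistic(feat_b, target)}
# Keep only features whose statistic lands in the rejection region
# (critical value ~3.84 for 1 degree of freedom, alpha = 0.05).
selected = [name for name, s in scores.items() if s > 3.84]
print(selected)  # → ['A']
```

Shrinking the significance level shrinks the rejection region, which in turn shrinks the selected subset.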
Feature selection has a serious limitation: it can only be applied after the data has been collected. Its advantage over traditional statistical modelling (e.g. regression) is that it does not require domain knowledge, which helps reduce model uncertainty and overfitting.
$ pip install shapelets-solo
$ shapelets info
Learn more by reading our installation guide and tutorials.
© Shapelets 2021. All Rights Reserved