Feature Engineering and Feature Selection with Python

A Practical Guide For Feature Crafting

About the Book

With recent developments in big data, we have gained broader access to data in general, and to high-dimensional data in particular. As a consequence, the performance of machine learning models has improved by a large margin. On the other hand, many of the features collected or generated by different sensors and methods can harm model accuracy and therefore need careful consideration. Not only that, these features can also demand significant computational resources to build and maintain the model.

That is why we need reliable processes in the machine learning pipeline that let us build great models even with such features. In this hands-on book about feature engineering and feature selection techniques for machine learning with Python, we will explore pretty much everything you need to know about both topics.

Specifically, we'll learn how to transform dataset variables to extract meaningful information and capture as much insight as possible, and how to filter out unneeded features, leaving datasets and their variables ready to be used by machine learning algorithms.

What's inside?

This book is divided into two parts: Feature Engineering and Feature Selection. We will start with feature engineering, then move on to feature selection.

Here are brief descriptions of each of the sections:

Part I: Feature Engineering

  • Feature Types: Or variable types; we'll learn about continuous, discrete, and categorical variables (which can be nominal or ordinal), alongside date-time and mixed variables.
  • Common Issues: This chapter will discuss different issues you’ll see in real-world datasets like missing data, variable distribution, data imputations, outliers, and others.
  • Dealing with Missing Values: We’ll examine the major techniques used to fill the missing data in your dataset.
  • Categorical Encoding: This chapter will discuss the different techniques to transform categorical variables into numbers—frequency encoding, one-hot encoding, and more.
  • Feature Transformation: We’ll explore the mathematical transformations you can apply to alter the distribution of numerical variables, like logarithmic or reciprocal transformations.
  • Variable Discretization: This chapter looks at the procedures to discretize variables, like equal-width, equal-frequency, discretization using decision trees, clustering, and more.
  • Dealing with Outliers: This chapter will show how to identify outliers and remove them from your dataset.
  • Feature Scaling: We’ll cover several techniques to scale features, like standardization, scaling to the minimum and maximum, scaling to the unit length of the vector, and more.
  • Handling Date-Time and Mixed Variables: We'll discuss various ways to create new features from date, time, and mixed variables.
  • Engineering Geospatial Features: We’ll see how to handle geospatial features that are (most of the time) represented as longitude and latitude.
  • Resampling Imbalanced Datasets: We will learn how to apply a technique called resampling that can solve an issue where the classes in a given dataset are not represented equally.
  • Advanced Feature Engineering: We’ll look at advanced categorical encoding, advanced outlier detection, automated feature engineering, and more.
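To give a flavor of the techniques listed above, here is a minimal, standard-library-only sketch of two of them, one-hot encoding and min-max scaling. The function names and toy data below are illustrative, not taken from the book, which works through these chapters with real datasets and tooling.

```python
# Minimal sketches of two Part I techniques: one-hot encoding and
# min-max scaling. Standard library only; the toy data is illustrative.

def one_hot_encode(values):
    """Map each category to a binary indicator vector (one column per category)."""
    categories = sorted(set(values))
    return [[1 if value == cat else 0 for cat in categories] for value in values]

def min_max_scale(values):
    """Rescale numeric values into the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

colors = ["red", "blue", "red"]   # a categorical variable
prices = [10.0, 20.0, 30.0]       # a numerical variable

encoded = one_hot_encode(colors)  # columns ordered alphabetically: blue, red
scaled = min_max_scale(prices)
```

The sketch only shows the underlying arithmetic; the corresponding chapters cover the practical trade-offs of each technique.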

Part II: Feature Selection

  • Filter Methods: Rely on the characteristics of the features without using any machine learning algorithm. They are very well suited for a quick screening and removal of irrelevant features.
  • Wrapper Methods: Treat the selection of a feature subset as a search problem and use a predictive machine learning algorithm to evaluate each candidate subset. In essence, these methods train a new model on every feature subset, which makes them very computationally expensive. However, they tend to provide the best-performing feature subset for a given machine learning algorithm.
  • Embedded Methods: Just like the wrapper methods, embedded methods take the interaction of features and models into consideration. However, they perform feature selection as part of the model construction process, which makes them less computationally expensive than wrapper methods.
  • Advanced Feature Selection: We’ll see more advanced techniques that use deep learning, heuristic searches, and several other ways to select features.
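As a rough illustration of the filter-method idea, the sketch below drops constant features and ranks the rest by absolute Pearson correlation with the target, using only the standard library. The feature names and data are hypothetical; the book presents the actual methods in the filter-methods chapter.

```python
# A toy filter method: discard zero-variance features, then rank the
# remainder by absolute Pearson correlation with the target.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def filter_select(features, target, top_k=2):
    """Return the names of the top_k features most correlated with the target."""
    # Constant features carry no information, so drop them first.
    informative = {name: col for name, col in features.items() if pstdev(col) > 0}
    ranked = sorted(informative,
                    key=lambda name: abs(pearson(informative[name], target)),
                    reverse=True)
    return ranked[:top_k]

features = {
    "f1": [1.0, 2.0, 3.0, 4.0],  # perfectly correlated with the target
    "f2": [5.0, 5.0, 5.0, 5.0],  # constant, removed by the variance check
    "f3": [2.0, 1.0, 4.0, 3.0],  # weakly correlated
}
target = [10.0, 20.0, 30.0, 40.0]
```

This captures the defining property of filter methods: the ranking uses only statistics of the data, with no model trained at any point.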

In each chapter of this book, we'll work through the various techniques hands-on and examine their advantages, their limitations, the assumptions each method makes, and when you should consider applying each one.
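For instance, the date-time chapter touches on the cyclical-feature problem. A minimal sketch of one common remedy, sin/cos encoding, looks like this (the toy values are hypothetical):

```python
import math

def encode_cyclical(value, period):
    """Project a cyclical value (e.g. hour of day) onto the unit circle."""
    angle = 2 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# Hours 23 and 0 are one hour apart, yet as raw numbers they look maximally
# distant. The sin/cos pair keeps them close together on the circle.
hour_23 = encode_cyclical(23, 24)
hour_0 = encode_cyclical(0, 24)
```

A linear model fed the raw hour would treat midnight and 11 p.m. as far apart; fed the two encoded columns, it sees them as neighbors.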

  • Categories

    • Artificial Intelligence
    • Machine Learning
    • Data Science
    • Computer Science
    • Computers and Programming
    • Python

About the Author

Charfaoui Younes

Hey there, I am Charfaoui Younes, a Software Engineer from Algeria, an Author 📄, and a Speaker 🗣.

I am a Google Certified Android Developer, a former GitHub Campus Expert, a former Developer Student Clubs Lead, and passionate about Machine Learning 👨‍🔬. I always love learning 📚 something new about emerging technologies, and I love writing code, so I am a constant learner 🚴.

Table of Contents

  • Introduction
  • Feature Engineering
    • What Is Feature Engineering?
    • Why Feature Engineering?
    • Feature Engineering Vs. Feature Selection
  • Variable Types
    • What is a Variable?
    • Numerical Variables
    • Categorical Variables
    • Date & Time Variables
    • Mixed Variables
    • Conclusion
  • Common Issues in Datasets
    • Missing Data
    • Categorical Variable — Cardinality
    • Categorical Variable — Rare Labels
    • Linear Model Assumptions
    • Variable Distribution
    • Outliers
    • Feature Magnitude
    • Conclusion
  • Imputing Missing Values
    • Data Imputation
    • Missing Data Imputation Techniques
    • Mean Or Median Imputation
    • Arbitrary Value Imputation
    • End Of Tail Imputation
    • Frequent Category Imputation
    • Missing Category Imputation
    • Complete Case Analysis
    • Missing Indicator
    • Random Sample Imputation
    • Conclusion
  • Encoding Categorical Variables
    • Categorical Encoding
    • One-hot Encoding
    • Integer (Label) Encoding
    • Count Or Frequency Encoding
    • Ordered Label Encoding
    • Mean (Target) Encoding
    • Weight Of Evidence Encoding
    • Probability Ratio Encoding
    • Rare Label Encoding
    • Binary Encoding
    • Conclusion
  • Transforming Variables
    • Why These Transformations?
    • How Can We Transform Variables?
    • Logarithmic Transformation
    • Square Root Transformation
    • Reciprocal Transformation
    • Exponential Or Power Transformation
    • Box-Cox Transformation
    • Yeo-Johnson Transformation
    • Conclusion
  • Variable Discretization
    • Discretization Approaches
    • Equal-width Discretization
    • Equal-Frequency Discretization
    • K-Means Discretization
    • Discretization With Decision Trees
    • Using The Newly-Created Discrete Variable
    • Custom Discretization
    • Conclusion
  • Handling Outliers
    • Outliers
    • Detecting Outliers
    • Handling Outliers
    • Trimming
    • Censoring
    • Imputing
    • Transformation
    • Conclusion
  • Feature Scaling
    • Definition
    • Why Feature Scaling Matters
    • Scaling Methods
    • Mean Normalization
    • Standardization
    • Robust Scaling (Scaling To Median And IQR)
    • Min-Max Scaling
    • Maximum Absolute Scaling
    • Scaling To Vector Unit Norm
    • Conclusion
  • Handling Date-Time and Mixed Variables
    • Engineering Variables Of Date And Time
    • Date Variable
    • Time Variables
    • Engineering Mixed Variables Types
    • Cyclical Feature Problem
    • Conclusion
  • Resampling Imbalanced Data
    • What Is An Imbalanced Dataset?
    • Extra Library
    • The Metric Problem
    • Investigate Your Dataset
    • Resampling
    • Undersampling
    • Oversampling
    • Combining Oversampling And Undersampling
    • Other Techniques
    • Resources
    • Conclusion
  • Engineering Geospatial Data
    • More Libraries
    • Visualization
    • Techniques & Ideas
    • Use Features As They Are
    • Perform Clustering
    • Reverse Geocoding
    • Distance Feature
    • Extract X, Y, & Z
    • Other Derived Features
    • Conclusion
  • Advanced Feature Engineering Techniques
    • Advanced Categorical Encoding
    • Advanced Missing Value Imputation
    • Advanced Outlier Detection
    • Automated Feature Engineering
    • Conclusion
  • Feature Selection
    • What Is Feature Selection?
    • Why Should We Select Features?
    • Feature Selection Vs. Dimensionality Reduction
    • The Procedure Of Feature Selection
  • Filter Methods
    • Definition
    • Advantages
    • Types
    • The Methods
    • Basic Filter Methods
    • Correlation Filter Methods
    • Statistical & Ranking Filter Methods
    • Conclusion
  • Wrapper Methods
    • Definition
    • Advantages
    • Process
    • Stopping Criteria
    • Search Methods
    • Forward Feature Selection
    • Backward Feature Elimination
    • Exhaustive Feature Selection
    • Limitations Of Step Forward/Backward Selection
    • LRS Or Plus-L, Minus-R
    • Sequential Floating
    • Other Search Methods
    • Conclusion
  • Embedded Methods
    • Definition
    • Advantages
    • Process
    • Using Regularization
    • Tree-based Feature Importance
    • Conclusion
  • Hybrid Methods
    • Definition
    • Advantages
    • Process
    • Using Filter & Wrapper Methods
    • Using Embedded & Wrapper Methods
    • Conclusion
  • Advanced Feature Selection Techniques
    • Dimensionality Reduction
    • Heuristic Search Algorithms
    • Feature Importance
    • Deep Learning
  • Conclusion
