Supervised Machine Learning for Text Analysis in R, 1st Edition, by Emil Hvitfeldt and Julia Silge
Product details:
ISBN 10: 0367554186
ISBN 13: 9780367554187
Authors: Emil Hvitfeldt and Julia Silge
Text data is important for many domains, from healthcare to marketing to the digital humanities, but specialized approaches are necessary to create features for machine learning from language. Supervised Machine Learning for Text Analysis in R explains how to preprocess text data for modeling, train models, and evaluate model performance using tools from the tidyverse and tidymodels ecosystem. Models like these can be used to make predictions for new observations, to understand what natural language features or characteristics contribute to differences in the output, and more. If you are already familiar with the basics of predictive modeling, use the comprehensive, detailed examples in this book to extend your skills to the domain of natural language processing.

This book provides practical guidance and directly applicable knowledge for data scientists and analysts who want to integrate unstructured text data into their modeling pipelines. Learn how to use text data for both regression and classification tasks, and how to apply more straightforward algorithms like regularized regression or support vector machines as well as deep learning approaches.

Natural language must be dramatically transformed to be ready for computation, so we explore typical text preprocessing and feature engineering steps like tokenization and word embeddings from the ground up. These steps influence model results in ways we can measure, both in terms of model metrics and other tangible consequences such as how fair or appropriate model results are.
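To give a flavor of the workflow the book covers, here is a minimal sketch of a tidymodels text classification pipeline in R: tokenize a text column with textrecipes, keep the most frequent tokens, weight them by tf-idf, and specify a lasso-regularized logistic regression. The complaints data frame and its columns text and product are hypothetical placeholders for illustration, not the book's own data or code.

library(tidymodels)
library(textrecipes)

# Hypothetical toy data: a character column of text and a factor outcome.
complaints <- tibble(
  text = c("charged twice for a single purchase",
           "the loan terms were never explained"),
  product = factor(c("credit card", "loan"))
)

# Preprocessing recipe: tokenize into words, keep up to 1000 frequent
# tokens, and weight them by tf-idf.
text_rec <- recipe(product ~ text, data = complaints) %>%
  step_tokenize(text) %>%
  step_tokenfilter(text, max_tokens = 1000) %>%
  step_tfidf(text)

# Lasso-regularized logistic regression (mixture = 1 is pure lasso).
lasso_spec <- logistic_reg(penalty = 0.01, mixture = 1) %>%
  set_engine("glmnet") %>%
  set_mode("classification")

# Bundle preprocessing and model so both are applied consistently.
text_wf <- workflow() %>%
  add_recipe(text_rec) %>%
  add_model(lasso_spec)

# With real training data: fit(text_wf, data = training_set), then
# predict() on new observations.

Bundling the recipe and the model specification into a single workflow, as above, is the pattern the tidymodels framework encourages, since it guarantees that the same preprocessing is applied at training and prediction time.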
Table of contents:
I Natural Language Features
1 Language and modeling
1.1 Linguistics for text analysis
1.2 A glimpse into one area: morphology
1.3 Different languages
1.4 Other ways text can vary
1.5 Summary
1.5.1 In this chapter, you learned:
2 Tokenization
2.1 What is a token?
2.2 Types of tokens
2.2.1 Character tokens
2.2.2 Word tokens
2.2.3 Tokenizing by n-grams
2.2.4 Line, sentence, and paragraph tokens
2.3 Where does tokenization break down?
2.4 Building your own tokenizer
2.4.1 Tokenize to characters, only keeping letters
2.4.2 Allow for hyphenated words
2.4.3 Wrapping it in a function
2.5 Tokenization for non-Latin alphabets
2.6 Tokenization benchmark
2.7 Summary
2.7.1 In this chapter, you learned:
3 Stop words
3.1 Using premade stop word lists
3.1.1 Stop word removal in R
3.2 Creating your own stop words list
3.3 All stop word lists are context-specific
3.4 What happens when you remove stop words
3.5 Stop words in languages other than English
3.6 Summary
3.6.1 In this chapter, you learned:
4 Stemming
4.1 How to stem text in R
4.2 Should you use stemming at all?
4.3 Understand a stemming algorithm
4.4 Handling punctuation when stemming
4.5 Compare some stemming options
4.6 Lemmatization and stemming
4.7 Stemming and stop words
4.8 Summary
4.8.1 In this chapter, you learned:
5 Word Embeddings
5.1 Motivating embeddings for sparse, high-dimensional data
5.2 Understand word embeddings by finding them yourself
5.3 Exploring CFPB word embeddings
5.4 Use pre-trained word embeddings
5.5 Fairness and word embeddings
5.6 Using word embeddings in the real world
5.7 Summary
5.7.1 In this chapter, you learned:
II Machine Learning Methods
Overview
6 Regression
6.1 A first regression model
6.1.1 Building our first regression model
6.1.2 Evaluation
6.2 Compare to the null model
6.3 Compare to a random forest model
6.4 Case study: removing stop words
6.5 Case study: varying n-grams
6.6 Case study: lemmatization
6.7 Case study: feature hashing
6.7.1 Text normalization
6.8 What evaluation metrics are appropriate?
6.9 The full game: regression
6.9.1 Preprocess the data
6.9.2 Specify the model
6.9.3 Tune the model
6.9.4 Evaluate the model
6.10 Summary
6.10.1 In this chapter, you learned:
7 Classification
7.1 A first classification model
7.1.1 Building our first classification model
7.1.2 Evaluation
7.2 Compare to the null model
7.3 Compare to a lasso classification model
7.4 Tuning lasso hyperparameters
7.5 Case study: sparse encoding
7.6 Two-class or multiclass?
7.7 Case study: including non-text data
7.8 Case study: data censoring
7.9 Case study: custom features
7.9.1 Detect credit cards
7.9.2 Calculate percentage censoring
7.9.3 Detect monetary amounts
7.10 What evaluation metrics are appropriate?
7.11 The full game: classification
7.11.1 Feature selection
7.11.2 Specify the model
7.11.3 Evaluate the model
7.12 Summary
7.12.1 In this chapter, you learned:
III Deep Learning Methods
Overview
8 Dense neural networks
8.1 Kickstarter data
8.2 A first deep learning model
8.2.1 Preprocessing for deep learning
8.2.2 One-hot sequence embedding of text
8.2.3 Simple flattened dense network
8.2.4 Evaluation
8.3 Using bag-of-words features
8.4 Using pre-trained word embeddings
8.5 Cross-validation for deep learning models
8.6 Compare and evaluate DNN models
8.7 Limitations of deep learning
8.8 Summary
8.8.1 In this chapter, you learned:
9 Long short-term memory (LSTM) networks
9.1 A first LSTM model
9.1.1 Building an LSTM
9.1.2 Evaluation
9.2 Compare to a recurrent neural network
9.3 Case study: bidirectional LSTM
9.4 Case study: stacking LSTM layers
9.5 Case study: padding
9.6 Case study: training a regression model
9.7 Case study: vocabulary size
9.8 The full game: LSTM
9.8.1 Preprocess the data
9.8.2 Specify the model
9.9 Summary
9.9.1 In this chapter, you learned:
10 Convolutional neural networks
10.1 What are CNNs?
10.1.1 Kernel
10.1.2 Kernel size
10.2 A first CNN model
10.3 Case study: adding more layers
10.4 Case study: byte pair encoding
10.5 Case study: explainability with LIME
10.6 Case study: hyperparameter search
10.7 Cross-validation for evaluation
10.8 The full game: CNN
10.8.1 Preprocess the data
10.8.2 Specify the model
10.9 Summary
10.9.1 In this chapter, you learned:
IV Conclusion
Text models in the real world
A Regular expressions
A.1 Literal characters
A.1.1 Meta characters
A.2 Full stop, the wildcard
A.3 Character classes
A.3.1 Shorthand character classes
A.4 Quantifiers
A.5 Anchors
A.6 Additional resources
B Data
B.1 Hans Christian Andersen fairy tales
B.2 Opinions of the Supreme Court of the United States
B.3 Consumer Financial Protection Bureau (CFPB) complaints
B.4 Kickstarter campaign blurbs
C Baseline linear classifier
C.1 Read in the data
C.2 Split into test/train and create resampling folds
C.3 Recipe for data preprocessing
C.4 Lasso regularized classification model
C.5 A model workflow
C.6 Tune the workflow