
[Resource] A step-by-step guide to building machine learning and deep learning networks with TensorFlow

Posted on 2018-5-11 10:43:33

A step-by-step guide to building machine learning and deep learning networks with TensorFlow, including code and example programs. The book's full table of contents is below; a short illustrative code sketch follows each chapter's listing.

Table of Contents
Building Machine Learning Projects with TensorFlow
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Exploring and Transforming Data
TensorFlow's main data structure - tensors
Tensor properties - ranks, shapes, and types
Tensor rank
Tensor shape
Tensor data types
Creating new tensors
From numpy to tensors and vice versa
Useful method
Getting things done - interacting with TensorFlow
Handling the computing workflow - TensorFlow's data flow graph
Computation graph building
Useful operation object methods
Feeding
Variables
Variable initialization
Saving data flow graphs
Graph serialization language - protocol buffers
Useful methods
Example graph building
Running our programs - Sessions
Basic tensor methods
Simple matrix operations
Reduction
Tensor segmentation
Sequences
Tensor shape transformations
Tensor slicing and joining
Dataflow structure and results visualization - TensorBoard
Command line use
How TensorBoard works
Adding Summary nodes
Common Summary operations
Special Summary functions
Interacting with TensorBoard's GUI
Reading information from disk
Tabulated formats - CSV
The Iris dataset
Reading image data
Loading and processing the images
Reading from the standard TensorFlow format
Summary
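
To give a flavor of what Chapter 1 covers, here is a minimal sketch of building a small data flow graph and running it in a session, feeding a placeholder at run time. It assumes the TensorFlow 1.x API (the book itself targets an earlier release, so some names may differ); the shapes and values are arbitrary and not taken from the book.

import tensorflow as tf   # assumes the 1.x API (tf.compat.v1 in TF 2.x)

# Build a small data flow graph: one placeholder fed at run time,
# combined with a constant tensor.
a = tf.placeholder(tf.float32, shape=[2, 2], name="a")
b = tf.constant([[1.0, 1.0], [1.0, 1.0]], name="b")
product = tf.matmul(a, b)               # rank-2 tensor, shape [2, 2]

with tf.Session() as sess:
    # Nothing is computed until the graph is run inside a session.
    result = sess.run(product, feed_dict={a: [[2.0, 0.0], [0.0, 2.0]]})
    print(result)                       # a NumPy array comes back out
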
2. Clustering
Learning from data - unsupervised learning
Clustering
k-means
Mechanics of k-means
Algorithm iteration criterion
k-means algorithm breakdown
Pros and cons of k-means
k-nearest neighbors
Mechanics of k-nearest neighbors
Pros and cons of k-nn
Practical examples for useful libraries
matplotlib plotting library
Sample synthetic data plotting
scikit-learn dataset module
About the scikit-learn library
Synthetic dataset types
Blobs dataset
Employed method
Circle dataset
Employed method
Moon dataset
Project 1 - k-means clustering on synthetic datasets
Dataset description and loading
Generating the dataset
Model architecture
Loss function description and optimizer loop
Stop condition
Results description
Full source code
k-means on circle synthetic data
Project 2 - nearest neighbor on synthetic datasets
Dataset generation
Model architecture
Loss function description
Stop condition
Results description
Full source code
Summary
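
As a rough illustration of the k-means mechanics the chapter walks through, here is a minimal sketch of one assignment/update iteration in TensorFlow 1.x. The random data, K = 3, and the fixed iteration count are my own choices for illustration, not the book's project code, and the sketch assumes no cluster ends up empty.

import numpy as np
import tensorflow as tf   # assumes the 1.x API

# Toy data: 100 two-dimensional points and K = 3 centroids seeded from the data.
points = tf.constant(np.random.rand(100, 2), dtype=tf.float32)
centroids = tf.Variable(tf.slice(tf.random_shuffle(points), [0, 0], [3, -1]))

# Assignment step: squared distance from every point to every centroid.
expanded_points = tf.expand_dims(points, 0)         # [1, 100, 2]
expanded_centroids = tf.expand_dims(centroids, 1)   # [3, 1, 2]
distances = tf.reduce_sum(tf.square(expanded_points - expanded_centroids), 2)
assignments = tf.argmin(distances, 0)               # nearest centroid per point

# Update step: each centroid becomes the mean of its assigned points.
new_centroids = tf.stack([
    tf.reduce_mean(tf.boolean_mask(points, tf.equal(assignments, c)), 0)
    for c in range(3)])
update = tf.assign(centroids, new_centroids)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):                 # fixed iteration count as the stop condition
        sess.run(update)
    print(sess.run(centroids))
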
3. Linear Regression
Univariate linear modelling function
Sample data generation
Determination of the cost function
Least squares
Minimizing the cost function
General minima for least squares
Iterative methods - gradient descent
Example section
Optimizer methods in TensorFlow - the train module
The tf.train.Optimizer class
Other Optimizer instance types
Example 1 - univariate linear regression
Dataset description
Model architecture
Cost function description and Optimizer loop
Stop condition
Results description
Reviewing results with TensorBoard
Full source code
Example - multivariate linear regression
Attachment: Building Machine Learning Projects with TensorFlow-Packt Publishing (2016).pdf (7.89 MB, downloads: 475)
Useful libraries and methods
Pandas library
Dataset description
Model architecture
Loss function description and Optimizer loop
Stop condition
Results description
Full source code
Summary
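
In the same spirit as the chapter's univariate example, here is a minimal sketch of least-squares linear regression trained with tf.train.GradientDescentOptimizer. The synthetic line y = 2x + 1, the noise level, and the epoch count are made up for illustration.

import numpy as np
import tensorflow as tf   # assumes the 1.x API

# Synthetic data scattered around the (made-up) line y = 2x + 1.
x_data = np.random.rand(200).astype(np.float32)
y_data = 2.0 * x_data + 1.0 + np.random.normal(0.0, 0.1, 200).astype(np.float32)

W = tf.Variable(tf.zeros([1]))
b = tf.Variable(tf.zeros([1]))
y_model = W * x_data + b

cost = tf.reduce_mean(tf.square(y_model - y_data))     # least-squares cost
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):             # fixed number of epochs as the stop condition
        sess.run(train_op)
    print(sess.run([W, b]))             # should approach [2.0] and [1.0]
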
4. Logistic Regression
Problem description
Logistic function predecessor - the logit functions
Bernoulli distribution
Link function
Logit function
The importance of the logit inverse
The logistic function
Logistic function as a linear modeling generalization
Final estimated regression equation
Properties of the logistic function
Loss function
Multiclass application - softmax regression
Cost function
Data normalization for iterative methods
One hot representation of outputs
Example 1 - univariate logistic regression
Useful libraries and methods
TensorFlow's softmax implementation
Dataset description and loading
The CHDAGE dataset
CHDAGE dataset format
Dataset loading and preprocessing implementation
Model architecture
Loss function description and optimizer loop
Stop condition
Results description
Fitting function representations across epochs
Full source code
Graphical representation
Example 2 - Univariate logistic regression with skflow
Useful libraries and methods
Dataset description
Model architecture
Results description
Full source code
Summary
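
To make the logit/logistic relationship concrete, here is a minimal univariate logistic regression sketch: the log-odds are modeled as a linear function of x, the sigmoid maps them into (0, 1), and a cross-entropy loss is minimized. The synthetic data (loosely in the spirit of the CHD-versus-age example) and all hyperparameters are my own; this is not the book's code.

import numpy as np
import tensorflow as tf   # assumes the 1.x API

# Hypothetical binary outcome that becomes more likely as x grows.
x_raw = np.linspace(20, 70, 100).astype(np.float32)
y_data = (x_raw + np.random.normal(0, 10, 100) > 45).astype(np.float32)
x_data = (x_raw - x_raw.mean()) / x_raw.std()   # normalize for the iterative method

x = tf.placeholder(tf.float32, [None])
y = tf.placeholder(tf.float32, [None])
w = tf.Variable(0.0)
b = tf.Variable(0.0)
logits = w * x + b                      # the logit (log-odds) is linear in x
prediction = tf.sigmoid(logits)         # the logistic function maps it into (0, 1)

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(2000):
        sess.run(train_op, feed_dict={x: x_data, y: y_data})
    print(sess.run([w, b]))             # fitted slope and intercept of the logit
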
5. Simple FeedForward Neural Networks
Preliminary concepts
Artificial neurons
Original example - the Perceptron
Perceptron algorithm
Neural network layers
Neural Network activation functions
Gradients and the back propagation algorithm
Minimizing loss function: Gradient descent
Neural networks problem choice - Classification vs Regression
Useful libraries and methods
TensorFlow activation functions
TensorFlow loss optimization methods
Sklearn preprocessing utilities
First project - Non linear synthetic function regression
Dataset description and loading
Dataset preprocessing
Modeling architecture - Loss Function description
Loss function optimizer
Accuracy and Convergence test
Example code
Results description
Second project - Modeling cars fuel efficiency with non linear regression
Dataset description and loading
Dataset preprocessing
Modeling architecture
Convergence test
Results description
Third project - Learning to classify wines: Multiclass classification
Dataset description and loading
Dataset preprocessing
Modeling architecture
Loss function description
Loss function optimizer
Convergence test
Results description
Full source code
Summary
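
As a pocket version of the chapter's first project, here is a sketch of a single-hidden-layer feedforward network fitted to a made-up non-linear function (a noisy sine). The layer size, activation, and optimizer are illustrative choices, not the book's.

import numpy as np
import tensorflow as tf   # assumes the 1.x API

x_data = np.linspace(-3, 3, 200).reshape(-1, 1).astype(np.float32)
y_data = np.sin(x_data) + np.random.normal(0, 0.1, x_data.shape).astype(np.float32)

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])

# One hidden layer of 10 units with a non-linear activation, one linear output.
w1 = tf.Variable(tf.random_normal([1, 10], stddev=0.5))
b1 = tf.Variable(tf.zeros([10]))
hidden = tf.nn.tanh(tf.matmul(x, w1) + b1)
w2 = tf.Variable(tf.random_normal([10, 1], stddev=0.5))
b2 = tf.Variable(tf.zeros([1]))
output = tf.matmul(hidden, w2) + b2

loss = tf.reduce_mean(tf.square(output - y))             # mean squared error
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)   # back-propagation via the optimizer

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2000):
        _, l = sess.run([train_op, loss], feed_dict={x: x_data, y: y_data})
    print("final loss:", l)
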
6. Convolutional Neural Networks
Origin of convolutional neural networks
Getting started with convolution
Continuous convolution
Discrete convolution
Kernels and convolutions
Interpretation of the convolution operations
Applying convolution in TensorFlow
Other convolutional operations
Sample code - applying convolution to a grayscale image
Sample kernels results
Subsampling operation - pooling
Properties of subsampling layers
Invariance property
Subsampling layers implementation performance
Applying pool operations in TensorFlow
Other pool operations
Sample code
Improving efficiency - dropout operation
Applying the dropout operation in TensorFlow
Sample code
Convolutional type layer building methods
Convolutional layer
Subsampling layer
Example 1 - MNIST digit classification
Dataset description and loading
Dataset preprocessing
Modelling architecture
Loss function description
Loss function optimizer
Accuracy test
Result description
Full source code
Example 2 - image classification with the CIFAR10 dataset
Dataset description and loading
Dataset preprocessing
Modelling architecture
Loss function description and optimizer
Training and accuracy tests
Results description
Full source code
Summary
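
The building blocks this chapter keeps reusing are convolution, pooling, and dropout; here is a minimal sketch of all three applied to a dummy grayscale batch. The kernel size, number of feature maps, and keep probability are arbitrary illustrative values.

import numpy as np
import tensorflow as tf   # assumes the 1.x API

# A grayscale image batch laid out as [batch, height, width, channels].
image = tf.placeholder(tf.float32, [None, 28, 28, 1])

# A 5x5 convolution producing 32 feature maps, then 2x2 max pooling.
kernel = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
bias = tf.Variable(tf.zeros([32]))
conv = tf.nn.conv2d(image, kernel, strides=[1, 1, 1, 1], padding='SAME')
relu = tf.nn.relu(conv + bias)
pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# Dropout keeps each activation with probability keep_prob during training.
keep_prob = tf.placeholder(tf.float32)
dropped = tf.nn.dropout(pool, keep_prob)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(dropped, feed_dict={image: np.zeros((4, 28, 28, 1), np.float32),
                                       keep_prob: 0.5})
    print(out.shape)                    # (4, 14, 14, 32) after 2x2 pooling
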
7. Recurrent Neural Networks and LSTM
Recurrent neural networks
Exploding and vanishing gradients
LSTM neural networks
The gate operation - a fundamental component
Operation steps
Part 1 - set values to forget (input gate)
Part 2 - set values to keep, change state
Part 3 - output filtered cell state
Other RNN architectures
TensorFlow LSTM useful classes and methods
class tf.nn.rnn_cell.BasicLSTMCell
class MultiRNNCell(RNNCell)
learn.ops.split_squeeze(dim, num_split, tensor_in)
Example 1 - univariate time series prediction with energy consumption data
Dataset description and loading
Dataset preprocessing
Modelling architecture
Loss function description
Convergence test
Results description
Full source code
Example 2 - writing music "a la" Bach
Character level models
Character sequences and probability representation
Encoding music as characters - the ABC music format
ABC format data organization
Useful libraries and methods
Saving and restoring variables and models
Loading and saving pseudocode
Variable saving
Variable restoring
Dataset description and loading
Network Training
Dataset preprocessing
Vocabulary definition
Modelling architecture
Loss function description
Stop condition
Results description
Full source code
Summary
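
Since the chapter leans on tf.nn.rnn_cell.BasicLSTMCell and MultiRNNCell, here is a minimal sketch of stacking two LSTM layers, unrolling them with dynamic_rnn, and regressing a value from the last time step. The sequence length, unit counts, and toy sine input are my own, and exact module paths vary between TensorFlow releases of that era.

import numpy as np
import tensorflow as tf   # assumes a 1.x-era API where tf.nn.rnn_cell is available

# A univariate sequence batch laid out as [batch, time_steps, features].
inputs = tf.placeholder(tf.float32, [None, 20, 1])

# Two stacked LSTM layers; dynamic_rnn unrolls them over the time dimension.
cells = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.BasicLSTMCell(num_units=32) for _ in range(2)])
outputs, state = tf.nn.dynamic_rnn(cells, inputs, dtype=tf.float32)

# Regress the next value from the last time step's hidden output.
last = outputs[:, -1, :]
w = tf.Variable(tf.random_normal([32, 1], stddev=0.1))
b = tf.Variable(tf.zeros([1]))
prediction = tf.matmul(last, w) + b

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.sin(np.linspace(0, 6, 20)).reshape(1, 20, 1).astype(np.float32)
    print(sess.run(prediction, feed_dict={inputs: batch}).shape)   # (1, 1)
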
8. Deep Neural Networks
Deep neural network definition
Deep network architectures through time
LeNet 5
AlexNet
Main features
The original inception model
GoogLeNet (Inception V1)
Batch normalized inception (V2)
Inception v3
Residual Networks (ResNet)
Other deep neural network architectures
Example - painting with style - VGG style transfer
Useful libraries and methods
Dataset description and loading
Dataset preprocessing
Modeling architecture
Loss functions
Content loss function
Style loss function
Loss optimization loop
Convergence test
Program execution
Full source code
Summary
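
The style-transfer example revolves around two losses computed on network activations: a content loss (squared activation difference at one layer) and a style loss (squared difference of Gram matrices across several layers). Here is a small sketch of those two pieces; the function names are my own, and choosing the VGG layers and the content/style weights is the part the chapter actually covers.

import tensorflow as tf   # assumes the 1.x API

def gram_matrix(features):
    # features: [height, width, channels] activations from one network layer.
    shape = tf.shape(features)
    flat = tf.reshape(features, [shape[0] * shape[1], shape[2]])
    return tf.matmul(flat, flat, transpose_a=True)   # channel-by-channel correlations

def content_loss(content_features, generated_features):
    # How far the generated image's activations are from the content image's.
    return tf.reduce_sum(tf.square(generated_features - content_features))

def style_loss(style_features, generated_features):
    # How far the generated image's feature correlations are from the style image's.
    return tf.reduce_sum(tf.square(gram_matrix(generated_features) -
                                   gram_matrix(style_features)))

# The total loss is a weighted sum, minimized with respect to the generated
# image's pixels, e.g. alpha * content_loss(...) + beta * (sum of per-layer style_loss).
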
9. Running Models at Scale – GPU and Serving
GPU support on TensorFlow
Log device placement and device capabilities
Querying the computing capabilities
Selecting a CPU for computing
Device naming
Example 1 - assigning an operation to the GPU
Example 2 - calculating Pi in parallel
Solution implementation
Source code
Distributed TensorFlow
Technology components
Jobs
Tasks
Servers
Combined overview
Creating a TensorFlow cluster
ClusterSpec definition format
Creating a tf.train.Server
Cluster operation - sending computing methods to tasks
Sample distributed code structure
Example 3 - distributed Pi calculation
Server script
Client script
Full source code
Example 4 - running a distributed model in a cluster
Sample code
Summary
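
Before the distributed examples, the chapter's GPU material boils down to pinning operations to a device and asking the session to log where each one runs. A minimal sketch, with allow_soft_placement so it still runs on a CPU-only machine:

import tensorflow as tf   # assumes the 1.x API

# Pin one operation to the first GPU; without soft placement this errors out
# on a machine with no visible GPU.
with tf.device('/gpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    product = tf.matmul(a, b)

config = tf.ConfigProto(log_device_placement=True,   # print each op's device
                        allow_soft_placement=True)    # fall back to CPU if needed
with tf.Session(config=config) as sess:
    print(sess.run(product))
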
10. Library Installation and Additional Tips
Linux installation
Initial requirements
Ubuntu preparation tasks (need to apply before any method)
Pip Linux installation method
CPU version
Testing your installation
GPU support
Virtualenv installation method
Environment test
Docker installation method
Installing Docker
Allowing Docker to run with a normal user
Reboot
Testing the Docker installation
Run the TensorFlow container
Linux installation from source
Installing the Git source code version manager
Git installation in Linux (Ubuntu 16.04)
Installing the Bazel build tool
Adding the Bazel distribution URI as a package source
Updating and installing Bazel
Installing GPU support (optional)
Installing CUDA system packages
Creating alternative locations
Installing cuDNN
Clone TensorFlow source
Configuring TensorFlow build
Building TensorFlow
Testing the installation
Windows installation
Classic Docker toolbox method
Installation steps
Downloading the Docker toolbox installer
Creating the Docker machine
MacOS X installation
Install pip
Summary
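
Whichever installation route you take, the quickest sanity check is the same on every platform: import the library, print its version, and run a trivial graph. A minimal sketch, again assuming the 1.x API:

import tensorflow as tf

print(tf.__version__)

hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))
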
Posted on 2018-5-11 21:09:23
First reply! Thanks for sharing.
Posted on 2018-5-12 14:24:18
Great stuff, thanks to the OP.
Posted on 2018-5-13 01:30:01
Thanks very much!
Posted on 2018-5-14 11:44:42
Thanks a lot.
Posted on 2018-5-16 01:15:59
Thanks, OP.
Posted on 2018-5-17 08:39:05
Thank you for sharing!
Posted on 2018-5-17 14:04:41
Good.
Posted on 2018-5-18 16:52:16
Good document, thank you for sharing.