


Applied Data Analytics - Principles and Applications by Johnson I. Agbinya


Details Of The Book

Applied Data Analytics - Principles and Applications

ISBN: 9788770220965, 9788770220958
Publish year:
Pages: 370
Language: English
Ebook format: PDF (converted to EPUB or AZW3 on request)
File size: 77 MB

Price: $13.20 (list price $15, 12% off)


You can download Applied Data Analytics - Principles and Applications after making payment. On request, the book can be converted to PDF, EPUB, AZW3, or DJVU format.

Abstract Of The Book

Table Of Contents

Front Cover
Applied Data Analytics – Principles and Applications
List of Contributors
List of Figures
List of Tables
List of Abbreviations
1 Markov Chain and its Applications
	1.1 Introduction
	1.2 Definitions
		1.2.1 State Space
		1.2.2 Trajectory
			Transition probability
			State transition matrix
	1.3 Prediction Using Markov Chain
		1.3.1 Initial State
		1.3.2 Long-run Probability
			Algebraic solution
			Matrix method
	1.4 Applications of Markov Chains
		1.4.1 Absorbing Nodes in a Markov Chain
2 Hidden Markov Modelling (HMM)
	2.1 HMM Notation
	2.2 Emission Probabilities
	2.3 A Hidden Markov Model
		2.3.1 Setting up HMM Model
		2.3.2 HMM in Pictorial Form
	2.4 The Three Great Problems in HMM
		2.4.1 Notation
			Problem 1: Classification or the likelihood problem (find p(O|λ))
			Problem 2: Trajectory estimation problem
			Problem 3: System identification problem
		2.4.2 Solution to Problem 1: Estimation of Likelihood
			Naïve solution
			Forward recursion
			Backward recursion
			Solution to Problem 2: Trajectory estimation problem
	2.5 State Transition Table
		2.5.1 Input Symbol Table
		2.5.2 Output Symbol Table
	2.6 Solution to Problem 3: Find the Optimal HMM
		2.6.1 The Algorithm
	2.7 Exercises
3 Introduction to Kalman Filters
	3.1 Introduction
	3.2 Scalar Form
		3.2.1 Step (1): Calculate Kalman Gain
	3.3 Matrix Form
		3.3.1 Models of the State Variables
			Using prediction and measurements in Kalman filters
		3.3.2 Gaussian Representation of State
	3.4 The State Matrix
		3.4.1 State Matrix for Object Moving in a Single Direction
			Tracking including measurements
		3.4.2 State Matrix of an Object Moving in Two Dimensions
		3.4.3 Objects Moving in Three-Dimensional Space
	3.5 Kalman Filter Models with Noise
4 Kalman Filter II
	4.1 Introduction
	4.2 Processing Steps in Kalman Filter
		4.2.1 Covariance Matrices
		4.2.2 Computation Methods for Covariance Matrix
			Manual method
			Deviation matrix computation method
		4.2.3 Iterations in Kalman Filter
5 Genetic Algorithm
	5.1 Introduction
	5.2 Steps in Genetic Algorithm
	5.3 Terminology of Genetic Algorithms (GAs)
	5.4 Fitness Function
		5.4.1 Generic Requirements of a Fitness Function
	5.5 Selection
		5.5.1 The Roulette Wheel
		5.5.2 Crossover
			Single-position crossover
			Double crossover
			Mutation
			Inversion
	5.6 Maximizing a Function of a Single Variable
	5.7 Continuous Genetic Algorithms
		5.7.1 Lowest Elevation on Topographical Maps
		5.7.2 Application of GA to Temperature Recording with Sensors
6 Calculus on Computational Graphs
	6.1 Introduction
		6.1.1 Elements of Computational Graphs
	6.2 Compound Expressions
	6.3 Computing Partial Derivatives
		6.3.1 Partial Derivatives: Two Cases of the Chain Rule
			Linear chain rule
			Loop chain rule
			Multiple loop chain rule
	6.4 Computing of Integrals
		6.4.1 Trapezoidal Rule
		6.4.2 Simpson Rule
	6.5 Multipath Compound Derivatives
7 Support Vector Machines
	7.1 Introduction
	7.2 Essential Mathematics of SVM
		7.2.1 Introduction to Hyperplanes
		7.2.2 Parallel Hyperplanes
		7.2.3 Distance between Two Parallel Planes
	7.3 Support Vector Machines
		7.3.1 Problem Definition
		7.3.2 Linearly Separable Case
	7.4 Location of Optimal Hyperplane (Primal Problem)
		7.4.1 Finding the Margin
		7.4.2 Distance of a Point i from Separating Hyperplane
			Margin for support vector points
		7.4.3 Finding Optimal Hyperplane Problem
			Hard margin
	7.5 The Lagrangian Optimization Function
		7.5.1 Optimization Involving Single Constraint
		7.5.2 Optimization with Multiple Constraints
			Single inequality constraint
			Multiple inequality constraints
		7.5.3 Karush–Kuhn–Tucker Conditions
	7.6 SVM Optimization Problems
		7.6.1 The Primal SVM Optimization Problem
		7.6.2 The Dual Optimization Problem
			Reformulation of the dual algorithm
	7.7 Linear SVM (Non-linearly Separable) Data
		7.7.1 Slack Variables
			Primal formulation including slack variable
			Dual formulation including slack variable
			Choosing C in soft margin cases
		7.7.2 Non-linear Data Classification Using Kernels
			Polynomial kernel function
			Multi-layer perceptron (Sigmoidal) kernel
			Gaussian radial basis function
			Creating new kernels
8 Artificial Neural Networks
	8.1 Introduction
	8.2 Neuron
		8.2.1 Activation Functions
			Sigmoid
			Hyperbolic tangent
			Rectified Linear Unit (ReLU)
			Leaky ReLU
			Parametric rectifier
			Maxout neuron
			The Gaussian
			Error calculation
			Output layer node
			Hidden layer nodes
			Summary of derivations
9 Training of Neural Networks
	9.1 Introduction
	9.2 Practical Neural Network
	9.3 Backpropagation Model
		9.3.1 Computational Graph
	9.4 Backpropagation Example with Computational Graphs
	9.5 Back Propagation
	9.6 Practical Training of Neural Networks
		9.6.1 Forward Propagation
		9.6.2 Backward Propagation
			Adapting the weights
	9.7 Initialisation of Weights Methods
		9.7.1 Xavier Initialisation
		9.7.2 Batch Normalisation
	9.8 Conclusion
10 Recurrent Neural Networks
	10.1 Introduction
	10.2 Introduction to Recurrent Neural Networks
	10.3 Recurrent Neural Network
11 Convolutional Neural Networks
	11.1 Introduction
	11.2 Convolution Matrices
		11.2.1 Three-Dimensional Convolution in CNN
	11.3 Convolution Kernels
		11.3.1 Design of Convolution Kernel
			Separable Gaussian kernel
			Separable Sobel kernel
			Computation advantage
	11.4 Convolutional Neural Networks
		11.4.1 Concepts and Hyperparameters
			Depth (D)
			Zero-padding (P)
			Receptive field (R)
			Stride (S)
			Activation function using rectified linear unit
		11.4.2 CNN Processing Stages
			Convolution layer
		11.4.3 The Pooling Layer
		11.4.4 The Fully Connected Layer
	11.5 CNN Design Principles
	11.6 Conclusion
12 Principal Component Analysis
	12.1 Introduction
	12.2 Definitions
		12.2.1 Covariance Matrices
	12.3 Computation of Principal Components
		12.3.1 PCA Using Vector Projection
		12.3.2 PCA Computation Using Covariance Matrices
		12.3.3 PCA Using Singular-Value Decomposition
		12.3.4 Applications of PCA
			Face recognition
13 Moment-Generating Functions
	13.1 Moments of Random Variables
		13.1.1 Central Moments of Random Variables
		13.1.2 Properties of Moments
	13.2 Univariate Moment-Generating Functions
	13.3 Series Representation of MGF
		13.3.1 Properties of Probability Mass Functions
		13.3.2 Properties of Probability Distribution Functions f(x)
	13.4 Moment-Generating Functions of Discrete Random Variables
		13.4.1 Bernoulli Random Variable
		13.4.2 Binomial Random Variables
		13.4.3 Geometric Random Variables
		13.4.4 Poisson Random Variable
	13.5 Moment-Generating Functions of Continuous Random Variables
		13.5.1 Exponential Distributions
		13.5.2 Normal Distribution
		13.5.3 Gamma Distribution
	13.6 Properties of Moment-Generating Functions
	13.7 Multivariate Moment-Generating Functions
		13.7.1 The Law of Large Numbers
	13.8 Applications of MGF
14 Characteristic Functions
	14.1 Characteristic Functions
		14.1.1 Properties of Characteristic Functions
	14.2 Characteristic Functions of Discrete Single Random Variables
		14.2.1 Characteristic Function of a Poisson Random Variable
		14.2.2 Characteristic Function of Binomial Random Variable
		14.2.3 Characteristic Functions of Continuous Random Variables
15 Probability-Generating Functions
	15.1 Probability-Generating Functions
	15.2 Discrete Probability-Generating Functions
		15.2.1 Properties of PGF
		15.2.2 Probability-Generating Function of Bernoulli Random Variable
		15.2.3 Probability-Generating Function for Binomial Random Variables
		15.2.4 Probability-Generating Function for Poisson Random Variable
		15.2.5 Probability-Generating Functions of Geometric Random Variables
		15.2.6 Probability-Generating Function of Negative Binomial Random Variable
			Negative binomial probability law
	15.3 Applications of Probability-Generating Functions in Data Analytics
		15.3.1 Discrete Event Applications
			Coin tossing
			Rolling a die
		15.3.2 Modelling of Infectious Diseases
			Early extinction probability
			Models of extinction probability
16 Digital Identity Management System Using Artificial Neural Networks
	16.1 Introduction
	16.2 Digital Identity Metrics
	16.3 Identity Resolution
		16.3.1 Fingerprint and Face Verification Challenges
			Fingerprint
			Face
	16.4 Biometrics System Architecture
		16.4.1 Fingerprint Recognition
		16.4.2 Face Recognition
	16.5 Information Fusion
	16.6 Artificial Neural Networks
		16.6.1 Artificial Neural Networks Implementation
	16.7 Multimodal Digital Identity Management System Implementation
		16.7.1 Terminal, Fingerprint Scanner and Camera
		16.7.2 Fingerprint and Face Recognition SDKs
		16.7.3 Database
		16.7.4 Verification: Connect to Host and Select Verification
			Verifying user
			Successful verification
	16.8 Conclusion
17 Probabilistic Neural Network Classifiers for IoT Data Classification
	17.1 Introduction
	17.2 Probabilistic Neural Network (PNN)
	17.3 Generalized Regression Neural Network (GRNN)
	17.4 Vector Quantized GRNN (VQ-GRNN)
	17.5 Experimental Works
	17.6 Conclusion and Future Works
18 MML Learning and Inference of Hierarchical Probabilistic Finite State Machines
	18.1 Introduction
	18.2 Finite State Machines (FSMs) and PFSMs
		18.2.1 Mathematical Definition of a Finite State Machine
		18.2.2 Representation of an FSM in a State Diagram
	18.3 MML Encoding and Inference of PFSMs
		18.3.1 Modelling a PFSM
			Assertion code for hypothesis H
			Assertion code for data D generated by hypothesis H
		18.3.2 Inference of PFSM Using MML
			Inference of PFSM by ordered merging (OM)
			First stage merge
			Second stage merge
			Third stage merge
			Ordered merging (OM) algorithm
			Inference of PFSM using simulated annealing (SA)
			Simulated annealing (SA) algorithm
	18.4 Hierarchical Probabilistic Finite State Machine (HPFSM)
		18.4.1 Defining an HPFSM
		18.4.2 MML Assertion Code for the Hypothesis H of HPFSM
		18.4.3 Encoding the transitions of HPFSM
	18.5 Experiments
		18.5.1 Experiments on Artificial Datasets
			Example-1
			Example-2
		18.5.2 Experiments on ADL Datasets
	18.6 Summary
Solution to Exercises
About the Author
Back Cover
