
Informatics and machine learning : from Martingales to metaheuristics / Stephen Winters-Hilt

Material type: E-book
Publisher: Hoboken, NJ : John Wiley & Sons, Inc.
Publication year: 2022
Extent: 1 online resource (xv, 566 pages) : illustrations
Author heading: Winters-Hilt, Stephen, author


Holdings:
Imizu (Electronic)  007  EB0005248  Wiley Online Library: Complete oBooks  ISBN 9781119716730


General note: Includes bibliographical references and index
"This book provides an interdisciplinary presentation on machine learning, bioinformatics and statistics. This book is an accumulation of lecture notes and interesting research tidbits from over two decades of the author's teaching experience. The chapters in this book can be traversed in different ways for different course offerings. In the classroom, the trend is moving towards hands-on work with running code. Therefore, the author provides lots of sample code to explicitly explain and provide example-based code for various levels of project work. This book is especially useful for professionals entering the rapidly growing Machine Learning field due to its complete presentation of the mathematical underpinnings and extensive examples of programming implementations. Many Machine Learning (ML) textbooks miss a strong intro/basis in terms of information theory. Using mutual information alone, for example, a genome's encoding scheme can be 'cracked' with less than one page of Python code. On the implementation side, many ML professional/reference texts often do not shown how to actually access raw data files and reformat the data into some more usable form. Methods and implementations to do this are described in the proposed text, where most code examples are in Python (some in C/C++, Perl, and Java, as well). Once the data is in hand all sorts of fun analytics and advanced machine learning tools can be brought to bear."-- Provided by publisher
Description based on online resource; title from digital title page (viewed on January 25, 2022)
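To illustrate the mutual-information claim in the publisher's note above, here is a minimal Python sketch (an editor's illustration, not code from the book): it estimates the mutual information between nucleotide pairs separated by a gap of k positions; for real coding DNA such a scan typically shows elevated values at gaps that are multiples of 3, reflecting the codon (triplet) encoding. The function name and the toy sequence are assumptions made for the example; a real run would read the sequence from a FASTA file.

# Minimal sketch (not from the book): mutual information between bases
# separated by `gap` positions; peaks at multiples of 3 hint at codon structure.
from collections import Counter
from math import log2

def mutual_information(seq, gap):
    # Joint and marginal frequencies of base pairs (x_i, x_{i+gap}).
    pairs = [(seq[i], seq[i + gap]) for i in range(len(seq) - gap)]
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi

# Toy stand-in for a genome sequence; replace with a real FASTA-derived string.
seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG" * 100
for gap in range(1, 13):
    print(gap, round(mutual_information(seq, gap), 4))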
Contents (partial):
Calculus ... Python (or Perl) and Linux 2
1.2 Informatics and Data Analytics 3
1.3 FSA-Based Signal Acquisition and Bioinformatics 4
1.4 Feature Extraction and Language Analytics 7
1.5 Feature Extraction and Gene Structure Identification 8
1.5.1 HMMs for Analysis of Information Encoding Molecules 11
1.5.2 HMMs for Cheminformatics and Generic Signal Analysis 11
1.6 Theoretical Foundations for Learning 13
1.7 Classification and Clustering 13
1.8 Search 14
1.9 Stochastic Sequential Analysis (SSA) Protocol (Deep Learning Without NNs) 15
1.9.1 Stochastic Carrier Wave (SCW) Analysis - Nanoscope Signal Analysis 18
1.9.2 Nanoscope Cheminformatics - A Case Study for Device "Smartening" 19
1.10 Deep Learning using Neural Nets 20
1.11 Mathematical Specifics and Computational Implementations 21
2 Probabilistic Reasoning and Bioinformatics 23
2.1 Python Shell Scripting
[...]
Wikipedia 125
5.1.1.2 Library of Babel 126
5.1.1.3 Weather Scraper 127
5.1.1.4 Stock Scraper - New-Style with Cookies 128
5.1.2 Word Frequency Analysis: Machiavelli's Polysemy on Fortuna and Virtu 130
5.1.3 Word Frequency Analysis: Coleridge's Hidden Polysemy on Logos 139
5.1.4 Sentiment Analysis 143
5.2 Phrases - Short (Three Words) 145
5.2.1 Shakespearean Insult Generation - Phrase Generation 147
5.3 Phrases - Long (A Line or Sentence) 150
5.3.1 Iambic Phrase Analysis: Shakespeare 150
5.3.2 Natural Language Processing 152
5.3.3 Sentence and Story Generation: Tarot 152
5.4 Exercises 153
6 Analysis of Sequential Data Using HMMs 155
6.1 Hidden Markov Models (HMMs) 155
6.1.1 Background and Role in Stochastic Sequential Analysis (SSA) 155
6.1.2 When to Use a Hidden Markov Model (HMM)? 160
6.1.3 Hidden Markov Models (HMMs) - Standard Formulation and Terms 161
6.2 Graphical Models for Markov Models and Hidden Markov Models 162
6.2.1 Hidden Markov Models 162
6.2.2 Viterbi Path 163
6.2.2.1 The Most Probable State Sequence 164
6.2.3 Forward and Backward Probabilities 164
6.2.4 HMM: Maximum Likelihood discrimination 165
6.2.5 Expectation/Maximization (Baum-Welch) 166
6.2.5.1 Emission and Transition Expectations with Rescaling 167
6.3 Standard HMM Weaknesses and their GHMM Fixes 168
6.4 Generalized HMMs (GHMMs - "Gems"): Minor Viterbi Variants 171
6.4.1 The Generic HMM 171
6.4.2 pMM/SVM 171
6.4.3 EM and Feature Extraction via EVA Projection 172
6.4.4 Feature Extraction via Data Absorption (a.k.a
[...]
(Further details in Appendix) 232
7.6 Exercises 234
8 Neuromanifolds and the Uniqueness of Relative Entropy 235
8.1 Overview 235
8.2 Review of Differential Geometry 236
8.2.1 Differential Topology - Natural Manifold 236
8.2.2 Differential Geometry - Natural Geometric Structures 240
8.3 Amari's Dually Flat Formulation 243
8.3.1 Generalization of Pythagorean Theorem 246
8.3.2 Projection Theorem and Relation Between Divergence and Link Formalism 246
8.4 Neuromanifolds 247
8.5 Exercises 250
9 Neural Net Learning and Loss Bounds Analysis 253
9.1 Brief Introduction to Neural Nets (NNs) 254
9.1.1 Single Neuron Discriminator 254
9.1.1.1 The Perceptron 254
9.1.1.2 Sigmoid Neurons 256
9.1.1.3 The Loss Function and Gradient Descent 257
9.1.2 Neural Net with Back-Propagation 258
9.1.2.1 The Loss Function - General Activation in a General Neural
[...]
Online access: John Wiley and Sons, Wiley Online Library: Complete oBooks
https://onlinelibrary.wiley.com/doi/book/10.1002/9781119716730
Subjects: LCSH:Machine learning
LCSH:Computer science
LCSH:Bioinformatics
LCSH:Electronic data processing
LCSH:Computational biology
CSHF:Apprentissage automatique
CSHF:Informatique
CSHF:Bio-informatique
FREE:data processing
FREE:computer science
FREE:Electronic data processing
FREE:Computational biology
FREE:Bioinformatics
FREE:Computer science
FREE:Machine learning
Classification: LCC:Q325.5
DC23:006.3/1
Bibliographic ID: EB00004537
ISBN: 9781119716730
