TensorFlow 2. Shallow NN Example for MNIST Data

This practice is to understand how TensorFlow is applied to a shallow NN on the MNIST data. The practice is from Big Data University lectures. Reference: Support_Vector_Machines.html (Coursera Machine Learning Course); Big Data University TensorFlow course. Deep Learning concept: using multiple processing layers with non-linear algorithms to simulate the brain's ability; a branch of machine learning. We will focus on a shallow NN in this note. Shallow NN MNIST example: two or three layers only. In the context of supervised learning, digit recognition in our case, the learning consists of a target/feature which is to… Read More
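As a minimal sketch of the "two or three layers only" idea, here is a pure-NumPy forward pass of a shallow network; the layer sizes and variable names are illustrative assumptions, not the note's actual TensorFlow code.

```python
import numpy as np

def softmax(z):
    # subtract the row max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def shallow_forward(x, W1, b1, W2, b2):
    # one hidden layer with a non-linearity, then a softmax output:
    # the shallow structure the note describes
    h = np.tanh(x @ W1 + b1)
    return softmax(h @ W2 + b2)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 784))                      # 5 fake flattened 28x28 "images"
W1 = rng.normal(scale=0.1, size=(784, 128)); b1 = np.zeros(128)
W2 = rng.normal(scale=0.1, size=(128, 10));  b2 = np.zeros(10)

probs = shallow_forward(x, W1, b1, W2, b2)
print(probs.shape)    # one row of 10 class probabilities per image
```

In TensorFlow 2 the same structure would be two `Dense` layers; the NumPy version just makes the multiply–nonlinearity–softmax chain explicit.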

TensorFlow 101C. Image Texture

This note explains image texture. Reference: (computer vision). Why texture? Texture gives us information about the spatial arrangement of the colors or intensities in an image. Why? Because a histogram can't fully represent/classify images. All images below are half white and half black; however, the images are different. How to recognize texture: Structural approach: texture is a set of primitive texels in some regular or repeated relationship. Statistical approach: texture is a quantitative measure of the arrangement of intensities in a region. Statistical method: co-occurrence… Read More
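The "half white, half black" claim can be demonstrated numerically. A small sketch with two made-up 4×4 images: identical histograms, but a simple horizontal co-occurrence count (a statistical texture measure) tells them apart.

```python
import numpy as np

# Two 4x4 binary images, both exactly half black (0) and half white (255)
halves = np.zeros((4, 4), dtype=int); halves[:, 2:] = 255   # left black, right white
checker = np.indices((4, 4)).sum(axis=0) % 2 * 255          # checkerboard

# Their histograms are identical...
h1 = np.bincount(halves.ravel(), minlength=256)
h2 = np.bincount(checker.ravel(), minlength=256)
print((h1 == h2).all())   # the histogram alone cannot tell them apart

# ...but a horizontal co-occurrence count (statistical texture) differs:
def same_right_neighbor(img):
    # how often a pixel equals its right-hand neighbour
    return int((img[:, :-1] == img[:, 1:]).sum())

print(same_right_neighbor(halves), same_right_neighbor(checker))
```

The half-split image has many equal horizontal neighbours; the checkerboard has none, even though both are 50% black.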

TensorFlow 101B. CNN Concept

This note is to understand the concept and rise of CNNs. Reference: Introduction to convolutional neural networks. Lots of identical neurons, similar to a Java function, which can be reused. X is the input layer (think of it as what you see/hear/smell, e.g. image, video, audio, document). The next layer is not always fully connected with the previous layer: one neuron of type A is not fully connected to each X; B is not fully connected with all A; F is fully connected with all B. Why so many identical neurons? That… Read More

TensorFlow 101A. Manual Convolution Calculation

This note describes how to calculate convolution manually or via TensorFlow/NumPy commands. Common convolution commands: y = np.convolve(x, h, "valid") and y = np.convolve(h, x, "valid") are the same; this also holds for the "same" and "full" options. from scipy import signal as sg: sg.convolve can use an FFT-based method, which is faster than np.convolve for big-matrix convolution. inverse = numpy.linalg.inv(x). One dimension with zero padding: when we apply a kernel, we always flip (reverse) the kernel before the multiply-and-sum. One dimension without zero padding. One-dimensional filter with multiple dimensions of input: x = [[255, 7, 3], … Read More
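The commutativity claim above is easy to check; a quick sketch with made-up arrays:

```python
import numpy as np
from scipy import signal as sg

x = np.array([3., 4., 5.])
h = np.array([2., 1., 0.])

# np.convolve is commutative in every mode: convolve(x, h) == convolve(h, x)
for mode in ("valid", "same", "full"):
    assert np.allclose(np.convolve(x, h, mode), np.convolve(h, x, mode))

print(np.convolve(x, h, "full"))   # [ 6. 11. 14.  5.  0.]
# scipy's convolve agrees; for large inputs it may switch to an FFT-based method
print(sg.convolve(x, h, "full"))
```

Note the kernel flip in the first output term: y[0] = 3·2 = 6 comes from sliding the reversed h over x.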

TensorFlow 3. A Small and Interesting word2vec NN Example to Start Your Adventure

The intention of this note is to understand word2vec and how to build a small NN to start your adventure in Deep Learning. You can see many source codes here to build the NN, but I have not yet built it with TF. Reference: a visual toy NN for word2vec generation. The whole source code is from  The source code is written in JavaScript for the NN, with HTML and SVG for graph visibility. Try your best to understand everything thoroughly (not just well enough). It will help you… Read More



A Simple Implementation of MVC Framework

Most of us use lots of Java MVC frameworks in software development; the most famous is Spring. Maybe you were scared away from implementing your own when you saw the huge codebase of the Spring framework. Actually, it is not so hard to implement a simple MVC application without the Spring framework. In this article, we will demo a simple MVC inventory-management mobile app in text mode (not GUI), which includes product in, product out, and inventory tracking. We will use Chain… Read More



DOS in Y2016 – How to Run It Smoothly in Your Clinic Lab

Tools/reference used for the project: dosbox debug. OK, let's talk about grandpa's OS, DOS. You may wonder: did DOS survive in some corners of Silicon Valley in the USA, the most developed digital and AI-pioneer area? The answer is "IT DID". Here are some photos to rock you! From left to right, bottom to top, you can see DOS, 3 ISA cards, a clinic lab PC, and an 80486 CPU. Those machines are used for work-endurance tests for workers, who are employed widely in… Read More



A note on the eigenvector used in the Markov steady state: using P transpose or P to calculate the eigenvector

Last night, a classmate of my friend asked a good question about the eigenvector used for the Markov steady state: do we use the Markov probability transition matrix P to calculate its eigenvector, or its transpose? Why? Here is an example. The P matrix below is the Markov probability transition matrix: the sum of each row's probabilities is 1. You can imagine the 3 nodes with the transition graph below. We can compute… Read More
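The answer can be checked numerically: with rows of P summing to 1, the steady-state distribution π satisfies πP = π, so π is the eigenvector of P transpose (not P) for eigenvalue 1. A sketch with a made-up 3-node transition matrix:

```python
import numpy as np

# Hypothetical 3-state Markov transition matrix (each ROW sums to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# pi P = pi  <=>  P^T pi^T = pi^T, so we take eigenvectors of P transpose
vals, vecs = np.linalg.eig(P.T)
i = int(np.argmin(np.abs(vals - 1.0)))   # pick the eigenvalue closest to 1
pi = np.real(vecs[:, i])
pi = pi / pi.sum()                        # normalize to a probability vector

print(pi)
print(np.allclose(pi @ P, pi))            # pi is stationary under P
```

Using the eigenvectors of P itself would instead give you right eigenvectors (P v = v), which answer a different question.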

SVM Manual Maximization Procedure

Start from a trivial example with 4 sample data points: +(1,0), +(2,0), -(-1,0), -(-2,0). What are the SVM boundary and support vectors? It is easy to see that the closest positive (+) and negative (-) examples are (1,0) and (-1,0) respectively, and that the SVM boundary should be x = 0. OK, how can we derive it analytically via the SVM concept, i.e. maximizing the width (margin) between the closest positive/negative samples? Set the SVM boundary as ax+by+c = 0. The support vectors lie on ax+by+c = d and ax+by+c = -d (d > 0). Set the two… Read More
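The manual result can be cross-checked with scikit-learn (assuming it is available; this is a check, not the note's manual derivation). A linear SVM on the four toy points should recover the boundary x = 0 with support vectors (1, 0) and (-1, 0):

```python
import numpy as np
from sklearn.svm import SVC

# The four toy points from the example
X = np.array([[1, 0], [2, 0], [-1, 0], [-2, 0]], dtype=float)
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin

w, b = clf.coef_[0], clf.intercept_[0]
print(w, b)                          # roughly [1, 0] and 0, i.e. boundary x = 0
print(clf.support_vectors_)          # (-1, 0) and (1, 0): the closest points
```

The outer points (2,0) and (-2,0) do not appear as support vectors: only the closest pair determines the margin, which is exactly the geometric argument in the note.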

Complete and Simple PCA SVD Tutorial Note

Ref: PCA is the major method to reduce features/variables before you train your data in machine learning. It uses the top K highest-variance transformed features to represent the original N features (assuming N >> K). For example, we have the food consumption of 17 types of food, in grams per person per week, for every country in the UK. Maybe even after you view the above table for 5 minutes, you can hardly find any patterns. But if you use PCA to extract the… Read More
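A minimal sketch of the PCA-via-SVD step the note describes, with made-up data standing in for the UK food table (which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Fake data: 17 "food" features for 4 "countries" (keep K << N features)
X = rng.normal(size=(4, 17))

Xc = X - X.mean(axis=0)                 # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

K = 2                                   # keep the top-K variance directions
scores = Xc @ Vt[:K].T                  # each country as K numbers instead of 17
print(scores.shape)                     # (4, 2)

# Fraction of total variance captured by the top K components
explained = (S[:K] ** 2).sum() / (S ** 2).sum()
print(round(explained, 3))
```

Plotting the two score columns against each other is what makes the hidden country-level pattern visible in the food example.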


Reference: LikelyhoodFunction_The world is a complex place.pdf. Example Likelihood: when an event……

Junction Tree local consistency and global consistency

This note describes Junction Tree local consistency and global consistency (textbook: p. 109, Example 6.1). Reference: pgm_Princeton_COS513 Foundations of Probabilistic Modeling, lecture7.pdf; gouws_python_2010, a master's thesis on how to implement graphical models with Python; text: Bayesian Reasoning and Machine Learning. Junction tree property (JTP): for each pair U, V of cliques with intersection S, all cliques on the path between U and V contain S (from gouws_python_2010.pdf). Example 1 to reflect the property: add separators in diagram b), and you may find… Read More
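Local consistency means neighbouring cliques agree on the marginal over their separator. A small NumPy sketch with two hypothetical calibrated clique tables over binary variables (clique {A,B} and clique {B,C}, separator {B}; the numbers are made up):

```python
import numpy as np

# Hypothetical clique potentials, already calibrated:
# phi_AB[a, b] for clique {A, B}; phi_BC[b, c] for clique {B, C}
phi_AB = np.array([[0.2, 0.1],
                   [0.3, 0.4]])
phi_BC = np.array([[0.25, 0.25],
                   [0.10, 0.40]])

# Marginalize each clique down to the separator {B}
mu_from_AB = phi_AB.sum(axis=0)   # sum out A -> distribution over B
mu_from_BC = phi_BC.sum(axis=1)   # sum out C -> distribution over B

print(mu_from_AB, mu_from_BC)
print(np.allclose(mu_from_AB, mu_from_BC))  # locally consistent on B
```

On a junction tree (where the JTP holds), enforcing this local agreement on every separator is enough to guarantee global consistency; that is the point of Example 6.1.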