Saturday, April 29, 2017

Over the past few weeks we opened our discussion of machine learning and neural networks with an introduction to linear classification that discussed the concept of parameterized learning, and how this type of learning enables us to define a scoring function that maps our input data to output class labels.

This scoring function is defined in terms of parameters; specifically, our weight matrix W and our bias vector b. Our scoring function accepts these parameters as inputs and returns a predicted class label for each input data point.
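As a quick refresher, the scoring function is just a linear mapping. The sketch below assumes a toy problem with 3 classes and 4-dimensional feature vectors (the shapes and values are arbitrary, chosen only for illustration):

```python
import numpy as np

np.random.seed(42)
W = np.random.randn(3, 4)   # weight matrix: one row of weights per class
b = np.random.randn(3)      # bias vector: one entry per class
x = np.random.randn(4)      # a single input data point

# The scoring function: f(x; W, b) = Wx + b
scores = W.dot(x) + b

# The predicted class label is the class with the highest score
pred = np.argmax(scores)
```

Each entry of `scores` is that class's "confidence" for the input x; the argmax turns the raw scores into a class label.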

From there, we discussed two common loss functions: Multi-class SVM loss and cross-entropy loss (commonly referred to in the same breath as “Softmax classifiers”). Loss functions, at the most basic level, are used to quantify how “good” or “bad” a given predictor (i.e., a set of parameters) is at classifying the input data points in our dataset.
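To make both losses concrete, here is a small sketch computing each for a single data point (the score values and correct class index are made up for illustration):

```python
import numpy as np

scores = np.array([3.2, 5.1, -1.7])  # raw class scores from f(x; W, b)
correct_class = 0                    # index of the true label

# Cross-entropy loss: softmax the scores (shifting by the max for
# numerical stability), then take the negative log probability of
# the correct class
exp_scores = np.exp(scores - np.max(scores))
probs = exp_scores / exp_scores.sum()
ce_loss = -np.log(probs[correct_class])

# Multi-class SVM (hinge) loss with a margin of 1: sum the amounts by
# which each incorrect class score exceeds the correct score minus the margin
margins = np.maximum(0, scores - scores[correct_class] + 1)
margins[correct_class] = 0
svm_loss = margins.sum()
```

Both losses are zero-or-positive numbers: the lower the value, the better the parameters W and b are doing on that data point.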

Given these building blocks, we can now move on to arguably the most important aspect of machine learning, neural networks, and deep learning — optimization.

Throughout this discussion we’ve learned that high classification accuracy is dependent on finding a set of weights W such that our data points are correctly classified. Given W, we can compute our output class labels via our scoring function. And finally, we can determine how good/poor our classifications are given some W via our loss function.

But how do we go about finding a weight matrix W that obtains high classification accuracy?

Do we randomly initialize W, evaluate, and repeat over and over again, hoping that at some point we land on a W that obtains reasonable classification accuracy?
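That naive strategy is easy to sketch. The toy data, shapes, and trial count below are all assumptions made purely to illustrate the idea:

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(20, 4)            # 20 data points, 4 features each
y = np.random.randint(0, 3, size=20)  # ground-truth labels for 3 classes

best_loss = np.inf
best_W, best_b = None, None

# Randomly sample parameters over and over, keeping the best set seen so far
for _ in range(100):
    W = np.random.randn(3, 4)
    b = np.random.randn(3)
    scores = X.dot(W.T) + b           # score every point at once
    preds = np.argmax(scores, axis=1)
    loss = np.mean(preds != y)        # misclassification rate as a simple loss
    if loss < best_loss:
        best_loss, best_W, best_b = loss, W, b
```

This "works" in the sense that the best loss can only improve, but it makes no use of the information from previous guesses, which is exactly what an optimization algorithm exploits.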

Well, we could — and in some cases that might work just fine.

But in most situations, we instead need to define an optimization algorithm that allows us to iteratively improve our weight matrix W.

In today’s blog post, we’ll be looking at arguably the most common algorithm used to find optimal values of W — gradient descent.
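As a preview, here is a minimal sketch of vanilla gradient descent minimizing the cross-entropy loss of a linear classifier on toy data. The data, learning rate, and iteration count are all assumptions for illustration, not tuned values:

```python
import numpy as np

np.random.seed(1)
X = np.random.randn(50, 4)            # 50 data points, 4 features each
y = np.random.randint(0, 3, size=50)  # ground-truth labels for 3 classes

W = 0.01 * np.random.randn(4, 3)      # small random initialization
lr = 0.1                              # learning rate (step size)

def loss_and_grad(W):
    """Cross-entropy loss over the dataset and its gradient w.r.t. W."""
    scores = X.dot(W)                             # (50, 3) class scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)     # softmax probabilities
    n = len(y)
    loss = -np.log(probs[np.arange(n), y]).mean()
    dscores = probs.copy()
    dscores[np.arange(n), y] -= 1                 # softmax gradient
    return loss, X.T.dot(dscores) / n

losses = []
for _ in range(200):
    loss, grad = loss_and_grad(W)
    losses.append(loss)
    W -= lr * grad   # step in the direction of the negative gradient
```

The key idea is in the final line: each iteration nudges W in the direction that locally decreases the loss, so the loss falls over time instead of jumping around at random.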

Tuesday, April 11, 2017

If you need to update your PHP version, just download the PHP executable and save it to a folder. Add that folder to PATH and make sure it's the first php.exe in the PATH's folder list.
Rename php.ini-development to php.ini and uncomment the extensions you need. You will need mbstring and openssl for sure (otherwise composer update will fail).