There is a common notion that machine learning is only for experts and not for people with little programming knowledge. The “people” you’ll be dealing with, however, are machines, and machines can’t make decisions on their own; your job is to make these machines think. At first you might assume this task is impossible, but through this guide you’ll see that machine learning (ML) can be much easier than it sounds.

As with almost every framework in the history of humankind, from Newton’s laws of motion to the law of supply and demand, the core ideas and concepts of machine learning are attainable and not especially complicated. The real challenges you’ll face when discovering machine learning are the notation, the formulas, and a specialized vocabulary you may never have encountered before.

Choosing Your Data Set

Just about any data set can be used for ML purposes, but an efficient model can only be achieved through a well-structured data set.

A well-structured data set with well-labeled data that suits a number of ML algorithms means you won’t have a hard time building your model. Even so, simpler algorithms are usually the best starting point regardless of data volume, especially when you’re dealing with a number of data sets, because they are easier to manage later on. For example, regularized logistic regression can work well even with billions of features.
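
As a rough illustration of the technique (not the scale), here is a minimal sketch of regularized logistic regression in R. It assumes the glmnet package is installed and uses the small built-in mtcars data set purely for demonstration.

library(glmnet)

# Ridge-regularized (L2) logistic regression: predict transmission type
# from a few engine measurements; cv.glmnet picks the penalty strength.
x <- as.matrix(mtcars[, c("mpg", "wt", "hp", "disp")])
y <- factor(mtcars$am)   # 0 = automatic, 1 = manual
fit <- cv.glmnet(x, y, family = "binomial", alpha = 0, nfolds = 5)
predict(fit, newx = x[1:5, ], type = "class")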

More complicated algorithms, including GBRT (Gradient Boosted Regression Trees), deep neural networks, and random forests, tend to perform better than simpler ones, but they can be more expensive to train and use. When you’re working with very large amounts of data, online learning algorithms such as the linear perceptron are the usual choice, and online learning algorithms are typically trained with stochastic gradient descent.
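
The sketch below is a toy online perceptron in base R, shown only to illustrate the idea of stochastic, one-example-at-a-time updates; the data set and learning rate are arbitrary choices.

# Toy online perceptron: weights are updated one example at a time
# (stochastic updates), so the full data set never needs to be in memory.
set.seed(42)
binary <- iris[iris$Species != "setosa", ]
X <- cbind(1, as.matrix(binary[, 1:4]))              # add a bias column
y <- ifelse(binary$Species == "virginica", 1, -1)    # labels in {-1, +1}

w <- rep(0, ncol(X))                                 # initial weights
lr <- 0.01                                           # learning rate
for (epoch in 1:25) {
  for (i in sample(nrow(X))) {                       # shuffle each pass
    if (y[i] * sum(w * X[i, ]) <= 0) {               # misclassified?
      w <- w + lr * y[i] * X[i, ]                    # perceptron update
    }
  }
}
mean(sign(X %*% w) == y)                             # training accuracy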

Choosing a good selection of features is always a safe bet, and it is one of the essential ways to learn about your data. Feature selection is more of an art than a science. The process can be complicated, but it becomes easier if you remove the unpromising features first to narrow down your options.
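
As a hedged example of that first pass, the base-R sketch below drops near-constant columns and then ranks the remaining features by their correlation with a numeric target; it is a rough screen, not a full feature-selection procedure.

# Rough first pass at pruning unpromising features on mtcars,
# with mpg as the target.
data <- mtcars
target <- data$mpg
features <- data[, setdiff(names(data), "mpg")]

# 1. Drop features with almost no variance (they carry little signal)
variances <- sapply(features, var)
features <- features[, variances > 1e-4]

# 2. Rank the remaining features by absolute correlation with the target
cors <- sapply(features, function(f) abs(cor(f, target)))
sort(cors, decreasing = TRUE)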

Model Selection

To find the model that best suits your needs, keep the following considerations in mind:

Accuracy

While accuracy is important, you don’t always have to go with the most accurate answer, particularly when a model is producing high bias or high variance. Sometimes an approximation will do, depending on the use. By opting for the approximation, you reduce processing time and help avoid overfitting.

Training Time

Each algorithm requires a different amount of time to train its model. Training time is closely related to accuracy: as training time increases, accuracy generally does too. Also keep in mind that some algorithms are more sensitive to the amount of data than others and may require more training time.

Linear Classification

Linearity is common in machine learning algorithms. Linear classification algorithms assume that classes can be separated by a straight line (or its higher-dimensional equivalent); these algorithms include logistic regression and support vector machines. Linear classification algorithms work off assumptions that can lead to the previously mentioned high bias and variance issues, but they are still commonly implemented first because they’re algorithmically simple and quick to train.
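
Here is a small illustration of a linear classifier: logistic regression fit with base R’s glm on two of the iris species. The choice of species and features is arbitrary; it is only meant to show a single linear decision boundary at work.

# Logistic regression separating versicolor from virginica using two
# petal measurements, i.e. a single linear decision boundary.
binary <- droplevels(iris[iris$Species != "setosa", ])

fit <- glm(Species ~ Petal.Length + Petal.Width,
           data = binary, family = binomial)

# Probabilities above 0.5 map to the second factor level (virginica)
pred <- ifelse(predict(fit, type = "response") > 0.5,
               levels(binary$Species)[2], levels(binary$Species)[1])
mean(pred == binary$Species)                         # training accuracy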

Number of Parameters

Parameters are the numbers data scientists use to set up an algorithm. They change the behavior of the algorithm, controlling things such as the number of iterations, error tolerances, or which variant of the algorithm to use. If you want the best settings, you will often need to trade off accuracy against training time, and the algorithms with the largest number of parameters will always require the most trial-and-error combinations and tests.
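
The sketch below shows what that trial and error can look like in R: a small loop over one parameter of a classification tree, the complexity parameter cp of the rpart package (which ships with R). The values tried here are arbitrary.

library(rpart)
set.seed(1)

# Try a few values of cp, which controls how aggressively the tree is pruned
for (cp in c(0.001, 0.01, 0.1)) {
  fit <- rpart(Species ~ ., data = iris, method = "class",
               control = rpart.control(cp = cp))
  acc <- mean(predict(fit, iris, type = "class") == iris$Species)
  cat("cp =", cp, " training accuracy =", round(acc, 3), "\n")
}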

How You Can Train Your Model

Training a model is a repetitive process: getting a high level of accuracy out of the selected model will always require a substantial amount of iteration.

As a first step, you’ll have to select the model best suited to your particular project. We discussed the pros and cons of various models above. With that information in hand, the steps cover the following:

Data Set Partitioning for Testing and Training

Also called “data partitioning,” this step is supported by a variety of tools and languages that help you choose which data points go into the training set and which go into the testing set. Below is an example of partitioning a data set in R.

# By default, R comes with a number of built-in data sets
data <- mtcars
dim(data)  # 32 11

# Sample indexes: reserve 20% of the rows for testing
set.seed(123)  # makes the random split reproducible
indexes <- sample(1:nrow(data), size = floor(0.2 * nrow(data)))

# Split data
test <- data[indexes, ]
dim(test)   # 6 11
train <- data[-indexes, ]
dim(train)  # 26 11

Eliminating Biases in Data Through Cross-Validation

Cross-validating your model helps ensure that it is accurate without picking up too much noise or, in simpler terms, that it is low in both variance and bias. Cross-validation is a reliable technique for tracking the effectiveness of your model, especially in cases where you need to rein in overfitting.

You will find various cross-validation techniques available; the holdout method is the most basic form and makes use of the partitioning described above. The function approximator fits a function using only the training set and is then asked to predict the output values for the data in the testing set. One good feature of this method is that it is quick to compute, which often makes it preferable to the residual method.
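
A hand-rolled k-fold variant of this idea is sketched below in base R, using a classification tree from the rpart package on the iris data; each fold is held out once while the remaining folds train the model. The number of folds and the seed are arbitrary.

library(rpart)
set.seed(123)

k <- 5
folds <- sample(rep(1:k, length.out = nrow(iris)))   # random fold labels
accuracy <- numeric(k)

for (i in 1:k) {
  train <- iris[folds != i, ]
  test  <- iris[folds == i, ]
  fit   <- rpart(Species ~ ., data = train, method = "class")
  pred  <- predict(fit, test, type = "class")
  accuracy[i] <- mean(pred == test$Species)
}
mean(accuracy)                                       # cross-validated accuracy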

Maintain the Accuracy of Your Model

Using the results of the cross-validation process, check the errors your model makes and devise a way to fix them.

Repeat These Steps Until You Obtain the Desired Accuracy

Gather and Test

If you’re on the lookout for a basic data set to learn with and gain comfort with, experts often recommend the multivariate Iris Flower Data Set, introduced in 1936 by the British statistician Ronald Fisher. This data set records four measured features across three different species of iris and comes with 50 samples for every species.
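
The Iris data set ships with R, so a quick look at it takes only a few lines:

data(iris)
dim(iris)              # 150 5: four measurements plus the species label
table(iris$Species)    # 50 samples of each of the three species
head(iris)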

Once we have our data set, we can move on to the next steps:

A Basic Predictive Model in Orange

Let’s build a pair of basic predictive models in the Orange canvas, along with their corresponding Jupyter notebooks.

First, let’s examine our Simple Predictive Model – Part 1 notebook. After that, we’ll create the model on the Orange canvas. Below is a diagram of a workflow that predicts results from the Iris data set using a classifier within Orange.

[Figure: Orange workflow for classifying the Iris data set]

The toolbar on the left side of the canvas provides more than 100 widgets that you can drag onto the canvas. Reading the schema from left to right, with data flowing from widget to widget through the pipelines, the Iris data set can be viewed in a variety of widgets once it is loaded. Clicking the simple data table and the scatter plot displays the following:

[Figure: Orange data table and scatter plot of the Iris data set]

With just three widgets, we already have a better picture of our data. The scatter plot provides the option “Rank Projections,” which finds the most informative way to view the data. Here, “Petal Width vs. Petal Length” quickly reveals the relationship between the petal dimensions and the type of iris flower studied.
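
For readers following along outside Orange, a rough base-R equivalent of that scatter plot is:

# Petal width against petal length, colored by species
plot(iris$Petal.Length, iris$Petal.Width,
     col = iris$Species, pch = 19,
     xlab = "Petal Length", ylab = "Petal Width",
     main = "Iris: Petal Width vs. Petal Length")
legend("topleft", legend = levels(iris$Species), col = 1:3, pch = 19)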

To build the predictive model, we connect the data to a Classification Tree widget, which we can then examine through the viewer widget.

[Figure: Classification Tree viewer]

Here the predictive model takes shape, which allows us to connect the model and the data to the “Test and Score” and “Predictions” widgets. This process shows how the classification tree performs.

[Figure: Predictions widget]
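
For reference, roughly the same train-and-score step can be sketched in plain R, with the rpart package standing in for Orange’s Classification Tree and Test and Score widgets; the split proportion and seed are arbitrary.

library(rpart)
set.seed(7)

# Hold out 20% of the rows, train a classification tree, score the rest
idx   <- sample(1:nrow(iris), size = floor(0.8 * nrow(iris)))
train <- iris[idx, ]
test  <- iris[-idx, ]

tree <- rpart(Species ~ ., data = train, method = "class")
pred <- predict(tree, test, type = "class")
mean(pred == test$Species)                           # held-out accuracy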

The Predictions widget predicts the specific type of iris flower from the input data. Using a confusion matrix, we can see where the predictions are accurate and where they go wrong.

[Figure: Confusion matrix]
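
Continuing the R sketch above, the same kind of confusion matrix can be produced from the held-out predictions with a single call to table(), using the pred and test objects defined there.

# Rows are the predicted species, columns the actual species;
# correct predictions sit on the diagonal.
table(Predicted = pred, Actual = test$Species)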

Building a Predictive Model in Orange with a Hold-Out Test Set

In this next predictive modeling example, we’ll build a slightly more complicated model by holding out a dedicated test set. This lets us use separate data to train and to test the model, which helps prevent overfitting.

Using the Orange canvas, we can now build the same predictive model to get a better view of what we are constructing.

[Figure: Orange canvas workflow with a held-out test set]
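
The point of the held-out set is easy to see in code: compare the model’s accuracy on the rows it trained on with its accuracy on the rows it never saw. The sketch below does this in R with rpart; a large gap between the two numbers is a sign of overfitting.

library(rpart)
set.seed(11)

idx   <- sample(1:nrow(iris), size = floor(0.8 * nrow(iris)))
train <- iris[idx, ]
test  <- iris[-idx, ]

tree <- rpart(Species ~ ., data = train, method = "class")
train_acc <- mean(predict(tree, train, type = "class") == train$Species)
test_acc  <- mean(predict(tree, test,  type = "class") == test$Species)
cat("train accuracy:", round(train_acc, 3),
    " test accuracy:", round(test_acc, 3), "\n")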

Are you with us so far? Now let’s take your knowledge to the next level and apply what you’ve learned in this chapter. Next, we’ll consider some additional resources you can use to get started on a more complex ML project.