Search Results
- How to Write Codersarts Blog | Codersarts
Follow the steps below to write a blog post in Wix with the credentials you were given. First, log in to Wix: open the login link provided, click "Start Now", enter your given email ID and password, and click "Log In". Once you are logged in, a new window opens. Click "Create New Post" in the top-right corner to start writing the post.

Next, design the post:

Adding a title: in the highlighted title area, write the title of your blog post based on your project topic. For example, if you are writing about "hello world" in Python, a suitable title would be "Write hello world program in python | Codersarts".

Adding content: in the highlighted content area you can write your content and use the many tools available, such as adding a code block, adding images, setting the font size, and adding links.

Adding an image: a popup window opens. Use "+ Upload Media" to add an image from your own system, created with any image tool such as PowerPoint or Adobe Photoshop. You can also choose an online image from Wix using "Media from Wix" or "Unsplash".

Changing the font size: first write the text in the content area and select it; a popup toolbar opens, and you can change the text size by clicking the "TT" symbol.

Adding a code block: after selecting the code, click the code-block icon; the code is then placed in a code block with a background so that readers can easily distinguish it.

Adding a link: you can add a link to the post using the link tool shown in the screenshot.

Choosing categories: choose one or more categories related to your post. For example, if you are working on a Java blog, pick a Java-related category such as "Java Assignment Help" or "Java Project Help" from the categories option.

Your post is now ready and needs to be published. After completing the post, publish it using the "Publish" button in the top-right corner.

How to edit a post: hover your mouse over the top of your post area and an "Edit" option appears; use it to edit your post, then publish the updated version.

Finally, check your post: open a new tab in your web browser, go to codersarts.com, and click the "Blog" option in the top menu; you will see all the posts you have completed.
- How to print hello using Python
You can print hello with a single line of code in Python 3.x:

```python
print("hello")
```
- Hello World Program in Java
The very first program that any Java programmer learns to code is the Hello World program. But we often miss out on the nitty-gritty of the basic syntax. This article goes into the details of the Hello World program in Java. Before we get into the details, let's first look at the code itself:

```java
public class HelloWorldProgram {
    public static void main(String[] args) {
        System.out.println("Hello world in java");
        System.exit(0);
    }
}
```

Now that the coding is done, let's analyze the program's syntax in depth.

Line 1: public class HelloWorldProgram { This line uses the keyword class to declare a new class called HelloWorldProgram. Since Java is an object-oriented programming (OOP) language, the entire class definition, including all of its members, must be contained between the opening curly brace { and the closing curly brace }. It also uses the public keyword to make the class accessible from outside its package.

Line 2: public static void main(String[] args) { This line declares a method called main(String[]). It is called the main method and acts as the entry point for the Java runtime to begin execution of the program. In other words, whenever any Java program is executed, the main method is the first function to be invoked; other functions in the application are then invoked from the main method. In a standard Java application, one main method is mandatory to trigger the execution.

public: an access modifier that specifies visibility; it allows the JVM to execute the method from anywhere.
static: a keyword that makes a class member static. The main method is made static so that no object needs to be created to invoke it; the JVM can call it directly, which saves memory.
void: the return type of the method. Since the Java main method doesn't return any value, its return type is declared void.
main(): the name of the method that the JVM looks for.
String[]: indicates that the main method accepts a single argument of type String array.

Line 3: System.out.println("Hello world in java");
System: a pre-defined class in the java.lang package that holds various useful methods and variables.
out: a static member field of type PrintStream.
println: a method of the PrintStream class used to print the argument passed to it to the standard console, followed by a newline. You can also use the print() method instead of println().

Line 4: System.exit(0); The java.lang.System.exit() method exits the current program by terminating the running Java Virtual Machine. It takes a status code as input; a non-zero value indicates that an abnormal termination occurred.

Now, to compile the program, type in the command below:

javac HelloWorldProgram.java

To execute your HelloWorldProgram on the command line, all you need to do is type:

java HelloWorldProgram

You have successfully executed your first program in Java. Output:

Hello world in java
- Titanic Exploratory Data Analysis
Would You Survive The Titanic? The Titanic incident is well known to everyone. In this blog, we discuss the Titanic data with a detailed analysis. The dataset is available on Kaggle. Dataset link: https://www.kaggle.com/c/titanic/data

Dataset description: this dataset contains 891 rows with 12 columns. The column descriptions are shown below.

pclass: a proxy for socio-economic status (SES); 1st = Upper, 2nd = Middle, 3rd = Lower.
age: age in years; fractional if less than 1, and in the form xx.5 if estimated.
sibsp: number of siblings/spouses aboard. The dataset defines family relations this way: Sibling = brother, sister, stepbrother, stepsister; Spouse = husband, wife (mistresses and fiancés were ignored).
parch: number of parents/children aboard. Parent = mother, father; Child = daughter, son, stepdaughter, stepson. Some children traveled only with a nanny, so parch = 0 for them.
survival: whether a person on the ship survived or not.
ticket: the person's ticket number.
fare: passenger fare.
cabin: cabin number.
embarked: port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton).

The dataset's first 5 rows are shown below.

Data Analysis: The first figure shows the Survived column, where 0 represents No and 1 represents Yes. The next figure shows the Sex column, where we can see that males outnumbered females. The next figure shows the Survived column with respect to the Sex column: male deaths are higher than female deaths, whereas the female survival rate is higher than the male one. The next figure shows the age distribution: most of the passengers fall in the 20-35 age group. We can also change the number of bins:

```python
# use bins to add or remove bins
titanic_data.Age.plot(kind='hist', title='histogram for Age', color='c', bins=20);
```

For a KDE (kernel density estimation) plot, we plot using kind='kde':

```python
# use kde for a density plot
titanic_data.Age.plot(kind='kde', title='Density plot for Age', color='c');
```

Now let's plot the scatter points between the Age and Fare columns, optionally with more transparency. The next figure shows the distribution of passenger class: the 30-50 age group mostly traveled in Pclass 1, the 25-30 age group traveled in Pclass 2, and mostly youths traveled in Pclass 3.

Data Wrangling and Preprocessing: in this section, we discuss how to handle NaN values.

```python
sns.heatmap(titanic_data.isnull(), cmap='Accent')
```

This heatmap shows the NaN values present in the dataset. We can simply drop the column that contains mostly NaN values, i.e. Cabin:

```python
titanic_data.drop(['Cabin'], axis=1, inplace=True)
```

The Cabin column has now been removed from the Titanic data. The Age column also contains NaN values, so we can either drop all the NaN rows or fill them with some values. Things to consider: 1. Dropping rows throws away data. 2. Filling in random values is still not a good choice. 3. Filling with the mean age is a good choice, but we should check which mean value fits each row:

```python
titanic_data.groupby(['Pclass'])['Age'].mean()
```

Output:

Pclass
1    38.233441
2    29.877630
3    25.140620
Name: Age, dtype: float64

Here we fill the missing ages with the age mean of each of the three Pclasses: each missing Age gets the mean of its respective Pclass, as sketched below.
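A minimal sketch of this imputation step (an assumed helper, not shown in the original post), filling each missing Age with the mean age of that passenger's Pclass:

```python
# Assumed sketch: impute missing ages using the per-Pclass mean age.
titanic_data['Age'] = titanic_data.groupby('Pclass')['Age'].transform(
    lambda ages: ages.fillna(ages.mean())
)
```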
Now let's check the Embarked column: it contains two NaN values, so we can fill them with the most frequent value. Let's find the most frequent value:

```python
titanic_data['Embarked'].value_counts()
```

Output:

S    644
C    168
Q     77
Name: Embarked, dtype: int64

S is the most frequent value, so we can replace the two NaN values with S. After replacing, let's check whether our dataset contains any further NaN values:

```python
sns.heatmap(titanic_data.isnull())
```

The dataset is now ready to fit into our model. The last step is to convert the categorical variables into numerical ones and drop the unnecessary columns. Here we have two categorical columns, Sex and Embarked. First, convert the Embarked column:

```python
titanic_data['Embarked'].replace(to_replace=['S','C','Q'], value=['1','2','3'], inplace=True)
```

The Embarked column is now numerical, and we are left with the Sex column:

```python
from sklearn import preprocessing

# A label_encoder object knows how to turn word labels into integers.
label_encoder = preprocessing.LabelEncoder()
titanic_data['Sex'] = label_encoder.fit_transform(titanic_data['Sex'])
```

The Sex column has now also been changed to numerical values. Every column of the Titanic dataset has been analyzed and cleaned, so the data can be fitted directly into a model for prediction, as sketched below.
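As a minimal sketch of that final step (an assumed example, not part of the original post), a logistic regression model could be trained on the cleaned columns; the feature names below follow the Kaggle dataset description above:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumed feature selection: keep only the cleaned numeric columns.
features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
X = titanic_data[features]
y = titanic_data['Survived']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print('test accuracy:', model.score(X_test, y_test))
```

Thank You!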
- Support Vector Machines for Machine Learning (SVM) & Maths Behind SVM
Support Vector Machines are perhaps one of the most popular and talked-about machine learning algorithms. They were extremely popular around the time they were developed in the 1990s and continue to be a go-to method for a high-performing algorithm with little tuning. In this post you will discover the Support Vector Machine (SVM) machine learning algorithm. After reading this post you will know: how to disentangle the many names used to refer to support vector machines; the representation used by SVM when the model is actually stored on disk; how a learned SVM model representation can be used to make predictions for new data; how to learn an SVM model from training data; how to best prepare your data for the SVM algorithm; and where you might look to get more information on SVM.

Maximal-Margin Classifier

The Maximal-Margin Classifier is a hypothetical classifier that best explains how SVM works in practice. The numeric input variables (x) in your data (the columns) form an n-dimensional space. For example, if you had two input variables, this would form a two-dimensional space. A hyperplane is a line that splits the input variable space. In SVM, a hyperplane is selected to best separate the points in the input variable space by their class, either class 0 or class 1. In two dimensions you can visualize this as a line, and let's assume that all of our input points can be completely separated by this line. For example:

B0 + (B1 * X1) + (B2 * X2) = 0

where the coefficients (B1 and B2) that determine the slope of the line and the intercept (B0) are found by the learning algorithm, and X1 and X2 are the two input variables.

You can make classifications using this line. By plugging input values into the line equation, you can calculate whether a new point is above or below the line. Above the line, the equation returns a value greater than 0 and the point belongs to the first class (class 0). Below the line, the equation returns a value less than 0 and the point belongs to the second class (class 1). A point close to the line returns a value close to zero and may be difficult to classify. If the magnitude of the value is large, the model may have more confidence in the prediction.

The distance between the line and the closest data points is referred to as the margin. The best or optimal line that can separate the two classes is the line that has the largest margin. This is called the Maximal-Margin hyperplane. The margin is calculated as the perpendicular distance from the line to only the closest points. Only these points are relevant in defining the line and in the construction of the classifier. These points are called the support vectors: they support or define the hyperplane. The hyperplane is learned from training data using an optimization procedure that maximizes the margin.

Soft Margin Classifier

In practice, real data is messy and cannot be separated perfectly with a hyperplane. The constraint of maximizing the margin of the line that separates the classes must be relaxed. This is often called the soft margin classifier. This change allows some points in the training data to violate the separating line. An additional set of coefficients is introduced that gives the margin wiggle room in each dimension. These coefficients are sometimes called slack variables. This increases the complexity of the model, as there are more parameters for the model to fit to the data to provide this flexibility.
A tuning parameter, called simply C, defines the magnitude of the wiggle allowed across all dimensions. The C parameter defines the amount of violation of the margin allowed. C = 0 allows no violation, and we are back to the inflexible Maximal-Margin Classifier described above; the larger the value of C, the more violations of the hyperplane are permitted.

During the learning of the hyperplane from data, all training instances that lie within the distance of the margin affect the placement of the hyperplane and are referred to as support vectors. And as C affects the number of instances that are allowed to fall within the margin, C influences the number of support vectors used by the model. The smaller the value of C, the more sensitive the algorithm is to the training data (higher variance and lower bias); the larger the value of C, the less sensitive the algorithm is to the training data (lower variance and higher bias).

Support Vector Machines (Kernels)

The SVM algorithm is implemented in practice using a kernel. The learning of the hyperplane in linear SVM is done by transforming the problem using some linear algebra, which is out of scope for this introduction to SVM. A powerful insight is that the linear SVM can be rephrased using the inner product of any two given observations, rather than the observations themselves. The inner product between two vectors is the sum of the products of each pair of input values. For example, the inner product of the vectors [2, 3] and [5, 6] is 2*5 + 3*6 = 28. The equation for making a prediction for a new input, using the inner product between the input (x) and each support vector (xi), is calculated as follows:

f(x) = B0 + sum(ai * (x, xi))

This equation involves calculating the inner products of a new input vector (x) with all support vectors in the training data. The coefficients B0 and ai (one for each support vector) must be estimated from the training data by the learning algorithm.

Polynomial Kernel SVM

Instead of the dot product, we can use a polynomial kernel, for example:

K(x, xi) = (1 + sum(x * xi))^d

where the degree d of the polynomial must be specified by hand to the learning algorithm. When d = 1 this is the same as the linear kernel. The polynomial kernel allows for curved lines in the input space.

Radial Kernel SVM

Finally, we can also use a more complex radial kernel. For example:

K(x, xi) = exp(-gamma * sum((x - xi)^2))

where gamma is a parameter that must be specified to the learning algorithm. A good default value is gamma = 0.1, and gamma typically lies in the range 0 < gamma < 1. The radial kernel is very local and can create complex regions within the feature space, like closed polygons in two-dimensional space.

How to Learn an SVM Model

The SVM model needs to be solved using an optimization procedure. You can use a numerical optimization procedure such as stochastic gradient descent to search for the coefficients of the hyperplane, but this is inefficient and is not the approach used by the widely used SVM implementations.

Data Preparation for SVM

This section lists some suggestions for how to best prepare your training data when learning an SVM model. Numerical inputs: SVM assumes that your inputs are numeric; if you have categorical inputs you may need to convert them to binary dummy variables (one variable for each category). Binary classification: basic SVM as described in this post is intended for binary (two-class) classification problems, although extensions have been developed for regression and multi-class classification.
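As a minimal sketch of these ideas in practice (an assumed example, not from the original post), here is an RBF-kernel SVM fitted with scikit-learn on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic two-feature binary classification data, for illustration only.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# SVMs are sensitive to feature scale, so standardize the inputs first.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel='rbf', C=1.0, gamma=0.1)  # gamma=0.1 is the default suggested above
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```

One caveat on the design: scikit-learn's C parameter works in the opposite direction from the budget-style C described above; in scikit-learn, larger C penalizes margin violations more heavily, so smaller C permits more violations.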
- Predict Boston House Prices Using Python & Linear Regression
In machine learning, the ability of a model to predict continuous or real values based on a training dataset is called regression. With a small dataset and some great Python libraries, we can solve such a problem with ease. In this blog post, we will learn how to solve a supervised regression problem using the famous Boston housing price dataset. Other than location and square footage, a house's value is determined by various other factors. Let's analyze this problem in detail and use a machine learning model to predict housing prices.

Dependencies

pandas - to work with solid data structures and n-dimensional matrices and to perform exploratory data analysis.
matplotlib - to visualize data using 2D plots.
seaborn - to make 2D plots look pretty and readable.
scikit-learn - to create machine learning models easily and make predictions.

Boston Housing Prices Dataset

In this dataset, each row describes a Boston town. There are 506 rows and 13 attributes (features) plus a target column (price). The problem that we are going to solve here is: given a set of features that describe a house in Boston, our machine learning model must predict the house price. To train our machine learning model with the Boston housing data, we will be using scikit-learn's boston dataset (note that load_boston has been removed from recent versions of scikit-learn). We will use pandas and scikit-learn to load and explore the dataset. The dataset can easily be loaded from the scikit-learn datasets module using the load_boston function:

```python
import pandas as pd
from sklearn import datasets

boston = datasets.load_boston()
```

There are four keys in this dataset with which we can access more information about it: "data", "target", "feature_names" and "DESCR", which can be listed by calling keys() on the dataset variable. To see the description of each column name in this dataset, we can print DESCR.

Exploratory Data Analysis

We can easily convert the dataset into a pandas dataframe to perform exploratory data analysis: simply pass boston.data as an argument to pd.DataFrame(). We can view the first 5 rows of the dataset using the head() function.

```python
bos = pd.DataFrame(boston.data, columns=boston.feature_names)
bos['PRICE'] = boston.target
bos.head()
```

Exploratory data analysis is a very important step before training the model. Here, we will use visualizations to understand the relationship of the target variable with the other features. Let's first plot the distribution of the target variable, using the histogram plot function from the matplotlib library.

```python
import matplotlib.pyplot as plt
import seaborn as sns

sns.set(rc={'figure.figsize': (11.7, 8.27)})
plt.hist(bos['PRICE'], color="brown", bins=30)
plt.xlabel("House prices in $1000")
plt.show()
```

We can see from the plot that the values of PRICE are distributed normally with a few outliers; most of the houses are in the 20-24 range (on the $1000 scale). Now we create a correlation matrix that measures the linear relationships between the variables. The correlation matrix can be formed by using the corr function from the pandas dataframe library. We will use the heatmap function from the seaborn library to plot the correlation matrix.

```python
# Create a dataframe without the price column, since we want to see the
# correlations among the feature variables
bos_1 = pd.DataFrame(boston.data, columns=boston.feature_names)
correlation_matrix = bos_1.corr().round(2)
sns.heatmap(data=correlation_matrix, annot=True)
```

The correlation coefficient ranges from -1 to 1. If the value is close to 1, it means that there is a strong positive correlation between the two variables.
When it is close to -1, the variables have a strong negative correlation. By looking at the correlation matrix we can see that RM has a strong positive correlation with PRICE (0.7), whereas LSTAT has a strong negative correlation with PRICE (-0.74).

```python
plt.figure(figsize=(20, 5))
features = ['LSTAT', 'RM']
target = bos['PRICE']
for i, col in enumerate(features):
    plt.subplot(1, len(features), i + 1)
    x = bos[col]
    y = target
    plt.scatter(x, y, color='green', marker='o')
    plt.title("Variation in House prices")
    plt.xlabel(col)
    plt.ylabel("House prices in $1000")
```

The prices increase as the value of RM increases linearly, with a few outliers, and the data seems to be capped at 50. The prices tend to decrease with an increase in LSTAT, though the relationship doesn't look exactly linear. Since RM shows a positive correlation with the house prices, we will use this variable.

```python
import numpy as np

X_rooms = bos.RM
y_price = bos.PRICE
X_rooms = np.array(X_rooms).reshape(-1, 1)
y_price = np.array(y_price).reshape(-1, 1)
```

Splitting the dataset into training and testing sets

Since we need to test our model, we split the data into training and testing sets. We train the model with 80% of the samples and test with the remaining 20%. We do this to assess the model's performance on unseen data. To split the data we use the train_test_split function provided by the scikit-learn library, and finally we print the shapes of our training and test sets to verify that the split has occurred properly (see the sketch at the end of this post).

```python
from sklearn.model_selection import train_test_split

X_train_1, X_test_1, Y_train_1, Y_test_1 = train_test_split(
    X_rooms, y_price, test_size=0.2, random_state=5)
```

Training and Testing the Model

Here we use scikit-learn's LinearRegression to train our model on the training set and check its performance on both the training and test sets, starting with the training set.

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

reg_1 = LinearRegression()
reg_1.fit(X_train_1, Y_train_1)
y_train_predict_1 = reg_1.predict(X_train_1)
rmse = np.sqrt(mean_squared_error(Y_train_1, y_train_predict_1))
r2 = round(reg_1.score(X_train_1, Y_train_1), 2)
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
print("\n")
```

Model Performance

The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between the values predicted by a model or an estimator and the observed values: RMSE = sqrt(mean((y_observed - y_predicted)^2)).

```python
y_pred_1 = reg_1.predict(X_test_1)
rmse = np.sqrt(mean_squared_error(Y_test_1, y_pred_1))
r2 = round(reg_1.score(X_test_1, Y_test_1), 2)
print("Root Mean Squared Error: {}".format(rmse))
print("R^2: {}".format(r2))
print("\n")
```

Plotting the Model

We plot a scatter plot of our model's fit, with the feature (number of rooms) on the x-axis and the house price on the y-axis.

```python
prediction_space = np.linspace(min(X_rooms), max(X_rooms)).reshape(-1, 1)
plt.scatter(X_rooms, y_price)
plt.plot(prediction_space, reg_1.predict(prediction_space), color='black', linewidth=3)
plt.ylabel('value of house/1000($)')
plt.xlabel('number of rooms')
plt.show()
```
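As a small sketch of the verification step mentioned above (assumed, not shown in the original post), printing the shapes confirms the 80/20 split of the 506 rows:

```python
# Assumed check: 506 rows split roughly 80/20 into train and test sets.
print(X_train_1.shape, X_test_1.shape)  # e.g. (404, 1) (102, 1)
print(Y_train_1.shape, Y_test_1.shape)
```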
- Machine learning-Iris classification
Problem Statement: create a model that can classify the different species of the Iris flower.

Problem-solving steps: load the dataset; build the model; train the model; make predictions.

Iris Flower: Iris is a family of flowers which contains several species, such as Iris setosa, Iris versicolor, Iris virginica, etc.

1. Load the dataset: sklearn ships with an inbuilt dataset for the iris classification problem. Scikit-learn only works if data is stored as numeric data, irrespective of whether it is a regression or a classification problem. It also requires the arrays to be stored as numpy arrays for optimization. Since this dataset is loaded from scikit-learn, everything is appropriately formatted. Let us first understand the dataset. The dataset consists of: 150 samples; 3 labels, the species of Iris (Iris setosa, Iris virginica and Iris versicolor); 4 features (sepal length, sepal width, petal length, petal width, in cm).

Python code to load the Iris dataset:

```python
from sklearn import datasets
iris = datasets.load_iris()
```

Create a pandas dataframe from the iris dataset (note that in iris['data'] the sepal measurements come before the petal measurements):

```python
import pandas as pd

data = pd.DataFrame(iris['data'],
                    columns=["Sepal Length", "Sepal Width", "Petal Length", "Petal Width"])
data["Species"] = iris["target"]
```

2. Analyze the iris dataset: there are different types of plots, like bar plots, box plots, scatter plots, etc. A scatter plot is very useful when we are analyzing the relationship between two features on the x and y axes. The seaborn library has a pairplot function, which is very useful for scatter-plotting all the features at once instead of plotting them individually.

```python
import seaborn as sns
sns.pairplot(data)
```

We can also use histograms for analysis:

```python
import matplotlib.pyplot as plt

# histograms
data.hist()
plt.show()
```

3. Split the dataset: since our process involves training and testing, we should split our dataset. This can be done with the following code:

```python
from sklearn.model_selection import train_test_split

x = data.drop("Species", axis=1)
y = data["Species"]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)
```

x_train contains the training features, x_test the testing features, y_train the training labels, and y_test the testing labels.

4. Build the model: we can use any classification algorithm to solve the problem, but I will go with KNN. The k-nearest neighbors (KNN) algorithm is a simple, easy-to-implement supervised machine learning algorithm that can be used to solve both classification and regression problems.

```python
from sklearn import neighbors
classifier = neighbors.KNeighborsClassifier(n_neighbors=3)
```

5. Train the model: we can train the model with the fit function.

```python
classifier.fit(x_train, y_train)
```

Now the model is ready to make predictions.

6. Make predictions:

```python
predictions = classifier.predict(x_test)
```

Accuracy: the predictions made by our model can be compared with the expected output to measure the accuracy.

```python
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, predictions))
```

The accuracy of our model is 93.3%.
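As a short follow-up sketch (an assumed extension, not part of the original post), you could compare a few values of k to see how the choice of n_neighbors affects test accuracy:

```python
from sklearn import neighbors
from sklearn.metrics import accuracy_score

# Assumed sketch: reuse the x_train/x_test/y_train/y_test split from above.
for k in (1, 3, 5, 7):
    clf = neighbors.KNeighborsClassifier(n_neighbors=k)
    clf.fit(x_train, y_train)
    print(k, accuracy_score(y_test, clf.predict(x_test)))
```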
- What is JWT?
JWT stands for JSON Web Token. It is a very popular technology for verifying a user's JSON data (user authentication). It is very secure: once a token is sent to the frontend, it cannot be modified without being detected, and if someone does modify it, the user loses access to the information. It is mostly used for REST API authentication. Now let's understand it by writing some code. We will be using Node.js, and we also need an NPM package called jsonwebtoken.

Step 1: generate a JSON web token from the user-information payload with an expiry time.

```javascript
// import jwt from the installed package
var jwt = require('jsonwebtoken');

const payload = {
  name: 'user',
  username: 'username',
};

jwt.sign(
  // the payload to embed in the token
  { ...payload },
  // the secret key
  'authentication',
  // the token will expire in 10 hours
  { expiresIn: '10h' },
  // the callback receives either an error or a token
  (err, token) => {
    // check if there is an error
    if (err) {
      return console.log(err);
    }
    console.log(token);
  }
);
```

Step 2: verify a token.

```javascript
jwt.verify(
  userToken, // the token string received from the client
  'authentication',
  (err, decodedToken) => {
    if (err) {
      return console.log('unauthorized');
    }
    // you get back the same payload object that you signed
    return console.log(decodedToken);
  }
);
```
- Deep Learning / Neural Networks In Machine Learning
In this blog, we will cover some important and useful topics related to deep learning, listed below: recommendation systems, deep classifiers, DeepFace, TensorFlow, Keras, LSTMs, and CNNs.

Recommender Systems

This is a highly useful machine learning application, used to predict a user's future preference over a set of items and recommend the top items. Consider a real-life example like Netflix: it hosts a very large collection of movies, so a new problem arose in that people had a hard time selecting the items they actually wanted to see. Recommender systems were introduced to overcome this problem. Methods used to build recommender systems: content-based recommendation and collaborative filtering.

Machine Learning Classifiers

A classifier is an algorithm that maps input data to a specific category. There are different types of classifiers: Perceptron, Naive Bayes, Decision Tree, Logistic Regression, K-Nearest Neighbors, Artificial Neural Networks / Deep Learning, and Support Vector Machines.

DeepFace Recognition in Machine Learning

DeepFace is a facial recognition system used by Facebook for tagging images. It was proposed by researchers at Facebook AI Research (FAIR) at the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Face recognition proceeds in four steps: detect, align, represent, classify.

TensorFlow

TensorFlow is an open-source machine learning framework used for implementing machine learning and deep learning applications. It was created by the Google team for developing and researching ideas on AI. TensorFlow exposes a Python API, which makes it easy to use. Some of the key features of TensorFlow are: it efficiently works with mathematical expressions involving multi-dimensional arrays; good support for deep neural networks and machine learning concepts; GPU/CPU computing, where the same code can be executed on both architectures; and high scalability of computation across machines and huge data sets.

Installing TensorFlow using pip:

pip install tensorflow

Importing TensorFlow:

```python
import tensorflow as tf
```

Example: calculating a simple linear function y = a*x + b (this example uses the TensorFlow 1.x session API).

```python
import tensorflow as tf

x = tf.constant(-2.0, name="x", dtype=tf.float32)
a = tf.constant(5.0, name="a", dtype=tf.float32)
b = tf.constant(13.0, name="b", dtype=tf.float32)

y = tf.Variable(tf.add(tf.multiply(a, x), b))

init = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(init)
    print(session.run(y))  # prints 3.0, since 5 * (-2) + 13 = 3
```

Keras

If you know the other two important Python libraries, Theano and TensorFlow, another important library used in Python machine learning is Keras. It is a deep learning library that can run on top of either Theano or TensorFlow. It runs on Python 2.7 or 3.5 and executes on both GPUs and CPUs.
It installs using pip:

sudo pip install keras

Keras lets you build a simple model with a few lines of code, which makes it easier to use than lower-level libraries like TensorFlow or Theano. Example:

```python
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=50))  # input shape of 50
model.add(Dense(28, activation='relu'))
model.add(Dense(10, activation='softmax'))
```

LSTMs

Two prominent kinds of deep artificial neural networks (ANNs) are Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs). Long Short-Term Memory networks, usually just called "LSTMs", are a special kind of RNN capable of learning long-term dependencies. They were introduced by Hochreiter & Schmidhuber (1997). LSTMs are explicitly designed to avoid the long-term dependency problem. LSTMs also have a chain-like structure, but the repeating module has a different structure: instead of a single neural network layer, there are four, interacting in a very special way.

CNNs

The convolutional neural network (CNN) is one of the main categories of deep learning neural networks, used for image recognition and image classification. Object detection, face recognition, etc., are some of the areas where CNNs are widely used. A CNN is also called a ConvNet. A minimal example is sketched below.
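Since the post describes CNNs without showing code, here is a minimal sketch (an assumed example, not from the original post) of a small CNN in Keras for classifying 28x28 grayscale images into 10 classes:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
# Convolution extracts local image features; pooling downsamples them.
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
# Flatten the feature maps and classify with a softmax over 10 classes.
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```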
- Machine Learning Algorithms And Important Topics
Given the increasing demand from organizations and software industries, here we provide a simple overview of various machine learning algorithms and some useful topics that are used in most areas today, listed below: supervised learning, unsupervised learning, ensemble learning, reinforcement learning, predictive modeling, regression analysis, classification, perceptrons, and time-series data analysis.

Supervised Learning: this family of algorithms uses a target/outcome variable (or dependent variable) which is to be predicted from a given set of predictors (independent variables). It is divided into two categories: classification and regression. Examples: Linear Regression, Decision Tree, Random Forest, KNN, Logistic Regression, etc.

Unsupervised Learning: in these algorithms we do not have any target or outcome variable to predict or estimate. Unsupervised learning is used for clustering a population into different groups, which is widely applied for segmenting customers into groups for specific interventions or research. It can be divided into the following categories:

Deep learning approaches: representation learning (mutual information, disentanglement, information bottleneck) and generative models (GANs, VAEs).
Other approaches: dimension reduction (PCA, t-SNE) and clustering (k-means, GMMs, HMMs).

Ensemble Learning: these algorithms improve results by combining more than one algorithm or method. This approach often provides a better result than a single machine learning algorithm, which is why ensembles frequently place first in machine learning challenges. Ensembling is used to decrease variance (bagging), decrease bias (boosting), or improve predictions (stacking). It is divided into two categories: sequential ensemble methods (example: AdaBoost) and parallel ensemble methods (example: Random Forest).

Reinforcement Learning: reinforcement learning (RL) is a branch of machine learning concerned with actors, or agents, taking actions in some kind of environment in order to maximize some type of reward that they collect along the way. Some important features of reinforcement learning: the two types of reinforcement are positive and negative; two widely used learning models are the Markov Decision Process and Q-learning. The reinforcement learning method works by interacting with the environment, whereas the supervised learning method works on given sample data or examples.

Predictive Modeling: predictive modeling is an important machine learning task; it involves some basic steps: descriptive analysis of the data; data treatment (missing-value and outlier fixing); data modeling; and estimation of performance.

Regression Analysis: regression analysis is a form of predictive modeling technique that investigates the relationship between a dependent (target) variable and independent variables (predictors). This technique is used for forecasting, time-series modeling, and finding causal-effect relationships between variables. Types: Linear Regression, Logistic Regression, Polynomial Regression, Stepwise Regression, Ridge Regression, Lasso Regression, ElasticNet Regression.

Classification Analysis: classification is the process of categorizing a given set of data into classes. It can be performed on both structured and unstructured data.
Common classification algorithms include: Logistic Regression, Naive Bayes, Stochastic Gradient Descent, K-Nearest Neighbors, Decision Tree, Random Forest, Artificial Neural Networks, and Support Vector Machines. A sketch comparing a few of them follows.
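As a minimal sketch (an assumed example, not from the original post), several of the classifiers listed above can be tried on the same dataset with cross-validation to compare them:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}
# 5-fold cross-validation gives a rough, comparable accuracy for each model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```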
- Data Visualization In Machine Learning | Machine Learning Project Help
Data visualization is the representation of data or information in a graph, chart, or other visual format. It communicates the relationships within the data using images. Data visualization is used in a large number of areas in statistics and machine learning. The key plots you need to know well for basic data visualization are: line plot, bar chart, histogram plot, box-and-whisker plot, pie chart, scatter chart, series chart, mosaic chart, and heat map.

Line Plot

A line plot is generally used to present observations collected at regular intervals. The x-axis represents the regular interval, such as time; the y-axis shows the observations, ordered by the x-axis and connected by a line. A line plot is a type of chart that displays information as a series of data points connected by straight line segments. Line plots are generally used to visualize the directional movement of one or more series over time: in this case the x-axis would be a datetime and the y-axis contains the measured quantity, such as stock price, weather, monthly sales, etc. A line plot can be created by calling the plot() function, passing the x-axis data for the regular interval and the y-axis data for the observations:

```python
from matplotlib import pyplot

# sample regular-interval data
x = [1, 2, 3, 4, 5]
y = [2, 4, 1, 5, 3]

# create line plot
pyplot.plot(x, y)
pyplot.show()
```

Bar Chart

A bar chart or bar graph presents categorical data with rectangular bars whose heights or lengths are proportional to the values that they represent. The bars can be plotted vertically or horizontally.

Drawn vertically, using the bar() function. Example:

```python
import numpy as np
import matplotlib.pyplot as plt

objects = ('red', 'green', 'yellow', 'blue', 'orange', 'pink')
y_pos = np.arange(len(objects))
performance = [15, 12, 10, 5, 4, 1]

plt.bar(y_pos, performance, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.ylabel('Value')
plt.title('Color usage')
plt.show()
```

Drawn horizontally, using the barh() function. Example:

```python
import numpy as np
import matplotlib.pyplot as plt

objects = ('red', 'green', 'yellow', 'blue', 'orange', 'pink')
y_pos = np.arange(len(objects))
performance = [15, 12, 10, 5, 4, 1]

plt.barh(y_pos, performance, align='center', alpha=0.5)
plt.yticks(y_pos, objects)
plt.xlabel('Value')
plt.title('Color usage')
plt.show()
```

Histogram Plot

A histogram shows frequency on the vertical axis, while the horizontal axis is another dimension. Usually a histogram has bins, where every bin has a minimum and maximum value, and each bin's height is the count of observations falling inside it. Example:

```python
import matplotlib.pyplot as plt

x = [15, 18, 17, 2, 4, 3, 55, 8, 9, 40, 61, 12, 33, 22, 35, 36, 36, 14, 46, 45]
num_bins = 5
n, bins, patches = plt.hist(x, num_bins, facecolor='blue', alpha=0.5)
plt.show()
```

Box and Whisker Plot

A box plot, also known as a whisker plot, displays a summary of a set of data: the minimum, first quartile, median, third quartile, and maximum. A box plot can be drawn by calling Series.plot.box() and DataFrame.plot.box(), or DataFrame.boxplot(), to visualize the distribution of values within each column.
Example:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.rand(15, 5),
                  columns=['Box1', 'Box2', 'Box3', 'Box4', 'Box5'])
df.plot.box(grid=True)
```

Pie Chart

To draw a matplotlib pie chart, first import matplotlib:

import matplotlib.pyplot as plt

Example:

```python
import matplotlib.pyplot as plt

# Data to plot
labels = 'color1', 'color2', 'color3', 'color4'
sizes = [115, 110, 280, 230]
colors = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue']
explode = (0.2, 0, 0, 0)  # explode 1st slice

# Plot
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
        autopct='%1.2f%%', shadow=True, startangle=180)
plt.axis('equal')
plt.show()
```

With a legend:

```python
import matplotlib.pyplot as plt

labels = ['green', 'yellow', 'other', 'red']
sizes = [45, 20, 30, 25]
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral']

patches, texts = plt.pie(sizes, colors=colors, shadow=True, startangle=90)
plt.legend(patches, labels, loc="best")
plt.axis('equal')
plt.tight_layout()
plt.show()
```

Scatter Plot

Use the scatter() method to draw a scatter plot diagram:

```python
import matplotlib.pyplot as plt

x = [2, 1, 10, 8, 5, 15]
y = [45, 54, 56, 55, 110, 78]
plt.scatter(x, y)
plt.show()
```

Series Chart

There are many ways to draw a time-series graph: line plots, histograms and density plots, box-and-whisker plots, heat maps, lag plots or scatter plots, and autocorrelation plots.

Mosaic Chart

These charts are a good representation of categorical entries. A mosaic plot allows visualizing multivariate categorical data in a rigorous and informative way. Example:

```python
from statsmodels.graphics.mosaicplot import mosaic
import matplotlib.pyplot as plt
import pandas

gender = ['male', 'male', 'male', 'female', 'female', 'female']
pet = ['cat', 'dog', 'dog', 'cat', 'dog', 'cat']
data = pandas.DataFrame({'gender': gender, 'pet': pet})
mosaic(data, ['pet', 'gender'])
plt.show()
```

Heat Map

A heat map shows a 2D representation of data; the imshow() function is used to draw it. Example:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.random((8, 8))
plt.imshow(data, cmap='cool', interpolation='nearest')
plt.show()
```
- Data Analysis In Machine Learning | Machine Learning Project Help
Data analysis is the process of cleaning, transforming, and modeling data to extract relevant and useful information. Many tools are used to analyze data, such as: Xplenty, Microsoft HDInsight, Skytree, Talend, Splice Machine, Spark, Plotly, Apache SAMOA, Lumify, Elasticsearch, R, IBM SPSS Modeler, and many others. The different techniques used for data analysis are listed below: data exploration, summary statistics, distribution analysis, one-way frequencies, correlation analysis, table analysis, t-tests, predictive analysis, prescriptive analysis, statistical analysis, and text analysis.

Data Exploration

Data exploration is the initial step in data analysis; it uses data visualization techniques, applied either manually or with the help of visualization tools. Data exploration is about describing the data by means of statistical and visualization techniques. We explore data in order to bring the important aspects of that data into focus for further analysis.

Univariate analysis explores variables (attributes) one by one. Variables can be either categorical or numerical, so there are two types of univariate analysis: of categorical variables and of numerical variables.

Bivariate analysis is the simultaneous analysis of two variables (attributes). It explores the relationship between two variables. There are three types of bivariate analysis: numerical & numerical, categorical & categorical, and numerical & categorical.

Summary (Descriptive) Statistics

This technique is used to summarize or describe the data. It uses two approaches: a quantitative approach and a visual approach. Descriptive statistics can be used on one or many datasets or variables.

Distribution Analysis

A distribution analysis helps us understand the distribution of the various attributes of our data. Different types of distributions are used in machine learning: the Bernoulli, uniform, binomial, normal, and Poisson distributions.

One-Way Frequencies

The one-way frequencies task generates frequency tables from your data. You can also use this task to perform binomial and chi-square tests. One-way frequency tables (also known as crosstabs) can be created in pandas using the pd.crosstab() function. Example:

```python
import pandas as pd

# Make a one-way frequency table of the Survived column
one_way_table = pd.crosstab(index=titanic_train["Survived"], columns="count")
print(one_way_table)
```

You can cross-check these counts using value_counts(), which gives the same result:

```python
titanic_train["Survived"].value_counts()
```

Correlation Analysis

Data correlation is the way in which one set of data may correspond to another set. Correlation is a bivariate analysis that measures the strength of association between two variables and the direction of the relationship. In terms of strength, the value of the correlation coefficient varies between +1 and -1. In statistics we usually measure four types of correlation: Pearson correlation, Kendall rank correlation, Spearman correlation, and point-biserial correlation. The pandas syntax used to compute a correlation is:

dataframe.corr(method='pearson', min_periods=1)

where method is one of 'pearson', 'kendall', or 'spearman'.

Table Analysis

Often you need to analyze the information in a table, sometimes called a contingency table or a cross-classification table. You may analyze a single table, or you may analyze a set of tables.
Using the table analysis task, you can analyze not only a single table but also sets of tables. This provides a way to control, or adjust for, a covariate while assessing the association of the rows and columns of the tables.

t-Tests

The t-test (also called Student's t-test) compares two averages (means) and tells you whether they differ from each other. The Student's t-test is a statistical hypothesis test of whether two samples are expected to have been drawn from the same population. A minimal example follows this post.

Predictive Analysis

Predictive analysis extracts information from existing data in order to determine patterns and predict future outcomes. It does not tell you what will happen in the future; instead, it forecasts what might happen with an acceptable level of reliability, and includes what-if scenarios and risk assessment.

Prescriptive Analysis

This is another type of data analytics, the use of technology to help businesses make better decisions through the analysis of raw data. Specifically, prescriptive analytics factors in information about possible situations or scenarios, available resources, past performance, and current performance, and suggests a course of action or strategy.

Statistical Analysis

Statistical analysis is the science of collecting, exploring, and presenting large amounts of data to discover underlying patterns and trends. Statistics is applied every day, in research, industry, and government, to make decisions more scientifically. Some other important techniques include linear models, survival analysis, and multivariate analysis.
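As a minimal sketch of the two-sample Student's t-test described above (an assumed example using SciPy, not from the original post):

```python
import numpy as np
from scipy import stats

# Two synthetic samples with slightly different means.
rng = np.random.default_rng(0)
a = rng.normal(loc=5.0, scale=1.0, size=50)
b = rng.normal(loc=5.5, scale=1.0, size=50)

# Test the null hypothesis that both samples come from the same population.
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # a small p suggests the means differ
```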











