
Search Results


  • Caesar Ciphers & Turtle Graphics Program Using Python

    Assessment Details

    Part one – Caesar Ciphers. In this part, you are to develop a Python application that will encrypt text from a file and, given the key, decrypt text from a file. We shall use this easy cipher:

    XYZABCDEFGHIJKLMNOPQRSTUVW  (shifted alphabet)
    ABCDEFGHIJKLMNOPQRSTUVWXYZ  (original alphabet)

    Now the word COAL becomes FRDO – in this case, we have encrypted using a key of 3. In encryption mode, your application should ask the user for a filename that contains the plain text and the key to use for encryption. The decryption process is similar, but this time the filename contains the encrypted text, and when we give the key your application should return the original text (the arrows, shown above, go the other way). Ideally, your application should be able to handle single words, sentences, paragraphs, and even whole books. You need to demonstrate your application and show that it is working correctly by using tests that you design.

    Part two – Your Name in Turtle Graphics. Turtle graphics is built into the Python language and there are many resources freely available. Here are some sample images. The requirement here is that you use turtle graphics to write your name – the name that appears on your student ID card. (If you have any queries, ask your lecturer/tutor.) Although you must use Python 3 in this assignment, you have creative license in doing this application – if you are unsure, please ask your lecturer/tutor. As an example, something like this would just be a bare pass.

    It is suggested that you develop your code starting from pseudo-code and then incrementally improve it. You are also asked to keep a journal; it is useful in confirming ownership of your work and, in case you don't reach your final goal, it demonstrates your journey. The journal can consist of hand-written notes, diagrams, drawings, code fragments, screenshots and so on – put your journal in an appendix; these pages are not counted in your report.

    Need help with Python graphics programming? Then you can contact us at: contact@codersarts.com
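    A minimal sketch of the encryption/decryption logic described above (file handling omitted; the function name caesar and the use of str.translate are our own choices, not part of the brief):

    import string

    def caesar(text, key, decrypt=False):
        """Shift alphabetic characters by `key` positions; pass decrypt=True to reverse."""
        shift = -key if decrypt else key
        alphabet = string.ascii_uppercase
        shifted = alphabet[shift % 26:] + alphabet[:shift % 26]
        table = str.maketrans(alphabet + alphabet.lower(),
                              shifted + shifted.lower())
        return text.translate(table)

    # Example from the brief: COAL encrypted with a key of 3 becomes FRDO.
    print(caesar("COAL", 3))                 # FRDO
    print(caesar("FRDO", 3, decrypt=True))   # COAL

    To handle whole files, the same function can simply be applied to the full contents read from the filename the user supplies.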

  • ID3 Decision Tree Building Algorithm, Perceptron Training Algorithm, Ensemble method

    Goal of Project

    The goal of this project is to develop your hands-on skills in performing learning from data, as well as to further your understanding of the practical technical details of the learning procedure. You will implement a simple machine learning algorithm from scratch, for example the ID3 decision tree building algorithm, the perceptron training algorithm, or an ensemble method. You can also implement another algorithm of your interest that solves a supervised learning problem. The algorithm needs to be tested using a simple but practical dataset and under an appropriate learning framework, as instructed in the course. The task also includes a report of your reflection on the development, the testing scheme, and the results. Alternatively, unsupervised learning algorithms can also be considered, but you must specify the testing scheme and the criteria clearly in the report (see below) if you choose to implement an unsupervised learning algorithm.

    Specification

    Implementation from scratch: The implementation should include the detailed computational steps of an algorithm. We do NOT consider straightforward usage of off-the-shelf toolboxes as implementing the algorithm details. For example, if you choose to build a decision tree, the implementation of the tree-building algorithm should address the construction of the tree structure and the computation involved in splitting the data (a subset of the training dataset) at a tree node to create the child nodes -- in ID3, this means computing the information gain and the entropy and deciding the split accordingly. However, it is allowed to use basic auxiliary tools, such as libraries for matrix or linear algebra operations, or tools to facilitate loading and parsing the data files, etc.

    Practical dataset: The dataset should contain sufficient samples to represent practical relationships between the attributes and the target to be predicted. Typical examples include the Iris flower dataset, the image dataset of hand-written digits, or other examples that have been used in the tutorial demos. Manually crafted toy datasets are not recommended.

    Learning framework and test scheme: A proper training and validation scheme must be set up to test the implementation. More sophisticated evaluation schemes are also welcome. For formal and detailed information on the learning framework, refer to the related sections in the course materials.

    What to Submit

    You will submit a PDF file, including a link to a cloud-based source code hosting service where your implementation code can be accessed and evaluated. An example template of the PDF report is attached at the bottom of this document. It is difficult to give an estimate of the number of lines of code; however, you should be able to code it all up using 3 or 4 classes in an object-oriented design. In total, the PDF file should be about 20 pages. You can refer to the demo tutorials in this course as a reference.

    Software

    We recommend using Python, but you are free to develop the software in another language if you want. The program should be a text-based "console" program.

    Looking for solutions to these algorithms or ensemble methods? Then you can contact us at the link below: contact us here
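    If you choose ID3, the core of the tree-building step is the entropy and information-gain computation used to pick the attribute to split on. Below is a minimal sketch of just that step (the function names and the toy rows are our own illustration; only standard-library helpers are used, no ML toolbox):

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels."""
        counts = Counter(labels)
        total = len(labels)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def information_gain(rows, labels, attribute_index):
        """Gain from splitting (rows, labels) on the attribute at attribute_index."""
        base = entropy(labels)
        subsets = {}
        for row, label in zip(rows, labels):
            subsets.setdefault(row[attribute_index], []).append(label)
        remainder = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
        return base - remainder

    # Toy usage: at a node, pick the attribute with the highest gain.
    rows = [['sunny', 'hot'], ['sunny', 'mild'], ['rain', 'mild'], ['rain', 'hot']]
    labels = ['no', 'no', 'yes', 'yes']
    best = max(range(2), key=lambda i: information_gain(rows, labels, i))
    print(best)  # 0 -> split on the first attribute

    In the full project this computation would of course run on a practical dataset (e.g. Iris), not a hand-crafted toy list; the toy rows are only there to make the snippet runnable.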

  • Named Entity Recognition (NER)

    Named Entity Recognition (NER) is probably the first step towards information extraction: it seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. NER is used in many fields of Natural Language Processing (NLP), and it can help to answer many real-world questions. It also goes by other names, such as entity identification, entity chunking, or entity extraction, and it involves extracting information from a corpus of textual data. This article describes how to build a named entity recognizer with NLTK and spaCy to identify the names of things, such as persons, organizations, or locations, in raw text.

    Dependencies: nltk, spacy, collections, en_core_web_sm

    Importing the libraries:

    import nltk
    from nltk.tokenize import word_tokenize
    from nltk.tag import pos_tag
    import spacy
    from spacy import displacy
    from collections import Counter
    import en_core_web_sm
    from nltk.chunk import conlltags2tree, tree2conlltags
    from pprint import pprint

    Information Extraction

    The example sentence used here is: "European authorities fined Google a record $5.1 billion on Wednesday for abusing its power in the mobile phone market and ordered the company to alter its practices."

    nltk.download('averaged_perceptron_tagger')
    text = '''European authorities fined Google a record $5.1 billion on Wednesday for abusing its power in the mobile phone market and ordered the company to alter its practices'''

    O/P:
    [nltk_data] Downloading package punkt to /home/codersarts/nltk_data...
    [nltk_data] Package punkt is already up-to-date!
    [nltk_data] Downloading package averaged_perceptron_tagger to
    [nltk_data] /home/codersarts/nltk_data...
    [nltk_data] Package averaged_perceptron_tagger is already up-to-date!
    True

    Tokenization & Part-of-Speech Tagging: This involves splitting the sentence into word tokens and extracting POS features from them.

    tokens = nltk.word_tokenize(text)
    pos_text = nltk.pos_tag(tokens)
    pos_text[:5]

    O/p: [('European', 'JJ'), ('authorities', 'NNS'), ('fined', 'VBD'), ('Google', 'NNP'), ('a', 'DT')]

    Tree Visualization: The next portion involves visualizing the relations between the different POS tags in a tree representation. We get a list of tuples containing the individual words in the sentence and their associated part-of-speech tags. Now we'll implement noun phrase chunking to identify named entities, using a regular expression consisting of rules that indicate how sentences should be chunked. Our chunk pattern consists of one rule: a noun phrase, NP, should be formed whenever the chunker finds an optional determiner, DT, followed by any number of adjectives, JJ, and then a noun, NN.

    pattern = 'NP: {<DT>?<JJ>*<NN>}'
    content_parser = nltk.RegexpParser(pattern)
    parsed_tree = content_parser.parse(pos_text)
    parsed_tree

    The output can be read as a tree or a hierarchy, with S as the first level, denoting sentence. We can also display it graphically.
    iob_tagged = tree2conlltags(parsed_tree)
    pprint(iob_tagged)

    O/p: [('European', 'JJ', 'O'), ('authorities', 'NNS', 'O'), ('fined', 'VBD', 'O'), ('Google', 'NNP', 'O'), ('a', 'DT', 'B-NP'), ('record', 'NN', 'I-NP'), ('$', '$', 'O'), ('5.1', 'CD', 'O'), ('billion', 'CD', 'O'), ('on', 'IN', 'O'), ('Wednesday', 'NNP', 'O'), ('for', 'IN', 'O'), ('abusing', 'VBG', 'O'), ('its', 'PRP$', 'O'), ('power', 'NN', 'B-NP'), ('in', 'IN', 'O'), ('the', 'DT', 'B-NP'), ('mobile', 'JJ', 'I-NP'), ('phone', 'NN', 'I-NP'), ('market', 'NN', 'B-NP'), ('and', 'CC', 'O'), ('ordered', 'VBD', 'O'), ('the', 'DT', 'B-NP'), ('company', 'NN', 'I-NP'), ('to', 'TO', 'O'), ('alter', 'VB', 'O'), ('its', 'PRP$', 'O'), ('practices', 'NNS', 'O')]

    IOB tags have become the standard way to represent chunk structures in files, and we will also be using this format. In this representation, there is one token per line, each with its part-of-speech tag and its named entity tag. Based on such a training corpus, we can construct a tagger that can be used to label new sentences.

    Read a file from local disk:

    file = open(fileloc, mode='rt', encoding='utf-8')
    article = file.read()
    file.close()

    Loading the Model

    The model used for this article is en_core_web_sm, which ships with spaCy. It consists of an English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl. It assigns word vectors, context-specific token vectors, POS tags, a dependency parse, and named entities.

    model = en_core_web_sm.load()
    doc = model(article)
    pprint([(X.text, X.label_) for X in doc.ents])

    O/p: [('Kamra', 'ORG'), ('Goswami', 'GPE'), ('IndiGo', 'GPE'), ('Tuesday', 'DATE'), ('Kamra', 'PERSON'), ('Goswami', 'PERSON'), ('20 seconds', 'TIME'), ('Mumbai', 'GPE'), ('Lucknow', 'GPE'), ('Kunal Kamra', 'PERSON'), ('a period of six months', 'DATE')]

    This is named entity recognition run on an article from the Indian Express. It should be noted that these models are not perfect, but they provide results that come close.

    displacy.render(doc, jupyter=True, style='ent')

    So in this manner, you can build a named entity recognizer. For the code, refer to this link: https://github.com/kapuskaFaizan/NLP-jupyter_notebook/blob/master/NER-Named%20Entity%20Recognition.ipynb. Thank you for reading. Happy learning!
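    Besides the custom regex chunker and spaCy, NLTK also ships a pre-trained named-entity chunker. A small sketch of using it on the same sentence (this is an optional extra, not part of the walkthrough above; it needs the maxent_ne_chunker and words downloads):

    import nltk

    nltk.download('maxent_ne_chunker')
    nltk.download('words')

    tokens = nltk.word_tokenize(text)    # `text` as defined earlier in this article
    pos_text = nltk.pos_tag(tokens)
    ne_tree = nltk.ne_chunk(pos_text)    # tree with PERSON / ORGANIZATION / GPE ... labels
    print(ne_tree)

    The resulting tree can be converted to IOB triples with tree2conlltags exactly as shown above for the noun-phrase chunker.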

  • Text Classification Using NLP

    Unstructured text is everywhere: emails, chat conversations, websites, and social media. But it is hard to extract value from this data unless it is organized in a certain way. Doing so used to be a difficult and expensive process, since it required spending time and resources to manually sort the data or to create handcrafted rules that are difficult to maintain. Text classifiers built with NLP have proven to be a great alternative for structuring textual data in a fast, cost-effective, and scalable way.

    Text classification, also known as text tagging or text categorization, is the process of categorizing text into organized groups. By using Natural Language Processing (NLP), text classifiers can automatically analyze text and then assign a set of pre-defined tags or categories based on its content. Text classification is one of the most widely adopted natural language tasks, not just in the IT industry but in a variety of businesses. The main aim of text classification is to automate the process of classifying text documents into one or more defined categories. Some examples of text classification are:

    Sentiment Analysis: the process of understanding whether a given text is talking positively or negatively about a given subject (e.g. for brand monitoring purposes).

    Topic Detection: the task of identifying the theme or topic of a piece of text (e.g. knowing whether a product review is about Ease of Use, Customer Support, or Pricing when analyzing customer feedback).

    Language Detection: the procedure of detecting the language of a given text (e.g. knowing whether an incoming support ticket is written in English or Spanish so it can be automatically routed to the appropriate team).

    Environment Setup: The project is set up in an Anaconda environment on a Jupyter notebook.

    Dependencies/Libraries Required: pandas, sklearn, pickle, nltk, matplotlib, wordcloud, seaborn, spacy, collections, en_core_web_sm

    Table of Contents:

    1. Dataset Exploration: The first step is dataset exploration, which includes loading the dataset and checking out its fields with a bit of visualization.

    2. Preparation and Feature Engineering: This step includes stop-word removal and other basic preprocessing. In feature engineering, the raw dataset is transformed into vector representations that can be used by the machine learning model.

    3. Model Training: The next step is model building, in which a machine learning model is trained on a labeled dataset.

    4. Evaluation of the Text Classifier: The classifier can be evaluated using different evaluation measures such as the confusion matrix, F1-score, accuracy score, etc.

    1. Loading the Libraries:

    %matplotlib inline
    from sklearn import metrics
    import seaborn as sn
    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer
    import pickle
    import nltk
    import re
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report, f1_score, accuracy_score
    from wordcloud import WordCloud
    import matplotlib.pyplot as plt
    from sklearn import model_selection, preprocessing, svm

    In this step, we imported all the required libraries, like seaborn, pandas (for preprocessing), nltk (for text processing), etc.

    Data Exploration

    Once the environment is set up and the dependencies are installed, it is time to get started and explore our dataset. For this particular article, I have used a dataset consisting of more than 60,000 textual sentences along with their respective targets.
    data = pd.read_csv(dataset, engine='python')
    data.head()

    In the code above, we loaded our dataset of roughly 60k rows. Here is what the dataset looks like. Let's check the unique values of the predicted_category column:

    data['predicted_category'].unique()

    O/p: array(['affection', 'exercise', 'bonding', 'leisure', 'achievement', 'enjoy_the_moment', 'nature'], dtype=object)

    WordCloud:

    all_words = ' '.join([text for text in data['SentimentText']])
    wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(all_words)
    plt.figure(figsize=(10, 7))
    plt.imshow(wordcloud, interpolation="bilinear")
    plt.axis('off')
    plt.show()

    In this section, the word cloud has been built on the SentimentText column. In the first step, all words are joined. Then a word cloud with width 800 and height 500, and a maximum font size of 110, is generated and plotted with a figure size of 10 by 7; the word cloud interpolation is bilinear.

    Data Preparation and Feature Engineering:

    The sequel to data exploration is data preparation and feature engineering. In this step, we encode the target variable and vectorize the textual data present in our dataset. This can be done in multiple ways, such as: 1) the TF-IDF encoder, 2) a count vectorizer, 3) word2vec, etc. If the data had been messier, this step would also include cutting out noise, i.e. more data preprocessing; but since the data we have is already processed, we can simply leave that part out. We also need to split the data into a training and a validation set; this will come in handy when we get to model evaluation.

    TF-IDF (Term Frequency - Inverse Document Frequency) is an embedding technique that takes into account the importance of each term to a document. It vectorizes documents by calculating a TF-IDF statistic between the document and each term in the vocabulary; the document vector is constructed by using each statistic as an element in the vector. We must also decide the granularity of our vectorizer: a popular alternative to assigning each word its own term is to use a tokenizer, which splits documents into tokens (thus assigning each token its own term) based on white space and special characters. For simplicity, the code below uses a plain CountVectorizer; a TfidfVectorizer could be swapped in with the same interface (a small sketch follows below).

    data.replace(r'\b\w{1,4}\b', '', regex=True, inplace=True)
    encoder = preprocessing.LabelEncoder()
    data['Target'] = encoder.fit_transform(data['predicted_category'])
    vectorizer = CountVectorizer()
    vectorizer.fit(data['cleaned_hm'])
    vec = vectorizer.transform(data['cleaned_hm'])
    Train_X, Test_X, Train_Y, Test_Y = model_selection.train_test_split(vec, data['predicted_category'], test_size=0.1)
    data.head()

    Let's have a look at the shape of the data.

    Train_X.shape, Test_X.shape

    O/p: ((54288, 17021), (6033, 17021))

    Model Training

    This involves selecting an algorithm and training a model based on it. There are multiple algorithms that can perform this kind of task, e.g. Naive Bayes, SVM, neural nets, and so on.

    SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto')
    SVM.fit(Train_X, Train_Y)
    predictions_SVM = SVM.predict(Test_X)

    Here we have used a support vector machine to train our model.
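    If you want to try the TF-IDF encoding mentioned above instead of plain counts, the swap is a one-liner with scikit-learn's TfidfVectorizer. A hedged sketch, reusing the column names from the code above:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split

    tfidf = TfidfVectorizer()                       # optionally ngram_range=(1, 2), min_df=2
    vec = tfidf.fit_transform(data['cleaned_hm'])   # sparse TF-IDF document-term matrix
    Train_X, Test_X, Train_Y, Test_Y = train_test_split(vec, data['predicted_category'], test_size=0.1)

    The SVM training code that follows works unchanged on this matrix.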
    Accuracy: SVM achieves an accuracy of 79.62% with an F1-score of 0.79, which is not bad. We can tune this model and choose different features, such as POS tags or word embeddings in place of count-vector representations, in order to increase the accuracy and the other evaluation measures of our model.

    print("SVM Accuracy Score -> ", accuracy_score(predictions_SVM, Test_Y)*100)
    print(classification_report(Test_Y, predictions_SVM))
    print(f1_score(Test_Y, predictions_SVM, average='weighted'))

    So in this way, we can build a text classifier. For the code, refer to this link: https://github.com/kapuskaFaizan/NLP-jupyter_notebook/blob/master/Text_classification_SVM.ipynb Thank you for reading!! Happy learning!

  • Python Top Practice Examples | Python Programming Help

    - 1 -

    Given a sorted list, remove the duplicates in place such that each element appears only once, and return the new length (a sketch of one possible approach is given after this listing). Example 1: Given List1 = [1, 1, 2], your function should return length = 2, with the first two elements of List1 being 1 and 2 respectively. Example 2: Given List2 = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4], your function should return length = 5, with the first five elements of List2 being modified to 0, 1, 2, 3, and 4 respectively.

    - 2 -

    Write a program that will take the marks (for three assessments: Quiz1 (20%), Quiz2 (30%) and Final (50%)) of all the students of a unit as input and store all the details in a list. Your program should have the following features and constraints: check the input validity; compute the total marks and letter grades (use the Murdoch University grade conversion table to calculate the letter grade); keep taking input until a negative number or zero is given as the input for the student ID; save the data in the following format: [st_ID, Q1_mark, Q2_mark, Final_mark, Total, Letter grade]; display the whole list.

    Output: A typical input-output session might be as follows:

    Student ID:1001 Enter Q1: 20 Enter Q2: 25 Enter Final: 40
    Student ID:1002 Enter Q1: 12 Enter Q2: 2.5 Enter Final: 45
    Student ID:1003 Enter Q1: 20 Enter Q2: 30 Enter Final: 32
    Student ID:0
    [[1001, 20.0, 25.0, 40.0, 85.0, 'HD'], [1002, 12.0, 2.5, 45.0, 59.5, 'P'], [1003, 20.0, 30.0, 32.0, 82.0, 'D']]

    Note: This output format may not look exactly like the format asked for.

    - 3 -

    [Continuation of question 2] Show the following statistics: the average, highest and lowest mark; the IDs of all the students who got the highest mark; the IDs of all the students who got the lowest mark; the total number of students who failed the unit (assuming 50% is the pass mark).

    Get instant Python programming help: contact us at contact@codersarts.com
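    For question 1, a common two-pointer approach modifies the list in place and returns the new length. A minimal sketch (the function name remove_duplicates is our own choice):

    def remove_duplicates(nums):
        """Given a sorted list, keep one copy of each value in place and return the new length."""
        if not nums:
            return 0
        write = 1                         # next position to write a unique value
        for read in range(1, len(nums)):
            if nums[read] != nums[write - 1]:
                nums[write] = nums[read]
                write += 1
        return write

    list2 = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4]
    length = remove_duplicates(list2)
    print(length, list2[:length])   # 5 [0, 1, 2, 3, 4]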

  • Sentiment Analysis with Deep Learning

    Questionnaire

    Introduction: Sentiment analysis is the process of determining whether a piece of writing is positive, negative, or neutral. An example of sentiment analysis is given below. In this assignment, you will implement a multilayer perceptron (MLP) in TensorFlow. An example structure is given in Figure 1 (multilayer perceptron).

    Problem definition: In this assignment, you will do sentiment analysis on a given set of sentences. Your model will learn to classify the given sentences as positive or negative. First, you will train your model on a training set that contains annotated positive and negative sentences; then you will test your model on unannotated sentences, producing a positive or negative label for each sentence.

    Dataset: You will be given three input files: positive sentences, negative sentences, and a pretrained word vectors file. To create the input of the MLP, you will read vectors.txt and extract the word vectors. In each line, the word is separated from its vector by a colon (:), and the vector components are separated by spaces (a sketch of this parsing step is given below). You will read the positive and negative examples (you do not need to convert words to lowercase). You will remove punctuation and create your labels based on the two classes. After reading the dataset, you will shuffle the data; the first 75% of the examples (the percentage is taken from the command line) will be used as training data and the rest as the test set. You will print out the accuracy at the end of the code. The accuracy of the model is calculated as the proportion of correct labels produced by the model out of the total number of reviews.

    Notes: Do not miss the submission deadline. The assignment must be original, individual work; duplicate or very similar assignments, or code from the Internet, will be considered cheating. You need to implement it in Python 3. Please submit your source code and README file in the following submission format. Programs will be run from the command line as follows:

    python3 assignment4.py positive.txt negative.txt vectors.txt 75

    If you are looking for a solution to exactly this assignment, or any variant assignment of this nature, you can contact us.
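    A hedged sketch of the vector-file parsing step described above, assuming each line looks like "word: 0.1 0.2 ..." (the exact file layout may differ in your handout, and the helper name load_vectors is our own):

    def load_vectors(path):
        """Read pretrained word vectors; each line is assumed to be 'word: v1 v2 v3 ...'."""
        vectors = {}
        with open(path, encoding='utf-8') as f:
            for line in f:
                word, _, values = line.partition(':')
                if values:
                    vectors[word.strip()] = [float(v) for v in values.split()]
        return vectors

    # Usage: word_vectors = load_vectors('vectors.txt')

    The resulting dictionary can then be used to turn each sentence into the averaged (or concatenated) vector that feeds the MLP.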

  • Criss-Cross Multi-Step Word Guessing Game Assignment

    Questionnaire

    Getting Started: To start, download a1.zip from Blackboard and extract the contents. The a1.zip folder contains all the necessary files to start this assignment. Some support code has been provided to assist with implementing the tasks. You will be required to implement your assignment in a1.py. The other provided file is a1_support.py, which contains some code to help you implement your assignment. You are not compelled to use this file to implement your assignment, but it is strongly recommended. Do not make changes to the a1_support.py file. The only file that you should submit is a1.py. It could cause unexpected errors if you submit more than one file or if you make changes to the a1_support.py file.

    Note: The functionality of your assignment will be marked by automated tests. This means that the output of each function, as well as the output of your overall program, must exactly match the specifications outlined below in the Implementation section. This includes whitespace, grammar, etc. If you add your own messages throughout the game, you will likely fail most of the automated tests. Some sample tests will be provided to give you an idea of whether your output differs from the expected output. These tests are not necessarily the complete set of tests that will be used in marking, so if you pass all sample tests you are not necessarily guaranteed to achieve full functionality marks. However, if you do not pass all sample tests, you are guaranteed not to achieve full functionality marks.

    Concepts: At the start of the game the user selects a difficulty, and then a word is chosen at random based on that difficulty. The word length will depend on the difficulty selected by the user: either "FIXED" (meaning the word will be exactly eight letters long) or "ARBITRARY" (meaning the word will be anywhere between six and nine letters long). The goal of the game is for the player to guess that word through a series of guesses of "subwords". The player will have a different number of guesses (with different subwords to guess; see GUESS_INDEX_TUPLE), depending on the difficulty selected. If the player chooses the "FIXED" difficulty, the word must be selected at random from the WORDS_FIXED.txt file. If the player chooses the "ARBITRARY" difficulty, the word must be selected at random from the WORDS_ARBITRARY.txt file.

    At program startup, the user is asked to specify one of three actions. If the user specifies "FIXED", the game randomly selects an eight-letter word from the WORDS_FIXED.txt file and the correct guess sequence from the GUESS_INDEX_TUPLE tuple. Each vowel guessed in the correct position gets 14 points. Each consonant guessed in the correct position gets 12 points. Each letter guessed correctly but in the wrong position within the substring gets 5 points. You can assume that the words do not contain repeated letters and that all guesses are lowercase letters. If the user specifies "ARBITRARY", the game randomly selects a word from the WORDS_ARBITRARY.txt file and the correct guess sequence from the GUESS_INDEX_TUPLE tuple. The scoring system is the same as for the "FIXED" word length.

    Guessing procedure (for an eight-letter word): The user will be prompted to guess the word, step by step. The guessing procedure involves 8 steps, where the guess slices depend on GUESS_INDEX_TUPLE. At each of the first 7 steps the user guesses a subsection of the word and receives feedback (their score) for that guess. The final, 8th step involves guessing the whole word.
    After the 8th guess, the user is informed of whether they 'won' (i.e. guessed the word correctly) or 'lost' (in which case they are told what the word was). If, at any stage, the player enters a guess that is of the incorrect length, then the game should repeatedly prompt for the correct word length until the player enters a guess that matches the length of the substring to be guessed in that step. The guessing and scoring procedure is illustrated in Table 2 for the 8-letter word "crushing".

    Game over: If the user guesses the correct word at the end of the game, the following message should be printed out: "You have guessed the word correctly. Congratulations". If the user guesses the wrong word at the end of the game, the following message should be printed out: "Your guess was wrong. The correct word was "{word}"" (where word represents the word the player was trying to guess).

    Examples

    Implementation: Within this section, the following variables hold the meanings defined below:

    word_select: A string representing a FIXED or ARBITRARY word selection.
    guess_no: An integer representing how many guesses the player has made.
    word: A string representing the word being guessed by the player.
    word_length: An integer representing the length of the word being guessed by the player.

    You must write the following functions as part of your implementation. You are encouraged to add your own additional functions if they are beneficial to your solution.

    select_word_at_random(word_select) -> str: Given that word_select is either "FIXED" or "ARBITRARY", this function will return a string randomly selected from WORDS_FIXED.txt or WORDS_ARBITRARY.txt respectively. If word_select is anything other than the expected input, this function should return None. Hint: see a1_support.load_words() and a1_support.random_index().

    create_guess_line(guess_no, word_length) -> str: This function returns the string representing the display corresponding to the guess number integer, guess_no. Example:
    >>> create_guess_line(2, 8)
    ' Guess 2 | - | * | * | * | - | - | - | - | '

    display_guess_matrix(guess_no, word_length, scores) -> None: This function prints the progress of the game. This includes all line strings for guesses up to guess_no with their corresponding scores (a tuple containing all previous scores), and the line string for guess_no (without a score). Example:

    compute_value_for_guess(word, start_index, end_index, guess) -> int: Return the score, an integer, the player is awarded for a specific guess. The word is a string representing the word the player has to guess. The substring to be guessed is determined by start_index and end_index: it is created by slicing the word from start_index up to and including end_index. The guess is a string representing the guess attempt the player has made. Example (a small sketch of this function is given at the end of this listing):
    >>> compute_value_for_guess("crushing", 0, 1, "rc")
    10

    main() -> None: This function handles player interaction. At the start of the game the player should be greeted with the Welcome message. Once the guessing sequence commences, the game should loop for the correct number of rounds until either the player wins by guessing the correct word or loses by guessing the incorrect word. Hint: the main function should be your starting point, but also the last function you finish implementing.

    ASSESSMENT AND MARKING CRITERIA

    Functionality Assessment: The functionality will be marked out of 7.
    Your assignment will be put through a series of tests, and your functionality mark will be proportional to the number of tests you pass. If, say, there are 25 functionality tests and you pass 20 of them, then your functionality mark will be 20/25 * 7. You will be given the functionality tests before the due date for the assignment so that you can get a good idea of the correctness of your assignment yourself before submitting. You should, however, make sure that your program meets all the specifications given in the assignment. That will ensure that your code passes all the tests. Note: Functionality tests are automated, and so string outputs need to exactly match what is expected.

    Code Style Assessment: The style of your assignment will be assessed by one of the tutors, and you will be marked according to the style rubric provided with the assignment. The style mark will be out of 3.

    ASSIGNMENT SUBMISSION: You must submit your completed assignment electronically through Blackboard. The only file you submit should be a single Python file called a1.py (use this name – all lower case).

    Appendix

    Welcome message: Help message: "Game rules - You have to guess letters in place of the asterisks. Each vowel guessed in the correct position gets 14 points. Each consonant guessed in the correct position gets 12 points. Each letter guessed correctly but in the wrong position gets 5 points. If the true letters were "dog", say, and you guessed "hod", you would score 14 points for guessing the vowel, "o", in the correct position and 5 points for guessing "d" correctly, but in the incorrect position. Your score would therefore be 19 points."

    Printing Example

    If you are looking for a solution to exactly this assignment, or any variant assignment of this nature, you can contact us.
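    A possible sketch of the scoring rule behind compute_value_for_guess, following the point values quoted in the help message (14 for a vowel in place, 12 for a consonant in place, 5 for a correct letter out of place). This is only an illustration under those stated assumptions, not the reference solution:

    VOWELS = "aeiou"

    def compute_value_for_guess(word, start_index, end_index, guess):
        """Score a guess against word[start_index:end_index + 1] (inclusive slice)."""
        target = word[start_index:end_index + 1]
        score = 0
        for position, letter in enumerate(guess):
            if letter == target[position]:
                score += 14 if letter in VOWELS else 12
            elif letter in target:
                score += 5          # right letter, wrong position
        return score

    print(compute_value_for_guess("crushing", 0, 1, "rc"))  # 10, matching the example above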

  • Text Field Material Design | CodersArts

    Text field material design helps us design an attractive UI for login and register pages, and we can also add validation there. To implement a material text input, first we need to add the material design dependency in App -> Gradle Scripts -> build.gradle:

    implementation 'com.google.android.material:material:1.0.0'

    2. In Login Activity.xml we need to implement a TextInputLayout with a TextInputEditText inside it.

    Outlined EditText:

    style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox.Dense"

    The box drawn around the field, as shown above, is known as an outlined box.

    Filled text field:

    style="@style/Widget.MaterialComponents.TextInputLayout.FilledBox"

    When the text we type sits inside a filled container, as in the example above, it is known as a filled text field.

    Difference between the FilledBox and OutlinedBox text fields: Code:

    Hire an Android developer to get quick help for all your Android app development needs, with hands-on Android assignment help and Android project help by a Codersarts Android expert. You can contact the Android programming help expert any time; we will help you overcome all the issues and find the right solution. Want to get help right now? Or want to know a price quote? Please send your requirement files to contact@codersarts.com and you'll get an instant reply as soon as the requirement is received.

  • Spam or Ham?

    In today's world, we very often see infected messages, fake prompts, or infected emails in our day-to-day life. These messages are called spam. Hackers send these kinds of things in order to attack our systems maliciously. Without proper knowledge, anyone can fall prey to these kinds of attacks. With the help of spam, hackers aim to get private information, like credit card credentials, from our systems. Spam detection systems came into existence in order to prevent this kind of attack.

    The image above gives an overview of spam filtering: plenty of emails arrive every day, some go to spam and the rest stay in our primary inbox (unless you have further categories defined). The blue box in the middle is the machine learning model; how does it decide which mail is spam and which one is not?

    Environment Setup: The project is set up in an Anaconda environment on a Jupyter notebook.

    Dependencies/Libraries Required: pandas, sklearn, pickle, nltk, matplotlib, wordcloud, seaborn

    Table of Contents:

    1. Loading Data: The data used in this article has been gathered from Kaggle. The dataset consists of more than five thousand spam/ham messages.

    2. Visualizing Data: Once we have the data, it is important to explore it and check its features. This can be done using word clouds, which form images from frequently occurring tokens.

    3. Text Cleaning: In any text mining problem, text cleaning is the first step, where we remove those words from the document which may not contribute to the information we want to extract. Emails may contain a lot of undesirable characters like punctuation marks, stop words, digits, etc., which may not be helpful in detecting spam. The emails in the Ling-spam corpus have already been preprocessed in the following ways: Removal of stop words – stop words like "and", "the", "of", etc. are very common in all English sentences and are not very meaningful in deciding spam or legitimate status, so these words have been removed from the emails. Lemmatization – this is the process of grouping together the different inflected forms of a word so they can be analyzed as a single item. For example, "include", "includes," and "included" would all be represented as "include". The context of the sentence is also preserved in lemmatization, as opposed to stemming (another buzzword in text mining, which does not consider the meaning of the sentence). We still need to remove non-words like punctuation marks and special characters from the mail documents. There are several ways to do this. Here, we will remove such words after creating a dictionary, which is a very convenient method, since once you have a dictionary you need to remove each such word only once. (A small sketch of this cleaning step is given after this list.)

    4. Train-Test Split: The data needs to be split into training and test sets in order to evaluate how well the classifier is working.

    5. Training the Model & Predictions: In order to make predictions on data using a classifier, the classifier first needs to be trained on the training data. Once the model is trained, we can make predictions on new data which the model has not seen.

    6. Evaluation: The classifier can be evaluated using different evaluation measures such as the confusion matrix, F1-score, accuracy score, etc.

    7. Visualize Results: In this step, we visualize the obtained evaluation measures.
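    A minimal sketch of the stop-word removal and lemmatization described in step 3, using NLTK (the clean_text helper and its exact steps are our own illustration, not the preprocessing applied to the corpus):

    import re
    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer

    nltk.download('stopwords')
    nltk.download('wordnet')

    stop_words = set(stopwords.words('english'))
    lemmatizer = WordNetLemmatizer()

    def clean_text(text):
        """Lowercase, keep only letters, drop stop words, and lemmatize the rest."""
        tokens = re.findall(r'[a-z]+', text.lower())
        return ' '.join(lemmatizer.lemmatize(t) for t in tokens if t not in stop_words)

    print(clean_text("These messages include punctuation, stop words and digits like 123!"))

    In a pipeline, this function could be applied to the text column (e.g. with pandas' apply) before vectorizing.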
    Importing the Libraries:

    %matplotlib inline
    from sklearn import metrics
    import seaborn as sn
    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer
    import pickle
    import nltk
    import re
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report, f1_score, accuracy_score
    from wordcloud import WordCloud
    import matplotlib.pyplot as plt
    from sklearn import model_selection, preprocessing, svm

    In this step, we imported all the required libraries, like seaborn, pandas (for preprocessing), nltk (for text processing), etc.

    Loading the Data:

    raw_data = pd.read_csv(loc, engine='python')
    data = pd.DataFrame()
    data['target'] = raw_data['v1']
    data['text'] = raw_data['v2']
    data.head()

    In this step, we read the raw data. Next, let's see how many ham and spam messages are present.

    ham = [i for i in data['target'] if i == 'ham']
    spam = [i for i in data['target'] if i == 'spam']
    len(ham), len(spam)

    O/p: (4825, 747)

    So we have 4825 ham messages and 747 spam messages in our dataset.

    Convert the Categorical Column into a Numerical One: In this step, we need to convert the categorical column (the target column) to a numerical one, i.e. we need to convert it to zeros and ones.

    data['target'] = data['target'].map({'ham': 0, 'spam': 1})
    data.head()

    Wordcloud: In this step, word clouds are built on the text column.

    1. Word cloud for spam messages:

    spam_words = ' '.join(list(data[data['target'] == 1]['text']))
    spam_wc = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(spam_words)
    plt.figure(figsize=(10, 7))
    plt.imshow(spam_wc, interpolation="bilinear")
    plt.axis('off')
    plt.show()

    In the first step, all the spam words are joined. Then a word cloud with width 800 and height 500, and a maximum font size of 110, is generated and plotted with a figure size of 10 by 7; the word cloud interpolation is bilinear.

    2. Word cloud for ham messages:

    ham_words = ' '.join(list(data[data['target'] == 0]['text']))
    ham_wc = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(ham_words)
    plt.figure(figsize=(10, 7))
    plt.imshow(ham_wc, interpolation="bilinear")
    plt.axis('off')
    plt.show()

    Vectorize the Text: In the first part of this series, we explored the most basic type of word vectorizer, the Bag of Words model, which will not work very well for our Spam or Ham classifier due to its simplicity. An alternative is the TF-IDF vectorizer (Term Frequency - Inverse Document Frequency), a similar embedding technique that takes into account the importance of each term to a document. While most vectorizers have their unique advantages, it is not always clear which one to use. TF-IDF vectorizes documents by calculating a TF-IDF statistic between the document and each term in the vocabulary; the document vector is constructed by using each statistic as an element in the vector. After settling on a vectorizer, we must decide its granularity: a popular alternative to assigning each word its own term is to use a tokenizer, which splits documents into tokens (thus assigning each token its own term) based on white space and special characters. For simplicity, the code below sticks to a plain CountVectorizer; a TfidfVectorizer could be swapped in with the same interface.
    data.replace(r'\b\w{1,4}\b', '', regex=True, inplace=True)
    vectorizer = CountVectorizer()
    vectorizer.fit(data['text'])
    vec = vectorizer.transform(data['text'])  # sparse document-term matrix, kept separate from the DataFrame
    data.head()

    Train-Test Split: We split our dataset into training and testing parts with a 70-30 ratio.

    Train_X, Test_X, Train_Y, Test_Y = model_selection.train_test_split(vec, data['target'], test_size=0.3)

    Training the Model & Predictions: The next step is to select the type of classifier to use. Typically in this step we would choose several candidate classifiers and evaluate them against the testing set to see which one works best. To keep things simple, we can assume that a Support Vector Machine works well enough. The objective of the SVM is to find the hyperplane that separates the two classes with the maximum margin. The C term is used as a regularization parameter that influences the objective function: a larger value of C typically results in a hyperplane with a smaller margin, as it gives more emphasis to accuracy rather than to margin width. Parameters such as this can be precisely tuned via grid search.

    SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto')
    SVM.fit(Train_X, Train_Y)
    predictions_SVM = SVM.predict(Test_X)

    Evaluation:

    print("SVM Accuracy Score -> ", accuracy_score(predictions_SVM, Test_Y)*100)
    print(classification_report(Test_Y, predictions_SVM))
    print(f1_score(Test_Y, predictions_SVM, average='weighted'))

    So here we can see the very good accuracy of about 97% that we got on this dataset.

    Visualize the Results Through a Confusion Matrix: A confusion matrix is used to describe the performance of a classification model:

    True positives (TP): cases when the classifier predicted TRUE (they have the disease), and the correct class was TRUE (the patient has the disease).
    True negatives (TN): cases when the model predicted FALSE (no disease), and the correct class was FALSE (the patient does not have the disease).
    False positives (FP) (Type I error): the classifier predicted TRUE, but the correct class was FALSE (the patient did not have the disease).
    False negatives (FN) (Type II error): the classifier predicted FALSE (the patient does not have the disease), but they actually do have the disease.

    cm = metrics.confusion_matrix(Test_Y, predictions_SVM)
    plt.matshow(cm)

    Create a heatmap of the confusion matrix:

    plt.figure(figsize=(10, 7))
    ax = plt.subplot()
    ax.set_title('Confusion Matrix')
    sn.heatmap(cm, annot=True, ax=ax)

    df = pd.DataFrame(Test_Y)
    df['pred'] = predictions_SVM
    sent = df['target']
    pred = df['pred']
    ham = len([i for i in df['pred'] if i == 0])
    spam = len([i for i in df['pred'] if i == 1])

    In the lines of code above, we created a pred column which stores the predictions. Let's plot the spam and ham counts.

    plt.title('Spam/ham distribution')
    cat = ['spam', 'ham']
    freq = [spam, ham]
    plt.ylabel('frequency')
    plt.bar(cat, freq, color=['blue', 'green'])
    plt.show()

    So in this manner, we have built the spam detection model. For more reference, go through this GitHub link: https://github.com/kapuskaFaizan/NLP-jupyter_notebook/blob/master/spam_detection.ipynb Thank you for reading!! Happy learning!

    Note: If you need an implementation for any of the topics mentioned above, or assignment help on any of its variants, feel free to contact us at contact@codersarts.com.

  • We will design and develop websites, apps, and dashboards

    We will design websites and mobile apps, and take on any kind of design and development work to suit your needs. Cost will be discussed in the inbox.

    Get User Experience (UX) Expert Help: Are you looking for a UX and wireframe designer for a web app? Hire Codersarts experts to design UI & UX that helps users make data-driven decisions.

    Objective: Design the UI & UX for an application. The application will help users upload data and create/apply business rules on the data to generate results.

    Deliverables: Logo; web page wireframes & designs

    Front End Pages: Landing page (header, logo, body, menu (About, Features, Contact & Profile (once logged in)) & footer); About Us; Contact Us form (name, email, phone, message, social media); Signup (first form: name, email & password; second form: company, phone, title, number of employees, etc.); signup confirmation; Sign In (login form, password reset form, email validation form); Edit Profile

    Dashboard Pages: Statistics, projects, data, list of existing results with filters and charts

    Access Management: User management (approve, block, etc.): list of users; company management: list of signed-up companies

    Design System for E-commerce Projects on Bootstrap: UI Design: creative UI design templates for e-commerce web sites with creative design components. Code: the Bootstrap UI kit includes an HTML/CSS component library for building online shop web sites. Templates: fully responsive e-commerce templates and layouts to boost your front-end development. Mobile: mobile apps and web sites are popular now; get mobile HTML templates and UI layouts.

    We have professional designers specializing in mobile app UI/UX who will create minimalistic, eye-catching, state-of-the-art UI/UX designs for your mobile apps. Contact us to place any orders.

    What you can get: PNG images; any other format you want; XD source files; all copyrights; NDA signed (if you want); mockups; retina display design; font files; sliced PNG images.

    Looking forward to working with you. Contact us.

  • Treasure In Cups | Android Assignment Help | Codersarts

    UI: https://www.figma.com/file/6oG69Qjr2N1JW68UL3KtXi/Treasure-In-Cups?node-id=0%3A1
    Sound: https://drive.google.com/drive/folders/1X-7tAK0lS5_qjiXJCdj3uFwnE6nyyanh?usp=sharing

    UI kit: Main menu (best score; play; sound on/off; music on/off); Game screen (timer; score)

    Classic thimbles, or guess where the ball is under the glass. Initially there are 3 glasses on the table; after a score of 10, 1 glass is added; after a score of 25, there are 5 glasses on the table. Before each "round" the glass rises and we are shown where the coin is. The glasses then start mixing randomly; after that they stop and a 10-second timer starts (it is shown at the top as a line that shrinks from both sides and should disappear completely after 10 seconds). The player taps a glass, and the glass rises. If there is a coin under it, the player gets 1 point and a new "round" starts. The mixing speed of the glasses increases after each "round"; after a score of 10 the speed is reset and a new glass is added, and similarly at a score of 25. The game ends when the timer reaches zero or the player chooses the wrong glass.

    If you need any project or assignment help related to Android, or need any corrections to an existing project, then you can contact us here: contact@codersarts.com

  • Go Bananas / Banana Go Wild Android Game | Codersarts

    Game Location: https://www.figma.com/file/XFuMhtsLzNR22OB9EPJLnl/GoBananas-BananaGoWild_250620?node-id=29%3A0

    Menu screen: Play; Settings (sound, music, vibration); Best score

    Game screen: The banana floats on a leaf and jumps over different obstacles. A coin hangs above each obstacle. Over time, the banana floats faster and faster. When the player taps the screen, the banana jumps (the whole game has essentially the same mechanics as the dinosaur game in Google Chrome). If the banana hits an obstacle, the game ends.

    Pause screen: restart; resume; sound; music; vibration

    Lose screen: Score; Best Score; Restart; Menu

    Sound: https://drive.google.com/drive/folders/1HQ-mX-9e7oSci-GmG1ybXcCkSOmh_LGs?usp=sharing

    Contact us to get instant help: contact@codersarts.com
