
Search Results


  • Principal Component Analysis | PCA.

    When you have a data set so large that you cannot understand the relationships between the variables, or there is a risk of overfitting your model to the data, or you cannot decide which features to remove and which to focus on, and you are okay with making your independent variables less interpretable, then Principal Component Analysis (PCA) comes to your rescue. Principal Component Analysis is a very useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension. It is a method often used to reduce the dimensionality of large data sets by transforming a larger set of variables into a smaller one that still contains most of the information in the larger set. Before going too deep into the workings of PCA, we will discuss a few mathematical concepts that it uses: standard deviation, covariance, eigenvectors and eigenvalues. This background is meant to make PCA easier to understand, but can be skipped if the concepts are already familiar.

    Mathematical Concepts:

    Standard Deviation (SD): It is a measure of how spread out a data set is: the average distance from the mean of the data set to a point. To calculate it we compute the squared distance from each data point to the mean of the set, add them all up, divide by n-1 (n is the number of points in the data set), and take the positive square root. The mathematical representation of SD is: s = sqrt( Σ (Xi - X̄)² / (n - 1) ).

    Variance: It is another measure of the spread of data in a data set. It is the square of the standard deviation: s² = Σ (Xi - X̄)² / (n - 1).

    Covariance: The measures discussed above are useful only when dealing with a 1-dimensional data set. If we have an n-dimensional data set, we can only calculate the SD and variance of each dimension independently of the other dimensions. If we want to find out how much the dimensions vary from the mean with respect to each other, we calculate the covariance. It is always measured between two dimensions, i.e. if you have a 3-dimensional data set (x, y, z), then you can measure the covariance between the x and y dimensions, the y and z dimensions, and the x and z dimensions. Measuring the covariance between x and x, or y and y, or z and z, would give you the variance of the x, y and z dimensions respectively. The mathematical representation of covariance is: cov(X, Y) = Σ (Xi - X̄)(Yi - Ȳ) / (n - 1). The exact value of covariance is not as important as its sign (i.e. positive or negative). If it is positive, the two dimensions increase together; if it is negative, then as one dimension increases, the other decreases. If the covariance is zero, the two dimensions are uncorrelated (there is no linear relationship between them).

    Covariance Matrix: For an n-dimensional data set we can calculate n!/((n-2)! x 2) different covariance values. It is convenient to calculate all the possible covariance values between all the dimensions and put them in a matrix, called the covariance matrix. The mathematical representation of the covariance matrix of a data set with n dimensions is: C(n x n) = ( c(i,j) ), where c(i,j) = cov(Dim(i), Dim(j)), C(n x n) is a matrix with n rows and n columns, and Dim(x) is the x-th dimension. All this complex-looking formula says is that if you have an n-dimensional data set, then the matrix has n rows and columns (so it is square) and each entry in the matrix is the result of calculating the covariance between two separate dimensions.
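    The quantities above are easy to check numerically. Below is a small sketch assuming NumPy is available; the two ten-point arrays are made-up example values.

```python
import numpy as np

# Two dimensions (x and y) of a made-up 10-point data set
x = np.array([2.5, 0.5, 2.2, 1.9, 3.1, 2.3, 2.0, 1.0, 1.5, 1.1])
y = np.array([2.4, 0.7, 2.9, 2.2, 3.0, 2.7, 1.6, 1.1, 1.6, 0.9])
n = len(x)

# Standard deviation and variance, using the n-1 denominator from the formulas above
sd_x = np.sqrt(np.sum((x - x.mean()) ** 2) / (n - 1))
var_x = sd_x ** 2
print(sd_x, np.std(x, ddof=1))            # both give the same value

# Covariance between the x and y dimensions
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)

# Full 2x2 covariance matrix: variances on the diagonal, covariances off the diagonal
C = np.cov(np.stack([x, y]))              # rows are treated as dimensions
print(cov_xy, C)
```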
    Eigenvalues and Eigenvectors: These two concepts lie at the heart of PCA. When we have a large data set in the form of a matrix, it becomes a hassle to manage and it also consumes a lot of space on disk. Therefore we aim to compress the data in such a way that it holds on to the important information. For this purpose we can use eigenvectors and eigenvalues to reduce the dimensions. In simple words, all of the information which doesn't impact the data set in a crucial manner is dropped so that only the key information is retained. We know that a vector contains the magnitude and direction of a particular movement. Also, when we multiply a matrix (i.e. a transformation matrix) with a vector, we get a new vector. This new transformed vector is sometimes a scaled version of the original vector and at other times it has no scalar relationship with the original vector. Here, the first type of vector is the important one. This vector is called an eigenvector. These vectors are used to represent large-dimensional matrices. They are also called special vectors, because they do not change direction when the transformation is applied to them; they only become scaled versions of the original vector. Hence, the large matrix T can be summarised by a vector v, given that the product of the matrix T and the vector v is the same as the product of the vector v and a scalar b: T*v = b*v. Here b is the eigenvalue and v is the eigenvector. In this way, eigenvectors help us approximate a large matrix by a smaller set of vectors, and eigenvalues are the scalars by which those eigenvectors are stretched. Now that we are familiar with the mathematical concepts, we can move ahead with the working of PCA.

    Steps to perform PCA:

    Step 1: Get some data.

    Step 2: Subtract the mean: For PCA to work properly, for each column of the data matrix you need to subtract the mean of that column from each entry. So all the x values have x̄ (the mean of the x values of all the data points) subtracted, and all the y values have ȳ subtracted from them. This produces a mean-centered data set whose mean is zero.

    Step 3: Calculate the covariance matrix.

    Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix.

    Step 5: Choose components and form a feature vector: Here is where the notion of data compression and reduced dimensionality comes in. It turns out that the eigenvector with the highest eigenvalue is the principal component of the data set: it is the most significant relationship between the features of the data set. In general, once eigenvectors are found from the covariance matrix, the next step is to order them by eigenvalue, highest to lowest. This gives you the components in order of significance. Now, if you like, you can decide to ignore the components of lesser significance. You do lose some information, but if the eigenvalues are small, you don't lose much. If you leave out some components, the final data set will have fewer dimensions than the original. To be precise, if you originally have n dimensions in your data, and so you calculate n eigenvectors and eigenvalues, and then you choose only the first p eigenvectors, the final data set has only p dimensions. What needs to be done next is to form a feature vector. This is constructed by taking the eigenvectors that you want to keep from the list of sorted eigenvectors (high to low) and forming a matrix with these eigenvectors in the columns.
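    Steps 1 to 5 take only a few lines of NumPy. This is a rough sketch on a made-up 2-dimensional data set, keeping the top p = 1 component; the variable names are ours, not part of any library.

```python
import numpy as np

# Step 1: some data (rows = data points, columns = dimensions)
data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
                 [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])

# Step 2: subtract the mean of each column
centered = data - data.mean(axis=0)

# Step 3: covariance matrix (columns are the dimensions here, hence rowvar=False)
cov = np.cov(centered, rowvar=False)

# Step 4: eigenvalues and eigenvectors (eigh suits symmetric matrices like this one)
eig_vals, eig_vecs = np.linalg.eigh(cov)

# Step 5: order components by eigenvalue (highest first) and keep the top p as columns
order = np.argsort(eig_vals)[::-1]
eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]
p = 1
feature_vector = eig_vecs[:, :p]
print(eig_vals)        # the first eigenvalue is much larger than the second
```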
    Step 6: Derive the new data set: Once we have chosen the components (eigenvectors) that we wish to keep in our data and formed a feature vector, we simply take the transpose of the feature vector and multiply it by the transposed mean-centered data set:

    Final Data = (Feature vector_T) x (Original centered data set_T)

    where _T represents the transpose, (Feature vector_T) represents the eigenvectors in rows instead of columns, and (Original centered data set_T) {from step 2} represents the data items in each column, with each row holding a separate dimension. Final Data is the final data set, with data items in columns and dimensions along rows. It gives us the original data solely in terms of the vectors we chose. The original data is thereby reoriented from the original axes to the ones represented by the principal components (hence the name Principal Component Analysis). It should be noted that if we had taken all the eigenvectors in the feature vector, we would get the exact original data back.

    Step 7: Get the old data back (reconstruction): If we reverse the equation given in step 6 we can get the original data back, i.e.

    Original centered data set = (Feature vector_T)^(-1) x Final Data

    Because the eigenvectors of the (symmetric) covariance matrix are orthonormal, the inverse of our feature vector is actually equal to the transpose of our feature vector. Thus,

    Original centered data set = (Feature vector_T)_T x Final Data

    The above equation gives back the centered original data set, but to get the exact original data back you need to add on the mean of the original data (remember we subtracted it in step 2). So, for completeness,

    Original data set = { (Feature vector_T)_T x Final Data } + Mean

    Example: Let us consider the reconstruction of a grayscale image, which uses the eigendecomposition technique for dimensionality reduction as discussed above. We will look at the results corresponding to feature vectors consisting of the top p eigenvectors, for various values of p, to get an idea of how PCA performs dimension reduction.

    (Figures: the original grayscale image, followed by reconstructions using the top 10, 30, 50, 100 and 500 eigenvectors.)

    In the above example we can see that even the top 100 eigenvectors were insufficient to represent the original data, while the top 500 eigenvectors were able to represent the data more closely than the other values, but still with some loss of information. The number of eigenvectors needed to sufficiently represent the original data varies from one problem to another. Don't get intimidated by the complexity of the mathematical concepts. There are various packages available (for example Scikit-learn) to do the mathematical calculations for you. The important thing is to get an idea of what all these methods are doing to the data, which can be achieved by trying your hand at various problems.
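    Continuing the same sketch, steps 6 and 7 (projection and reconstruction) look like this in NumPy; the scikit-learn comparison at the end is optional and assumes scikit-learn is installed.

```python
import numpy as np

data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
                 [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])
mean = data.mean(axis=0)
centered = data - mean

eig_vals, eig_vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
order = np.argsort(eig_vals)[::-1]
feature_vector = eig_vecs[:, order][:, :1]     # keep only the first principal component

# Step 6: Final Data = FeatureVector^T x CenteredData^T  -> shape (p, n_samples)
final_data = feature_vector.T @ centered.T

# Step 7: reconstruct; the orthonormal feature vector's inverse is its transpose,
# and adding the mean back gives an approximation of the original data
reconstructed = (feature_vector @ final_data).T + mean
print(np.round(reconstructed, 2))

# The same reduction and reconstruction using scikit-learn, for comparison
from sklearn.decomposition import PCA
pca = PCA(n_components=1)
scores = pca.fit_transform(data)               # projected data, shape (n_samples, 1)
approx = pca.inverse_transform(scores)         # should match `reconstructed` closely
```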

  • JOB LIST APP

    INTRODUCTION:- Looking for a job can be extremely hard, especially in the IT industry. We wanted to take a closer look at the job market and design an app that would match users' needs. As designers, we need our workplace to fit certain criteria. Because design work is a creative process, each one of us has their very own preferences and it's hard to find an employer that suits us. So, for this concept project, we focused on the designers' job market and their needs to design the ultimate smooth, easy and intuitive experience. More than 45% of candidates use their mobile phone when job searching. Creating a tool to allow candidates and recruiters to find the best possible match for each other was key for the new recruitment portal.

    WHAT WE DID:- This job list app for iOS and Android learns your skills, job history and preferences to accurately match you with the best roles. Not only that, but it keeps learning. The more you use the app, the more it gets to know exactly what career path you want to take. The app server is updated daily with thousands of new jobs, sending a push notification to users when roles matching their criteria have been uploaded. For easy accessibility, the app stores all user documentation such as CVs, cover letters and references in one place. Plus, thanks to the Tinder-style right and left swiping feature, browsing for relevant roles is made quicker and easier.

    THE OPPORTUNITY:- Our team started with research and took a look at what JobStreet.com currently offers to its users. During the research we noticed inconsistency across the desktop, mobile app and mobile browser experiences. The existing job app did not reflect the features and functions of the desktop website and mobile website. Both the desktop and mobile browser allow quick access to 'Search Jobs', while the app requires users to sign up/register. This gave us an opportunity to rethink the app, since there was a lot of room for improvement to stay consistent with the website and provide users with a seamless experience searching and applying for jobs.

    DESIGN PROCESS:- For each project we go through several phases to make sure our process is bulletproof. Here are the five stages we go through and what happens during each one of them.

    USER SURVEY:- For our research process, we decided to do interviews and a survey to better get to know what our users need. Surveys are a great first look at the customer environment because they provide quantitative data. We also like to do qualitative research (interviews) to fully understand how to approach the design.

    USER INTERVIEWS:- We conducted a few interviews among our community of designers to understand their pain points, behaviours and motivations when looking for a job. We also wanted to know how they take on job-seeking and which apps they use.

    COMPETITIVE ANALYSIS:- To design the perfect experience, we took a closer look at the competition and their design approach. We wanted to see if they offer certain features that users mentioned during our research.

    STYLE GUIDE:- As usual, to make our work easier and more organized, we created a style guide. We wanted the app's main focus to remain on the listings; therefore, the design is rather minimalist.

    USER FLOW:- Each time we design a product, we create a user flow to understand how the users navigate and behave. You will find a simplified map of the users' actions in the app.

    WIRE FRAMES:- Before we move on to the UI design, we create wireframes.
    This way, we can focus on perfecting the features and the user flow without being distracted by colors and icons. It is also the perfect way to test functionality without the testers being overwhelmed.

    THE FINAL PRODUCT:- After going through all the above steps, we did a final round of testing and designed the UI according to our predefined style guide.

    PERSONALIZED VIEW:- For the homepage, we wanted to gather the most important information instead of just flooding the user with job offers. That's why we focused the main page around recommended jobs and recently added ones within preset criteria.

    STRAIGHT TO THE POINT:- Following what our research showed us, the job offer page has all the most important information in a simple, condensed form for quick access and easy decision-making. Everything the users need is on a single screen: requirements, location, schedule and salary.

    OVERVIEW YOUR APPLICATIONS:- We designed a space in the app for users to manage all of their applications and view employer feedback about each application. This way it's easy to track your job-hunting progress and make decisions.

    THE RIGHT JOB FOR YOU:- The filters in the app were designed according to our research and enable the user to sort by salary, location and position type.

    QUICK SEARCH:- The app was designed to provide a quick and easy experience, so the navigation is extremely simple and straightforward. You can view job offers, your applications or your profile, and instantly access all the information.

    USABILITY TESTING:- We did a final usability test with 5 participants and used the SUS usability scale to evaluate the results. We set a series of tasks that included: search and apply for an entry-level analyst role; search and apply for jobs with a fixed salary range and find out more about the company. The tasks were completed by all 5 participants and the revamped app got a score of 85. Any score of 80.3 or higher is an A, which means people love the product and will recommend it to their friends.

    THE RESULTS:- There were more than 68,000 downloads of this job list app in the first 8 weeks alone. 40,000 new jobs are added to the solution every day in over 25,000 different job categories. More than 250,000 jobs are currently listed on the app, and the solution has seen more than 2 million users actively job searching since going live.

  • Building A Music Analyzer Application Using JavaFx

    Description: In this project, you will be building a Music Analyzer application. It will read data from different music files, perform queries based on user input, and export reports to new files. For this project, you will be implementing four classes: Song, Artist, InvalidSongFormatException and MusicAnalyzer. You will not need to implement any additional classes, though you may do so if you wish. Note: 5 points of your grade are based on Coding Style. You will need to follow the standards described here. Use the "Run" button to check your Coding Style without using a submission.

    Instructions: The project Javadoc is available here. The implementation requirements for each class are described there. Before beginning, download the Starter Code from Vocareum and read through it. The menu is provided for you already. Note: For this assignment, you are not permitted to use an ArrayList.

    Song class: In this class, we will be implementing a class that represents a Song. While this class doesn't hold any specific data from an actual song file, the fields and methods here represent the same data one could get from an actual song.

    Artist class: In this class, we will be implementing a class that represents an Artist. Its fields, constructor and methods are described in the project Javadoc.

    InvalidSongFormatException class: This is a class where you will be implementing a custom Exception. This will allow us to create more specific error types that are particular to certain situations in the solution. Note: Your class must extend the Exception class. Its constructor is described in the project Javadoc.

    MusicAnalyzer class: This class will implement all of the functionality that the user will engage with while the program is running. Its fields, constructor and methods are described in the project Javadoc. Finally, note that many of the methods listed above correspond to a specific menu selection. That is to say, when the user enters a specific choice from the menu, the program should invoke the appropriate method as described. For reference, follow the guide below. If the user chooses menu choice 1, then invoke listSongsByArtist. If the user chooses menu choice 2, then invoke listFeaturesOnSong. If the user chooses menu choice 3, then invoke findMainArtistOnSong. If the user chooses menu choice 4, then invoke countSongsByArtist. If the user chooses menu choice 5, then invoke findSongLength. If the user chooses menu choice 6, then invoke findArtistGenre. If the user chooses menu choice 7, then invoke findArtistAndFeatures. If the user chooses menu choice 8, then invoke exportByArtist. If the user chooses menu choice 9, then exit the MusicAnalyzer with the procedure described in the project Javadoc.

    Testing (RunLocalTest): We have included a program in the Starter Code that will allow you to create and run output tests automatically. This will make it easier for you to verify that each possible progression through your solution is correct. Take a look at RunLocalTest.java. There are many utility features and tools that you do not need to worry about at the moment; instead, focus on the test case. It is included in the Starter Code. Read through it. You can modify the test to test any method you like by following the same format. You can either download the program and run the main method or use the "Run" button on Vocareum to run the test. You can repeat this process for each path.

    Public Test Cases Note: For many homeworks and projects, we will give you test cases that correspond to several of the ways we will be testing your program. But we will not give you test cases for ALL of the ways we will be testing your program.
    You should think of other test cases to use that will fully test every aspect of every feature of your program. Just because your program passes all the test cases we give you does not mean that it is fully correct and will receive a score of 100. If you need a solution to this at an affordable price, you can contact us at contact@codersarts.com

  • Design And Implement a Java Application

    Overview: This assignment forms a major part of your assessment (30%) within the Programming Design and Implementation unit. Please keep up to date with announcements within Blackboard to ensure that all that is required is submitted at the appropriate time.

    Background: Large amounts of data are made available for free by all sorts of organizations around the world. The City of New York is one such organization. A large number of open data sets are available from: https://opendata.cityofnewyork.us. To understand their philosophy about this, you can read the following web page: https://opendata.cityofnewyork.us/overview/. The "LinkNYC Kiosk Status" dataset is one such dataset and is available for download from: https://data.cityofnewyork.us/City-Government/LinkNYC-Kiosk-Status/n6c5-95xh. The dataset is updated hourly and "...provides the most current listing of LinkNYC Kiosks, their location, and the status of the Link's wifi, tablet, and phone." This is the dataset that will be used within this assignment.

    The Big Picture: In this assignment you are going to design and implement a Java application that will allow users to interrogate the data to understand the state of kiosks throughout New York City. A real-world development of this would permit users to know all about the kiosks, updated hourly. Relax, we won't go that far. There is a lot of data in the file that can be analysed. An important thing to know is that New York City is made up of five Boroughs: (a) Manhattan; (b) Staten Island; (c) Queens; (d) Brooklyn; and (e) Bronx. Figure 1: The Five Boroughs and JFK (https://en.wikipedia.org/wiki/Boroughs_of_New_York_City#/media/). For this assignment, the other important breakdowns of New York City are the Council Districts, Community Boards and Postcodes. Your program will create knowledge by generating statistics in response to user requests, using the data within the provided .csv file. The data will need to be processed through the creation of objects, and then the appropriate calculations performed.

    The Tasks

    1. The Data: Here is a screen capture showing a sample of the data in the .csv file to be processed. The data will be extracted from the file and loaded into your program for processing.

    2. Required Classes and Class Fields: For this program you are required to write two classes: a Kiosk class and an Address class. The Kiosk class will have the following class fields (instance variables): pptID (String); latitude (real number); longitude (real number); status (String); communityBoard (String); councilDistrict (String); wifiStatus (boolean); phoneStatus (boolean); tabletStatus (boolean); and address (an Address object). The Address class will have the following class fields (instance variables): street (String); borough (String); state (String); and postcode (String). Please note: the Kiosk class contains an Address object as one of its class fields. You are required to design these classes in pseudocode and implement them in Java to be used within your program.

    3. Menu System: As you have done in the practical worksheets, you will implement a menu system that provides the user with the options to select what areas they will carry out an analysis on and what specifics will be in the analysis. The first menu will ask users which grouping they would like to interrogate: Welcome to the New York City Kiosk Program. There are a total of `XYZ' WiFi kiosks throughout the city. Please make a selection from the Menu below.
    All of New York City
    Manhattan
    Staten Island
    Queens
    Brooklyn
    Bronx
    Postcode
    Community Board
    Council District
    Exit Program

    You are required to design this menu in pseudocode and implement it in Java within your program.

    4. Knowledge to Display: After selecting the area they want, users are then presented with a menu to select the knowledge they want:

    Please select from a statistic below:
    Total number of kiosks.
    Total number and percentage of kiosks with status: removal pending.
    Total number and percentage of kiosks with status: ready for activation.
    Total number and percentage of kiosks with status: repair.
    Total number and percentage of kiosks with status: installed.
    Total number and percentage of kiosks with WiFi status: up.
    Total number and percentage of kiosks with WiFi status: down.
    All of the above statistics.

    You are required to design this menu in pseudocode and implement it in Java within your program.

    5. Displaying Knowledge to the User: For the requested knowledge to be useful to the user, it must be displayed in an appropriate and meaningful way. In response to the user's selection, you must display the output in an easy-to-understand, meaningful way. Once the requested knowledge has been displayed to the user, the program returns to the first menu and waits for the user to input their next choice. You are required to design this output style in pseudocode and implement it in Java within your program. An example of output to the user may be: Kiosks in Queens: 12/100 Kiosks (12.0%) currently have an Up WiFi status. Kiosks in Brooklyn: 26/100 Kiosks (26.0%) currently have an Installed status.

    Things to note: Your program should be designed in pseudocode and implemented in Java. Your pseudocode needs to follow the CLUCC principles: Clear; Logical; Understandable; Consistent; and Correct. Your Java code needs to comply with the Coding Standard and be well documented. If you are looking for basic, intermediate or advanced Java application help, you can contact us at contact@codersarts.com

  • TO-DO APP

    INTRODUCTION:- To-do apps are widely used, but different users use the apps differently depending on their particular needs. In order to identify the problems faced by users of to-do apps, I conducted research to develop an understanding of the market and users' needs.

    Problem statement: Social media and other easily accessible online distractions make it hard for us to stay focused on our tasks and make it difficult for us to do our work efficiently. Also, constantly switching between tasks may give us the false feeling that we are being productive when we are, in fact, not. It's more important for us to prioritize tasks and work on those that are most important, rather than focusing on deleting small items from our to-do list just for the sake of appearances. The goal of this app is to help us become more aware of how we spend time in the process of doing those tasks and how productive that time is. It can help set some constraints on social media to reduce distraction and track the time we spend working on the to-do items. When we have a better sense of the estimated time we'll need to spend on our tasks, along with the validated time spent on the items for reference or personal/team reviews, we are able to manage our daily routines more efficiently.

    User interview & survey: Since I was still exploring what the issues were at this phase, I was hoping to use interviews to learn more about to-do app users who work either full-time or as freelancers. I chose these categories because these users are most likely to work in an environment that demands efficient time allocation and in which a to-do app could be most beneficial. My interview questions are available here. Besides doing these qualitative interviews to get to know the users, I also wanted to conduct some quantitative research in an attempt to identify patterns in problem solving among users. I sent out a survey to which 24 people responded. From my interviews and survey, I was able to extract themes among users and identify where I think the core problem lies. The collected data helped me to form a user persona whose traits align with the audience I am aiming for, and I referred to this persona throughout my product design process. I identified the following common themes among the users interviewed and surveyed: They appreciate easy set-up (getting the app up and running quickly). They want to be able to add tasks to their to-do lists simply and quickly. They are heavily distracted by social media. They tend to lose track of time.

    From the interview/survey results and the themes I extracted from them, I was able to develop a list of potential solutions based on users' needs: Set some constraints on social media, apps, and online distractions. (My research indicates that users are heavily distracted in this area.) Manage time on tasks, and then validate time spent. (My research shows that users often lose track of their time and delay work.) Generate a report showing where we spend our time on tasks. (Users want to know better how their time is spent and how to improve their efficiency.) Make it easy for users to manage to-do lists, track time and set constraints in one place. (Users typically depend on different tools to help organize tasks and track time.) Add the ability to prioritize tasks via deadline. (Users wish to prioritize to-dos mainly based on deadlines.)
    Add the ability to easily add tasks and to search to-do lists. (Users want to be able to easily find to-do items.) Reduce distractions from mobile by allowing device settings to be synced from desktop. (Users want data to sync across platforms.)

    It's very tempting to include many functions in our quest to help users, but all those functions might not lead us toward the target we're aiming for. The MoSCoW method helps to set the path of product planning and trim down the scope we need for a minimum viable product (MVP). The point is to get the product out for real-world testing so we can have feedback and continue iterating on the product.

    High-Level User Journey & App Actions: From the potential solutions to user problems developed from my research, I was able to construct a high-level user journey and the app's actions in sections. I planned two user journeys to make sure the flows follow how we usually approach our tasks. With social media and other easily accessible online distractions, it's more difficult than ever for us to stay focused through the course of doing our tasks. In order to handle tasks more efficiently, we must learn to manage digital distractions. This app helps us when we don't want to engage with these distractions and want to maintain boundaries at work. The app also provides reporting that will make us more aware of how we spend our time and more informed about how productive we are. As a result, we are able to manage our tasks more efficiently.

    Concept testing: Concept testing helps me verify whether the flow and idea can be understood by users. I translated my two flows into two simple sketches, labeled Concept v1 and v2 below. I produced the flows with pen and paper, rather than jumping too quickly to wireframes, so that I would be pushed to consider carefully how I would place the elements in the layout and how they would connect logically. Early concept testing with pen and paper is valuable because it allows me to identify what works and what does not and to make sure that what I am creating is based on user needs. In my testing, I started by giving users a few tasks to see if they were able to accomplish them. Below is what I found. During the process, users were able to complete tasks assigned to them, such as creating a to-do and starting a stopwatch. They also indicated that the app doesn't look cluttered with too many extra options. I've tried to avoid giving the user too many options so that the app does not overwhelm them and hinder their actions. At this early stage, the following problems were encountered: I made a mistake by not including one major user concern in the "Block" apps section (later, Focus mode for Do Not Disturb). Users have a fear of missing urgent messages, so a total block function might be intrusive. Users felt confused by the icon used in the focus mode because, according to them, it looks like it would turn on location services on the device. Through testing, I noticed that users were asking if the "start" stopwatch button in the task detail would automatically start the timer. This flow worked, but it might be better to add another button on the timer page to actually "start" the stopwatch so that users won't be confused.

    Wireframes: At this stage, I translated the sketches to Sketch. While I was making the wireframes, I reconsidered some interaction elements in the bottom navigation. For example, I considered whether I should have the add button in the bottom nav, why the menu is on the top left, why the search icon sits on the top right, etc.
    I then decided to stop my work and go back to the problem statement and user research findings. Revisiting my research put me back on the right track regarding the elements and interactions, and I was able to make decisions about them confidently. Adding a to-do will be the primary task on the to-do screen, but not for the other two functions (Focus, Report). I decided to keep the task within the to-do list instead of moving it to the bottom bar; when the to-do list gets long, it's quick and easy for users to access it as well. The search should still be kept at the top right, as it is not the primary action on the to-do screen. It works for a quick search, and it functions as a sub-action compared with "Add" a to-do. Finally, account should be placed in the bottom bar, as it is also a high-level navigation item that contains profile and settings. It should not sit on the top right alone.

    For user testing, I selected users who were involved in the previous user testing, and I also included some new users. The age range of the users was from 25 to 45. They are currently working full-time or doing freelance work. After incorporating user feedback, I was able to make another revision to my designs. Users were able to perform assigned tasks such as quickly adding to-dos and switching dates in the calendar. They were able to understand the time stamp next to a to-do and associate it with the stopwatch. Users appreciated the convenience of the categorization settings, and they have the flexibility to selectively switch on Do Not Disturb in a batch without worrying about missing any important messages. Users also have access to a report that provides the percentage of work and break time, so they know where they spend their time and whether they are being productive or not.

    CONCLUSION:- The to-do app performs very efficiently compared to competitors. There is room for further development with regard to keeping the app small and quick. The to-do app can be developed as a standalone application as well as a very efficient module to be combined into a larger project. One of the key challenges is to choose an appropriate storage solution that will preserve its biggest advantages: simplicity, speed, and low resource demands. THANK YOU

  • Real Estate Listings Using Java

    Problem Statement: Suppose you are looking to buy a house. You have certain criteria in mind, such as price range, square footage, number of bedrooms, etc. You would like to enter your criteria and expect the real estate program to come up with a house, or possibly a list of houses, that satisfy those criteria. You are asked to write a program that works along those lines.

    A "House" (i.e. a house that is for sale) for us is defined by the following data: address – the street address, such as 23-Linden-Avenue (even though there can be many words, we make it one word by introducing hyphens); price – a number like 149999; area – the square footage of the house, again a number, such as 1800; numberOfBedrooms – again, a small number, like 3. A "HouseList" consists of an ArrayList of 'House's that are for sale. A "Criteria" represents a buyer's criteria and consists of the following data: minimumPrice, maximumPrice, minimumArea, maximumArea, minimumNumberOfBedrooms, maximumNumberOfBedrooms.

    You are asked to write an interactive program with the following specifications.

    Specification 1: You will be given a text file (called "houses.txt") that contains all the data about the houses for sale. Your program should read this text file and populate the ArrayList of House objects that are on sale.

    Specification 2: Your program needs to prompt the user to enter the criteria they want to use to search for houses that they are interested in. In particular, the program should prompt the user to enter the following data: minimum price, maximum price, minimum area (i.e. square footage), maximum area (i.e. square footage), minimum number of bedrooms, maximum number of bedrooms. This data entered by the user should be held in a "Criteria" object.

    Specification 3: Your program then needs to work with the entered "Criteria" object and the ArrayList of "House" objects created (i.e. the list of houses on sale) and find the set of houses on sale that match the entered criteria. This list should then be printed out on the screen, with a separator between every house printed out.

    A Class-Responsibility-Collaboration (CRC) card that describes the House class you need to code is presented below. The important points you need to focus on to write your code (i.e. the attributes – instance variables – of the class and their types, and the methods the class must have) are highlighted in RED.

    Class Name: House
    Description: /* General Description of the class */ It represents the details of a house for sale.
    Subclass: /* List any subclasses here */ None
    Superclass: /* List any class that this is a subclass of */ None
    Attribute: Type: /* List attributes and their types */ String address; int price; int area; int numBedrooms;
    Method: /* List Methods */ Constructor; get ("accessor") methods for all the instance variables; public boolean satisfies(Criteria c) – does this house meet the criteria specified by c; public String toString() – to create a nice printable string describing the House data
    Collaborating Classes: Criteria

    Class Name: Criteria
    Description: /* General Description of the class */ This contains the criteria specified by the user to select houses.
    Subclass: /* List any subclasses here */ None
    Superclass: /* List any class that this is a subclass of */ None
    Attribute: /* List attributes and their types */ int minimumPrice; int maximumPrice; int minimumArea; int maximumArea; int minimumNumberOfBedrooms; int maximumNumberOfBedrooms;
    Method: /* List Methods */ Constructor; get ("accessor") methods for all instance variables

    In addition to the above classes, you should code a "HouseList" class that encapsulates an ArrayList of "House" objects.

    Class Name: HouseList
    Description: /* General Description of the class */ Contains an ArrayList of House objects. Reads the data from a file called houses.txt and adds them to the array list. Allows for searching of houses that satisfy criteria.
    Subclass: /* List any subclasses here */ None
    Superclass: /* List any class that this is a subclass of */ None
    Attribute: Type: /* List attributes and their types */ ArrayList<House> houseList;
    Method: /* List Methods */ public HouseList(String fileName) – reads data from the file called 'fileName', creates House objects and adds them to the instance variable 'houseList'; public void printHouses(Criteria c) – prints all the houses that satisfy the criterion 'c'; public String getHouses(Criteria c) – returns a concatenated string of the details of all houses that satisfy the criterion 'c'

    Finally, you need to write the tester class with the main program, which is described below (note that it must have a (private) instance variable of type 'HouseList', which is the class you just coded using the above CRC card):

    Class Name: HouseListTester
    Description: /* General Description of the class */ Interacts with the user. Accepts user criteria and arranges for the search.
    Subclass: /* List any subclasses here */ None
    Superclass: /* List any class that this is a subclass of */ None
    Attribute: Type: /* List attributes and their types */ HouseList availableHouses;
    Method: /* List Methods */ This class should just have the main method, which operates as follows: Create the HouseList object named availableHouses using "houses.txt". Read in seven different Criteria objects with varying upper and lower limits for price, area and number of bedrooms. For each criteria, use printHouses to print a list of houses that satisfy the criterion.

    Use the following data for your HouseListTester (i.e. these are the criteria to use in your testing – the data the user enters):

    minPrice  maxPrice  minArea  maxArea  minBed  maxBed
    1000      500000    100      5000     0       10
    1000      100000    500      1200     0       3
    100000    200000    1000     2000     2       3
    200000    300000    1500     4000     3       6
    100000    500000    2500     5000     3       6
    150000    300000    1500     4000     3       6
    100000    200000    2500     5000     4       6

    Data for the houses.txt file (create the file in Notepad and copy and paste the data). Use the Scanner class to read data from the file.
    123-canal-street 129000 1800 3
    124-main-street 89000 1600 3
    125-college-street 199000 2000 4
    126-lincoln-street 56000 1200 2
    127-state-street 82000 1500 3
    223-canal-street 385000 4500 5
    224-main-street 40000 800 2
    225-college-street 37999 800 2
    226-lincoln-street 125000 1200 2
    227-state-street 130000 1250 3
    323-canal-street 60000 900 2
    324-main-street 80000 1000 2
    325-college-street 45000 800 1
    326-lincoln-street 63000 900 1
    327-state-street 145000 1400 3
    423-canal-street 199999 2000 4
    424-main-street 250000 3500 5
    425-college-street 350000 4600 6
    426-lincoln-street 133000 1300 2
    427-state-street 68000 850 1
    523-canal-street 299999 3000 4
    524-main-street 260000 2500 6
    525-college-street 359000 4900 4
    526-lincoln-street 233000 1900 2
    527-state-street 58000 750 1

    We provide online Java project help at affordable prices; if you need any help related to Java or need a solution to the above task, you can contact us at contact@codersarts.com

  • Database Assignment Using MS Access

    Access software: We need to use Microsoft Access 2007 or a higher version to complete this project. Access is installed on most computers in the CI computer labs, and you can also borrow one from the CI Library. It can also be accessed via Virtual Lab (please refer to the separate page "Using Microsoft Access via Virtual Lab" posted in this Module regarding how to use Virtual Lab to run Microsoft Access, for both Mac and Windows users).

    How to get started: Check out the video posted on Canvas; it will help tremendously. Download the Northwind database (listed as an item under the Database and SQL Module) and rename it with the following naming convention: Northwind_LastName_FirstName.mdb, replacing LastName and FirstName with your own last name and first name, such as Northwind_Chen_Minder.mdb (don't change the file extension .mdb; if you save the database in the new Access format .accdb after working on it, that is acceptable). Open the database and, under SECURITY WARNING (see figure below), click Enable Content to get started. Start working on the predefined dummy queries by modifying them with real queries. Save these queries after you are done under the same query name. Q14 is not a query; you need to perform 4 tasks: (1) create a department table, (2) a simple data entry form, (3) a relationship, and (4) a master-detail form. Submit the completed database by first closing the database (no need to zip it) via the Canvas system's Database Assignment under the Assignments area. You should see Q01... to Q13, and the Your Final Score query, under Queries. Don't forget there is Q14, which is not a query but still needs to be done.

    All the queries have been created for you with a dummy SELECT statement. These queries have the following naming convention: QNN_MeaningfulNameOfYourQuery, where NN is the question number, 01, 02, 03, 04, etc., such as Q01_TotalNumberOf_USA_Customers or Q02_MostExpensiveProducts. Double click on an existing query that you are working on. You will see the default "dummy" result of the query showing up. First make sure that you select the Home tab (the first tab). Click on the View icon at the top-right corner to select either SQL View or Design View to modify the query such that it will generate the correct query result. Remember to save the query under the same name (use Save, instead of Save As) when you are done. When you close the query window, if you have not saved your changes, a dialog box shows up; click Yes to save your answer. Submit the completed Northwind database via Canvas under the Assignments area under Assignment 2: Database Assignment. You will receive 0.8 points for each question from Q1~Q13 if you have the correct answer. Question 14 counts for 2.6 points. Q11 is an UPDATE statement, not a typical query.

    Grading: The graded database will be returned to your assignment, and you can then run the "Your Final Score" query in the database to find out your final score. The grading result and the instructor's comments can be found in the Grading table.

    Northwind Scenario: The database contains the sales data for a fictitious company called Northwind Traders, which imports and exports specialty foods from all around the world. Suppliers: suppliers' names, addresses, phone numbers and hyperlinks to home pages. Products: product names, suppliers, prices and units in stock. Categories: categories of Northwind products. Orders: customer name, order date and freight charge for each order.
    Order Details: details on products, quantities and prices for each order in the Orders table. Employees: employees' names, titles and personal information. Customers: customers' names, addresses and phone numbers. Shippers: shippers' names and phone numbers.

    === Beginning of the Assignment Questions === Q1 to Q13: 0.8 points per question, 0.8*13 = 10.4 points === Read the requirements and the hint for each query/question carefully.

    Q1: List all the customers from the United Kingdom with the Company Name and Country columns, sorted by Company Name in ascending order. Hint: You need to find out the country code for the United Kingdom by studying the Country column of the Customers table.

    Q2: How many customers are located in France? Your query should return just a number representing the number of customers who are located in France. Define a meaningful alias for the resulting column so that the heading says "The total number of customers in France".

    Q3: How many unique titles do we have in the Employees table? Hint: You need to create TWO queries. 1. The first query, Q03_UniqueTitles, returns all the unique titles (use the Distinct keyword and use the SQL View to type in the SQL statement directly; Design View will not work. Please refer to SQL_MIS310 slides #105~107 for guidance. You cannot use Distinct and Count together in Access, and this is why you have to do it in two related queries.) 2. The second query is named Q03_UniqueTitlles_Count and has been created for you already. It is a query against the first query (i.e. Q03_UniqueTitles, not a table) and counts the number of unique titles (use the aggregate function Count); it should return a result that is a simple number. Hint: See SQL slides #105~107.

    Q4: List all current products (the products that are not discontinued – Discontinued is False, to exclude discontinued products) from the most expensive products to the least expensive ones. Requirements: Your query should return all products that are not discontinued, sorted by price from highest to lowest. The query result should contain ProductID, ProductName, UnitPrice, and Discontinued. To select only active products, you need to set the criteria under the Discontinued column to False (no quotation marks) – Discontinued is a Boolean data type. This will allow you to exclude all discontinued products.

    Q5: List all the products by ProductID, ProductName, and UnitPrice (it has to be in this sequence). Sort the query result first by UnitPrice in DESCENDING order, and then by ProductName in ASCENDING order. Hint: See SQL slides #83 & 84.

    Q6: List all products with a price in the range 30 to 50 (>= 30 and <= 50), sorted by price in ascending order. List the ProductID, ProductName, UnitPrice, and CategoryName of the product. Hint: You need to JOIN the Products table with the Categories table using the CategoryID in both tables in order to display CategoryName (not CategoryID) in the query result.

    Q7: List only those products with their inventory below the reorder level. Please list the ProductID, ProductName, UnitsInStock, ReorderLevel, and the shortage amount (sorted by shortage amount in descending order).
    ShortageAmount is a calculated column, defined as the reorder level minus the units in stock: ShortageAmount: [ReorderLevel]-[UnitsInStock]. Hint: Only list products that have ShortageAmount > 0.

    Q8: Please find all the products with a product name starting with "Ch". List the query result with ProductName, UnitPrice, and UnitsInStock, sorted by ProductName. Hint: See SQL slide #76 – use Ch*.

    Q9: Total sales for each customer in October 1996 (based on OrderDate). Show the result with CustomerID, CompanyName, and [Total Sales], sorted by [Total Sales] in descending order. Hint: [Total Sales] is a calculated field. Limit the query results to order dates during October 1996. You need to JOIN the Customers, Orders, and [Order Details] tables. Use the following criteria to limit OrderDate: between #10/01/1996# and #10/31/1996# (choose WHERE at the TOTAL row under OrderDate; please note that "between" is a keyword and is part of the criteria) to get the October 1996 sales data. [Total Sales] is a calculated field defined as follows (copy and paste it to prevent typing errors): Total Sales: CCur(Sum(([UnitPrice]*[Quantity])*(1-[discount]))) – choose EXPRESSION at the TOTAL row for this calculated column. If you use the alternative formula Total Sales: CCur(([UnitPrice]*[Quantity])*(1-[discount])), then choose SUM at the TOTAL row.

    Q10: List all the products' total sales amounts for 1997. List the query result by ProductID, ProductName, and Annual Sales amount for 1997, sorted by Annual Sales in descending order (the annual sales is a calculated column). Hint: 1. You need to join several tables to answer this question (Products, Orders, and [Order Details]). 2. Use the criteria for OrderDate between #01/01/1997# and #12/31/1997#. [Annual Sales] is a calculated column. You need to use the aggregate function Sum( ) to calculate the annual sales for each product, i.e. the sum of the subtotals (extended prices) of the order items for a product ordered in an order. The calculated column is defined as: CCur(Sum([Order Details].[UnitPrice]*[Quantity]*(1-[Discount]))). *** You have to use [Order Details].[UnitPrice] instead of [UnitPrice] because there are UnitPrice columns in both the Products table and the [Order Details] table. Please note that there is a space character in [Order Details]. The result will be displayed in currency format to 2 decimal places. You need to use Group By at the TOTAL row for ProductID and ProductName.

    Q11: Write a SQL UPDATE statement to change the Company Name of shipper #2 (ShipperID = 2) from 'United Package' to 'DHL'. You need to save the query as Q11_UpdateShipperName. The best way to do this is to use the SQL View of the query and write the SQL UPDATE statement directly. When you run it, you are going to be prompted with a message box telling you a record will be changed. Say YES; no query result will be returned because this is not a read-only query. Hint: See SQL slide #92.

    Q12: Show the two most expensive products that are less than 90 dollars, with ProductID, ProductName, and UnitPrice, and remember to sort the result by UnitPrice in descending order. Hint: Use Select Top 2 ... to get the top 2 records (do this under SQL View). See SQL slide #82. You need to type Top 2 in the Select statement.

    Q13: What is the average product price for products in each product category? Display CategoryID, CategoryName, and the average price in each category. Sort by the average price in descending order.
    Hint: Join the Products and Categories tables, GROUP BY CategoryID and CategoryName, and use the aggregate function AVG for the average.

    === The last question, Q14, is not a query, but has to be done ===

    Q14: (2.6 points) There are 4 tasks that are not queries. You need to create a table, enter some data for the table, create a relationship, and create two forms. Hint: Watch Create Tables and Forms: http://www.youtube.com/watch?v=AH3ilFm_C88
    a. Add a new table called Department based on the following information (0.7 points): Table name: Department. Columns: DepartmentID: Text data type, 5 characters in length; Name: Text data type, 50 characters in length.

    If you are looking for database assignment help using MS Access, you can contact us at contact@codersarts.com

  • Conditional Random Fields

    What is the difference between 'India' and 'India Times'? Obviously, we all know that 'India' is the name of a country and 'India Times' is the name of a newspaper. Another question: how did you know that? At some point in your life you were introduced to these words, i.e. you learnt them, and now when the question was asked you remembered what you learnt, i.e. you knew the context. How do these questions relate to CRFs (Conditional Random Fields)? First we will discuss generative and discriminative models, and then we will talk about CRFs.

    GENERATIVE Vs. DISCRIMINATIVE: Generative algorithms model the actual distribution of the classes. Discriminative algorithms model the decision boundary between classes. To understand this better, let us consider a very common example: suppose you were asked to classify a speech sample by language. There are two possible ways to do it: 1. You learn each language there is and then classify the speech using what you learnt. 2. You just learn the differences between the languages, without learning each of them from scratch, and then classify the speech accordingly. The first approach is called a generative approach: it learns how the data is generated and then classifies. The second approach is called a discriminative approach, as it just learns the decision boundaries and then classifies. A generative model is time consuming to train, and thus discriminative models are preferred when we have a sufficiently large amount of data; when data is scarce, the generative approach is better suited. We now know what generative and discriminative models are.

    Conditional Random Fields: Since we are introducing a complex topic, we will consider an example to get a better understanding. Imagine that you have a sequence of words and you have to tag the words with the part of speech that they represent. You can do this in two ways. First, by ignoring the sequential aspect and tagging the words according to the tags in the training set of the corpus. In this method the tag assigned to a word will be the same as the tag that corresponds to the word in the training set. But we know that a word can represent different parts of speech; e.g. the word 'Will' can be a modal verb or a noun depending on the context. Ignoring the sequential aspect of the words makes us lose the context. Hence, we are not able to effectively assign tags to words. The above problem can be overcome if we take the sequential aspect into account along with the classification. In this method the tag assigned to a word depends on the tag that corresponds to the word in the training set and on the previous tag in the sequence. This way we gain context, and we can correctly classify the word 'Will' as a modal verb or a noun depending on the place of the word in the sequence. The second approach is achieved by the use of Conditional Random Fields. Let us define CRFs: Conditional Random Fields are a discriminative model used for predicting sequences. They use contextual information from previous labels, thereby increasing the amount of information the model has to make a good prediction.

    Mathematical overview: Since Conditional Random Fields are a discriminative model, we calculate the conditional probability p(y | X), i.e. the probability of the output vector y given an input sequence X. To predict the proper sequence we need to maximize this probability, and we take the sequence with maximum probability. We will use a feature function f.
    The output sequence is modelled as the normalized product of the feature functions:

    p(y | X) = (1 / Z(X)) * exp( Σ_i Σ_k λ_k * f_k(y_(i-1), y_i, X, i) )

    where Z(X) is the normalization term and λ (lambda) are the feature function weights, which are learned by the algorithm. For the estimation of the parameters λ we use maximum likelihood estimation; thus the model is log-linear in the feature functions. Next, we take the partial derivative of the negative log-likelihood with respect to λ in order to find its arg min. For the parameter optimization we use an iterative approach, i.e. a gradient descent based method. The gradient update step for the CRF model is:

    λ_k ← λ_k - α * ∂L/∂λ_k

    where L is the negative log-likelihood and α is the learning rate. Thus, we use Conditional Random Fields by first defining the feature functions needed, then initializing the weights to random values, and then applying gradient descent iteratively until the parameter values (lambda) converge.

    Applications of CRFs: CRFs are widely used in NLP for various purposes, two of which are POS tagging and Named Entity Recognition. Other applications that utilise CRFs are gene prediction, activity recognition, gesture recognition, etc. Note: If you need an implementation for any of the topics mentioned above, or assignment help on any of their variants, feel free to contact us at contact@codersarts.com.
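    As a hands-on illustration, here is a minimal POS-tagging sketch. It assumes the third-party sklearn-crfsuite package (not part of the discussion above); the tiny feature function and the two training sentences are made up purely to show the mechanics.

```python
# pip install sklearn-crfsuite
import sklearn_crfsuite

def word_features(sentence, i):
    """A deliberately tiny feature function: the current word and the previous word."""
    word = sentence[i][0]
    return {
        "word.lower": word.lower(),
        "word.istitle": str(word.istitle()),
        "prev.word.lower": sentence[i - 1][0].lower() if i > 0 else "<BOS>",
    }

# Toy training data: each sentence is a list of (word, POS tag) pairs
train_sents = [
    [("Will", "NOUN"), ("will", "VERB"), ("sign", "VERB"), ("the", "DET"), ("form", "NOUN")],
    [("India", "NOUN"), ("Times", "NOUN"), ("is", "VERB"), ("a", "DET"), ("newspaper", "NOUN")],
]
X_train = [[word_features(s, i) for i in range(len(s))] for s in train_sents]
y_train = [[tag for _, tag in s] for s in train_sents]

# c1 and c2 are L1/L2 penalties on the lambda weights learned by the model
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

test = [("Will", ""), ("will", ""), ("sign", "")]
X_test = [[word_features(test, i) for i in range(len(test))]]
print(crf.predict(X_test))    # predicted tag sequence for the test sentence
```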

  • TensorFlow, PyTorch Assignment Help

1 Overview
In this project, you will design and implement your own deep learning model to perform 10-class image classification on the given dataset. You will be able to access the training data to train and tune your model, and a public testing dataset for evaluation before your final submission. Your model will finally be evaluated on a private testing dataset. The grading of the project will be based on the novelty of the proposed model and the testing performance on the private dataset.

2 Project Instructions
What You Are Expected to Do
You are expected to explore the state of the art of deep-learning-based image classification, and to propose and implement your own deep learning models. You can explore everything, including novel neural network architectures, operations, blocks and modules; data pre-processing, normalizations and augmentations; training strategies, optimizers, parameter initializations, regularizations, etc.

What You Are NOT Expected to Do
In this project you are provided with the training dataset. However, for the sake of fairness, you are NOT allowed to use extra image data for transfer learning or pre-training. Do not simply and directly apply existing commonly used network architectures such as ResNet, GoogLeNet, VGG, etc.

Deep Learning Framework and Libraries
You are free to choose one framework between TensorFlow and PyTorch. Recommended stable versions are TensorFlow 1.14, TensorFlow 2.2 and PyTorch 1.6. Please specify the version you use in your report. If you use any other version, please make sure your code works on one of the three versions. Besides, you are free to use any other common open-source libraries in your project.

Training and Testing Dataset
You are provided with the CIFAR-10 dataset for this project. The dataset is available at https://www.cs.toronto.edu/~kriz/cifar.html. Please carefully read the dataset description and implement your code to load and process the images. Note that you should only use the training dataset containing 50k images for training and validation. The public testing dataset should NOT be used for tuning your model. The images of the private testing dataset will be released one week prior to the due date. The images are stored in an .npy file with shape [N, 32, 32, 3], where N is the number of testing images. The labels of the private testing dataset will not be available. You will need to run your model prediction on the private dataset and submit the prediction results. Please make sure you have saved your model variables to files after training, for your convenience.

Code Files Structure
For better code readability, please follow the starter code and organize your code into the following files during implementation.
main.py: Includes the code that loads the dataset and performs the training, testing and prediction.
DataLoader.py: Includes the code that defines functions related to data I/O.
ImageUtils.py: Includes the code that defines functions for any (pre-)processing of the images.
Configure.py: Includes dictionaries that set the model configurations, hyper-parameters, training settings, etc. The dictionaries are imported into main.py.
Model.py: Includes the code that defines your model in a class. The class is initialized with the configuration dictionaries and should have at least the methods “train(X, Y, configs, [X_valid, Y_valid,])”, “evaluate(X, Y)” and “predict_prob(X)”. The defined model class is imported into and referenced in main.py (a minimal interface sketch is included at the end of this post).
Network.py: Includes the code that defines the network architecture. The defined network will be imported and referenced in Model.py.
Detailed descriptions are provided in the starter code. You can add additional files that define your specific modules, blocks, utility functions, etc.

Submission Guidance
Your final submission should include the following files.
Prediction on the private test images: Please store your prediction results as an array in an .npy file named “predictions.npy”. For each image, store a vector of the probabilities for the 10 classes instead of the predicted class. The shape of the saved array is [N, 10], where N is the number of testing images.
Code: Please put all your code files in a “code” folder. Also include a README file that describes how to run your code for training and prediction.
Saved models: Please keep your trained model that can reproduce the results on the public testing set and the predictions on the private testing set. Put the related model files in a “saved_models” folder. If the files exceed the upload size limit, you can put them on Google Drive and include a share link in the “saved_models” folder.
Report: Describe your proposed method and implementation details, and summarize your results on the public testing dataset in your report. You can also report anything that you find interesting. The report should be in .pdf format and named “report.pdf”.
Please compress all the files above into a single .zip file named “Firstname_Lastname.zip”.
Get a complete solution to this task, or help with other TensorFlow or PyTorch related assignments, at an affordable price at: contact@codersarts.com
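The brief only fixes the required method names for the Model.py class; everything else below (the class name MyModel, the configuration keys, and the uniform-probability placeholder standing in for a real network) is assumed purely for illustration. A minimal sketch of the interface and of how “predictions.npy” could be written might look like this:

# Illustration only: the method names come from the brief; the class body,
# configuration keys and the uniform-probability "model" are placeholders.
import numpy as np

model_configs = {"num_classes": 10}                 # assumed Configure.py-style dictionary
training_configs = {"epochs": 1, "batch_size": 128}

class MyModel:
    def __init__(self, configs):
        self.configs = configs

    def train(self, X, Y, configs, X_valid=None, Y_valid=None):
        # Placeholder: a real implementation would fit the network weights here.
        pass

    def evaluate(self, X, Y):
        # Accuracy of the placeholder predictor.
        preds = np.argmax(self.predict_prob(X), axis=1)
        return float(np.mean(preds == Y))

    def predict_prob(self, X):
        # Placeholder: uniform class probabilities, shape [N, 10] as required.
        n = X.shape[0]
        k = self.configs["num_classes"]
        return np.full((n, k), 1.0 / k)

if __name__ == "__main__":
    # Tiny fake batch standing in for the private test images of shape [N, 32, 32, 3].
    X_private = np.zeros((5, 32, 32, 3), dtype=np.uint8)
    model = MyModel(model_configs)
    probs = model.predict_prob(X_private)
    np.save("predictions.npy", probs)               # saved array has shape [N, 10]

The predict_prob output and the np.save call mirror the required predictions.npy format; the training logic itself is intentionally left out.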

  • STOCK MARKET APP

Introduction
Stock markets are where individual and institutional investors come together to buy and sell shares in a public venue. Nowadays these exchanges exist as electronic marketplaces. Share prices are set by supply and demand in the market as buyers and sellers place orders. Stock ownership implies that the shareholder owns a slice of the company equal to the number of shares held as a proportion of the company's total outstanding shares. For instance, an individual or entity that owns 100,000 shares of a company with one million outstanding shares would have a 10% ownership stake in it. Most companies have outstanding shares that run into the millions or billions.

User Goal
To minimize the risk of loss in stock investment and to get stock trading tips from the country's best traders and from friends.

Business Objective
To attract as many investors as possible to the platform by solving their stock trading problems, and to gain revenue from subscription plans sold to investors.

My Role
As part of the design team I was responsible for the design and user experience of the app. I partnered with the Project Lead and we led the UI/UX efforts with ownership of all major design deliverables. Developed for iOS devices, the app provides its users with a single platform to invest in the stock market. With the integration of social features, the app is set to become a one-of-its-kind “Social Trading App”.

The Approach
Partner with the Project Lead to plan and define the mobile experience. Design user flows, wireframes, detailed designs and prototypes. Conduct meetings throughout the design process with stakeholders and the team. Iterate designs based on feedback gathered during meetings. Work closely with the developers to deliver solutions and resources for final production.

Competitor Research and Affinity Mapping
After I gathered all the information I needed to move into the next phase of the process, I started doing competitor analysis. I looked at some of the mobile apps available at the time (Best Broker, TradeHero, E Trade, Robinhood, etc.) to find out the pros and cons of each product.

User Persona
“We used personas constantly throughout the project to guide design decisions, set priorities and create empathy amongst the client and our team.” Our persona hypothesis consisted of two different archetypes, which we used to facilitate decisions about our users' needs, desires, lifestyles and aspirations in their contexts of use. Through careful analysis of our research, we identified sufficient behavioural variables to segment our user audience.

Pain Points
It is difficult for investors to build trust in the market. There is a high risk of loss. It is difficult for newcomers to invest without knowledge. Earlier, trading was done via telephone calls.

Information Architecture
I tried to figure out how the app should be structured so that it would be easy to use for users who are new to stock trading or who currently buy and sell stocks over the telephone. The app was integrated with a third party, so we had to make sign-up mandatory to minimize the risk of fake users on the platform. After a successful sign-up a user can search stocks or investors, but to purchase a stock they have to create a trading account and link their bank account to it.

Wireframes
Once I had a better understanding of user goals and behaviour, I listed some key features of the app in order to create high-fidelity wireframes. Wireframes gave me an idea of how things would look and made the visual design work easier.
I design both low-fidelity and high-fidelity wireframes based on the project needs; I prefer high-fidelity wireframes for content-rich apps.

Detailed Designs and Prototyping
Once the wireframes were approved, I started working on the detailed designs for iOS as well as prototyping. It was a challenge for me to design a stock trading app because I had never invested in stocks, so to get an idea of how these users think and react, I talked to friends and colleagues who either use stock trading apps or invest in stocks by other means. I also watched YouTube videos for further information.

Home Screen
My client wanted a light colour theme. I designed it in both light and dark themes and explained why stock trading displays are usually shown on a black background: investors mainly care about profit and loss, which are denoted by green and red in stock trading.

Buy Stocks
To buy a stock a user has to fill in the number of stocks, then select the type of order and, after that, its validity. To keep it easy to use and to minimize confusion and cognitive load, we divided the task into two steps. In the first step we just ask for the number of stocks the user wants to buy; as they type, we display the total amount at the top of the screen. Then, when the user taps the Next button, we display the other two options. We used a slider instead of a text field for limit or stop orders to make them easier to set. We made “Good till cancelled” the default selection so that the user doesn't get confused about what to do here.

Leaderboard
It's the most important part of the app, where you can view the positions of other investors. We used a pie chart to display the stocks held by the user. When a user taps on a specific part of the pie chart, it displays information about those stocks.

Activity Feeds
A user can follow and subscribe to other investors and view their activities. We wanted to add a feature to praise someone's activity, just like the “like” feature on Facebook.

Conclusion
Highly customized app that lets you trade with friends: with this app you can trade with your friends as in a real stock market and get to know real market strategies.
Add up to 20x margin to your trades: the app allows its users to earn points while trading, and the more stock they hold, the better their position on the leaderboard.
Follow successful traders on the app: this app allows you to follow successful traders and learn trading tactics to help build your score and compete in the market like a pro.
Hire Figma experts for any kind of project: urgent bug fixes, minor enhancements, full-time and part-time projects. If you need any type of project help, our experts will help you start designing immediately.
THANK YOU!

  • RSA implementation from scratch using a GUI

ICT582 Major Assignment 2020
This assignment aims to provide a Python solution to a specific implementation problem. To solve this problem, you need the knowledge gained from various topics of this unit. You may also need to undertake independent research to find the Python packages, modules and functions required for your solution.

Description of the Problem: RSA implementation from scratch using a GUI
You can start learning GUI programming from: https://www.youtube.com/watch?v=J-chyaIVuzE . You are also encouraged to learn more detail (as much as you need for your assignment) about graphics libraries, e.g. tkinter.
RSA is one of the most popular public-key encryption algorithms and is used all over the world in most applications. The RSA algorithm involves various mathematical operations that make the encryption happen. There are three major operations involved in RSA, namely key generation, encryption and decryption. Some of the following tutorials explain RSA in more detail, and the unit lecturer will also explain it in one of the lectures. The mathematical summary of the RSA operations (key generation, encryption and decryption) is as follows:

Key generation: We need to generate the public key e and the private key d.
Assume two big prime numbers p, q.
n = p × q. n becomes so big that it is virtually impractical to factorize it.
ʎ(n) = (p − 1)(q − 1)
Choose e such that 1 < e < ʎ(n) and gcd(e, ʎ(n)) = 1 [i.e. they are coprime].
Compute d = e^(-1) (mod ʎ(n)). [d is the modular inverse of e]
Hence the public key is (n, e) and the private key is (n, d).

Encryption: Let's say anyone wants to encrypt a message m for the recipient (then the above-mentioned keys belong to the recipient). Then c = m^e (mod n); c is known as the ciphertext.

Decryption: When the ciphertext c is received by the recipient, he can decrypt it and recover the message m' as follows: m' = c^d (mod n). If all goes well, m' should be equal to m.

An example:
Key generation
➢ p = 11, q = 3, n = 33
➢ ʎ(n) = 10 × 2 = 20 (e.g. the totient function λ(p) = φ(p) = p − 1 = 10)
➢ Choose e = 3. Check gcd(e, ʎ(n)) = 1.
➢ Try d so that d · e (mod 20) = 1, e.g. d = 7.
➢ Public key: (n, e) = (33, 3). Private key: (n, d) = (33, 7).
Encryption:
➢ m = 7, c = 7^3 (mod 33) = 343 mod 33 = 13
Decryption:
➢ m' = 13^7 (mod 33) = 7 [this even works if d = 27]

Your task: Write a GUI-based Python program that allows the user to (i) generate RSA keys, (ii) encrypt a given message and (iii) decrypt the message, without using any cryptographic library. Your program will generate the values of p and q unless the user provides them manually. The program will then generate the keys (private and public). Then the program will allow the user to enter his/her message to be encrypted. Finally, the program will perform the decryption operation as well. The following link may give you some more ideas about what is expected of your program: https://www.mobilefish.com/services/rsa_key_generation/rsa_key_generation.php

Subtasks and mark distribution: friendly and logical GUI; key generation; encryption; decryption; performance enhancement.
Performance enhancement: Due to the limited size of integer-type variables, operations such as x^y would not be possible when the values of x and y are very high. Hence, you would not be able to work with big prime numbers, which by definition gives a weak RSA. Under this task you have to optimize the algorithm and implementation to support bigger numbers. You have to include an extra section in the report to explain how you achieved this enhancement. (A minimal non-GUI sketch of the three operations is given below.)
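The following is a minimal, non-GUI sketch of the three operations described above. It assumes Python 3.8+ (so that pow(e, -1, m) computes a modular inverse); the function names, the default e = 3 and the small primes are illustrative only, not part of the assignment specification:

# Minimal RSA sketch without cryptographic libraries (illustration only).
# Assumes Python 3.8+ so that pow(e, -1, lam) computes the modular inverse.
from math import gcd

def generate_keys(p, q, e=3):
    """Return ((n, e), (n, d)) for the given primes p and q."""
    n = p * q
    lam = (p - 1) * (q - 1)          # the ʎ(n) used in the brief
    while gcd(e, lam) != 1:          # pick an e coprime with ʎ(n)
        e += 2
    d = pow(e, -1, lam)              # modular inverse: d * e ≡ 1 (mod ʎ(n))
    return (n, e), (n, d)

def encrypt(m, public_key):
    n, e = public_key
    return pow(m, e, n)              # c = m^e mod n (fast modular exponentiation)

def decrypt(c, private_key):
    n, d = private_key
    return pow(c, d, n)              # m' = c^d mod n

if __name__ == "__main__":
    # Reproduces the worked example from the brief: p = 11, q = 3, e = 3.
    public, private = generate_keys(11, 3)
    print(public, private)           # ((33, 3), (33, 7))
    c = encrypt(7, public)
    print(c)                         # 13
    print(decrypt(c, private))       # 7

Because Python integers are arbitrary precision and the built-in pow performs fast modular exponentiation, the same pattern also hints at how the performance-enhancement subtask can be approached for much larger primes.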
Report and presentation.
Solution screenshots: Contact us to get the complete GUI solution at an affordable price at contact@codersarts.com

  • Debugging And Errors In C++

1. Overview
In this lab you will:
Understand how program memory works
Evaluate debugging of both logical and runtime errors
Examine how gdb works
Practice debugging code with runtime errors and logic errors

2. Program Memory
As we have discussed briefly in class, the memory a program uses is typically divided into a few different areas, called segments:
The code segment (also called the text segment), where the compiled program sits in memory. The code segment is typically read-only.
The bss segment (also called the uninitialized data segment), where zero-initialized global and static variables are stored.
The data segment (also called the initialized data segment), where initialized global and static variables are stored.
The heap, where dynamically allocated variables are allocated from.
The call stack, where function parameters, local variables, and other function-related information are stored.
This is important because when we have a bug (or error) in our code, we need to be able to figure out where it came from. The two primary places we will look when debugging are the call stack for most errors and the heap for errors related to dynamically allocated variables. The heap is a bit more difficult to debug because the program will often crash and you will only get an error message that says "Segmentation fault." These types of bugs can be very tricky to figure out and fix without some help, which is why we will be practicing with the GNU debugger, or gdb for short.

3. Introduction to Debuggers
When you run your program within a debugger, you can stop the program at critical points and examine the values of variables and objects. The debugger provides much more capability and flexibility than debugging a program using print statements. The debugger allows you to examine the value of a variable after your program has crashed; using print statements (such as cout), you can only see values before the program crashes. If you have a segmentation fault, you may not even know where the error occurred.
What? Where do I start to debug my code? Without a debugger, we may not have any idea where to start. Thankfully, a debugger will tell us where the code failed. This is the output of gdb running my code: Ahh, the error is on line 11. Now I know where to start to figure out what is wrong with my code.
In this lab we will use gdb, a debugger with a command-line interface. Although the user interface is a bit clunky, you will find that gdb has many useful features. It understands C and C++ types and syntax, and it works well with source code that is distributed across multiple files.

4. Errors
There are three main types of “errors” in C++ code.
1. Syntax Errors: Syntax errors are violations of the grammatical rules of the programming language. They are identified during compilation and may include not declaring an identifier used in a program or not closing an open parenthesis or brace. These are usually the easiest types of errors to identify and fix.
2. Logical Errors: A logic error is a bug that causes a program to operate incorrectly, but not to terminate abnormally (or crash). A logic error produces unintended or undesired output or other behaviour, although it may not immediately be recognized as such.
3. Runtime Errors: A runtime error occurs whenever the program instructs the computer to do something that it is either incapable of or unwilling to do.
These can be caused by things such as trying to write to a location in memory that doesn't exist, or by a divide-by-zero error.

5. gdb Commands
Before you can use gdb, you need to compile with the -g flag (you can keep -Wall as well). gdb offers a big list of commands; however, the following are the ones used most frequently:
b - Puts a breakpoint at the current line
b main - Puts a breakpoint at the beginning of the program
b N - Puts a breakpoint at line N
b +N - Puts a breakpoint N lines down from the current line
b fn - Puts a breakpoint at the beginning of function "fn"
d N - Deletes breakpoint number N
info break - Lists breakpoints
r - Runs the program until a breakpoint or error
c - Continues running the program until the next breakpoint or error
finish - Runs until the current function is finished
s - Steps to the next line of the program, entering function calls
s N - Steps through the next N lines of the program
n - Steps to the next line of the program, stepping over function calls
u N - Runs until line N is reached (or the current function returns)
p var - Prints the current value of the variable "var"
bt - Prints a back trace (the stack trace)
up - Goes up a level in the stack
down - Goes down a level in the stack
q - Quits gdb

6. Example 1 – Logic Error
For this example, let's look at a logical error. In this case, the code is supposed to pass a variable to a function called addLoop(). addLoop() is supposed to use that number to loop from 0 to n-1, adding each loop value to the total. For example, we set the total to 0 and pass the function a literal of 10, so we expect the output to be 0+1+2+3+4+5+6+7+8+9 = 45. Here is the code:

#include <iostream>
using namespace std;

// Takes in a number and loops through, adding the loop counter to the total.
// If num = 10 it should print 0+1+2+3+4+5+6+7+8+9 = 45.
int addLoop(int num);

int main() {
    int total = 0;                              // Set total to 0
    total = addLoop(10);                        // Call addLoop with a literal of 10
    cout << "New total is: " << total << endl;  // Print output
    return 0;
}

int addLoop(int num) {
    for (int i = 0; i < num; i++) {             // Intended to loop 10 times
        num += i;                               // Add i to num
    }
    return num;
}

Here is the sample output:
New total is: -2147450870

If you are looking for any C or C++ assignment-related help at an affordable price, you can contact us at contact@codersarts.com
