
  • SMART HOME APP

    Introduction:- Today, more and more smart devices are entering your home. These are regular household items, like a TV or fridge, but now they can sense information about you and pass it along. As you fill your home with these smart devices, how can they let you and your family know more about your energy usage and help you save more money?

    Project Overview & the Challenge:- Utility bills are pretty light on information. They only tell us 1) how much electricity, water or gas we have used, and 2) how much we have to pay. Unfortunately, that’s about it. If we want to cut our energy use to save money, we have to play a guessing game: either try to ax our consumption as a whole or simply guess which devices are the heavy users. The first solution that comes to mind might be buying smart home devices, which let us control home appliances remotely and reduce power usage. The problem is that smart home devices only help with control and automation, which is a big favor but not enough: they give no data about monthly spending, estimated bills, or tips for reducing energy usage. Home energy monitors, however, connect to your circuit breaker. They let you track your energy consumption in much finer detail, so you can put the ax away and cut your energy consumption with a scalpel instead.

    Goal:- The goal of this project is to design an intelligent energy monitoring system that enables users to track the energy usage of any device, control and set up routines for home appliances, get notifications for devices left turned on, and view a timeline of daily home activity. With this app, users get insight into their electricity, water and gas bills, and can pay them with one tap.

    Objectives: Energy Consumption: Homeowners see a visual report of energy usage to learn how their house operates and gain intuition about power consumption.
    Home Appliances Control: Users can easily control gadgets from anywhere. Real-Time Cost Tracking: Tracking the cost of energy consumption in real time lets users watch as their electricity consumption and costs increase or decrease. Goals: Users can define energy usage and spending targets to generate and manage monthly household budgets and avoid unexpected expenses. Notifications to Reduce Energy Usage: Notifications for energy efficiency help users verify spurious utility billing and save more money on monthly bills. Setting Up Routines: Users can create automation with routines to streamline their lives while making sure they’re conserving energy and saving money.

    Research:- User Reviews: The first step was to analyze the competitors and get familiar with the existing home energy products, their pros and cons, and users’ reviews.

    User Research:- I decided to design a solution for millennials (25–36 years old), who are already accustomed to using technology on a daily basis, so I conducted semi-structured interviews with some students and young professionals to discover their pain points, needs, and requirements.

    Interview Questions:- How do you track your home energy usage? Do you own any smart devices? How was your experience with them? What problems could you see arising with the whole smart home ecosystem? What features or qualities do you like about smart home energy monitor devices? Do you plan monthly household budgets? Can you achieve those targets? How do you pay your bills? Do you pay on time?

    Major Findings:- In my market studies and initial user interviews, I discovered commonalities among users’ insights that helped me find the issues users ran into while working with such smart devices. Smart home devices take a long time to detect all the home appliances, which results in incorrect data and wrong bill estimates. Users prefer to be in charge and speed up the detection process by training the system manually and providing human input.
    Users expect a smart home ecosystem to develop intelligent pattern recognition and operate according to the user’s behavior and preferences. They also expressed a preference for bill reminders and straightforward online payment in such a system.

    Sketches:- Once I gained a clearer picture of users, I started sketching design solutions and preparing low-fi prototypes. By doing so, I was able to work through design ideas and quickly find the right ones.

    User Flow Wireframes:- I took a step forward and prepared wireframes to focus on usability and ensure that the app was built according to the goals. I also had the chance to ask for feedback much sooner in the process.

    Visual Design:- Visual Design Iterations: Before I started designing, I knew how other applications in the market look, so I tried new styles for showing energy consumption, making improvements with each iteration. Colour Palette: I love the bright side of the color palette and use gradients to grab more attention. I think they’re a great tool to enrich a design.

    Onboarding Flow:- The goal of onboarding is to say a lot with illustrations and very little text, so users get an idea of the app’s functionality. At any moment, the user can opt to sign up by pressing Get Started, or log in if they already have an account with devices set up. Note that the Get Started button stands out more, to make it easier for newcomers to sign up.

    Device Setup Flow:- If a user has an account with devices already set up, they can opt to log in. If they have just installed Sweet Home devices at home, they first have to connect the devices to the application. The process includes searching for nearby devices and entering a Device ID Number to connect. If device setup is successful, the user can proceed with registration.

    Dashboard:- On the dashboard, the user can see an overview of energy usage, the most energy-consuming devices in each place at home, and total spending.
    The background color changes as the user swipes between different places at home. Users can see energy usage in detail, with statistics showing which devices have been using lots of energy, when, and how much each of them costs. The user can also see which places at home are heavy energy users. The goal of showing results for different places and energy types is to give accurate data about total spending.

    Conclusion:- I think the learnings from this app design process will affect the way I approach similar projects in the future. Learning the user-centered design process has given me a great deal of the knowledge needed to design apps with the user in mind, by gaining a deeper understanding of human behavior and emotions. I enjoyed trying different techniques to gather more information about smart home users, how they interact with these devices, and what they wished to have in such apps.

  • Apriori algorithm

    To better understand this concept, we will first discuss a few very common examples.

    The famous Wal-Mart beer and diapers parable: a salesperson at Wal-Mart, trying to increase the store’s sales, analysed all the sales records. He discovered a general trend that people who buy diapers often buy beer as well. Although the two products are unrelated, the relation arises because parenting is a demanding task for many. By bundling the products together and offering discounts on them, the salesperson was able to boost sales.

    Likewise, you may have seen in supermarkets that bread and jam are placed together, which makes it easy for a customer to find both and thereby increases the chance of purchasing both items. You may also have noticed that the local vegetable seller always bundles onions and potatoes together, and even offers a discount to people who buy these bundles. He does so because he realizes that people who buy potatoes also buy onions. By bunching them together, he makes it convenient for the customers, increases his sales, and creates room for discounts. Beyond the field of sales, in medical diagnosis for instance, understanding which symptoms tend to co-occur can help to improve patient care and medicine prescription.

    The above are all examples of association rules in data mining, and they give us an idea of what the Apriori algorithm is for. To generalise the examples: if there is a pair of items, A and B (or more), that are frequently bought together, then both A and B can be placed together on a shelf, so that buyers of one item will most probably buy the other; promotional discounts could be applied to just one of the two items; advertisements for A could be targeted at buyers of B; or A and B could be combined into a new product, such as having B in flavours of A. To uncover such associations, we make use of association rules in data mining.
    Now we will define some concepts which will aid us in understanding the Apriori algorithm.

    Association rule mining: Association rule mining is used when we want to discover associations between different objects in a set, or find frequent patterns in a transaction database, relational database, or any other information repository. It tells us which items customers frequently buy together by generating a set of rules called association rules. Put simply, its output is rules of the form "if this, then that".

    A formal definition of the association rule problem was given by Rakesh Agrawal, the President and Founder of the Data Insights Laboratories: Let I = {i1, i2, i3, …, in} be a set of n attributes called items and D = {t1, t2, …, tm} be a set of transactions, called the database. Every transaction ti in D has a unique transaction ID and consists of a subset of the items in I. A rule is an implication X ⟶ Y, where X and Y are subsets of I (X, Y ⊆ I) that have no element in common, i.e., X ∩ Y = ∅. X and Y are called the antecedent and the consequent of the rule, respectively.

    Let us look at an example. Consider the following database, where each row is a transaction and each cell is an individual item of the transaction. The association rules that can be determined from this database are the following: 100% of sets with alpha also contain beta; 50% of sets with alpha and beta also have epsilon; 50% of sets with alpha and beta also have theta.

    There are three common ways to measure association:

    1. Support: refers to an itemset’s frequency of occurrence. It tells how popular an itemset is, measured as the proportion of transactions in which the itemset appears. supp(X) = (number of transactions in which X appears) / (total number of transactions)

    2. Confidence: a conditional probability. It tells us how likely item Y is to be purchased when item X is purchased, expressed as {X ⟶ Y}.
    This is measured as the proportion of transactions containing item X in which item Y also appears. conf(X ⟶ Y) = supp(X ∪ Y) / supp(X). But there is a major drawback: confidence only takes into account the popularity of the itemset X, not the popularity of Y. If Y is very popular on its own, there will be a higher probability that a transaction containing X also contains Y, inflating the confidence. To overcome this drawback there is another measure called lift.

    3. Lift: signifies the likelihood of itemset Y being purchased when itemset X is purchased, while taking into account the popularity of Y. lift(X ⟶ Y) = supp(X ∪ Y) / (supp(X) × supp(Y)). A lift value greater than 1 means that item Y is likely to be bought if item X is bought, while a value less than 1 means that item Y is unlikely to be bought if item X is bought.

    Frequent itemset: A set of items together is called an itemset; an itemset with k items is called a k-itemset. An itemset that occurs frequently is called a frequent itemset: a set of items is called frequent if it satisfies minimum threshold values for support and confidence. Support reflects transactions in which the items are purchased together in a single transaction; confidence reflects how often the consequent items appear in transactions that contain the antecedent items.

    We will now discuss the Apriori algorithm.

    Apriori Algorithm Definition: Apriori is an algorithm for frequent itemset mining and association rule learning over transactional databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger itemsets, as long as those itemsets appear sufficiently often in the database. The frequent itemsets determined by Apriori can be used to determine association rules which highlight general trends in the database. It was proposed by R. Agrawal and R. Srikant in 1994 for finding frequent itemsets in a dataset for boolean association rules.
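The three measures defined above can be sketched in a few lines of Python. The toy transaction list below is an assumption, chosen only to be consistent with the alpha/beta example stated earlier (every transaction contains alpha and beta; half contain epsilon, half theta):

```python
# Assumed toy database, consistent with the alpha/beta example above.
transactions = [
    {"alpha", "beta", "epsilon"},
    {"alpha", "beta", "epsilon"},
    {"alpha", "beta", "theta"},
    {"alpha", "beta", "theta"},
]

def support(itemset, db):
    """supp(X): fraction of transactions containing every item of X."""
    x = set(itemset)
    return sum(x <= t for t in db) / len(db)

def confidence(x, y, db):
    """conf(X -> Y) = supp(X ∪ Y) / supp(X)."""
    return support(set(x) | set(y), db) / support(x, db)

def lift(x, y, db):
    """lift(X -> Y) = supp(X ∪ Y) / (supp(X) * supp(Y))."""
    return confidence(x, y, db) / support(y, db)

print(confidence({"alpha"}, {"beta"}, transactions))             # 1.0 (100%)
print(confidence({"alpha", "beta"}, {"epsilon"}, transactions))  # 0.5 (50%)
```

Note how lift divides the confidence by supp(Y): here lift(alpha ⟶ beta) is 1.0, because beta appears in every transaction anyway, so knowing that alpha was bought adds nothing.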
    The algorithm is named Apriori because it uses prior knowledge of frequent itemset properties. A major concept in the Apriori algorithm is the anti-monotonicity of the support measure: all subsets of a frequent itemset must be frequent, and, conversely, all supersets of an infrequent itemset must be infrequent too.

    Apriori uses a "bottom-up" approach, where frequent itemsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. It uses two steps, "join" and "prune", to reduce the search space, iterating them to discover the most frequent itemsets.

    Join step: this step generates (k+1)-itemset candidates from the k-itemsets by joining the set of frequent k-itemsets with itself. Prune step: this step scans the support count of each candidate in the database. If a candidate does not meet the minimum support, it is regarded as infrequent and removed. This step is performed to reduce the size of the candidate itemsets.

    Steps in Apriori: The algorithm follows a sequence of steps to find the most frequent itemsets in the given database, iterating the join and prune steps until no more frequent itemsets can be found. A minimum support threshold is given in the problem or assumed by the user.

    Step 1: In the first iteration of the algorithm, each item is taken as a 1-itemset candidate, and the algorithm counts the frequency of each item.

    Step 2: Let the minimum support be assigned (e.g. 2). The set of 1-itemsets whose frequency satisfies the minimum support is determined: only candidates with a count greater than or equal to the minimum support are taken forward to the next iteration, and the others are pruned.

    Step 3: Next, frequent 2-itemsets with minimum support are discovered.
    For this step, the 2-itemset candidates are generated by forming pairs of the frequent items (the join step).

    Step 4: The 2-itemset candidates are pruned using the minimum support threshold; the table then holds only the 2-itemsets with minimum support.

    Step 5: The next iteration forms 3-itemsets using the join and prune steps. This iteration uses the anti-monotone property: the 2-itemset subsets of each candidate 3-itemset must meet the minimum support. If all 2-itemset subsets are frequent, the candidate may be frequent; otherwise it is pruned.

    Step 6: The next step makes 4-itemsets by joining 3-itemsets, pruning any candidate whose subsets do not meet the minimum support criteria. The algorithm stops when no more frequent itemsets can be found.

    Let us discuss an example to understand these steps more clearly. Consider the following table: we will generate the association rules for it using the Apriori algorithm. Let the minimum support be 3.

    Step 1: Create the frequency distribution table of the items.

    Step 2: This is the prune step: items whose count does not meet the minimum support are removed. Only item I5 does not meet the minimum support of 3, so it is removed. Items I1, I2, I3 and I4 meet the minimum support count and are therefore frequent.

    Step 3: This is the join step: here we generate the list of all pairs of the frequent items (2-itemsets).

    Step 4: This is the prune step. The pairs {I1, I4} and {I3, I4} do not meet the minimum support and are removed.

    Step 5: This is a combined join and prune step: we now look for frequent triples in the database, but we can already exclude all triples that contain a pair that was not frequent in the previous step (following the anti-monotone property of the support measure). For example, for the itemset {I1, I2, I4}, the subsets are {I1, I2}, {I1, I4}, {I2, I4}; the pair {I1, I4} is not frequent, as it does not appear in TABLE-5.
    Thus {I1, I2, I4} is not frequent and is deleted. All the triples are pruned in this way, and we are left with only the itemset {I1, I2, I3}, which is frequent because all its subsets are frequent (again, the anti-monotone property of the support measure at work).

    Step 6: Generate association rules. From the frequent itemset discovered above, and setting the minimum confidence threshold to 60%, the association rules are:

    {I1, I2} => {I3}: confidence = support{I1, I2, I3} / support{I1, I2} = (3/4) × 100 = 75%
    {I1, I3} => {I2}: confidence = support{I1, I2, I3} / support{I1, I3} = (3/3) × 100 = 100%
    {I2, I3} => {I1}: confidence = support{I1, I2, I3} / support{I2, I3} = (3/4) × 100 = 75%
    {I1} => {I2, I3}: confidence = support{I1, I2, I3} / support{I1} = (3/4) × 100 = 75%
    {I2} => {I1, I3}: confidence = support{I1, I2, I3} / support{I2} = (3/5) × 100 = 60%
    {I3} => {I1, I2}: confidence = support{I1, I2, I3} / support{I3} = (3/4) × 100 = 75%

    This shows that all the above association rules are strong if the minimum confidence threshold is 60%.
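The join/prune loop walked through above can be sketched as a short Python function. The transaction table below is reconstructed to be consistent with the support counts in the worked example (I5 appears twice, {I1, I4} and {I3, I4} twice, {I1, I2, I3} three times); treat it as an illustration, since the original table was lost here:

```python
from itertools import combinations

# Transactions reconstructed to match the counts in the worked example above
# (an assumption -- the original table is not reproduced in this text).
transactions = [
    {"I1", "I2", "I3"},
    {"I2", "I3", "I4"},
    {"I4", "I5"},
    {"I1", "I2", "I4"},
    {"I1", "I2", "I3", "I5"},
    {"I1", "I2", "I3", "I4"},
]

def apriori(db, min_support):
    """Frequent-itemset mining by iterated join and prune steps.
    `min_support` is an absolute count; returns {frozenset: count}."""
    # Steps 1-2: count 1-itemsets and keep the frequent ones.
    counts = {}
    for t in db:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        prev = list(frequent)
        # Join step: build k-itemset candidates from frequent (k-1)-itemsets.
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # Prune step (anti-monotonicity): a candidate with any infrequent
        # (k-1)-subset cannot itself be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        counts = {c: sum(c <= t for t in db) for c in candidates}
        frequent = {s: n for s, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result

freq = apriori(transactions, min_support=3)
print(freq[frozenset({"I1", "I2", "I3"})])   # 3, as found in Step 5 above
print(frozenset({"I1", "I4"}) in freq)       # False: pruned in Step 4
```

With min_support=3 this reproduces the walkthrough exactly: I5 is pruned as a 1-itemset, {I1, I4} and {I3, I4} as pairs, and {I1, I2, I3} is the only frequent triple.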

  • REAL ESTATE

    INTRODUCTION:- The real estate sector is one of the most globally recognized sectors. It comprises four sub-sectors: housing, retail, hospitality, and commercial. The growth of this sector is well complemented by the growth of the corporate environment and the demand for office space as well as urban and semi-urban accommodation. The construction industry ranks third among the 14 major sectors in terms of direct, indirect and induced effects across all sectors of the economy. Real estate is an integral part of the economy of any country; on average, it accounts for about 11% of gross domestic product.

    A US-based company that helps students find college apartments and off-campus housing was facing challenges in achieving sales due to stiff competition and changes in the economy. The agency owner soon realized that their old brochure-style website was passé and needed to change in a hurry. An upgrade to help them promote new properties and break new ground was the need of the hour. The owner also wanted to offer his clients value-added services that would help them get more takers for their properties. When they came to Exceptionaire, the majority of their new leads came from referrals. Though this is a good situation to be in, they wanted to maximize their online presence to attract more eyeballs to their website. As a strategy to convert more leads into clients, the owner wanted to position the company as a market leader in local property investment. He also wanted to offer assistance to his current clients to help them sell more, thereby building customer loyalty and getting them to use the agency’s services more.

    MARKET SIZE:- By 2040, the real estate market will grow to Rs 65,000 crore (US$ 9.30 billion) from Rs 12,000 crore (US$ 1.72 billion) in 2019. The real estate sector in India is expected to reach a market size of US$ 1 trillion by 2030, up from US$ 120 billion in 2017, and to contribute 13% to the country’s GDP by 2025.
    Retail, hospitality, and commercial real estate are also growing significantly, providing the much-needed infrastructure for India’s growing needs. Indian real estate is projected to grow at a CAGR of 19.5% from 2017 to 2028. Office space demand has been driven mostly by growth in the ITeS/IT, BFSI, consulting and manufacturing sectors. During 2019, leased office space reached 60.6 msf across eight major cities, registering growth of 27% y-o-y. In 2019, office sector demand with commercial leasing activity reached 69.4 msf. Co-working space across the top seven cities grew to 12 msf by the end of 2019.

    GAP ANALYSIS:- To rise above the competition with a search-engine-optimized website. To drive traffic to the website, from a few hundred visitors to tens of thousands. To handle traffic efficiently. Real-time social media updates triggered by any changes made by the manager. A mobile-responsive website. To enable users to compare properties. To connect agents with their related properties. To create a comprehensive marketing plan.

    DIGITAL TRANSFORMATION SOLUTION:- We created a strong web presence so that the site could be easily indexed by search engines. Making the website search-engine friendly was no mean feat, as each of the 200,000 property links had to be made conducive to SEO, which we did manually to ensure there were no errors. We did on-page and off-page optimization to drive sizeable traffic to the site and achieve top keyword rankings in major search engines. We also expanded the digital footprint by making an impactful presence across major social media. We built a complex CMS with a system and database architecture that ensured the site could handle thousands of concurrent users without lags or glitches. Content is king.
    In keeping with this, we created an ecosystem in which content from any blog is efficiently pushed to all integrated social media channels, like Facebook and Twitter, creating as many backlinks to the site as possible. With over 60% of traffic coming from handheld devices, it was imperative to make the website perfectly responsive, so we built it to render well across devices of varying screen sizes. We created a logically driven user-experience tool to aid users in making informed decisions: a comprehensive comparison tool that lets users compare properties on multiple parameters. This was done by creating a database of all pertinent property details likely to serve as points of comparison, in order to build an efficient comparison engine. The initial database did not connect properties to the agents handling them, which was quite chaotic, and doing this manually was hardly possible with 200,000 properties, so we devised an algorithm to add the agent corresponding to each property to the system. We created, developed and launched a website and full marketing plan with the following: a contact form linked to an email address created specifically for the property; a fully interactive gallery showcasing all properties; a traditional and interactive marketing plan created specifically to drive qualified leads back to the website; and search engine optimization (SEO) of the website.

    RESULT:- After we delivered the SMAC solution, the client began receiving a huge amount of traffic through search engines. The customer now gets over 45,538 hits to their website monthly and aims to reach a million users in the next year.

  • Bloxorz Game! Using Python Artificial Intelligence

    Bloxorz is a 3-D block-sliding puzzle game consisting of a terrain built from 1×1 tiles in a particular shape and size, and a 1×1×2 block. The game is a single-agent path-finding problem: move the block from its initial position using four directions (right, left, up, and down), keeping its ends within the terrain boundary at all times, until it falls into a 1×1 square hole in the terrain that represents the goal state. The block can be in three states: standing, lying horizontally, or lying vertically. When the block reaches the hole, it must be standing in order to fall in. Level 1 of the game uses a terrain of 6×10 rows and columns; the starting position is at row 2, column 2 (or as the user determines it), and the goal position is at row 5, column 8. The shape of the terrain in the first level is shown in Fig. 1. First, have some fun by playing the game at https://www.miniclip.com/games/bloxorz/en/#

    Implement an agent to solve level 1 of the Bloxorz game using Python. a. You may choose to represent the game as a map of 0s and 1s, as shown here: 1s represent tiles, 0s represent void spaces, and 9 represents the goal. b. Show/print the step-by-step process your agent takes to reach the goal. c. You may use "X" to indicate the current position at each step. d. Also print the number of steps taken by your agent to reach the goal. e. You can use any uninformed or informed search strategy to implement this in Python.

    If you need a solution to this problem, you can contact us at contact@codersarts.com and get instant help at affordable prices.
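One possible solution sketch uses breadth-first search, an uninformed strategy as allowed by point e. The terrain encoding below is an assumption based on the description (a 6×10 grid, start at row 2 column 2, goal at row 5 column 8, 1-indexed); a state is simply the pair of cells the block occupies, so standing and both lying orientations fall out of one representation:

```python
from collections import deque

# Assumed level-1 terrain, following the encoding suggested in point a:
# '1' = tile, '0' = void, '9' = goal hole. 6 rows x 10 columns.
LEVEL1 = [
    "1110000000",
    "1111110000",
    "1111111110",
    "0111111111",
    "0000011911",   # goal at row 5, column 8 (0-indexed: row 4, column 7)
    "0000001110",
]

def on_board(r, c):
    return 0 <= r < len(LEVEL1) and 0 <= c < len(LEVEL1[0]) and LEVEL1[r][c] != "0"

def moves(state):
    """Yield (direction, next_state); a state is a sorted pair of cells."""
    (r1, c1), (r2, c2) = state
    if (r1, c1) == (r2, c2):                       # standing: tip over
        yield "U", ((r1 - 2, c1), (r1 - 1, c1))
        yield "D", ((r1 + 1, c1), (r1 + 2, c1))
        yield "L", ((r1, c1 - 2), (r1, c1 - 1))
        yield "R", ((r1, c1 + 1), (r1, c1 + 2))
    elif c1 == c2:                                 # lying vertically (r2 = r1 + 1)
        yield "U", ((r1 - 1, c1), (r1 - 1, c1))    # rolls up onto one end
        yield "D", ((r2 + 1, c1), (r2 + 1, c1))
        yield "L", ((r1, c1 - 1), (r2, c1 - 1))
        yield "R", ((r1, c1 + 1), (r2, c1 + 1))
    else:                                          # lying horizontally (c2 = c1 + 1)
        yield "L", ((r1, c1 - 1), (r1, c1 - 1))
        yield "R", ((r1, c2 + 1), (r1, c2 + 1))
        yield "U", ((r1 - 1, c1), (r1 - 1, c2))
        yield "D", ((r1 + 1, c1), (r1 + 1, c2))

def solve(start=(1, 1)):
    """BFS: returns the shortest move sequence to stand on the goal, or None."""
    goal = next((r, c) for r, row in enumerate(LEVEL1)
                for c, ch in enumerate(row) if ch == "9")
    init = (start, start)                          # the block begins standing
    frontier, parent = deque([init]), {init: None}
    while frontier:
        state = frontier.popleft()
        if state == (goal, goal):                  # standing on the hole: done
            path = []
            while parent[state] is not None:
                state, move = parent[state][0], parent[state][1]
                path.append(move)
            return path[::-1]
        for move, nxt in moves(state):
            if nxt not in parent and all(on_board(r, c) for r, c in nxt):
                parent[nxt] = (state, move)
                frontier.append(nxt)
    return None

path = solve()
print(len(path), "moves:", "".join(path))
```

Because BFS expands states in order of distance, the first time the standing-on-goal state is dequeued, the reconstructed path is a shortest solution. Printing the board with an "X" over the block's cells at each step (point c) is a straightforward extension.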

  • Large Scale Data Driven Applications Using MongoDB

    Learning Outcomes: On successful completion of this module, you will be able to: demonstrate knowledge and understanding of the theory and practice of large-scale data-driven applications; apply skills to deal with the complex issues involved in the design and implementation of a reliable large-scale data-driven application; and demonstrate competence by applying theoretical skills to practical problems.

    Problem: You are required to build a web-based system for displaying and querying the worldwide COVID-19 data sets. A dataset consisting of COVID-19 attributes for 3718 cases can be found in the file COVID-19.7z. This should be unpackable with an archive manager and loadable into Excel/OpenOffice (use Unicode UTF-8). You should examine and attempt to understand the data (there will be lab/tutorial help), and clean it if needed. Each case has the following attributes:

    Field descriptions: FIPS: US only. Federal Information Processing Standards code that uniquely identifies counties within the USA. Admin2: County name; US only. Province_State: Province, state or dependency name. Country_Region: Country, region or sovereignty name. The names of locations included on the website correspond with the official designations used by the U.S. Department of State. Last Update: MM/DD/YYYY HH:mm:ss (24-hour format, in UTC). Lat and Long_: Dot locations on the dashboard. All points (except for Australia) shown on the map are based on geographic centroids, and are not representative of a specific address, building or any location at a spatial scale finer than a province/state. Australian dots are located at the centroid of the largest city in each state. Confirmed: Confirmed cases include presumptive positive cases and probable cases, in accordance with CDC guidelines as of April 14. Deaths: Death totals in the US include confirmed and probable, in accordance with CDC guidelines as of April 14.
    Recovered: Recovered cases outside China are estimates based on local media reports, and state and local reporting when available, and therefore may be substantially lower than the true number. US state-level recovered cases are from the COVID Tracking Project. Active: Active cases = total confirmed - total recovered - total deaths. Combined_Key: Admin2 + Province_State + Country_Region. Incidence_Rate: cases per 100,000 persons. Case-Fatality Ratio (%): number of recorded deaths / number of cases.

    Design and implement a database to store this data, bearing in mind that this is only a small sample. You need to choose a (NoSQL) type of database management system and a platform to implement it on. Implement a web-based system linked to the database to allow querying and display of the data using PHP and MongoDB.

    PART I:- Requirements: DB design report to be handed in by the end of week 11 as a single PDF document via Canvas. This should include: your analysis of the data, and a data model diagram [25 Marks]. From your point of view, which type of database (relational, NoSQL) and which database management system do you think is better to use for such an application, plus a database design [15 Marks] (for formative feedback and evidence of work on assessment).

    Apply the following queries to your database design: List the following details (FIPS, Province_State, Lat, Long_, Confirmed, Deaths, Recovered, Active) where the death cases are less than 850 and greater than 800, worldwide. Find the number of records and the total numbers of confirmed, death, and recovered cases within the country region of Denmark. List all the details of COVID-19 cases in Greece and Cyprus. List the details of 10 records of COVID-19 cases where both Lat and Long_ have no values. Find the top 10 countries with the highest numbers of confirmed cases and death cases.
    List all the confirmed cases in China, including the Province_State and location (Lat and Long_) details.

    System Report: To be handed in Friday 11th of December (tbc) as a final single PDF document via Canvas. This should include: details of the database implementation, noting any differences from the initial design (include the original database design report as an appendix) [5 Marks]. Details of the system implementation, choice of language (PHP), overall architecture of the system (full code listings are not required), and sample screenshots of the interface and sample query results [5 Marks]. Discussion of the decisions made, a critical evaluation of the system, and how it could be extended if required (assume further data will be added on a regular basis). (50% weighting)

    PART II:- Prototype System: Demo of the system; time slots to be arranged after hand-in of the System Report. During the demo session, students will be asked about the following: design of the GUI of the system (one web page written in PHP with MongoDB) [15 Marks]; MongoDB connectivity with PHP [10 Marks]; importing the dataset into MongoDB [5 Marks]; MongoDB query statements [10 Marks]; discussing the results of the queries above [10 Marks]; using Google Maps to plot the query statements’ results [Extra Marks]. (50% weighting)

    The coursework is the complete assessment for this module; reassessment will be by resubmission of the report and/or a revised implementation.
    If you are looking for MongoDB project or assignment help, you can contact us at contact@codersarts.com
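As a sketch of how the Part I queries might look in MongoDB, here are filter and aggregation documents for the first two queries, written as Python dicts in pymongo style. The database and collection names (`covid`, `cases`) and the connection string are assumptions; the field spellings follow the field descriptions above:

```python
# Query 1: death cases greater than 800 and less than 850, worldwide,
# projecting only the requested fields.
deaths_filter = {"Deaths": {"$gt": 800, "$lt": 850}}
deaths_projection = {"_id": 0, "FIPS": 1, "Province_State": 1, "Lat": 1,
                     "Long_": 1, "Confirmed": 1, "Deaths": 1,
                     "Recovered": 1, "Active": 1}

# Query 2: record count and case totals for Denmark, as an aggregation pipeline.
denmark_pipeline = [
    {"$match": {"Country_Region": "Denmark"}},
    {"$group": {"_id": None,
                "records":   {"$sum": 1},
                "confirmed": {"$sum": "$Confirmed"},
                "deaths":    {"$sum": "$Deaths"},
                "recovered": {"$sum": "$Recovered"}}},
]

# Against a live server these would run roughly as follows
# (assumed connection details -- adjust to your own setup):
# from pymongo import MongoClient
# coll = MongoClient("mongodb://localhost:27017")["covid"]["cases"]
# for doc in coll.find(deaths_filter, deaths_projection):
#     print(doc)
# print(list(coll.aggregate(denmark_pipeline)))
```

The same filter and pipeline documents translate directly to the PHP MongoDB driver, so the design carries over to the required PHP front end.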

  • CHAT DASHBOARD

INTRODUCTION:- This dashboard gives us a real-time overview of one of our most important support channels: live chat. The team can see at a glance how busy chat is at any given time, and how well we're meeting demand. If we have everything under control, we tend to switch our attention to other channels or longer-term projects. But if the dashboard highlights an influx of chats and we're struggling to meet our First Response Time targets, we can quickly switch agents to ensure we're maintaining our standards. A less tangible, but very important, use for this dashboard is to show the wider company how hard the team works at addressing our customers' questions and problems. It's a great morale boost when we get compliments from other teams.

What information is on this dashboard, and where does it come from? This dashboard tells us instantly how busy our live chat is, and whether we have enough agents online to meet demand. It also gives us a rough idea of how our levels of service have been throughout the week. All the metrics on this dashboard are pulled from our account. The ones we look at frequently throughout the day are Current Chat Queue and Current Wait Time. We've also added warnings to our visualizations that turn red if there's a spike in chats, so we can take action immediately. Next to that is the number of agents we have online, to give us a ballpark sense of whether we're serving customers efficiently.

The client wanted to create an admin dashboard where the admin can create a new user for the application and create channels for different user requirements. The admin can view the list of users and channels with details such as ID, channel name, participants list, created date, image, and description. Once the admin clicks on a channel in the list, they can view that channel's detailed description, edit the channel, change its avatar URL, edit the description, and delete it.
We had to address the following tasks: The client required a multi-support Android, iOS, and JavaScript UI platform. In order to grow the user base, the client wanted us to build this complex multi-support dashboard with an effective, user-friendly interface. Our developers at Value Coders overcame these challenges with their innovative ideas and technical expertise.

OUR SOLUTIONS:- Our expert team of developers studied the technical details of this project, reviewed other applications, and started working on the most necessary areas first. This application provided the following solutions to the client: Solution design was done in continued sessions with the client, so that effort could be minimized during development. A sprint-wise development plan allowed us and the client to review the development at each stage, so that any bug could be fixed at that point. Client feedback was implemented at an effective pace. As the final outcome, we delivered a high-performing, feature-rich and user-friendly online admin panel. The panel had the following features: The admin can view the user list and the status of each user. The admin can define the UI settings, chat settings and the push notification format they want to use in their application. The admin comes to the dashboard and can discover various users in different categories and channels. They can view the list of users and channels with details such as ID, channel name, participants list, created date, image, and description. Once the admin clicks on a channel in the list, they can view that channel's detailed description, edit the channel, change its avatar URL, edit the description, and delete it. The admin can filter the listing according to need: by channel status or a particular channel name, make a channel read-only, make it public or private, create a custom filter list, and deactivate a channel.
LIVE CHAT:- When we joined the team, they already had a customer support platform. Our job was to create a customer-facing chat widget and the user interface for a support manager who operates live chats. We used the best UX practices to design a clean and minimalistic chat widget. The visitor's page shows the user's current status, their path on the website, when they were last seen, what country they come from, how many visits and chats they had previously, and what browser and device they used. The chat history contains the records of all conversations that happened in the live chats. In the settings, customers can get an overview of their chat performance via comprehensive dashboards. Reports give managers a good understanding of the performance of their team and of customer satisfaction. In the Templates section, users can select the layout for their campaigns from a number of common use cases, such as selling products, making an announcement, or sending a newsletter. With the ready-made themes we designed, users can create a great look for their email campaigns. A simple drag-and-drop editor helps users create custom emails. We also added the possibility to edit the HTML so users could customize their emails even more.

SMS CAMPAIGN:- We improved the SMS campaign configuration design, making it consistent with the email campaigns. Despite the variety of features (templates, tags, opt-out messages, file attachments, message previews), we made it look simple and minimalistic so the interface doesn't overwhelm the user and helps them get the job done quickly. We created an intuitive three-step SMS configuration flow similar to how it works with email campaigns. Drag-and-drop tags help to personalize every message. Ready-made templates for various use cases, which users create themselves, speed up the process of launching SMS campaigns. If users want to attach a file to their SMS, we made it possible for them to preview how it will look in the message.
As the last step in the SMS configuration, we created a message preview using a smartphone mockup to give a more realistic impression of the soon-to-be-launched campaign.

ADMIN DASHBOARD:- The Admin dashboard brings together all the platform's features under one user interface. We made it convenient so users could manage payments, agents, customers, products, and notifications with only a few clicks. Hire Figma experts for any kind of project: urgent bug fixes, minor enhancements, full-time and part-time work. If you need any type of project help, our experts will help you start designing immediately. T H A N K Y O U !!

  • MySQL Assignment Queries Using Schema

Learning Objectives:
1. Understand how to write SQL statements and run queries to retrieve data from databases;
2. Master the basic usage of the SELECT statement;
3. Get familiar with the popular open-source database management system (DBMS) MySQL and the associated software package XAMPP.

Preparation:
a. Download the MS Word file HW4 – SQL – 2020 Fall - Answerbook.docx and save it using the naming convention outlined above. Open the file and record the required students' details on p.1.
b. If you have not installed XAMPP, install it and check that it runs properly – detailed instructions were provided in the installation tutorial (in Canvas). Also follow the instructions on how to set up security.
c. Start XAMPP, start Apache and MySQL, and get into phpMyAdmin (follow the instructions in the installation guide and lecture slides).
d. Create an empty database named using your name initials (not full names) with the following naming convention: LastIniFirstIni-MusicStar. For example, if your name is George Burdell, name the database BG-MusicStar. Note: It is IMPORTANT to name the database with your own name. No credit will be given for screenshot submissions for which the names of the databases do not match the name(s) of the student(s).
e. Download the following SQL file from Canvas: MusicStar.sql. This file contains the commands to create the database tables and to add the records to the tables.
f. Import MusicStar.sql into MySQL and execute the contents (see the XAMPP instructional videos on Canvas to see how this can be done). Verify that all the tables have been created and the records added.

General Rules: Each answer MUST be on a different page in the answer word document. An 'answer' includes both the relevant SQL and an accompanying screenshot. See Query 1 in the answer document for an example of what the screenshot must look like. To help with pagination, each page has a heading.
Be careful not to accidentally insert additional pages (only the first 14 pages of this document will be graded). The SQL statement has to be placed above the screenshot (i.e., SQL statements will be placed towards the top of the page, under the page heading – e.g., Query 5). The SQL statement must be correctly formatted – it cannot be a single-line statement (with wrapping) in either your answer or in the screenshot. Indentation should be used – see the lecture examples for formatting. If you need to make your screenshot smaller to fit on the same page as the applicable SQL statement, the aspect ratio must be preserved to prevent distorting the image. You must use the explicit JOIN notation for all your queries: the implicit join notation is no longer considered a best practice. For inner joins, use the notation INNER JOIN, rather than JOIN. Aliases for tables can be used; however, this is usually optional (mandatory use is noted below). Fields in query results that report aggregate values must be renamed with an appropriate alias – e.g., an alias for AVG(GPA) could be 'Average GPA'. The font used for SQL statements in this document must be Times New Roman 12 pt. All queries are equally weighted. There may be records added to the MusicStar database in the future. The queries you author therefore need to be able to accommodate any such additions (without revision). Do not include more than 10 records in the submitted screenshot. If necessary, you can limit the output to 10 rows by placing this command on the last line of your SQL statement: LIMIT 10. Do not include duplicate (redundant) information in your query output. For example, if asked to list customer names that have made a purchase, do not list the same customer more than once. You must not include in your query output any fields that are not necessary to answer the query question.
Final Task: SUBMISSION Once you have saved all query texts and result screenshots in the main document (the docx file created in step a of the Preparation), submit this file.

How to take a screenshot: If you are using Windows, follow the steps below. Stay in the phpMyAdmin window and press Alt + PrintScreen at the same time. (Nothing will happen immediately when you press these keys.) You can also use the Snipping Tool utility, which is installed on most PCs. Switch to the MS Word document with your answers, left-click on the place in the document where you want the image to be inserted, and press Ctrl + V (or right-click the mouse and select Paste). The phpMyAdmin screen should show up as a picture in your Word document. Make sure the picture contains the database name (the left panel of the window) and the query result.

If you are using Mac OS X, follow the steps below. Stay in the phpMyAdmin window and press Command-Shift-4, then press the Spacebar. The cursor will change to a camera and the application window below the cursor will be highlighted. When you have the cursor over the phpMyAdmin window, just click the mouse button while holding down the Control key at the same time. There will be a camera shutter sound and the screenshot will be placed on the clipboard. Switch to the Word document, click on the place in the document where you want the image to be inserted, and press Command + V (or select Paste). The phpMyAdmin screen should show up as a picture in your Word document. Make sure the picture contains the database name, the query, and the query result.

MusicStar is a Canadian-based online retailer of music. Music is sold by the track (a customer does not have to purchase the whole album). In a single purchase, a customer can buy multiple music tracks from different music artists, drawn from different albums.
Each purchase produces a separate invoice, with each track being an invoice line item. In this respect, the invoice is a header, with the details of what has been purchased listed in the associated line items. Each track belongs to an album, which is associated with a specific music artist. Each track is also a member of a category of music – e.g., Jazz. Tracks can also be placed on playlists. The schema of this database (taken from the Designer tab) is shown graphically below. This image is helpful in highlighting the relationships between the tables. The schema reflects the efficient storage of data (with each table focusing on data about a single 'thing'); the relationships between data are also recorded via the use of foreign keys.

Tasks: Write the appropriate SQL statements for each of the following queries, run them in MySQL (in phpMyAdmin), and save the query and results in your MS Word document in the form of a screenshot.

Query 1 (example – no credit) List the id and name of customers who are from the United States and made a purchase of more than $20. Order the output by the customer's id. NOTE: The answer for this query is already included in "HW4_AnswerSheet.docx" as a sample answer. You do not have to do anything for this query. Use this format for all the other parts of the assignment (i.e., include the text of the query and a screenshot after running the query – the name of your database should be visible).

Query 2 What was the date of the last sale of an album containing the track "Take It Or Leave It"? Restrict your query to sales within the following countries: Canada, the United States of America, and the United Mexican States. Hint: You are required to use set inclusion when writing some of the conditions.

Query 3 Each track is classified by genre. Examples of music genres include rock, jazz, and reggae. List the names of all customers who have purchased reggae music. Order the report by the customer's last name.
Query 4 List the best customers who purchased Jazz (in terms of overall dollar sales of Jazz). Higher-value customers should be reported first.

Query 5 Albums are a set of tracks released by a music artist. The tracks the artist performs may, or may not, be composed by the performing artist(s). List the distinct genres of the tracks that Eric Clapton has performed on. However, exclude any tracks where Clapton did not either compose the track or co-compose the track with another composer.

Query 6 Report the number of tracks sold, and the total cost of those tracks. For this query, restrict the focus to only purchases made in the Canadian province of Alberta during 2009. Also exclude any tracks where the composer is unknown. Hint: You are required to use ranges when writing some of the conditions.

Query 7 Report the number of tracks sold of each music genre. However, exclude any genre where the total number of tracks sold is less than 200. Order the report so the music genre with the most tracks sold is reported first. Hint: You will need to use HAVING to specify a group condition.

Query 8 List the names of employees who report to either Michael Mitchell or Nancy Edwards, or were hired after 2002. In addition, all employees in this report must live in Calgary, Alberta.

Query 9 List the name and unit price of tracks whose price is above the average unit price across all tracks. Order the report by track name. Hint: You are required to use a NESTED query.

Query 10 List the names of tracks that have never been sold, in alphabetical order. Hint: You are required to use a NESTED query. Hint: You are required to use the NOT IN operator.

Query 11 We will investigate the existence of track outliers in terms of the space (in bytes) required to store tracks.
List the track name, track size (in bytes), and average track size (in bytes) of the track's respective genre. A track outlier is defined as any track whose size in bytes is more than 5 standard deviations above the mean for the track genre it is a member of. Hint: You are required to use one of the JOIN commands, a NESTED query within the FROM clause, and aliases for variable and table names. You may find it useful to review an example in MGT 2210 [#4] Advanced Queries. Hint: You are required to use the AVG() and STDDEV() functions. Contact us to get database assignment help or MySQL project assignment help at affordable prices at contact@codersarts.com
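Query 11's outlier rule ("more than 5 standard deviations above the mean") can be prototyped outside SQL to check your understanding before writing the nested query. This is a hedged sketch in plain Java with made-up track sizes, not the MusicStar data; the population standard deviation is used, matching MySQL's STDDEV():

```java
import java.util.*;

public class OutlierSketch {

    // Population standard deviation, matching MySQL's STDDEV()
    static double stddev(List<Double> xs, double mean) {
        double ss = 0;
        for (double x : xs) ss += (x - mean) * (x - mean);
        return Math.sqrt(ss / xs.size());
    }

    // Return the sizes that are more than 5 standard deviations above the mean
    static List<Double> outliers(List<Double> sizes) {
        double mean = sizes.stream().mapToDouble(d -> d).average().orElse(0);
        double sd = stddev(sizes, mean);
        List<Double> out = new ArrayList<>();
        for (double s : sizes) if (s > mean + 5 * sd) out.add(s);
        return out;
    }

    public static void main(String[] args) {
        // 99 ordinary-sized tracks and one enormous one
        List<Double> sizes = new ArrayList<>();
        for (int i = 0; i < 99; i++) sizes.add(5_000_000.0 + (i % 10));
        sizes.add(500_000_000.0);
        System.out.println(outliers(sizes)); // only the huge track is flagged
    }
}
```

In the SQL answer, the mean and standard deviation per genre come from a nested query in the FROM clause; this sketch only illustrates the threshold logic.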

  • Photo Management Application Using Java

The role of a photo management application is to organize photographs so that they can be easily accessed. In order to help organize photos, the user can provide tags to describe the content of the photos. A tag is a keyword associated with a photo. For example, we can associate the tag "vacation" with any photo taken during a vacation. In Table 1, you may find some examples of photos and the tags that are used to describe them. The photo manager organizes the photos into albums created by the user. An album is identified by a unique name and groups photos that satisfy certain conditions. For the purpose of this assignment, the conditions used to create albums consist of a sequence of tags separated by "AND": Tag1 AND Tag2 AND Tag3. Photos that contain all specified tags will appear in the album. An empty tag list matches all photos.

Example 1. Using the photos of Table 1, the album with the condition bear will contain two photos (that of the panda and the grizzly bear). The album with the condition animal AND grass will contain four photos (hedgehog, grizzly bear, fox and panda). The album with no tags will contain all eight photos.

2. Inverted index In order to accelerate the search for photos, it is possible to store the tags in an inverted index. The idea is that instead of having the photos point to the tags, the inverted index will store all the tags, and each tag will point to all the photos that contain it. The following is an example showing a partial inverted index for the photos shown above:

animal → hedgehog.jpg, bear.jpg, fox.jpg, panda.jpg, wolf.jpg, racoon.jpg
apple → hedgehog.jpg
bear → bear.jpg, panda.jpg
black → butterfly2.jpg
butterfly → butterfly1.jpg, butterfly2.jpg
...

You are required to:
1. Represent the photos-tags association using an inverted index stored in the class PhotoManager.
2. Use a data structure that allows O(log n) on average to search for a tag.
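A minimal sketch of the inverted index and AND-matching described above, using java.util.TreeMap (a red-black tree, so tag lookup is O(log n)). The class and method names here are illustrative only, not the required PhotoManager/Album API, and paths stand in for Photo objects:

```java
import java.util.*;

// Illustrative inverted index: tag -> sorted set of photo paths containing that tag.
public class InvertedIndexSketch {

    private final TreeMap<String, TreeSet<String>> index = new TreeMap<>();

    void addPhoto(String path, List<String> tags) {
        for (String tag : tags)
            index.computeIfAbsent(tag, t -> new TreeSet<>()).add(path);
    }

    // Photos matching "Tag1 AND Tag2 AND ...": intersect the tags' photo sets.
    // An empty tag list matches all photos.
    Set<String> match(List<String> tags) {
        if (tags.isEmpty()) {
            TreeSet<String> all = new TreeSet<>();
            for (TreeSet<String> s : index.values()) all.addAll(s);
            return all;
        }
        TreeSet<String> result =
            new TreeSet<>(index.getOrDefault(tags.get(0), new TreeSet<>()));
        for (String tag : tags.subList(1, tags.size()))
            result.retainAll(index.getOrDefault(tag, new TreeSet<>()));
        return result;
    }

    public static void main(String[] args) {
        InvertedIndexSketch m = new InvertedIndexSketch();
        m.addPhoto("panda.jpg", List.of("animal", "bear", "grass"));
        m.addPhoto("bear.jpg", List.of("animal", "bear"));
        m.addPhoto("hedgehog.jpg", List.of("animal", "grass", "apple"));
        System.out.println(m.match(List.of("bear")));            // [bear.jpg, panda.jpg]
        System.out.println(m.match(List.of("animal", "grass"))); // [hedgehog.jpg, panda.jpg]
    }
}
```

In the assignment itself, the index must be stored in a BST class mapping each tag to a linked list of Photo objects; the intersection logic for Album.getPhotos is the same idea.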
3 Requirements
You are required to implement the following specification:

public class Photo {
    // Constructor
    public Photo(String path, LinkedList<String> tags);
    // Return the path (full file name) of the photo. A photo is uniquely identified by its path.
    public String getPath();
    // Return all tags associated with the photo
    public LinkedList<String> getTags();
}

public class Album {
    // Constructor
    public Album(String name, String condition, PhotoManager manager);
    // Return the name of the album
    public String getName();
    // Return the condition associated with the album
    public String getCondition();
    // Return the manager
    public PhotoManager getManager();
    // Return all photos that satisfy the album condition
    public LinkedList<Photo> getPhotos();
    // Return the number of tag comparisons used to find all photos of the album
    public int getNbComps();
}

public class PhotoManager {
    // Constructor
    public PhotoManager();
    // Add a photo
    public void addPhoto(Photo p);
    // Delete a photo
    public void deletePhoto(String path);
    // Return the inverted index of all managed photos
    public BST<String, LinkedList<Photo>> getPhotos();
}

Remark 1. The list of photos that belong to the album is determined at the time when the method getPhotos is called, not when the album is created.
Contact us to get any help related to Java projects or Java homework at affordable prices at contact@codersarts.com

  • Creating Database Schema, ER Diagram With MySQL PHP

Descriptions: Using your knowledge of the Entity-Relationship model, draw an ER diagram, convert the diagram into relations, and perform a set of database operations for the following application related to the music CD industry.
1) A CD has a title, a year of production and a CD type. You can come up with your own CD types.
2) A CD usually has multiple songs on different tracks. Each song has a name, an artist and a track number. Entity set Song is considered to be weak and needs support from entity set CD.
3) A CD is produced by a producer, which has a name and an address.
4) A CD may be supplied by multiple suppliers, each of which has a name and an address.
5) A customer may rent multiple CDs. Customer information such as Social Security Number (SSN), name and telephone needs to be recorded. The date and period of renting (in days) should also be recorded.
6) A customer may be a regular member or a VIP member. A VIP member has additional information such as the starting date of VIP status and the percentage of discount.

Task 1 (30%): Draw an ER diagram based on the above description. Do not forget to specify keys. Pay special attention to the cardinalities of relationships, weak entity sets and entity sets involved in ISA hierarchies. The diagram should be drawn based on the notations discussed in the lectures. The diagram should be generated by a computer program (e.g., Word). A hand-drawn diagram will NOT be accepted, to avoid ambiguity.

Task 2 (30%): Convert the entity sets and relationships to database relations. The relations should be able to be realized in a MySQL database, i.e., write SQL statements. Pay attention to the supporting relationships and weak entity sets. Entities corresponding to child classes in ISA hierarchies should follow the E/R style.

Task 3 (40%): Write a menu-driven program (e.g., using PHP) to enable the following queries. The weight for each query has been specified (out of the 40%). You will need to supply your own test data.
1) Insert a producer (5%)
2) Insert a CD supplied by a particular supplier and produced by a particular producer (6%)
3) Insert a regular customer borrowing a particular CD (6%)
4) Insert a VIP customer borrowing a particular CD (7%)
5) Find the names and telephone numbers of all customers who borrowed a particular CD and are supposed to return it by a particular date. (8%)
6) List information about producers who produced a CD of a particular artist released in a particular year. (8%)
If you need the complete solution of this project at an affordable price, you can contact us at contact@codersarts.com. We also provide other database-related project and assignment help at affordable prices.

  • Sending Email - Spring Boot Project Sample

In this article, we will learn step by step how to send emails from a Spring Boot application. In our RESTful web service application, we will implement the feature of sending an email with the Gmail SMTP server. SMTP stands for Simple Mail Transfer Protocol: whenever we send a mail to another person's email id, it passes through this protocol before being delivered to the mentioned email id. The following steps are required for sending email via Gmail SMTP.

Step 1: Create a simple Spring Starter Project from STS.

Step 2: Add the Spring Boot Starter Mail dependency in your pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-mail</artifactId>
</dependency>

So the final pom.xml file must have the following dependencies:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.5.RELEASE</version>
    </parent>
    <groupId>com.example</groupId>
    <artifactId>GmailMessaging</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>GmailMessaging</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-mail</artifactId>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

Step 3: Configure the important Gmail SMTP properties in application.properties:

server.port=8083
spring.mail.host=smtp.gmail.com
spring.mail.port=25
spring.mail.username= your gmail id
spring.mail.password= your password
# Other properties
spring.mail.properties.mail.debug=true
spring.mail.properties.mail.transport.protocol=smtp
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.connectiontimeout=5000
spring.mail.properties.mail.smtp.timeout=5000
spring.mail.properties.mail.smtp.writetimeout=5000
# TLS, port 587
spring.mail.properties.mail.smtp.starttls.enable=true

Step 4: Configure the main Spring Boot application class.
GmailMessagingApplication.java

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class GmailMessagingApplication {

    public static void main(String[] args) {
        SpringApplication.run(GmailMessagingApplication.class, args);
    }
}

Step 5: Create a controller class as shown below.

GmailController.java

package com.example.demo.controller;

import javax.mail.MessagingException;
import javax.mail.internet.MimeMessage;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.ClassPathResource;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessageHelper;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GmailController {

    @Autowired
    private JavaMailSender sender;

    @RequestMapping("/hello")
    public void hello() {
        System.out.println("hello......");
    }

    @RequestMapping("/sendMail")
    public String sendMail() {
        MimeMessage message = sender.createMimeMessage();
        MimeMessageHelper helper = new MimeMessageHelper(message);
        try {
            helper.setTo("praveenaditya0@gmail.com");
            helper.setText("Greetings :)");
            helper.setSubject("Mail From Spring Boot");
        } catch (MessagingException e) {
            e.printStackTrace();
            return "Error while sending mail ..";
        }
        sender.send(message);
        return "Mail Sent Success!";
    }

    @RequestMapping("/sendMailAtt")
    public String sendMailAttachment() throws MessagingException {
        MimeMessage message = sender.createMimeMessage();
        MimeMessageHelper helper = new MimeMessageHelper(message, true);
        try {
            helper.setTo("demo@gmail.com");
            helper.setText("Greetings :)\n Please find the attached document for your reference.");
            helper.setSubject("Mail From Spring Boot");
            ClassPathResource file = new ClassPathResource("document.PNG");
            helper.addAttachment("document.PNG", file);
        } catch (MessagingException e) {
            e.printStackTrace();
            return "Error while sending mail ..";
        }
        sender.send(message);
        return "Mail Sent Success!";
    }
}

Step 6: Run the Spring Boot application and hit the endpoint http://localhost:8083/sendMail. You will get a message as shown below.

  • MVC, Multiple Views, and Interactive 2D Graphics In JavaFx

Overview In this assignment you will build a JavaFX system that demonstrates your skills with Model-View-Controller and multiple views, 2D graphics, and interaction with graphical objects. The application, called CriticalPath, allows users to create a dependency graph for a project and see the critical path from project start to project completion. A dependency graph is a directed acyclic graph (DAG) where the nodes are activities in the project, and edges connect activities in terms of their time dependencies. For example, a dependency graph for the task of making a cheese sandwich might be as follows (note that we assume unlimited workers to carry out the tasks, meaning that all branches can be carried out in parallel): The critical path is the path through the graph that requires the most total time, and so indicates the total time that will be needed to complete the project. Since the "toast" path in the graph requires 6 minutes (and the "cheese" path requires only 4 minutes), the toast path is the critical path. (Note that the critical path is calculated only based on time, and does not take into account the number of nodes in the path.) The assignment has 3 parts: Part 1 involves the main UI for creating and working with the graph, Part 2 involves additional views and navigation, and Part 3 involves graph operations.

Part 1. Main graph UI
The main panel for the system shows the graph and allows the user to interact with it (i.e., creating and deleting nodes and edges, dragging nodes on the workspace). The system uses an MVC architecture as specified below.

Interaction requirements:
- The graph view shows a graphical representation of the graph, with nodes as coloured circles (with titles and time costs), and edges as arrows between the nodes.
- The user can create a new node by right-clicking the mouse (pressing + releasing) on the background.
- When a new node is created, it has the default title "Activity" and the default time cost 1.0.
- The user can select a node by pressing the left mouse button on the node; selection is indicated in the view by drawing the node with a dashed outline and a different fill colour.
- The user can move a node by pressing the left mouse button on a node and then dragging the mouse. (This also selects the node.)
- Selection is persistent (i.e., the node stays selected even after a move is completed).
- The user can delete a node by selecting it and then pressing the Delete key on the keyboard.
- The user can create an edge by right-clicking on the start node, dragging to the end node, and releasing the mouse button. As the user drags the mouse, a temporary line is shown from the start node to the mouse cursor; when the mouse is released on the end node, the edge is created and shown. If the user releases when the cursor is not on a node, the temporary line is discarded and no edge is created.
- Edges show arrowheads to indicate the direction of the edge.
- The user can select an edge by left-clicking on it (within a small "close enough" region as described in lectures); a selected edge is shown using a dashed line.
- The user can delete an edge by selecting it and then pressing the Delete key on the keyboard.
- Selecting any node or edge removes any previous selection. There is no multiple selection.
- When the application starts, the graph contains a single node titled "Start", with a time cost of 0.0. This is the root node for calculation of the critical path, and cannot be deleted or edited (however, it can be dragged).
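The critical path described in the overview is simply the longest path (by total time) through the DAG, starting from the root. A hedged sketch of that computation, using a toy graph with the sandwich example's 6-minute toast path rather than the app's actual model classes (the task names and costs here are illustrative):

```java
import java.util.*;

public class CriticalPathSketch {

    // Longest total time from `node` to any sink, via memoized DFS over the DAG.
    static double longest(String node, Map<String, Double> cost,
                          Map<String, List<String>> edges, Map<String, Double> memo) {
        if (memo.containsKey(node)) return memo.get(node);
        double best = 0;
        for (String next : edges.getOrDefault(node, List.of()))
            best = Math.max(best, longest(next, cost, edges, memo));
        double total = cost.get(node) + best;
        memo.put(node, total);
        return total;
    }

    public static void main(String[] args) {
        // Cheese-sandwich example: the toast path (6 min) beats the cheese path (4 min)
        Map<String, Double> cost = Map.of(
            "Start", 0.0, "Toast", 5.0, "Slice cheese", 3.0, "Assemble", 1.0);
        Map<String, List<String>> edges = Map.of(
            "Start", List.of("Toast", "Slice cheese"),
            "Toast", List.of("Assemble"),
            "Slice cheese", List.of("Assemble"));
        System.out.println(longest("Start", cost, edges, new HashMap<>())); // 6.0
    }
}
```

In the actual app this traversal would run over GraphModel's Node and Edge objects; memoization keeps it linear in the size of the DAG.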
Code requirements:
- Create class MainGraphView to show the graph.
  o The view must use immediate-mode graphics in JavaFX (i.e., a Canvas).
  o The view should use 2D transforms to position the graphics reference frame before drawing nodes, edges, and arrowheads.
- Create class GraphModel to store the graph.
  o Create classes Node and Edge that will be used by the model.
  o GraphModel must use publish-subscribe communication to the view.
  o GraphModel must have a public API that can be called by the controller.
  o The model must store locations as normalized coordinates.
- Create class GraphViewController to handle user events from the MainGraphView.
  o The controller must implement a state machine to handle mouse events from the MainGraphView.
  o Note: when you set up event handling in MainGraphView, normalize the mouse coordinates before they are sent to the controller (as shown in the lab example).
- Create class InteractionModel to store view state, including node and edge selection.

Resources for Part 1: The BoxDemo code and the graphics demos available in the Code Examples folder on the course website; JavaFX APIs for Canvas and GraphicsContext: openjfx.io/javadoc/15/

Result for Part 1: a zipped IDEA project that meets the interaction and code requirements above. To export your project, choose File > Export > Project to Zip file. NOTE: if you have fully completed Part 2, do not hand in a project for Part 1.

Part 2. Additional views and viewport navigation
In Part 2 you will create additional views for the CriticalPath app, and will implement panning and zooming capability. There will be two new views of the graph (a view of a single node's details, and an overview of the entire workspace), and you will combine these and your existing graph view into a new composite view that holds the entire UI.
MainGraphView: new interaction requirements
• The graph is now shown on a square workspace that may be larger than the MainGraphView’s canvas, so the MainGraphView now shows a viewport onto the workspace. For example, the workspace might be 2000x2000, and a 500x500 MainGraphView will show one quarter of the workspace.
• The viewport can now be panned: when the user presses on the background (either mouse button) and drags, the view pans by the amount of the drag. Panning is restricted to the size of the workspace (i.e., the left/top of the MainGraphView can never go past 0,0 in the workspace, and the right/bottom of the MainGraphView can never go past the workspace extents).
• If the window is resized, the size of the MainGraphView canvas changes, but not the size of the workspace.

MainGraphView: new code requirements
• Add variables to MainGraphView to store the size of a logical workspace that can be different from the size of the MainGraphView. (You may wish to also store the workspace size in the InteractionModel.)
• Use the width and height of the workspace as the extents for normalizing coordinates (i.e., divide the mouse X and Y by the workspace width and height, rather than by the MainGraphView width and height).
• Add viewportLeft and viewportTop variables to the InteractionModel to store the left and top coordinates of the viewport within the workspace.
• When you draw the graph in the MainGraphView, subtract viewportLeft and viewportTop from your translations in order to draw the correct content in the MainGraphView. For example, assume the workspace is 2000x2000 and the MainGraphView is 500x500. Assume a Node has location 0.5, 0.5 in the workspace, and that the viewport is at viewportLeft 0.4 and viewportTop 0.4. The Node should therefore be drawn in the MainGraphView at x = (0.5 - 0.4) * 500 and y = (0.5 - 0.4) * 500.
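The viewport translation above is plain arithmetic, so it can be sketched language-agnostically (Python is used here only for brevity; the assignment itself is JavaFX/Java, and the function name is hypothetical):

```python
def workspace_to_view(nx, ny, view_w, view_h, viewport_left, viewport_top):
    """Map a normalized workspace coordinate (nx, ny) to view pixels,
    subtracting the normalized viewport offset before scaling, as in the
    formula given in the text."""
    return ((nx - viewport_left) * view_w, (ny - viewport_top) * view_h)

# Worked example from the text: node at (0.5, 0.5), viewport at (0.4, 0.4),
# 500x500 MainGraphView -> the node lands near (50, 50) in the view.
x, y = workspace_to_view(0.5, 0.5, 500, 500, 0.4, 0.4)
```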
NodeDetailView:
• Create class NodeDetailView to show the name and time cost for a selected node
• Lay out the components of NodeDetailView as shown in the picture
• If no node is selected, this view should show blanks for both name and time
• The text fields in the NodeDetailView are editable: if the user enters a new name and presses Return, or enters a new cost and presses Return, the selected node is updated with the new values. (Note that using Return to generate an event in a TextField can be done with the setOnAction() method.)
• If the user attempts to edit the Start node, the values return to their defaults (“Start” and 0.0)

MiniGraphView:
• Create class MiniGraphView to show an overview of the entire workspace
• The mini view should be very similar to the MainGraphView, with the following changes:
o The mini view is 200x200 pixels, and does not change size
o The mini view always shows the entire workspace
o The mini view shows the MainGraphView’s viewport as a transparent grey rectangle
o There are no user interactions with the mini view
• You may wish to create an inheritance hierarchy to avoid duplicating code for the two graph views (e.g., MainGraphView and MiniGraphView both extend an abstract GraphView)

Composite app view:
• Create class MainAppView that will hold your other views and additional controls
• Lay out the views inside MainAppView as shown in the picture
• Add a panel “View Controls” between the mini view and the node view
o The view controls should show a slider with extents [0.25..2.0]
o Dragging the slider changes the zoom level in the MainGraphView (and also changes the size of the viewport rectangle in the mini view). Add a variable zoomLevel to InteractionModel to store this value.

Contact us to get JavaFX visualization assignment help at an affordable price at contact@codersarts.com

  • Linear Regression Implementation In Python | Sample Assignment

You will work in teams of at most three students from the previous assignment. The programming code will be graded on both implementation and correctness. The written report will be graded on content, conclusions, and presentation. It must be formatted according to the given template (posted on Canvas). The report will be graded as if the values obtained from the code portion were correct. The report should be short and to the point; the length should be between 2 and 4 pages of text plus any tables and graphs.

Assignment Goals and Tasks
This assignment is intended to build the following skills:
• Implementation of the iterative optimization Gradient Descent algorithm for solving a Linear Regression problem
• Implementation of various regularization techniques
• Polynomial regression
• Learning curves

Assignment Instructions
Note: you are not allowed to use any Scikit-Learn models or functions.
i. The code should be written in a Jupyter notebook and the report should be prepared as a PDF file.
a. Name the notebook `____assignment1.ipynb`
b. Name the PDF `____assignment1.pdf`
ii. The Jupyter notebook and the report should be submitted via webhandin. Only one submission is required for each group.
iii. Use the cover page (posted on Canvas) as the front page of your report.
iv. Download the wine quality dataset from http://archive.ics.uci.edu/ml/datasets/Wine+Quality. You will be using the red wine dataset: “winequality-red.csv”.

Part A: Model Code
1. Implement the following function that generates the polynomial and interaction features for a given degree of the polynomial.

polynomialFeatures(X, degree)
Arguments:
X : ndarray
A numpy array with rows representing data samples and columns representing features (d-dimensional feature).
degree : integer
The degree of the polynomial features. Default = 1.
Returns:
A new feature matrix consisting of all polynomial combinations of the features with degree equal to the specified degree.
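Since the assignment forbids Scikit-Learn, one way to sketch polynomialFeatures is with the standard library's itertools; this is a possible implementation consistent with the spec above, not the assignment's official solution:

```python
from itertools import combinations_with_replacement
import numpy as np

def polynomialFeatures(X, degree=1):
    """Generate all polynomial and interaction features of X for total
    degrees 1..degree (no bias column)."""
    X = np.asarray(X, dtype=float)
    cols = []
    for d in range(1, degree + 1):
        # Every multiset of d feature indices gives one product feature.
        for combo in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, list(combo)], axis=1))
    return np.column_stack(cols)
```

For a single sample [a, b] = [2, 3] and degree=2 this returns [2, 3, 4, 6, 9], i.e. [a, b, a², ab, b²].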
For example, if an input sample is two dimensional and of the form [a, b], the degree-2 polynomial features are [a, b, a², ab, b²].

2. Implement the following function to compute the mean squared error.

mse(Y_true, Y_pred)
Arguments:
Y_true : ndarray
1D array containing data with “float” type. True y values.
Y_pred : ndarray
1D array containing data with “float” type. Values predicted by your model.
Returns:
cost : float
A float value containing the mean squared error between Y_true and Y_pred.
Note: these 1D arrays should be designed as column vectors.

3. Implement the following function to compute training and validation errors. It will be used to plot learning curves. The function takes the feature matrix X (usually the training data matrix) and the training size (from the “train_size” parameter) and, using cross-validation, computes the average mse for the training fold and the validation fold. It iterates through the entire X with an increment step of the “train_size”. For example, if there are 50 samples (rows) in X and the “train_size” is 10, then the function will start from the first 10 samples and will successively add 10 samples in each iteration. During each iteration it will use k-fold cross-validation to compute the average mse for the training fold and the validation fold. Thus, for 50 samples there will be 5 iterations (on 10, 20, 30, 40, and 50 samples), and for each iteration it will compute the cross-validated average mse for the training and the validation fold. For training the model (using the “fit” method) it will use the model parameters from the function argument. The function will return two arrays containing training and validation root-mean-square error (rmse) values.

learning_curve(model, X, Y, cv, train_size=1, learning_rate=0.01, epochs=1000, tol=None, regularizer=None, lambd=0.0, **kwargs)
Arguments:
model : object type that implements the “fit” and “predict” methods. An object of that type which is cloned for each validation.
X : ndarray
A numpy array with rows representing data samples and columns representing features.
Y : ndarray
A 1D numpy array with labels corresponding to each row of the feature matrix X.
cv : int
The number of folds in a k-fold cross-validation.
train_size : int or float
Relative or absolute number of training examples used to generate the learning curve. If the data type is float, it is regarded as a fraction of the maximum size of the training set (determined by the selected validation method), i.e. it has to be within (0, 1]. Otherwise it is interpreted as an absolute size of the training sets.
learning_rate : float
The step size for parameter updates.
epochs : int
The maximum number of passes over the training data for updating the weight vector.
tol : float or None
The stopping criterion. If it is not None, the iterations will stop when (error > previous_error - tol). If it is None, the number of iterations is set by “epochs”.
regularizer : string
One of: l1, l2, None. If it is set to None, the cost function without the regularization term is used for computing the gradient and updating the weight vector. If it is set to l1 or l2, the appropriate regularized cost function is used for computing the gradient and updating the weight vector.
lambd : float
The regularization coefficient. It is used only when “regularizer” is set to l1 or l2.
Returns:
train_scores : ndarray
Root-mean-square error (rmse) values on training sets.
val_scores : ndarray
Root-mean-square error (rmse) values on validation sets.

4. [Extra Credit for 478 and Mandatory for 878] Implement the following function to plot the training and validation root-mean-square error (rmse) values of the data matrix X for polynomial degrees starting from 1 up to the value set by “maxPolynomialDegree”.
It takes the data matrix X (usually the training data matrix) and the maxPolynomialDegree; for each polynomial degree it will augment the data matrix, then use k-fold cross-validation to compute the average mse for both the training and the validation fold. For training the model (using the “fit” method) it will use the model parameters from the function argument. Finally, the function will plot the root-mean-square error (rmse) values for the training and validation folds for each degree of the data matrix, starting from 1 up to maxPolynomialDegree.

plot_polynomial_model_complexity(model, X, Y, cv, maxPolynomialDegree, learning_rate=0.01, epochs=1000, tol=None, regularizer=None, lambd=0.0, **kwargs)
Arguments:
model : object type that implements the “fit” and “predict” methods. An object of that type which is cloned for each validation.
X : ndarray
A numpy array with rows representing data samples and columns representing features.
Y : ndarray
A 1D numpy array with labels corresponding to each row of the feature matrix X.
cv : int
The number of folds in a (stratified) k-fold cross-validation.
maxPolynomialDegree : int
The maximum polynomial degree for X. For example, if it is set to 3, then the function will compute both the training and validation mse values for degrees 1, 2, and 3.
learning_rate : float
The step size for parameter updates.
epochs : int
The maximum number of passes over the training data for updating the weight vector.
tol : float or None
The stopping criterion. If it is not None, the iterations will stop when (error > previous_error - tol). If it is None, the number of iterations is set by “epochs”.
regularizer : string
One of: l1, l2, None. If it’s set to None, the cost function without the regularization term will be used for computing the gradient and updating the weight vector.
However, if it’s set to l1 or l2, the appropriate regularized cost function needs to be used for computing the gradient and updating the weight vector.
lambd : float
The regularization coefficient. It is used only when “regularizer” is set to l1 or l2.
Returns:
There is no return value. This function plots the root-mean-square error (rmse) values for both the training set and the validation set for degrees of X between 1 and maxPolynomialDegree.

5. Implement a Linear_Regression model class. It should have the following three methods. Note that the “fit” method should implement the batch gradient descent algorithm.

a) fit(self, X, Y, learning_rate=0.01, epochs=1000, tol=None, regularizer=None, lambd=0.0, **kwargs)
Arguments:
X : ndarray
A numpy array with rows representing data samples and columns representing features.
Y : ndarray
A 1D numpy array with labels corresponding to each row of the feature matrix X.
learning_rate : float
The step size for parameter updates.
epochs : int
The maximum number of passes over the training data for updating the weight vector.
tol : float or None
The stopping criterion. If it is not None, the iterations will stop when (error > previous_error - tol). If it is None, the number of iterations is set by “epochs”.
regularizer : string
One of: l1, l2, None. If it is set to None, the cost function without the regularization term is used for computing the gradient and updating the weight vector. If it is set to l1 or l2, the appropriate regularized cost function is used. Note: you may define two helper functions for computing the regularized cost for the “l1” and “l2” regularizers.
lambd : float
The regularization coefficient. It is used only when “regularizer” is set to l1 or l2.
Returns:
No return value necessary.
Note: the “fit” method should use a weight vector “theta_hat” that contains the parameters for the model (one parameter for each feature and one for the bias). The “theta_hat” should be a 1D column vector. Finally, it should update the model parameter “theta”, to be used in the “predict” method, as follows: self.theta = theta_hat

b) predict(self, X)
Arguments:
X : ndarray
A numpy array containing samples to be used for prediction. Its rows represent data samples and columns represent features.
Returns:
A 1D array of predictions for each row in X. The 1D array should be designed as a column vector.
Note: the “predict” method uses self.theta to make predictions.

c) __init__(self)
A standard Python initialization function so we can instantiate the class. Just “pass” here.

Part B: Data Processing
6. Read in the winequality-red.csv file as a Pandas data frame.
7. Use the techniques from the recitation to summarize each of the variables in the dataset in terms of mean, standard deviation, and quartiles. Include this in your report. [3 pts]
8. Shuffle the rows of your data. You can use df = df.sample(frac=1) as an idiomatic way to shuffle the data in Pandas without losing column names. [2 pts]
9. Generate pair plots using the seaborn package. These will be used to identify and report redundant features, if there are any.

Part C: Model Evaluation
10. Model selection via hyperparameter tuning: use the kFold function (known as the sFold function from the previous assignment) to evaluate the performance of your model over each combination of lambd, learning_rate, and regularizer from the following sets:
a. lambd = [1.0, 0, 0.1, 0.01, 0.001, 0.0001]
b. learning_rate = [0.1, 0.01, 0.001, 0.001]
c. regularizer = [l1, l2]
d. Store the returned dictionary for each and present it in the report.
e. Determine the best model (model selection) based on the overall performance (lowest average error).
For the error_function argument of the kFold function (known as the sFold function from the previous assignment), use the “mse” function. For the model selection don’t augment the features; in other words, your model selection procedure should use the data matrix X as it is.
11. Evaluate your model on the test data and report the mean squared error.
12. Using the best model, plot the learning curve. Use the rmse values obtained from the “learning_curve” function to plot this curve.
13. Determine the best model hyperparameter values for the training data matrix with polynomial degree 3 and plot the learning curve. Use the rmse values obtained from the “learning_curve” function to plot this curve.
14. [Extra Credit for 478 and Mandatory for 878] Using the plot_polynomial_model_complexity function, plot the rmse values for the training and validation folds for polynomial degrees 1, 2, 3, 4, and 5. Use the training data as input for this function. You need to choose the hyperparameter values judiciously to work on the higher-degree polynomial models.
15. [Extra Credit for both 478 & 878] Implement the Stochastic Gradient Descent Linear Regression algorithm. Using cross-validation, determine the best model.

Part D: Written Report
16. Describe whether or not you used feature scaling, and why or why not.
17. Describe whether or not you dropped any features, and why or why not.
18. In the lecture we studied two types of Linear Regression algorithms: the closed-form solution and iterative optimization. Which algorithm is more suitable for the current dataset? Justify your answer.
19. Would the batch gradient descent and the stochastic gradient descent algorithms learn similar values for the model weights? Justify your answer. Suppose you used a large learning rate: would that make any difference in terms of learning the weights by both algorithms?
20. Consider the learning curve of your model (degree 1).
What conclusions can you draw from the learning curve about (a) whether your model is overfitting/underfitting and (b) its generalization error? Justify your answer.
21. Consider the learning curve of the 3rd-degree polynomial data matrix. What conclusions can you draw from the learning curve about (a) whether your model is overfitting/underfitting and (b) its generalization error? Justify your answer.

Contact us to get the complete solution of this project, or if you need any instant help related to machine learning, at contact@codersarts.com
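For reference, the fit/predict interface from Part A, item 5 can be sketched with batch gradient descent as follows. This is a minimal illustration only: it omits the tol, regularizer, and lambd handling the assignment requires.

```python
import numpy as np

class Linear_Regression:
    """Minimal batch-gradient-descent linear regression sketch
    (no regularization, no early stopping)."""

    def __init__(self):
        pass  # the spec asks only for a standard initializer

    def fit(self, X, Y, learning_rate=0.01, epochs=1000, **kwargs):
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend bias column
        theta_hat = np.zeros((Xb.shape[1], 1))          # column weight vector
        Y = Y.reshape(-1, 1)
        m = Xb.shape[0]
        for _ in range(epochs):
            # Gradient of mean squared error over the whole batch.
            gradient = (2.0 / m) * Xb.T @ (Xb @ theta_hat - Y)
            theta_hat -= learning_rate * gradient
        self.theta = theta_hat                          # as required by the spec

    def predict(self, X):
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])
        return Xb @ self.theta                          # column vector of predictions
```

Trained on data generated from y = 2x + 1, the learned weights converge toward [1, 2] (bias, slope).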
