Predicting Stock Market Behavior using Data Mining Technique and News Sentiment Analysis

ABSTRACT
The stock market has become a focal point of interest because of its essential role in the economy. The huge amount of data generated by the stock market is considered a treasure of knowledge for investors. Sentiment analysis is the process of determining people's attitudes, opinions, evaluations, appraisals and emotions towards entities such as products, services, organizations, individuals, issues, events, topics, and their attributes.
The proposed system provides better prediction accuracy for future stock prices than previous studies by considering multiple types of news related to the market and to the company, together with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to obtain the text polarity using the naïve Bayes algorithm. The second step combines the news polarities with historical stock prices to predict future stock prices.
CHAPTER 1 – INTRODUCTION
Stock market prediction is a very difficult and important task because of the complicated behavior and unstable nature of the stock market. There is a vital need to explore the large quantity of valuable data generated by the stock market. Investors have a pressing need for a better way to predict the future behavior of stock prices; this can help in determining the best time to buy or sell stocks in order to achieve the best profit on their investments. Trading in the stock market may be done physically or electronically.
When an investor buys a company's stock, the investor becomes an owner of the company in proportion to the share of the company's stock held. This gives the stockholder rights to the company's dividends [1]. Financial stock market data is of a complex nature, which makes it difficult to predict or forecast stock market behavior. Data mining can be used to analyze this large and complex quantity of financial data, leading to better results in predicting stock market behavior. Using data mining techniques to analyze the stock market is a rich field of research because of its importance in economics, as higher prices lead to an increase in national income. Data mining tasks are divided into two major categories, descriptive and predictive tasks [2], [3]. In our study we consider the predictive tasks. Classification analysis is employed to predict the stock market behavior. We use the naïve Bayes and K-NN algorithms to build our model.
The prediction of the stock market helps investors in their investment choices by providing them with robust insights into stock market behavior, allowing them to avoid investment risks. It has been found that news has an influence on stock price behavior [4]. Stock market prediction based on news mining is an attractive field of research and involves many challenges owing to the unstructured nature of news. News mining may be defined as the process of extracting hidden, useful and possibly unknown patterns from news data to obtain knowledge. Text mining is a technique used to handle this unstructured data; in data mining it is also known as the Knowledge Discovery in Text (KDT) step. The study in [4] investigates the relation between financial news and stock market volatility, and reveals that there is a relation between news sentiment and stock price changes.
Sentiment analysis is the process of determining people's attitudes, opinions, evaluations, appraisals and emotions towards entities such as products, services, organizations, individuals, issues, events, topics, and their attributes [5]. Sentiment analysis is considered a specific branch of data mining that classifies textual data into positive, negative and neutral sentiments [28]. Zubair et al. [6] analyze the correlations between Reuters news sentiment and the S&P 500 index over five years of data. This is done using the Harvard General Inquirer to obtain positive or negative sentiment; a Kalman filter is then used for smoothing, estimation and noise reduction.
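The naïve Bayes classification of text into sentiment classes used in this work can be sketched in miniature as follows. The four training headlines and the two-class setup are hypothetical stand-ins for a labeled financial-news corpus; a real system would train on far more data and usually include a neutral class.

```python
from collections import Counter, defaultdict
import math

# Hypothetical labeled training headlines (a real corpus would be much larger).
train = [
    ("profits surge on strong earnings", "pos"),
    ("shares rally after upbeat report", "pos"),
    ("stock plunges on weak earnings", "neg"),
    ("heavy losses drag shares down", "neg"),
]

word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in train:
    class_counts[label] += 1
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    scores = {}
    for label in class_counts:
        # Log prior plus log likelihoods with Laplace smoothing.
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("shares surge after strong report"))  # → pos
```

The smoothing term keeps unseen words from driving a class probability to zero, which matters when headlines are short.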
The results demonstrate that there is a strong correlation between the S&P 500 index and the negative economic sentiment time series. Text preprocessing [7], [8] is a vital and important task in text mining, natural language processing and information retrieval. It is used for preparing unstructured data for knowledge extraction. There are many different tasks in text preprocessing; tokenization, stop-word removal and stemming are among the most common techniques. Tokenization is the process of splitting the text into a stream of words called tokens. Tokenization is important in the linguistics and computing fields and is considered a part of lexical analysis. Identifying the meaningful keywords is the main goal of tokenization. Stop-word removal is the process of removing frequently recurring words that do not carry any significant meaning in the document, such as "the", "and", "are", "this", etc. Stemming aims at reducing the variations of a word to a common representation by removing suffixes [7].
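The three preprocessing steps described above can be sketched in plain Python. The stop-word list and the crude suffix-stripping stemmer below are simplified stand-ins for real resources such as NLTK's stop-word corpus and the Porter stemmer:

```python
import re

# Toy stop-word list; a real pipeline would use a full corpus list.
STOP_WORDS = {"the", "and", "are", "this", "is", "a", "of", "to", "in"}

def tokenize(text):
    # Split the text into a stream of lowercase word tokens.
    return re.findall(r"[a-z']+", text.lower())

def remove_stop_words(tokens):
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token):
    # Very crude suffix stripping, for illustration only.
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    return [stem(t) for t in remove_stop_words(tokenize(text))]

print(preprocess("The markets are rallying and prices jumped"))
# → ['market', 'rally', 'pric', 'jump']
```

Note how the toy stemmer over-stems "prices" to "pric"; real stemmers trade off exactly this kind of error against coverage.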
In this paper, the proposed approach uses sentiment analysis of financial news, together with features extracted from historical stock prices, to predict the future behavior of the stock market. The prediction model uses the naïve Bayes and K-NN algorithms. This is done by considering different types of news related to companies, markets and financial reports, and by applying different techniques for numeric data preprocessing as well as text analysis for handling the unstructured news data. The competitive advantage of stock market trend prediction achieved by data mining and sentiment analysis includes maximization of profit and minimization of costs and risks, along with improving investors' awareness of the stock market, which leads to accurate investment decisions.
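The second stage, joining daily news polarity with historical closing prices to build a labeled dataset, can be sketched with Pandas. All values below are hypothetical; a real pipeline would feed the resulting feature rows to the naïve Bayes or K-NN classifier:

```python
import pandas as pd

# Hypothetical daily closing prices and per-day news polarity scores
# (the polarity column stands in for the output of the news classifier).
prices = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-04"]),
    "close": [100.0, 102.0, 101.0, 105.0],
})
news = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-04"]),
    "polarity": [0.4, -0.1, 0.2, 0.6],
})

data = prices.merge(news, on="date")
# Label each day 1 if the next day's close is higher, else 0.
data["target"] = (data["close"].shift(-1) > data["close"]).astype(int)
features = data.iloc[:-1]  # the last day has no next-day label
print(features[["close", "polarity", "target"]])
```

Joining on the date column keeps prices and polarities aligned even when one source has missing days.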

1.1 System Specifications
Software Requirements:
• Jupyter Notebook
• Anaconda Server
• Python language
• Pandas library
CHAPTER 2 – LITERATURE REVIEW
Several approaches for predicting stock market behavior and price trends have been studied in the literature. Some of these studies target improving the accuracy of prediction based on sentiment analysis of news or tweets together with stock prices, such as [9]. Others target price prediction over different time frames, such as [10]. Moreover, different research approaches have shown that there is a strong correlation between financial news and stock price changes, such as [4], [6]. Finally, research studies have been conducted to improve prediction accuracy, such as [11], [12]. All previous studies face a challenge due to the difficulty of handling unstructured data. All approaches are based on text mining techniques to predict the stock market trend; some depend on textual information compared with only closing prices, while others depend on textual information and stock price charts or screen tickers, such as [6].

A. Studies Relying on Social Media Information Analysis

L.I. Bing et al. [13] proposed an algorithm to predict stock price movement with accuracy up to 76.12% by analyzing public social media information represented in tweet data. Bing adopted a model to analyze public tweets and hourly stock price trends. NLP techniques were used together with data mining techniques to find relationship patterns between public sentiment and numeric stock prices. This study investigates whether there is an internal association within the multilayer stratified structures, and found that there is a relation between the internal layers and the top layer of the unstructured data. This study considers only daily closing values for historical stock prices. Y. E. Cakra [14] proposed a model to predict the Indonesian stock market based on tweet sentiment analysis. The model has three objectives: price fluctuation prediction, margin percentage and stock price.
Five supervised classification algorithms were used in tweet prediction: support vector machine, naïve Bayes, decision tree, random forest and neural network. This study showed that the random forest and naïve Bayes classifiers outperformed the other algorithms, with accuracies of 60.39% and 56.50% respectively. Also, linear regression performed well on price prediction with 67.73% accuracy. The limitation of this study is that the prediction model is built only on the prices of the five previous days. Hana and Hasan [9] used hourly stock news and breaking tweets together with one-hour stock price charts to predict whether the hourly stock price direction would increase or decrease. This study investigates whether the information in news stories together with breaking-tweet volume indicates a statistically significant boost in hourly directional prediction. The results demonstrated that logistic regression with 1-gram keyword features performed well in directional prediction; in addition, using extracted document-level sentiment features does not give a statistically significant boost in hourly directional prediction. However, this study depends only on breaking news for hourly prediction.

B. Studies Relying on News Analysis

Patric et al. [10] used several integrated text mining methods for sentiment analysis in financial markets, combining word association and lexical resources to analyze stock market news reports. The study analyzes German-language text using the SentiWS tool for sentiment analysis at different levels. The stock price screens are compared to a sentiment measures model to produce investor recommendations for one week, to help investors avoid investment risks. Shynkevichl et al. [15] used multiple kernel learning (MKL) methods to analyze two categories of news: articles related to a sub-industry and articles related to a target stock.
The analysis investigates whether these two categories can enhance the accuracy of stock trend prediction based on news data and historical stock price data. The historical stock prices used in Shynkevichl's study are the open and close attributes. This study reveals that using different categories of news can enhance prediction accuracy up to 79.59% when polynomial kernels are applied to the news categories. The study also showed that the support vector machine and k-NN achieve worse prediction accuracy. In [16] association rule mining has been used to uncover stock market patterns and generate rules to predict the stock price, helping investors in their investment decisions. The prediction gives investors clear insight to decide whether to buy, sell or hold shares. The association rule mining used six important trading technical indicators to generate rules. The naïve Bayes algorithm has been used to predict the class label for the investor, such as sell, buy or hold, for each stock. This is done by considering the effects of all technical indicator values and selecting the technical indicator that has the highest probability. The limitation of this research is that it uses the price only, without the textual financial information, which leaves it unable to provide information about events extracted from financial news. Ho'ang and Phayung [11] proposed a model to predict stock price trends using Vietnam stock market index price data together with information from news publications. In this study, the support vector machine algorithm is combined with linear SVM. The results of Hoang's model demonstrate that the prediction accuracy is improved up to 75%. This study also used only the closing prices of the index to predict the trend.
Jageshwer and Shagufta [12] analyzed the impact of financial news on stock market price prediction and daily changes in index movements. The main aim of this study is to improve prediction accuracy by combining technical analysis with a rule-based classifier. The prediction model depends on the financial news and the monthly average of daily stock prices. Ruchi and Gandhi [17] gave a model to predict stock trends by analyzing the non-quantifiable information given in news articles. An NLP method is built into this model using SentiWordNet 0.3 together with a statistical-parameter-based module. The model used the stock intrinsic values of open and close to output the sentence polarity and the behavior, either positive or negative. The obtained behavior relies on a statistical parameter, but this study could be improved using other attributes that affect stock prices directly, together with data mining prediction algorithms. Sadi et al. [18] investigated the correlation between economic news and time series analysis methods over the charts of stock market closing values. Ten methods were applied for time series analysis, along with the SVM and KNN classifiers. Y. Kim et al. [19] explored stock market trend prediction using opinion mining analysis of economic news. Kim's study assumed that there is a strong relation between news and stock price changes, whether positive or negative. This model is built using NLP, news sentiment and an opinion-mining-based sentiment lexicon. This study achieved a prediction accuracy ranging from 60% to 65%. S. Abdullah et al. [20] analyzed the Bangladesh stock market using text mining and NLP techniques to extract fundamental information from textual data.
This study used an information retrieval algorithm and Apache OpenNLP, a Java-based machine learning toolkit for natural language processing, to analyze textual data related to the stock market. The study considered various fundamental factors, including EPS, P/E ratio, beta, correlation and variance, together with the price trend from historical data, in order to match them to the extracted fundamental information. The aim of this study was to help investors make their investment decisions on buy or sell signals. The previously conducted research is based on textual data analysis and achieved accuracies that do not exceed a range of 75% to 80% for stock trend prediction. For news polarities, the prediction accuracy does not exceed 76%. The study proposed in this paper aims at minimizing losses by achieving high prediction accuracy based on sentiment and historical numeric data analysis. The previous research mentioned above differs in prediction horizon: some predicts price fluctuation 5 to 20 minutes, hourly, or daily after news releases. Among the goals of previous research is to produce investor recommendations, such as [10]; other work predicts only news polarities compared with the actual trend from historical data. Attempts to predict the stock market from its history are not limited to data mining models; there are many studies designed to predict the stock market using neural networks and artificial intelligence, such as [29], [30]. In this study, we aim to construct a model that predicts news sentiment using NLP techniques and then predicts the future stock price trend using data mining techniques. The proposed study presents a new approach with improved prediction accuracy, to avoid the large losses and risks of investment, maximize stock market profits, and thereby help avoid economic crises.
CHAPTER 3 – OVERALL DESCRIPTION OF THE PROPOSED SYSTEM
3.1 Existing Solution:
• Stock market decision making is a very difficult and important task due to the complex behavior and the unstable nature of the stock market.
• All investors usually have the imminent need of finding a better way to predict the future behavior of stock prices
• Financial data of stock market is of complex nature, which makes it difficult to predict or forecast the stock market behavior.

3.2 Proposed System:
Stock market prediction based on news mining is an attractive field of research; a live Twitter dataset is used to fetch the news mining knowledge. The proposed approach uses sentiment analysis of financial news, along with features extracted from historical stock prices, to predict the future behavior of the stock market. The sentiment analysis covers different types of news related to companies, markets and financial reports. Its benefits include maximization of profit and minimization of costs and risks, along with improving investors' awareness of the stock market, which leads to accurate investment decisions.

3.3 System Modules:

1. Load Packages
• NumPy
• Pandas
• Tweepy
2. Twitter Developer API Configuration
3. Live Stream Twitter Stock Data #NDTV Profit
4. Preprocessing
5. Sentiment Analysis
6. Reports
• Tweets
• Likes
• Retweets
• Stock prediction


3.4 Module Description
3.4.1 Load Packages – Load the NumPy, Pandas and Tweepy packages
• NumPy - This is the fundamental package for scientific computing with Python. Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data.
• Pandas - This is an open source library providing high-performance, easy-to-use data structures and data analysis tools.
• Tweepy - This is an easy-to-use Python library for accessing the Twitter API.
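As a sketch, the three packages can be loaded as follows. The guarded tweepy import is an added precaution not in the original module list: tweepy is only needed for the live-streaming step, so its absence should not break offline analysis.

```python
import numpy as np
import pandas as pd

# tweepy is only needed for the live Twitter streaming step.
try:
    import tweepy  # noqa: F401
    HAVE_TWEEPY = True
except ImportError:
    HAVE_TWEEPY = False

print("numpy", np.__version__, "| pandas", pd.__version__,
      "| tweepy available:", HAVE_TWEEPY)
```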

3.4.2 Twitter Developer API Configuration - In order to extract tweets for later analysis, we need to access our Twitter account and create an app.
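A typical Tweepy configuration looks like the sketch below. The four credential strings are placeholders to be replaced with the keys generated for the app in the Twitter developer portal; they should never be committed to source code.

```python
import tweepy

# Placeholder credentials -- substitute the values from your
# Twitter developer app; do not hard-code real keys in shared code.
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

# OAuth 1a authentication, then an API handle that backs off
# automatically when the rate limit is reached.
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)
```

This is a configuration fragment; it performs no network call until an API method is invoked.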
3.4.3 Live Stream Twitter Stock Data #NDTV Profit - Both Twitter and BSE stated that the partnership is a move towards democratising financial data by enabling millions of Indian investors to easily access exchange and stock-related data through a digital platform.

3.4.4 Preprocessing – The interesting part here is the amount of data contained in a single tweet. If we want to obtain data such as the creation date or the source of creation, we can access that information through the tweet's attributes.
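The attribute access described above can be sketched as follows. The two tweet objects are hypothetical stand-ins for the Status objects a live Tweepy stream returns, but the attribute names (text, created_at, source, favorite_count, retweet_count) are the ones Tweepy exposes:

```python
from types import SimpleNamespace
import pandas as pd

# Hypothetical stand-ins for tweepy Status objects from the live stream.
tweets = [
    SimpleNamespace(text="Markets rally on strong earnings", created_at="2024-01-02",
                    source="Twitter Web App", favorite_count=12, retweet_count=4),
    SimpleNamespace(text="Index slips after weak data", created_at="2024-01-02",
                    source="Twitter for Android", favorite_count=3, retweet_count=1),
]

# Collect the per-tweet attributes into a DataFrame for analysis.
df = pd.DataFrame({
    "text": [t.text for t in tweets],
    "created_at": [t.created_at for t in tweets],
    "source": [t.source for t in tweets],
    "likes": [t.favorite_count for t in tweets],
    "retweets": [t.retweet_count for t in tweets],
})
print(df)
```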


3.4.5 Sentiment Analysis - We will also use the re library from Python, which is used to work with regular expressions. Two utility functions are provided: a) one to clean the text, and b) a classifier to analyze the polarity of each tweet after its text has been cleaned.
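The two utility functions can be sketched as below. The tiny positive/negative word lists are illustrative only; a real system would use a trained classifier or a sentiment library such as TextBlob.

```python
import re

# Illustrative sentiment word lists (stand-ins for a trained classifier).
POSITIVE = {"gain", "rally", "profit", "up", "surge"}
NEGATIVE = {"loss", "fall", "down", "crash", "drop"}

def clean_tweet(tweet):
    # Remove mentions, links and special characters, collapse whitespace.
    return " ".join(
        re.sub(r"(@\w+)|(\w+:\/\/\S+)|([^A-Za-z \t])", " ", tweet).split()
    )

def analyze_polarity(tweet):
    # Return 1 for positive, -1 for negative, 0 for neutral.
    words = set(clean_tweet(tweet).lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return 1 if score > 0 else -1 if score < 0 else 0

print(analyze_polarity("Stocks rally! https://t.co/x @trader"))  # → 1
```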

3.4.6 Reports - To have a simple way to verify the results:
• Tweets - we count the number of neutral, positive and negative tweets and extract the percentages.
• Likes - we count the number of likes and extract the percentages.
• Retweets - we count the retweets for all neutral, positive and negative tweets and extract the percentages.
• Stock prediction - we predict the stock movement in the market and extract the probabilities.
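Computing the report percentages from per-tweet polarity labels is straightforward; the label list below is hypothetical:

```python
# Hypothetical per-tweet polarity labels: 1 = positive, 0 = neutral, -1 = negative.
polarities = [1, 1, 0, -1, 1, 0]
n = len(polarities)

pct_positive = 100 * polarities.count(1) / n
pct_neutral = 100 * polarities.count(0) / n
pct_negative = 100 * polarities.count(-1) / n

print(f"positive: {pct_positive:.1f}%  "
      f"neutral: {pct_neutral:.1f}%  negative: {pct_negative:.1f}%")
```

The same counting applies to likes and retweets by summing the corresponding counters per polarity class.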
3.5 System Features
In the life cycle of software development, problem analysis provides a base for the design and development phases. The problem is analyzed so that sufficient material is available to design a new system. Large problems are sub-divided into smaller ones to make them understandable and easier to solve. Likewise, in this project all the tasks are sub-divided and categorized.

CHAPTER 4 – DESIGN
Design is the first step in the development phase; it applies techniques and principles for the purpose of defining a device, a process or a system in sufficient detail to permit its physical realization.
Once the software requirements have been analyzed and specified, software design involves three technical activities - design, coding, and testing - that are required to build and verify the software.
The design activities are of main importance in this phase, because in this activity, decisions ultimately affecting the success of the software implementation and its ease of maintenance are made. These decisions have the final bearing upon reliability and maintainability of the system. Design is the only way to accurately translate the customer’s requirements into finished software or a system.
Design is the place where quality is fostered in development. Software design is a process through which requirements are translated into a representation of the software. Software design is conducted in two steps. Preliminary design is concerned with the transformation of requirements into data and software architecture.

4.1 UML Diagrams:
UML stands for Unified Modeling Language. UML is a language for specifying, visualizing and documenting the system. This is the step while developing any product after analysis. The goal from this is to produce a model of the entities involved in the project which later need to be built. The representation of the entities that are to be used in the product being developed need to be designed.

There are various kinds of methods in software design:
• Use case Diagram
• Sequence Diagram
• Collaboration Diagram
4.1.1 Use Case Diagrams:
Use case diagrams model behavior within a system and help developers understand what the user requires. The stick man represents what is called an actor. A use case diagram can be useful for getting an overall view of the system and clarifying who can do what and, more importantly, what they can't do.




Use case diagram consists of use cases and actors and shows the interaction between the use case and actors.
• The purpose is to show the interactions between the use case and actor.
• To represent the system requirements from user’s perspective.
• An actor could be the end-user of the system or an external system.

4.1.2 Sequence Diagram:
Sequence diagrams and collaboration diagrams are called interaction diagrams. An interaction diagram shows an interaction, consisting of a set of objects and their relationships, including the messages that may be dispatched among them.
A sequence diagram is an interaction diagram that emphasizes the time ordering of messages. Graphically, a sequence diagram is a table that shows objects arranged along the X-axis and messages ordered in increasing time along the Y-axis.


4.1.3 Collaboration Diagram:
A collaboration diagram, also called a communication diagram or interaction diagram, is an illustration of the relationships and interactions among software objects in the Unified Modeling Language (UML).

Data Flow Diagram

CHAPTER 5 - OUTPUT SCREENSHOTS
CHAPTER 6 – IMPLEMENTATION DETAILS
6.1 Introduction to HTML
Hyper Text Markup Language, commonly referred to as HTML, is the standard markup language used to create web pages. Along with CSS and JavaScript, HTML is a cornerstone technology used to create web pages, as well as user interfaces for mobile and web applications. Web browsers read HTML files and render them into visible or audible web pages. HTML describes the structure of a website semantically along with cues for presentation, making it a markup language rather than a programming language.
HTML elements form the building blocks of HTML pages. HTML allows images and other objects to be embedded, and it can be used to create interactive forms. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as <img /> and <input /> introduce content into the page directly. Others, such as <p>...</p>, surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page.
HTML can embed scripts written in languages such as JavaScript which affect the behavior of HTML web pages. HTML markup can also refer the browser to Cascading Style Sheets (CSS) to define the look and layout of text and other material.
The World Wide Web Consortium (W3C), maintainer of both the HTML and the CSS standards, has encouraged the use of CSS over explicit presentational HTML since 1997.[2]
In 1980, physicist Tim Berners-Lee, a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. In 1989, Berners-Lee wrote a memo proposing an Internet-based hypertext system.[3] Berners-Lee specified HTML and wrote the browser and server software in late 1990. That year, Berners-Lee and CERN data systems engineer Robert Cailliau collaborated on a joint request for funding, but the project was not formally adopted by CERN. In his personal notes[4] from 1990 he listed[5] "some of the many areas in which hypertext is used" and put an encyclopedia first.
The first publicly available description of HTML was a document called "HTML Tags", first mentioned on the Internet by Tim Berners-Lee in late 1991.[6][7] It describes 18 elements comprising the initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly influenced by SGMLguid, an in-house Standard Generalized Markup Language (SGML)-based documentation format at CERN. Eleven of these elements still exist in HTML 4.[8]
HTML is a markup language that web browsers use to interpret and compose text, images, and other material into visual or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are found in the 1988 ISO technical report TR 9537 Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system: these formatting commands were derived from the commands used by typesetters to manually format documents. However, the SGML concept of generalized markup is based on elements (nested annotated ranges with attributes) rather than merely print effects, with also the separation of structure and markup; HTML has been progressively moved in this direction with CSS.
Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification, the "Hypertext Markup Language (HTML)" Internet Draft by Berners-Lee and Dan Connolly, which included an SGML Document Type Definition to define the grammar.[9][10] The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes.[11] Similarly, Dave Raggett's competing Internet-Draft, "HTML+ (Hypertext Markup Format)", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms.[12]
After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard against which future implementations should be based.[13]
Further development under the auspices of the IETF was stalled by competing interests. Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C).[14] However, in 2000, HTML also became an international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with further errata published through 2001. In 2004, development began on HTML5 in the Web Hypertext Application Technology Working Group (WHATWG), which became a joint deliverable with the W3C in 2008, and completed and standardized on 28 October 2014.[15]
6.2 Cascading Style Sheets (CSS)
CSS is a style sheet language used for describing the presentation of a document written in a markup language. Although most often used to set the visual style of web pages and user interfaces written in HTML and XHTML, the language can be applied to any XML document, including plain XML, SVG and XUL, and is applicable to rendering in speech, or on other media. Along with HTML and JavaScript, CSS is a cornerstone technology used by most websites to create visually engaging webpages, user interfaces for web applications, and user interfaces for many mobile applications.
CSS is designed primarily to enable the separation of document content from document presentation, including aspects such as the layout, colors, and fonts. This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple HTML pages to share formatting by specifying the relevant CSS in a separate .css file, and reduce complexity and repetition in the structural content, such as semantically insignificant tables that were widely used to format pages before consistent CSS rendering was available in all major browsers. CSS makes it possible to separate presentation instructions from the HTML content in a separate file or style section of the HTML file. For each matching HTML element, it provides a list of formatting instructions. For example, a CSS rule might specify that "all heading 1 elements should be bold", leaving pure semantic HTML markup that asserts "this text is a level 1 heading" without formatting code such as a <bold> tag indicating how such text should be displayed.



This separation of formatting and content makes it possible to present the same markup page in different styles for different rendering methods, such as on-screen, in print, by voice (when read out by a speech-based browser or screen reader) and on Braille-based, tactile devices. It can also be used to display the web page differently depending on the screen size or device on which it is being viewed. Although the author of a web page typically links to a CSS file within the markup file, readers can specify a different style sheet, such as a CSS file stored on their own computer, to override the one the author has specified. If the author or the reader did not link the document to a style sheet, the default style of the browser will be applied. Another advantage of CSS is that aesthetic changes to the graphic design of a document (or hundreds of documents) can be applied quickly and easily, by editing a few lines in one file, rather than by a laborious (and thus expensive) process of crawling over every document line by line, changing markup.



The CSS specification describes a priority scheme to determine which style rules apply if more than one rule matches against a particular element. In this so-called cascade, priorities (or weights) are calculated and assigned to rules, so that the results are predictable.



Cascading Style Sheets (CSS) is a style sheet language used for describing the presentation of a document written in a markup language.[1] Although most often used to set the visual style of web pages and user interfaces written in HTML and XHTML, the language can be applied to any XML document, including plain XML, SVG and XUL, and is applicable to rendering in speech, or on other media. Along with HTML and JavaScript, CSS is a cornerstone technology used by most websites to create visually engaging webpages, user interfaces for web applications, and user interfaces for many mobile applications.[2]



The CSS specifications are maintained by the World Wide Web Consortium (W3C). Internet media type (MIME type) text/css is registered for use with CSS by RFC 2318 (March 1998). The W3C operates a free CSS validation service for CSS documents.



In CSS, selectors declare which part of the markup a style applies to by matching tags and attributes in the markup itself.



Selectors may apply to:



all elements of a specific type, e.g. the second-level headers h2



elements specified by attribute, in particular:



id: an identifier unique within the document



class: an identifier that can annotate multiple elements in a document



elements depending on how they are placed relative to others in the document tree.



Classes and IDs are case-sensitive, start with letters, and can include alphanumeric characters and underscores. A class may apply to any number of instances of any elements. An ID may only be applied to a single element.



Pseudo-classes are used in CSS selectors to permit formatting based on information that is not contained in the document tree. One example of a widely used pseudo-class is :hover, which identifies content only when the user "points to" the visible element, usually by holding the mouse cursor over it. It is appended to a selector as in a:hover or #elementid:hover. A pseudo-class classifies document elements, such as :link or :visited, whereas a pseudo-element makes a selection that may consist of partial elements, such as ::first-line or ::first-letter.[5]



Selectors may be combined in many ways to achieve great specificity and flexibility.[6] Multiple selectors may be joined in a spaced list to specify elements by location, element type, id, class, or any combination thereof. The order of the selectors is important. For example, div .myClass { color: red; } applies to all elements of class myClass that are inside div elements, whereas .myClass div { color: red; } applies to all div elements that are in elements of class myClass.
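The specificity behind this ordering can be made concrete. The following Python sketch computes a simplified specificity tuple (ids, classes, types) for a selector; it is illustrative only and ignores attribute selectors, pseudo-classes, pseudo-elements, !important, and inline styles, which the real CSS algorithm also weighs:

```python
import re

def specificity(selector: str):
    """Simplified CSS specificity: (#ids, .classes, element types).

    Illustrative sketch only -- not the full CSS algorithm.
    """
    ids = len(re.findall(r'#[\w-]+', selector))
    classes = len(re.findall(r'\.[\w-]+', selector))
    # Type selectors: element names at the start or after a combinator.
    types = len(re.findall(r'(?:^|[\s>+~])([a-zA-Z][\w-]*)', selector))
    return (ids, classes, types)

# Higher tuples win when several matching rules set the same property.
assert specificity('h1') == (0, 0, 1)
assert specificity('div .myClass') == (0, 1, 1)
assert specificity('#header h1') > specificity('div .myClass')
```

When several rules match the same element, the rule whose tuple compares highest wins, which is why a rule on #header h1 outranks one on div .myClass.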



CSS information can be provided from various sources. These sources can be the web browser, the user and the author. The information from the author can be further classified into inline, media type, importance, selector specificity, rule order, inheritance and property definition. CSS style information can be in a separate document or it can be embedded into an HTML document. Multiple style sheets can be imported. Different styles can be applied depending on the output device being used; for example, the screen version can be quite different from the printed version, so that authors can tailor the presentation appropriately for each medium.



The style sheet with the highest priority controls the content display. Declarations not set in the highest priority source are passed on to a source of lower priority, such as the user agent style. This process is called cascading.



One of the goals of CSS is to allow users greater control over presentation. Someone who finds red italic headings difficult to read may apply a different style sheet. Depending on the browser and the web site, a user may choose from various style sheets provided by the designers, or may remove all added styles and view the site using the browser's default styling, or may override just the red italic heading style without altering other attributes.



CSS was first proposed by Håkon Wium Lie on October 10, 1994.[16] At the time, Lie was working with Tim Berners-Lee at CERN.[17] Several other style sheet languages for the web were proposed around the same time, and discussions on public mailing lists and inside World Wide Web Consortium resulted in the first W3C CSS Recommendation (CSS1)[18] being released in 1996. In particular, Bert Bos' proposal was influential; he became co-author of CSS1 and is regarded as co-creator of CSS.[19]



Style sheets have existed in one form or another since the beginnings of Standard Generalized Markup Language (SGML) in the 1980s, and CSS was developed to provide style sheets for the web.[20] One requirement for a web style sheet language was for style sheets to come from different sources on the web. Therefore, existing style sheet languages like DSSSL and FOSI were not suitable. CSS, on the other hand, let a document's style be influenced by multiple style sheets by way of "cascading" styles.[20]



As HTML grew, it came to encompass a wider variety of stylistic capabilities to meet the demands of web developers. This evolution gave the designer more control over site appearance, at the cost of more complex HTML. Variations in web browser implementations, such as ViolaWWW and WorldWideWeb,[21] made consistent site appearance difficult, and users had less control over how web content was displayed. The browser/editor developed by Tim Berners-Lee had style sheets that were hard-coded into the program. The style sheets could therefore not be linked to documents on the web.[22] Robert Cailliau, also of CERN, wanted to separate the structure from the presentation so that different style sheets could describe different presentation for printing, screen-based presentations, and editors.[21]



Improving web presentation capabilities was a topic of interest to many in the web community and nine different style sheet languages were proposed on the www-style mailing list.[20] Of these nine proposals, two were especially influential on what became CSS: Cascading HTML Style Sheets[16] and Stream-based Style Sheet Proposal (SSP).[19][23] Two browsers served as testbeds for the initial proposals; Lie worked with Yves Lafon to implement CSS in Dave Raggett's Arena browser.[24][25][26] Bert Bos implemented his own SSP proposal in the Argo browser.[19] Thereafter, Lie and Bos worked together to develop the CSS standard (the 'H' was removed from the name because these style sheets could also be applied to other markup languages besides HTML).[17]



Lie's proposal was presented at the "Mosaic and the Web" conference (later called WWW2) in Chicago, Illinois in 1994, and again with Bert Bos in 1995.[17] Around this time the W3C was already being established, and took an interest in the development of CSS. It organized a workshop toward that end chaired by Steven Pemberton. This resulted in W3C adding work on CSS to the deliverables of the HTML editorial review board (ERB). Lie and Bos were the primary technical staff on this aspect of the project, with additional members, including Thomas Reardon of Microsoft, participating as well. In August 1996 Netscape Communication Corporation presented an alternative style sheet language called JavaScript Style Sheets (JSSS).[17] The spec was never finished and is deprecated.[27] By the end of 1996, CSS was ready to become official, and the CSS level 1 Recommendation was published in December.



Development of HTML, CSS, and the DOM had all been taking place in one group, the HTML Editorial Review Board (ERB). Early in 1997, the ERB was split into three working groups: HTML Working group, chaired by Dan Connolly of W3C; DOM Working group, chaired by Lauren Wood of SoftQuad; and CSS Working group, chaired by Chris Lilley of W3C.



The CSS Working Group began tackling issues that had not been addressed with CSS level 1, resulting in the creation of CSS level 2 on November 4, 1997. It was published as a W3C Recommendation on May 12, 1998. CSS level 3, which was started in 1998, is still under development as of 2014.



In 2005 the CSS Working Groups decided to enforce the requirements for standards more strictly. This meant that already published standards like CSS 2.1, CSS 3 Selectors and CSS 3 Text were pulled back from Candidate Recommendation to Working Draft level.



6.3 MYSQL Server



MySQL is an open-source relational database management system (RDBMS);[6] in July 2013, it was the world's second most widely used RDBMS, and the most widely used open-source client–server model RDBMS. It is named after co-founder Michael Widenius's daughter, My. The SQL acronym stands for Structured Query Language. The MySQL development project has made its source code available under the terms of the GNU General Public License, as well as under a variety of proprietary agreements. MySQL was owned and sponsored by a single for-profit firm, the Swedish company MySQL AB, now owned by Oracle Corporation. For proprietary use, several paid editions are available, and offer additional functionality.



SQL Server Management Studio (SSMS) is a software application first launched with Microsoft SQL Server 2005 that is used for configuring, managing, and administering all components within Microsoft SQL Server. The tool includes both script editors and graphical tools which work with objects and features of the server.[1]



A central feature of SSMS is the Object Explorer, which allows the user to browse, select, and act upon any of the objects within the server.[2] Microsoft also shipped a separate Express edition that could be freely downloaded; however, recent versions of SSMS are fully capable of connecting to and managing any SQL Server Express instance. Microsoft also incorporated backwards compatibility for older versions of SQL Server, thus allowing a newer version of SSMS to connect to older versions of SQL Server instances.



Starting from version 11, the application was based on the Visual Studio 2010 shell, using WPF for the user interface.



In June 2015, Microsoft announced their intention to release future versions of SSMS independently of SQL Server database engine releases.[3]







6.4 PHP



PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language. Originally created by Rasmus Lerdorf in 1994, the PHP reference implementation is now produced by The PHP Group. PHP originally stood for Personal Home Page, but it now stands for the recursive backronym PHP: Hypertext Preprocessor.



PHP code may be embedded into HTML code, or it can be used in combination with various web template systems, web content management systems and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable. The web server combines the results of the interpreted and executed PHP code, which may be any type of data, including images, with the generated web page. PHP code may also be executed with a command-line interface (CLI) and can be used to implement standalone graphical applications.
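The embed-and-interpret cycle described above can be illustrated in a language-neutral way. The sketch below (Python, not PHP) substitutes PHP-inspired <?= ... ?> markers in an HTML template before the page would be sent to the client; the render function and the marker syntax are invented for illustration only:

```python
import re

def render(template: str, context: dict) -> str:
    """Toy server-side templating: replace <?= name ?> markers with values.

    Only illustrates the general idea of server-side page generation
    (code embedded in HTML, executed before the page is sent); it is
    not PHP, and the marker syntax is merely PHP-inspired.
    """
    def substitute(match):
        key = match.group(1)
        return str(context.get(key, ""))
    return re.sub(r'<\?=\s*(\w+)\s*\?>', substitute, template)

page = "<html><body><h1>Hello, <?= user ?>!</h1></body></html>"
html = render(page, {"user": "Alice"})
print(html)  # <html><body><h1>Hello, Alice!</h1></body></html>
```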



The standard PHP interpreter, powered by the Zend Engine, is free software released under the PHP License. PHP has been widely ported and can be deployed on most web servers on almost every operating system and platform, free of charge.



The PHP language evolved without a written formal specification or standard until 2014, leaving the canonical PHP interpreter as a de facto standard. Since 2014 work has gone on to create a formal PHP specification.



PHP development began in 1995 when Rasmus Lerdorf wrote several Common Gateway Interface (CGI) programs in C,[10][11][12] which he used to maintain his personal homepage. He extended them to work with web forms and to communicate with databases, and called this implementation "Personal Home Page/Forms Interpreter" or PHP/FI.



PHP/FI could help to build simple, dynamic web applications. To accelerate bug reporting and to improve the code, Lerdorf initially announced the release of PHP/FI as "Personal Home Page Tools (PHP Tools) version 1.0" on the Usenet discussion group comp.infosystems.www.authoring.cgi on June 8, 1995.[13][14] This release already had the basic functionality that PHP has as of 2013. This included Perl-like variables, form handling, and the ability to embed HTML. The syntax resembled that of Perl but was simpler, more limited and less consistent.[5]



Lerdorf did not intend the early PHP to become a new programming language, but it grew organically, with Lerdorf noting in retrospect: "I don’t know how to stop it, there was never any intent to write a programming language […] I have absolutely no idea how to write a programming language, I just kept adding the next logical step on the way."[15] A development team began to form and, after months of work and beta testing, officially released PHP/FI 2 in November 1997.



The fact that PHP lacked an original overall design but instead developed organically has led to inconsistent naming of functions and inconsistent ordering of their parameters.[16] In some cases, the function names were chosen to match the lower-level libraries which PHP was "wrapping",[17] while in some very early versions of PHP the length of the function names was used internally as a hash function, so names were chosen to improve the distribution of hash values.[18]







6.5 ANGULARJS



AngularJS (commonly referred to as "Angular" or "Angular.js") is an open-source web application framework mainly maintained by Google and by a community of individuals and corporations to address many of the challenges encountered in developing single-page applications. It aims to simplify both the development and the testing of such applications by providing a framework for client-side model–view–controller (MVC) and model–view–viewmodel (MVVM) architectures, along with components commonly used in rich Internet applications.



The AngularJS framework works by first reading the HTML page, which has embedded into it additional custom tag attributes. Angular interprets those attributes as directives to bind input or output parts of the page to a model that is represented by standard JavaScript variables. The values of those JavaScript variables can be manually set within the code, or retrieved from static or dynamic JSON resources.
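AngularJS implements this binding in JavaScript via its digest cycle; as a language-neutral sketch of the underlying idea, the minimal observable model below (class and method names invented for illustration) re-renders registered views whenever a bound value changes:

```python
class Model:
    """Minimal observable model illustrating the data-binding idea:
    when a bound value changes, registered views are re-rendered."""

    def __init__(self):
        self._data = {}
        self._watchers = []

    def watch(self, callback):
        self._watchers.append(callback)

    def set(self, key, value):
        self._data[key] = value
        for cb in self._watchers:  # notify bound views, as a digest cycle would
            cb(self._data)

    def get(self, key):
        return self._data.get(key)

rendered = []
model = Model()
model.watch(lambda data: rendered.append(f"<h1>{data.get('title')}</h1>"))
model.set('title', 'Hello')   # the "view" updates automatically
model.set('title', 'Stocks')
print(rendered)  # ['<h1>Hello</h1>', '<h1>Stocks</h1>']
```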



According to JavaScript analytics service Libscore, AngularJS is used on the websites of Wolfram Alpha, NBC, Walgreens, Intel, Sprint, ABC News, and approximately 8,400 other sites out of 1 million tested in July 2015.



AngularJS is the frontend part of the MEAN stack, consisting of MongoDB database, Express.js web application server framework, Angular.js itself, and Node.js runtime environment.



AngularJS is an open source web application framework. It was originally developed in 2009 by Misko Hevery and Adam Abrons. It is now maintained by Google. Its latest version is 1.4.3.



The definition of AngularJS, as put by its official documentation, is as follows −



AngularJS is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you currently have to write. And it all happens within the browser, making it an ideal partner with any server technology.



Features



• AngularJS is a powerful JavaScript-based development framework to create Rich Internet Applications (RIA).



• AngularJS provides developers options to write client-side applications (using JavaScript) in a clean MVC (Model View Controller) way.



• Applications written in AngularJS are cross-browser compliant. AngularJS automatically handles JavaScript code suitable for each browser.



• AngularJS is open source, completely free, and used by thousands of developers around the world. It is licensed under the Apache License version 2.0.



• Overall, AngularJS is a framework to build large-scale, high-performance web applications while keeping them easy to maintain.



Core Features



Following are the most important core features of AngularJS −



• Data-binding − The automatic synchronization of data between model and view components.



• Scope − Objects that refer to the model. They act as a glue between controller and view.



• Controller − JavaScript functions that are bound to a particular scope.



• Services − AngularJS comes with several built-in services, for example $http to make XMLHttpRequests. These are singleton objects which are instantiated only once in an app.



• Filters − These select a subset of items from an array and return a new array.



• Directives − Directives are markers on DOM elements (such as elements, attributes, css, and more). These can be used to create custom HTML tags that serve as new, custom widgets. AngularJS has built-in directives (ngBind, ngModel, ...).



• Templates − These are the rendered view with information from the controller and model. These can be a single file (like index.html) or multiple views in one page using "partials".



• Routing − It is the concept of switching views.



• Model View Whatever − MVC is a design pattern for dividing an application into different parts (called Model, View and Controller), each with distinct responsibilities. AngularJS does not implement MVC in the traditional sense, but rather something closer to MVVM (Model–View–ViewModel). The AngularJS team refers to it humorously as Model View Whatever.



• Deep Linking − Deep linking allows you to encode the state of the application in the URL so that it can be bookmarked. The application can then be restored from the URL to the same state.



• Dependency Injection − AngularJS has a built-in dependency injection subsystem that helps the developer by making the application easier to develop, understand, and test.
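The singleton-service and dependency-injection points above can be illustrated with a tiny container sketch in Python; the Registry class and its API are hypothetical and far simpler than AngularJS's actual $injector:

```python
class Registry:
    """Minimal dependency-injection container (illustrative sketch)."""

    def __init__(self):
        self._factories = {}
        self._instances = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def resolve(self, name):
        # Services are singletons: instantiate once, then reuse,
        # mirroring how AngularJS services behave.
        if name not in self._instances:
            self._instances[name] = self._factories[name](self)
        return self._instances[name]

registry = Registry()
registry.register("logger", lambda r: [])
registry.register("service", lambda r: {"log": r.resolve("logger")})

svc = registry.resolve("service")
svc["log"].append("started")
# The same singleton instance is injected everywhere it is requested.
assert registry.resolve("logger") is svc["log"]
```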







CHAPTER 7- SYSTEM STUDY







7.1 FEASIBILITY STUDY



The feasibility of the project is analyzed in this phase and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.



Three key considerations involved in the feasibility analysis are



• ECONOMICAL FEASIBILITY



• TECHNICAL FEASIBILITY



• SOCIAL FEASIBILITY







ECONOMICAL FEASIBILITY







This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.



CHAPTER 8-TECHNICAL FEASIBILITY







This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.



SOCIAL FEASIBILITY



The aspect of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system; instead, he must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.







8.1 Non-Functional Requirements



Non-functional requirements are the quality requirements that stipulate how well software does what it has to do. These are quality attributes of any system; they can be seen at the execution of the system and can also be part of the system architecture.











8.2 Accuracy:



The system will be accurate and reliable based on the design architecture. If there is any problem with accuracy, the system will provide alternative ways to solve the problem.







8.3 Usability:



The proposed system will be simple and easy to use. Users will be comfortable communicating with the system and will be provided with an easy interface to it.







8.4 Accessibility:



The system will be accessible through the internet, with no known accessibility problems.







8.5 Performance:



The system will perform at its best when executing the functionality of the system.







8.6 Reliability:



The proposed system will be reliable in all circumstances, and any problem that arises will be effectively handled in the design.







8.7 Security:



The proposed system will be highly secured; every user will be required to register and to use a username/password to access the system. The system will do proper authorization and authentication of the users based on their types and their requirements. The proposed system will be designed persistently to avoid any misuse of the application.



CHAPTER 9-SYSTEM TESTING







The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test. Each test type addresses a specific testing requirement.







TYPES OF TESTS







Unit testing



Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
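As a concrete illustration of branch-level unit testing, the sketch below tests a hypothetical polarity_label function (invented for the example) so that each decision branch has its own test case:

```python
import unittest

def polarity_label(score: float) -> str:
    """Hypothetical unit under test: map a sentiment score to a label."""
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

class TestPolarityLabel(unittest.TestCase):
    """Each test exercises one decision branch of the unit,
    matching the branch-coverage goal described above."""

    def test_positive_score(self):
        self.assertEqual(polarity_label(0.7), "positive")

    def test_negative_score(self):
        self.assertEqual(polarity_label(-0.3), "negative")

    def test_zero_boundary(self):
        self.assertEqual(polarity_label(0.0), "neutral")

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPolarityLabel)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```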



Integration testing



Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.







Functional test



Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.



Functional testing is centered on the following items:



Valid Input : identified classes of valid input must be accepted.



Invalid Input : identified classes of invalid input must be rejected.



Functions : identified functions must be exercised.



Output : identified classes of application outputs must be exercised.



Systems/Procedures: interfacing systems or procedures must be invoked.







Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.







System Test



System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.







White Box Testing



White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.







Black Box Testing



Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.







9.1 Unit Testing:







Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.







Test strategy and approach



Field testing will be performed manually and functional tests will be written in detail.







Test objectives



• All field entries must work properly.



• Pages must be activated from the identified link.



• The entry screen, messages and responses must not be delayed.







Features to be tested



• Verify that the entries are of the correct format



• No duplicate entries should be allowed



• All links should take the user to the correct page







9.2 Integration Testing



Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.



The task of the integration test is to check that components or software applications, e.g. components in a software system or – one step up – software applications at the company level – interact without error.



Test Results: All the test cases mentioned above passed successfully. No defects encountered.







9.3 Acceptance Testing



User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.







Test Results: All the test cases mentioned above passed successfully. No defects encountered.



CHAPTER 10– CONCLUSIONS



The proposed model investigated the combined effect of analyzing different types of news in conjunction with historical numeric attributes for understanding stock market behavior. Our proposed model improved the prediction accuracy for the future trend of the stock market by considering different types of daily news together with different values of numeric attributes throughout each day. The proposed model consists of two stages: the first stage determines the news polarities, either positive or negative, using the naïve Bayes algorithm, and the second stage takes the output of the first stage as input, along with the processed historical numeric data attributes, to predict the future stock trend using the K-NN algorithm. The results of our proposed model achieved higher accuracy for sentiment analysis in determining the news polarities, with the Naïve Bayes algorithm reaching up to 86.21%. In the second stage of the study, the results established the importance of considering different values of numeric attributes. This achieved the highest accuracy compared to previous research: our model for predicting the future behavior of the stock market obtained accuracy up to 89.80%.
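The two-stage design described above can be sketched with toy, invented data; the miniature naïve Bayes and K-NN implementations below are illustrative only and do not reproduce the study's dataset, features, or reported accuracies:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Stage 1: tiny multinomial naive Bayes for news polarity."""
    word_counts = {c: Counter() for c in set(labels)}
    class_counts = Counter(labels)
    vocab = set()
    for doc, c in zip(docs, labels):
        for w in doc.lower().split():
            word_counts[c][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict_nb(model, doc):
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for c in class_counts:
        lp = math.log(class_counts[c] / total)        # log prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in doc.lower().split():
            lp += math.log((word_counts[c][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = c, lp
    return best

def predict_knn(train_X, train_y, x, k=3):
    """Stage 2: k-nearest neighbours over [polarity, numeric feature] rows."""
    dists = sorted((math.dist(row, x), y) for row, y in zip(train_X, train_y))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Stage 1: polarity of a headline (toy data, invented for illustration)
news = ["profits rise strongly", "record growth reported",
        "shares fall sharply", "heavy losses expected"]
nb = train_nb(news, ["pos", "pos", "neg", "neg"])
p = predict_nb(nb, "growth in profits")

# Stage 2: combine the polarity (+1/-1) with a historical price feature
X = [[1, 0.8], [1, 0.6], [-1, 0.7], [-1, 0.5]]
y = ["up", "up", "down", "down"]
trend = predict_knn(X, y, [1 if p == "pos" else -1, 0.65])
print(p, trend)
```

The sketch mirrors only the data flow of the two stages; the accuracies reported in the study come from the authors' own dataset, not from anything shown here.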



In the proposed model, the combination of naïve Bayes and K-NN methods leads to the best performance. The results of the proposed model are consistent with research stating that there is a strong relation between stock news and changes in stock prices. This model can be extended in the future by including some technical analysis indicators; we can also consider the recognition of emotional sentences in determining news polarities, as well as the influence of news that appears on social media.







CHAPTER 11 – REFERENCES



[1] B. O. Wyss, "Fundamentals of the Stock Market," p. 245, 2000.

[2] J. Han and M. Kamber, Data Mining: Concepts and Techniques, 2nd ed. Urbana-Champaign, 2006.

[3] P.-N. Tan, M. Steinbach, and V. Kumar, Introduction to Data Mining, 2006.

[4] W. Walter, K. Ho, W. R. Liu, and K. Tracy, "The relation between news events and stock price jump: an analysis based on neural network," 20th Int. Congr. Model. Simul., Adelaide, Australia, pp. 1–6, Dec. 2013.

[5] A. Søgaard, "Sentiment analysis and opinion mining," … Lang. Comput. Group, Microsoft Res. Asia …, May 2013.

[6] S. Zubair and K. J. C., "Extracting News Sentiment and Establishing its Relationship with the S&P 500 Index," 48th Hawaii Int. Conf. Syst. Sci., 2015.

[7] "Preprocessing Techniques for Text Mining," J. Emerg. Technol. Web Intell., no. October 2014, 2016.

[8] P. Tapanainen and G. Grefenstette, "What is a word, what is a sentence? Problems of tokenization," Meylan, France, p. 9, 1994.

[9] H. Alostad and H. Davulcu, "Directional Prediction of Stock Prices using Breaking News on Twitter," IEEE/WIC/ACM Int. Conf. Web Intell. Intell. Agent Technol., pp. 0–7, 2015.

[10] P. Uhr, J. Zenkert, and M. F., "Sentiment Analysis in Financial Markets," IEEE Int. Conf. Syst. Man Cybern., pp. 912–917, 2014.

[11] H. Thanh and P. M., "Stock Market Trend Prediction Based on Text Mining of Corporate Web and Time Series Data," J. Adv. Comput. Intell. Intell. Informatics, vol. 18, no. 1, 2014.

[12] S. M. Price, J. Shriwas, and S. Farzana, "Using Text Mining and Rule Based Technique for Prediction of," Int. J. Emerg. Technol. Adv. Eng., vol. 4, no. 1, 2014.

[13] L. I. Bing and C. Ou, "Public Sentiment Analysis in Twitter Data for Prediction of a Company's Stock Price Movements," IEEE 11th Int. Conf. E-Bus. Eng., 2014.

[14] Y. E. Cakra and B. D. T., "Stock Price Prediction using Linear Regression based on Sentiment Analysis," Int. Conf. Adv. Comput. Sci. Inf. Syst., pp. 147–154, 2015.

[15] Y. Shynkevich, T. M. McGinnity, S. Coleman, and A. Belatreche, "Stock Price Prediction based on Stock-Specific and Sub-Industry-Specific News Articles," 2015.

[16] S. S. Umbarkar and P. S. S. Nandgaonkar, "Using Association Rule Mining: Stock Market Events Prediction from Financial News," vol. 4, no. 6, pp. 1958–1963, 2015.

[17] R. Desai, "Stock Market Prediction Using Data Mining," vol. 2, no. 2, pp. 2780–2784, 2014.

[18] "Time Series Analysis on Stock Market for Text Mining," Int. J. Soc. Humanit. Stud., vol. 6, no. 1, pp. 69–91, 2014.

[19] Y. Kim, S. R. Jeong, and I. Ghani, "Text Opinion Mining to Analyze News for Stock Market Prediction," Int. J. Adv. Soft Comput. Appl., vol. 6, no. 1, pp. 1–13, 2014.

[20] S. S. Abdullah, M. S. Rahaman, and M. S. Rahman, "Analysis of stock market using text mining and natural language processing," 2013 Int. Conf. Informatics, Electron. Vis., pp. 1–6, 2013.

[21] U.S. Securities and Exchange Commission, "Transition report pursuant to Section 13 or 15(d) of the Securities," vol. 302, 2014.

[22] "About NASDAQ."

[23] Z. Wei, "N-grams based feature selection and text representation for Chinese Text Classification," Int. J. Comput. Intell. Syst., vol. 2, no. 4, pp. 365–374, 2009.

[24] G. Salton and C. Buckley, "Term Weighting Approaches in Automatic Text Retrieval," Inf. Process. Manag., vol. 24, no. 5, pp. 513–523, 1988.

[25] V. Kotu and B. Deshpande, Predictive Analytics and Data Mining, 2015.

[26] M. Mittermayer, "Forecasting Intraday Stock Price Trends with Text Mining Techniques," Proc. 37th Hawaii Int. Conf. Syst. Sci., pp. 1–10, 2004.

[27] S. B. Imandoust and M. Bolandraftar, "Application of K-Nearest Neighbor (KNN) Approach for Predicting Economic Events: Theoretical Background," vol. 3, no. 5, pp. 605–610, 2013.

[28] B. Narendra, K. Uday Sai, et al., "Sentiment Analysis on Movie Reviews: A Comparative Study of Machine Learning Algorithms and Open Source Technologies," IJISA, pp. 66–70, Aug. 2016.

[29] P. A. Idowu, C. Osakwe, A. A. Kayode, and E. R. Adagunodo, "Prediction of Stock Market in Nigeria Using Artificial Neural Network," IJISA, pp. 68–74, Oct. 2012.

[30] N. and K. J. Navale, "Prediction of Stock Market using Data Mining and Artificial Intelligence," Int. J. Comput. Appl., vol. 134, no. 12, pp. 9–11, 2016.










