It examines phenomena through the numerical representation of observations and statistical analysis. Quantitative research can also involve asking people for their opinions in a structured way in order to obtain hard facts and statistics to guide decisions.

3. RESEARCH PROCESS

Research is a method of describing, exploring, relating, or establishing the facts of an existing concept, the factors affecting the phenomena, and the relationships among them. The collection of activities carried out in the research is referred to as the process. The research process is an orderly and systematic one that requires thought and patience, and arguably more art than science.
Since it encompasses a wide range of activities, which most of the time transcend factory or national boundaries, complex interdependencies are built into it. As the power base continues to shift from companies towards customers, customer demands have become more complex. Companies are looking to Big Data analytics to revamp their supply chains, using it as a strategic lever. Companies are collecting vast amounts of supply-chain-related data with the help of technologies such as sensors, barcodes, and GPS (Jacob House, 2014). Big Data analytics offers companies the ability to leverage the enormous amounts of information driving their global supply chains (Harvard Business Review, 2013).
Data mining is the computational process of discovering patterns in large data sets, involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

B.2 Introduction

The growing popularity and development of data mining technologies pose serious threats to the security of individuals' data.
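The stages named above (preprocessing, pattern discovery, interestingness filtering) can be sketched in miniature. The records and the support threshold below are purely hypothetical illustration data:

```python
from collections import Counter

# Raw records: (customer_id, product) pairs; some are malformed (hypothetical data)
raw = [("c1", "laptop"), ("c2", "mouse"), ("c1", ""), ("c3", "laptop"),
       ("c2", "laptop"), (None, "mouse")]

# 1. Preprocessing: drop records with missing fields
clean = [(cust, prod) for cust, prod in raw if cust and prod]

# 2. Pattern discovery: count how often each product occurs
counts = Counter(prod for _, prod in clean)

# 3. Post-processing with an interestingness metric:
#    keep only products whose support count meets a threshold (>= 2 here)
patterns = {prod: n for prod, n in counts.items() if n >= 2}
```

Here only "laptop" survives the filter, since the one valid "mouse" record falls below the threshold; real pipelines simply apply the same three stages at scale.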
PLS has some advantages over covariance-based approaches. First, covariance-based approaches yield very unreliable results for theory-building studies, a problem known as factor indeterminacy: they produce more than one mathematically proper solution without determining which of the several solutions relates best to the underlying hypothesis. Additionally, covariance-based approaches can support a number of statistically equivalent models on the same data, which makes it difficult to justify causality in the models. Therefore, covariance-based approaches are more appropriate for empirical validation of well-established theories.
1.1. DATA MINING

Data mining refers to extracting or mining knowledge from large amounts of data. It has attracted a great deal of attention in the information industry and in society as a whole in recent years, owing to the wide availability of huge amounts of data and the pressing need to turn such data into useful information and knowledge. The information and knowledge gained can be used for applications ranging from market analysis, fraud detection, and customer retention to production control and scientific exploration. Data mining can be viewed as a result of the natural evolution of information technology.
statistics (Goldstein, 2011; Gay, 2010; Snijders & Bosker, 2012). There are two types of statistical operations to which research data can be subjected when making inferences about the population from a sample: parametric statistics and nonparametric statistics. Of the two, parametric statistics are classically the more powerful, sensitive, and accurate, provided their assumptions are met. A more powerful statistical test is one that can detect a small but real difference or relationship in the sample while still rejecting an apparent difference or relationship that is not real.
This study also relied on the questionnaire as a key method of data collection, because it identifies and captures questions about a subject. For this particular study, the survey consisted of both closed-ended and open-ended questions. Unlike the IDIs, where the researcher wanted to gain insights, the survey contained more closed-ended questions to ensure that only the needed information was provided. Open-ended questions served to clarify points and perceptions that were not clear enough in other studies or to the researcher. Overall, the survey questions served as a guide and gave the researcher the information needed for the study.
With Cronbach's alpha at the required level, the analysis was carried out according to the methodology.

4.3 Descriptive statistics

In this research, descriptive statistics have been used to describe, show, and summarize the raw data in a meaningful way. Descriptive statistics are crucial because raw data are hard to visualize on their own. In this study the author has used different types of methods to summarize the data, such as tabulated description (tables) and statistical commentary (discussion of the results).
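For reference, Cronbach's alpha can be computed directly from item scores with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals). The sketch below uses sample variances, and the item scores passed in are hypothetical:

```python
from statistics import variance

def cronbach_alpha(items):
    # items: one list of respondent scores per questionnaire item,
    # all items answered by the same respondents in the same order
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Two perfectly correlated items give alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Values near 1 indicate high internal consistency among the items; a commonly cited acceptability threshold in survey research is around 0.7.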
Analysis of Association Rules for Big Data Using Apriori and FP-Growth Techniques

Abstract

Mining useful information from huge collections of data is difficult, so association rules are proposed to make analysis and decision making easier. Association rule mining plays an important role in data mining and is one of its most popular methods; the best-known example is market basket analysis. Association rules express the relationships between items in a data set. In this paper, we analyze the performance of the two techniques for different numbers of instances in the data set.
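A minimal level-wise Apriori pass (support counting plus candidate generation with subset pruning) can be sketched as follows. The transactions and the minimum support count are hypothetical; FP-Growth would find the same frequent itemsets without generating candidates, by compressing the transactions into an FP-tree:

```python
from collections import Counter
from itertools import combinations

def apriori(transactions, min_count):
    # transactions: iterable of item sets; min_count: minimum support count
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    candidates = [frozenset([i]) for i in items]
    k = 1
    while candidates:
        # Support counting: scan the transactions once per level
        counts = Counter()
        for t in transactions:
            tset = set(t)
            for c in candidates:
                if c <= tset:
                    counts[c] += 1
        level = {c: n for c, n in counts.items() if n >= min_count}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets,
        # pruning any candidate with an infrequent k-subset
        candidates = list({
            a | b
            for a in level for b in level
            if len(a | b) == k + 1
            and all(frozenset(s) in level for s in combinations(a | b, k))
        })
        k += 1
    return frequent

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"milk", "bread", "butter"}, {"milk"}]
result = apriori(baskets, min_count=2)  # frequent singles and pairs
```

On these baskets the pair {bread, milk} is frequent, while {milk, butter} is pruned because it occurs only once; the triple {bread, milk, butter} is never even counted, since one of its 2-subsets is infrequent.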
DOCUMENTATION

Concise and accessible documentation is essential for the management of collections, research, and public services. The documentation process includes registration, inventory, and cataloging, and the use of manual and electronic formats to access information according to established standards. "There are a number of software packages available which are suitable for producing inventories. Such databases are powerful tools designed to handle large amounts of information" (Xavier-Rowe 2010, p. 3). A complete inventory of the collection is fundamental.