I. INTRODUCTION With the rising need for secure, reliable, and accessible information in today's technology environment, the need for distributed databases and client/server applications is also increasing. A distributed database is a single logical database that is spread physically across computers in multiple locations connected by data-network links. It can be viewed as a virtual database whose component parts are physically stored in a number of distinct real databases at distinct locations. Users at any location can access data anywhere in the network as if it were all stored at the user's own location.
A SAN does not provide a file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access and are known as SAN file systems or shared-disk file systems. NAS Network-attached storage (NAS) is a file-level data storage server connected to a computer network, providing data access to a heterogeneous group of clients. NAS not only operates as a file server but is specialized for this task by its hardware, software, or the configuration of those elements. NAS is often manufactured as a computer appliance, a specialized computer built from the ground up for storing and serving files, rather than a general-purpose computer used for the role.
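The block-level/file-level distinction above can be sketched in a few lines. This is a minimal illustration, not real SAN or NAS client code: a temporary file stands in for a raw block device, and the 512-byte block size, paths, and function names are assumptions made for the example.

```python
import os
import tempfile

BLOCK_SIZE = 512  # a common sector size, assumed for illustration

def write_block(dev_path, block_no, data):
    """Block-level write, as a SAN initiator would address storage."""
    assert len(data) <= BLOCK_SIZE
    with open(dev_path, "r+b") as dev:
        dev.seek(block_no * BLOCK_SIZE)          # address by block number
        dev.write(data.ljust(BLOCK_SIZE, b"\x00"))

def read_block(dev_path, block_no):
    """Block-level read: no notion of files, only numbered blocks."""
    with open(dev_path, "rb") as dev:
        dev.seek(block_no * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE).rstrip(b"\x00")

def read_file(path):
    """File-level read, as a NAS client sees storage (e.g. via an NFS mount)."""
    with open(path, "rb") as f:
        return f.read()

# Demo on a scratch "device": 8 zeroed blocks in a temporary file.
dev = tempfile.NamedTemporaryFile(delete=False)
dev.write(b"\x00" * BLOCK_SIZE * 8)
dev.close()
write_block(dev.name, 3, b"payload")
recovered = read_block(dev.name, 3)
print(recovered)  # b'payload'
os.unlink(dev.name)
```

The point of the contrast: the block functions know nothing about names or directories, which is exactly why a file system (a SAN file system) must be layered on top before clients can share files.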
The data require a relational database, since multiple tables are needed for completeness of information. Even if there are not many passengers at the moment, the number of planes, and with it the number of passengers, could increase in the future, and growth of the database at such a scale could not be accommodated by Excel. This could translate into thousands of passengers in a given week, so Access is the most suitable application to handle such a large database. When database data need to be shared with other applications, Access is also more compatible with other database systems such as Microsoft SQL Server, which is another reason I chose to use it.
The enormous volume of the data also makes an application susceptible to stalls or malfunctions if the entire system is reliant on a single centralized control entity. For major Big Data applications such as Flickr, Google, Walmart, and Facebook, a large number of server farms are deployed around the world to guarantee uninterrupted service and rapid responses for local users.
This can reduce the risk of data loss, as the database can be saved in a variety of different forms. - This type of database makes it simple for the editor to update, as the data is held in one set of data fields instead of multiple tables. Disadvantages: - The database has no relational links, meaning that when information is changed for one individual it will not automatically change across all of that individual's records; you would have to find each record for that person and change it individually. - The database normally involves repeatedly writing the same data, which invites problems such as human error; mistakes could cause records not to be found when searching for a particular person. - When updating the database it can often be hard to identify errors, as multiple copies of the same data are contained within it. - A flat-file database does not prevent similar data being entered for two individuals, which could cause confusion: a search would bring up multiple records for multiple people with similar details, making it hard to find information about a particular person.
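The update anomaly described above can be demonstrated concretely. This is a small sketch with invented names and fields: a person's details are repeated in every flat-file record, so a change applied to one record leaves the duplicates stale until every matching record is rewritten.

```python
# Flat file modeled as repeated rows; each row duplicates the person's details.
flat_file = [
    {"name": "A. Smith", "phone": "555-0100", "order": "book"},
    {"name": "A. Smith", "phone": "555-0100", "order": "lamp"},
    {"name": "B. Jones", "phone": "555-0199", "order": "pen"},
]

# A naive edit of a single record leaves the other copy stale:
flat_file[0]["phone"] = "555-0111"
stale = [r for r in flat_file
         if r["name"] == "A. Smith" and r["phone"] != "555-0111"]
print(len(stale))  # 1 stale duplicate remains

# The editor must instead scan and update every record for that person:
for r in flat_file:
    if r["name"] == "A. Smith":
        r["phone"] = "555-0111"
fixed = all(r["phone"] == "555-0111"
            for r in flat_file if r["name"] == "A. Smith")
print(fixed)  # True
```

In a relational design the phone number would live once in a person table, so the first single-row update would already be complete.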
MULTIDIMENSIONAL DATA MODEL The multidimensional data model is the basis for data warehouses and OLAP tools. This model views data in the form of a data cube. A data cube allows data to be modeled and viewed in multiple dimensions. A company can keep records with respect to dimensions, which are the perspectives or entities of interest. Each dimension has an associated table, called a dimension table, which describes the dimension in detail.
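A tiny data cube can be built directly from fact rows to make the model concrete. This is a minimal sketch: the dimensions (time, item, location) and the sales figures are invented for illustration, and "*" marks a dimension that has been rolled up (aggregated away).

```python
from collections import defaultdict
from itertools import product

# Fact rows: (time, item, location, sales) -- illustrative data.
facts = [
    ("Q1", "phone", "NY", 100),
    ("Q1", "phone", "LA",  80),
    ("Q1", "tv",    "NY",  50),
    ("Q2", "phone", "NY", 120),
]

def cube(rows):
    """Aggregate the measure over every combination of kept/rolled-up dimensions."""
    agg = defaultdict(int)
    for *keys, measure in rows:
        # Each fact contributes to 2^d cells: one per keep/roll-up mask.
        for mask in product([True, False], repeat=len(keys)):
            cell = tuple(k if keep else "*" for k, keep in zip(keys, mask))
            agg[cell] += measure
    return dict(agg)

c = cube(facts)
print(c[("Q1", "*", "*")])      # 230: all Q1 sales
print(c[("*", "phone", "NY")])  # 220: phone sales in NY across quarters
print(c[("*", "*", "*")])       # 350: grand total
```

Each "*" cell corresponds to one face of the cube that an OLAP roll-up would return; the dimension tables described above would hold the descriptive attributes behind keys like "Q1" or "NY".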
Emad A. Mohammed et al., in their work on big clinical data analytics, emphasized modelling of whole interacting processes in clinical settings, noting that clinical datasets can evolve into ultra-large-scale datasets. Arantxa Duque Barrachina et al. proposed that Hadoop techniques can be used for identification within large datasets. K. Divya et al. used a progressive encryption scheme for protecting data. Hongsong Chen, in their research article, presented a novel Hadoop-based biosensor Sunspot wireless network architecture using the ECC digital signature algorithm, a MySQL database, and Hadoop HDFS cloud storage, which a security administrator can use to protect and manage key data. Lidong Wang et al., in their work based on SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis, examined Radio Frequency Identification technology.
Workflow of a Hadoop cluster: how data is written, analyzed, stored, and read in the cluster. Detailed knowledge of HDFS writes, rack awareness, pipelined writes, the NameNode, missing replicas, and unbalanced versus balanced clusters. Factors that are important while planning a Hadoop cluster. Recommended hardware and network configurations for master and slave nodes. Standard network topology architecture.
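The rack-awareness step in the HDFS write path can be sketched as follows. This models HDFS's default three-replica placement policy (first replica on the writer's node, second on a node in a different rack, third on another node in that second rack); the cluster topology, node names, and function names are invented for the example, and real HDFS adds load and capacity checks omitted here.

```python
import random

# Illustrative two-rack topology.
CLUSTER = {
    "rack1": ["node1", "node2", "node3"],
    "rack2": ["node4", "node5", "node6"],
}

def rack_of(node):
    """Look up which rack a node belongs to."""
    return next(r for r, ns in CLUSTER.items() if node in ns)

def place_replicas(writer_node, rng=random):
    """HDFS-style default placement for a 3-replica block."""
    first = writer_node                                   # local replica
    remote_rack = rng.choice(
        [r for r in CLUSTER if r != rack_of(first)])      # survive a rack failure
    second = rng.choice(CLUSTER[remote_rack])
    third = rng.choice(
        [n for n in CLUSTER[remote_rack] if n != second]) # cheap third copy, same rack
    return [first, second, third]

replicas = place_replicas("node1")
print(replicas)
print(rack_of(replicas[0]) != rack_of(replicas[1]))  # True: spans two racks
```

The pipelined write described above then streams the block through these three DataNodes in order, so the client pays the cross-rack hop only once.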
Most of us work day and night on computers and keep saving our work by pressing Ctrl + S, because we do not want our work or other data to be lost to a sudden power outage, hard-drive failure, or virus infection. External drives are the most commonly used traditional backup storage systems and still retain some essential utility today. However, external drives have numerous limitations that can affect the speed and efficiency of storing significant files and data. Cloud storage is a technique that provides online storage space: a user can store files and other data online and access them easily, anytime and anywhere around the globe.
2. Problems of centralized storage: users' data is stored on a centralized server, placing enormous storage pressure on that server. If the server suffers a fatal fault, all the users' data will be lost and the entire platform faces disaster. VI. CONCLUSION AND FUTURE WORK In this paper, we have proposed a novel desktop virtualization framework to provide virtualization services for users. In the proposed framework, server resources are integrated into a powerful computing-capability pool, which uses our proposed control mechanism to make efficient use of resources.