A Brief History of Database Management


DATABASE:

A database is a collection of information organized so that it can be easily accessed, updated, and managed. Data is organized into rows, columns, and tables, and indexed so that relevant information can be found quickly. Data is updated, deleted, and expanded as new information is added. Databases typically process workloads to create and update themselves, querying the data they contain and running applications against it.


HISTORY OF DATABASE MANAGEMENT:

A Database Management System allows a user to organize, store, and retrieve data from a computer. It is the simplest way of communicating with a computer's "stored memory." In the very early days of computing, punch cards were used for data storage, input, and output, offering a fast way to enter and retrieve data. In 1890, Herman Hollerith, an American inventor, is credited with adapting the punch cards used to control weaving looms to act as the memory for a mechanical tabulating machine. Databases as we know them came along much later.

Databases have played a vital role in the evolution of computers. In the early 1950s, the first computer programs were developed, focused mostly on algorithms and coding languages. At the time, computers were essentially giant calculators, and data such as names and phone numbers was considered the leftovers of processing. Once computers became commercially available and business people started using them for real-world purposes, that leftover data suddenly took on the highest priority.

In the early 1960s, Charles W. Bachman designed the Integrated Data Store (IDS), often considered the first Database Management System (DBMS). Soon after, IBM created its own database system, known as IMS (Information Management System). Both systems are described as forerunners of navigational databases.

By the mid-1960s, as computers grew in popularity, speed, and flexibility, many types of databases became available. Rising customer demand led Bachman to form the Database Task Group within CODASYL (the Conference on Data Systems Languages, the body also responsible for COBOL, the Common Business Oriented Language). The Database Task Group presented its database standard in 1971, which came to be known as the "CODASYL approach."

The CODASYL approach was a complicated system and required substantial training. It was based on manual navigation of a linked data set. Records could be searched using any of three techniques:

1. Moving from one record to another through relationships, or "sets"

2. Scanning all the records in sequential order

3. Using the primary key (known as the CALC key)
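The three techniques can be sketched, loosely, as operations on linked records. This is a hypothetical in-memory model in Python, not actual CODASYL syntax; the record fields and names are illustrative only:

```python
# A loose sketch of CODASYL-style navigational access: records are
# linked into "sets", and the program navigates them explicitly.

records = [
    {"id": 1, "name": "Dept-A", "next": 2},     # linked via a "next" pointer
    {"id": 2, "name": "Dept-B", "next": None},
]
by_id = {r["id"]: r for r in records}            # CALC-key style direct access

# 1. Move from one record to a related one by following the set link
first = by_id[1]
related = by_id[first["next"]]

# 2. Scan all records in sequential order
names = [r["name"] for r in records]

# 3. Direct access through the primary (CALC) key
rec = by_id[2]
```

The point of the sketch is that the *program* does the navigating: every query is hand-written traversal logic, which is what made the approach laborious compared with what came next.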

The CODASYL approach lost favor as simpler, easier-to-use systems came on the market.

Edgar Codd, who worked at IBM on hard disk systems, was unhappy with the lack of a search capability in the IMS model and the CODASYL approach. In 1970 he wrote a series of papers on how databases could be constructed. His ideas culminated in a paper titled "A Relational Model of Data for Large Shared Data Banks," which described a new method for storing data and processing large databases. Unlike the CODASYL model, records would not be stored in a free-form list of linked records, but instead in tables of fixed-length records.

IBM had invested heavily in its own database model (IMS) and was not interested in Codd's ideas. In 1973, Michael Stonebraker and Eugene Wong decided to research relational database systems. Their project, named INGRES (Interactive Graphics and Retrieval System), successfully demonstrated that a relational model could be practical and efficient. INGRES used a query language known as QUEL, which in turn pressured IBM to develop SQL in 1974. SQL became an ANSI standard in 1986 and an ISO standard in 1987, and it replaced QUEL as the more functional query language.
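Codd's shift from navigation to declaration is easy to see with a modern SQL engine. A minimal sketch using Python's built-in sqlite3 module (the table and data are invented for illustration): data lives in tables of fixed-layout rows, and a query states *what* to fetch rather than how to navigate to it.

```python
import sqlite3

# An in-memory relational database: one table, fixed columns per row.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)"
)
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(1, "Ada", "Research"), (2, "Grace", "Systems")],
)

# Declarative query: no pointers, no manual traversal.
rows = conn.execute(
    "SELECT name FROM employees WHERE dept = ?", ("Research",)
).fetchall()
```

Compare this with the CODASYL style: here the engine, not the programmer, decides how to locate the matching records.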

Relational database management systems (RDBMSs) were an efficient way to process and store structured data. Unstructured data, however, is both schema-less and non-relational, and relational systems simply were not designed to handle that type of data.

NoSQL:

NoSQL ("Not Only SQL") emerged in response to the Internet's need for greater speed and for unstructured data processing. Below are the advantages NoSQL has over SQL-based relational systems:

  • Lower costs
  • No complex relationships
  • A flexible schema
  • Higher scalability

DOCUMENT STORES:

A document store, also called a document-oriented database, stores, manages, and retrieves semi-structured data (here also referred to as document-oriented information). Documents are independent units, which improves performance and makes it easy to spread data across a large number of servers.
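The idea of documents as independent, schema-less units can be sketched in a few lines of Python. This is a toy model, not a real document database API; the `put`/`get` helpers and the sample documents are invented for illustration:

```python
import json

# Toy document store: each document is serialized and stored as an
# independent unit under its own id. No shared schema is enforced.
store = {}

def put(doc_id, doc):
    store[doc_id] = json.dumps(doc)

def get(doc_id):
    return json.loads(store[doc_id])

# Two documents with different shapes coexist without any schema change.
put("u1", {"name": "Ada", "tags": ["math", "computing"]})
put("u2", {"name": "Grace", "rank": "Rear Admiral"})

doc = get("u1")
```

Because each document is self-contained, documents can be distributed across many servers by id alone, which is the property the paragraph above describes.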

Examples of document stores: Amazon DynamoDB, MongoDB.

COLUMN STORES:

A column-oriented database management system differs from a traditional relational database system: instead of storing data row by row, it stores data by columns.
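The difference between the two layouts can be sketched with plain Python lists (the table and values are invented for illustration): a column store keeps each column's values together, so a query that touches one column reads only that column.

```python
# The same small table in two layouts.

# Row-oriented: each record is stored together.
rows = [
    ("Ada", 1815),
    ("Grace", 1906),
]

# Column-oriented: each column's values are stored together.
columns = {
    "name": [r[0] for r in rows],
    "born": [r[1] for r in rows],
}

# An aggregate over one column touches only that column's data.
avg_born = sum(columns["born"]) / len(columns["born"])
```

This is why column stores suit analytical workloads that scan a few columns across many records, while row stores suit fetching whole records at a time.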

Examples of column stores: HBase (Hadoop-based), Cassandra.

KEY-VALUE STORES:

A key-value database is often used for storing profiles or shopping cart data. All access to the database goes through the primary key, and there is no fixed schema or data model.
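A key-value store behaves much like a dictionary, which makes it easy to sketch in Python. The keys and values here are hypothetical examples, not any particular product's API:

```python
# Toy key-value store: every read and write goes through the key,
# and the value is opaque to the store (no server-side schema).
kv = {}

kv["cart:alice"] = ["book", "lamp"]          # a shopping cart
kv["profile:alice"] = {"tier": "gold"}        # a user profile

cart = kv["cart:alice"]
```

Because lookups are by exact key only, these systems scale and partition very easily, but they cannot answer queries like "find all gold-tier users" without extra indexing built on top.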

Examples of key-value stores: Berkeley DB, Aerospike.

GRAPH DATA STORES:

Dispatch systems, location-aware systems, and routing applications are the primary users of graph databases, also known as graph data stores. These are based on graph theory and work well with data that can be represented as a graph. They provide a cohesive picture of big data.
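The routing use case mentioned above can be sketched as a traversal over nodes and edges. This is a minimal in-memory model in Python (the locations and the breadth-first search are illustrative, not a graph database's query language):

```python
from collections import deque

# Toy graph: nodes are locations, adjacency lists are directed edges.
edges = {
    "depot": ["a", "b"],
    "a": ["c"],
    "b": ["c"],
    "c": [],
}

def shortest_path(start, goal):
    """Breadth-first search returning the shortest path as a node list."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = shortest_path("depot", "c")
```

Graph databases make traversals like this first-class operations, whereas expressing the same query relationally would require repeated self-joins.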

Many NoSQL systems run on large clusters of nodes, allowing for significant scalability and for backups of data on each node. An application communicating with several database management technologies can use each one where it fits best to reach its final goal.
