Big Data

Everyone Is Talking About Big Data: History, Definition and Outlook

The computer scientist John Mashey is said to have coined the term Big Data during a lunch break in the canteen of the computer manufacturer Silicon Graphics in the mid-1990s. Strictly speaking, it was not a technical invention, but rather a label for a phenomenon that was already becoming apparent at the time.

According to Mashey, computers would soon reach the limits of their data processing capabilities due to the exponential growth of data volumes and the ever-increasing variety of data categories. The computer manufacturer Silicon Graphics no longer exists, but today we know that its IT genius Mashey was spot on with his assessment.

Data Volume Doubles Every Two Years

The amount of data generated worldwide is expected to double every two years for the next decade. We will produce a data volume of 45,000 exabytes by 2020. By way of comparison, a capacity of just 5 exabytes would be enough to store all the words that the entire human race has ever spoken.
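
As a rough back-of-the-envelope check of the doubling claim, the following minimal Python sketch projects global data volume under a simple "doubles every two years" model; the 2010 baseline is not a sourced figure but is derived purely for illustration by halving the 2020 value quoted above five times.

    # Back-of-the-envelope projection assuming the data volume doubles every two years.
    # The 2010 baseline is NOT a sourced figure: it is simply the 2020 value quoted
    # above (45,000 exabytes) divided back by five doublings.
    BASELINE_YEAR = 2010
    BASELINE_EXABYTES = 45_000 / 2 ** 5   # roughly 1,406 exabytes, hypothetical starting point

    def projected_volume(year: int) -> float:
        """Projected global data volume in exabytes for a given year."""
        doublings = (year - BASELINE_YEAR) / 2
        return BASELINE_EXABYTES * 2 ** doublings

    for year in (2012, 2016, 2020):
        print(year, round(projected_volume(year)), "exabytes")
    # 2020 comes out at 45,000 exabytes, i.e. roughly 9,000 times the ~5 exabytes
    # said to be enough to store every word humankind has ever spoken.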

Although some sceptics consider the term Big Data to be overblown hype, Big Data as a real phenomenon is still as relevant as ever. In the search for a definition, two perspectives emerge amongst researchers, providers and users:

  • Data level: Big Data describes large, unstructured amounts of data that conventional data processing methods are unable to handle.
  • Technology level: As a collective term, Big Data describes new technologies that can be used to process large amounts of data.

Both perspectives have their raison d’être, but the very fact that they coexist shows how generic the term Big Data still is. Representatives of a third point of view do not even try to provide a definition. For them, Big Data is less a technical term and more a universal phenomenon that is a logical consequence of digitalisation.


To understand this phenomenon better, we recommend looking at five dimensions that characterise data and data handling in an increasingly digitalised world:

  • Volume: People exchange vast amounts of data every day. Companies and operators of online platforms store and process mass data on a scale that can only be handled with newer technologies such as Hadoop (see the sketch after this list).
  • Velocity: The speed at which data is generated and processed is faster than ever. Real-time data processing is no longer a buzzword; it has become a reality in many areas.
  • Variety: The ability to generate data on any device, at any time and in any place has broadened the range of data enormously. Data is generated in a wide variety of formats – text, images, video, audio and numbers.
  • Variability: In addition to the increasing speed and variety with which data is generated, variability is also on the rise. Data is often unstructured and inconsistent, and the rate at which it is generated can fluctuate drastically.
  • Complexity: Data comes from a wide variety of applications, devices and systems. Linking it together and classifying it in hierarchy models is one of the greatest challenges for data processing.
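
To make the Volume dimension more concrete, here is a minimal, single-machine sketch of the map/reduce pattern that frameworks such as Hadoop apply across whole clusters; the sample records are invented for illustration and nothing below uses the actual Hadoop API.

    from collections import Counter
    from itertools import chain

    # Illustrative input: in a real Hadoop job these records would be distributed
    # across many machines rather than sitting in one small list.
    records = [
        "payment received online shop",
        "payment declined card expired",
        "online shop refund issued",
    ]

    # "Map" phase: emit one token per word in every record.
    mapped = chain.from_iterable(record.split() for record in records)

    # "Reduce" phase: aggregate the emitted tokens into counts per word.
    word_counts = Counter(mapped)

    print(word_counts.most_common(3))   # e.g. [('payment', 2), ('online', 2), ('shop', 2)]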

Four Trends That Make Big Data Business as Usual

Though there is a high degree of freedom when it comes to defining Big Data, its use in everyday business life has long been the norm. “The world’s most valuable resource is no longer oil, but data” was the Economist’s somewhat grandiose headline on what was actually a very worthwhile article.

Data is the lubricant of many companies’ business processes: it adds value, drives innovation and facilitates the implementation of future strategies. At the same time, however, the five dimensions outlined above also describe the challenges companies face when it comes to data processing.

A good example of how data improves not only business processes but also the customer experience is the credit check performed when granting instant loans. The Digital Account Check provides lenders with all the information they need to assess a customer’s creditworthiness in real time. Learn how the Digital Account Check works in this blog post.
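
Purely for illustration, the snippet below sketches what a real-time account-check request might look like from the lender’s side; the endpoint, field names and response shape are hypothetical placeholders, not the actual Digital Account Check API.

    import requests

    # Hypothetical endpoint and fields: placeholders for illustration only,
    # not the real Digital Account Check interface.
    API_URL = "https://api.example.com/v1/account-check"

    payload = {
        "iban": "DE89370400440532013000",   # the standard example IBAN, not a real account
        "consent_token": "granted-by-customer",
        "lookback_days": 90,
    }

    response = requests.post(API_URL, json=payload, timeout=10)
    response.raise_for_status()

    # A lender could feed fields like these into its credit decision in real time.
    result = response.json()
    print(result.get("income_detected"), result.get("risk_indicators"))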

To get real, measurable added value from Big Data, companies need to further expand their data handling processes and tools. Four developments are emerging that are a good indicator of how professionally Big Data is already being handled:

  1. The role of the Chief Data Officer (CDO) is being established in many companies. In 2018, 50 percent of CDOs will report directly to their company’s CEO. Data has reached the C-level, and Big Data will certainly benefit from this prioritisation.
  2. Forrester estimates that 80 percent of all companies will rely on insights service providers in 2018. Analysts predict a flourishing market for IaaS (Insights as a Service) – services that help companies evaluate, categorise and analyse data.
  3. Customer interaction is one of the richest data sources, but its unstructured data also makes it one of the biggest challenges for companies. Gartner predicts that by 2020, 85 percent of customer interactions will be handled by artificially intelligent chatbots that both improve data quality and enable more efficient analysis.
  4. In general, algorithms based on machine learning will increasingly be used to evaluate data (see the sketch after this list). According to the tech magazine CIO, the growing flood of data could hardly be managed any other way. Companies that cannot make these investments on their own due to a lack of know-how and resources will rely on partnerships with external Big Data specialists.
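
As a hedged illustration of point 4, the sketch below trains a very small text classifier that assigns categories to transaction descriptions; the data, labels and choice of scikit-learn are assumptions made for demonstration, not a description of any particular provider’s models.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tiny, invented training set: transaction descriptions with manually assigned categories.
    descriptions = [
        "monthly salary employer gmbh",
        "rent payment landlord",
        "supermarket groceries",
        "salary bonus employer gmbh",
        "rent deposit landlord",
        "groceries and household goods",
    ]
    labels = ["income", "housing", "living", "income", "housing", "living"]

    # Turn the free-text descriptions into numeric features and fit a simple model.
    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(descriptions)
    model = LogisticRegression().fit(features, labels)

    # Categorise a new, unseen transaction description.
    print(model.predict(vectorizer.transform(["salary payment employer gmbh"])))   # e.g. ['income']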

Big Data is part of our everyday toolkit: with our own banking API, we have access to more than 100 million online banking accounts in Europe. We also categorise and analyse many millions of data records on behalf of financial institutions and FinTechs every day. Contact us to find out how Big Data can add value to your business.