Big data is a big deal. Everyone knows about it now, even those who aren’t in the tech game. That said, I would argue that the term itself is misleading – and I’ll explain why in a moment.
Ten years ago, you couldn’t move for the hype surrounding big data, the new model for processing data at scale at much lower cost. While that sounded like a mystery for some of the smaller players, many large enterprises were early adopters of the technology, especially in industries like telecommunications, e-commerce, and banking. And they got it right.
Big data is no longer a fad. Most companies that needed it have already deployed Hadoop clusters and have been running them for some time. The continued attempts to present big data as something red-hot surprise them, as the technology is now on the plateau of the adoption curve.
The term “big data” has proven to be a marketing win in terms of popularizing the concept. However, I believe that now it’s something of a hurdle standing in the way of wider adoption in analytical systems, as if it separates two worlds: an old world of “small data” and a new world of “big data.”
Small is Beautiful
The term creates a perception that there is a high barrier to entry for deploying advanced analytical systems.
Managers of SMEs may well believe that "big" isn't for them, and that all they need is a solution for processing small data. This perception can be misleading. The data assets generated by a small company's information systems – and, more critically, the open data sources in the ecosystem around them – can mushroom very quickly.
But the barrier to entry isn't so high. A set of tried and tested algorithms means that not much coding is needed, and cloud-based big data services allow even small enterprises a lean start with analytics at scale.
We can view this as the “democratization of analytics”. Customers can focus on a data management system that is scalable and works best for them.
Where there were once separate systems for hot data, cold data, and unstructured data such as video files, these are now converging into one.
Big data is disappearing inside a greater data management paradigm. And Artificial Intelligence will follow the same path.
There are many ways to classify data. It can be called "big" or "small", online or offline, structured or unstructured, with various shades in between. But ultimately, we only have two categories:
(1) profitable data that gives a competitive advantage and is a core company asset, and (2) non-profitable data.
What we're talking about is changing our relationship with data and how we consume it. The leader isn't the one who knows the most, because data is available in high volumes and much of it is free.
The leader is the one who has the right data at the right moment.
The relationship between companies and data now resembles the relationships between a carpenter and wood or a potter and clay.
The term "big data" draws attention to the technology itself, but its meaning goes beyond technology: what truly matters is real business outcomes. Enterprises shouldn't get caught up in the concept of big data; they should avoid deploying big data systems without knowing what they will do with the data.
The data management platform should scale as the volume of data assets grows organically.
Confucius said that, “To know what you know and what you do not know, that is true knowledge.” And so we are recreating the way we relate to information. Big data is just a step to the creation of ubiquitous data management capabilities and humanization of wisdom.
In this sense, Huawei is building a unified platform for data assets that includes three main components:
- Big data system
- Analytical database
- Artificial Intelligence engine