5 trends driving Big Data in 2017
A few years ago, Big Data was a headline-grabbing phrase that could pull in almost any technologist. But now, Big Data is less of an anomaly and simply the way business gets done.
That’s because, when done well, Big Data works and offers businesses concrete return on investment. Using Big Data in sales, marketing, supply chain, manufacturing and R&D can net businesses 20% to 30% gains, according to a recent Boston Consulting Group (BCG) report.
In fact, BCG points out that most of the world’s most valuable companies have shoved aside traditional players by using data-driven models. Six of the 10 most valued companies today are built on data — Apple, Alphabet, Microsoft, Amazon, Facebook and Alibaba — compared with only one in 2006.
But how companies are using data is changing, reflecting both the maturation of tools and executive leadership's investment in forecasting more parts of the business. To illustrate the changing Big Data market, here are five major trends:
1. More companies are taking a DataOps approach.
Forward-thinking companies are beginning to apply a DevOps-like model to Big Data projects, an approach called DataOps.
"DataOps is an approach to building a self-service data platform for delivering insights-driven business decisions across an organization," said Ashish Thusoo, CEO and co-founder at Qubole.
By using automation, companies are making their data teams more effective, decoupling them from daily demands to make room for agility. Ultimately, this streamlining will shorten time to insight in an environment where teams can work more efficiently and effectively.
2. The rise of more startups to help make data actionable.
Sure, all that data is great. But the perpetual challenge for businesses is what to do with it. How can you turn Big Data into something that improves the business?
"Enterprises across all industries have accepted that their business lifelines and advantages lie within the data they can accumulate," said Alex Lesser, vice president at PSSC Labs. "Making sense of that data and providing the necessary time sensitive information to the relevant stakeholders is now critical."
But doing so is not easy. As a result, several startups have emerged to help support these initiatives. The common thread between these companies? The need to quickly gather and assess data in as close to real-time as possible.
Startups are emerging because advancements in technology and storage have made their tools possible. "The technologies these companies are using to perform this analysis did not exist two years ago," said Lesser.
3. Increased attention toward data governance.
As organizations move forward with their modern data architecture and data lake initiatives, many quickly realize the need for governance. Data governance cannot be an afterthought for the modern data architecture; otherwise, a data lake quickly turns into a data swamp.
"Understanding the data as part of the data access and integration into the data lake helps identify the data issues early," said Tendu Yogurtcu, chief technology officer at Syncsort. "Reusable business rules that can be deployed anywhere, i.e. at the source data stores, are becoming critical due to multi-platform and cloud deployment environments."
Yogurtcu says data scientists still spend about 80% of their time cleansing data, which detracts from the effort of extracting business insights and undermines the ROI of the entire data lake initiative.
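The kind of reusable, portable business rule Yogurtcu describes could be sketched as plain functions that run wherever the data lives. This is a minimal, hypothetical illustration (the rule names and record shape are invented for the example, not drawn from any specific product):

```python
import re

# Hypothetical cleansing rules: each rule is a plain function over a record,
# so the same logic can be deployed at the source store, the lake, or the cloud.
def strip_whitespace(record):
    """Trim stray whitespace from every string field."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def normalize_email(record):
    """Lowercase the email and flag records that fail a basic shape check."""
    record["email"] = record.get("email", "").lower()
    record["email_valid"] = bool(
        re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", record["email"])
    )
    return record

def apply_rules(records, rules):
    """Run every rule over every record, in order, without mutating the input."""
    for rule in rules:
        records = [rule(dict(r)) for r in records]
    return records

raw = [{"email": "  Jane.Doe@Example.COM "}, {"email": "not-an-email"}]
clean = apply_rules(raw, [strip_whitespace, normalize_email])
```

Because the rules are decoupled from any one platform, the same list can be reused across the multi-platform and cloud deployments the quote mentions, and invalid records are flagged early rather than discovered downstream.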
4. The (continued) rise of the cloud.
Cloud technology allows businesses to scale Big Data usage. As a result, the cloud is being adopted for Big Data use by all types of businesses. Companies born in the cloud era of the last few years have been among the first to adopt the technology for Big Data management and analysis. Now, even legacy IT enterprises are turning to the cloud for the competitive advantages it provides.
Nearly six in 10 companies are currently using at least some cloud resources for Big Data processing, according to a recent report by Dimensional Research and Qubole.
"Increasing amounts of data are being generated at the edge of the internet where the cloud is the natural ingestion point," said Thusoo. "As a result, the cloud is becoming the gravity well for all data. The inherent elasticity and agility of a cloud-based data platform provides a huge operational and financial advantage over on-premise data platforms."
At the same time, Data as a Service and Hadoop as a Service initiatives are shifting to SaaS and PaaS models, mainly due to data gravity and operational efficiency.
"More and more data is originating in the cloud, which changes the gravity of data from on-premise to cloud environments," said Yogurtcu. "It is becoming expensive to move the data on-premise to integrate with the most critical data assets, such as customer reference data, typically hosted on-premise."
Therefore, complete portability of applications from on-premise to cloud and the ability to run in hybrid cloud environments are becoming increasingly critical.
5. The introduction of autonomous data platforms.
As more data is collected, organizations are having to rethink analysis. That has resulted in the shift to autonomous data management platforms.
These data management platforms intelligently automate tedious manual data management tasks, like capacity planning and performance optimization, by self-managing and self-optimizing, according to Thusoo.
"Autonomous data management platforms will have machines doing what humans are doing but faster, cheaper and more reliably," said Thusoo. "That frees up data teams to focus on high-value, strategic business outcomes."