The power of big data in driving digital transformation

5 March 2023
You may have heard the saying “data is the new oil.” It’s actually a pretty good metaphor, because oil has to be refined into something like gasoline or plastic before it has any value. In a similar vein, we are all flooded with data, but for that data to be of any real use, it must be refined into business insights. So let’s look at how big data fuels digital transformation.
Marketers frequently overuse and misuse the term “big data.” Simply having a lot of data is not the point. Big data is about combining structured and unstructured data to gain insights that were previously unattainable.
Unstructured data is new information that is often too large or irregular to fit into a traditional database: the raw Twitter firehose, Google Trends data, public government APIs, and feeds from Internet of Things sensors are all examples. The magic happens when you layer structured data on top of unstructured data.
Let’s look at a brief example
A century-old company asked its data team to use big data to come up with a better way to make sales forecasts. For decades, the company’s forecasts had been based solely on the number of products sold in each month of the previous year and in the previous month.
But the data team started running sentiment analysis on what people were saying about the company’s brands and products on Twitter. It also looked at which brands and products people were searching for most in Google Trends. It then compared those signals against actual sales to see whether they were predictive, and discovered that they were.
So now, when the company makes its sales forecasts, it produces a much more intelligent forecast because it layers unstructured data (sentiment analysis from Twitter and search interest from Google Trends) on top of its structured historical sales data. Additionally, it could run sentiment analysis on its rivals’ brands and products if it wanted to get even smarter. The bottom line is that this use of big data lets the company plan sales, promotions, and marketing campaigns with much greater efficiency.
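To make the “layering” idea concrete, here is a minimal sketch of that predictiveness check. All of the figures are hypothetical placeholders; in practice the sentiment scores would come from a sentiment-analysis pipeline over tweets and the search index from Google Trends exports, aggregated to the same monthly grain as the sales history.

```python
# A minimal sketch of layering unstructured signals on structured sales
# history. All values below are hypothetical, for illustration only.
import pandas as pd

# Structured data: historical units sold per month.
sales = pd.DataFrame({
    "month": pd.period_range("2022-01", periods=6, freq="M"),
    "units_sold": [1200, 1150, 1400, 1550, 1500, 1700],
})

# Unstructured-derived signals at the same monthly grain:
# mean Twitter sentiment (-1 to 1) and a search-interest index (0-100).
signals = pd.DataFrame({
    "month": pd.period_range("2022-01", periods=6, freq="M"),
    "avg_sentiment": [0.10, 0.05, 0.30, 0.45, 0.40, 0.55],
    "trends_index": [42, 40, 55, 63, 60, 71],
})

# Layer the signals on top of the sales history by joining on month.
combined = sales.merge(signals, on="month")

# Test whether the signals track sales: the simple correlation check
# described above. Strong correlations suggest predictive value.
print(combined[["units_sold", "avg_sentiment", "trends_index"]].corr())
```

If the correlations hold up, the same joined table can feed a forecasting model directly, with the sentiment and search columns as extra features alongside the historical sales.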
Let’s talk about data silos for a moment. Real-world silos, of course, are the towers on farms used to store grain for future use or sale: tall structures that typically hold just one kind of raw material. By extension, “silo” serves as a metaphor for large collections of raw data that are kept apart from other raw data.
Data frequently gets separated by device and by server: it is stored on various machines but not always shared between them. Even when an application exposes an API (application programming interface), only some of the data the application generates and stores can be shared through it. Organizations accumulate a lot of data over time, but the majority of it is isolated in separate, metaphorical silos that never form a cohesive whole.
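Here is a minimal sketch of how an API can become only a partial window into a silo. The record type and field names are hypothetical; the point is simply that only a whitelisted subset of what the application stores is ever shared.

```python
# A sketch of an API exposing only part of a siloed record.
# The DeviceRecord type and its fields are hypothetical examples.
from dataclasses import dataclass, asdict

@dataclass
class DeviceRecord:
    device_id: str
    firmware: str
    uptime_hours: float
    raw_sensor_log: list        # stays on the device, never exposed
    internal_diagnostics: dict  # stays on the device, never exposed

# The only fields the API is allowed to return.
PUBLIC_FIELDS = {"device_id", "firmware", "uptime_hours"}

def api_response(record: DeviceRecord) -> dict:
    """Return only the shareable slice of a record; the rest stays siloed."""
    return {k: v for k, v in asdict(record).items() if k in PUBLIC_FIELDS}

record = DeviceRecord("edge-017", "2.4.1", 312.5,
                      raw_sensor_log=[21.0, 21.3, 20.8],
                      internal_diagnostics={"fan_rpm": 1800})
print(api_response(record))
# {'device_id': 'edge-017', 'firmware': '2.4.1', 'uptime_hours': 312.5}
```

Everything outside that whitelist, here the raw sensor log and the diagnostics, accumulates on the device and never reaches the rest of the organization.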
Data silos occur naturally in enterprise networking, particularly edge-to-cloud networking. Every device at the edge generates data, but a significant portion of it may remain on the device itself or, at best, within the group of devices at that edge location. Cloud operations are no different: although cloud providers share data from time to time, most of it remains isolated from the rest of the business.
Consider a wall-mounted lighting fixture sold by Home-by-Home that attaches to the wall with plastic brackets. It typically sells well. However, the brackets crack in March and April each year, resulting in an abundance of returns for the business, coming in from all over the country, from Miami to Seattle. Our first data set comes from the stores themselves: the return records.
A partner company builds the brackets in its factory. The factory normally operates at temperatures above 62 degrees Fahrenheit, but in January and February its average ambient temperature is 57 degrees. That factory temperature log is our second data set.
On their own, the two data sets are unconnected. Unless the company can link the factory data set with the store-returns statistics, it has no way to determine that a slightly cooler factory was producing substandard brackets that were failing across the nation.
However, capture all of the data, make the data sets accessible for analysis, and apply big data processing and AI-based correlation, and insights become possible. In this instance, Home-by-Home’s commitment to digital transformation let it make the connection between factory temperature and returns. As a result, customers who purchase those lighting fixtures now experience significantly fewer failures.
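A minimal sketch of that correlation, assuming both silos have been landed in one place, might look like the following. All figures are hypothetical, and the two-month lag between molding and retail failure is an illustrative assumption, not a detail from the story.

```python
# A sketch of joining the two Home-by-Home data sets. All values and
# the two-month molding-to-failure lag are hypothetical assumptions.
import pandas as pd

# Store data set: bracket returns by calendar month (1 = January).
returns = pd.DataFrame({
    "month": [1, 2, 3, 4, 5, 6],
    "bracket_returns": [35, 40, 210, 190, 45, 30],
})

# Factory data set: average ambient temperature (Fahrenheit) in the
# month the brackets were molded.
factory = pd.DataFrame({
    "production_month": [11, 12, 1, 2, 3, 4],
    "avg_temp_f": [64, 63, 57, 57, 63, 65],
})

# Align each production month with the month its brackets fail at
# retail, assuming a two-month lag (wrapping around the year end).
factory["month"] = (factory["production_month"] + 2 - 1) % 12 + 1

combined = returns.merge(factory, on="month")
print(combined[["bracket_returns", "avg_temp_f"]].corr())
```

On data like this, the correlation between factory temperature and returns comes out strongly negative: the cooler the factory in the production month, the more brackets fail two months later, which is exactly the connection Home-by-Home needed to surface.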