The meaning of Big Data

Big Data is “big” because it represents a significantly larger volume of information than usual, so large that it is considered unprocessable with conventional tools. Big Data is therefore not a specific device or technology, but a concept and the hallmark of an era. In the 2010s, we gained far more opportunities to collect data than ever before. This is partly due to the growing quantity and quality of data traffic on the internet: the amount and variety of data we can gather about, say, website visitors and digital services keeps increasing. Countless companies, from banks through the energy sector to the automotive industry, were able to obtain previously unseen volumes of data about their own activities. They gained access to so much data that it amounted to a qualitative leap compared to earlier eras. Above a certain amount of data, we can get to know a given industrial process, digital service or even patterns of human behavior so comprehensively that we can formulate predictions with excellent accuracy. That’s why Big Data has opened a whole new era for design, medicine, software development and even marketing.

Today almost all our devices produce data

Data Analysis

Until a few years ago, we didn't necessarily have adequate tools to process the vast amounts of data continuously collected by sensors and software. When you're working with a big chunk of data, you need a big chunk of computing capacity if you want to learn something about the whole database. Picture a simple table: the more rows it has (the number of items in the sample), the stronger the statistical conclusions that can be drawn from it, and the more columns, the greater the complexity of its data package.
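The "more rows, stronger conclusions" point can be made concrete with a tiny sketch. The sensor readings below are simulated, not real data, and the numbers (a mean of 22.5 °C with some noise) are purely illustrative: as the sample grows, the uncertainty around the estimated mean shrinks.

```python
import math
import random
import statistics

random.seed(0)

# Simulated temperature readings: the more rows we collect,
# the tighter the confidence interval on the mean becomes.
for n in (10, 100, 10_000):
    sample = [random.gauss(22.5, 1.5) for _ in range(n)]
    mean = statistics.mean(sample)
    # The standard error of the mean shrinks with the square root of n.
    sem = statistics.stdev(sample) / math.sqrt(n)
    print(f"n={n:>6}  mean={mean:5.2f}  +/- {1.96 * sem:.3f}")
```

With 10,000 rows the estimate is pinned down to a few hundredths of a degree; with 10 rows it wobbles by roughly a whole degree.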

A large amount of data is much harder to work with. We know the challenges this presents from everyday life: it needs a lot of storage space; it takes longer to evaluate; searches run slowly; it's difficult to share and copy; it's a complex task to edit, arrange or generally modify it in any way; and data security is harder to maintain.

Big Data is often not even a fixed database, but a series of records continuously produced by some source. Instead of taking samples from it, we draw conclusions through continuous observation, which makes it difficult to extract real values, especially with non-Big Data tools.
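One common way to draw conclusions from a stream without ever storing it is to maintain running statistics. The sketch below uses Welford's online algorithm (a standard technique, not something specific to any particular Big Data product) to keep a mean and variance up to date as records arrive one by one; the sample readings are made up for illustration.

```python
# Welford's online algorithm: a running mean and variance over a
# continuous series of records, with no need to keep the full history.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        # Sample variance; undefined for fewer than two records.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for reading in [21.0, 22.5, 23.1, 22.8, 21.9]:  # records arriving one at a time
    stats.update(reading)
print(stats.mean, stats.variance)
```

The observer never holds more than three numbers in memory, no matter how long the stream runs.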

Big Data is also used in predictive analysis and behavioral analysis. Internet search, financial trends, the spread of diseases, crime statistics-based policing, meteorology, medicine, genetics, the simulation of complex physical phenomena, marketing and government functions: Big Data technology can help in all of these areas.

Abundant data was also an important prerequisite for the rapid development of Artificial Intelligence (AI) technologies. Even though Big Data is not strictly necessary for AI solutions, it is much like fertile soil in which a plant can grow more easily and quickly and produce a better harvest.

However, as crops need a mindful farmer, Big Data processes also currently need and will need human oversight for a long time to come. It’s no coincidence that almost every company where Big Data is relevant has a data scientist whose job is to cultivate databases and avoid the traps that an algorithm would inevitably walk into.

Data needs to be cleaned and profiled for optimal operation, and there's also a need for creative control functions, which require human intuition. For example, if a data set consists of surface temperatures measured every five seconds, mostly between 20 and 25 degrees Celsius, a single 800-degree record must certainly be the result of a measurement error. If such errors weren’t removed from the sample by a data scientist or analyst, the results of the measurement would be worthless.
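The 800-degree example above can be expressed as a simple plausibility filter. This is only a sketch: the bounds here are invented for the example, whereas a real pipeline would derive them from the sensor's specified operating range.

```python
# Dropping a physically impossible reading from a temperature series
# before analysis, the way an analyst would with a plausibility rule.
readings = [21.3, 22.8, 24.1, 800.0, 23.5, 20.9]

# Illustrative plausibility bounds; in practice these would come from
# the sensor's documented measurement range.
LOW, HIGH = -40.0, 60.0

clean = [t for t in readings if LOW <= t <= HIGH]
print(clean)  # the 800-degree artifact is gone
```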

It’s also common not to work with a single dataset, but to examine complex sets and subsets, and in such cases it's important for someone to remove the irrelevant ones. For example, if there are 100 cities, each with 1,000 data points (e.g. daily precipitation), but some of them only have 10, those cities should be removed from the analysis if we want to draw comprehensively relevant conclusions from the entire database. Cities with only 10 measurements would distort our sample.
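Filtering out under-sampled subsets like those cities is a one-liner once the records are grouped by city. The city names and the tiny record counts below are invented to keep the sketch short; the article's example would use a threshold closer to 100.

```python
from collections import defaultdict

# Group precipitation records (mm) by city, then keep only cities
# with enough measurements to compare fairly with the rest.
records = defaultdict(list)
for city, mm in [
    ("Springfield", 1.2),
    ("Springfield", 0.0),
    ("Shelbyville", 3.4),  # too few records: would distort the sample
]:
    records[city].append(mm)

MIN_RECORDS = 2  # illustrative threshold
usable = {city: vals for city, vals in records.items() if len(vals) >= MIN_RECORDS}
print(sorted(usable))
```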

Collection of Data

For any company, a key question is what data it has access to and what additional options it has to expand its data collection. It's worth thoroughly studying what measurements are being conducted in the company’s offline and online activities, how this data is archived, and how its evaluation can be made easier.

How many phone calls does the receptionist receive during a shift? How much fuel do colleagues with company cars use, individually, on average and in total? How does a particular workstation perform on an average day? Which days stand out, positively or negatively? Analysis can reveal nuances such as: if Jill and Jane sit next to each other, their efficiency drops by 30 percent… or maybe it improves! This information can then be used in the next work organization phase.

Data processing

The fusion of Big Data and Machine Learning can get some serious business done.

One possible use is defect screening. If a manufactured product deviates from the specified parameters in any way during Machine Vision-aided quality control, the system issues a warning and filters out the defective product. Big Data algorithms can also flag erroneous data as it is generated. In such cases a data analyst examines the anomaly and decides whether it really is an erroneous measurement or something else is causing it. This process can, for example, help identify and repair faulty sensors and correct configuration errors.
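The warning step can be sketched with a simple statistical rule. The part widths below are invented, and the median-plus-MAD rule with a factor of 5 is one common robust convention rather than any particular vendor's method; its advantage over a plain standard-deviation rule is that a single extreme value can't inflate the spread and hide itself.

```python
import statistics

def flag_anomalies(values, threshold=5.0):
    """Flag values far from the median, scaled by the median absolute
    deviation (MAD), so an outlier can't mask itself by inflating the spread."""
    med = statistics.median(values)
    mad = statistics.median(abs(x - med) for x in values)
    return [x for x in values if abs(x - med) > threshold * mad]

# Measured widths (mm) from a quality-control station; 13.7 is the defect.
widths_mm = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 13.7, 10.00]
print(flag_anomalies(widths_mm))
```

Whether the flagged value is a defective product or a faulty sensor is exactly the judgment call the analyst then makes.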

Filtering out faulty measurements is also part of the data collection process

Prediction can often come in handy in business operations. Using Bayes' theorem, we can determine the probability of an event based on its preconditions. This proves useful when deciding when certain parts of a machine should actually be replaced according to operational data, instead of replacing them automatically at predetermined milestones such as running time, even though the part in question still works perfectly well. Prediction can also help in the analysis of customer behavior. For example, if your current data begins to resemble a past dataset that was followed by increased interest in a particular product, you can prepare for the next surge ahead of time.
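A worked example of the Bayes' theorem idea: all three probabilities below are made-up illustrative numbers, not real operational data. Given a prior rate of worn parts and how often worn and healthy parts produce abnormal vibration, the theorem gives the probability that a part is actually worn once vibration is observed, which is what should drive the replacement decision.

```python
# Illustrative probabilities (assumptions for this sketch):
p_worn = 0.05            # prior: 5% of parts are worn at inspection time
p_vib_given_worn = 0.90  # worn parts usually vibrate abnormally
p_vib_given_ok = 0.10    # healthy parts occasionally vibrate too

# Bayes' theorem: P(worn | vibration)
#   = P(vibration | worn) * P(worn) / P(vibration)
p_vib = p_vib_given_worn * p_worn + p_vib_given_ok * (1 - p_worn)
p_worn_given_vib = p_vib_given_worn * p_worn / p_vib
print(round(p_worn_given_vib, 3))
```

Note the result: even with vibration observed, the part is more likely healthy than worn, because worn parts are rare to begin with. That asymmetry is why replacing parts on evidence rather than on a fixed schedule pays off.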

Big Data is not witchcraft and AI is not a self-aware ghost in the machine, but they are undoubtedly new and useful technological tools that can help in all areas of life, so it's worth keeping an eye on these innovative fields.

Are you interested in starting a similar project but would like to ask some questions first?
Make the most of our free consultation service and contact us today.