Finding the gems in troves of big data

Friday, March 17, 2017

Businesses and institutions are drenched in data and sources of data. For some large organizations, data sources number in the thousands and the data within them can be in any number of formats. This creates mind-boggling challenges, from accessing and managing data to integrating, analyzing and abstracting useful information from it.

Unfortunately, the problem is poised to become more complicated. With the growing popularity of smartphones, tablets, cloud storage and devices connected to the Internet, reconciling multiple and disparate volumes of data — big data as it’s known — will become increasingly difficult.

“Although the term big data was coined relatively recently, people have been struggling with how to manage data for some time,” said Ihab Ilyas, professor and Thomson Reuters–funded research chair at the David R. Cheriton School of Computer Science. “Big data deals specifically with problems that arise from the exponential increase in data volume and variety.”

Over the last few decades, computer scientists invented various database management models to sort and manage data in files and data structures. In its early days, these systems dealt mostly with limited amounts of clean data. Amassing ever larger volumes of data was relatively easy, but accessing, managing and reasoning about the data became progressively more difficult — and this growing mismatch created a problem. In a sense, computer scientists became victims of their own success.

Ilyas’s research focuses on methods to unify increasingly large and diverse data from sources that are often dirty and inconsistent.

“Data come from many sources, and they often describe the same things in different ways,” he said. “But data can also have discrepancies and contradictions. To be able to use such data meaningfully you need to be able to reason about these discrepancies and contradictions.”

The assumptions of the early days of data management made solutions reasonably straightforward: the amount of data was small, the data was clean and metadata was available, so it was all about matching. But in the current era of big, messy data, data cleaning and integration have taken centre stage.

“Today data is rarely clean, so we have to deal with it probabilistically rather than assuming it’s a collection of facts. Instead, we assume the data is a collection of observations, and the facts — whatever they are — are out there for us to discover.”

This is more difficult than it may seem, but recent advancements in machine learning have helped speed up the process.

“We begin by training a machine-learning model by giving it lots of examples of things that are the same and lots of examples of things that are not the same,” he explained. “We then let the model figure out a way to judge two new things it hasn’t encountered before and determine the probability that they are the same.”
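The pairwise idea can be illustrated with a small sketch. The example below is a hypothetical illustration, not Ilyas’s actual system: it trains a logistic-regression classifier on labelled pairs of records, represented by a few simple string-similarity features, and then asks it for the probability that two records it has never seen refer to the same thing. The features, the training pairs and the use of scikit-learn are all illustrative assumptions.

```python
# Minimal sketch of pairwise entity matching (illustrative only).
# Each training example is a pair of records labelled 1 (same entity) or 0 (different).
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def pair_features(a: str, b: str) -> list[float]:
    """Turn a record pair into numeric similarity features (assumed feature set)."""
    ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    len_diff = abs(len(a) - len(b)) / max(len(a), len(b), 1)
    same_first_token = float(a.lower().split()[0] == b.lower().split()[0])
    return [ratio, len_diff, same_first_token]

# Labelled examples: pairs that describe the same thing, and pairs that do not.
pairs = [
    ("Thomson Reuters Corp", "Thomson Reuters", 1),
    ("Apple Inc.", "Apple Incorporated", 1),
    ("University of Waterloo", "Univ. of Waterloo", 1),
    ("Intl Business Machines Corp", "International Business Machines", 1),
    ("Apple Inc.", "Amazon.com Inc.", 0),
    ("University of Waterloo", "University of Toronto", 0),
    ("General Motors Co.", "General Electric Co.", 0),
]

X = [pair_features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

model = LogisticRegression().fit(X, y)

# Ask the model about a pair it has not encountered before.
new_pair = ("Thomson Reuters Canada", "Thomson-Reuters")
prob_same = model.predict_proba([pair_features(*new_pair)])[0][1]
print(f"P(same entity) = {prob_same:.2f}")
```

Real systems use far richer features and much larger training sets, but the shape of the task is the same: learn from labelled pairs, then score new pairs probabilistically.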

As the term implies, the machine learns by doing and in the process becomes more accurate.

“Most machine learning is based on probability theory to reason about the likelihood of some event happening — for example, this particular event is likely, but this other event is unlikely,” Ilyas explained. “The machine chooses the more likely options and weeds out impossible ones, those that are highly improbable and so on, pruning the space of possible facts.”
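To make the pruning idea concrete, here is a minimal sketch under assumed inputs (the sources, reliability scores and threshold are hypothetical, not drawn from Ilyas’s work): conflicting observations of the same attribute are scored by a reliability-weighted vote, converted to probabilities, and any candidate fact that falls below a threshold is weeded out.

```python
# Minimal sketch of pruning unlikely candidate facts (illustrative only).
# Several sources report conflicting values for the same attribute; each source
# has an assumed reliability. Low-probability candidates are pruned.
from collections import defaultdict

observations = [                    # (source, reported value for "headquarters city")
    ("source_a", "Toronto"),
    ("source_b", "Toronto"),
    ("source_c", "Torronto"),       # a dirty observation
    ("source_d", "Waterloo"),
]
reliability = {"source_a": 0.9, "source_b": 0.8, "source_c": 0.4, "source_d": 0.6}

# Weighted vote: each source contributes its reliability to the value it reports.
scores = defaultdict(float)
for source, value in observations:
    scores[value] += reliability[source]

total = sum(scores.values())
probabilities = {value: score / total for value, score in scores.items()}

THRESHOLD = 0.15                    # assumed pruning threshold
surviving_facts = {v: p for v, p in probabilities.items() if p >= THRESHOLD}
print(surviving_facts)              # the highly improbable candidates are gone
```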

The goal is to create tools, and a deeper understanding of data curation and abstraction, that can be applied across many sectors.

“We’d like to transform the field of data curation and cleaning from a kitchen sink of best practices to libraries, tools and a better understanding to drive the economy more efficiently,” he said. “We can accelerate the data science part of that tenfold if we remove the overhead of cleaning and preparing data, which often consumes 90% of the time.”

Such advances will have a pronounced impact across sectors and in such diverse areas as pharmaceuticals and life sciences, financial services and industrial manufacturing. The techniques Ilyas and his team have developed have recently been applied to systematic reviews, a type of literature review that collects, critically analyzes and synthesizes results from multiple research studies.

“Conducting a thorough systematic review can be a lengthy process, often taking a year or more to search a database of literature, harvest key results, organize them, understand the biases in studies and decipher the findings,” he said. “With better and faster systematic reviews, researchers will avoid mistakes, find trends faster and consolidate results earlier, accelerating the pace of scientific discovery.”

At Tamr, a start-up co-founded by Ilyas and collaborators, the clustering, cleaning and integration techniques the team has developed have reduced the time needed to integrate and unify large amounts of data across silos by an order of magnitude.

“We cut the time to conduct large enterprise data integration projects from more than six months to just a couple of weeks,” Ilyas said. “When we consolidated the spending data of a large Fortune 500 company, the savings realized were in the hundreds of millions of dollars.”
