The Need for Multi-Paradigm Data Science

Monday 25 September 2017


Organizations are investing heavily in data science and AI to empower a new era of analytics-driven decision-making, but they are often disappointed with the results.

One cause is a fixation on applying a single data science approach to every problem. There are many conceptual approaches to analyzing different kinds of data, and each is supported by many algorithms for extracting insights. Drawing on the full range of these approaches is what we call multi-paradigm data science. Yet many data scientists stick rigidly to a single paradigm, often classical statistics or, more recently, machine learning.

There are several reasons for this mono-paradigm thinking. First, there is an educational deficit: not, as many claim, in the quality of STEM education, but in subject matter that is focused on hand-calculation techniques. It is therefore natural for analysts to use computers only to scale up those simple computations to volume, rather than to consider more interesting conceptualizations, which were never taught because they are impossible to apply in the classroom without a computer.

This starting point is then entrenched by the choice of tools, which are typically selected to support only the limited set of computations anticipated in advance. When researchers have only statistics software and databases available, every problem looks like a statistics problem or a data-selection problem.

Finally, few organizations take a strategic view of planning an enterprise computation platform, instead abdicating the choice to individual teams, who consider only their own immediate interests, guided by their particular mono-paradigm approach. As a result, different computational groups within an organization have no common language for exchanging computations other than low-level programming languages. Any enlightened individual who does embrace multi-paradigm data science becomes marginalized, unable to deploy their work across the organization or empower their colleagues.

Organizations can address this problem by developing an enterprise strategy not just for the storage of data but for the multi-paradigm computations that will unlock the data’s value. Putting the widest choice of tools in the hands of researchers, and training them in their application, will widen the range of insights that can be extracted from the data. Giving teams a common tool-set and making it easier for them to share ideas and computations will spread new ideas faster and build skills more widely across all data science paradigms. Finally, making it possible to deploy algorithms in production systems for reporting or real-time decision-making will empower your organization to make better decisions with more agility and drive competitiveness. To find out more, join Conrad Wolfram’s talk ‘Enterprise computation: the next frontier in AI and Data Science’ in the Keynote Theatre at 13.40-14.10, or drop by our stand, M6c.

Conrad Wolfram, Strategic Director of Wolfram Research, will be speaking on Wednesday 4 October at IP EXPO Europe.