Göran Falkman
School of Informatics
An unprecedented growth of data, fed by novel technologies, user behaviour and business models, is one of the most dramatic and important developments in both ICT and society at large. Much of this data is Big Data, characterised by vast volume, high velocity, large variety and unknown veracity. These properties pose significant challenges for collecting, managing and processing the data.
There is an enormous commercial, societal and environmental potential in exploiting this information, and the ability to make use of Big Data is acknowledged to be one of the most important competitive factors over the next few years. Even though the capacity to store, distribute and search large data sets exists today, this alone is not sufficient for realising their full potential. It is the knowledge hidden within the data that has real value, and extracting that value is the purpose of Big Data Analytics.
The very fact that data is now in such abundance makes a qualitative, not only quantitative, difference and opens up the possibility to develop new tools that are vastly superior to today’s technologies. The data processing and analysis capabilities provided by Big Data Analytics have the potential to become one of the determining factors in corporate and societal value creation, but the tools and methods to fully realise this value, especially in distributed and high-velocity streaming scenarios, are not yet in place.
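The streaming scenario mentioned above can be made concrete with a small sketch: a single-pass estimator that processes each arriving value exactly once, in constant memory, so summary statistics stay current as data flows in. The example below uses Welford's online algorithm for mean and variance; the class and names are purely illustrative and not part of the BIDAF project itself.

```python
class StreamingStats:
    """Running mean and variance via Welford's online algorithm.

    Illustrates the high-velocity setting: each record is seen once,
    no raw data is stored, and estimates are always up to date.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the current mean

    def update(self, x: float) -> None:
        # One constant-time update per arriving value.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        # Sample variance; defined once at least two values have arrived.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


if __name__ == "__main__":
    stats = StreamingStats()
    for value in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
        stats.update(value)
    print(stats.mean)      # → 5.0
    print(stats.variance)  # → 4.571428...
```

The same single-pass, bounded-memory discipline underlies more sophisticated streaming machine learning methods, where a model is updated incrementally rather than retrained on a stored batch.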
The overall aim of BIDAF is to create a strong distributed research environment for Big Data Analytics. The scientific objectives centre on realising the promise of advanced, near real-time analytics on uncertain data of high volume and velocity through machine learning techniques, with key challenges including:
The BIDAF consortium consists of well-established groups in the international research community with complementary backgrounds and research foci that together possess the tools to tackle key challenges in Big Data Analytics. Together they represent many years of experience, competence, and both applied and theoretical research within areas such as data analysis, machine learning, statistical modelling, computational platforms, distributed algorithms, uncertainty management, data fusion, and visualisation.
The research group at the University of Skövde is responsible for the work package High-level Functionality and Analytics. The overall aim of this work package is to define methods and techniques that increase the usefulness and usability of Big Data Analytics. The researchers will provide: