Call for Abstracts

The 3rd International Conference on Big Data Analysis and Data Mining will be organized around the theme “Future Technologies for Knowledge Discoveries in Data”.

Data Mining 2016 comprises 25 tracks and 134 sessions designed to address current issues in big data analysis and data mining.

Submit your abstract to any of the tracks listed below. All related abstracts are welcome.

Register now for the conference by choosing the package that suits you.

Big data is data so large that it does not fit in the main memory of a single machine, and the need to process big data with efficient algorithms arises in Internet search, network traffic monitoring, machine learning, scientific computing, signal processing, and several other areas. This track covers mathematically rigorous models for developing such algorithms, as well as some provable limitations of algorithms operating in those models.
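
As a concrete illustration of the stream model, the following minimal Python sketch implements reservoir sampling, a classic data stream algorithm that maintains a uniform random sample of a stream too large for main memory using only O(k) space. The stream here is synthetic and purely illustrative.

    import random

    def reservoir_sample(stream, k):
        """Keep a uniform random sample of k items from a stream of unknown length."""
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)      # fill the reservoir first
            else:
                j = random.randint(0, i)    # keep the new item with probability k/(i+1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    print(reservoir_sample(range(10_000_000), 5))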

  • Track 1-1 Data Stream Algorithms
  • Track 1-2 Randomized Algorithms for Matrices and Data
  • Track 1-3 Algorithmic Techniques for Big Data Analysis
  • Track 1-4 Models of Computation for Massive Data
  • Track 1-5 The Modern Algorithmic Toolbox

Business Analytics is the study of data through statistical and operations analysis, the formation of predictive models, the application of optimization techniques, and the communication of these results to customers, business partners, colleagues, and executives. It is the intersection of business and data science.

  • Track 2-1 Emerging phenomena
  • Track 2-2 Technology drivers and business analytics
  • Track 2-3 Capitalizing on a growing marketing opportunity

Big data brings not only opportunities but also challenges. Traditional data processing has been unable to meet the massive real-time demands of big data; we need a new generation of information technology to deal with the explosion of big data.

  • Track 3-1 Big data storage architecture
  • Track 3-2 GEOSS clearinghouse
  • Track 3-3 Distributed and parallel computing

In our e-world, data privacy and cybersecurity have become commonplace terms. In our business, we have an obligation to secure our clients’ data, which is obtained with their explicit permission and solely for their use. That is an important point, even if it is not readily apparent. There has been a lot of talk lately about Google’s new privacy policies, and the discussion quickly spreads to other Internet giants such as Facebook and how they also handle and treat our personal information.

  • Track 4-1 Data encryption
  • Track 4-2 Data hiding
  • Track 4-3 Public key cryptography
  • Track 4-4 Quantum cryptography
  • Track 4-5 Convolution
  • Track 4-6 Hashing

Clustering can be considered the most important unsupervised learning problem; like every other problem of this kind, it deals with finding structure in a collection of unlabeled data. A loose definition of clustering is the process of organizing objects into groups whose members are similar in some way.
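
As a small illustration of the idea, here is a minimal single-linkage hierarchical clustering sketch in Python (one of the sub-topics below); the one-dimensional points are invented for the example.

    def single_linkage(points, num_clusters):
        """Merge the two closest clusters until num_clusters remain."""
        clusters = [[p] for p in points]
        while len(clusters) > num_clusters:
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    # single linkage: distance between the closest members
                    d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
            _, i, j = best
            clusters[i].extend(clusters.pop(j))   # j > i, so index i is unaffected
        return clusters

    print(single_linkage([1.0, 1.2, 5.0, 5.1, 9.8], 2))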

  • Track 5-1 Hierarchical clustering
  • Track 5-2 Density-based clustering
  • Track 5-3 Spectral and graph clustering
  • Track 5-4 Clustering validation

A frequent pattern is a pattern that occurs frequently in a data set. The concept was first proposed in [AIS93] in the context of frequent item sets and association rule mining for market basket analysis, and has since been extended to many other problems such as graph mining, sequential pattern mining, time series pattern mining, and text mining.
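
For illustration, here is a minimal Python sketch of frequent item set counting in the market-basket spirit of [AIS93]: enumerate small item sets per transaction and keep those meeting a minimum support threshold. The baskets and the threshold are invented for the example.

    from collections import Counter
    from itertools import combinations

    transactions = [{"milk", "bread"}, {"milk", "eggs"},
                    {"milk", "bread", "eggs"}, {"bread", "eggs"}]
    min_support = 2   # itemset must appear in at least 2 baskets

    counts = Counter()
    for basket in transactions:
        for size in (1, 2):                              # 1- and 2-item sets only
            for itemset in combinations(sorted(basket), size):
                counts[itemset] += 1

    frequent = {s: c for s, c in counts.items() if c >= min_support}
    print(frequent)   # e.g. ('bread', 'milk') is frequent with support 2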

  • Track 6-1 Frequent item sets and association
  • Track 6-2 Item set mining algorithms
  • Track 6-3 Graph pattern mining
  • Track 6-4 Pattern and role assessment

In machine learning, kernel methods are a class of algorithms for pattern analysis, whose best-known member is the support vector machine (SVM). The general task of pattern analysis is to find and study general types of relations (for example, clusters, rankings, principal components, correlations, classifications) in datasets.
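
For illustration, here is a minimal Python/NumPy sketch of one building block of kernel methods: a Gaussian (RBF) kernel matrix, through which such algorithms access the data as pairwise similarities rather than explicit feature-space coordinates. The sample points and the gamma value are illustrative.

    import numpy as np

    def rbf_kernel(X, gamma=1.0):
        """Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared pairwise distances
        return np.exp(-gamma * d2)

    X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
    print(np.round(rbf_kernel(X, gamma=0.5), 3))        # similar points score near 1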

  • Track 7-1 Kernel operations in feature space
  • Track 7-2 Kernels for complex objects
  • Track 7-3 High dimensional data
  • Track 7-4 Density of the multivariate normal
  • Track 7-5 Dimensionality reduction
  • Track 7-6 Kernel principal component analysis

The fundamental algorithms in data mining and analysis form the basis for the emerging field of data science, which includes automated methods to analyse patterns and models for all kinds of data, with applications ranging from scientific discovery to business intelligence and analytics.

  • Track 8-1 Numeric attributes
  • Track 8-2 Categorical attributes
  • Track 8-3 Graph data

Over the past two decades, there has been a huge increase in the amount of data being stored in databases as well as the number of database applications in business and the scientific domain. This explosion in the amount of electronically stored data was accelerated by the success of the relational model for storing data and the development and maturing of data retrieval and manipulation technologies. 

  • Track 9-1 Multifaceted and task-driven search
  • Track 9-2 Personalized search and ranking
  • Track 9-3 Data, entity, event, and relationship extraction
  • Track 9-4 Data integration and data cleaning
  • Track 9-5 Opinion mining and sentiment analysis

Social network analysis (SNA) is the process of examining social structures through the use of network and graph theories. It characterizes networked structures in terms of nodes (individual actors, people, or things within the network) and the ties or edges (relationships or interactions) that connect them.
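
As a small illustration, the following Python sketch represents a network as an edge list and computes normalized degree centrality, one elementary way to characterize nodes by their ties; the actors are invented for the example.

    from collections import defaultdict

    edges = [("ann", "bob"), ("ann", "carol"), ("bob", "carol"), ("carol", "dave")]
    neighbors = defaultdict(set)
    for u, v in edges:                  # ties are undirected relationships here
        neighbors[u].add(v)
        neighbors[v].add(u)

    n = len(neighbors)
    for node, ties in sorted(neighbors.items()):
        # fraction of the other nodes this actor is directly tied to
        print(node, len(ties) / (n - 1))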

  • Track 10-1 Networks and relations
  • Track 10-2 Development of social network analysis
  • Track 10-3 Analyzing relational data
  • Track 10-4 Dimensions and displays
  • Track 10-5 Positions, sets and clusters

The age of Big Data is here: data of huge sizes is becoming ubiquitous. With this comes the need to solve optimization problems of unprecedented size. Machine learning, compressed sensing, social network science and computational biology are among the prominent application domains where it is easy to formulate optimization problems with millions or billions of variables.
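
For illustration, here is a minimal stochastic gradient descent sketch in Python for least-squares regression: each step touches a single example, which is what makes such formulations feasible when the number of examples or variables is huge. The synthetic data, learning rate, and step count are assumptions for the example.

    import random

    data = [(x, 3.0 * x + 1.0) for x in range(10)]   # synthetic line y = 3x + 1
    w, b, lr = 0.0, 0.0, 0.01

    for step in range(5000):
        x, y = random.choice(data)       # one example per step
        err = (w * x + b) - y            # d/dw of err^2/2 is err*x; d/db is err
        w -= lr * err * x
        b -= lr * err

    print(round(w, 2), round(b, 2))      # approaches w = 3, b = 1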

  • Track 11-1 Computational problems in magnetic resonance imaging
  • Track 11-2 Optimization of big data in mobile networks

Data visualization is viewed by many disciplines as a modern equivalent of visual communication. It is not owned by any one field, but rather finds interpretation across many. It encompasses the formation and study of the visual representation of data, meaning "information that has been abstracted in some schematic form, including characteristics or variables for the units of information".
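
As a small illustration, the following Python sketch uses matplotlib (a common plotting library, chosen here as an assumption rather than anything mandated by the track) to abstract a few values into a schematic scatter plot.

    import matplotlib.pyplot as plt

    hours = [1, 2, 3, 4, 5, 6]           # illustrative units of information
    score = [52, 57, 65, 70, 78, 83]

    plt.scatter(hours, score)            # each point encodes two variables
    plt.xlabel("Hours studied")
    plt.ylabel("Test score")
    plt.title("Values abstracted into schematic form")
    plt.savefig("scatter.png")           # write the visualization to a file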

  • Track 12-1 Analysis data for visualization
  • Track 12-2 Scalar visualization techniques
  • Track 12-3 Framework for flow visualization
  • Track 12-4 System aspects of visualization applications
  • Track 12-5 Future trends in scientific visualization

The process of extracting data from source systems and bringing it into the data warehouse is commonly called ETL, which stands for extraction, transformation, and loading. Note that ETL refers to a broad process, and not three well-defined steps. The acronym ETL is perhaps too simplistic, because it omits the transportation phase and implies that each of the other phases of the process is distinct. Nevertheless, the entire process is known as ETL.
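
For illustration, here is a minimal end-to-end ETL sketch in Python: extract rows from a CSV source system, transform them (type cleaning and filtering), and load them into a SQLite warehouse table. The file name, column names, and schema are illustrative assumptions, not part of any particular warehouse.

    import csv
    import sqlite3

    def extract(path):
        """Extraction: read raw rows from the source file."""
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        """Transformation: drop incomplete rows and coerce types."""
        for row in rows:
            if row["amount"]:
                yield (row["customer"], float(row["amount"]))

    def load(records, db_path="warehouse.db"):
        """Loading: write cleaned records into the warehouse table."""
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL)")
        con.executemany("INSERT INTO sales VALUES (?, ?)", records)
        con.commit()
        con.close()

    load(transform(extract("sales.csv")))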

  • Track 13-1 ETL Basics in Data Warehousing
  • Track 13-2 ETL Tools for Data Warehouses
  • Track 13-3 Logical Extraction Methods
  • Track 13-4 ETL data structures
  • Track 13-5 Cleaning and conforming
  • Track 13-6 Delivering dimension tables

Data mining systems and algorithms form an interdisciplinary subfield of computer science concerned with the computational process of discovering patterns in large data sets. This track covers topics such as Big Data Search and Mining, Novel Theoretical Models for Big Data, high-performance data mining algorithms, methodologies for large-scale data mining, Big Data Analysis, Data Mining Analytics, and Big Data and Analytics.

  • Track 14-1 Novel Theoretical Models for Big Data
  • Track 14-2 New Computational Models for Big Data
  • Track 14-3 High performance data mining algorithms
  • Track 14-4 Methodologies on large-scale data mining
  • Track 14-5 Empirical study of data mining algorithms

Online Analytical Processing (OLAP) is a technology that is used to create decision support software. OLAP enables application users to quickly analyse information that has been summarized into multidimensional views and hierarchies. By summarizing predicted queries into multidimensional views prior to run time, OLAP tools provide the benefit of increased performance over traditional database access tools. Most of the resource-intensive calculation that is required to summarize the data is done before a query is submitted.
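
For illustration, here is a minimal Python/SQLite sketch of that idea: the resource-intensive aggregation is performed once into a summary table before query time, so user queries read the small pre-summarized view instead of scanning the detail rows. The tables and figures are invented for the example.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (region TEXT, year INT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
        ("east", 2015, 100.0), ("east", 2015, 150.0),
        ("west", 2015, 200.0), ("west", 2016, 250.0),
    ])

    # summarize into a multidimensional (region x year) view ahead of time
    con.execute("""CREATE TABLE sales_cube AS
                   SELECT region, year, SUM(amount) AS total
                   FROM sales GROUP BY region, year""")

    # queries at run time hit the small pre-aggregated table
    for row in con.execute("SELECT * FROM sales_cube ORDER BY region, year"):
        print(row)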

  • Track 15-1 Data Storage and Access
  • Track 15-2 OLAP Operations
  • Track 15-3 OLAP Architecture
  • Track 15-4 OLAP tools and the Internet
  • Track 15-5 Functional requirements of OLAP systems
  • Track 15-6 Limitations of spreadsheets and SQL

Open data is the idea that some data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control. The goals of the open data movement are similar to those of other "open" movements such as open source, open hardware, open content, and open access.

  • Track 16-1 Open Science and Research
  • Track 16-2 Technology, Tools and Business
  • Track 16-3 Open Development and Sustainability
  • Track 16-4 Open Data, Government and Governance

The complexity of an algorithm signifies the total time required by the program to run to completion. It is most commonly expressed using big O notation and is most commonly estimated by counting the number of elementary operations performed by the algorithm. Since an algorithm's performance may vary with different types of input data, we usually use the worst-case complexity: the maximum time taken over all inputs of a given size.
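
For illustration, here is a minimal Python sketch that estimates complexity exactly as described, by counting elementary comparisons in the worst case: linear search performs on the order of n comparisons, while binary search on sorted input needs only about log2(n).

    def linear_search(items, target):
        steps = 0
        for x in items:                   # O(n): may scan every element
            steps += 1
            if x == target:
                break
        return steps

    def binary_search(items, target):
        lo, hi, steps = 0, len(items) - 1, 0
        while lo <= hi:                   # O(log n): halves the range each pass
            steps += 1
            mid = (lo + hi) // 2
            if items[mid] == target:
                break
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps

    data = list(range(1_000_000))
    print(linear_search(data, 999_999))   # worst case: 1,000,000 comparisons
    print(binary_search(data, 999_999))   # roughly 20 comparisons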

  • Track 17-1 Mathematical Preliminaries
  • Track 17-2 Recursive Algorithms
  • Track 17-3 The Network Flow Problem
  • Track 17-4 Algorithms in the Theory of Numbers
  • Track 17-5 NP-completeness

Big Data is a revolutionary phenomenon, one of the most frequently discussed topics of the modern age, and is expected to remain so for the foreseeable future. Skills, hardware and software, algorithm architecture, statistical significance, the signal-to-noise ratio and the nature of Big Data itself are identified as the major challenges hindering the process of obtaining meaningful forecasts from Big Data.
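
For illustration, here is a minimal Python sketch of simple exponential smoothing, a baseline statistical forecasting technique that damps noise by blending each new observation with the running level; the series and smoothing constant are invented for the example.

    def exponential_smoothing(series, alpha=0.3):
        """Return a one-step-ahead forecast from a noisy series."""
        level = series[0]
        for y in series[1:]:
            level = alpha * y + (1 - alpha) * level   # blend signal and history
        return level

    sales = [120, 132, 125, 141, 150, 147, 160]
    print(round(exponential_smoothing(sales), 1))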

  • Track 18-1 Challenges for Forecasting with Big Data
  • Track 18-2 Applications of Statistical and Data Mining Techniques for Big Data Forecasting
  • Track 18-3 Forecasting the Michigan Confidence Index
  • Track 18-4 Forecasting targets and characteristics

Cloud computing is a kind of Internet-based computing that provides shared processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in third-party data centers. It relies on sharing of resources to achieve coherence and economy of scale, similar to a utility over a network.

  • Track 19-1 Emerging Cloud Computing Technology
  • Track 19-2 Cloud Automation and Optimization
  • Track 19-3 Mobile Cloud Computing
  • Track 19-4 High Performance Computing
  • Track 19-5 Cloud Computing Applications

Artificial intelligence is intelligence exhibited by machines or software. AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. It includes cybernetics, artificial creativity, artificial neural networks, adaptive systems, and ontologies and knowledge sharing.

  • Track 20-1 Cybernetics
  • Track 20-2 Artificial creativity
  • Track 20-3 Artificial Neural networks
  • Track 20-4 Adaptive Systems
  • Track 20-5 Ontologies and Knowledge sharing

In computing, a data warehouse, also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis. Data warehouses are central repositories of integrated data from one or more disparate sources. This track covers Data Warehouse Architectures, case studies of Data Warehousing Systems, data warehousing in Business Intelligence, the role of Hadoop in Business Intelligence and Data Warehousing, commercial applications of Data Warehousing, computational EDA (Exploratory Data Analysis) techniques, and Machine Learning and Data Mining.

  • Track 21-1 Data Warehouse Architectures
  • Track 21-2 Case studies: Data Warehousing Systems
  • Track 21-3 Data warehousing in Business Intelligence
  • Track 21-4 Role of Hadoop in Business Intelligence and Data Warehousing
  • Track 21-5 Commercial applications of Data Warehousing
  • Track 21-6 Computational EDA (Exploratory Data Analysis) Techniques

Data mining tools and software projects cover topics including Big Data Security and Privacy, Data Mining and Predictive Analytics in Machine Learning, and Interfaces to Database Systems and Software Systems.

  • Track 22-1 Big Data Security and Privacy
  • Track 22-2 E-commerce and Web services
  • Track 22-3 Medical informatics
  • Track 22-4 Visualization Analytics for Big Data
  • Track 22-5 Predictive Analytics in Machine Learning and Data Mining
  • Track 22-6 Interface to Database Systems and Software Systems

Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Applications of big data include Big Data Analytics in Enterprises, Big Data Trends in the Retail and Travel Industries, the current and future scenario of the Big Data Market, financial aspects of the Big Data Industry, big data in clinical and healthcare settings, big data in Regulated Industries, big data in Biomedicine, and Multimedia and Personal Data Mining.

  • Track 23-1 Big Data Analytics in Enterprises
  • Track 23-2 Big Data Trends in Retail
  • Track 23-3 Big Data in Travel Industry
  • Track 23-4 Current and future scenario of Big Data Market
  • Track 23-5 Financial aspects of Big Data Industry
  • Track 23-6 Big data in clinical and healthcare
  • Track 23-7 Big data in Regulated Industries
  • Track 23-8 Big data in Biomedicine
  • Track 23-9 Big data in E-commerce
  • Track 23-10 Big data in security and privacy
  • Track 23-11 Big data in Manufacturing
  • Track 23-12 Big data in smart cities
  • Track 23-13 Big data in Mobile apps
  • Track 23-14 Big data in eGovernment
  • Track 23-15 Big data in Public administration

Data Mining Applications in Engineering and Medicine aims to help data miners who wish to apply different data mining techniques. These applications include data mining systems in financial market analysis, applications of data mining in education, data mining and web applications, medical data mining, data mining in healthcare, engineering data mining, data mining in security, social data mining, and neural networks and data mining.

  • Track 24-1 Data mining systems in financial market analysis
  • Track 24-2 Application of data mining in education
  • Track 24-3 Data mining and processing in bioinformatics, genomics and biometrics
  • Track 24-4 Advanced Database and Web Application
  • Track 24-5 Medical Data Mining
  • Track 24-6 Data Mining in Healthcare
  • Track 24-7 Engineering data mining
  • Track 24-8 Data Mining in security

A data mining task can be specified as a data mining query, which is defined in terms of data mining task primitives. This track includes competitive analysis of mining algorithms, semantic-based data mining and data pre-processing, mining on data streams, graph and sub-graph mining, scalable data pre-processing and cleaning techniques, statistical methods in data mining, and data mining predictive analytics.

  • Track 25-1 Competitive analysis of mining algorithms
  • Track 25-2 Computational Modeling and Data Integration
  • Track 25-3 Semantic-based Data Mining and Data Pre-processing
  • Track 25-4 Mining on data streams
  • Track 25-5 Graph and sub-graph mining
  • Track 25-6 Scalable data preprocessing and cleaning techniques
  • Track 25-7 Statistical Methods in Data Mining