Call for Papers

DeLTA is sponsored by INSTICC – Institute for Systems and Technologies of Information, Control and Communication

SCOPE

Deep Learning and Big Data Analytics are nowadays two major topics of data science. Big Data has become important in practice, as many organizations collect massive amounts of data that can contain useful information for business analysis and decisions, impacting existing and future technology. A key benefit of Deep Learning is its ability to process these data and extract high-level, complex abstractions as data representations, making it a valuable tool for Big Data Analytics, where raw data is largely unlabeled.

Machine learning and artificial intelligence are pervasive in most real-world application scenarios, such as computer vision, information retrieval and summarization from structured and unstructured multimodal data sources, natural language understanding and translation, and many other application domains. Deep learning approaches, leveraging big data, are outperforming more “classical” state-of-the-art supervised and unsupervised approaches by directly learning relevant features and data representations without requiring explicit domain knowledge or human feature engineering. These approaches are currently of particular importance in IoT applications.
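As a brief, purely illustrative sketch of the unsupervised feature learning referred to above, the following code trains a single-hidden-layer autoencoder on unlabeled data using only NumPy, learning a compressed representation from reconstruction error alone; the data, dimensions and hyperparameters are illustrative assumptions rather than anything prescribed by the conference.

# Minimal sketch: unsupervised feature learning with a one-hidden-layer
# autoencoder (illustrative only; synthetic data, arbitrary hyperparameters).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))                    # unlabeled data: 256 samples, 20 features
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize

n_in, n_hidden = X.shape[1], 5                    # compress 20 raw features into 5 learned ones
W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
lr = 0.01

for epoch in range(200):
    H = np.tanh(X @ W_enc)                        # learned representation (encoding)
    X_hat = H @ W_dec                             # reconstruction of the input
    err = X_hat - X                               # reconstruction error drives learning
    grad_dec = H.T @ err / len(X)                 # gradient of mean squared error w.r.t. W_dec
    grad_enc = X.T @ ((err @ W_dec.T) * (1 - H**2)) / len(X)   # backpropagation through tanh
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

features = np.tanh(X @ W_enc)                     # high-level representation, learned without labels
print("reconstruction MSE:", float(np.mean((features @ W_dec - X) ** 2)))

The hidden activations (features) could then serve as input to a downstream analytics or classification model.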

CONFERENCE AREAS

Each of these topic areas is expanded below, but the sub-topic lists are not exhaustive. Papers may address one or more of the listed sub-topics, although authors should not feel limited by them. Unlisted but related sub-topics are also acceptable, provided they fit in one of the following main topic areas:

1. MODELS AND ALGORITHMS
2. MACHINE LEARNING
3. BIG DATA ANALYTICS
4. COMPUTER VISION APPLICATIONS
5. NATURAL LANGUAGE UNDERSTANDING


AREA 1: MODELS AND ALGORITHMS


  • Recurrent Neural Networks (RNN)
  • Evolutionary Methods
  • Convolutional Neural Networks (CNN)
  • Deep Hierarchical Networks (DHN)
  • Dimensionality Reduction
  • Unsupervised Feature Learning
  • Generative Adversarial Networks (GAN)
  • Autoencoders

AREA 2: MACHINE LEARNING


  • Active Learning
  • Meta-Learning and Deep Networks
  • Deep Metric Learning Methods
  • Deep Reinforcement Learning
  • Learning Deep Generative Models
  • Deep Kernel Learning
  • Graph Representation Learning
  • Gaussian Processes for Machine Learning
  • Clustering, Classification and Regression
  • Classification Explainability

AREA 3: BIG DATA ANALYTICS


  • Extracting Complex Patterns
  • IoT and Smart Devices
  • Security Threat Detection
  • Semantic Indexing
  • Fast Information Retrieval
  • Scalability of Models
  • Data Integration and Fusion
  • High-Dimensional Data
  • Streaming Data

AREA 4: COMPUTER VISION APPLICATIONS


  • Image Classification
  • Object Detection
  • Face Recognition
  • Human Pose Estimation
  • Image Retrieval
  • Semantic Segmentation

AREA 5: NATURAL LANGUAGE UNDERSTANDING


  • Sentiment Analysis
  • Question Answering Applications
  • Language Translation
  • Document Summarization
  • Content Filtering on Social Networks
  • Recommender Systems

KEYNOTE SPEAKERS

Ioannis Pitas, Aristotle University of Thessaloniki, Greece
Michal Irani, Weizmann Institute of Science, Israel
João Freitas, PagerDuty, Portugal

PAPER SUBMISSION

Authors can submit their work in the form of a complete paper or an abstract, but please note that accepted abstracts are presented at the conference but not published in the proceedings. Complete papers can be submitted as a Regular Paper, representing completed and validated research, or as a Position Paper, a short report of work in progress or an arguable opinion about an issue, discussing ideas, facts, situations, methods, procedures or results of scientific research focused on one of the conference topic areas.

Authors should submit a paper in English, carefully checked for correct grammar and spelling, addressing one or several of the conference areas or topics. Each paper should clearly indicate the nature of its technical/scientific contribution and the problems, domains or environments to which it is applicable. To facilitate the double-blind paper evaluation, authors are kindly requested to prepare and submit the paper WITHOUT any reference to any of the authors, removing the authors’ personal details, the acknowledgments section and any other reference that may disclose the authors’ identity.

When submitting a complete paper, please note that only original papers should be submitted. Authors are advised to read INSTICC's ethical norms regarding plagiarism and self-plagiarism thoroughly before submitting and must make sure that their submissions do not substantially overlap with work that has been published elsewhere or simultaneously submitted to a journal or another conference with proceedings. Papers that contain any form of plagiarism will be rejected without review.

All papers must be submitted through the online submission platform PRIMORIS and should follow the instructions and templates that can be found under Guidelines and Templates. After the paper submission has been successfully completed, authors will receive an automatic confirmation e-mail.

PUBLICATIONS

All accepted complete papers will be published in the conference proceedings, under an ISBN reference, in both printed and digital form.
SCITEPRESS is a member of CrossRef (http://www.crossref.org/), and every paper in our digital library is given a DOI (Digital Object Identifier).
The proceedings will be submitted for indexing by SCOPUS, Google Scholar, DBLP, Semantic Scholar, EI and Web of Science / Conference Proceedings Citation Index.

SECRETARIAT

DeLTA Secretariat
Address: Avenida de S. Francisco Xavier, Lote 7 Cv. C
             2900-616 Setúbal - Portugal
Tel.: +351 265 520 185
Fax: +351 265 520 186
e-mail: delta.secretariat@insticc.org
Web: https://delta.scitevents.org

VENUE

Our conference will take place at the Lisbon Marriott Hotel, a 4-star venue considered Portugal’s leading conference hotel.

CONFERENCE CO-CHAIRS

Oleg Gusikhin, Ford Motor Company, United States
Kurosh Madani, University of Paris-EST Créteil (UPEC), France

PROGRAM CO-CHAIRS

Ana Fred, Instituto de Telecomunicações and Instituto Superior Técnico (University of Lisbon), Portugal
Carlo Sansone, University of Naples Federico II, Italy

PROGRAM COMMITTEE MEMBERS

Enrico Blanzieri, University of Trento, Italy
Marco Buzzelli, University of Milano - Bicocca, Italy
Claudio Cusano, University of Pavia, Italy
Shyam Diwakar, Amrita University, India
Ke-Lin Du, Concordia University Montréal, Canada
Gilles B. Guillot, CSL Behring / Swiss Institute for Translational and Entrepreneurial Medicine, Switzerland
Chih-Chin Lai, National University of Kaohsiung, Taiwan, Republic of China
Chang-Hsing Lee, Ming Chi University of Technology, Taiwan, Republic of China
Marco Leo, National Research Council of Italy, Italy
Yung-Hui Li, National Central University, Taiwan, Republic of China
Xingyu Li, University of Alberta, Canada
Fuhai Li, Washington University in St. Louis, United States
Huaqing Li, Southwest University, China
Perry D. Moerland, Amsterdam UMC, University of Amsterdam, Netherlands
Tomoyuki Naito, Osaka University, Japan
Le-Minh Nguyen, Japan Advanced Institute of Science and Technology, Japan
Juan J. Pantrigo, Universidad Rey Juan Carlos, Spain
Oksana Pomorova, University of Lodz, Poland
Mircea-Bogdan Radac, Politehnica University of Timisoara, Romania
Sivaramakrishnan Rajaraman, National Library of Medicine, United States
Jitae Shin, Sungkyunkwan University, Korea, Republic of
Sunghwan Sohn, Mayo Clinic, United States
Minghe Sun, University of Texas at San Antonio, United States
Ryszard Tadeusiewicz, AGH University of Science and Technology, Poland
Jayaraman Valadi, Shiv Nadar University, India
Aalt van Dijk, Wageningen University and Research Centre, Netherlands
Theodore Willke, Intel Corporation, United States
Jianhua Xuan, Virginia Tech, United States
Seokwon Yeom, Daegu University, Korea, Republic of
Yizhou Yu, The University of Hong Kong, Hong Kong

(list not yet complete)
