
Propositionalization and embeddings: two sides of the same coin

Data preprocessing is an important component of machine learning pipelines, which requires ample time and resources. An integral part of preprocessing is data transformation into the format required by a given learning algorithm. This paper outlines some of the modern data processing techniques used...

Full description

Bibliographic Details
Main Authors: Lavrač, Nada, Škrlj, Blaž, Robnik-Šikonja, Marko
Format: Online Article Text
Language: English
Published: Springer US 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7366599/
https://www.ncbi.nlm.nih.gov/pubmed/32704202
http://dx.doi.org/10.1007/s10994-020-05890-8
_version_ 1783560252383297536
author Lavrač, Nada
Škrlj, Blaž
Robnik-Šikonja, Marko
author_facet Lavrač, Nada
Škrlj, Blaž
Robnik-Šikonja, Marko
author_sort Lavrač, Nada
collection PubMed
description Data preprocessing is an important component of machine learning pipelines, which requires ample time and resources. An integral part of preprocessing is data transformation into the format required by a given learning algorithm. This paper outlines some of the modern data processing techniques used in relational learning that enable data fusion from different input data types and formats into a single table data representation, focusing on the propositionalization and embedding data transformation approaches. While both approaches aim at transforming data into tabular data format, they use different terminology and task definitions, are perceived to address different goals, and are used in different contexts. This paper contributes a unifying framework that allows for improved understanding of these two data transformation techniques by presenting their unified definitions, and by explaining the similarities and differences between the two approaches as variants of a unified complex data transformation task. In addition to the unifying framework, the novelty of this paper is a unifying methodology combining propositionalization and embeddings, which benefits from the advantages of both in solving complex data transformation and learning tasks. We present two efficient implementations of the unifying methodology: an instance-based PropDRM approach, and a feature-based PropStar approach to data transformation and learning, together with their empirical evaluation on several relational problems. The results show that the new algorithms can outperform existing relational learners and can solve much larger problems.
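The description above characterizes propositionalization as a transformation of relational (multi-table) data into a single tabular representation that standard learners can consume. As a loose illustration of that general idea only, not of the paper's PropDRM or PropStar algorithms, the following Python/pandas sketch flattens a small, invented two-table customer/purchase dataset into one feature table; every table, column, and feature name here is hypothetical.

    # Illustrative propositionalization sketch (hypothetical data, not the
    # paper's method): flatten a one-to-many relational structure into a
    # single table of boolean indicators and aggregates, one row per entity.
    import pandas as pd

    customers = pd.DataFrame({"customer_id": [1, 2, 3],
                              "churned":     [0, 1, 0]})
    purchases = pd.DataFrame({"customer_id": [1, 1, 2, 3, 3, 3],
                              "category":    ["book", "toy", "book", "toy", "toy", "food"],
                              "amount":      [12.0, 30.0, 8.0, 25.0, 40.0, 5.0]})

    # Boolean features: did the customer buy anything in each category?
    indicators = (purchases.assign(flag=1)
                  .pivot_table(index="customer_id", columns="category",
                               values="flag", aggfunc="max", fill_value=0)
                  .add_prefix("bought_")
                  .reset_index())

    # Aggregate features summarizing each customer's related purchase rows.
    aggregates = (purchases.groupby("customer_id")["amount"]
                  .agg(n_purchases="count", total_spent="sum")
                  .reset_index())

    # Single-table representation, usable by any standard propositional learner.
    table = (customers.merge(indicators, on="customer_id")
                      .merge(aggregates, on="customer_id"))
    print(table)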
format Online
Article
Text
id pubmed-7366599
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-73665992020-07-21 Propositionalization and embeddings: two sides of the same coin Lavrač, Nada Škrlj, Blaž Robnik-Šikonja, Marko Mach Learn Article Data preprocessing is an important component of machine learning pipelines, which requires ample time and resources. An integral part of preprocessing is data transformation into the format required by a given learning algorithm. This paper outlines some of the modern data processing techniques used in relational learning that enable data fusion from different input data types and formats into a single table data representation, focusing on the propositionalization and embedding data transformation approaches. While both approaches aim at transforming data into tabular data format, they use different terminology and task definitions, are perceived to address different goals, and are used in different contexts. This paper contributes a unifying framework that allows for improved understanding of these two data transformation techniques by presenting their unified definitions, and by explaining the similarities and differences between the two approaches as variants of a unified complex data transformation task. In addition to the unifying framework, the novelty of this paper is a unifying methodology combining propositionalization and embeddings, which benefits from the advantages of both in solving complex data transformation and learning tasks. We present two efficient implementations of the unifying methodology: an instance-based PropDRM approach, and a feature-based PropStar approach to data transformation and learning, together with their empirical evaluation on several relational problems. The results show that the new algorithms can outperform existing relational learners and can solve much larger problems. Springer US 2020-06-28 2020 /pmc/articles/PMC7366599/ /pubmed/32704202 http://dx.doi.org/10.1007/s10994-020-05890-8 Text en © The Author(s) 2020 Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Lavrač, Nada
Škrlj, Blaž
Robnik-Šikonja, Marko
Propositionalization and embeddings: two sides of the same coin
title Propositionalization and embeddings: two sides of the same coin
title_full Propositionalization and embeddings: two sides of the same coin
title_fullStr Propositionalization and embeddings: two sides of the same coin
title_full_unstemmed Propositionalization and embeddings: two sides of the same coin
title_short Propositionalization and embeddings: two sides of the same coin
title_sort propositionalization and embeddings: two sides of the same coin
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7366599/
https://www.ncbi.nlm.nih.gov/pubmed/32704202
http://dx.doi.org/10.1007/s10994-020-05890-8
work_keys_str_mv AT lavracnada propositionalizationandembeddingstwosidesofthesamecoin
AT skrljblaz propositionalizationandembeddingstwosidesofthesamecoin
AT robniksikonjamarko propositionalizationandembeddingstwosidesofthesamecoin