Universal activation function for machine learning

Bibliographic Details
Main Authors: Yuen, Brosnan; Hoang, Minh Tu; Dong, Xiaodai; Lu, Tao
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8455573/
https://www.ncbi.nlm.nih.gov/pubmed/34548504
http://dx.doi.org/10.1038/s41598-021-96723-8
Description
Summary: This article proposes a universal activation function (UAF) that achieves near-optimal performance in quantification, classification, and reinforcement learning (RL) problems. For any given problem, gradient descent algorithms are able to evolve the UAF into a suitable activation function by tuning the UAF's parameters. For CIFAR-10 classification using the VGG-8 neural network, the UAF converges to a Mish-like activation function, which has near-optimal performance [Formula: see text] when compared to other activation functions. In the graph convolutional neural network on the CORA dataset, the UAF evolves to the identity function and obtains [Formula: see text]. For the quantification of simulated 9-gas mixtures in 30 dB signal-to-noise ratio (SNR) environments, the UAF converges to the identity function, which has a near-optimal root mean square error of [Formula: see text]. In the ZINC molecular solubility quantification using graph neural networks, the UAF morphs into a LeakyReLU/Sigmoid hybrid and achieves RMSE = [Formula: see text]. For the BipedalWalker-v2 RL environment, the UAF reaches a reward of 250 in [Formula: see text] epochs with a novel activation function, giving the fastest convergence rate among the activation functions compared.
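The core idea above — a single parametric function whose shape is tuned by gradient descent alongside the network's weights — can be illustrated with a minimal sketch. Note this is NOT the paper's actual UAF formula (which is given in the article itself); the parametric form below, a trainable LeakyReLU/Sigmoid blend with hypothetical parameters alpha and beta, is only an assumption chosen to show how one function can morph toward different activation shapes during training.

```python
import math

# Hypothetical parametric activation: a LeakyReLU/Sigmoid blend.
# alpha and beta are trainable, so gradient descent can morph the
# function toward ReLU-like, sigmoid-like, or hybrid shapes.
def act(x, alpha, beta):
    return alpha * max(x, 0.01 * x) + beta / (1.0 + math.exp(-x))

# Toy stand-in for training inside a network: tune (alpha, beta) by
# gradient descent so act() approximates ReLU on a few sample points.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
targets = [max(x, 0.0) for x in xs]

alpha, beta, lr = 0.5, 0.5, 0.05
for _ in range(2000):
    grad_a = grad_b = 0.0
    for x, t in zip(xs, targets):
        err = act(x, alpha, beta) - t
        grad_a += 2 * err * max(x, 0.01 * x)        # d(loss)/d(alpha)
        grad_b += 2 * err / (1.0 + math.exp(-x))    # d(loss)/d(beta)
    alpha -= lr * grad_a / len(xs)
    beta -= lr * grad_b / len(xs)

# After training, the blend has drifted toward pure LeakyReLU:
# alpha near 1, beta near 0.
print(round(alpha, 2), round(beta, 2))
```

In the paper's setting the analogous parameters are updated by the same backpropagation pass that trains the network weights, which is how the UAF settles on different shapes (identity, Mish-like, LeakyReLU/Sigmoid hybrid) for different tasks.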