Dianhui Wang

Researcher at La Trobe University

Publications -  214
Citations -  6390

Dianhui Wang is an academic researcher from La Trobe University. The author has contributed to research in topics: Artificial neural network & Computer science. The author has an h-index of 31 and has co-authored 198 publications receiving 5,150 citations. Previous affiliations of Dianhui Wang include Nanyang Technological University & Northeastern University.

Papers
Journal ArticleDOI

Extreme learning machines: a survey

TL;DR: A survey on the extreme learning machine (ELM) and its variants, especially on (1) the batch learning mode of ELM, (2) fully complex ELM, (3) online sequential ELM, (4) incremental ELM, and (5) ensembles of ELM.
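
A minimal sketch of the batch ELM idea the survey covers, assuming a single hidden layer with sigmoid activation; all function and parameter names here are illustrative, not the survey's notation:

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, rng=None):
    """Batch ELM: hidden weights are random and fixed; only output weights are solved."""
    rng = rng or np.random.default_rng(0)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                    # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                              # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```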
Journal ArticleDOI

Stochastic Configuration Networks: Fundamentals and Algorithms

TL;DR: In this paper, the authors propose Stochastic Configuration Networks (SCNs), which randomly assign the input weights and biases of hidden nodes in light of a supervisory mechanism, with the output weights analytically evaluated in either a constructive or selective manner.
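
A hedged sketch of an SCN-style constructive loop, simplified from the description above: candidate hidden nodes are drawn at random and accepted only if they pass an inequality check tying the candidate to the current residual, and the output weights are re-solved by least squares after each addition. Parameter names (lambda_scope, r, max_nodes) are illustrative, not the authors' API.

```python
import numpy as np

def scn_fit(X, y, max_nodes=50, lambda_scope=1.0, r=0.99,
            n_candidates=100, rng=None):
    rng = rng or np.random.default_rng(0)
    H_list, beta = [], None
    e = y.astype(float).copy()                     # residual starts at the target
    for _ in range(max_nodes):
        best, best_score = None, -np.inf
        for _ in range(n_candidates):
            w = rng.uniform(-lambda_scope, lambda_scope, size=X.shape[1])
            b = rng.uniform(-lambda_scope, lambda_scope)
            h = np.tanh(X @ w + b)
            # Simplified supervisory inequality:
            # <e, h>^2 / ||h||^2 >= (1 - r) * ||e||^2
            score = (e @ h) ** 2 / (h @ h) - (1 - r) * (e @ e)
            if score > best_score:
                best, best_score = h, score
        if best_score < 0:
            break                                  # no admissible candidate found
        H_list.append(best)
        H = np.column_stack(H_list)
        beta = np.linalg.lstsq(H, y, rcond=None)[0]  # re-evaluate output weights
        e = y - H @ beta                             # update residual
    return beta, H_list
```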
Journal ArticleDOI

Randomness in neural networks: an overview

TL;DR: An overview of the different ways in which randomization can be applied to the design of neural networks and kernel functions, provided to clarify innovative lines of research, identify open problems, and foster the exchange of well-known results across different communities.
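
One concrete instance of randomization applied to kernel functions, of the kind such an overview covers, is random Fourier features approximating an RBF kernel. This is a hedged illustration under assumed notation, not the paper's own example:

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, rng=None):
    """Map X so that phi(x) @ phi(z) approximates exp(-gamma * ||x - z||^2)."""
    rng = rng or np.random.default_rng(0)
    W = rng.normal(0.0, np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```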
Journal ArticleDOI

Fast decorrelated neural network ensembles with random weights

TL;DR: This paper employs random vector functional link (RVFL) networks as base components and incorporates the negative correlation learning (NCL) strategy for building neural network ensembles; results indicate that this approach outperforms other ensembling techniques on the testing datasets in terms of both effectiveness and efficiency.
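
A hedged sketch of an RVFL base learner (random hidden features plus direct input-to-output links, output weights solved by ridge-regularized least squares) and a simple averaged ensemble. The paper's method additionally couples the members through the NCL penalty in the joint solution; that coupling is omitted here, and all names are illustrative.

```python
import numpy as np

def rvfl_features(X, W, b):
    # Direct input links concatenated with random nonlinear features.
    return np.hstack([X, np.tanh(X @ W + b)])

def rvfl_fit(X, y, n_hidden=50, reg=1e-3, rng=None):
    rng = rng or np.random.default_rng()
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    D = rvfl_features(X, W, b)
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def ensemble_predict(X, members):
    preds = [rvfl_features(X, W, b) @ beta for (W, b, beta) in members]
    return np.mean(preds, axis=0)                  # simple average of base outputs

# Usage: members = [rvfl_fit(X, y, rng=np.random.default_rng(s)) for s in range(10)]
```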
Journal ArticleDOI

Insights into randomized algorithms for neural networks: Practical issues and common pitfalls

TL;DR: A theoretical result is established on the infeasibility of RVFL networks for universal approximation if an RVFL network is built incrementally with random selection of the input weights and biases from a fixed scope and constructive evaluation of its output weights.
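
A hedged sketch of the construction that result concerns: a network grown one node at a time, with input weights and biases drawn from a fixed scope and each new output weight evaluated constructively against the current residual. The cited result says this scheme does not guarantee universal approximation; the scope and target below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(20 * X[:, 0])                           # an illustrative target function
residual = y.copy()
for k in range(500):
    w = rng.uniform(-1, 1)                          # fixed scope [-1, 1]
    b = rng.uniform(-1, 1)
    h = np.tanh(w * X[:, 0] + b)
    beta_k = (residual @ h) / (h @ h)               # constructive output weight
    residual = residual - beta_k * h
print("residual norm after 500 nodes:", np.linalg.norm(residual))
```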