
Zdenek Vasicek

Researcher at Brno University of Technology

Publications: 102
Citations: 1936

Zdenek Vasicek is an academic researcher at Brno University of Technology. His research focuses on the topics of genetic programming and evolutionary algorithms. He has an h-index of 20 and has co-authored 96 publications receiving 1315 citations. Previous affiliations of Zdenek Vasicek include Vienna University of Technology.

Papers
Proceedings Article

EvoApprox8b: Library of approximate adders and multipliers for circuit design and benchmarking of approximation methods

TL;DR: The EvoApprox8b library provides Verilog, Matlab and C models of all its approximate circuits, and each circuit's error is reported under seven different error metrics.
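To illustrate how such circuit models are typically exercised, here is a minimal sketch (not the library's actual API) that exhaustively characterizes a toy 8-bit lower-part-OR adder and reports a few commonly used error metrics; the function `loa_add8` and the metric selection are illustrative assumptions.

```python
# Sketch: exhaustively characterizing a toy 8-bit approximate adder.
# The adder and the metrics shown are illustrative, not the EvoApprox8b API.

def loa_add8(a: int, b: int, k: int = 3) -> int:
    """Lower-part-OR adder: the k low bits are OR-ed (no carry),
    the remaining high bits are added exactly."""
    mask = (1 << k) - 1
    low = (a | b) & mask                      # approximate low part
    high = ((a >> k) + (b >> k)) << k         # exact high part
    return (high | low) & 0x1FF               # 9-bit sum

errors = [abs(loa_add8(a, b) - (a + b))
          for a in range(256) for b in range(256)]

n = len(errors)
print("mean absolute error :", sum(errors) / n)
print("worst-case error    :", max(errors))
print("error probability   :", sum(e > 0 for e in errors) / n)
```

Because 8-bit operands admit only 65,536 input pairs, exhaustive evaluation of this kind is cheap, which is what makes precise error metrics practical for libraries of small arithmetic circuits.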
Proceedings Article

Design of power-efficient approximate multipliers for approximate artificial neural networks

TL;DR: The paper showed that the backpropagation learning algorithm can adapt to NNs containing approximate multipliers, and presented a methodology for designing well-optimized, power-efficient NNs with a uniform structure suitable for hardware implementation.
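A minimal sketch of the idea that gradient descent can adapt to an approximate multiplier in the forward path: the multiplier below is a stand-in (a truncated fixed-point product), not one of the paper's designs, and the gradient uses the exact product as a surrogate.

```python
# Sketch: backpropagation adapting to an approximate multiplier.
# The truncated fixed-point product is an assumed stand-in AM.
import numpy as np

rng = np.random.default_rng(0)

def approx_mul(w, x, frac_bits=8, drop=4):
    """Fixed-point product with the lowest `drop` bits truncated."""
    scale = 1 << frac_bits
    p = (np.round(w * scale).astype(np.int64)
         * np.round(x * scale).astype(np.int64))
    p = (p >> drop) << drop                   # truncate low product bits
    return p / (scale * scale)

# Fit y = 0.7*x with SGD; the forward pass uses the approximate product.
X = rng.uniform(-1.0, 1.0, 256)
Y = 0.7 * X
w = 0.0
for _ in range(200):
    y_hat = approx_mul(w, X)
    grad = np.mean(2.0 * (y_hat - Y) * X)     # exact-product surrogate gradient
    w -= 0.1 * grad
print("learned weight:", w)                   # close to 0.7 despite truncation
```

The learned weight lands near the true value because the loss is still computed through the approximate forward pass, so training absorbs part of the multiplier's systematic error.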
Journal Article

Evolutionary Approach to Approximate Digital Circuits Design

TL;DR: A heuristic seeding mechanism is introduced into CGP that not only improves the quality of the evolved circuits but also reduces the evolution time; the efficiency of the proposed method is evaluated.
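For orientation, a heavily simplified sketch of a seeded (1+λ) CGP loop: the parent is initialized with a conventional 1-bit full adder rather than a random genotype. The encoding, the target circuit, and the error-only fitness are illustrative assumptions; the paper's method also trades error against circuit cost, which is omitted here.

```python
# Sketch: (1+4) CGP with heuristic seeding. Encoding: each node is
# (function, input1, input2); node i may reference any input or earlier node.
import random

FUNCS = [lambda a, b: a & b, lambda a, b: a | b,
         lambda a, b: a ^ b, lambda a, b: ~a & 1]
N_IN = 3                                       # inputs: a, b, cin

def evaluate(genome, outputs, inputs):
    vals = list(inputs)
    for f, x, y in genome:
        vals.append(FUNCS[f](vals[x], vals[y]))
    return [vals[o] for o in outputs]

def fitness(genome, outputs):
    """Hamming distance to a 1-bit full adder over all 8 input vectors."""
    err = 0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                s, cout = evaluate(genome, outputs, (a, b, c))
                err += (s != a ^ b ^ c) + (cout != (a & b) | (c & (a ^ b)))
    return err

def mutate(genome, outputs, rate=0.15):
    g = [list(n) for n in genome]
    for i, node in enumerate(g):
        if random.random() < rate:
            node[0] = random.randrange(len(FUNCS))
            node[1] = random.randrange(N_IN + i)   # feed-forward only
            node[2] = random.randrange(N_IN + i)
    outs = [random.randrange(N_IN + len(g)) if random.random() < rate else o
            for o in outputs]
    return [tuple(n) for n in g], outs

# Heuristic seed: a conventional full adder (XOR/AND/OR), not a random genotype.
seed = [(2, 0, 1), (2, 3, 2), (0, 0, 1), (0, 3, 2), (1, 5, 6),
        (0, 0, 0), (0, 0, 0), (0, 0, 0)]
parent, p_out = seed, [4, 7]                   # outputs: sum, carry-out
best = fitness(parent, p_out)
for _ in range(100):
    for _ in range(4):                         # lambda = 4 offspring
        child, c_out = mutate(parent, p_out)
        f = fitness(child, c_out)
        if f <= best:                          # accept neutral mutations
            parent, p_out, best = child, c_out, f
print("Hamming error of best individual:", best)
```

Starting from a known-correct circuit means the search begins at zero functional error and can spend its budget exploring cheaper variants, which is the intuition behind seeding.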
Journal Article

Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers

TL;DR: This article replaces the exact multipliers in two representative NNs with approximate designs to evaluate their effect on classification accuracy, and shows that using approximate multipliers (AMs) can even improve NN accuracy by introducing noise.
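A toy sketch of the evaluation setup: the exact products in a quantized linear classifier are swapped for an approximate multiplier (here an assumed placeholder that truncates low product bits) and the accuracy impact is measured.

```python
# Sketch: measuring the accuracy impact of an approximate multiplier.
# The truncating AM is a placeholder, not one of the paper's designs.
import numpy as np

rng = np.random.default_rng(1)

def trunc_mul(a, b, drop=5):
    """Integer product with `drop` low bits truncated (a toy AM)."""
    p = a.astype(np.int64) * b.astype(np.int64)
    return (p >> drop) << drop

X = rng.integers(0, 128, size=(1000, 16))     # 8-bit unsigned activations
w = rng.integers(-64, 64, size=16)            # signed 8-bit weights
y = (X @ w > 0).astype(int)                   # labels from the exact model

pred_exact  = (X @ w > 0).astype(int)
pred_approx = (trunc_mul(X, w).sum(axis=1) > 0).astype(int)
print("exact accuracy :", (pred_exact == y).mean())   # 1.0 by construction
print("approx accuracy:", (pred_approx == y).mean())  # slightly degraded
```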
Proceedings Article

ALWANN: Automatic Layer-Wise Approximation of Deep Neural Network Accelerators without Retraining

TL;DR: It is demonstrated that efficient approximations can be introduced into the computational path of DNN accelerators while retraining is avoided entirely, and a simple weight-updating scheme is proposed that compensates for the inaccuracy introduced by the approximate multipliers.
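The weight-updating idea can be sketched as follows: each quantized weight is remapped to the nearby value whose approximate products best match the exact ones over the expected activation range, so no retraining is needed. The truncating multiplier and the search radius below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: ALWANN-style weight tuning against a given approximate multiplier.
# The AM here is an assumed stand-in (truncated product).
import numpy as np

acts = np.arange(256, dtype=np.int64)         # 8-bit activation range

def am(w, a, drop=5):
    """Toy approximate multiplier: product with low bits truncated."""
    p = int(w) * a
    return (p >> drop) << drop

def tune(w, radius=4):
    """Pick w' near w minimizing mean |am(w', a) - w*a| over activations."""
    candidates = range(max(-128, w - radius), min(127, w + radius) + 1)
    return min(candidates,
               key=lambda c: np.abs(am(c, acts) - w * acts).mean())

for w in (17, 33, 100):
    print(f"w={w:4d} -> tuned w'={tune(w)}")
```

With this uniform truncating AM the tuned weight often coincides with the original; the remapping becomes effective for multipliers whose error profile depends strongly on the weight operand, which is the case targeted by layer-wise approximation.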