Nikos Vlassis
Researcher at Netflix
Publications - 149
Citations - 10415
Nikos Vlassis is an academic researcher at Netflix. He has contributed to research topics including the Expectation–maximization algorithm and Markov decision processes, has an h-index of 45, and has co-authored 146 publications receiving 8,839 citations. Previous affiliations of Nikos Vlassis include the University of Amsterdam and the University of Luxembourg.
Papers
Journal Article (DOI)
The global k-means clustering algorithm
TL;DR: The global k-means algorithm is presented: an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure consisting of N executions of the k-means algorithm from suitable initial positions.
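The incremental procedure summarized above can be sketched in Python (a minimal one-dimensional illustration with hypothetical function names; a practical implementation would handle multidimensional data and use the paper's acceleration techniques):

```python
def kmeans(points, centers, iters=20):
    """Standard k-means (Lloyd's algorithm) run from given initial centers."""
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            clusters[j].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    error = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, error

def global_kmeans(points, k):
    """Add one center at a time: try every data point as the initial
    position of the new center (N executions of k-means) and keep the
    solution with the lowest clustering error."""
    centers = [sum(points) / len(points)]  # the optimal 1-means solution
    for _ in range(2, k + 1):
        best = None
        for p in points:  # N candidate initial positions
            cand, err = kmeans(points, centers + [p])
            if best is None or err < best[1]:
                best = (cand, err)
        centers = best[0]
    return sorted(centers)
```

For example, `global_kmeans([0.0, 1.0, 10.0, 11.0], 2)` recovers centers at 0.5 and 10.5 regardless of which candidate starts the search, since every initialization is tried deterministically.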
Journal Article (DOI)
Perseus: randomized point-based value iteration for POMDPs
TL;DR: This work presents a randomized point-based value iteration algorithm called PERSEUS, which backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set.
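The backup-selection loop of PERSEUS can be illustrated with a structural sketch (hypothetical names; the `backup` argument here is a stand-in operator, where a real implementation would compute the point-based Bellman backup from the POMDP model):

```python
import random

def value(b, vectors):
    """Value of belief b under a set of alpha vectors: max over dot products."""
    return max(sum(ai * bi for ai, bi in zip(a, b)) for a in vectors)

def perseus_stage(beliefs, V, backup, rng=random):
    """One PERSEUS value-update stage: back up randomly selected belief
    points until the value of every point in the set has improved."""
    V_new, todo = [], list(beliefs)
    while todo:
        b = rng.choice(todo)
        alpha = backup(b, V)                     # point-based backup at b
        if value(b, [alpha]) >= value(b, V):
            V_new.append(alpha)                  # backup improved V(b): keep it
        else:
            # otherwise keep the old vector that was maximal at b,
            # so the value at b never decreases
            V_new.append(max(V, key=lambda a: value(b, [a])))
        # drop every belief whose value is already matched or improved
        todo = [x for x in todo if value(x, V_new) < value(x, V)]
    return V_new
```

The point of the randomized scheme is visible in the loop: a single backed-up vector often improves the value of many belief points at once, so `V_new` typically contains far fewer vectors than there are beliefs.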
Journal Article (DOI)
VizBin - an application for reference-independent visualization and human-augmented binning of metagenomic data.
Cedric Christian Laczny, Tomasz Sternal, Valentin Plugaru, Piotr Gawron, Arash Atashpendar, Houry Hera Margossian, Sergio Coronado, Laurens van der Maaten, Nikos Vlassis, Paul Wilmes +9 more
TL;DR: VizBin can be applied de novo for the visualization and subsequent binning of metagenomic datasets from single samples, and it can be used for the post hoc inspection and refinement of automatically generated bins.
Journal Article (DOI)
Efficient greedy learning of Gaussian mixture models
TL;DR: A heuristic is proposed for searching for the optimal component to insert during greedy learning of Gaussian mixtures; it can be particularly useful when the optimal number of mixture components is unknown.
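The greedy insertion idea can be sketched for a one-dimensional mixture (hypothetical names; a simplified candidate search that tries each data point as the new component's mean, where the paper proposes a cheaper heuristic over a reduced candidate set):

```python
import math

def gauss(x, m, s):
    """1-D Gaussian density."""
    return math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def em(data, mus, sigmas, weights, iters=25):
    """Standard EM refinement of a 1-D Gaussian mixture."""
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w * gauss(x, m, s) for m, s, w in zip(mus, sigmas, weights)]
            tot = sum(p) or 1.0
            resp.append([pi / tot for pi in p])
        # M-step: re-estimate means, variances, and mixing weights.
        for j in range(len(mus)):
            nj = sum(r[j] for r in resp) or 1e-12
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)  # floor to avoid collapse
            weights[j] = nj / len(data)
    return mus, sigmas, weights

def loglik(data, mus, sigmas, weights):
    return sum(math.log(sum(w * gauss(x, m, s)
                            for m, s, w in zip(mus, sigmas, weights)))
               for x in data)

def greedy_gmm(data, k):
    """Grow the mixture one component at a time: try candidate insertion
    locations, refine each with EM, and keep the best by log-likelihood."""
    mus = [sum(data) / len(data)]
    sigmas = [max((max(data) - min(data)) / 4.0, 1e-3)]
    weights = [1.0]
    for m in range(2, k + 1):
        best = None
        for x in data:  # candidate means for the inserted component
            cand = em(list(data),
                      mus + [x],
                      sigmas + [sigmas[0]],
                      [w * (m - 1) / m for w in weights] + [1.0 / m])
            ll = loglik(data, *cand)
            if best is None or ll > best[0]:
                best = (ll, cand)
        mus, sigmas, weights = best[1]
    return mus, sigmas, weights
```

Starting from the one-component solution (the sample mean) and growing greedily avoids the sensitivity to initialization that plagues plain EM started with k random components.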
Journal Article (DOI)
Collaborative Multiagent Reinforcement Learning by Payoff Propagation
Jelle R. Kok, Nikos Vlassis +1 more
TL;DR: A set of scalable techniques for learning the behavior of a group of agents in a collaborative multiagent setting, using the coordination-graph framework of Guestrin, Koller, and Parr (2002a). It introduces different model-free reinforcement-learning techniques, collectively called Sparse Cooperative Q-learning, which approximate the global action-value function based on the topology of a coordination graph.
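The decomposition can be illustrated with a small single-state sketch (hypothetical names; the toy uses brute-force maximization over joint actions, where the paper uses variable elimination or max-plus message passing on the coordination graph):

```python
import random
from itertools import product

def sparse_coop_q(edges, n_agents, n_actions, reward,
                  episodes=500, alpha=0.2, eps=0.1, rng=random):
    """Toy single-state Sparse Cooperative Q-learning: the global Q-value
    is a sum of local Q-functions, one per coordination-graph edge, and
    each local term absorbs an equal share of the global TD error."""
    Q = {e: {a: 0.0 for a in product(range(n_actions), repeat=2)}
         for e in edges}

    def global_q(a):
        return sum(Q[(i, j)][(a[i], a[j])] for (i, j) in edges)

    def best_joint():
        # Brute force here; variable elimination scales to large graphs.
        return max(product(range(n_actions), repeat=n_agents), key=global_q)

    for _ in range(episodes):
        # eps-greedy joint action selection
        if rng.random() < eps:
            a = tuple(rng.randrange(n_actions) for _ in range(n_agents))
        else:
            a = best_joint()
        td = reward(a) - global_q(a)  # single state: no next-state term
        for (i, j) in edges:          # distribute the TD error over edges
            Q[(i, j)][(a[i], a[j])] += alpha * td / len(edges)
    return best_joint()
```

For instance, with two agents sharing one edge and a reward of 1 only when their actions match, the learned edge Q-function drives the agents to coordinate on a matching joint action.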