Institution
Carnegie Mellon University
Education • Pittsburgh, Pennsylvania, United States
About: Carnegie Mellon University is an education organization based in Pittsburgh, Pennsylvania, United States. It is known for research contributions in the topics of Computer science & Robot. The organization has 36317 authors who have published 104359 publications receiving 5975734 citations. The organization is also known as CMU and Carnegie Mellon.
Papers published on a yearly basis
Papers
TL;DR: In this paper, the problem of estimating the graph associated with a binary Ising Markov random field is considered, where the neighborhood of any given node is estimated by performing logistic regression subject to an l1-constraint.
Abstract: We consider the problem of estimating the graph associated with a binary Ising Markov random field. We describe a method based on l1-regularized logistic regression, in which the neighborhood of any given node is estimated by performing logistic regression subject to an l1-constraint. The method is analyzed under high-dimensional scaling in which both the number of nodes p and maximum neighborhood size d are allowed to grow as a function of the number of observations n. Our main results provide sufficient conditions on the triple (n, p, d) and the model parameters for the method to succeed in consistently estimating the neighborhood of every node in the graph simultaneously. With coherence conditions imposed on the population Fisher information matrix, we prove that consistent neighborhood selection can be obtained for sample sizes n=Ω(d3log p) with exponentially decaying error. When these same conditions are imposed directly on the sample matrices, we show that a reduced sample size of n=Ω(d2log p) suffices for the method to estimate neighborhoods consistently. Although this paper focuses on the binary graphical models, we indicate how a generalization of the method of the paper would apply to general discrete Markov random fields.
776 citations
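The neighborhood-selection recipe in the abstract above can be sketched end to end on a toy example. Everything here is illustrative: the 4-node chain graph, the coupling value, and the plain ISTA solver for the l1-regularized logistic regression are assumptions made for the sketch, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-node chain Ising model with edges (0,1), (1,2), (2,3) and
# coupling theta. Node 0's true neighborhood is therefore {1}.
theta = 0.5
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = theta

def gibbs_samples(n, burn=500, thin=5):
    """Draw n samples from the Ising model by single-site Gibbs updates."""
    x = rng.choice([-1.0, 1.0], size=4)
    out = []
    for t in range(burn + n * thin):
        for i in range(4):
            p = 1.0 / (1.0 + np.exp(-2.0 * adj[i] @ x))  # P(x_i = +1 | rest)
            x[i] = 1.0 if rng.random() < p else -1.0
        if t >= burn and (t - burn) % thin == 0:
            out.append(x.copy())
    return np.array(out)

samples = gibbs_samples(3000)
y, X = samples[:, 0], samples[:, 1:]      # regress node 0 on nodes 1..3

def l1_logistic(X, y, lam=0.05, step=0.5, iters=500):
    """ISTA: gradient step on the logistic loss, then soft-thresholding
    to enforce the l1 penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        m = y * (X @ w)
        grad = -(X * (y / (1.0 + np.exp(m)))[:, None]).mean(axis=0)
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

w = l1_logistic(X, y)
print(np.round(w, 2))   # coefficient for node 1 should dominate
```

The nonzero pattern of `w` is read off as the estimated neighborhood of node 0; repeating this per node and combining the neighborhoods recovers the graph.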
TL;DR: Data on the adaptation of the Monte Carlo method, used in statistical mechanics, to study the dynamics of size segregation of solid particles.
Abstract: When a can containing one large ball and a number of smaller ones is shaken, the large ball rises to the top, even when the larger ball is more dense than the others. Similarly, a mixture of different sized particles will segregate by size when shaken. An adaptation of the Monte Carlo method is used to study this size segregation. The results show the local, geometric mechanism by which the segregation is produced. Segregation by size is to be distinguished from the more obvious sifting process which occurs when tiny grains filter down through the interstices between large particles.
776 citations
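The local, geometric mechanism described above can be caricatured in a few lines. This is a deliberately simplified 1-D ratchet, an assumption for illustration only and not the paper's hard-particle Monte Carlo: each shake opens a gap, a small grain easily slips into a gap beneath the large particle, while the reverse move is rare, so the large particle drifts upward.

```python
import random

random.seed(0)

# 1-D caricature of the void-filling mechanism (illustrative assumption,
# not the paper's simulation): on each shake, a small grain fills the gap
# under the large particle with high probability, while the large particle
# only rarely falls back into a gap below it.
TOP = 10                 # highest position in the column
P_GRAIN_FILLS = 0.5      # small grain slips into the gap under the large one
P_LARGE_DROPS = 0.05     # large particle falls back down (rare reverse move)

pos = 0                  # large particle starts at the bottom
for _ in range(2000):    # 2000 shakes
    r = random.random()
    if pos < TOP and r < P_GRAIN_FILLS:
        pos += 1         # a grain filled the void beneath the large particle
    elif pos > 0 and r > 1.0 - P_LARGE_DROPS:
        pos -= 1         # the rarer reverse move

print(pos)               # the large particle ends up near the top
```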
TL;DR: This paper presents a meta-modelling system that automates the labor-intensive, and therefore expensive, process of manually cataloging medical equipment for use in the health care system.
Abstract: John A. Swets, Robyn M. Dawes, and John Monahan BBN Technologies (emeritus), Cambridge, Massachusetts; Radiology Department, Brigham and Women’s Hospital, and Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts, Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, Pennsylvania, and School of Law, University of Virginia, Charlottesville, Virginia
774 citations
08 Jul 2010
TL;DR: The expressiveness of the GraphLab framework is demonstrated by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing and it is shown that using GraphLab the authors can achieve excellent parallel performance on large scale real-world problems.
Abstract: Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems.
772 citations
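The vertex-update abstraction described above can be sketched in miniature. This is a toy reconstruction, not the real GraphLab API: each vertex carries data, an update function reads only its neighborhood, and a scheduler queue drives the iteration, rescheduling a vertex only when one of its inputs actually changed (the sparse computational dependency the abstract mentions). The update rule here is a PageRank-style recurrence, one of the iterative patterns such frameworks target.

```python
from collections import deque

# Toy undirected graph and per-vertex data (names are assumptions for the
# sketch, not GraphLab identifiers).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
rank = {v: 1.0 for v in graph}
DAMP, TOL = 0.85, 1e-9

queue, queued = deque(graph), set(graph)  # scheduler: all vertices pending

def schedule(v):
    """Enqueue v unless it is already waiting to run."""
    if v not in queued:
        queue.append(v)
        queued.add(v)

while queue:
    v = queue.popleft()
    queued.discard(v)
    # Vertex update: read only the neighborhood of v.
    new = (1 - DAMP) + DAMP * sum(rank[u] / len(graph[u]) for u in graph[v])
    if abs(new - rank[v]) > TOL:
        rank[v] = new
        for u in graph[v]:   # sparse dependency: notify only neighbors,
            schedule(u)      # and only when v's value actually changed

print({v: round(r, 3) for v, r in rank.items()})
```

A real GraphLab engine would run many such updates asynchronously across workers while enforcing a consistency model on overlapping neighborhoods; the sequential queue here stands in for that scheduler.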
TL;DR: An adaptive statistical language model is described, which successfully integrates long distance linguistic information with other knowledge sources, and shows the feasibility of incorporating many diverse knowledge sources in a single, unified statistical framework.
771 citations
Authors
Showing all 36645 results
| Name | H-index | Papers | Citations |
| --- | --- | --- | --- |
| Yi Chen | 217 | 4342 | 293080 |
| Rakesh K. Jain | 200 | 1467 | 177727 |
| Robert C. Nichol | 187 | 851 | 162994 |
| Michael I. Jordan | 176 | 1016 | 216204 |
| Jasvinder A. Singh | 176 | 2382 | 223370 |
| J. N. Butler | 172 | 2525 | 175561 |
| P. Chang | 170 | 2154 | 151783 |
| Krzysztof Matyjaszewski | 169 | 1431 | 128585 |
| Yang Yang | 164 | 2704 | 144071 |
| Geoffrey E. Hinton | 157 | 414 | 409047 |
| Herbert A. Simon | 157 | 745 | 194597 |
| Yongsun Kim | 156 | 2588 | 145619 |
| Terrence J. Sejnowski | 155 | 845 | 117382 |
| John B. Goodenough | 151 | 1064 | 113741 |
| Scott Shenker | 150 | 454 | 118017 |