Institution
Center for Information Technology
Facility • Bethesda, Maryland, United States
About: The Center for Information Technology is a facility organization based in Bethesda, Maryland, United States. It is known for research contributions in the topics of artificial neural networks and context (language use). The organization has 3884 authors who have published 5897 publications receiving 159570 citations.
Topics: Artificial neural network, Context (language use), Deep learning, Population, Cloud computing
Papers
TL;DR: A heuristic method for partitioning arbitrary graphs which is both effective in finding optimal partitions, and fast enough to be practical in solving large problems is presented.
Abstract: We consider the problem of partitioning the nodes of a graph with costs on its edges into subsets of given sizes so as to minimize the sum of the costs on all edges cut. This problem arises in several physical situations — for example, in assigning the components of electronic circuits to circuit boards to minimize the number of connections between boards. This paper presents a heuristic method for partitioning arbitrary graphs which is both effective in finding optimal partitions, and fast enough to be practical in solving large problems.
5,082 citations
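The partitioning problem described above (minimize the total cost of edges cut between fixed-size node subsets) can be illustrated with a small sketch. This is not the paper's algorithm, only a minimal greedy-swap pass in its spirit; the function names and the dict-of-edges representation are illustrative assumptions.

```python
def cut_cost(edges, part_a):
    """Sum of the costs of edges crossing the partition (part_a vs. the rest)."""
    return sum(w for (u, v), w in edges.items()
               if (u in part_a) != (v in part_a))

def greedy_swap_pass(edges, nodes, part_a):
    """One pass of a swap heuristic: try exchanging each node in A with each
    node in B (preserving subset sizes) and keep the single best swap."""
    part_a = set(part_a)
    part_b = set(nodes) - part_a
    best_cost, best_swap = cut_cost(edges, part_a), None
    for a in part_a:
        for b in part_b:
            trial = (part_a - {a}) | {b}
            cost = cut_cost(edges, trial)
            if cost < best_cost:
                best_cost, best_swap = cost, (a, b)
    if best_swap:
        a, b = best_swap
        part_a = (part_a - {a}) | {b}
    return part_a, best_cost
```

On a toy graph with two cheap intra-cluster edges and two expensive cross edges, a single pass already recovers the cheaper cut; the full heuristic in the paper iterates such passes with locking to escape local minima.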
TL;DR: This work presents an automatic method for docking organic ligands into protein binding sites that combines an appropriate model of the physico-chemical properties of the docked molecules with efficient methods for sampling the conformational space of the ligand.
2,607 citations
TL;DR: The Xplor-NIH package contains an interface with a new programmatic framework written in C++ that supports the general purpose scripting languages Python and TCL, enabling rapid development of new tools, such as new potential energy terms and new optimization methods.
2,266 citations
TL;DR: In this article the authors describe LOFAR: from the astronomical possibilities of the new telescope to a more detailed technical description of the instrument.
Abstract: LOFAR, the LOw-Frequency ARray, is a new-generation radio interferometer constructed in the north of the Netherlands and across Europe. Utilizing a novel phased-array design, LOFAR covers the largely unexplored low-frequency range from 10-240 MHz and provides a number of unique observing capabilities. Spreading out from a core located near the village of Exloo in the northeast of the Netherlands, a total of 40 LOFAR stations are nearing completion. A further five stations have been deployed throughout Germany, and one station has been built in each of France, Sweden, and the UK. Digital beam-forming techniques make the LOFAR system agile and allow for rapid repointing of the telescope as well as the potential for multiple simultaneous observations. With its dense core array and long interferometric baselines, LOFAR achieves unparalleled sensitivity and angular resolution in the low-frequency radio regime. The LOFAR facilities are jointly operated by the International LOFAR Telescope (ILT) foundation, as an observatory open to the global astronomical community. LOFAR is one of the first radio observatories to feature automated processing pipelines to deliver fully calibrated science products to its user community. LOFAR's new capabilities, techniques and modus operandi make it an important pathfinder for the Square Kilometre Array (SKA). We give an overview of the LOFAR instrument, its major hardware and software components, and the core science objectives that have driven its design. In addition, we present a selection of new results from the commissioning phase of this new radio observatory.
2,067 citations
TL;DR: This article surveys automated text analysis for political science, provides guidance on how to validate the output of the models, and clarifies misconceptions and errors in the literature.
Abstract: Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have hindered their use in political science research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods—they are no substitute for careful thought and close reading and require extensive and problem-specific validation. We survey a wide range of new methods, provide guidance on how to validate the output of the models, and clarify misconceptions and errors in the literature. To conclude, we argue that for automated text methods to become a standard tool for political scientists, methodologists must contribute new methods and new methods of validation. Language is the medium for politics and political conflict. Candidates debate and state policy positions during a campaign. Once elected, representatives write and debate legislation. After laws are passed, bureaucrats solicit comments before they issue regulations. Nations regularly negotiate and then sign peace treaties, with language that signals the motivations and relative power of the countries involved. News reports document the day-to-day affairs of international relations that provide a detailed picture of conflict and cooperation. Individual candidates and political parties articulate their views through party platforms and manifestos. Terrorist groups even reveal their preferences and goals through recruiting materials, magazines, and public statements. These examples, and many others throughout political science, show that to understand what politics is about we need to know what political actors are saying and writing. 
Recognizing that language is central to the study of politics is not new. To the contrary, scholars of politics have long recognized that much of politics is expressed in words. But scholars have struggled when using texts to make inferences about politics. The primary problem is volume: there are simply too many political texts. Rarely are scholars able to manually read all the texts in even moderately sized corpora. And hiring coders to manually read all documents is still very expensive. The result is that
2,044 citations
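Most of the text-as-data methods the survey above covers start from the same representation: a document-term matrix of word counts. As a minimal, generic illustration (not the authors' methods; the function names and tokenization rule are assumptions), such a matrix can be built as follows:

```python
import re
from collections import Counter

def term_counts(doc):
    """Lowercase the document and count occurrences of each word token."""
    return Counter(re.findall(r"[a-z']+", doc.lower()))

def doc_term_matrix(docs):
    """Return the shared sorted vocabulary and one count row per document."""
    counts = [term_counts(d) for d in docs]
    vocab = sorted(set().union(*counts))
    return vocab, [[c[t] for t in vocab] for c in counts]
```

Everything downstream, from dictionary scoring to topic models, operates on variants of this matrix, which is why the survey stresses validating what the counts actually capture.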
Authors
Showing all 3888 results
Name | H-index | Papers | Citations |
---|---|---|---|
David W. Bates | 159 | 1239 | 116698 |
Jian Yang | 142 | 1818 | 111166 |
Chao Zhang | 127 | 3119 | 84711 |
H. Bas Bueno-de-Mesquita | 103 | 463 | 37905 |
James M. Gold | 96 | 383 | 32208 |
Gerhard Hummer | 93 | 416 | 34375 |
Sharon M. Wahl | 88 | 271 | 34018 |
G. Marius Clore | 87 | 375 | 29831 |
Thomas Lengauer | 80 | 477 | 34430 |
Martin Vingron | 77 | 359 | 33403 |
Kerstin Dautenhahn | 75 | 461 | 22825 |
Patrick Baudisch | 73 | 249 | 15537 |
Victor W. Pike | 72 | 499 | 17016 |
Paul T. Wingfield | 72 | 238 | 18586 |
Margaret Martonosi | 71 | 277 | 23162 |