
Showing papers by "University of Bristol" published in 2016


Journal ArticleDOI
12 Oct 2016-BMJ
TL;DR: Risk Of Bias In Non-randomised Studies - of Interventions (ROBINS-I) is developed: a new tool for evaluating risk of bias in estimates of the comparative effectiveness of interventions from studies that did not use randomisation to allocate units or clusters of individuals to comparison groups.
Abstract: Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.

8,028 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4, +2519 more (695 institutions)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: Gaia as discussed by the authors is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach.
Abstract: Gaia is a cornerstone mission in the science programme of the European Space Agency (ESA). The spacecraft construction was approved in 2006, following a study in which the original interferometric concept was changed to a direct-imaging approach. Both the spacecraft and the payload were built by European industry. The involvement of the scientific community focusses on data processing for which the international Gaia Data Processing and Analysis Consortium (DPAC) was selected in 2007. Gaia was launched on 19 December 2013 and arrived at its operating point, the second Lagrange point of the Sun-Earth-Moon system, a few weeks later. The commissioning of the spacecraft and payload was completed on 19 July 2014. The nominal five-year mission started with four weeks of special, ecliptic-pole scanning and subsequently transferred into full-sky scanning mode. We recall the scientific goals of Gaia and give a description of the as-built spacecraft that is currently (mid-2016) being operated to achieve these goals. We pay special attention to the payload module, the performance of which is closely related to the scientific performance of the mission. We provide a summary of the commissioning activities and findings, followed by a description of the routine operational mode. We summarise scientific performance estimates on the basis of in-orbit operations. Several intermediate Gaia data releases are planned and the data can be retrieved from the Gaia Archive, which is available through the Gaia home page.

5,164 citations


Journal ArticleDOI
Theo Vos1, Christine Allen1, Megha Arora1, Ryan M Barber1, +696 more (260 institutions)
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) as discussed by the authors was used to estimate the incidence, prevalence, and years lived with disability for diseases and injuries at the global, regional, and national scale over the period of 1990 to 2015.

5,050 citations


Journal ArticleDOI
TL;DR: The emergence of MCR-1 heralds the breach of the last group of antibiotics, polymyxins, by plasmid-mediated resistance, in Enterobacteriaceae and emphasise the urgent need for coordinated global action in the fight against pan-drug-resistant Gram-negative bacteria.
Abstract: Summary Background Until now, polymyxin resistance has involved chromosomal mutations but has never been reported via horizontal gene transfer. During a routine surveillance project on antimicrobial resistance in commensal Escherichia coli from food animals in China, a major increase of colistin resistance was observed. When an E coli strain, SHP45, possessing colistin resistance that could be transferred to another strain, was isolated from a pig, we conducted further analysis of possible plasmid-mediated polymyxin resistance. Herein, we report the emergence of the first plasmid-mediated polymyxin resistance mechanism, MCR-1, in Enterobacteriaceae. Methods The mcr-1 gene in E coli strain SHP45 was identified by whole plasmid sequencing and subcloning. MCR-1 mechanistic studies were done with sequence comparisons, homology modelling, and electrospray ionisation mass spectrometry. The prevalence of mcr-1 was investigated in E coli and Klebsiella pneumoniae strains collected from five provinces between April, 2011, and November, 2014. The ability of MCR-1 to confer polymyxin resistance in vivo was examined in a murine thigh model. Findings Polymyxin resistance was shown to be singularly due to the plasmid-mediated mcr-1 gene. The plasmid carrying mcr-1 was mobilised to an E coli recipient at a frequency of 10⁻¹ to 10⁻³ cells per recipient cell by conjugation, and maintained in K pneumoniae and Pseudomonas aeruginosa. In an in-vivo model, production of MCR-1 negated the efficacy of colistin. MCR-1 is a member of the phosphoethanolamine transferase enzyme family, with expression in E coli resulting in the addition of phosphoethanolamine to lipid A. We observed mcr-1 carriage in E coli isolates collected from 78 (15%) of 523 samples of raw meat and 166 (21%) of 804 animals during 2011–14, and 16 (1%) of 1322 samples from inpatients with infection. Interpretation The emergence of MCR-1 heralds the breach of the last group of antibiotics, polymyxins, by plasmid-mediated resistance. Although currently confined to China, MCR-1 is likely to emulate other global resistance mechanisms such as NDM-1. Our findings emphasise the urgent need for coordinated global action in the fight against pan-drug-resistant Gram-negative bacteria. Funding Ministry of Science and Technology of China, National Natural Science Foundation of China.

3,647 citations


Journal ArticleDOI
TL;DR: A novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate is presented, which is consistent even when up to 50% of the information comes from invalid instrumental variables.
Abstract: Developments in genome-wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse-variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite-sample Type 1 error rates than the inverse-variance weighted method, and is complementary to the recently proposed MR-Egger (Mendelian randomization-Egger) regression method. In analyses of the causal effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol on coronary artery disease risk, the inverse-variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR-Egger regression methods suggest a null effect of high-density lipoprotein cholesterol that corresponds with the experimental evidence. Both median-based and MR-Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants.
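A minimal Python sketch of the weighted median idea (not the authors' code): each variant contributes a Wald ratio estimate (SNP-outcome association divided by SNP-exposure association), and the causal estimate is the weight-interpolated median of those ratios, here using inverse-variance weights from a first-order approximation to the ratio variance. All summary statistics in the usage example are invented.

```python
import numpy as np

def weighted_median(betas, weights):
    """Weighted median: the value at which the standardised cumulative
    weight of the ordered estimates crosses 50%, with linear interpolation."""
    order = np.argsort(betas)
    b = np.asarray(betas, float)[order]
    w = np.asarray(weights, float)[order]
    cum = (np.cumsum(w) - 0.5 * w) / np.sum(w)
    return np.interp(0.5, cum, b)

def weighted_median_mr(beta_exp, beta_out, se_out):
    """Weighted median causal estimate from two-sample summary statistics.
    Each variant's Wald ratio is beta_out / beta_exp; weights are the inverse
    of a first-order approximation to the ratio's variance."""
    beta_exp = np.asarray(beta_exp, float)
    beta_out = np.asarray(beta_out, float)
    se_out = np.asarray(se_out, float)
    ratio = beta_out / beta_exp
    var_ratio = se_out ** 2 / beta_exp ** 2
    return weighted_median(ratio, 1.0 / var_ratio)

# made-up summary statistics for five variants; the third behaves like an
# invalid (pleiotropic) instrument, and the median is robust to it
beta_exp = [0.12, 0.08, 0.15, 0.10, 0.09]
beta_out = [0.024, 0.018, 0.090, 0.021, 0.017]
se_out   = [0.005, 0.006, 0.010, 0.005, 0.006]
print(weighted_median_mr(beta_exp, beta_out, se_out))  # ~0.21, unswayed by the outlier
```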

2,959 citations


Journal ArticleDOI
TL;DR: The first Gaia data release, Gaia DR1 as discussed by the authors, consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues.
Abstract: Context. At about 1000 days after the launch of Gaia we present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7. Aims: A summary of Gaia DR1 is presented along with illustrations of the scientific quality of the data, followed by a discussion of the limitations due to the preliminary nature of this release. Methods: The raw data collected by Gaia during the first 14 months of the mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC) and turned into an astrometric and photometric catalogue. Results: Gaia DR1 consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues - a realisation of the Tycho-Gaia Astrometric Solution (TGAS) - and a secondary astrometric data set containing the positions for an additional 1.1 billion sources. The second component is the photometric data set, consisting of mean G-band magnitudes for all sources. The G-band light curves and the characteristics of 3000 Cepheid and RR Lyrae stars, observed at high cadence around the south ecliptic pole, form the third component. For the primary astrometric data set the typical uncertainty is about 0.3 mas for the positions and parallaxes, and about 1 mas yr⁻¹ for the proper motions. A systematic component of 0.3 mas should be added to the parallax uncertainties. For the subset of 94 000 Hipparcos stars in the primary data set, the proper motions are much more precise at about 0.06 mas yr⁻¹. For the secondary astrometric data set, the typical uncertainty of the positions is 10 mas. The median uncertainties on the mean G-band magnitudes range from the mmag level to 0.03 mag over the magnitude range 5 to 20.7. Conclusions: Gaia DR1 is an important milestone ahead of the next Gaia data release, which will feature five-parameter astrometry for all sources. Extensive validation shows that Gaia DR1 represents a major advance in the mapping of the heavens and the availability of basic stellar data that underpin observational astrophysics. Nevertheless, the very preliminary nature of this first Gaia data release does lead to a number of important limitations to the data quality which should be carefully considered before drawing conclusions from the data.

2,174 citations


Journal ArticleDOI
Shane A. McCarthy1, Sayantan Das2, Warren W. Kretzschmar3, Olivier Delaneau4, Andrew R. Wood5, Alexander Teumer6, Hyun Min Kang2, Christian Fuchsberger2, Petr Danecek1, Kevin Sharp3, Yang Luo1, C Sidore7, Alan Kwong2, Nicholas J. Timpson8, Seppo Koskinen, Scott I. Vrieze9, Laura J. Scott2, He Zhang2, Anubha Mahajan3, Jan H. Veldink, Ulrike Peters10, Ulrike Peters11, Carlos N. Pato12, Cornelia M. van Duijn13, Christopher E. Gillies2, Ilaria Gandin14, Massimo Mezzavilla, Arthur Gilly1, Massimiliano Cocca14, Michela Traglia, Andrea Angius7, Jeffrey C. Barrett1, D.I. Boomsma15, Kari Branham2, Gerome Breen16, Gerome Breen17, Chad M. Brummett2, Fabio Busonero7, Harry Campbell18, Andrew T. Chan19, Sai Chen2, Emily Y. Chew20, Francis S. Collins20, Laura J Corbin8, George Davey Smith8, George Dedoussis21, Marcus Dörr6, Aliki-Eleni Farmaki21, Luigi Ferrucci20, Lukas Forer22, Ross M. Fraser2, Stacey Gabriel23, Shawn Levy, Leif Groop24, Leif Groop25, Tabitha A. Harrison11, Andrew T. Hattersley5, Oddgeir L. Holmen26, Kristian Hveem26, Matthias Kretzler2, James Lee27, Matt McGue28, Thomas Meitinger29, David Melzer5, Josine L. Min8, Karen L. Mohlke30, John B. Vincent31, Matthias Nauck6, Deborah A. Nickerson10, Aarno Palotie23, Aarno Palotie19, Michele T. Pato12, Nicola Pirastu14, Melvin G. McInnis2, J. Brent Richards17, J. Brent Richards32, Cinzia Sala, Veikko Salomaa, David Schlessinger20, Sebastian Schoenherr22, P. Eline Slagboom33, Kerrin S. Small17, Tim D. Spector17, Dwight Stambolian34, Marcus A. Tuke5, Jaakko Tuomilehto, Leonard H. van den Berg, Wouter van Rheenen, Uwe Völker6, Cisca Wijmenga35, Daniela Toniolo, Eleftheria Zeggini1, Paolo Gasparini14, Matthew G. Sampson2, James F. Wilson18, Timothy M. Frayling5, Paul I.W. de Bakker36, Morris A. Swertz35, Steven A. McCarroll19, Charles Kooperberg11, Annelot M. Dekker, David Altshuler, Cristen J. Willer2, William G. Iacono28, Samuli Ripatti25, Nicole Soranzo27, Nicole Soranzo1, Klaudia Walter1, Anand Swaroop20, Francesco Cucca7, Carl A. Anderson1, Richard M. Myers, Michael Boehnke2, Mark I. McCarthy3, Mark I. McCarthy37, Richard Durbin1, Gonçalo R. Abecasis2, Jonathan Marchini3 
TL;DR: A reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies.
Abstract: We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies, and it can help to discover and refine causal loci. We describe remote server resources that allow researchers to carry out imputation and phasing consistently and efficiently.

2,149 citations


Journal ArticleDOI
TL;DR: At a median of 10 years, prostate-cancer-specific mortality was low irrespective of the treatment assigned, with no significant difference among treatments.
Abstract: BACKGROUND The comparative effectiveness of treatments for prostate cancer that is detected by prostate-specific antigen (PSA) testing remains uncertain. METHODS We compared active monitoring, radical prostatectomy, and external-beam radiotherapy for the treatment of clinically localized prostate cancer. Between 1999 and 2009, a total of 82,429 men 50 to 69 years of age received a PSA test; 2664 received a diagnosis of localized prostate cancer, and 1643 agreed to undergo randomization to active monitoring (545 men), surgery (553), or radiotherapy (545). The primary outcome was prostate-cancer mortality at a median of 10 years of follow-up. Secondary outcomes included the rates of disease progression, metastases, and all-cause deaths. RESULTS There were 17 prostate-cancer–specific deaths overall: 8 in the active-monitoring group (1.5 deaths per 1000 person-years; 95% confidence interval [CI], 0.7 to 3.0), 5 in the surgery group (0.9 per 1000 person-years; 95% CI, 0.4 to 2.2), and 4 in the radiotherapy group (0.7 per 1000 person-years; 95% CI, 0.3 to 2.0); the difference among the groups was not significant (P=0.48 for the overall comparison). In addition, no significant difference was seen among the groups in the number of deaths from any cause (169 deaths overall; P=0.87 for the comparison among the three groups). Metastases developed in more men in the active-monitoring group (33 men; 6.3 events per 1000 person-years; 95% CI, 4.5 to 8.8) than in the surgery group (13 men; 2.4 per 1000 person-years; 95% CI, 1.4 to 4.2) or the radiotherapy group (16 men; 3.0 per 1000 person-years; 95% CI, 1.9 to 4.9) (P=0.004 for the overall comparison). Higher rates of disease progression were seen in the active-monitoring group (112 men; 22.9 events per 1000 person-years; 95% CI, 19.0 to 27.5) than in the surgery group (46 men; 8.9 events per 1000 person-years; 95% CI, 6.7 to 11.9) or the radiotherapy group (46 men; 9.0 events per 1000 person-years; 95% CI, 6.7 to 12.0) (P<0.001 for the overall comparison). CONCLUSIONS At a median of 10 years, prostate-cancer–specific mortality was low irrespective of the treatment assigned, with no significant difference among treatments. Surgery and radiotherapy were associated with lower incidences of disease progression and metastases than was active monitoring.
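For readers unfamiliar with the reporting format, the results above are incidence rates per 1000 person-years with confidence intervals. A small illustrative Python sketch of that calculation using an exact Poisson interval; the person-years figure is hypothetical (back-calculated from the reported 1.5 per 1000 person-years) and is not taken from the paper.

```python
from scipy.stats import chi2

def rate_per_1000_py(events, person_years, level=0.95):
    """Incidence rate per 1000 person-years with an exact (Poisson) confidence interval."""
    alpha = 1.0 - level
    rate = 1000.0 * events / person_years
    lower = 0.0 if events == 0 else 1000.0 * chi2.ppf(alpha / 2, 2 * events) / (2 * person_years)
    upper = 1000.0 * chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_years)
    return rate, lower, upper

# hypothetical figures: 8 deaths over ~5300 person-years gives ~1.5 per 1000
# person-years, with an interval close to the 0.7 to 3.0 reported for active monitoring
print(rate_per_1000_py(events=8, person_years=5300))
```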

2,016 citations


Journal ArticleDOI
TL;DR: The associations of both overweight and obesity with higher all-cause mortality were broadly consistent across four continents, supporting strategies to combat the entire spectrum of excess adiposity in many populations.

1,731 citations


Journal ArticleDOI
Nicholas J Kassebaum1, Megha Arora1, Ryan M Barber1, Zulfiqar A Bhutta2, +679 more (268 institutions)
TL;DR: In this paper, the authors used the Global Burden of Diseases, Injuries, and Risk Factors Study 2015 (GBD 2015) for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2015.

Journal ArticleDOI
TL;DR: The large-scale evidence from randomised trials indicates that it is unlikely that large absolute excesses in other serious adverse events still await discovery, and any further findings that emerge about the effects of statin therapy would not be expected to alter materially the balance of benefits and harms.


Journal ArticleDOI
Aysu Okbay1, Jonathan P. Beauchamp2, Mark Alan Fontana3, James J. Lee4, +293 more (81 institutions)
26 May 2016-Nature
TL;DR: In this article, the results of a genome-wide association study (GWAS) for educational attainment were reported, showing that single-nucleotide polymorphisms associated with educational attainment disproportionately occur in genomic regions regulating gene expression in the fetal brain.
Abstract: Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases.

Journal ArticleDOI
TL;DR: ROBIS is the first rigorously developed tool designed specifically to assess the risk of bias in systematic reviews; it is currently aimed at four broad categories of reviews, mainly within health care settings.

Journal ArticleDOI
TL;DR: The enormous health loss attributable to viral hepatitis, and the availability of effective vaccines and treatments, suggests an important opportunity to improve public health.

Journal ArticleDOI
TL;DR: This paper reviews the state of the art of this multidisciplinary area, identifies the key research challenges, and discusses the developments in diagnostics, modeling and further extensions of cross section and reaction rate databases needed to address them.
Abstract: Plasma–liquid interactions represent a growing interdisciplinary area of research involving plasma science, fluid dynamics, heat and mass transfer, photolysis, multiphase chemistry and aerosol science. This review provides an assessment of the state-of-the-art of this multidisciplinary area and identifies the key research challenges. The developments in diagnostics, modeling and further extensions of cross section and reaction rate databases that are necessary to address these challenges are discussed. The review focusses on non-equilibrium plasmas.

Journal ArticleDOI
TL;DR: The main outcome measured was abstinence from smoking at longest follow-up; the most rigorous definition of abstinence was used, with biochemically validated rates preferred where they were reported.
Abstract: Nicotine receptor partial agonists may help people to stop smoking by a combination of maintaining moderate levels of dopamine to counteract withdrawal symptoms (acting as an agonist) and reducing smoking satisfaction (acting as an antagonist). The primary objective of this review is to assess the efficacy and tolerability of nicotine receptor partial agonists, including cytisine, dianicline and varenicline for smoking cessation. We searched the Cochrane Tobacco Addiction Group's specialised register for trials, using the terms ('cytisine' or 'Tabex' or 'dianicline' or 'varenicline' or 'nicotine receptor partial agonist') in the title or abstract, or as keywords. The register is compiled from searches of MEDLINE, EMBASE, PsycINFO and Web of Science using MeSH terms and free text to identify controlled trials of interventions for smoking cessation and prevention. We contacted authors of trial reports for additional information where necessary. The latest update of the specialised register was in December 2011. We also searched online clinical trials registers. We included randomized controlled trials which compared the treatment drug with placebo. We also included comparisons with bupropion and nicotine patches where available. We excluded trials which did not report a minimum follow-up period of six months from start of treatment. We extracted data on the type of participants, the dose and duration of treatment, the outcome measures, the randomization procedure, concealment of allocation, and completeness of follow-up. The main outcome measured was abstinence from smoking at longest follow-up. We used the most rigorous definition of abstinence, and preferred biochemically validated rates where they were reported. Where appropriate we pooled risk ratios (RRs), using the Mantel-Haenszel fixed-effect model. Two recent cytisine trials (937 people) found that more participants taking cytisine stopped smoking compared with placebo at longest follow-up, with a pooled RR of 3.98 (95% confidence interval (CI) 2.01 to 7.87). One trial of dianicline (602 people) failed to find evidence that it was effective (RR 1.20, 95% CI 0.82 to 1.75). Fifteen trials compared varenicline with placebo for smoking cessation; three of these also included a bupropion treatment arm. We also found one open-label trial comparing varenicline plus counselling with counselling alone. We found one relapse prevention trial, comparing varenicline with placebo, and two open-label trials comparing varenicline with nicotine replacement therapy (NRT). We also include one trial in which all the participants were given varenicline, but received behavioural support either online or by phone calls, or by both methods. This trial is not included in the analyses, but contributes to the data on safety and tolerability. The included studies covered 12,223 participants, 8100 of whom used varenicline. The pooled RR for continuous or sustained abstinence at six months or longer for varenicline at standard dosage versus placebo was 2.27 (95% CI 2.02 to 2.55; 14 trials, 6166 people, excluding one trial evaluating long term safety). Varenicline at lower or variable doses was also shown to be effective, with an RR of 2.09 (95% CI 1.56 to 2.78; 4 trials, 1272 people). The pooled RR for varenicline versus bupropion at one year was 1.52 (95% CI 1.22 to 1.88; 3 trials, 1622 people). The RR for varenicline versus NRT for point prevalence abstinence at 24 weeks was 1.13 (95% CI 0.94 to 1.35; 2 trials, 778 people).
The two trials which tested the use of varenicline beyond the 12-week standard regimen found the drug to be well-tolerated during long-term use. The main adverse effect of varenicline was nausea, which was mostly at mild to moderate levels and usually subsided over time. A meta-analysis of reported serious adverse events occurring during or after active treatment and not necessarily considered attributable to treatment suggests there may be a one-third increase in the chance of severe adverse effects among people using varenicline (RR 1.36; 95% CI 1.04 to 1.79; 17 trials, 7725 people), but this finding needs to be tested further. Post-marketing safety data have raised questions about a possible association between varenicline and depressed mood, agitation, and suicidal behaviour or ideation. The labelling of varenicline was amended in 2008, and the manufacturers produced a Medication Guide. Thus far, surveillance reports and secondary analyses of trial data are inconclusive, but the possibility of a link between varenicline and serious psychiatric or cardiovascular events cannot be ruled out. Cytisine increases the chances of quitting, although absolute quit rates were modest in two recent trials. Varenicline at standard dose increased the chances of successful long-term smoking cessation between two- and threefold compared with pharmacologically unassisted quit attempts. Lower dose regimens also conferred benefits for cessation, while reducing the incidence of adverse events. More participants quit successfully with varenicline than with bupropion. Two open-label trials of varenicline versus NRT suggested a modest benefit of varenicline but confidence intervals did not rule out equivalence. Limited evidence suggests that varenicline may have a role to play in relapse prevention. The main adverse effect of varenicline is nausea, but mostly at mild to moderate levels and tending to subside over time. Possible links with serious adverse events, including serious psychiatric or cardiovascular events, cannot be ruled out. Future trials of cytisine may test extended regimens and more intensive behavioural support. There is a need for further trials of the efficacy of varenicline treatment extended beyond 12 weeks.
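The pooling described above uses the Mantel-Haenszel fixed-effect model for risk ratios. A compact Python sketch of that estimator, using invented two-trial counts for illustration rather than data from the review:

```python
import numpy as np

def mantel_haenszel_rr(events_t, total_t, events_c, total_c):
    """Mantel-Haenszel fixed-effect pooled risk ratio across trials.
    events_t, total_t: events and participants in the treatment arm of each trial;
    events_c, total_c: the same for the control arm."""
    events_t, total_t = np.asarray(events_t, float), np.asarray(total_t, float)
    events_c, total_c = np.asarray(events_c, float), np.asarray(total_c, float)
    n = total_t + total_c
    # MH weights: each trial contributes events scaled by the opposite arm's share
    return np.sum(events_t * total_c / n) / np.sum(events_c * total_t / n)

# invented counts for two small trials (quitters / randomised in each arm)
print(mantel_haenszel_rr(events_t=[30, 45], total_t=[200, 300],
                         events_c=[10, 12], total_c=[200, 300]))  # ~3.4
```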

Journal ArticleDOI
TL;DR: In this article, the authors classify different types of sol-gel precursors and how these can influence a solgel process, from self-assembly and ordering in the initial solution, to phase separation during the gelation process and finally to crystallographic transformations at high temperature.
Abstract: From its initial use to describe hydrolysis and condensation processes, the term ‘sol–gel’ is now used for a diverse range of chemistries. In fact, it is perhaps better defined more broadly as covering the synthesis of solid materials such as metal oxides from solution-state precursors. These can include metal alkoxides that crosslink to form metal–oxane gels, but also metal ion–chelate complexes or organic polymer gels containing metal species. What is important across all of these examples is how the choice of precursor can have a significant impact on the structure and composition of the solid product. In this review, we will attempt to classify different types of sol–gel precursor and how these can influence a sol–gel process, from self-assembly and ordering in the initial solution, to phase separation during the gelation process and finally to crystallographic transformations at high temperature.

Journal ArticleDOI
TL;DR: In this analysis of patient-reported outcomes after treatment for localized prostate cancer, patterns of severity, recovery, and decline in urinary, bowel, and sexual function and associated quality of life differed among the three groups.
Abstract: Background Robust data on patient-reported outcome measures comparing treatments for clinically localized prostate cancer are lacking. We investigated the effects of active monitoring, radical prostatectomy, and radical radiotherapy with hormones on patient-reported outcomes. Methods We compared patient-reported outcomes among 1643 men in the Prostate Testing for Cancer and Treatment (ProtecT) trial who completed questionnaires before diagnosis, at 6 and 12 months after randomization, and annually thereafter. Patients completed validated measures that assessed urinary, bowel, and sexual function and specific effects on quality of life, anxiety and depression, and general health. Cancer-related quality of life was assessed at 5 years. Complete 6-year data were analyzed according to the intention-to-treat principle. Results The rate of questionnaire completion during follow-up was higher than 85% for most measures. Of the three treatments, prostatectomy had the greatest negative effect on sexual function and u...

Journal ArticleDOI
TL;DR: This paper presents an overview of SA and its links to uncertainty analysis, model calibration and evaluation, and robust decision-making, and provides practical guidelines by developing a workflow for the application of SA.
Abstract: Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. In this paper we review the SA literature with the goal of providing: (i) a comprehensive view of SA approaches also in relation to other methodologies for model identification and application; (ii) a systematic classification of the most commonly used SA methods; (iii) practical guidelines for the application of SA. The paper aims at delivering an introduction to SA for non-specialist readers, as well as practical advice with best practice examples from the literature; and at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research. We present an overview of SA and its link to uncertainty analysis, model calibration and evaluation, robust decision-making. We provide a systematic review of existing approaches, which can support users in the choice of an SA method. We provide practical guidelines by developing a workflow for the application of SA and discuss critical choices. We give best practice examples from the literature and highlight trends and gaps for future research.
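As one concrete example of the variance-based methods covered by the review, the sketch below estimates first-order Sobol sensitivity indices with a Saltelli-style pick-and-freeze Monte Carlo estimator. The estimator choice, the uniform input assumption, and the toy model are illustrative assumptions, not the paper's workflow.

```python
import numpy as np

def sobol_first_order(model, n_inputs, n_samples=100_000, seed=0):
    """First-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y), estimated with
    the pick-and-freeze (Saltelli-style) Monte Carlo scheme. Inputs are
    sampled independently and uniformly on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # keep X_i from B, resample all other inputs
        indices[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return indices

# toy model Y = 4*X1 + 2*X2 + X3: analytic first-order indices are 16/21, 4/21, 1/21
f = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
print(sobol_first_order(f, n_inputs=3))
```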

Journal ArticleDOI
TL;DR: The consensus that humans are causing recent global warming is shared by 90% to 100% of publishing climate scientists according to six independent studies by co-authors of this paper as discussed by the authors.
Abstract: The consensus that humans are causing recent global warming is shared by 90%–100% of publishing climate scientists according to six independent studies by co-authors of this paper. Those results are consistent with the 97% consensus reported by Cook et al (Environ. Res. Lett. 8 024024) based on 11 944 abstracts of research papers, of which 4014 took a position on the cause of recent global warming. A survey of authors of those papers (N = 2412 papers) also supported a 97% consensus. Tol (2016 Environ. Res. Lett. 11 048001) comes to a different conclusion using results from surveys of non-experts such as economic geologists and a self-selected group of those who reject the consensus. We demonstrate that this outcome is not unexpected because the level of consensus correlates with expertise in climate science. At one point, Tol also reduces the apparent consensus by assuming that abstracts that do not explicitly state the cause of global warming ('no position') represent non-endorsement, an approach that if applied elsewhere would reject consensus on well-established theories such as plate tectonics. We examine the available studies and conclude that the finding of 97% consensus in published climate research is robust and consistent with other surveys of climate scientists and peer-reviewed studies.

Journal ArticleDOI
TL;DR: The aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty and to summarise the simulation and empirical evidence that compares them; the Q‐profile method and the alternative approach based on a 'generalised Cochran between‐study variance statistic' are recommended.
Abstract: Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has been long challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios.
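A brief Python sketch of two of the estimators discussed: the DerSimonian-Laird moment estimator and the Paule-Mandel estimator, the latter solved here by bisection on the generalised Cochran Q statistic. The toy effect sizes and variances are invented for illustration.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance tau^2.
    y: study effect estimates; v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def paule_mandel(y, v, tol=1e-10):
    """Paule-Mandel estimator: the tau^2 at which the generalised Cochran Q
    statistic equals its expectation k - 1, found by bisection (Q decreases
    monotonically in tau^2)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def gen_q(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    if gen_q(0.0) <= k - 1:                    # no excess heterogeneity
        return 0.0
    lo, hi = 0.0, 1.0
    while gen_q(hi) > k - 1:                   # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gen_q(mid) > k - 1 else (lo, mid)
    return 0.5 * (lo + hi)

# invented log odds ratios and within-study variances for six studies
y = [0.10, 0.45, -0.20, 0.55, 0.60, 0.05]
v = [0.02, 0.03, 0.02, 0.04, 0.05, 0.03]
print(dersimonian_laird(y, v), paule_mandel(y, v))
```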

Journal ArticleDOI
TL;DR: In this paper, the authors conducted genome-wide association studies of three phenotypes: subjective well-being (n = 298,420), depressive symptoms (n= 161,460), and neuroticism(n = 170,911).
Abstract: Very few genetic variants have been associated with depression and neuroticism, likely because of limitations on sample size in previous studies. Subjective well-being, a phenotype that is genetically correlated with both of these traits, has not yet been studied with genome-wide data. We conducted genome-wide association studies of three phenotypes: subjective well-being (n = 298,420), depressive symptoms (n = 161,460), and neuroticism (n = 170,911). We identify 3 variants associated with subjective well-being, 2 variants associated with depressive symptoms, and 11 variants associated with neuroticism, including 2 inversion polymorphisms. The two loci associated with depressive symptoms replicate in an independent depression sample. Joint analyses that exploit the high genetic correlations between the phenotypes (|ρ̂| ≈ 0.8) strengthen the overall credibility of the findings and allow us to identify additional variants. Across our phenotypes, loci regulating expression in central nervous system and adrenal or pancreas tissues are strongly enriched for association.

Journal ArticleDOI
TL;DR: In this article, a general framework for smoothing parameter estimation for models with regular likelihoods constructed in terms of unknown smooth functions of covariates is discussed, where the smoothing parameters controlling the extent of penalization are estimated by Laplace approximate marginal likelihood.
Abstract: This article discusses a general framework for smoothing parameter estimation for models with regular likelihoods constructed in terms of unknown smooth functions of covariates. Gaussian random effects and parametric terms may also be present. By construction the method is numerically stable and convergent, and enables smoothing parameter uncertainty to be quantified. The latter enables us to fix a well known problem with AIC for such models, thereby improving the range of model selection tools available. The smooth functions are represented by reduced rank spline like smoothers, with associated quadratic penalties measuring function smoothness. Model estimation is by penalized likelihood maximization, where the smoothing parameters controlling the extent of penalization are estimated by Laplace approximate marginal likelihood. The methods cover, for example, generalized additive models for nonexponential family responses (e.g., beta, ordered categorical, scaled t distribution, negative binomial a...
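A simplified Python sketch of the core idea of choosing the smoothing parameter by maximising a marginal likelihood. It assumes a Gaussian response (so the marginal likelihood is exact rather than Laplace-approximate) and uses a ridge penalty on a Gaussian-bump basis instead of the paper's reduced-rank spline smoothers; the basis, grid search, and data are illustrative assumptions rather than the published method.

```python
import numpy as np

def log_marginal_likelihood(y, X, lam):
    """Exact Gaussian log marginal likelihood with the error variance profiled
    out, for the model y ~ N(0, sigma^2 (I + X X^T / lam)), i.e. coefficients
    treated as random effects with prior variance sigma^2 / lam."""
    n = len(y)
    V = np.eye(n) + X @ X.T / lam
    _, logdet = np.linalg.slogdet(V)
    sigma2 = y @ np.linalg.solve(V, y) / n
    return -0.5 * (n * np.log(2 * np.pi * sigma2) + logdet + n)

def fit_smooth(x, y, n_basis=20, lambdas=np.logspace(-4, 4, 41)):
    """Pick the smoothing parameter by maximising the marginal likelihood on a
    grid, then return the penalised fit and the chosen lambda."""
    centres = np.linspace(x.min(), x.max(), n_basis)
    width = (x.max() - x.min()) / n_basis
    X = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / width) ** 2)
    yc = y - y.mean()                          # centre so a zero-mean prior is reasonable
    lam = max(lambdas, key=lambda lam_: log_marginal_likelihood(yc, X, lam_))
    beta = np.linalg.solve(X.T @ X + lam * np.eye(n_basis), X.T @ yc)
    return y.mean() + X @ beta, lam

# toy usage: recover a smooth curve from noisy observations
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
fit, lam_hat = fit_smooth(x, y)
print(lam_hat)
```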

Journal ArticleDOI
Marielle Saunois1, Philippe Bousquet1, Ben Poulter2, Anna Peregon1, Philippe Ciais1, Josep G. Canadell3, Edward J. Dlugokencky4, Giuseppe Etiope5, David Bastviken6, Sander Houweling7, Greet Janssens-Maenhout, Francesco N. Tubiello8, Simona Castaldi, Robert B. Jackson9, Mihai Alexe, Vivek K. Arora, David J. Beerling10, Peter Bergamaschi, Donald R. Blake11, Gordon Brailsford12, Victor Brovkin13, Lori Bruhwiler4, Cyril Crevoisier14, Patrick M. Crill, Kristofer R. Covey15, Charles L. Curry16, Christian Frankenberg17, Nicola Gedney18, Lena Höglund-Isaksson19, Misa Ishizawa20, Akihiko Ito20, Fortunat Joos21, Heon Sook Kim20, Thomas Kleinen13, Paul B. Krummel3, Jean-Francois Lamarque22, Ray L. Langenfelds3, Robin Locatelli1, Toshinobu Machida20, Shamil Maksyutov20, Kyle C. McDonald23, Julia Marshall13, Joe R. Melton, Isamu Morino18, Vaishali Naik24, Simon O'Doherty25, Frans-Jan W. Parmentier26, Prabir K. Patra27, Changhui Peng28, Shushi Peng1, Glen P. Peters29, Isabelle Pison1, Catherine Prigent30, Ronald G. Prinn31, Michel Ramonet1, William J. Riley32, Makoto Saito20, Monia Santini, Ronny Schroeder23, Ronny Schroeder33, Isobel J. Simpson11, Renato Spahni21, P. Steele3, Atsushi Takizawa34, Brett F. Thornton, Hanqin Tian35, Yasunori Tohjima20, Nicolas Viovy1, Apostolos Voulgarakis36, Michiel van Weele37, Guido R. van der Werf38, Ray F. Weiss39, Christine Wiedinmyer22, David J. Wilton10, Andy Wiltshire18, Doug Worthy40, Debra Wunch41, Xiyan Xu32, Yukio Yoshida20, Bowen Zhang35, Zhen Zhang2, Qiuan Zhu42 
TL;DR: The Global Carbon Project (GCP) as discussed by the authors is a consortium of multi-disciplinary scientists, including atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions.
Abstract: The global methane (CH4) budget is becoming an increasingly important component for managing realistic pathways to mitigate climate change. This relevance, due to a shorter atmospheric lifetime and a stronger warming potential than carbon dioxide, is challenged by the still unexplained changes of atmospheric CH4 over the past decade. Emissions and concentrations of CH4 are continuing to increase, making CH4 the second most important human-induced greenhouse gas after carbon dioxide. Two major difficulties in reducing uncertainties come from the large variety of diffusive CH4 sources that overlap geographically, and from the destruction of CH4 by the very short-lived hydroxyl radical (OH). To address these difficulties, we have established a consortium of multi-disciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate research on the methane cycle, and to produce regular (∼ biennial) updates of the global methane budget. This consortium includes atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions. Following Kirschke et al. (2013), we propose here the first version of a living review paper that integrates results of top-down studies (exploiting atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up models, inventories and data-driven approaches (including process-based models for estimating land surface emissions and atmospheric chemistry, and inventories for anthropogenic emissions, data-driven extrapolations). For the 2003–2012 decade, global methane emissions are estimated by top-down inversions at 558 Tg CH4 yr⁻¹, range 540–568. About 60 % of global emissions are anthropogenic (range 50–65 %). Since 2010, the bottom-up global emission inventories have been closer to methane emissions in the most carbon-intensive Representative Concentrations Pathway (RCP8.5) and higher than all other RCP scenarios. Bottom-up approaches suggest larger global emissions (736 Tg CH4 yr⁻¹, range 596–884) mostly because of larger natural emissions from individual sources such as inland waters, natural wetlands and geological sources. Considering the atmospheric constraints on the top-down budget, it is likely that some of the individual emissions reported by the bottom-up approaches are overestimated, leading to too large global emissions. Latitudinal data from top-down emissions indicate a predominance of tropical emissions (∼ 64 % of the global budget). The most important source of uncertainty on the methane budget is attributable to emissions from wetland and other inland waters. We show that the wetland extent could contribute 30–40 % to the estimated range for wetland emissions. Other priorities for improving the methane budget include the following: (i) the development of process-based models for inland-water emissions, (ii) the intensification of methane observations at local scale (flux measurements) to constrain bottom-up land surface models, and at regional scale (surface networks and satellites) to constrain top-down inversions, (iii) improvements in the estimation of atmospheric loss by OH, and (iv) improvements of the transport models integrated in top-down inversions. The data presented here can be downloaded from the Carbon Dioxide Information Analysis Center (http://doi.org/10.3334/CDIAC/GLOBAL_METHANE_BUDGET_2016_V1.1) and the Global Carbon Project.

Journal ArticleDOI
TL;DR: This paper performs simulation studies to investigate the magnitude of bias and Type 1 error rate inflation arising from sample overlap and considers both a continuous outcome and a case‐control setting with a binary outcome.
Abstract: Mendelian randomization analyses are often performed using summarized data. The causal estimate from a one-sample analysis (in which data are taken from a single data source) with weak instrumental variables is biased in the direction of the observational association between the risk factor and outcome, whereas the estimate from a two-sample analysis (in which data on the risk factor and outcome are taken from non-overlapping datasets) is less biased and any bias is in the direction of the null. When using genetic consortia that have partially overlapping sets of participants, the direction and extent of bias are uncertain. In this paper, we perform simulation studies to investigate the magnitude of bias and Type 1 error rate inflation arising from sample overlap. We consider both a continuous outcome and a case-control setting with a binary outcome. For a continuous outcome, bias due to sample overlap is a linear function of the proportion of overlap between the samples. So, in the case of a null causal effect, if the relative bias of the one-sample instrumental variable estimate is 10% (corresponding to an F parameter of 10), then the relative bias with 50% sample overlap is 5%, and with 30% sample overlap is 3%. In a case-control setting, if risk factor measurements are only included for the control participants, unbiased estimates are obtained even in a one-sample setting. However, if risk factor data on both control and case participants are used, then bias is similar with a binary outcome as with a continuous outcome. Consortia releasing publicly available data on the associations of genetic variants with continuous risk factors should provide estimates that exclude case participants from case-control samples.
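The linear rule described above can be expressed in a few lines; the helper below is hypothetical (not from the paper) and simply encodes the stated relationship that a fully overlapping one-sample analysis has relative bias of roughly 1/F, scaling linearly with the overlap proportion.

```python
def expected_relative_bias(overlap, f_stat):
    """Approximate relative bias of a summary-data MR estimate (towards the
    observational association) under partial sample overlap.

    overlap: proportion of participants shared by the two samples (0 to 1)
    f_stat:  instrument strength (F statistic) of the genetic variants"""
    return overlap / f_stat

# the worked numbers from the abstract: F = 10 gives 10% relative bias for a
# one-sample analysis, hence 5% at 50% overlap and 3% at 30% overlap
for p in (1.0, 0.5, 0.3):
    print(f"overlap {p:.0%}: relative bias {expected_relative_bias(p, 10):.1%}")
```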

Journal ArticleDOI
09 Sep 2016-Science
TL;DR: This work identifies six biological mechanisms that commonly shape responses to climate change yet are too often missing from current predictive models, prioritizes the types of information needed to inform each of these mechanisms, and suggests proxies for data that are missing or difficult to collect.
Abstract: BACKGROUND As global climate change accelerates, one of the most urgent tasks for the coming decades is to develop accurate predictions about biological responses to guide the effective protection of biodiversity. Predictive models in biology provide a means for scientists to project changes to species and ecosystems in response to disturbances such as climate change. Most current predictive models, however, exclude important biological mechanisms such as demography, dispersal, evolution, and species interactions. These biological mechanisms have been shown to be important in mediating past and present responses to climate change. Thus, current modeling efforts do not provide sufficiently accurate predictions. Despite the many complexities involved, biologists are rapidly developing tools that include the key biological processes needed to improve predictive accuracy. The biggest obstacle to applying these more realistic models is that the data needed to inform them are almost always missing. We suggest ways to fill this growing gap between model sophistication and information to predict and prevent the most damaging aspects of climate change for life on Earth. ADVANCES On the basis of empirical and theoretical evidence, we identify six biological mechanisms that commonly shape responses to climate change yet are too often missing from current predictive models: physiology; demography, life history, and phenology; species interactions; evolutionary potential and population differentiation; dispersal, colonization, and range dynamics; and responses to environmental variation. We prioritize the types of information needed to inform each of these mechanisms and suggest proxies for data that are missing or difficult to collect. We show that even for well-studied species, we often lack critical information that would be necessary to apply more realistic, mechanistic models. Consequently, data limitations likely override the potential gains in accuracy of more realistic models. Given the enormous challenge of collecting this detailed information on millions of species around the world, we highlight practical methods that promote the greatest gains in predictive accuracy. Trait-based approaches leverage sparse data to make more general inferences about unstudied species. Targeting species with high climate sensitivity and disproportionate ecological impact can yield important insights about future ecosystem change. Adaptive modeling schemes provide a means to target the most important data while simultaneously improving predictive accuracy. OUTLOOK Strategic collections of essential biological information will allow us to build generalizable insights that inform our broader ability to anticipate species’ responses to climate change and other human-caused disturbances. By increasing accuracy and making uncertainties explicit, scientists can deliver improved projections for biodiversity under climate change together with characterizations of uncertainty to support more informed decisions by policymakers and land managers. Toward this end, a globally coordinated effort to fill data gaps in advance of the growing climate-fueled biodiversity crisis offers substantial advantages in efficiency, coverage, and accuracy. Biologists can take advantage of the lessons learned from the Intergovernmental Panel on Climate Change’s development, coordination, and integration of climate change projections. 
Climate and weather projections were greatly improved by incorporating important mechanisms and testing predictions against global weather station data. Biology can do the same. We need to adopt this meteorological approach to predicting biological responses to climate change to enhance our ability to mitigate future changes to global biodiversity and the services it provides to humans.

Journal ArticleDOI
TL;DR: The proposed approach is demonstrated for a two-sample summary-data MR analysis estimating the causal effect of low-density lipoprotein on heart disease risk, and care must be taken to assess the NOME assumption via the I²GX statistic before implementing standard MR-Egger regression in the two-sample summary-data context.
Abstract: Background: MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. Methods: An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. Results: In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We demonstrate our proposed approach for a two-sample summary data MR analysis to estimate the causal effect of low-density lipoprotein on heart disease risk. A high value of I²GX close to 1 indicates that dilution does not materially affect the standard MR-Egger analyses for these data. Conclusions: Care must be taken to assess the NOME assumption via the I²GX statistic before implementing standard MR-Egger regression in the two-sample summary data context. If I²GX is sufficiently low (less than 90%), inferences from the method should be interpreted with caution and adjustment methods considered.
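A small Python sketch of the unweighted form of the I²GX statistic as it is usually described: Cochran's Q computed on the SNP-exposure association estimates, expressed on the I² scale. The function name and the toy summary statistics are assumptions for illustration; the paper also discusses weighted variants and the simulation-extrapolation adjustment, which are not shown.

```python
import numpy as np

def i_squared_gx(beta_exp, se_exp):
    """Unweighted I^2_GX: Cochran's Q for the SNP-exposure association
    estimates, converted to an I^2 scale. Values near 1 suggest little
    regression dilution of the MR-Egger slope from NOME violation; values
    below ~0.9 suggest dilution may be a concern."""
    beta_exp = np.asarray(beta_exp, float)
    se_exp = np.asarray(se_exp, float)
    w = 1.0 / se_exp ** 2
    mu = np.sum(w * beta_exp) / np.sum(w)       # inverse-variance weighted mean
    q = np.sum(w * (beta_exp - mu) ** 2)
    return max(0.0, (q - (len(beta_exp) - 1)) / q)

# toy SNP-exposure summary statistics for six variants; the MR-Egger slope is
# expected to be diluted towards the null by roughly this factor
beta_exp = [0.12, 0.08, 0.15, 0.10, 0.09, 0.11]
se_exp   = [0.010, 0.012, 0.015, 0.011, 0.010, 0.013]
print(i_squared_gx(beta_exp, se_exp))  # ~0.7, i.e. below the 0.9 rule of thumb
```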

Journal ArticleDOI
Vardan Khachatryan1, Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, +2283 more (141 institutions)
TL;DR: Combined fits to CMS UE proton–proton data at 7 TeV and to UE proton–antiproton data from the CDF experiment at lower √s are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton–proton collisions at 13 TeV.
Abstract: New sets of parameters ("tunes") for the underlying-event (UE) modeling of the PYTHIA8, PYTHIA6 and HERWIG++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE data at sqrt(s) = 7 TeV and to UE data from the CDF experiment at lower sqrt(s), are used to study the UE models and constrain their parameters, providing thereby improved predictions for proton-proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons of the UE tunes to "minimum bias" (MB) events, multijet, and Drell-Yan (q q-bar to Z / gamma* to lepton-antilepton + jets) observables at 7 and 8 TeV are presented, as well as predictions of MB and UE observables at 13 TeV.