Annual Meeting and Student Research Conference

Session Descriptions

October 25–28, 2018
Hyatt Regency San Francisco Airport, California


Table of Contents

Plenary Sessions

Jeff Dean

Deep Learning and the Grand Engineering Challenges

Jeff Dean

Senior Fellow in the Research Group, Leader of the Google Brain Project, and Head of the Artificial Intelligence Unit at Google


For the past six years, the Google Brain team has conducted research on difficult problems in artificial intelligence, on building large-scale computer systems for machine learning research, and, in collaboration with many teams at Google, on applying their research and systems to dozens of Google products. The group has open-sourced TensorFlow, a widely popular system designed to easily express machine learning ideas and to quickly train, evaluate, and deploy machine learning systems. In this talk, Jeff will highlight some of the team's research accomplishments and relate them to the National Academy of Engineering's Grand Engineering Challenges for the 21st Century. This talk describes joint work with many people at Google.

Steve Ritz

Observing the Universe Broadly, Deeply, and Frequently

Steve Ritz

Subsystem Scientist for the Large Synoptic Survey Telescope's Camera, Professor of Physics at University of California Santa Cruz, Director of the Santa Cruz Institute for Particle Physics


For astronomical facilities, the combination of wide field of view, high sensitivity, broad energy range, and agility offers huge scientific opportunities and new ways of operating. I’ll discuss two seemingly very different observatories that share these characteristics. The Large Synoptic Survey Telescope (LSST), currently under construction, is an integrated survey system with an eight-meter class primary mirror and 3.2 gigapixel camera, designed to conduct a decade-long, deep, wide, fast time-domain survey of the entire optical sky visible from Cerro Pachón in central Chile. The Fermi Gamma-ray Space Telescope, launched into low-earth orbit in 2008, observes the whole sky every three hours at energies that are millions to trillions of times larger, using a huge array of particle detectors. The variety of objects these wonderful facilities can study spans from as close as our own planet to cosmological distances, and scientific topics include understanding the nature of the mysterious dark matter and dark energy. Large, rich data sets (LSST will produce an estimated 20 terabytes, or 20 trillion bytes, of data per night) and diverse community needs pose important challenges and opportunities for software development that will also be discussed. 

Award Winner Keynotes

Tim Davis

A Personal Journey into Mathematics, Software, Music and Art

Timothy A. Davis, 2018 Walston Chubb Award for Innovation

Professor of Computer Science and Engineering at Texas A&M University

There's a matrix in your pocket ... most likely more than one, and if it's sparse then Davis’s software is solving it.  Numerical linear algebra, and sparse matrix computation in particular, lies hidden at the heart of many computational questions that affect our everyday lives. If you're one of the owners of the half billion smartphones using his software, then Davis's work is already hiding inside your pocket or purse.

Davis will discuss his work on algorithms and software for solving large sparse matrix problems. Every photo in Google StreetView is placed with help from his solvers. Many of the engineering design tools developed in industry, academia, and government labs rely on his software for a wide range of problems, including VLSI circuit simulation, computer vision, power systems, robotics, text/term document analysis, and computational fluid dynamics. His software uncovers the secrets of earthquakes originating 1000km under Japan. You might see a drone flying overhead that is finding its way on its own with help from his codes. The U.S. Geological Survey uses his codes to create maps of the Moon, Mars, and other planetary bodies: in their words, "helping to pave the way for future human exploration." A DARPA project used his solvers to help analyze the dark web, and in doing so provided a tool the FBI used to rescue six girls from sex trafficking. These are just a few of the many ways that numerical linear algebra touches your everyday life.

Davis's current work spans two domains: creating new sparse matrix algorithms to harness the power of GPUs, and creating software that enables the expression of graph algorithms in the language of sparse linear algebra over semirings.
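
To make the semiring idea concrete, here is a minimal, hypothetical sketch (not Davis's actual GraphBLAS or SuiteSparse code): breadth-first search expressed as repeated sparse matrix-vector products over the Boolean semiring, where OR plays the role of addition and AND the role of multiplication. The graph representation and function name are illustrative assumptions.

```python
# Sketch: BFS as sparse matrix-vector products over the Boolean
# semiring (OR as "plus", AND as "times"). Illustrative only.

def bfs_levels(adj, source):
    """adj: dict node -> set of neighbors (a sparse Boolean matrix).
    Returns dict node -> BFS level from source."""
    frontier = {source}            # sparse Boolean vector
    levels = {source: 0}
    level = 0
    while frontier:
        level += 1
        # "mat-vec" over the Boolean semiring: OR over AND
        nxt = set()
        for u in frontier:
            nxt |= adj.get(u, set())
        # mask out already-visited nodes
        frontier = {v for v in nxt if v not in levels}
        for v in frontier:
            levels[v] = level
    return levels

# Tiny example graph: edges 0-1, 0-2, 1-3
adj = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}
print(bfs_levels(adj, 0))  # node 0 at level 0; 1 and 2 at 1; 3 at 2
```

Swapping in other semirings (for example, min-plus for shortest paths) turns the same loop into other classic graph algorithms, which is the generality the sparse-linear-algebra-over-semirings approach exploits.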

Sparse matrix methods also bring new beauty to the arts. Appreciating the beauty of computational software requires the mind's eye of a mathematician, but Davis found a way to illustrate the beauty of matrix algorithms in ways that anyone can appreciate. Relying on sparse matrix methods, graph algorithms, MATLAB, Fast Fourier Transforms, and force-directed graph visualization, Davis constructed a mathematical algorithm that converts an entire piece of music from its natural domain of time and frequency into a domain of space and color. His artwork appeared on billboards all around London for the 2013 London Electronic Arts Festival. Davis will describe how he came to create this art, and how it relates to his primary technical contributions on sparse matrix computations.  
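
The first stage of such a pipeline, moving a sound from the time domain into the time-frequency domain, can be sketched with windowed FFTs. This is a generic spectrogram offered only as a hedged illustration; the window length, hop size, and test tone are assumptions, not details of Davis's algorithm.

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via windowed FFTs: rows are time frames,
    columns are frequency bins (the time-frequency representation
    that is then mapped into space and color)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        chunk = signal[start:start + win] * np.hanning(win)
        frames.append(np.abs(np.fft.rfft(chunk)))
    return np.array(frames)

# A 440 Hz tone at an 8 kHz sample rate concentrates energy near
# bin 440 / (8000 / 256), i.e. about bin 14.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape, spec[0].argmax())  # (61, 129) and a peak at bin 14
```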

David Flannigan

Uncovering the Fundamental Behaviors of Materials with Ultrafast Electron Microscopy

David Flannigan, 2018 Young Investigator Award

Associate Professor of Chemical Engineering and Materials Science at University of Minnesota

While a picture is said to be worth a thousand words, it could be argued that a video—composed of many pictures that additionally capture time—illuminates much more than the sum of the individual images. Indeed, a wealth of knowledge about the atomic-scale structures of molecules and materials has been gleaned from static images over the past century. However, such images provide only part of the story and reveal little about the motion of the fundamental building blocks of matter. In this talk, Flannigan will describe an emerging technology called ultrafast electron microscopy (UEM) that now provides direct access to these phenomena. He will show how it is now possible with UEM videography to effectively slow the ultrasmall and ultrafast motions of materials by well over a billion times. Examples of UEM capabilities will be given, including the first-ever demonstration of the imaging of nanoscale hypersonic acoustic waves moving through semiconducting crystals. He will conclude by sharing his thoughts on what developments and discoveries are yet to come.

Daniel Rubenstein

Science and Society: How Citizen Science Can Power People for Environmental and Social Good

Daniel Rubenstein, 2018 John P. McGovern Science and Society Award

Professor of Zoology and Director of the Program in Environmental Studies at Princeton University

Science is a way of knowing about the natural world through observations, judicious comparisons, and experiments. In the biological sciences, what began with observations by naturalists has been transformed into systematic studies practiced by highly educated, publicly supported, and experienced individuals. While modern scientific breakthroughs routinely create benefits for society, society and science have largely gone their separate ways. As expert-driven science ascended, people also became more disconnected from the natural world around them. Yet ironically, learning about nature through popular media is at an all-time high. Unfortunately, that is not enough to heal these divides. Instead, connecting people directly with nature will be more powerful. By becoming citizen scientists, people will gain a deep appreciation for how science is done, help generate new knowledge, become confident in their understanding of this knowledge, often helping to shape future investigations, and become better environmental stewards and decision-makers. Examples from our “Scout Program,” “Bio Blitzes,” “Conservation Clubs,” and “Teachers as Scholars Program” will show how direct participation in science enables diverse peoples to champion and democratize science.


Anna Marie Skalka

Lessons from Viruses: Agents of Disease and Drivers of Evolution

Anna Marie (Ann) Skalka, 2018 William Procter Prize for Scientific Achievement

Professor Emerita at Fox Chase Cancer Center

Viruses are among the tiniest of infectious agents, yet they are the oldest and most abundant entities in our biosphere; they are present everywhere. Because viruses depend on host cells for propagation, their study has provided unique insight not only into virus biology but also into the biology of the cells that they infect. Groundbreaking work in the 1960s and 1970s with viruses that infect bacteria laid the foundation of modern molecular biology and recombinant DNA technology, opening up new opportunities to explore the biology of animal viruses and their host organisms. Molecular cloning and analysis of the genetic material of retroviruses in my laboratory afforded the first insights into some of the unique features of these animal viruses.

Retroviral genomes are composed of RNA, not the DNA that makes up the genomes of all living cells. What distinguishes retroviruses from other RNA-containing viruses is their possession of two critical enzymes: reverse transcriptase, which copies the RNA genome into DNA, and integrase, which splices this DNA into essentially random sites in the DNA of the host cell. While the host cell normally survives this invasion, the integrated retroviral DNA becomes a genetic parasite, co-opting cellular machinery to produce new retroviral particles that can infect other cells. Some retroviral DNA integrations can be deleterious, leading to abnormal cellular proliferation and cancer, or to immune deficiencies as in HIV-AIDS. In addition, integration into the DNA of germ cells (eggs or sperm) can result in the permanent inclusion of retroviral DNA copies in every cell of the developing organism—and all future offspring. Indeed, about eight percent of the DNA in our own bodies comprises copies of ancient retroviral sequences, genetic “fossils” of integrations in the germline of our evolutionary progenitors millions of years ago.

Results from our work on the molecular structure and biochemistry of the three common retroviral enzymes, reverse transcriptase, integrase, and protease (which is required to process viral proteins), have contributed to a fundamental understanding of their mechanisms. A novel, but critical, activity for reverse transcriptase was identified, and we demonstrated that purified integrase and protease proteins possess all of the catalytic activities expected for their major roles in the virus life cycle. The simple assays that we developed to measure the activities of these enzymes have been adopted by all investigators in the field, and have also formed the basis for screening systems that enabled drug companies to identify effective antiviral inhibitors to treat HIV infection. More recently, in a foray into big data with Systems Biology collaborators at the Institute for Advanced Study, we made the unexpected discovery that retroviruses are not the only RNA viruses with “fossils” in vertebrate DNAs. Despite their lack of reverse transcriptase, sequences derived from ancient members of the highly pathogenic bornavirus, Ebola, and Marburg virus families have been conserved in many vertebrate DNAs (including our own) over millions of years. As with retroviruses, some of these sequences have been purloined for use by the host cells, affecting normal growth or affording resistance to infection by their present-day relatives.

Big Data in Biology and Medicine Symposium 

Keynote Sessions

Dana Crawford

The Importance of Diversity in Big Genomic Data

Dana Crawford

Assistant Director for Population and Diversity Research at the Institute for Computational Biology and Genetic Epidemiologist at Case Western Reserve University

In this era of precision medicine, genomic research has the potential to inform point-of-care clinical decisions, impacting diagnoses, treatments, and prevention strategies. These big genomic data promise to improve individual-level health outcomes for all patients. A major barrier to this promise is the substantial known bias in genomic datasets. It is now well documented that the overwhelming majority of genomic research is based on European populations, and this bias has already led to misdiagnoses and other missed opportunities in the delivery of precision medicine. This talk will discuss the evidence and consequences of this bias and the importance of diverse representation in biomedical research to fulfill the promise of big genomic data.

Mark Gerstein

Personal Genomics and Data Science

Mark B. Gerstein

Albert L. Williams Professor of Biomedical Informatics and Professor of Molecular Biophysics and Biochemistry, and of Computer Science at Yale University; Co-director of the Yale Program in Computational Biology and Bioinformatics

In this seminar, I will discuss issues in transcriptome analysis. I will first talk about some core aspects—how we analyze the activity patterns of genes in human disease. In particular, I will focus on disorders of the brain, which affect nearly a fifth of the world’s population. Robust phenotype-genotype associations have been established for a number of brain disorders including psychiatric diseases (e.g., schizophrenia, bipolar disorder). However, understanding the molecular causes of brain disorders is still a challenge. To address this, the PsychENCODE consortium generated thousands of transcriptome (bulk and single-cell) datasets from 1,866 individuals. Using these data, we have developed a set of interpretable machine learning approaches for deciphering functional genomic elements and linkages in the brain and psychiatric disorders.

In particular, we deconvolved the bulk tissue expression across individuals using single-cell data via non-negative matrix factorization and non-negative least squares and found that differences in the proportions of cell types explain >85% of the cross-population variation observed. Additionally, we developed an interpretable deep-learning model embedding the physical regulatory network to predict phenotype from genotype. Our model uses a conditional Deep Boltzmann Machine architecture and introduces lateral connectivity at the visible layer to embed the biological structure learned from the regulatory network and QTL linkages. Our model improves disease prediction (by six-fold compared to additive polygenic risk scores), highlights key genes for disorders, and allows imputation of missing transcriptome information from genotype data alone. In the second half of the talk, I will look at the "data exhaust" from transcriptome analysis—that is, how one can learn things from these data beyond what was originally intended.
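
As a toy illustration of the deconvolution step, the sketch below estimates nonnegative cell-type proportions from a bulk expression vector given per-cell-type signatures, using simple multiplicative updates as a stand-in for the non-negative least squares machinery mentioned in the talk. The signature matrix and proportions are fabricated for the example.

```python
import numpy as np

def deconvolve(bulk, signatures, iters=2000):
    """Estimate nonnegative proportions p such that signatures @ p
    approximates bulk (genes x cell-types, times proportions), via
    multiplicative updates; proportions are normalized to sum to 1."""
    S = signatures
    p = np.full(S.shape[1], 1.0 / S.shape[1])
    for _ in range(iters):
        # Lee-Seung style update keeps p nonnegative throughout
        p *= (S.T @ bulk) / (S.T @ (S @ p) + 1e-12)
    return p / p.sum()

# Hypothetical signatures for 3 cell types over 4 genes
S = np.array([[10., 1., 0.],
              [ 0., 8., 1.],
              [ 2., 0., 9.],
              [ 5., 5., 5.]])
true_p = np.array([0.6, 0.3, 0.1])
bulk = S @ true_p
est = deconvolve(bulk, S)
print(np.round(est, 2))  # close to [0.6, 0.3, 0.1]
```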

First, I will focus on genomic privacy: How looking at the quantifications of expression levels can potentially reveal something about the subjects studied, and how one can take steps to protect patient anonymity. Next, I will look at how the social activity of researchers generating transcriptome datasets in itself creates revealing patterns in the scientific literature.

Ben Langmead

Making the Most of Petabases of Genomic Data

Benjamin Langmead

Assistant Professor in the Department of Computer Science at Johns Hopkins University

With the advent of modern DNA sequencing, life science is increasingly becoming a big-data science. The main public archive for sequencing data, the Sequence Read Archive (SRA), now contains over a million datasets and many petabytes of data. While large-scale projects like GTEx, ICGC, and TOPMed have been major contributors, even larger projects are on the horizon, e.g. the All of Us and Million Veteran programs. The SRA and similar archives are potential gold mines for researchers, but they are not organized for everyday use by scientists. The situation resembles the early days of the World Wide Web, before search engines made the web easy to use. Langmead will describe progress toward the goal of making it easy for researchers to ask scientific questions about public datasets, focusing on datasets that measure the abundance of messenger RNA transcripts (RNA-seq). He will describe how trends in big-data wrangling and cloud computing are borrowed to make public data easier to use and query. He will motivate the work with examples of applications in research areas concerned with novel (e.g. cryptic) splicing patterns and the splicing factors that regulate them. This is work in progress, and Langmead will highlight ways in which his group is learning to make tools better suited to how scientists work. This is joint work with Abhinav Nellore, Chris Wilks, Jonathan Ling, Luigi Marchionni, Jeff Leek, Kasper Hansen, Andrew Jaffe, and others.

Additional Sessions 

Network Approaches Identify Brain Regions and Gene Hubs Associated with Genetic Predisposition for Methamphetamine Intake

Ovidiu D. Iancu,* Cheryl Reed, Harue Baba, Robert Hitzemann, and Tamara J. Phillips of Oregon Health & Science University; Veterans Affairs Portland Health Care System

Risk for methamphetamine (MA) intake is partially explained by a polymorphism in the gene Taar1. Although Taar1 explains >50% of the genetic variance in MA intake, its mechanisms of action and the brain regions involved are largely unknown. Transcriptional analysis across brain regions, coupled with genotype data, offers access to a rich set of endophenotypes and can generate a mechanistic understanding of the links between causal polymorphisms and behavior. Genetic and transcriptional differences were examined between mice selectively bred for high or low voluntary MA intake in three brain regions within the “addiction circuit”: nucleus accumbens (NACC), prefrontal cortex (PFC), and ventral midbrain (VMB). Differential expression (DE) analysis was complemented with a gene network-based approach to identify and rank genes most central to the genetically determined MA intake differences. Between-line DE was moderate in the NACC and PFC, but pronounced in the VMB, which had an order of magnitude more DE genes. Functional enrichment analysis identified transmembrane and synaptic genes as those most consistently affected across regions. Independently constructed gene correlation/coexpression networks for each region identified “gene modules,” or subnetworks of correlated genes, several of which shared common biological annotations. Differential network analysis identified genes and modules that appeared “differentially wired” (DW) between the lines. As in DE, DW appeared more pronounced in the VMB. Our results illustrate how network/graph analysis techniques allow identification of brain regions and genes/pathways associated with the genetics of MA intake in an animal model of addiction. Affected network hubs emerge as key targets for potential molecular manipulations.
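
The differential-wiring idea can be sketched in a few lines: build a gene-gene correlation network separately for each selected line, then score each gene by how much its correlations change between the two networks. This toy version with simulated data stands in for the network analyses described; all names and numbers are illustrative.

```python
import numpy as np

def diff_wiring(expr_a, expr_b):
    """Correlation networks for two groups (samples x genes) and a
    per-gene 'differential wiring' score: the summed absolute change
    in that gene's correlations with all other genes."""
    ca, cb = np.corrcoef(expr_a.T), np.corrcoef(expr_b.T)
    d = np.abs(ca - cb)
    np.fill_diagonal(d, 0.0)       # ignore self-correlations
    return d.sum(axis=1)

# Simulated data: in group A genes 0 and 1 are tightly coexpressed;
# in group B all three genes are independent.
rng = np.random.default_rng(0)
n = 500
shared = rng.normal(size=n)
expr_a = np.column_stack([shared,
                          shared + 0.1 * rng.normal(size=n),
                          rng.normal(size=n)])
expr_b = rng.normal(size=(n, 3))
scores = diff_wiring(expr_a, expr_b)
print(scores.round(2))  # genes 0 and 1 score as most rewired
```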

Supported by NIH NIDA grants P50DA018165 and U01DA041579, and by a grant from the Department of Veterans Affairs, I01BX002106, and the VA Research Career Scientist program. The contents do not represent the views of the U.S. Department of Veterans Affairs or the United States Government.

The 1000 African American Genomes Project: Challenges and Opportunities

Latifa Jackson,* Adjunct Assistant Professor, Department of Pediatrics and Child Health, Howard University; Fatimah Jackson, Professor, Department of Biology, Howard University

The genomics of African Americans remain an understudied and underappreciated area of research, even though Africa is acknowledged to be the ancestral home of all humans. The 1000 African American Genomes Project is a research collaboration between Howard University, National Geographic, and Helix to A) better understand African American ancestry, B) provide context to clinical and population history models, and C) develop a reference population upon which a broader foundation can be built for precision medicine. To date, this project has collected samples (N=461) from individuals of African American ancestry at Howard University. These samples are currently being whole-exome sequenced in collaboration with Helix. Their analysis presents significant opportunities and challenges for ancestry analyses, including the opportunity to characterize African-African admixture proportions resulting from New World admixture events, identifying appropriate supervised and unsupervised machine learning approaches to discern how patterns of genomic variation represent potential cryptic population substructure, and relating these big data to potential environmental factors impacting health. The 1000 African American Genomes Project will ultimately serve as a big data resource to redefine what it means to have African American ancestry.

Therapies for Hearing loss: From Bench to Clinic 

Pranav Dinesh Mathur, Scientist, Research, Otonomy Inc.

Hearing loss is a growing global health concern, affecting over 5% of the world’s population, with no approved drug therapies. The causes are varied and include genetics, environmental noise exposure, certain disease conditions, ototoxic drugs, and aging. Recent scientific advances have shed light on the mechanisms that underlie different hearing problems and provided the opportunity to develop specific therapies for the treatment of hearing loss using the identified molecular targets. Equally important is an ability to deliver drugs effectively to the inner ear, a protected compartment in the body that is not easily accessed by systemically administered drugs. The steps involved in identifying a candidate for clinical development include: demonstrating activity in assays that feature the key cell types and pathways for therapeutic benefit; confirming activity with delivery via the intended clinical route to the inner ear; and demonstrating appropriate pharmacological specificity, pharmacokinetics, and safety. Finally, the safety profile of the drug must be carefully evaluated as mandated by the U.S. Food and Drug Administration (FDA) before approval for human clinical trials. In this presentation, I will summarize the steps involved in the development of a therapeutic agent for the treatment of hearing loss, from conceptualization through the required safety studies, and highlight how a unique sustained exposure technology can aid in rapid development of a novel therapeutic for patients with hearing loss.

Ultrasonic Vocabulary Defined by Social Behavior of Mice 

Joshua P. Neunuebel,* Assistant Professor, Department of Psychological and Brain Sciences, University of Delaware; Co-authors: Daniel T. Sangiamo of the University of Illinois at Urbana–Champaign, and Megan R. Warren of the University of Delaware

Communication plays an integral role in human social dynamics, and a myriad of neurodevelopmental disorders are characterized by abnormal social communication. Because of their genetic tractability, mice are emerging as an important model system for studying social communication deficits. However, the extent to which mouse vocalizations influence social dynamics has remained elusive due to the challenges of precisely identifying the vocalizing animal in group interactions. Using a novel microphone array system to track the vocal behavior of individual mice during social interactions, we report here that distinct patterns of vocalization emerge across specific social contexts. Eleven groups of mice (2 males and 2 females) were each recorded for 5 hours, generating ~99 GB of video data and ~1584 GB of audio data. Machine learning programs were trained to automatically extract distinct behaviors, and an algorithm was developed to group phonically similar vocal signals detected in the audio recordings using an unsupervised machine learning approach. We discovered twenty-two types of signals, and emission depended upon the behavior of the vocalizing animal. Mice acting aggressively towards another mouse emit different vocal signals than mice avoiding social interactions. In particular, aggressive behaviors were associated with vocalizations that decreased in pitch, whereas avoidance behaviors were associated with vocalizations that increased in pitch. Further, we directly show that these patterns of aggressive vocal expression impact the behavior of only the socially engaged partner receiving the auditory signals. The findings clarify the function of mouse communication and reveal an operational ultrasonic lexicon.
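
The unsupervised grouping of phonically similar calls can be sketched with a minimal k-means clusterer over made-up acoustic features (mean pitch and pitch slope per call). The actual study's features and algorithm are richer, so treat this purely as an illustration.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means with deterministic, spread-out initialization;
    a stand-in for the unsupervised grouping of similar vocal signals."""
    centers = X[:: max(1, len(X) // k)][:k].copy()
    for _ in range(iters):
        # assign each point to its nearest center, then recenter
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

# Hypothetical 2-D features per call: (mean pitch in kHz, pitch slope)
rng = np.random.default_rng(1)
down = rng.normal([60.0, -5.0], 0.5, size=(40, 2))  # pitch-decreasing calls
up   = rng.normal([70.0,  5.0], 0.5, size=(40, 2))  # pitch-increasing calls
X = np.vstack([down, up])
labels = kmeans(X, 2)
print(labels[:5], labels[-5:])  # the two call types land in separate clusters
```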

Pharmacogenomics of Arsenic Trioxide: GWAS Identifies WWP2, an E3 Ubiquitin Ligase

Steve Scully* of Thaddeus Medical Systems, Inc.; Pinar Aksoy-Sagirli of Istanbul University; Quin Wills of Oxford University; Krishna R. Kalari, Anthony Batzler, Gregory Jenkins of Mayo Clinic; Brooke L. Fridley of the Moffitt Cancer Center; Liewei Wang and Richard Weinshilboum of Mayo Clinic

Arsenic trioxide (ATO) is used clinically to treat patients with acute promyelocytic leukemia (APL); however, in up to 30% of patients treated with ATO, a life-threatening side effect called cytokine storm can develop. We set out to test the hypothesis that genomic factors might contribute to individual variation in response to ATO. We determined ATO cytotoxicity in a panel of 287 lymphoblastoid cell lines (LCLs) utilizing the MTS assay, and these data were then analyzed for associations with over 7 million SNPs in a GWAS. In addition, expression data from 54,000 3’ exon probes and over 1 million whole exon probes for each cell line were analyzed in the R environment. The GWAS yielded a genome-wide significant signal at rs8044920, with a p-value of <2.7E-08 for IC50 as the phenotype. This SNP signal mapped to an intron of WWP2 on chromosome 16q22.1, a gene encoding a HECT E3 ubiquitin-protein ligase. This SNP also resides in a haplotype block which contains an expression quantitative trait locus (eQTL), with the variant allele decreasing the expression of exon probe 3666918 in WWP2. Using the TRANSFAC (Professional) database, we found that the variant haplotype creates a putative YY1 repressive site. In keeping with this function, LCLs with the variant haplotype were found to be associated with a lower level of WWP2 expression after treatment with ATO, and YY1 selective binding was confirmed by a chromatin immunoprecipitation (ChIP) assay. Ingenuity Pathway Analysis was also performed to explore the biological roles of WWP2.
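
A single-SNP association scan of the kind underlying this GWAS can be sketched as a per-SNP linear regression of the phenotype on allele dosage. The simulation below is entirely hypothetical (no real genotypes or IC50 values) and omits the covariates and multiple-testing machinery a real analysis would include.

```python
import numpy as np

def snp_assoc(genotypes, phenotype):
    """Per-SNP additive association test: regress the phenotype
    (e.g. log IC50) on allele dosage (0/1/2) and return the absolute
    t-statistic of each SNP's slope."""
    y = phenotype - phenotype.mean()
    n = len(y)
    stats = []
    for g in genotypes.T:
        x = g - g.mean()
        beta = (x @ y) / (x @ x)               # least-squares slope
        resid = y - beta * x
        se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))
        stats.append(abs(beta / se))
    return np.array(stats)

# Simulated cohort of 300 cell lines and 10 SNPs; SNP 3 truly
# shifts the phenotype by 0.8 per allele.
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(300, 10)).astype(float)
y = 0.8 * G[:, 3] + rng.normal(size=300)
t = snp_assoc(G, y)
print(t.argmax())  # the causal SNP (index 3) ranks first
```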

Big Data Analytics for Rare Variations and Psychiatric Disorders

Shaolei Teng, Assistant Professor, Department of Biology, Howard University

Psychiatric disorders, including schizophrenia, bipolar disorder, and major depressive disorder, are an important global public health issue. The pathogenic causes of these mental illnesses remain unknown, and understanding their genetic mechanisms is a research priority. Recent sequencing studies showed that rare variations have large effects on the risk for schizophrenia, which indicates that rare variants account for much of the missing heritability in the development of mental disorders. The Disrupted in Schizophrenia 1 (DISC1) gene is a convincing candidate gene for mental disease. Molecular studies have shown that DISC1 functions as a scaffold protein in neuronal development through a large set of pathway genes. We applied next-generation sequencing to sequence 213 DISC1 pathway genes in 1543 samples. We observed an enrichment of rare disruptive variants in schizophrenia patients, and an increased burden of damaging mutations was associated with reduced cognitive measures. We used sequence-based machine learning tools to investigate the functional effects of rare mutations in the DISC1 pathway. These analyses showed that rare mutations can impact protein stability and be involved in post-translational modifications. In addition, we utilized structure-based methods to quantitatively assess the effects of rare mutations on protein structure and function. The energy calculations revealed that disease-causing mutations could reduce folding energy and affect binding energy. The findings improve our understanding of the roles of rare variations in the DISC1 pathway in susceptibility to psychiatric disorders, and provide targets for further functional studies and mental illness treatments.

Big Data in Physics and Astronomy Symposium

Keynote Sessions

Fred Adams

Big Data and Theoretical Astrophysics

Fred Adams

Ta-You Wu Collegiate Professor of Physics at University of Michigan

Although the development of big data science is having a tremendous impact on astrophysics, it faces important challenges in the relatively near future. This field is driven by an enormous growth in the available observational data and a corresponding increase in computational capabilities. This talk considers a number of issues that provide important constraints on the possible evolution of big data in general, and specifically in astrophysics, and considers the evolving role of traditional theoretical analysis in this context.

Frank Summers

From Supercomputers to the IMAX Screen: Cinematic Scientific Visualizations

Frank Summers

Outreach Astrophysicist, Space Telescope Science Institute; Host, Hubble's Universe Unfiltered Video Podcast

In an age of movies with computer graphics superheroes doing the physically impossible and making it seem routine, how do we present science topics to an audience jaded by such cineplex blockbusters? One answer lies in embracing the twin pillars of Truth and Beauty, along with a truly astounding number of CPU cycles! Immense computer simulations provide the verisimilitude of the physics of the universe along with the expansive dynamic range required to fill the giant screen. Complex visualization pipelines eschew the traditional analysis software and have co-opted Hollywood tools for visually rich, and computationally expensive, presentations. In work spanning more than two decades, hundreds of Hubble press releases, dozens of cosmic sequences, and five IMAX films, Dr. Summers has balanced the accuracy of astronomical research with the aesthetics of cinematic arts in order to create grand scenes of astronomical splendor.

Risa Wechsler

Cosmologist; Director of the Kavli Institute for Particle Astrophysics and Cosmology and Associate Professor of Physics at Stanford University and at the SLAC National Accelerator Laboratory

What is most of the matter in the universe? Why is the universe’s expansion rate speeding up? How do galaxies form over cosmic time? We are pursuing these large outstanding questions with a new generation of large galaxy surveys that will map out billions of galaxies over the entire sky and probe the universe’s evolution over more than 13 billion years of cosmic history. In order to make sense of these data, we need a combination of tools, including large simulations with trillions of particles, effective tools for data analysis, and tools for scientific inference that make the most of both observed and simulated data. I will discuss some of the data challenges we face in this new generation of cosmological measurements, and the ways in which our approaches to inference in cosmology can both learn from and inform other areas of data science. 


Big Data in Climate, Energy, and the Environment Symposium 

Keynote Sessions

How Big (Biological) Data is Changing our Approach to Environmental Health

Cavin Ward-Caviness

Principal Investigator at the US Environmental Protection Agency’s Environmental Public Health Division

Big data has always been an important aspect of environmental health, as we have attempted to understand how pollution at the scale of cities and nations alters disease and mortality risk. However, as technology and methodologies have advanced, high-dimensional biological data have risen to the forefront. Assessments of genetics, epigenetics, and metabolomics at genome- and metabolome-wide scales are becoming commonplace, allowing existing environmental data to be re-examined in novel ways. Researchers are now able to gain unprecedented insights into the molecular basis of environmental health. Even more recently, the proliferation of electronic health records has allowed us to gain new insights into how environmental pollutants affect understudied individuals, such as those with pre-existing disease. The deep clinical phenotyping available from electronic health records also holds the potential to provide increased clarity on the health outcomes affected by pollutant exposures, and possibly to uncover health outcomes never considered before. Pushing the field of environmental health forward using big data requires input and expertise from a diverse array of disciplines, but it holds an essential key to continued improvements in health and wellness for all individuals.


Deep Learning for Climate Science


Leader of the Data and Analytics Services Team at the National Energy Research Scientific Computing Center, a division of Lawrence Berkeley National Laboratory

Deep learning has revolutionized the fields of computer vision, speech recognition, robotics, and control systems. Can deep learning be applied to solve problems in climate science? This talk will present our efforts in applying deep learning to binary pattern classification, localization, detection, and segmentation of extreme weather patterns (tropical cyclones, extra-tropical cyclones, atmospheric rivers, fronts). We have developed supervised convolutional architectures, semi-supervised convolutional architectures, and “tiramisu” segmentation architectures for tackling these problems. I will briefly present results from scaling these architectures on large-scale HPC systems, obtaining petaflop- to exaflop-class performance. The talk will conclude with a list of open challenges in deep learning and speculation about the role of artificial intelligence in climate science.
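The detection-and-localization task the abstract describes can be illustrated with a toy sketch: a single hand-rolled convolutional filter scanning a synthetic 2-D field for a localized anomaly. This is purely illustrative, with invented data, and is not one of the architectures from the talk:

```python
import numpy as np

def convolve2d_valid(field, kernel):
    """Naive 'valid'-mode 2-D correlation with a small kernel."""
    kh, kw = kernel.shape
    h, w = field.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(field[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 32x32 "climate field": background noise plus a localized anomaly
rng = np.random.default_rng(seed=0)
field = rng.normal(0.0, 0.1, size=(32, 32))
field[10:14, 20:24] += 2.0  # the "extreme weather" blob

kernel = np.ones((4, 4)) / 16.0  # simple averaging filter
response = convolve2d_valid(field, kernel)
row, col = np.unravel_index(np.argmax(response), response.shape)
# The strongest response should sit near the blob's top-left corner, (10, 20)
```

A trained convolutional network learns many such filters jointly; the hand-set averaging kernel here merely stands in for what training would discover.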

The Fourth National Climate Assessment: Translating Data to Inform Decisions

David Reidmiller

Director of the National Climate Assessment at the U.S. Global Change Research Program

The United States government maintains one of the largest collections of climate data in the world, including archives of satellite observations, in situ measurements, and projections from global climate models. In total data storage, the federal government’s climate data archive is expanding exponentially. In addition to collecting the data, the federal government also translates that data into relevant, actionable information to assist in addressing climate-related risks.

The U.S. Global Change Research Program (USGCRP) coordinates the global change research needs and investments of thirteen Federal agencies. This begins with turning observational data into fundamental understanding of Earth system processes, in part by incorporating it into global climate models. Scientific assessments synthesize this fundamental understanding—typically through expert analysis of published research. In this way, scientific assessments leverage observed climate data and model projections to provide a narrative that makes that information useful to decision makers. The National Climate Assessment (NCA) is USGCRP’s mandated synthesis of climate change and its impacts. 

As the nation’s premier source for communicating climate-related risks to society, the NCA is a national effort, bringing together experts from USGCRP agencies and the broader federal government, as well as hundreds of experts from state and local governments, tribal communities, academia, non-profit organizations, and private sector entities.

This keynote will discuss three pillars of the Fourth National Climate Assessment. First, the science contained in comprehensive assessment efforts such as the NCA is a critical public good for the American people. Second, the provenance of the findings and underlying data of the report is well-documented and traceable, lending an enhanced level of transparency and credibility to its conclusions. And third, the NCA translates our investments in scientific data and fundamental understanding into policy-relevant products to inform decisions across the country.

Additional Sessions

Reduction of a Manufacturing Plant’s Electricity Demand During Grid Consumption Peaks with an Energy Storage System Managed by Neural Network-Based Big Data-Fed Demand Forecast and a Decision Model

Richard Boudreault* of Sigma Energy Storage, N. Courtois, C. Sélérier, D. Klassen, R. Lotfalian, M. Najafiyazdi

To provide enough power during consumption peaks, electricity grids are typically over-sized, with intensively managed generation sources. To reduce peak demand, power distributors have implemented various incentive measures. In Ontario (Canada), the surcharge for major electricity consumers (called Class A clients) in a given year is calculated from their share of consumption during the top five peak hours of that year, as determined at year end. That surcharge may be avoided by reducing plant electricity consumption during these five hours (called “ICI hours” here): storing energy behind the meter during non-ICI hours, then delivering electricity directly for the customer’s own use during peaks. Such a solution requires customers to predict the current year’s ICI hours in advance, which is a challenge because the hours are only determined at year end and demand forecasting is complex. This article presents a method that determines when the energy storage system should provide electricity in order to strategically reduce a plant’s consumption during annual peaks. The method is based on two tools: a demand forecast model and a decision model. The first predicts Ontario electricity demand 24 hours in advance; it is built on several years of data, including hourly temperature, irradiation, and population by Ontario region, processed by a neural network. The second decides, given the demand forecast for the next 24 hours, whether releasing electricity from the energy storage system is advisable. The decision model uses both forecast and real historical data.
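The two-tool workflow described in the abstract can be sketched in miniature. This is a hedged Python toy with invented demand numbers and a naive threshold rule standing in for the authors' neural-network forecast and decision model:

```python
import numpy as np

def peak_hours(forecast_mw, top_n=5):
    """Return the indices of the top-N hours in a 24-hour demand forecast."""
    forecast_mw = np.asarray(forecast_mw)
    return set(np.argsort(forecast_mw)[-top_n:])

def discharge_schedule(forecast_mw, annual_peak_estimate_mw):
    """Discharge the storage system only in hours whose forecast demand
    approaches the estimated annual (ICI) peak level."""
    return [demand >= annual_peak_estimate_mw for demand in forecast_mw]

# Hypothetical 24-hour Ontario demand forecast (MW), hours 0..23
forecast = [16500, 16200, 16000, 15900, 16100, 16800, 17900, 19200,
            20100, 20800, 21300, 21700, 21900, 22100, 22300, 22600,
            22900, 23100, 22400, 21500, 20300, 19000, 17800, 17000]
schedule = discharge_schedule(forecast, annual_peak_estimate_mw=22800)
```

In the real system the forecast comes from a neural network trained on weather and population data, and the decision model weighs forecast uncertainty against the limited stored energy; the fixed threshold here only illustrates the shape of the decision.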

Global Parameterization of Climate Change Indicators

Micha Tomkiewicz* of Brooklyn College of CUNY and Ph.D Programs in Physics and in Chemistry CUNY/Graduate Center

The fifth international IPCC report states that the two leading contributors to changes in annual CO2 emissions by decade are changes in population and changes in GDP per capita; the latter reflects changes in standard of living. Changes in carbon intensity (defined as the ratio of CO2 emissions to GDP) and changes in energy intensity (defined as the ratio of change in energy use to change in GDP) contribute negatively to the changes in CO2 emissions. Data for calculating both carbon and energy intensities were extracted from the World Bank database. Within well-defined error margins, the carbon and energy intensities are shown to be independent of both the population and the GDP of countries, and thus can serve as a convenient parameterization of human impact. The data show that the intensity parameters are better starting points for future projections of environmental impact. Longitudinal studies of these parameters reflect global changes that are relatively independent of economic and demographic developments but still keep the focus on decarbonization. This removes one of the main obstacles to productive international agreements on limiting greenhouse gas emissions and other environmental impediments, by eliminating the need to divide the world into developed and developing countries with separate rules, as was done with the Kyoto Protocol.
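As a minimal illustration of the two intensity definitions quoted above (the figures are invented for illustration, not World Bank values):

```python
def carbon_intensity(co2_emissions_kt, gdp_usd):
    """Carbon intensity: CO2 emissions divided by GDP."""
    return co2_emissions_kt / gdp_usd

def energy_intensity(delta_energy_use, delta_gdp):
    """Energy intensity: change in energy use divided by change in GDP."""
    return delta_energy_use / delta_gdp

# Hypothetical country-level figures for two years (illustrative only)
co2_2010, gdp_2010 = 5.0e6, 1.5e13       # kt CO2, USD
gdp_2015 = 1.8e13                        # USD
energy_2010, energy_2015 = 2.2e6, 2.3e6  # ktoe

ci = carbon_intensity(co2_2010, gdp_2010)
ei = energy_intensity(energy_2015 - energy_2010, gdp_2015 - gdp_2010)
```

The claim in the abstract is that `ci` and `ei` computed this way vary little with a country's population or absolute GDP, which is what makes them useful as parameters.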

SAS Demonstration

SAS Viya Data Mining and Machine Learning

Andre de Waal, Analytical Consultant in the Global Academic Program, SAS Institute

SAS Viya Data Mining and Machine Learning is a new product from SAS that showcases a rich set of data mining and machine learning capabilities running on a robust, in-memory, distributed computing infrastructure. This product provides a single environment for the data scientist to perform the tasks associated with data preparation, feature engineering, model training, assessment, and deployment. The main interface for SAS Viya is SAS Studio, a web-based user interface that offers an array of utilities and conveniences for composing applications in the SAS programming language. In this demonstration we will show how to build various machine learning models in SAS Studio and then in SAS Visual Data Mining and Machine Learning, which adds machine learning functionality to the SAS Visual Analytics web client. This enables users to experience powerful statistical and machine learning techniques running on SAS Viya through an easy-to-use, drag-and-drop interface.

Ethics and Responsible Conduct of Research Sessions

Keynote: Big Data Challenges from Total-Body Positron Tomography to Explore Addiction, Schizophrenia, and Other Diseases

Tom Budinger, Professor in Residence Emeritus, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley

A disadvantage of contemporary medical imaging methods for radionuclide and magnetic resonance studies is that current instruments do not allow simultaneous observation of total-body biologic and physiologic function, due to their limited axial extent. X-ray CT allows whole-body coverage by rapid transport of the subject through the x-ray exposure, but the exposure increases with each dynamic time point. Positron emission tomography (PET), and only to some extent magnetic resonance imaging and spectroscopy, can collect whole-body kinetic information such that the relative metabolism of multiple organs can be evaluated—in the case of PET, after a single injection.

Big data acquisition and analysis can enable investigations of relationships between human mental behavior and the chemical dynamics of the lung, heart, liver, gut, kidneys, and musculoskeletal system. The potential for understanding mechanisms of major diseases such as depression, schizophrenia, and PTSD, and for gaining insights into the autoimmune system, is hindered by limitations in instrumentation and by the lack of methods to manipulate and analyze databases comprising a billion data points for each individual patient study.

This presentation focuses on the medical science rationale and potential of total-body positron emission tomography, which has been under development at a few institutions over the past five years (e.g., Cherry et al. 2018). Advantages of increasing the axial extent of PET imaging systems for total-body coverage include not only improved sensitivity through an increase in the area of detector material but also, of greater importance, the ability to collect metabolic and chemical information simultaneously from multiple organs and body regions. Improvements in detector technology over the last few years can provide a sensitivity improvement beyond a factor of 10, reducing the required radiation dose and opening the technique to studies of infant and even fetal health. But while acquisition of even one billion data points is enabled by current detector-computer systems, the reduction of these data to usable images is a big data problem that requires advances in both computational algorithms and hardware architecture to render the results of a patient study in a timely manner.

The presentation will include motivating examples of total body imaging in addiction, arthritis, heart disease and a hypothesis regarding the metabolism of schizophrenia.

Big Data Ethics: From Writing Code to Coding Rights in an Era of Intelligent Machines

Kenneth Goodman, Professor and Director, Institute for Bioethics and Health Policy, University of Miami; Co-Director, University of Miami Ethics Programs

The nature and goals of biomedical research have shifted quickly. The transition from hypothesis-driven science to knowledge discovery in databases has replaced centuries of scientific standards with tools that analyze not the world but data and information apparently about the world. The software for accomplishing this adheres to few standards, is not clearly fit for the purposes to which it is put and, sometimes, is guided more by intellectual priority and property than the desire to conduct more reliable and reproducible scientific inquiry. Moreover, when such systems are used for decision support, the challenges related to accountability, responsibility, peer review, and appropriate uses and users emerge as fundamental to the scientific enterprise. For all that, in the world of biomedical research, ordinary citizens have benefited from the analysis of others’ data, and so have acquired an obligation to share theirs—if certain conditions are met.

Tele-Cybernetics: Socio-politico-Ethical Background for Governing Big Data

G. Arthur Mihram,* Princeton, New Jersey; Danielle Mihram, Acting Head, Science and Engineering Library, University of Southern California

The term "cybernetics" has today come to mean "the computerization of data and/or its access." Originally, the term was defined as "communication(s) plus their control(s) in the animal and the machine." Tele-communicative access to today's "big data" leads one virtually to suggest the term "tele-cybernetics": tele-communications plus their "tele-controls."

Since science is that human activity devoted to the search for the very explanation for (i.e., for the truth about) any particular naturally-occurring phenomenon, the recent deduction, from the history of science, of the nearly-algorithmic "method" accounting for the success of our Modern Science, leads us to enquire of the socio-political structures which Science would still require in our Age of Tele-communications with its concomitant, Big Data.  One example: How are we satisfying the requirement for Congress’s (virtually Constitutionally-mandated) "National Electronic Postal Service," with its “enhanced electronic postmark”?

We are thus also reminded that ethics is defined as "the science of human duty," so that these resulting requirements within tele-cybernetics relate, e.g., both to the means for establishing "priority of publication" and to maintaining the implicit "secrecy" of every editorial review policy, now tele-communicatively, for "Big Data-sets."

Successful science itself rather defines human progress overall. Since we must remain aware that its method (one storing models extracorporeally: museums, libraries, then “the Cloud”) is a primarily unrecognised, yet a truly isomorphic, mimicry of each of the biological processes (genetic; then neural) accounting for the survival of all life on Earth to date, the responsibility for our ethical conduct in handling "big data" becomes paramount.

Ethical Debate On Standards For Authorship

Pamela Saha, Psychiatrist, University of Washington Medicine Harborview Medical Center; Subrata Saha, Research Professor, University of Washington

The criteria for authorship have changed over the centuries and continue to cause dispute today. The placement and order of names on a publication, for example, often generate conflict among professionals. The practice of listing well-known scientists with limited or even no direct involvement in a work persists, for the purpose of enhancing the chances of publication. Subordinates are exploited by heads of labs and by department chairs. Authorship is not only about credit; it is also about responsibility. An author must be prepared to take responsibility for content circulated in a journal. Data must be supported and verified by detailed and careful records. Claims presented without documentation to back them up can lead to embarrassment for scientists, universities, and periodicals alike. Journals now have strict guidelines requiring authors to state that they have contributed significantly to a work. Mere financial support for a project, without participation in the experiments, the writing of the paper, or supervision of the project, does not qualify one as an author. In this paper, we address the ethical issues concerning authorship in scientific publications.

Panel Discussion: Ethical Issues and Challenges in Research 

Allen A. Thomas, Associate Professor, Organic Chemistry, University of Nebraska at Kearney.

Cristina Gouin, Biological Scientist, USDA Animal and Plant Health Inspection Service (APHIS)
Pranav Mathur, Scientist, Research, Otonomy Inc.
Subrata Saha, Affiliated Professor, Department of Restorative Dentistry and the Department of Oral and Maxillofacial Surgery, University of Washington

The challenges of doing scientific research and competition to obtain funding and/or publish positive results are immense. Along with these tasks, being a successful scientist and human being also requires a commitment to good ethical practices. The panelists will share their varied experiences from industry, academia, and government research labs to address ethical questions such as: “How should authorship on papers be decided?”; “How can we ethically use animals in research?”; “Is it possible to objectively review manuscripts and grant proposals?”; “Can we eliminate gender bias?” After the panelists have presented their experiences and viewpoints on these and other relevant ethical topics, they will happily accept questions and comments from the audience to promote discussion.


Science Communication Sessions

Seeking Science: A Roundtable with California Legislative Staff

Teresa Feo PhD, 2018 CCST Science & Technology Policy Fellow with the California State Senate Office of Research (Moderator); Naomi Ondrasek PhD, 2018 CCST Science & Technology Policy Fellow with the California State Assembly Education Committee (Moderator); Katharine Moore PhD, Consultant, California State Senate Natural Resources and Water Committee; Julianne McCall PhD, Consultant, California State Senate Office of Research. Organized by the California Council on Science and Technology

How can scientific research inform real-world policies? How can researchers get a sense of pressing policy priorities in need of scientific input? One important pathway is for the scientific community to interface with legislative staffers—a crucial first point of contact and the daily workhorses of any state or federal lawmaking process. In this panel featuring Ph.D. scientists-turned-policy professionals, committee staff from the California State Legislature will provide a useful overview of how technical information is consumed in the course of crafting and analyzing legislation, and explain the role that staffers play for elected officials and legislative committees. Then, panelists will share their thoughts on perennial and emerging topics facing the State of California in need of scientific data and advice — "the science I need right now." For any Sigma Xi member seeking to translate their research for broader impact and public service, this will be an informative and engaging glimpse into the world of science policy!

Science Communication Workshop Part I  and Part II

Darcy J. Gentleman, Principal, DJG Communications LLC

Scientists’ and engineers’ innovative work thrives on communication between peers. As collaboration on interdisciplinary teams and complex projects (e.g., “big data”) becomes the norm, individual researchers find themselves engaging with more diverse audiences. No longer can shorthand terms of art (jargon) and bespoke diagrams convey information in an immediately useful form. The stakes are high: researchers who are comfortable presenting to non-specialist audiences will find it easier to secure support for their work. This two-part workshop will quickly help participants learn how to better engage non-specialist audiences, including non-technical audiences such as the general public. The workshop is useful for students and professional researchers at all experience levels.

Connecting with your Audience, a Workshop

Ben Young Landis, Creative Externalities. 

Organized by the California Council on Science and Technology

As researchers, we have a very real need to communicate the value and application of our work. However, communicating your science effectively depends on how well you understand your audience — and how well you frame your message to match!

Designed for those who have little to no experience, or those who need a refresher, this workshop offers a primer on communicating science to the public — with the specific philosophy that “the public” is made up of diverse peoples, each with unique cultural contexts — and that the researcher-communicator must always be ready to adapt their messaging to the audience and conversation at hand. To practice this, workshop participants will be led through a lively, interactive lecture, followed by a worksheet activity in which they will prepare and rehearse a 90-second elevator pitch explaining their research.

By the end of the course, participants should understand the importance of audience and context when communicating to the public, and ways to engage their audience through metaphors, messaging, and other tricks of the trade — growing their craft as science communicators and as ambassadors of the scientific endeavor.

(Capped at 20 participants)

Increasing Data Literacy and Building Science Communication Skills – The Biodiversity Literacy in Undergraduate Education Data Initiative (BLUE Data)

Lisa D. White, University of California, Berkeley; Anna Monfils, Central Michigan University 

Rapid advances in data research and technology are transforming scientific methodologies, changing the preparation of students for 21st-century careers, and driving the need to re-shape undergraduate training in biology. A new project, the Biodiversity Literacy in Undergraduate Education Data Initiative (BLUE), funded by the National Science Foundation Research Coordination Network (RCN), addresses the need to incorporate biodiversity information and data skills into undergraduate education by developing materials and approaches that encourage greater use of biodiversity data in instruction. The goals of BLUE are to: (1) cultivate a diverse and inclusive network of biodiversity researchers, data scientists, and biology educators focused on undergraduate data-centric biodiversity education; (2) build community consensus on core biodiversity data literacy competencies; (3) develop strategies and exemplar materials to guide the integration of biodiversity data literacy competencies into introductory undergraduate biology curricula; and (4) extend the network to engage a broader community of undergraduate educators in biodiversity data literacy efforts. In the first year of the initiative we have brought together discipline-based researchers and science educators who are designing and assessing effective classroom activities using biodiversity data. Through these efforts we have begun to highlight the skills and competencies most needed throughout an undergraduate biology curriculum that leverages biocollections, metadata, and other authentic resources and information from large aggregated datasets to study biodiversity.

Challenges of Sharing Data in Long-Distance Collaborations

Allen A. Thomas, Associate Professor, Organic Chemistry, University of Nebraska at Kearney

Collaboration is an essential component of scientific progress, but it is rarely easy. E-mail can readily lead to miscommunication. Yet, we scientists rely heavily on e-mail for sharing and discussing data. I will present results from published studies on the effectiveness of various methods for interpreting large data sets within a team. Additionally, I will briefly describe case studies of how multi-disciplinary, multi-site teams have used large data sets to make project-changing decisions.

Professional Development Sessions

Sigma Xi Chapters and Interdisciplinary Professional Development: How Can We Build on Our Strengths? 

Facilitators: Chris Olex, Corporate Trainer and Facilitator at The Point; Sue Weiler, Senior Research Scientist at Whitman College

While successful collaborative interdisciplinary research is contingent on the ability to work effectively while navigating the dynamics of group interactions, students and early career researchers are rarely taught effective team skills. Sigma Xi chapters are uniquely positioned to provide opportunities for interdisciplinary collaborative research, training, and fellowship. In this session, we will explore the five traits of successful teams identified recently in a study of Google employees, and discuss them in the context of Sigma Xi’s potential role in facilitating interdisciplinary collaborative research opportunities. We will also brainstorm about ways in which the Sigma Xi chapter structure is currently being used, or could be used more effectively to foster interdisciplinary collaborative team skills and networking opportunities.

How to Survive AND Thrive in a Research Career

Facilitators: Chris Olex, Corporate Trainer and Facilitator at The Point; Sue Weiler, Senior Research Scientist at Whitman College

Life as a researcher can be very rewarding, but navigating this new environment as a student or early-career researcher can be stressful, confusing, and even overwhelming. Fortunately, there are practices and opportunities for both surviving and thriving in the research environment. Participants will be provided with techniques and self-checks to identify individual preferences, thinking habits, and motivations, and will explore techniques for developing resilience and perseverance (grit) in the face of adversity.

Grant Writing Workshop

Emma Perry, Professor of Marine Biology, Unity College and Former Chair of the Sigma Xi Committee on Grants-in-Aid of Research

Does your grant application have that je ne sais quoi? Together we will look at what makes a successful grant application, what elements a good grant application should contain, and discuss the elements of a compelling big picture. After examining a variety of grants (please feel free to bring along grants you are interested in applying to), we will end by examining the nuts and bolts of Sigma Xi’s Grants-in-Aid of Research evaluation process.

General Sessions

Global SPHERE Network

Raja GuhaThakurta, professor and astronomer, University of California Observatories, Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz; Emily Entress Clark, Science Internship Program coordinator, University of California, Santa Cruz

This interactive session about the Global SPHERE Network will be hosted by SPHERE co-founders Raja GuhaThakurta, Emily Clark, and a few of their colleagues. Launched a year ago, the SPHERE network aims to become a community of practice for STEM researchers who mentor (or wish to mentor) high school students in their research. The network’s mission is to: (1) facilitate the growth of the mentor pool, (2) increase the size and diversity of the pool of high school students who avail themselves of such opportunities, and (3) create the first categorized online directory of global STEM programs for high school students to use. Presenters and participants will discuss mechanisms to engage with Sigma Xi chapters and partnering Sigma Xi clubs to expand the STEM pipeline and support high school researchers, both locally and globally. The SPHERE network was co-founded by programs at the University of California, Santa Cruz; the New York Academy of Sciences; the University of California, Berkeley; Harvard University; the American Museum of Natural History; and Rockefeller University.

Exhibits and Networking

Visit the exhibitors: GEICO, SAS Institute, the Graduate School of Basic Medical Sciences at New York Medical College, National Postdoctoral Association, SMART Scholarship program, March for Science, and Mentors Advancing STEM Education and Research (MASER).

Sigma Xi Town Hall

Hear remarks from President Joel Primack; President-elect Geri Richmond; Director of Membership, Chapters, and Programs Eman Ghanem; and Executive Director and Chief Executive Officer Jamie Vernon. Then ask President Primack questions in a Q&A session.


Science Café: The Who and Why of Environmental Health

RSVP on Eventbrite. RSVPs required due to limited seating.

Location: Rise Pizzeria, 1451 Burlingame Avenue, Burlingame, CA

Speaker: Cavin Ward-Caviness, Principal Investigator in the Environmental Public Health Division of the U.S. Environmental Protection Agency

The environment is one of the most important determinants of health and wellness. Despite this, we understand relatively little about who is most likely to be affected by environmental exposures and why certain exposures make us sick. In this talk, we will use a variety of approaches to explore environmental health in under-studied populations and to understand which groups of individuals might be at greatest risk of adverse health outcomes when exposed to air pollution. We will also explore how molecular changes associated with environmental exposures, e.g., changes in epigenetic markers, can be indicators of accelerated aging and may give us clues to the biology linking exposure and health effects. Through the use of electronic health records and high-dimensional “omics” data, we will explore some of the latest insights into environmental health, what they mean for the most vulnerable among us, and how we might leverage them into a more personalized approach to environmental health.

Attendees will be able to purchase dinner at the venue. A complimentary shuttle will pick up participants from the Hyatt Regency San Francisco Airport hotel lobby at 6:30 p.m. on October 25 and provide a return ride. 

Science Café: "So What Kind of Doctor Are You, Anyway?” My Long Journey from Medicine to Public Policy, with Stops in Lasers and Corn Farming

RSVP on Eventbrite. RSVPs required due to limited seating. 

Presented by the California Council on Science and Technology Fellows

Speaker: Colin Murphy, Deputy Director of the Policy Institute for Energy, Environment and the Economy, University of California, Davis; 2014 Science and Technology Policy Fellow, California Council on Science and Technology

Why do scientists “science”? What motivates a person to spend years educating themselves about a field and years more working in it, only to leave that world for other pastures? At this Sigma Xi Science Café event, speaker Colin Murphy will ponder the many roads taken in his pursuit of science and in service to society. A biomedical engineer turned biofuel expert turned policy staffer turned climate advocate, Murphy will explain how there is more to science than just research. Instead, scientists harness their talents and even hobbies, taking their expertise to important playing fields such as government policymaking. Murphy will reflect on his experience going from PhD research to working in the halls of the California State Capitol — learning how science can and can’t influence the tough decisions and complex compromises in politics — and how science plays a role in lawmaking in the Golden State.

The evening will provide helpful context for attendees interested in a science policy (#scipol) career—and a fun look for all on how “the sausage is made” in Sacramento, and why perfectly sane people choose to lead a life in science, or politics, or both!

Attendees will be able to purchase dinner at the venue. A complimentary shuttle will pick up participants from the Hyatt Regency San Francisco Airport hotel lobby at 8:00 p.m. on October 26 and provide a return ride. 

Think Cosmically, Act Globally, Eat Locally

Joel Primack, President of Sigma Xi; Nancy Ellen Abrams, cultural philosopher

Sigma Xi President Joel Primack helped to create modern cosmology. He will briefly explain the new picture of the universe and how we fit into it. It turns out that we are central or special in surprising ways. 

Nancy Abrams worked on science and policy at the Ford Foundation, the Congressional Office of Technology Assessment, and the government of Sweden. She will describe how scientists can advise politicians and the public effectively even when they disagree on the policy implications. But some scientists are not reliable sources of information and advice, as Nancy’s song and video “Hired Brain” illustrates.

They both look forward to interacting with the audience.

Science Comedy Show

Brian Malow, Science Comedian, Producer, Consultant

Updated on 10/24/2018

* Denotes meeting presenter
