
Words to know

Blood-brain barrier: a protective layer that surrounds the brain and controls what things can move into the area around the brain.

Circadian rhythm: the body's natural clock that runs on roughly a 24-hour cycle. Many animals have a 24-hour cycle that includes sleeping, eating, and doing work.

CLSM: a confocal laser scanning microscope (CLSM) makes high-quality images of microscopic objects in extreme detail.

Metabolism: what living things do to stay alive. This includes eating, drinking, breathing, and getting rid of wastes.

Puberty: the change from child to adult where the body is able to reproduce.

Vertebra: any of the bones that make up the backbone.

What Are the Parts of the Brain?

Every second of every day, the brain collects signals from and sends signals to the parts of your body. It keeps everything working, even while we sleep at night. Here you can take a quick tour of this amazing control center, see each part, and learn which areas are involved in different tasks.

Brain Cells

There are two types of cells in your brain: neurons and glial cells (glia, from the Greek word for glue). For a long time, biologists thought that neurons were the only cells that controlled our bodies and that they were also where our memories are kept. Glial cells were thought to be in the brain only to support neurons: insulating them, providing nutrition, and doing basic housekeeping. New research is beginning to show that glial cells do more than these jobs.

Brain Anatomy

Computer animation credit: BodyParts3D, Copyright © 2010 The Database Center for Life Science, licensed under CC Attribution-Share Alike 2.1 Japan.

Astrocyte movie credit: confocal scanning laser image courtesy of Professor Dennis McDaniel.

Read more about: A Nervous Journey

Article source: Brett Szymik, "What's in Your Brain?", ASU - Ask A Biologist, Arizona State University School of Life Sciences, May 5, 2011. Retrieved June 5, 2024 from https://askabiologist.asu.edu/parts-of-the-brain

Babies, Birth, and Brains

Learn more about the growth and size of the human brain in Ask An Anthropologist's story, Babies, Birth, and Brains.



Learn Bright

Human Brain

Human Brain teaches students about the six major parts that make up the brain: cerebrum, cerebellum, brain stem, hypothalamus, pituitary gland, and amygdala. Students will discover which functions of the body each part controls or is responsible for. The worksheets at the end will reinforce their grasp of the lesson content.

The “Options for Lesson” section provides some ideas for alternative ways to play the charades game for the lesson activity. Because it’s a tournament, there are several ways you can adjust the game to fit your class’s needs. One option suggests changing the game to Pictionary instead.

Description

What our Human Brain lesson plan includes

Lesson Objectives and Overview: Human Brain teaches students about the major components of the brain and their functions. Students will be able to identify and describe these major parts and functions and learn how they all work together. This lesson is for students in 5th grade and 6th grade.

Classroom Procedure

Every lesson plan provides you with a classroom procedure page that outlines a step-by-step guide to follow. You do not have to follow the guide exactly. The guide helps you organize the lesson and details when to hand out worksheets. It also lists information in the yellow box that you might find useful. You will find the lesson objectives, state standards, and number of class sessions the lesson should take to complete in this area. In addition, it describes the supplies you will need as well as what and how you need to prepare beforehand. You won’t need any extra supplies for this lesson, but you will need to prepare a few things ahead of time. Create a large set of tournament brackets for the Brain Charades activity to display. Divide your students into teams of three or four each. Cut apart the charade cards, and make several sets if you plan to play several games at the same time.

Options for Lesson

The “Options for Lesson” section lists several suggestions for additional activities and tasks or alternate ways to go about certain parts of the lesson. For this lesson, all of the suggestions relate to the brain charades activity worksheet. You could choose two students for each team to increase the number of teams playing in the tournament, or you could split the class into just two teams rather than making it a tournament. (In this case, you wouldn’t need to make the brackets.) You may also want to adjust the rules a little so that you add or subtract the amount of time students have for each game. Another idea suggests saving the game for after the practice and homework assignments as an incentive for students to study and prepare for the tournament. Another option is to adjust the bracket as necessary for the number of teams or if you want to use double-elimination rules. The last idea is to change the game into a drawing game such as Pictionary instead.

Teacher Notes

The paragraph on the teacher notes page provides a little extra information or guidance for the lesson overall. It reminds you to focus your efforts on helping students become familiar with the parts of the brain and how different areas control different parts of the body. It suggests that you use additional resources, like videos, to further journey into the workings of the human brain. You may find and want to use additional content that discusses parts of the brain that this lesson doesn’t cover. You can use the blank lines to write any other thoughts or ideas you have as you prepare.

HUMAN BRAIN LESSON PLAN CONTENT PAGES

The Basics of the Brain

The Human Brain lesson plan contains three pages of content. The first page explains what the brain is and what it does for the body. The brain is the main part of the body’s nervous system and constantly sends signals throughout the body. It has several different parts that work together to ensure we can live, learn, work, and play. The six main parts are the cerebrum, cerebellum, brain stem, pituitary gland, hypothalamus, and amygdala.

There are other parts as well, but these six major parts are the ones that control everything that we do. Scientists are able to create maps of the brain. They have been able to locate areas within this organ that control specific parts or functions of the body. For instance, a doctor can stimulate a certain area of the human brain, and it will feel like someone is touching your arm or leg.

The lesson outlines and explains the functions of each of the six major brain components. It provides a few diagrams that label the different parts. You may want to review these with the class a few times since they will need to memorize where each part is located for one of the worksheets. The instructions dictate that they use their memory and not refer to the content pages for help.

The biggest part of the brain is the cerebrum. It makes up roughly 85% of the brain’s weight. The cerebrum allows people to control voluntary muscles, the muscles we can consciously control. In other words, the cerebrum is what allows people to kick a ball, walk down the street, or jump in the air. It also allows us to think. Taking a test, making decisions, or playing video games are all things that activate the cerebrum.

In addition, we depend on this important component for both short-term memory (recalling recent events) and long-term memory (recalling much older memories). The cerebrum has two halves, one on each side of the head. The right half helps us with abstract things like art, music, and other parts of the imagination. The left half is more analytical and helps us speak, make decisions, and reason things out. Interestingly, the right half of the brain controls the left side of the body, and the left half controls the right side.

There are four lobes that make up the cerebrum. The frontal lobe is at the front of the brain (hence its name). Behind the frontal lobe is the parietal lobe. The temporal lobe is on the side of the head. Finally, the occipital lobe is at the back of the head. Both halves of the cerebrum have these four lobes.

Brain Stem and Cerebellum

Students will then learn about the brain stem and cerebellum parts of the human brain. The brain stem is responsible for all the functions of the body that are vital to survival. These functions include breathing, digesting food, and circulating blood throughout the body. It is below the cerebrum and in front of the cerebellum. It connects the rest of the brain to the spinal cord.

The brain stem also controls involuntary muscles, which are those that work on their own without our having to think about it. It tells the heart to pump blood to the body and the stomach muscles to break food down. In addition, it sends and receives millions of messages back and forth between the brain and body.

Located in the back of the brain, under the cerebrum, the cerebellum controls balance, movement, and coordination. To put it simply, this part helps us stand, move, and balance. It is only about one-eighth the size of the cerebrum, but it is still a vital part of the brain. Without it, a person would have great difficulty moving around.

The Other Three Parts of the Human Brain

The lesson next describes the pituitary gland. The pituitary gland controls the growth of the body by producing and releasing hormones. Even though it is only about the size of a pea, our bodies would never change as we aged if it didn’t function properly. It also controls sugars and water in the body and keeps our metabolism going. Metabolism relates to how the body uses energy.

Students will then learn about the hypothalamus, which controls the body’s temperature. Because humans are warm-blooded animals, we can control our body temperature. The hypothalamus is the part of the brain that actually makes this happen. When it’s too hot, this part of the brain tells the body to sweat. When it’s too cold, it tells the body to shiver.

Finally, students will learn about the control center for feelings—the amygdala. The amygdala is a group of cells that is responsible for emotions. There are actually two amygdalae in the human brain, one on the left side of the brain and one on the right, but they work together to function correctly.

These six parts connect with the body’s nervous system, which is made up of thousands of nerves that communicate information to and from the brain. Memories and thoughts move through cells as tiny electrical charges. The cells connect to one another via synapses, the junctions between two cells. This is how habits develop and how we learn new skills: the more we practice, the stronger those connections become.
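The idea that practice strengthens connections can be pictured with a toy calculation. The short Python sketch below is a minimal, hypothetical illustration (it assumes a simple saturating update rule and is not part of the lesson plan or a biological model): each repetition nudges a connection's strength a little closer to its maximum.

```python
# Toy illustration of "the more we practice, the stronger those connections become".
# This is a simplified, hypothetical update rule, not a biological simulation.

def practice(strength, repetitions, learning_rate=0.1, max_strength=1.0):
    """Nudge a connection's strength toward max_strength once per repetition."""
    for _ in range(repetitions):
        strength = strength + learning_rate * (max_strength - strength)
    return strength

# A new skill starts with a weak connection...
start = 0.05
# ...and the connection grows stronger with repeated practice.
for reps in (1, 10, 50):
    print(f"Strength after {reps:2d} practice repetitions: {practice(start, reps):.2f}")
```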

HUMAN BRAIN LESSON PLAN WORKSHEETS

The Human Brain lesson plan includes three worksheets: an activity worksheet, a practice worksheet, and a homework assignment. Each one will help students solidify their comprehension of the material in different ways. The guidelines on the classroom procedure page outline when to hand out each worksheet to the class.

BRAIN CHARADES ACTIVITY WORKSHEET

This activity requires some preparation on your part (see the classroom procedure section). Each team of students will first pick a creative name for their team that relates to the brain in some way. Teams will compete against each other in a single-elimination tournament. To play, one player from each team will randomly choose a charade card and act out what it says. The remaining members will guess the action and which part of the brain controls that action. All other students must remain silent. If the actor speaks, the team loses a point. Any time the team guesses both the action and the correct brain part, you will give them a point. The game continues for up to five minutes (or more if you choose). Once the time runs out, they will reshuffle the cards for the opposing team.

LABEL THE DIAGRAMS PRACTICE WORKSHEET

The practice worksheet has two diagrams of a human brain. Students must label each diagram using the terms in the word bank. The word bank is different for each diagram.

HUMAN BRAIN HOMEWORK ASSIGNMENT

For the homework assignment, students must circle the correct answer for 18 questions. Several of these questions provide short scenarios that demonstrate how certain parts of the brain can work well or work improperly. You may or may not allow them to use the content pages for reference.

Worksheet Answer Keys

The last few pages of the document are answer keys for the worksheets. The activity answer key lists which charade cards correspond to the various parts of the brain to make it easy for you to ensure students are correct when they guess. For the practice worksheet answer key, the correct answers are in red for both diagrams. Similarly, the answer key for the homework assignment shows red circles around the correct responses.



BRAIN 2.0: From Cells to Circuits, Toward Cures

The Advisory Committee to the NIH Director (ACD) BRAIN Initiative Working Group 2.0 and the ACD Working Group on BRAIN 2.0 Neuroethics Subgroup (BNS) were formed in April 2018. For more detailed information on the BNS, please visit this page. The BRAIN Working Group 2.0 worked to assess the BRAIN Initiative’s progress and advances within the context of the original BRAIN 2025 report, identify key opportunities to apply new and emerging tools to revolutionize our understanding of brain circuits, and designate valuable areas of continued technology development. Over the course of one and a half years, they undertook a deliberative and open process consisting of portfolio review, scientific workshops, town halls, and public solicitation. 

Continuing in this manner, the Working Group 2.0 shared with the community its initial thoughts on the current state of the BRAIN Initiative, including opportunities for keeping pace with the evolving scientific landscape and newly identified opportunities for research and technology development within a solid ethical framework, to ensure BRAIN Initiative research is of the utmost value to the public it intends to serve. Following a public comment period, the Working Group 2.0 reviewed all responses as it revised the report for consideration by the ACD at its meeting on June 14, 2019.

After receiving feedback from the ACD and the NIH Director, the Working Group 2.0 and the BNS presented their reports to the ACD via teleconference on October 21, 2019. The reports, “The BRAIN Initiative® 2.0: From Cells to Circuits, Toward Cures” and “The BRAIN Initiative® and Neuroethics: Enabling and Enhancing Neuroscience Advances for Society,” were endorsed by the ACD. Former NIH Director Dr. Francis Collins accepted the ACD-endorsed reports, and the NIH BRAIN Initiative continues to carefully consider how to integrate the findings of these reports into future BRAIN Initiative priorities and investments.

Executive Summary

In 2019, we are at the midway point of The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative. To date, this large-scale investment of resources and time has made significant progress in its quest to understand the brain. Given remarkable progress in technology development, the neuroscience community is poised to apply these new technologies, and accumulated knowledge, to further understand what many perceive to be one of the most complex entities known to humankind: the human brain. We should also be humbled about what studying ourselves reveals – and be prepared to tread carefully when we don’t know what we don’t know about possible consequences of newfound abilities to control the activity of brain cells and circuits. While this is not possible on a wide scale yet, it will be in time, and probably sooner than we think.

In April 2013, recognizing the many scientific and ethical issues companion to The BRAIN Initiative®, NIH Director Dr. Francis Collins convened a high-level working group of the NIH Advisory Committee to the Director (ACD), the BRAIN Working Group (WG 1.0) and charged them with reviewing recent advances in neuroscience; articulating short-, medium-, and long-term goals for achieving a scientific and ethical vision of The BRAIN Initiative®; and developing a scientific plan for achieving those goals. The BRAIN Initiative® WG 1.0 established a strategic roadmap (BRAIN 2025: A Scientific Vision) structured into seven Priority Areas.

BRAIN 2025 recognized that the fast pace and unpredictable path of neuroscience research would require that recommendations be re-examined as The BRAIN Initiative® progressed. Dr. Collins convened a new working group (WG 2.0) to revisit the 2025 report’s priorities, to assess progress to date, and to identify new scientific opportunities. Beginning in April 2018, WG 2.0 members reviewed the existing BRAIN Initiative investment and progress and considered potential areas for growth and expansion. In so doing, WG 2.0 sought input from the broader neuroscience community and other BRAIN Initiative stakeholders through multiple modalities: a series of public workshops held between August 2018 and November 2018, three Town Hall events held between April 2018 and April 2019, and two requests for information (RFIs).

This report presents the findings and analyses of WG 2.0 regarding the NIH BRAIN Initiative investment to date and offers some specific suggestions regarding NIH activities in The BRAIN Initiative®. WG 2.0 proposes that the ACD recommend to the NIH Director that the NIH BRAIN Initiative consider the findings, analyses, and suggestions in this report for incorporation into the ongoing research program. Some of our findings and suggestions may extend beyond the NIH mission or may require collaborative efforts with other federal agencies and organizations. In those cases, WG 2.0 proposes that the ACD recommend to the NIH Director that NIH engage with broader stakeholder communities as necessary and appropriate to achieve outcomes consistent with the content of this report.

Priority Areas

We have structured this report around the seven scientific Priority Areas identified by BRAIN 2025. Each of these constitutes a chapter that provides a brief description of how the Priority Area fits into the goal of understanding circuits; reviews accomplishments to date in the context of the BRAIN 2025 short- and long-term goals; identifies gaps and opportunities; and presents revised short- and long-term goals. Next, we frame these scientific directions in a chapter entitled “Priority Area 8. Organization of Science,” which presents a discussion of the overarching topics that affect all areas of science. These include data management and sharing; scientific workforce-related considerations; sharing and using BRAIN Initiative technologies; public engagement strategies; and connecting basic research to disease models under study. We conclude by offering ideas for transformative projects, which all involve complex and multiscale lines of inquiry. A brief accounting of progress and promise for each Priority Area appears below and is articulated in more detail in this report. Programming accomplished in the context of the NIH BRAIN Initiative from 2014 to the present is identified as “BRAIN 1.0,” while “BRAIN 2.0” represents upcoming programming from the present to 2026. NIH should be prepared to evaluate the outcome of these findings and suggestions and review the accomplishments of BRAIN 2.0 in 5 years.

Priority Area 1. Discovering Diversity: Identify and provide experimental access to the different brain cell types to determine their roles in health and disease

  • Progress in this Priority Area has been faster than anticipated, enabled by advances in high-throughput technologies and analytical methods. New opportunities for BRAIN 2.0 include expanding cell-type profiling and data analysis to integrate measurements of additional phenotypic features of brain cells; generating a protein-based understanding and access to cell types; enabling genetic and non-genetic access to cell types across multiple species; expanding human cell biology; and performing cell-type based models of circuit function. At the completion of The BRAIN Initiative®, we expect that current and additional progress in this area will clarify, and perhaps even define, contributions of distinct cell types to circuit function and the physiological and pathological sequelae.

Priority Area 2. Maps at Multiple Scales: Generate circuit diagrams that vary in resolution from synapses to the whole brain

  • We have seen substantial progress in this Priority Area, reflected by impressive improvements in tissue processing and imaging that are bringing brain regions and circuitry into sharper relief for continued investigation. Opportunities for BRAIN 2.0 include increasing the speed and efficiency of these powerful new tools; expanding analyses to larger brains; increasing mapping of non-neuronal cell types and synapses; integrating structure and function mapping in the same brain; and acquiring and refining data-science advances to facilitate cross-species comparisons. At the completion of The BRAIN Initiative®, we expect that continued progress in this area will allow us to understand the structure of the brain and its numerous functions more fully. This multidimensional view will be transformative for developing therapeutic approaches appreciative of this complex organ.

Priority Area 3. The Brain in Action: Produce a dynamic picture of the functioning brain by developing and applying improved methods for large‐scale monitoring of neural activity

  • We have seen good progress in this Priority Area, driven in part by improvements in hardware and integrated strategies that combine electrophysiology with optical imaging, optogenetics, and pharmacologic modulation. Opportunities for BRAIN 2.0 include expanding the ability to understand neuromodulatory function; tools to study larger (primate) brains; and sophisticated, computational tools to better assess behaviors (especially in natural settings). At the completion of The BRAIN Initiative®, we expect that continued advances in this area will provide a clearer understanding of how dynamic activity in and across brain regions drives so many distinct behaviors in animals and in humans. 

Priority Area 4. Demonstrating Causality: Link brain activity to behavior by developing and applying precise interventional tools that change neural circuit dynamics

  • We have seen considerable progress in this Priority Area. All major short- and long-term goals are in the process of being completed. During BRAIN 2.0, we are poised to grasp new research opportunities in single-cell control, nanotechnologies, and machine learning. It may be time to consider applying methods developed in model systems to understanding neuropsychiatric disease states at the circuit level – as well as seeking to understand ancestral principles governing circuit operation shared across phylogeny and evolution. At the completion of The BRAIN Initiative®, we envision widespread adoption of integrated neurotechnologies that enable scientists to modulate activity throughout the brain to drive desired and predictable outcomes. We expect that the fundamental understanding obtained as a culmination of the integration of theory, observation, and closed-loop experimentation described herein will allow the design of neurotechnologies that perform these perturbations safely and predictably.

Priority Area 5. Identifying Fundamental Principles: Produce conceptual foundations for understanding the biological basis of mental processes through development of new theoretical and data-analysis tools

  • This Priority Area has achieved many of its goals, stimulating development of new approaches to deepen understanding of motor control, decision-making, and other brain functions. Network-training methods now enable artificial neural networks to learn to perform complex tasks similar to those used experimentally. In BRAIN 2.0, more attention could be paid to integrating the work of quantitative scientists of various types with experimental neuroscientists. Fuller integration of theory can also guide experimental design and enhance the validity of model systems. At the conclusion of The BRAIN Initiative®, advances in this area will bring together theory and experiment to solve profound and overarching questions central to systems neuroscience, which will ultimately explain how intricately connected networks of neurons acquire the ability to govern behaviors, thoughts, and memories.

Priority Area 6. Advance Human Neuroscience: Develop innovative technologies to understand the human brain and treat its disorders; create and support integrated human brain research networks

  • We are poised for much progress during BRAIN 2.0 in this Priority Area. Advances in BRAIN 1.0 have set the stage for conducting neuroscience research with human participants, but these opportunities arrive with the need to consider several issues. Moving forward in this area requires extraordinary levels of collaboration, involving integrated teams of clinicians, scientists, device engineers, patient‐care specialists, regulatory specialists, and neuroethicists – to ensure not only innovation, but also safety and scientific rigor. Important ethical concerns center on use of integrated technologies and implantable devices in humans, including considerations of long-term management of those shown to be effective through research. Advances in this critical area, at the completion of The BRAIN Initiative® and beyond, are at the heart of the goals of The BRAIN Initiative® – revealing mysteries of humans’ unique cognitive abilities but also helping us treat or prevent devastating consequences of brain dysfunction. 

Priority Area 7. From The BRAIN Initiative® to the Brain: Integrating new technological and conceptual approaches produced in goals #1‐6 to discover how dynamic patterns of neural activity are transformed into cognition, emotion, perception, and action in health and disease.

  • Many opportunities and goals listed in Priority Areas 1 to 6 hinge upon integration, making it likely that Priority Area 7 will see substantial growth during BRAIN 2.0 in several areas. These include: i) tools to integrate molecular, connectivity, and physiological properties of cell types; ii) connectivity and functional maps at multiple scales that retain cell-type information; iii) integration of fMRI with other activity measures and anatomical connections; iv) integration of electrophysiological and neurochemical methods; v) integration of perturbational techniques with other technologies; vi) more interactions between experimentation and theory; and vii) development of approaches and tools to integrate human data from different experimental approaches. These integrated approaches will truly advance The BRAIN Initiative® to an understanding of complex brain functions such as perception, emotion and motivation, cognition and memory, and action, and inspire new cures for brain dysfunction.

Organization of science

The second phase of The BRAIN Initiative® will benefit from careful consideration of issues that transcend the scientific areas and that are as important, or more so, to fulfilling its goals. These include data sharing, workforce development, technology dissemination, connecting basic research to disease models under study, and public engagement as it relates to ensuring diverse expertise in the second half of The BRAIN Initiative® and beyond. Sharing data and technologies both within and outside BRAIN Initiative collaborations is essential for extending the value of individual datasets by enabling re-use, but it must be strategically managed. Although The BRAIN Initiative® investment is substantial, its reach should be even greater as a result of widespread sharing of developed technologies and knowledge. These practices, although seemingly simple, are not always straightforward and require both incentives and dedicated resources. Such targeted investments will likely pay for themselves many times over, given the expansion of opportunity for new discoveries. Because cultural issues are central to sharing technology and data, putting into place appropriate reward, review, and expectation systems is vital to ensure that data are FAIR (findable, accessible, interoperable, and reusable). Moreover, there are some situations, such as those involving sensitive human data, in which universal data sharing is unwise because it could potentially cause harm to individuals.
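As a concrete illustration of what "findable, accessible, interoperable, and reusable" can mean for a shared dataset, the sketch below shows a hypothetical metadata record. All field names and values are invented for illustration and do not come from any actual BRAIN Initiative data archive; the report itself does not prescribe a specific format.

```python
# Hypothetical metadata record illustrating the FAIR principles
# (findable, accessible, interoperable, reusable).
# Field names and values are illustrative only.

dataset_record = {
    # Findable: a persistent identifier plus rich descriptive metadata
    "identifier": "doi:10.0000/example-dataset",       # placeholder identifier
    "title": "Example neural recording dataset",
    "keywords": ["neural activity", "circuit", "mouse"],
    # Accessible: where the data live and under what conditions
    "access_url": "https://example.org/datasets/123",  # placeholder URL
    "access_conditions": "controlled access for sensitive human data",
    # Interoperable: community formats and vocabularies
    "data_format": "community standard format (e.g., NWB)",
    "vocabulary": "shared ontology for cell types and brain regions",
    # Reusable: license and provenance so others can build on the data
    "license": "CC-BY-4.0",
    "provenance": "protocol, acquisition parameters, processing steps",
}

for field, value in dataset_record.items():
    print(f"{field}: {value}")
```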

WG 2.0 Major Findings

  • Our first major suggestion is that we encourage BRAIN 2.0 to stay on the productive path already underway, continuing support for technology development and targeted study of circuit components.
  • Our second major suggestion is to devote ample BRAIN Initiative resources to transformative projects of a similar scale.
  • Our third major suggestion is to encourage BRAIN 2.0 to consciously balance individual-investigator research with team science – as both levels of inquiry are vital for advancing our understanding of the brain.

As always in biomedicine, workforce issues are paramount. For The BRAIN Initiative®, enhanced diversity (both of scientific expertise and of background and experiences) will broaden the scope of inquiry and the generalizability of its applications to both tools and treatments. Increased emphasis on integrating theory and experiment, neuroscience and neuroethics, and basic and clinical approaches will likely pay good dividends for reaping value from this public research investment. In addition, maturation of BRAIN Initiative research into clinical applications will require highly collaborative, novel partnerships with the private sector. Finally, transformative approaches that directly engage the broad American public around biomedical research practice can amplify its goals substantially, and that engagement must occur at all levels of the population.

Transformative projects

Finally, as we look toward the second chapter of The BRAIN Initiative® – equipped with much more knowledge and powerful new tools – many fundamental questions remain unanswered about the brain, both in the context of health and in injury or disease. Recognizing progress and promise, BRAIN 2.0 is poised to take on a set of new grand challenges. Such “transformative projects,” some examples of which we have provided in this report, involve complex and multiscale lines of inquiry and will require new technological, scientific, and organizational inventions. Although these projects are risky in nature, their results could truly transform our intellectual and technological access to brain function and inspire new cures for brain disorders. Examples of transformative projects suggested in this report include generating and implementing methods to access, manipulate, and model clinically relevant cell types across species; achieving a comprehensive map of the entire mouse brain to enable study of brain circuitry from synapses to coordinated function and behavior; generating a comprehensive cell-type atlas in the human brain; achieving a circuit-level understanding of, and interventions for, a vulnerable circuit as a move toward protecting or correcting a major human neuropsychiatric disease symptom; and understanding how the brain retrieves and leverages information from internal models and memory systems.

Our newfound knowledge about the human brain comes with a pledge to consider many ethical issues that accompany neuroscience research – especially research that generates technologies that link humans and machines in intimate ways. Many of these new technologies, also used in novel combinations, will enable us to understand behavior in real-life settings, which will yield important knowledge about the brain and its circuitry’s roles in daily life as well as in disease.

Introduction

Among all the sciences, neuroscience holds a special place. As humans, we are intrigued by the brain and its central role in distinguishing us from all other living things. Through the experimental journey of studying the human brain, we are studying ourselves – and the questions are innumerable. How do electrical impulses, the coordinated output of nerve cells firing, pen the content for a great novel? Sear a harrowing memory from a traumatic event? Generate thrill from a rollercoaster ride? How do electrical impulses sometimes misfire and create misery and suffering – causing Alzheimer’s, schizophrenia, or depression? Identifying, measuring, mapping, and interpreting the activities of the 100 billion neurons that make up the human brain seems an impenetrable undertaking, especially since these electrically active cells are intricately connected in roughly 100 trillion ways. Yet it remains an irresistible challenge fueling the fascination of scientists and non-scientists across the globe.
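The figures quoted above also lend themselves to a quick back-of-the-envelope check: if roughly 100 billion neurons share on the order of 100 trillion connections, then an average neuron participates in about 1,000 of them. The short sketch below simply performs that arithmetic using the numbers given in this paragraph.

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
neurons = 100e9        # ~100 billion neurons in the human brain
connections = 100e12   # ~100 trillion connections among them

avg_connections_per_neuron = connections / neurons
print(f"Average connections per neuron: {avg_connections_per_neuron:,.0f}")  # about 1,000
```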

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, launched in 2013, has made remarkable progress toward understanding its tremendously complex namesake organ. Now at the halfway point of this endeavor, we look to the horizon with a renewed focus on using new tools and technologies to expand fundamental knowledge about brain biology. Understanding normal brain circuit functions will illuminate human nature and build a critical foundation for understanding the diseased brain and designing cures. Indeed, understanding brain circuits will be an engine for technological innovation with far-reaching implications for technology and medicine, as was the case with the Human Genome Project and the knowledge that followed.

Charting the course of The BRAIN Initiative®

Our aspirational quest to connect the functions of a physical organ with seemingly abstract concepts such as thinking, feeling, movement, and sensation appears a daunting task. How does this exquisite control happen, and what are the ethical and societal implications of both the quest for those answers and of any eventual findings? Beyond the structural challenges and the problem of scale – the sheer number of nerve cells, connections, and interconnections – is the question of identifying which variables matter. Studying the brain means being able to dissect and then reconstruct the activity of pathways, cells, and molecules at the dimensions and speeds at which these natural processes occur. Such biological programming has been sculpted by many millions of years of evolution; the time we have spent studying biology is only a tiny fraction of that.

Moreover, the human brain is one of the most complicated computational systems known to humankind. Simply recognizing this complexity, however, does not engender the ability to mimic such computational sophistication, nor prepare us for the implications of success. Once again, we are faced with the need to study ourselves, limited by the knowledge we have created and the best tools of the day. Now is a special time in the history of experimental neuroscience, with fast-paced progress on many fronts. For the first time, we have technologies and tools that seem matched to the challenge of understanding the brain. New and unprecedented methods are revolutionizing exploration from the nanoscale to brain- and organism-wide studies. We have observed rapid progress in molecular tools and analyses, benefiting significantly from large-scale science efforts that enable detailed and high-throughput study of not only genes, but also the structure and function of proteins, carbohydrates, and lipids. Thanks to the emergence of light-based optical control, synthetic-biology methods, and gene-editing tools, we have a newfound ability to observe biology in action, bringing us ever closer to studying life undisturbed, compared to using laboratory systems and approximate models.

A companion reality is that we are awash in data. What to do with the deluge of information pouring out from new technologies, as well as how to interpret it, remains a major challenge. How should we interpret human data and manage access to it, by whom and for what purposes, while addressing privacy concerns? Well-placed strategies that align data production, collection, analysis, and sharing are essential to reap maximal benefits from The BRAIN Initiative® investment.

BRAIN 2025: An ambitious vision 

“The BRAIN Initiative® is a challenge and an opportunity to solve a central mystery - how organized circuits of cells interact dynamically to produce behavior and cognition, the essence of our mental lives. The answers to that mystery will not come easily. But until we start, the progress we desire will always be distant. The time to start is now.” – BRAIN 2025: A Scientific Vision

The BRAIN Initiative® focuses upon transforming our understanding of how the nervous system processes massive amounts of information, in real time, to generate our experience of the world and our actions in it. Guided by broad input from the scientific community, this multi-year, multi-billion-dollar project is transforming the landscape of neuroscience research. Its initial focus, on cells and their organization into circuits, aimed to ready the field for further exploration through development of a wide range of technologies, noting that “The analysis of circuits is not the only area of neuroscience worthy of attention, but advances in technology are driving a qualitative shift in what is possible, and focused progress in this area will benefit many other areas of neuroscience.”

NIH Director Dr. Francis Collins convened a high-level working group of the NIH Advisory Committee to the Director, the BRAIN Working Group (WG 1.0), in April 2013 and charged them with reviewing recent advances in neuroscience; articulating short-, medium-, and long-term goals for achieving a scientific and ethical vision of The BRAIN Initiative®; and developing a scientific plan for achieving those goals. Answering the call, WG 1.0 reviewed the state of the field and identified key research opportunities. It held workshops around the country with invited experts to discuss technologies in chemistry and molecular biology; electrophysiology and optics; structural neurobiology; computation, theory, and data analysis; and human neuroscience. Workshop discussions addressed the value of appropriate experimental systems, animal and human models, and behavioral analysis, and each workshop included opportunity for public comments. WG 1.0 issued a preliminary report in September 2013, presenting high‐priority areas for research, before its final BRAIN 2025 report in June 2014.

In addition to endorsing the use of model systems to pave the way toward applications in humans, while recognizing urgency to improve the lives of people with severely disabling neural disorders, WG 1.0 also pursued a measured approach to investigate human neuroscience. As noted in BRAIN 2025: “BRAIN aims to discover general brain mechanisms, many highly relevant to brain disorders, but does not pursue clinically directed questions aimed at specific diseases and patients. Instead, the goal is to understand the basic mechanisms underlying the function of the healthy brain as a foundation to understand brain disorders.” Programming accomplished in the context of the NIH BRAIN Initiative from 2014 to the present is identified as “BRAIN 1.0,” while “BRAIN 2.0” represents upcoming programming from the present to 2026.

BRAIN 1.0: Strategies and principles

To date, NIH BRAIN funding has focused upon seven high-priority research areas, whose scientific goals are summarized briefly below. The core structure of this report is built around these seven scientific areas, with additional focus given to issues that circumscribe the science of science. Those include data acquisition, management, and sharing; technology dissemination; and human capital – they build upon the BRAIN 2025 Core Principles (see text box).

BRAIN 2025 Core Principles

  • Pursue human studies and non‐human models in parallel.
  • Cross boundaries in interdisciplinary collaborations.
  • Integrate spatial and temporal scales.
  • Establish platforms for preserving and sharing data.
  • Validate and disseminate technology.
  • Consider ethical implications of neuroscience research.
  • Create mechanisms to ensure accountability to the NIH, the taxpayer, and the community of basic, translational, and clinical neuroscientists.

  • Priority Area 1: Discovering Diversity: Identify and provide experimental access to the different brain cell types to determine their roles in health and disease
  • Priority Area 2: Maps at Multiple Scales: Generate circuit diagrams that vary in resolution from synapses to the whole brain
  • Priority Area 3: The Brain in Action: Produce a dynamic picture of the functioning brain by developing and applying improved methods for large‐scale monitoring of neural activity
  • Priority Area 4: Demonstrating Causality: Link brain activity to behavior by developing and applying precise interventional tools that change neural circuit dynamics
  • Priority Area 5: Identifying Fundamental Principles: Produce conceptual foundations for understanding the biological basis of mental processes through development of new theoretical and data analysis tools
  • Priority Area 6: Advance Human Neuroscience: Develop innovative technologies to understand the human brain and treat its disorders; create and support integrated human brain research networks
  • Priority Area 7: From The BRAIN Initiative® to the Brain: Integrating new technological and conceptual approaches produced in goals #1‐6 to discover how dynamic patterns of neural activity are transformed into cognition, emotion, perception, and action in health and disease

NIH in action

The major objective of The BRAIN Initiative® is to develop new tools and technologies and employ them in research aimed at understanding how networks of cells (e.g., circuits) in the brain generate behaviors. In the first 5 years of The BRAIN Initiative®, NIH created a bold implementation of the vision established by BRAIN 2025, ramping up this major initiative with remarkable speed while applying the ethical consideration, stewardship, and efficiency required for a taxpayer-funded endeavor. NIH has also led The BRAIN Initiative® community through collaborative engagement taking the form of annual investigator meetings, self-study, and several publications and reviews.

Supported by Congress through both the regular appropriations process and the 21st Century Cures Act, the NIH-funded BRAIN Initiative is a very large, multi-year investment. Through fiscal year 2018, the endeavor has funded more than 550 projects and hundreds of scientists across a substantial range of neuroscientific inquiry, including exploration of attendant neuroethical issues. This investment, totaling nearly $1 billion to date, is expected to ramp up considerably over the second half of the project. Less than half of the estimated $4.9 billion in lifetime funding has been spent; combined with the rich progress to date, this leads us to predict that the most productive period for The BRAIN Initiative® is yet to come.

In establishing The BRAIN Initiative®, NIH recognized the importance of oversight and stakeholder guidance. Shortly after the publication of BRAIN 2025, NIH created a Multi-Council Working Group (MCWG) composed of non-governmental representatives from each of the 10 NIH Institutes or Centers (ICs) that contribute to the initiative, as well as five at-large members. In addition, the MCWG includes ex officio members from the Defense Advanced Research Projects Agency (DARPA), the Food and Drug Administration (FDA), the Intelligence Advanced Research Projects Agency (IARPA), and the National Science Foundation (NSF) – the latter three of which are federal partners involved in The BRAIN Initiative® Alliance. The MCWG provides ongoing oversight of the long-term scientific vision of The BRAIN Initiative® in the context of the evolving neuroscience landscape. In addition, the MCWG ensures that each of the BRAIN IC Advisory Councils is informed about BRAIN awards and progress – a critical task, as the individual IC Advisory Councils perform the formal second level of review of BRAIN Initiative applications. Finally, the MCWG regularly offers an assessment of the progress of current projects and programs supported by The BRAIN Initiative®.

Building on the recommendations in Gray Matters, authored by the former Presidential Commission for the Study of Bioethical Issues, and on the goals for neuroethics described in BRAIN 2025, the neuroethics strategy for the NIH BRAIN Initiative emphasizes proactive, ongoing assessment of the neuroethical implications of the development and application of BRAIN-funded tools and neurotechnologies. Early on, the Initiative established an external NIH BRAIN Neuroethics Working Group (NEWG), comprising both neuroethicists and neuroscientists, to provide expert input on neuroethics. The NEWG is part of the Initiative's MCWG, and the synergistic input of these experts helps to ensure that neuroethical considerations are fully integrated into the Initiative.

BRAIN 2.0: The road ahead

BRAIN 2025 recognized that the fast pace and unpredictable path of neuroscience research would require that recommendations be re-examined as The BRAIN Initiative® progressed, noting “Ensure that the scientific vision of The BRAIN Initiative® is updated in response to new technological and conceptual advances that will be made over the course of the next 10 years.” NIH built upon the emphasis on technology development in BRAIN 1.0 and convened a new working group (WG 2.0) to revisit the 2025 report’s priorities through the lens of progress to date, rising scientific opportunities, and the new set of tools and technologies emerging from The BRAIN Initiative®. Charting the course for the second half of The BRAIN Initiative®, the WG 2.0 assessed progress and advances within the context of the original BRAIN 2025 report.

Strategic planning process and summary of public input

The BRAIN ACD WG 2.0 sought input from the broader neuroscience community and other BRAIN Initiative stakeholders through three principal means: a series of public workshops (mirroring the process of the BRAIN ACD WG 1.0), multiple town hall meetings, and two requests for information (RFIs).

The WG 2.0 held its first town hall in Bethesda, MD, at The BRAIN Initiative® Investigators Meeting in April 2018 to receive input from attendees about The BRAIN Initiative®’s progress and the process for its evaluation. Three public workshops were held between August and October 2018 across the country on the topics of Human Neuroscience; Looking Ahead: Emerging Opportunities; and From Experiments to Theory and Back. These three workshops included discussions of recording and stimulating the human brain, brain connectivity, translating from animal models to humans, tool development and dissemination, data-analysis strategies, development of unifying brain theories, and the role of team science. Each workshop included opportunities for public comment during the open session. Two additional town halls were held at Neuroscience 2018 – the annual meeting of the Society for Neuroscience – in San Diego, CA, and at the 2019 BRAIN Initiative Investigators Meeting in Washington, DC, so that members of the scientific community could voice additional ideas. These events also attracted health professionals, policymakers, professional societies, private entities, and members of the general public.

The NIH BRAIN Initiative programmatic strategy has been both effective and efficient, organizing the BRAIN 1.0 investment into eight research programs, each of which relates to one or more of the priorities and core principles of BRAIN 2025. This report presents the findings and analyses of WG 2.0 regarding the BRAIN Initiative investment to date and offers some specific suggestions regarding NIH activities within The BRAIN Initiative®. The WG 2.0 proposes that the ACD recommend to the NIH Director that NIH, specifically the NIH BRAIN Initiative, consider the findings, analyses, and suggestions in this report for incorporation into The BRAIN Initiative® research program. Some of our findings and suggestions may extend beyond the NIH mission or may require collaborative efforts with other federal agencies and organizations. In those cases, the WG 2.0 proposes that the ACD recommend to the NIH Director that NIH engage with broader stakeholder communities as necessary and appropriate to achieve outcomes consistent with the content of this report.

Neuroethics integration

Given the devastating impact of brain diseases upon individuals and society, we have an ethical imperative to conduct brain research and to develop technologies that may prove useful for ameliorating these diseases. In its pursuit of this quest to understand the human brain, The BRAIN Initiative® comes with a responsibility to identify and analyze potential ethical issues that might arise as a result of its investigations and to help to address their implications. In recognition, the authors of BRAIN 2025 established neuroethics as a key focus of discussion and research. They noted: “Although brain research entails ethical issues that are common to other areas of biomedical science, it entails special ethical considerations as well. Because the brain gives rise to consciousness, our innermost thoughts and our most basic human needs, mechanistic studies of the brain have already resulted in new social and ethical questions.”

For example, we are already in the beginning stages of learning how to affect some brain states that in so doing may alter an individual’s memories, agency, or identity – and thus potentially undermine privacy. How far to go to correct perceived health deficiencies – and with what implications and consequences – is not a simple calculus. Who makes these choices, using what ethical principles or approaches, and on what basis of the understanding of the minds of others? Such questions demand sophisticated and scientifically informed analyses, also recognizing that various cultures and societies may have different approaches to conceptualizing, addressing, and answering these types of questions. Furthermore, inequities in which neurotechnologies are developed, the study populations with whom the technologies are developed, and who reaps the benefit of these technologies or bears their risks are immediate concerns. In addition, as neurotechnologies become easier to use, the issues of nonclinical use and direct-to-consumer marketing of these technologies (including malign uses) require serious thought.

In addition to concerns related to human subjects and study populations, the necessity of research with animals for reaching the goals of the NIH BRAIN Initiative elicits other ethical considerations. Much of experimental neuroscience, and most of the research supported by BRAIN, involves laboratory animals. The ethics of animal research are of enduring concern to scientists, bioethicists, and the public, and are safeguarded by a regulatory context that includes grant review, review by mandated institutional committees, and regulation and inspection by the Public Health Service and the United States Department of Agriculture (USDA). We are aware that the advances generated by The BRAIN Initiative® may raise new ethical questions, and as such challenge current regulatory systems. A preponderance of evidence indicates that these systems have worked well in the past, and they are designed to have the flexibility to accommodate new questions and hard choices, whether those arise in the context of BRAIN 2.0 or in the broader neuroscience research initiatives funded by NIH and other federal agencies.

One particular challenge will be the likely increased value of research with non-human primates (NHPs) during BRAIN 2.0. Many of the technological advances of BRAIN 1.0, initially made in other species, are becoming widely available for use with NHPs. The enhanced cognitive capacity of these animals, and their relative physiological and genetic proximity to humans, make them valuable subjects for research aiming to illuminate principles of human-relevant cognition and biology. NHPs have a particularly important role to play in establishing models of human disease, because research based on other species has often failed to transfer to humans. Thus, research with NHPs is likely to be necessary to translate knowledge gained with other species to applications in humans. While the risks and harms to NHPs are not qualitatively different from those in research with other laboratory animals, their enhanced cognitive capacity is widely taken to imbue them with enhanced moral status compared to other animals used in research. This status is reflected in the substantially higher standards to which all aspects of research with NHPs, including that funded by the NIH BRAIN Initiative, are now held, compared to research with other species.

We do not anticipate that BRAIN 2.0 will by itself make a significant difference to the challenges of doing, regulating, or monitoring research with animals. Nor is it likely to qualitatively change the ethical considerations related to such work. But the combined advances from all areas of neuroscience call for continued vigilance and engagement between animal researchers, regulators, funders, and the public. The neuroethics machinery of BRAIN 2.0 can and should play a role in facilitating this engagement. Given the broad range of consequential ethical considerations touched upon by the NIH BRAIN Initiative, its second chapter must maintain vigorous efforts related to ethics. The aforementioned NIH BRAIN Neuroethics Working Group has held multiple public meetings and workshops directed at understanding BRAIN Initiative-related issues such as the use of human tissue and brain stimulation. This group also published a set of basic Neuroethics Guiding Principles to help BRAIN Initiative-funded researchers navigate many of the neuroethical issues associated with their work and has participated in the development of five neuroethics questions to guide neuroscientists. Importantly, the NIH BRAIN Initiative has also funded research grants devoted to understanding how neuroethics is used in research, as well as other related topics. In 2018, the NIH Director convened a subgroup of the BRAIN 2.0 WG, the NIH ACD BRAIN Initiative Neuroethics Subgroup (BNS), to develop a neuroethics roadmap for BRAIN 2025, taking into consideration any proposed updates to BRAIN 2025. The BNS held a series of meetings, conducted selected interviews, and hosted a workshop on neuroethical issues related to anticipated progress during the second half of BRAIN 2025 and potential long-term outcomes from brain research. The standalone roadmap that the BNS has developed, “The BRAIN Initiative® and Neuroethics: Enabling and Enhancing Neuroscience Advances for Society,” considers these issues in greater detail. In addition, given the integral role of neuroethics in neuroscientific endeavors, we highlight selected neuroethical topics throughout this report.

Overview of BRAIN WG 2.0 findings

BRAIN 2025 aligned its main focus of inquiry with the analysis of circuits and their many, dynamic components. We believe this to have been a wise choice, as the many advances arising from BRAIN 1.0 have in fact enabled us to accomplish far more than was possible before 2013, thanks to substantial, and in some cases unexpected, technology advances. As noted in BRAIN 2025, “Truly understanding a circuit requires identifying and characterizing the component cells, defining their synaptic connections with one another, observing their dynamic patterns of activity as the circuit functions in vivo during behavior, and perturbing these patterns to test their significance. It also requires an understanding of the algorithms that govern information processing within a circuit and between interacting circuits in the brain as a whole.” In particular, BRAIN 2025 Priority Area 1. Discovering Diversity, which sought to generate a census and taxonomy of cell types in the brain, has created a springboard not only for further understanding cell typology but for enabling new techniques – and prompting many new questions – in the purview of all the other scientific areas defined by BRAIN 2025. For our first major suggestion, we thus endorse staying on the productive path already underway, continuing support for technology development and targeted study of circuit components. Although the scientific landscape has changed in most areas, small but significant shifts that capitalize on specific advances can maintain progress. Certainly, as articulated in Priority Area 7. From The BRAIN Initiative® to the Brain, combining methods, knowledge, and new collaborative models will enable us to look more multidimensionally at brain function – toward a truly integrative science of cells, circuits, brain, and behavior.

One area where we believe The BRAIN Initiative® can push forward is by encouraging development of several large-scale projects – many involving integrative strategies. We predict doing so will generate knowledge and tools to begin organ-level analyses that incorporate molecular underpinnings and behavior. As we describe later in this report, development and implementation of The BRAIN Initiative® Cell Census Network (BICCN) is a model of success for what is possible through large-scale investments and creative assemblies of talent and resources. This investment laid the groundwork for many avenues of investigation – both at the level of individual laboratories and involving large, collaborative teams. Our second major suggestion is thus to devote ample BRAIN Initiative resources to transformative projects of a similar scale.

After substantive conversations with the scientific community, we assert that various models for team science exist and that, given the extraordinary complexity of the human brain, large-scale collaborative approaches are necessary. One area for focused attention centers on incentives and rewards for individuals to participate in an effort that differs from what has been the norm for most of the history of biomedical research. Yet careful planning is needed to maintain an environment that supports innovation and exploration by individual groups. Our third major suggestion is thus that BRAIN 2.0 consciously balance individual-investigator research with team science – as both levels of inquiry are vital for advancing our understanding of the brain.
As The BRAIN Initiative® continues, we support continued neuroethics efforts to aid in identifying and navigating concerns associated with topics such as generating new models for neurocircuitry and the ability to manipulate circuits through neuromodulation. We suggest that ethical analyses and neuroethics expertise be included in relevant BRAIN Initiative-funded research areas to minimize risks and optimize benefits for both individuals and society. Additionally, BRAIN research teams should continue to strive to be richly diverse in perspectives, backgrounds, and academic disciplines, and should provide full opportunity and participation to individuals and groups underrepresented in neuroscience. Another related and important dimension of inclusion and diversity is the recruitment of diverse participants in research involving humans, to ensure that benefits from such studies can have wide applicability.

Finally, we note key points that were mentioned in BRAIN 2025 but may deserve added emphasis in BRAIN 2.0:

Importance of behavior: The brain evolved to control behavior. Therefore, beyond mapping purely sensory and perceptual representations, brain-activity patterns will be difficult to understand without understanding their relationship to meaningful behavioral responses. Yet most commonly used behavioral paradigms are too constrained to provide rich enough information for a sophisticated investigation of circuit function and dysfunction. Further, advances in machine learning have created new opportunities to quantify behavior objectively and consistently. This is particularly evident for monitoring naturalistic behaviors in freely moving animals in a manner that is rapid, robust, and reproducible (a minimal illustrative sketch follows these points). We therefore suggest support for the development of rich behavioral paradigms and new and improved methods of quantitative behavioral analysis, as well as for integrating such methods with those that record neural activity.

Importance of subcortical structures: Many essential brain functions and dysfunctions are subcortical. Nevertheless, most work on mammalian nervous systems has been cortico-centric, especially in humans and NHPs. Consequently, subcortical circuits in primates have been relatively understudied, with the exception of a few areas such as the amygdala. We therefore suggest support for broader study of subcortical circuits, with particular relevance to NHPs and humans.

Importance of model organisms: While The BRAIN Initiative®’s ultimate objective is to understand the human brain, genetically tractable model organisms such as roundworms, fruit flies, and zebrafish embryos have been and will continue to be key for revealing general principles of nervous-system function. These model systems often enable inquiry at a fraction of the cost required to study mice and NHPs and, given the advent of CRISPR/Cas9 genome-editing technology, are even more powerful than in the recent past. We therefore suggest support for the study of nervous-system function in non-mammalian model organisms (including invertebrates), as well as in mammals, together with support for direct study of the human nervous system.
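
As a concrete illustration of the quantitative behavioral analysis discussed above, the sketch below clusters sliding-window features derived from pose-tracking keypoints into candidate behavioral motifs. The keypoint array, window length, feature choices, and number of motifs are illustrative assumptions only, not a description of any BRAIN Initiative-funded pipeline.

```python
# Minimal sketch: unsupervised discovery of behavioral "motifs" from pose data.
# Assumes `keypoints` is a (n_frames, n_keypoints, 2) array of tracked x/y
# positions from a freely moving animal (placeholder random data shown here).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
keypoints = rng.normal(size=(3000, 8, 2))          # stand-in for real tracking output

# Simple per-frame features: keypoint velocities and overall posture spread.
velocity = np.linalg.norm(np.diff(keypoints, axis=0), axis=2)   # (n_frames-1, 8)
spread = keypoints.std(axis=1).reshape(len(keypoints), -1)      # (n_frames, 2)
features = np.hstack([velocity, spread[1:]])

# Pool features over short sliding windows so each sample describes a brief bout.
window = 15                                        # ~0.5 s at 30 fps (assumed)
n_windows = features.shape[0] // window
windowed = features[: n_windows * window].reshape(n_windows, -1)

# Cluster windows into candidate motifs; six motifs is an arbitrary choice here.
motifs = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(windowed)
)
print(np.bincount(motifs))                         # windows assigned to each motif
```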

Structure of this report

We have structured this report around the seven scientific Priority Areas identified by BRAIN 2025. Each of these constitutes a chapter that provides a brief description of how the Priority Area fits into the goal of understanding circuits; reviews accomplishments to date in context of the BRAIN 2025 short- and long-term goals; identifies gaps and opportunities; and suggests revised short- and long-term goals. Notably, based upon the statements above calling for continued study in many areas, we have in most cases characterized what is new and what should be ongoing activity. Next, in “Priority Area 8. Organization of Science,” we frame these scientific directions with a discussion of overarching topics that affect all areas of science. These include sharing and using BRAIN Initiative technologies; data management and sharing; scientific workforce-related considerations; public engagement strategies; and connecting basic research to disease models under study. We conclude by offering ideas for transformative projects, which all involve complex and multiscale lines of inquiry. We believe that providing examples would help articulate our vision for out-of-the-box approaches to truly transform neuroscience inquiry on our quest to understand the amazing organ that is the human brain.

Priority Area 1: DISCOVERING DIVERSITY

This BRAIN 2025 Priority Area aims to generate a census and taxonomy of cell types in the brain – a master “parts list” – and with BRAIN Initiative support, progress in this area has far exceeded expectations. This avenue of research is both foundational and enabling. A census of brain cell types is foundational because it provides a unique framework to understand brain function and development and raises fundamental issues about how cell diversity relates to function:

•    How do the brain’s diverse parts relate to one another?
•    How do the parts, alone and together, contribute to function?
•    Is the cell the basic unit for defining function, and ultimately, for defining behavior and dysfunction?
•    What is a “cell type” in the brain? Is such a concept even useful?

This research is enabling because it will form the basis for the development of novel technologies to visualize cell connectivity and to manipulate cell function in models of human conditions. In turn, advances in cell access will create a powerful new platform for understanding and curing brain disorders. The study of cell types as units of brain dysfunction will open doors to new therapeutic approaches beyond genetic manipulations by offering an opportunity to influence circuits as interventional targets.

Understanding brain cell types: Fundamental insights for discovery

Neurons differ with respect to many observable characteristics, or phenotypic features: cytoarchitecture, connectivity, location, electrophysiological properties, genetic make-up, and transcriptomic characteristics. Moreover, neurons and their functional integrity are highly influenced by the billions of supporting cells known as glia. Must all these features align to define a cell “type?” Single-cell RNA sequencing (scRNAseq) has revolutionized the characterization of cellular diversity through broad analysis of gene expression (transcriptomic profiling) of individual cells, in a scalable and high-throughput way. By contrast, methods to characterize other phenotypic features remain less efficient and are thus not yet applied broadly across brain cell types.
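
To make the scalability of transcriptomic profiling concrete, the sketch below applies standard hierarchical clustering to a toy single-cell expression matrix. The matrix sizes, normalization, and cut depth are illustrative assumptions and do not represent any specific BICCN workflow.

```python
# Minimal sketch: hierarchical clustering of a toy single-cell expression matrix.
# Real scRNAseq pipelines add quality control, feature selection, and graph-based
# clustering; this only illustrates the basic "cells x genes -> cluster tree" step.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_cells, n_genes = 500, 200                       # toy sizes (assumed)
counts = rng.poisson(lam=2.0, size=(n_cells, n_genes)).astype(float)

# Log-normalize each cell by its total counts, a common preprocessing step.
lognorm = np.log1p(counts / counts.sum(axis=1, keepdims=True) * 1e4)

# Build a hierarchical tree over cells and cut it at an arbitrary depth to get
# "leaf" clusters analogous to fine-grained transcriptomic types.
tree = linkage(lognorm, method="ward")
leaf_clusters = fcluster(tree, t=20, criterion="maxclust")
print(f"{leaf_clusters.max()} clusters from {n_cells} cells")
```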
Thus, the question arises: Should a cell’s transcriptomic profile become a surrogate and sufficient definition of its type, simply because measuring the transcriptome is faster and easier than measuring other phenotypic features? In the retina, for example, there is excellent congruence between transcriptomic type and phenotypic features defined by other measures. However, the retina may be a special case, since it is a sensory “chip,” and we do not know whether such congruence will hold for the many other cell types in the brain. Thus, we face both power and limitation from technological advances in single-cell sequencing: Do transcriptomic and genomic features align with other neuronal phenotypic characteristics, and if not, how should we integrate information about other characteristics into our view of neuronal identity?

A second major issue, raised by scRNAseq technology and data, is: Why does the brain need so many cell types, and how much detail is necessary to understand function? Relying on one technology, such as high-dimensional transcriptomic analyses, provides very fine detail about clusters of cell types. The visual cortex, for example, contains more than 100 transcriptomic clusters at the finest “leaves” of a hierarchical, branched tree. Theoretical neuroscientists have been able to successfully mimic many aspects of brain function with artificial neural networks composed of only a single type of generic electronic neuron. What additional explanatory power, efficiency, or computational flexibility do we gain by incorporating the specialized firing dynamics of precisely subclassified cell types into such models, and at what level of granularity? Resolving these issues will require new experimental methods, new computational algorithms for integrating and curating data on different phenotypic characteristics, and closer interaction between theoretical and experimental neurobiologists. It is likely that these answers will transform our understanding not only of brain function, but also of cellular diversity in other organs and tissues.

Toward cures: Future platform for therapeutics

It is well established that certain brain disorders, particularly neurodegenerative diseases, affect specific neuronal cell types and/or surrounding glial cells. For example, Parkinson’s disease causes selective degeneration of dopaminergic neurons in the substantia nigra; Huntington’s disease targets medium spiny neurons in the striatum; and in Alzheimer’s disease, some of the key genetic risk factors are expressed in glial cells, which in turn affect circuit function. This raises two important and related questions: i) What is the cellular and molecular basis of such selectivity? And ii) Do other brain disorders, including neuropsychiatric disorders, affect specific cell types? For example, are depression, autism, or schizophrenia diseases of specific cell types, and if so, which ones? Our ability to understand and distinguish cell types in healthy brains compared to diseased brains may answer this fundamental question as well as introduce targeted and precise therapeutic approaches with potentially fewer side effects.

BRAIN 2025 vision: Generate a first-draft cell census and develop enabling technologies

Cell census

How many cell types are there in the brain?
This deceptively simple question may not have a single answer; rather, the answer depends on the definition of “cell type.” Neuronal and non-neuronal cells vary widely in their phenotypic characteristics, and the various cell types differ from one another in morphology, electrophysiology, transcriptomic features, and other dimensions. The first challenge posed by BRAIN 2025 was to achieve a consensus set of criteria for defining neural cell types, which would encompass multiple facets of cellular identity. This set of criteria would first be established, in mouse cells, in one or two relatively well-characterized systems, such as the retina and spinal cord. Achieving such an objective would, in turn, require development of new technologies for multiplexing and integrating measurement of neuronal phenotypic characteristics (e.g., transcriptomic, electrophysiological, anatomical). If that objective were achieved, then BRAIN 2025 proposed to extend the approach to a few additional selected areas in the central brain (also in mice), toward building a census of cell types in other organisms, including NHPs and humans, as well as in certain genetically tractable model organisms (zebrafish, fruit flies, and others). At the time, in 2014, before the advent of high-throughput scRNAseq, achieving these objectives would have been considered a success for the first 5 years of the NIH BRAIN Initiative.

Technologies, reagents, and datasets

BRAIN 2025 anticipated that datasets generated from building the neuronal cell census would be large, diverse, and high-dimensional. This expectation pointed to a need for publicly accessible databases for annotation, curation, and storage of such data in a manner that allowed not only data retrieval but also searches and other computational analyses by the broader research community. Such an effort would require concerted collaboration between those generating the data and data analysts, software and database engineers, user-interface developers, and computational scientists. BRAIN 2025 also outlined the need for reagents, or markers, to identify specific neural cell types. Although cells can be transcriptionally profiled using RNA fluorescent in-situ hybridization (FISH), discordance between mRNA levels and protein levels – along with the inability to use FISH in living cells – limits this technique on its own. BRAIN 2025 thus called for development of antibody reagents to identify cell types, emphasizing cross‐species reactivity (rodents, non‐human primates, humans) and immunohistochemical applications. Monoclonal antibodies to cell-surface epitopes, in particular, offer advantages due to their ability to identify and isolate living cells for electrophysiological or developmental studies, as well as to purify in vitro-generated cell types for transplantation.

Manipulation and perturbation

In genetically tractable organisms (e.g., mice, zebrafish, fruit flies), transgenic approaches that enable selective targeting of specific cell types (e.g., Cre lines of transgenic mice) offered opportunities for cell manipulation toward understanding cell function. In parallel, BRAIN 2025 highlighted a need for new technologies to allow experimental access to specific cell types across species. In particular, it emphasized methods that do not require germline genomic modification and that could potentially be applied to humans for therapeutic purposes, as well as to animal models not amenable to genetic manipulation (rats, NHPs).
Such methods could be used to drive reporter or effector gene expression directly via replication-incompetent viral vectors. They include, for example, compact, cell type-specific cis-regulatory DNA modules delivered by viral vectors such as adeno-associated virus (AAV); CRISPR/Cas9-based methods for homologous recombination into cell type-specific endogenous gene loci in post-mitotic neurons; or cloned monoclonal antibodies to cell type-specific surface antigens for pseudotyping enveloped animal viral vectors (e.g., lentivirus). To identify cell types with sufficient specificity, BRAIN 2025 also anticipated a need for intersectional methods that assess expression of multiple gene markers simultaneously. BRAIN 2025 projected generating a “first-draft” cell-type census spanning the entire mouse brain and spinal cord, along with a suite of associated reagents (e.g., transgenic mouse lines, compact cis-regulatory DNA modules/enhancers), permitting access to at least 200 different cell types in the mouse brain, as well as complete access to all cell types in selected regions (e.g., the retina). BRAIN 2025 also anticipated generating a first-draft census of cell types for selected brain regions in NHPs and humans. Achieving these objectives would enable cell type‐specific optical imaging and optogenetic perturbations in multiple mammalian species, including NHPs, and in human tissue. An aspirational goal for this time frame was to achieve proof‐of‐principle, cell type‐specific targeting for therapeutic manipulation in humans, independent of germline genomic modification.

NIH funding to date: Discovering Diversity

NIH has implemented its vision of an integrated census of brain cell types in three stages. NIH first funded 10 pilot projects in a 3-year pilot phase. These projects took diverse approaches to characterize cell types in the brain, and investigators worked closely together, collectively generating more than 50 publications during this period. Encouraged by this progress, NIH launched phase 2, a coordinated set of awards organized as The BRAIN Initiative® Cell Census Network (BICCN), aiming to develop by 2021 a comprehensive atlas of cell types in the mouse brain, as well as to advance techniques for cell-type mapping in human and NHP brains. NIH anticipates that lessons learned from the mouse cell-census program and the coordinated work on studying larger brains will enable an increasing focus on NHP and human brains in preparation for phase 3 of the program, beginning around 2022.

Where are we now in defining the brain parts list?

The Priority Area 1. Discovering Diversity component of BRAIN 2025 is a model of success. Thanks to timely development of key, transformative technologies, many of the specific stated goals within this Priority Area have been achieved and, in some cases, substantially exceeded.

Transcriptional profiling

The advent of high-throughput ways to profile large numbers of single cells at the molecular level has enabled extensive single-cell, molecular “atlasing” in an increasingly large number of brain areas and in the retina. This has generated vast datasets for comprehensive analyses that will help construct a theoretical framework of cell diversity. One concrete and important deliverable is an open‐access database of integrated information with computational search tools, which has built a successful pipeline for collection, standardization, analysis, storage, and distribution of data.
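
The integrated, searchable database described above is, at its core, a pipeline that standardizes heterogeneous records and makes them queryable. The sketch below shows one minimal way such multimodal cell records and a simple search could be represented; the field names, example entries, and URIs are hypothetical and are not a real schema from any BRAIN Initiative resource.

```python
# Minimal sketch: a standardized multimodal record for a profiled cell and a tiny
# search helper. Field names and example values are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CellRecord:
    cell_id: str
    species: str
    brain_region: str
    transcriptomic_cluster: str                    # e.g., label from an scRNAseq atlas
    morphology_uri: Optional[str] = None           # link to a reconstruction, if any
    ephys_uri: Optional[str] = None                # link to electrophysiology data, if any
    modalities: set = field(default_factory=set)   # which facets were measured

def search(records, *, region=None, cluster=None, modality=None):
    """Return records matching any combination of region, cluster, and modality."""
    hits = []
    for r in records:
        if region and r.brain_region != region:
            continue
        if cluster and r.transcriptomic_cluster != cluster:
            continue
        if modality and modality not in r.modalities:
            continue
        hits.append(r)
    return hits

atlas = [
    CellRecord("c001", "mouse", "V1", "L5-ET-1", modalities={"rna", "morphology"}),
    CellRecord("c002", "mouse", "MOp", "Pvalb-2", ephys_uri="s3://example/c002",
               modalities={"rna", "ephys"}),
]
print([r.cell_id for r in search(atlas, region="MOp", modality="ephys")])
```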
Additional technologies that enable spatial mapping of identified cell types will complement existing findings significantly over the next few years. At this point, integrative approaches to match molecular identity with positional information, connectivity, and physiological properties of single cells are ongoing and will play a key role in understanding how transcriptomic data relate to other phenotypic characteristics of neural cell types.

Cell-type diversity

We now have several examples of the logic of cell-type diversification in different regions of the rodent brain, as well as in other organisms. Collectively, millions of single cells have been molecularly profiled in the mouse motor cortex, V1 visual cortex, hypothalamus, and retina. In addition, several regions in the adult mouse brain have been profiled at lower coverage, which has also highlighted phenotypic attributes of glial cells. As noted, however, transcriptomic phenotyping cannot yet be considered a surrogate measure of neuronal identity that correlates with and predicts all other cell characteristics, because there are clear cases in which differences in axonal projections do not coincide with transcriptomic differences. There are likely other circumstances in which knowledge lags behind technology – and thus we don’t know what we don’t know.

Integrative technologies

Work in other organisms is underway, most prominently in human and marmoset brains, and large single-cell datasets have emerged from BRAIN Initiative-funded research. We are now poised to learn more about additional regions of the adult brain and about various time windows in human brain development. Also now available are technologies that integrate molecular information with connectivity and positional and physiological information, which will move us closer to investigating cell diversity in the context of circuits. Recent in-situ sequencing technology advances, including MERFISH, seqFISH, STARmap, and others, have enabled positional, transcriptional mapping of vast arrays of cell types in tissue at single-cell resolution. These efforts will no doubt continue to be central contributors toward atlasing cell types, as well as toward linking cell identity to connectivity and function. New integrative technologies that assess multiple identity parameters of a cell, at scale, include patch-seq (for sampling electrophysiological and transcriptional properties of cells), as well as MAPseq and MERFISH (for integrating connectivity and cell-specific transcriptional information).

Advanced tissue labeling and imaging

Further, advances in imaging and labeling allow more detailed mapping of cells in intact, three-dimensional tissue. These technologies are also now being integrated with high-throughput single-cell sequencing approaches. Study of the epigenomic landscape is now possible with ATAC-seq and snmC-seq, which allow mapping of chromatin accessibility and methylation patterns at single-cell resolution. Such approaches can be combined with RNAseq data to reveal simultaneous single-cell transcriptomic and epigenomic patterns.

Goals unmet: Cell-type targeting and protein-analytical tools

Technology is urgently needed to achieve cell type-specific targeting methods that are applicable and effective in the human and NHP brain, as well as in other traditional neuroscience model organisms such as the rat. A current limitation in cell-type analysis is a lack of antibodies that recognize cell types.
Similarly, genetic tools to recognize and manipulate cells with class specificity remain modest in scale. In this arena, perhaps the largest concerted effort was put into place by the Allen Institute for Brain Science, through its generation of a bank of Cre lines in primary visual cortex (area V1). While an important resource, these studies reveal a limitation of the approach in that each Cre line typically labels multiple transcriptomic cell types. More tools are needed to investigate a more comprehensive set of cortical and subcortical regions, as well as to expand genetic access to specific cell types. In most cases, this will require flexible, intersectional genetic methods involving multiple recombinases or entirely new genetic approaches, such as those that build on CRISPR/Cas9 and that scale production of new Cre and Flpe lines. Complementary knowledge emerging from single-cell profiling endeavors should provide an opportunity to identify cell type-specific promoter regions and enhancers to drive expression in specific cell types via viruses. The ability to manipulate viral serotypes or pseudotypes will help to expand genetic access in multiple species, most prominently in human and NHP tissue. Refining those techniques may benefit from use of brain organoids or fresh brain slices, and in some cases, human in-vitro systems – all of which may warrant close examination and possible re-evaluation of existing ethical frameworks in the context of evolving methodologies and brain models. Beyond use in characterizing cell-type protein distribution, antibodies are also needed to purify, tag, and target specific cell types in species that are not accessible genetically.

Gaps and new opportunities for BRAIN 2.0

The success of the brain-wide transcriptional cell census in mice has introduced a vast number of exciting new opportunities that hinge upon access to newly transcriptionally defined cell populations in the various conditions in which they exist in animals and in humans. Further, though comprehensive and powerful, the current transcriptionally based cell census requires application of additional, independent multimodal methods that integrate physiology, anatomy, connectivity, and function. The multimodal definition of cell types that will arise from these efforts should inspire and inform theory and cell type-based models of circuit function. Other approaches that will facilitate rich study of human cells and circuits in health and disease include new technologies for molecular profiling with single-cell resolution that are now applicable to the human brain; three-dimensional cellular models of the human nervous system (i.e., brain organoids/assembloids); methods to label and manipulate human cells; and cross-species comparisons. Moving forward, several opportunities exist for BRAIN 2.0, reflecting a balance of new directions and continued activity.

1. Development of new tools and technologies

1.    Tools to integrate molecular, connectivity, and physiological properties of cell types. Now available are large datasets of single-cell transcriptomic and epigenomic data, as well as in-situ multimodal profiling tools (patch-seq, MAPseq, MERFISH, seqFISH, STARmap), offering an opportunity to integrate data at single-cell resolution, at scale. One example might be bridging cell identification with connectivity through combined cell barcoding and in-situ sequencing. Similarly, tools to integrate data from single-cell transcriptional, epigenetic, and proteomic profiling may be developed.
Additionally, a critical next phase of cell-type analysis will be to combine large-scale (whole-brain) single-cell reconstructions based on light microscopy with molecular characterization (i.e., mFISH) of all cell types. In addition to their intrinsic value, tools to link transcriptomics and cell structure would create a critical bridge between transcriptomics and EM connectomics, which are otherwise technically incompatible.

2.    Tools to access a large number of defined cells with class specificity. Transitioning from our current understanding of cell diversity to defining cell function and enabling higher-order understanding of cell-cell interactions and circuits requires the development of multiple ways to target gene expression in specific cell types and in multiple species. This should include neuronal as well as non-neuronal cells (glia, neurovascular elements) and the peripheral nervous systems (autonomic, sensory, enteric).

3.    Technology to merge activity and molecular maps. Use of activity reporters (genetically encoded calcium indicators, among others) and multiplex FISH will enable layering functional information onto transcriptional maps of cell populations in a specific brain area.

2. Generate a protein-based understanding of, and protein-based access to, cell types. Proteins, not gene transcripts, are the functional components of healthy and diseased cells, and cell types may be better defined by proteins than by gene transcripts. Methods to quantify, compare, and differentiate protein catalogues may yield novel strategies for accessing cell types across species for selective manipulation. It is therefore imperative to move beyond a solely transcriptionally based cell atlas and generate a protein-based description of key cell types that includes quantification of the subcellular distribution of proteins in different cell types. Moreover, abnormal variants rather than missing genes often underlie disease states. Thus, while most molecular cell-census efforts have so far been based on cell transcriptomics, an essential next step will be to gain a precise understanding of the expression of specific transcript and protein variants in different cell types and to assess their subcellular localization. In addition, new approaches should be developed to study proteins in native environments rather than in vitro; combined with the development of specific antibody reagents, this will enable anatomically resolved, large-scale in-situ investigations of proteins, including variants. The compatibility of nanobodies with electron microscopic (EM) reconstructions and live-cell analyses makes significant expansion of these tools particularly attractive.

3. Exploit cell-type information to understand and modulate circuits. Cell-census results provide a new platform for comprehensive connectivity mapping and functional targeting of brain circuits. Integration of the various cell-type phenotypic characteristics is important for answering fundamental questions, such as whether transcriptionally defined cell types define basic units of functional specialization in the nervous system, or whether they reflect some other axis of biological identity, such as developmental specification based on genetically based patterning. Critically, through direct collaborations between experimentalists and theorists, cell-type information, both neuronal and non-neuronal, can now be integrated into theories of brain function.
Currently, insufficient cross-talk between experimental data from multicellular regions (e.g., visual cortex) and modeled data from neural networks built with a few simplified cell types hinders a deeper understanding of the impact of cellular diversity and specificity on circuit function. Expanded grounding in theory will help high-precision, technology-driven data collection yield a better understanding of biology and of how neural networks compute.

4. Expand study of human brain biology. New technologies for single-cell, unbiased analyses of many cell types, combined with improved access to cells that does not rely upon germline modification, offer an opportunity to study the postnatal human brain and, in parallel, to study accessible stages of the human brain throughout key developmental windows – for example, through use of human brain samples collected from a large spectrum of individuals and brain specimens collected by neurosurgeons. Additionally, there is a need to develop and employ non-rodent models of the human brain. Genetic studies of psychiatric and prominent neurodevelopmental conditions (e.g., schizophrenia, bipolar disorder, autism) indicate complex polygenic etiologies in the vast majority of individuals. Understanding these genetic contributions will require use of human models. Brain organoids, derived from human pluripotent stem cells, offer an opportunity to study human brain tissue. While these quasi-cellular systems are still primitive, and substantial progress will be required to model complex aspects of human circuit functionality and disease, they represent an opportunity to study aspects of human brain development and functionality that would otherwise not be accessible.

5. Determine whether cell types are a fundamental unit of etiology and pathophysiology, as well as whether they may serve as potential targets for therapies in human brain disorders. We already know that certain neurodegenerative diseases are associated with the death of specific cell types (e.g., Parkinson’s disease, amyotrophic lateral sclerosis, and others). Can other types of neurological disorders, especially neuropsychiatric disorders such as schizophrenia or bipolar disorder, be similarly traced to malfunctions of specific cell types? Given precise new knowledge of cell types in the mouse brain, more genetically accurate animal models of brain disorders may be able to determine whether disease processes affect certain cell types and not others – and in turn, whether cell type-specific, circuit-level approaches can ameliorate or reverse symptoms in some of these models. An essential step in this direction is to use evolutionary principles and cell-type analyses from additional (non-mammalian) animal models to probe connections between cell types and disease manifestations or specific behaviors. Comparing neuronal and non-neuronal cell types across species (from humans to mice, fish, and other model organisms) requires close collaborations with experts in various model systems and with theorists. Such comparisons may uncover relationships between cell types, circuit function, and disease states.

Suggested short-term goals for BRAIN 2.0:

1.    Establish a data ecosystem for cell types allowing the integration of different facets of neuronal phenotype. We currently lack robust and scalable methods to integrate complex data reflecting distinct features, or facets, of brain-cell phenotypes.
Having this information would likely be transformative for neuroscience – akin to the human genome sequence for biomedicine. Such a system might include appropriate ontologies/nomenclature and data formats, storage, and access infrastructure, and should be dynamic enough to accommodate new data types and improved analysis techniques as they evolve. This technology is important because, as noted above, it is not yet clear that scRNAseq data alone can be used as a surrogate measure of cell type in the central brain, independent of morphological or physiological properties (as it can be in the retina). Achieving a consensus definition of cell types in the brain will therefore require a computational framework that integrates different data modalities in a quantitative manner and is consistent with current FAIR (findable, accessible, interoperable, and reusable) data-science principles. Accomplishing this goal requires funding to support a balance of dedicated leadership, project management, and data-science expertise (computation, data infrastructure, software, database engineering, ontologies, machine learning), as well as incentive structures that reward valuable data-management and data-sharing practices. Investment in such computational resources will maximize the value of ongoing breakthroughs in scRNAseq, imaging, and other technologies, helping to establish the relationship between molecular/genetic, anatomic, and physiological facets of brain-cell identity. As with other potentially sensitive collections of large amounts of data, access should be governed by a set of appropriate policies that reflect well-considered ethical principles and attend to how human brain data, and the privacy of the participants from whom data are acquired, can be protected both in use by those collecting the data and in secondary uses.

2.    Produce a consensus typology/taxonomy of brain-cell types. Fundamental multimodal knowledge of cell types should facilitate an informed consensus for defining brain cell types. Such knowledge should be applicable to all brain regions, the spinal cord, and the peripheral nervous systems (sensory, autonomic, and enteric) and should also include vascular and other non-neuronal cell types. Generating a fully anatomically informed cell typology with high granularity across the brains of mice, zebrafish, flies, worms, and other genetically tractable model organisms is both valuable and doable. An anatomically resolved, three-dimensional atlas of the mouse brain in particular would provide a framework to study cell-cell interactions and neural circuit function. Finally, cell-type profiling during nervous-system development will facilitate understanding of how the emergence of specific cell-type identity correlates with function and connectivity.

3.    Enable genetic and non-genetic access to cell types across multiple species. Molecular profiling in cortical and subcortical brain areas shows that individual cell populations are often defined by combinations of markers, suggesting that simple intersectional genetic approaches are insufficient to visualize or functionally interrogate specific cell types. We thus need new tools and strategies to access cell types across multiple species, using a range of genetic and non-genetic methods without germline modification that can be applied to virtually any mammalian species. These tools will enable systematic mapping of activity and connectivity and reveal other circuit-relevant information.
Tools may include use of enhancers, CRISPR/Cas9-based gene-editing methods, viral pseudotyping, and serotyping. These approaches should enable anatomical access (via forward (anterograde) and reverse (retrograde) labeling), as well as the ability to study and collectively manipulate large numbers of cell types as fundamental units of etiological and pathophysiological significance.

4.    Employ cell-census data to update and test models and theories of neural circuit function. Recent strides in cell-type identification have only just begun to penetrate theoretical neuroscience research, calling for more plausible theoretical tools and platforms to integrate cell-specific information into broader models of circuit function (see Priority Area 5. Identifying Fundamental Principles).

5.    Develop protein labels, especially those with cross-species applicability. Significant advances in nucleic-acid labeling (e.g., MERFISH/STARmap/seqFISH) point to the importance of concomitant development of protein labels for major cell-type markers of various types. These include chemically based protein labels and combination labels that permit integrative analysis of nucleic acids and protein variants in a selected set of 20 to 50 functionally important cortical and subcortical brain areas, including parts of the human brain.

6.    Create cell reconstruction, connectivity, and functional maps at multiple scales while retaining cell-type information. For example, electron-microscopy reconstructions should include contrast agents or immunolabeling that preserve membrane structure. Various fluorescence methods may also advance multiplexed labeling. Single-cell reconstructions, which provide critical information about cell types and how cell types fit into circuits, should be discoverable in EM data sets to link transcriptomes to cells identified in EM reconstructions.

7.    Extend single-cell, multimodal profiling to additional species, including NHPs and humans. Analysis of functionally important cell types in species that reflect variable evolutionary distances from humans should employ several phenotypic approaches (transcriptional, morphological, connectivity, functional) and also address measuring variability according to state. A cell census of carefully selected NHP and human brain areas and regions that encompass functionally important cortical and subcortical areas of the healthy brain, at multiple stages of postnatal development, is feasible. These human data should be integrated with the more complete mouse cell census. An initial focus on the healthy brain will provide the solid platform needed to understand various diseases.

Suggested long-term goals for BRAIN 2.0:

1.    Integrate cell-type data platforms for theory development. Achieving this goal will bring together neurobiologists, computational scientists, and theorists to generate emerging models of brain development, function, and disease.

2.    Create an anatomically resolved census of the whole brain, in 6 to 10 species, with high granularity and genetic and non-genetic access. Achieving this goal will enable multidimensional study of key cortical and subcortical cell types and help discover principles/parameters that define cross-species homology of cell types.

3.    Support development of three-dimensional cellular systems modeling the human brain (organoids/assembloids).
Experimental systems that provide human cell specification and diversity (neurons, glia, and vasculature), circuit formation, integration, and plasticity may enable reliable and effective modeling of defined aspects of brain development that would otherwise be inaccessible for ethical and practical reasons. Although currently derived brain organoids are still primitive, they nonetheless offer an unprecedented opportunity for experimental access to aspects of the developing human brain and for study of key developmental processes affected by human genetic diversity and disease. Current organoids/assembloids have substantially less circuit complexity than the brains of worms or typical computer circuits, but much greater complexity is conceivable in the future. With such complexity comes the potential for development of sophisticated circuit functions, raising new ethical considerations. Systematic research by interdisciplinary teams of ethicists and scientists may be needed for proper ethical consideration of these models.

In summary, progress in Priority Area 1. Discovering Diversity has been faster than anticipated, enabled by advances in high-throughput technologies and analytical methods. New opportunities for BRAIN 2.0 include expanding cell-type profiling and data analysis to integrate measurements of additional phenotypic features of neuronal and non-neuronal brain cells; generating a protein-based understanding of, and access to, cell types; enabling genetic and non-genetic access to cell types across multiple species; expanding human cell biology; and developing cell type-based models of circuit function. At the completion of The BRAIN Initiative®, we expect that current and additional progress in this area will clarify, perhaps even define, the contributions of distinct cell types to circuit function and its physiological and pathological sequelae.

Priority Area 2: MAPS AT MULTIPLE SCALES

The first crude maps of visual fields in the brain were constructed a century ago by correlating brain lesions in the occipital lobe of wounded soldiers with visual field deficits. Many new methods of mapping neural circuits have emerged since then, both anatomical and functional, and across a wide range of scales, from non‐invasive whole-brain imaging in humans to dense reconstruction of synaptic inputs and outputs at the subcellular level. In this section, we focus primarily on the anatomical mapping of neural circuits, while the next section, Priority Area 3. Brain in Action, addresses the challenge of functional mapping. Improved methods for reconstructing the anatomy of neural circuits at all scales, and for linking these data to circuit function as described in the next section, will contribute significantly to an essential objective of neuroscience research: understanding how structure relates to function.

BRAIN 2025 envisioned improved technologies – faster, less expensive, and scalable – for anatomic reconstruction of neural circuits at all scales, from non-invasive maps of human brain circuits to dense reconstructions of synaptic wiring diagrams in animals. Many powerful brain-mapping tools that were in their infancy in 2013 have now emerged as a result of BRAIN Initiative support. These include serial-sectioning EM, brain clearing, expansion and labeling methods, high-speed optical functional imaging, optogenetic excitation of cellular circuits, and functional mapping of the human brain at laminar/columnar scale.
Other novel approaches include the use of genetic barcodes for large-scale connectomics. Rapid, worldwide adoption of these relatively new methods affirms their scientific impact, and they have spurred other new deliverables such as detailed, large-scale cell atlases in a variety of species, including fish, rodents, NHPs, and humans. Nevertheless, the overall potential of microscale-mapping methods has yet to be fully unlocked, with bottlenecks remaining in tissue processing, imaging throughput, and subsequent comprehensive and quantitative analysis of resulting data. The explosive growth in machine-learning tools over the past 2 to 3 years has great potential to overcome these technical barriers, making possible more analyses of even larger data sets. These include dense-EM and x-ray microscopic image collections not anticipated at the start of BRAIN 1.0; these very large new datasets offer an important opportunity for BRAIN 2.0 to initiate closer and deeper engagement with data scientists.

Another evolving opportunity is connecting new knowledge about structure to dynamic measures of in vivo cell and circuit activity, across scales ranging from local synaptic connections to whole-brain networks. To solve some of the most vexing challenges in human health, from addiction to dementia, it will be essential to understand links between molecular, cellular, and network-wide activity, and how changes in anatomic circuitry can lead to aberrant function. We are thus faced with a need to correlate anatomical architecture at the nanometer level with data reflecting the dynamic behavior of cellular circuits continually shaped and modified in real time by neuromodulatory and other forces. The ability to assess these dynamic factors in vivo across large scales is an essential priority. In addition, we will need more ambitious and comprehensive models and theory to truly understand how function is built on structure to yield behavior. Finally, with consolidation of tools to map at scales from individual synapses to whole human brains, a major challenge for BRAIN 2.0 will be to support a shift from mapping at multiple individual scales to mapping across scales – in key animal models and ultimately in humans. These advances should offer key insights into disease prevention and treatment.

BRAIN 2025 vision: Multiscale mapping and linking anatomy to function

BRAIN 2025 set an ambitious agenda with both short- and long-term goals in the domain of multiscale mapping. Several BRAIN 2025 goals focused on creating projectional and connectional maps in animal models of increasing size and complexity. These included development of methods for efficiently mapping and annotating projectomes in experimental animals, including NHPs, as well as in human-tissue blocks, using clearing methods or serial-sectioning techniques. A second key goal was development of new techniques that use EM and/or super‐resolution light microscopy to link molecular signatures of cells and synapses to their nanoscale connectivity. To be feasible, these methods required companion computational tools, e.g., to reduce the time needed to segment volume-EM data sets by 100- to 1,000‐fold, toward reconstruction of micro‐connectomes of individual animals studied physiologically and behaviorally (e.g., zebrafish).
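
As a concrete illustration of the companion computational tools mentioned above, the sketch below defines a very small convolutional network that predicts a per-pixel boundary map for an EM image tile. The architecture, tile size, and training loop are generic assumptions for illustration, not a description of any specific BRAIN-funded segmentation pipeline, which typically relies on far larger 3D networks and dedicated infrastructure.

```python
# Minimal sketch: a tiny convolutional network that predicts a per-pixel boundary
# probability map for an EM image tile. Real pipelines use 3D U-Nets or
# flood-filling networks at much larger scale; this only shows the core idea.
import torch
import torch.nn as nn

class TinyBoundaryNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),            # per-pixel boundary logit
        )

    def forward(self, x):
        return self.net(x)

model = TinyBoundaryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Hypothetical training batch: grayscale EM tiles and binary membrane masks.
tiles = torch.rand(8, 1, 128, 128)
masks = (torch.rand(8, 1, 128, 128) > 0.5).float()

for step in range(3):                               # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(tiles), masks)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```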
Finally, BRAIN 2025 recognized the importance of increasing the spatial resolution of current functional imaging tools in humans, including validating magnetic resonance imaging (MRI)-based methods for mapping the macro-connectome, improving the resolution of human functional MRI to the range of 0.3-0.4 cubic millimeters, and applying mapping and projectomic tools developed in animal models to human brain-block specimens. Longer-term goals from BRAIN 2025 included linking circuit anatomy and behavior and understanding how individual variance across scales affects animal and human behavior in health and disease. A long-term objective of microscale reconstruction of key brain areas in animals was to assess the relationship between individual variation in connectivity and functional differences in outputs. Similar approaches aimed to define links between the projectomes of specific circuitry in individual animals and behavioral variation in those individuals. Both of these goals require analysis of hundreds to thousands of animals – hence the concurrent need to vastly improve the speed of data collection and analysis. In research with humans, BRAIN 2025 goals included even more aggressive targets for increasing the spatial resolution of functional magnetic resonance techniques, down to 0.1 cubic millimeters, and in the longer term, mapping connectomes with sensitivity to individual variance in hundreds to thousands of human subjects. Documenting and understanding human brain variation, including making use of measurements in humans with neurological and psychiatric disorders, would be facilitated by the use of standardized formats.

NIH funding to date: Maps at Multiple Scales

In each year of The BRAIN Initiative®, NIH has issued a general call for tools and techniques to access and characterize cells of the brain with cell-type and circuit-level specificity. The first notice of funding opportunity (NOFO), “Tools for Cell- and Circuit-Specific Processes,” has supported a range of projects for accessing neurons and mapping their connections, as well as monitoring and manipulating their activity. In fiscal year 2018, NIH issued two additional NOFOs to address gaps in this research portfolio: one NOFO to target non-neuronal cells (three awards issued) and another to develop methods and capacity for micro-connectivity analyses with synapse-level resolution (five awards issued).

Where are we now in brain mapping?

The Priority Area 2. Maps at Multiple Scales component of BRAIN 2025 has produced dramatic advances in a range of areas, reflecting funding by multiple BRAIN 1.0 programs. Below, we highlight some of these new tools to map brain circuitry – at widely varying scales and in post-mortem samples as well as living tissue – that have been spurred by BRAIN 1.0 efforts.

Structural analysis

We have seen remarkable advances in serial EM, X-ray tomography, and automated segmentation. While these techniques predate The BRAIN Initiative®, various technology leaps – including large-scale parallel EM-microscope arrays and machine learning-aided advances in image segmentation of individual cells and subcellular structures – have moved these studies from limited-scale demonstration projects to powerful tools for neuroscience inquiry. Nanoscale imaging and computational segmentation in zebrafish is an exciting example, unveiling an entire projectome of myelinated fibers in a vertebrate brain. The work seems ripe for expansion to analyses of entire mammalian brains and portions of human cortex.
This should allow reconstruction of the connections between every neuron in a sample, along with the detailed cellular anatomy of non-neural cells associated with these circuits. The recent development of molecular identification approaches that are compatible with EM reconstructions, such as the use of fluorescently tagged nanobodies with NATIVE, opens exciting new bridges between structure, connectivity, and cell-type identification.

New histology driven by neuroscience

Improved brain-tissue analysis techniques are another good example of active progress resulting from the fusion of neuroscience and engineering that BRAIN 2025 had envisioned – in this case through ideas from chemical engineering, via the proliferation of hydrogel-tissue chemistry over the past 6 years, which has now taken root across biology. New materials and approaches have significantly reinvigorated many aspects of century-old histology and microscopy methods, improving sample clarity, accessibility for protein labeling and nucleic-acid labeling/sequencing, tissue-size changes (expansion/contraction), and quality/process reliability. Newer approaches address fluorescent-labeling challenges in innovative ways. These include preserving genetically encoded fluorescent markers such as GCaMP and its multispectral derivatives, as well as deep-sample antibody labeling. Repeated staining cycles are now possible, assisted by non-destructive imaging technologies. These approaches (retrograde and anterograde tracers, activation-driven labeling such as Calcium-Modulated Photoactivatable Ratiometric Integrator, or CaMPARI) still need improvement but allow rich and diverse analyses of activated-cell distributions and connectivity across the brain. When combined with in-situ sequencing, such approaches may provide new, high-content information in many species.

Microscopy

Expansion microscopy has matured as a way to expand cleared tissue, enabling the use of confocal and light-sheet microscopy to visualize structures below the diffraction barrier with minimal photobleaching. Recent work combining expansion of brain samples with super-resolution lattice light-sheet microscopy is nearing EM resolution and has the added benefit of multiplex-capable fluorescent labeling. Thus, it is now possible to image an entire fruit-fly brain or a slice of mouse cerebral cortex in 2 to 3 days, using multiple markers and achieving an effective resolution of about 60x60x90 nanometers after roughly 4-fold physical expansion of the tissue.

Molecular identification

Barcoding is another promising new approach for anatomic mapping, in which specific genetic sequences are inserted into cells to identify them and record activity patterns or other characteristics. Distinct from in-situ sequencing, which seeks to identify native genetic information in single cells (e.g., MERFISH, seqFISH, STARmap), barcoding uses viruses capable of transferring a wide diversity of sequences into cells. In the barcoding method MAPseq, cells in one region are transfected with viral barcodes that are then trafficked along axons, enabling tracking across long-range projections. The technique has been applied to map the brain-wide projections of hundreds of neurons in primary visual cortex.
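
To illustrate the logic of barcode-based projection mapping described above, the sketch below builds a source-neuron by target-region count matrix from barcode reads recovered by sequencing microdissected regions. The barcodes, region names, and read counts are invented for illustration and are not data from any MAPseq study.

```python
# Minimal sketch: turning barcode sequencing reads from dissected target regions
# into a neuron-by-region projection matrix (the core readout of MAPseq-style
# mapping). Barcodes, regions, and read counts here are hypothetical.
from collections import Counter

# Each entry: (barcode recovered in a target region, region name). In a real
# experiment these come from sequencing RNA barcodes trafficked down axons.
reads = [
    ("AAGTC", "thalamus"), ("AAGTC", "thalamus"), ("AAGTC", "striatum"),
    ("GGCTA", "striatum"), ("GGCTA", "striatum"),
    ("TTACG", "thalamus"), ("TTACG", "contralateral_cortex"),
]

counts = Counter(reads)                                   # (barcode, region) -> read count
barcodes = sorted({b for b, _ in counts})
regions = sorted({r for _, r in counts})

# Projection matrix: rows are barcoded source neurons, columns are target regions.
matrix = [[counts.get((b, r), 0) for r in regions] for b in barcodes]

print("regions:", regions)
for barcode, row in zip(barcodes, matrix):
    print(barcode, row)
```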
Because it uses microdissection to extract tissue regions for sequencing-based analyses, barcoding provides a statistically broader evaluation of connectivity between regions than is readily achievable with imaging methods.

Human functional imaging
Significant advances, especially those related to fMRI, have emerged in top-down approaches to mapping human brains, yielding detailed cytoarchitectural and functional maps with cortical laminar and columnar specificity. While the biological limits of hemodynamic-response measures remain an active topic of inquiry, several NIH BRAIN Initiative projects employ advanced scanner hardware that should push the technical limits of fMRI to size scales of 0.125 microliters (i.e., 0.5-millimeter isotropic voxels), with temporal resolution of 1 Hz or higher. During BRAIN 2.0, these approaches will allow investigators to bridge such measurements to those made at the mesoscale in animal models (including NHPs) using optical methods. This should lay out a credible path to large-scale understanding of networks and circuitry in the human brain.

Linking structure to function
The essential goal of mapping brain structure is to gain insight into function. Advances in multiscale functional imaging and large-scale recording methods have greatly advanced our ability to compare functional connectivity to multiscale structure. Fast, functional microscopy methods allow us to image activity at the single-cell level across the brains of increasingly popular small-animal model systems such as zebrafish larvae, fruit flies, and roundworms (which are ever closer to having complete structural connectome information). These techniques span from high-speed functional microscopy approaches such as light-sheet microscopy, to real-time meso-scale imaging of fluorescent indicators of cellular function across awake mouse cortex (e.g., wide-field optical mapping). Further, electrical recording has undergone dramatic scaling – for example, Neuropixels, Neurogrid – and new tools have been developed to combine such recording with optogenetics and imaging. As will be discussed within Priority Area 3. Brain in Action, the significant development of strategies for large-scale, real-time recording of functional activity across scales has afforded new opportunities to begin to link structure to function. This is exemplified by recent work in which whole-brain 2P-calcium imaging in zebrafish identified single neurons across the brain with neuromodulatory function. The same brains were then fixed and labeled, revealing the molecular cell type of those neurons. Progress toward linking structure to function is also being made in research on the human brain. Researchers using resting-state fMRI have coined the term “functional connectivity” to refer to regions of the brain that are inferred to be connected based on the synchrony of activity between them – even if that activity is seemingly random. Over the past 10 years, brain-wide maps of functional connectivity derived from resting-state fMRI have revealed that such patterns differ in a wide range of disease states. Although these observations have stimulated much wider efforts to map and correlate functional connectivity to a physical connectome using approaches such as diffusion-based imaging, more recent results show that connectivity of these networks can vary dramatically from moment to moment. Similar studies are now being conducted on awake mouse and macaque brains.
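In practice, the “functional connectivity” described above usually begins as nothing more than a matrix of pairwise correlations between regional time series. The sketch below illustrates that first step on simulated data; the array sizes, the induced coupling, and the threshold are illustrative assumptions, not any particular study’s pipeline.

```python
import numpy as np

# Simulated resting-state data: 200 time points x 4 brain regions (illustrative only).
rng = np.random.default_rng(0)
n_timepoints, n_regions = 200, 4
bold = rng.standard_normal((n_timepoints, n_regions))
bold[:, 1] += 0.8 * bold[:, 0]                # make regions 0 and 1 co-fluctuate

# "Functional connectivity" here is simply the Pearson correlation between
# each pair of regional time series (columns of the array).
fc_matrix = np.corrcoef(bold, rowvar=False)   # shape: (n_regions, n_regions)

# Region pairs whose correlation exceeds an arbitrary threshold are treated as
# functionally connected, even though no anatomical link has been demonstrated.
threshold = 0.3
connected_pairs = [(i, j) for i in range(n_regions)
                   for j in range(i + 1, n_regions) if fc_matrix[i, j] > threshold]
print(np.round(fc_matrix, 2))
print("Putatively connected region pairs:", connected_pairs)
```

The moment-to-moment variation noted above is typically probed by repeating this calculation in sliding time windows rather than over the full scan.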
In some cases, these animal studies have been complemented by optogenetic or electrical excitation of key regions, which identifies functionally connected regions independently. New functional-measurement technologies have thus brought us to a point where in vivo functional mapping of circuits can be combined with mapping of anatomical connectivity in a single animal or human research participant. These studies will play a critical role in the development of theoretical frameworks and the discovery of fundamental principles. In this way, brain structures can be mapped to their functional correlates across all spatial and temporal scales.

Data processing
Speed of data processing has improved. The time to segment volume-EM data sets has dropped 100- to 1,000-fold since the start of BRAIN 1.0 through the use of advances in machine learning, such as convolutional neural networks for EM-section segmentation, and improvements in microscope design to include multiple parallel paths numbering upwards of 60 per microscope. Thus, large-scale studies of large regions of mammalian brains (including those from humans, such as whole-thickness cortical columns) are feasible – meaning that analysis of entire brains of animals of increasing size and complexity is a realistic goal for BRAIN 2.0.

Goals unmet: Brain mapping
All of the major goals set by this Priority Area have been met.

Gaps and opportunities: Next steps for BRAIN 2.0
A key goal of this BRAIN 2025 Priority Area is to generate circuit diagrams that vary in resolution from individual synapses to neuronal ensembles. Doing so will improve our ability to integrate knowledge from anatomical and functional domains, in the contexts of both health and disease. Moving forward, we see a need for advancing methods related to tissue processing and data acquisition.
1.    Increase imaging, tissue clearing, and labeling speeds for large brain regions and whole brains. The time required for tissue clearing and labeling has become the rate-limiting impediment to imaging large samples. Improvements in this domain could greatly enhance the ability of many labs to add anatomical mapping to their functional experiments.

Additionally, challenges previously identified by BRAIN 1.0 deserve continuing emphasis:
2.    Improve multi-scale observations of structure and function. Advances in this area will lead to better ways to link micro-level analyses (i.e., synapse-level) with meso- and macro-scale observations of functioning circuits. Combining functional data with transcriptional and anatomical findings, at all levels of the brain, from single synapses to individual neurons and neuronal ensembles, and in the same individual animal, will clarify links between structure and function – and ultimately, specific behaviors. Several groups are now embarking on efforts to combine results from functional-behavioral studies (from advanced optical methods) with serial-EM data of the relevant local circuits. These efforts, however, have been limited to small regions of mammalian brain or to invertebrate brains. Extending these studies to larger, distributed brain networks linked to complex mammalian behaviors awaits better ways to functionally image and structurally map entire mammalian brains. Tasks that are becoming increasingly commonplace, such as brain clearing, imaging, and annotation, are nonetheless tedious and prone to variability among laboratories. Thus, it may be valuable for BRAIN 2.0 to establish some type of commercial service to standardize brain-sample processing as well as to generate reliable annotations of cell types and connections.
3.    Create dynamic maps that include non-neural cell types. This step is especially relevant for probing human disease mechanisms. Recent BRAIN 1.0-inspired technology revealed expression of disease-associated genes (particularly those related to psychiatric illness) in astrocytes. Thus, maps that exclude non-neuronal cell types are likely to be incomplete. It will be important to combine within single data sets electrophysiological, metabolic, neurochemical, and gene-expression measurements from a broad diversity of cell types. This should be done both locally and in distributed networks, at both meso- and macro-scales.
4.    Integrate fMRI with invasive activity measures in animals. Given that research in humans is limited primarily to non-invasive techniques, it will be important to strengthen small-animal or NHP fMRI approaches to improve integration of non-invasive functional MRI signals with invasive fluorescent optical methods and electrophysiology in awake animals performing a range of behaviors. In this way, we can build a deep link between functional mapping studies in humans and functional and anatomical studies in animals.
5.    Improve methods for anatomical data analysis. Modern machine learning techniques offer new, faster, and more powerful approaches to remove noise from data as well as to analyze large and disparate datasets, including EM datasets at the micro- and nanoscales. Annotation of cell types, boundaries, and interfaces in very large (petabyte) volumes is a realistic goal for BRAIN 2.0, as are comprehensive definitions of connectomes across those scales. Creative approaches to academic/industrial partnerships may accelerate progress. An even more ambitious machine learning opportunity relates to inferring molecular properties of identified cell types in dense morphological EM datasets. New forms of multiscale and multimodal training sets in which cell-labeling methods are accurately registered with EM data are a good start (a minimal illustration of the segmentation approach appears in the sketch below).
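To make the machine learning step concrete, the sketch below shows a deliberately tiny, untrained convolutional encoder-decoder of the kind used for per-pixel EM segmentation. The layer sizes, tile dimensions, and single “membrane probability” output are placeholders chosen for illustration; production pipelines use far larger networks, three-dimensional volumes, and extensive training data.

```python
import torch
from torch import nn

class TinySegmentationNet(nn.Module):
    """Minimal encoder-decoder mapping an EM image tile to a per-pixel
    membrane-probability map. All layer sizes are illustrative placeholders."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                      # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),  # upsample 2x
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),                      # one output channel
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

# One 256x256 grayscale EM tile (batch of one); real pipelines tile petabyte volumes.
tile = torch.rand(1, 1, 256, 256)
membrane_prob = TinySegmentationNet()(tile)
print(membrane_prob.shape)   # torch.Size([1, 1, 256, 256])
```

The 100- to 1,000-fold segmentation speed-ups noted earlier come less from any single architecture than from training such networks at scale and running them in parallel across image tiles.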
More broadly, it is essential for BRAIN 2.0 to treat data storage, access, analysis, and sharing as a high priority, so that the NIH BRAIN Initiative investment is applicable as broadly as possible throughout the biomedical research community.
6.    Improve functional MRI resolution to better than 0.01 cubic millimeters. This goal remains elusive due to two fundamental limitations. First, reducing the spatial scale from 400-micron voxels (a short-term BRAIN 2025 goal) to 100 microns implies a 64-fold drop in voxel volume – since (400/100)³ = 64 – and a correspondingly large drop in signal-to-noise ratio. New/improved data-science methods, such as machine learning for image reconstruction, and even higher magnetic-field strengths (14T or above) will be needed to achieve this ambitious goal for BRAIN 2.0. MRI alternatives such as BRAIN Initiative-supported magnetic-particle imaging offer another path to achieving the needed high sensitivity, albeit with substantial engineering challenges. The second key challenge is biological – hemodynamic signatures currently define MRI functional contrast, and the spatiotemporal control of blood flow is defined by biology, not technology. While additional work in animal models using optical methods – or even focal microscopy in humans during invasive procedures – should shed light on this issue during BRAIN 2.0, current data suggest that such a technological goal is worth achieving.
7.    Develop signatures of microstructural features mappable by magnetic resonance-based methods. Methods to perform projectional mapping and to characterize tissue microstructure in vivo in humans are today limited to diffusion-based magnetic resonance methods. Further technical advances allowing tracing of small fiber bundles across long distances in the human brain may arise from BRAIN 2.0 investments, especially within Priority Area 6. Human Neuroscience.
8.    Integrate theory and experimental expertise. A clear gap remains in our ability not just to acquire maps of the brain at multiple scales, but to understand conceptually how the different levels arise from and interact with each other. This gap in conceptual understanding might be bridged through increased support of theorists to guide interpretation of results obtained by new methods that span spatial scales (also see Priority Area 5. Identifying Fundamental Principles).

Suggested short-term goals for BRAIN 2.0:
Model systems
Suggested new short-term goals include:
1.    Improve throughput in clearance and labeling methods, and develop and disseminate software and machine learning tools to efficiently analyze the resulting dense three-dimensional datasets. Generate extensive collections of nanobodies directed against key cell-type markers, which are compatible with EM reconstruction and will greatly enhance the impact of EM-based structural analyses. Access to cloud-based graphical-processing-unit technology and data-storage solutions is likely to be necessary for maximal use of these data.
2.    Continue to develop and expand the study of neuromodulator action both microscopically and at meso- and macro-scales. In particular, invest in synapse-specific visualization of distinct neurotransmitter systems in intact circuits. Develop tools to monitor release of specific neurotransmitters and activation of their cognate receptors.
3.    Improve transsynaptic anterograde viral tracing in living cells and expand viral tracing to models other than the mouse brain.
Transsynaptic retrograde viral methods have greatly enhanced our ability to identify input from defined neuronal populations (and thus trace neural circuits). Although barcode-based tracing has provided new information about brain-wide projections of defined neuronal populations, methods for jumping forward across synapses (anterograde tracing) remain unavailable, except those requiring toxic or inefficient reagents. Thus, we cannot yet identify the postsynaptic targets (i.e., the actual connectivity) of labeled axons, creating a significant challenge for mesoscale-circuit mapping.

Previous BRAIN 1.0 goals related to model systems and humans that warrant continuing attention include:

Model systems
4.    Integrate optical imaging and electrophysiology with functional magnetic resonance imaging (fMRI) methods in rodents and NHPs. These steps will ultimately enable measurement of diverse behaviors in awake animals and build an important bridge from animal to human studies.
5.    Continue to invest in efforts to map both structure and function in the same animals. A special set of challenges arises for studies that aim to directly correlate structure to function in individual animals, e.g., registering functional and structural maps to one another (the sketch below illustrates the first step of such a registration). As already mentioned, it may be valuable for BRAIN 2.0 to establish a commercial service to generate reliable, standardized annotations of cell types and connections.
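Registering functional maps to structural maps begins with estimating the spatial offset between image volumes. The sketch below estimates a rigid two-dimensional shift by phase correlation on a synthetic image pair; real pipelines work in three dimensions, handle nonrigid deformation and multimodal contrast, and use dedicated registration packages, none of which is attempted here.

```python
import numpy as np

def estimate_shift(reference, moving):
    """Estimate the integer (row, col) shift of `moving` relative to `reference`
    using phase correlation (normalized FFT-based cross-correlation)."""
    cross_power = np.fft.fft2(moving) * np.conj(np.fft.fft2(reference))
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint correspond to negative shifts (circular wrap-around).
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, correlation.shape))

# Synthetic "structural" image and a "functional" copy shifted by (5, -3) pixels.
rng = np.random.default_rng(5)
structural = rng.random((128, 128))
functional = np.roll(structural, shift=(5, -3), axis=(0, 1))
print("Estimated shift:", estimate_shift(structural, functional))   # expect (5, -3)
```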

Humans   6.    Advance our understanding of non-invasive measures of brain microstructure made using MRI or other electromagnetic methods as well as PET. Similarly, efforts should continue to clarify the strengths and limitations of structural and functional connectivity measures. Detailed validation studies could combine animal studies with both invasive and noninvasive stimulation paradigms in humans, combined with functional studies, and include detailed measurements made in ex vivo specimens.   7.    Reproducibly characterize individual variance (including across the lifespan) from structural and functional measurements. Suggested long-term goals for BRAIN 2.0: Several long-term goals not highlighted in BRAIN 1.0 warrant attention in BRAIN 2.0: Model systems 1.    Evaluate the whole-mouse brain connectome at the EM level (see transformative project #3), integrating in-vivo functional and molecular correlates acquired before death. This goal strives to infer circuit function from physical structure, as well as identify limits to that goal; i.e., to determine which additional parameters are needed to predict function and behavior across scales. Corresponding in vivo measurements of synaptic activity (both inhibitory and excitatory) in conjunction with neuromodulatory inputs, will be critical for developing and testing network theories of brain function. Of note: optical and sequencing-based methods are likely to contribute substantially to this goal together with EM connectomic approaches.   2.    Obtain whole-primate (NHP, then human) brain projectomes from brains of individual animals that have been functionally characterized. A critical shortcoming of current primate atlases is that they are compiled from many individuals, which is problematic due to variation across individuals in layout of areas and connectivity. Longer-term scaling of these approaches with high-throughput capability will inform the study of individual variation.
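Once a wiring diagram of the kind envisioned above exists, even simple structural queries require an explicit data representation. The sketch below stores a toy synapse table as a directed, weighted graph; the neuron identifiers and synapse counts are invented purely for illustration.

```python
import networkx as nx

# Toy wiring diagram: (presynaptic cell, postsynaptic cell, synapse count).
# Identifiers and counts are invented for illustration only.
synapse_table = [
    ("n001", "n002", 12),
    ("n001", "n003", 3),
    ("n002", "n003", 7),
    ("n003", "n001", 1),
]

connectome = nx.DiGraph()
for pre, post, n_synapses in synapse_table:
    connectome.add_edge(pre, post, weight=n_synapses)

# Simple structural queries of the kind connectome analyses build on:
print("Outputs of n001:", list(connectome.successors("n001")))
print("Strongest input to n003:",
      max(connectome.in_edges("n003", data="weight"), key=lambda edge: edge[2]))
```

Connectomes reconstructed at EM scale contain orders of magnitude more nodes and edges, so production analyses rely on sparse-matrix or database back ends rather than in-memory graphs, but the queries are conceptually the same.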

Humans   3.    Achieve whole-brain, high-resolution (spatiotemporal), functional magnetic resonance unrestrained by biological limits from rapid-gradient switching and high-field radiofrequency coils. Achieving this goal will deepen our understanding of how functional activation and connectivity work together across brain regions. While BRAIN 1.0-funded instrumentation advances have set the stage for this endeavor, significant new investments are needed, such as in field-generating mechanics and novel gradient design. At the onset of BRAIN 2.0, this appears to be a realistic, albeit challenging, goal.   4.    Apply machine learning methods to compare and contrast homologous regions in the mouse and human brain. These data, combined with knowledge from the mouse whole-brain connectome project, will inform understanding of the human connectome at the nanoscale.

Other BRAIN 1.0 goals require continued effort going forward:
5.    Use improved high-throughput clearance and labeling methods and rapid serial-sectioning EM tools to study human cortical and subcortical structures.
6.    Develop a high-throughput paradigm to develop and apply novel PET tracers for key molecular targets (e.g., neuromodulatory receptors, synapses). Achieving this goal awaits improved PET-camera technology and would benefit greatly from an academic/industrial partnership.
7.    Combine in-vivo and ex-vivo data to establish fundamental links between structural and functional connectivity in humans, including the role of natural variation. When are these directly interpretable? When they are not, what is the significance of observed functional connectivity?

In summary, we have seen substantial progress in Priority Area 2. Maps at Multiple Scales, reflected by impressive improvements in tissue processing and imaging that are bringing brain regions and circuitry into sharper relief for continued investigation. Opportunities for BRAIN 2.0 include increasing the speed and efficiency of these powerful new tools; expanding analyses to larger brains; increasing mapping of non-neuronal cell types and synapses; integrating structure and function mapping in the same brain; and acquiring and refining data-science advances to facilitate cross-species comparisons. At the completion of The BRAIN Initiative®, we expect that continued progress in this area will allow us to understand the structure of the brain and its numerous functions more fully. This multidimensional view will be transformative for developing therapeutic approaches attuned to the complexity of this organ.

Priority Area 3: BRAIN IN ACTION
While identifying all brain cell types and the wiring diagrams connecting them forms essential groundwork for understanding neural circuits, recording neuronal activity in behaving animals and evaluating what signals are encoded and how they change in different behavioral contexts is essential for a mechanistic understanding of brain circuits. Linking hypothesis-driven experiments with modeling and theory will lead to genuine insights into the basic principles of brain circuit organization and function. A critical step ahead is to study more complex behavioral tasks and to use more sophisticated methods of quantifying behavioral, environmental, and internal-state influences on individuals. Further, to continue to make progress in understanding the brain in health and disease, we need to be able to access specific cell types or other identified circuit elements and measure various aspects of their dynamic functions (e.g., sensitivity to neuromodulation). We also need effective methods to map aberrant patterns of brain activity to derive therapies for the diseased brain states central to so many debilitating human conditions.

BRAIN 2025 vision
BRAIN 2025 set out the ambitious goal of achieving 10- to 100-fold improvements in capabilities for electrical and optical monitoring of brain activity and similar improvements in human neuroimaging. Expected advances included substantial increases in the number of individual cells recorded, improvements in activity sampling, and better methods for linking cells and cell types with specific behaviors (e.g., through immediate early-gene tagging).
BRAIN 2025 posited that a few test cases in model organisms such as larval zebrafish or the roundworm would empirically answer the question: How many neurons do we need to record? In these relatively simple systems, large-scale, nearly complete neuronal recordings are possible, allowing scientists to begin to understand how many neurons are essential to build a comprehensive picture of circuit function. NIH funding to date: Brain in Action NIH has issued three NOFOs to develop technologies for recording neural activity – each representing a different stage in the development pipeline from initial concept through technology optimization and iterative engagement with early adopters. The primary goal of these NOFOs is to enable new capabilities for in vivo experiments, at or near cellular resolution, in animal models. Neural activity is defined broadly to include electrical activity, neurotransmitter and neuropeptide signaling, as well as plasticity and intracellular signaling. Technologies funded through these NOFOs represent diverse approaches including optical, electrical, magnetic, acoustic, and genetic recording. The current NIH research portfolio of imaging methods offers the opportunity for investigators to continue to develop instrumentation capable of imaging the brain faster, deeper, and more broadly.   Where are we now with recording neural activity? The Priority Area 3. Brain in Action component of BRAIN 2025 has made good progress on many fronts.  Electrical recording Methods for deep tissue and surface-level electrocorticography (ECoG) recording have seen dramatic leaps in volume with electrode numbers increasing from 10 to nearly 1,000. Although funded outside of The BRAIN Initiative®, the Neuropixels probes have stimulated substantial excitement. Electronics for filtering, amplification, multiplexing, and digitization have been integrated into recording devices. Many complementary approaches now enable electrophysiology to be combined with optical imaging and interrogation techniques such as optogenetics and pharmacology. These include monolithic integration of light-emitting devices into recording probes; transparent conductive oxides; and graphene structures that permit optical imaging or stimulation; and multifunctional fibers with recording, optical stimulation, and drug/gene delivery capabilities. Optical recording Sensors for monitoring calcium signals now offer an order-of-magnitude greater signal-to-noise ratio compared to a decade ago, sometimes allowing detection of individual action potentials, and long-sought robust optical-voltage detection is becoming a reality. Recent work has also engineered fluorescent indicators to monitor a range of neurotransmitters and neuromodulators, such as dopamine, opening an important new chapter on understanding cellular activity in the brain. Encouragingly, the research community developing these molecules has established a healthy culture of rapidly sharing their discoveries via the plasmid repository Addgene, viral-vector core facilities, and suppliers of transgenic animals. We are also seeing progress toward achieving the necessary temporal resolution and fields-of-view in behaving animals for using these new forms of optical recording. Small-model organisms such as roundworms, zebrafish larvae, and fruit flies can rapidly advance novel optical-labeling and manipulation techniques. 
Faster forms of light-sheet microscopy, spinning-disk confocal microscopy, computational imaging, and two-photon microscopy have improved volumetric-imaging speeds in these model organisms. Meanwhile, sophisticated strategies for tracking and generating more complex behavioral experiments that are compatible with simultaneous optical recording are opening the door to real-time, whole-nervous system read-outs in awake, behaving organisms. Other improvements are apparent in speed and field of view, which enhance the ability of microscopic methods to analyze mammalian brains. Advances incorporating 3-photon excitation for deeper imaging within the mammalian brain have also been widely adopted. Another area of progress for large-scale mapping is two-photon mesoscope technology that includes robotically controlled systems for imaging NHP brains. Gradient-index, lens-based micro-endoscope imaging is transforming our understanding of functional activity of specific cell types in deep-brain structures. This technology, which mounts a small, lightweight camera directly on the head of an awake, behaving rodent, has been widely adopted. A commercial version is available, and hundreds of laboratories are using an open-source version called miniscope. Wide-field, fluorescence-based mapping of cellular activity and hemodynamics over the entire dorsal cortical surface of behaving mice has also been refined, permitting acquisition of rich (albeit lower-resolution) high-speed recordings in both head-fixed and freely moving rodents. Also advancing rapidly is the ability to obtain optical readouts from multiple cell types simultaneously using spectrally distinct probes, which will help us understand links between cell-type activity and behavior. Other optical methods will soon be able to visualize binding of specific neurotransmitters. Data analysis (see also Priority Area 5. Identifying Fundamental Principles) The faster streams and higher volumes of data generated by new in vivo imaging and recording technologies confront end users with new challenges for data analyses and interpretation. Fortunately, a range of new analytic tools have been developed for spike sorting. For example, Kilosort can process data from 384-channel electrode arrays in near real time. Software for optical microscopy and miniscope calcium and voltage-imaging data has also been improved and shared, enabling extraction of the time courses and locations of individual cells. It is also now possible to quantify animal behavioral data such as webcam streams of freely behaving mice. Machine learning and other artificial intelligence approaches are integral to this progress. Functional labeling for ex vivo evaluation A type of approach that departs from in vivo imaging exploits the induction of immediate early genes during episodes of neural activity associated with a behavioral response to identify cells involved in that specific behavior. Further, improvements to systems such as CaMPARI may lead to new generations of photoconvertible proteins that can image integrated calcium activity of large populations of cells over precisely defined time windows. Other technology improvements for circuit labeling include engineered adeno-associated viruses that can transport genes efficiently and noninvasively across the blood-brain barrier. Similarly, new generations of improved engineered rabies virus permit longitudinal functional studies on identified projection neurons based upon non-toxic, retrograde labeling. 
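Returning to the data-analysis advances described above: the cell time courses extracted from optical recordings are usually converted to a relative fluorescence change (ΔF/F) before events are detected. The sketch below applies one common convention to a synthetic trace; the percentile baseline, threshold, and trace itself are illustrative assumptions rather than a standard imposed by any particular software package.

```python
import numpy as np

# Synthetic fluorescence trace for one cell: noisy baseline plus two "transients".
rng = np.random.default_rng(1)
n_frames = 1000
trace = 100.0 + rng.normal(0.0, 1.0, n_frames)   # arbitrary baseline fluorescence
trace[300:320] += 30.0                           # simulated calcium transients
trace[700:715] += 45.0

# One common convention: estimate the baseline F0 as a low percentile of the
# trace, then report the relative change (F - F0) / F0.
f0 = np.percentile(trace, 20)
dff = (trace - f0) / f0

# Frames exceeding an illustrative threshold are flagged as putative events.
event_frames = np.flatnonzero(dff > 0.2)
print(f"F0 = {f0:.1f}; {event_frames.size} frames above threshold")
```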
Larger-scale imaging technologies Hardware advances including high-magnetic fields and newly developed radio-frequency coils and sequence designs are improving both spatial resolution and imaging speed. For example, high-field scanners (7T) bring fMRI resolution to near 0.5 millimeters. Functional ultrasound has emerged as a minimally or non-invasive technique to map hemodynamics of entire rodent brains, offering higher spatial resolution than fMRI and applicability for use in behaving animals. Photoacoustic tomography now permits the imaging of volumes at  rates of 50-Hz and sensitivity to hemoglobin oxygenation, while studies are underway to explore the potential of photoacoustic approaches to measure direct indicators of cellular activity such as the genetically encoded calcium indicator GCaMP. Functional near-infrared spectroscopy (fNIRS), a technique that offers lower-resolution hemodynamic recording but does not include the use of ultrasound, has been dramatically improved and much more widely adopted for human studies in both adults and infants. Goals unmet: Recording neural activity The most ambitious goals from BRAIN 2025, such as recording the activity of 1 to 10 million neurons in a behaving mammal, remain out of reach. Recording devices. While recording microelectrodes have advanced significantly, probes with configurations that go beyond simple laminar arrangements are still needed. Novel recording devices using nanowires or other nanofabricated structures await in vivo testing and validation compared to existing electrophysiological approaches. Carbon-fiber electrodes have emerged as inexpensive and chronically viable electrodes, but they introduce challenges due to laborious hand-assembly methods and the need for sophisticated implantation techniques due to the delicate nature of ultra-miniaturized electrodes. While we have seen good progress in wireless recording, artefacts from concurrent wireless power delivery remain a problem. Recent efforts in artefact-rejection circuits, however, promise high-density closed-loop stimulation and recording. There also remains a need for high-throughput electrophysiological and multimodal tools that operate consistently over extended periods. Finally, new methods must be developed that allow identification of cell type in extracellular recordings. Optical methods, such as phototagging, are insufficiently high-yield and can yield results that are difficult to interpret. One possibility is that cell types could be distinguishable based on their electric fields. High-density recording arrays could, in principle, be sufficient to measure these with precision. For these and other new recording approaches, improvements in spike sorting will be essential. A critical first step will be community agreement on the relevant quality metrics and application of these in a standardized way. Understanding neuronal codes. Optical methods used in zebrafish larvae and roundworms now permit recording from every neuron in the brain or nervous system of these small, tractable organisms during increasingly complex behaviors. Such approaches will likely soon inform estimates of the number of neurons required to account for observed behaviors based on circuit activity (see Priority Area 5. Identifying Fundamental Principles). Gaps and opportunities: Next steps for BRAIN 2.0 To understand the dynamic interplay of activity across cells and circuits in behaving organisms is the ultimate aim of this Priority Area. 
Progress is underway, but several avenues present new opportunities. These include improving (and making accessible to individual investigators) electrophysiological, neurochemical, and imaging tools that can access more brain regions than currently possible (in humans and in NHPs); developing and applying powerful data-science techniques; and expanding human capital through interdisciplinary approaches and via increased investment in theoretical research. Moving forward, several opportunities exist for BRAIN 2.0, reflecting a balance of new directions and continued activity. Among these are some aspects not directly emphasized in BRAIN 2025:
1.    Expand functionality and integration of electrophysiological and neurochemical methods. While silicon probes have upped electrode counts dramatically, these instruments remain poorly suited for chronic long-term studies in freely moving subjects. If refined and optimized, nanomaterials-based tools could greatly advance in vivo recording capability. Chemists and materials scientists should be encouraged to collaborate with neuroscientists. BRAIN 1.0 focused on electrophysiological, calcium, and hemodynamic recording of circuit dynamics, but neurochemical targets (including neuropeptides) are also ripe for new applications. In addition, the increased integration of neural probes should explicitly consider scalability and reliability challenges arising in packaging and at the backend interface with external equipment.
2.    Capitalize upon machine learning-based data analyses. Major recent progress in machine learning techniques introduces significant opportunities for automating data analysis in a range of settings, including measuring animal behavior and cleaning/de-noising varied datasets – and ultimately, for creating models, predictions, and frameworks that link brain activity with behavior. Unfortunately, these burgeoning techniques are beyond the reach of many laboratories that nonetheless will soon acquire huge quantities of real-time neuroscience data. BRAIN 2.0 should help provide investigators access to, and help in adoption of, such powerful methods. Improved interdisciplinary education and training for both trainees and established scientists should be promoted to advance understanding of the capabilities and limitations of various data-science techniques. Within the field of data science itself, improved methods are needed to interpret neural-net/deep-learning models and outputs, toward deriving mechanistic insights about brain function (see Priority Area 5. Identifying Fundamental Principles).
3.    Improve tools for studying primate brains. Currently, a chasm exists between the imaging tools available for studies of mice and those used for studies of humans. Challenges arise from a range of issues including brain size, motion-related measurement barriers, limited progress from small studies, cost, and ethical concerns. Refining tools used in rodents to make them applicable to NHPs is generally the most effective way to gain insights on primate brain function and to address the technical challenges that prevent tools and treatments from applying to humans. Improvements in transgenic labeling and fluorescent indicators (and techniques for imaging and electrophysiological mapping of larger and deeper brain areas) could bridge the gap in brain-mapping studies between mice and humans.
Examples of areas for increased NHP-research emphasis include testing and optimizing electrical-recording, non-invasive imaging, and neuromodulation technologies – which would in turn contribute toward understanding their compatibility and safety for use in humans. Improving NHP-compatible functional-recording technologies to levels currently possible in mice would provide valuable correlates for ongoing cell-type analyses in all primates. Extending imaging depths for NHP studies will also facilitate studies of smaller mammals, vertebrates, and invertebrates, while enabling broader cross-scale and cross-species comparisons. While experiments involving NHPs are critical for understanding human brain circuits and the computations they perform, these studies should include neuroethics research and input to address ethical considerations that might arise.
4.    Develop novel human brain-imaging technologies beyond fMRI. Potentially game-changing approaches include ultrasound, electroencephalography, magnetoencephalography, positron-emission tomography (PET), and functional near-infrared spectroscopy, but all require further investigation to establish their value.

Other opportunities identified by BRAIN 1.0 deserve continuing emphasis:
5.    Expand optical imaging. Most functional-imaging progress in mammalian brains has been limited to accessible cortical and other superficial structures. Optical recording from deep-brain regions with endoscopic methods is still tissue-destructive and too infrequent for robust comparison to recordings from other regions. Imaging from deep cortical layers also remains an important challenge, as feedback to subcortical pathways originates in these layers.
6.    Develop tools to measure synaptic strength and neuromodulatory function. One goal is to gain the ability to quantify the function and strength of inhibitory synapses. Another direction is to extend technology beyond calcium imaging, which will also benefit the study of excitatory synapses. Human studies of synaptic density and neuroreceptor signaling are achievable via PET but will require new tracers with better understood targeting and pharmacodynamic properties. We are still unable to fully assess key elements of synaptic function in vivo in humans. The use of next-generation PET cameras with higher sensitivity, coupled with novel synaptic tracers, can be combined with distributed functional MRI data to measure the influence of neuromodulators on distributed networks and circuitry, and to map these dynamic changes to quantitative measures of receptor trafficking, all while tracking behavioral measures. Parallel strategies could be employed in model species.
7.    As human datasets become more multimodal and complex, it is likely that we will gain granularity in deciphering human behaviors, memories, thoughts, and emotions – and neuroethical issues will become increasingly significant. This is particularly true regarding deciphering circuit function, since circuits are key to understanding higher-order behaviors. Neuroethical concerns include but are not limited to:
a.    To what degree are extractable, identifiable elements of a research participant’s memories and thoughts reflected in collected data?
b.    Who has access to these data and for what uses?
c.    Are existing legal and regulatory structures adequate to ensure that brain data are not misinterpreted or misused beyond the laboratory in contexts such as legal cases, consumer marketing, and national security?
d.    With improved data-analysis techniques, how likely is it that data intended for one use will yield unforeseen information about other aspects of a participant’s privacy?
8.    Importantly, as such models are used in human analyses linking brain activity to behavior, scientists must carefully consider biases that may skew hypotheses, algorithm development, and data analyses. Scientists and ethicists can explore together, from the inception of an experimental design, how these biases and assumptions might shape their studies.

Suggested short-term goals for BRAIN 2.0:
Some suggested new short-term goals for studying the brain in action include:
1.    Explore real-time interactions between different cell types, neuromodulators, and activity during short- and long-term behaviors. Achieving this goal requires development of a diversity of sensing and imaging techniques to monitor a variety of electrical and chemical signals in a range of cell types.
An important prerequisite for performing such studies is to improve transfection technology, such that the activity of large numbers of neuromodulator systems can be tested while avoiding costly and time-consuming genetic approaches in mice.   2.    Combine ultrasound methods with direct sensing of neural activity, possibly through development of near-infrared photoacoustic-compatible indicators of neural activity. Functional ultrasound is a minimally/non-invasive, lower-cost alternative to ECoG/fMRI with potential for use in awake, behaving animals and even humans (see Priority Area 4. Demonstrating Causality). This and similar photoacoustic methods can image hemodynamic contrast in an entire rodent brain and are suitable for imaging newborn humans.   3.    Develop new NHP recording and imaging technologies. NHP models can serve a critical bridging role in bringing approaches that have been developed and highly-refined in rodents to a stage where they can be used in the human brain. Such studies will also develop and validate a pipeline toward new human brain recording technologies – both in terms of safety and efficacy, as well as to establish the utility and value of such recordings for clinical use. Improving NHP-recording technologies could also advance our dynamic understanding of the brain regarding complex cognitive tasks not possible with rodents. This goal might also include development of more transgenic NHP models, pending comprehensive review of scientific, budgetary, and ethical factors relevant to use of NHPs.   4.    Develop tools to analyze naturalistic (untrained) and trained behaviors. The use of simple, highly constrained behaviors through decades of biomedical research has been fruitful for understanding the nervous system. This approach will remain important going forward, but now we also need to include more ethological behaviors and more naturalistic environments, for example through use of virtual reality. However, little will be learned if behavioral constraints are relaxed without the ability to measure various behavioral components. Thus, we need robust, automated methods to detect and classify naturalistic behaviors in freely moving animals and humans, in various settings from monitoring individuals to monitoring social interactions. Importantly, the resulting data will only be useful for integration with other knowledge if such methods have temporal resolution commensurate with calcium or electrophysiological signals.   5.    Develop tools to assimilate and link brain recordings with behavior. Integrating activity mapping with theory is an important step in neuroscience discovery: it is necessary to implement approaches that describe activity in local areas related to that in distributed circuits as well as brainwide circuits. Fortunately, increasingly sophisticated data-science analysis techniques, including several machine learning approaches, can suggest mechanistic links between cerebral signals and behaviors. These modern techniques go beyond predictive value; they can find and expose patterns and connections not possible by human study or intuition. Moreover, it is likely that data-science methods can penetrate the growing volume of complex multi-dimensional recordings unlocked by BRAIN Initiative-funded technologies. Engaging data scientists – as well as supporting data-science training for both neuroscientists and neurotechnology developers – is urgently needed to leverage this burgeoning opportunity.   6.    
Integrate technology development and information transfer between model systems. During BRAIN 2.0, we must face the obstacles that make it difficult to take emerging tools from small, laboratory-specific studies in rodents to application in primates, including humans. Achieving this goal calls for collaboration with neurosurgery teams and FDA, as well as considering ethical implications.
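Relating to goals 4 and 5 above, one common first step in linking population recordings to quantified behavior is cross-validated decoding: asking how well recorded activity predicts a behavioral label. The sketch below uses simulated trial data and a linear classifier as a stand-in for the far richer models discussed above; the trial counts, labels, and “tuning” are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated data: 300 trials x 50 neurons (e.g., spike counts per trial) and a
# binary behavioral label (e.g., left vs. right choice). Purely illustrative.
rng = np.random.default_rng(2)
n_trials, n_neurons = 300, 50
activity = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
behavior = rng.integers(0, 2, size=n_trials)
activity[behavior == 1, :5] += 2.0        # give a few neurons weak "tuning"

# Cross-validated decoding accuracy: how well population activity predicts behavior.
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, activity, behavior, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Decoding accuracy demonstrates statistical association, not causation; establishing causal links requires the perturbation approaches discussed under Priority Area 4.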

Some specific goals identified by BRAIN 1.0 deserve special emphasis going forward:
7.    Continue to advance electrophysiological technologies. Dense microelectrode probes are poised to transform electrophysiological recordings, with electrode counts potentially reaching 100,000 channels or more. BRAIN 2.0 should aim for the capacity to record neurons in behaving animals stably across years, not weeks – to promote understanding of both development and aging. The ability to track neurons in behaving animals over long periods (electrically or optically) will help us understand neuronal plasticity and the stability of neuronal populations. Also needed are methods for high-density recording in freely moving animals with the ability to sample over large expanses of cortex and sub-cortical structures. Achieving this goal might require shifting emphasis from silicon probes to softer materials with capabilities approaching those of silicon devices.
8.    Continue to advance optical-recording technologies. Especially for studying the human brain, we need the ability to record with single-cell resolution in regions other than the cortex (as well as in combination with cortical regions) of freely moving animals. Whole-brain or whole-nervous system imaging of small model organisms will inform trade-offs between comprehensive, ensemble-level recordings and distributed recordings of subsets of single cells. Thus, the need to improve imaging technologies for high-speed, comprehensive imaging of such small-animal systems is still relevant.
9.    Develop better optical reporters of cellular activity. Direct imaging of voltage provides a link between structure and function that is difficult to obtain using electrophysiology. In particular, the ability to image neuronal voltage at synaptic resolution would allow much better understanding of how inhibitory and excitatory inputs onto a dendritic tree are transformed into an output. More generally, being able to restrict optical indicators to the soma or nucleus of brain cells will help to isolate signals from neighboring cells, but it hinges on defining technical specifications for different recording tools for regions and cellular locations – including neuropil, which also contains glial cells.
10.    Develop dynamic methods for detecting the release of specific neuropeptides in vivo, in real time. Neuromodulators can affect brain computations profoundly, and we cannot understand their spatial impact based solely on anatomical connectivity. Achieving a functional chemical connectome will likely require combinations of methods. These include use of fluorescence-based, synaptic-level measurements and optical detection of neuromodulator signaling through G-protein coupled receptors. These measurements should not be limited to “traditional” neurotransmitters but also be able to monitor neuropeptides, lipids, and other signaling molecules. Improved chloride sensors would uncover important information about inhibitory signaling. Combining electrophysiology with neurochemical measurements will enhance what we know about neurotransmission mechanisms and drug-receptor interactions. Moreover, combining optogenetic perturbation of one deep-brain region with micro-endoscopic imaging in another deep-brain region will advance understanding of neuromodulation across structures.
11.    Develop methods to label active neurons.
Improved methods for permanent “activity stamping” to label active neurons in vivo at high spatiotemporal resolution will help determine which neurons participate in, or drive, specific behaviors.   12.    In studies of human-brain circuitry analyses, it is recommended that neuroethics deliberations, considerations, and recommendations be incorporated in the work from the onset of the experiments and through the lifetime of the study. This might entail including a neuroethicist as part of the research team. Some potential issues for consideration might include rules for data sharing and access (including sharing and possible use of participant data while protecting participant privacy). Suggested long-term goals for BRAIN 2.0: Some suggested new longer-term goals for BRAIN 2.0 include:  1.    Determine the number of cells that must be recorded simultaneously to account for specific behaviors at a given level of precision. This question remains largely unanswered, highlighting lack of a theoretical framework to guide experiments outlined by this Priority Area and associated technologies (see Priority Area 5. Identifying Fundamental Principles).   2.    Develop analytic tools to establish causal links between large-scale neural population activity and complex behavior. This goal is challenging, and to date, it remains largely unsolved. Rapid advances in computer vision and machine learning offer tremendous opportunity to improve spatiotemporal resolution and objectivity of methods that classify naturalistic behaviors in an automated fashion. Most behaviors used in neuroscience experiments, whether sophisticated or simple, are ad hoc designs developed on a laboratory-by-laboratory basis. As a result, behavioral findings are often difficult to compare. While it is likely that top-down specifications of preferred behaviors would be counterproductive, there is value in finding consensus for selection of a set of robust and standardized behaviors (preferably relatively natural behaviors).   3.    Image high-speed neural activity throughout the human brain. Watching the human brain in action is a goal that remains beyond our grasp yet achieving it should remain a high priority for BRAIN 2.0. Animal studies may (but have not yet) provide insights about ideal information to record as well as its practical uses (see also Priority Area 4. Demonstrating Causality). In summary, we have seen very good progress in Priority Area 3. Brain in Action, driven in part by improvements in hardware and integrated strategies that combine two or more approaches: electrophysiology, optical imaging, optogenetics, and pharmacologic modulation. Opportunities for BRAIN 2.0 include expanding the ability to understand neuromodulatory function; advancing tools to study larger (primate) brains; and sophisticated, computational tools to better quantify complex behaviors (especially in natural settings). At the completion of The BRAIN Initiative®, we expect that continued advances in this area will provide a clearer understanding of how dynamic activity in and across brain regions instigates so many distinct behaviors in animals and in humans.   Priority Area 4: DEMONSTRATING CAUSALITY This BRAIN 2025 Priority Area aims to derive interventional technology to test cause-and-effect relationships between structure and function. This type of approach has been fundamental in driving basic understanding of how complex living systems work and has powered remarkable progress in biology over the past century. 
By way of example, our understanding of what genes do – that is, what genes actually cause to happen in cells – both in health and disease, has been driven by the ability to generate gain-of-function or loss-of-function mutations in single genes. With continued basic research investments, such tools have steadily increased in sophistication, from mutagenesis screens in organisms to transgenic and knockout technologies to RNA interference and CRISPR/Cas gene-editing interventions. Such causal tools targeting gene expression are especially powerful coupled with observational tools that allow assessment of (and experimental guidance by) naturally occurring gene expression patterns, and downstream events resulting from the experimental intervention itself. Genomes in living cells and organisms, like brains, are highly nonlinear. They are rich in feedback, parallel-processing and exhibit redundancy, interconnectedness and interdependence, history-dependence, and context-specific states. In using the perturbation tools listed above to test the causal significance of genes in mediating particular processes across biological systems, geneticists became adept at meeting the challenges of such complexity, using observational tools to guide perturbation, rigorous control experiments, and appropriate conceptual frameworks. Similar thinking regarding causality has helped revolutionize cell biology and biochemistry as well, via gain- or loss-of-function interventions at the elemental level of single biochemical messengers and even single amino acids, within the corresponding complex nonlinear biological systems as they operate. In the years leading up to the BRAIN 2025 report, neuroscientists had begun to develop analogous capabilities for providing gain- or loss-of-function of neuronal activity, at an elemental level – that is, a cell type within the brain of a behaving animal. These causal tools, which now include a broad range of interventional methods, have led to many thousands of discoveries over the past 15 years. The BRAIN 2025 report took note of this revolution as well as opportunities for further developing and accelerating the opportunities provided by causal circuit neuroscience. Among the goals identified were: •    Increase the number of orthogonal interventional tools to control multiple elements independently •    Refine intervention beyond cell types to the level of single cells or multiple individually-defined cells BRAIN 2025 recognized that substantial synergy existed between the development of interventional tools and other domains of neuroscience in The BRAIN Initiative®: 1.    Advances in the enumeration of cell types, and in the definition of cell type itself, can inform causal cell type interventions. The BRAIN Initiative®'s emphasis on cell-type diversity yielded synergy in this area. 2.    The development of tools to observe activity is critical for the ability to test causality in neural dynamics by matching or modifying naturally occurring activity patterns, as well as for testing the relevance of context – such as monitoring ongoing activity in other cellular populations – for determining elicited physiological or behavioral outcomes. 3.    Advances in computational algorithms are necessary to implement closed-loop interventions – those interventions that measure the current state to interpret responses in controlled cells and in downstream populations. 4.    The potential significance of understanding causality for clinical impact can hardly be overstated. 
There is little doubt that a major barrier to the development of new, safe, effective, and specific therapies for neurological and psychiatric diseases has been lack of causal knowledge: Which cells and cell types and circuits actually cause – rather than correlate with – clinically relevant cognition and actions? BRAIN 2025 vision  BRAIN 2025 sought to accelerate the development and application of interventional tools for demonstrating causal relationships between cell and circuit activity, as well as for physiological and behavioral processes. Although the initial phase of The BRAIN Initiative® was by design intended to boost technology development, it also considered applying these new technologies and other tools in real-world biological settings. New and improved perturbation technologies suitable for controlling cells specified by type, wiring, location, and other characteristics At the outset of The BRAIN Initiative®, causal circuit-targeting tools were chiefly optical (optogenetic, in which a microbial opsin gene is introduced into target cells to confer light-triggered actuation) or chemical. Chemogenetic approaches introduce genetically engineered receptors into targeted cells. Such receptors are not naturally present and respond to non-natural chemicals only – thus limiting this method to affecting only the targeted cells containing the chemical and the gene. While these methods were already broadly adopted at the time, only two or three independent channels of control could operate simultaneously and independently in the same preparation. BRAIN 2025 aimed not only to increase the diversity of optical and chemical interventions, but also to explore, develop, refine, and deliver other tools such as genetically encoded actuators, small molecules, and new devices. Other emerging approaches included magnetic neuromodulation and ultrasonic (acoustic) approaches – potentially via disrupting the blood-brain barrier, focal heating, and/or direct neuromodulation.. The ability to target cell types for causal experiments relies on cell-type identification, which in 2012 was based on a limited number of defining features, and usually only one feature, such as the activity of a single promoter. BRAIN 2025 aimed to increase the diversity and complexity of cell identification and targeting by combining many features and selecting features based on more thoroughly validated cell typing. The ability to control individually targeted cells (as opposed to all cells of a particular type) was still in its infancy, reflected by the novel use of a guided light beam for single-cell optogenetic control in vivo in mammals. BRAIN 2025 supported scaling up this single-cell intervention to achieve independent control of many individually specified neurons in behaving animals. Application of perturbation tools to behaving animals   While by design the initial phase of the NIH BRAIN Initiative was intended to favor technology development, BRAIN 2025 also placed critical emphasis on ongoing application – both for its own sake and to help guide development of practical and useful tools in the pursuit of fundamental knowledge. An overarching goal to be enabled by new technology was to determine the causal relationships between neural dynamics and behavior in a range of systems. 
Analyzing and perturbing these relationships would require advances in recording methods – such as higher-resolution electrophysiology and wider-field, deeper, multi-site imaging – along with sophisticated online algorithms for analysis and classification of activity, next-generation automated movement-tracking and motion-correction methods for integration with behavior, and novel devices enabling fast integration of causal interventions with recording.

Aligning perturbation to naturally-occurring neural patterns
Tools combining electrophysiological recording with electrical and optical neuromodulation permit direct monitoring of the electrophysiological effects associated with these interventions. They also “close the loop” on control of neural activity through behavior and by conditioning activity features of the network itself. This allows activity-triggered interrogation of neural activity with millisecond precision in behaving animals, enabling studies of spike timing-dependent plasticity in developing or repairing circuits. Similarly, the emerging ability to control selected cells using optically detected, naturally occurring activity of those same neurons during behavior has opened the door to probing how the firing of individual neurons, cell types, or neural ensembles depends causally upon (or helps to shape) the dynamics of the network within which they are embedded. Optical or electrophysiological tracking of local and global activity patterns thus provides critical information about brain or circuit context, which is likely to have a strong influence upon perturbation responses. The methods and advances within Priority Area 3. Brain in Action are critical for this work. The ability to observe and detect global or local firing events should be used in combination with statistical methods for causal inference to test interactions that may mediate behavior. To complement observation and to demonstrate causality, The BRAIN Initiative® sought to develop and advance tools to control brain activity in a manner that increasingly approaches physiological firing patterns in single cells as well as in distributed ensembles across the brain.

Application of perturbation tools to humans to understand normal nervous system function and mechanisms, causes, and treatments of psychiatric and neurological disease
A complete understanding of brain mechanisms at all levels may not be necessary to modulate the brain therapeutically. As tools emerge to modulate the brain in increasingly physiologically relevant ways, we may discover heuristic methods to shift pathological brains toward healthy function, yielding novel therapeutic strategies for treating brain disorders ranging from autism to Alzheimer’s disease. Thus, perturbation tools used in animals (e.g., optogenetic, chemogenetic) need not be the same tools ultimately used in humans. Identifying regions and projections important for particular behaviors in animal models may guide interventions with electrodes or transcranial magnetic stimulation. Identifying causal cell types could lead to molecular and medication-based strategies, especially if aligned with the ability to phenotype cells at the molecular level. This is occurring with hydrogel-tissue chemistry and its many variants, an approach also emerging at the outset of The BRAIN Initiative®.

NIH funding to date: Demonstrating Causality
This Priority Area is tightly linked with Priority Area 3. Brain in Action, sharing many of the same programmatic goals.
Brain in Action, with many of the same programmatic goals. NIH issued three NOFOs to develop technologies to modulate neural activity – each representing a different stage in the development pipeline from initial concept through technology optimization and iterative engagement with early adopters. The primary goal of these NOFOs is to enable new capabilities for in vivo experiments, at or near cellular resolution, in animal models. Neural activity is defined broadly to include electrical activity, neurotransmitter and neuropeptide signaling, as well as plasticity and intracellular signaling. Technologies funded through these NOFOs represent diverse approaches including optical, electrical, magnetic, acoustic, and genetic modulation. Multiple research groups are attempting to integrate calcium imaging and optogenetic stimulation into experiments in NHPs, although further optimization of expression and imaging conditions is needed. A fourth NOFO in fiscal year 2018 called for projects to systematically characterize, model, and validate membrane, cellular, circuit, and adaptive-biological responses of neuronal and non-neuronal cells to various types of stimulation. This research aims to inform future device development (three awards issued).

Where are we now with demonstrating causality?
The Priority Area 4. Demonstrating Causality component of BRAIN 2025 has been successful in paving the way for application of new technologies to behaviors, at scales ranging from individual cells to brain regions – in organisms ranging from invertebrates to rodents to primates.

Improved tools
Two-photon techniques have progressed from single-cell control to the control of whole-organism (mouse) behavior via multiple, individually specified cells. Optogenetic methods now operate across time scales, with fast opsins working at the millisecond scale, and step-function opsins allowing chronic excitation or inhibition without the need for continuous light delivery in living organisms. Chemogenetic tools (e.g., Designer Receptors Exclusively Activated by Designer Drugs, or DREADDs) also permit control over extended time scales without the need for light delivery, using the designer drug clozapine-N-oxide. Researchers have combined experimental palettes of opsins and DREADDs to influence mixed-cell populations; these tools are also compatible with fluorescent-activity imaging. Many activity-readout methods – including genetically encoded indicators of voltage and of neurochemicals such as dopamine – are directly relevant to defining causal relationships between neural transmission and various biological outputs. These advances join the emergence of complementary tools developed outside NIH investment. Along with optogenetics and chemogenetics, magnetic tools are improving, and ultrasonic techniques have also emerged with complementary uses and capabilities. Further development of acoustic and magnetic tools may augment our ability to control multiple independent cell populations using several stimuli. As these methods mature, they will improve our ability to study activity, structure, and function at cellular resolution – toward understanding circuits that drive precisely defined behaviors, both in health and disease. As our understanding of higher brain functions evolves, we will need to continue discussions of ethical considerations related to experimental studies and treatments of disease.
Combining perturbation and observation
Determining causality requires that perturbation tools and observational methods work together, which has been enabled by the design of specialized hardware. One example is the extension of fiber photometry from single-site to multi-site recording, in which calcium fluorescence is detected at each site through the same type of fiber probe that delivers optogenetic control. This method thus allows perturbations to be aligned with the timing and local magnitude of naturally occurring neural activity. Other types of hardware combine multiple functional features to permit read-write capability. The use of closed-loop electrical recording and stimulation in humans is also progressing. New hardware designs have increased the spatial extent of interfaces with neural circuitry, in line with the BRAIN 2025 goal of providing access to broad and deep volumes of neural tissue during behaviors. Wide-field imaging approaches now allow visualization of activity in up to tens of thousands of neurons – over millimeters or more of tissue with little temporal delay within an experiment. Other advanced microscopy methods have led to improved optical-imaging depth during holographic optogenetic stimulation with multi-photon (2p and 3p) methods. Variants of light-sheet microscopy provide high-speed, high-quality imaging of shallow cortical regions. Invasive optics such as endoscopes and miniscopes now offer access to deep brain structures in freely moving animals and have been widely shared through an open-source framework. This technology garnered the honor of Nature Methods' 2018 "Method of the Year."

Application to humans
Finally, we have seen progress in applying perturbation tools in research with humans. This should advance our understanding of both healthy nervous-system function and the mechanisms, causes, and treatments of psychiatric and neurological diseases. Researchers have applied temporal-lobe, closed-loop deep-brain stimulation (DBS) to improve memory, and the therapeutic efficacy of DBS for Parkinson's disease has been enhanced by closed-loop control. Optogenetics is currently guiding clinical application of transcranial magnetic stimulation (TMS), DBS, pharmacology, and combination therapeutics, and the concept of an optogenetics-guided clinical trial has surfaced. Together, these interventional approaches can achieve far greater specificity, in some cases approaching that of optogenetic intervention without requiring gene transduction, if guided by causal knowledge from optogenetics. Optogenetic research in rodent models has already inspired novel treatment concepts leading to reported therapeutic clinical benefit. In a rat model of compulsive drug seeking, prolonged cocaine self-administration decreased the intrinsic excitability of medial prefrontal cortex (mPFC) pyramidal neurons, an effect especially pronounced in drug-seeking animals. Compensating for hypoactivity of these projection neurons with optogenetic mPFC stimulation prevented cocaine-seeking behavior. Guided by this finding, scientists have demonstrated that stimulation of dorsolateral prefrontal cortex reduces drug use and cue-induced craving in people addicted to cocaine or heroin. Optogenetic methods have also underscored the special efficacy of targeting white-matter tracts with electrical DBS. Guided by these insights, electrical, low-frequency DBS combined with dopamine antagonists in cocaine-adapted mice is being developed as a potential therapy.
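The "read-write," closed-loop logic described above can be sketched in a few lines. The toy example below is hypothetical throughout: the simulated spike stream, the rate threshold, and the stimulate() placeholder are illustrative stand-ins, not any device's actual control software. It estimates a firing rate online and issues a stimulation command when the estimate crosses a preset level, with a lockout period to prevent retriggering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "read-write" loop: estimate the firing rate of one recorded channel online
# and issue a stimulation command whenever the estimate crosses a preset level.
# The spike stream, threshold, and stimulate() routine are all hypothetical.
dt = 1e-3                       # loop update interval (s)
tau = 0.1                       # time constant of the online rate estimate (s)
rate_threshold = 40.0           # trigger level (spikes/s), illustrative
decay = np.exp(-dt / tau)

def stimulate(t_now):
    """Placeholder for a hardware call that would deliver a light or current pulse."""
    print(f"stim command at t = {t_now:.3f} s")

# Simulated input: baseline ~10 spikes/s, bursting at ~80 spikes/s from 1.0 to 1.2 s.
t = np.arange(0, 2.0, dt)
true_rate = np.where((t >= 1.0) & (t < 1.2), 80.0, 10.0)
spikes = rng.random(t.size) < true_rate * dt        # Bernoulli approximation of spiking

rate_est, lockout_until = 0.0, 0.0
for i, now in enumerate(t):
    # exponentially weighted online estimate of the firing rate (spikes/s)
    rate_est = decay * rate_est + (1.0 - decay) * (spikes[i] / dt)
    if rate_est > rate_threshold and now >= lockout_until:
        stimulate(now)
        lockout_until = now + 0.1                   # refractory period for the controller
```

Real closed-loop systems replace each of these ingredients with far more capable versions (multichannel detection, model-based state estimation, millisecond actuation), but the structure of detect, decide, and perturb is the same.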
Finally, there is exciting recent progress in developing and testing, in human research participants, methods for real-time observation, decoding, and closed-loop stimulation, particularly related to stabilizing mood. These methods are based upon multiscale modeling of brain activity that incorporates both neural firing and local-field recordings.

Goals unmet: Demonstrating causality
While none of the goals has been fully met, all are being actively addressed, and many of the associated quantitative metrics and milestones have been achieved.

Gaps and opportunities: Next steps for BRAIN 2.0
The brain is a closed-loop, non-linear system that integrates external information with internal representations at multiple spatial and temporal scales to generate thoughts, feelings, and actions. Priority Area 3. Brain in Action outlined a vision for monitoring brains in action. Once fully implemented, the approaches described therein have the potential to uncover key biological substrates of thoughts and behavior by monitoring many components of the brain concurrently with its inputs and outputs. Combined with advanced statistical methods for causal inference, these experiments may reveal mechanisms whereby those substrates interact to mediate behavior. However, implementing perturbations is a gold standard for testing hypotheses and demonstrating causality within organized systems. Within the scope of the NIH BRAIN Initiative, the goal of causal experimentation is to determine the integrated processes by which the brain causes thoughts, feelings, and actions to emerge from its components, under its various physiological constraints. Critically, we do not believe that it will be necessary to develop neurotechnology that monitors and modulates every physiological variable concurrently (from proteins and synapses to activity of cells and circuits) to achieve physiologically constrained perturbations. Rather, the fundamental principles and theoretical frameworks discovered and advanced through The BRAIN Initiative® are revealing which physiological variables (what, where, and when) will be sufficient to approximate broader brain function and to facilitate physiological tuning of modulatory outcomes. Thus, by integrating the monitoring tools proposed in Priority Area 3. Brain in Action with the interventional tools outlined in this section, we expect to be able to demonstrate causality with a degree of certainty far greater than that of large-scale monitoring and causal inference alone (just as with demonstrations of causality in other nonlinear biological systems, such as genomes operating within cells and organisms). Advancing this vision will require development and integration of multiple technologies described throughout this report. New challenges for data storage and management arise as we gain the ability to monitor large-scale brain systems and behavior. These challenges will grow exponentially as we complement these data with vast information about the impact of our new perturbation tools on brain and behavior. Additional challenges for data sharing to facilitate analyses across multidisciplinary teams will need to be addressed. Finally, the neuroscience community will need to tackle new theoretical issues (see Priority Area 5. Identifying Fundamental Principles) such as development of fast, online analysis tools in closed-loop protocols conditioned by neural activity – as well as new capabilities in nonlinear control theory appropriate for a system as complex as the brain.
Together, the components and vision of BRAIN 2025 (supported by capital investment in the overall BRAIN Initiative by federal agencies and guided by ongoing, careful examination of ethical principles) will, by demonstrating the causal mechanisms underlying brain function, lead to a deeper understanding of ourselves as well as enable novel diagnostics and treatments for a wide range of disorders. Moving forward, several opportunities exist for BRAIN 2.0, reflecting a balance of new directions and continued activity. Demonstrating causality and manipulating cells and circuits will likely be critical for developing precise and beneficial treatments to improve brain health and alleviate suffering from brain diseases and disorders, and could also affect some cherished features of human life – such as agency, emotions, decision-making, and the ability to exercise free will. These issues are similar in many ways to those associated with the prescription of personality-altering drugs in treating psychiatric disorders, or therapeutic deep brain stimulation that can alter cognitive function. However, neurotechnologies bring the potential of more precise or more potent interventions. Risks to humans should be balanced against the importance of the scientific question at hand or the severity of an underlying condition. For some experimental questions and technologies, such risk-benefit balancing may argue for proceeding cautiously or even forgoing some areas of research. These and similar ethical issues should be addressed as neuroethics research questions, using both conceptual and empirical methods, in partnership with evolving scientific advances.

Suggested short-term goals for BRAIN 2.0:
New suggested short-term goals for BRAIN 2.0 include:
1.    Develop methods for precise single-cell optogenetic control in mobile animals and deep structures. Current efforts, which allow rich and complex asynchronous cell-ensemble stimulation, are extremely useful and informative but are largely limited to head-fixed vertebrate models, limiting the complexity of behaviors that can be tested.
2.    Define, in mammals, the minimal number of individually-specified neurons needed to alter behavior in detectable ways. Although we have a general understanding of how mammalian behavior can be perturbed by activating or silencing small numbers of carefully targeted cortical neurons, a logical next step is to define appropriate theory to explain results in the context of computationally framed issues such as noise and controllability.
3.    Define causal circuits for selected maladaptive behavioral disorders, such as addiction, impaired social cognition, aggression, and compulsive behaviors.
4.    Expand machine learning algorithms capable of sophisticated behavioral analysis in model organisms (rodents and fruit flies). NIH support during BRAIN 1.0 succeeded in driving integrated evolution of quantitative, precise, high-content behavioral methods appropriate for freely-moving or restrained animals. The next step, for BRAIN 2.0, is to target large, intact primate or human systems in addition to various smaller model systems, building on advanced instrumentation, computation, data management, and analyses.

Several BRAIN 2025 short-term goals should be reshaped and continued:   5.    Develop strategies to perform quantitative, tunable real-time perturbations of specific circuit dynamics. One example is excitation-inhibition balance, with a goal of mirroring subtlety of various human circuits/conditions and devising control/treatment strategies.   6.    Align perturbations with naturally occurring signals (brain states, behavioral states, circuit states) to measure effect of temporal and contextual variation on behavior(s).   7.    Predict and control behavioral consequences of perturbations, combining experiments and theory.    8.    Define causal circuits for key adaptive behaviors of interest, such as cognition, movement planning, sensory perception, and ethologically naturalistic behaviors.    9.    Address challenges of genetic perturbation tools in primates, which remain much less effective than in rodents. There is substantial need for greater transduction volumes for optogenetic and chemogenetic tools. Light, virus, and chemical delivery into large enough volumes to affect behavior is crucial.   10.    Enable direct correlation between circuit manipulation and activity recording with real-time, neural-ensemble analyses. Tighter connection between experimental approaches and with theoretical neuroscience is necessary to design “physics-like” model-testing experiments during behavior with perturbation tools.   11.    Apply emerging perturbation tools (e.g., magnetogenetic, acoustogenetic) to circuits that are currently less easily accessible with established techniques, such as both deep and distributed brain circuits. Alternatively, or in parallel, develop optogenetic approaches (opsins, hardware) that allow deep and distributed control.   12.    Integrate perturbation techniques with other key BRAIN Initiative‐sponsored technologies: cell-type identification (e.g. align with deep molecular phenotyping of hydrogel-tissue chemistry, MERFISH, STARmap), anatomical circuit tracing (MAPseq), large-scale recording of native activity patterns aligned with cell typology (MultiMAP), precise quantification of behavior, and tests of specific theories of neural coding, computation, and dynamics.   13.    Support neuroethics research (conceptual and empirical) and/or include a neuroethicist as part of the research team to address neuroethical concerns arising from altering behavior in humans through directed manipulation of brain circuits.   14.    Ensure equitable participation in research studies whose findings may affect large numbers of people.   15.    Continue research to clarify the ethical implications of NHP models that more closely mimic human physiology, with subsequent guidance developed based on the findings.   Suggested long-term goals for BRAIN 2.0:

New suggested goals for BRAIN 2.0 include: 1.    Discover ancestral and canonical principles, establishing deep conceptual links between animals and humans. An ancestral or canonical principle of neural circuitry might be exemplified by identifying the molecular identity of cells in a model organism, then mapping corresponding physiology and behavior across species to test evolutionary conservation.   2.    Translate nanomaterial-based techniques (upconversion, magnetic, ultrasonic) for neural interrogation, taking these from in vitro and boutique applications to robust use in behavioral experiments for circuit dissection. Nanomaterials can act as transducers, delivery tools, and readouts. These technologies remain largely “trapped” in materials science and chemistry laboratories and require rigorous evaluation in vivo to facilitate their use for discovery of circuit function, connectivity, and dynamics.    3.    Based upon deeper understanding of causality in brain-wide dynamics, develop novel diagnostic and treatment-design approaches for neuropsychiatric disorders. Clinically relevant progress in demonstrating causality will enable the entire neuropsychiatry community to more efficiently leverage the vast human-subject literature, unleashing new diagnostic strategies and individualized interventions.

BRAIN 2025 long-term goals that could be reshaped and continued include:
4.    Advance the scale of multiple single-cell perturbation by approximately one order of magnitude per year. At present, ~100 cells can be controlled independently along with one modality of readout data (calcium imaging) and behavior in the same preparation. Although it is an ambitious goal, we should strive for the ability to access 1,000 cells in year 6, 10,000 in year 7, and so on. Each numerical milestone should be achieved along with millisecond-level activity control and with information about cellular-resolution activity imaging, local and global wiring, molecular annotation, behavior, and modeling.
5.    To align perturbation with local and global contexts of neural activity and brain state, develop and apply acoustic and magnetic methods to both perturb and read out from deep-brain regions. For example, magnetic-imaging methods can be extended beyond hemodynamic responses, to probing ions and neurotransmitters.

In summary, we have seen considerable progress in Priority Area 4. Demonstrating Causality. All major short- and long-term goals are in the process of being completed. BRAIN 2.0 should be tuned and reshaped to take advantage of new developments and opportunities that have emerged in single-cell control, nanotechnologies, and machine learning. New to BRAIN 2.0, we suggest that since causal technologies have advanced rapidly, it may be time to consider applying these methods to understanding neuropsychiatric disease states at the circuit level. Application of these techniques will continue to provide insight into fundamental principles of circuit operation. At the completion of The BRAIN Initiative®, we envision widespread adoption of integrated neurotechnologies that enable scientists to modulate activity throughout the brain to drive desired and predictable outcomes. We expect that the fundamental understanding obtained as a culmination of the integration of theory, observation, and closed-loop experimentation described herein will allow the design of neurotechnologies that adjust neural activity to produce desired clinical outcomes safely and reliably.

Priority Area 5: IDENTIFYING FUNDAMENTAL PRINCIPLES
In biology, the goal of theory is to help organize experimental observations into conceptual frameworks – and from these, to build predictive models. The need for theory is especially acute in neuroscience, where system complexity is very high. Deciphering relationships between observable properties of the brain and the underlying algorithms these structures and dynamics implement is critical for our ability both to understand the brain and to diagnose and design interventions for disease. Technological advances from The BRAIN Initiative® continue to provide rich data that capture electrical and chemical activity within large populations of neurons, along with detailed knowledge of the diverse and dynamic characteristics of cells and connections. Central to the NIH BRAIN Initiative's mission is the development of statistical and analytical methods to make sense of the data, along with theories for the algorithms that underlie brain function. Such theories, implemented in and explored through computational models, provide a framework for hypothesis-driven experimental design and analysis strategies that make optimal use of rigorously obtained experimental data.
Ultimately, theoretical formulations of key questions will reveal the brain's fundamental computations and also provide clinical access to targeted interventions under conditions of dysfunction. Answers to these questions will define how network dynamics depend on properties of single neurons and their connections; how behaviors are selected, initiated, implemented and flexibly modified by environmental conditions and internal brain states; and which cellular, synaptic and circuit mechanisms support different types of learning. Studying a variety of model systems can help to identify the principles underlying computations that may be implemented in different ways.

BRAIN 2025 vision: Integrating datasets over multiple scales to reveal fundamental principles
A fundamental goal of systems neuroscience is to understand how neural activity gives rise to natural behavior. In order to achieve this goal, we must first build comprehensive models that offer quantitative descriptions of behavior. Goals for this Priority Area included:

New analytic approaches for large, complex datasets. BRAIN 2025 aimed to leverage experimental datasets to identify fundamental principles of brain organization and function. Our ability to fulfill this vision will be bolstered by progressively richer experimental datasets, close collaborations between theorists and experimentalists, and training of students and postdocs in quantitative methods. Critical to success are new techniques for analyzing large, complex data sets and connecting them to rich behavioral measures. The vast increase in the number of neurons that can be measured simultaneously has generated a major interpretation challenge – creating a new urgency for handling this unanticipated but welcome success.

Identifying multiscale linkages. A second key component prioritized by BRAIN 2025 for this Priority Area was the need to bridge multiple scales. Doing so entails evaluating detailed biology at one scale (e.g., single neuron activity and cellular identity) and understanding its impact at another scale (e.g., EEG measurements in behaving humans). Bridging these scales is essential if the discoveries made in animal models are to truly advance our understanding of the human brain in health and disease.

Uncovering general principles. BRAIN 2025 envisioned that achieving the goals defined above would support the ability to identify general principles applicable to understanding the human brain. Such principles include identifying computations common to multiple scales and systems; constructing a mechanistic understanding of how movement is controlled; and understanding how the brain makes decisions. Insights from many animal models provide an opportunity to uncover general biological concepts shared among species.

Building a quantitatively-trained workforce. BRAIN 2025 identified a need for quantitative training of neuroscientists as well as support of theory research, prioritizing collaborative integration of theory into experimental work. It also laid out a vision for new systematic approaches to data management that would facilitate broad access to annotated data sets. This would enhance collaboration between laboratories and with theorists, as well as support validated and reproducible science. A key aim was to accelerate incorporation of theory, modeling, computation, and statistics perspectives and techniques into experimental neuroscience research.
NIH funding to date: Identifying Fundamental Principles
NIH has funded projects that apply quantitative models to test foundational theories and models of circuit-level mechanisms in the context of specific behaviors or brain states. Importantly, a separate NOFO related to understanding neural circuits has been issued for projects to develop theories and computational models, as well as to build an analytic toolset for understanding brain data. The NOFO "Theories, Models, and Methods" provides opportunities for novel theory development and has been re-issued routinely. NIH also: i) encourages theory-driven experimental design in all experimental brain circuit requests for applications (RFAs); ii) plans to re-compete the U19 awards to elaborate and innovatively test explicit theories; iii) considers other funding mechanisms to promote and support novel theory development; and iv) incorporates quantitative methods and high-quality, high-resolution behavioral measures in most team RFAs.

Where are we now? How advances in theory and analysis have transformed the landscape Over the past 5 years of investments in modeling and data-analysis methods, research in this Priority Area has fueled development of new paradigms that are helping to unravel circuit complexity related to brain functions such as motor control and decision-making. The NIH BRAIN Initiative has been for the most part successful in emphasizing to the community the importance of theory and analysis for neuroscience research. Yet, there is still a tremendous amount of work needed to uncover fundamental principles of brain function. We do not yet have a comprehensive theory of any specific brain system, nor do we have definitive answers to most of the key challenges laid out in the ambitious short- and long-term goals of BRAIN 2025. Network modeling We have seen progress in simulation via use of relatively large-scale neural networks. Rather than being limited to selecting parameters by hand, scientists can now take advantage of advances in network-training methods such that artificial neural networks can learn to perform complex tasks analogous to those used in some experiments. Since all parameters can be characterized in detail, by analyzing these trained networks, scientists can derive core dynamics underlying computations and then explore the relevance of low-level mechanisms such as specific synaptic learning rules. A now-classic example is training feedforward convolutional networks to recognize objects; in these networks, some of the receptive-field properties of the visual system naturally emerge. Highly interconnected neural networks allow simulation of computations that can function over many timescales. Exploring computation via recurrent neural networks can help to refine our thinking about specific brain mechanisms. While recurrent neural networks trained to carry out a task may have many different parameter settings, their dynamics typically converge to a common, low-dimensional structure dictated by the demands of the task, revealing the core computation that is invariant to possible individual implementations. Modelers have used such recurrent neural networks to understand the dynamics of perceptual integration, flexible decision-making, motor control, spatial navigation, and the ability to estimate time. In a recent study of how networks can retain and manipulate memories, investigators revealed how short-term synaptic plasticity permits memory storage without persistent activity; however, new dynamical architectures emerged when the networks were trained to manipulate the information stored in memory. Data-analysis tools  Large-scale data collection is essential both to test modeling predictions and to drive the development of new models, but methods to process these data rapidly are critical. Advances in data processing are generally keeping pace with technological developments, and new tools are being rapidly released to the community. Effective, freely available software includes Kilosort and Mountainsort (Flatiron Institute) for sorting spikes. For denoising signals and segmenting neurons in imaging data, constrained nonnegative matrix factorization and suite2P are both now widely used and allow scientists to identify many more neurons than previously possible. For studies in humans and NHPs, where automated spike sorting remains challenging, “clusterless” approaches to electrophysiological data analysis that obviate the need for precise spike sorting may be effective in some applications. 
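To make the preceding discussion concrete, the toy pipeline below illustrates, in miniature, the operations that spike-sorting packages automate: threshold detection, snippet extraction, and clustering. It is a conceptual sketch on simulated data, not the algorithm used by Kilosort, MountainSort, or any other released tool, and all signal parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Deliberately miniature "spike sorting" on simulated data: threshold detection,
# snippet extraction, and 2-means clustering on peak amplitude.
fs = 20_000                                   # sampling rate (Hz)
t_total = 2.0
trace = rng.normal(0.0, 0.5, int(fs * t_total))   # background noise

waveform = -np.hanning(32)                    # generic negative spike shape
units = {"A": (10.0, np.arange(0.01, t_total, 0.100)),   # (amplitude, firing times)
         "B": (5.0,  np.arange(0.05, t_total, 0.137))}
for amp, times in units.values():
    for ts in times:
        i = int(ts * fs)
        trace[i:i + 32] += amp * waveform

# 1) detect threshold crossings with a simple refractory lockout
threshold, lockout = -3.0, 32
peaks, i = [], 0
while i < trace.size - 32:
    if trace[i] < threshold:
        window = trace[i:i + lockout]
        peaks.append(i + int(np.argmin(window)))
        i += lockout
    else:
        i += 1

# 2) extract one feature per detected event (here, just the trough amplitude)
amps = np.array([trace[p] for p in peaks])

# 3) cluster with a tiny 2-means on the 1-D feature
centers = np.array([amps.min(), amps.max()])
for _ in range(20):
    labels = np.argmin(np.abs(amps[:, None] - centers[None, :]), axis=1)
    centers = np.array([amps[labels == k].mean() for k in (0, 1)])

print("detected events:", len(peaks))
print("cluster sizes:", np.bincount(labels), "cluster mean amplitudes:", centers.round(1))
```

Production spike sorters work on hundreds of channels, learn full waveform templates, and handle drift and overlapping spikes; the point here is only to show the shape of the problem those tools solve.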
Variants of these spike-sorting and imaging-analysis methods have been implemented for real-time analysis, including for calcium imaging, spike sorting, and deconvolution of multiunit signals. Furthermore, machine-learning methods for automatic tracking of video data after relatively little hand-labeled training, together with behavioral segmentation, are revolutionizing quantitative analyses of complex naturalistic behaviors. Once data are preprocessed, analysis methods are needed to test and develop models. Dimensionality reduction methods help identify low-dimensional structure in neural activity, allowing complex data to be summarized and visualized more easily. Improved variants of dimensionality-reduction tools are now widely used. These include demixed principal component analysis, which can isolate components of neural variations according to task parameters such as context or choice outcome, and Gaussian process-factor analysis, which incorporates variations over time. New statistical methods using Bayesian graphical modeling and deep learning can build data-driven models that increasingly mirror the dynamical systems structure of artificial network models. A powerful recently reported deep-learning approach infers latent dynamics from single-trial neural-spiking data. Such methods can be used to compare data directly with information obtained via recurrent neural network models. Key data-analysis challenges remain. These include extracting information from experiments that acquire data involving complex stimuli and responses associated with natural behaviors; analyzing variations among individual trials; reconciling dependence of neural responsiveness on internal state; and incorporating other slowly varying changes such as those that occur during learning. We are seeing progress in several directions. The Latent Factor Analysis via Dynamical Systems (LFADS) method can extract de-noised single-trial firing rates from spiking data. New techniques that enable online updating of response models using adaptive filtering are being used successfully for population decoding in brain-computer interfaces. Emerging methods can incorporate latent-state variables that may evolve over time. Thus, these methods offer the ability to analyze slowly varying changes as well as different behaviors or states. It is now possible, for example, to implement frameworks that can reconcile noisy, blurred, and undersampled measurements quickly and stably. Such models (recurrent switching linear dynamical systems) are an important step toward understanding neural drivers of natural behaviors in model organisms such as larval zebrafish. Hierarchical generalized linear models, useful for untangling interrelated processes that are part of a complex hierarchy, have discovered multiple behavioral states in fruit flies. Theory will play an important role in guiding experimental design and building valid models. For example, new methods make it possible to capture complete information from coarse-grained calcium-activity images, reducing data-storage burdens. In the retina, investigators have been able to incorporate only partial knowledge of anatomy to infer circuit structure from sparse neural recordings. By computing a limit on the number of neurons needed to capture relevant network activity as a function of task complexity, new conceptual insights provide quantitative guidelines for future large-scale experimental design.
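A minimal example of the dimensionality-reduction idea running through this subsection: simulate a population of neurons whose activity is driven by two shared latent signals plus noise, then recover a low-dimensional subspace with PCA computed via the singular value decomposition. This is a toy stand-in for methods such as demixed PCA, GPFA, or LFADS, not an implementation of any of them.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal illustration of dimensionality reduction on population activity:
# many "recorded" neurons, but their firing is driven by only two shared latents.
n_neurons, n_timepoints, n_latents = 120, 500, 2
t = np.linspace(0, 10, n_timepoints)

latents = np.vstack([np.sin(2 * np.pi * 0.3 * t),          # slow oscillation
                     np.exp(-((t - 5.0) ** 2) / 2.0)])      # transient "event"
loadings = rng.normal(size=(n_neurons, n_latents))          # how each neuron mixes the latents
rates = loadings @ latents + 0.3 * rng.normal(size=(n_neurons, n_timepoints))

# PCA via SVD of the mean-centered data matrix (neurons x time)
centered = rates - rates.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

print("variance explained by the first 3 PCs:", explained[:3].round(3))
# The first two components capture most of the variance, reflecting the two
# underlying latents; the projections vt[:2] approximate their time courses
# (up to sign, scale, and mixing).
```

The methods cited in the text extend this basic idea to trial structure, temporal smoothness, nonlinear dynamics, and switching states, but the core move is the same: find a low-dimensional description that explains most of the shared population variability.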
Multiscale modeling and analysis The ability to account for brain-wide electric fields is necessary for interpreting in detail animal and human data obtained from experiments recording local fields. Models for accomplishing this goal would serve as a framework to test theories, and ultimately, to design clinical-stimulation protocols, by predicting the fields generated by targeted stimulation. Solving this problem requires approaches that can bridge experimental scales. One example is the Human Neocortical Neurosolver, a user-friendly online modeling tool that simulates electrical activity across neocortical layers by incorporating biophysical information on cell type, layer-specific inputs and outputs. This tool allows researchers to test hypotheses about circuit mechanisms underlying electrical fields measured by electroencephalography and magnetoencephalography. Scientists have developed less biophysically detailed network models spanning multiple brain areas, and other investigators developed a framework for use in prototype brain-computer interfaces that combines spike and local-field potential data in a multiscale decoding model. Discovery of general principles of neural coding and dynamics The NIH BRAIN Initiative has supported an extensive set of projects in which advanced modeling and data-analysis methods are leveraged to understand many brain functions: sensory representation, learning, flexible behavior, and decision-making. Studies in rodents show how sensory and task representations are intermingled and controlled by state across brain regions. A ground-breaking analysis of whole-brain activity in zebrafish shows how sensory inputs drive movements via sequential operations of sensory integration, competition, and demixing. Enabled by tools for automatic behavioral segmentation and sophisticated data analysis, we are getting a glimpse at how behavioral switching occurs – dorsolateral striatal activity, for instance, reflects rapid transitions between behavioral motifs. We have also identified common strategies used by multiple systems. For example, the observation of random sampling of olfactory space in both fruit flies and rodents has led to robust theories of sensory coding. The identification of the organization of head-direction cells in fruit flies as a ring-attractor network was inspired by models of visual coding mechanisms in mammals. Another concept emerging across multiple systems is that of activity sequences, apparent in high vocal center neurons in birds, hippocampi in mice, and posterior parietal cortex in mice. Theoretical work has helped explain how such activity sequences might arise from recurrent neural networks. Training a new generation of theorists We are amid a culture shift that aims to democratize tools and method development to provide much broader accessibility. This transformation will benefit from the emergence of data-pipeline systems that streamline data collection, annotation, processing, analysis, storage, and sharing. Training in quantitative methods has been partially supported during BRAIN 1.0 through summer courses in computational neuroscience. Despite the absence of targeted hiring incentives by the NIH BRAIN Initiative, increased support for theoretical research has no doubt helped to stimulate rapid growth in hiring in academic positions in computational neuroscience over the past 5 years. 
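As a simplified illustration of the multiscale decoding approach mentioned above (combining spike and local-field potential features in a single decoder), the sketch below fuses simulated spike counts with a simulated LFP band-power feature and decodes a behavioral variable with ridge regression. Every quantity here is synthetic, and ridge regression is only a stand-in for the richer multiscale state-space models used in actual brain-computer interface work.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "multiscale" decoder: fuse spike counts (fast, cellular scale) with an LFP
# band-power feature (slower, population scale) to predict a behavioral variable.
n_bins, n_units = 2000, 30
time = np.arange(n_bins)
behavior = np.sin(2 * np.pi * time / 400) + 0.3 * np.sin(2 * np.pi * time / 173)

tuning = rng.normal(size=n_units)                             # per-unit sensitivity
rates = np.exp(0.5 + 0.3 * np.outer(behavior, tuning))        # log-linear tuning
spike_counts = rng.poisson(rates)                             # (n_bins, n_units)
lfp_power = 1.0 + 0.5 * behavior + 0.2 * rng.normal(size=n_bins)

X = np.hstack([spike_counts, lfp_power[:, None]])             # fused feature matrix
train, test = slice(0, n_bins // 2), slice(n_bins // 2, n_bins)
mu, sd = X[train].mean(0), X[train].std(0) + 1e-9
X = (X - mu) / sd                                             # standardize with training stats

lam = 10.0                                                    # ridge penalty
w = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                    X[train].T @ behavior[train])
pred = X[test] @ w
print(f"held-out decoding correlation: {np.corrcoef(pred, behavior[test])[0, 1]:.2f}")
```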
Goals unmet on the path to identifying fundamental principles of brain function
Despite exciting potential for generating insight into many brain mechanisms, recently developed network models rarely account for or utilize biophysical details of neurons, synapses, glial cells, and neuromodulators that can have significant influences on the dynamics and computations of networks. For example, experimental evidence is outpacing theory regarding the important role of cell-type specific responses to neuromodulation that influence activity patterns – these observations await theoretical study. We must continue to develop modeling frameworks that take these properties into account. Such investigations are well underway in specific brain regions such as cortex, the basal ganglia, and the cerebellum. Bridging micro- and macro- scales remains a major challenge. There are few clear-cut examples of studies that use detailed biological data at one scale to understand its impact at another scale. Any modeling strategy that attempts to include all biological details is unlikely to provide deep insights with broad applicability. However, oversimplification is at odds with mounting evidence from BRAIN 1.0 that shows diverse cortical population circuitry with very rich responses. We will likely see short-term progress on well-defined subproblems. For example, which excitatory-inhibitory connectivity structures are consistent with observations from population-wide neuronal activity? Such studies will begin to link spatial and temporal scales of cortical circuits with the dynamic outputs these circuits produce. Diverse theoretical approaches are required to support continued progress. Novel proposals of computational algorithms are important, since conceptual models may not be easily discoverable from network modeling. One recently reported framework that employs summary statistics – a message-passing algorithm operating at the level of redundant neural populations – may be such a computational motif. More broadly, reinforcement learning has been a very powerful paradigm for experimentally based interpretation of motor learning and decision-making. Within this framework, for example, investigators have interpreted dopamine signals as reward-prediction errors. Yet, new evidence shows that dopamine signals convey a wide range of diverse information, suggesting that updated learning frameworks may be needed. Expanding the scope of reinforcement learning models is an area of active development by theorists. Predictive coding is another potentially powerful normative framework for understanding both brain development and function. In general, given the early state of our understanding of the brain, and the diversity of general principles that may be discovered, we must be wary of following a single theoretical approach and instead encourage diverse ideas.

Gaps and Opportunities: Next steps for BRAIN 2.0
The goals of BRAIN 2025 provided a roadmap for development of statistical and modeling tools essential for extracting information from experimental data. They identified core conceptual areas where interactions between theory and experiment are critical for fully understanding neural systems. We suggest that BRAIN 2.0 should continue to support (and scale and disseminate) quantitative methodologies. Conceptual challenges posed by BRAIN 2025 still lack definitive answers; indeed, solutions to several of these questions are still in their infancy.
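The reinforcement-learning framework discussed under the unmet goals above can be made concrete with a minimal temporal-difference (TD) learning sketch. Everything below is a textbook-style toy (five time steps, one cue, one reward, and an inter-trial state treated as having fixed value zero), not a model of any specific experiment.

```python
import numpy as np

# Minimal temporal-difference (TD) learning sketch of the reward-prediction-error
# framework under which dopamine signals have been interpreted. A cue (state 0)
# is followed, four steps later, by a reward; the inter-trial state is assigned a
# fixed value of 0 because reward timing is unpredictable before the cue appears.
n_steps, n_trials = 5, 200
alpha, gamma = 0.1, 1.0                     # learning rate, discount factor
values = np.zeros(n_steps + 1)              # V(s); values[n_steps] is a terminal state

for trial in range(n_trials):
    rpe_at_cue = gamma * values[0] - 0.0    # surprise at cue onset
    rpe_at_reward = 0.0
    for s in range(n_steps):
        reward = 1.0 if s == n_steps - 1 else 0.0
        rpe = reward + gamma * values[s + 1] - values[s]   # reward-prediction error
        values[s] += alpha * rpe
        if s == n_steps - 1:
            rpe_at_reward = rpe
    if trial in (0, 9, 199):
        print(f"trial {trial:3d}: RPE at cue = {rpe_at_cue:.2f}, "
              f"RPE at reward = {rpe_at_reward:.2f}")
# With learning, the prediction error migrates from the time of reward to the
# time of the predictive cue, mirroring the classic dopamine observations.
```

The evidence cited above that dopamine signals carry information beyond this scalar error is precisely what motivates the expanded learning frameworks now under development.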
For conceptual work – where goals are far-reaching – continued support is needed for iterative progress. New network frameworks open promising paths, but these are still in early days of application to a wide range of problems. While a concerted NIH-funding strategy during BRAIN 1.0 focused upon neuroimaging, support for advances in other areas has been more piecemeal – and often only in conjunction with experimental studies. Specific areas of theory that warrant more aggressive support include theories on the role of non-neuronal cell types in brain function; theories that bridge scales from biophysics to network-level computation; theories that account for large-scale activity patterns such as waves, oscillations and sharp-wave ripples; incorporation of neuromodulation into network models; theories that help constrain the definitions of cell type; and theories that interpret the connectivity data obtained from mapping at all scales. We must continue to seek to unify high-level conceptual theories that provide broad explanatory frameworks across brain areas and model systems. To identify canonical principles, cross-species comparisons are likely to be helpful. Translating findings from model organisms (and from models, more generally) to humans and applying this knowledge to treating diseases depends on our ability to connect information from single neurons to recording and manipulation approaches in humans. Moving forward, we highlight several specific opportunities for BRAIN 2.0, reflecting a balance of new directions and continued activity. 1.    Continue development of data-analysis methods. While automated spike-sorting methods applicable to large-scale data are widely available and have substantially reduced the time from experiment to result in rodents, equivalent tools are not yet available for NHPs and humans, where signal-to-noise ratio and electrode density are typically much lower. More work also remains for handling data that varies over time as well as results from single-trial analyses. A number of groups are working with real-time feedback control, both in animal studies and in brain-computer interfaces. Driven by promise for clinical applications, the most active work is in research with humans, which is limited to relatively coarse-grained information. Advances will extend such approaches to closed-loop control of neural activity in animal-model systems in which decoding algorithms can handle much more precise neural-activity information. Such advances will be critical to attain the goals of Priority Area 4. Demonstrating Causality.   2.    Better understand the role(s) of cell types. Newly emerging data sets on cell type and connectivity, and comparisons across species, raise theoretical questions not yet addressed on a wide scale. What computational role is served by so many cell types? Can network theories constrain the number of effective cell classes? What can we infer from connectomic measurements about circuits and network dynamics? What is the appropriate level of connectomic detail to support predictive models? Comparative connectomics, like comparative genomics, is likely to reveal patterns and conserved rules for both neuronal structure and function. In understanding the role of cell types, it will be critical to understand the biophysical properties that are associated with each type. For instance, membership in a particular cell class might imply a specific pattern of ion channels that confers important functional properties. 
Such a consideration will allow the field to go beyond merely cataloging cell types to instead more deeply understanding their computational role in a circuit.   3.    Continued emphasis on novel theoretical and multiscale frameworks. Although modeling via recurrent neural networks offers promise, we will likely need new conceptual frameworks to interpret and understand circuit dynamics deeply. As an example, one fruitful area may be control theory. Progress in generalizing control theoretic approaches to complex nonlinear networks may introduce explanatory frameworks for neural function and thus serve as a vital tool for closed-loop brain manipulation. New methods are also needed for multiscale analyses as well as to build models that span multiple timescales of synaptic function, cellular dynamics, plasticity, and neuromodulation.   4.    Foster more interactions between experimentation and theory. Large collaborative projects funded by the NIH BRAIN Initiative have provided strong support for experimental/theoretical collaborations. However, this collaborative investment should be more widespread at the level of individual investigators or small groups of investigators. One BRAIN 2025 proposal not yet implemented is encouraging ongoing projects to undertake 3- to 6-month exploratory collaborations between neuroscience laboratories and scientists from theoretical, computational, and statistical backgrounds. This sort of effort would likely reap benefits in increasing wide-scale theoretical input into BRAIN Initiative-funded projects.   5.    Expand and broaden training and recruitment of quantitative expertise. There remains an acute need to make training in quantitative approaches available to all neuroscientists, to continue to fund theorists, and to recruit scientists from quantitative disciplines who can bring novel approaches to answer outstanding questions. While NIH funding during BRAIN 1.0 has supported short-term summer courses, additional modes of training support should both grow the pool of theorists and raise the level of quantitative training broadly.   6.    Monitor how advances in analysis and modeling might impact the privacy of experimental subjects. The advancing ability to extract information from neural data and to use analysis tools and modeling to design interventions in complex brain dynamics raises the need for the application of neuroethical principles. Particular concerns are the potential extraction of private information from analysis of brain or behavioral data without proper consent, and the need to study how interventions in brain activity influence core human traits such as personality, agency, and free will.         BRAIN 2025 identified a wide range of important short and long-term goals for the advancement of fundamental understanding of brain processes across many scales. Here we reiterate, summarize and update these goals for the next decade.

Suggested short-term goals for BRAIN 2.0: Continue development of techniques for analyzing large, complex data sets. 1.    The development of rapid methods for spike sorting and information analysis of encoding should continue and scale to 100,000 to 1,000,000 neurons recorded simultaneously. Spike sorting methods must include metrics of success that are agreed upon in the community so that datasets can be compared and pooled. Techniques that specifically focus on the distinct recording conditions for NHP and human data are urgently needed.    2.    Develop real‐time rapid-visualization and signal-processing algorithms for all types of neurophysiological data:  o    Functional imaging: fMRI, positron-emission tomography, and near-infrared spectroscopy o    Neurophysiology: EEG, magnetoencephalography (MEG), and local-field potentials; single cell- and multiple-cell spike trains o    Optical recordings: genetically‐encoded or chemical reporters of voltage (including subthreshold voltage), calcium, neurotransmitters, synaptic activity, and biochemical states o    Behavior: force and motion, processed video data, and animal and human psychophysical data o    Cell‐level data: anatomy, connectivity, gene expression, and biophysical properties   3.    Develop principled methods, potentially including novel developments in nonlinear control theory, for real‐time feedback-control experiments to manipulate and analyze neural circuits using novel perturbation and recording techniques. Include real‐time applications to neural devices and prosthetics in humans. Explore the role of precise neuronal-level manipulation.   4.    Integrate statistical and analytic approaches with models of neural circuits that are based on connectivity maps and cell types. Multiscale linkages 1.    Establish biophysical sources of the major brain rhythms in EEG and MEG recordings, as well as in the more local sources that give rise to local field potentials in various brain regions and different cortical layers.   2.    Develop a formal statistical-inference framework to conduct network-connectivity analyses from different types of neuroscience data such as fMRI, EEG, local field potentials, and multiple single-neuron recordings.   3.    Explore theoretical and statistical frameworks for fusing information from neuroscience experiments across different experimental techniques and different temporal and spatial scales.   4.    Develop computationally efficient solutions to high-dimensional inverse problems, with particular attention to the interpretation of EEG and MEG data in humans.   5.    Develop theories and models of collective neuronal activity on spatial scales that span individual synapses, neurons, circuits, networks, and systems; develop theories of dynamic activity that span timescales of synapses, action potentials, network activity (including attractors and persistent activity), and internal-circuit states (including neuropeptides and neuromodulatory systems).  Identifying general principles  1.    Develop theoretical insights into how circuit dynamics depend on properties of single neurons and their connections. Explore computational principles comparatively in a range of model species. Identify conditions for which insights from small circuits become relevant to larger circuits. Determine which general rules of circuit function depend on specific biological details of neuronal and synapse function.   2.    
Develop systematic theories of how information is encoded in chemical and electrical activity of neurons and glia; how these are used to determine behavior on short time scales; and how they are used to adapt, refine, and learn behaviors on longer time scales.   3.    Develop a detailed understanding of circuit and plasticity mechanisms behind different forms of learning.   4.    Propose, study, and validate mechanisms that allow information to be gated, switched, and transmitted between specific brain regions. In general, systems neuroscience is commonly broken down into the study of different specialized systems, and our understanding of the interfaces between these systems remains much less understood.   5.    Develop methods to detect and classify internal brain states; relate these states “downward” (to neuromodulatory mechanisms) and “upward” (to memory formation, motivation, and internal models).   6.    Construct a mechanistic understanding of how cellular-level neuronal activity and neuromodulation in multiple brain areas contribute to major brain functions:  o    how motor acts are initiated, controlled, sequenced, and ended o    decision‐making o    goal‐directed and flexible behavior   7.    Continue the efforts of the NIH BRAIN Initiative Neuroethics Working Group to provide ethics consultation to select projects or applications, when appropriate, to help BRAIN Initiative-funded researchers navigate neuroethical issues associated with their work. Accelerate incorporation of theory, modeling, computation, and statistics perspectives and techniques in neuroscience departments and programs  1.    Encourage experimental research projects to support brief (3- to 6-month) exploratory collaborations with theoretical/computational/statistical scientists.   2.    Support more purely theoretical or statistical approaches and those that enable small-scale theory/experimental collaborations.   3.    Tailor new graduate and postdoctoral training grants to theoretical neuroscientists and enhance training of experimental neuroscientists in quantitative methods.     4.    Continue to provide incentives for hiring theory, modeling, computation, and statistics faculty, and support dissemination of new computational methods for postdoctoral and graduate students through summer courses, institutional curricula, web‐based courses, meeting workshops, and other mechanisms. Suggested long-term goals for BRAIN 2.0:

Develop new techniques for analyzing large, complex data sets 1.    Integrate statistical and analytic approaches with models of neural circuits based on connectivity maps and cell types.   2.    Extend the solutions for spike sorting, encoding, connectivity, and decoding to data sets larger than 1,000,000 simultaneously recorded neurons, and integrate with connectomic data and other types of data. Multiscale linkages 1.    Establish a generic framework for fusing information from neuroscience experiments across different experimental techniques and across different temporal and spatial scales.   2.    Enable real‐time high-dimensional inverse solutions from MRI, EEG, and MEG recordings.   3.    Identify essential elements of widely distributed, time‐varying neuronal processes by bridging between detailed realistic models and qualitative behavioral models. Define principles governing those computations at each spatial and temporal scale important for understanding system-wide behavior.  Identify general principles 1.    Establish theoretical approaches to understand general principles applicable in micro-, meso- and macroscale circuits in multiple animals. Of particular interest are theoretical studies that illuminate circuit-circuit interactions and consequent complex human cognition. An ultimate goal is to seek high-level theories of the brain that can unify the many diverse phenomena currently studied by neuroscientists in different subdisciplines (e.g., vision, memory, decision-making).     2.    Work toward a complete computational theory of one or several chosen systems, for example, hydra, roundworm, fruit fly, or zebrafish larva – making use of data, simulations, and modeling that bridge single cells to behavior and provide a basis to extract computational principles at multiple scales. Such a program could provide constraints on the physical measurements necessary to build explanatory models.   In summary, Priority Area 5.Identifying Fundamental Principles has achieved many of its goals, stimulating development of new data analytic and modeling approaches to deepen understanding of motor control, decision-making and other brain functions. Network-training methods now enable artificial neural networks to learn to perform complex tasks similar to those used experimentally, to generate new hypotheses and to capture hidden structure in data. In BRAIN 2.0, attention should be paid to integrating emerging biological knowledge into models and nurturing a diversity of theoretical approaches at multiple levels of description and spatial scales. At the conclusion of the BRAIN Initiative, advances in this area will bring together theory and experiment to solve profound and overarching questions central to systems neuroscience, which will ultimately explain how intricately connected networks of neurons acquire the ability to govern behaviors, thoughts, and memories.   Priority Area 6: HUMAN NEUROSCIENCE A primary goal of The BRAIN Initiative® is to understand the function of the human brain and the peripheral and autonomic nervous systems in a way that will translate new discoveries and technological advances into effective diagnosis, prevention, and treatment of human brain and nervous-system disorders. The study of human brain function faces major challenges because many experimental approaches applicable to laboratory animals cannot be immediately translated to study in humans. 
Nevertheless, direct study of the human brain is critical because of our unique cognitive abilities as well as the profound personal and societal consequences of human brain disorders. Advances in genetics, single-cell studies, imaging, and physiology introduce possibilities for studying the human brain at multiple levels to understand its normal function and what goes awry in neurological and neurodegenerative disorders.

Understanding the brain: Fundamental insights for discovery
Improvements to existing technologies such as MRI and PET have revolutionized our ability to noninvasively study the structure, wiring, function, and chemistry of the human brain. Physiological approaches benefit from the increasing number of humans undergoing diagnostic brain monitoring with recording or stimulating electrodes, and from those who are receiving neurotechnological devices for therapeutic applications or diagnosis (e.g., DBS and epilepsy monitoring). Advances in DNA sequencing are revealing genes that, when altered, create symptoms that link molecules to behavior. These new opportunities will allow us to combine techniques across barriers of spatial and temporal scale and thereby to understand the human brain much more deeply. For example, we can combine observations from noninvasive brain imaging with high-resolution cellular and physiological data obtained from humans with implanted devices to quantify activity and chemistry at a cellular level. We can also envision translational opportunities afforded by combined measurement of noninvasive and cellular-level signals in animal models. Breaking through barriers of scale – enabling us to compare and combine data from distinct experimental approaches – would yield substantial benefits for diagnosing and treating diseases as well as for basic discovery about the human brain.

Toward cures: Future platform for therapeutics
The last 20 years have seen explosive growth in the development and use of noninvasive brain-mapping methods, predominantly MRI (complemented by MEG and EEG), to study the human brain under normal and pathological conditions, as well as across the human lifespan. Methods to stimulate the nervous system are advancing beyond the experimental phase and toward therapeutic use. In addition to invasive DBS, both TMS and transcranial direct-current stimulation (tDCS) are being explored as therapies. We anticipate significant progress in using these sensing and stimulation methods to measure wiring and function of the human brain at multiple scales: in neuronal ensembles, in circuits, and in larger-scale networks ("circuits of circuits"). In turn, these capabilities will allow us to visualize and understand how circuit-level disruptions lead to human brain disorders. Brain-measurement techniques are also valuable for validating emerging technologies, such as functional ultrasound, and for evaluating the effects of medications and genetic therapies. By integrating data at multiple levels, BRAIN 2.0 can be a platform to link molecules, networks, and behavior.

BRAIN 2025 Vision
The overall objective of human neuroscience research is either to understand brain functions that can only be studied in humans, or to validate and translate concepts derived from animal studies. Examples of the former include such functions as language, higher-order symbolic mental operations, and individual-specific aspects of complex brain disorders such as schizophrenia or traumatic brain injury.
Examples of the latter include neuropsychiatric models for addiction and obsessive-compulsive disorder that have been derived via optogenetic manipulation of rodents – a technique that is not yet usable in humans. The availability of clinically approved investigational technologies, including devices that are surgically implanted into the brain, provide a unique research opportunity to investigate cause-and-effect relationships in neural function by stimulation or recording at cellular and circuit-level resolution. Our ability to record electrical activity at the cellular level, in humans, is expanding, providing a unique opportunity to link the activity of individual neurons with more global signals obtained using noninvasive imaging methods such as fMRI. In turn, both cellular‐level and global signals can then be linked to human behavior, thought, and emotion. Research involving human research participants, however, comes with a special mandate to ensure that these rare and valuable data are collected according to rigorous scientific and neuroethical standards, curated carefully, and shared responsibly among the research community. Assembling and funding specialized teams of researchers and neuroethicists is necessary to ensure coherence between experimental studies and clinical treatment approaches. BRAIN 2025 focused short-term goals on developing both tools and an ecosystem to enable the conduct of human neuroscience research. These goals included developing innovative tools translatable to human applications; establishing pilot projects for collaborative human-neuroscience clinical-trial networks; supporting training grants for human research; developing and implementing methods for archiving and sharing electrophysiological, structural, and clinical data; and establishing neuroethical guidance and training programs. The long-term goals of BRAIN 2025 aimed to build upon this foundation of technology and expertise to advance human neuroscience research. Key priorities for technology included attaining higher‐resolution recording and stimulation approaches in humans; supporting alternative technologies involving electrical, optical, acoustic, and magnetic modalities for greater precision and less invasiveness; and taking advantage of surgical settings for capturing more data and accessing human neural circuits. Infrastructure priorities included establishing international collaborative networks to expand impact, integrating human and animal data to identify fundamental mechanisms, promoting effective sharing of curated multi-level human data, and ensuring that human neuroscience research adheres to consensus ethical principles.   NIH funding to date: Human Neuroscience NIH issued three NOFOs to develop non-invasive imaging technologies, starting with planning grants in the first 2 years of The BRAIN Initiative®, followed by NOFOs for proof-of-concept and production-level projects in fiscal years 2017 and 2018. These opportunities cover the technical-development spectrum regarding idea maturity, development stage, and availability of preliminary data. In addition, a separate NOFO called for studies of cellular and population events underlying signals from existing non-invasive approaches, especially neuronal, glial, and vascular responses that are the basis for fMRI. 
The NOFO also sought research on combinations of recording and imaging approaches to bridge spatial and temporal scales for more precise understanding of the information coded in meso- and macro-scale signals recorded from the brain. Complementing the human-imaging NOFOs, NIH issued two NOFOs for non-invasive neuromodulation in research with humans – one NOFO to develop and optimize technologies (three awards issued in fiscal year 2018) and another NOFO calling for mechanistic studies to understand the technologies’ effects (five awards issued in fiscal year 2018). Additionally, in pursuing recommendations from BRAIN 2025 and a workshop, NIH developed a public-private partnership (PPP) program, engaging manufacturers of implantable recording and stimulating devices for potential use in humans. In conjunction with this program, NIH issued a series of NOFOs supporting small clinical studies (such as Early Feasibility Studies as defined by FDA) for new or improved therapeutics using the latest generation implantable neuromodulation devices. Projects may include both translational and clinical phases, and include activities required for trial approval such as benchtop or animal testing for safety and reliability. The funded projects cover a range of brain disorders, with small exploratory trials intended to test for safety and preliminary evidence of efficacy, as well as to obtain neurobiological information essential for therapy development. NIH’s expectation is that studies with positive results will provide incentives for the commercial sector to begin larger trials toward market approval for the intended therapies

Progress in this priority area anticipated several tangible outcomes. Integrated teams of clinicians, scientists, device engineers, patient‐care specialists, regulatory specialists, and neuroethicists are working together to identify and pursue unique research opportunities offered by the participation of informed, consenting human research participants. Cooperation of clinical and academic research teams and private companies in a pre‐competitive space will enable implementation integration, and long-term support of innovative new technologies for human neuroscience research. Integrated technologies will combine recording and stimulation capabilities from implantable devices and integrate aspects of electrical, optical, acoustic, genetic, and other approaches for research with humans and for clinical applications. Where are we now? The Human Neuroscience component of BRAIN 2025 saw dramatic successes but also revealed several ongoing tensions from this complicated endeavor. Successes arrived with technology breakthroughs, whereas challenges surround the human element of scientific investigation, including promoting and assembling interdisciplinary interactions as well as accessing, analyzing, and sharing data in a responsible but productive way. Imaging MRI and PET. The most dramatic yields from NIH BRAIN Initiative investments in non-invasive functional mapping have been MRI advances, in particular related to magnetic-field gradient, radiofrequency, and static-field increases. Preliminary reports of human imaging at 10.5 T, the highest fields ever used in research with humans, are just beginning, and next-generation integrated systems with advanced designs at 7 T promise equally exciting advances. Other MRI-based projects seek to remove restrictions of strict immobility, offering the potential for more naturalistic assessment of human behaviors using full tomographic three-dimensional mapping. These systems are expected to become available during the next few years. All of the next-generation human neuroimaging grants awarded during BRAIN 1.0 thus far are scheduled to complete final design, construction, and testing of their novel instruments during BRAIN 2.0. In this regard, BRAIN 1.0 has already made significant progress toward meeting BRAIN 2025’s human-brain mapping goals, although success awaits full implementation. Integration of whole-brain MRI-based measurements with invasive electrophysiological recording is also emerging, given that the safety of invasive recording devices with structural and now-functional magnetic resonance has in some settings been established. Yet, the means to fully integrate and interpret these data streams is a work in progress. Extending initial efforts to “best-in-class” measurements in each domain (high-field functional MRI, dense-invasive recording arrays) remains a highly challenging opportunity for the future. Advanced instruments that can push forward molecular-imaging acquisition, predominantly with PET, are being developed through NIH BRAIN Initiative funding. As with MRI, PET technology has yet to achieve very significant performance gains in resolution and sensitivity, but this work – including instruments that integrate PET with high-field MRI – is in development (Phase 1). Thus far, only one Phase-2 award has been made, suggesting that PET technology improvement is a work in progress. Tools and probes New tools, including magnetic-particle imaging methods, have begun to emerge for use in research with humans. 
This technology likely has the ability to improve hemodynamically based brain mapping by an order of magnitude beyond that achievable with ultra-high-field MRI – albeit with some uncertainty in the availability of appropriate magnetic nanoparticles. Hybrid electrophysiology/ultrasound systems are also in early developmental stages. These may fundamentally change the way we reconcile imaging characteristics and at-scale physiological function (the electromagnetic inverse problem) and thus introduce the opportunity for limited tomographic mapping of electrophysiological signals. A comparably smaller NIH BRAIN Initiative investment has gone to development of novel probes to be used with these emerging instruments. The expected order-of-magnitude increase in sensitivity in PET instrumentation during the BRAIN 2.0 period – combined with integration of such capabilities with functional and structural MRI at high field – presents a significant opportunity to study neurotransmitter and neuromodulator dynamics and their distributed functional consequences during cognitive paradigms. Discovering and validating new probes in line with these capabilities is a key prospect. Probes that are capable of precise targeting of specific neuromodulatory systems, specific cell types, and other key molecular targets remain an important opportunity for BRAIN 2.0. Partnerships with the private sector could accelerate this process significantly.
Closed-loop DBS
The first 5 years of The BRAIN Initiative® defined foundational elements (both sensing and operational) required for human neuroscience instrumentation, including studies aiming to demonstrate causality. This endeavor was catalyzed by a novel public-private partnership (PPP) between industry, NIH, and FDA. The initial focus of the PPP was to explore sensing and stimulation devices for application to closed-loop brain stimulation. Over the course of the first 2 years, a series of templates was created to facilitate confidentiality and partnership agreements for NIH-funded studies. These templates saved many person-years of negotiations for each grant and helped make NIH BRAIN Initiative support of technology palatable for companies. The agreements reflect a compromise on intellectual property rights, with partners achieving a balanced share of value. Since then, NIH has issued periodic requests for funding to support researchers prototyping new therapy concepts with these tools.
Non-invasive brain stimulation
Since the inception of BRAIN 1.0, we have seen significant, albeit somewhat limited, advances from support of research on minimally invasive brain stimulation – with important potential for human neuroscience. Studies investigating the dose-response relationships of external electrical and magnetic stimulation are providing an empirical foundation for efforts to standardize treatments, while at a more fundamental level, the NIH BRAIN Initiative is now supporting studies to understand the underlying biophysical mechanisms of action of electrical and magnetic fields at a cellular level. Several BRAIN Initiative-funded grants have been awarded to evaluate functional ultrasound as a means for non-invasive neuromodulation, either directly or through opening of the blood-brain barrier and subsequent focal delivery of medications.
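For readers unfamiliar with the closed-loop concept referenced in this subsection, the sketch below shows the basic sense-decide-adjust cycle that closed-loop stimulation systems implement: a physiological biomarker is sensed, compared with a target, and the stimulation amplitude is nudged accordingly within pre-specified safety limits. This is a minimal, purely illustrative Python sketch – the biomarker, gain, limits, and device functions are invented placeholders, not the interface of any BRAIN Initiative-funded device.

```python
# Minimal, hypothetical sketch of a closed-loop neuromodulation cycle.
# The sensing and stimulation logic below are stand-ins for a real device
# interface; the biomarker, target, gain, and amplitude limits are invented.

import random


def read_biomarker() -> float:
    """Stand-in for sensing, e.g., band-limited power from an implanted lead."""
    return random.uniform(0.0, 1.0)


def closed_loop_step(amplitude_ma: float, biomarker: float,
                     target: float = 0.5, gain: float = 0.2,
                     amp_limits: tuple = (0.0, 3.0)) -> float:
    """Adjust stimulation amplitude in proportion to the biomarker error,
    clamped to pre-specified safety limits."""
    error = biomarker - target
    new_amplitude = amplitude_ma + gain * error
    low, high = amp_limits
    return max(low, min(high, new_amplitude))


if __name__ == "__main__":
    amplitude = 1.0  # mA; hypothetical starting amplitude
    for step in range(10):
        marker = read_biomarker()
        amplitude = closed_loop_step(amplitude, marker)
        print(f"step {step}: biomarker={marker:.2f}, amplitude={amplitude:.2f} mA")
```

The scientific questions raised above – which biomarkers to sense, how stimulation parameters should be adjusted, and how such adaptive behavior affects the person using the device – are precisely the issues the PPP-supported studies and the neuroethics discussions in this report are meant to address.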
Organoids and assembloids
Brain organoids and assembloids, derived from human pluripotent stem cells, are an emerging technology developed outside of the scope of BRAIN 2025 that may offer new opportunities to study human brain tissue. While very valuable for dissecting key determinants of brain development or potential alterations due to specific disease states, they are not yet apt substitutes for intact brains when studying the interplay between cells and networks, toward understanding functional connectivity.
Gaps and opportunities: Next steps for BRAIN 2.0
Generally, human neuroscience research in BRAIN 2.0 will benefit most from enhanced support of technology development and sharing, integration with genomic data and advances, human-resource issues including collaboration and training, and data-science improvements. Moving forward, several opportunities exist for BRAIN 2.0, reflecting a balance of new directions and continued activity.
1.    Technology. There is a pressing need for better invasive and noninvasive tools to understand and manipulate brain function, which will help define the mechanisms underlying the effects of, and benefits from, brain stimulation. Ethical considerations require that invasive human neuroscience research obtain clinical justification for participation, which by default should also motivate less-invasive methods that expand the pool of neurotypical controls and leverage existing clinical procedures in more creative ways. Establishing more PPPs could help, recognizing that economic considerations factor into companies’ willingness to invest in research with long-term timeframes. Better tools are also needed to study and control brain activity at higher levels of spatial and temporal resolution. Novel tracers, for example, will advance study of synaptic function in humans. This will contribute to a better understanding of brain development and psychiatric diseases, and may provide biomarkers for neurodegenerative disorders. Development of safe and efficient viral vectors for replacing genes, or of other cell-specific tools for manipulating various molecules and pathways, will be key steps for manipulating the human brain at the level of circuits as well as with medications. While the field of gene therapy has been revived during the past few years, outside of the scope of the NIH BRAIN Initiative, broad and well-distributed delivery of molecules throughout the nervous system using viral vectors remains a challenge requiring more investment in better vectors and/or other technologies.
2.    Collaborative efforts. A lot of exciting neuroscience is funded outside of The BRAIN Initiative®. Thus, it is critical for scientists studying the human brain at different scales to collaborate and to form networks around human neuroscience to benefit both fundamental research and translational research. Bringing clinicians, neuroscientists (systems and molecular), engineers, computational biologists, and ethicists together to work as teams remains both a gap and an opportunity. One particular missed opportunity from BRAIN 1.0 is collaborative human neuroscience trial networks that lower the barriers to research. Going forward, standard templates for informed consent and certain neuroethical considerations could facilitate progress, similar to the PPP process for developing devices.
3.    Training.
BRAIN 2.0 should increase emphasis on funding trainees, especially those striving to become interdisciplinary neuroscientists, as well as supporting clinical/surgical investigators with access to human-brain tissue. More broadly, there is a pressing need to train clinical investigators, scientists, and physician scientists in various aspects of human neuroscience. These clinical investigators will benefit from training grants to facilitate access to, and awareness of, BRAIN Initiative-funded research. In addition, support mechanisms such as the neuroscience trial network might help early stage researchers initiate their programs. A critical part of training should also include integrating neuroethics into discussions about neuroscience when relevant. Neuroethics knowledge and awareness can help neuroscientists develop an awareness of what constitutes a neuroethical concern and where to get assistance or find collaborators to address such concerns. 4.    Biological discovery. In addition to continuing BRAIN 2025’s intentional focus on technology and tool development for the neuroscience research community, studies probing fundamental biological function are also necessary. One significant gap and opportunity is integration of genetic data and meta-data with data arising from imaging, physiology, and brain-modulation studies. Genetic and pathway data provide clues for areas of further study, and knowledge of behavioral or phenotypic features will help link cells to networks to global brain function. Given that there are so many behaviors that can only be studied in humans, this is at once a gap and an opportunity. 5.    Data dissemination. The collection of human data, both digital and material, has not been as effective as it should have been during BRAIN 1.0. Many scientists cannot access primary data after study publication, data are scattered in different areas or lack appropriate meta-data for effective analyses, and scientists are not aware of the wealth of useful data available from the BRAIN 1.0 investment. This has been exacerbated by the absence of a unified repository for all relevant data and a hesitance to require (and enforce) data sharing. Better coordination could also include integration with related activities funded elsewhere, notably through the Human Connectome Project and the Adolescent Brain Cognitive Development study, which have both made important contributions to large-scale data structures for human neuroimaging and cognitive data. Also critical are consolidated, user-friendly search tools for BRAIN Initiative-funded project data, to facilitate research by the broader biomedical research community. As a reference, the Human Genome Project’s impact came not so much from the groups that sequenced the genome but from all the users that worked with the data, transforming the landscape of disease-gene discovery, evolutionary biology, genome architecture, and the vast universe of various non-coding RNAs. As is the case with the Human Genome Project, widespread data sharing should be controlled to ensure data security and privacy.   Suggested short-term goals for BRAIN 2.0: The evolving scientific landscape suggests some new short-term goals for BRAIN 2.0: 1.    Develop better approaches to acquire, preserve, and study, living human tissue from surgical procedures, as well as from post-mortem samples, to enable studies of the human brain and peripheral/autonomic nervous system that include structure-function mapping, transcriptional approaches, and proteomic. 
One example is cortical tissue from epilepsy surgery, but there are opportunities to acquire tissues from a number of brain areas given the diversity of neurosurgical procedures. Acquiring these samples can provide material for optogenetic manipulation and other methods that are not currently feasible in humans. Encouraging research partnerships between neurosurgeons and scientists can enhance studies on neural connectivity using tools ranging from imaging to physiology.   2.    Increase mechanistic understanding of DBS and closed-loop modulation in preclinical and clinical models. While modest success was achieved in movement disorders therapies, clinical trial failures in depression and poor market penetration in epilepsy motivate continued refinement in DBS-stimulation methodologies. Continuing support through the PPP helps to lower the risk profile for industry to invest in novel therapies, which might otherwise be abandoned or significantly delayed.   3.    Expand research beyond invasive devices. The BRAIN Initiative PPP should also emphasize more than just bi-directional medical implants to increase the potential impact of this work. Examples include conducting human imaging coupled with simultaneous, minimally-invasive electrophysiological and tomographic “functional” mapping of local and distant effects of such neuromodulatory actions. Rebalancing the portfolio with imaging technology, surgical procedures, and molecular approaches should be considered.   4.    Continue to invest in the physics/engineering of non-invasive imaging instrumentation and support the development of non-invasive approaches with high spatial and temporal resolution to monitor neural activity (including non-electrical activity) in humans.   5.    Establish standards for teams developing human-use tools. This research requires high standards for design practices, quality-management systems, basic program management, and other aspects subject to regulatory scrutiny. As reflected in limited return on investment in this area during BRAIN 1.0, many academically trained scientists do not have such skills and as such struggle to deploy systems at scale without appropriate resources. Encouraging collaborations with industry may advance clinical applicability of human neuroscience research involving technology.   6.    Support interdisciplinary research to allow successful use of fMRI technology in clinical settings. Currently, fMRI use is limited to research, and its clinical use is limited mainly to presurgical mapping. Combining better understanding of human brain connectivity, with improved functional imaging and computation might help advance the applicability of fMRI in clinical settings. A good example is that provided by disorders of consciousness and psychiatric disorders, in which both structural and functional connectivity mapping will likely be of important clinical value for prognosis and guiding advanced neuromodulatory treatments in the future.   7.    Training (short- and long-term): Support neuroscience-oriented training of scientists outside neurobiology, including computational scientists, physicists, and engineers, toward advancing progress in imaging and non-invasive electrophysiology technologies. Also needed is interdisciplinary training of various types: computation with biology, virology with brain research, engineering with biology, and ethics and neuroethics with neuroscience.   8.    Improve data access. 
Key to the success of the BRAIN Initiative is broad data accessibility, to encourage varied experimental approaches and introduce novel hypotheses. Primary data must be collected and shared in formats that are accessible, user friendly, and contain publication source codes. A centralized data repository might facilitate this objective (see more details in Priority Area 8. Organization of Science of this report).   9.    Develop a set of actionable neuroethical guidelines for neurostimulation and neuromodulation in humans (short- and long-term). Plans to implant devices into humans for long periods, including consideration of closed-loop systems that limit independent control by an individual with an implant, should continue to be carefully reviewed. For example, at the completion of a research study, if a research participant has attained therapeutic benefit and would like to continue to use his or her device, then both funding agencies and companies involved with the research should work with investigators to address the question of how to provide long-term support for such individuals. Suggested long-term goals for BRAIN 2.0: Several new long-term goals related to human neuroscience could be pursued in BRAIN 2.0: 1.    Develop better technologies and assay systems for targeting neurons and glia in humans, including improved viral vectors, next-generation CRISPR technologies, and other non-viral methods. New approaches now allow large-scale screens with DNA barcodes to select viral vectors. Better technologies are needed to identify safe and neurotropic viruses for future gene-replacement and gene editing-based therapies. These studies need careful ethical oversight.   2.    Discover and validate novel PET tracers to monitor neural activity and molecular signatures in human synapses. PPPs with the pharmaceutical industry (by virtue of their significant neurochemical databases), akin to those for development of devices, may uncover diagnostic applications of compounds with limited therapeutic potential.   3.    Improve electrophysiological source localization, bringing near or true tomographic capabilities to non-invasive electromagnetic recording. Advances in machine learning may offer significant opportunities for progress in this domain, as will forthcoming developments in magnets that are operational at room temperature.   4.    Develop multiscale approaches and tools to integrate data generated using different experimental approaches. The integration of multiple and diverse data sets (imaging, physiology, behavior, and clinical records) is a prerequisite for solving human specific neurobiological questions.   5.    Develop better translational model systems offset the need for human studies. Developing suitable models to explore foundational aspects of disease states and treatment mechanisms might help accelerate the process to defining applications in humans.   In summary, we are poised for progress during BRAIN 2.0 in Priority Area 6. Human Neuroscience. Advances in BRAIN 1.0 have set the stage for conducting neuroscience research with human participants, but these opportunities arrive with the need to consider several issues. Progress in this area requires extraordinary levels of collaboration, involving integrated teams of clinicians, scientists, device engineers, patient‐care specialists, regulatory specialists, and neuroethicists – to ensure not only innovation, but also safety and scientific rigor. 
Important ethical concerns center on use of integrated technologies and implantable devices, including considerations of long-term management of those shown to be effective through research. New challenges also face us with regard to managing this type of human data. Advances in this critical area are at the heart of the goals of The BRAIN Initiative® – revealing mysteries of humans’ unique cognitive abilities and helping us treat or prevent devastating consequences of brain, peripheral and autonomic nervous system dysfunction.   Priority Area 7: From BRAIN to Brain The topics described in Priority Areas 1 to 6 outline the most critical scientific topics for the NIH BRAIN Initiative. Collectively, they cover key questions and needs for progress toward understanding how populations of neurons create unitary perceptions, optimal decisions, and coordinated movements. However, addressing those various priorities individually will not be the fastest route to discovery. Technologies and experimental insights that come from work in the different scientific Priority Areas are often complementary; offering more impact in combinations. For example, theoretical work and modeling are most effective when tightly melded with experimentation. Similarly, physiological experiments that both monitor and perturb neuronal activity during behavior can provide insights that could not arise using isolated approaches. The most productive experiments will be those that can exploit tools and knowledge from multiple areas: cell-type identity, circuit connectivity, functional maps, theory, activity monitoring, and perturbation. BRAIN 2025 vision: The power of integrated technologies   BRAIN 2025 recognized that the NIH BRAIN Initiative must prioritize combining complementary approaches toward using fully integrated systems to explore neuronal mechanisms driving higher brain function. Development of integrated approaches is itself a major challenge requiring sustained encouragement and support. Effective integration requires much more than simultaneous uses of multiple techniques. Moreover, use of integrated systems pose issues that do not arise in use of isolated methods. Examples include crosstalk between wavelengths used for optical imaging and those used for optical stimulation; contamination of electrophysiological recordings by electrical-stimulation artifacts; and incompatibility between methods for establishing cell identity or neuronal projectomes and nanoscale reconstruction of circuits. In general, combining approaches requires tailoring fully integrated systems to address those issues that do not exist when the component methods are used independently. For this reason, BRAIN 2025 emphasized use of integrated approaches by making it a separate Priority Area. Integrating diverse approaches was expected to require large consortia of experimentalists, technologists, theorists, and data scientists – invoking a special importance for team science as well as technology dissemination and training. NIH funding to date: The BRAIN Initiative® to the Brain In addition to developing new technologies, the NIH BRAIN Initiative has funded efforts to integrate and apply the cutting-edge approaches to answer fundamental questions about circuit function. A series of “BRAIN Circuits” NOFOs support research that integrates experimental, analytic, and theoretical capabilities for comprehensive analysis of specific neural circuits or systems. 
Across these NOFOs, projects are expected to record and perturb circuit function with cellular and sub-second resolution, as well as to apply quantitative models to test foundational theories and models of circuit-level mechanisms in the context of specific behaviors or brain states. The resulting projects represent a diverse research portfolio of approaches to understand circuits and their contributions to perceptions, motivations, actions, and other mental processes throughout the nervous system. NIH issued four NOFOs representing different stages of research in animal models, plus a separate NOFO for research in humans using electrode devices implanted for therapeutic recording and/or stimulation or for pre-surgical neural activity monitoring. Collectively, this suite of NOFOs issued 41 awards in fiscal year 2018.
Where are we now with integrating approaches?
BRAIN 2025 expected that the development and implementation of integrated approaches would lag behind progress in other Priority Areas, because integration depends on initial development of the independent approaches. As such, NIH allocated relatively modest funding to this area during the first 5 years of the NIH BRAIN Initiative, to be followed by substantial increases. Progress has been largely consistent with those expectations. Some of the examples of integrated technologies described in BRAIN 2025 have come to fruition, such as electrophysiological or optical recording while stimulating genetically identified cells. Multiple fabrication strategies have yielded penetrating and surface-recording probes compatible with optical stimulation and/or imaging, as well as drug and gene delivery. We are also seeing progress with the use of spatial-light modulators and holographic techniques to create patterns of two-photon stimulation that can activate populations of individually targeted neurons in specified spatiotemporal patterns. This work has significantly extended single-cell control capabilities in rodents, fish, and invertebrates. Most recently, this approach has been used to control mouse behavior; when combined with complementary optical imaging of neuronal responses to natural stimuli, we should be able to assess behavioral consequences of playing back natural patterns of population activity in the brain. Efforts to combine genetic access with connectomics are also moving along. Exhaustive serial-EM reconstructions of specific cell types, or of cells with specific projections, are ongoing, enabled by genetic tools that yield EM contrast or by molecular recognition of epitopes using nanobodies. Progress has been made on even some of the most ambitious integrated technologies put forth by BRAIN 2025. For example, the MICrONS project supported by IARPA has made impressive progress. Measuring connectomics of neuronal circuits after large-scale recording, this project aims to achieve synaptic-level EM reconstruction of a cubic millimeter of mouse visual cortex in which the response properties of ~100,000 neurons have been determined using optical-imaging methods. Beyond the examples provided by BRAIN 2025, other integrative efforts are underway, such as the structural and functional MRI-based measurements that are being combined with invasive electrophysiological recording in human studies, as discussed in Priority Area 6. Human Neuroscience. Additional strategies are also ripe for development.
For instance, the ability to record neurochemicals, as discussed in Priority Area 3: Brain in Action, awaits integration with electrophysiological and optical recording of neural activity, as well as with optical and electrical neuromodulation, and likely medication-based interventions, to correlate local neurochemistry with circuit electrophysiology. As mentioned in the discussion of Priority Area 2. Maps at Multiple Scales, technologies that measure various activities have reached a point where functional analysis of circuits might be combined with anatomical-connectivity mapping, potentially uncovering advanced theoretical frameworks. Measurements using PET cameras to track synaptic activity could be combined with fMRI data to measure the influence of neuromodulators in brain circuits of human research participants engaged in behaviors that can be monitored.
Next steps for integrative efforts in BRAIN 2.0
Because this priority area deals with a broad approach rather than specific approaches or types of tools, BRAIN 2025 did not list individual short-term and long-term goals, opting instead to describe examples. Following that lead, we do not think it is necessary for BRAIN 2.0 to include an exhaustive set of suggested goals for integrative approaches. However, many opportunities and goals listed in Priority Areas 1 to 6 hinge upon integration. These include:
•    Tools to integrate molecular, connectivity, and physiological properties of cell types
•    Connectivity and functional maps at multiple scales that retain cell-type information
•    Integration of fMRI with other activity measures and anatomical connections
•    Integration of electrophysiological and neurochemical methods
•    Integration of perturbational techniques with other technologies
•    More interactions between experimentation and theory
•    Development of approaches and tools to integrate human data from different experimental approaches
In addition, new themes have emerged that must become part of any comprehensive strategy to achieve integrative neuroscience. Specifically, analyses of brain circuit properties require integrated approaches for the study of both neuronal and non-neuronal functions; relative contributions of cortical and non-cortical brain structures; more naturalistic behavior paradigms; and finally, various models of team science for accomplishing these complex tasks. There is no question that integrated approaches – truly advancing BRAIN to the brain – will remain key for progress toward understanding brain circuits. We suggest continued BRAIN-Initiative support for the development and application of integrated approaches. In summary, because so many of the opportunities and goals listed in Priority Areas 1 to 6 hinge upon integration – from tools that link the molecular, connectivity, and physiological properties of cell types to approaches that integrate human data across experimental modalities – Priority Area 7 is likely to see substantial growth during BRAIN 2.0.
These integrated approaches will truly advance BRAIN to an understanding of complex brain functions such as perception, emotion and motivation, cognition and memory, and action, and inspire new cures for brain dysfunction.
Priority Area 8: ORGANIZATION OF SCIENCE: BRAIN 2.0
Science is an intensely human endeavor. Many challenges in modern biomedicine arise from the reality that the fruits of science – discoveries, tools, and cures – require human actions and often significant teamwork to find meaningful application. Moreover, given that the outcomes of neuroscience research are so relevant to people’s lives, ensuring that this taxpayer-funded science draws from the entire intellectual capital of our nation is critical. We need the broadest perspectives at work to define and solve problems in the integrated framework of science and society. Collaborations among people with diverse expertise (e.g., basic scientists and clinicians or technology developers and technology users) can be challenging. Forging human alliances is an ongoing sociological issue that is difficult to solve and requires culture change built around shared goals and a desire to advance human health. The BRAIN Initiative® has not been immune to these challenges, and solving them is essential for the ultimate success of The BRAIN Initiative®. We suggest several proactive steps to address five areas for growth regarding the overall organization of science: i) data sharing, ii) technology dissemination, iii) workforce development, iv) public engagement, and v) connecting basic research to disease models under study.
I. Sharing Data
BRAIN Initiative-funded researchers generate vast amounts of data in a wide array of formats. In tandem with the growth of massive storage capabilities and high-speed computing, greater quantities of diverse, fragmented, and heterogeneous data are being generated than ever before. These include both quantitative and qualitative datasets from scientists conducting studies with both model organisms and humans. Metadata, “data about data,” provides information such as data content, context, and structure. Metadata enables data re-use and expands discovery beyond individual laboratories. A major current challenge is that few laboratories are effectively sharing data. Of those that do, few use a standardized format, and few adequately handle their metadata. As an example, a MATLAB structure full of spikes is of limited use if information such as animal age, strain, sex, and other characteristics are not included with the dataset. Currently, many experimenters record such metadata in laboratory notebooks, which keeps it disconnected from the actual data. Sharing data and code both within and outside collaborations is an essential component of The BRAIN Initiative®. In addition to extending the value of individual datasets by enabling re-use, data sharing promotes higher standards for data management and curation even before data are made public.
NIH funding to date: Data management and data sharing
NIH has taken initial steps toward development of an informatics infrastructure by issuing NOFOs to support infrastructure for three distinct activities: i) creating standards to describe common experimental protocols and data; ii) aggregating data in archives; and iii) developing software for data integration and analyses.
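To make concrete what activity (i) – standards that keep data and metadata together – looks like in practice, the hedged sketch below bundles spike times with basic subject and session information in a single file using the Neurodata Without Borders (NWB) standard via the pynwb Python library (discussed further under the core principles below). All identifiers and values are invented placeholders, and the snippet is a minimal illustration rather than a template endorsed by the NIH BRAIN Initiative.

```python
# Minimal sketch: spike times and metadata stored together in one NWB file.
# Requires the pynwb package; subject and session details are hypothetical.

from datetime import datetime
from dateutil.tz import tzlocal

from pynwb import NWBFile, NWBHDF5IO
from pynwb.file import Subject

# Subject-level metadata that would otherwise live in a laboratory notebook.
subject = Subject(
    subject_id="mouse_001",     # hypothetical animal identifier
    species="Mus musculus",
    strain="C57BL/6J",          # example strain
    sex="F",
    age="P90D",                 # ISO 8601 duration: postnatal day 90
)

# Session-level metadata plus a container for the data themselves.
nwbfile = NWBFile(
    session_description="example recording session (illustrative only)",
    identifier="hypothetical-session-0001",
    session_start_time=datetime.now(tzlocal()),
    lab="Example Lab",
    institution="Example University",
    subject=subject,
)

# Sorted spike times for one unit, with a custom quality annotation.
nwbfile.add_unit_column(name="quality", description="spike-sorting quality label")
nwbfile.add_unit(spike_times=[0.01, 0.23, 0.45], quality="good")

# Write everything, data and metadata, to a single shareable file.
with NWBHDF5IO("example_session.nwb", mode="w") as io:
    io.write(nwbfile)
```

The specific library calls matter less than the principle: strain, sex, age, and session details travel in the same file as the spikes, so a downstream user never has to reconstruct them from a laboratory notebook.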
A functional example of this infrastructure is the launch of the Brain Cell Data Center (BCDC), which is tasked with establishing a web-accessible information system to capture, store, analyze, curate, and display all data and metadata on brain cell types and their connectivity from the BICCN. The NOFOs identify distinct experimental areas as “sub-domains,” which are defined by applicants, with a suggestion that appropriate sub-domains might comprise research funded by distinct BRAIN Initiative NOFOs. This might include, for example, data from non-invasive neuromodulation experiments, from human MRI experiments, or from invasive devices for recording and modulation. This approach is based on differing characteristics of the types of research supported by The BRAIN Initiative® – although NIH expects the program to evolve as understanding of how to link different modalities matures. Finally, in January 2019 NIH released a Data-sharing Notice that will require BRAIN Initiative researchers to submit their data to BRAIN data archives, develop a resource-sharing plan, and include in grant applications the costs for data preparation and archiving.
However, as noted above, creating an environment conducive to widespread data sharing is a challenge. Common arguments against wide availability/sharing of data and code include the perception that it diverts time, effort, and resources; the lack of apparent immediate utility; and a disincentive to conducting hard/risky experiments. Because cultural issues are central to data sharing, putting into place appropriate reward, review, and expectation systems is vital to ensure that data are findable, accessible, interoperable, and reusable (FAIR). Practices must both incentivize and reward researchers who comply with these rules and expectations, perhaps treating datasets as “products” that are valued outcomes of research in the same way that papers are considered and valued. Still, open questions must be resolved to move toward a more open-science environment for The BRAIN Initiative®. For example, at what processing stage will data be stored? What metadata will be included? Will all laboratories use the same data standard? At what point will data be released to the public? Will code be released to the public? Who will fund the data storage? Practically speaking, a straightforward first step in implementing the NIH BRAIN Initiative data-sharing policy is for NIH BRAIN Initiative-funded grantees and their collaborators to notify team members at the outset of BRAIN Initiative-funded studies that their data will eventually be made public. Adhering to relevant standards as data emerge, and curating data carefully from the earliest stages of a project, eliminates time-consuming and costly data organization later.
Core principles for data management, sharing, and standards
To facilitate adoption and enforcement of the NIH BRAIN Initiative data-sharing policy, we suggest the following core principles for BRAIN 2.0:
1.    Data from NIH BRAIN Initiative-funded projects must be shared publicly upon first publication in a peer-reviewed journal. NIH BRAIN Initiative-funded scientists must communicate this principle to all team members, including those (often trainees) who collect the data.
Rigorous science requires that data be shared so that findings can be replicated, confirmed, and expanded toward further discovery, yet there may be circumstances that preclude sharing of human data. For example, a research participant’s identity can be compromised by combining that individual’s composite datasets in ways that were neither envisioned nor specified in the informed-consent process. Determining when an exception applies is not straightforward, but exceptions to the norm of data sharing should be rare and carefully considered.
2.    Data should be stored in standardized formats. Within the arena of systems neuroscience, considerable resources are available to facilitate use of data standards, such as Neurodata Without Borders (NWB). Given a standard, people can begin to write libraries for data query and analysis using that standard. Without the standard, high-level analyses can be generated, but will always require substantial time and effort to organize the data into the right format to feed into those routines. However, even researchers who are enthusiastic about this resource struggle to understand how to fit their data into the NWB standard. Most laboratories have a strong preference for putting data into the NWB format only when they are ready to share it. BRAIN 2.0 should provide resources to support this activity, for instance, additional funds to cover the cost of a professional software engineer. The Allen Institute leads in this arena and has adopted a workflow that academic laboratories may also favor: they use an internal data format that is ideally suited to their analyses and then transform data into the NWB format when ready to share it. However, sufficient resources are not currently available for the entire BRAIN scientific community, especially because professional software engineers are expensive. Moreover, some subfields such as imaging already employ their own data standards (e.g., Brain Imaging Data Structure, BIDS). Ideally, data should be as interoperable as possible. Researchers must be strongly encouraged to place data in a standardized format, and failure to do so must have some consequences.
3.    NIH BRAIN Initiative data should be stored on an NIH-maintained central server. We suggest that all teams funded by the NIH BRAIN Initiative must place data on this server, at least for sharing with the broader neuroscience community. For most projects, these data will have already undergone considerable pre-processing (such as spike sorting for electrophysiology or, for imaging data, de-noising and segmentation). Although sharing raw data is ideal in some ways, modern datasets are often too large for this to be financially practical. Pre-processed data might be suitable for many purposes.
4.    Assign credit to those who collect the data. Considerable time and effort are required to generate high-quality datasets in systems neuroscience – even more so for datasets collected from NHPs. A natural solution to this problem is to generate for each shared dataset a citable identifier, such as a research resource identifier (RRID) or digital-object identifier (DOI). Further, publicly available datasets should be included routinely on publication records and as criteria for hiring and promotion decisions.
5.    Metadata must be stored systematically. BRAIN 2.0 should convene scientific community input on metadata parameters. For laboratory animals, these may include strain, sex, age, light-dark cycle, and number of cage mates.
6.    Enable storage of raw data wherever possible, recognizing that this will rapidly become untenable as data volumes grow. Key to the feasibility of this principle will be the development of a strategy to estimate the useful half-life of data of different types and at different levels of extraction.
7.    Data standards should include guidance for ethically acceptable collection, use, storage, and access to data.
II. Human Capital
Modern neuroscientific discovery thrives on close interactions among researchers from a broad range of fields and backgrounds. We suggest that BRAIN 2.0 should continue efforts to diversify the talent pool to include increased representation from quantitative scientists, theoreticians, clinicians, and researchers from a range of experiences and backgrounds that offer important perspectives to research aiming to understand the basis of our own thoughts and behaviors. Such diversity is essential for generating the knowledge needed to understand and manage disorders that affect millions of people. Moreover, both individual-laboratory science and team-science approaches are necessary in biomedicine. Various models exist for team science, and due to the extraordinary complexity of the human brain, large-scale collaborative approaches are necessary. Attention must be paid to incentives and rewards for individuals to participate in a larger effort than has been customary for most of the history of biomedical research. Some areas of neuroscience are likely to be inherently more ready for team science, but overall, there is a need to be flexible and dynamic – as technology and knowledge emerge and questions shift. BRAIN 2.0 can play a formative role in enabling multiple models of team science that have the flexibility to change according to progress – and that are not excessively managed in a top-down manner.
NIH funding to date: Human Capital
A high-priority area for BRAIN 1.0 included the goal, “Attract new investigators to neuroscience from the quantitative disciplines (physics, statistics, computer sciences, mathematics, and engineering), and training graduate students and postdoctoral students in quantitative neuroscience.” To achieve this goal, BRAIN 1.0 implemented various career-development/career-enhancement programs, employing the K18, R25, and F32 mechanisms. At the BRAIN 2025 halfway mark, these specific training mechanisms have yet to be broadly adopted. For example, The BRAIN Initiative® is currently funding only about a dozen F32 postdoctoral awards each year, and a handful of others (K18s and R25 short courses). To encourage diversity in NIH BRAIN Initiative-funded research, similar to general NIH diversity funding programs, the NIH BRAIN Initiative also employs two diversity training mechanisms. These include BRAIN diversity supplements – roughly half of which go to graduate students, postdoctoral fellows, and early-career independent investigators – and a new K99/R00 BRAIN Initiative Advanced Postdoctoral Career Transition Award to Promote Diversity aimed at enhancing diversity in the neuroscience workforce and maintaining a strong cohort of new and talented, NIH-supported independent investigators from diverse backgrounds (including women).
BRAIN 1.0 progress to date
As detailed in the congressionally commissioned 2018 National Academies of Sciences, Engineering, and Medicine report, “The Next Generation of Biomedical and Behavioral Sciences Researchers: Breaking Through,” the majority of NIH-supported graduate and post-doctoral trainees are funded through research project grants awarded to their principal investigator mentors. Since these individuals are not tracked, we are unable to accurately measure the composition of the NIH BRAIN Initiative’s training investment, but we assume the existence of many uncounted NIH BRAIN Initiative-supported trainees. A fuller understanding of this training investment requires that BRAIN 2.0 monitor the number of trainees supported by NIH-funded research project grants (possibly through progress reports or other means). Furthermore, BRAIN 2.0 should track the number of new (no previous NIH funding) investigators from quantitative disciplines receiving NIH BRAIN Initiative-funded research project grants either as principal investigators or as key senior personnel (i.e., as co-investigators).
WG 2.0 findings
BRAIN 2025 outlined a vision for training that promoted recruitment of scientists from quantitative disciplines and encouraged quantitative training for graduate students and postdoctoral students. We suggest continued implementation of this strategy. Nevertheless, to achieve the aims and transformative projects proposed for BRAIN 2.0, we will also need a new workforce of investigators trained to bridge the gaps between academia and industry. Three findings below support this vision.
1.    Attract quantitative expertise to neuroscience. There remains a pressing need to rapidly expand recruitment of quantitative scientists and their trainees into neuroscience. BRAIN 2.0-dedicated funding should be used to attract new talent from quantitative disciplines into neuroscience. Doing so could bolster efforts with the same aim that are currently funded by relevant NIH Institutes and Centers (not via The BRAIN Initiative® funding stream) – in particular for scientists never before supported by NIH research project grants.
2.    Enhance diversity in The BRAIN Initiative®-funded workforce. BRAIN 2.0 should continue to recognize that enhancing diversity of the research workforce is a scientific imperative. It should continue to recruit and support students, postdocs, and investigators from diverse backgrounds in NIH BRAIN Initiative-funded projects. These include individuals from groups underrepresented in health-related research.
3.    As highlighted in Priority Area 4. Demonstrating Causality, more clinical and translational expertise is needed to achieve the bold outcomes envisioned for BRAIN 2.0. NIH has recently created and expanded support mechanisms to address the challenges faced by physician-scientists outlined in the Physician-Scientist Workforce Working Group Report of 2014. We suggest a portfolio of BRAIN 2.0 strategies to address this gap.
Examples include:
o    Offering and expanding new research opportunities at specific training periods, with priority given to residents across neuroscience-related disciplines including neurosurgery, neuroradiology, neurology, psychiatry, ophthalmology, and others
o    Integrating training across diverse residency specialties in topics relevant for future human translation of BRAIN Initiative-funded discoveries
o    Recruiting individual residents with prior research training relevant to BRAIN into the workforce (One example is the NIMH administrative-supplement mechanism, “Enable Continuity of Research Experiences of MD/PhDs during Clinical Training”)
o    Recruiting and retaining outstanding postdoc-level health professionals who have demonstrated potential and interest in pursuing careers as clinician-investigators
Achieving our vision for BRAIN 2.0 will necessitate realignment of the relationship between the private sector, government, and academia to enable discovery science that finds application in real-world settings (research or clinical). As highlighted in the “Breaking Through” report, nearly 80 percent of postdoctoral trainees in the life sciences pursue careers outside of the academic enterprise. We believe that this pool of trainees reflects a major opportunity for The BRAIN Initiative® to strategically build the workforce needed to fulfill the full promise of BRAIN 2.0.
1.    The NIH BRAIN Initiative should consider approaches to support training across industrial and academic sectors. This might involve support for training in a BRAIN Initiative-funded academic laboratory coupled with time in an industrial setting and/or a start-up company. Scientists should be trained to understand and address the barriers to translation that exist across these sectors. Trainees could focus their efforts on issues related to tool development, data management, facilitating data sharing/standardization, the maintenance of technology developed through BRAIN, or other areas. Similar programs have been implemented by NSF for placing postdoctoral trainees in startup environments.
2.    The NIH BRAIN Initiative should support the transition of advanced postdocs to independence within the commercial startup space. For example, this approach could combine advanced postdoctoral training in an academic/industrial setting with committed industrial support based on clearly specified success metrics. Trainees could focus their research program on overcoming issues related to tool development/implementation, data management, facilitating data sharing/standardization, providing broad training for tool usage across academic laboratories, maintaining technology developed through BRAIN, or other topics.
3.    NIH and other BRAIN Initiative partners should consider adding neuroethics training opportunities within existing responsible conduct of research (RCR) training requirements for neuroscientists and other BRAIN Initiative-funded researchers. This might also include education and awareness training about possible dual-use implications of brain research. A further step to build neuroethics knowledge and awareness is to establish training grants, career-development awards, and other funding strategies that explore more formalized forms of neuroethics training.
III.
SHARING AND USING BRAIN INITIATIVE TECHNOLOGY The promise of The BRAIN Initiative® rests with understanding how neural circuits produce behavior – as well as how dysfunctional circuits contribute to, or possibly cause, diseases. Realizing this promise hinges upon tools to monitor and control circuits; detailed, multidimensional brain maps of circuits; and use of those tools and maps to connect circuit function with perception, emotion, cognition, and action.  Many neurotechnology innovators express frustration with the complexities and demands of disseminating and translating technologies they develop; often, they are scientists who do not have the experience or expertise to function beyond invention. Such activities are also often incompatible with an investigator’s academic position or institutional infrastructure/resources. As with many other types of biomedical discoveries that fail to reach the market, the gap between invention and commercialization can prevent BRAIN Initiative-funded innovations from reaching research or clinical application. To unlock the impact of BRAIN investments in technology development, we believe that strategic investments in BRAIN 2.0 will facilitate rapid, efficient, and effective collaborative dissemination of techniques from innovators to end users. NIH funding to date: Technology dissemination The first half of the NIH BRAIN Initiative emphasized development and optimization of new technologies, but as the techniques and resources mature, dissemination to the research community will be critical to The BRAIN Initiative®’s success. NIH has taken some initial steps in this direction, including issuing a NOFO for individual laboratories interested in incorporating new technologies (as well as career-enhancement awards for learning new techniques), a recent NOFO in fiscal year 2018 for research-resource grants for technology integration and dissemination, and a set of small business NOFOs for technology commercialization.   BRAIN-Initiative technologies: Unique challenges The BRAIN Initiative® creates new challenges for technology use and dissemination compared to most NIH-funded research. Beyond the initial spark of invention, many neuroscience-related technologies demand additional focus on deployment, use, and long-term support (life-cycle management). Successful tool deployment for The BRAIN Initiative® requires development and funding of processes to promote the use of tools, including development of new skill sets within the neuroscience community. Although some recently developed tools are being commercially developed and distributed, development timelines for sophisticated methods frequently fail to meet the needs of neuroscientists eager to adopt the latest techniques. Many of the most powerful new techniques exceed the technological abilities of most neuroscience laboratories, making it difficult for scientists to use prototype forms. Currently, research laboratories that develop and employ state-of-the-art tools end up being “taxed” by donating their time to train researchers less familiar with the techniques and assist with building, supporting, and troubleshooting prototype systems for others. One challenge is the broad spectrum of technologies emanating from The BRAIN Initiative® investment – including software, viruses and animal models, microscopes, electrodes, and human-use technologies such as implants and MRI-based innovations. 
No single approach serves the needs of both developers and users (including research participants and patients) of these technologies, given the highly variable range of costs, market size, and urgency for use. Several NIH BRAIN Initiative-funded projects illustrate successes and challenges noted above. To clarify the distinct paths related to BRAIN 2.0, we discuss these examples and related issues for both human-use technologies and laboratory-use technologies. Human-use technologies Various constraints limit the deployment of integrated technologies for use in humans. Translating research probes from use in rodents to use in humans is currently insurmountable for a typical neural-engineering laboratory with average resources. Establishing essential collaborations with clinicians remains challenging based upon perceived risks associated with new technologies. Such constraints point to the need for research investments in less-invasive sensing and operational technologies – and highlight an essential role of PPPs to move neurotechnologies into clinical evaluation in humans. BRAIN 1.0 helped establish a foundation for translational neuroscience to flourish by supporting development and deployment of human-use technologies. Early on, BRAIN 1.0 developed a series of PPP templates to facilitate confidentiality and partnership agreements for NIH-funded studies. Development of these templates saved hundreds of days of negotiations for each grant, making it feasible for companies to support BRAIN Initiative technologies. The agreements reflected compromise on intellectual-property rights, assigning a balanced share of value to PPP participants. The PPPs provide key access to human networks through proven technology – a difficult process for an academic laboratory to navigate, given regulatory requirements. Private-sector partners during BRAIN 1.0 included modest-sized companies such as Blackrock Microsystems and NeuroPace, Inc., as well as large entities such as Medtronic and Boston Scientific. The PPP structure had broad benefit: giving researchers access to next-generation tools years ahead of their formal release, while companies received early feedback on prototype versions and a window into the most promising clinical areas for further exploration. Research is underway across a variety of disease areas, ranging from Parkinson’s to epilepsy to mood disorders to dementia. This varied investment thus reflects a balanced portfolio of iterative research to advance current commercial interests, while exploring high-risk, high-reward concepts of interest to NIH and its stakeholders. For ethical reasons, these technologies are all being tested in individuals with an underlying condition that adequately justifies procedure-associated risk. BRAIN 1.0 can claim several success stories related to use of neurotechnologies in humans. For example, closed-loop brain stimulation that measures physiological signals and adjusts stimulation accordingly is being used in the context of movement disorders and epilepsy. An apt example of translational research, these systems apply newly discovered neuroscience principles to improve patient care. A practical point to make about this work is that the technology used for the systems was already largely in place before The BRAIN Initiative®, in particular chronic implantable bidirectional interfaces. Human-use technologies, especially implantable ones, can take many years to develop. Indeed, many of the programs highlighted during BRAIN 1.0 used existing technology. 
However, it is worth noting that The BRAIN Initiative® facilitated better use of these technologies, and new technologies are currently being used with patients as a result of this investment of time and resources. Laboratory-use technologies Compared to neurotechnologies used in humans, development and use of laboratory-use BRAIN-Initiative technologies experience very different challenges, facing none of the hurdles associated with testing therapies in people. But the unrestrained and rapid proliferation of new laboratory-based methods can have both unexpected and unintended consequences. For example, innovation and utility are not necessarily connected, and academic scientists are typically rewarded mainly for innovation. Broad utility of tools among end users requires thoughtful and resource-intensive steps involving product development, manufacturing, standardization, and documentation – while also looking ahead toward end-user training, support, and product/program sustainability. Other factors such as intellectual-property concerns affect the feasibility of various business models, which can range from an open-source “build-it-yourself” strategy to a commercialized platform. One good example of this approach is the commercialization of the Neuropixels probe (see text box). Neuropixels Neuropixels is a neural-recording technology that can monitor hundreds of neurons simultaneously throughout an individual animal brain. When it debuted, the device offered a leap ahead compared to existing recording systems. Funded outside of The BRAIN Initiative®, Neuropixels development hinged on substantial, sustained funding ($10M or more over 5 or more years), a partnership with an industrial contributor with manufacturing and operational knowledge (IMEC in Belgium), and a business model for sustainability. In this case, private entities (the Howard Hughes Medical Institute, the Allen Brain Institute, the Gatsby Charitable Foundation and the Wellcome Trust) subsidized efforts to rapidly share Neuropixels with scientists, including providing substantial infrastructure and personnel support for well-controlled device manufacturing. Currently, Neuropixels supplies devices to researchers at “cost-plus,” incurring a modest mark-up to cover these expenses. Neuropixels supports semi-annual formal training meetings to promote end-user support within the scientific community. This approach has been intentionally gradual, starting with a few large laboratories and scaling up carefully, to ensure that technology dissemination occurs at a scalable rate. The case of Neuropixels, and other technologies such as Medtronic’s Brain RadioTM and Blackrock Microsystems’ neurophysiological systems, highlights a necessary departure from standard business models for dissemination of laboratory-use neuroscience tools. It also highlights the necessity of end-user training and empowerment to enable wide adoption without prohibitive cost. As a result, however, the project is revenue-neutral. Beyond the laboratory The implications of BRAIN Initiative-funded research stretch beyond traditional medical and research contexts. Many fields of study outside the natural sciences are now directly engaging with neuroscience as reflected by the emergence of several interdisciplinary “neuro-and-” fields. These include neuroanthropology, neuroeconomics, neurosociology, educational neuroscience, neurolaw, neurohistory, neuroscience and literary criticism, and even neuropolitics. 
Insights from The BRAIN Initiative® will extend beyond its community and its mandate. Ethical stewardship of studies exploring “normal” and “abnormal” brains – particularly as they relate to mental health and to implications for enhancement – will require consideration of privacy, best uses, and possible restricted uses beyond the biomedical setting. These questions of uses “beyond the bench” are best explored as multi-stakeholder projects. In addition, ethical best practices rely upon mechanisms and infrastructure to support scientists’ ability to follow them. 

BRAIN Initiative neurotechnology: Where are we now? 
Many practical obstacles still prevent widespread dissemination of neuroscience technologies. We suggest that BRAIN 2.0 should address these issues directly, with frank community input and carefully conceived strategies to ensure that BRAIN Initiative investments are fully leveraged to generate breakthrough insights into brain function. Novel ways to support technology development and dissemination are urgently needed, and these efforts need to operate beyond conventional commercialization timelines. BRAIN 2.0 should establish firmer requirements for technology developers to work iteratively with end users, to ensure relevance. Establishing support infrastructure to aid collaborations between experimentalists and data scientists (or other experimentalists) will help remove perceived barriers. Many of the roadblocks are ingrained within neuroscience culture, but they are worsened by our current hypercompetitive biomedical research environment as well as the financial and legal aspects of commercialization. Addressing these issues more directly through improved interdisciplinary training, collaboration incentives, and community education could remove many of the tensions and barriers that are restraining our progress toward understanding the brain. Technology dissemination is a relatively new area for NIH, in which The BRAIN Initiative® is taking on the role of a technology incubator for new ideas, hoping to see them propagate into the scientific marketplace. Uptake of NIH BRAIN Initiative-funded imaging tools by the research community has been slowed by impediments such as incompatibility with two existing commercial models (private-sector collaborations vs. small-business grants). We consider these below. 

NIH Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs 
Although NIH funded many SBIR grants during BRAIN 1.0, several of them went to existing, established companies. As such, it is unclear whether projects conducted via these grants supported new innovations attributable to The BRAIN Initiative®. BRAIN 2.0 could leverage its initial investment through a matchmaking role to foster additional collaborations between academic scientists and existing companies comfortable within the SBIR-funding ecosystem. Academic researchers could also form a new company to qualify for SBIR funding. However, this approach carries substantial risk. Investigators must have a sound business plan to handle (and resource) intellectual-property matters as well as establish a corporate and development team to map research, development, and eventual profit. Most academic scientists are unfamiliar with these processes. Other neurotechnology techniques and tools are likely to support niche user bases. 
These relatively low-profit or small-market products call for very different dissemination approaches compared to paths generally supported by university technology-transfer offices, SBIRs, and venture-capital funding. Even open-source dissemination, which is popular in principle, is still costly in practice. BRAIN 2.0 might address these challenges through innovative strategies – perhaps non-profit models that subsidize technology development via established industrial partners with the expertise necessary to rapidly develop products beyond the abilities and resources of academic inventors. In summary, SBIR/STTR funding models alone are insufficient in many cases to permit academic inventors to successfully commercialize their technologies. 

Technology sharing and training grants 
During BRAIN 1.0, NIH issued one round of U24 dissemination grants applicable to a broad range of BRAIN Initiative-funded neurotechnologies, but these grants did not target The BRAIN Initiative® community directly. Potential steps to increase awareness of these opportunities include issuing better guidance on topical areas; use of a continuous cycle of opportunities instead of a perceived “one-off” grant; focusing on proper business models; and requesting end-user projections and sustainability plans. Supplemental funding could assist tool-developing laboratories in supporting technical staff fully dedicated to training and sharing. 

Annual BRAIN Investigator meeting 
Each year, the NIH BRAIN Initiative hosts an investigator’s meeting to convene the community, foster knowledge exchange, raise awareness about neurotechnology developments, and highlight successes. This meeting presents a compelling opportunity to address issues related to training and dissemination. BRAIN 2.0 might consider inviting leaders from a spectrum of companies to help raise awareness about early-stage opportunities. Other uses of this convening might include matchmaking sessions for technology makers and users, and boot camps on innovation, business principles, and neuroethics. To encourage diverse participation, meeting planners should personally invite potential industry partners. 

Gaps and opportunities: Next steps for BRAIN 2.0 
A recurring issue in any type of technology development is mission misalignment. Academic scientists are not trained in bridging the gap from invention to market, and they are rewarded for tool innovation – not tool utility. For example, tenure decisions rarely consider tool deployment explicitly. Can we raise the profile of successful tool builders in the community? At a minimum, we need to provide the tools for entrepreneurial scientists to be successful in taking their ideas from the bench and propagating them more broadly. At the same time, great scientists may not make great businesspeople. Start-up companies often face “founder’s syndrome,” in which initial vision collides with leadership and developmental ability when a company expands. What structures could allow for a smooth hand-off from vision to leadership and expansion? Alternatively, what tools and resources will enable inventors to learn and grow, toward supporting their innovation successfully? 

Capital flows and time-value of money 
Supporting tool development requires substantial resources. In the case of Neuropixels, making 1,000 units costs more than $2 million, well out of the reach of most academic laboratories.
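To make the scale of this constraint concrete, the following is a minimal, hypothetical sketch (in Python) of how discounting erodes the present value of a payoff that arrives only after a long translation timeline. The 15 percent rate echoes the figure cited in the next paragraph; the dollar amounts and timelines are illustrative assumptions, not BRAIN Initiative data.

# Hypothetical sketch: how a 15 percent annual discount rate erodes the present
# value of a neurotechnology payoff that arrives only after a long translation
# timeline. All figures are illustrative, not BRAIN Initiative data.

def present_value(future_cash_flow: float, discount_rate: float, years: float) -> float:
    """Discount a single future cash flow back to today's dollars."""
    return future_cash_flow / (1.0 + discount_rate) ** years

if __name__ == "__main__":
    payoff = 10_000_000          # assumed payoff from a commercialized tool ($)
    rate = 0.15                  # yearly discount rate cited as common in the text
    for years in (2, 5, 10):     # short vs. protracted translation timelines
        pv = present_value(payoff, rate, years)
        print(f"{years:>2} years to market -> present value ${pv:,.0f}")
    # At 15 percent, a payoff 10 years out is worth roughly a quarter of its
    # nominal value today, which is why long "valley of death" timelines deter
    # purely commercial investment.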
Modeling cash flows from an operational perspective is important to define a realistic level of available support given the funding environment. Time is another challenge: biomedical research tolerates long latencies for application, but business is not so forgiving. The time-value of money – the concept that current money is worth more than the identical sum in the future due to its potential earning capacity – works against neurotechnology translation compared to other investments (yearly discount rates of 15 percent or more are common). Such realities motivate the need for investment capital from non-corporate sources during the protracted translation timeframe (often called the “valley of death” for this reason). Neuropixels is an excellent example of successful investment from non-corporate sources. Defining a business model Many types of business models can support neurotechnology development and scale-up. These include open-sourcing and nonprofits (like Neuropixels); small companies focused on research (like Blackrock Microsystems); and large entities (like Boston Scientific and Medtronic) aiming to fill a long-term funnel. Cash-flow management and basic management theory are common threads running through each of these models. What will success look like?  The BRAIN Initiative® is not a typical NIH program – both by virtue of the topic of study (our own intellectual, cognitive, and emotional state); its broad appeal to many fields of inquiry; its cross-application to academia, industry, societal institutions, and international collaboration; and its promise for ending human suffering from disorders of the brain. Given these numerous characteristics, what sort of return on investment should we expect? Innovative companies like Google and medical-technology leaders like Medtronic keep target percentages for monitoring idea-to-product progress. These thresholds are meant to balance innovation and risk. Should the BRAIN Initiative strive for around 15 percent, a typical threshold used in high-risk, high-reward industries? Some approximate target would help frame strategic discussions to ensure the publicly funded BRAIN Initiative remains relevant and successful. Setting expectations raises the point of dual purpose. We believe The BRAIN Initiative® should stimulate the U.S. economy; thus, new companies and industrial growth are important. However, a more concrete goal would be to simply ensure that scientists conducting neuroscience research have access to the latest technologies. This conveys a multifaceted and very difficult challenge in which innovators must develop useful, reliable tools that become rapidly available to others. Challenges for innovators are matched by valid concerns about introducing risk for research participants (in particular NHPs and humans). One possibility is for NIH to adopt an investment mentality akin to that of the Wellcome Trust, which precludes “paying twice” for innovations: first to build them, then to access them for research. Suggested 5-year technology goals Human-use technologies 1.    Translation council. NIH receives guidance from senior thought leaders on its research program via scientific councils; a similar group that advises on the portfolio of NIH BRAIN Initiative technologies would help maximize dissemination of the most promising ideas. The team would consist of scientists and technologists with industry experience (not-for-profit and for-profit) and proven capability for translating a new idea to a broader environment. 
This group could help define metrics of success (as articulated above) and ensure that projects have sufficient infrastructure and support to succeed. Note that members of this team might also serve as mentors to new entrepreneurs, in a model similar to a venture-capital firm or incubator. 

2.    Training boot camp for entrepreneurial academic scientists. The NSF hosts i-Corps, which trains academic researchers about business processes and helps to refine their business plans. We propose that an i-Corps short course be included at the next annual BRAIN Investigator’s meeting to gauge interest from the community in such a resource (possibly also for laboratory-use technologies). 

3.    Continue to improve the capability of resources to serve biomedical research, with an emphasis on reducing foundational frictions in technology deployment. Establishing a regular funding cycle for technology deployment is important, but other leveraging aspects are needed to facilitate translation. One example is a researcher-focused quality-management system to help develop technology intended for human use. In the short term, partnering with FDA to create a database that streamlines this process for researchers and includes exemplars of successful translation to human use (through an investigational device exemption, IDE) would likely facilitate successful dissemination. 

4.    Contribute to improving the capability of resources for biomedical research by also requiring usage projections and a sustainable financial model. Also required should be basic marketing considerations, like a strengths/weaknesses/opportunities/threats (SWOT) analysis of the impact of a resource in a particular geographic location or scientific space. One key challenge is culture: overall, the NIH model is very scientist-driven, with little opportunity for the agency to shape what is proposed or to ensure and/or enforce team cohesion. 

5.    Balance the neurotechnology pipeline. The BRAIN Initiative® PPP program continually faces challenges with balancing industry’s near-term focus with the future-looking aims of NIH-funded research. Human trials are expensive (more than $100,000/protocol/year), and companies that participate assume liability (even if implicitly) in these studies. Providing some financial compensation to help offset these costs would attract more industry participation, since the probability of a short-term win for translation is low, and the time-value-of-money and opportunity cost can make the NIH BRAIN Initiative look unattractive. BRAIN 2.0 might consider an analysis to determine why the pharmaceutical industry or other private sectors have not participated extensively. We are unaware of a good model to incentivize a company to license a BRAIN Initiative-funded technology – a company will do so only if it deems the technology financially viable, assuming no NIH support. Ironically, BRAIN U01 grants for technology development have budgets that far exceed many corporate development budgets. Addressing the imbalance between resources for development and those for commercial translation could deliver more, better-produced products to neuroscientists more rapidly. 

6.    Expand investment beyond invasive devices. The NIH BRAIN Initiative PPP should emphasize neurotechnologies beyond bi-directional medical implants. Other important technologies for investment include imaging, surgical procedures, and molecular approaches. 
In particular, we advocate for an explicit grant sequence that leverages short-term clinical procedures to maximize potential learning from the daily procedures that give unique access to the human nervous system. Minimally invasive systems might be particularly translatable for The BRAIN Initiative®, especially in the short term. For example, only about 200,000 DBS implants have been sold in the past 20 years, compared to about 6 million Apple Watches sold in one quarter alone. 

7.    More critical assessment of teams developing human-use tools. We believe a better credentialing process is needed for teams developing human-use tools, one that requires diligence in good-design practices, quality-management systems, and basic program management. Many of these skills are not available to a typical academic laboratory, and scientists struggle to deploy systems at scale without appropriate resources. During BRAIN 1.0, many tens of millions of dollars were spent on development of new advanced medical devices led by teams of academics with little to no experience in product development, translation, or management. NIH (and the government more generally) should consider engaging industry collaborators to improve the probability of a positive return on investment. Industry collaborators can provide translational know-how that complements academic research. 

8.    Clearinghouses for BRAIN tools. BRAIN 2.0 could identify entities to support use and sharing of tools to ensure they are widely available to the community. The translation council (described above) could establish a structure for such a system. Similar concepts might also apply to manufacturers or suppliers: BRAIN 2.0 could establish a preferred-vendor list aligned with the objectives of The BRAIN Initiative®. 

9.    Increased, transparent links to work among federal agencies. It appears that many federal agencies are working in parallel to invest in tools for BRAIN Initiative applications. While coordination plans are in place, they could be optimized, and BRAIN 2.0 could ensure that these plans are transparent to the public. This dynamic also exists in academia, where scientists have incentives both to collaborate and to compete. 

10.    Advertise success stories for both industry and academia. For many industry participants in the PPP, the only short-term reward is a “halo effect” arising from discoveries that support the public good. NIH press announcements often do not acknowledge industry partners, which can frustrate cooperation between partners. Similarly, what can NIH do to raise the profile of successful tool builders so that early-stage academic scientists will be recognized within their departments and beyond? 

11.    Develop expanded neuroethical considerations. Several opportunities exist, such as: 
o    Develop guidelines to limit inappropriate use of human brain data (since there is potential for unanticipated insights into a research participant’s identity), including what data can be shared and who has access to the data and for what purpose. 
o    Dedicate more critical review to funding proposals for devices that might remain implanted in a human subject for long periods. NIH might consider allocating funds to help support long-term care for such individuals. 
o    Encourage and provide incentives to include neuroethics components in research grants as well as in training and technology development. 
o    Expand dialogue between the neuroscience research community and social institutions such as law, education, business, marketing, and public policy, that are increasingly relying on neuroscience evidence to craft social and legal policy. o    As neuroscience allows for data acquisition beyond laboratory and clinical settings, ensure that mobile neuroimaging is conducted with proper attention to eliciting informed consent, returning incidental findings, and respecting cultural differences within diverse populations of human research participants. o    Ensure that the results obtained by machine learning systems and data-analysis algorithms are not biased by a lack of diversity in the data used to train the algorithms. o    Ensure equitable use of advances in neurotechnology across populations. o    Establish a neuroethics network, consisting of people to consider issues on an ongoing basis for a range of stakeholders (neuroscience researchers and trainees, IRBs, health-care providers, and the non-scientific public). Researchers and policy makers could consult this network for help with projects and other issues that arise. Laboratory-use technologies Technologies in this category are diverse – ranging from reagents, fluorescent dyes, and viruses, to microscopes, electrodes, and recording and actuation devices. These tools can also include algorithms and analysis methods. 1.    Establish a roadmap of viable strategies for dissemination of technologies based on their likely market. These might include:  o    Open-source sharing (simple technologies, relatively small market, low-cost components or easily accessible production/replication). These could be either facilitated by online resources, or simply replicated from published work. o    Subsidized “build-it-yourself” dissemination via collaboration (more complex technologies requiring only modest productization with a small market and modestly priced components). This requires personnel and infrastructure at the originating laboratory, and training and support for end-user laboratories. o    Direct-sale dissemination via universities or start-ups (more complex technologies requiring professional manufacture, small market and modestly priced components). This will introduce intellectual-property and conflict-of-interest considerations. o    Commercial development by an independent private company, accessing SBIR funding (simple-to-complex technologies with a market sizable enough for financial sustainability). o    Licensing and commercial development by a public or private company (technologies with a sufficient market to offset more significant development and post-sale support). This approach can lead to long lead times before technologies can become available. o    Dissemination by a service-model “technology hub” to enable efficient sharing among diverse laboratories (rare or expensive equipment such as state-of-the-art MRI systems that could be supported by fee-for use or other models). 2.    Analyze resources needed by laboratories to identify the most suitable dissemination model. Laboratories would benefit from knowing the most relevant shared services for their needs such as intellectual-property consultation, market-research assistance, or matchmaking to suitable corporate entities. 
Collating and sharing expertise from successful tool disseminators could propagate a network of peer mentoring (this same network could provide support for enhancing recognition of technology dissemination as an academic achievement for early-career faculty). 

3.    Consider establishing dedicated training programs for scientists who wish to disseminate their technologies, focusing on the unique considerations of technologies for laboratory use. 

4.    Consider strengthening a culture of close collaboration between innovators and end users in the context of the most relevant market. Collaborative, iterative refinement of technologies ensures impact and accessibility. Tool disseminators should be accountable for utility. 

5.    Consider mechanisms to subsidize important, but financially unattractive, technologies to enable dissemination. Supplementing production costs to make such technologies viable might be the most cost-effective option for ensuring a return on the NIH BRAIN Initiative funding investment. 

6.    Develop an alternative approach to offer rapid funding for new-tool adoption to investigators, especially collaborations between innovators and new end-user laboratories. Despite the perceived failure of the NIH BRAIN Initiative RFA “Technology Sharing and Propagation (R03)”, a revised funding scheme that supports constructing a system, as well as the labor or training needed to bring sufficient new expertise into the laboratory to remove barriers to adoption (e.g., new analysis techniques, new labeling strategies), would be valuable. 

7.    Fund training courses. Several “neurotechnology methods” courses already exist (e.g., those at Woods Hole Marine Biological Laboratory and Cold Spring Harbor Laboratory), but iterations could more directly address a shift to building and using non-commercial techniques. Since continued training of new users may extend beyond the development phase of a technology (or may move from the innovator laboratory to the laboratory of a super-user), such training grants could be separated from innovation grants and thus fill the gap between expertise and personnel. 

8.    Support dissemination of NIH BRAIN Initiative technologies to researchers addressing disease states. Many tools now developed for laboratory-based neuroscience research have been driven by applications to understanding the healthy brain. However, such techniques have extraordinary, untapped potential for helping us understand disease. Combining novel technology applications with disease studies could yield new mechanistic insights into both normal and abnormal brain function (and effects on behavior). These platforms could also be used to screen for and evaluate candidate therapies. It should be noted that the challenges of expanding BRAIN Initiative technologies to a wider audience of scientists exploring pathology may be significant. 

IV. PUBLIC ENGAGEMENT 
Several components are vital to the publicly funded BRAIN Initiative’s success. These include diverse human capital, robust scientific integrity, public accountability, and social responsibility. Directly engaging the broad American public around biomedical research practice can amplify its goals substantially. Potential benefits from integrating larger segments of America into the framework for neuroscientific inquiry are enormous. Inviting and welcoming varied perspectives ensures that the science supported by The BRAIN Initiative® reflects the population it serves – and that it operates within a full spectrum of cultural norms. 
These are core neuroethical principles. Bringing BRAIN 2.0 to a larger audience can generate broader understanding of, and enthusiasm for, neuroscience: how it is conducted and how its results can benefit society. 

The BRAIN Initiative® has advanced a bold agenda to develop new technologies that will allow us to discover how our brain works. This understanding will inevitably raise new questions about the boundaries between humans and machines, and these questions will no doubt invite both excitement and debate from diverse communities across the United States. While chartered to promote new technologies that advance scientific understanding, The BRAIN Initiative® also has clear potential for uncovering new and effective clinical approaches for brain disorders across the lifespan. These include autism, Alzheimer’s, substance-use disorders, and many others. Each of these afflictions takes a devastating toll on society and our economy. People who are not scientists but who suffer from these conditions have a critical role to play within The BRAIN Initiative® – not only through funding, but also through informing its activities and outputs.

In addition to incorporating public values and neuroethical considerations into the NIH BRAIN Initiative with an aim of maximizing positive long-term benefits to society, increased efforts to engage and include diverse non-scientific communities can also yield other near-term benefits. A 2018 National Academies report on citizen science noted, “When communities can work alongside scientists to advance their priorities, enhanced community science literacy is one possible outcome,” and that community science literacy can be understood as “distributed science knowledge and the ability to use that knowledge, in connection with a broad suite of community knowledge and capabilities, to leverage science for its community goals.”

The WG 2.0 suggests the establishment of a bidirectional public-engagement framework with a two-fold objective: 
1.    To translate perspectives from our country’s scientific institutions (e.g., NIH, NSF, the National Academies, and others) to diverse local communities across the nation 
2.    To curate perspectives from these diverse communities for use by the nation’s scientific leadership in setting programmatic and policy priorities. 
BRAIN 2.0 should fund work to pilot and validate a public-engagement model that incorporates timely themes, including neuroethical considerations, that arise from study of the human brain. NIH has a long history of public-facing educational and outreach activities surrounding its initiatives. Yet, long-standing barriers still thwart culturally inclusive community engagement and active public participation in biomedical and behavioral research. Consistent with the models advanced through successful examples such as the All of Us Research Program, we believe that these barriers can be mitigated by partnering with institutions embedded in local community settings (e.g., science museums, public libraries).

The primary objective of this endeavor is to facilitate a coordinated dialogue – taking place across the entire nation concurrently but designed to be relevant to local communities. Engaging librarians, museum educators, facilitators, teachers, and science communicators at local institutions may offer another strategy to circumvent barriers that have interfered with productive public engagement in the past.

Core principles for public engagement

While research with human participants was less prominently highlighted during BRAIN 1.0, many concepts and tools are now in place to extend such research in BRAIN 2.0. To facilitate public engagement, we suggest the following core principles for BRAIN 2.0:

Principle 1: Citizen input must guide the NIH BRAIN Initiative research enterprise. Achieving the aims outlined for multiple priority areas of BRAIN 2.0 and its proposed “transformative projects” will directly involve studies with human research participants and translation of their findings into societal use. It is thus critically important to align the NIH BRAIN Initiative with the broader societal context in which it will be implemented. Advancing a human research enterprise – one that at its very essence explores what it means to be human – would benefit greatly from routine feedback and scrutiny from a variety of perspectives beyond neuroscientists. The BRAIN Initiative® has an opportunity to go beyond education toward truly engaging various public audiences about not only scientific output but also the broader neuroethical considerations raised by the development and application of technologies and methods to understand and manipulate the human brain.

Principle 2: As a publicly funded biomedical research enterprise, NIH and The BRAIN Initiative® have a responsibility to ensure that their investment benefits all U.S. populations equitably. Across biomedical research endeavors, multiple large-scale psychiatric studies deemed successful in the literature – and for which major medications have been developed – may not be applicable to individuals from populations that were not sufficiently represented in clinical-research cohorts. The BRAIN ecosystem must address barriers that prevent participation and inclusion of historically vulnerable individuals to ensure that knowledge and therapies extrapolate to broad segments of the American population. The BRAIN Initiative® is likely to shape multiple aspects of federal policy and the development of human brain technology by private industry. We suggest that, consistent with the guiding principles of the All of Us Research Program, The BRAIN Initiative® should prioritize diversity and inclusion as a fundamental pillar.

Principle 3: The NIH BRAIN Initiative should dedicate funds to supporting public engagement. Funding for public engagement in the NIH BRAIN Initiative might be similar to levels allocated in the NIH All of Us Research Program.

Recruiting a new generation of neuroscience researchers

Next Gen BRAIN Project 
Training the next generation of the biomedical workforce is a key component of the NIH mission. As articulated in the initial vision of BRAIN 2025, the success of The BRAIN Initiative® is predicated on recruiting scientists from disparate disciplines (engineering, chemistry, physics, statistics) and creating new incentive structures that encourage these scientists to work together, to share data, and to ultimately solve transdisciplinary, team-based problems. Progress toward achieving this aim has been modest to date, which is not unique to The BRAIN Initiative®, but with its broad reach and substantial resources, BRAIN 2.0 has an opportunity to recruit talent from all segments of the U.S. population, including women and individuals from underrepresented groups. The proposed Next Gen BRAIN Project is a principal investigator (PI)-led citizen-science effort to address these issues directly by strengthening the recruitment pipeline that runs from our nation’s educational system into our neuroscience enterprise.

The NIH BRAIN Initiative has supported high school-level researchers through the NIH diversity-supplement mechanism, reinforcing enthusiasm for empowering the next generation of researchers. The aim of the Next Gen BRAIN Project is to scale up this strategy, offering opportunities to more youths from diverse ethnic and socioeconomic backgrounds. Specifically, the Next Gen BRAIN Project should be built upon the tenets of team science, data sharing across institutions, multi-disciplinary inquiry, animal research and its associated neuroethical questions, machine learning analysis of behavior, behavioral genetics, and technology development. All of these topics are core areas within the NIH BRAIN Initiative. The Next Gen BRAIN Project targets high-school students, since this pool will be eligible for graduate training at or near the conclusion of BRAIN 2025. A partnership of NIH BRAIN-funded scientists and/or other scientists and educators could serve as the PI(s) for the Next Gen BRAIN Project, selected through a standard grant peer-review process in response to a Next Gen BRAIN Project funding announcement.

Phase 1: The Animal Challenge. This phase creates a national challenge for high-school student teams – within classrooms, in afterschool programs, or at science museums – to design a behavioral apparatus for a selected model organism relevant to neuroscience research (e.g., fruit flies, roundworms, zebrafish, or others) and appropriate to the team and environment as established by the project leaders. Design criteria for the apparatus would be established by a blue-ribbon panel of scientists (including NIH BRAIN-funded investigators, engineers, and teachers identified by the PI(s)). Evaluation metrics would include cost, ease of assembly and use, and key scientific variables for inducing and monitoring behavior using mechanisms applicable to the named organisms. A $10,000 prize would go to the winning design, with the funds applied to supporting the winning team’s science initiatives. The winning design would be publicly available, in line with the dissemination principles for new tools and technologies established in the NIH BRAIN Initiative.

Phase 2: Local Neuroscience Projects. Teams of high-school students – again, within classrooms, in afterschool programs, or at science museums – could register with the PI(s) for the Next Gen BRAIN Project. Each team would then receive project background instructions and training related to the winning apparatus from Phase 1. Training relevant to rigor/reproducibility, responsible conduct in research, and neuroethics should also be provided, and all training could be facilitated through online videos or other mechanisms established by the PI(s) and NIH. Students would perform the testing protocol established by the PI(s), and their experimental results could be uploaded to a public Next Gen BRAIN Project website coordinated by the PI(s). The experimental results, as established by the PI(s), should capture multiple levels of analysis relevant to The BRAIN Initiative®. For example, video data could be used to capture behavior, and students could collect genetic/molecular material from the project model organism. In each case, appropriate experimental training should be provided by the PI(s). For processing of biological samples, the PI might leverage existing NIH intramural cores for molecular/genetic analysis, or public-private partnerships established through the NIH BRAIN Initiative.
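As one illustration of how Phase 2 teams might turn raw video into shareable results, the following is a minimal sketch assuming a hypothetical tracking file with "x" and "y" columns (one row per video frame); the file name, column names, and frame rate are placeholders chosen for illustration rather than a prescribed Next Gen BRAIN format.

# Minimal sketch (hypothetical file and column names): turn tracked positions
# from a behavioral video into simple summary metrics a student team could
# upload to a shared Next Gen BRAIN-style database.
import csv
import math

def summarize_track(path: str, frame_rate: float = 30.0) -> dict:
    """Compute total distance traveled and mean speed from x,y positions (one row per frame)."""
    xs, ys = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):          # expects columns "x" and "y"
            xs.append(float(row["x"]))
            ys.append(float(row["y"]))
    distance = sum(
        math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i]) for i in range(len(xs) - 1)
    )
    duration = len(xs) / frame_rate            # seconds of video analyzed
    return {"total_distance": distance, "mean_speed": distance / duration if duration else 0.0}

# Example (assuming a file produced by whatever tracking tool the team uses):
# print(summarize_track("team_07_zebrafish_trial1.csv"))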

Phase 3: Open Neuroscience Inquiry. The Local Neuroscience Projects will yield a large gene/molecule/behavior database within the NIH BRAIN Initiative data infrastructure that should be ideally suited for machine learning analyses developed and advanced through The BRAIN Initiative®. Students and other neuroscience researchers can mine this unique resource to discover behaviorally relevant gene/molecule/behavior pathways that may be appropriate for further scientific inquiry. Finally, the NIH BRAIN Initiative might establish additional national neuroscience challenges based on these data for science-minded citizens of all ages and experience levels.
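The kind of exploratory mining described above might, for example, look like the following minimal sketch; the table layout and column names ("genotype", "molecule_level", "behavior_score") are assumptions, since no Phase 3 schema yet exists.

# Illustrative sketch only: the database schema does not yet exist, so the
# column names ("genotype", "molecule_level", "behavior_score") are assumptions.
import pandas as pd

def rank_genotypes_by_behavior(csv_path: str) -> pd.DataFrame:
    """Group pooled Phase 2 results by genotype and rank by mean behavior score."""
    df = pd.read_csv(csv_path)
    summary = (
        df.groupby("genotype")
          .agg(mean_behavior=("behavior_score", "mean"),
               mean_molecule=("molecule_level", "mean"),
               n_trials=("behavior_score", "size"))
          .sort_values("mean_behavior", ascending=False)
    )
    return summary

# A correlation such as summary["mean_behavior"].corr(summary["mean_molecule"])
# would be one starting point for nominating gene/molecule/behavior pathways
# worth deeper, properly controlled follow-up.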

The Next Gen BRAIN Project engages students in learning key principles and practices central to the NIH BRAIN Initiative (monitoring behavior, discovering underlying mechanisms, and neuroethics) and other NIH-funded large collaborative projects involving data-science approaches via a series of hands-on experiences. Implicit principles explored through the Next Gen BRAIN Project include collaborative team science and appreciation for the roles of technology development and animal models in biomedical research. Potential venues for launching the Next Gen BRAIN Project include the X-STEM conference, an annual science conference for K-12 students whose speakers regularly include leadership from NIH and industry. Other potential stakeholders with whom NIH might collaborate to launch this effort include the Association of Science and Technology Centers, the American Association for the Advancement of Science, and the Society for Neuroscience. 

V. BRINGING BRAIN INITIATIVE ADVANCES TO BRAIN DISORDERS 
Substantial support for The BRAIN Initiative® was provided through the 21st Century Cures Act, and there is little doubt that this taxpayer support remains undergirded by an aim to reduce the burden of disease and ameliorate suffering associated with neurological and psychiatric conditions such as Alzheimer’s disease, autism spectrum disorders, depression, substance use disorders, and pain, among others. New tools and knowledge resulting from The BRAIN Initiative® are driving new research activities across the much broader NIH neuroscience ecosystem to advance the study of brain disorders and to promote novel treatments. For example, the success of BRAIN 1.0 in developing a cell census of the brain prompted the National Institute on Aging to launch a new program aimed at characterizing the cell census across aging (RFA-AG-19-027). Similarly, the National Institute of Mental Health launched a new program that uses BRAIN technology to facilitate therapeutic development (RFA-MH-19-235), while the National Institute of Neurological Disorders and Stroke has launched new programs that leverage public-private-partnership programs created through The BRAIN Initiative® to advance novel treatments for pain (RFA-NS-19-017). As novel neurotechnologies and fundamental insights about human brain function continue to emerge through BRAIN 2.0, the WG 2.0 suggests that NIH prioritize the development of programs that leverage BRAIN Initiative research products and adapt approaches for soliciting, evaluating, and funding scientific proposals to accelerate their adoption across the broader neuroscience enterprise. 
•    The NIH BRAIN Initiative should consider listing on its website new funding opportunities outside of The BRAIN Initiative® that arise from its investments (such as those identified above). Doing so could attract researchers to NIH funding opportunities outside of The BRAIN Initiative® that are directly related to their areas of expertise. 
•    The NIH neuroscience ecosystem should consider novel methods to recruit and incentivize BRAIN Initiative investigators from diverse disciplines to serve on study sections and special-emphasis panels outside of The BRAIN Initiative® that evaluate proposals related to emergent BRAIN Initiative technology. This approach will facilitate substantive adoption of technology and fundamental principles promoted by The BRAIN Initiative® into the broader NIH neuroscience funding portfolio. 
•     NIH should consider leveraging the All of Us Research program to recruit participants for human neuroscience research sponsored by the NIH BRAIN Initiative.   Beyond the Vision of BRAIN 1.0: TRANSFORMATIVE PROJECTS While the preceding sections include many important findings for future implementation of the NIH BRAIN Initiative, none represent a notable departure from the vision presented by BRAIN 2025. However, one domain where we believe the NIH BRAIN Initiative could flourish in its second phase is through encouraging development of several large-scale projects that will yield particularly important resources and data to propel neuroscience far into the future. The BRAIN Initiative® Cell Census Network (BICCN), which was described in Priority Area 1: Discovering Diversity, stands as a model for such projects. Building on early BRAIN Initiative successes to identify cell types in the mouse brain, the BICCN has brought together centers that are compiling a comprehensive mouse brain-cell atlas. Teams are launching corresponding efforts in NHPs and in humans, as well as establishing a Cell Data Center that will integrate, visualize, and disseminate the cell-census data. This lasting resource will be relevant to countless neuroscience research projects for decades to come and provide necessary tools to conceive cell-based therapies for use in humans. It is likely that the high initial cost of the BICCN will be returned many times over by motivating and systematizing future research. We believe that directing resources to transformative projects of this scale will accelerate the goals of the NIH BRAIN Initiative. We recognize that resources are finite and that small-scale research projects are the lifeblood of NIH BRAIN Initiative research. However, The BRAIN Initiative® offers a unique opportunity to accomplish huge projects like the BICCN that would not otherwise have access to major, sustained support. The next pages highlight examples of large-scale transformative projects that we believe could change the course of neuroscience research and provide exceptional returns both for fundamental understanding of brain function and for building powerful technological and knowledge-based platforms for ameliorating neurological and neuropsychiatric disorders. These examples support our view that BRAIN 2.0 should mobilize efforts to identify and support more large-scale projects – often involving various levels of team science – that have potential for exceptional impact.

1. A Cell Type-Specific Armamentarium for Understanding Brain Function and Dysfunction 
Ramón y Cajal’s original depictions of the microscopic structure of the brain provided a visual display of the extraordinary beauty of this organ, but anatomy alone could not provide a mechanistic understanding of the functions that different cell types contribute to human behavior and thought. With a detailed cell census in hand, we now have an opportunity to manipulate brain cell function and resolve questions of cause and effect between cell types, their functional outputs, and illnesses of the brain. This large-scale, high-throughput transformative project would generate and implement methods to specifically access, manipulate, and model a few hundred clinically-relevant cell types across multiple species, including NHPs and humans, in a manner not requiring germline genome modification. Central to achieving this transformative goal is the ability to permanently label and reversibly alter the function of groups of cells from any organism, employing strategies that enable access to specific cell types for the purpose of experimentally manipulating them. Example technologies might include CRISPR-based methods; high-throughput use of compact (adeno-associated virus (AAV)-sized) enhancers that can control hundreds or thousands of specific cell types; monoclonal antibodies and/or nanobodies against cell type-specific surface proteins for pseudotyping lentiviruses; AAV serotypes with novel cell specificities; permanent, activity-dependent cell-marking methods; and methods that combine approaches and targets (e.g., split-GAL4 with two enhancers, split-GAL4 with pseudotyped lentivirus). Such reversible, cell type-based manipulation of brain activity would advance understanding of fundamental principles of brain function, but also guide novel therapies for brain disorders through the use of animal models. These therapies would encompass cell type-specific manipulations of electrical activity, chemical neuromodulation, and gene expression. A preview of the power of this type of approach in other tissues includes studies with Perturb-seq and CROP-seq. At the molecular level, these tools – together with proteomic data and antibody reagents – will enable functional studies and empower disease-oriented research. At the circuit level, the ability to manipulate neuronal function with increased cell type-specificity will enable potential therapeutic interventions to override brain dysfunctions that have complex and/or early developmental genetic etiologies, and which would otherwise resist gene-level therapies (see Toward Circuit Cures, below). Together these molecular- and circuit-level approaches have the potential to transform treatment of brain disorders. 

2. The Human Brain Cell Atlas 
Work now underway in the BICCN aims to provide a cell type census that will characterize cell diversity in the human brain. That effort, however, will not provide detailed information about the morphology or connections of cell types, or about their locations within cortical layers or other brain structures. This transformative project builds on technological and conceptual advances from model systems and aims to generate a comprehensive cell-type atlas of the human brain that includes an anatomically informed, highly granular cell census of the whole human brain. 
Attaining this goal entails significant changes in current scales of tissue-processing capabilities, which await progress in automation, serial analyses, and coordination efforts across large collaborative research groups. This is an ambitious goal, but current technology paves the way for making it a reality over the next several years, especially given the expected emergence of improved methods. 

3. The Mouse Brain Connectome 
This transformative project aims to comprehensively map the entire mouse brain, enabling study of brain circuitry from synapses to coordinated function and behavior. Complete nanometer-level reconstruction of an entire mouse brain is a daunting prospect, as it requires detailed EM imaging of roughly 300,000 brain slices. The resulting three-dimensional dataset will be massive – thousands of times larger than previous endeavors – and will require additional processing to identify cell boundaries, provide molecular information, and map connectivity across six orders of magnitude spatially. Accomplishing such an ambitious goal in a 5-year period will require continued advances in high-throughput tissue handling, EM devices with nearly 100 parallel beams, and modern machine learning algorithms implemented by supercomputers. Despite the enormity of the challenge, we believe it is feasible. New histologies and tissue chemistries have changed the landscape of the field. Sample handling is increasingly automated, and the maps of several complete small organisms are now appearing. Multibeam-EM devices with 64 parallel beams are also now available. Manufacturing and deploying the required 20 to 30 100-parallel-beam devices over a 5-year period in a time- and cost-efficient manner will require academic-industrial partnerships on a scale atypical for neuroscience but not unfamiliar in high-energy physics and astronomy. Public-private partnerships for image analysis, including the most advanced industrial platforms for large-scale data analytics, have already demonstrated the feasibility of training deep networks to perform tasks that would require impossible amounts of human effort. Hence, despite both the complexity and cost associated with this transformative project, its ultimate success is not in question. It is possible that a nanometer-level reconstruction could be facilitated by, or coupled with, strategic application of optical or sequencing-based methods. While many researchers see reconstruction of the entire neural network of a human brain as the ultimate goal of connectomics, that goal is still out of reach – perhaps several decades away – for several reasons: postmortem brain tissue is not currently sufficiently preserved to allow full circuit reconstruction; EM staining for a volume the size of the human brain is unattainable with known technologies; and EM scans of an entire human brain would require approximately one zettabyte of digital storage (10^21 bytes, or 1 billion terabytes) – roughly the total amount of Internet traffic worldwide in one year! For these reasons, proceeding with an approach that uses a more tractable mammalian brain will provide guidance on how to acquire, process, and share the extraordinarily large data sets generated by connectomics research. The mouse brain provides a roadmap to the organization of the human brain at a scale that is doable with current technologies. The value of the mouse connectome for neuroscience could be immense. 
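As a rough check on the data volumes discussed above, the following back-of-envelope sketch reproduces the order of magnitude of the human-brain storage estimate and implies a roughly exabyte-scale dataset for the mouse; the voxel dimensions, one byte per voxel, and brain volumes are approximate assumptions, so only the orders of magnitude are meaningful.

# Back-of-envelope check of the storage estimates quoted above. The voxel size
# (4 x 4 x 40 nm), one byte per voxel, and the brain volumes are rough
# assumptions, so only the orders of magnitude are meaningful.

NM3_PER_MM3 = (1e6) ** 3                      # 1 mm^3 = 1e18 nm^3
VOXEL_NM3 = 4 * 4 * 40                        # typical serial-section EM voxel

def em_bytes(brain_volume_mm3: float) -> float:
    """Approximate raw EM data volume in bytes at one byte per voxel."""
    return brain_volume_mm3 * NM3_PER_MM3 / VOXEL_NM3

mouse = em_bytes(500)        # mouse brain, roughly 0.5 cm^3
human = em_bytes(1.2e6)      # human brain, roughly 1,200 cm^3
print(f"mouse: ~{mouse / 1e18:.1f} exabytes")    # on the order of 1 exabyte
print(f"human: ~{human / 1e21:.1f} zettabytes")  # on the order of 1 zettabyte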
Because equivalent data have never been collected at this scale, the full implications of such a dataset for defining brain function remain uncertain. However, we have already learned important lessons from brain-mapping efforts in other species. More than 30 years after its publication, the roundworm map continues to be a widely cited resource, showing the durability of such canonical datasets. Efforts to map the fruit-fly brain are nearing completion, and dense reconstructions of vertebrate zebrafish larvae have recently been released. Although the volumes involved in those projects are orders of magnitude smaller than for the mouse brain, these efforts have given us network-topology principles and defined functional modules that inform our understanding of neural function and behavior. However, none of these past or current efforts approximates the value of comprehensive study of a mammalian brain, for which the organizing structures, from the laminar structure of the cortex to detailed subcortical and brainstem nuclei, are widely preserved across mammalian species, including humans. Importantly, in addition to providing essential data to begin to address how structure subserves function in a mammalian brain at synaptic and circuit levels, we will undoubtedly find surprises in these data. As a field, neuroscience has focused on familiar or accessible brain structures (cerebral cortex, hippocampus, cerebellum). A whole-brain connectome for even one mammalian individual is likely to reveal worlds of circuitry that have been overlooked and will be a treasure trove for theorists. Dense EM reconstructions contain detailed information from all cell types within brain tissue; hence, such data will provide a wealth of knowledge about the anatomical underpinnings of the essential roles of various cell types in brain function. Combining EM data with functional, molecular, and metabolic readouts obtained in a living brain before sectioning will provide data scientists with an opportunity to see just how much of this information is learnable from EM data alone. Doing so will teach us whether inferring cell type or metabolism remains possible only with molecular methods. 

4. Toward Circuit Cures

As neuroscience seeks to test hypotheses about brain mechanisms, researchers will discover many new ways to modulate brain activity in a manner that influences thoughts, feelings, and actions. Most of these strategies will undoubtedly include the capability to move brain activity beyond normal physiological ranges; conversely, they will enable approaches that drive atypical brain circuits and systems toward healthy, adaptive function. Thus, there also exists an enormous opportunity for the NIH BRAIN Initiative research community to achieve breakthrough insights into understanding, and moving toward cures for, pathological conditions operating at the circuit level.

This transformative project aims to protect or correct a vulnerable circuit by achieving specific circuit-level understanding of, and truly specific interventions for, a major human neuropsychiatric or neurological disease symptom.

1. A circuit in this context is not a brain region or group of cells (the target of most interventional devices), but rather the next level up in organizational complexity and scale – for example, time-varying electrical traffic along a set of molecularly defined cells projecting from one part of the brain to another region. By selectively controlling the activity of specific circuits with high spatiotemporal precision, it may be possible to achieve long-lasting changes in brain function, which would ultimately reduce the significant human suffering common to most neuropsychiatric disorders. This endeavor could include developing an approach to drive long-term, circuit-specific changes using precisely timed noninvasive stimulation – potentially in combination with safe, plasticity-facilitating medications or sensory stimuli, also delivered noninvasively, if needed. This technology could work at the surface of the brain, for example with transcranial magnetic stimulation, but also reach deep-brain targets using knowledge of axonal-tract trajectories known to be causal by virtue of work described earlier in this report and spatially localizable for each individual using imaging approaches that visually represent nerve tracts. Simultaneous stimulation at multiple locations could leverage natural wiring anatomy to perturb circuits at the convergence points of multiple tracts deeper within the brain. Strategies for developing such a technology could begin in model species, to discover the optimal spatiotemporal stimulation patterns needed to achieve the desired long-term functional strengthening or weakening of deep circuitry. This approach could then be translated to NHPs for monitoring studies of circuit and behavioral impact, followed ultimately by testing and validation in humans.

2. Another strategy for a Circuit Cure would leverage new knowledge arising from the intersection of high-speed causal neuroscience experiments and high-speed recording methods. Some invasive DBS designs for epilepsy are closed-loop systems triggered into action by detection of abnormal rhythms, not unlike an implanted cardioverter-defibrillator in the heart. The novelty of this transformative approach is the development of a closed-loop, or triggered, noninvasive neurotechnology that delivers a circuit manipulation with temporal precision that tolerates the reduced spatial precision typical of noninvasive devices. This technology would be temporally and spatially targeted based on an activity map generated by monitoring the brain in real time. An example might be a pathological reward state, as observed in the manic state of bipolar disorder or during substance-use craving. In such scenarios, the aberrant brain state would first be sensed, followed by delivery of a stimulus to suppress the reward circuit. When the abnormal state subsides, the neurotechnology would deactivate, allowing the individual to continue to enjoy experiences that are rewarding under healthy circumstances (a conceptual sketch of this sense-trigger-suppress loop appears at the end of this section).

3. Another Circuit Cure model would entail discovery and control of brain circuits that carry risk for future development of disease, or that instead provide protection against future chronic processes or life events that may trigger disease symptoms. The first step in pursuing this transformative project would be identification of specific neural circuit activity patterns that create vulnerability to, or resilience against, neuropsychiatric disorders in sensitive model species. Causal interventions, ideally minimally invasive as in the above examples, would be tested and identified in these circuits to prevent the emergence of behavioral dysfunction in response to environmental exposures that would otherwise trigger it. These technologies would then be translated to NHPs, and ultimately to humans, yielding methods for preventing a wide range of human disorders, such as opioid abuse, Alzheimer's disease, and others.

While the proposed Circuit Cures project advances novel treatments, neuroscientists will likely uncover fundamental principles underlying neural and behavioral mechanisms, such as distributed-circuit dynamics, plasticity, state, and decision-making in the human brain. These observations will be valuable for validating and further revising theoretical principles derived from model systems in order to optimize brain stimulation-based treatments for neural circuit disorders in humans. For all of these proposed Circuit Cures, reliable and predictable access to brain circuits that yields long-term changes will pave the way toward novel, safe, and effective treatments for brain disorders that are currently devastating to patients and their families.
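As referenced in item 2 above, the sense-trigger-suppress logic of a closed-loop, triggered neurotechnology can be sketched conceptually in a few lines of Python. This is an illustration only: the thresholds, activity values, and hysteresis rule are hypothetical placeholders, not any real device interface or any specific design described in this report.

```python
# Conceptual sketch of a closed-loop ("triggered") neuromodulation cycle, as in
# item 2 above: sense the brain state, trigger suppressive stimulation when an
# aberrant pattern is detected, and deactivate once the state subsides.
# Thresholds and activity values are hypothetical placeholders.

def closed_loop_step(activity_level: float, stimulator_on: bool,
                     on_threshold: float = 0.8, off_threshold: float = 0.5) -> bool:
    """Return the stimulator state for the next monitoring cycle (with hysteresis)."""
    if not stimulator_on and activity_level > on_threshold:
        return True    # aberrant state sensed: begin suppressive stimulation
    if stimulator_on and activity_level < off_threshold:
        return False   # state has subsided: deactivate, leaving normal reward intact
    return stimulator_on

# Example: a stream of monitored activity levels (arbitrary illustrative values).
stimulator_on = False
for level in [0.2, 0.5, 0.9, 0.95, 0.6, 0.3]:
    stimulator_on = closed_loop_step(level, stimulator_on)
    print(f"activity={level:.2f} -> stimulation {'ON' if stimulator_on else 'OFF'}")
```

The hysteresis (separate on/off thresholds) simply keeps the sketch from toggling on every fluctuation; an actual system would be triggered by a real-time activity map rather than a single scalar.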
5. Memory and the Offline Brain

This potential transformative project aims to answer the question: How does the brain retrieve and leverage information from internal models and diverse memory systems? The project spans multiple scales, from synapses to brainwide networks, and it will require building bridges between measures of neural activity and behavior. Expertise from theorists will support experimentalists' efforts to interpret data in the context of relevant theoretical frameworks such as adaptation, attractor networks, Bayesian computation, and reinforcement learning.

This project would likely include coordinated projects with many investigators aiming to record activity in multiple brain areas during different types of memory tasks. The resulting large-scale maps of neural activity will be key for understanding how multiple systems coordinate to support memory and retrieval. Further, they will afford an opportunity to investigate the role of feedback in distributed neural circuits, as retrieval of internal knowledge likely unfolds in a top-down manner. Thus, these maps would optimally incorporate cell-type and projection information.

Three steps will encourage collaboration among research groups. First, a meeting would convene all interested participants before any awards are made, to facilitate setting synergistic goals. Second, applicants would be asked to describe explicitly how their proposed project links to at least two other proposed projects. Third, all groups would meet throughout the project to share and discuss ideas and progress across systems and models. The end goal of the project would be a coherent, data-supported model, spanning several species, of how internally stored knowledge is generated, retrieved, and deployed to guide behavior. Although existing efforts have made progress in this area, major outstanding questions remain about memory formation, representation, and retrieval. This project could unify existing efforts, revealing common motifs or previously unknown master control mechanisms.
The Vision of BRAIN 2.0: CONCLUDING REMARKS

The BRAIN 2025 report charted a visionary path in which the development and use of integrated technology would open new avenues to address fundamental questions in neuroscience. The final pages of BRAIN 2025: A Scientific Vision outlined five major concepts underlying brain function and dysfunction: perception, emotion and motivation, cognition, learning and memory, and action. Five years later, we know a great deal more, yet we remain intrigued by these formative concepts, which have profound relevance for understanding the healthy brain and for finding cures for brain disorders.

As reviewed in this report, the quick pace and ever-increasing alignment of science and technology spearheaded or facilitated by the NIH BRAIN Initiative have been nothing short of exceptional. As evidence, we as the WG 2.0 had no trouble conceiving a suite of potential transformative projects that aim to organize the path to realizing the BRAIN 2025 vision. These transformative projects, outlined in the preceding pages, all involve complex and multiscale lines of inquiry. As such, they will require new technological, scientific, and organizational inventions. Although risky, the proposed paths forward would likely transform our intellectual and technological access to brain function and inspire new cures for brain dysfunction.

Figure caption. Proposed ideal budget for the NIH BRAIN Initiative through FY2026 to accomplish the scientific priorities put forth by this Working Group, with the dotted line indicating when these priorities should take effect. Collaborative technology development ("Neurotechnology") remains a priority through FY2026, with discovery-driven, knowledge-building science ("Neuroscience") continuing to ramp up through FY2020. The remaining budget is devoted to "Organization of Science" and will need to increase throughout the period; this last category includes resources related to data sharing, technology dissemination, training, public engagement, and neuroethics. The Working Group acknowledges that Congressional appropriations are expected to cause year-to-year funding fluctuations, which are not shown here.

Scientific inquiry and technology are closely aligned – and this interconnection has been striking within the NIH BRAIN Initiative. The choice to focus on technology development in the first phase of this large-scale endeavor was a wise one that has set the stage for new avenues in BRAIN 2.0. Many areas within the seven Priority Areas are still developing tools and processes to fulfill the short- and long-term goals of BRAIN 2025, and this work must continue. Many of these studies are being done in model systems, which offer unique opportunities to probe fundamental biological principles that can later be tested and applied in research involving human participants. Importantly, new tools and new knowledge should be shared broadly to maximize the value of The BRAIN Initiative® and to seed new paths of inquiry. With anticipated advances in understanding and controlling circuits, enhanced attention to neuroethics can help identify and navigate concerns as they arise. Additionally, given the progress made in the various Priority Areas, we are poised for potentially revolutionary findings to come from integrating knowledge and approaches. This runway for new discoveries brings us closer to cures for brain disorders.
It also invites bold thinking about previously impossible endeavors that cross scales, integrate understanding across brain regions, and help us understand the still-enigmatic instruction manual for the brain and its connections to the body's other organs and systems. Given the extraordinary opportunities presented by 21st-century biomedicine, the possibilities are many, and we therefore need input and expertise from the broadest array of talent. As science shifts, so too must the NIH BRAIN Initiative. Team science is integral to progress, especially in frontier areas with numerous interfaces with other disciplines. Multiple perspectives inform the study of the brain and the profound differences apparent among humans, requiring input and participation from individuals and groups underrepresented in neuroscience. These include individuals from diverse backgrounds as well as scientists in quantitative disciplines especially relevant to modern biomedical research, which has shifted from observational to informational and which generates massive amounts of disparate types of data waiting to be analyzed and understood.

Our newfound knowledge about the organ that drives thought, emotion, and behavior comes with a responsibility to consider the many ethical issues that accompany neuroscience research – especially research that generates technologies linking humans and machines in intimate ways. Many of these new technologies, also used in novel combinations, will enable us to understand behavior in real-life settings, which will yield important knowledge about the brain and its circuitry's roles in daily life as well as in disease.

ROSTERS

BRAIN ACD WG 2.0 Members

Catherine Dulac, PhD (co-chair) – Harvard University
John Maunsell, PhD (co-chair) – University of Chicago
David Anderson, PhD – California Institute of Technology
Polina Anikeeva, PhD – Massachusetts Institute of Technology
Paola Arlotta, PhD – Harvard University
Anne Churchland, PhD – Cold Spring Harbor Laboratory
Karl Deisseroth, MD/PhD – Stanford University
Timothy Denison, PhD – Oxford University
James Deshler, PhD (Ex officio) – National Science Foundation
Kafui Dzirasa, MD/PhD – Duke University
Alfred Emondi, PhD (Ex officio) – Defense Advanced Research Projects Agency
Adrienne Fairhall, PhD – University of Washington
Christine Grady, RN, PhD (Ex officio) – Bioethics, National Institutes of Health
Elizabeth Hillman, PhD – Columbia University
Lyric Jorgenson, PhD (Ex officio) – National Institutes of Health
David Markowitz, PhD (Ex officio) – Intelligence Advanced Research Projects Activity
Lisa Monteggia, PhD – University of Texas Southwestern
Carlos Peña, PhD (Ex officio) – Food and Drug Administration
Krishna Shenoy, PhD – Stanford University
Doris Tsao, PhD – California Institute of Technology
Huda Zoghbi, MD – Baylor College of Medicine

NIH ACD BRAIN Initiative Neuroethics Subgroup

James Eberwine, PhD (co-chair) – University of Pennsylvania
Jeffrey Kahn, PhD, MPH (co-chair) – Johns Hopkins University
Adrienne Fairhall, PhD – University of Washington
Elizabeth Hillman, PhD – Columbia University
Christine Grady, MSN, PhD – National Institutes of Health
Karen Rommelfanger, PhD – Emory University
Insoo Hyun, PhD – Case Western Reserve University
Andre Machado, MD – Cleveland Clinic
Laura Roberts, MD – Stanford University
Francis Shen, JD, PhD – University of Minnesota

NIH Staff
Executive Secretary: Samantha White, PhD
Science Committee Specialist: Nina Hsu, PhD
NIH Consultant: Alison Davis, PhD


The Biology Corner

Biology Teaching Resources


Brain Label (Remote)


This brain labeling activity was created for remote learners as an alternative to the labeling and coloring worksheet we would traditionally do in class. Instead of coloring and labeling printouts, students use Google Slides to drag labels onto the images or type the answers into text boxes.

The slides do not include labeled diagrams, but they do include links to 3D models on Sketchfab for reference. Students are encouraged to view diagrams in their textbook or find diagrams on Google. The brain image I used for labeling came from Wikimedia Commons, but its labels are not in English. This makes it harder for students to simply copy the answers, though there are certainly plenty of other brain images out there that can be used as references.

The activity includes an external view of the brain where students label the lobes of the cerebrum (frontal, parietal, occipital, and temporal) and the cerebellum. Next students drag and drop labels to the internal structures, such as the thalamus, midbrain, corpus callosum, pineal body, and colliculi.

The next slide has the same image, but this time students need to type the words, though they can always click back to the previous slide to check. Mainly, this is just for reinforcement and practice.


The last two slides take a closer look at the brain stem and the limbic system, both with links to 3D models on Sketchfab.

I use these types of worksheets for practice and reinforcement and rarely grade practice done in class. My typical lesson includes going over the brain and each structure, then allowing students to practice labeling on their own, and then reviewing it with them the next day or at the end of the class period.

I also have several Quizlets and brain anatomy quizzes to help students learn the structures.

Shannan Muskopf


University of Illinois at Urbana-Champaign

The human brain is responsible for all behaviors, thoughts, and experiences described in this textbook. This module provides an introductory overview of the brain, including some basic neuroanatomy, and brief descriptions of the neuroscience methods used to study it.

  • Brain function
  • Neuroanatomy
  • Neuroimaging
  • Neuroscience
Learning Objectives
  • Name and describe the basic function of the brain stem, cerebellum, and cerebral hemispheres.
  • Name and describe the basic function of the four cerebral lobes: occipital, temporal, parietal, and frontal cortex.
  • Describe a split-brain patient and at least two important aspects of brain function that these patients reveal.
  • Distinguish between gray and white matter of the cerebral hemispheres.
  • Name and describe the most common approaches to studying the human brain.
  • Distinguish among four neuroimaging methods: PET, fMRI, EEG, and DOI.
  • Describe the difference between spatial and temporal resolution with regard to brain function.

Introduction

Any textbook on psychology would be incomplete without reference to the brain. Every behavior, thought, or experience described in the other modules must be implemented in the brain. A detailed understanding of the human brain can help us make sense of human experience and behavior. For example, one well-established fact about human cognition is that it is limited. We cannot do two complex tasks at once: we cannot read and carry on a conversation at the same time, text and drive, or surf the Internet while listening to a lecture, at least not successfully or safely. We cannot even pat our head and rub our stomach at the same time (with exceptions; see “A Brain Divided”). Why is this? Many people have suggested that such limitations reflect the fact that the behaviors draw on the same resource; if one behavior uses up most of the resource, there is not enough left for the other. But what might this limited resource be in the brain?

An MRI of the human brain delineating three major structures: the cerebral hemispheres, brain stem, and cerebellum.

The brain uses oxygen and glucose, delivered via the blood. The brain is a large consumer of these metabolites, using 20% of the oxygen and calories we consume despite being only 2% of our total weight. However, as long as we are not oxygen-deprived or malnourished, we have more than enough oxygen and glucose to fuel the brain. Thus, insufficient “brain fuel” cannot explain our limited capacity. Nor is it likely that our limitations reflect too few neurons: the average human brain contains 86 billion neurons. It is also not the case that we use only 10% of our brain, a myth likely started to imply we have untapped potential. Modern neuroimaging (see “Studying the Human Brain”) has shown that we use all parts of the brain, just at different times, and certainly more than 10% at any one time.
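The disproportion between the brain's share of body weight and its share of energy use can be made concrete with a one-line calculation; the sketch below uses only the figures quoted in the text.

```python
# Energy demand of the brain relative to its mass, using the figures in the text.
energy_share = 0.20   # brain uses ~20% of the oxygen and calories we consume
mass_share = 0.02     # brain is ~2% of total body weight
neurons = 86e9        # ~86 billion neurons

# Per unit of mass, brain tissue consumes roughly this many times more energy
# than the body-wide average.
relative_demand = energy_share / mass_share
print(f"Brain tissue uses ~{relative_demand:.0f}x the energy of average body tissue, "
      f"spread across {neurons:,.0f} neurons.")
```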

If we have an abundance of brain fuel and neurons, how can we explain our limited cognitive abilities? Why can’t we do more at once? The most likely explanation is the way these neurons are wired up. We know, for instance, that many neurons in the visual cortex (the part of the brain responsible for processing visual information) are hooked up in such a way as to inhibit each other (Beck & Kastner, 2009). When one neuron fires, it suppresses the firing of other nearby neurons. If two neurons that are hooked up in an inhibitory way both fire, then neither neuron can fire as vigorously as it would otherwise. This competitive behavior among neurons limits how much visual information the brain can respond to at the same time. Similar kinds of competitive wiring among neurons may underlie many of our limitations. Thus, although talking about limited resources provides an intuitive description of our limited capacity behavior, a detailed understanding of the brain suggests that our limitations more likely reflect the complex way in which neurons talk to each other rather than the depletion of any specific resource.
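The competitive wiring described above can be illustrated with a toy simulation of two mutually inhibitory units. This is a minimal sketch for intuition only, not a model of visual cortex; the rate equation and the parameter values are illustrative assumptions.

```python
# Toy illustration of mutual inhibition: two units each receive external input
# and suppress each other. The update rule and parameters are illustrative only.

def simulate(input_a, input_b, inhibition=0.6, steps=200, dt=0.05):
    """Return approximate steady-state firing rates of two mutually inhibitory units."""
    rate_a, rate_b = 0.0, 0.0
    for _ in range(steps):
        # Each unit is driven by its input minus inhibition from the other unit,
        # with drive clipped at zero (no negative firing).
        drive_a = max(input_a - inhibition * rate_b, 0.0)
        drive_b = max(input_b - inhibition * rate_a, 0.0)
        rate_a += dt * (drive_a - rate_a)
        rate_b += dt * (drive_b - rate_b)
    return round(rate_a, 2), round(rate_b, 2)

print(simulate(1.0, 0.0))  # one stimulus: the driven unit fires at nearly full strength
print(simulate(1.0, 1.0))  # two stimuli: both units fire, but each is suppressed
```

With a single input the driven unit settles near its full rate, while with two inputs both units settle well below it, mirroring the idea that simultaneous signals compete rather than add.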

The Anatomy of the Brain

There are many ways to subdivide the mammalian brain, resulting in some inconsistent and ambiguous nomenclature over the history of neuroanatomy (Swanson, 2000). For simplicity, we will divide the brain into three basic parts: the brain stem, cerebellum, and cerebral hemispheres (see Figure 1). In Figure 2, however, we depict other prominent groupings (Swanson, 2000) of the six major subdivisions of the brain (Kandel, Schwartz, & Jessell, 2000).

The brain stem is sometimes referred to as the “trunk” of the brain. It is responsible for many of the neural functions that keep us alive, including regulating our respiration (breathing), heart rate, and digestion. In keeping with its function, if a patient sustains severe damage to the brain stem, he or she will require “life support” (i.e., machines are used to keep him or her alive). Because of its vital role in survival, in many countries a person who has lost brain stem function is said to be “brain dead,” although other countries require significant tissue loss in the cortex (of the cerebral hemispheres), which is responsible for our conscious experience, for the same diagnosis. The brain stem includes the medulla, pons, midbrain, and diencephalon (which consists of the thalamus and hypothalamus). Collectively, these regions are also involved in our sleep–wake cycle, some sensory and motor function, and growth and other hormonal behaviors.

homework 2.0 label the brain

The cerebellum is the distinctive structure at the back of the brain. The Greek philosopher and scientist Aristotle aptly referred to it as the “small brain” (“parencephalon” in Greek, “cerebellum” in Latin) in order to distinguish it from the “large brain” (“encephalon” in Greek, “cerebrum” in Latin). The cerebellum is critical for coordinated movement and posture. More recently, neuroimaging studies (see “Studying the Human Brain”) have implicated it in a range of cognitive abilities, including language. It is perhaps not surprising that the cerebellum’s influence extends beyond that of movement and posture, given that it contains the greatest number of neurons of any structure in the brain. However, the exact role it plays in these higher functions is still a matter of further study.

Cerebral Hemispheres

The four lobes of the brain and the cerebellum.

The cerebral hemispheres are responsible for our cognitive abilities and conscious experience. They consist of the cerebral cortex and accompanying white matter (“cerebrum” in Latin) as well as the subcortical structures of the basal ganglia, amygdala, and hippocampal formation. The cerebral cortex is the largest and most visible part of the brain, retaining the Latin name (cerebrum) for “large brain” that Aristotle coined. It consists of two hemispheres (literally two half spheres) and gives the brain its characteristic gray and convoluted appearance; the folds and grooves of the cortex are called gyri and sulci (gyrus and sulcus if referring to just one), respectively.

The two cerebral hemispheres can be further subdivided into four lobes: the occipital, temporal, parietal, and frontal lobes. The occipital lobe is responsible for vision, as is much of the temporal lobe. The temporal lobe is also involved in auditory processing, memory, and multisensory integration (e.g., the convergence of vision and audition). The parietal lobe houses the somatosensory (body sensations) cortex and structures involved in visual attention, as well as multisensory convergence zones. The frontal lobe houses the motor cortex and structures involved in motor planning, language, judgment, and decision-making. Not surprisingly then, the frontal lobe is proportionally larger in humans than in any other animal.

The subcortical structures are so named because they reside beneath the cortex. The basal ganglia are critical to voluntary movement and as such make contact with the cortex, the thalamus, and the brain stem. The amygdala and hippocampal formation are part of the limbic system, which also includes some cortical structures. The limbic system plays an important role in emotion and, in particular, in aversion and gratification.

A Brain Divided

The two cerebral hemispheres are connected by a dense bundle of white matter tracts called the corpus callosum. Some functions are replicated in the two hemispheres. For example, both hemispheres are responsible for sensory and motor function, although the sensory and motor cortices have a contralateral (or opposite-side) representation; that is, the left cerebral hemisphere is responsible for movements and sensations on the right side of the body and the right cerebral hemisphere is responsible for movements and sensations on the left side of the body. Other functions are lateralized; that is, they reside primarily in one hemisphere or the other. For example, for right-handed and the majority of left-handed individuals, the left hemisphere is most responsible for language.

There are some people whose two hemispheres are not connected, either because the corpus callosum was surgically severed (callosotomy) or due to a genetic abnormality. These split-brain patients have helped us understand the functioning of the two hemispheres. First, because of the contralateral representation of sensory information, if an object is placed in only the left or only the right visual hemifield, then only the right or left hemisphere, respectively, of the split-brain patient will see it. In essence, it is as though the person has two brains in his or her head, each seeing half the world. Interestingly, because language is very often localized in the left hemisphere, if we show the right hemisphere a picture and ask the patient what she saw, she will say she didn’t see anything (because only the left hemisphere can speak and it didn’t see anything). However, we know that the right hemisphere sees the picture because if the patient is asked to press a button whenever she sees the image, the left hand (which is controlled by the right hemisphere) will respond despite the left hemisphere’s denial that anything was there. There are also some advantages to having disconnected hemispheres. Unlike those with a fully functional corpus callosum, a split-brain patient can simultaneously search for something in his right and left visual fields (Luck, Hillyard, Mangun, & Gazzaniga, 1989) and can do the equivalent of rubbing his stomach and patting his head at the same time (Franz, Eliassen, Ivry, & Gazzaniga, 1996). In other words, split-brain patients exhibit less competition between the hemispheres.
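The hemifield logic described above can be captured in a few lines of code. This is a minimal sketch assuming the typical case described in the text: language localized to the left hemisphere and fully contralateral sensory and motor representation.

```python
# Minimal sketch of split-brain reporting logic, assuming language resides in the
# left hemisphere and that sensory/motor representation is fully contralateral.

def split_brain_response(stimulus_hemifield: str) -> dict:
    """Which hemisphere sees a stimulus, and how can a split-brain patient report it?"""
    # Contralateral representation: left hemifield -> right hemisphere, and vice versa.
    seeing_hemisphere = "right" if stimulus_hemifield == "left" else "left"
    return {
        "seeing_hemisphere": seeing_hemisphere,
        # Only the left hemisphere can speak, so a verbal report requires it to have seen the stimulus.
        "verbal_report_possible": seeing_hemisphere == "left",
        # Each hemisphere controls the opposite hand, which can still signal detection with a button press.
        "responding_hand": "left" if seeing_hemisphere == "right" else "right",
    }

print(split_brain_response("left"))   # seen by the right hemisphere: no verbal report, left hand responds
print(split_brain_response("right"))  # seen by the left hemisphere: verbal report possible, right hand responds
```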

Gray Versus White Matter

The cerebral hemispheres contain both gray and white matter, so called because they appear grayish and whitish in dissections or in an MRI (magnetic resonance imaging; see “Studying the Human Brain”). The gray matter is composed of the neuronal cell bodies (see module “Neurons”). The cell bodies (or soma) contain the genes of the cell and are responsible for metabolism (keeping the cell alive) and synthesizing proteins. In this way, the cell body is the workhorse of the cell. The white matter is composed of the axons of the neurons, in particular axons covered with a sheath of myelin (fatty support cells that are whitish in color). Axons conduct the electrical signals of the cell and are therefore critical to cell communication. People use the expression “use your gray matter” when they want a person to think harder. The “gray matter” in this expression is probably a reference to the cerebral hemispheres more generally, the gray cortical sheet (the convoluted surface of the cortex) being the most visible part. However, both the gray matter and the white matter are critical to proper functioning of the mind. Losses of either result in deficits in language, memory, reasoning, and other mental functions. See Figure 3 for MRI slices showing both the gray cortical sheet and the inner white matter that connects its cell bodies.

homework 2.0 label the brain

Studying the Human Brain

How do we know what the brain does? We have gathered knowledge about the functions of the brain from many different methods. Each method is useful for answering distinct types of questions, but the strongest evidence for a specific role or function of a particular brain area is converging evidence; that is, similar findings reported from multiple studies using different methods.

One of the first organized attempts to study the functions of the brain was phrenology, a popular field of study in the first half of the 19th century. Phrenologists assumed that various features of the brain, such as its uneven surface, are reflected on the skull; therefore, they attempted to correlate bumps and indentations of the skull with specific functions of the brain. For example, they would claim that a very artistic person has ridges on the head that vary in size and location from those of someone who is very good at spatial reasoning. Although the assumption that the skull reflects the underlying brain structure has been proven wrong, phrenology nonetheless significantly impacted current-day neuroscience by advancing the idea that different parts of the brain are devoted to very specific functions that can be identified through scientific inquiry.

Dissection of the brain, in either animals or cadavers, has been a critical tool of neuroscientists since 340 BC when Aristotle first published his dissections. Since then this method has advanced considerably with the discovery of various staining techniques that can highlight particular cells. Because the brain can be sliced very thinly, examined under the microscope, and particular cells highlighted, this method is especially useful for studying specific groups of neurons or small brain structures; that is, it has a very high spatial resolution. Dissections allow scientists to study changes in the brain that occur due to various diseases or experiences (e.g., exposure to drugs or brain injuries).

Virtual dissection studies with living humans are also conducted. Here, the brain is imaged using computerized axial tomography (CAT) or MRI scanners; they reveal with very high precision the various structures in the brain and can help detect changes in gray or white matter. These changes in the brain can then be correlated with behavior, such as performance on memory tests, and, therefore, implicate specific brain areas in certain cognitive functions.

Changing the Brain

Some researchers induce lesions or ablate (i.e., remove) parts of the brain in animals. If the animal’s behavior changes after the lesion, we can infer that the removed structure is important for that behavior. Lesions of human brains are studied in patient populations only; that is, patients who have lost a brain region due to a stroke or other injury, or who have had surgical removal of a structure to treat a particular disease (e.g., a callosotomy to control epilepsy, as in split-brain patients). From such case studies, we can infer brain function by measuring changes in the behavior of the patients before and after the lesion.

Because the brain works by generating electrical signals, it is also possible to change brain function with electrical stimulation. Transcranial magnetic stimulation (TMS) refers to a technique whereby a brief magnetic pulse is applied to the head that temporarily induces a weak electrical current in the brain. Although effects of TMS are sometimes referred to as temporary virtual lesions, it is more appropriate to describe the induced electricity as interference with neurons’ normal communication with each other. TMS allows very precise study of when events in the brain happen, so it has good temporal resolution, but its application is limited to the surface of the cortex and cannot extend to deep areas of the brain.

Transcranial direct current stimulation (tDCS) is similar to TMS except that it uses electrical current directly, rather than inducing it with magnetic pulses, by placing small electrodes on the skull. A brain area is stimulated by a low current (equivalent to an AA battery) for a more extended period of time than TMS. When used in combination with cognitive training, tDCS has been shown to improve performance of many cognitive functions such as mathematical ability, memory, attention, and coordination (e.g., Brasil-Neto, 2012; Feng, Bowden, & Kautz, 2013; Kuo & Nitsche, 2012).

Neuroimaging tools are used to study the brain in action; that is, when it is engaged in a specific task. Positron emission tomography (PET) records blood flow in the brain. The PET scanner detects a radioactive substance that is injected into the bloodstream of the participant just before or while he or she is performing some task (e.g., adding numbers). Because active neuron populations require metabolites, more blood and hence more radioactive substance flows into those regions. PET scanners detect the injected radioactive substance in specific brain regions, allowing researchers to infer that those areas were active during the task. Functional magnetic resonance imaging (fMRI) also relies on blood flow in the brain. This method, however, measures the changes in oxygen levels in the blood and does not require any substance to be injected into the participant. Both of these tools have good spatial resolution (although not as precise as dissection studies), but because it takes at least several seconds for the blood to arrive at the active areas of the brain, PET and fMRI have poor temporal resolution; that is, they do not tell us very precisely when the activity occurred.

A researcher studies fMRI images on a computer monitor.

Electroencephalography (EEG), on the other hand, measures the electrical activity of the brain and therefore has much greater temporal resolution (millisecond precision rather than seconds) than PET or fMRI. As with tDCS, electrodes are placed on the participant’s head while he or she is performing a task. In this case, however, many more electrodes are used, and they measure rather than produce activity. Because the electrical activity picked up at any particular electrode can be coming from anywhere in the brain, EEG has poor spatial resolution; that is, we have only a rough idea of which part of the brain generates the measured activity.

Diffuse optical imaging (DOI) can give researchers the best of both worlds: high spatial and temporal resolution, depending on how it is used. Here, one shines infrared light into the brain, and measures the light that comes back out. DOI relies on the fact that the properties of the light change when it passes through oxygenated blood, or when it encounters active neurons. Researchers can then infer from the properties of the collected light what regions in the brain were engaged by the task. When DOI is set up to detect changes in blood oxygen levels, the temporal resolution is low and comparable to PET or fMRI. However, when DOI is set up to directly detect active neurons, it has both high spatial and temporal resolution.
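The trade-offs among the four neuroimaging methods discussed above can be collected into a small lookup structure. The sketch below simply restates the resolutions given in the text as a summary aid; the phrasing of the entries is mine, not a standardized rating.

```python
# Spatial/temporal resolution trade-offs of the neuroimaging methods described
# above, restated from the text as a simple lookup table.

NEUROIMAGING_METHODS = {
    "PET":  {"signal": "blood flow (injected radioactive tracer)",
             "spatial": "good", "temporal": "poor (seconds)"},
    "fMRI": {"signal": "blood oxygen levels",
             "spatial": "good", "temporal": "poor (seconds)"},
    "EEG":  {"signal": "electrical activity at scalp electrodes",
             "spatial": "poor", "temporal": "good (milliseconds)"},
    "DOI":  {"signal": "infrared light passed through the head",
             "spatial": "depends on setup",
             "temporal": "low when tracking blood oxygen; high when detecting active neurons directly"},
}

for method, properties in NEUROIMAGING_METHODS.items():
    print(f"{method}: {properties}")
```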

Because the spatial and temporal resolution of each tool varies, the strongest evidence for the role a certain brain area serves comes from converging evidence. For example, we are more likely to believe that the hippocampal formation is involved in memory if multiple studies using a variety of tasks and different neuroimaging tools provide evidence for this hypothesis. The brain is a complex system, and only advances in brain research will show whether the brain can ever really understand itself.


Discussion Questions
  • In what ways does the segmentation of the brain into the brain stem, cerebellum, and cerebral hemispheres provide a natural division?
  • How has the study of split-brain patients been informative?
  • What is behind the expression “use your gray matter,” and why is it not entirely accurate?
  • Why is converging evidence the best kind of evidence in the study of brain function?
  • If you were interested in whether a particular brain area was involved in a specific behavior, what neuroscience methods could you use?
  • If you were interested in the precise time in which a particular brain process occurred, which neuroscience methods could you use?
References

  • Beck, D. M., & Kastner, S. (2009). Top-down and bottom-up mechanisms in biasing competition in the human brain. Vision Research, 49, 1154–1165.
  • Brasil-Neto, J. P. (2012). Learning, memory, and transcranial direct current stimulation. Frontiers in Psychiatry, 3(80). doi: 10.3389/fpsyt.2012.00080
  • Feng, W. W., Bowden, M. G., & Kautz, S. (2013). Review of transcranial direct current stimulation in poststroke recovery. Topics in Stroke Rehabilitation, 20, 68–77.
  • Franz, E. A., Eliassen, J. C., Ivry, R. B., & Gazzaniga, M. S. (1996). Dissociation of spatial and temporal coupling in the bimanual movements of callosotomy patients. Psychological Science, 7, 306–310.
  • Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (Eds.) (2000). Principles of neural science (4th ed.). New York, NY: McGraw-Hill.
  • Kuo, M. F., & Nitsche, M. A. (2012). Effects of transcranial electrical stimulation on cognition. Clinical EEG and Neuroscience, 43, 192–199.
  • Luck, S. J., Hillyard, S. A., Mangun, G. R., & Gazzaniga, M. S. (1989). Independent hemispheric attentional systems mediate visual search in split-brain patients. Nature, 342, 543–545.
  • Swanson, L. (2000). What is the brain? Trends in Neurosciences, 23, 519–527.



14.3 The Brain and Spinal Cord

Learning Objectives

By the end of this section, you will be able to:

  • Name the major regions of the adult brain
  • Describe the connections between the cerebrum and brain stem through the diencephalon, and from those regions into the spinal cord
  • Recognize the complex connections within the subcortical structures of the basal nuclei
  • Explain the arrangement of gray and white matter in the spinal cord

The brain and the spinal cord are the central nervous system, and they represent the main organs of the nervous system. The spinal cord is a single structure, whereas the adult brain is described in terms of four major regions: the cerebrum, the diencephalon, the brain stem, and the cerebellum. A person’s conscious experiences are based on neural activity in the brain. The regulation of homeostasis is governed by a specialized region in the brain. The coordination of reflexes depends on the integration of sensory and motor pathways in the spinal cord.

The Cerebrum

The iconic gray mantle of the human brain, which appears to make up most of the mass of the brain, is the cerebrum (Figure 14.3.1). The wrinkled portion is the cerebral cortex, and the rest of the structure is beneath that outer covering. There is a large separation between the two sides of the cerebrum called the longitudinal fissure. It separates the cerebrum into two distinct halves, a right and a left cerebral hemisphere. Deep within the cerebrum, the white matter of the corpus callosum provides the major pathway for communication between the two hemispheres of the cerebral cortex.

Figure 14.3.1. Lateral view (left panel) and anterior view (right panel) of the brain, with the major parts, including the cerebrum, labeled.

Many of the higher neurological functions, such as memory, emotion, and consciousness, are the result of cerebral function. The complexity of the cerebrum is different across vertebrate species. The cerebrum of the most primitive vertebrates is not much more than the connection for the sense of smell. In mammals, the cerebrum comprises the outer gray matter that is the cortex (from the Latin word meaning “bark of a tree”) and several deep nuclei that belong to three important functional groups. The basal nuclei are responsible for cognitive processing, the most important function being that associated with planning movements. The basal forebrain contains nuclei that are important in learning and memory. The limbic cortex is the region of the cerebral cortex that is part of the limbic system, a collection of structures involved in emotion, memory, and behavior.

Cerebral Cortex

The cerebrum is covered by a continuous layer of gray matter that wraps around either side of the forebrain—the cerebral cortex. This thin, extensive region of wrinkled gray matter is responsible for the higher functions of the nervous system. A gyrus (plural = gyri) is the ridge of one of those wrinkles, and a sulcus (plural = sulci) is the groove between two gyri. The pattern of these folds of tissue indicates specific regions of the cerebral cortex.

The head is limited by the size of the birth canal, and the brain must fit inside the cranial cavity of the skull. Extensive folding in the cerebral cortex enables more gray matter to fit into this limited space. If the gray matter of the cortex were peeled off of the cerebrum and laid out flat, its surface area would be roughly equal to one square meter.

The folding of the cortex maximizes the amount of gray matter in the cranial cavity. During embryonic development, as the telencephalon expands within the skull, the brain goes through a regular course of growth that results in everyone’s brain having a similar pattern of folds. The surface of the brain can be mapped on the basis of the locations of large gyri and sulci. Using these landmarks, the cortex can be separated into four major regions, or lobes (Figure 14.3.2). The lateral sulcus that separates the temporal lobe from the other regions is one such landmark. Superior to the lateral sulcus are the parietal lobe and frontal lobe, which are separated from each other by the central sulcus. The posterior region of the cortex is the occipital lobe, which has no obvious anatomical border between it and the parietal or temporal lobes on the lateral surface of the brain. From the medial surface, an obvious landmark separating the parietal and occipital lobes is called the parieto-occipital sulcus. The fact that there is no obvious anatomical border between these lobes is consistent with the functions of these regions being interrelated.

Figure 14.3.2. Lateral view of the brain, with the major lobes labeled.

Different regions of the cerebral cortex can be associated with particular functions, a concept known as localization of function. In the early 1900s, a German neuroscientist named Korbinian Brodmann performed an extensive study of the microscopic anatomy—the cytoarchitecture—of the cerebral cortex and divided the cortex into 52 separate regions on the basis of the histology of the cortex. His work resulted in a system of classification known as Brodmann’s areas, which is still used today to describe the anatomical distinctions within the cortex (Figure 14.3.3). The results from Brodmann’s work on the anatomy align very well with the functional differences within the cortex. Areas 17 and 18 in the occipital lobe are responsible for primary visual perception. That visual information is complex, so it is processed in the temporal and parietal lobes as well.

The temporal lobe is associated with primary auditory sensation, known as Brodmann’s areas 41 and 42 in the superior temporal lobe. Because regions of the temporal lobe are part of the limbic system, memory is an important function associated with that lobe. Memory is essentially a sensory function; memories are recalled sensations such as the smell of Mom’s baking or the sound of a barking dog. Even memories of movement are really the memory of sensory feedback from those movements, such as stretching muscles or the movement of the skin around a joint. Structures in the temporal lobe are responsible for establishing long-term memory, but the ultimate location of those memories is usually in the region in which the sensory perception was processed.

The main sensation associated with the parietal lobe is somatosensation, meaning the general sensations associated with the body. Posterior to the central sulcus is the postcentral gyrus, the primary somatosensory cortex, which is identified as Brodmann’s areas 1, 2, and 3. All of the tactile senses are processed in this area, including touch, pressure, tickle, pain, itch, and vibration, as well as more general senses of the body such as proprioception and kinesthesia, which are the senses of body position and movement, respectively.

Anterior to the central sulcus is the frontal lobe, which is primarily associated with motor functions. The precentral gyrus is the primary motor cortex. Cells from this region of the cerebral cortex are the upper motor neurons that instruct cells in the spinal cord and brain stem (lower motor neurons) to move skeletal muscles. Anterior to this region are a few areas that are associated with planned movements. The premotor area is responsible for storing learned movement algorithms, which are instructions for complex movements. Different algorithms activate the upper motor neurons in the correct sequence when a complex motor activity is performed. The frontal eye fields are important in eliciting scanning eye movements and in attending to visual stimuli. Broca’s area is responsible for the production of language, or controlling movements responsible for speech; in the vast majority of people, it is located only on the left side. Anterior to these regions is the prefrontal lobe, which serves cognitive functions that can be the basis of personality, short-term memory, and consciousness. The prefrontal lobotomy is an outdated mode of treatment for personality disorders (psychiatric conditions) that profoundly affected the personality of the patient.

Figure 14.3.3. Brodmann areas, identifying the functional regions of the brain, mapped onto the lateral surface (left panel) and the medial surface (right panel).

Area 17, as Brodmann described it, is also known as the primary visual cortex. Adjacent to that are areas 18 and 19, which constitute subsequent regions of visual processing. Area 22 is the primary auditory cortex, and it is followed by area 23, which further processes auditory information. Area 4 is the primary motor cortex in the precentral gyrus, whereas area 6 is the premotor cortex. These areas suggest some specialization within the cortex for functional processing, both in sensory and motor regions. The fact that Brodmann’s areas correlate so closely to functional localization in the cerebral cortex demonstrates the strong link between structure and function in these regions.

Areas 1, 2, 3, 4, 17, and 22 are each described as primary cortical areas. The adjoining regions are each referred to as association areas. Primary areas are where sensory information is initially received from the thalamus for conscious perception, or—in the case of the primary motor cortex—where descending commands are sent down to the brain stem or spinal cord to execute movements (Figure 14.3.4).
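As a compact reference, the area-to-function assignments named in the last few paragraphs can be collected into a small mapping. The sketch below only restates what this section describes; it is not a complete catalog of Brodmann's 52 areas.

```python
# Brodmann areas mentioned in this section, mapped to the functions the text assigns them.
BRODMANN_AREAS = {
    (1, 2, 3): "primary somatosensory cortex (postcentral gyrus)",
    (4,):      "primary motor cortex (precentral gyrus)",
    (6,):      "premotor cortex",
    (17,):     "primary visual cortex",
    (18, 19):  "subsequent visual processing",
    (22,):     "auditory cortex (described above as a primary area)",
    (23,):     "further auditory processing",
    (41, 42):  "primary auditory sensation (superior temporal lobe)",
}

for areas, function in BRODMANN_AREAS.items():
    label = ", ".join(str(a) for a in areas)
    print(f"Area(s) {label}: {function}")
```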

Functions of the Cerebral Cortex

The cerebrum is the seat of many of the higher mental functions, such as memory and learning, language, and conscious perception, which are the subjects of subtests of the mental status exam. The cerebral cortex is the thin layer of gray matter on the outside of the cerebrum. It is approximately a millimeter thick in most regions and highly folded to fit within the limited space of the cranial vault. These higher functions are distributed across various regions of the cortex, and specific locations can be said to be responsible for particular functions. There is a limited set of regions, for example, that are involved in language function, and they can be subdivided on the basis of the particular part of language function that each governs.

Figure 14.3.4. Brain regions shown in different colors, with callouts describing the function of each region.

A number of other regions, which extend beyond these primary or association areas of the cortex, are referred to as integrative areas. These areas are found in the spaces between the domains for particular sensory or motor functions, and they integrate multisensory information or process sensory or motor information in more complex ways. Consider, for example, the posterior parietal cortex, which lies between the somatosensory cortex and visual cortex regions. This region has been ascribed the coordination of visual and motor functions, such as reaching to pick up a glass. The somatosensory function that would be part of this is the proprioceptive feedback from moving the arm and hand. The weight of the glass, based on what it contains, will influence how those movements are executed.

Cognitive Abilities

Assessment of cerebral functions is directed at cognitive abilities. The abilities assessed through the mental status exam can be separated into four groups: orientation and memory, language and speech, sensorium, and judgment and abstract reasoning.

Orientation and Memory

Orientation is the patient’s awareness of his or her immediate circumstances. It is awareness of time, not in terms of the clock, but of the date and what is occurring around the patient. It is awareness of place, such that a patient should know where he or she is and why. It is also awareness of who the patient is—recognizing personal identity and being able to relate that to the examiner. The initial tests of orientation are based on the questions, “Do you know what the date is?” or “Do you know where you are?” or “What is your name?” Further understanding of a patient’s awareness of orientation can come from questions that address remote memory, such as “Who is the President of the United States?”, or asking what happened on a specific date.

There are also specific tasks to address memory. One is the three-word recall test. The patient is given three words to recall, such as book, clock, and shovel. After a short interval, during which other parts of the interview continue, the patient is asked to recall the three words. Other tasks that assess memory—aside from those related to orientation—have the patient recite the months of the year in reverse order (to avoid the overlearned sequence and focus on actively recalling the months), spell common words backwards, or repeat back a list of numbers.
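Two of the memory subtests just described, reciting the months in reverse order and spelling a common word backwards, are simple enough to mock up in a few lines. The sketch below only generates the expected responses; the word "world" is an arbitrary example, not one prescribed by this chapter.

```python
# Minimal mock-up of two memory subtests described above: months of the year in
# reverse order, and spelling a common word backwards.

MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def months_in_reverse():
    """Expected response for the reverse-months task."""
    return list(reversed(MONTHS))

def spell_backwards(word: str) -> str:
    """Expected response for the backwards-spelling task."""
    return word[::-1].upper()

print(months_in_reverse()[:3])   # ['December', 'November', 'October']
print(spell_backwards("world"))  # 'DLROW'
```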

Memory is largely a function of the temporal lobe, along with structures beneath the cerebral cortex such as the hippocampus and the amygdala. The storage of memory requires these structures of the medial temporal lobe. A famous case of a man who had both medial temporal lobes removed to treat intractable epilepsy provided insight into the relationship between the structures of the brain and the function of memory.

Henry Molaison, who was referred to as patient HM when he was alive, had epilepsy localized to both of his medial temporal lobes. In 1953, a bilateral lobectomy was performed that alleviated the epilepsy but left HM unable to form new memories—a condition called anterograde amnesia. HM was able to recall most events from before his surgery, although there was a partial loss of earlier memories, which is referred to as retrograde amnesia. HM became the subject of extensive studies into how memory works. What he was unable to do was form new memories of what happened to him, a type of memory now called episodic memory. Episodic memory is autobiographical in nature, such as remembering riding a bicycle around the neighborhood as a child, as opposed to the procedural memory of how to ride a bike. HM also retained his short-term memory, such as what is tested by the three-word task described above. After a brief period, those memories would dissipate or decay and would not be stored in the long term, because the medial temporal lobe structures had been removed.

The difference in short-term, procedural, and episodic memory, as evidenced by patient HM, suggests that there are different parts of the brain responsible for those functions. The long-term storage of episodic memory requires the hippocampus and related medial temporal structures, and the location of those memories is in the multimodal integration areas of the cerebral cortex. However, short-term memory—also called working or active memory—is localized to the prefrontal lobe. Because patient HM had only lost his medial temporal lobe—and lost very little of his previous memories, and did not lose the ability to form new short-term memories—it was concluded that the function of the hippocampus, and of adjacent structures in the medial temporal lobe, is to move (or consolidate) short-term memories (in the prefrontal lobe) to long-term memory (in the temporal lobe).

The prefrontal cortex can also be tested for the ability to organize information. In one subtest of the mental status exam called set generation, the patient is asked to generate a list of words that all start with the same letter, but not to include proper nouns or names. The expectation is that a person can generate such a list of at least 10 words within 1 minute. Many people can likely do this much more quickly, but the standard separates the accepted normal from those with compromised prefrontal cortices.
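The set-generation criterion just described (at least 10 words beginning with the same letter, produced within one minute, excluding proper nouns and names) is simple enough to express as a small check. This is a sketch only: screening out proper nouns is reduced here to a capitalization heuristic, which is an illustrative simplification rather than how an examiner would score the task.

```python
# Minimal check of the set-generation criterion described above: at least 10 words
# starting with the given letter within one minute. Proper-noun screening is
# approximated by rejecting capitalized words, which is only a rough heuristic.

def passes_set_generation(words, letter, seconds_taken):
    same_letter = [w for w in words if w.lower().startswith(letter.lower())]
    not_capitalized = [w for w in same_letter if w == w.lower()]
    return len(not_capitalized) >= 10 and seconds_taken <= 60

responses = ["fish", "fog", "fall", "farm", "fast", "fork", "fire", "fence", "flag", "fuzz"]
print(passes_set_generation(responses, "f", 45))   # True: 10 valid words in under a minute
```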

External Website


Read this article to learn about a young man who texts his fiancée in a panic as he finds that he is having trouble remembering things. At the hospital, a neurologist administers the mental status exam, which is mostly normal except for the three-word recall test. The young man could not recall them even 30 seconds after hearing them and repeating them back to the doctor. An undiscovered mass in the mediastinum region was found to be Hodgkin’s lymphoma, a type of cancer that affects the immune system and likely caused antibodies to attack the nervous system. The patient eventually regained his ability to remember, though the events in the hospital were always elusive. Considering that the effects on memory were temporary, but resulted in the loss of the specific events of the hospital stay, what regions of the brain were likely to have been affected by the antibodies and what type of memory does that represent?

Language and Speech

Language is, arguably, a very human aspect of neurological function. There are certainly strides being made in understanding communication in other species, but much of what makes the human experience seemingly unique is its basis in language. Any understanding of our species is necessarily reflective, as suggested by the question “What am I?” And the fundamental answer to this question is suggested by the famous quote by René Descartes: “Cogito Ergo Sum” (translated from Latin as “I think, therefore I am”). Formulating an understanding of yourself is largely describing who you are to yourself. It is a confusing topic to delve into, but language is certainly at the core of what it means to be self-aware.

The neurological exam has two specific subtests that address language. One measures the ability of the patient to understand language by asking them to follow a set of instructions to perform an action, such as “touch your right finger to your left elbow and then to your right knee.” Another subtest assesses the fluency and coherency of language by having the patient generate descriptions of objects or scenes depicted in drawings, and by reciting sentences or explaining a written passage. Language, however, is important in so many ways in the neurological exam. The patient needs to know what to do, whether it is as simple as explaining how the knee-jerk reflex is going to be performed, or asking a question such as “What is your name?” Often, language deficits can be determined without specific subtests; if a person cannot reply to a question properly, there may be a problem with the reception of language.

An important example of multimodal integrative areas is associated with language function (Figure 14.3.5). Adjacent to the auditory association cortex, at the end of the lateral sulcus just anterior to the visual cortex, is Wernicke’s area. In the lateral aspect of the frontal lobe, just anterior to the region of the motor cortex associated with the head and neck, is Broca’s area. Both regions were originally described on the basis of losses of speech and language, which is called aphasia. The aphasia associated with Broca’s area is known as an expressive aphasia, which means that speech production is compromised. This type of aphasia is often described as non-fluency because the ability to say some words leads to broken or halting speech. Grammar can also appear to be lost. The aphasia associated with Wernicke’s area is known as a receptive aphasia, which is not a loss of speech production, but a loss of understanding of content. Patients, after recovering from acute forms of this aphasia, report not being able to understand what is said to them or what they are saying themselves, but they often cannot keep from talking.

The two regions are connected by white matter tracts that run between the posterior temporal lobe and the lateral aspect of the frontal lobe. Conduction aphasia associated with damage to this connection refers to the problem of connecting the understanding of language to the production of speech. This is a very rare condition, but is likely to present as an inability to faithfully repeat spoken language.

Figure 14.3.5: The locations of Broca’s and Wernicke’s areas in the brain.

Those parts of the brain involved in the reception and interpretation of sensory stimuli are referred to collectively as the sensorium. The cerebral cortex has several regions that are necessary for sensory perception. From the primary cortical areas of the somatosensory, visual, auditory, and gustatory senses to the association areas that process information in these modalities, the cerebral cortex is the seat of conscious sensory perception. In contrast, sensory information can also be processed by deeper brain regions, which we may vaguely describe as subconscious—for instance, we are not constantly aware of the proprioceptive information that the cerebellum uses to maintain balance. Several of the subtests can reveal activity associated with these sensory modalities, such as being able to hear a question or see a picture. Two subtests assess specific functions of these cortical areas.

The first is praxis, a practical exercise in which the patient performs a task completely on the basis of verbal description without any demonstration from the examiner. For example, the patient can be told to take their left hand and place it palm down on their left thigh, then flip it over so the palm is facing up, and then repeat this four times. The examiner describes the activity without any movements on their part to suggest how the movements are to be performed. The patient needs to understand the instructions, transform them into movements, and use sensory feedback, both visual and proprioceptive, to perform the movements correctly.

The second subtest for sensory perception is gnosis, which involves two tasks. The first task, known as stereognosis, involves the naming of objects strictly on the basis of the somatosensory information that comes from manipulating them. The patient keeps their eyes closed and is given a common object, such as a coin, that they have to identify. The patient should be able to indicate the particular type of coin, such as a dime versus a penny, or a nickel versus a quarter, on the basis of the sensory cues involved. For example, the size, thickness, or weight of the coin may be an indication; to differentiate the pairs of coins suggested here, the smooth or corrugated edge of the coin will correspond to the particular denomination. The second task, graphesthesia, is to recognize numbers or letters written on the palm of the hand with a dull pointer, such as a pen cap.

Praxis and gnosis are related to the conscious perception and cortical processing of sensory information: being able to transform verbal commands into a sequence of motor responses, or to manipulate and recognize a common object and associate it with a name for that object. Both subtests have language components because language function is integral to these tasks. The relationship between the words that describe actions, or the nouns that represent objects, and the cerebral location of these concepts is suggested to be localized to particular cortical areas. Certain aphasias can be characterized by a deficit of verbs or nouns, known as V impairment or N impairment, or may be classified as V–N dissociation. Patients have difficulty using one type of word over the other. To describe what is happening in a photograph as part of the expressive language subtest, a patient will use active- or image-based language. The lack of one or the other of these components of language can relate to the ability to use verbs or nouns. Damage to the region at which the frontal and temporal lobes meet, including the region known as the insula, is associated with V impairment; damage to the middle and inferior temporal lobe is associated with N impairment.

Judgment and Abstract Reasoning

Planning and producing responses requires an ability to make sense of the world around us. Making judgments and reasoning in the abstract are necessary to produce movements as part of larger responses. For example, when your alarm goes off, do you hit the snooze button or jump out of bed? Is 10 extra minutes in bed worth the extra rush to get ready for your day? Will hitting the snooze button multiple times lead to feeling more rested or result in a panic as you run late? How you mentally process these questions can affect your whole day.

The prefrontal cortex is responsible for planning and making decisions. In the mental status exam, the subtest that assesses judgment and reasoning is directed at three aspects of frontal lobe function. First, the examiner asks questions about problem solving, such as “If you see a house on fire, what would you do?” The patient is also asked to interpret common proverbs, such as “Don’t look a gift horse in the mouth.” Additionally, pairs of words are compared for similarities, such as apple and orange, or lamp and cabinet.

The prefrontal cortex is composed of the regions of the frontal lobe that are not directly related to specific motor functions. The most posterior region of the frontal lobe, the precentral gyrus, is the primary motor cortex. Anterior to that are the premotor cortex, Broca’s area, and the frontal eye fields, which are all related to planning certain types of movements. Anterior to what could be described as motor association areas are the regions of the prefrontal cortex. They are the regions in which judgment, abstract reasoning, and working memory are localized. The antecedents to planning certain movements are judging whether those movements should be made, as in the example of deciding whether to hit the snooze button.

To an extent, the prefrontal cortex may be related to personality. The neurological exam does not necessarily assess personality, but personality change can fall within the realm of neurology or psychiatry. A clinical situation that suggests this link between the prefrontal cortex and personality comes from the story of Phineas Gage, the railroad worker from the mid-1800s who had an iron rod driven through his prefrontal cortex. There are suggestions that the injury led to changes in his personality: a man who was a quiet, dependable railroad worker became a raucous, irritable drunkard. Later anecdotal evidence from his life suggests that he was able to support himself, although he had to relocate and take on a different career as a stagecoach driver.

The prefrontal lobotomy, a psychiatric procedure once used to treat various disorders, severed the connections between the prefrontal cortex and other regions of the brain. It was common in the 1940s and early 1950s, until antipsychotic drugs became available. The disorders treated this way included some aspects of what are now referred to as personality disorders, as well as mood disorders and psychoses. Depictions of lobotomies in popular media suggest a link between cutting the white matter of the prefrontal cortex and changes in a patient’s mood and personality, though this correlation is not well understood.

Everyday Connections – Left Brain, Right Brain

Popular media often refer to right-brained and left-brained people, as if the brain were two independent halves that work differently for different people. This is a popular misinterpretation of an important neurological phenomenon. As an extreme measure to deal with a debilitating condition, the corpus callosum may be sectioned to overcome intractable epilepsy. When the connections between the two cerebral hemispheres are cut, interesting effects can be observed.

If a person with an intact corpus callosum is asked to put their hands in their pockets and describe what is there on the basis of what their hands feel, they might say that they have keys in their right pocket and loose change in the left. They may even be able to count the coins in their pocket and say if they can afford to buy a candy bar from the vending machine. If a person with a sectioned corpus callosum is given the same instructions, they will do something quite peculiar. They will only put their right hand in their pocket and say they have keys there. They will not even move their left hand, much less report that there is loose change in the left pocket.

The reason for this is that the language functions of the cerebral cortex are localized to the left hemisphere in 95 percent of the population. Additionally, the left hemisphere is connected to the right side of the body through the corticospinal tract and the ascending tracts of the spinal cord. Motor commands from the precentral gyrus control the opposite side of the body, whereas sensory information processed by the postcentral gyrus is received from the opposite side of the body. Language is processed in the left side of the brain, so it directly influences the left brain and right arm motor functions, but must be sent across the corpus callosum to influence the right brain and left arm motor functions. Likewise, sensory information from the left hand about what is in the left pocket is processed by the right hemisphere and must cross the corpus callosum to reach the language areas of the left hemisphere; when the corpus callosum is sectioned, that route is gone, so no verbal report on those contents is possible.
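The pocket example boils down to two facts: each hand reports to the opposite hemisphere, and the verbal report has to come from the left hemisphere (in roughly 95 percent of people). The short Python sketch below encodes just that logic; it is an illustration only, not a clinical model, and the function name is my own.

def can_verbally_report(hand, corpus_callosum_intact):
    """Can the person say out loud what this hand is feeling?"""
    processing_hemisphere = "left" if hand == "right" else "right"
    if processing_hemisphere == "left":
        return True                # the sensation already arrives where language lives
    return corpus_callosum_intact  # otherwise it must cross the corpus callosum

print(can_verbally_report("right", corpus_callosum_intact=False))  # True: the keys get reported
print(can_verbally_report("left", corpus_callosum_intact=False))   # False: the loose change goes unmentioned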


Watch the video titled “The Man With Two Brains” to see the neuroscientist Michael Gazzaniga introduce a patient he has worked with for years who has had his corpus callosum cut, separating his two cerebral hemispheres. A few tests are run to demonstrate how this manifests in tests of cerebral function. Unlike normal people, this patient can perform two independent tasks at the same time because the lines of communication between the right and left sides of his brain have been removed. Whereas a person with an intact corpus callosum cannot overcome the dominance of one hemisphere over the other, this patient can. If the left cerebral hemisphere is dominant in the majority of people, why would right-handedness be most common?

The Mental Status Exam

The cerebrum, particularly the cerebral cortex, is the location of important cognitive functions that are the focus of the mental status exam. The regionalization of the cortex, initially described on the basis of anatomical evidence of cytoarchitecture, reveals the distribution of functionally distinct areas. Cortical regions can be described as primary sensory or motor areas, association areas, or multimodal integration areas. The functions attributed to these regions include attention, memory, language, speech, sensation, judgment, and abstract reasoning.

The mental status exam addresses these cognitive abilities through a series of subtests designed to elicit particular behaviors ascribed to these functions. The loss of neurological function can illustrate the location of damage to the cerebrum. Memory functions are attributed to the temporal lobe, particularly the medial temporal lobe structures known as the hippocampus and amygdala, along with the adjacent cortex. Evidence of the importance of these structures comes from the side effects of a bilateral temporal lobectomy that were studied in detail in patient HM.

Losses of language and speech functions, known as aphasias, are associated with damage to the important integration areas in the left hemisphere known as Broca’s or Wernicke’s areas, as well as the connections in the white matter between them. Different types of aphasia are named for the particular structures that are damaged. Assessment of the functions of the sensorium includes praxis and gnosis. The subtests related to these functions depend on multimodal integration, as well as language-dependent processing.

The prefrontal cortex contains structures important for planning, judgment, reasoning, and working memory. Damage to these areas can result in changes to personality, mood, and behavior. The famous case of Phineas Gage suggests a role for this cortex in personality, as does the outdated practice of prefrontal lobotomy.

Subcortical Structures

Beneath the cerebral cortex are sets of nuclei known as subcortical nuclei that augment cortical processes. The nuclei of the basal forebrain serve as the primary location for acetylcholine production, which modulates the overall activity of the cortex, possibly leading to greater attention to sensory stimuli. Alzheimer’s disease is associated with a loss of neurons in the basal forebrain. The hippocampus and amygdala are medial temporal lobe structures that, along with the adjacent cortex, are involved in long-term memory formation and emotional responses. The basal nuclei are a set of nuclei in the cerebrum responsible for comparing cortical processing with the general state of activity in the nervous system to influence the likelihood of movement taking place. For example, while a student is sitting in a classroom listening to a lecture, the basal nuclei will keep the urge to jump up and scream from actually happening. (The basal nuclei are also referred to as the basal ganglia, although that is potentially confusing because the term ganglia is typically used for peripheral structures.)

The major structures of the basal nuclei that control movement are the caudate, putamen, and globus pallidus, which are located deep in the cerebrum. The caudate is a long nucleus that follows the basic C-shape of the cerebrum from the frontal lobe, through the parietal and occipital lobes, into the temporal lobe. The putamen is mostly deep in the anterior regions of the frontal and parietal lobes. Together, the caudate and putamen are called the striatum. The globus pallidus is a layered nucleus that lies just medial to the putamen; together, the putamen and globus pallidus are called the lenticular nuclei because they look like curved pieces fitting together like lenses. The globus pallidus has two subdivisions, the external and internal segments, which are lateral and medial, respectively. These nuclei are depicted in a frontal section of the brain in Figure 14.3.6.

Figure 14.3.6: Frontal section of the brain showing the major components of the basal nuclei.

The basal nuclei in the cerebrum are connected with a few more nuclei in the brain stem that together act as a functional group that forms a motor pathway. Two streams of information processing take place in the basal nuclei. All input to the basal nuclei is from the cortex into the striatum (Figure 14.3.7). The direct pathway is the projection of axons from the striatum to the globus pallidus internal segment (GPi) and the substantia nigra pars reticulata (SNr). The GPi/SNr then projects to the thalamus, which projects back to the cortex. The indirect pathway is the projection of axons from the striatum to the globus pallidus external segment (GPe), then to the subthalamic nucleus (STN), and finally to GPi/SNr. The two streams both target the GPi/SNr, but one has a direct projection and the other goes through a few intervening nuclei. The direct pathway causes the disinhibition of the thalamus (one cell inhibits a second cell that normally inhibits the target, so the target is released from inhibition), whereas the indirect pathway causes, or reinforces, the normal inhibition of the thalamus. The thalamus then can either excite the cortex (as a result of the direct pathway) or fail to excite the cortex (as a result of the indirect pathway).

Figure 14.3.7: Flowchart of the connections among the cortex, striatum, and thalamus in the direct and indirect pathways.
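To make the wiring just described concrete, here is a minimal Python sketch that treats each projection as simply inhibitory or excitatory and traces what happens to the thalamus in each pathway. It is purely illustrative, not a physiological model; the function and variable names are my own labels for the structures named above (striatum, GPe, STN, GPi/SNr).

def direct_pathway(striatum_active):
    """Striatum -> GPi/SNr -> thalamus: two inhibitory links in a row."""
    if not striatum_active:
        return "GPi/SNr tonically inhibits the thalamus; cortex not excited"
    gpi_snr_inhibited = True               # striatum inhibits GPi/SNr
    thalamus_released = gpi_snr_inhibited  # GPi/SNr can no longer inhibit the thalamus
    return "thalamus disinhibited; cortex excited" if thalamus_released else "no change"

def indirect_pathway(striatum_active):
    """Striatum -> GPe -> STN -> GPi/SNr -> thalamus: the extra links
    end up reinforcing GPi/SNr inhibition of the thalamus."""
    if not striatum_active:
        return "baseline inhibition of the thalamus"
    gpe_inhibited = True            # striatum inhibits GPe
    stn_released = gpe_inhibited    # GPe can no longer inhibit STN
    gpi_snr_excited = stn_released  # STN excites GPi/SNr
    return "thalamus inhibited; cortex not excited" if gpi_snr_excited else "no change"

print(direct_pathway(True))    # thalamus disinhibited; cortex excited
print(indirect_pathway(True))  # thalamus inhibited; cortex not excited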

The switch between the two pathways is the substantia nigra pars compacta, which projects to the striatum and releases the neurotransmitter dopamine. Dopamine receptors are either excitatory (D1-type receptors) or inhibitory (D2-type receptors). The direct pathway is activated by dopamine, and the indirect pathway is inhibited by dopamine. When the substantia nigra pars compacta is firing, it signals to the basal nuclei that the body is in an active state, and movement will be more likely. When the substantia nigra pars compacta is silent, the body is in a passive state, and movement is inhibited. To illustrate this situation, while a student is sitting listening to a lecture, the substantia nigra pars compacta would be silent and the student less likely to get up and walk around. Likewise, while the professor is lecturing, and walking around at the front of the classroom, the professor’s substantia nigra pars compacta would be active, in keeping with his or her activity level.
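Continuing the sketch, the dopamine "switch" can be reduced to a single boolean. The toy function below is again only an illustration with assumed names, not a physiological model; it simply picks which pathway dominates based on whether the substantia nigra pars compacta (SNc) is firing.

def movement_likelihood(snc_firing):
    """Dopamine from the SNc activates the direct pathway (D1-type receptors)
    and inhibits the indirect pathway (D2-type receptors)."""
    direct_dominates = snc_firing
    if direct_dominates:
        return "thalamus disinhibited: movement more likely (active state)"
    return "thalamus inhibited: movement less likely (passive state)"

print(movement_likelihood(True))   # the professor walking around while lecturing
print(movement_likelihood(False))  # the student sitting and listening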


Watch this video to learn about the basal nuclei (also known as the basal ganglia), which have two pathways that process information within the cerebrum. As shown in this video, the direct pathway is the shorter pathway through the system that results in increased activity in the cerebral cortex and increased motor activity. The direct pathway is described as resulting in “disinhibition” of the thalamus. What does disinhibition mean? What are the two neurons doing individually to cause this?


Watch this video to learn about the basal nuclei (also known as the basal ganglia), which have two pathways that process information within the cerebrum. As shown in this video, the indirect pathway is the longer pathway through the system that results in decreased activity in the cerebral cortex, and therefore less motor activity. The indirect pathway has an extra couple of connections in it, including disinhibition of the subthalamic nucleus. What is the end result on the thalamus, and therefore on movement initiated by the cerebral cortex?

Everyday Connections – The Myth of Left Brain/Right Brain

There is a persistent myth that people are “right-brained” or “left-brained,” which is an oversimplification of an important concept about the cerebral hemispheres. There is some lateralization of function, in which the left side of the brain is devoted to language function and the right side is devoted to spatial and nonverbal reasoning. Although these functions are predominantly associated with those sides of the brain, neither side has a monopoly on them. Many pervasive functions, such as language, are distributed globally around the cerebrum.

Some of the support for this misconception has come from studies of split brains. A drastic way to deal with a rare and devastating neurological condition (intractable epilepsy) is to separate the two hemispheres of the brain. After sectioning the corpus callosum, a split-brained patient will have trouble producing verbal responses on the basis of sensory information processed on the right side of the cerebrum, leading to the idea that the left side is responsible for language function.

However, there are well-documented cases of language functions lost from damage to the right side of the brain. The deficits seen in damage to the left side of the brain are classified as aphasia, a loss of speech function; damage on the right side can affect the use of language in other ways. Right-side damage can result in a loss of ability to understand the figurative aspects of speech, such as jokes, irony, or metaphors. Nonverbal aspects of speech, such as facial expression or body language, can also be affected by damage to the right side, and right-side damage can lead to a “flat affect” in speech, a loss of emotional expression that makes the speaker sound like a robot when talking. Damage to these areas on the right side causes a condition called aprosodia, in which the patient has difficulty producing or interpreting the emotional tone and rhythm (prosody) of speech.

The Diencephalon

The diencephalon is the one region of the adult brain that retains its name from embryologic development. The etymology of the word diencephalon translates to “through brain.” It is the connection between the cerebrum and the rest of the nervous system, with one exception. The rest of the brain, the spinal cord, and the PNS all send information to the cerebrum through the diencephalon. Output from the cerebrum passes through the diencephalon. The single exception is the system associated with olfaction, or the sense of smell, which connects directly with the cerebrum. In the earliest vertebrate species, the cerebrum was not much more than olfactory bulbs that received peripheral information about the chemical environment (to call it smell in these organisms is imprecise because they lived in the ocean).

The diencephalon is deep beneath the cerebrum and constitutes the walls of the third ventricle. The diencephalon can be described as any region of the brain with “thalamus” in its name. The two major regions of the diencephalon are the thalamus itself and the hypothalamus (Figure 14.3.8). There are other structures, such as the epithalamus, which contains the pineal gland, or the subthalamus, which includes the subthalamic nucleus that is part of the basal nuclei.

Thalamus

The thalamus is a collection of nuclei that relay information between the cerebral cortex and the periphery, spinal cord, or brain stem. All sensory information, except for the sense of smell, passes through the thalamus before processing by the cortex. Axons from the peripheral sensory organs, or intermediate nuclei, synapse in the thalamus, and thalamic neurons project directly to the cerebrum. It is a requisite synapse in any sensory pathway, except for olfaction. The thalamus does not just pass the information on, it also processes that information. For example, the portion of the thalamus that receives visual information will influence what visual stimuli are important, or what receives attention.

The cerebrum also sends information down to the thalamus; this descending information usually concerns motor commands and involves interactions with the cerebellum and other nuclei in the brain stem. The cerebrum interacts with the basal nuclei through connections with the thalamus: the primary output of the basal nuclei is to the thalamus, which relays that output to the cerebral cortex, and the cortex also sends information to the thalamus that will then influence the effects of the basal nuclei.
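As a quick illustration of the relay rule described above (a sketch only; the function name is my own), every sensory modality except olfaction makes a required synapse in the thalamus on its way to the cortex:

def sensory_route(modality):
    """Return the path a sensory modality takes to the cerebral cortex."""
    if modality == "olfaction":
        return "olfactory input -> cerebrum (bypasses the thalamus)"
    return f"{modality} input -> thalamus (required synapse) -> cerebral cortex"

for modality in ("vision", "hearing", "touch", "olfaction"):
    print(sensory_route(modality))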

Hypothalamus

Inferior and slightly anterior to the thalamus is the hypothalamus, the other major region of the diencephalon. The hypothalamus is a collection of nuclei that are largely involved in regulating homeostasis. The hypothalamus is the executive region in charge of the autonomic nervous system and the endocrine system through its regulation of the anterior pituitary gland. Other parts of the hypothalamus are involved in memory and emotion as part of the limbic system.

Figure 14.3.8: The locations of the thalamus, hypothalamus, and pituitary gland in the brain.

The Brain Stem

The midbrain and the pons and medulla of the hindbrain are collectively referred to as the “brain stem” (Figure 14.3.9). The structure emerges from the ventral surface of the forebrain as a tapering cone that connects the brain to the spinal cord. Attached to the brain stem, but considered a separate region of the adult brain, is the cerebellum. The midbrain coordinates sensory representations of the visual, auditory, and somatosensory perceptual spaces. The pons is the main connection with the cerebellum. The pons and the medulla regulate several crucial functions, including the cardiovascular and respiratory systems.

The cranial nerves connect through the brain stem and provide the brain with the sensory input and motor output associated with the head and neck, including most of the special senses. The major ascending and descending pathways between the spinal cord and brain, specifically the cerebrum, pass through the brain stem.

Figure 14.3.9: The locations of the midbrain, pons, and medulla in the brain.

One of the original regions of the embryonic brain, the midbrain is a small region between the thalamus and pons. It is separated into the tectum and tegmentum, from the Latin words for roof and floor, respectively. The cerebral aqueduct passes through the center of the midbrain, such that these regions are the roof and floor of that canal.

The tectum is composed of four bumps known as the colliculi (singular = colliculus), which means “little hill” in Latin. The inferior colliculus is the inferior pair of these enlargements and is part of the auditory brain stem pathway. Neurons of the inferior colliculus project to the thalamus, which then sends auditory information to the cerebrum for the conscious perception of sound. The superior colliculus is the superior pair and combines sensory information about visual space, auditory space, and somatosensory space. Activity in the superior colliculus is related to orienting the eyes to a sound or touch stimulus. If you are walking along the sidewalk on campus and you hear chirping, the superior colliculus coordinates that information with your awareness of the visual location of the tree right above you. That is the correlation of auditory and visual maps. If you suddenly feel something wet fall on your head, your superior colliculus integrates that with the auditory and visual maps and you know that the chirping bird just relieved itself on you. You want to look up to see the culprit, but do not.

The tegmentum is continuous with the gray matter of the rest of the brain stem. Throughout the midbrain, pons, and medulla, the tegmentum contains the nuclei that receive and send information through the cranial nerves, as well as regions that regulate important functions such as those of the cardiovascular and respiratory systems.

The word pons comes from the Latin word for bridge. It is visible on the anterior surface of the brain stem as the thick bundle of white matter attached to the cerebellum. The pons is the main connection between the cerebellum and the brain stem. The bridge-like white matter is only the anterior surface of the pons; the gray matter beneath that is a continuation of the tegmentum from the midbrain. Gray matter in the tegmentum region of the pons contains neurons receiving descending input from the forebrain that is sent to the cerebellum.

The medulla is the region known as the myelencephalon in the embryonic brain. The initial portion of the name, “myel,” refers to the significant white matter found in this region—especially on its exterior, which is continuous with the white matter of the spinal cord. The tegmentum of the midbrain and pons continues into the medulla because this gray matter is responsible for processing cranial nerve information. A diffuse region of gray matter throughout the brain stem, known as the reticular formation, is related to sleep and wakefulness, as well as general brain activity and attention.

The Cerebellum

The cerebellum, as the name suggests, is the “little brain.” It is covered in gyri and sulci like the cerebrum, and looks like a miniature version of that part of the brain (Figure 14.3.10). The cerebellum is largely responsible for comparing information from the cerebrum with sensory feedback from the periphery through the spinal cord. It accounts for approximately 10 percent of the mass of the brain.

Figure 14.3.10: The cerebellum. A lateral view labels the cerebellum and the deep cerebellar white matter; a photograph of a brain shows the cerebellum highlighted.

Descending fibers from the cerebrum have branches that connect to neurons in the pons. Those neurons project into the cerebellum, providing a copy of motor commands sent to the spinal cord. Sensory information from the periphery, which enters through spinal or cranial nerves, is copied to a nucleus in the medulla known as the inferior olive. Fibers from this nucleus enter the cerebellum and are compared with the descending commands from the cerebrum. If the primary motor cortex of the frontal lobe sends a command down to the spinal cord to initiate walking, a copy of that instruction is sent to the cerebellum. Sensory feedback from the muscles and joints, proprioceptive information about the movements of walking, and sensations of balance are sent to the cerebellum through the inferior olive and the cerebellum compares them. If walking is not coordinated, perhaps because the ground is uneven or a strong wind is blowing, then the cerebellum sends out a corrective command to compensate for the difference between the original cortical command and the sensory feedback. The output of the cerebellum is into the midbrain, which then sends a descending input to the spinal cord to correct the messages going to skeletal muscles.
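The comparison the cerebellum performs can be pictured as a simple error-correction loop. The sketch below is only an analogy under that assumption; the gain value and the numeric "commands" are invented for illustration and do not come from the text.

def cerebellar_correction(motor_command, sensory_feedback, gain=0.5):
    """Return a corrective signal proportional to the command/feedback mismatch."""
    error = motor_command - sensory_feedback
    return gain * error

# Walking on even ground: feedback matches the command, so no correction is sent.
print(cerebellar_correction(1.0, 1.0))  # 0.0
# A strong wind perturbs the step: feedback falls short of the command.
print(cerebellar_correction(1.0, 0.7))  # 0.15, sent out through the midbrain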

The Spinal Cord

The description of the CNS is concentrated on the structures of the brain, but the spinal cord is another major organ of the system. Whereas the brain develops out of expansions of the neural tube into primary and then secondary vesicles, the spinal cord maintains the tube structure and is only specialized into certain regions. As the spinal cord continues to develop in the newborn, anatomical features mark its surface. The anterior midline is marked by the anterior median fissure, and the posterior midline is marked by the posterior median sulcus. Axons enter the posterior side through the dorsal (posterior) nerve root, which marks the posterolateral sulcus on either side. The axons emerging from the anterior side do so through the ventral (anterior) nerve root. Note that it is common to see the terms dorsal (dorsal = “back”) and ventral (ventral = “belly”) used interchangeably with posterior and anterior, particularly in reference to nerves and the structures of the spinal cord. You should learn to be comfortable with both.

On the whole, the posterior regions are responsible for sensory functions and the anterior regions are associated with motor functions. This comes from the initial development of the spinal cord, which is divided into the basal plate and the alar plate. The basal plate is closest to the ventral midline of the neural tube, which will become the anterior face of the spinal cord and gives rise to motor neurons. The alar plate is on the dorsal side of the neural tube and gives rise to neurons that will receive sensory input from the periphery.

The length of the spinal cord is divided into regions that correspond to the regions of the vertebral column. The name of a spinal cord region corresponds to the level at which spinal nerves pass through the intervertebral foramina. Immediately adjacent to the brain stem is the cervical region, followed by the thoracic, then the lumbar, and finally the sacral region. The spinal cord is not the full length of the vertebral column because the spinal cord does not grow significantly longer after the first or second year, but the skeleton continues to grow. The nerves that emerge from the spinal cord pass through the intervertebral foramina at the respective levels. As the vertebral column grows, these nerves grow with it and result in a long bundle of nerves that resembles a horse’s tail and is named the cauda equina. The sacral spinal cord is at the level of the upper lumbar vertebral bones. The spinal nerves extend from their various levels to the proper level of the vertebral column.

In cross-section, the gray matter of the spinal cord has the appearance of an ink-blot test, with the spread of the gray matter on one side replicated on the other—a shape reminiscent of a bulbous capital “H.” As shown in Figure 14.3.11, the gray matter is subdivided into regions that are referred to as horns. The posterior horn is responsible for sensory processing. The anterior horn sends out motor signals to the skeletal muscles. The lateral horn, which is only found in the thoracic, upper lumbar, and sacral regions, is the central component of the sympathetic division of the autonomic nervous system.

Some of the largest neurons of the spinal cord are the multipolar motor neurons in the anterior horn. The fibers that cause contraction of skeletal muscles are the axons of these neurons. The motor neuron that causes contraction of the big toe, for example, is located in the sacral spinal cord. The axon that has to reach all the way to the belly of that muscle may be a meter in length. The neuronal cell body that maintains that long fiber must be quite large, possibly several hundred micrometers in diameter, making it one of the largest cells in the body.

Figure 14.3.11: Cross section of the spinal cord, shown as a labeled diagram and in an ultrasound image.
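A quick back-of-the-envelope calculation shows just how extreme the proportions of the anterior horn motor neuron described above are. The 300 µm soma diameter is an assumed mid-range value for "several hundred micrometers"; the numbers are rough figures for illustration only.

axon_length_m = 1.0        # axon reaching from the sacral cord to the big toe
soma_diameter_m = 300e-6   # assumed mid-range of "several hundred micrometers"
ratio = axon_length_m / soma_diameter_m
print(f"The axon is roughly {ratio:,.0f} soma-diameters long")  # on the order of a few thousand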

White Columns

Just as the gray matter is separated into horns, the white matter of the spinal cord is separated into columns. Ascending tracts of nervous system fibers in these columns carry sensory information up to the brain, whereas descending tracts carry motor commands from the brain. Looking at the spinal cord longitudinally, the columns extend along its length as continuous bands of white matter. Between the two posterior horns of gray matter are the posterior columns. Between the two anterior horns, and bounded by the axons of motor neurons emerging from that gray matter area, are the anterior columns. The white matter on either side of the spinal cord, between the posterior horn and the axons of the anterior horn neurons, makes up the lateral columns. The posterior columns are composed of axons of ascending tracts. The anterior and lateral columns are composed of many different groups of axons of both ascending and descending tracts—the latter carrying motor commands down from the brain to the spinal cord to control output to the periphery.


Watch this video to learn about the gray matter of the spinal cord that receives input from fibers of the dorsal (posterior) root and sends information out through the fibers of the ventral (anterior) root. As discussed in this video, these connections represent the interactions of the CNS with peripheral structures for both sensory and motor functions. The cervical and lumbar spinal cords have enlargements as a result of larger populations of neurons. What are these enlargements responsible for?

Disorders of the…Basal Nuclei

Parkinson’s disease is a disorder of the basal nuclei, specifically of the substantia nigra, that demonstrates the effects of the direct and indirect pathways. Parkinson’s disease is the result of neurons in the substantia nigra pars compacta dying. These neurons release dopamine into the striatum. Without that modulatory influence, the basal nuclei are stuck in the indirect pathway, without the direct pathway being activated. The direct pathway is responsible for increasing cortical movement commands. The increased activity of the indirect pathway results in the hypokinetic disorder of Parkinson’s disease.

Parkinson’s disease is neurodegenerative, meaning that neurons die that cannot be replaced, so there is no cure for the disorder. Treatments for Parkinson’s disease are aimed at increasing dopamine levels in the striatum. Currently, the most common way of doing that is by providing the amino acid L-DOPA, which is a precursor to the neurotransmitter dopamine and can cross the blood-brain barrier. With levels of the precursor elevated, the remaining cells of the substantia nigra pars compacta can make more neurotransmitter and have a greater effect. Unfortunately, the patient will become less responsive to L-DOPA treatment as time progresses, and it can cause increased dopamine levels elsewhere in the brain, which are associated with psychosis or schizophrenia.


Visit this site for a thorough explanation of Parkinson’s disease.


Compared with the nearest evolutionary relative, the chimpanzee, the human has a brain that is huge. At a point in the past, a common ancestor gave rise to the two species of humans and chimpanzees. That evolutionary history is long and is still an area of intense study. But something happened to increase the size of the human brain relative to the chimpanzee. Read this article in which the author explores the current understanding of why this happened.

According to one hypothesis about the expansion of brain size, what tissue might have been sacrificed so energy was available to grow our larger brain? Based on what you know about that tissue and nervous tissue, why would there be a trade-off between them in terms of energy use?

Everyday Connection – How Much of Your Brain Do You Use?

Have you ever heard the claim that humans only use 10 percent of their brains? Maybe you have seen an advertisement on a website saying that there is a secret to unlocking the full potential of your mind—as if there were 90 percent of your brain sitting idle, just waiting for you to use it. If you see an ad like that, don’t click. It isn’t true.

An easy way to see how much of the brain a person uses is to measure brain activity while the person performs a task. An example of this kind of measurement is functional magnetic resonance imaging (fMRI), which generates a map of the most active areas that can be presented in three dimensions (Figure 14.3.12). This procedure is different from the standard MRI technique because it measures changes in the tissue in time with an experimental condition or event.

Figure 14.3.12: An fMRI cross section of the brain, with a large active area toward the anterior of the brain and two smaller active areas near the center, one in each hemisphere.

The underlying assumption is that active nervous tissue will have greater blood flow. By having the subject perform a visual task, activity all over the brain can be measured. Consider this possible experiment: the subject is told to look at a screen with a black dot in the middle (a fixation point). A photograph of a face is projected on the screen away from the center. The subject has to look at the photograph and decipher what it is. The subject has been instructed to push a button if the photograph is of someone they recognize. The photograph might be of a celebrity, so the subject would press the button, or it might be of a random person unknown to the subject, so the subject would not press the button.

In this task, visual sensory areas would be active, integrating areas would be active, motor areas responsible for moving the eyes would be active, and motor areas for pressing the button with a finger would be active. Those areas are distributed all around the brain and the fMRI images would show activity in more than just 10 percent of the brain (some evidence suggests that about 80 percent of the brain is using energy—based on blood flow to the tissue—during well-defined tasks similar to the one suggested above). This task does not even include all of the functions the brain performs. There is no language response, the body is mostly lying still in the MRI machine, and it does not consider the autonomic functions that would be ongoing in the background.
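To see why the "10 percent" claim fails, one can count, voxel by voxel, how much of an fMRI activation map crosses a threshold. The sketch below uses randomly generated numbers and an arbitrary 2 percent signal-change threshold purely to illustrate the bookkeeping; it is not real imaging data or a standard analysis pipeline.

import numpy as np

rng = np.random.default_rng(0)
# Fake "percent BOLD signal change" values for a 64 x 64 x 32 volume.
signal_change = rng.uniform(0.0, 0.05, size=(64, 64, 32))
brain_mask = np.ones_like(signal_change, dtype=bool)  # pretend every voxel is brain tissue

active = signal_change > 0.02   # arbitrary activation threshold (an assumption)
fraction_active = active[brain_mask].mean()
print(f"{fraction_active:.0%} of brain voxels exceed the threshold")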

Chapter Review

Considering the anatomical regions of the nervous system, there are specific names for the structures within each division. A localized collection of neuron cell bodies is referred to as a nucleus in the CNS and as a ganglion in the PNS. A bundle of axons is referred to as a tract in the CNS and as a nerve in the PNS. Whereas nuclei and ganglia are specifically in the central or peripheral divisions, axons can cross the boundary between the two. A single axon can be part of a nerve and a tract. The name for that specific structure depends on its location.

Nervous tissue can also be described as gray matter and white matter on the basis of its appearance in unstained tissue. These descriptions are more often used in the CNS. Gray matter is where nuclei are found and white matter is where tracts are found. In the PNS, ganglia are basically gray matter and nerves are white matter.

The adult brain is separated into four major regions: the cerebrum, the diencephalon, the brain stem, and the cerebellum. The cerebrum is the largest portion and contains the cerebral cortex and subcortical nuclei. It is divided into two halves by the longitudinal fissure.

The cortex is separated into the frontal, parietal, temporal, and occipital lobes. The frontal lobe is responsible for motor functions, from planning movements through executing commands to be sent to the spinal cord and periphery. The most anterior portion of the frontal lobe is the prefrontal cortex, which is associated with aspects of personality through its influence on motor responses in decision-making.

The other lobes are responsible for sensory functions. The parietal lobe is where somatosensation is processed. The occipital lobe is where visual processing begins, although the other parts of the brain can contribute to visual function. The temporal lobe contains the cortical area for auditory processing, but also has regions crucial for memory formation.

Nuclei beneath the cerebral cortex, known as the subcortical nuclei, are responsible for augmenting cortical functions. The basal nuclei receive input from cortical areas and compare it with the general state of the individual through the activity of a dopamine-releasing nucleus. The output influences the activity of part of the thalamus, which can then increase or decrease cortical activity, often resulting in changes to motor commands. The basal forebrain is responsible for modulating cortical activity in attention and memory. The limbic system includes deep cerebral nuclei that are responsible for emotion and memory.

The diencephalon includes the thalamus and the hypothalamus, along with some other structures. The thalamus is a relay between the cerebrum and the rest of the nervous system. The hypothalamus coordinates homeostatic functions through the autonomic and endocrine systems.

The brain stem is composed of the midbrain, pons, and medulla. It controls the head and neck region of the body through the cranial nerves. There are control centers in the brain stem that regulate the cardiovascular and respiratory systems.

The cerebellum is connected to the brain stem, primarily at the pons, where it receives a copy of the descending input from the cerebrum to the spinal cord. It can compare this with sensory feedback input through the medulla and send output through the midbrain that can correct motor commands for coordination.

Interactive Link Questions

Both cells are inhibitory. The first cell inhibits the second one. Therefore, the second cell can no longer inhibit its target. This is disinhibition of that target across two synapses.

By disinhibiting the subthalamic nucleus, the indirect pathway increases excitation of the globus pallidus internal segment. That, in turn, inhibits the thalamus, which is the opposite effect of the direct pathway that disinhibits the thalamus.

Watch this video to learn about the gray matter of the spinal cord that receives input from fibers of the dorsal (posterior) root and sends information out through the fibers of the ventral (anterior) root. As discussed in this video, these connections represent the interactions of the CNS with peripheral structures for both sensory and motor functions. The cervical and lumbar spinal cords have enlargements as a result of larger populations of neurons. What are these enlargements responsible for?

There are more motor neurons in the anterior horns that are responsible for movement in the limbs. The cervical enlargement is for the arms, and the lumbar enlargement is for the legs.

Energy is needed for the brain to develop and perform higher cognitive functions. That energy is not available for the muscle tissues to develop and function. The hypothesis suggests that humans have larger brains and less muscle mass, whereas chimpanzees have smaller brains but more muscle mass.

In 2003, the Nobel Prize in Physiology or Medicine was awarded to Paul C. Lauterbur and Sir Peter Mansfield for discoveries related to magnetic resonance imaging (MRI). This is a tool to see the structures of the body (not just the nervous system) that depends on magnetic fields associated with certain atomic nuclei. The utility of this technique in the nervous system is that fat tissue and water appear as different shades between black and white. Because white matter is fatty (from myelin) and gray matter is not, they can be easily distinguished in MRI images. Visit the Nobel Prize website to play an interactive game that demonstrates the use of this technology and compares it with other types of imaging technologies. Also, the results from an MRI session are compared with images obtained from x-ray or computed tomography. How do the imaging techniques shown in this game indicate the separation of white and gray matter compared with the freshly dissected tissue shown earlier?

MRI uses the relative amount of water in tissue to distinguish different areas, so gray and white matter in the nervous system can be seen clearly in these images.

Visit this site to read about a woman who notices that her daughter is having trouble walking up the stairs. This leads to the discovery of a hereditary condition that affects the brain and spinal cord. The electromyography and MRI tests indicated deficiencies in the spinal cord and cerebellum, both of which are responsible for controlling coordinated movements. To what functional division of the nervous system would these structures belong?

They are part of the somatic nervous system, which is responsible for voluntary movements such as walking or climbing the stairs.


Looking at nervous tissue, there are regions that predominantly contain cell bodies and regions that are largely composed of just axons. These two regions within nervous system structures are often referred to as gray matter (the regions with many cell bodies and dendrites) or white matter (the regions with many axons). Figure 14.3.13 demonstrates the appearance of these regions in the brain and spinal cord. The colors ascribed to these regions are what would be seen in “fresh,” or unstained, nervous tissue. Gray matter is not necessarily gray. It can be pinkish because of blood content, or even slightly tan, depending on how long the tissue has been preserved. But white matter is white because axons are insulated by a lipid-rich substance called myelin. Lipids can appear as white (“fatty”) material, much like the fat on a raw piece of chicken or beef. Actually, gray matter may have that color ascribed to it because next to the white matter, it is just darker—hence, gray.

The distinction between gray matter and white matter is most often applied to central nervous tissue, which has large regions that can be seen with the unaided eye. When looking at peripheral structures, often a microscope is used and the tissue is stained with artificial colors. That is not to say that central nervous tissue cannot be stained and viewed under a microscope, but unstained tissue is most likely from the CNS—for example, a frontal section of the brain or cross section of the spinal cord.

Figure 14.3.13: Dorsal view of a human brain with part of the occipital lobe cut away to show the branching white matter surrounded by gray matter beneath the surface blood vessels.

Regardless of the appearance of stained or unstained tissue, the cell bodies of neurons or axons can be located in discrete anatomical structures that need to be named. Those names are specific to whether the structure is central or peripheral. A localized collection of neuron cell bodies in the CNS is referred to as a nucleus. In the PNS, a cluster of neuron cell bodies is referred to as a ganglion. Figure 14.3.14 indicates how the term nucleus has a few different meanings within anatomy and physiology. It is the center of an atom, where protons and neutrons are found; it is the center of a cell, where the DNA is found; and it is a center of some function in the CNS. There is also a potentially confusing use of the word ganglion (plural = ganglia) that has a historical explanation. In the central nervous system, there is a group of nuclei that are connected together and were once called the basal ganglia before “ganglion” became accepted as a description for a peripheral structure. Some sources refer to this group of nuclei as the “basal nuclei” to avoid confusion.

Figure 14.3.14: Three uses of the word nucleus: the nucleus of an atom (containing protons and neutrons), the nucleus of a cell (containing the DNA), and nuclei within the brain (highlighted in an MRI image).

Terminology applied to bundles of axons also differs depending on location. A bundle of axons, or fibers, found in the CNS is called a tract whereas the same thing in the PNS would be called a nerve. There is an important point to make about these terms, which is that they can both be used to refer to the same bundle of axons. When those axons are in the PNS, the term is nerve, but if they are in the CNS, the term is tract. The most obvious example of this is the axons that project from the retina into the brain. Those axons are called the optic nerve as they leave the eye, but when they are inside the cranium, they are referred to as the optic tract. There is a specific place where the name changes, which is the optic chiasm, but they are still the same axons (Figure 14.3.15). A similar situation outside of science can be described for some roads. Imagine a road called “Broad Street” in a town called “Anyville.” The road leaves Anyville and goes to the next town over, called “Hometown.” When the road crosses the line between the two towns and is in Hometown, its name changes to “Main Street.” That is the idea behind the naming of the retinal axons. In the PNS, they are called the optic nerve, and in the CNS, they are the optic tract. Table 14.1 helps to clarify which of these terms apply to the central or peripheral nervous systems.

Figure 14.3.15: Superior view of the brain and eyes showing the optic nerves meeting at the optic chiasm, where some axons cross to the opposite side before continuing as the optic tracts through the thalamus to the occipital lobes.
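The Broad Street/Main Street rule above boils down to a one-line lookup. This tiny sketch (names are my own, for illustration only) applies it to the retinal axons:

def bundle_name(location):
    """Name a bundle of axons by where it sits: nerve in the PNS, tract in the CNS."""
    return {"PNS": "nerve", "CNS": "tract"}[location]

print(f"Retinal axons leaving the eye: optic {bundle_name('PNS')}")      # optic nerve
print(f"The same axons inside the cranium: optic {bundle_name('CNS')}")  # optic tract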

In 2003, the Nobel Prize in Physiology or Medicine was awarded to Paul C. Lauterbur and Sir Peter Mansfield for discoveries related to magnetic resonance imaging (MRI). This is a tool to see the structures of the body (not just the nervous system) that depends on magnetic fields associated with certain atomic nuclei. The utility of this technique in the nervous system is that fat tissue and water appear as different shades between black and white. Because white matter is fatty (from myelin) and gray matter is not, they can be easily distinguished in MRI images. Visit the Nobel Prize website to play an interactive game that demonstrates the use of this technology and compares it with other types of imaging technologies. Also, the results from an MRI session are compared with images obtained from X-ray or computed tomography. How do the imaging techniques shown in this game indicate the separation of white and gray matter compared with the freshly dissected tissue shown earlier?

Critical Thinking Questions

1. Damage to specific regions of the cerebral cortex, such as through a stroke, can result in specific losses of function. What functions would likely be lost by a stroke in the temporal lobe?

2. Why do the anatomical inputs to the cerebellum suggest that it can compare motor commands and sensory feedback?

Answers for Critical Thinking Questions

  • The temporal lobe has sensory functions associated with hearing and vision, as well as being important for memory. A stroke in the temporal lobe can result in specific sensory deficits in these systems (known as agnosias) or losses in memory.
  • A copy of descending input from the cerebrum to the spinal cord, through the pons, and sensory feedback from the spinal cord and special senses like balance, through the medulla, both go to the cerebellum. It can therefore send output through the midbrain that will correct spinal cord control of skeletal muscle movements.

This work, Anatomy & Physiology, is adapted from Anatomy & Physiology by OpenStax , licensed under CC BY . This edition, with revised content and artwork, is licensed under CC BY-SA except where otherwise noted.

Images, from Anatomy & Physiology by OpenStax , are licensed under CC BY except where otherwise noted.

Access the original for free at https://openstax.org/books/anatomy-and-physiology/pages/1-introduction .

Anatomy & Physiology Copyright © 2019 by Lindsay M. Biga, Staci Bronson, Sierra Dawson, Amy Harwell, Robin Hopkins, Joel Kaufmann, Mike LeMaster, Philip Matern, Katie Morrison-Graham, Kristen Oja, Devon Quick, Jon Runyeon, OSU OERU, and OpenStax is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

BRAIN 2.0: Transforming neuroscience


The NIH BRAIN Initiative is entering a new phase. Three large new projects (a comprehensive human brain cell atlas, a whole mammalian brain microconnectivity map, and tools for precision access to brain cell types) promise to transform neuroscience research and the treatment of human brain disorders.



2.1: The Neuron is the Building Block of the Nervous System



Learning Objectives

  • Describe the structure and functions of the neuron.
  • Draw a diagram of the pathways of communication within and between neurons.
  • List three of the major neurotransmitters and describe their functions.

The nervous system is composed of roughly 100 billion cells known as neurons. A neuron is a cell in the nervous system whose function is to receive and transmit information. As you can see in Figure \(\PageIndex{2}\), neurons are made up of three major parts: a cell body, or soma, which contains the nucleus of the cell and keeps the cell alive; a branching treelike fiber known as the dendrite, which collects information from other cells and sends the information to the soma; and a long, segmented fiber known as the axon, which transmits information away from the cell body toward other neurons or to the muscles and glands.

[Figure \(\PageIndex{2}\): the major parts of a neuron (soma, dendrites, and axon)]

Some neurons have hundreds or even thousands of dendrites, and these dendrites may themselves be branched to allow the cell to receive information from thousands of other cells. The axons are also specialized, and some, such as those that send messages from the spinal cord to the muscles in the hands or feet, may be very long, even up to several feet in length. To improve the speed of their communication, and to keep their electrical charges from shorting out with other neurons, axons are often surrounded by a myelin sheath. The myelin sheath is a layer of fatty tissue surrounding the axon of a neuron that both acts as an insulator and allows faster transmission of the electrical signal. Axons branch out toward their ends, and at the tip of each branch is a terminal button.


Neurons Communicate Using Electricity and Chemicals

The nervous system operates using an electrochemical process. An electrical charge moves through the neuron itself, and chemicals are used to transmit information between neurons. Within the neuron, when a signal is received by the dendrites, it is transmitted to the soma in the form of an electrical signal, and, if the signal is strong enough, it may then be passed on to the axon and then to the terminal buttons. If the signal reaches the terminal buttons, they are signaled to emit chemicals known as neurotransmitters, which communicate with other neurons across the spaces between the cells, known as synapses.

Video Clip: The Electrochemical Action of the Neuron. This video clip shows a model of the electrochemical action of the neuron and neurotransmitters. https://youtu.be/TKG0MtH5crc

The electrical signal moves through the neuron as a result of changes in the electrical charge of the axon. Normally, the axon remains in the resting potential, a state in which the interior of the neuron contains a greater number of negatively charged ions than does the area outside the cell. When the segment of the axon that is closest to the cell body is stimulated by an electrical signal from the dendrites, and if this electrical signal is strong enough that it passes a certain level or threshold, the cell membrane in this first segment opens its gates, allowing positively charged sodium ions that were previously kept out to enter. This change in electrical charge that occurs in a neuron when a nerve impulse is transmitted is known as the action potential. Once the action potential occurs, the number of positive ions exceeds the number of negative ions in this segment, and the segment temporarily becomes positively charged.

As you can see in Figure \(\PageIndex{4}\), the axon is segmented by a series of breaks between the sausage-like segments of the myelin sheath. Each of these gaps is a node of Ranvier. The electrical charge moves down the axon from segment to segment, in a set of small jumps, moving from node to node. When the action potential occurs in the first segment of the axon, it quickly creates a similar change in the next segment, which then stimulates the next segment, and so forth as the positive electrical impulse continues all the way down to the end of the axon. As each new segment becomes positive, the membrane in the prior segment closes up again, and the segment returns to its negative resting potential. In this way the action potential is transmitted along the axon, toward the terminal buttons. The entire response along the length of the axon is very fast; it can happen up to 1,000 times each second.

[Figure \(\PageIndex{4}\): the myelin sheath and the nodes of Ranvier along the axon]

An important aspect of the action potential is that it operates in an all-or-nothing manner. What this means is that the neuron either fires completely, such that the action potential moves all the way down the axon, or it does not fire at all. Thus neurons can signal more strongly to the neurons down the line by firing more often, but not by producing bigger action potentials. Furthermore, the neuron is prevented from repeated firing by the presence of a refractory period, a brief time after the firing of the axon in which the axon cannot fire again because the neuron has not yet returned to its resting potential.
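
To make the all-or-nothing rule and the refractory period concrete, here is a minimal toy simulation, added for illustration rather than taken from the text; the threshold, leak factor, and refractory length are arbitrary numbers chosen only to show the behavior.

```python
# Toy "integrate-and-fire" style neuron: a sketch of the all-or-nothing rule
# and the refractory period. All numeric values are illustrative.

THRESHOLD = 1.0        # potential at which the neuron fires
RESTING = 0.0          # resting potential (arbitrary units)
LEAK = 0.9             # fraction of accumulated charge retained each step
REFRACTORY_STEPS = 2   # steps during which the neuron cannot fire again

def simulate(inputs):
    """Return a list of 0/1 spikes, one per time step of `inputs`."""
    potential = RESTING
    refractory = 0
    spikes = []
    for stimulus in inputs:
        if refractory > 0:
            # During the refractory period the neuron cannot fire,
            # no matter how strong the stimulus is.
            refractory -= 1
            potential = RESTING
            spikes.append(0)
            continue
        potential = potential * LEAK + stimulus
        if potential >= THRESHOLD:
            spikes.append(1)        # fires completely ("all")
            potential = RESTING     # returns toward rest after firing
            refractory = REFRACTORY_STEPS
        else:
            spikes.append(0)        # does not fire at all ("nothing")
    return spikes

# Stronger, sustained stimulation shows up as MORE FREQUENT spikes of identical
# size, never as a "bigger" spike.
print(simulate([0.3, 0.3, 0.3, 1.2, 1.2, 1.2, 1.2, 1.2]))
# -> [0, 0, 0, 1, 0, 0, 1, 0]
```

Note how, after each spike, the two refractory steps force zeros even though the stimulus stays strong.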

Neurotransmitters: The Body’s Chemical Messengers

Not only do the neural signals travel via electrical charges within the neuron, but they also travel via chemical transmission between the neurons. Neurons are separated by junction areas known as synapses, areas where the terminal buttons at the end of the axon of one neuron nearly, but don’t quite, touch the dendrites of another. The synapses provide a remarkable function because they allow each axon to communicate with many dendrites in neighboring cells. Because a neuron may have synaptic connections with thousands of other neurons, the communication links among the neurons in the nervous system allow for a highly sophisticated communication system.

When the electrical impulse from the action potential reaches the end of the axon, it signals the terminal buttons to release neurotransmitters into the synapse. A neurotransmitter is a chemical that relays signals across the synapses between neurons. Neurotransmitters travel across the synaptic space between the terminal button of one neuron and the dendrites of other neurons, where they bind to the dendrites in the neighboring neurons. Furthermore, different terminal buttons release different neurotransmitters, and different dendrites are particularly sensitive to different neurotransmitters. The dendrites will admit the neurotransmitters only if they are the right shape to fit in the receptor sites on the receiving neuron. For this reason, the receptor sites and neurotransmitters are often compared to a lock and key (Figure \(\PageIndex{5}\)).

[Figure \(\PageIndex{5}\): neurotransmitters and receptor sites fit together like a lock and key]

When neurotransmitters are accepted by the receptors on the receiving neurons, their effect may be either excitatory (i.e., they make the cell more likely to fire) or inhibitory (i.e., they make the cell less likely to fire). Furthermore, if the receiving neuron is able to accept more than one neurotransmitter, it will be influenced by the excitatory and inhibitory processes of each. If the excitatory effects of the neurotransmitters are greater than their inhibitory influences, the neuron moves closer to its firing threshold, and if it reaches the threshold, the action potential and the process of transferring information through the neuron begin.
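
The lock-and-key matching described earlier and this tug-of-war between excitation and inhibition can be sketched together in a few lines. This is an invented toy model, not a description of real receptor chemistry: the receptor table, the weights, and the threshold are assumptions made only for the example, although the neurotransmitter names are real.

```python
# Toy model of a receiving neuron: receptors act as "locks" that only matching
# neurotransmitter "keys" can affect, and excitatory/inhibitory effects are
# summed against a firing threshold. All numbers are illustrative.

FIRING_THRESHOLD = 1.0

# Positive weights are excitatory, negative weights are inhibitory.
receptors = {
    "glutamate": +0.6,   # excitatory
    "GABA":      -0.8,   # inhibitory
    "dopamine":  +0.3,   # treated as excitatory in this made-up example
}

def net_input(released):
    """Sum the effect of the released neurotransmitters on this neuron."""
    total = 0.0
    for transmitter, amount in released.items():
        weight = receptors.get(transmitter)   # no matching receptor -> no effect
        if weight is not None:
            total += weight * amount
    return total

def will_fire(released):
    return net_input(released) >= FIRING_THRESHOLD

# Excitation outweighs inhibition, so the neuron reaches threshold; serotonin
# has no matching receptor here, so it is simply ignored.
print(will_fire({"glutamate": 2.0, "GABA": 0.2, "serotonin": 5.0}))  # True
# Stronger inhibition keeps the same excitatory input below threshold.
print(will_fire({"glutamate": 2.0, "GABA": 1.5}))                    # False
```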

Neurotransmitters that are not accepted by the receptor sites must be removed from the synapse in order for the next potential stimulation of the neuron to happen. This process occurs in part through the breaking down of the neurotransmitters by enzymes, and in part through reuptake, a process in which neurotransmitters that are in the synapse are reabsorbed into the transmitting terminal buttons, ready to again be released after the neuron fires.

More than 100 chemical substances produced in the body have been identified as neurotransmitters, and these substances have a wide and profound effect on emotion, cognition, and behavior. Neurotransmitters regulate our appetite, our memory, our emotions, as well as our muscle action and movement. And as you can see in Table \(\PageIndex{1}\), some neurotransmitters are also associated with psychological and physical diseases.

Drugs that we might ingest, either for medical reasons or recreationally, can act like neurotransmitters to influence our thoughts, feelings, and behavior. An agonist is a drug that has chemical properties similar to a particular neurotransmitter and thus mimics the effects of the neurotransmitter. When an agonist is ingested, it binds to the receptor sites in the dendrites to excite the neuron, acting as if more of the neurotransmitter had been present. As an example, cocaine acts as an indirect agonist for the neurotransmitter dopamine: it blocks dopamine reuptake, leaving more dopamine in the synapse. Because dopamine produces feelings of pleasure when it is released by neurons, cocaine creates similar feelings when it is ingested. An antagonist is a drug that reduces or stops the normal effects of a neurotransmitter. When an antagonist is ingested, it binds to the receptor sites, thereby blocking the neurotransmitter. As an example, the poison curare is an antagonist for the neurotransmitter acetylcholine. It binds to acetylcholine receptors at the junctions between nerves and muscles, blocking the signal for the muscles to contract; paralysis of the breathing muscles usually causes death. Still other drugs work by blocking the reuptake of the neurotransmitter itself: when reuptake is reduced by the drug, more neurotransmitter remains in the synapse, increasing its action.

Key Takeaways

  • The central nervous system (CNS) is the collection of neurons that make up the brain and the spinal cord.
  • The peripheral nervous system (PNS) is the collection of neurons that link the CNS to our skin, muscles, and glands.
  • Neurons are specialized cells, found in the nervous system, which transmit information. Neurons contain a dendrite, a soma, and an axon.
  • Some axons are covered with a fatty substance known as the myelin sheath, which surrounds the axon, acting as an insulator and allowing faster transmission of the electrical signal.
  • The dendrite is a treelike extension that receives information from other neurons and transmits electrical stimulation to the soma.
  • The axon is an elongated fiber that transfers information from the soma to the terminal buttons.
  • Neurotransmitters relay information chemically from the terminal buttons and across the synapses to the receiving dendrites using a type of lock and key system.
  • The many different neurotransmitters work together to influence cognition, memory, and behavior.
  • Agonists are drugs that mimic the actions of neurotransmitters, whereas antagonists are drugs that block the action of neurotransmitters.

Exercises and Critical Thinking

  • Draw a picture of a neuron and label its main parts.
  • Imagine an action that you engage in every day and explain how neurons and neurotransmitters might work together to help you engage in that action.


Innovation 2.0: Multiplying the Growth Mindset

Have you ever been in a situation where you felt that people wrote you off? Maybe a teacher suggested you weren’t talented enough to take a certain class, or a boss implied that you didn’t have the smarts needed to handle a big project. In the latest in our “Innovation 2.0” series, we talk with Mary Murphy, who studies what she calls “cultures of genius.” We’ll look at how these cultures can keep people and organizations from thriving, and how we can create environments that better foster our growth.

Do you know someone who’d find the ideas in today’s episode to be useful? Please share it with them! And if you liked today’s conversation, you might also like these classic Hidden Brain episodes: 

  • The Edge Effect
  • The Secret to Great Teams

Additional Resources

Book:

Cultures of Growth: How the New Science of Mindset Can Transform Individuals, Teams, and Organizations, by Mary C. Murphy, 2024.

Research:

“What Does It Take to Succeed Here?”: The Belief That Success Requires Brilliance Is an Obstacle to Diversity, by Melis Muradoglu et al., Current Directions in Psychological Science, 2023.

Shifting the Mindset Culture to Address Global Educational Disparities, by Cameron A. Hecht et al., NPJ Science of Learning, 2023.

Towards Fostering Growth Mindset Classrooms: Identifying Teaching Behaviors That Signal Instructors’ Fixed and Growth Mindset Beliefs to Students, by Kathryn M. Kroeper, Audrey C. Fried, and Mary C. Murphy, Social Psychology of Education, 2022.

Teacher Mindsets Help Explain Where a Growth-Mindset Intervention Does and Doesn’t Work, by David S. Yeager et al., Psychological Science, 2021.

Global Mindset Initiative Paper 1: Growth Mindset Cultures and Teacher Practices, by Mary Murphy et al., SSRN, 2021.

Does My Professor Think My Ability Can Change? Students’ Perceptions of Their STEM Professors’ Mindset Beliefs Predict Their Psychological Vulnerability, Engagement, and Performance in Class, by Katherine Muenks et al., Journal of Experimental Psychology: General, 2020.

Cultures of Genius at Work: Organizational Mindsets Predict Cultural Norms, Trust, and Commitment, by Elizabeth A. Canning et al., Personality and Social Psychology Bulletin, 2019.

STEM Faculty Who Believe Ability Is Fixed Have Larger Racial Achievement Gaps and Inspire Less Student Motivation in Their Classes, by Elizabeth A. Canning et al., Science Advances, 2019.

Messages About Brilliance Undermine Women’s Interest in Educational and Professional Opportunities, by Lin Bian et al., Journal of Experimental Social Psychology, 2018.

Expectations of Brilliance Underlie Gender Distributions Across Academic Disciplines, by Sarah-Jane Leslie et al., Science, 2015.

A Culture of Genius: How an Organization’s Lay Theory Shapes People’s Cognition, Affect, and Behavior, by Mary C. Murphy and Carol S. Dweck, Personality and Social Psychology Bulletin, 2009.

